# Short-Term Forecasting of Exchange Rates in the FOREX

**Object.** A novel high-accuracy system for predicting future rates of exchange in the **Foreign Exchange Market** (FOREX) is offered. The project initiator is Prof. Leonid Peshes, a specialist in the theory and practice of reliability, accelerated life testing, technical and bio-cybernetics, the analysis of random processes and chaotic dynamic systems, pattern recognition, intelligent systems, and applied statistics.

**FOREX** is a giant mechanism covering the whole world, with a present daily turnover of about USD 5 trillion; this amount grows by 20-25% each year. FOREX is a unique market, actively operating around the clock. It is the most dynamic, liquid, accessible and quickly growing market, and it alone determines the exchange rates of the main convertible currencies. Currency movements depend only on freely floating rates of exchange, without any essential influence from governmental institutions or extraordinary events.

An important advantage of **FOREX** compared with other markets is the use of margin trading. Margin trading means the possibility to buy and sell currencies when the full amount of money necessary for the transaction is not available. To conclude a deal the trader must deposit only an initial margin, after which he may operate with a currency volume that exceeds the initial amount 50-200 times. Another FOREX advantage is the possibility to make a profit in either direction of exchange rate change.
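As a rough illustration of the margin arithmetic described above, consider the following sketch (the 1:100 leverage and the price figures are hypothetical numbers chosen for the example, not values from this document):

```python
# Hypothetical illustration of margin (leverage) arithmetic in FOREX.
# The 1:100 leverage and price figures below are assumptions for the example.

def margin_trade_profit(deposit, leverage, entry_rate, exit_rate):
    """Profit from buying a currency on margin and selling at exit_rate."""
    position = deposit * leverage          # controlled currency volume
    units = position / entry_rate          # units of base currency bought
    return units * (exit_rate - entry_rate)

# A trader deposits $1,000 at 1:100 leverage; the rate moves 0.5% in his favor.
profit = margin_trade_profit(1_000, 100, 1.0000, 1.0050)
print(round(profit, 2))  # 500.0 -> a 50% gain on the deposit from a 0.5% move
```

The same mechanism amplifies losses symmetrically, which is why the leveraged profit figures quoted later in this document presuppose correct predictions.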

Fast **currency** movement, the low cost and simplicity of executing transactions, and high liquidity make FOREX the most attractive object for business. The principle of profiting from currency trading, where currency is treated as an ordinary commodity, is very simple: a trader repeatedly sells currencies that fall and buys currencies that rise. If these actions correspond to the actual trend in FOREX, each such transaction will generate profit. The ability to predict exchange rates in accordance with the actual change of rates provides an unlimited profit potential. FOREX is thus the basis for a unique business, which makes it possible to capture profit in the shortest way: money makes money.

In this particular project the *developer (producer) and the consumer (buyer)* are actually the same entity. Given this fact, and given that FOREX is an open market with free access and without competitors' counteractions, there is no need to look for customers (buyers) or to spend much money, time and effort on marketing, promotion, public relations, sales etc., in contrast to all other businesses.

As a mathematical object FOREX represents a multivariate time series. It is a very sophisticated historical process with a dual nature. On the one hand it is a random non-stationary process with aftereffect and other features which create difficulties for forecasting its behavior. On the other hand it is a chaotic (non-linear) dynamic process reflecting the collective psychology of the market participants (mainly the leaders). For each given short time interval of this process there exists a parameter, the fractal, which characterizes the amount of previous history (memory) influencing the future value of the process that follows the considered time period.

## Generic Technology

The development of the project is based on the generic original multidimensional Bootstrap technology. This data-based simulation resampling technique was initially introduced at the end of the 1960s by Prof. L. Peshes. It was described in several of his articles and in the book *Fundamentals of Accelerated Reliability Testing* (Science & Technics, Minsk, 1972).

By using the resampling procedure (multiple reproduction of the original sample in samples of the same size with identical statistical properties), solutions were obtained for several tasks which were previously inaccessible. In particular, the following problems were solved using this technique:

– determination of the required confidence intervals for any characteristic of interest;

– selection of an adequate distribution function for the observed sampled data, eliminating serious errors in the use of statistical models.
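The resampling idea behind the first of these tasks can be sketched as follows. This is a minimal percentile-interval construction for the mean, using only the standard library; the percentile variant and the sample data are illustrative assumptions, not details taken from this document or from PARUS:

```python
# Bootstrap confidence interval for an arbitrary statistic (here, the mean).
# A minimal sketch of the resampling procedure; the percentile method and
# the data are assumptions for illustration.
import random
import statistics

def bootstrap_ci(sample, statistic, n_resamples=1000, alpha=0.05, seed=0):
    rng = random.Random(seed)
    n = len(sample)
    # Reproduce the original sample in resamples of the same size,
    # with identical statistical properties.
    estimates = sorted(
        statistic([sample[rng.randrange(n)] for _ in range(n)])
        for _ in range(n_resamples)
    )
    lo = estimates[int((alpha / 2) * n_resamples)]
    hi = estimates[int((1 - alpha / 2) * n_resamples) - 1]
    return lo, hi

data = [4.1, 5.0, 3.8, 4.4, 5.2, 4.7, 4.0, 4.9]
low, high = bootstrap_ci(data, statistics.mean)
print(low <= statistics.mean(data) <= high)  # True for this data
```

The same loop works for any `statistic` (median, variance, a quantile), which is what makes the procedure generic: no distributional model has to be assumed before the interval is computed.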

The software support for the new paradigm was provided by the package known under the name PARUS (1968).

In 1979 this paradigm was rediscovered by Prof. *Bradley Efron* (Stanford University), who named it the Bootstrap. Thanks to his works this technology gained popularity and wide application.

The Bootstrap technology treats the observed raw data as a pattern for investigating the real situation. By utilizing automatic Bootstrap procedures, it allows the necessary information and knowledge to be extracted (mined) in the required direction.

“It gives us another way to get empirical information in circumstances that almost defy mathematical analysis. It is a nice way of letting the data speak almost for themselves,” said the world-known statistician Prof. F. Mosteller of Harvard University (New York Times, 08.11.1988).

This approach makes it possible to find a highly accurate solution of the problem without any need to assume or impose a convenient model that lacks a strong scientific basis. The Bootstrap theory offers the most powerful and efficient implementation tool in modern statistical data analysis; it is a world breakthrough in the field of statistics. The Bootstrap technique is best applicable to areas of activity which have accumulated great volumes of actual data and which demand new effective methods of examination, classification, diagnostics, prediction etc. in order to reach authentic conclusions.

The most attractive application that ensures maximal profit is the correct short-term prediction of exchange rates in the FOREX. In this project, the chaotic-dynamics and fractal approaches were successfully integrated within the framework of the Bootstrap technique.

**The main advantages of the proposed technology are:**

1. Industrial technological approach to data processing in the real-time mode.

2. Automatic creation of the required knowledge bases from the observed actual data.

3. Automatic generation and modification of inference engines.

4. Ability of self-analysis, self-training and self-development in the course of the system use.

5. Natural empirical way of data presentation.

The novelty and originality of the prediction system under consideration is determined by the use of principally new patterns, methods, algorithms and know-how. This secures its patentability and competitiveness.

## Proposed Solution

The input information for a prediction is the historical data of the exchange rate movements of the major convertible currencies over the last 15-20 years. This original multivariate time series is subjected to certain transformations and mapped into vector spaces of various dimensions. The components of each vector relate to time intervals of identical duration, equal to the prediction period (day, intraday, hour, half hour, quarter hour etc.), and the corresponding components define the state of the currency movement process during the considered time interval.

Each vector is transformed into a new vector whose components are values (quantitative, ordinal and qualitative) reflecting the most inherent peculiarities of the exchange rates' behaviour over the time intervals under consideration. These values are determined according to a specific set of functions. The new vector sequence uniquely corresponds to the initial vector time series, and the serial numbers of the corresponding vectors of both time series coincide.
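The mapping of a rate series into fixed-duration feature vectors can be sketched as follows. The window length and the concrete features (relative change, intra-interval range, qualitative direction) are illustrative assumptions; the document does not disclose the project's actual set of functions:

```python
# Sketch: map a univariate rate series into a sequence of feature vectors,
# one per time interval of identical duration. The features used here
# (return, range, direction) are illustrative assumptions.

def to_feature_vectors(rates, interval_len):
    vectors = []
    for start in range(0, len(rates) - interval_len + 1, interval_len):
        window = rates[start:start + interval_len]
        ret = (window[-1] - window[0]) / window[0]            # relative change
        rng = max(window) - min(window)                       # intra-interval range
        direction = 1 if ret > 0 else (-1 if ret < 0 else 0)  # qualitative sign
        vectors.append((ret, rng, direction))
    return vectors

rates = [1.10, 1.12, 1.11, 1.15, 1.14, 1.13, 1.16, 1.18]
print(to_feature_vectors(rates, 4))
```

Each tuple plays the role of one "new vector" in the text: its serial position coincides with the serial position of the time interval it summarizes.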

As mentioned above, it is important to estimate the duration of the previous history influencing the prediction, i.e. the fractal for the time point immediately preceding the prediction period. If the investigated time series is created by a chaotic dynamic system (a non-linear dynamic system), then the fractal is characterized by a quantity representing the effective number of degrees of freedom, or the immersion depth, which ensures a unique prediction.

If, on the other hand, it is considered as a random vector sequence (time series), the fractal is determined as the maximal number of dependent vectors immediately preceding the prediction period. This number characterizes the aftereffect length (memory) of the investigated random process at the forecast moment and determines the immersion depth for a prediction.

To estimate the mentioned fractal, blocks are tested consecutively. These blocks consist of the last two, three, four and more vectors of the new time series under consideration. Initially the block consisting of two vectors (the last and the one before last) is tested. All adjacent vector pairs of this random sequence form a multivariate empirical probability distribution. By selecting the closest encirclement (neighbourhood) of the investigated pair of vectors, the empirical density of their joint distribution over the determined range of component values is estimated. This area uniquely determines the subranges of these values separately for each vector of the tested pair, which allows the two corresponding empirical densities to be estimated within them. If the quotient of the product of the latter to the joint density is close to one, the vectors of the examined block are independent (equality of the mentioned ratio to one is the independence criterion).
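A simplified one-dimensional sketch of this independence check follows. Histogram cell frequencies stand in for the text's nearest-neighbour ("closest encirclement") density estimates, and the bin count is an arbitrary assumption; only the criterion itself, the product of marginal densities compared against the joint density, is taken from the text:

```python
# Independence criterion sketch: estimate the joint density of adjacent
# values and the product of their marginal densities; a ratio near 1
# indicates independence. Histogram estimates replace the text's
# nearest-neighbour estimates for brevity.
import random

def density_ratio(series, bins=5):
    lo, hi = min(series), max(series)
    width = (hi - lo) / bins or 1.0
    cell = lambda x: min(int((x - lo) / width), bins - 1)
    pairs = list(zip(series, series[1:]))
    n = len(pairs)
    # Joint and marginal cell frequencies around the last adjacent pair.
    a, b = cell(pairs[-1][0]), cell(pairs[-1][1])
    joint = sum(1 for x, y in pairs if cell(x) == a and cell(y) == b) / n
    px = sum(1 for x, _ in pairs if cell(x) == a) / n
    py = sum(1 for _, y in pairs if cell(y) == b) / n
    return px * py / joint  # ~1 for an independent sequence

rng = random.Random(42)
iid = [rng.random() for _ in range(5000)]
print(density_ratio(iid))  # fluctuates around 1 for independent data
```

For a strongly dependent sequence (e.g. a slow-moving random walk), the joint density in the occupied cell exceeds the product of marginals and the ratio drops well below one, which is what signals that the block must be extended.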

In a similar way, a block consisting of the three last vectors is tested. As a result, the joint empirical densities for these three vectors and for the two last vectors, as well as the individual densities for each of these vectors, are estimated, and the appropriate ratios for the independence criterion are computed.

The densities estimated at this stage for the encirclement of the last and before-last vectors, and jointly for this vector pair, are identical to the previous case but relate to a narrower range of change of their components.

This testing process is performed identically for expanding blocks of last vectors (of length three, four etc.) of the considered time series, for as long as near neighbours can still be found for the block. The calculated estimates of the independence criterion for blocks of various lengths, over various ranges of change of the corresponding vector components, make it possible to determine the required fractal.

To produce the required prediction, all blocks (subsequences of adjacent vectors) of length equal to the found fractal are grouped into several sets. The rule of grouping is determined by the relative change of exchange rates in the vector of the investigated time series immediately following the block. For example, blocks followed by a rate increase of more than 0.1% are included in one group, those followed by a rate decrease of more than 0.1% in another group, and blocks with an exchange rate change between -0.1% and +0.1% in a third group.
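The grouping rule can be sketched directly. The ±0.1% thresholds are the ones given in the text; representing each interval by its relative change alone is a simplification of the multi-component vectors described above:

```python
# Group fixed-length blocks of a series by the relative rate change in the
# vector immediately following each block, using the 0.1% thresholds above.

def group_blocks(returns, fractal):
    """returns[i] is the relative exchange-rate change in interval i."""
    groups = {"up": [], "down": [], "flat": []}
    for start in range(len(returns) - fractal):
        block = tuple(returns[start:start + fractal])
        outcome = returns[start + fractal]   # change right after the block
        if outcome > 0.001:
            groups["up"].append(block)
        elif outcome < -0.001:
            groups["down"].append(block)
        else:
            groups["flat"].append(block)
    return groups

returns = [0.002, -0.0005, 0.0015, -0.002, 0.0, 0.003]
g = group_blocks(returns, fractal=2)
print({k: len(v) for k, v in g.items()})  # {'up': 2, 'down': 1, 'flat': 1}
```

Each group then serves as an empirical sample of "what the recent past looked like" before rises, falls, and flat periods respectively.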

Using the near neighbours of the block of the considered length, consisting of the last vectors of the analyzed time series, that are present in each of the indicated sets, the appropriate empirical density is estimated. The set defining the prediction is selected as the one with the largest weighted estimate, equal to the product of the corresponding density and a weight coefficient directly proportional to the number of elements in the set.

The reliability of the achieved conclusion is evaluated on the basis of the Bootstrap technique. For each of the sets determining the prediction, 200-300 Bootstrap samples are reproduced on its base. The corresponding estimates, calculated in the way described above for each of these Bootstrap samples, make it possible to evaluate statistically the probability that the fixed prediction is correct.
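This reliability evaluation can be sketched as follows. The per-set scores and the use of a simple mean as the bootstrapped statistic are placeholders; in the project the estimates would come from the nearest-neighbour densities described above:

```python
# Bootstrap reliability sketch: resample the two competing prediction sets
# a few hundred times and estimate the probability that the chosen set's
# statistic stays the largest. The scores and the mean statistic are
# hypothetical stand-ins for the text's density estimates.
import random
import statistics

def prediction_reliability(chosen, rival, n_boot=300, seed=1):
    rng = random.Random(seed)
    wins = 0
    for _ in range(n_boot):
        boot_chosen = [chosen[rng.randrange(len(chosen))] for _ in chosen]
        boot_rival = [rival[rng.randrange(len(rival))] for _ in rival]
        if statistics.mean(boot_chosen) > statistics.mean(boot_rival):
            wins += 1
    return wins / n_boot  # estimated probability the prediction is correct

up_scores = [0.8, 0.9, 0.7, 0.85, 0.95, 0.75]    # hypothetical estimates
down_scores = [0.5, 0.6, 0.4, 0.55, 0.45, 0.5]
print(prediction_reliability(up_scores, down_scores))  # 1.0 here: every
# resampled "up" mean (>= 0.7) exceeds every resampled "down" mean (<= 0.6)
```

When the two sets' estimates overlap, the returned fraction drops below 1 and directly quantifies how trustworthy the chosen prediction is.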

## Feasibility and Profit

Even at an early stage of development, the proposed project has demonstrated the possibility of an essential increase in forecast accuracy compared with existing means. On the basis of the initial pilot prototype, several real-time trial calculations were conducted over periods of various duration in the range of 2-6 business weeks (10-30 business days). A daily FOREX prediction of the closing price on the major markets in London and New York was produced. The results of the forecasting were made available 15-20 hours before the actual values of the closing exchange rates were formed on the FOREX.

Several Israeli companies, such as Koor Future Markets Ltd., Menorah Insurance Company, Eventus Ltd. etc., took part during various time periods in the appraisal of forecasts of the USD against three currencies: JPY, GBP, SFr (JPY/USD, USD/GBP, SFr/USD). The success rate recorded for these experimental predictions was above 70%.

For some of these time periods the net profit was calculated, and the annual net profit was also evaluated. The annual net profit corresponding to each currency pair was about 250%, and when transactions were executed for the pairs with the biggest predicted change, the net profit reached 480%. We note that the highest achievable (hypothetical ideal) annual net profit, corresponding to transactions for the currency pairs with the biggest actual change of exchange rates, would be 1100%. When credit leverage is used, the net profit will be 50-200 times greater than the net profit relative to the trader's own margin.

To realize the considered project, reliable software for the forecasting system should be developed in the first stage (0.5-1.0 year). This software must decrease the working time for predictions and increase their accuracy and reliability; implement exchange rate predictions based on the combined pool of historical data for the currency pairs, taking into account the mutual influence of the currencies' movement trajectories (including the pair USD/EUR); support the transition to the automatic generation of predictions; allow prediction in the Daily and Intraday time frames; provide graphical representation; and support regular on-line currency trading on FOREX. The historical data for the Euro prior to 2002 will be formed by means of simulation based on the European currencies which were replaced by the Euro, first of all the DM.

In the second stage (one year), further improvement of the quality and reliability of the system will be realised, along with increased prediction accuracy, the possibility to predict in the Hourly, Half-Hourly, Quarter-Hourly (15 min.) and other shorter time frames, and preparation of the system for wide use.

The FOREX prediction project's team consists of leading scientists and specialists in the fields of mathematics, statistics, computers and management.

## State of Affairs

*Supplement to the Executive Summary*

**1. Common Situation.** The mathematical support which determines the new generic original multidimensional technology has already been developed; the proposed project is entirely based on this technology. The software for the Initial Pilot Prototype has also been developed. Although this Prototype was initially built to demonstrate the advantageous approach to the prediction problem used in the Project, it was nevertheless used as an operating system for several real-time trial daily forecasts of closing prices on FOREX, with durations ranging from 10 to 30 business days. Daily predictions were given separately for each of three currency pairs: JPY/USD, USD/GBP, CHF/USD. For the calculations, a large pool of past data for the period of the last 10-15 years from the major foreign currency markets in London and New York was used. The portion of correct predictions exceeded 70% for each of the three indicated pairs. Because the software for some algorithms and procedures was lacking in the mentioned prototype, obtaining the results (predictions) required substantial working time; the additional analysis that had to be carried out took 5-8 hours.

**2. Stages in the Project's Implementation and their Purposes.** Several stages of design and development, mainly software design and development, are necessary in order to:

- decrease the working time for predictions and increase their accuracy and reliability;
- base exchange rate predictions on the combined pool of historical data for the currency pairs, taking into account the mutual influence of the currencies' movement trajectories;
- pass to the automatic generation of predictions;
- enable prediction in the Intraday, Hourly, Half-Hourly, Quarter-Hourly (15 min.) and other shorter time frames;
- provide graphical representation;
- create the knowledge base;
- design and modify inference engines;
- organize interaction with SAS;
- develop and design means for self-checking, self-training, protection, support, advancement etc.

The long-term purpose of the project is the building of Universal Artificial Intelligence Systems of wide use. Initially the work is carried out in two stages, each six months long. The first stage will guarantee a working time for predictions of not more than an hour; in the second half of this stage occasional real-time daily currency trading on FOREX may take place, along with finalization and testing of the system for automatic generation of predictions, predictions in the Intraday (3-4 hours) mode, and predictions taking into account the mutual interactions of the movement trajectories of the considered currencies' exchange rates on FOREX. Unfortunately, long-period (10-20 years) historical data do not exist for the movements of the international European currency EURO (ECU), introduced on FOREX on 01.01.1999 and in circulation since 01.01.2001. Therefore, for the currency pair USD/EUR, predictions will be investigated using simulation data based on the movements of the European currencies replaced by the EURO, first of all the DM.
The second stage is for the realization of the options found during the tests of the first stage. The transition to regular on-line currency trading on FOREX in Daily or Intraday time intervals will also be realized on the basis of automatically generated predictions. Gradually the possibility to predict in Hourly, Half-Hourly and Quarter-Hourly modes will be provided. The building of the Initial Knowledge Base and the Principal Inference Engines is planned for this stage, as well as the reliability calculation for each offered prediction. The development of the self-checking and self-training systems will also be started.

**3. Required Investment.** Investment in the first stage is US $250,000, for the following expenses:

- Hardware, software, historical data supply, professional literature: US $40,000
- Professional staff payroll: US $150,000
- Rent, utilities, office equipment, various services: US $20,000
- Operating expenses (general & computer communication, hardware maintenance, information services, insurance): US $40,000

The value of the investment required in the second stage is smaller, despite the bigger volume of design and development work to be carried out. Partial self-sufficiency may be reached due to the profits from on-line trading on FOREX.

# Intelligent Bootstrap System for Medical Diagnostics

*By Prof. L. Peshes and Dr. B. Feldman*

The methods and principles of medical diagnostics, as the result of the long experience accumulated by mankind, are in fact descriptions of empirical associations and of the recognition of illnesses. This knowledge allows the physician to set a proper diagnosis using the available information: anamnesis, signs, symptoms, results of analyses and inspections, history of the illness etc. Formally, to set a true diagnosis, each illness has to be defined distinctly: illness A is one defined by a set of signs and symptoms whose combination does not coincide with that of any other illness. Medicine is still very far from giving an accurate definition of each illness, and a physician setting a diagnosis cannot be absolutely sure of its truth. A situation in which he does not doubt his decision means only that the probability of the made diagnosis is essentially higher than that of any other. Understandably, no doctor calculates the probability of an assumed disease; he intuitively determines the preference of one diagnosis over another on the basis of his knowledge, experience, qualification and ability. These properties differ among physicians, and therefore the diagnoses made by different doctors in the same situation may differ. The number of signs and symptoms involved in setting a diagnosis can reach several tens, while, as specialists in experimental psychology have found, human memory can productively use not more than 7-8 of them; experiments conducted on physicians confirm this. As the number of signs and symptoms taken into account grows up to 7-8, the probability of a true diagnosis grows.
A further increase in the number of signs and symptoms results in this probability going down, whereas in reality it should be the other way around: when the number of signs and symptoms taken into account grows, the probability of an accurate diagnosis should also grow. This contradiction determines the necessity of working out mathematical methods capable of giving an accurate diagnosis in accordance with all the available information. Hundreds of scientific teams are involved in the development of diagnostic systems at the present time; the quality of such a system is estimated by the probability of a correct diagnosis. Diagnostic systems can be split into types according to their building principle. Systems of the first type are based on the mathematical models of applied statistics (perceptrons and neural networks, classification and regression trees [CART], decision-rule and potential-function methods, discriminant analysis and image recognition methods). There are systems built on the basis of separate methods, as well as big statistical systems such as BMDP, SAS etc., where software support for the principal methods of applied statistics is provided. Medical diagnostic systems of the second type are built on the basis of a logical-heuristic approach modelling the doctor's activity. These systems are called expert systems, a name emphasizing the fact that the system uses the knowledge of experts. For the development of such a system the crucial point is the ability of the developer to build the knowledge base and inference engines from the descriptions and explanations of the task solution submitted by the experts. But no expert (doctor) can formulate in detail all the rules and algorithms he acts according to: knowledge acquisition is the bottleneck of expert system design. The problem becomes even more complicated when knowledge concerning a wide range of diseases should be inserted into the expert diagnostic system.
Examples of such systems are: MYCIN, the first expert system, for bacterial infections of the blood; PUFF, for lung diseases; PIP, for kidney diseases; MODIS-2, for symptomatic hypertension, etc. The two types of systems reviewed above differ in how they are taught to classify a situation. Two different training methods exist: training from examples and training from tutor (expert) explanations; the mentioned systems belong to the first or second type correspondingly. Systems of the first type can also be called expert systems: in this case "the expert" is represented by the learning samples, consisting of sets of vectors of inspection data (input coordinates) with accurately made (verified) diagnoses (output coordinates). Both types of system are based on knowledge expressed in various forms; their common feature, the creation of a knowledge base and inference engines, determines their belonging to the class of artificial intelligence systems. As mentioned above, the application range of expert systems imitating the physician's work is limited to a narrow specialized field, while the development of wide-profile diagnostic systems is most desirable. For example, the most difficult task of recognizing a patient's illness has to be solved by a physician of wide specialization, the therapeutist, who makes the choice of diagnosis from a long list of internal diseases; moreover, only the most common information is available to the therapeutist. Systems of this kind can be built on the basis of systems learning from examples, for which diagnostics over a wide range is not a problem. As far as the choice of effective mathematical means is concerned, the leading specialists agree that only the Bootstrap Paradigm is able to provide the maximum level of diagnostic quality.
The Bootstrap technology allows the computational process to be arranged in such a way as to extract almost all the necessary information from the experimental data (learning samples) in the desired direction (setting a diagnosis from the patient's data) without dubious assumptions. The possibilities for realizing Bootstrap solutions based on definite combinations of algorithms are numerous; they allow problems to be solved both in classic settings and in spontaneous descriptive formulations, for both parametric and non-parametric approaches.

Let us suppose the following way of solving the reviewed problem. There is a learning sample consisting of n vectors. The first coordinate of each vector is the fixed kind of illness; the remaining coordinates are the values of the patient's signs and symptoms corresponding to this vector. We number all the various kinds of diseases in the discussed sample y = 1, 2, …, k. The i-th vector of the learning sample has the form (y_i, X_i)^T, where the superscript T denotes transposition and X_i = (x_i^(1), x_i^(2), …, x_i^(p))^T is the set of sign and symptom values (the symptomocomplex) of the i-th vector, i = 1, 2, …, n. Signs and symptoms can have both quantitative and qualitative values. A full characteristic of the training sample is the distribution function

f(y=r, X) = f(X) P(y=r|X) = P(y=r) f(X|y=r),

where f(X) is the density function of the symptomocomplex X in the p-dimensional space of signs and symptoms, P(y=r|X) is the conditional probability of disease r given symptomocomplex X, P(y=r) is the probability of disease r among the k reviewed diseases, and f(X|y=r) is the conditional density of X given disease r, r = 1, 2, …, k. The density function of the symptomocomplex X can be written as

f(X) = Σ_{r=1}^{k} f(X|y=r) P(y=r).

From these equalities it is evident that the probability of illness r given symptomocomplex X is defined by the ratio (Bayes formula)

P(y=r|X) = P(y=r) f(X|y=r) / Σ_{r'=1}^{k} P(y=r') f(X|y=r').

Let us assume that the patient's symptomocomplex is represented by the vector X_pat; our task is to make a diagnosis for this patient. The optimal diagnosis, with minimum error probability, corresponds to the illness r_0 for which the probability P(y=r_0|X_pat) is bigger than the probabilities P(y=r|X_pat) for all r ≠ r_0. To solve the task we should know, in accordance with the Bayes formula, the conditional distributions f(X|y=r) and the probabilities P(y=r). The latter are usually known for a definite region, so in the classic formulation the problem reduces to the determination (restoration) of the corresponding distribution functions f(X|y=r). This task is essentially more complicated than the initial one, because the restored distributions carry the full information about disease y for the used set of signs and symptoms. At the same time, the common solution of the diagnostic task requires finding the mutual location of the multidimensional areas of symptomocomplex points and finding the boundaries (discriminant functions) that select, within the space of signs and symptoms, a region for each illness. But the mathematical description of the separating boundaries is again a functional of the mentioned distributions. These circumstances are the principal ones, because the choice of suitable distributions based on experimental data was (before the Bootstrap discovery) an unsolved problem. What do system developers do in such a case? They suppose that the kind of required distribution is known exactly, and it is usually the multivariate Gauss (normal) distribution.
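The Bayes decision rule above can be sketched directly. The disease names, priors P(y=r) and density values f(X_pat|y=r) below are hypothetical numbers chosen purely for illustration:

```python
# Bayes formula sketch: posterior probability of each disease r given the
# patient's symptomocomplex, from priors P(y=r) and density values
# f(X_pat | y=r). All numbers are hypothetical.

def bayes_posteriors(priors, densities):
    """priors[r] = P(y=r); densities[r] = f(X_pat | y=r)."""
    joint = {r: priors[r] * densities[r] for r in priors}
    total = sum(joint.values())
    return {r: joint[r] / total for r in joint}

priors = {"flu": 0.6, "pneumonia": 0.3, "bronchitis": 0.1}
densities = {"flu": 0.02, "pneumonia": 0.08, "bronchitis": 0.05}
post = bayes_posteriors(priors, densities)
diagnosis = max(post, key=post.get)  # illness r0 with the largest posterior
print(diagnosis, round(post[diagnosis], 3))  # pneumonia 0.585
```

Note how the high density f(X_pat|pneumonia) overrides the larger prior of "flu": the diagnosis is decided by the product P(y=r)·f(X_pat|y=r), exactly as in the text.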
Further, the parameters of the corresponding distributions (means and covariance matrices for normal distributions) are estimated from the experimental symptomocomplex data of the training sample. From the geometric point of view, the discriminant functions form hypersurfaces in the p-dimensional space of signs and symptoms. Building such surfaces of order higher than one, even under the condition of normality of the signs' and symptoms' distribution, is too complicated a task to deal with; therefore, to simplify the task, the separating surfaces are sought as linear discriminant functions (p-dimensional hyperplanes). Not seldom, this and other doubtful assumptions result in wrong solutions (diagnoses). The Bootstrap approach allows this task to be solved without additional assumptions. We may note that to estimate the required probabilities P(y=r|X_pat) for the vector of the patient's signs and symptoms X_pat, it is enough to know not the density functions f(X|y=r) for each kind of illness, but only the values of these functions in some vicinity of the point X_pat. The set of all vectors of the learning sample is split into k subsets of symptomocomplex vectors corresponding to each kind of illness. Let us assume that for illness r the subset contains n_r vectors, with Σ_{r=1}^{k} n_r = n. We take one of these subsets, r, and number its vectors arbitrarily. We put the values of each quantitative coordinate of this set in ascending order, indicating the number of the vector to which each value belongs. Qualitative components are presented as a sequence in which the vectors follow in accordance with their numbers, i.e. the ordinal position of the sign in this series is equal to the number of its vector. Into each numerical sign series we insert, according to its value, the corresponding sign from the patient's symptomocomplex.
Taking some number M_1 < a·min{n_r}, a < 1, 1 ≤ r ≤ k (for example a = 2/3), we build around each numerical sign of the patient the most informative interval, consisting of M_1 members of the corresponding series. This interval is built in such a way that the difference between its maximum (right) and minimum (left) values is the smallest possible. If values equal to the right boundary lie to its right, or values equal to the left boundary lie to its left, they are added to the formed collection of signs for the singled-out enclosure. To each collection of values corresponds some set of vector numbers. For each qualitative sign, the corresponding set of numbers is found from its coincidence with the analogous sign of the patient's symptomocomplex. On all the formed sets we find their intersection, i.e. the collection of common numbers contained in all the found sets. Let us designate the quantity of these common numbers m_r. If m_r > 0, we may find, in the previously built ordered series for the quantitative signs, the length of the interval D_rj equal to the difference between the maximum and minimum values of the j-th sign over the established set of common numbers, including the value of the patient's sign. Without loss of generality we can assume that the first s ≤ p coordinates of the symptomocomplex in the p-dimensional space of signs are the quantitative ones. Then, if m_r ≠ 0, the given procedure yields s values D_rj, j = 1, 2, …, s. The estimate of the unknown density value f(X|y=r) in the vicinity of the point X_pat is defined by the formula

f̂(X_pat|y=r) = m_r / (n_r · Π_{j=1}^{s} D_rj).

If m_r = 0, the required estimate is equal to zero. After carrying out these procedures for r = 1, 2, …, k, we find all k values of the estimated densities.
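The estimate f̂ = m_r / (n_r · Π D_rj) can be sketched for quantitative signs as follows. The neighbour selection is simplified to "the M_1 vectors nearest to the patient in each coordinate", which is an assumption standing in for the text's most-informative-interval construction:

```python
# Local density estimate sketch: for disease r, find vectors close to the
# patient in every quantitative sign, then apply f = m / (n * prod(D_j)).
# Interval construction is simplified versus the text's procedure.

def local_density(vectors, patient, m1):
    n = len(vectors)
    s = len(patient)
    # For each coordinate, take the m1 vectors nearest to the patient's value.
    near_sets = []
    for j in range(s):
        order = sorted(range(n), key=lambda i: abs(vectors[i][j] - patient[j]))
        near_sets.append(set(order[:m1]))
    common = set.intersection(*near_sets)   # indices close in every sign
    m = len(common)
    if m == 0:
        return 0.0
    prod = 1.0
    for j in range(s):
        vals = [vectors[i][j] for i in common] + [patient[j]]
        prod *= (max(vals) - min(vals)) or 1.0  # D_j; 1.0 guards zero width
    return m / (n * prod)

# Hypothetical symptomocomplex vectors for one disease (two quantitative signs).
disease_r = [(4.0, 1.0), (4.2, 1.1), (3.9, 0.9), (7.0, 3.0), (4.1, 1.05)]
print(local_density(disease_r, patient=(4.05, 1.0), m1=4) > 0)  # True
```

A patient far from the cluster yields an empty intersection and a zero estimate, which is exactly the m_r = 0 case in the text.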
If all the indicated values turn out to be zero, that is m_1 = m_2 = ... = m_k = 0, the number M1 should be increased; if two or more of the values m_r exceed 5, the procedure is repeated with smaller values of M1. From the computed densities we calculate the products

C_r = P(y=r) * f~(X_pat | y=r), r = 1, 2, ..., k.

The diagnosis is defined by the disease r0 for which the value C_r0 is greater than all the remaining C_r, r not equal to r0. The probability of the fixed diagnosis relative to all reviewed diseases is defined by the Bayes formula for densities:

P(y=r0 | X_pat) = C_r0 / (C_1 + C_2 + ... + C_k).

To estimate the reliability of the received conclusion, the calculations performed so far may not be sufficient. If the value C_r1 closest to C_r0 differs from it only negligibly, the made diagnosis may be erroneous because of the scatter inherent in all the statistical estimates obtained above. The reliability of the made diagnosis is characterized by the probability P(C_r0 > C_r1). This last task is solved in the following way. We take the subset of symptomocomplex vectors corresponding to the illness r0; it contains n_r0 vectors and defines some empirical p-dimensional distribution of signs for the disease r0. On its base we reproduce B = 500-1000 Bootstrap samples, each containing n_r0 vectors of the same space of signs and representing the same distribution. For each of these samples we find, by the method given above, the density estimates

f~*_j(X_pat | y=r0), j = 1, 2, ..., B.

In the same way we find B estimates for the vectors' subset of the illness r1. Because the inequality C_r0 > C_r1 is equivalent to the inequality

f(X_pat | y=r0) > { P(y=r1) / P(y=r0) } * f(X_pat | y=r1),

all B found values f~*_j(X_pat | y=r1) for the illness r1, j = 1, 2, ..., B, are multiplied by the quotient P(y=r1) / P(y=r0).
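The Bootstrap resampling and the article's rank-based reliability estimate B^(-2) * Σ #(w < v_i) can be sketched as follows. Here `estimator` is a placeholder for the interval-based density estimate described above, and the names and the fixed seed are illustrative assumptions, not the article's code:

```python
import random

def reliability(boot_f_r0, boot_f_r1, prior_ratio):
    """Estimate P(C_r0 > C_r1) from two series of B bootstrap density estimates.

    boot_f_r0: estimates f*_j(X_pat | y=r0), j = 1..B
    boot_f_r1: estimates f*_j(X_pat | y=r1), j = 1..B
    prior_ratio: P(y=r1) / P(y=r0); the second series is multiplied by it
    """
    B = len(boot_f_r0)
    v = sorted(boot_f_r0)
    w = sorted(x * prior_ratio for x in boot_f_r1)
    # For each v_i, count the members of the second series smaller than v_i.
    count = sum(sum(1 for wj in w if wj < vi) for vi in v)
    return count / B**2

def bootstrap_estimates(subset, estimator, B, seed=0):
    """Draw B bootstrap samples (with replacement, same size n_r0) from the
    subset of symptomocomplex vectors and apply a density estimator to each."""
    rng = random.Random(seed)
    n = len(subset)
    return [estimator([rng.choice(subset) for _ in range(n)]) for _ in range(B)]
```

If every rescaled second-series value lies below every first-series value, the estimate is 1; if the two series are fully interleaved the estimate falls toward 1/2, reflecting an unreliable diagnosis.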
We form one variational series, consisting of 2B members, from both series of the received values, marking which members belong to the first series and which to the second. Designate the ordered sequence of the first series v_1 <= v_2 <= ... <= v_B and that of the second series w_1 <= w_2 <= ... <= w_B. Then the statistical estimate of the probability P(C_r0 > C_r1) is defined from

P~(C_r0 > C_r1) = B^(-2) * Σ_{i=1}^{B} #(w < v_i),

where #(w < v_i) designates the quantity of values w of the second series smaller than the value v_i from the first series. This evaluation characterizes the reliability of the given diagnosis.

BootstMedEng. Applied Research and Design, Special Issue, Dec. 1999

Non-Disclosure Agreement

This Non-Disclosure Agreement (hereinafter referred to as the NDA) is duly made and executed as of __________, 2003 by and between: INDUSTRIAL MATHEMATICS (1995) Co. LTD., in Beer-Sheva, Israel and ___________________________________________________, which hereinafter shall both be referred to as the Parties to the NDA.

1. The Parties to the NDA are willing to disclose to each other information regarding their respective activities and business, including, without limitation, intellectual properties, patents, concepts, techniques, processes, methods, systems designs, computer programs, formulas, equations, numerical data, graphs, development or experimental work, work in progress, inventions, cost data, marketing plans, product plans, business strategies, financial information, forecasts, personnel information and customer or supplier lists (the “Information”). For the avoidance of doubt, nothing herein shall be deemed to impose on any of the Parties to the NDA a duty or obligation to disclose any such information to the other Party, and such disclosure shall be at all times at the respective Party’s sole and absolute discretion.

2.
The Information will be disclosed by the Parties to the NDA within the framework of the Agreement between the parties, dated ___________, regarding the development of ________________.

3. The Parties to the NDA hereby acknowledge that the Information is highly confidential, and undertake that, for a period of ten (10) years from the date set forth above, they: (i) shall treat and maintain the Information (regardless of whether or not the Information is embodied in a physical object) as confidential, and hold all such Information in confidence for each other, utilizing at least the same degree of care the respective Party to the NDA uses to protect its own confidential information; (ii) shall not disclose the Information (or any portion or copy thereof) to any third party; (iii) shall not use the Information in order to register or apply for patent rights of whatsoever kind unless authorized in writing by the other Party, and shall not claim any right on patents registered by the other Party regarding the disclosed Information; and (iv) shall not use the Information or any part thereof for any purpose other than the limited purpose mentioned in Section 2 above, without the written consent of the other Party, except if and to the extent that:

3.1 the Information is in the public domain at the time of disclosure or subsequently becomes part of the public domain, except by the breach by one of the Parties to the NDA of its obligations hereunder; or

3.2 the Information is received by the respective Party from a third party, provided that such Information was not obtained by said third party in breach of obligations of confidentiality; or

3.3 the Information is shown by the respective Party to be in its possession prior to the time of execution hereof and was not acquired in violation of the obligations of secrecy imposed hereunder.

4. The Parties to the NDA undertake to disclose the Information only to those of their consultants, employees and affiliates who have to be so informed in order to ensure its proper evaluation, on a “need-to-know” basis. The Parties shall be responsible for ensuring that the obligations of confidentiality and non-use contained herein are observed by said consultants, employees and affiliates, and they declare that they have policies and procedures which provide such adequate protection for the Information.

5. Any copies of the Information made by a Party to the NDA shall, upon reproduction by the respective Party, contain the same proprietary and confidential notices or legends which appear on the Information provided pursuant hereto. The respective Party’s information produced as a result of the assessment of the Information shall be similarly marked.

6. Upon the request by any of the Parties to the NDA, the other Party shall return to the requesting Party all the Information, including all records, products and samples received, and any copies thereof as well as any notes, memoranda or other writings or documentation which contain or pertain to the Information or any portion thereof, and shall erase all electronic records thereof.

7. The Information and all rights, title and interest therein shall remain at all times the exclusive property of the Party that provided the said Information. Nothing hereunder may be construed as granting any right, warranty or license by implication or otherwise under any patent, copyright, know-how or design rights, or other form of protection of industrial or intellectual property, or as creating any obligation on the part of the respective Party to enter into any business relationship whatsoever or to offer for sale any service or products.

8. This Agreement constitutes the entire agreement and understanding between the Parties with respect to the subject matter hereof and supersedes all prior written or oral agreements with respect hereto.

9. This Agreement may not be modified except by written instrument signed by a duly authorized representative of each Party hereto.

10. This Agreement shall be governed by the laws of the State of Israel.

For: IIM For _______________________

Name:………………………………..

Address:…………………………….

Tel.:…………………………………..

If you are interested in investing in this project, please email Benjamin Chernuhin at bchernuhin@hotmail.com
