On a Method of Using Weighted Simulation to Improve the Reliability of Redundant Fiber-Optic Communication Systems

The article describes a weighted-modelling methodology for increasing the reliability of redundant fibre-optic communication systems. As a specific example, the network graph of a multi-node fibre-optic communication system is considered. Determining the reliability characteristics of systems consisting of a large number of elements of different types, with different functional relationships between them, is in general quite a difficult task. The methods presented in this article are intended to solve this problem; along with their relative simplicity, they are highly accurate. The numerical examples in this article show that for highly reliable systems these methods can reduce the variance of the estimate by several orders of magnitude compared with the direct modelling method, and thus also reduce the computing time by several orders of magnitude. The purpose of the study is to increase the reliability of fibre-optic communication systems. The research methodology is based on models with a limited number of monotone failure chains, which admit direct enumeration for highly reliable systems. As a result, an approximate formula for assessing the reliability of a highly reliable system is obtained, both by modelling and analytically; calculations with it can be performed by the quadrature method or by moment methods. This makes it possible to build a model on the block principle, to include full-scale blocks or records of their test results, simplifies the interpretation of the results, and is convenient for software implementation.


Introduction
Modern telecommunication systems are very complex. This poses serious problems for system developers, in particular the qualitative and quantitative analysis of the effectiveness of communication systems at the initial stages of design. Ever higher demands are placed on the reliability of telecommunication systems; accordingly, the requirements for the adequacy of mathematical models and the accuracy of calculations are also increasing. The main mathematical methods for analyzing the reliability of systems functioning in time are the Markov process method and the semi-Markov process method. Both methods, similar in their analytical content, allow a large number of problems in reliability theory to be solved: determining the characteristics of redundant systems, analyzing models of control, preventive maintenance, troubleshooting, etc. [1][2][3][4]. However, as the number of process states grows, these methods run into great analytical difficulties. A problem such as determining the distribution of the uptime of a redundant communication system can be solved in many ways, but not in all cases, and as systems grow more complex the proportion of solvable problems becomes smaller and smaller. The weighted modelling methods considered in this article not only give a computational gain but also make it possible to reveal the qualitative nature of changes in the reliability of communication systems.

1.1. The relevance of the work
In recent years, highly reliable data transmission equipment has come into wide use in information transmission systems in all developed countries. On this basis, the present research is relevant and timely.

1.2. Formulation of the problem
The problem is to apply weighted modelling to redundant communication systems in order to ensure high reliability of the telecommunication system as a whole. To solve it, we consider real problems of reliability analysis of redundant systems, taking into account the nature of the failure flow. In the Markov model, where after k failures the intensity of further failures is λk, the dependence of λk on k can be taken into account in many ways. One of the simplest methods is as follows. We realize the failure times t1, …, t(r-1) as for the simplest (Poisson) flow with parameter λ. If now for some i an independent random variable ωi, uniform in (0, 1), exceeds the acceptance ratio λ(ki)/λ, the trial is considered ineffective and is repeated. It should be borne in mind, however, that such modelling may constitute only a small part of the entire implementation, in which case even coefficient-based weight values are acceptable.
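One way to realize such a state-dependent failure flow from the simplest flow is acceptance-rejection (thinning). The sketch below is illustrative, not the article's exact algorithm: the function name and parameters are assumptions, and this variant discards rejected proposals rather than repeating the whole trial; it requires every λk to be bounded by the proposal parameter λ.

```python
import random

def simulate_markov_failures(rates, lam, horizon, rng=None):
    """Thinning sketch for a failure flow whose intensity after k accepted
    failures is rates[k].  Proposals come from the simplest (Poisson) flow
    with parameter lam; the k-th proposal is kept with probability
    rates[k] / lam, which requires rates[k] <= lam for every k."""
    rng = rng or random.Random(1)
    t, k, accepted = 0.0, 0, []
    while True:
        t += rng.expovariate(lam)              # next proposed moment
        if t > horizon or k >= len(rates):
            return accepted
        if rng.random() < rates[k] / lam:      # accept with probability rates[k]/lam
            accepted.append(t)
            k += 1

# Example: intensity decreases after each failure.
times = simulate_markov_failures([0.5, 0.3, 0.1], 0.5, 10.0)
```

The accepted moments are automatically ordered, and at most len(rates) failures are produced, which matches the bounded failure chains considered below.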

Materials and methods
An alternative method is to reject the implementation if at least one of the inequalities fails; equivalently, the moments ti are obtained as ordered independent random variables uniform in the interval (0, y). Indeed, the joint density is proportional to exp(-λΣti), and for small λy the exponent is approximately replaced by zero, so the density is nearly uniform. As is well known, any flow of homogeneous events is defined by the distributions F(t1), F(t2 | t1), F(t3 | t1, t2), and so on. If the flow events themselves are unlikely in the interval under consideration (0, y), then there is no sense in approximating these distributions very precisely; it is desirable, however, to preserve the general behaviour of the function [3][4][5][6][7]. Thus, if t1 is the moment when an element with unloaded duplication fails, its conditional density on (0, y) is 2t/y²; accordingly, for small λy we introduce the corresponding weight and model t1 as y·max{ω1, ω2}, where ω1 and ω2 are independent and uniform in (0, 1). These functions may also depend on additional parameters: the operating mode and load level at the given moment, the interval during which a particular mode is valid, and many others [6][7]. In most cases, a procedure for calculating the failure rate λ(…) is preferable to the use of the conditional distribution functions F(…); the latter determine only the weight, and only when the approximating formula takes the corresponding form. When reliability is calculated by formulas, only a few of its characteristics can be obtained explicitly; statistical modelling immediately opens up far wider possibilities in this regard. In this article we consider only functionals of the process on a failure chain (not necessarily monotone). The indicator of falling into an inoperable state is computed by iterating through the states entered successively by the system and accessing a table, or a procedure, that evaluates the operability function.
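The representation t1 = y·max{ω1, ω2} can be checked numerically: the maximum of two independent uniforms on (0, 1), scaled by y, has distribution function (t/y)² and hence density 2t/y² on (0, y), with mean 2y/3. A minimal sketch (the function name is an assumption):

```python
import random

def sample_duplicated_failure(y, rng):
    """Failure moment of an element with unloaded duplication, conditioned on
    failure inside (0, y): distribution (t/y)^2, density 2t/y^2, realized as
    y * max(omega1, omega2) with omega1, omega2 uniform on (0, 1)."""
    return y * max(rng.random(), rng.random())

rng = random.Random(42)
samples = [sample_duplicated_failure(1.0, rng) for _ in range(200_000)]
mean = sum(samples) / len(samples)      # theoretical mean is 2/3
```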
The time τ spent in the inoperable state is calculated as a function of ti, ηi and the successive states of the system. The statistics τi give unbiased estimates of the moments of the distribution of τ, and the indicator of the event {τ > Δ} is an indicator of system failure under temporary redundancy. The mathematical expectation of τ is essential for calculating the non-stationary and stationary availability coefficients of the system. The work performed during the lifetime of the failure chain is calculated as the sum of the times spent in the various states multiplied by the system performance in them. It is then easy to determine the loss of productivity α as the difference between the work the system would perform during this time if it were fully serviceable and the actual work [8][9][10][11]. An indicator of failure in fibre-optic systems with temporary redundancy is the value of α reaching a certain critical level. Let the system perform object management, and let the average mismatch characteristic, or some other indicator of control quality, be σ0 in the operable state. In an inoperable state i the quality indicator changes according to the differential equation σ'(t) = fi(σ, t); when σ(t) reaches σ1, control fails. First we realize a chain of failures, then solve the differential equation on the successive intervals where the state of the system is unchanged, registering a failure when σ(t) reaches the critical level. This model generalizes to a multidimensional parameter σ(t), which is essential when managing a multi-parameter object. Let us examine the partially accessible fibre-optic communication system shown in Fig. 1. Let δi(t) denote the number of failed inputs of the i-th router at time t. It is easy to see that the system remains operable until a failure becomes possible (for example, when δ1(t) + δ2(t) = 5); therefore we consider failure chains of length 5 or more.
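The loss of productivity α described above can be computed mechanically from one realized failure chain. A minimal sketch, with the function name, the state labels, and the two-level performance table all assumed for illustration:

```python
def productivity_loss(intervals, perf, nominal=1.0):
    """Loss of productivity alpha over one failure chain.

    intervals: list of (duration, state) pairs visited by the system;
    perf: mapping state -> performance level in that state;
    nominal: performance of a fully serviceable system.
    Returns the work a fully serviceable system would perform during the
    chain's lifetime minus the work actually performed."""
    actual = sum(d * perf[s] for d, s in intervals)
    total = sum(d for d, _ in intervals)
    return nominal * total - actual

# Example chain: 2 time units fully serviceable, 1 time unit degraded to 50 %.
alpha = productivity_loss([(2.0, "ok"), (1.0, "deg")], {"ok": 1.0, "deg": 0.5})
```

Comparing α against a critical level then gives the failure indicator for systems with temporary redundancy.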
For the calculation, given that the recovery time is comparable to T (its average value is T/2), we consider the system non-recoverable. In this case, however, the probability of the minimal failure chains approximates q(T) insufficiently accurately. For 10 % calculation accuracy we choose the weights so that chains of up to 9 failures are realized.
Let us consider an example that allows the structure of the process to be visualized more clearly [12][13][14][15].
Consider a recurrent algorithm for constructing trajectories of a random process ξ(t), t ≥ 0, which is the basis for the weighted modelling method proposed below. Let ξ0(t), …, ξN(t), t ≥ 0, be independent right-continuous Markov processes defined on the same probability space and taking values in X, where N is some fixed natural number and (X, A) is a measurable space. The trajectory of the process ξ(t), t ≥ 0, is then constructed recurrently from the trajectories of these processes. The weighted modelling method described in this article allows one to calculate with high accuracy the characteristics of systems represented as a nonnegative, bounded and measurable function of the corresponding variables. The need for special methods of accelerated modelling of the quantity P(T) is justified by the fact that for real systems failure is a rare event (i.e. the probability of failure in [0, T] is small, of the order of 10⁻⁴ to 10⁻⁶), and therefore direct simulation methods cannot be used. At the same time, the complexity of real systems is often an insurmountable obstacle to the use of analytical and asymptotic methods. Therefore, in a number of cases, the only way to find certain characteristics of complex systems is to create accelerated modelling methods that combine the advantages of analytical methods (high accuracy) and statistical methods (universality).
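The rare-event difficulty can be illustrated on a toy problem; the exponential tail probability and the tilted proposal below are illustrative assumptions, not the article's system model. Direct simulation with 2·10⁴ replications usually observes almost no events of probability about 4.5·10⁻⁵, while a weighted (importance-sampling) estimator that draws from a heavier-tailed proposal and reweights by the likelihood ratio recovers it accurately.

```python
import math
import random

def direct_mc(n, a, rng):
    """Direct Monte Carlo estimate of P{X > a} for X ~ Exp(1)."""
    return sum(rng.expovariate(1.0) > a for _ in range(n)) / n

def weighted_mc(n, a, lam_q, rng):
    """Weighted estimate: draw X from the proposal Exp(lam_q) and reweight
    by the likelihood ratio exp(-x) / (lam_q * exp(-lam_q * x))."""
    total = 0.0
    for _ in range(n):
        x = rng.expovariate(lam_q)
        if x > a:
            total += math.exp(-x) / (lam_q * math.exp(-lam_q * x))
    return total / n

rng = random.Random(0)
p_true = math.exp(-10)                        # about 4.5e-5
p_direct = direct_mc(20_000, 10.0, rng)       # usually sees only 0-3 hits
est = weighted_mc(20_000, 10.0, 0.1, rng)     # close to p_true
```

Here nearly every weighted replication lands in the rare region, so the variance of the estimate drops by orders of magnitude, which is the effect the article exploits for failure probabilities.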
Here are the two most important interpretations of the quantity P(T). Let E be the set of system failure states. Then P(T) can be interpreted either as the probability that the process enters E within the interval [0, T], or as the average system failure time in the interval [0, T].
Note that the structure of the process ξ(t), t ≥ 0, is close to that of switching processes [3].
If no further event occurs, the score is recorded and the estimate is built; otherwise the moment ti of arrival of the i-th event is drawn according to its conditional distribution. Even very simple and rough estimates show that for highly reliable systems the described method achieves a very significant gain in variance (and, consequently, sharply reduces the amount of computer time). For highly responsible technical systems, the failure of the next element is a rare event; it is therefore natural to estimate the variance of the estimator (which can be used for both highly reliable and unreliable systems) by the sample variance. The examples given below, as well as calculations of the reliability of real technical systems, show that for highly reliable systems the cost of computer time can be reduced by several orders of magnitude compared with direct simulation [11][12][13][14]. The described method serves as a basis for determining the reliability characteristics of various classes of systems; for this, the corresponding random processes must be specified for the system under study. We now consider exactly how to do this for weighted modelling of redundant systems with recovery. Redundant systems with recovery are one of the most important classes of systems. To determine the reliability of a typical system of this class, we use a weighted modelling method based on the general ideas of the method described above.
In this way, we interpret the random processes for the system under consideration. According to the general method outlined above, it is enough to have an algorithm for constructing the trajectories of these processes.

Then the failure of the next element occurs with probability 1 before the end of the recovery of one of the elements that are faulty at the moment t1. In this case the trajectory of the process is built by the direct simulation method. The algorithm formulated below allows one to build joint realizations of the number ν of the element that fails first and the moment τ of its failure, under the condition that one of the elements fails in the interval [0, T].

Results
On the basis of the above interpretations of the random processes, the following weighted probability modelling method can be proposed; it follows from the general algorithm given above [5].
The accuracy of the estimates obtained depends essentially on the parameter N ∈ {1, …, r}. If N = 0 is chosen, the method formulated above turns into the conventional method of direct simulation. It is easy to show that the variance D P̂1(T) decreases monotonically with increasing N and is minimal at N = r.
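The role of the parameter N can be illustrated with a toy sketch; the Bernoulli failure model and all names are illustrative assumptions, not the article's system. Forcing the first N of r failure events to occur, each contributing its probability as a weight factor, interpolates between direct simulation (N = 0) and a zero-variance estimator (N = r).

```python
import random

def forced_chain_estimate(p, r, n_force, n_reps, rng):
    """Toy weighted model: the system fails when r independent events, each
    of probability p, all occur.  The first n_force events are forced to
    occur, each contributing a factor p to the weight; the remaining
    r - n_force events are simulated directly.  n_force = 0 is direct
    simulation.  Returns the estimate of p**r and its sample variance."""
    scores = []
    for _ in range(n_reps):
        weight = p ** n_force
        hit = all(rng.random() < p for _ in range(r - n_force))
        scores.append(weight if hit else 0.0)
    mean = sum(scores) / n_reps
    var = sum((s - mean) ** 2 for s in scores) / (n_reps - 1)
    return mean, var

# With n_force = r every score equals p**r, so the sample variance vanishes
# up to floating-point rounding.
mean_full, var_full = forced_chain_estimate(0.1, 3, 3, 200, random.Random(0))
```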

Conclusions
Based on the above example, one can conclude that weighted modelling is one of the effective ways to speed up simulation. In the practice of modelling random processes that describe the behaviour of complex systems, various concretizations of the weighted modelling method have been applied, taking into account the specifics of a particular class of problems.