Development of a Technique for the Spectral Description of Curves of Complex Shape for Problems of Object Classification

Vascular pathology symptoms can be determined by retinal image segmentation and classification. However, retinal images from non-invasive diagnostics have a complex structure containing tree-like vascular beds, multiple segment boundaries, false segments, and various distortions. It should be noted that segmentation of complexly structured images does not always provide a single solution. Thus, the goal is to increase the efficiency of vascular diagnostics. This study aims to develop a technique for describing the geometric properties of complexly structured image segments used for classifying vascular pathologies based on retinal images. The advantages and disadvantages of the existing segmentation methods and algorithms were considered, and the areas where these methods and algorithms are most effective were revealed. Using the detection of retinal thrombosis as a test case, the efficiency of an algorithm for constructing a mathematical model of an arbitrarily shaped segment based on the morphological processing of binary and halftone images was justified. A modified variant of this algorithm, based on the spectral analysis of arbitrarily shaped boundary curves, was used for the spectral description of complex shape curves for classifying vascular pathologies based on retinal images. Two approaches have been developed. The first one obtains a closing segment of the curve from a symmetric mapping of the initial parametric curves. The second involves intelligent data processing and obtaining contours of minimum thickness that form convex sets. The results of the experiments confirm the possibility of practical use of the developed technique for vascular pathology classification based on retinal images: the probability of a correct forecast was 0.93 with all associated risk factors taken into account.


At the same time, medical researchers are becoming increasingly interested in using image segmentation techniques to diagnose various diseases using ultrasound, tomography, optical, fluorographic, and other non-invasive medical imaging techniques [11][12][13][14]. In the listed cases, image segmentation detects hidden elements in a medical image [15], such as tumors and other diseases. In addition to diagnostics, this method is used for surgical planning, intraoperative navigation, and more detailed anatomy studies. Furthermore, it is essential for creating organ models using medical imaging, which frequently employs volumetric images from computed tomography and magnetic resonance imaging [16][17][18]. Segmentation also makes it possible to automate auxiliary practical tasks, such as estimating organ volume through three-dimensional image segmentation [19][20][21]. Figure 1 demonstrates one of the segmentation methods.

Figure 1. Medical image segmentation result
Generally, biomedical image analysis is a subject primarily related to computer-assisted diagnostics [22,23]. The authors expect that mathematical methods and machine learning will, in the future, help to significantly simplify and speed up disease diagnosis, particularly at the early stages [24]. Diagnoses will become more accurate, and they will be established faster, so there will be more chances to preserve health and save lives. Advances in image recognition could contribute a lot to analyzing medical images, but it is not so simple [25,26]. Today, neural networks are actively used to analyze biomedical images in conjunction with other methods [27]. Neural networks are a helpful tool, but they will not replace research involving mathematical methods. For example, in MRI, the data picture is obtained using an inverse Fourier transform, and it can exhibit defects directly related to the physics of acquisition, such as the loss of frequency information. Therefore, very different mathematical methods and approaches must be used, so presumably any area of mathematics would be applicable here. In intelligent image processing, nearly every mathematical area is used, from quaternions and topology to graph theory and statistical methods. Such methods are known for determining image descriptors and their segments, which allow one to build a table of feature spaces for some affine transformations, noise, and illumination changes. However, these descriptors are designed to work in intelligent systems for searching similar images rather than in classification systems. At present, algorithms based on using many basic classifiers followed by aggregation of their solutions to reduce classification errors are successfully used for classification. The use of these algorithms implies large sample sizes. However, for a rather large class of complexly structured images, medical images in particular, providing a training sample of the necessary size is quite challenging.
The essential component of this complex problem is developing an approach that makes it possible to describe the geometrical properties of segments of complexly structured images from a unified position.
The above shows that developing new techniques and mathematical methods remains an essential task in image analysis. Therefore, to improve the efficiency and quality of diagnostics, simplify the work of medical specialists, and reduce the probability of a negative impact of the human factor, using automated digital processing of medical images, this work develops a technique for the spectral description of complex shape curves for the classification of vascular diseases from retinal images. Sokic et al. (2016) developed a method for extracting descriptors that include the phase of the Fourier coefficients [28]. For this purpose, specific points were defined and used as the landmarks of the shape orientation. The study found that this method is superior to many existing ones. Yang et al. (2019) described a multiscale Fourier descriptor based on triangular elements used to identify shapes [29]. This descriptor is invariant to geometric transformations and to the choice of the object's starting point. Test results confirm that the proposed method outperforms comparable methods for describing complex shapes in terms of search efficiency and computational complexity. Li et al. (2015) proposed a universal Fourier descriptor for the holistic and efficient description of color images without losing any color information [30]. The results confirm that the quaternionic general invariants of the Fourier descriptor are stable against geometric transformations and noise. This descriptor makes it possible to achieve high accuracy, reliability, and efficiency in color object recognition.

2-Literature Review
El-Ghazal et al. (2012) touched on the problems of accuracy and computation of Fourier descriptors [31]. They proposed a new curvature-based Fourier descriptor for shape retrieval. The authors used an unconventional method for representing the shape contour in curvature scale space as a two-dimensional binary image. According to the experimental results, the developed method has high performance compared with the known ones. Shu et al. (2015) proposed a new multiscale signature, the Fourier descriptor, as a function of the contour line that reflects the local characteristics of the contour [32]. This method allows describing both the coarse and fine features of the shape. Yadav et al. (2007) conducted a comparative study of shape-based object retrieval and classification using three methods: traditional Fourier descriptors (FD), general Fourier descriptors (GFD), and wavelet Fourier descriptors (WFD) [33]. As a result, it was found that WFD performs better than the FD and GFD methods. Chen et al. (2009) presented a Fourier descriptor-based image alignment algorithm (FDBIA) for automatic optical inspection (AOI) applications in a real-time environment [34]. It combines component detection and contour tracing algorithms that use information about the magnitude and phase of Fourier descriptors to establish a match between the target objects detected in the reference image and the inspected images, so the parameters for aligning the two images can be adequately estimated. To improve computational efficiency, the proposed component detection and contour tracing algorithms use run-length encoding (RLE) and Blobs tables to represent pixel information in the areas of interest. Mennesson et al. (2014) proposed new Fourier-Mellin descriptors for color images [35]. They are constructed using Batard's representation of images and Clifford-Fourier transforms and extend the classical Fourier-Mellin descriptors for grayscale images.
They are invariant to direct similarity transforms (displacement, rotation, and scale), and marginal processing of color images is avoided. The implementation of these functions is given, and the choice of a bivector (a dedicated color plane that parameterizes the Clifford-Fourier transform) is discussed. The proposed formalism extends and clarifies the concept of the direction of analysis introduced for quaternion Fourier-Mellin moments. Duan et al. (2008) presented an edge-based image alignment method that combines Fourier descriptors (FD) and iterative closest point (ICP) computation into an accurate and reliable processing pipeline [36]. Here, Fourier descriptors are used to simultaneously determine edge matching and estimate transformation parameters. The authors conclude that matching edge pairs can be reliably detected for all identified edges using a Fourier descriptor and a reliable distance matrix. Thus, shape is an important visual feature of an image, widely used to describe its content for classification and retrieval [37,38]. The literature analysis allows us to conclude that methods and partial algorithms for digital image analysis and for the description of morphological features of objects with complex geometric shapes based on Fourier descriptors are still being rapidly developed through various modifications and adaptations to solve specialized tasks with their specific features and constraints.

3-Research Methodology
The object of this study is complexly structured low-entropy images of biomedical objects obtained by non-invasive diagnostics using medical imaging tools. The segmentation process of complexly structured images is currently not a strictly formalized algorithm. Two main approaches describe the algorithmization of this process. The first one is based on accentuating the brightness variations in each image pixel; in other words, it is a search for its outlines, the "edges." Here, the concept of "edge" covers the segmentation process and the segments' boundaries. The essence of the second approach is to find sets of homogeneous colors (brightness) or homogeneous textures.
An essential requirement for image analysis aimed, for example, at detecting retinal thrombosis is that all vessel boundaries should be visible during the analysis. To achieve such an effect, it is preferable to use a mathematical apparatus that works with halftone raster or binary images. The most effective approach here is the technique of modeling a segment of arbitrary shape. In this way, it becomes possible to create an image representation of almost any curve, finalized as a contour of minimum thickness. In such a case, the critical issue lies precisely in the mathematical representation of the complex curve. Therefore, at this stage of the study, let us assume that, as a result of intelligent segmentation of images, all segments are expressed in the form of an arbitrary curve, with one fragment shown in Figure 2. The fundamental task of morphological curve description is to determine the set of curves Li contained in a binary image in R2.
In this case, the curve is determined by a vector function Fi(S), where S is the arc length along the curve Si. The derivative of this vector function is the unit tangent, dFi/dS = (cos α, cos β), where α is the angle between the positive direction of the Ox axis and the tangent, and β is the angle between the positive direction of the Oy axis and the tangent. Next, let us investigate the arbitrary closed curve shown in Figure 3. In an information file, the closed curve is interpreted as a sequence of pixel coordinates. Since the image is binary, there is no need to store the function value within the information file; instead, the function can be described by its arguments, the set {(xg, yg)}. This set acts as a union of two non-intersecting subsets, Pi and Qi. The subset Pi ranks its elements xg in ascending order, while the subset Qi ranks them in descending order. All elements are ordered in pairs, with a defined order of transition between pairs. Together, the non-intersecting sets Pi and Qi form a partition of the set of pixels of the closed-curve contour shown in Figure 3.
Since not all curves are closed and not all meet the requirements of Equations 6 and 8, it is necessary to use a technique for data normalization so that all these conditions are fully satisfied. At the first stage of normalization, detuning from all interferences must be performed by fixing the curve within a Cartesian coordinate system. Thus, by moving the segment along the x-axis by a certain distance x1 and along the y-axis by a certain distance y1, the problem described above can be leveled. The shift of the image is achieved by the transformation x′ = x − x1; y′ = y − y1.
If the curve F(x, y) = 0, described by the sets Pi and Qi, is closed, it also belongs to the class of periodic curves. Due to compliance with these conditions, it can be described by a discrete Fourier transform. However, if the parametric representation of the closed curve is used, it is necessary to use the Fourier spectrum counts. Special attention should be paid to the complex counts generated during discretization. Their main characteristic is a uniform step along the abscissa axis. Here, 2N serves as the total number of counts required to describe the segment boundary accurately. It is also worth noting that our statement implies that the total numbers of elements in Pi and Qi will be the same, corresponding to the condition |Pi| = |Qi| = N. One pixel is chosen as the sampling step. However, we must not forget about the scale of the initial image, for which it is necessary to coordinate the available step with the actual image. In this case, the sets Pi and Qi must be considered as an ordered array of complex numbers. Since the curved lines characterizing the topology of the image segments are not closed, they do not fulfil Equations 6 and 8. Let us consider the two variants in which the closure of the image segment occurs.
The first one is based on the fact that the curve is represented in parametric form. It makes it possible to obtain the closing segment of the curve using the symmetric representation of the initial parametric curves shown in Figure 4. This transformation is implemented using a simple algorithm: the data array containing the coordinates is doubled to obtain 2Li cells, and the same values are written into the newly formed cells in reverse order. The analogous equation, identical to Equation 13, is used for the y coordinate.
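This doubling step can be sketched in a few lines (an illustrative Python fragment; the function name `close_by_symmetry` is ours, not from the paper):

```python
def close_by_symmetry(xs, ys):
    """Close an open L-point curve by appending its own points in reverse
    order, producing a 2L-point traversal that ends next to where it began."""
    return xs + xs[::-1], ys + ys[::-1]

# An open three-point polyline becomes a closed six-point traversal.
cx, cy = close_by_symmetry([0, 1, 2], [0, 1, 0])
```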
It is worth noting that this method has a disadvantage, shown in Figure 5-a, where the curve topology in the split form is demonstrated. Therefore, using an equation like Equation 8 is not acceptable here. Furthermore, since retinal vessels belong to branched curves, the method considered is rather limited. Therefore, let us consider the second variant.

Figure 5. Conversion of a tree-like curve into a closed contour: a) initial curve; b) closed contour obtained as a result of the transformation
It uses intelligent information processing to obtain outlines of minimum thickness. This variant is shown in Figure 5-b: a branched curve converted to the corresponding contour of minimum thickness.
The algorithm was based on accentuating changes in brightness in each pixel of the image or finding sets of uniform colors or uniform textures. For intelligent processing of a branched curve, it is possible to use the corresponding morphological operators in the Image Processing Toolbox of the MATLAB software package, which is widely used to solve various computational problems. In order to transform a branched curve into a single closed contour, one must apply a series of morphological transformation operations in a specific sequence: imfill, skel, dilate, bwperim, shrink. To demonstrate the efficiency and operability of this technique, let us analyze a variant of test curve processing, shown in Figure 6.

Figure 6. Test image of the curve
The morphological operator imfill processes the initial image at the first stage to fill the holes in the initial binary image. The skel operator is applied to the already filled image at the second stage. It has a parameter indicating the degree of its execution; here it is applied until the changes in the image stop. Figure 7 shows the result of changing the initial image with the imfill and skel operators. In the third stage, the dilate operator processes the obtained image, performing dilation with a 3×3 structuring element. Figure 8 shows the results of the dilate operator for the first acquired image.
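The dilation stage alone can be illustrated as follows (a pure-Python sketch of binary dilation with a 3×3 square element, not the Image Processing Toolbox implementation):

```python
def dilate3x3(img):
    """Binary dilation with a 3x3 square structuring element: a pixel is set
    in the output if any pixel in its 3x3 neighbourhood is set in the input."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < h and 0 <= cc < w and img[rr][cc]:
                        out[r][c] = 1
    return out

# A one-pixel-thick line grows by one pixel on every side.
thin = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 0, 0, 0]]
thick = dilate3x3(thin)
```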

Figure 8. Results of image processing with the imfill, skel, dilate operators
After that, the morphological operator bwperim processes the obtained image; it is used to outline the specific boundaries of binary objects. The results of this operation are shown in Figure 9.

Figure 9. Obtained image using the first four morphological transformation operations
It is necessary to use the morphological operator shrink to obtain the final image. It compresses objects without internal holes into points, while objects with internal holes are compressed into rings. Figure 10 demonstrates the results of this operation.

Figure 10. Results of image processing with the operators imfill, skel, dilate, bwperim, shrink
The difference between the curves in Figures 9 and 10 is shown in Figure 11 on a larger scale. Figure 11 demonstrates a particular image element, sequentially processed by the operators imfill, skel, dilate, and bwperim, and indicates the most problematic places due to image digitization, implemented with the parameter l, which has a step equal to one pixel. The advantage of this method is that the starting point can be chosen arbitrarily, since a complete contour traversal will return to the starting point without any errors or distortions. After a complete traversal, all the pixels of the given image are assigned the number 1. Several problem areas depicted in Figure 11-a are in a zone of uncertainty due to the ambiguity of their digitizing direction. Unambiguity of the digitizing direction can be achieved only when each pixel borders exactly two neighbouring ones; otherwise, an artifact cannot be excluded. Figure 11-b demonstrates the ability of the operator shrink to enforce this condition.
A similar technique of morphological transformation of complex shape curves provides an opportunity to obtain an arbitrary line that corresponds to the topology of image segments as a closed-type curve. The resulting closed curve will have the property of minimum thickness, expressed by Equations 14 and 15. However, these equations are valid only for contours of minimum thickness that form convex sets. The overwhelming majority of minimum-thickness contours corresponding to the segments of complex images do not form convex sets; consequently, Equations 14 and 15 can be applied only in restricted cases.
If the curve is closed, it is periodic and can be represented by a discrete Fourier transform. Let us consider a K-point boundary of a contour in the xOy plane. The initial point (x0, y0) is chosen arbitrarily. Starting from this arbitrarily chosen point, it is required to perform a complete traversal of the contour, carried out counterclockwise. The result will be the coordinates of the "edge" points, or boundaries, recorded as pairs (xk, yk), k = 0, 1, …, K − 1. Based on these data, the boundary acts as a set of coordinate pairs. Let us use complex numbers to express the coordinate pairs: zk = xk + j·yk. Eventually, the coordinates x and y of the points are taken as the real and imaginary parts of a sequence of complex numbers.
The essential advantage of such an algorithm is that we can transform a two-dimensional spectral analysis problem into a one-dimensional one, significantly simplifying its subsequent solution. It is vital to note that, despite the changes in the interpretation of this sequence, the essence of the boundary has not changed, meaning that the same sequence of coordinates is used to describe it. The Fourier transform of this sequence is a(u) = (1/K) Σk zk·exp(−j2πuk/K), for the series u = 0, 1, 2, …, 2N − 3. The coefficients a(u) intended to describe the boundary are called decomposition coefficients or Fourier descriptors. Therefore, to restore the boundary zk, it is necessary to apply the inverse Fourier transform to these coefficients: zk = Σu a(u)·exp(j2πuk/K), for values k = 0, 1, 2, …, K − 1. When working with a spectral representation of contours of minimum thickness, it makes sense to use the potential of the MATLAB software package mentioned above.
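The transform pair above can be sketched in plain Python (an illustrative re-implementation of the standard discrete Fourier transform, not the paper's MATLAB code); keeping all K descriptors restores the boundary exactly, up to floating-point error:

```python
import cmath

def fourier_descriptors(xs, ys):
    """DFT of the complex boundary sequence z_k = x_k + j*y_k:
    a(u) = (1/K) * sum_k z_k * exp(-j*2*pi*u*k/K)."""
    K = len(xs)
    z = [complex(x, y) for x, y in zip(xs, ys)]
    return [sum(z[k] * cmath.exp(-2j * cmath.pi * u * k / K)
                for k in range(K)) / K for u in range(K)]

def restore_boundary(a):
    """Inverse transform: z_k = sum_u a(u) * exp(j*2*pi*u*k/K)."""
    K = len(a)
    return [sum(a[u] * cmath.exp(2j * cmath.pi * u * k / K)
                for u in range(K)) for k in range(K)]

# Round trip on a small square contour.
xs, ys = [0, 1, 1, 0], [0, 0, 1, 1]
z_back = restore_boundary(fourier_descriptors(xs, ys))
```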
When solving problems related to creating the space of informative characteristics, the descriptor value must not depend on the number of contour pixels. Therefore, modified descriptors Ŝ(k) should be introduced into the abovementioned MATLAB software package. This calculation uses only M terms, while the value of k still spans the specified range from 0 to 2N − 3. Therefore, it can be concluded that the approximated boundary will consist of the same number of points, but determining their coordinates will require a much smaller number of terms. Based on the Fourier transform analysis results, it becomes clear that all high-frequency elements display fine details, and all low-frequency elements form the overall boundary contours. A simple conclusion follows: as the value of M decreases, the number of lost constituent elements of the boundary increases. The boundary of an arbitrary contour is shown in Figure 12, along with the results of restoring it for different values of M. The MATLAB spectral analysis package capabilities were used to obtain these results.
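The effect of keeping only M low-order terms can be illustrated as follows (an assumed Python sketch; the bookkeeping relies on the fact that, in DFT index order, low frequencies sit near both ends of the index range):

```python
import cmath

def truncated_restore(xs, ys, M):
    """Restore a K-point boundary from roughly the M lowest-frequency Fourier
    descriptors; the remaining coefficients are zeroed, so the restored
    boundary keeps the same number of points but loses detail as M shrinks."""
    K = len(xs)
    z = [complex(x, y) for x, y in zip(xs, ys)]
    a = [sum(z[k] * cmath.exp(-2j * cmath.pi * u * k / K) for k in range(K)) / K
         for u in range(K)]
    # In DFT index order, low frequencies are the indices near 0 and near K.
    a = [a[u] if min(u, K - u) <= M // 2 else 0 for u in range(K)]
    return [sum(a[u] * cmath.exp(2j * cmath.pi * u * k / K) for u in range(K))
            for k in range(K)]

# A coarse restoration still has as many points as the original contour.
xs, ys = [0, 2, 3, 3, 2, 0, -1, -1], [0, 0, 1, 2, 3, 3, 2, 1]
coarse = truncated_restore(xs, ys, 2)
```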

Figure 12. Restoring contour outlines by the number of Fourier coefficients M
It is worth noting that the corners of the restored boundaries begin to deviate from the original outline for values of M around 33. When M = 98, an exact copy of the initial contour is restored. So, it can be concluded that a small number of low-order coefficients creates a complete description of the general shape of the boundary, which makes it possible to solve the initial task. At the same time, detailed restoration of all small details requires many high-order terms. The combination of informative features of the spatial frequency zone to determine the contour outlines is achieved by using Fourier descriptors invariant to rotation and displacement. It is required to ensure that the susceptibility of descriptors to geometric changes, such as rotation, resizing, or displacement of objects, is minimal, since each invariance automatically implies the loss of one degree of freedom. It is also worth remembering that the first coefficient of the Fourier decomposition must be discarded to achieve invariance to displacement, which causes the loss of two degrees of freedom.
In order to determine dimensional invariance, it is necessary to resort to the absolute value of the second coefficient of the Fourier decomposition, which is normalized to 1. The definition of rotational invariance is related to the relationships between the phases. In this case, the presence of three invariants means the loss of four degrees of freedom.
Suppose the sequence of movement along the border points influences the processing result. In that case, it is necessary to add the condition of independence of the descriptors from the choice of the starting point. A change of the starting point by k0 positions, zp(k) = z(k − k0), changes the descriptors according to ap(u) = a(u)·exp(−j2πuk0/K). The descriptors are dependent on the change of the starting point because a(u) is multiplied by a term that depends on u.
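This starting-point property can be checked numerically (an illustrative Python sketch, with the shift implemented by cyclically rotating the coordinate list):

```python
import cmath

def dft(z):
    """Normalized DFT: a(u) = (1/K) * sum_k z_k * exp(-j*2*pi*u*k/K)."""
    K = len(z)
    return [sum(z[k] * cmath.exp(-2j * cmath.pi * u * k / K) for k in range(K)) / K
            for u in range(K)]

# Delaying the start point by k0 samples, z_p(k) = z(k - k0), multiplies each
# descriptor by exp(-j*2*pi*u*k0/K); the magnitudes |a(u)| are unchanged.
z = [complex(x, y) for x, y in [(0, 0), (2, 0), (2, 1), (0, 1)]]
k0 = 1
z_shifted = z[-k0:] + z[:-k0]   # z_p(k) = z(k - k0), cyclically
a, a_p = dft(z), dft(z_shifted)
K = len(z)
```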
The undoubted advantage of using Fourier descriptors to represent the geometric structures of specific segments is that the invariants described above are contained in the first two Fourier descriptors. If normalization of the other descriptors is performed using the phase and absolute value of the second descriptor, a description of objects that is invariant to rotation, displacement, and scale is obtained. If higher-order descriptors are disregarded, finer elements can be gradually removed from the shape description in a controlled manner.
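One possible normalization along these lines is sketched below (illustrative Python; the specific feature choice |a(u)|/|a(1)| for u ≥ 2 is our reading of the scheme described above, not the paper's exact formula):

```python
import cmath

def invariant_features(z, n=4):
    """Descriptor features invariant to displacement, scale, and rotation:
    a(0) is discarded (it only encodes the centroid), magnitudes are used
    (rotation and start-point changes only affect phases), and |a(1)| serves
    as the scale reference."""
    K = len(z)
    a = [sum(z[k] * cmath.exp(-2j * cmath.pi * u * k / K) for k in range(K)) / K
         for u in range(K)]
    return [abs(a[u]) / abs(a[1]) for u in range(2, 2 + n)]

# An 8-point square contour, then the same contour moved, scaled, and rotated.
square = [complex(x, y) for x, y in
          [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2), (1, 2), (0, 2), (0, 1)]]
changed = [3 * w * cmath.exp(0.7j) + complex(5, 7) for w in square]
```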

4-Results
Determining the adequacy of the feature system reduces to analyzing its properties. A system is adequate when the frequency range corresponding to a Fourier decomposition coefficient with a given number does not depend on the total number of counts in the contour. It should be taken into account that the overwhelming number of contours on which the borders of segments can be identified have mismatched numbers of counts. Thus, when identifying the informative features of spectral counts, it is necessary to reduce them to similar frequency ranges. If the sampling rates have equal values in contours with different numbers of counts, this condition cannot be fulfilled. As a result, the additional counts will result from the interpolation of the closed contour curve. Therefore, this technique cannot be used when the highest available frequency is used by the algorithm.
Therefore, it is recommended to form the space of informative features to solve this problem. First, it is necessary to set the total number of counts for a contour. This value for each contour related to the training or control samples should coincide. In order to calculate this value, appropriate statistical studies are performed. For example, the total number of pixels in the contours of microscopic capillary images ranges from 500 to 3,000 pixels. In this case, one pixel is equivalent to one count. To study spectral mappings of "contours of minimum thickness," a software package developed in a MATLAB environment was used. The calculated Fourier spectrum corresponding to the contour presented in Figure 10 is shown in Figure 13.

Figure 13. Amplitude spectrum of Fourier descriptors
The spectrum of Fourier descriptors has a symmetric structure: they are grouped into pairs by frequencies, i.e., each descriptor on the negative half axis corresponds to a descriptor on the positive half axis with the same frequency coordinates. The amplitudes of these descriptors can be in very different ratios.
The Fourier descriptors used in the proposed technique of synthesizing the space of informative features in spatial frequencies to identify the boundary shape should be invariant to displacement and rotation. In addition, depending on the particular case, invariance to scale is a relevant source of information when comparing and classifying segments in different images taken at different scales. Table 1 shows, as an example, the experimental data on the fundamental components of the first five Fourier descriptors (|ai|, where i = -5, -4, …, 5) for different models of central line segments of retinal blood vessels with different lengths (with different numbers of counts).
When the problems of forming the space of informative features are solved, the descriptor value must not depend on the number of pixels in the contour, which necessitates the use of Equation 22. As a result, the approximated boundary contains the same number of points, but fewer terms are required to restore their coordinates. Table 2 shows the result of normalizing the Fourier descriptors relative to the fundamental component of the first Fourier descriptor. A comparison between discrete frequency counts can be performed if all contours have the same total number of counts. To equalize the number of samples, it is necessary to perform transformations after which the number of counts in the contours reaches the maximum value. The analysis of the values in Table 2 shows that the descriptors with lower indices in segments of the same shape but different scales became approximately the same. However, the descriptors with higher indices remained different since they determine the high-frequency segment of the spectrum, where the differences in the shape of erythrocytes of different sizes are manifested. From this, we can conclude that invariance with respect to scale is important when comparing and classifying segments in photographs taken at different degrees of magnification.
A specific solution is to traverse the contour multiple times. There should be enough passes for the number of counts to reach the maximum value possible in the sample. This technique of count alignment assumes that each contour in the sample is characterized by a multiple number of counts. In practice, however, this condition cannot be met. Of course, it is possible to set a larger number of counts than the maximum in the given sample. However, the spectrum energy associated with false harmonics will significantly decrease in this case. Furthermore, this approach will significantly increase the duration of calculations, negatively affecting the solution of tasks performed in interactive mode, which involves calculating the spectrum of contours with variable values of adjustable parameters. It is also worth remembering that interpolation can create a situation in which virtual counts are formed in the signal space between the real counts.
As a result of these actions, we obtain the following chain of transformations: contour spectrum → addition of zeros to the spectral counts of the high-frequency segment up to Kmax. Since the amplitude of the Fourier coefficients is interrelated with the frequency response, even insignificant frequency morphisms of the observed signal can cause oscillations of the spectral components. Thus, the inverse Fourier transform and the corresponding mismatches between the direct and inverse transforms can be characterized as a sign of the correctness of the morphisms present in the frequency domain. Furthermore, by introducing additional zeros into the spectrum between the initial counts, additional counts are formed, which are interpolations. The intermediate counts express the exact initial spatial coordinates, since the maximum possible sampling rates were used. Therefore, when the task of spectrum determination appears, a situation should be created on the contour where the coordinates of the count determine its dimensionality, and the values of the auxiliary counts equal the values of adjacent interpolation nodes.
The decrease in the amplitude of the Fourier decomposition coefficients is proportional to the number of intermediate counts formed in the signal spectrum. This process is carried out by adding extra zeros. In this case, we can use the former energy equivalent. From the above, it follows that at the second stage, it is necessary to multiply the spectral components of the i-th contour by Kmax / Ki, with Kmax = 3000. Here, Ki is the number of counts observed in the i-th contour. Figure 14 shows the spectra of the test contour before (a) and after (b) the modification in the spectral domain.
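The count-equalization step can be sketched as follows (illustrative Python; an unnormalized forward transform is used so that the Kmax/Ki rescaling described above appears explicitly, and zeros are inserted into the high-frequency middle of the spectrum):

```python
import cmath

def dft(z):
    """Unnormalized forward DFT."""
    K = len(z)
    return [sum(z[k] * cmath.exp(-2j * cmath.pi * u * k / K) for k in range(K))
            for u in range(K)]

def idft(A):
    """Inverse DFT with 1/K normalization."""
    K = len(A)
    return [sum(A[u] * cmath.exp(2j * cmath.pi * u * k / K) for u in range(K)) / K
            for k in range(K)]

def interpolate_contour(z, k_max):
    """Equalize the number of contour counts by zero-padding the spectrum:
    zeros go into the high-frequency middle, and the components are rescaled
    by k_max / K_i so the interpolated contour passes through the original
    counts."""
    K = len(z)
    A = dft(z)
    half = K // 2
    padded = A[:half] + [0] * (k_max - K) + A[half:]
    scale = k_max / K
    return idft([scale * a for a in padded])

# A 4-count contour interpolated to 8 counts keeps the original 4 points.
z = [complex(x, y) for x, y in [(0, 0), (2, 0), (2, 1), (0, 1)]]
dense = interpolate_contour(z, 8)
```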
Figure 15-a shows graphs of the parametric curves x = f1(t) and y = f2(t), corresponding to the initial image and the image restored from 51 descriptors. Figure 15-b shows the same curves, but corresponding to the modified image spectrum. It can be seen that Figures 15-a and 15-b differ only in scale: the scale on the abscissa axis increased 2.8 times, and the scale on the ordinate axis increased in the same ratio.

Figure 15. Parametric curves of the contour boundaries before modification (a) and after modification (b)
To estimate the information loss when equating a part of the descriptors to zero, it is necessary to compare the initial contour and the contour restored from a limited set of descriptors according to a specific criterion. Illustrative examples of the initial parametric curves and the curves restored using M descriptors are shown in Figure 16.

Figure 16. Parametric curves of the initial and restored contour boundary
By performing the inverse Fourier transform on the modified contour spectrum and comparing the result with the initial contour, it is possible to achieve a minimum number of analyzed descriptors, which is the primary goal of the proposed technique. Restoring the contour boundaries of the segment of interest relies on the fundamental reversibility of the Fourier transform; thus, Equation 22 can be applied. Therefore, whatever the number of counts present on the contour, the same number of descriptors will be required for its restoration. The decision models can be simplified considerably when some descriptors have zero value: in a neural network classification model, the nodes of the input layer that correspond to these descriptors can be excluded. Figure 17 shows the final comparison between the initial contour and the contour reconstructed from 39 descriptors. Since any contour point shown in Figure 17-a corresponds to some other points in Figure 17-b, the restored contour has thickened lines. To estimate the information loss, we can compare the initial contour with another one according to a specifically chosen criterion that also requires a specified number of descriptors. Next, the contour is studied as two parametric curves. Here, M descriptors replace K descriptors, subject to the condition M < K, because in this case it becomes possible to create curves that characterize the outlines of the segment boundaries. Since the characteristics of the initial curve and the curve restored using M descriptors are similar, the area S of the contour of minimum thickness can be expressed as follows. Here, the restored parameter values appear, and K is the total number of counts in the sample for the contour of minimum thickness.
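The replacement of K descriptors by M < K descriptors can be illustrated with a minimal sketch: keep only the M lowest-frequency Fourier coefficients (positive and negative), zero the rest, and invert. The helper name `restore_with_m_descriptors` is hypothetical, and the contour is again modeled as a complex signal, which is an assumption of this sketch.

```python
import numpy as np

def restore_with_m_descriptors(z, m):
    """Restore a contour from its M lowest-frequency Fourier descriptors.

    z : complex contour signal (x + 1j*y); m : number of descriptors kept.
    All higher-frequency coefficients are set to zero before the inverse
    transform, which is what allows the decision model to be simplified.
    """
    spec = np.fft.fft(z)
    keep = np.zeros_like(spec)
    half = m // 2
    keep[:half + m % 2] = spec[:half + m % 2]   # DC and low positive frequencies
    if half:
        keep[-half:] = spec[-half:]             # matching negative frequencies
    return np.fft.ifft(keep)

# Test contour built from harmonics 1 and 3 only
t = np.linspace(0, 2 * np.pi, 256, endpoint=False)
z = (np.cos(t) + 0.3 * np.cos(3 * t)) + 1j * (np.sin(t) + 0.3 * np.sin(3 * t))
restored = restore_with_m_descriptors(z, 9)
err = np.max(np.abs(z - restored))
print(err < 1e-9)  # harmonics up to |k| = 4 cover this contour exactly
```

When the discarded coefficients carry little energy, the restored curve is visually indistinguishable from the original, which is the basis of the comparison in Figure 16.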
To establish the precise coordinate k*, a mask of dimension 3×3 (or 4×4 in another variant) is placed at pixel number k so as to achieve the maximum number of pixel hits inside the mask. Pixel k* is a mandatory component of the mask and is as close as possible to the normal restored from the pixel, as demonstrated in Figure 18. If there is no error, the total area is considered approximately equal to 1 according to Equation 26. If this equality is fulfilled, the contour belongs to the category of contours of minimum thickness. In such a case, the parameter Λ is used as a restoration error criterion that can be optimized; the following relationship is used to determine it. Figure 19 shows graphs of the interdependence between the number of descriptors used to restore the contour boundary (for two contours) and the information loss Λ.

Figure 19. Graphs showing the dependence of information loss on the number of descriptors (for two contours)
In order to minimize the number of coefficients, it is necessary to find the point on the graphs to the right of which the reduction of the criterion Λ becomes minimal. Descriptors located to the right of this point constitute "informative debris."
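The choice of this cut-off point can be sketched as a simple search for the first descriptor count after which Λ decreases by less than a threshold per extra descriptor. The helper name `elbow_point`, the loss values, and the threshold `eps` are all illustrative assumptions, not values from the paper.

```python
import numpy as np

def elbow_point(losses, eps=1e-3):
    """Return the smallest descriptor count after which the information-loss
    criterion decreases by less than eps per extra descriptor.

    losses[i] is the loss Λ when i + 1 descriptors are used.
    """
    drops = -np.diff(losses)
    for i, d in enumerate(drops):
        if d < eps:
            return i + 1    # everything to the right is "informative debris"
    return len(losses)

# Hypothetical loss curve: steep descent, then a flat tail
loss = np.array([1.0, 0.4, 0.15, 0.05, 0.049, 0.0489])
print(elbow_point(loss, eps=1e-2))  # -> 4
```

The returned count marks the point where adding descriptors stops paying off, which is exactly the point sought on the graphs of Figure 19.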
The suggested segmentation method implements the three successive steps shown in Figure 20. The morphological processing method was described earlier in the article, and the principles of the neural network are shown in Figure 21. As a result of sequential processing by the proposed means, a feature vector is formed, which can be used to solve classification problems.

5-Discussion
Sequential image processing by gradient and threshold operators does not satisfy the criteria for accurate detection of segment boundaries, since such operators do not consider the morphological features of the image and the noise typical of ophthalmological images. As a result of this research, a technique was developed for the spectral description of complex-shaped curves of raster images, used to classify vascular diseases based on retinal images. The technique draws on an algorithm for building mathematical models of arbitrarily shaped segments based on morphological processing of binary raster images and on the method of spectral analysis of arbitrarily shaped boundary curves, making it possible to represent the image of any curve, including a non-closed one, as a contour of minimum thickness.
Two approaches underlie the proposed technique. The first is based on representing a curve in parametric form, which allows the closing segment of the curve to be obtained through symmetric mapping of the initial parametric curves. The second involves using intelligent data processing to obtain contours of minimum thickness (one pixel). The first approach, although quite simple, has a significant drawback: it is not efficient for branching topologies. Therefore, based on the second approach, a method of morphological processing of complex-shaped curves was developed, which makes it possible to obtain a closed curve of minimum thickness from an arbitrary line corresponding to the topology of the image segments. However, this can be realized only for contours of minimum thickness that form convex sets.
An automated system for the analysis of ocular fundus pathologies should, in its most general form, allow interactive measurement of the morphological characteristics of ocular fundus objects. The output information, together with the assistance of experts, can help medical decision-makers visualize pathological formations and morphological structures of the ocular fundus and assess the quality of preliminary pathology diagnosis through indicators such as diagnostic sensitivity (DS), diagnostic specificity (DSp), and diagnostic effectiveness of the decision rule (DE). Presented below are the results of a series of experiments aimed at assessing the practical applicability of the developed technique to the classification of vascular diseases based on retinal images.
The spectral analysis results for contours of minimum thickness were studied in detail using a selection of 352 elements, which included various classes of contours. Of these, 138 were classified as pathological and 214 as normal: an ophthalmologist determined the criteria for the class of a contour of minimum thickness, and a neural network was used to divide the contour images into the two classes. First, the spectrum of a contour of minimum thickness was computed, and then the Fourier coefficients were exported. Next, the system stored the data as a vector available for statistical analysis. Finally, to obtain the training sample, the files of the training module were merged with the exported ones.

The control sample consisted of 90 randomly selected items. The analysis was carried out based on the calculated quality indicators of the diagnostic rules, including specificity (DSp), sensitivity (DS), effectiveness of the rule (DE), and predictive significance of positive (PS+) and negative (PS-) results. Analysis of the final tests allowed us to calculate the indicators and enter them into Table 3, in which r is the serial number of the disease class; nr is the number of images to be checked in the control sample; n0 is the number of images without detected pathologies; TP is the true-positive result, i.e., the number of correct classifications performed according to the rule; FP is the false-positive result, i.e., the number of images misclassified according to the rules for the classes under study; FN is the false-negative result, i.e., the number of images misclassified to class 0; and TN is the true-negative result, equal to the number of class 0 images classified correctly.
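Assuming the standard definitions of these indicators (the formulas are not spelled out in the text), they can be computed directly from the Table 3 counts. The counts in the example below are hypothetical, chosen only so that DS, DSp, and DE match the values reported for the model in Table 4 with a control sample of 90 items.

```python
def diagnostic_indicators(tp, fp, fn, tn):
    """Quality indicators of a diagnostic rule from confusion counts.

    Standard definitions are assumed:
    DS  (sensitivity)                = TP / (TP + FN)
    DSp (specificity)                = TN / (TN + FP)
    DE  (effectiveness)              = (TP + TN) / total
    PS+ (positive predictive value)  = TP / (TP + FP)
    PS- (negative predictive value)  = TN / (TN + FN)
    """
    total = tp + fp + fn + tn
    return {
        "DS":  tp / (tp + fn),
        "DSp": tn / (tn + fp),
        "DE":  (tp + tn) / total,
        "PS+": tp / (tp + fp),
        "PS-": tn / (tn + fn),
    }

# Illustrative counts (not taken from the paper's tables)
ind = diagnostic_indicators(tp=38, fp=4, fn=2, tn=46)
print({k: round(v, 2) for k, v in ind.items()})  # DS 0.95, DSp 0.92, DE 0.93
```

Separating PS+ and PS- from DS and DSp matters because predictive values, unlike sensitivity and specificity, depend on the class proportions in the control sample.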
The first class is assumed to include all normal vessels, and the second class vessels that have pathology. Table 4 reflects the result of this model (DS = 95%; DSp = 92%; DE = 93%), which used a validation sample in constructing the network. The effectiveness of the rules for predicting the formation of central retinal vein thrombosis is evaluated using the probability of correct detection of the disease. To determine it, the frequency value from the control sample is used, comparing healthy and sick individuals.
To determine a control sample, it is possible to use a technique often applied in practical medicine. Here, n is the number of observations needed in a particular case; m is the preliminary number of observations performed by a specialist; Wm is the difference between the minimum and maximum values of the characteristic in question, obtained from the preliminary observations; X is the averaged value of the characteristic under study; δ is the research error, %; Kw is the table coefficient, which accounts for the degree of confidence P; and (1 ± (2m)^(-1/2))^2 are table values relative to the numbers from 10 to 20 in preliminary studies.
Prediction of the formation risk was carried out at δ = 10%, based on the results of the conducted research with m = 150 and Kw = 0.048 (a confidence level of 0.99 was accepted). The required sample size is n = 98 ± 21 observations; the adopted value was n = 150. Prethrombosis was detected in 90 individuals; this category also included people with incipient thrombosis. The results were further verified by an ocular fundus examination.
As a result, the probability of correct prediction is equal to 0.93 in the presence of all possible associated risk factors. The number of control sample elements for each disease stage was calculated similarly. Table 5 shows the results of verification based on the control sample compared with the expert estimates. Analysis of the table shows that the results of tests on the control samples coincide, within the permissible error (about 3%), with the results of the expert evaluation, which supports the appropriateness of using the results obtained in this work.

6-Conclusions
Some problems associated with ophthalmic image analysis have not yet been fully resolved. For example, the successful classification of images of particular objects requires a priori information about their structure and properties. However, some factors can reduce the value of a priori data and thus hinder image segmentation, where each segment is considered an object of a specific class. As a result, such an image may have alternative structures, and an unambiguous choice among them is not always possible. Furthermore, characteristic features of objects in such complexly structured images include distortion of segment boundaries, the appearance of false segments, and the presence of tree-like structures. Therefore, the known local gradient methods of boundary extraction and morphological delineation procedures do not give the expected effect, since they are tied to a priori given data and do not analyze the results of the decisions made.
Several methods have been proposed to determine the descriptors of images and their segments, making it possible to construct a feature space stable under certain affine transformations, noise, and changes in illumination. However, these descriptors are designed to work in intelligent systems for searching similar images rather than in classification systems. At present, algorithms based on many basic classifiers followed by aggregation of their solutions to reduce classification errors are successfully used for classification. The use of these algorithms implies large sample sizes. However, there is a rather large class of complexly structured images for which providing the necessary training sample size, which particularly refers to medical images, is quite challenging.
The essential component of this complex problem is developing an approach that makes it possible to describe the geometrical properties of segments of complexly structured images from a unified position. The proposed algorithm for building a mathematical model of an arbitrary shape segment based on morphological processing of binary raster images, which helps represent an image of any curve, including an unclosed one, as a contour of minimum thickness, is aimed at ensuring the achievement of this goal. Its variant, based on spectral analysis of arbitrarily shaped boundary curves, was used in developing a technique for the spectral description of complex shape curves for the classification of vascular diseases based on retinal images.
Two approaches were developed in this research. The first allows the closing segment of the curve to be obtained through symmetric mapping of the initial parametric curves. The second assumes using intelligent data processing to obtain contours of minimum thickness. The first approach is not applicable to branching topologies; the second is applicable only to contours of minimum thickness that form convex sets. For segmentation, a method for implementing the transition to a binary image reflecting the contours of the initial image segments' borders was developed using the proposed methods and a neural network. The neural network model of image processing includes five layers. The layers are formed on the basis of a nine-element mask with eight directions around the active pixel, which determine the possible coordinates of the next pixel of the segment border. Three decision rules are used to choose the next pixel of the segment border; these rules are implemented by three layers of the neural network. The fourth layer aggregates the decision rules for each of the eight alternatives, while the fifth layer activates the one pixel of the eight for which the activation function is largest.
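The mask-based selection summarized above can be sketched roughly as follows. This is a simplified illustration, not the paper's network: it scores the eight alternatives with two ad hoc rules instead of the three decision rules implemented by the network's layers, and all names are hypothetical; only the final argmax over eight directions mirrors the fifth layer's behavior.

```python
import numpy as np

# Eight directions around the active pixel (the nine-element mask minus its centre)
DIRECTIONS = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
              (0, 1), (1, -1), (1, 0), (1, 1)]

def next_border_pixel(img, row, col, visited):
    """Choose the next pixel of the segment border among eight alternatives.

    Each direction is scored by simple illustrative rules (candidate lies on
    the contour; candidate not yet visited), and the alternative with the
    largest score is activated, mimicking the argmax of the fifth layer.
    """
    scores = []
    for dr, dc in DIRECTIONS:
        r, c = row + dr, col + dc
        in_bounds = 0 <= r < img.shape[0] and 0 <= c < img.shape[1]
        score = 0
        if in_bounds and img[r, c]:              # rule: candidate is a contour pixel
            score += 1
        if in_bounds and (r, c) not in visited:  # rule: candidate not visited yet
            score += 1
        scores.append(score)
    dr, dc = DIRECTIONS[int(np.argmax(scores))]
    return row + dr, col + dc

img = np.zeros((5, 5), dtype=int)
img[2, 1:4] = 1                                      # a short horizontal border
print(next_border_pixel(img, 2, 1, visited={(2, 1)}))  # -> (2, 2)
```

Repeating this step while accumulating `visited` traces the border pixel by pixel, which is what yields the binary image of segment boundary contours.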
For the purpose of closing boundary curves, a method of binary image morphological processing was developed, allowing a boundary curve to be described as a contour of minimum thickness. The developed method of morphological description of boundary curves is distinguished from previously known methods in that the set of points describing the boundary curves is represented as two disjoint subsets, with boundaries established by the upper and lower coordinate borders of the points forming the initial set. Furthermore, Fourier descriptors allow defining the feature vector for identifying boundary forms.
The developed method for forming feature vectors makes classification possible regardless of the number of counts in the analyzed contours. To achieve this, the number of spectral counts is brought to the maximum by adding zero counts to the left and right of the boundary frequencies and increasing the amplitudes of the spectral components in proportion to the spectrum extension. After that, the number of Fourier descriptors is optimized. At the next stage, the obtained data, along with the vector of pixel attributes and the average characteristics of the background pixels, are fed to the input of a neural network, which acts as a classifier. As a result, the probability of correct prediction is 0.93 with all associated risk factors considered.

7-2-Data Availability Statement
The data presented in this study are available in the article.

7-3-Funding
Selected findings of this work were obtained under the Grant Agreement in the form of subsidies from the federal budget of the Russian Federation for state support for the establishment and development of world-class scientific centers performing R&D on scientific and technological development priorities dated April 20, 2022, No. 075-15-2022-307.

7-6-Conflicts of Interest
The authors declare that there is no conflict of interest regarding the publication of this manuscript. In addition, the ethical issues, including plagiarism, informed consent, misconduct, data fabrication and/or falsification, double publication and/or submission, and redundancies have been completely observed by the authors.