ISSN: 2155-9570
Research Article - (2021) Volume 12, Issue 6
Glaucoma is an eye disorder with different causes but a common clinical effect on the eye and optic nerve; it is related to intraocular pressure and can progress silently in individuals of all ages. Significant effort and progress in retinal image processing have produced automated systems for diagnosing various diseases from retinal images. Such systems allow retinal images to be processed in large volumes with minimal time and cost, and they are free from the fatigue and other weaknesses to which a human diagnostician is prone. In this article, an edge detection technique is proposed for the segmentation of retinal blood vessels; qualitative analysis of the results showed high accuracy for retinal blood vessel segmentation. These results can best be used to support the diagnosis of glaucoma.
Glaucoma; Retinal imaging; Blood vessels; Image processing
The main task of the eye is to focus the light it receives from outside onto the retina so that an accurate image of the object is created there. The retina sends these images to the brain as neural messages, and these messages are interpreted in the brain [1]. Therefore, to see clearly, it is first necessary to focus the light precisely on the retina. The structure of the eye is like a sphere. At the front of the sphere is a clear window called the cornea. Light enters the cornea from the outside environment and reaches the lens after passing through the pupil [2]. The lens focuses light precisely on the retina to create a clear image there. For objects to be seen accurately and clearly, the path that light travels through the eye must be clear, and the cornea and lens must focus light directly on the retina. When a sharp object approaches our eyes, we involuntarily close our eyelids [3]. The eyelids are differentiated structures of the skin and subcutaneous muscles that are responsible for protecting the eyes. The eyelashes act like a filter, preventing dust and other particles from entering the eye [4]. The eyelids themselves have two important functions: first, they block and protect most of the eyeball like a defensive wall, and second, they open and close every 5 to 10 seconds, which helps to wash germs and foreign particles from the surface of the eye and sweeps its surface. In addition, opening and closing the eyelids helps to distribute the tears evenly over the eyeball.
Retina
It is a thin, light-sensitive membrane located at the back of the eyeball. The light rays that strike the retina are converted into neural messages that are transmitted to the brain through the optic nerve and interpreted there. The human retina contains different types of photoreceptor cells with different sensitivities to light. Rod (cylindrical) receptors are used mostly for vision in dark environments, whereas cone receptors are specialized for color and fine detail [5]. These cells are arranged so that the central area of the retina (the macula) has a higher density of cone receptors. Therefore, when a person looks directly at an object, the image of that object falls on the macula, where cone cells are more numerous, and the object is seen more clearly [6].
Glaucoma
It is a term used to describe a group of eye disorders with different causes but a common clinical effect on the eye and optic nerve, related to intraocular pressure [7]. This condition can damage eyesight irreversibly and causes blindness if left untreated. Simply put, it is caused by a sharp increase in the pressure of the fluid inside the eye. Normally, this fluid drains through very small pores around the iris (the colored circle of the eye). In some cases, the pores are congenitally narrow. In other cases, protrusion of the iris, or blockage of these pores by iris pigments or by blood cells after bleeding into the eye, almost completely blocks the drainage route inside the eye [8]. Glaucoma is known as the thief of vision because it can remain asymptomatic and unnoticed for much of a person's life.
An overview of medical image processing
Medical imaging is the process used to make images of the human body (or parts and functions thereof) for clinical purposes (medical procedures seeking to identify, treat, and evaluate diseases) or for medical science (including anatomical and physiological studies) [9]. Medical imaging is an overlap of several disciplines, such as medical physics, medical engineering, biology, and optics.

Several imaging modalities are in common use. Radiographic imaging is used to diagnose various types of fractures, dislocations, types of stenosis, wounds in the gastrointestinal tract, limb tears, joint diseases, and similar conditions. CT scanning quickly reveals emergencies such as stasis, shock, and bleeding, and it is also used for imaging the spine, chest, and abdomen. Ultrasound is used to examine a variety of diseases of the biliary tract, urinary tract, blood vessels, and heart, as well as in pregnant women and children. MRI quickly shows very small structures, delineates the boundary between adjacent tissues well, and also shows muscles, arteries, tendons, and ligaments well [10].

Several types of medical imaging devices exist. Simple radiology devices produce X-rays in a tube; with the necessary techniques and conditions, the radiation passes through the patient's body, strikes a film, and an image of the body part is recorded after development and fixation. In a CT scanner, cross-sectional (transverse) imaging is performed by rotating the device around the organ of interest; each rotation captures a slice of the organ in a very short time, and a computer reconstructs the images [11]. An MRI machine uses a large magnetic field: when the patient is placed in it, the radio waves sent by the device act on the hydrogen nuclei in the body and align them with the magnetic field, and the computer then forms images of the desired body part. A PET device requires injecting a radioactive element that emits positrons into the patient's body; each positron produces two gamma rays, and in this way both the anatomy and the physiology of the body can be assessed. In nuclear medicine, a radioactive substance is administered intravenously, orally, or by inhalation [12]. Owing to metabolic functions in the body, these radioactive substances accumulate in specific places. A camera in this system, called a gamma camera, counts the gamma rays emitted by the patient, which indicates the amount of activity absorbed in that organ. As a result, a specific disease, such as a tumor, changes the count and can thereby be diagnosed. Finally, the retinoscope is a specialized auxiliary device in ophthalmology that is used, in general, to measure the refractive power of the eye, even without the cooperation of the patient [13].
Using segmentation on the dataset
In this study, 2,000 eye images with glaucoma were used. The images have a resolution of 256 × 256 pixels, which was reduced to 128 × 128 pixels to decrease the computational cost of the model. Three examples are shown in Figure 1.
Figure 1: Three examples of the retinal images used in this study.
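As an illustration of this preprocessing step, the following is a minimal MATLAB sketch (not the authors' original code); the folder name, file pattern, and file format are placeholders.

```matlab
% Minimal sketch of the preprocessing described above: load the retinal
% images, convert to grayscale, and downsample from 256x256 to 128x128.
files  = dir(fullfile('dataset', '*.jpg'));        % placeholder folder and pattern
images = cell(numel(files), 1);
for k = 1:numel(files)
    I = imread(fullfile(files(k).folder, files(k).name));
    if size(I, 3) == 3
        I = rgb2gray(I);                            % keep the intensity channel only
    end
    images{k} = imresize(I, [128 128]);             % reduce computational cost
end
montage(images(1:3));                               % show three examples, cf. Figure 1
```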
Experimental setup
This section describes the experimental setup. The segmentation was developed in MATLAB 2020a because of the availability of its widely used machine learning and image processing toolboxes. The work used both a CPU and a GPU, with 10.9 GB of RAM, a 1868 GB hard disk, and a deep learning runtime with free-of-charge access to a robust GPU.
Proposed method
Introduction: Segmenting an image refers to splitting the image into regions so that the pixels in each region share a specific feature (which may belong to an object). The most basic feature for segmenting a monochrome image is its brightness, and for a color image, its color components. In addition, the edges and texture of the image are useful features for segmentation. In this method, we segment the retinal blood vessels using image processing techniques to identify a person with glaucoma.
Edge detection: Undoubtedly, among the most informative structures in an image are its edges, and edge detection is one of the most important problems in image processing and machine vision. Edge detection is a low-level process in image processing, so the performance of higher-level processes such as object recognition, segmentation, and image coding depends directly on the efficiency of this low-level processing. In general, an edge is not a purely local feature of the image; it depends on the structure of the image around that location.
Edge models: To analyze edge detection methods and interpret the simulation results, we first introduce common edge models. In general, three models are used. The step (stair) edge is the simplest model and is determined by the step function:
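The expression itself is not reproduced in the text; one standard way to write the step edge, as a sketch using the parameters named in the next sentence (background intensity x, edge contrast k, edge position I, with t the coordinate across the edge), is:

\[
E_{\mathrm{step}}(t) =
\begin{cases}
x, & t < I,\\
x + k, & t \ge I.
\end{cases}
\]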
This model is defined by three parameters: the background intensity (x), the edge contrast (k), and the edge position (I). In real images, the brightness changes gradually, so a ramp (slope) edge model is suggested:
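Again as a sketch rather than the paper's own expression, the ramp edge can be written, using the parameters defined in the next sentence, as:

\[
E_{\mathrm{ramp}}(t) =
\begin{cases}
x, & t < I_1,\\
x + k\,\dfrac{t - I_1}{I_2 - I_1}, & I_1 \le t \le I_2,\\
x + k, & t > I_2.
\end{cases}
\]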
The ramp (slope) edge is defined by four parameters: the background intensity (x), the edge contrast (k), the beginning of the edge (I1), and the end of the edge (I2).
The third model is closest to the real edges found in images and is defined based on image blur:
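A common formulation of this blurred edge model, given here as a sketch (a step edge of contrast k at position I smoothed by a Gaussian of standard deviation \(\sigma\)), is:

\[
E_{\mathrm{blur}}(t) = x + \frac{k}{2}\left(1 + \operatorname{erf}\!\left(\frac{t - I}{\sigma\sqrt{2}}\right)\right),
\]

where larger values of \(\sigma\) correspond to a more blurred edge.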
Edge detection algorithm in medical images: Based on the definition of an edge as the location of changes in intensity levels, the magnitude of these changes must also be considered in deciding whether an edge is present and where exactly it lies. If the edges of an image are extracted, the location of all the prominent and opaque objects in the image is determined, and their basic properties, such as area, perimeter, shape structure, type, and position, can be measured and recognized by processing only a limited number of image points, namely the edges. As a result, using a precise edge detector directly helps to increase the feature recognition rate and the ability to segment the image accurately. The gradient vector of f(x, y) points in the direction of the maximum rate of change of brightness.
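For a continuous image \(f(x, y)\), the standard definitions of the gradient, its magnitude (edge strength), and its direction are (these are textbook forms, not expressions reproduced from the paper):

\[
\nabla f(x, y) =
\begin{bmatrix}
\dfrac{\partial f}{\partial x}\\[6pt]
\dfrac{\partial f}{\partial y}
\end{bmatrix},
\qquad
\lVert \nabla f \rVert = \sqrt{\left(\frac{\partial f}{\partial x}\right)^{2} + \left(\frac{\partial f}{\partial y}\right)^{2}},
\qquad
\theta = \arctan\!\left(\frac{\partial f / \partial y}{\partial f / \partial x}\right).
\]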
Edge algorithms can detect many objects from their line image. The human visual system performs a kind of edge detection before recognizing color or light intensity, so it makes sense to perform edge detection before interpreting images in automated systems. Edge detection is therefore an important consideration in many artificial vision systems. The main purpose of edge finding is to reduce the volume of data in the image while maintaining its original structure and shape. The edge is not a physical object in itself; it is where one part of the image begins or ends, and it can be thought of as the place where the surfaces of an object meet. Because early diagnosis of glaucoma prevents the disease from becoming acute, we compare the proposed algorithm with other algorithms and finally show the complete response of the simulated output. After image preprocessing, several edge detection techniques are available for blood vessel imaging; these methods are compared below, applied to the images in Figure 1.
Canny algorithm: The Canny edge detection algorithm is one of the best edge detectors developed to date. One of its criteria is to minimize the error rate: as far as possible, no edge in the image should be lost, and nothing that is not an edge should be reported as one. In addition, the detected edge points should be as close as possible to the true edges. This method detects weak edges more reliably and is less easily deceived by noise (Figure 2).
Figure 2: Edge image obtained with the Canny algorithm.
Roberts algorithm: This algorithm is very sensitive to noise because it uses fewer pixels to approximate the gradient, and it is less powerful than the Canny algorithm (Figure 3).
Figure 3: Edge image obtained with the Roberts algorithm.
Zero cross algorithm: This algorithm searches for the points in the Laplacian of an image where the Laplacian crosses zero, in other words, the points where the Laplacian changes sign (Figure 4).
Figure 4: Edge image obtained with the zero cross algorithm.
LOG (Laplacian of Gaussian): The LOG operator is a two-dimensional isotropic measure of the second-order spatial derivative of an image. The Laplacian of an image highlights regions of rapid intensity change and is therefore often used to detect edges (Figure 5).
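For reference, the standard expressions for the Laplacian of an image and for the LoG kernel of scale \(\sigma\) are (these are textbook forms, not taken from the paper):

\[
\nabla^{2} f(x, y) = \frac{\partial^{2} f}{\partial x^{2}} + \frac{\partial^{2} f}{\partial y^{2}},
\qquad
\mathrm{LoG}(x, y) = -\frac{1}{\pi\sigma^{4}}\left(1 - \frac{x^{2} + y^{2}}{2\sigma^{2}}\right)e^{-\frac{x^{2} + y^{2}}{2\sigma^{2}}}.
\]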
Figure 5: Edge image obtained with the LOG algorithm.
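To make the comparison concrete, the following is a minimal MATLAB sketch (not the authors' original code) that applies the four operators above to a single preprocessed fundus image; the file name is a placeholder, and the default thresholds of MATLAB's edge function are assumed.

```matlab
% Apply the four edge detectors compared in this work to one fundus image
% and report a rough per-operator processing time.
I = imread('fundus_example.png');           % placeholder file name
if size(I, 3) == 3
    I = rgb2gray(I);
end
I = imresize(I, [128 128]);                 % same preprocessing as above

methods = {'canny', 'roberts', 'zerocross', 'log'};
for k = 1:numel(methods)
    tic;
    BW = edge(I, methods{k});               % default threshold for each operator
    t  = toc;
    subplot(2, 2, k);
    imshow(BW);
    title(sprintf('%s (%.3f s)', methods{k}, t));   % cf. Figures 2-5
end
```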
In this article, we compare edge detection methods for the segmentation of retinal blood vessels in glaucoma. These operators are the most common ones for edge detection in images, and each is useful in a specific range. Each operator has its own limitations, and to achieve the desired results the appropriate edge detection algorithm must be chosen. The comparison table shows that when selecting an operator many parameters must be considered, one of which is processing time. The Canny algorithm, despite its complex computation and high processing time, has the highest accuracy among the operators for extracting edges in the images of patients with glaucoma. However, the role of the other operators cannot be ignored; as mentioned, each is useful in a certain range, and operators such as LOG can play a better role when speed of operation is the main concern. Therefore, the performance of the operators can vary depending on the selected criteria (Tables 1 and 2; Figure 6).
Experiment | Mean sensitivity (%) | Mean accuracy (%) |
---|---|---|
Canny algorithm | ||
Image 1 | 90.54 | 89.25 |
Image 2 | 84.65 | 81.76 |
Image 3 | 83.87 | 80.98 |
Image 4 | 92.09 | 89.75 |
Image 5 | 87.98 | 84.15 |
Image 6 | 89.32 | 87.99 |
Image 7 | 88.59 | 86.76 |
Image 8 | 91.75 | 87.85 |
Image 9 | 89.99 | 86.81 |
Image 10 | 86.22 | 84.98 |
Roberts algorithm | ||
Image 1 | 65.7 | 64.31 |
Image 2 | 64.87 | 63.76 |
Image 3 | 50.34 | 46.21 |
Image 4 | 56.52 | 56.53 |
Image 5 | 65.98 | 54.78 |
Image 6 | 67.76 | 74.12 |
Image 7 | 54.43 | 43.65 |
Image 8 | 48.51 | 47.21 |
Image 9 | 64.77 | 45.65 |
Image 10 | 58.45 | 57.69 |
Zero cross algorithm | ||
Image 1 | 69.32 | 68.09 |
Image 2 | 68.54 | 67.45 |
Image 3 | 69.5 | 67.86 |
Image 4 | 67.32 | 65.7 |
Image 5 | 68.97 | 66.99 |
Image 6 | 65.76 | 64.67 |
Image 7 | 59.56 | 57.82 |
Image 8 | 64.85 | 62.89 |
Image 9 | 63.07 | 61.65 |
Image 10 | 61.71 | 59.76 |
LOG algorithm | ||
Image 1 | 64.6 | 62.01 |
Image 2 | 62.57 | 61.36 |
Image 3 | 60.76 | 58.08 |
Image 4 | 54.05 | 53.13 |
Image 5 | 59.08 | 58.88 |
Image 6 | 61.16 | 60.06 |
Image 7 | 65.24 | 64.35 |
Image 8 | 69.02 | 68.32 |
Image 9 | 61.53 | 60.01 |
Image 10 | 66.28 | 64.18 |
Table 1: Comparison of the experiments performed on the images, using the two parameters of sensitivity and accuracy, for each algorithm.
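For reference, pixel-wise sensitivity and accuracy of the kind reported in Table 1 can be computed by comparing a binary edge map with a manually labeled vessel mask; the sketch below is illustrative only, and the file names and the existence of such a ground-truth mask are assumptions, not part of the paper.

```matlab
% Pixel-wise sensitivity and accuracy of a binary edge map against a
% ground-truth vessel mask (both logical matrices of the same size).
I = imread('fundus_example.png');                  % placeholder image
if size(I, 3) == 3, I = rgb2gray(I); end
BW = edge(imresize(I, [128 128]), 'canny');        % segmentation to evaluate
GT = imread('vessel_groundtruth.png');             % placeholder manual mask (128x128)
GT = GT(:, :, 1) > 0;                              % binarize the mask

TP = nnz( BW &  GT);                               % true positives
TN = nnz(~BW & ~GT);                               % true negatives
FP = nnz( BW & ~GT);                               % false positives
FN = nnz(~BW &  GT);                               % false negatives

sensitivity = TP / (TP + FN);
accuracy    = (TP + TN) / (TP + TN + FP + FN);
fprintf('Sensitivity: %.2f%%   Accuracy: %.2f%%\n', 100*sensitivity, 100*accuracy);
```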
Operator | Computational complexity | Processing time | Accuracy | Sensitivity |
---|---|---|---|---|
Canny | Complicated | High | High | High |
Roberts | Simple | High | Low | Low |
Zero cross | Simple | Low | Medium | Medium |
LOG | Simple | Low | Medium | Medium |
Table 2: Complexity of the algorithms in the simulation process.
Figure 6: Comparison of speed, accuracy, sensitivity, and processing time across the algorithms.
In this section, in order to evaluate the proposed method for segmentation of retinal blood vessels in glaucoma, the results obtained were compared with those of other algorithms. All the algorithms used for image segmentation were compared in terms of data processing speed and accuracy of the results. Compared with earlier methods, image segmentation has flourished more and more in the field of medical image processing, which reflects its high accuracy relative to other approaches. Among the algorithms examined, the Canny algorithm best preserves accuracy and vital image information. Although a variety of segmentation methods exist for medical images, general segmentation can still compete with other methods in accuracy, even if there are differences in some images.

The output of segmenting an image of blood vessels is a set of regions whose union covers the whole image, or a set of lines extracted from the image. The pixels in each region are similar with respect to specific properties such as color, brightness, or texture. Image segmentation plays an essential role in image analysis and understanding; it is the classification of an image into several parts according to image characteristics such as pixel value or frequency [14]. Adjacent parts are considered different according to these characteristics. In previous studies, adequate speed and accuracy were not achieved because the image of the blood vessels was not fully examined. Segmentation methods based on edges or borders, or based on regions, each of which comprises several techniques, can confirm the accuracy of the process more powerfully.

On the other hand, image processing systems are rapidly evolving through the modification and optimization of existing techniques or their combination with methods from related fields. There are also other research subfields in image processing that researchers can combine with the techniques introduced in this article to improve their performance [14]. The effect of image processing is seen in many sciences and industries, and some applications depend on it so heavily that without it they cannot achieve their goals; the range of application of image processing in any field is very wide, dealing with methods that can be used to understand the meaning and content of images.

According to the comparison table, it can be concluded that the segmentation method is much more accurate than traditional machine vision methods. The image was segmented using a series of decisions and the desired result was obtained, which allows the physician to make a correct diagnosis and extract information with high accuracy. The purpose of segmenting an image is to convert raw data into a form that is more usable for subsequent statistical processing. It is expected that, in the future, feature extraction will be performed more accurately and more details will be provided to machine vision systems for identifying the objects in the image, which will speed up the diagnosis of the disease. Segmentation based on mathematical tools is an efficient method for processing retinal images and thus improving accuracy; using suitable features, it is possible to focus on time and scale and to retain the important coefficients (information) for analysis.
The resolution and contrast of the eye images are almost unaffected by the segmentation method. Considering the images and the relative values of accuracy, precision, and data processing speed, the best results among all the mentioned methods are obtained when the segmentation method is used.
In this case, the segmented image has the highest accuracy. Research into the use of a large number of low-quality cameras to match images, and a comparison of their performance with systems that use two high-quality cameras, is one area that can be explored. Very little research has been done on displaying 3D pixel-based representations of scenes and on storing important features in scenes, such as corners and edges. On the subject of scene reconstruction in retinal blood vessel images, there is also a gap in research related to clustering techniques that shape objects based on neighborhood characteristics and coloring. In addition, as more researchers turn to GPUs to perform processes related to machine vision (two-way matching, scene reconstruction, and object recognition), it is expected that the positions of three-dimensional pixels will be determined more accurately and that more detailed identification of objects in the image will be provided to image processing systems.
No acknowledgements to be made.
Citation: Shahalinejad S, Nooshyar M (2021) Segmentation of Retinal Blood Vessels in Glaucoma using a Comparative Study of Edge Detection Methods in Medical Image Processing. J Clin Exp Ophthalmol. 12:892.
Received: 23-Jul-2021 Accepted: 06-Aug-2021 Published: 13-Aug-2021
Copyright: © 2021 Shahalinejad S, et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.