
ISSN 2229-5518

DIGITAL IMAGE PROCESSING: A Focused Medical Application

Kamal K Vyas, Dr S Tiwari, Amita Pareek

Abstract— Digital Image Processing is a rapidly evolving field with growing applications in Engineering and Medicine. Modern digital technology has made it possible to manipulate multi-dimensional signals, and Digital Image Processing now has a broad spectrum of applications. This exploratory paper surveys the main compression standards and identifies a compression technique suited to medical images. Despite rapid progress in mass-storage density and processor speed, the demands that digital communication systems place on data-storage capacity and data-transmission bandwidth continue to grow.

For correct diagnosis from a medical image, a region-based compression approach using the Block-Based Binary Plane Technique is preferred. The physician identifies the region (rectangular in shape) where the information important for the diagnosis is present. The part identified by the physician is then compressed using a lossless technique, so that it can be reconstructed with no loss in quality when it is displayed.

Keywords: Lossless Compression, Lossy Compression, Region based compression, Medical Image Compression, Human Vision Model.

—————————— ——————————

I. INTRODUCTION

An image may be defined as a two-dimensional function f(x, y), where x and y are spatial (plane) coordinates and the amplitude of f at any pair of coordinates (x, y) is called the intensity or gray level of the image at that point. When x, y, and the intensity values of f are all finite, discrete quantities, the image is called a digital image. Image processing is the processing of the image so as to reveal its inner details for further investigation. With the advent of digital computers, Digital Image Processing has started revolutionizing the world with its diverse applications. The field of image processing continues to grow, as it has since the early 1970s. The discipline of Digital Image Processing covers a vast area of scientific and engineering research. Modern digital technology has made it possible to manipulate multi-dimensional signals on a range of platforms, from simple digital circuits to advanced parallel computers. It is built on a foundation of one- and two-dimensional signal processing theory and overlaps with disciplines such as Artificial Intelligence (scene understanding), information theory (image coding), statistical pattern recognition (image classification), communication theory (image coding and transmission), and microelectronics (image sensors, image processing hardware). Image processing has revolutionized various fields. Examples include mapping internal organs in medicine using various scanning technologies (image reconstruction from projections), automatic fingerprint recognition (pattern recognition and image coding) and HDTV (video coding).
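To make the definition concrete, the short sketch below (a minimal illustration of our own, not taken from any cited work) represents a grayscale digital image as a two-dimensional array of discrete intensity values and reads f(x, y) at a given coordinate.

```python
import numpy as np

# A tiny 4x4 grayscale "digital image": each entry is the intensity f(x, y)
# quantized to 8 bits (0 = black, 255 = white).
f = np.array([
    [ 12,  50,  50,  12],
    [ 50, 200, 200,  50],
    [ 50, 200, 200,  50],
    [ 12,  50,  50,  12],
], dtype=np.uint8)

x, y = 1, 2                 # spatial coordinates (row, column)
print(f[x, y])              # intensity (gray level) at that point -> 200
print(f.shape, f.dtype)     # finite, discrete domain and values
```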

————————————————

Kamal K Vyas (Id: 0850111503), Director, SIET, Sikar (Raj), India. E-mail: kamalkvyas@gmail.com

Dr S Tiwari, Freelance Academician, eYug, India. E-mail: stiwari.eyug@rediffmail.com

Amita Pareek, EC Deptt, JIET Girls College, Jodhpur, India. E-mail: amushiva1@gmail.com
II. MAJOR STEPS
The major steps involved in any image processing application are as follows:

Image acquisition: In order to process any image, the image must first be acquired. Images are generated by the combination of an illumination source and the reflection or absorption of energy from that source by the elements of the scene being imaged. The illumination may originate from a source of electromagnetic energy such as a radar, infrared, or X-ray source. Depending on the nature of the source, the illumination energy is either reflected from or transmitted through the object of interest.

Image compression: Image compression addresses the problem of reducing the amount of data required to represent an image. The basis of compression lies in the removal of redundant data before storage. Image compression also plays a major role in transmitting data over the Internet.
Compression [3] is done in several ways, i.e. lossy, lossless, near-lossless and context-based compression, depending upon the requirement and on whether the image data is medical or non-medical. Since channel bandwidth (BW) and the data transfer rate (bps) are limited, it is better to transmit the data in compressed form; this eases the storage of huge data sets, the speed of data transfer, BW limitations, access speeds, costs, loss of information and processing of the data.
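As a rough illustration of the lossless/lossy trade-off described above, the sketch below (our own example, assuming the Pillow library and a hypothetical input file scan.png) saves the same image once losslessly as PNG and once lossily as JPEG and compares the resulting file sizes.

```python
import os
from PIL import Image  # Pillow

img = Image.open("scan.png").convert("L")    # hypothetical 8-bit grayscale input

img.save("out_lossless.png")                 # lossless: exact pixel values preserved
img.save("out_lossy.jpg", quality=50)        # lossy: smaller file, some detail discarded

for name in ("out_lossless.png", "out_lossy.jpg"):
    print(name, os.path.getsize(name), "bytes")
```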

III. MEDICAL IMAGE COMPRESSION

The aim behind Medical Image Compression (MIC) [1,2] is to extract the important features of the image from which a description, interpretation and understanding of the image can be provided by the computer to medical practitioners. MIC techniques are concerned with reducing the number of bits required to store and transmit the image data without any appreciable loss of useful information. MIC is effective only when the compression technique preserves all the relevant and important image information [3]. This is the case for lossless compression. Lossy compression, on the other hand, is more efficient in terms of storage and transmission needs, but there is no guarantee that it preserves the characteristics needed for medical diagnosis. To avoid these risks, there is a third option: the diagnostically important region, the Region of Interest (ROI), is compressed losslessly, while the remaining part of the image (the background, BG) is compressed lossily with a high Compression Ratio (CR). Both requirements are then met in one scheme [3], i.e. preservation of the useful information and a high CR. For the evaluation of selective as well as lossy compression techniques, several image quality measures and standards have been developed. The main image compression standards [3] are given in the next section.
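The following sketch (our own illustration, not the implementation of any cited work) shows one simple way to realize the ROI idea with off-the-shelf codecs: the physician-marked rectangle is stored losslessly as PNG, while the full frame is stored as low-quality JPEG; on display, the lossless ROI is pasted back over the lossy background. The file names, the ROI coordinates and the quality setting are all assumptions.

```python
from PIL import Image  # Pillow

def compress_with_roi(path, roi_box, bg_quality=25):
    """Store the ROI losslessly and the rest of the image lossily.

    roi_box is (left, upper, right, lower) in pixel coordinates, e.g. the
    rectangle marked by the physician.
    """
    img = Image.open(path).convert("L")
    img.crop(roi_box).save("roi_lossless.png")             # diagnostically important region
    img.save("background_lossy.jpg", quality=bg_quality)   # whole frame, heavily compressed

def reconstruct(roi_box):
    """Rebuild the displayed image: lossy background plus the exact ROI."""
    bg = Image.open("background_lossy.jpg").convert("L")
    roi = Image.open("roi_lossless.png")
    bg.paste(roi, roi_box[:2])   # ROI pixels are restored without loss
    return bg

# Example (hypothetical image and coordinates):
# compress_with_roi("chest_xray.png", roi_box=(100, 200, 900, 1500))
# view = reconstruct((100, 200, 900, 1500))
```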
IV. STANDARDS FOR IMAGE COMPRESSION
The Joint Photographic Experts Group (JPEG) standard is a very well known ISO/ITU-T standard created in the late 1980s [4] and is based on the discrete cosine transform (DCT). Several modes are defined in JPEG, including baseline, lossless, progressive and hierarchical. The baseline mode is the most popular one and supports lossy coding only. The lossless mode is less widely used; it provides lossless coding but does not support lossy coding. In the baseline mode, the image is divided into 8x8 blocks and each block is transformed with the DCT into the frequency domain. The transformed blocks are quantized with a uniform scalar quantizer, zig-zag scanned and entropy coded with Huffman coding. The quantization step size for each of the 64 DCT coefficients is specified in a quantization table, which remains the same for all blocks. The DC coefficients of all blocks are coded separately, using predictive coding. JPEG is a symmetric algorithm: decompression runs in the reverse direction, and the calculation time is the same for encoding and decoding a medical image. The lossless mode is based on a completely different algorithm, which uses a predictive scheme. The prediction is based on the three nearest causal neighbours, and seven different predictors are defined. The prediction error is entropy coded with Huffman coding. Here, we refer to this mode as L-JPEG [3]. JPEG-LS is the latest ISO/ITU-T standard for lossless coding of still images; it also provides 'near-lossless' compression [3]. JPEG2000 (JPEG2K) [3] is the next ISO/ITU-T standard for still image coding. The most popular compression algorithms in use today in the medical community are lossless JPEG (Joint Photographic Experts Group) and lossless wavelet coding (JPEG 2000 is based on wavelet compression).
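To make the baseline-JPEG pipeline above concrete, here is a minimal sketch (our own illustration) of the transform and quantization steps for a single 8x8 block, using SciPy's DCT. The flat quantization table is a placeholder; a real encoder uses the perceptually tuned tables of the standard, followed by zig-zag scanning and Huffman coding, which are omitted here.

```python
import numpy as np
from scipy.fftpack import dct, idct

def dct2(block):
    """2-D type-II DCT of an 8x8 block, as used in baseline JPEG."""
    return dct(dct(block, axis=0, norm="ortho"), axis=1, norm="ortho")

def idct2(coeffs):
    return idct(idct(coeffs, axis=0, norm="ortho"), axis=1, norm="ortho")

# One 8x8 block of pixel values (hypothetical data), level-shifted to be zero-centred.
block = np.random.randint(0, 256, (8, 8)).astype(float) - 128.0

Q = np.full((8, 8), 16.0)          # placeholder uniform quantization table
coeffs = dct2(block)               # frequency-domain representation
quantized = np.round(coeffs / Q)   # uniform scalar quantization (the lossy step)

# Decoder side: dequantize and inverse-transform.
reconstructed = idct2(quantized * Q) + 128.0
print(np.max(np.abs((block + 128.0) - reconstructed)))  # small quantization error
```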
Strom and Cosman [7] developed a region-based coding approach in 1997. They discussed two approaches: one uses a different compression method in each region, such as 'contour-texture' coding and sub-band decomposition coding; the other uses the same compression method in each region, such as the discrete cosine transform, but with varying compression quality in each region, for instance by using different quantization tables. They used two multi-resolution coding schemes, wavelet zerotree coding and the S-transform, and considered only 8-bit images. Today MIC plays a key role in pushing hospitals towards filmless imaging and making them completely digital. Image compression allows Picture Archiving and Communication Systems (PACS) to reduce file sizes without compromising diagnostic information quality. One research paper [7] proposed an approach to improve the performance of medical image compression while satisfying both the medical team, who need to use it, and the legal team, who need to defend the hospital against any malpractice claims resulting from misdiagnosis owing to faulty compression of medical images. It states that improved compression performance can be accomplished by making use of clinically relevant regions as defined by physicians.
V. OVERVIEW OF MEDICAL IMAGE COMPRESSION TECHNIQUE
A typical 12-bit medical X-ray may be 2048 pixels by 2560 pixels in size, which translates to a file size of about 10 MB. A typical 16-bit mammogram may be 4500 pixels by 4500 pixels, for a file size of about 40 MB. The requirements on storage and transmission time therefore increase drastically. Even if storage were unlimited, transmitting such a huge file would still be a problem. Many hospitals have satellite centres. These centres make use of 'tele-radiology' applications that allow the clinic staff to operate the clinic without an on-site radiologist. Instead of a diagnostic radiologist, a technician in the clinic can take the X-ray and send the image through a network connection to the hospital, where the diagnostic radiologist reads the image and sends back a diagnosis. If the transmission time increases, it may be dangerous for the patient. To reduce transmission time, it is necessary to compress the images. There are two techniques for MIC. The first, older approach is to mark the 'relevant' regions and then use lossless compression in the relevant regions and lossy compression elsewhere. The second, newer approach subtracts the image from a pre-stored atlas image, generating a residual image. This residual image is then compressed (losslessly in clinically relevant regions and lossily elsewhere). If the alignment is done well, the residual information is minimised, yielding higher compression. Reconstruction [7] is straightforward: the compressed residual image is first decompressed, the decompressed residual is added back to the template according to the model, and the original image is obtained.
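A minimal sketch of the residual idea follows (our own illustration; the atlas registration step is assumed to have been done already, and the array names and synthetic data are hypothetical). The residual is stored with signed values so that adding it back to the atlas reproduces the original exactly.

```python
import numpy as np

def make_residual(image, registered_atlas):
    """Residual = image minus the atlas after registration.

    Both inputs are 8-bit arrays of the same shape; the residual needs a
    signed type so that negative differences are not lost.
    """
    return image.astype(np.int16) - registered_atlas.astype(np.int16)

def reconstruct(residual, registered_atlas):
    """Add the (losslessly stored) residual back to the atlas."""
    return (residual + registered_atlas.astype(np.int16)).astype(np.uint8)

# Example with small synthetic data standing in for an X-ray and its atlas:
image = np.random.randint(0, 256, (256, 256), dtype=np.uint8)
atlas = np.clip(image.astype(np.int16) + np.random.randint(-5, 6, image.shape), 0, 255).astype(np.uint8)

res = make_residual(image, atlas)
assert np.array_equal(reconstruct(res, atlas), image)   # lossless round trip
print(res.min(), res.max())   # a well-aligned atlas gives a small-amplitude residual
```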


There is a need to store only the atlas along with the residual image in the computer, not the bulky original image. A further advantage of this technique is that the image may be 2D or 3D. The research paper [7] provides some pedagogically useful findings from the analysis of an X-ray image. The X-ray was captured directly from the patient using a high-resolution 12-bit digital X-ray scanner. The image was then downsampled to 8 bits. Figure 1 shows the image with three clinically relevant regions defined on it. These three areas partition the image into seven areas, indicated by regions 1 to 7. Regions 3 and 5 have been marked as ROI by a radiologist and are to be losslessly compressed. Region 4 has been marked by the radiologist to be lossily compressed, but at a higher quality than the rest of the image; a JPEG level of 50 was chosen for region 4. Region 3 has dimensions 836 × 1344 and region 5 has dimensions 760 × 1344.

Fig 1 [7] Chest X-ray with three clinically relevant regions

The original file size of the uncompressed raw image is 5111808 bytes at 8 bits per pixel. With the entire image compressed using lossless JPEG-2000, the compressed file size is 564185 bytes. This gives a compression ratio of 9.061:1, or 0.883 bpp. Regions 3 and 5 are compressed using lossless JPEG. Regions 1, 2, 6 and 7 are compressed using lossy JPEG at a compression level of 10. Region 4 is compressed using lossy JPEG at a compression level of 50. The results are shown in Table 1 below.

Table 1 [7]
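For reference, the compression ratio and bits-per-pixel figures quoted above and below follow directly from the file sizes; the short calculation below (our own check, not part of [7]) reproduces them.

```python
# Compression ratio and bits per pixel from raw and compressed file sizes.
raw_bytes = 5111808          # size of the uncompressed 8-bit image, as reported in [7]

def summarize(compressed_bytes, bits_per_raw_pixel=8):
    cr = raw_bytes / compressed_bytes        # compression ratio (CR)
    bpp = bits_per_raw_pixel / cr            # bits per pixel after compression
    return round(cr, 3), round(bpp, 3)

print(summarize(564185))     # whole image, lossless JPEG-2000 -> (9.061, 0.883)
print(summarize(2987890))    # whole residual image, lossless JPEG-2000 -> (1.711, 4.676)
print(summarize(1380229))    # ROI approach on the residual -> (3.704, 2.16)
```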

Residual image technique: In the previous subsections, we have seen an overview of compression with respect to the clinically relevant regions on the raw image. The question now arises: can the atlas (bank of templates) help in producing better compression using a 'residual image' technique? The residual image is then compressed in a similar region-based approach. For example, Figure 4 shows the partitioned residual image produced when the original image is subtracted from the image obtained (see Figure 3) after matching the corresponding atlas image (see Figure 2) to the image in Figure 1. The partitioning is the same as in Figure 1. As before, the three ROI partition the image into seven areas, indicated by regions 1 through 7. The clinically relevant region 3 has dimensions 836 × 1344 and the clinically relevant region 5 has dimensions 760 × 1344.

Fig 2 [7] Atlas chest X-ray that matches Figure 1

Fig 3 [7] The atlas image after being registered to Figure 1

Fig 4 [7] Partitioned residual chest X-ray

The original file size of the uncompressed raw image is 5111808 bytes at 8 bits per pixel (bpp). With the entire residual image compressed using lossless JPEG-2000, the compressed file size is 2987890 bytes. This gives a compression ratio of 1.711:1, or 4.676 bpp. Regions 3 and 5 are compressed using lossless JPEG. Regions 1, 2, 6 and 7 are compressed using lossy JPEG at level 10. Region 4 is compressed using lossy JPEG at level 50. The results are shown in Table 2.

Table 2 [7]


Thus, using the ROI approach on the residual, the compressed file size is 1380229 bytes. This gives a compression ratio of 3.704:1, or 2.160 bpp. The ROI areas take up 42.0% of the entire image area. The compression ratio has improved by over 100% compared with the 1.711:1 ratio obtained for lossless compression of the whole residual image. Nevertheless, the residuals are not exceptionally effective: in this example they produced a final result of 2.160 bpp, compared with the 0.445 bpp achieved using the same ROI on the original images. There are fundamental reasons why the residual approach may not perform exceptionally well. The first is that the residual approach must also encode the atlas used and the 'transform' parameters. The second is that even minor misalignments result in high-amplitude, high-frequency data (i.e., the residual image looks mostly like edges), which is harder to compress. The paper [7] further suggests that the simple 6-parameter affine transforms used for alignment are insufficient and that more general deformable models need to be developed. Most compression methods (such as JPEG) strive to achieve minimal loss of information based on mathematical measures of difference. The particularities of the human vision system are often not taken into account. Thus, not only may the image quality be degraded noticeably, but unwanted artifacts may also be introduced. In this context, a new method for image compression was suggested in one research paper [8], developed in accordance with the properties of human vision. The images reconstructed from the compressed data appear identical to the originals. This method is suitable for all types of images, including photographic pictures, diagrams and text.
VI. OVERVIEW OF HUMAN VISION MODEL

A widely used method to measure the sensitivity of the eye is the use of special test images [8]. Stripes of alternating brightness or color are fit together (Fig. 5). On one of the axes the frequency of the stripes per distance unit increases exponentially. On the other axis, the difference (of brightness or color) between neighboring stripes decreases exponentially.
A subject looking at the test images would be able to tell immediately that below a certain level of difference between the neighboring stripes, they appear as a uniform background. Furthermore, this perception threshold varies with the overall brightness of the surroundings: the threshold value is greatest for medium-brightness surroundings, and as the overall brightness of the surroundings increases or decreases, the eye becomes less sensitive to changes. Thus, we can define a threshold function which depends on the relative difference of the changes and on the overall brightness of the surroundings. The physical receptors of light in the eye respond to the wavelengths of red, green and blue light. However, the signals are combined before they reach the brain. The signals received by the brain can be described as "what the brightness is", "how blue or how yellow it is" (blue chrominance), and "how red or how green it is" (red chrominance). Among the standards defined by [9] is the YCbCr color model (brightness, blue chrominance, red chrominance), which approximates how humans see. One important fact is that this model is very well suited to compression purposes: it achieves very good decorrelation of the image bands (i.e., the value of a band cannot be predicted from the values of the other bands). This is why the YCbCr model is employed for the compression.
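As a concrete reference for the colour model mentioned above, the following sketch (our own illustration) converts an 8-bit RGB pixel array to YCbCr using the ITU-R BT.601 weighting, which is the recommendation cited in [9].

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """Convert an (..., 3) uint8 RGB array to YCbCr (ITU-R BT.601, 8-bit offsets)."""
    rgb = rgb.astype(np.float64)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  = 0.299 * r + 0.587 * g + 0.114 * b          # brightness (luma)
    cb = 0.564 * (b - y) + 128.0                    # blue chrominance
    cr = 0.713 * (r - y) + 128.0                    # red chrominance
    return np.stack([y, cb, cr], axis=-1).round().clip(0, 255).astype(np.uint8)

# Example: a single mid-grey pixel has no chrominance (Cb = Cr = 128).
print(rgb_to_ycbcr(np.array([[[128, 128, 128]]], dtype=np.uint8)))
```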

Selection of Image Regions: The threshold function is a property which distinguishes the regions of the image where the human eye would be able to perceive noticeable changes. There are separate threshold functions for the Y, Cb and Cr bands. Here the selection of regions is described for a single band, because the process is the same for all bands. The image is divided into two types of regions, as shown in Figure 6 below.

Fig 6 Colour, Y, Cr & Cb image of a picture

Fig 5 [8] Perception testing picture with threshold

The white regions describe pixels with visually significant information, such as edges. These regions are governed by the area above the perception threshold, which means that the observer will perceive a sharp change in the image (i.e., a change of brightness or of chrominance). The black regions describe the rest of the image. They are governed by the area below the perception threshold, which means that the observer might notice changes, but they will be very gradual and appear blurry. Clearly, the black regions can be compressed with loss of information while still appearing identical to the original. The approach used for the division of the image into black and white regions is based on the gradient of the image. Indeed, the higher the gradient, the bigger the relative change in the image and thus the stronger the visual perception of the change. For each pixel of the Y band, the average value of the 3-by-3 neighborhood is calculated and used to adjust the gradient according to the average brightness of the surroundings. After this, a flat threshold value can be applied across the whole image, and the resulting division into black and white regions constitutes the desired extraction of visually significant regions.
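A minimal sketch of this region-selection step follows (our own illustration; the brightness-adjustment function and the threshold value are placeholders, since [8] does not give exact formulas here). It computes the image gradient, scales it by a factor derived from the local 3-by-3 mean brightness, and thresholds the result into a white/black mask.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def visually_significant_mask(y_band, threshold=12.0):
    """Return True for 'white' pixels (visually significant changes) in the Y band."""
    y = y_band.astype(np.float64)

    # Gradient magnitude of the brightness band.
    gy, gx = np.gradient(y)
    grad = np.hypot(gx, gy)

    # Average brightness of the 3x3 neighborhood of each pixel.
    local_mean = uniform_filter(y, size=3)

    # Placeholder sensitivity adjustment: the eye is most sensitive at medium
    # brightness, less sensitive towards the dark and bright extremes.
    sensitivity = 1.0 - np.abs(local_mean - 128.0) / 128.0
    adjusted = grad * sensitivity

    # A single flat threshold now separates white (True) from black (False) regions.
    return adjusted > threshold

# mask = visually_significant_mask(y)   # y: 8-bit luma band of the image
```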

Compression of Visually Significant Regions: The white regions contain visually significant information, and the compression method applied to the pixels in these regions must not cause unwanted loss of detail. The white regions are selected because the gradient there has a large value, which implies that there are big differences in the values of the neighboring pixels. Human sight loses precision as the stimulus increases; in other words, the eye cannot distinguish small variations within large differences. The two least significant bits of the pixels in the white regions, for both the brightness and the chrominance bands, play no role in the visual discrimination between the original and the compressed image. Thus, for all pixels in the white regions, the two least significant bits are discarded. Coding [10] each bit-plane separately can offer a better level of compression. Additionally, pixel values are transformed using the Gray code to increase the chance that neighboring pixels in the image will have the same bits in a given bit-plane, which improves compressibility.
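The sketch below (our own illustration) shows the two operations described above for the white regions: dropping the two least significant bits and converting pixel values to Gray code so that neighboring values differ in fewer bit-planes.

```python
import numpy as np

def drop_two_lsbs(pixels):
    """Discard the two least significant bits of 8-bit pixel values."""
    return pixels & 0b11111100

def to_gray_code(pixels):
    """Binary-reflected Gray code: neighbouring values differ in a single bit."""
    return pixels ^ (pixels >> 1)

white = np.array([200, 201, 202, 203, 204], dtype=np.uint8)  # hypothetical white-region pixels
coarse = drop_two_lsbs(white)          # [200 200 200 200 204]
gray = to_gray_code(coarse)            # bit-planes become easier to compress
print(coarse, gray)
```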

Compression of the Visually Less Significant Regions: The compression of the black regions is a more sophisticated process than the compression of the white regions, since there is greater freedom in discarding visually insignificant information. The gradient in the black regions is very small, and the human eye perceives only smooth changes in such regions. A method which can be used successfully in these circumstances is prediction-based coding: pixel values can be approximated well from their neighborhoods. To increase the chance of successful prediction, the black regions can additionally be blurred before compression starts.
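As an illustration of prediction-based coding in these smooth regions, the sketch below (our own example, using a simple previous-pixel predictor rather than the exact predictor of [8]) encodes a row of black-region pixels as small prediction residuals, which an entropy coder can then compress very effectively.

```python
import numpy as np

def predictive_encode(row):
    """Previous-pixel predictor: the first 'residual' is the initial value itself."""
    return np.diff(row.astype(np.int16), prepend=np.int16(0))

def predictive_decode(residuals):
    """Undo the prediction by accumulating the residuals."""
    return np.cumsum(residuals).astype(np.uint8)

# Smoothly varying black-region pixels produce near-zero residuals.
row = np.array([60, 61, 61, 62, 63, 63, 64], dtype=np.uint8)
res = predictive_encode(row)
assert np.array_equal(predictive_decode(res), row)
print(res)   # [60  1  0  1  1  0  1]
```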

Region Based Compression: Figure 7 shows a diagram of the region-based visually lossless compression method.

Fig 7 Region-based visually lossless compression method

LZW-TIFF works best when there are large, uniform-color regions, and vector graphics consist mostly of such regions. Region-based compression usually produces files which are no larger than twice the size of the LZW-TIFF output and are approximately 10 to 12% of the size of the original image. When compressing RGB images of text, the proposed method performs as well as LZW-TIFF. The visually lossless compression method is designed to work with different types of images, including images where other lossy compression techniques can introduce unwanted artifacts. Since the method is based on the human vision model, it is completely versatile. The best performance should be expected with images of mixed content; such images were obtained by taking snapshots of the homepages of different websites, mixing text, vector graphics and photographs. The method is slower than single-pass compressors, since it has to make several passes over the image; on the other hand, most of the calculations involve very simple operations. The space complexity of the method also exceeds that of standard compressors, since the bitmap of the regions is required in addition to the image data.


VII. APPLICATION OF DIGITAL IMAGE PROCESSING IN DIAGNOSIS
Diabetes is a disorder of metabolism. The energy required by the body is obtained from glucose, which is produced as a result of food digestion. Glucose enters the bloodstream and is absorbed into the cells with the aid of a hormone called insulin, which is produced by the pancreas, an organ that lies near the stomach. During eating, the pancreas automatically produces the correct amount of insulin needed to allow glucose absorption from the blood into the cells. In individuals with diabetes, the pancreas either produces too little or no insulin, or the cells do not react properly to the insulin that is produced. Glucose then builds up in the blood, overflows into the urine and passes out of the body, so the body loses its main source of fuel even though the blood contains large amounts of glucose [11]. The effect of diabetes on the eye is called Diabetic Retinopathy (DR). It is known to damage the small blood vessels of the retina, which can lead to loss of vision.
This research work [11] is one method of applying digital image processing to the field of medical diagnosis in order to lessen the time and stress undergone by the ophthalmologist and other members of the team in the screening, diagnosis and treatment of diabetic retinopathy. The primary aim of the project is to develop a system that can identify patients with BDR and PDR (background and proliferative diabetic retinopathy) from either colour or grey-level images obtained from the retina of the patient. These images are called fundus images. The diabetic retinopathy features of interest include red spots, microaneurysms and neovascularisation, which fall between the BDR and PDR stages of the disease. The secondary aim is to develop a MATLAB-based Graphical User Interface (GUI) tool to be used by the ophthalmologist for marking fundus images. The marked images are to be used for the development of a DR grading and database system for this and future work. As shown in Figure 8, the input fundus image is analysed by the system and the output contains the grading and the result, with the coordinates of the detected abnormality shown on the GUI. The input image to the Pre-Processing stage can be a colour or a grey-level image. The Pre-Processing stage corrects the illumination variation that occurs when the pictures are taken. It also enhances the contrast between the exudates, the vein network and the background, to aid the segmentation and detection of the abnormalities. The processes involved in this stage include colour space conversion, zero padding of the image edges, median filtering and window-based adaptive histogram equalization with overlap mean. The output of this stage is passed to the Segmentation stage, which separates the background pixels from the exudates and the vein networks using a K-means clustering algorithm with two cluster centres. The exudates and vein-network clusters also contain some noisy pixels that were over-enhanced during the Pre-Processing stage; these are removed in the next stage, the Disease Classifier stage.
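A rough sketch of such a pre-processing and segmentation pipeline is given below (our own illustration, assuming OpenCV and scikit-learn; the exact colour space conversion, window sizes and the overlap-mean equalization used in [11] are not specified here, so the green channel and CLAHE stand in for those steps).

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans

def preprocess_and_segment(path):
    bgr = cv2.imread(path)            # colour fundus image (hypothetical file)
    green = bgr[:, :, 1]              # green channel (one common choice for retinal contrast)

    padded = cv2.copyMakeBorder(green, 8, 8, 8, 8, cv2.BORDER_CONSTANT, value=0)  # zero padding
    filtered = cv2.medianBlur(padded, 5)       # median filtering to suppress noise

    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = clahe.apply(filtered)           # adaptive histogram equalization (stand-in)

    # K-means with two cluster centres: background vs. exudates / vein network.
    pixels = enhanced.reshape(-1, 1).astype(np.float64)
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(pixels)
    mask = labels.reshape(enhanced.shape).astype(np.uint8)
    return enhanced, mask

# enhanced, mask = preprocess_and_segment("fundus.jpg")
```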

VIII. CONCLUSION

Although the residual approach is still at an experimental stage, this exploratory paper has shown that the overall approach of clinically relevant regions has clear advantages over both traditional lossless compression and simple lossy compression, and that the residual approach remains a promising direction. The human vision model is employed to exploit different ways of reducing the visually insignificant data in the compressed image. The YCbCr colour model is used both because of its suitability for compression and because of its resemblance to how people see. Each band is divided into regions with visually significant data and regions with visually less significant data, and the regions are coded using the most appropriate coding method to achieve high compression performance. The method is applicable to all types of images, since it takes into account the properties of human sight rather than mathematical definitions of closeness. The region-based compression technique proves to be the better choice for all types of images, in spite of its slower speed. In the application section, we have seen one valuable use of image processing in the diagnosis of Diabetic Retinopathy.
REFERENCES

1. Chen S.Y., Lin W.C., Chen C.T., "Split and merge image segmentation based on localized feature analysis and statistical tests", CVGIP: Graphical Models and Image Processing, Vol. 53, No. 5, pp. 457-475, 1991.

2. Jayram K.U., Gabor T.H., "Guest Editorial: Medical Image Reconstruction, Process, Vision and Analysis: The MIPG Process", IEEE Trans. on Medical Imaging, Vol. 21, No. 4, April 2002.

3. M.A. Ansari and R.S. Anand, "Recent Trends in Image Compression and Its Application in Telemedicine and Teleconsultation", XXXII National Systems Conference, NSC 2008, December 17-19, 2008.

4. C. Christopoulos, A. Skodras and T. Ebrahimi, "The JPEG 2000 still image coding system: An overview", IEEE Trans. on Consumer Electronics, Vol. 46, No. 4, pp. 1103-1127, November 2000.

5. S. Tai, Y. Wu, and C. Lin, "An adaptive 3-D Discrete Cosine Transform Coder for medical image compression", IEEE Trans. on Information Technology in Biomedicine, Vol. 4, pp. 259-263, Sep 2000.

6. Ahn C.B., Kim I.Y., Han S.W., "Medical Image Compression Using JPEG Progressive Coding", IEEE Conf. Record, Nuclear Science Symposium and Medical Imaging, pp. 1336-1339, 31 Oct.-6 Nov. 1993.

7. Matthew J. Zukoski, Terrance Boult and Tunç Iyriboz, "A novel approach to medical image compression", Int. J. Bioinformatics Research and Applications, Vol. 2, No. 1, 2006.

8. Lenko Grigorov and Purang Abolmaesumi, "Region-Based Method for Visually Lossless Image Compression", School of Computing, Queen's University, Kingston, Ontario K7L 3N6, Canada.

9. YCbCr Color Model, ITU-R BT.601 Recommendation, International Telecommunication Union, http://www.itu.int.

10. R. Gonzalez and R. Woods, Digital Image Processing, 2nd Ed., Prentice Hall, 2002.

11. Iqbal M.I., Aibinu A.M., Gubbal N.S. and Khan A., "Automatic Diagnosis of Diabetic Retinopathy Using Fundus Images", MEE06:19, October 2006.
