Search results

Number of results: 4

Abstract

Super-resolution (SR) image reconstruction relies on two families of algorithms: one for single-frame image reconstruction and one for multi-frame image reconstruction. Single-frame reconstruction generally models the degradation first and then reconstructs the image, which leads to a problem of insufficient characterization. Multi-frame images provide additional information for reconstruction relative to single-frame images, owing to the slight differences between sequential frames. However, existing super-resolution algorithms for multi-frame images fail to exploit this key factor, either because of their loose structure and complexity or because the individual frames are restored poorly. This paper proposes a new SR reconstruction algorithm based on the Multi-grained Cascade Forest. Multi-frame image reconstruction is processed sequentially: first, a convolutional neural network registers the low-resolution image sequence; the registered images are then reconstructed by the Multi-grained Cascade Forest algorithm; finally, the reconstructed images are fused. The optimal algorithm is selected for each step to preserve as much detail as possible and to tightly connect the internal logic of the sequential steps. In the novel approach proposed in this paper, the depth of the cascade forest is generated adaptively for each recovered image rather than being held constant: after each layer is trained, the recovered image is evaluated automatically, and new layers are constructed and trained until an optimal restored image is obtained. Experiments show that this method improves the quality of image reconstruction while preserving the details of the image.
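
A rough sketch of the adaptive-depth idea described above is given below in Python: the cascade is grown layer by layer and stops as soon as the restored patches no longer improve on a held-out set. This is a minimal sketch only; scikit-learn random forests stand in for the paper's Multi-grained Cascade Forest layers, PSNR stands in for the automatic image evaluation step, and every name and parameter is an illustrative assumption.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def psnr(a, b, peak=1.0):
    """Peak signal-to-noise ratio between two arrays with intensities in [0, peak]."""
    mse = np.mean((a - b) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def train_adaptive_cascade(X_train, y_train, X_val, y_val, max_layers=8, tol=1e-3):
    """Grow cascade layers (random-forest stand-ins) until validation PSNR stops improving.

    X_*: low-resolution patch features; y_*: flattened high-resolution target patches.
    """
    layers, best_score = [], -np.inf
    feat_train, feat_val = X_train, X_val
    for _ in range(max_layers):
        layer = RandomForestRegressor(n_estimators=50, n_jobs=-1)
        layer.fit(feat_train, y_train)
        pred_val = layer.predict(feat_val)
        score = psnr(pred_val, y_val)
        if score <= best_score + tol:  # no further gain: stop growing the cascade
            break
        layers.append(layer)
        best_score = score
        # Feed this layer's prediction back as extra features for the next layer.
        feat_train = np.column_stack([X_train, layer.predict(feat_train)])
        feat_val = np.column_stack([X_val, pred_val])
    return layers, best_score
```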


Authors and Affiliations

Yaming Wang
Zhikang Luo
Wenqing Huang

Abstract

Multi-focus image fusion is a method of increasing image quality and preventing image redundancy. It is used in many fields such as medical diagnostics, surveillance, and remote sensing. Various algorithms are available nowadays, but a common problem remains: existing methods do not handle ghost effects and unpredicted noise well. Computational intelligence has developed quickly over recent decades, and multi-focus image fusion has developed rapidly alongside it. The proposed method is multi-focus image fusion based on an encoder-decoder algorithm using the DeeplabV3+ architecture. During training, a multi-focus dataset and its ground truth are used to construct the network model. The trained model is then applied to the test sets to predict the focus map; this testing step performs semantic focus processing. Lastly, the fusion process combines the focus map and the multi-focus images to produce the fused image. The results show that the fused images contain neither ghost effects nor unpredicted tiny objects. The proposed method is assessed in two respects: the accuracy of the predicted focus map, and objective quality measures of the fused image such as mutual information, SSIM, and PSNR. The method achieves high precision and recall, and the SSIM, PSNR, and mutual-information indexes are also high. The proposed method also performs more stably than other methods. Finally, the ResNet50-based algorithm for multi-focus image fusion handles the ghost-effect problem well.
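
To make the last step concrete, the sketch below shows how a predicted focus map can be used to assemble the fused image. The encoder-decoder network itself (DeeplabV3+ in the abstract) is out of scope here; `focus_map` is assumed to be a binary map in which 1 marks pixels that are in focus in the first source image.

```python
import numpy as np

def fuse_with_focus_map(img_a, img_b, focus_map):
    """Pixel-wise composition of two multi-focus images.

    img_a, img_b: HxWx3 float arrays; focus_map: HxW array in {0, 1},
    where 1 means the pixel is sharp in img_a.
    """
    mask = focus_map[..., None].astype(img_a.dtype)  # broadcast the map over colour channels
    return mask * img_a + (1.0 - mask) * img_b
```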

Authors and Affiliations

K. Hawari (1)
Ismail Ismail (1, 2)

  1. Universiti Malaysia Pahang, Faculty of Electrical and Electronics Engineering, 26300 Kuantan, Malaysia
  2. Politeknik Negeri Padang, Electrical Engineering Department, 25162 Padang, Indonesia

Abstract

For brain tumour treatment planning, the diagnoses and predictions made by medical doctors and radiologists depend on medical imaging. Obtaining clinically meaningful information from imaging modalities such as computerized tomography (CT), positron emission tomography (PET) and magnetic resonance (MR) scans is the core method behind the software and advanced screening used by radiologists. In this paper, a universal and comprehensive framework for two parts of the dose control process – tumour detection and tumour area segmentation from medical images – is introduced. The framework implements methods to detect glioma tumours from CT and PET scans. Two deep learning pre-trained models, VGG19 and VGG19-BN, were investigated and used to fuse the CT and PET examination results. Mask R-CNN (region-based convolutional neural network) was used for tumour detection; the output of the model is a bounding box for each tumour in the image. U-Net was used to perform semantic segmentation, i.e. to segment malignant cells and the tumour area. A transfer learning technique was used to increase the accuracy of the models despite the limited size of the dataset, and data augmentation methods were applied to generate additional training samples. The implemented framework can be used for other use cases that combine object detection and area segmentation from grayscale and RGB images, especially to build computer-aided diagnosis (CADx) and computer-aided detection (CADe) systems in the healthcare industry that facilitate and assist the work of doctors and medical care providers.
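
The transfer-learning step mentioned in the abstract could look roughly like the PyTorch sketch below: an ImageNet-pretrained VGG19-BN backbone is frozen and only a newly attached classification head is trained on the limited CT/PET dataset. The two-class head, the learning rate, and the decision to freeze the entire convolutional part are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn
import torchvision.models as models

# Load VGG19 with batch normalization, pretrained on ImageNet.
model = models.vgg19_bn(pretrained=True)

# Freeze the convolutional backbone so only the new head is updated.
for param in model.features.parameters():
    param.requires_grad = False

# Replace the last classifier layer with a two-class head (tumour / no tumour);
# this head is an assumption made for illustration.
model.classifier[6] = nn.Linear(4096, 2)

# Optimize only the parameters that remain trainable.
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4
)
criterion = nn.CrossEntropyLoss()
```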


Authors and Affiliations

Estera Kot (1)
Zuzanna Krawczyk (1)
Krzysztof Siwek (1)
Leszek Królicki (2)
Piotr Czwarnowski (2)

  1. Warsaw University of Technology, Faculty of Electrical Engineering, Pl. Politechniki 1, 00-661 Warsaw, Poland
  2. Medical University of Warsaw, Nuclear Medicine Department, ul. Banacha 1A, 02-097 Warsaw, Poland

Abstract

Nowadays, medical imaging modalities such as magnetic resonance imaging (MRI), positron emission tomography (PET), single photon emission computed tomography (SPECT), and computed tomography (CT) play a crucial role in clinical diagnosis and treatment planning. The images obtained from each of these modalities contain complementary information about the imaged organ. Image fusion algorithms are employed to bring all of this disparate information together into a single image, allowing doctors to diagnose disorders quickly. This paper proposes a novel technique for the fusion of MRI and PET images based on the YUV color space and the wavelet transform. Quality assessment based on entropy showed that the method can achieve promising results for medical image fusion. The paper presents a comparative analysis of the fusion of MRI and PET images using different wavelet families at various decomposition levels for the detection of brain tumors as well as Alzheimer's disease. The quality assessment and visual analysis showed that the Dmey wavelet at decomposition level 3 is optimal for the fusion of MRI and PET images. The paper also compared several fusion rules, namely average, maximum, and minimum, and found that the maximum fusion rule outperformed the other two.
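
A minimal sketch of the wavelet-domain fusion described above, assuming the MRI slice and the luminance (Y) channel of the PET image have already been registered to the same size: both are decomposed with the 'dmey' wavelet at level 3 and merged with the maximum fusion rule, the combination the abstract reports as best. The colour-space conversion and the entropy-based quality assessment are omitted, and PyWavelets is used for the transform.

```python
import numpy as np
import pywt

def fuse_max(c1, c2):
    """Maximum fusion rule: keep, per coefficient, the value with the larger magnitude."""
    return np.where(np.abs(c1) >= np.abs(c2), c1, c2)

def wavelet_fuse(mri_y, pet_y, wavelet="dmey", level=3):
    """Fuse two registered grayscale images in the wavelet domain."""
    d1 = pywt.wavedec2(mri_y, wavelet, level=level)
    d2 = pywt.wavedec2(pet_y, wavelet, level=level)
    fused = [fuse_max(d1[0], d2[0])]  # approximation band
    for (h1, v1, dg1), (h2, v2, dg2) in zip(d1[1:], d2[1:]):
        fused.append((fuse_max(h1, h2), fuse_max(v1, v2), fuse_max(dg1, dg2)))
    return pywt.waverec2(fused, wavelet)
```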

Authors and Affiliations

Jinu Sebastian (1)
G.R. Gnana King (1)

  1. Sahrdaya College of Engineering and Technology (under APJ Abdul Kalam Technological University), Thrissur, Kerala, India
