Search results

Number of results: 170

Abstract

The synthetic aperture (SA) technique is a novel alternative to present-day commercial systems and has not previously been used in medical ultrasound imaging. The basic idea of SA is to combine information acquired simultaneously from all directions over a number of emissions and to reconstruct the full image from these data.

The paper presents the multi-element STA (MSTA) method for medical ultrasound imaging. The main difference from the STA approach is the use of a few elements in the transmit mode instead of a single-element aperture. This allows the system frame rate to be increased by decreasing the number of emissions, and provides the best compromise between penetration depth and lateral resolution. In addition, a modified MSTA is proposed with a corresponding RF signal correction in the receive mode, which accounts for the directivity of the array elements.

In the experiments, a 32-element linear transducer array with 0.48 mm inter-element spacing and a burst pulse of 100 ns duration were used. A two-element-wide transmit aperture was used to generate an ultrasound wave covering the full image region. A comparison of 2D ultrasound images of a tissue-mimicking phantom obtained using the STA and MSTA methods is presented to demonstrate the benefits of the latter.
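The reconstruction underlying both STA and MSTA is, at its core, delay-and-sum beamforming across all emissions. The sketch below is a minimal illustration of that idea only; the function and parameter names, the sampling rate, and the nearest-sample interpolation are assumptions, not the authors' implementation.

```python
import math

def delay_and_sum(rf, tx_pos, rx_pos, point, c=1540.0, fs=50e6):
    """Coherently sum RF echoes at an image point (x, z).

    rf[i][j] is the signal recorded by receive element j after the
    i-th emission; tx_pos / rx_pos are element x-coordinates in metres,
    c is the assumed speed of sound, fs the sampling rate.
    """
    x, z = point
    value = 0.0
    for i, tx in enumerate(tx_pos):
        d_tx = math.hypot(x - tx, z)          # emission-to-point path
        for j, rx in enumerate(rx_pos):
            d_rx = math.hypot(x - rx, z)      # point-to-receiver path
            t = (d_tx + d_rx) / c             # round-trip time of flight
            n = int(round(t * fs))            # nearest RF sample
            if 0 <= n < len(rf[i][j]):
                value += rf[i][j][n]
    return value
```

Running this for every pixel over all emissions yields the low-resolution images that STA/MSTA then combines into the full image.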


Authors and Affiliations

Ihor Trots
Marcin Lewandowski
Yuriy Tasinkevych
Andrzej Nowicki

Abstract

The paper presents the optimization problem for the multi-element synthetic transmit aperture (MSTA) method in ultrasound imaging applications. The optimal transmit aperture size is chosen as a trade-off between the lateral resolution, penetration depth and frame rate. Results obtained with the developed optimization algorithm are presented. The maximum penetration depth and the lateral resolution at given depths are chosen as the optimization criteria. The results of numerical experiments carried out in MATLAB® using synthetic aperture data of point reflectors obtained with the FIELD II simulation program are presented. The visualization of experimental synthetic aperture data of a tissue-mimicking phantom and in vitro measurements of beef liver performed using the SonixTOUCH Research system are also shown.


Authors and Affiliations

Marcin Lewandowski
Yuriy Tasinkevych
Andrzej Nowicki
Ihor Trots

Abstract

Background: Echocardiography is the first examination used to establish myocardial function in patients with takotsubo syndrome (TTS). However, ECG-gated myocardial single-photon emission tomography (G-SPECT) also makes it possible to calculate the left ventricular ejection fraction (LVEF) and can be useful in the early stage of TTS.

Aim: To compare the LVEF obtained from 99mTc-MIBI G-SPECT and echocardiography in patients with TTS.

Material and Methods: The study population comprised 20 patients with TTS, median age 77 (range: 62–89). In all patients, 99mTc-MIBI G-SPECT and echocardiography were performed on the same day.

Results: The LVEF measured by G-SPECT and echocardiography ranged from 34 to 83% and from 38 to 69%, respectively. The LVEF values for echocardiography were significantly lower than for G-SPECT. The correlation between the two LVEF measurements was r = 0.76. The calculated coefficient for the linear regression analysis was 0.64. The following equation describes the approximate interdependence of the two LVEF estimates: LVEF G-SPECT = 10.35 + 0.93 × LVEF Echo.

Conclusions: G-SPECT tends to overestimate LVEF compared to echocardiography, so these imaging techniques should not be used interchangeably. The calculated equation should be used when comparing LVEF values obtained with the two methods.
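The reported regression can be applied directly when comparing the two modalities. A minimal sketch (the function name is an illustration, not from the paper):

```python
def lvef_gspect_from_echo(lvef_echo):
    """Approximate G-SPECT LVEF (%) from an echocardiographic LVEF (%)
    using the regression reported in the abstract:
    LVEF G-SPECT = 10.35 + 0.93 * LVEF Echo."""
    return 10.35 + 0.93 * lvef_echo
```

For the echocardiographic range reported (38–69%), the predicted G-SPECT values lie roughly between 46% and 75%, consistent with the observed overestimation.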


Authors and Affiliations

Małgorzata Kobylecka
Monika Budnik
Janusz Kochanowski
Jakub Kucharz
Tomasz Mazurek
Adam Bajera
Leszek Królicki
Grzegorz Opolski

Abstract

The attenuating properties of biological tissue are of great importance in ultrasonic medical imaging. Investigations performed in vitro and in vivo have shown a correlation between pathological changes in the tissue and variation of the attenuation coefficient. To estimate the attenuation, we used the downshift of the mean frequency (fm) of the interrogating ultrasonic pulse propagating in the medium. To determine fm along the propagation path, we applied an fm estimator (an I/Q algorithm adopted from the Doppler mean frequency estimation technique). The mean-frequency shift trend was calculated using Singular Spectrum Analysis. Next, the trends were converted into attenuation coefficient distributions, and finally the parametric images were computed. The RF data were collected in simulations and experiments applying the synthetic aperture (SA) transmit-receive scheme. In the measurements, an ultrasonic scanner enabling full control of transmission and reception was used. The resolution and accuracy of the method were verified using a tissue-mimicking phantom with uniform echogenicity but a varying attenuation coefficient.
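The Doppler-style I/Q mean-frequency estimator mentioned above is conventionally the lag-one autocorrelation (Kasai) estimator. The sketch below shows that standard form under the assumption that this is the variant used; names and parameters are illustrative.

```python
import cmath
import math

def mean_frequency(iq, fs):
    """Estimate the mean frequency of a complex baseband (I/Q) signal
    sampled at fs from the phase of its lag-one autocorrelation
    (the Kasai / autocorrelation estimator)."""
    r1 = sum(iq[n + 1] * iq[n].conjugate() for n in range(len(iq) - 1))
    return fs * cmath.phase(r1) / (2.0 * math.pi)
```

Applied in a sliding window along the echo line, the downward trend of these estimates with depth is what carries the attenuation information.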


Authors and Affiliations

Ziemowit Klimonda
Andrzej Nowicki
Jerzy Litniewski

Abstract

Ultrasound is used for breast cancer detection as a technique complementary to mammography, the standard screening method. Current practice is based on reflectivity images obtained with conventional instruments by an operator who positions the ultrasonic transducer by hand over the patient’s body. It is a non-ionizing, pain-free and inexpensive technique that provides higher contrast than mammography for discriminating between fluid-filled cysts and solid masses, especially in dense breast tissue. However, the results depend strongly on the operator’s skills, the images are difficult to reproduce, and state-of-the-art instruments have limited resolution and contrast for showing micro-calcifications and for discriminating between lesions and the surrounding tissue. In spite of its advantages, these factors have precluded the use of ultrasound for screening.

This work approaches ultrasound-based early detection of breast cancer with a different concept. A ring array with many elements covering 360° around a hanging breast makes it possible to obtain repeatable and operator-independent coronal slice images. Such an arrangement is well suited for multi-modal imaging that includes reflectivity, compounded, tomography, and phase coherence images for increased specificity in breast cancer detection. Preliminary work carried out with a mechanical emulation of the ring array and a standard breast phantom shows high resolution and contrast, with artifact-free imaging provided by phase coherence processing.


Authors and Affiliations

Jorge Camacho
Luis Medina
Jorge F. Cruza
José M. Moreno
Carlos Fritsch

Abstract

Pathological states of biological tissue often result in attenuation changes. Thus, information about the attenuating properties of tissue is valuable for the physician and could be useful in ultrasonic diagnosis. We are currently developing a technique for parametric imaging of attenuation and intend to apply it to in vivo characterization of tissue. An attenuation estimation method based on changes of the echo mean frequency caused by the frequency dependence of tissue attenuation is presented. The Doppler IQ technique was adopted to estimate the mean frequency directly from the raw RF data. The Singular Spectrum Analysis technique was used to extract the mean-frequency trends. These trends were converted into attenuation distributions, and finally the parametric images were computed. To reduce the variance of the attenuation estimates, the spatial compounding method was applied. The operation and accuracy of the attenuation-extraction procedure were verified by calculating the attenuation coefficient distribution using data from a tissue phantom (DFS, Denmark) with uniform echogenicity but a varying attenuation coefficient.


Authors and Affiliations

Andrzej Nowicki
Ziemowit Klimonda
Jerzy Litniewski

Abstract

Acoustic waves are a carrier of information mainly in environments where the use of other types of waves, for example electromagnetic waves, is limited. The term acoustical imaging is widely used in ultrasonic engineering for imaging areas in which acoustic waves propagate. In particular, ultrasound is widely used in the visualization of human organs, i.e. ultrasonography (Nowicki, 2010). Expanding the concept, acoustical imaging can also be used to present (monitor) the current state of the sound intensity distribution, leading to the characterization of sources in an observed underwater region. This can be represented in the form of an acoustic characteristic of the area, for example as a spectrogram. Knowledge of the underwater world, built by analogy to the perception of space on the Earth's surface, is systematized in the form of images. These images arise as a result of the graphical representation of processed acoustic signals. In this paper, it is explained why acoustic waves are used in underwater imaging. Furthermore, passive and active systems for underwater observation are presented. The paper is illustrated with acoustic images, most of which originate from our own investigations.

Authors and Affiliations

Grażyna Grelowska
Eugeniusz Kozaczka

Abstract

The identification of macroalgal beds is a crucial component of the description of fjord ecosystems. Direct biological sampling is still the most popular investigation technique, but acoustic methods are becoming increasingly recognized as a very efficient tool for the assessment of benthic communities. In 2007 we carried out the first acoustic survey of the littoral areas in Kongsfjorden. A 2.68 km² area comprised within a 12.40 km² euphotic zone was mapped along the fjord's coast using single- and multi-beam echosounders. The single-beam echosounder (SBES) proved to be a very efficient and reliable tool for macroalgae detection in Arctic conditions. The multibeam echosounder (MBES) was very useful in extending the SBES survey range, even though its ability to discriminate benthic communities was limited. The final result of our investigation is a map of the macroalgae distribution around the fjord, showing 39% macroalgae coverage (1.09 km²) of the investigated area between the -0.70 m and -30 m isobaths. Zonation analysis showed that most of the studied macroalgae (93%) occur at depths of up to 15 m. These results were confirmed by biological sampling and observation in key areas. The potential of acoustic imaging of macrophytes, and a proposed methodology for the processing of acoustic data, are presented in this paper along with preliminary studies on the acoustic reflectivity of macroalgae, also highlighting differences among species. These results can be applied to future monitoring of the evolution of kelp beds in different areas of the Arctic, and in the rest of the world.

Authors and Affiliations

Jarosław Tęgowski
Aleksandra Kruss
Agnieszka Tatarek
Józef Wiktor
Philippe Blondel

Abstract

Super-resolution (SR) image reconstruction comprises two families of algorithms: single-frame and multi-frame reconstruction. Single-frame reconstruction generally models the degradation first and then reconstructs, which leads to a problem of insufficient characterization. Multi-frame images provide additional information for reconstruction relative to single frames, thanks to the slight differences between sequential frames. However, existing super-resolution algorithms for multi-frame images do not take advantage of this key factor, either because of their loose structure and complexity, or because the individual frames are restored poorly. This paper proposes a new SR reconstruction algorithm based on the Multi-grained Cascade Forest. Multi-frame image reconstruction is processed sequentially. First, the image registration algorithm uses a convolutional neural network to register the low-resolution image sequence; the registered images are then reconstructed by the Multi-grained Cascade Forest reconstruction algorithm. Finally, the reconstructed images are fused. The optimal algorithm is selected for each step to get the most out of the details and to connect the internal logic of the sequential steps tightly. In the approach proposed here, the depth of the cascade forest is generated procedurally for the recovered images rather than being constant: after each layer is trained, the recovered image is automatically evaluated, and new layers are constructed and trained until an optimal restored image is obtained. Experiments show that this method improves the quality of image reconstruction while preserving the details of the image.


Authors and Affiliations

Yaming Wang
Zhikang Luo
Wenqing Huang

Abstract

Image segmentation is a typical operation in many image analysis and computer vision applications. However, hyperspectral image segmentation is a field which has not been fully investigated. In this study, an analogue-digital image segmentation technique is presented. The system uses an acousto-optic tuneable filter and a CCD camera to capture hyperspectral images that are stored in a digital grey-scale format. The data set was built considering several objects with remarkable differences in their reflectance and brightness components. In addition, the work presents a semi-supervised segmentation technique to deal with the complex problem of hyperspectral image segmentation, together with its quantitative and qualitative evaluation. In particular, the developed acousto-optic system is capable of acquiring 120 frames across the whole visible light spectrum. Moreover, the analysis of the spectral images of a given object enables its segmentation using a simple subtraction operation. Experimental results showed that it is possible to segment any region of interest with a good performance rate by using the proposed analogue-digital segmentation technique.
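The "simple subtraction operation" can be sketched as a per-pixel difference of two spectral bands followed by thresholding. Everything here (names, the threshold, the binary output) is an illustrative assumption, not the authors' exact procedure.

```python
def subtraction_segment(band_a, band_b, threshold):
    """Binary segmentation by thresholding the absolute per-pixel
    difference of two spectral bands given as 2-D lists of grey levels."""
    return [[1 if abs(a - b) > threshold else 0
             for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(band_a, band_b)]
```

A region whose reflectance changes strongly between the two selected bands is marked 1, while spectrally flat background is suppressed.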


Authors and Affiliations

César Isaza
Julio M. Mosquera
Gustavo A. Gómez-Méndez
Jonny P. Zavala-De Paz
Ely Karina-Anaya
José A. Rizzo-Sierra
Omar Palillero-Sandoval

Abstract

Despite great technological progress, scientists are still not capable of ascertaining how many species there are on Earth. Systematic studies are not only time-consuming but sometimes also significantly impeded by the constraints of available equipment. One method of morphology evaluation that is increasingly used in taxonomic research is micro-computed tomography (micro-CT). Its high spatial resolution and its ability to gather volumetric data in a single acquisition, without sectioning the specimen, are properties especially useful in the evaluation of small invertebrates. The nondestructive nature of micro-CT makes it possible to combine it with other imaging techniques, even for a single specimen. Moreover, in studies of rare organisms it allows full structural data to be collected without fracturing their bodies. The application of proper staining, exposure parameters or specific sample preparation significantly improves the quality of the studies performed. The following article presents a summary of current trends in, and the possibilities of, microtomography in morphological studies of small invertebrates.

Authors and Affiliations

Teresa Nesteruk
Łukasz Wiśniewski

Abstract

The main objective of this study is to improve the ultrasound image by employing a new algorithm based on transducer array element beam pattern correction, implemented in the synthetic transmit aperture (STA) method combined with the emission of mutually orthogonal complementary Golay sequences. Orthogonal Golay sequences can be transmitted and received by different transducer elements simultaneously, thereby decreasing the time of image reconstruction, which plays an important role in medical diagnostic imaging. The paper presents the preliminary results of a computer simulation of the synthetic aperture method combined with orthogonal Golay sequences in a linear transducer array. The transmission of long waveforms characterized by a particular autocorrelation function makes it possible to increase the total energy of the transmitted signal without increasing the peak pressure. It can also improve the signal-to-noise ratio and increase the visualization depth while maintaining the ultrasound image resolution. In this work, a 128-element linear transducer array with a 0.3 mm pitch was excited by 8-bit Golay coded sequences as well as by a single cycle at a nominal frequency of 4 MHz. A comparison of 2D ultrasound images of the phantoms is presented to demonstrate the benefits of coded transmission. The image reconstruction was performed using the STA algorithm with transmit and receive signal correction based on a single-element directivity function.
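The defining property of complementary Golay pairs is that their autocorrelations sum to a delta function, which is what makes Golay pulse compression sidelobe-free. A minimal sketch of the standard recursive construction (the 8-bit length matches the sequences used in the paper; function names are illustrative):

```python
def golay_pair(order):
    """Build a complementary Golay pair of length 2**order by the
    standard recursive append/negate construction."""
    a, b = [1], [1]
    for _ in range(order):
        a, b = a + b, a + [-x for x in b]
    return a, b

def autocorr(seq, lag):
    """Aperiodic autocorrelation of seq at a non-negative lag."""
    return sum(seq[n] * seq[n + lag] for n in range(len(seq) - lag))
```

For the 8-bit pair, the two autocorrelations sum to 2N = 16 at zero lag and cancel exactly at every other lag.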

Authors and Affiliations

Ihor Trots

Abstract

Ultrasonic methods of imaging the internal structures of the human body are being continuously enhanced, and new algorithms are created to improve particular output parameters. The synthetic aperture (SA) method is an example: it allows images to be displayed at a higher frame rate than conventional beamforming.

A limitation of the SA method is its higher computational complexity, which can prevent a desired reconstruction time from being achieved. This problem can be solved by neglecting part of the data. Obviously this implies a decrease in imaging quality; however, a proper data reduction technique minimizes the image degradation.

The proposed data reduction can be used with the synthetic transmit aperture (STA) method and is based on the assumption that the signal obtained from any pair of transducer elements is the same regardless of which element transmits and which receives. According to this postulate, nearly half of the data can be ignored without a decrease in image quality.

The presented results of simulations and of measurements with wire and tissue phantoms show that the proposed technique reduces the amount of data to be processed by half while maintaining resolution and allowing only a small decrease in the SNR and contrast of the resulting images.
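Under the reciprocity assumption stated above, the STA data matrix is symmetric in its transmit/receive indices, so only the pairs with rx ≥ tx need to be stored and processed. A sketch (names are illustrative, not from the paper):

```python
def reduce_by_reciprocity(rf):
    """Keep only transmit/receive pairs with rx >= tx, assuming
    rf[tx][rx] == rf[rx][tx] (acoustic reciprocity)."""
    n = len(rf)
    return {(tx, rx): rf[tx][rx] for tx in range(n) for rx in range(tx, n)}
```

For an N-element array this keeps N(N+1)/2 of the N² recordings, i.e. slightly more than half, which matches the statement that nearly half of the data can be ignored.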


Authors and Affiliations

Piotr Karwat
Marcin Lewandowski
Ziemowit Klimonda
Andrzej Nowicki
Michał Sęklewski

Abstract

Commercially available cardiac scanners use 64–128-element phased-array (PA) probes and classical delay-and-sum beamforming to reconstruct a sector B-mode image. For portable and hand-held scanners, which are the fastest growing market segment, channel count reduction can greatly decrease the total power consumption and cost of the devices. The introduction of ultra-fast imaging methods based on plane waves and diverging waves provides new insight into the heart’s moving structures and, thanks to much higher frame rates, enables the implementation of new myocardial assessment and advanced flow estimation methods. The goal of this study was to show the feasibility of reducing the channel count in the diverging-wave synthetic aperture image reconstruction method for phased arrays. Ultra-fast 32-channel subaperture imaging combined with spatial compounding achieved a frame rate of approximately 400 fps for 120 mm visualization with image quality on par with the classical 64-channel beamformer. Specifically, it was shown that for visualization depths not exceeding 50 mm the proposed method yielded image quality metrics (lateral resolution, contrast and contrast-to-noise ratio) comparable with classical PA beamforming. For larger visualization depths (80–100 mm), a slight degradation of these parameters was observed. In conclusion, diverging-wave phased-array imaging with a reduced number of channels is a promising technology for low-cost, energy-efficient hand-held cardiac scanners.
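The ~400 fps figure is consistent with the round-trip time-of-flight budget: each diverging-wave emission must wait for echoes from the maximum depth to return before the next one. The sketch below illustrates that bound; the emission count of 16 per compounded frame is an assumption chosen for illustration, not a parameter taken from the paper.

```python
def max_frame_rate(depth_m, n_emissions, c=1540.0):
    """Upper bound on frame rate imposed by the round-trip time of
    flight: each emission must wait 2 * depth / c for the echoes."""
    t_emission = 2.0 * depth_m / c
    return 1.0 / (n_emissions * t_emission)
```

With 16 emissions per compounded frame at 120 mm depth, the bound comes out just above 400 fps.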


Authors and Affiliations

Yuriy Tasinkevych
Marcin Lewandowski
Ziemowit Klimonda
Mateusz Walczak

Abstract

Therapeutic and surgical applications of focused ultrasound require monitoring of the local temperature rises induced inside tissues. From an economic and practical point of view, ultrasonic imaging techniques seem to be the most suitable for temperature control. This paper presents an implementation of the ultrasonic echo displacement estimation technique for monitoring the local temperature rise in tissue during its heating by focused ultrasound. The results of the estimation were compared with the temperature measured with a thermocouple. The obtained results make it possible to evaluate the temperature fields induced in tissues by pulsed focused ultrasonic beams using a non-invasive ultrasound imaging technique.


Authors and Affiliations

Piotr Karwat
Jerzy Litniewski
Tamara Kujawska
Wojciech Secomski
Kazimierz Krawczyk

Abstract

Many imaging techniques play an increasingly significant role in clinical diagnosis. In recent years, noninvasive electrical conductivity imaging methods in particular have been investigated. Magnetoacoustic tomography with magnetic induction (MAT-MI) combines the favourable contrast of electromagnetic tomography with the good spatial resolution of sonography. In this paper, a finite element model of the MAT-MI forward problem is presented. The reconstruction of the Lorentz force distribution is performed with the help of a time reversal algorithm.

Authors and Affiliations

Adam Ryszard Żywica

Abstract

This paper proposes a new approach to the processing and analysis of medical images. We introduce the term and methodology of medical data understanding as a new step on the path that starts with image processing and is followed by analysis and classification (recognition). The position of this new technology of machine perception and image understanding is presented in the context of the better-known, classic techniques of image processing, analysis, segmentation and classification.


Authors and Affiliations

R. Tadeusiewicz
M.R. Ogiela

Abstract

At the current stage of diagnostics and therapy, it is necessary to perform a geometric evaluation of facial skull bone structures based on virtually reconstructed objects or on objects replicated by reverse engineering. The objective of this work is an analysis of the imaging precision of cranial bone structures based on spiral computed tomography (CT), in relation to a reference model obtained with laser scanning. The precision of skull reconstruction by 3D printing was evaluated and compared with the real object, the tomography model and the reference model. The investigations performed made it possible to identify the accuracy of CT imaging of cranial bone structures, of the development of 3D models, and of the replication of their shape in printed models. The execution of the project permits the determination of the component uncertainties of the following procedures: CT imaging, development of numerical models and 3D printing of objects, which in turn allows the combined uncertainty in medical applications to be determined.



Authors and Affiliations

Andrzej Ryniewicz 1,2
Wojciech Ryniewicz 3
Stanisław Wyrąbek 1
Łukasz Bojko 4

  1. Cracow University of Technology, Faculty of Mechanical Engineering, Poland.
  2. State University of Applied Science, Nowy Sącz, Poland.
  3. Jagiellonian University Medical College, Faculty of Medicine, Dental Institute, Department of Dental Prosthodontics, Cracow, Poland.
  4. AGH University of Science and Technology, Faculty of Mechanical Engineering and Robotics, Department of Machine Design and Technology, Cracow, Poland.

Abstract

The water’s edge is the most iconic and identifiable image associated with the city of Durban, and in seeking an ‘authenticity’ that typifies the built fabric of the city, the image that this place creates is arguably the answer. Since its formal establishment as a settlement in 1824, this edge has been a primary element in the urban fabric. Development of the space has been fairly incremental over the last two centuries, starting with colonially influenced built interventions, but much of what stands there today dates from the 1930s onwards, producing a Modernist and later Contemporary sense of place moderated by regionalist influences, which lends itself to a somewhat contextually relevant image. This ‘international yet local’ sense of place is, however, under threat from the increasingly prominent ‘global’ image of a-contextual glass high-rise towers set along a non-descript public realm, typical of the global capital interests behind the turnkey project trends by developers from the East currently sweeping the African continent.


Authors and Affiliations

Louis Du Plessis

Abstract

Evaluating image quality is an important problem in image and video processing. Numerous methods have been proposed over the years to automatically evaluate the quality of images in agreement with human quality judgments. The purpose of this work is to present subjective and objective quality assessment methods and their classification. Eleven subjective methods that are widely used and recommended by the International Telecommunication Union (ITU) are compared and described. Thirteen objective methods are briefly presented (MSE, MD, PCC, EPSNR, SSIM, MS-SSIM, FSIM, MAD, VSNR, VQM, NQM, DM, and 3D-GSM). Furthermore, a list of widely used subjective quality datasets is provided.
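As a rough illustration of the simplest full-reference objective metrics named in the abstract, MSE and the closely related PSNR can be sketched in a few lines of NumPy. This is a minimal sketch for intuition, not a reference implementation of any of the thirteen methods the paper surveys:

```python
import numpy as np

def mse(ref, test):
    """Mean squared error between two equally shaped images."""
    ref = np.asarray(ref, dtype=np.float64)
    test = np.asarray(test, dtype=np.float64)
    return float(np.mean((ref - test) ** 2))

def psnr(ref, test, max_val=255.0):
    """Peak signal-to-noise ratio in dB (infinite for identical images)."""
    err = mse(ref, test)
    if err == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / err)
```

Perceptual metrics such as SSIM or FSIM go further by comparing local structure rather than raw pixel differences, which is why they tend to correlate better with the subjective judgments the paper discusses.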


Authors and Affiliations

Sebastian Opozda
Arkadiusz Sochan

Abstract

For brain tumour treatment planning, the diagnoses and predictions made by medical doctors and radiologists depend on medical imaging. Extracting clinically meaningful information from imaging modalities such as computerized tomography (CT), positron emission tomography (PET) and magnetic resonance (MR) scans is at the core of the software and advanced screening used by radiologists. In this paper, a universal framework for two parts of the dose control process – tumour detection and tumour area segmentation from medical images – is introduced. The framework was implemented to detect glioma tumours from CT and PET scans. Two deep learning pre-trained models, VGG19 and VGG19-BN, were investigated and used to fuse the results of CT and PET examinations. Mask R-CNN (region-based convolutional neural network) was used for tumour detection – the output of the model is the bounding box coordinates of each tumour in the image. U-Net was used to perform semantic segmentation – to segment malignant cells and the tumour area. The transfer learning technique was used to increase the accuracy of the models despite the limited dataset, and data augmentation methods were applied to increase the number of training samples. The implemented framework can be used for other use cases that combine object detection and area segmentation of grayscale and RGB images, especially to shape computer-aided diagnosis (CADx) and computer-aided detection (CADe) systems in the healthcare industry that facilitate and assist doctors and medical care providers.
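The data augmentation step the abstract mentions typically generates geometric variants of each training scan. A minimal NumPy sketch of such augmentation, assuming scans are represented as 2D grayscale arrays (the authors' actual pipeline is not specified here), could look like:

```python
import numpy as np

def augment(image):
    """Return simple geometric variants of a 2D grayscale scan:
    the original, horizontal and vertical flips, and three
    90-degree rotations. Each variant keeps the same label,
    multiplying the effective number of training samples."""
    img = np.asarray(image)
    variants = [img, np.fliplr(img), np.flipud(img)]
    variants += [np.rot90(img, k) for k in (1, 2, 3)]
    return variants
```

Frameworks such as PyTorch or Keras offer richer, randomized versions of the same idea (elastic deformations, intensity jitter), which matter when, as here, the dataset is small relative to model capacity.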

Bibliography

  1.  Cancer Research UK Statistics from the 5th of March 2020. [Online]. https://www.cancerresearchuk.org/health-professional/cancer-statistics/statistics-by-cancer-type/brain-other-cns-and-intracranial-tumours/incidence#ref-
  2.  E. Kot, Z. Krawczyk, K. Siwek, and P.S. Czwarnowski, “U-Net and Active Contour Methods for Brain Tumour Segmentation and Visualization,” 2020 International Joint Conference on Neural Networks (IJCNN), Glasgow, United Kingdom, 2020, pp. 1‒7, doi: 10.1109/IJCNN48605.2020.9207572.
  3.  J. Kim, J. Hong, and H. Park, “Prospects of deep learning for medical imaging,” Precis. Future Med. 2(2), 37–52 (2018), doi: 10.23838/pfm.2018.00030.
  4.  E. Kot, Z. Krawczyk, and K. Siwek, “Brain Tumour Detection and Segmentation Using Deep Learning Methods,” in Computational Problems of Electrical Engineering, 2020.
  5.  A.F. Tamimi and M. Juweid, “Epidemiology and Outcome of Glioblastoma,” in: Glioblastoma [Online]. Brisbane (AU): Codon Publications, 2017, doi: 10.15586/codon.glioblastoma.2017.ch8.
  6.  A. Krizhevsky, I. Sutskever, and G.E. Hinton, “ImageNet classification with deep convolutional neural networks,” in: Advances in Neural Information Processing Systems, 2012, p. 1097‒1105.
  7.  M.A. Al-masni, et al., “Detection and classification of the breast abnormalities in digital mammograms via regional Convolutional Neural Network,” 39th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Seogwipo, 2017, pp. 1230‒1233, doi: 10.1109/EMBC.2017.8037053.
  8.  P. Yin, R. Yuan, Y. Cheng, and Q. Wu, “Deep Guidance Network for Biomedical Image Segmentation,” IEEE Access 8, 116106‒116116 (2020), doi: 10.1109/ACCESS.2020.3002835.
  9.  R. Sindhu, G. Jose, S. Shibon, and V. Varun, “Using YOLO based deep learning network for real time detection and localization of lung nodules from low dose CT scans”, Proc. SPIE 10575, Medical Imaging 2018: Computer-Aided Diagnosis, 105751I, 2018, doi: 10.1117/12.2293699.
  10.  R. Ezhilarasi and P. Varalakshmi, “Tumor Detection in the Brain using Faster R-CNN,” 2018 2nd International Conference on I-SMAC (IoT in Social, Mobile, Analytics and Cloud), Palladam, India, 2018, pp. 388‒392, doi: 10.1109/I-SMAC.2018.8653705.
  11.  S. Ren, K. He, R. Girshick, and J. Sun, “Faster R-CNN: Towards real-time object detection with region proposal networks,” in Advances in Neural Information Processing Systems, 2015, pp. 91–99.
  12.  S. Liu, H. Zheng, Y. Feng, and W. Li, “Prostate cancer diagnosis using deep learning with 3D multiparametric MRI,” in Proceedings of Medical Imaging 2017: Computer-Aided Diagnosis, vol. 10134, Bellingham: International Society for Optics and Photonics (SPIE), 2017, p. 1013428.
  13.  M. Gurbină, M. Lascu, and D. Lascu, “Tumor Detection and Classification of MRI Brain Image using Different Wavelet Transforms and Support Vector Machines,” in 2019 42nd International Conference on Telecommunications and Signal Processing (TSP), Budapest, Hungary, 2019, pp. 505‒508, doi: 10.1109/TSP.2019.8769040.
  14.  H. Dong, G. Yang, F. Liu, Y. Mo, and Y. Guo, “Automatic brain tumor detection and segmentation using U-net based fully convolutional networks,” in: Medical image understanding and analysis, pp. 506‒517, eds. Valdes Hernandez M, Gonzalez-Castro V, Cham: Springer, 2017.
  15.  O. Ronneberger, P. Fischer, and T. Brox, “U-Net: Convolutional Networks for Biomedical Image Segmentation,” in: Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015, Lecture Notes in Computer Science, vol 9351, doi: 10.1007/978-3-319-24574-4_28.
  16.  K. Hu, C. Liu, X. Yu, J. Zhang, Y. He, and H. Zhu, “A 2.5D Cancer Segmentation for MRI Images Based on U-Net,” in 2018 5th International Conference on Information Science and Control Engineering (ICISCE), Zhengzhou, 2018, pp. 6‒10, doi: 10.1109/ICISCE.2018.00011.
  17.  H.N.T.K. Kaldera, S.R. Gunasekara, and M.B. Dissanayake, “Brain tumor Classification and Segmentation using Faster R-CNN,” Advances in Science and Engineering Technology International Conferences (ASET), Dubai, United Arab Emirates, 2019, pp. 1‒6, doi: 10.1109/ICASET.2019.8714263.
  18.  B. Stasiak, P. Tarasiuk, I. Michalska, and A. Tomczyk, “Application of convolutional neural networks with anatomical knowledge for brain MRI analysis in MS patients”, Bull. Pol. Acad. Sci. Tech. Sci. 66(6), 857–868 (2018), doi: 10.24425/bpas.2018.125933.
  19.  L. Hui, X. Wu, and J. Kittler, “Infrared and Visible Image Fusion Using a Deep Learning Framework,” 24th International Conference on Pattern Recognition (ICPR), Beijing, 2018, pp. 2705‒2710, doi: 10.1109/ICPR.2018.8546006.
  20.  K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” arXiv preprint arXiv:1409.1556, 2014.
  21.  M. Simon, E. Rodner, and J. Denzler, “ImageNet pre-trained models with batch normalization,” arXiv preprint arXiv:1612.01452, 2016.
  22.  VGG19-BN model implementation. [Online]. https://pytorch.org/vision/stable/_modules/torchvision/models/vgg.html
  23.  D. Jha, M.A. Riegler, D. Johansen, P. Halvorsen, and H.D. Johansen, “DoubleU-Net: A Deep Convolutional Neural Network for Medical Image Segmentation,” 2020 IEEE 33rd International Symposium on Computer-Based Medical Systems (CBMS), Rochester, MN, USA, 2020, pp. 558‒564, doi: 10.1109/CBMS49503.2020.00111.
  24.  Jupyter notebook with fusion code. [Online]. https://github.com/ekote/computer-vision-for-biomedical-images-processing/blob/master/papers/polish_acad_of_scienc_2020_2021/fusion_PET_CT_2020.ipynb
  25.  E. Geremia et al., “Spatial decision forests for MS lesion segmentation in multi-channel magnetic resonance images”, NeuroImage 57(2), 378‒390 (2011).
  26.  D. Anithadevi and K. Perumal, “A hybrid approach based segmentation technique for brain tumor in MRI Images,” Signal Image Process.: Int. J. 7(1), 21‒30 (2016), doi: 10.5121/sipij.2016.7103.
  27.  S. Ioffe and C. Szegedy, “Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift,” arXiv preprint arXiv:1502.03167.
  28.  S. Ren, K. He, R. Girshick, and J. Sun, “Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks,” IEEE Trans. Pattern Anal. Mach. Intell. 39(6), 1137‒1149, (2017), doi: 10.1109/TPAMI.2016.2577031.
  29.  T-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. Lawrence Zitnick, “Microsoft COCO: common objects in context,” in Computer Vision – ECCV 2014, 2014, pp. 740–755.
  30.  Original Mask R-CNN model. [Online]. https://github.com/matterport/Mask_RCNN/releases/tag/v2.0
  31.  Mask R-CNN model. [Online]. https://github.com/ekote/computer-vision-for-biomedical-images-processing/releases/tag/1.0, doi: 10.5281/zenodo.3986798.
  32.  T. Les, T. Markiewicz, S. Osowski, and M. Jesiotr, “Automatic reconstruction of overlapped cells in breast cancer FISH images,” Expert Syst. Appl. 137, 335‒342 (2019), doi: 10.1016/j.eswa.2019.05.031.
  33.  J. Long, E. Shelhamer, and T. Darrell, “Fully convolutional networks for semantic segmentation”, Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR), 2015, pp. 3431‒3440.
  34.  The U-Net architecture adjusted to 64×64 input image size. [Online]. http://bit.ly/unet64x64

Authors and Affiliations

Estera Kot 1
Zuzanna Krawczyk 1
Krzysztof Siwek 1
Leszek Królicki 2
Piotr Czwarnowski 2

  1. Warsaw University of Technology, Faculty of Electrical Engineering, Pl. Politechniki 1, 00-661 Warsaw, Poland
  2. Medical University of Warsaw, Nuclear Medicine Department, ul. Banacha 1A, 02-097 Warsaw, Poland
