Search results


Number of results: 144

Abstract

Super-resolution image reconstruction utilizes two classes of algorithms: one for single-frame image reconstruction and the other for multi-frame image reconstruction. Single-frame image reconstruction generally models the degradation first and then performs reconstruction, which essentially creates a problem of insufficient characterization. Multi-frame images provide additional information for image reconstruction relative to single-frame images due to the slight differences between sequential frames. However, existing super-resolution algorithms for multi-frame images do not take advantage of this key factor, either because of loose structure and complexity, or because the individual frames are restored poorly. This paper proposes a new SR reconstruction algorithm for images using the Multi-grained Cascade Forest. Multi-frame image reconstruction is processed sequentially. First, the image registration algorithm uses a convolutional neural network to register low-resolution image sequences; the registered images are then reconstructed by the Multi-grained Cascade Forest reconstruction algorithm. Finally, the reconstructed images are fused. The optimal algorithm is selected for each step to get the most out of the details and to tightly connect the internal logic of each sequential step. In the novel approach proposed in this paper, the depth of the cascade forest is generated procedurally for the recovered images rather than being constant. After training each layer, the recovered image is automatically evaluated, and new layers are constructed for training until an optimal restored image is obtained. Experiments show that this method improves the quality of image reconstruction while preserving the details of the image.
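The adaptive-depth idea described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: `train_layer` is a hypothetical stand-in for training one cascade-forest layer (here a simple smoothing), and `quality` stands in for the automatic evaluation of the recovered image (here PSNR against a reference).

```python
import numpy as np

def quality(img, ref):
    """Stand-in for automatic evaluation: PSNR against a reference image."""
    mse = np.mean((img.astype(np.float64) - ref.astype(np.float64)) ** 2)
    return np.inf if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)

def train_layer(img):
    """Stand-in for training one cascade-forest layer (here: a mild 5-point smoothing)."""
    out = img.copy().astype(np.float64)
    out[1:-1, 1:-1] = (img[:-2, 1:-1] + img[2:, 1:-1] +
                       img[1:-1, :-2] + img[1:-1, 2:] +
                       img[1:-1, 1:-1]) / 5.0
    return out

def adaptive_depth_restore(lr_img, ref, max_depth=10):
    """Grow the cascade one layer at a time until evaluated quality stops improving."""
    current, best_q = lr_img, quality(lr_img, ref)
    for depth in range(1, max_depth + 1):
        candidate = train_layer(current)
        q = quality(candidate, ref)
        if q <= best_q:          # no further gain: stop adding layers
            return current, depth - 1
        current, best_q = candidate, q
    return current, max_depth
```

The loop keeps adding layers only while the evaluated quality improves, so the cascade depth becomes an output of training rather than a fixed hyperparameter.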


Authors and Affiliations

Yaming Wang
Zhikang Luo
Wenqing Huang

Abstract

Image segmentation is a typical operation in many image analysis and computer vision applications. However, hyperspectral image segmentation is a field which has not been fully investigated. In this study, an analogue-digital image segmentation technique is presented. The system uses an acousto-optic tuneable filter and a CCD camera to capture hyperspectral images that are stored in a digital grey-scale format. The data set was built considering several objects with remarkable differences in the reflectance and brightness components. In addition, the work presents a semi-supervised segmentation technique to deal with the complex problem of hyperspectral image segmentation, together with its quantitative and qualitative evaluation. In particular, the developed acousto-optic system is capable of acquiring 120 frames across the whole visible light spectrum. Moreover, the analysis of the spectral images of a given object enables its segmentation using a simple subtraction operation. Experimental results showed that it is possible to segment any region of interest with a good performance rate by using the proposed analogue-digital segmentation technique.
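The subtraction-based segmentation mentioned above can be illustrated roughly as follows. This is a sketch under the assumption that the target's reflectance differs strongly between two captured spectral frames; the function name and threshold value are illustrative, not the authors' code.

```python
import numpy as np

def subtraction_segment(band_a, band_b, threshold=25):
    """Segment a region whose reflectance differs between two spectral bands.

    band_a, band_b: grey-scale frames of the same scene at two wavelengths.
    Returns a boolean mask where the absolute difference exceeds threshold.
    """
    diff = np.abs(band_a.astype(np.int32) - band_b.astype(np.int32))
    return diff > threshold
```

Since the frames are registered views of the same scene, pixels whose value is stable across wavelengths cancel out, and only spectrally distinctive regions survive the threshold.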


Authors and Affiliations

César Isaza
Julio M. Mosquera
Gustavo A. Gómez-Méndez
Jonny P. Zavala-De Paz
Ely Karina-Anaya
José A. Rizzo-Sierra
Omar Palillero-Sandoval

Abstract

The water’s edge is the most iconic and identifiable image related to the city of Durban and, in seeking an ‘authenticity’ that typifies the built fabric of the city, the image that this place creates is arguably the answer. Since its formal establishment as a settlement in 1824, this edge has been a primary element in the urban fabric. Development of the space has been fairly incremental over the last two centuries, starting with colonially influenced built interventions, but much of what is there currently stems from the 1930s onwards, leading to a Modernist and later Contemporary sense of place that is moderated by regionalist influences, lending itself to creating a somewhat contextually relevant image. This ‘international yet local’ sense of place is, however, under threat from the increasingly prominent ‘global’ image of a-contextual glass high-rise towers placed along a nondescript public realm, typical of the global capital interests that are a hallmark of the turnkey project trends of developers from the East currently sweeping the African continent.


Authors and Affiliations

Louis Du Plessis

Abstract

Evaluating image quality is a very important problem in image and video processing. Numerous methods have been proposed over the past years to automatically evaluate the quality of images in agreement with human quality judgments. The purpose of this work is to present subjective and objective quality assessment methods and their classification. Eleven subjective methods, widely used and recommended by the International Telecommunication Union (ITU), are compared and described. Thirteen objective methods are briefly presented (including MSE, MD, PCC, EPSNR, SSIM, MS-SSIM, FSIM, MAD, VSNR, VQM, NQM, DM, and 3D-GSM). Furthermore, a list of widely used subjective quality data sets is provided.
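A minimal sketch of the full-reference arithmetic underlying the simplest metric in the list, MSE, together with the related PSNR (the basis of variants such as EPSNR):

```python
import numpy as np

def mse(ref, dist):
    """Mean squared error between a reference and a distorted image."""
    return np.mean((ref.astype(np.float64) - dist.astype(np.float64)) ** 2)

def psnr(ref, dist, peak=255.0):
    """Peak signal-to-noise ratio in dB (infinite for identical images)."""
    e = mse(ref, dist)
    return np.inf if e == 0 else 10 * np.log10(peak ** 2 / e)
```

These two are fast but correlate weakly with human judgments, which is why the perceptual metrics in the list (SSIM, MS-SSIM, FSIM, ...) were developed.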


Authors and Affiliations

Sebastian Opozda
Arkadiusz Sochan

Abstract

For brain tumour treatment plans, the diagnoses and predictions made by medical doctors and radiologists are dependent on medical imaging. Obtaining clinically meaningful information from various imaging modalities such as computerized tomography (CT), positron emission tomography (PET) and magnetic resonance (MR) scans is at the core of the software and advanced screening utilized by radiologists. In this paper, a universal, comprehensive framework for two parts of the dose control process – tumour detection and tumour area segmentation from medical images – is introduced. The framework comprises the implementation of methods to detect glioma tumours from CT and PET scans. Two pre-trained deep learning models, VGG19 and VGG19-BN, were investigated and utilized to fuse the results of CT and PET examinations. Mask R-CNN (region-based convolutional neural network) was used for tumour detection; the output of the model is the bounding box coordinates of each tumour in the image. U-Net was used to perform semantic segmentation, i.e. to segment the malignant cells and the tumour area. The transfer learning technique was used to increase the accuracy of the models given a limited dataset, and data augmentation methods were applied to generate additional training samples. The implemented framework can be utilized for other use cases that combine object detection and area segmentation from grayscale and RGB images, especially to shape computer-aided diagnosis (CADx) and computer-aided detection (CADe) systems in the healthcare industry that facilitate and assist doctors and medical care providers.
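The paper fuses CT and PET with pre-trained VGG19/VGG19-BN features; as a much simpler stand-in for that step, registered slices can be fused pixel-wise. The function and weighting below are illustrative assumptions only, not the authors' method.

```python
import numpy as np

def fuse_weighted(ct, pet, alpha=0.6):
    """Naive pixel-wise fusion of registered CT and PET slices.

    alpha weights the CT structural detail against the PET activity map.
    Both inputs are expected in [0, 255]; output is uint8.
    """
    fused = alpha * ct.astype(np.float64) + (1 - alpha) * pet.astype(np.float64)
    return np.clip(fused, 0, 255).astype(np.uint8)
```

A learned fusion, as in the paper, replaces the fixed weight with features extracted by a pre-trained network, so that structural (CT) and metabolic (PET) detail are combined adaptively rather than uniformly.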


Authors and Affiliations

Estera Kot 1
Zuzanna Krawczyk 1
Krzysztof Siwek 1
Leszek Królicki 2
Piotr Czwarnowski 2

  1. Warsaw University of Technology, Faculty of Electrical Engineering, Pl. Politechniki 1, 00-661 Warsaw, Poland
  2. Medical University of Warsaw, Nuclear Medicine Department, ul. Banacha 1A, 02-097 Warsaw, Poland

Abstract

This paper takes a look at the state-of-the-art solutions in the field of spectral imaging systems by way of application examples. It compares currently used systems and discusses the challenges they face, especially in high-altitude and satellite imaging. Based on our own experience, an example of hyperspectral data processing is presented. The article also discusses how modern algorithms can help in understanding the data that such images can provide.

Authors and Affiliations

Jędrzej Kowalewski 1 2
Jarosław Domaradzki 2
Michał Zięba 1
Mikołaj Podgórski 1 2

  1. Scanway, Dunska 9, 54-427 Wrocław, Poland
  2. Wrocław University of Science and Technology, Faculty of Electronics, Photonics and Microsystems, Janiszewskiego 11/17, 50-372 Wrocław, Poland

Abstract

Thermal imagers often work in extreme conditions but are typically tested under laboratory conditions. This paper presents the concept, design rules, experimental verification, and example applications of a new system able to measure the performance parameters of thermal imagers working under precisely simulated real working conditions. High simulation accuracy has been achieved by enabling regulation of the two critical parameters that define the working conditions of thermal imagers: the imager ambient temperature and the background temperature of the target of interest. The use of the new test system in the evaluation of surveillance thermal imagers can bring about a revolution in thermal imaging metrology by allowing thermal imagers to be evaluated under simulated, real working conditions.

Authors and Affiliations

Krzysztof Chrzanowski 1 2

  1.   Institute of Optoelectronics, Military University of Technology, gen. Sylwestra Kaliskiego 2, 00-908 Warsaw, Poland
  2. INFRAMET, Bugaj 29a, Koczargi Nowe, 05-082 Stare Babice, Poland

Abstract

Thermal-imaging systems respond to infrared radiation that is naturally emitted by objects. Various multispectral and hyperspectral devices are available for measuring radiation in discrete sub-bands, thus enabling the detection of differences in spectral emissivity or transmission. For example, such devices can be used to detect hazardous gases. However, their operation principle treats radiation as a scalar property; consequently, all the vector properties of radiation, such as polarization, are neglected. Analysing radiation in terms of the polarization state and its spatial distribution across a scene can provide additional information regarding the imaged objects. Various methods can be used to extract polarimetric information from an observed scene. We briefly review architectures of polarimetric imagers used in different wavebands. First, the state-of-the-art polarimeters are presented, and then a classification of polarimetric measurement devices is described in detail. Additionally, the data processing in Stokes polarimeters is given, with emphasis on the methods for obtaining the Stokes parameters. Some predictions concerning LWIR polarimeters are presented in the conclusion.
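In the Stokes polarimeters reviewed here, the linear Stokes parameters follow from intensity images recorded behind a polarizer at four orientations (0°, 45°, 90°, 135°). The textbook relations can be sketched as below; this is a generic illustration, not tied to any specific device in the review.

```python
import numpy as np

def linear_stokes(i0, i45, i90, i135):
    """Linear Stokes parameters and degree of linear polarization (DoLP)
    from four intensity images measured behind a rotating polarizer."""
    i0, i45, i90, i135 = (np.asarray(a, dtype=np.float64)
                          for a in (i0, i45, i90, i135))
    s0 = 0.5 * (i0 + i45 + i90 + i135)   # total intensity
    s1 = i0 - i90                         # horizontal vs. vertical component
    s2 = i45 - i135                       # +45 deg vs. -45 deg component
    dolp = np.sqrt(s1 ** 2 + s2 ** 2) / np.maximum(s0, 1e-12)
    return s0, s1, s2, dolp
```

Division-of-time polarimeters acquire the four images sequentially with one rotating element, while division-of-aperture and division-of-focal-plane designs acquire them simultaneously at the cost of spatial resolution.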


Authors and Affiliations

Grzegorz Bieszczad 1
Sławomir Gogler 1
Jacek Świderski 1

  1. Institute of Optoelectronics, Military University of Technology, 2 gen. S. Kaliskiego St., 00-908 Warsaw, Poland

Abstract

In Nantes, the last shipyard closed in 1986, leaving the city in a desperate situation. The cranes symbolizing its industrial activity stopped one by one. Unemployment struck. The choice was between turning the page – tearing down the workshops and reinventing a new story – and trying to preserve what would appear to most of the population a kind of bulky modern legacy. In the early 2000s, the revitalization of Nantes’ former industrial area led to a new way of thinking. Instead of designing an urban map with major spots and rows of housing, A. Chemetoff thought it better to draw an urban landscape where the past could mix with the future. The industrial heritage has since been preserved in two different ways: the construction halls have been reshaped while preserving the original structure, so that everything can be reversed, and the intangible heritage – the workers’ knowledge – has been reinvested in the cultural industry. In this way, the image of the city, its brand, moved from industrial to cultural, attracting a new kind of business (mainly high-tech) and students, in a new “art de vivre” (art of living).


Authors and Affiliations

Laurent Lescop

Abstract

The aim of the article is to assess the current image of Poland as a tourist destination from the point of view of Russians. To achieve the assumed goals, surveys were carried out. Basic statistical indicators such as the mean, standard deviation and Pearson similarity index, as well as graphic methods, were also used in the study. Russian citizens did not fully perceive Poland as a country attractive for tourists, either for themselves or for other European tourists. Their opinion in this regard was more critical than that of representatives of other nations.


Authors and Affiliations

Wioletta Kamińska
Mirosław Mularczyk

Abstract

Words and images of the Republic: Italian political propaganda (1946–1948). The article highlights how the transition from monarchy to republic represents a significant boundary in Italian history, not only from the institutional point of view but also from that of national political propaganda, in which words and images – the expression of a harsh ideological confrontation – contributed to the building of a national collective memory that still leaves evident and deep-rooted traces in current political confrontation.


Authors and Affiliations

Fabio Caffarena

Abstract

The authors report the characteristics of a diffraction-grating-free mid-wavelength infrared InP/In0.85Ga0.15As quantum well infrared photodetector (QWIP) focal plane array with a 640 × 512 format and a 15 µm pitch. Combining the normal-incidence radiation sensing ability of the high-x InxGa1−xAs quantum wells with the large gain of the InP barriers led to a diffraction-grating-free QWIP focal plane array whose characteristics display great promise for keeping the QWIP a robust member of the new generation of thermal imaging sensors. The focal plane array exhibited excellent uniformity, with a noise equivalent temperature difference (NETD) nonuniformity as low as 10% and a mean NETD below 20 mK with f/2 optics at 78 K in the absence of a grating. Elimination of the diffraction grating and a sufficiently large conversion efficiency (as high as 70% at a −3.5 V bias voltage) remove the bottlenecks of QWIP technology for new-generation, very small-pitch focal plane arrays.
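NETD nonuniformity figures such as the 10% quoted above are commonly defined as the standard deviation of the per-pixel NETD over the array divided by its mean. The sketch below implements that conventional definition; the authors' exact procedure may differ.

```python
import numpy as np

def netd_nonuniformity(netd_map):
    """Nonuniformity of a per-pixel NETD map: std/mean, in percent."""
    netd = np.asarray(netd_map, dtype=np.float64)
    return 100.0 * netd.std() / netd.mean()
```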

Authors and Affiliations

Cengiz Besikci 1 2
Saadettin V. Balcı 1
Onur Tanış 2
Oğuz O. Güngör 2
Esra S. Arpaguş 2

  1. Micro and Nanotechnology Program, Graduate School of Natural and Applied Sciences, Middle East Technical University, Dumlupınar Bulvarı 1, 06800 Ankara, Turkey
  2. Electrical and Electronics Engineering Department, Middle East Technical University, Dumlupınar Bulvarı 1, 06800 Ankara, Turkey

Abstract

The paper refers to a specific functional area whose identity was primarily based on the relationship with water – major port cities, as well as related smaller settlements. It describes the phenomenon of using and reinterpreting the potential of a rich hydrographic network for constructing contemporary spatial identity after the violent events of the 20th century. The case studies cited are differentiated by the specificity of the activities and the purpose of their implementation.


Authors and Affiliations

Anna Golędzinowska

Abstract

Landscape is an object of perception, while its image is the sum of ideas about this object. The two terms used in the title of the paper have a fairly strong impact on each other. In order to manage a city’s image well, it is necessary to take care of the landscape in all its areas, especially the “forgotten” and degraded ones. The aim of the author was to identify elements of landscape exposure along railway lines – areas with low aesthetic value in many cities around the world. The research area includes railway lines in Cracow and Wrocław. The method adopted for the study is the analysis of mental maps made in 2018 during field workshops. The paper ends with conclusions on the landscape’s impact on the image of the city.


Authors and Affiliations

Piotr Węgrzynowicz

Abstract

Modern cities are increasingly promoting their own individual brands to gain a competitive advantage. Twenty-eight Polish cities, having joined the Cittaslow international network of cities, can additionally use the network’s brand in their activities. The aim of the author was to answer the question: should cities use only an individual brand, or can they support these activities with a common brand strategy? The growth of interest in the individual brands of the 28 cities belonging to the Cittaslow network was evaluated; their popularity, the popularity of the Cittaslow brand on Facebook, and the use of the Cittaslow brand on the cities’ websites were also analysed. It was noticed that not all cities use the Cittaslow logo, but most member cities publish a link to the network and brand information on their websites. In Poland, the Cittaslow brand is at the positioning stage, but its popularity will probably grow as the benefits of using it begin to be noticed.


Authors and Affiliations

Agnieszka Stanowicka

Abstract

The aim of this study was to investigate four sources of implied motion in static images: a moving object, the hand movements of the image creator, the past experiences of the observer, and the fictive movement of a point across an image. In the experiment, participants orally described 16 static images that appeared on a computer screen. The aim was to find out whether participants used any motion-related word to describe each image; using such a word was taken as an indication that the image had created a sense of motion for the observer. The results indicated that all four types of implied motion could create a significant sense of motion for the observer. Based on these results, it is suggested that observing these images could lead to simulating the actions involved in those motion events and to the activation of the motor system. Finally, it is proposed that the three characteristics of being rule-based (clearly defined), continuous, and gradual are critical in perceiving an image as fictive motion.
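The coding step described in the abstract – checking whether an oral description contains any motion-related word – can be sketched as follows. The word list is a hypothetical stand-in for whatever lexicon the study actually used.

```python
# Hypothetical motion lexicon; the study's actual list is not given here.
MOTION_WORDS = {"move", "moving", "runs", "running", "flies",
                "flying", "falls", "falling", "rolls", "rolling"}

def mentions_motion(description):
    """True if any motion-related word occurs in an oral description."""
    tokens = description.lower().replace(",", " ").replace(".", " ").split()
    return any(t in MOTION_WORDS for t in tokens)
```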


Authors and Affiliations

Omid Khatin-Zadeh
1

  1. School of Foreign Languages, University of Electronic Science and Technology of China, Chengdu, China

Abstract

The domain of motion events is widely used to metaphorically describe abstract concepts, particularly emotional states. This article intends to answer the question of why motion events are effective for describing abstract concepts. The literature suggests several reasons for the suitability of motion events for describing these concepts, such as the high concreteness of motion events, their high imageability, and the comprehender's ability to imagine the components of motion events simultaneously. This article suggests that motion events are particularly effective for the metaphorical description of domains characterized by dynamic change over a period of time, which is particularly the case with emotional states. Since changes in emotions take place over a period of time, they can best be described by motion events, which share this feature. In other words, the continuous change in emotions is understood in terms of the continuous change in the location of a moving object in 3D space. Based on the arguments of embodied theories of cognition, it would be no surprise to see similar areas of the brain involved in understanding both emotions and motions.


Authors and Affiliations

Omid Khatin-Zadeh
Zahra Eskandari
Sedigheh Vahdat
Hassan Banaruee

Abstract

Image analysis consists in extracting, from the information available to the observer, the part that is important from the perspective of the investigated process. This is usually accompanied by a considerable reduction in the amount of information carried by the image. In the field of two-phase flows, computer image analysis can be used to determine flow and geometric parameters of flow patterns. This article presents the possibilities of using this method to determine the void fraction, vapor quality, bubble velocity, and the geometric dimensions of flow patterns. The use of computer image analysis is illustrated by the example of condensation of the HFE 7100 refrigerant (methoxynonafluorobutane) in a glass tubular minichannel. A high-speed video camera was used for the study, and the films and individual frames recorded during the study were analysed.
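As an illustration of the kind of per-frame computation involved, the sketch below estimates a void fraction by thresholding a toy grayscale frame into vapor and liquid phases and taking the vapor-pixel ratio. The pixel values and the fixed threshold are invented for illustration and are not data or the exact procedure from the study; in practice the threshold might be chosen by inspection or by an automatic method such as Otsu's.

```python
import numpy as np

# Toy grayscale frame: bright pixels = vapor, dark pixels = liquid.
# In practice each frame would come from the high-speed video camera.
frame = np.array([
    [ 30,  40, 200, 210],
    [ 35, 190, 220, 215],
    [ 25,  45, 180,  50],
    [ 20,  30,  40,  35],
], dtype=np.uint8)

threshold = 128
vapor_mask = frame > threshold        # binary mask of the vapor phase
void_fraction = vapor_mask.mean()     # vapor pixels / total pixels

print(void_fraction)  # 6 vapor pixels out of 16 -> 0.375
```

The same mask could then feed the other quantities mentioned above, e.g. tracking the mask between frames to estimate bubble velocity.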

Go to article

Authors and Affiliations

Małgorzata Sikora
Tadeusz Bohdal

Authors and Affiliations

Piotr Karwat
1

  1. Department of Ultrasound, Institute of Fundamental Technological Research, Polish Academy of Sciences, Warsaw, Poland

Abstract

The world population, and thus the need for food, is increasing every day. This leads to the ultimate question of how to increase food production with limited time and scarce land. Another obstacle to meeting food demand is the stresses a plant goes through. These may be abiotic or biotic, but the majority are biotic, i.e., plant diseases. The major challenge is to mitigate plant diseases efficiently, more quickly, and with less manpower. Recently, artificial intelligence has opened new frontiers in smart agricultural science. One novel approach in plant science is to detect and diagnose plant disease through deep learning and hyperspectral imaging. This smart technique is very advantageous for monitoring large acreages of fields where the availability of manpower is a major drawback. Early identification of plant diseases can be achieved through machine learning approaches. Advanced machine learning not only detects diseases but also helps to discover gene regulatory networks, select genomic sequences for developing resistance in crop species, and mark pathogen effectors. In this review, new advances in plant science achieved through machine learning approaches are discussed.

Authors and Affiliations

Siddhartha Das
1
Sudeepta Pattanayak
2
Prateek Ranjan Behera
3

  1. Department of Plant Pathology, M.S. Swaminathan School of Agriculture, Centurion University of Technology and Management, Paralakhemundi, Odisha, India
  2. Division of Plant Pathology, ICAR – Indian Agricultural Research Institute, New Delhi, India
  3. Department of Plant Pathology, College of Agriculture, Odisha University of Agriculture and Technology, Bhubaneswar, India

Abstract

In modern medicine, raster image analysis systems are becoming more widespread; they allow the automation of diagnosis based on the results of instrumental monitoring of a patient. One of the most important stages of such an analysis is the detection of the mask of the object to be recognized in the image. It is shown that, given the multivariate and multifactorial nature of medical image analysis, neural network tools are the most promising for extracting masks. It has also been determined that the known detection tools are highly specialized and insufficiently adapted to variable conditions of use, which necessitates the construction of an effective neural network model adapted to mask detection in medical images. An approach is proposed for determining the most effective type of neural network model, which provides for expert evaluation of the effectiveness of acceptable model types and computer experiments to make a final decision. It is shown that the Intersection over Union and Dice Loss metrics can be used to evaluate the effectiveness of a neural network model. The proposed solutions were verified by isolating the brachial plexus of nerve fibers in grayscale images from the public Ultrasound Nerve Segmentation database. The expediency of using the U-Net, YOLOv4, and PSPNet neural network models was determined by expert evaluation, and computer experiments proved that U-Net is the most effective in terms of Intersection over Union and Dice Loss, providing a detection accuracy of about 0.89. The analysis of the experimental results also showed the need to improve the mathematical apparatus used to calculate the mask detection indicators.
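For readers unfamiliar with the two metrics named above, the sketch below shows the standard definitions of Intersection over Union and the Dice coefficient (Dice Loss is 1 minus the Dice coefficient) for binary masks. The toy masks are invented for illustration and are not data from the paper.

```python
import numpy as np

def iou(pred: np.ndarray, truth: np.ndarray) -> float:
    """Intersection over Union of two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return float(inter / union) if union else 1.0

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice coefficient; the corresponding Dice Loss is 1 - dice."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    return float(2 * inter / total) if total else 1.0

# Toy 4x4 masks: the predicted mask covers 2 of the 3 ground-truth pixels.
truth = np.zeros((4, 4), dtype=bool)
truth[1, 1:4] = True   # 3 true pixels
pred = np.zeros((4, 4), dtype=bool)
pred[1, 1:3] = True    # 2 predicted pixels, both correct

print(round(iou(pred, truth), 3))   # 2/3 -> 0.667
print(round(dice(pred, truth), 3))  # 4/5 -> 0.8
```

Note that Dice weights the overlap against the mask sizes rather than the union, so for the same prediction it is never lower than IoU.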

Authors and Affiliations

I. Tereikovskyi
1
Oleksandr Korchenko
S. Bushuyev
2
O. Tereikovskyi
3
Ruslana Ziubina
Olga Veselska

  1. Department of System Programming and Specialised Computer Systems of the National Technical University of Ukraine, Igor Sikorsky Kyiv Polytechnic Institute, Ukraine
  2. Department of Project Management Kyiv National University of Construction and Architecture, Ukraine
  3. Department of Information Technology Security of National Aviation University, Kyiv, Ukraine

Abstract

This paper reviews biometric person identification using features from retinal fundus images. Retina recognition is claimed to be the best person identification method among biometric recognition systems, as the retina is practically impossible to forge; it is found to be the most stable, reliable, and secure of all biometric systems, inheriting the properties of uniqueness and stability. The features used in the recognition process are either blood vessel features or non-blood vessel features, but the vascular pattern is the feature most often utilized by researchers for retina-based person identification. The processes involved in this authentication system include pre-processing, feature extraction, and feature matching. Bifurcation and crossover points are the most widely used blood vessel features; non-blood vessel features include luminance, contrast, and corner points. This paper summarizes and compares the different retina-based authentication systems. Researchers have used publicly available databases such as DRIVE, STARE, VARIA, RIDB, ARIA, AFIO, DRIDB, and SiMES for testing their methods. Quantitative measures such as accuracy, recognition rate, false rejection rate, false acceptance rate, and equal error rate are used to evaluate the performance of the different algorithms. The DRIVE database yields 100% recognition for most of the methods; for the rest of the databases, recognition accuracy is above 90%.
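The error rates named above have standard definitions: the false rejection rate (FRR) is the fraction of genuine comparisons rejected at a given score threshold, the false acceptance rate (FAR) is the fraction of impostor comparisons accepted, and the equal error rate (EER) is the operating point where the two coincide. The sketch below computes them on a handful of invented matching scores; the scores and threshold grid are purely illustrative, not data from any of the surveyed systems.

```python
import numpy as np

def far_frr(genuine, impostor, threshold):
    """FAR and FRR at one threshold; scores >= threshold are accepted."""
    genuine, impostor = np.asarray(genuine), np.asarray(impostor)
    frr = float(np.mean(genuine < threshold))    # genuine users rejected
    far = float(np.mean(impostor >= threshold))  # impostors accepted
    return far, frr

# Toy similarity scores (higher = more similar), invented for illustration.
genuine  = [0.90, 0.80, 0.70, 0.55, 0.45]
impostor = [0.60, 0.50, 0.35, 0.30, 0.20]

# Sweep thresholds; the EER is taken where |FAR - FRR| is smallest.
thresholds = np.linspace(0.0, 1.0, 101)
rates = [far_frr(genuine, impostor, t) for t in thresholds]
eer = min((abs(far - frr), (far + frr) / 2) for far, frr in rates)[1]

print(eer)  # FAR = FRR = 0.2 in the crossing region -> EER 0.2
```

A lower EER means a better separation of genuine and impostor score distributions, which is why it is a common single-number summary in the compared studies.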


Authors and Affiliations

Poonguzhali Elangovan
Malaya Kumar Nath

Abstract

This paper presents a study on the influence of psychophysical stimuli on facial thermal emissions. Two independent groups of stimuli are proposed to investigate facial changes resulting from human stress and physical exhaustion: one pertains to physical effort, while the other is linked to stress invoked by solving a simple written test. Subjects' facial reactions were measured by collecting and analysing long-wavelength infrared images. A methodology for the numerical processing of the images is proposed, and results of the numerical analysis for different facial regions of interest are provided. An automatic deep-learning-based algorithm for classifying specific thermal face patterns is proposed; it consists of the detection of regions of interest and the numerical analysis of the thermal energy emissions of facial parts. The results of the presented experiments allowed the authors to associate emission changes in specific facial regions with psychophysical stimulation of the person being examined. This work demonstrates the high usability of thermal imaging for capturing changes in facial heat distribution as reactions to external stimuli.
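The core per-frame statistic in this kind of analysis is an aggregate of the thermal signal inside each facial region of interest. The sketch below computes per-ROI mean temperatures on a toy infrared frame; the frame values and ROI coordinates are invented for illustration (in practice the ROIs would come from the face detection step, not be hard-coded).

```python
import numpy as np

# Toy 6x6 long-wavelength IR frame expressed as temperatures in deg C.
frame = np.full((6, 6), 34.0)
frame[0:2, :] = 35.0      # warmer forehead band
frame[3:5, 2:4] = 33.0    # cooler nose region

# Hypothetical regions of interest as (row slice, column slice) pairs.
rois = {
    "forehead": (slice(0, 2), slice(0, 6)),
    "nose":     (slice(3, 5), slice(2, 4)),
}

# Mean emission per ROI: the quantity tracked over time and compared
# between stimulus conditions (rest vs. effort, rest vs. stress).
roi_means = {name: float(frame[r].mean()) for name, r in rois.items()}
print(roi_means)
```

Comparing such per-ROI series before and after a stimulus is what allows emission changes to be attributed to specific facial regions.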


Authors and Affiliations

Jarosław Panasiuk
Piotr Prusaczyk
Artur Grudzień
Marcin Kowalski

Abstract

Glucose concentration measurement is essential for the diagnosis, monitoring, and treatment of various medical conditions such as diabetes mellitus and hypoglycemia. This paper presents a novel image-processing and machine learning based approach to glucose concentration measurement. Experimentation based on the glucose oxidase - peroxidase (GOD/POD) method was performed to create the database. Glucose in the sample reacts with the reagent, and the glucose concentration is detected using the colorimetric principle: the colour intensity produced is proportional to the glucose concentration and varies with its level. Existing clinical chemistry analyzers use spectrophotometry to estimate the glucose level of a sample; the developed system instead uses a simplified hardware arrangement and estimates the glucose concentration by capturing an image of the sample. Saturation (S) and luminance (Y) values are then extracted from the captured image. A linear regression based machine learning algorithm is trained on a dataset consisting of the saturation and luminance values of images at different concentration levels. The integration of machine learning provides improved accuracy and predictability in determining the glucose level. Detection of glucose concentrations in the range of 10-400 mg/dl was evaluated, and the results of the developed system were verified against the currently used spectrophotometry based Trace40 clinical chemistry analyzer. The deviation of the estimated values from the actual values was found to be around 2-3%.
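The regression step described above can be sketched as a least-squares fit of concentration against the (S, Y) image features. All numbers below are synthetic and exactly linear so the fit is easy to check; they are not calibration data from the paper, and the paper's actual training procedure may differ.

```python
import numpy as np

# Hypothetical calibration set: saturation (S) and luminance (Y) values
# extracted from sample images at known glucose concentrations (mg/dl).
S = np.array([0.12, 0.30, 0.45, 0.60, 0.78, 0.90])
Y = np.array([0.88, 0.75, 0.66, 0.52, 0.41, 0.33])
glucose = 350.0 * S - 120.0 * Y + 100.0   # synthetic ground truth

# Least-squares fit of glucose = a*S + b*Y + c.
X = np.column_stack([S, Y, np.ones_like(S)])
(a, b, c), *_ = np.linalg.lstsq(X, glucose, rcond=None)

def predict(s, y):
    """Estimate glucose (mg/dl) from a new image's S and Y values."""
    return a * s + b * y + c

print(round(predict(0.50, 0.65), 1))  # 350*0.50 - 120*0.65 + 100 = 197.0
```

With real measurements the fit would not be exact, and the 2-3% deviation reported above would show up as residual error of the regression.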

Authors and Affiliations

Angel Thomas
1
Sangeeta Palekar
1
Jayu Kalambe
1

  1. Shri Ramdeobaba College of Engineering & Management, India
