This paper presents a deep learning-based image texture recognition system. The methodology is organized in a bottom-up manner: a moving window is slid across the image to determine whether a given region belongs to one of the classes seen during training. This categorization is performed by a Deep Neural Network (DNN) with a fixed architecture. The training process is fully automated with respect to training data preparation, investigation of the best training algorithm, and its hyper-parameters. The only human input to the system is the definition of the categories for recognition and the generation of the samples (region markings) in an external application chosen by the user. The system is tested on road surface images, where its task is to assign image regions to one of several road categories (e.g. curb, road surface damage, etc.), and it achieves accuracy of 90% and above.
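The bottom-up scan described above can be sketched as a generator of window positions; the window size and stride below are hypothetical illustration values, not parameters taken from the paper.

```python
def sliding_windows(height, width, win, stride):
    """Yield (row, col) top-left corners of a square window slid over
    an image of the given size.

    A minimal sketch of the bottom-up region scan; each extracted
    region would then be passed to the fixed-architecture DNN to
    decide which trained category (curb, surface damage, ...) it
    belongs to.  `win` and `stride` are assumed parameters.
    """
    for r in range(0, height - win + 1, stride):
        for c in range(0, width - win + 1, stride):
            yield (r, c)

# For an 8x8 image, a 4-pixel window with stride 2 visits a 3x3 grid
# of positions, i.e. 9 candidate regions to classify.
regions = list(sliding_windows(8, 8, 4, 2))
```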
Skin cancer is the most common form of cancer affecting humans, and melanoma is its most dangerous type; early diagnosis is vital for curing the disease. Human diagnostic capacity in this field remains limited, so developing a mechanism capable of identifying the disease early can save lives, reduce intervention, and cut unnecessary costs. In this paper, the researchers developed a new learning technique to classify skin lesions, with the purpose of observing and identifying the presence of melanoma. The technique is based on a convolutional neural network with multiple configurations, trained on an International Skin Imaging Collaboration (ISIC) dataset. Optimal results are achieved by a convolutional neural network composed of 14 layers. The proposed system can reliably predict the correct classification of dermoscopic lesions with 97.78% accuracy.
In recent years, deep learning and especially deep neural networks (DNN) have obtained amazing performance on a variety of problems, in particular in classification and pattern recognition. Among the many kinds of DNNs, convolutional neural networks (CNN) are the most commonly used. However, due to their complexity, there are many problems related but not limited to optimizing network parameters, avoiding overfitting, and ensuring good generalization abilities. Therefore, a number of methods have been proposed by researchers to deal with these problems. In this paper, we present the results of applying different, recently developed methods to improve deep neural network training and operation. We decided to focus on the most popular CNN structures, namely VGG-based neural networks: VGG16, VGG11, and our proposed VGG8. The tests were conducted on a real and very important problem of skin cancer detection. A publicly available dataset of skin lesions was used as a benchmark. We analyzed the influence of applying dropout, batch normalization, model ensembling, and transfer learning. We also examined the influence of the activation function type. In order to increase the objectivity of the results, each of the tested models was trained 6 times and their results were averaged. In addition, in order to mitigate the impact of the selection of training, test, and validation sets, k-fold validation was applied.
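The k-fold validation step mentioned above partitions the data so that every sample serves once as validation data. A minimal index-level sketch, not the authors' actual implementation, might look like:

```python
def k_fold_indices(n_samples, k):
    """Split sample indices 0..n_samples-1 into k disjoint validation
    folds, returning (train_indices, val_indices) pairs.

    A minimal sketch of k-fold validation used to mitigate the impact
    of a single train/validation/test split; in practice a library
    routine (e.g. scikit-learn's KFold) would be used instead.
    """
    folds = []
    base, extra = divmod(n_samples, k)
    start = 0
    for i in range(k):
        # Earlier folds absorb the remainder so all samples are covered.
        size = base + (1 if i < extra else 0)
        val = list(range(start, start + size))
        train = [j for j in range(n_samples) if j < start or j >= start + size]
        folds.append((train, val))
        start += size
    return folds

# Metrics averaged over the k folds (and over the 6 repeated trainings
# per model) reduce the variance of any single split.
folds = k_fold_indices(10, 5)
```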
Pathologists follow a systematic and partially manual process to obtain histological tissue sections from the biological tissue extracted from patients. This process is far from perfect and can introduce quality defects into the tissue sections (distortions, deformations, folds, and tissue breaks). In this paper, we propose a deep learning (DL) method for the detection and segmentation of these damaged regions in whole slide images (WSIs). The proposed technique is based on convolutional neural networks (CNNs) and uses the U-net model to achieve pixel-wise segmentation of these unwanted regions. The experiments show that this technique yields satisfactory results and can be applied as a pre-processing step for automatic WSI analysis, preventing the use of damaged areas in the evaluation processes.
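Pixel-wise segmentation output is typically compared to a ground-truth mask with an overlap metric such as the Dice coefficient; the abstract does not name its metric, so the following is a hedged illustration of how predicted damage masks could be scored.

```python
def dice_score(pred, target):
    """Dice similarity between two binary masks given as flat lists
    of 0/1 pixel labels.

    A standard pixel-wise segmentation metric, shown here only as an
    illustration of evaluating U-net damage masks; the paper does not
    specify this exact metric.
    """
    inter = sum(p & t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    # Two empty masks agree perfectly by convention.
    return 1.0 if total == 0 else 2.0 * inter / total
```

For example, a prediction that marks two pixels as damaged when the ground truth marks one of them scores 2*1/(2+1) = 2/3.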
In the last few years, great attention has been paid to deep learning techniques for image analysis because of their ability to transform input data into high-level representations. For the sake of accurate diagnosis, the medical field has a steadily growing interest in such technology, especially in the diagnosis of melanoma. These deep learning networks operate through coarse segmentation, convolutional filters, and pooling layers. However, this segmentation of skin lesions results in an image of lower resolution than the original skin image. In this paper, we present deep learning-based approaches to solve the problems of skin lesion analysis using dermoscopic images containing skin tumors. The proposed models are trained and evaluated on standard benchmark datasets from the International Skin Imaging Collaboration (ISIC) 2018 Challenge. The proposed method achieves an accuracy of 96.67% on the validation set. Experimental tests carried out on a clinical dataset show that classification using deep learning-based features outperforms the state-of-the-art techniques.
Convolutional Neural Networks (CNN) have achieved huge popularity in solving problems in image analysis and text recognition. In this work, we assess the effectiveness of CNN-based architectures in which a network is trained to recognize handwritten characters of the Latin script. European languages such as Dutch, French, German, etc., use different variants of the Latin script, so in the conducted research the Latin alphabet was extended by certain characters with diacritics used in the Polish language. To evaluate the recognition results under the same conditions, a handwritten Latin dataset was also developed. The proposed CNN architecture produced an accuracy of 96% on the extended character set, comparable to state-of-the-art results in handwritten character identification. The presented approach extends CNN-based recognition to different variants of Latin characters and shows that it can be successfully used for a set of languages written using that script.
In industrial drive systems, induction motors constitute one of the largest groups of machines. During normal operation, these machines are exposed to various types of damage, resulting in high economic losses. Electrical circuit faults account for more than half of all faults occurring in induction motors. Consequently, early detection of machine defects has become a priority in modern drive systems. The article presents the possibility of using deep neural networks to detect stator and rotor damage. The detection of shorted turns and broken rotor bars with the use of an axial flux signal is demonstrated.