Abstract: The management of COVID-19 patients based on chest imaging is emerging as an essential tool for evaluating the spread of the pandemic that has gripped the global community. It has already been used to monitor the respiratory status of COVID-19 patients. Owing to the rapid spread of the pandemic across all continents and communities, there has been an increase in the use of chest imaging for the medical triage of patients showing moderate to severe clinical features of COVID-19. This article demonstrates the development of machine learning techniques for testing COVID-19 patients using Chest X-Ray (CXR) images in near real-time, distinguishing COVID-19 infection with a significantly high level of accuracy. The testing covered a combination of CXR datasets comprising positive COVID-19 patients, patients with viral and bacterial infections, and people with clear chests. The proposed AI scheme successfully distinguishes CXR scans of COVID-19-infected patients from CXR scans of viral and bacterial pneumonia as well as normal cases, with an average accuracy of 94.43%, sensitivity of 95%, and specificity of 93.86%. Predicted decisions are supported by visual evidence to help clinicians speed up the initial assessment of new suspected cases, especially in resource-constrained environments.
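The accuracy, sensitivity, and specificity figures quoted above follow from a confusion matrix in the usual way. The sketch below shows the arithmetic; the counts used here are hypothetical and chosen only to match the reported sensitivity and specificity approximately, since the abstract does not give the study's actual confusion matrix.

```python
def binary_metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity and specificity from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)   # true positive rate
    specificity = tn / (tn + fp)   # true negative rate
    return accuracy, sensitivity, specificity

# Hypothetical counts for a 200-scan test set (not the paper's data).
acc, sens, spec = binary_metrics(tp=95, fp=6, tn=94, fn=5)
```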
Abstract: With the rapid development of modern communication, real-time diagnosis of fiber-optic quality and faults has attracted wide attention. In this paper, a LabVIEW-based system is proposed for fiber-optic fault detection. A wavelet threshold denoising method combined with Empirical Mode Decomposition (EMD) is applied to denoise the optical time-domain reflectometer (OTDR) signal. A method based on the Gabor representation is then used to detect events. Experimental measurements show that the signal-to-noise ratio (SNR) of the OTDR signal is improved by 1.34 dB on average compared with wavelet threshold denoising alone. The proposed system scores highly in event detection capability and accuracy, and the maximum detectable fiber length of the proposed LabVIEW-based system is 65 km.
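The core step of wavelet threshold denoising is shrinking the detail coefficients toward zero. A minimal sketch of soft thresholding with the universal threshold is shown below; the EMD stage and the wavelet decomposition itself are omitted, and the coefficient values and noise level are illustrative assumptions, not the paper's data.

```python
import math

def soft_threshold(coeffs, sigma, n):
    """Soft-threshold wavelet detail coefficients using the universal
    threshold t = sigma * sqrt(2 * ln(n)) for a length-n signal."""
    t = sigma * math.sqrt(2.0 * math.log(n))
    # Zero small (noise-like) coefficients, shrink large ones toward zero.
    return [math.copysign(max(abs(c) - t, 0.0), c) for c in coeffs]

# Illustrative coefficients: two strong event responses among noise.
out = soft_threshold([5.0, 0.3, -4.0, 0.1], sigma=0.5, n=1024)
```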
Abstract: ‘Bioeconomy’ is a complex concept that cuts across many sectors and covers several policy areas. To achieve an overall understanding and support a successful bioeconomy, a cross-sectorial approach is necessary. In practice, due to the concept’s wide scope and varying international approaches, fully understanding the bioeconomy is challenging at the policy level. This paper provides background on the topic through an analysis of bioeconomy strategies in the Baltic Sea region. Expert interviews and a small survey were conducted to discover the current and intended focuses of these countries’ bioeconomy sectors. The research shows that supporting sustainability is one of the keys to developing the future bioeconomy. The results highlight that the bioeconomy has to be sustainable and based on circular economy principles. Currently, traditional bioeconomy sectors such as food, wood, fish & waters, and fuel & energy, which are at the core of national bioeconomy strategies, are the best known and are considered more relevant than other bioeconomy industries. However, there is increasing potential for novel sectors, such as textiles and pharmaceuticals. The present research indicates that the opportunities presented by these bioeconomy sectors should be recognised and promoted. Education, research and innovation can play key roles in developing transformative and sustainable improvements in primary production and renewable resources. Furthermore, cooperation between businesses and educators is important.
Abstract: This paper presents an overview of the methodologies
and algorithms for statistical texture analysis of 2D images. Methods
for digital-image texture analysis are reviewed based on available
literature and research work either carried out or supervised by the
authors.
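A workhorse of statistical texture analysis of the kind surveyed here is the Gray-Level Co-occurrence Matrix, which counts how often pairs of gray levels co-occur at a fixed displacement. A minimal sketch is given below; the tiny 3x3 image and the horizontal displacement are illustrative assumptions.

```python
def glcm(image, dx, dy, levels):
    """Gray-Level Co-occurrence Matrix for one displacement (dx, dy):
    m[i][j] counts pixel pairs where gray level i has neighbor j."""
    m = [[0] * levels for _ in range(levels)]
    rows, cols = len(image), len(image[0])
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dy, c + dx
            if 0 <= r2 < rows and 0 <= c2 < cols:
                m[image[r][c]][image[r2][c2]] += 1
    return m

# Illustrative 3-level image; count horizontal right-neighbor pairs.
img = [[0, 0, 1],
       [0, 1, 1],
       [2, 2, 2]]
m = glcm(img, dx=1, dy=0, levels=3)
```

Texture descriptors such as contrast, energy, and homogeneity are then computed from the normalized matrix.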
Abstract: This paper presents research on the extraction of Cr, Cu and Ni from polluted soils. The work is based on preliminary studies regarding the use of the bacterium Thiobacillus ferrooxidans (9K medium) for the bioleaching of soil polluted with heavy metals (Cu, Cr and Ni). The microorganisms (Thiobacillus ferrooxidans), selected directly from polluted soil samples, were used in this experimental work. The soil samples used in the experimental research were taken from an area of Romania polluted with heavy metals. The soil samples were subjected to the cleaning process using the 9K medium solution (20 mL and 40 mL, respectively), stirred at 200 rpm for 20 hours at a controlled temperature (30 ˚C). During the experiment (at 0, 2, 4, 8 and 20 h), liquid samples were extracted and analyzed using an AA-6800 Atomic Absorption Spectrophotometer (AAS) to determine the Cr, Cu and Ni concentrations. The experiments led to the conclusion that these soils can be depolluted by bioleaching, a biological treatment method that uses microorganisms to promote the extraction of Cr, Cu and Ni from polluted soils.
Abstract: Eyes are considered the most sensitive and important organs of the human body; thus, any eye disorder affects the patient in all aspects of life. Cataract is an eye disorder that leads to blindness if not treated correctly and quickly. This paper demonstrates a model for the automatic detection, classification, and grading of cataracts based on image processing techniques and artificial intelligence. The proposed system is developed to ease the cataract diagnosis process for both ophthalmologists and patients. The wavelet transform combined with the 2D Log Gabor wavelet transform was used for feature extraction on a dataset of 120 eye images, followed by a classification process that sorted the images into three classes: normal, early stage, and advanced stage. The two classifiers, a support vector machine (SVM) and an artificial neural network (ANN), were compared on the same dataset of 120 eye images. The SVM gave better results than the ANN, achieving 96.8% accuracy compared with 92.3% for the ANN.
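The Log Gabor filter named above is defined in the frequency domain; its radial component peaks at a chosen center frequency and, unlike a plain Gabor, has no DC response. The sketch below shows only that radial component (the full 2D filter adds an angular Gaussian term); the bandwidth ratio 0.55 is a common choice, assumed here rather than taken from the paper.

```python
import math

def log_gabor(f, f0, sigma_ratio=0.55):
    """Radial log-Gabor frequency response; peaks at 1 when f == f0.
    sigma_ratio is sigma/f0, which sets the filter bandwidth."""
    if f <= 0:
        return 0.0  # log-Gabor has no DC component by construction
    return math.exp(-(math.log(f / f0) ** 2) /
                    (2.0 * math.log(sigma_ratio) ** 2))

peak = log_gabor(0.1, 0.1)      # response at the center frequency
off = log_gabor(0.05, 0.1)      # weaker response off-center
```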
Abstract: The requirement for pole-changing motors emerged in the very early days of asynchronous motor design. Different solutions have been elaborated, and some of them are in general use. One alternative is the so-called 3 Y/3 Y pole-changing winding. This paper deals with high-power applications of this solution. A complete and comprehensive study is introduced, including features and design guidelines. The method presented in this paper is especially suitable for pole numbers close to each other. The study also reveals that the method is more advantageous than the existing solutions for high-power motors with a 1:3 pole ratio. Using this motor, a new and complete drive supply system has been proposed as the most appropriate arrangement for a high-power main naval propulsion drive. Furthermore, the method makes it possible to extend the pole ratio to 1:6, 1:9, 1:12, etc. Finally, the proposal is extended to the hitherto missing 1:4, 1:5, 1:7, etc. pole ratios, giving a complete proposal for the theoretically infinite range.
Abstract: This paper describes preliminary work aimed at setting up therapeutic support for autistic teenagers using three NAO humanoid robots shared by subjects with Autism Spectrum Disorder (ASD). The studied population had successfully attended a first-year program and was observed during a second-year program using the robots. This paper focuses on the content and effects of the second-year program. The approach is based on a master-puppet concept: the subjects program the robots and use them as an extension for communication. Twenty sessions were organized, alternating ten preparatory sessions and ten robotics programming sessions. During the preparatory sessions, the subjects wrote a story to be played by the robots. During the robot programming sessions, the subjects programmed the motions required to make the robots tell the story. The program concluded with a public performance. The experiment involved five ASD teenagers aged 12-15, all of whom had attended the first-year robotics training. As a result, progress in the voluntary and organized communication skills of the five subjects was observed, leading to improvements in social organization, focus, voluntary communication, programming, and reading and writing abilities. The changes observed in the subjects' general behavior took place over a short time and could be observed from one robotics session to the next. The approach allowed the subjects to draw the limits of their bodies with respect to the environment, and therefore helped them confront the world with less anxiety.
Abstract: In this study, a spatial wavelet-based crack localization technique for a thick beam is presented. The wavelet scale in the spatial wavelet transformation is optimized to enhance crack detection sensitivity. A windowing function is also employed to remove the edge effect of the wavelet transformation, which enables the method to detect and localize cracks near the beam/measurement boundaries. A theoretical model and vibration analysis considering the crack effect are first developed and performed in MATLAB based on the Timoshenko beam model. The Gabor wavelet family is applied to the beam vibration mode shapes derived from the theoretical beam model to magnify the crack effect and thereby locate the crack. A relative wavelet coefficient is obtained for sensitivity analysis by comparing the coefficient values at different positions along the beam with the lowest value in the intact area of the beam. Afterward, the optimal wavelet scale corresponding to the highest relative wavelet coefficient at the crack position is obtained for each vibration mode through numerical simulations. The same procedure is performed for cracks with different sizes and positions in order to find the optimal scale range for the Gabor wavelet family. Finally, a Hanning window is applied to the different vibration mode shapes to overcome the edge effect of the wavelet transformation and its influence on localizing cracks close to the measurement boundaries. Comparison of the wavelet coefficient distributions of the windowed and initial mode shapes demonstrates that the window function eases the identification of cracks close to the boundaries.
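The localization idea above, correlating a mode shape with shifted Gabor wavelets so that coefficients peak at the slope discontinuity a crack introduces, can be sketched numerically. Everything below is a simplified illustration: the synthetic "mode shape" with a slope break, the wavelet parameters, and the crude edge-region exclusion (the paper itself uses a Hanning window instead) are all assumptions for demonstration.

```python
import math

def gabor_wavelet(t, scale):
    """Real Gabor wavelet sample: Gaussian-windowed cosine."""
    u = t / scale
    return math.exp(-0.5 * u * u) * math.cos(5.0 * u)

def wavelet_coeffs(signal, scale):
    """Spatial wavelet transform: correlate the signal with wavelets
    shifted to every position b; |coefficient| peaks at slope breaks."""
    n = len(signal)
    half = 3 * int(scale)  # wavelet support ~ +/- 3 scales
    coeffs = []
    for b in range(n):
        s = sum(signal[b + k] * gabor_wavelet(k, scale)
                for k in range(-half, half + 1) if 0 <= b + k < n)
        coeffs.append(abs(s) / math.sqrt(scale))
    return coeffs

# Synthetic mode shape: flat, then a slope break (the "crack") at x = 60.
shape = [0.0 if x < 60 else 0.03 * (x - 60) for x in range(100)]
c = wavelet_coeffs(shape, scale=4.0)
# Ignore the edge-affected regions (the paper handles these by windowing).
peak = max(range(20, 80), key=lambda i: c[i])
```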
Abstract: The paper presents results and industrial applications of production setup period estimation based on industrial data from the field of polymer cutting. The literature on polymer cutting is very limited in terms of the number of publications. The first polymer cutting machine has been known since the second half of the 20th century; however, the production of polymer parts with this kind of technology is still a challenging research topic. The products of the participating industrial partner must meet high technical requirements, as they are used in the medical, measurement instrumentation and painting industry branches. Typically, 20% of these parts are new work, which means that almost the entire product portfolio is replaced every five years in their low-series manufacturing environment. Consequently, a flexible production system is required, in which estimating the lengths of the frequent setup periods is one of the key success factors. In the investigation, several (input) parameters were studied and grouped to create an adequate training information set for an artificial neural network as a basis for estimating the individual setup periods. The first group collects product information such as the product name and number of items. The second group contains material data such as material type and colour. The third group collects surface quality and tolerance information, including the finest surface and tightest (or narrowest) tolerance. The fourth group contains setup data such as machine type and work shift. One source of these parameters is the Manufacturing Execution System (MES), but some data were also collected from Computer-Aided Design (CAD) drawings. The number of applied tools is one of the key factors on which the industrial partner's estimations were previously based. The artificial neural network model was trained on several thousand real industrial data records. The mean estimation accuracy of the setup period lengths was improved by 30%, and at the same time the deviation of the prognosis was improved by 50%. Furthermore, the influence of the mentioned parameter groups with respect to the manufacturing order was also investigated. The paper also highlights the manufacturing introduction experiences and further improvements of the proposed methods, both on the shop floor and in quotation preparation. Every week, more than 100 real industrial setup events occur and the related data are collected.
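Before the four parameter groups described above can feed a neural network, categorical fields must be turned into numbers. A minimal sketch of such an encoding is given below; the category lists, field names, and scaling constants are all hypothetical, since the abstract does not specify them.

```python
# Hypothetical category vocabularies (illustrative only, not the paper's).
MATERIAL = ["PTFE", "POM", "PA"]
MACHINE = ["M1", "M2"]

def encode(setup):
    """One-hot encode categoricals and scale numeric fields into a
    single feature vector for the neural network."""
    vec = [1.0 if setup["material"] == m else 0.0 for m in MATERIAL]
    vec += [1.0 if setup["machine"] == m else 0.0 for m in MACHINE]
    vec.append(setup["items"] / 1000.0)            # product group: lot size
    vec.append(setup["finest_surface_um"] / 10.0)  # surface/tolerance group
    return vec

v = encode({"material": "POM", "machine": "M1",
            "items": 250, "finest_surface_um": 1.6})
```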
Abstract: In this work, the hemodynamics in the sinuses of Valsalva after Transcatheter Aortic Valve Implantation is numerically examined. We focus on the physical results in the two-dimensional case. We use a finite element methodology based on a Lagrange multiplier technique that couples the dynamics of blood flow with the leaflets' movement. A massively parallel implementation of a monolithic, fully implicit solver allows greater accuracy and significant computational savings. The elastic properties of the aortic valve are disregarded, and the numerical computations are performed under physiologically correct pressure loads. The computational results show that blood flow may be subject to stagnation in the lower domain of the sinuses of Valsalva after Transcatheter Aortic Valve Implantation.
Abstract: This paper is concerned with the development of a
fully implicit and purely Eulerian fluid-structure interaction method
tailored for the modeling of the large deformations of elastic
membranes in a surrounding Newtonian fluid. We consider a
simplified model for the mechanical properties of the membrane, in
which the surface strain energy depends on the membrane stretching.
The fully Eulerian description is based on the advection of a modified
surface tension tensor, and the deformations of the membrane are
tracked using a level set strategy. The resulting nonlinear problem
is solved by a Newton-Raphson method, featuring a quadratic
convergence behavior. A monolithic solver is implemented, and we
report several numerical experiments aimed at model validation and
illustrating the accuracy of the presented method. We show that
stability is maintained for significantly larger time steps.
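The Newton-Raphson iteration mentioned above doubles the number of correct digits per step near the solution, which is what "quadratic convergence" means in practice. The one-dimensional sketch below illustrates this on a scalar equation; the fluid-structure problem itself is of course a large nonlinear system, not a scalar one.

```python
def newton(f, df, x0, tol=1e-12, max_iter=50):
    """Newton-Raphson iteration; step sizes roughly square each step."""
    x, history = x0, []
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        history.append(abs(step))
        if abs(step) < tol:
            break
    return x, history

# Solve x^2 - 2 = 0: the step sizes collapse quadratically to zero.
root, steps = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, x0=1.0)
```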
Abstract: Diabetic Retinopathy (DR) is an eye disease that leads to blindness. The earliest signs of DR are the appearance of red and yellow lesions on the retina called hemorrhages and exudates. Early diagnosis of DR prevents blindness; hence, many automated algorithms have been proposed to extract hemorrhages and exudates. In this paper, an automated algorithm is presented to extract hemorrhages and exudates separately from retinal fundus images using different image processing techniques, including the Circular Hough Transform (CHT), Contrast Limited Adaptive Histogram Equalization (CLAHE), Gabor filtering and thresholding. Since the optic disc is the same color as the exudates, it is first localized and detected. The presented method has been tested on fundus images from the Structured Analysis of the Retina (STARE) and Digital Retinal Images for Vessel Extraction (DRIVE) databases using MATLAB code. The results show that the method is capable of detecting hard exudates and highly probable soft exudates. It is also capable of detecting hemorrhages and distinguishing them from blood vessels.
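The Circular Hough Transform named above locates the roughly circular optic disc by letting every edge pixel vote for possible circle centers. A minimal fixed-radius sketch is below; the synthetic edge map, image size, and coarse angular sampling are illustrative assumptions, not the paper's configuration.

```python
import math

def circular_hough(edges, radius):
    """Accumulate votes for circle centers at a fixed radius: each edge
    pixel votes for all centers that could have produced it."""
    rows, cols = len(edges), len(edges[0])
    acc = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            if edges[r][c]:
                for deg in range(0, 360, 10):
                    a = r - round(radius * math.sin(math.radians(deg)))
                    b = c - round(radius * math.cos(math.radians(deg)))
                    if 0 <= a < rows and 0 <= b < cols:
                        acc[a][b] += 1
    return acc

# Synthetic edge map: a circle of radius 5 centred at (10, 10).
edges = [[0] * 21 for _ in range(21)]
for deg in range(360):
    edges[10 + round(5 * math.sin(math.radians(deg)))] \
         [10 + round(5 * math.cos(math.radians(deg)))] = 1
acc = circular_hough(edges, radius=5)
best = max((acc[r][c], r, c) for r in range(21) for c in range(21))
```

The accumulator peak recovers the circle center; in practice the radius is also swept over a plausible range for the optic disc.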
Abstract: Diabetic Retinopathy (DR) is a severe retinal disease caused by diabetes mellitus. It leads to blindness when it progresses to the proliferative level. Early indications of DR are the appearance of microaneurysms, hemorrhages and hard exudates. In this paper, an automatic algorithm for the detection of DR is proposed. The algorithm is based on a combination of several image processing techniques, including the Circular Hough Transform (CHT), Contrast Limited Adaptive Histogram Equalization (CLAHE), Gabor filtering and thresholding. In addition, a Support Vector Machine (SVM) classifier is used to classify retinal images as normal or abnormal, the latter covering non-proliferative and proliferative DR. The proposed method has been tested on images selected from the Structured Analysis of the Retina (STARE) database using MATLAB code, and it detects DR reliably. The sensitivity, specificity and accuracy of this approach are 90%, 87.5%, and 91.4%, respectively.
Abstract: This research paper presents a framework for classifying Magnetic Resonance Imaging (MRI) images for dementia. Dementia, an age-related cognitive decline, is indicated by degeneration of cortical and sub-cortical structures. Characterizing these morphological changes helps in understanding disease development and contributes to early prediction and prevention of the disease. Modelling that captures the brain’s structural variability and that is valid for disease classification and interpretation is very challenging. Features are extracted using Gabor filters at orientations of 0°, 30°, 60° and 90°, together with the Gray Level Co-occurrence Matrix (GLCM). The features are then normalized and fused, and Independent Component Analysis (ICA) is used for feature selection. A Support Vector Machine (SVM) classifier with different kernels is evaluated for its efficiency in classifying dementia. This study evaluates the presented framework using MRI images from the OASIS dataset for identifying dementia. Results show that the proposed feature-fusion classifier achieves higher classification accuracy.
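The normalize-and-fuse step described above can be sketched as min-max scaling each feature set to a common range before concatenation, so that neither the Gabor nor the GLCM features dominate by magnitude. The tiny feature vectors below are illustrative assumptions; serial (concatenation) fusion is one common choice, assumed here since the abstract does not specify the fusion rule.

```python
def min_max(xs):
    """Scale a feature list to [0, 1]."""
    lo, hi = min(xs), max(xs)
    return [(x - lo) / (hi - lo) for x in xs]

def fuse(gabor_feats, glcm_feats):
    """Normalize each feature set separately, then concatenate."""
    return min_max(gabor_feats) + min_max(glcm_feats)

# Illustrative raw features on very different scales.
fused = fuse([2.0, 4.0, 6.0], [10.0, 30.0])
```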
Abstract: Speaker recognition is performed in high Additive White Gaussian Noise (AWGN) environments using principles of Computational Auditory Scene Analysis (CASA). CASA methods often classify sounds from images in the time-frequency (T-F) plane, using spectrograms or cochleagrams as the image. In this paper, atomic decomposition implemented by matching pursuit transforms time-series speech signals into the T-F plane. The atomic decomposition creates a sparsely populated T-F vector in “weight space”, where each populated T-F position contains an amplitude weight. The weight-space vector, along with the atomic dictionary, represents a denoised, compressed version of the original signal. The arrangement of the atomic indices in the T-F vector is used for classification. Unsupervised feature learning implemented by a sparse autoencoder learns a single dictionary of basis features from a collection of envelope samples from all speakers. The approach is demonstrated using pairs of speakers from the TIMIT data set. Pairs of speakers are selected randomly from a single district. Each speaker has ten sentences: two are used for training and eight for testing. Atomic index probabilities are created for each training sentence and for each test sentence, and classification is performed by finding the lowest Euclidean distance between the probabilities from the training sentences and those from the test sentences. Training is done at a 30 dB Signal-to-Noise Ratio (SNR), and testing is performed at SNRs of 0 dB, 5 dB, 10 dB and 30 dB. The algorithm has a baseline classification accuracy of ~93%, averaged over ten pairs of speakers from the TIMIT data set. The baseline accuracy is attributable to the short training and test sequences as well as the overall simplicity of the classification algorithm. The accuracy is not affected by AWGN, yielding ~93% accuracy at 0 dB SNR.
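Matching pursuit, the decomposition engine above, greedily picks the dictionary atom with the largest inner product against the residual and subtracts its contribution. The sketch below uses a trivial three-atom standard basis rather than the Gabor/learned atoms of the paper, purely so the selection and residual-update mechanics are visible.

```python
def matching_pursuit(signal, dictionary, n_atoms):
    """Greedy atomic decomposition: repeatedly pick the unit-norm atom
    with the largest |inner product| and subtract its contribution."""
    residual = list(signal)
    picks = []
    for _ in range(n_atoms):
        best = max(range(len(dictionary)),
                   key=lambda i: abs(sum(r * a for r, a in
                                         zip(residual, dictionary[i]))))
        w = sum(r * a for r, a in zip(residual, dictionary[best]))
        residual = [r - w * a for r, a in zip(residual, dictionary[best])]
        picks.append((best, w))     # (atom index, amplitude weight)
    return picks, residual

# Tiny orthonormal dictionary; the signal is 3 x atom0 + 1 x atom2.
d = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
picks, res = matching_pursuit([3.0, 0.0, 1.0], d, n_atoms=2)
```

The `(index, weight)` pairs are exactly the sparse "weight space" representation the abstract describes; the index histogram then feeds the classifier.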
Abstract: Cancer affects people globally, with breast cancer being a leading killer. Breast cancer is due to the uncontrollable multiplication of cells, resulting in a tumour or neoplasm. Tumours are called ‘benign’ when the cancerous cells do not invade other body tissues and ‘malignant’ if they do. As mammography is an effective tool for detecting breast cancer at an early stage, when it is most treatable, it is the primary imaging modality for screening and diagnosing this cancer type. This paper presents an automatic mammogram classification technique using wavelet and Gabor filters. Correlation-based feature selection is used to reduce the feature set, and the selected features are classified using different decision trees.
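The correlation-based feature selection mentioned above favors features that correlate strongly with the class label. The sketch below shows a simplified version that only ranks features by |Pearson correlation with the class|; full CFS additionally penalizes redundancy between features, which is omitted here. The toy feature matrix and labels are illustrative.

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def select_features(features, labels, k):
    """Keep the k features most correlated (in magnitude) with the class."""
    ranked = sorted(range(len(features)),
                    key=lambda i: -abs(pearson(features[i], labels)))
    return ranked[:k]

labels = [0, 0, 1, 1]
feats = [[1.0, 2.0, 9.0, 10.0],   # tracks the class closely
         [5.0, 1.0, 4.0, 2.0]]    # uncorrelated with the class
top = select_features(feats, labels, k=1)
```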
Abstract: In this paper, we present a novel 2.5D face recognition method based on the Gabor Discrete Cosine Transform (GDCT). In the proposed method, the Gabor filter is applied to extract feature vectors from the texture and depth information. The Discrete Cosine Transform (DCT) is then used for dimensionality and redundancy reduction to improve computational efficiency. The system combines texture and depth information at the decision level, which yields higher performance than methods that use texture and depth information separately. The proposed algorithm is examined on the publicly available Bosphorus database, including models with pose variation. The experimental results show that the proposed method outperforms the benchmark.
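The DCT-based reduction above works because the DCT compacts most of a smooth feature vector's energy into its leading coefficients, so the tail can be discarded. A minimal sketch of the type-II DCT and coefficient truncation follows; the number of retained coefficients is an illustrative assumption.

```python
import math

def dct2(xs):
    """Type-II DCT (unnormalized); energy compacts into leading terms."""
    n = len(xs)
    return [sum(x * math.cos(math.pi * (i + 0.5) * k / n)
                for i, x in enumerate(xs)) for k in range(n)]

def reduce_dim(feature_vector, keep):
    """DCT-based reduction: keep only the first `keep` coefficients."""
    return dct2(feature_vector)[:keep]

# For a constant vector, all energy lands in coefficient 0.
c = dct2([1.0, 1.0, 1.0, 1.0])
reduced = reduce_dim([1.0, 1.0, 1.0, 1.0], keep=2)
```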
Abstract: The paper presents an advanced control system for tennis ball throwing machines that improves their accuracy with respect to ball impact points. A further advantage of the system is a much easier calibration process, involving intelligent solutions for the automatic adjustment of the stroking parameters according to ball elasticity, self-calibration, the use of a safety margin for very flat strokes, and the possibility of placing the machine at any position on the half court. The system applies mathematical methods to determine the exact ball trajectories and special approximation processes to reach every point on the targeted half court.
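The trajectory mathematics above can be illustrated with the drag-free core relation between launch speed, launch angle, and landing distance on level ground; the machine's actual model additionally accounts for ball elasticity, drag, launch height, and net clearance, all of which are omitted in this sketch.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def launch_speed(distance, angle_deg):
    """Drag-free speed needed to land `distance` metres away on level
    ground at the given launch angle: v = sqrt(g*d / sin(2*theta))."""
    return math.sqrt(G * distance / math.sin(2 * math.radians(angle_deg)))

def landing_distance(speed, angle_deg):
    """Inverse check: range of a drag-free projectile on level ground."""
    return speed ** 2 * math.sin(2 * math.radians(angle_deg)) / G

v = launch_speed(12.0, 30.0)   # aim 12 m down the court at 30 degrees
d = landing_distance(v, 30.0)  # round-trips back to ~12.0 m
```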
Abstract: A simple adaptive voice activity detector (VAD) is implemented using Gabor and gammatone atomic decomposition of speech for high Gaussian noise environments. Matching pursuit is used for the atomic decomposition and is shown to achieve optimal speech detection capability at high data compression rates for low signal-to-noise ratios. The most active dictionary elements found by matching pursuit are used for signal reconstruction, so that the algorithm adapts to the individual speaker's dominant time-frequency characteristics. Speech has a high peak-to-average ratio, enabling matching pursuit's greedy heuristic of highest inner products to isolate high-energy speech components in high-noise environments. Gabor and gammatone atoms are both investigated with identical logarithmically spaced center frequencies and similar bandwidths. The algorithm performs equally well for both Gabor and gammatone atoms, with no statistically significant differences. The algorithm achieves 70% accuracy at 0 dB SNR, 90% accuracy at 5 dB SNR, and 98% accuracy at 20 dB SNR, using 30 dB SNR as the reference for voice activity.
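The detection decision above ultimately reduces to thresholding per-frame activity against a reference. The sketch below substitutes plain frame energy for the matching-pursuit atom weights the paper actually thresholds, so it is a deliberately simplified stand-in; the frames and threshold are illustrative.

```python
import math

def frame_energy_db(frame):
    """Mean frame energy in dB (small offset avoids log of zero)."""
    e = sum(s * s for s in frame) / len(frame)
    return 10.0 * math.log10(e + 1e-12)

def vad(frames, threshold_db):
    """Flag frames whose energy exceeds a fixed threshold. Simplified:
    the paper's detector thresholds matching-pursuit atom weights."""
    return [frame_energy_db(f) > threshold_db for f in frames]

speech = [0.5, -0.4, 0.6, -0.5]        # high-energy "voiced" frame
noise = [0.01, -0.02, 0.015, -0.01]    # low-energy background frame
flags = vad([speech, noise], threshold_db=-20.0)
```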