Improving the Performance of Deep Learning in Facial Emotion Recognition with Image Sharpening

As humans, we use words with accompanying visual and facial cues to communicate effectively. Classifying facial emotion has been an active research area in the field of computer vision. In this paper, we propose a simple method for facial expression recognition that enhances accuracy. We tested our method on the FER-2013 dataset, which contains static images. Instead of using histogram equalization to preprocess the dataset, we used an Unsharp Mask to emphasize texture and details and to sharpen edges. We also used ImageDataGenerator from the Keras library for data augmentation. We then used a Convolutional Neural Network (CNN) model to classify the images into 7 different facial expressions, yielding an accuracy of 69.46% on the test set. Our results show that image preprocessing such as sharpening can improve the performance of a CNN model, even when the model is relatively simple.
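
A minimal Python sketch of the described preprocessing, assuming 48x48 grayscale FER-2013 images loaded as a NumPy array; the kernel size, sharpening amount, and augmentation settings are illustrative choices, not the paper's exact parameters.

```python
import cv2
import numpy as np
from tensorflow.keras.preprocessing.image import ImageDataGenerator

def unsharp_mask(img, ksize=(5, 5), sigma=1.0, amount=1.5):
    """Sharpen by subtracting a Gaussian-blurred copy from the original."""
    blurred = cv2.GaussianBlur(img, ksize, sigma)
    return cv2.addWeighted(img, 1.0 + amount, blurred, -amount, 0)

# x_train: (N, 48, 48, 1) float32 images in [0, 255] (hypothetical stand-in).
x_train = np.random.rand(8, 48, 48, 1).astype(np.float32) * 255
x_sharp = np.stack([unsharp_mask(im.squeeze())[..., None] for im in x_train])

# Data augmentation via Keras' ImageDataGenerator, as named in the abstract.
datagen = ImageDataGenerator(rotation_range=10, width_shift_range=0.1,
                             height_shift_range=0.1, horizontal_flip=True,
                             rescale=1.0 / 255)
batches = datagen.flow(x_sharp, batch_size=4)   # feed batches to the CNN
```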

Lung Cancer Detection and Multi-Level Classification Using a Discrete Wavelet Transform Approach

Uncontrolled growth of abnormal cells in the lung in the form of a tumor can be either benign (non-cancerous) or malignant (cancerous). Patients with Lung Cancer (LC) have an average life expectancy of five years; early diagnosis, detection, and prediction reduce the need for risky invasive surgery and increase the survival rate. Computed Tomography (CT), Positron Emission Tomography (PET), and Magnetic Resonance Imaging (MRI) are commonly used for earlier detection of cancer. A Gaussian filter along with a median filter is used for smoothing and noise removal, and Histogram Equalization (HE) is used for image enhancement, giving the best results. The lung cavities are extracted, the background other than the two lung cavities is completely removed, and the right and left lungs are segmented separately. Region property measurements (area, perimeter, diameter, centroid, and eccentricity) are taken from the tumor-segmented image, while texture is characterized by Gray-Level Co-occurrence Matrix (GLCM) functions; feature extraction provides the Region of Interest (ROI) given as input to the classifier. Two levels of classification are employed: K-Nearest Neighbor (KNN) is used to determine whether the patient's condition is normal or abnormal, while an Artificial Neural Network (ANN) is used to identify the cancer stage. The Discrete Wavelet Transform (DWT) algorithm is used for the main feature extraction, leading to the best efficiency. The developed technique shows encouraging results for real-time information and online detection in future research.
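
A sketch of the feature-extraction stage in Python, assuming a segmented grayscale lung ROI as a uint8 array; the GLCM distances/angles and the wavelet choice are illustrative, not necessarily those used by the authors.

```python
import numpy as np
import pywt
from skimage.feature import graycomatrix, graycoprops

roi = (np.random.rand(64, 64) * 255).astype(np.uint8)   # hypothetical ROI

# GLCM texture features over the ROI.
glcm = graycomatrix(roi, distances=[1], angles=[0, np.pi / 2],
                    levels=256, symmetric=True, normed=True)
texture = {prop: graycoprops(glcm, prop).mean()
           for prop in ("contrast", "homogeneity", "energy", "correlation")}

# One level of 2-D Discrete Wavelet Transform; the detail sub-bands
# (cH, cV, cD) supply additional features for the classifier.
cA, (cH, cV, cD) = pywt.dwt2(roi.astype(float), "haar")
features = np.hstack([list(texture.values()),
                      cH.std(), cV.std(), cD.std()])
```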

Algorithm for Path Recognition in-between Tree Rows for Agricultural Wheeled-Mobile Robots

Machine vision has been widely used in agriculture in recent years as a tool to promote the automation of processes and increase levels of productivity. The aim of this work is the development of a path recognition algorithm based on image processing to guide a terrestrial robot between tree rows. The proposed algorithm was developed in MATLAB and uses several image processing operations, such as threshold detection, morphological erosion, histogram equalization, and the Hough transform, to find edge lines along tree rows in an image and to create a path to be followed by a mobile robot. To develop the algorithm, a set of images of different types of orchards was used, which made possible the construction of a method capable of identifying paths between trees of different heights and aspects. The algorithm was evaluated on several images with different quality characteristics, and the results showed that the proposed method can successfully detect a path in different types of environments.
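
The paper uses MATLAB; the sketch below chains OpenCV equivalents of the named operations in Python. Filename, thresholds, and Hough parameters are illustrative assumptions.

```python
import cv2
import numpy as np

img = cv2.imread("orchard.jpg", cv2.IMREAD_GRAYSCALE)   # hypothetical file
img = cv2.equalizeHist(img)                      # histogram equalization
_, mask = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
mask = cv2.erode(mask, np.ones((3, 3), np.uint8))  # morphological erosion

# Probabilistic Hough transform to find edge lines along the tree rows.
edges = cv2.Canny(mask, 50, 150)
lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                        minLineLength=100, maxLineGap=20)
# The central path can then be derived from the left/right row lines.
```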

Wireless Body Area Network’s Mitigation Method Using Equalization

A wireless body area sensor network (WBASN) is composed of a central node and heterogeneous sensors that supervise the physiological signals and functions of the human body. This burgeoning area has stimulated new research and calibration processes, especially regarding WBASN attainment and fidelity. In the era of mobile or overlapping WBASNs, system performance degrades considerably because of unstable signal integrity. Hence, it is mandatory to include mitigation techniques in the design to avoid interference. Various mitigation methods are available, e.g. diversity techniques, equalization, and Viterbi decoding. This paper presents an equalization-based mitigation scheme for WBASNs to improve signal integrity. Eye diagrams are also given to represent the accuracy of the signal. A maximum number of symbols is taken to authenticate the signal, which in turn improves accuracy and increases the overall performance of the system.
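
A minimal Python sketch of how an eye diagram is rendered, assuming a pulse-shaped baseband signal sampled at `sps` samples per symbol; all names and noise levels are illustrative, not taken from the paper.

```python
import numpy as np
import matplotlib.pyplot as plt

sps = 8                                   # samples per symbol
symbols = np.random.choice([-1.0, 1.0], 500)
pulse = np.ones(sps)                      # rectangular pulse shaping
signal = np.convolve(np.repeat(symbols, sps), pulse / sps, mode="same")
signal += 0.05 * np.random.randn(signal.size)   # channel noise

# Overlay two-symbol-long traces to form the eye.
span = 2 * sps
for start in range(0, signal.size - span, sps):
    plt.plot(signal[start:start + span], "b", alpha=0.1)
plt.title("Eye diagram (a wider opening means better signal integrity)")
plt.show()
```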

Object Detection in Digital Images under Non-Standardized Conditions Using Illumination and Shadow Filtering

In recent years, object detection has gained much attention and become a very active research area in the field of computer vision. Robust detection of object boundaries in an image is demanded in numerous applications of human-computer interaction and automated surveillance systems. Many methods and approaches have been developed for automatic object detection in various fields, such as automotive, quality control management, and environmental services. Unfortunately, to the best of our knowledge, object detection under varying illumination with shadow consideration has not been well solved yet. Furthermore, this problem is one of the major hurdles keeping object detection methods from practical application. This paper presents an approach to automatic object detection in images under non-standardized environmental conditions. A key challenge is how to detect the object under uneven illumination conditions: the algorithms need to consider a variety of possible environmental factors, since colour information, lighting, and shadows vary from image to image. Existing methods mostly fail to produce appropriate results due to variation in colour information, lighting effects, threshold specifications, histogram dependencies, and colour ranges. To overcome these limitations, we propose an object detection algorithm, with pre-processing methods, that reduces the interference caused by shadow and illumination effects without fixed parameters. We use the YCrCb colour model without any specific colour ranges or predefined threshold values. The segmented object regions are further classified using morphological operations (erosion and dilation) and contours. The proposed approach was applied to a large image data set acquired under various environmental conditions for wood stack detection. Experiments show promising results for the proposed approach in comparison with existing methods.
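
A Python sketch of the described stages, assuming a BGR image of a wood stack; the adaptive-threshold window and morphology kernel are illustrative assumptions (the paper itself avoids fixed global thresholds and colour ranges).

```python
import cv2
import numpy as np

img = cv2.imread("woodstack.jpg")                 # hypothetical file
ycrcb = cv2.cvtColor(img, cv2.COLOR_BGR2YCrCb)
y, cr, cb = cv2.split(ycrcb)

# Work on the luma channel so chroma (colour) variation matters less;
# adaptive thresholding keeps the method free of a global fixed threshold.
mask = cv2.adaptiveThreshold(y, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                             cv2.THRESH_BINARY, 31, 5)

# Morphological erosion and dilation, then contour extraction.
kernel = np.ones((5, 5), np.uint8)
mask = cv2.dilate(cv2.erode(mask, kernel), kernel)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
obj = max(contours, key=cv2.contourArea)          # largest region = object
```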

Comparative Study of Different Enhancement Techniques for Computed Tomography Images

One of the key problems in the analysis of Computed Tomography (CT) images is their poor contrast. Image enhancement can be used to improve the visual clarity and quality of the images or to provide a better transformed representation for further processing. Contrast enhancement is one of the accepted methods used for image enhancement in various applications in the medical field. It helps to visualize and extract details of brain infarctions, tumors, and cancers from CT images. This paper presents a comparative study of five contrast enhancement techniques suitable for CT images: Power Law Transformation, Logarithmic Transformation, Histogram Equalization, Contrast Stretching, and Laplacian Transformation. All these techniques are compared with each other to find out which provides the better contrast for CT images. For the comparison, the parameters Peak Signal to Noise Ratio (PSNR) and Mean Square Error (MSE) are used. Logarithmic Transformation provided the clearest and best-quality image compared with all the other techniques studied and achieved the highest PSNR. The comparison concludes with the better approach for future research, especially for mapping abnormalities in CT images resulting from brain injuries.
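
A Python sketch of the logarithmic transformation and of the PSNR/MSE metrics used for the comparison; the scaling constant c is the standard textbook choice, not necessarily the authors' exact value.

```python
import numpy as np

def log_transform(img):
    """s = c * log(1 + r), with c chosen to map the result back to [0, 255]."""
    img = img.astype(np.float64)
    c = 255.0 / np.log(1.0 + img.max())
    return (c * np.log1p(img)).astype(np.uint8)

def mse(a, b):
    return np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)

def psnr(a, b, peak=255.0):
    m = mse(a, b)
    return float("inf") if m == 0 else 10.0 * np.log10(peak ** 2 / m)

ct = (np.random.rand(256, 256) * 255).astype(np.uint8)  # hypothetical CT slice
enhanced = log_transform(ct)
print(f"MSE={mse(ct, enhanced):.2f}, PSNR={psnr(ct, enhanced):.2f} dB")
```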

Architecture and Students with Autism: Exploring Strategies for Their Inclusion in Society Mainstream

Architecture, as an art and science of designing, has always been the medium to create environments that fulfill their users' needs. It could create an inclusive environment that would not isolate any individual regardless of his/her disabilities. It could help, hopefully, in setting the strategies that provide a supportive educational environment allowing the inclusion of students with autism. Architects could help in the battle against this neuro-developmental disorder by providing accommodating environments, at home and at school, in order to prevent institutionalizing these children. Through a theoretical approach and a review of the literature, this study explores and analyzes best practices in autism-friendly, supportive teaching environments. Additionally, it provides a range of measures and sets out strategies to deal with the sensory peculiarities of students with autism, in order to allow them to concentrate in the school environment, succeed, and be integrated as an important addition to society and the social mainstream. Architects should take into consideration the general guidelines for an autism-friendly built environment and apply them to specific building systems. As certain design elements have a great effect on children's behavior, appropriating architecture to provide inclusive, accommodating environments sets the basis for the equalization of opportunities, allowing these individuals a better, normal, non-institutional life, as the discussion presented in this study reveals.

Online Prediction of Nonlinear Signal Processing Problems Based on Kernel Adaptive Filtering

This paper presents two of the best-known kernel adaptive filtering (KAF) approaches, kernel least mean squares and kernel recursive least squares, used to predict new outputs of nonlinear signal processing problems. Both of these methods implement a nonlinear transfer function using kernel methods in a particular space named the reproducing kernel Hilbert space (RKHS), where the model is a linear combination of kernel functions applied to transform the observed data from the input space to a high-dimensional feature space; this idea is known as the kernel trick. KAF thus develops linear filters in the RKHS. We use two nonlinear signal processing problems, Mackey-Glass chaotic time series prediction and nonlinear channel equalization, to assess the performance of the presented approaches and, finally, to determine which of them is best suited.
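
A minimal kernel LMS (KLMS) sketch in Python for one-step-ahead prediction with a Gaussian kernel; the step size, kernel width, embedding order, and toy series are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def gauss(u, v, sigma=0.5):
    return np.exp(-np.sum((u - v) ** 2, axis=-1) / (2 * sigma ** 2))

def klms_predict(series, order=5, eta=0.2, sigma=0.5):
    centers, alphas, errors = [], [], []
    for n in range(order, len(series)):
        u, d = series[n - order:n], series[n]        # input vector, target
        y = sum(a * gauss(u, c, sigma)               # f(u) in the RKHS: a
                for a, c in zip(alphas, centers))    # combination of kernels
        e = d - y                                    # prediction error
        centers.append(u)                            # grow the dictionary
        alphas.append(eta * e)                       # KLMS weight update
        errors.append(e)
    return np.array(errors)

x = np.sin(0.3 * np.arange(400)) + 0.05 * np.random.randn(400)  # toy series
err = klms_predict(x)
print("final MSE:", np.mean(err[-50:] ** 2))
```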

Contrast Enhancement of Color Images with Color Morphing Approach

Low contrast images can result from wrong image acquisition settings or poor illumination conditions. Such images may not be visually appealing, and feature extraction from them can be difficult. Contrast enhancement of color images can be useful in the medical field for visual inspection. In this paper, a new technique is proposed to improve the contrast of color images. The RGB (red, green, blue) color image is transformed into the normalized RGB color space. The adaptive histogram equalization technique is applied to each of the three channels of the normalized RGB color space. The corresponding channels in the original (low contrast) image and in the image enhanced with adaptive histogram equalization (AHE) are morphed together in proper proportions. The proposed technique is tested on seventy color images of acne patients. The results are analyzed using cumulative variance and contrast improvement factor measures, and are also compared with decorrelation stretch. Both subjective and quantitative analyses demonstrate that the proposed technique outperforms the other techniques.
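
A Python sketch of the described pipeline, assuming a BGR input image; the morphing proportion `w` and the CLAHE settings (used here as the adaptive histogram equalization step) are illustrative, not the paper's values.

```python
import cv2
import numpy as np

img = cv2.imread("acne.jpg").astype(np.float64)          # hypothetical file
s = img.sum(axis=2, keepdims=True) + 1e-9
norm = (255.0 * img / s).astype(np.uint8)                # normalized RGB

# Adaptive histogram equalization applied to each normalized channel.
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = cv2.merge([clahe.apply(c) for c in cv2.split(norm)])

# Morph (blend) the original and AHE-enhanced channels in proportion.
w = 0.6
result = cv2.addWeighted(img.astype(np.uint8), 1.0 - w, enhanced, w, 0)
```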

Teaching Material, Books, Publications versus the Practice: Myths and Truths about Installation and Use of Downhole Safety Valve

The paper is related to the safety of oil wells and environmental preservation, subjects that require great attention and commitment from oil companies and the people who work with this equipment. This must hold from the drilling of the well until it is abandoned, in order to safeguard the environment and prevent possible damage. The project had as its main objective a comparison between books, articles, and publications and the information gathered in technical visits to operational bases of Petrobras. After the visits, information on current methods of utilization and management, which was not available before, became available to the general audience. As a result, a huge flow of incorrect and out-of-date information is observed, comprising not only bibliographic archives but also academic resources and materials. While gathering more in-depth information on the manufacturing, assembly, and use of downhole safety valves (DHSVs), several points previously accepted as correct and customary were found to be uncertain or outdated. One important finding concerns the depth of valve installation, formerly fixed at 30 meters below the seabed (mud line); instead, the installation depth should vary according to the ideal depth for escaping the zone with the greatest tendency toward hydrate formation, given the temperature and pressure. Regarding valves with a nitrogen chamber, according to the books their use is tied to water depths ≥ 700 meters, but in Brazilian exploratory fields they are used from 600 meters of water depth. The valves used in Brazilian fields can be inserted into the production column and are self-equalizing, but the use of valves screwed into the production column, with equalization, is predominant. Although these valves are more expensive to acquire, they are more reliable and efficient, have a longer service life, and do not restrict fluid flow. It follows that, based on research and theoretical information confronted with the practices used in the field, the present project is important and relevant. It will serve as a source of updated information connecting the academic environment and real exploratory situations, enriching future research and academic work with precise and easy-to-understand information.

Automatic Method for Exudates and Hemorrhages Detection from Fundus Retinal Images

Diabetic Retinopathy (DR) is an eye disease that leads to blindness. The earliest signs of DR are the appearance of red and yellow lesions on the retina, called hemorrhages and exudates. Early diagnosis of DR prevents blindness; hence, many automated algorithms have been proposed to extract hemorrhages and exudates. In this paper, an automated algorithm is presented to extract hemorrhages and exudates separately from retinal fundus images using different image processing techniques, including the Circular Hough Transform (CHT), Contrast Limited Adaptive Histogram Equalization (CLAHE), Gabor filtering, and thresholding. Since the optic disc is the same color as the exudates, it is first localized and detected. The presented method has been tested in MATLAB on fundus images from the Structured Analysis of the Retina (STARE) and Digital Retinal Images for Vessel Extraction (DRIVE) databases. The results show that this method is perfectly capable of detecting hard exudates and the highly probable soft exudates. It is also capable of detecting hemorrhages and distinguishing them from blood vessels.
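
The paper works in MATLAB; below is a Python/OpenCV sketch of the optic-disc localization step (CLAHE enhancement followed by the Circular Hough Transform). The filename, radius bounds, and Hough parameters are illustrative assumptions.

```python
import cv2
import numpy as np

img = cv2.imread("fundus.png")                     # hypothetical file
green = img[:, :, 1]                               # green channel: best contrast

clahe = cv2.createCLAHE(clipLimit=3.0, tileGridSize=(8, 8))
eq = clahe.apply(green)                            # CLAHE enhancement
eq = cv2.medianBlur(eq, 5)

# Circular Hough Transform: the optic disc is a bright circular region,
# located first so it is not mistaken for an exudate of the same colour.
circles = cv2.HoughCircles(eq, cv2.HOUGH_GRADIENT, dp=1.5, minDist=200,
                           param1=100, param2=40, minRadius=30, maxRadius=90)
if circles is not None:
    x, y, r = np.round(circles[0, 0]).astype(int)
    cv2.circle(img, (x, y), r + 10, (0, 0, 0), -1)  # mask out the optic disc
```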

Automatic Detection and Classification of Diabetic Retinopathy Using Retinal Fundus Images

Diabetic Retinopathy (DR) is a severe retinal disease caused by diabetes mellitus. It leads to blindness when it progresses to the proliferative level. Early indications of DR are the appearance of microaneurysms, hemorrhages, and hard exudates. In this paper, an automatic algorithm for the detection of DR is proposed. The algorithm is based on a combination of several image processing techniques, including the Circular Hough Transform (CHT), Contrast Limited Adaptive Histogram Equalization (CLAHE), Gabor filtering, and thresholding. A Support Vector Machine (SVM) classifier is then used to classify retinal images as normal or abnormal cases, the latter including non-proliferative and proliferative DR. The proposed method has been tested in MATLAB on images selected from the Structured Analysis of the Retina (STARE) database. The method is able to detect DR reliably: the sensitivity, specificity, and accuracy of this approach are 90%, 87.5%, and 91.4%, respectively.
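
A Python sketch of the classification stage and the reported metrics, assuming feature vectors have already been extracted from the lesion maps; the data, labels, and RBF kernel are illustrative stand-ins for the paper's actual features and settings.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

X = np.random.rand(120, 10)             # hypothetical per-image features
y = np.random.randint(0, 2, 120)        # 0 = normal, 1 = DR

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf", C=1.0).fit(X_tr, y_tr)

tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(X_te)).ravel()
sensitivity = tp / (tp + fn)            # reported as 90% in the paper
specificity = tn / (tn + fp)            # reported as 87.5% in the paper
accuracy = (tp + tn) / (tp + tn + fp + fn)
```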

Dynamic Background Updating for Lightweight Moving Object Detection

Background subtraction and temporal differencing are often used for moving object detection in video. Both approaches are computationally simple and easy to deploy in real-time image processing. However, while background subtraction is highly sensitive to dynamic backgrounds and illumination changes, the temporal difference approach is poor at extracting all the relevant pixels of a moving object and at detecting stopped or slowly moving objects in the scene. In this paper, we propose a simple moving object detection scheme based on adaptive background subtraction and temporal differencing that exploits dynamic background updates. The proposed technique consists of histogram equalization and a linear combination of background and temporal differences, followed by novel frame-based and pixel-based background updating techniques. Finally, morphological operations are applied to the output images. Experimental results show that the proposed algorithm overcomes the drawbacks of both the background subtraction and temporal difference methods and provides better performance than either method alone.
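
A minimal Python sketch of combining background subtraction and temporal differencing with a running background update; the mixing weights, threshold, and learning rate alpha are illustrative, not the paper's frame-based/pixel-based rules.

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("video.avi")                # hypothetical file
_, prev = cap.read()
prev = cv2.equalizeHist(cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY))
background = prev.astype(np.float64)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.equalizeHist(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))

    bg_diff = cv2.absdiff(gray, background.astype(np.uint8))
    tmp_diff = cv2.absdiff(gray, prev)
    combined = cv2.addWeighted(bg_diff, 0.6, tmp_diff, 0.4, 0)  # linear mix
    _, mask = cv2.threshold(combined, 25, 255, cv2.THRESH_BINARY)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))

    # Background update: adapt only where no motion was detected.
    alpha = 0.05
    background[mask == 0] = ((1 - alpha) * background + alpha * gray)[mask == 0]
    prev = gray
```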

Equalization Algorithms for MIMO Systems

In recent years, multi-antenna techniques have been considered a potential solution to increase the throughput of future wireless communication systems. The objective of this article is to study the MIMO (Multiple Input Multiple Output) transmission and reception system and to present the different receiver decoding techniques. First we present the least complex techniques, linear receivers such as the zero forcing (ZF) and minimum mean squared error (MMSE) equalizers; then a nonlinear technique called ordered successive interference cancellation (OSIC) and the optimal detector based on the maximum likelihood (ML) criterion. Finally, we simulate the associated decoding algorithms for the MIMO system (ZF, MMSE, OSIC, and ML) and compare the performance of these algorithms in the MIMO context.
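
A minimal Python sketch of the two linear receivers, assuming a flat-fading 4x4 channel H known at the receiver; the dimensions, QPSK alphabet, and noise level are illustrative assumptions.

```python
import numpy as np

nt = nr = 4
H = (np.random.randn(nr, nt) + 1j * np.random.randn(nr, nt)) / np.sqrt(2)
x = np.random.choice([-1, 1], nt) + 1j * np.random.choice([-1, 1], nt)  # QPSK
sigma2 = 0.1
n = np.sqrt(sigma2 / 2) * (np.random.randn(nr) + 1j * np.random.randn(nr))
y = H @ x + n

# Zero forcing: invert the channel (amplifies noise when H is ill-conditioned).
x_zf = np.linalg.pinv(H) @ y

# MMSE: regularizes the inversion with the noise variance.
Hh = H.conj().T
x_mmse = np.linalg.solve(Hh @ H + sigma2 * np.eye(nt), Hh @ y)

detect = lambda z: np.sign(z.real) + 1j * np.sign(z.imag)
print("ZF symbol errors:  ", np.sum(detect(x_zf) != x))
print("MMSE symbol errors:", np.sum(detect(x_mmse) != x))
```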

Study of Adaptive Filtering Algorithms and the Equalization of Radio Mobile Channel

This paper presents a study of three algorithms: equalization of the transmission channel under the ZF and MMSE criteria, applied to the BRAN A channel, and the LMS and RLS adaptive filtering algorithms used to estimate the parameters of the equalizer filter. The adaptive algorithms converge toward the channel estimate, thereby tracking the temporal variations of the channel and reducing the error in the transmitted signal. We present the performance of the equalizer under the ZF and MMSE criteria, in the noiseless case, together with a performance comparison of the LMS and RLS algorithms.
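
A minimal LMS adaptive equalizer sketch in Python, in training mode: the filter taps converge so that the equalizer output matches known transmitted symbols. The channel taps, step size, and filter length are illustrative (not the BRAN A channel used in the paper).

```python
import numpy as np

np.random.seed(0)
d = np.random.choice([-1.0, 1.0], 2000)          # transmitted symbols
channel = np.array([1.0, 0.5, 0.2])              # toy dispersive channel
x = np.convolve(d, channel)[:d.size]             # received signal (with ISI)
x += 0.01 * np.random.randn(x.size)

L, mu = 11, 0.01                                 # taps, LMS step size
w = np.zeros(L)
for n in range(L, d.size):
    u = x[n - L:n][::-1]                         # regressor (latest first)
    e = d[n - L // 2] - w @ u                    # error vs delayed symbol
    w += mu * e * u                              # LMS tap update
print("final |error|:", abs(e))
```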

Toward Indoor and Outdoor Surveillance Using an Improved Fast Background Subtraction Algorithm

The detection of moving objects from video image sequences is very important for object tracking, activity recognition, and behavior understanding in video surveillance. The most used approach for moving object detection and tracking is background subtraction. Many approaches have been suggested for background subtraction, but these are sensitive to illumination changes, and the solutions proposed to bypass this problem are time consuming. In this paper, we propose a robust yet computationally efficient background subtraction approach and mainly focus on the ability to detect moving objects in dynamic scenes, for possible applications in monitoring complex and restricted-access areas, where moving and motionless persons must be reliably detected. It consists of three main phases: establishing invariance to illumination changes, background/foreground modeling, and morphological analysis for noise removal. We handle illumination changes using Contrast Limited Adaptive Histogram Equalization (CLAHE), which limits the intensity of each pixel to a user-determined maximum. Thus, it mitigates the degradation due to scene illumination changes and improves the visibility of the video signal. Initially, the background and foreground images are extracted from the video sequence; then, the background and foreground images are separately enhanced by applying CLAHE. In order to form multi-modal backgrounds, we model each channel of a pixel as a mixture of K Gaussians (K=5) using a Gaussian Mixture Model (GMM). Finally, we post-process the resulting binary foreground mask using morphological erosion and dilation to remove possible noise. For experimental testing, we used a standard dataset to challenge the efficiency and accuracy of the proposed method on a diverse set of dynamic scenes.
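
A Python sketch of the pipeline that swaps in OpenCV's built-in Gaussian-mixture background subtractor (MOG2) in place of the authors' per-channel K=5 GMM; the history, threshold, and CLAHE settings are illustrative.

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("surveillance.avi")        # hypothetical file
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
mog = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                         detectShadows=True)
kernel = np.ones((3, 3), np.uint8)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # CLAHE on the luminance channel mitigates illumination changes.
    l, a, b = cv2.split(cv2.cvtColor(frame, cv2.COLOR_BGR2LAB))
    frame = cv2.cvtColor(cv2.merge([clahe.apply(l), a, b]), cv2.COLOR_LAB2BGR)

    fg = mog.apply(frame)                           # per-pixel Gaussian mixture
    fg[fg == 127] = 0                               # drop detected shadows
    fg = cv2.dilate(cv2.erode(fg, kernel), kernel)  # morphological cleanup
```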

Blind Identification and Equalization of CDMA Signals Using the Levenberg-Marquardt Algorithm

In this paper we describe the Levenberg-Marquardt (LM) algorithm for the identification and equalization of CDMA signals received by an antenna array over communication channels. The synthesis covers the digital separation and equalization of signals after propagation through multipath channels generating intersymbol interference (ISI). Exploiting the discrete transmitted data and three diversities induced at the reception, the problem can be formulated as the Block Component Decomposition (BCD) of a third-order tensor, a new tensor decomposition generalizing the PARAFAC decomposition. Optimizing the BCD with the Levenberg-Marquardt method gives encouraging results compared with the classical alternating least squares (ALS) algorithm. In the equalization part, we use the Minimum Mean Square Error (MMSE) criterion to complete the presented method. The simulation results confirm the interest of the LM algorithm.
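
The paper applies Levenberg-Marquardt to a tensor BCD cost, which is beyond a short sketch; the Python snippet below therefore only illustrates the LM step itself, on a small generic nonlinear least-squares problem via SciPy (all values illustrative).

```python
import numpy as np
from scipy.optimize import least_squares

t = np.linspace(0, 4, 50)
y = 2.5 * np.exp(-1.3 * t) + 0.02 * np.random.randn(t.size)   # noisy data

def residuals(p):
    a, b = p
    return a * np.exp(-b * t) - y      # residual vector r(p)

# method="lm" selects the Levenberg-Marquardt algorithm (MINPACK).
sol = least_squares(residuals, x0=[1.0, 1.0], method="lm")
print("estimated (a, b):", sol.x)
```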

Channel Estimation/Equalization with Adaptive Modulation and Coding over Multipath Faded Channels for WiMAX

Different-order modulations combined with different coding schemes allow sending more bits per symbol, thus achieving higher throughputs and better spectral efficiencies. However, it must also be noted that when using a modulation technique such as 64-QAM with fewer overhead bits, better signal-to-noise ratios (SNRs) are needed to overcome any inter-symbol interference (ISI) and maintain a certain bit error ratio (BER). The use of adaptive modulation allows wireless technologies to yield higher throughputs while also covering long distances. The aim of this paper is to implement the Adaptive Modulation and Coding (AMC) features of the WiMAX PHY in MATLAB and to analyze the performance of the system under different channel conditions (AWGN, Rayleigh, and Rician fading channels) with channel estimation and blind equalization. Simulation results demonstrate that an increase in modulation order increases both throughput and BER. These results reveal a trade-off among modulation order, FFT length, throughput, BER, and spectral efficiency. The BER changes gradually for the AWGN channel and erratically for the Rayleigh and Rician fading channels.
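
The paper's simulations are in MATLAB; as a small Python illustration of the throughput/BER trade-off it measures, the sketch below estimates the BER of one AMC operating point (QPSK over AWGN). Eb/N0 values and the bit count are illustrative.

```python
import numpy as np

def qpsk_ber(ebn0_db, nbits=200_000):
    bits = np.random.randint(0, 2, nbits)
    sym = (1 - 2 * bits[0::2]) + 1j * (1 - 2 * bits[1::2])   # Gray-mapped QPSK
    ebn0 = 10 ** (ebn0_db / 10)
    sigma = np.sqrt(1 / (2 * ebn0))          # noise std per dimension (Eb = 1)
    rx = sym + sigma * (np.random.randn(sym.size) +
                        1j * np.random.randn(sym.size))
    bhat = np.empty(nbits, dtype=int)
    bhat[0::2] = (rx.real < 0).astype(int)
    bhat[1::2] = (rx.imag < 0).astype(int)
    return np.mean(bhat != bits)

for snr in (0, 4, 8):
    print(f"Eb/N0 = {snr} dB  BER = {qpsk_ber(snr):.4f}")
```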

Spatial Audio Player Using Musical Genre Classification

In this paper, we propose a smart music player that combines musical genre classification and spatial audio processing. The musical genre is classified based on content analysis of the musical segment detected in the audio stream. In parallel with the classification, spatial audio quality is achieved by adding artificial reverberation in a virtual acoustic space to the input mono sound. Thereafter, during playback, the spatial sound is boosted with frequency gains chosen according to the musical genre. Experiments measured the accuracy of detecting the musical segment in the audio stream and of its musical genre classification, and a listening test was performed on the spatial audio processing based on the virtual acoustic space.
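
A Python sketch of the artificial-reverberation step, assuming a mono signal: a synthetic, exponentially decaying noise burst stands in for the virtual room's impulse response (the decay time and dry/wet gains are illustrative, not the paper's room model).

```python
import numpy as np
from scipy.signal import fftconvolve

fs = 44_100
mono = np.random.randn(fs)                       # 1 s of placeholder audio

t = np.arange(int(0.8 * fs)) / fs                # 0.8 s impulse response
ir = np.random.randn(t.size) * np.exp(-6.0 * t)  # exponential decay

wet = fftconvolve(mono, ir)[:mono.size]          # reverberant copy
out = 0.7 * mono + 0.3 * wet                     # dry/wet mix
out /= np.abs(out).max()                         # normalize before playback
```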

Blind Channel Identification Using Higher Order Cumulants with Application to Equalization for the MC-CDMA System

In this paper we propose an algorithm based on higher-order cumulants for the blind identification of the impulse response of radio-frequency channels and the equalization of the downlink MC-CDMA system. In order to test its efficiency, we compared it with another algorithm proposed in the literature. For that purpose, we considered a theoretical channel, the Proakis 'B' channel, and a practical frequency-selective fading channel, called Broadband Radio Access Network (BRAN C), normalized for MC-CDMA systems, excited by non-Gaussian sequences. For the MC-CDMA part, we use the Minimum Mean Square Error (MMSE) equalizer after channel identification to correct the channel's distortion. Simulation results, in a noisy environment and for different signal-to-noise ratios (SNRs), are presented to illustrate the accuracy of the proposed algorithm.
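
A minimal Python sketch of estimating a fourth-order cumulant of a zero-mean sequence, the kind of statistic on which such identification algorithms build; the uniform driving sequence is an illustrative non-Gaussian input, not the paper's exact setup.

```python
import numpy as np

def cum4(x):
    """c4 = E[x^4] - 3 (E[x^2])^2 for a real zero-mean sequence."""
    x = x - x.mean()
    return np.mean(x ** 4) - 3.0 * np.mean(x ** 2) ** 2

s = np.random.uniform(-1, 1, 100_000)       # non-Gaussian input sequence
g = np.random.randn(100_000)                # Gaussian noise
print("cum4(uniform): ", cum4(s))           # nonzero (-2/15 ~ -0.133 in theory)
print("cum4(gaussian):", cum4(g))           # ~ 0: cumulants suppress Gaussian noise
```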