Abstract: This paper presents a method for the identification of a linear time-invariant (LTI) autonomous all-pole system using singular value decomposition. The novelty of this paper is twofold. First, a MUSIC algorithm for estimating complex frequencies from real measurements is proposed. Second, using the proposed algorithm, the coefficients of the differential equation that determines the LTI system can be identified by switching off the input signal: we need only to switch off the input, apply the complex MUSIC algorithm, and determine the coefficients as symmetric polynomials in the complex frequencies. The method can be applied to unstable systems and offers higher resolution than time-series solutions when noisy data are used. The classical performance bound, the Cramer-Rao bound (CRB), is used as a basis for performance comparison of the proposed method for the estimation of multiple poles in noisy exponential signals.
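As an illustrative sketch (not the authors' exact algorithm), the two-step idea can be exercised numerically: an SVD-based subspace step (here a matrix-pencil/ESPRIT-style estimate standing in for the proposed complex MUSIC) recovers the complex frequencies from real samples, and the differential-equation coefficients then follow as symmetric polynomials in those frequencies via `np.poly`. All parameter values are hypothetical.

```python
import numpy as np

def estimate_poles(x, n_poles, dt):
    """SVD-subspace (matrix-pencil style) estimate of continuous-time poles s_k."""
    N = len(x)
    L = N // 2
    # Hankel matrix of the real measurements
    H = np.array([x[i:i + L] for i in range(N - L)])
    U, s, Vh = np.linalg.svd(H, full_matrices=False)
    # Signal subspace spanned by the leading right singular vectors
    V = Vh[:n_poles].conj().T                        # L x n_poles
    V1, V2 = V[:-1], V[1:]
    z = np.linalg.eigvals(np.linalg.pinv(V1) @ V2)   # discrete poles z_k = exp(s_k dt)
    return np.log(z) / dt

def ode_coefficients(s_poles):
    """Characteristic polynomial prod(s - s_k): its coefficients are the
    elementary symmetric polynomials in the complex frequencies."""
    return np.poly(s_poles)

# Example: x(t) = e^{-0.5 t} cos(2 t) has poles -0.5 +/- 2j,
# i.e. it solves x'' + x' + 4.25 x = 0
dt = 0.01
t = np.arange(0, 5, dt)
x = np.exp(-0.5 * t) * np.cos(2 * t)
s_hat = estimate_poles(x, 2, dt)
coeffs = np.real(ode_coefficients(s_hat))            # approx [1.0, 1.0, 4.25]
```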
Abstract: In multiuser MIMO communication systems, interuser interference has a strong impact on the transmitted signals. Precoding schemes are employed in multiuser broadcast channels to suppress interuser interference; both linear and nonlinear schemes exist. For massive system dimensions, it is difficult to design a precoding algorithm that combines low computational complexity with good error-rate performance over fading channels. This paper describes the error-rate performance of precoding schemes over fading channels under the assumption of perfect channel state information at the transmitter. To estimate the bit error rate, different propagation environments, namely Rayleigh, Rician, and Nakagami fading channels, are considered. The paper presents an error-rate performance comparison of these fading channels for precoding methods such as Channel Inversion and Dirty Paper Coding in a multiuser broadcast system, using MATLAB simulation. It is observed that the multiuser system achieves the best error-rate performance with Dirty Paper Coding over the Rayleigh fading channel.
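A minimal sketch of the channel-inversion (zero-forcing) precoder mentioned above, assuming a flat Rayleigh-fading broadcast channel with perfect CSI at the transmitter; the sizes and symbols are illustrative and power normalization is omitted:

```python
import numpy as np

rng = np.random.default_rng(0)
K, M = 4, 4              # users, transmit antennas (illustrative sizes)
# Rayleigh-fading broadcast channel: each row is one user's channel
H = (rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M))) / np.sqrt(2)
# Channel-inversion (zero-forcing) precoder: pre-multiply the symbol vector by
# the right pseudo-inverse so each user sees only its own stream
P = H.conj().T @ np.linalg.inv(H @ H.conj().T)
s = rng.choice(np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]), size=K)  # QPSK
x = P @ s                # transmitted vector
y = H @ x                # noiseless received samples, one per user: y == s
```

Because H @ P equals the identity, the interuser interference vanishes; the cost is the transmit-power penalty of inverting a poorly conditioned channel.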
Abstract: The Hammerstein–Wiener model is a block-oriented model in which a linear dynamic system is surrounded by two static nonlinearities at its input and output; it can be used to model various processes. This paper presents an optimization-based approach to the problem of Hammerstein–Wiener system identification. The method relies on reformulating the identification problem, solving it as a constrained quadratic problem, and analysing its solutions. In the formulation of the problem, the effects of adding noise to both the input and output signals of the nonlinear blocks, and of adding a disturbance to the linear block, on the resulting equations are discussed. Additionally, a parametric form of the matrix operations that reduces the equation size is presented. To analyse the possible solutions of the resulting system of equations, a method that reduces the gap between the number of equations and the number of unknown variables, by formulating and importing existing knowledge about the nonlinear functions, is presented. The obtained equations are applied to an example H–W system to validate the results and illustrate the proposed method.
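For orientation, a minimal simulation of the Hammerstein–Wiener block structure itself (static nonlinearity, linear IIR block, static nonlinearity); the particular nonlinearities and filter coefficients are illustrative assumptions, and this does not implement the paper's identification procedure:

```python
import numpy as np

def hammerstein_wiener(u, b, a, f, g):
    """Static nonlinearity f -> linear IIR filter (b, a) -> static nonlinearity g."""
    v = f(np.asarray(u, dtype=float))    # input (Hammerstein) nonlinearity
    w = np.zeros_like(v)
    for n in range(len(v)):              # direct-form IIR: a[0]*w[n] = b*v - a*w
        acc = sum(b[i] * v[n - i] for i in range(len(b)) if n - i >= 0)
        acc -= sum(a[j] * w[n - j] for j in range(1, len(a)) if n - j >= 0)
        w[n] = acc / a[0]
    return g(w)                          # output (Wiener) nonlinearity

# Illustrative blocks: tanh input saturation, first-order filter, cubic output
u = np.linspace(-1, 1, 50)
y = hammerstein_wiener(u, b=[0.5, 0.5], a=[1.0, -0.3], f=np.tanh, g=lambda w: w ** 3)
```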
Abstract: High-speed infrared vertical-cavity surface-emitting laser diodes (VCSELs) with Cu-plated heat sinks were fabricated and tested. VCSELs with a 10 µm aperture diameter and 4 µm of electroplated copper demonstrated a -3 dB modulation bandwidth (f-3dB) of 14 GHz and a resonance frequency (fR) of 9.5 GHz at a bias current density (Jbias) of only 4.3 kA/cm², which corresponds to an improved f-3dB²/Jbias ratio of 44 GHz²/(kA/cm²). At higher and lower bias current densities, the f-3dB²/Jbias ratio decreased to about 30 GHz²/(kA/cm²) and 18 GHz²/(kA/cm²), respectively. Examination of the analogue modulation response showed that the presented VCSELs maintained a steady f-3dB/fR ratio of 1.41±10% over the whole bias current range (1.3Ith to 6.2Ith). The devices also demonstrated a maximum modulation bandwidth (f-3dB,max) of more than 16 GHz at a bias current 25% below the industrial bias current standard for reliability.
Abstract: In general, state-of-the-art data acquisition systems (DAQ) in high-energy physics experiments must satisfy high requirements in terms of reliability, efficiency, and data rate capability. This paper presents the development and deployment of a debugging tool named DAQ Debugger for the intelligent, FPGA-based Data Acquisition System (iFDAQ) of the COMPASS experiment at CERN. Utilizing a hardware event builder, the iFDAQ is designed to read out data at the experiment's average maximum rate of 1.5 GB/s. In complex software systems such as the iFDAQ, comprising thousands of lines of code, debugging is essential to reveal all software issues. Unfortunately, conventional debugging of the iFDAQ is not possible during real data taking. The DAQ Debugger is a tool for identifying a problem, isolating its source, and then either correcting the problem or determining a way to work around it. It provides a layer for easy integration into any process and has no impact on process performance. Based on the handling of system signals, the DAQ Debugger represents an alternative to the conventional debuggers provided by most integrated development environments. Whenever a problem occurs, it generates reports containing all the information necessary for deeper investigation and analysis. The DAQ Debugger was fully incorporated into all iFDAQ processes during the 2016 run. It helped to reveal remaining software issues and significantly improved the stability of the system in comparison with the previous run. In this paper, we present the DAQ Debugger from several perspectives and discuss it in detail.
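A minimal sketch of the signal-handling idea, assuming a POSIX environment; this is not the iFDAQ code, and the function name and report format are hypothetical. A handler installed for fatal signals writes a report with the stack trace, instead of relying on an attached interactive debugger:

```python
import signal
import sys
import traceback

def install_debugger(report_path="daq_debug_report.txt"):
    """Register a handler that, on a fatal signal, dumps a report and exits
    (illustrative stand-in for a signal-based debugging layer)."""
    def handler(signum, frame):
        with open(report_path, "w") as report:
            report.write(f"Caught signal {signum} ({signal.Signals(signum).name})\n")
            traceback.print_stack(frame, file=report)  # stack at the failure point
        sys.exit(1)
    for sig in (signal.SIGSEGV, signal.SIGABRT, signal.SIGTERM):
        signal.signal(sig, handler)
```

The report can then be inspected offline, which is what makes the approach compatible with processes that must keep running during data taking.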
Abstract: In this paper, we present a comparative subjective analysis of the Improved Minima Controlled Recursive Averaging (IMCRA) algorithm, the Kalman filter, and the cascade of the IMCRA and Kalman filter algorithms. The performance of speech enhancement algorithms can be assessed in two ways. One is objective evaluation, in which speech quality parameters are computed. The other is a subjective listening test, in which the processed speech signal is presented to listeners who judge the quality of the speech on certain parameters. The comparative objective evaluation of these algorithms, in terms of global SNR, segmental SNR, and Perceptual Evaluation of Speech Quality (PESQ), was previously analyzed by the authors, who reported a substantial increase in the objective parameters for the cascaded algorithms. Since subjective evaluation is the real test of the quality of speech enhancement algorithms, the superiority of the cascaded algorithms over the individual IMCRA and Kalman algorithms is verified through the subjective analysis in this paper. The results of the subjective listening tests confirm that the cascaded algorithms perform better under all types of noise conditions.
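For reference, the two SNR-style objective metrics can be sketched directly from their definitions (the frame length and the customary clamping range for segmental SNR are illustrative choices, not the authors' exact settings):

```python
import numpy as np

def global_snr(clean, processed):
    """Global SNR in dB: clean-signal energy over residual-error energy."""
    noise = clean - processed
    return 10.0 * np.log10(np.sum(clean ** 2) / np.sum(noise ** 2))

def segmental_snr(clean, processed, frame=256, lo=-10.0, hi=35.0):
    """Segmental SNR: per-frame SNRs, clamped to [lo, hi] dB, then averaged."""
    snrs = []
    for i in range(0, len(clean) - frame + 1, frame):
        c = clean[i:i + frame]
        n = c - processed[i:i + frame]
        snrs.append(10.0 * np.log10(np.sum(c ** 2) / (np.sum(n ** 2) + 1e-12)))
    return float(np.mean(np.clip(snrs, lo, hi)))
```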
Abstract: This paper investigates MIMO (Multiple-Input
Multiple-Output) adaptive filtering techniques for the application
of supervised source separation in the context of convolutive
mixtures. From the observation that there is correlation among the
signals of the different mixtures, an improvement in the NSAF
(Normalized Subband Adaptive Filter) algorithm is proposed in
order to accelerate its convergence rate. Simulation results with
mixtures of speech signals in reverberant environments show the
superior performance of the proposed algorithm with respect to the
performances of the NLMS (Normalized Least-Mean-Square) and
conventional NSAF, considering both the convergence speed and
SIR (Signal-to-Interference Ratio) after convergence.
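The fullband baseline, NLMS, can be sketched in a few lines (the filter order, step size, and system-identification example are illustrative; the paper's improved NSAF is not reproduced here):

```python
import numpy as np

def nlms(x, d, order=4, mu=0.5, eps=1e-8):
    """Normalized LMS: adapt w so that w . x_n tracks the desired signal d."""
    w = np.zeros(order)
    y = np.zeros(len(x))
    for n in range(order - 1, len(x)):
        xn = x[n - order + 1:n + 1][::-1]        # most recent sample first
        y[n] = w @ xn
        e = d[n] - y[n]
        w += mu * e * xn / (xn @ xn + eps)       # step normalized by input power
    return w, y

# System-identification example: recover an unknown FIR channel
rng = np.random.default_rng(1)
h = np.array([0.5, -0.2, 0.1])
x = rng.standard_normal(4000)
d = np.convolve(x, h)[:len(x)]                   # noiseless desired signal
w, _ = nlms(x, d)                                # w converges to h (zero-padded)
```

The subband (NSAF) variants split x and d into subbands and run one normalized update per band, which is what accelerates convergence for correlated inputs such as speech.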
Abstract: Characterizing the fatigue and fracture properties of nanostructures is one of the most challenging tasks in nanoscience and nanotechnology, due to the lack of a MEMS/NEMS device capable of generating uniform cyclic loadings at high frequencies. Here, the dynamic response of a recently proposed MEMS/NEMS device under different input signals is investigated in detail. The device is designed and modeled based on the electromagnetic force induced between paired parallel wires carrying electrical currents, known as Ampere's Force Law (AFL). Since the device uses only two paired wires for its actuation and sensing parts, it provides a highly sensitive and linear response for nanostructures of any stiffness and shape (single or arrayed nanowires, nanotubes, nanosheets, or nanowalls). In addition to studying the maximum gains at the device's different resonance frequencies, its dynamic responses are investigated for different inputs and nanostructure properties to demonstrate the capability, usability, and reliability of the device for a wide range of nanostructures. The device can be readily integrated into SEM/TEM instruments to enable real-time study of the fatigue and fracture properties of nanostructures, their softening or hardening behaviors, and the initiation and/or propagation of nanocracks.
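The actuation principle rests on Ampere's Force Law; a minimal numeric sketch in SI units (the currents and separation are illustrative, not the device's operating values):

```python
import math

MU_0 = 4e-7 * math.pi  # vacuum permeability, N/A^2

def ampere_force_per_length(i1, i2, d):
    """Ampere's Force Law: force per unit length between two long parallel
    wires carrying currents i1 and i2 at separation d (attractive for
    co-directed currents)."""
    return MU_0 * i1 * i2 / (2.0 * math.pi * d)

# Classic check: 1 A in each wire, 1 m apart -> 2e-7 N/m
f = ampere_force_per_length(1.0, 1.0, 1.0)
```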
Abstract: Most recent wireless LANs, broadband access networks, and digital broadcasting systems use Orthogonal Frequency Division Multiplexing techniques. In addition, the increasing demand for data and Internet services makes fiber optics an important technology, as it has many characteristics that make it the best solution for transferring large volumes of data from one point to another. In radio over fiber, a high-quality RF signal is converted to an optical signal carried over single-mode fiber. Optimum values of the bias level and the switching voltage of the Mach-Zehnder modulator are important for the performance of radio-over-fiber links. In this paper, we propose a method to optimize the two parameters simultaneously, the bias point and the switching voltage of the external modulator of a radio-over-fiber system, with respect to RF gain. Simulation results show the optimum gain value under these two parameters.
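A minimal sketch of the ideal Mach-Zehnder modulator transfer characteristic on which such an optimization operates (unit input power and an illustrative switching voltage V_pi are assumed; the paper's gain-optimization procedure itself is not reproduced):

```python
import numpy as np

def mzm_output(v_rf, v_bias, v_pi, p_in=1.0):
    """Ideal Mach-Zehnder modulator: optical output power versus total drive
    voltage, with half-wave (switching) voltage v_pi."""
    return p_in / 2.0 * (1.0 + np.cos(np.pi * (v_rf + v_bias) / v_pi))

# Quadrature bias (v_bias = v_pi / 2) sits at the half-power point, where the
# small-signal response is most linear; this is the region the joint
# bias/switching-voltage optimization explores.
v_pi = 4.0
v = np.linspace(-0.1, 0.1, 201)
p = mzm_output(v, v_pi / 2.0, v_pi)
```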
Abstract: In modern society, the rising quality standards of industrial production demand new techniques of control and machinery automation. In this context, this work presents the implementation of a Paraconsistent-Fuzzy Digital PID controller. The controller is based on the treatment of inconsistencies in both Paraconsistent Logic and Fuzzy Logic. A paraconsistent analysis is performed on the signals applied to the system inputs using concepts from the Paraconsistent Annotated Logic with annotation of two values (PAL2v). The signals resulting from the paraconsistent analysis are two values, Dc (Degree of Certainty) and Dct (Degree of Contradiction), which are then treated according to Fuzzy Logic theory; the resulting output of the logic actions is a single crisp value, which is used to control the dynamic system. The application of the proposed model was demonstrated through an example. Initially, the Paraconsistent-Fuzzy Digital PID controller was built and tested in an isolated MATLAB environment and then compared with the software's equivalent Digital PID function for standard step excitation. Next, a level control plant was modeled so that the controller could be run on a physical model, making the tests closer to real conditions. For this, the control parameters (proportional, integral, and derivative) were determined for the configuration of both the conventional Digital PID controller and the Paraconsistent-Fuzzy Digital PID controller, and the control loops were assembled in MATLAB with the respective transfer function of the plant. Finally, the results of comparing the two controllers on the level control process are presented.
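The paraconsistent analysis step can be sketched with the PAL2v degree formulas as commonly given in the literature, Dc = mu - lambda and Dct = mu + lambda - 1 for favorable/unfavorable evidence mu, lambda in [0, 1]; the fuzzy treatment and PID stages of the paper are not reproduced here:

```python
def pal2v_degrees(mu, lam):
    """PAL2v analysis: map favorable evidence mu and unfavorable evidence lam
    (both in [0, 1]) to a Degree of Certainty Dc and a Degree of Contradiction
    Dct, each in [-1, 1]."""
    dc = mu - lam          # +1: certainly true, -1: certainly false
    dct = mu + lam - 1.0   # +1: inconsistent evidence, -1: indeterminate
    return dc, dct
```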
Abstract: Diabetes Mellitus (diabetes) is a disease based on insulin hormone disorders that causes high blood glucose. Clinical findings indicate that diabetes can be diagnosed from electrophysiological signals obtained from the vital organs. Diabetic retinopathy is one of the most common eye diseases resulting from diabetes and is the leading cause of vision loss due to structural alteration of the retinal layer vessels. In this study, features of horizontal and vertical video-oculography (VOG) signals are used to classify non-proliferative and proliferative diabetic retinopathy. Twenty-five features are extracted by applying the discrete wavelet transform to VOG signals taken from 21 subjects. Two models, based on a multi-layer perceptron (MLP) and a radial basis function (RBF) network, are proposed for the diagnosis of diabetic retinopathy. The proposed models can also detect the level of the disease. We show the comparative classification performance of the proposed models; the results show that the RBF model (100%) achieves better classification performance than the MLP model (94%).
Abstract: A mechanical wave or vibration propagating through
granular media exhibits a specific signature in time. A coherent
pulse or wavefront arrives first with multiply scattered waves (coda)
arriving later. The coherent pulse is micro-structure independent, i.e., it depends only on the bulk properties of the disordered granular sample: its sound wave velocity, and hence its bulk and shear moduli. The coherent wavefront attenuates (decreases
in amplitude) and broadens with distance from its source. The
pulse attenuation and broadening effects are affected by disorder
(polydispersity; contrast in size of the granules) and have often been
attributed to dispersion and scattering. To study the effect of disorder
and initial amplitude (non-linearity) of the pulse imparted to the
system on the coherent wavefront, numerical simulations have been
carried out on one-dimensional sets of particles (granular chains).
The interaction force between the particles is given by a Hertzian
contact model. The sizes of particles have been selected randomly
from a Gaussian distribution, where the standard deviation of this
distribution is the relevant parameter that quantifies the effect of
disorder on the coherent wavefront. Since the coherent wavefront is
system configuration independent, ensemble averaging has been used
for improving the signal quality of the coherent pulse and removing
the multiply scattered waves. The results concerning the width of the
coherent wavefront have been formulated in terms of scaling laws. An
experimental set-up of photoelastic particles constituting a granular
chain is proposed to validate the numerical results.
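A minimal sketch of the ingredients named above, assuming a nondimensional effective elastic modulus: the Hertzian contact force between two particles, and radii drawn from a Gaussian whose standard deviation is the disorder parameter:

```python
import numpy as np

def hertz_force(delta, r1, r2, e_star=1.0):
    """Hertzian contact law: repulsive force for an overlap delta between two
    spheres of radii r1 and r2 (e_star: effective elastic modulus, assumed)."""
    if delta <= 0.0:
        return 0.0                              # no contact, no force
    r_eff = r1 * r2 / (r1 + r2)                 # effective radius of the pair
    return (4.0 / 3.0) * e_star * np.sqrt(r_eff) * delta ** 1.5

# Polydisperse chain: particle radii drawn from a Gaussian whose standard
# deviation quantifies the disorder (mean and scale are illustrative)
rng = np.random.default_rng(2)
radii = rng.normal(loc=1.0, scale=0.1, size=32)
```

The F proportional to delta^(3/2) stiffening is what makes the chain's wave propagation amplitude-dependent, which is the nonlinearity the simulations probe.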
Abstract: This paper presents a system designed for the wireless acquisition and recording of electrocardiogram (ECG) signals and for monitoring the heart's health during surgery. This wireless recording system allows us to visualize and monitor the state of the heart during surgery, even if the patient is moved from the operating theater to the post-anesthesia care unit. The acquired signal is transmitted via a Bluetooth unit to a PC, where the data are displayed, stored, and processed. To test the reliability of our system, a comparison is made between ECG signals processed by a conventional ECG monitoring system (Datex-Ohmeda) and by our wireless system. The comparison is based on the shape of the ECG signal, the duration of the QRS complex, the P and T waves, and the position of the ST segment with respect to the isoelectric line. The proposed system is presented and discussed. The results confirm that the use of Bluetooth during surgery affects neither the other devices in use nor the acquisition itself. Pre- and post-processing steps are briefly discussed, and experimental results are provided.
Abstract: In this study, a 50-W CO2 laser was used for the cladding of 304L powder on a stainless steel substrate, with a temperature sensor and an image monitoring system. The laser power, cladding speed, and focal position were adjusted to meet the requirements on workpiece flatness and mechanical properties. A numerical calculation based on ANSYS was used to analyze the temperature change of the moving heat source at different surface positions while coating the workpiece, and the effect of the process parameters on the melt pool size was discussed. The temperature at which the stainless steel powder at the nozzle outlet reacts with the laser was simulated as a process parameter. In the experiment, the difference in thermal conductivity in three-dimensional space was compared between single-layer and multi-layer cladding: in single-layer cladding the heat dissipates mainly through the steel plate, whereas in multi-layer cladding it dissipates through the workpiece itself. The relationship between the multi-layer cladding temperature and the profile was analyzed using the temperature signal from an IR pyrometer.
Abstract: Speech recognition makes an important contribution to new technologies in human-computer interaction. Today, there is a growing need to employ speech technology in daily life and business activities. However, speech recognition is a challenging task that requires several stages before the desired output is obtained. Among the components of automatic speech recognition (ASR) is the feature extraction process, which parameterizes the speech signal to produce the corresponding feature vectors. The feature extraction process aims at approximating the linguistic content conveyed by the input speech signal. In the speech processing field, there are several methods to extract speech features; Mel-Frequency Cepstral Coefficients (MFCC) is the most popular technique. It has long been observed that MFCC dominates well-known recognizers such as the Carnegie Mellon University (CMU) Sphinx and the Hidden Markov Model Toolkit (HTK). Hence, this paper focuses on the MFCC method as the standard choice for characterizing the different speech segments in order to obtain the language phonemes for the subsequent training and decoding steps. Owing to its good performance, previous studies show that MFCC also dominates Arabic ASR research. In this paper, we demonstrate MFCC extraction and the intermediate steps performed to obtain these coefficients using the HTK toolkit.
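A simplified single-frame MFCC pipeline (window, power spectrum, triangular mel filterbank, log, DCT-II) can be sketched as follows; the filter and coefficient counts are common defaults, not HTK's exact configuration:

```python
import numpy as np

def mfcc_frame(frame, sr, n_mels=26, n_ceps=13):
    """Simplified MFCC for one frame: Hamming window -> power spectrum ->
    triangular mel filterbank -> log energies -> DCT-II."""
    n = len(frame)
    spec = np.abs(np.fft.rfft(frame * np.hamming(n))) ** 2        # power spectrum
    freqs = np.fft.rfftfreq(n, 1.0 / sr)

    def mel(f):                                                   # Hz -> mel
        return 2595.0 * np.log10(1.0 + f / 700.0)

    mel_pts = np.linspace(mel(0.0), mel(sr / 2.0), n_mels + 2)
    hz_pts = 700.0 * (10.0 ** (mel_pts / 2595.0) - 1.0)           # mel -> Hz
    fbank = np.zeros((n_mels, len(freqs)))
    for m in range(n_mels):                                       # triangular filters
        lo, c, hi = hz_pts[m], hz_pts[m + 1], hz_pts[m + 2]
        fbank[m] = np.clip(np.minimum((freqs - lo) / (c - lo),
                                      (hi - freqs) / (hi - c)), 0.0, None)
    log_energies = np.log(fbank @ spec + 1e-10)                   # log mel energies
    k = np.arange(n_ceps)[:, None] * (np.arange(n_mels) + 0.5)[None, :]
    return np.cos(np.pi * k / n_mels) @ log_energies              # DCT-II

# Example frame: 25 ms of a 440 Hz tone at 16 kHz
coeffs = mfcc_frame(np.sin(2 * np.pi * 440 * np.arange(400) / 16000.0), 16000)
```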
Abstract: This paper presents the processing and analysis of ECG signals. The study is based on the wavelet transform and uses the MATLAB environment exclusively. It includes baseline wander removal and further de-noising through the wavelet transform; metrics such as the signal-to-noise ratio (SNR), peak signal-to-noise ratio (PSNR), and mean squared error (MSE) are used to assess the efficiency of the de-noising techniques. Feature extraction is subsequently performed, whereby signal features such as heart rate and rise and fall levels are extracted and the QRS complex is detected, which helps in classifying the ECG signal. Classification is the last step in the analysis of the ECG signals, and it is shown that these are successfully classified as normal rhythm or abnormal rhythm. The final results prove the adequacy of the wavelet transform for the analysis of ECG signals.
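A minimal sketch of the wavelet de-noising idea, using a one-level Haar transform with soft thresholding of the detail coefficients (the paper's wavelet family, decomposition depth, and threshold are not specified here and are illustrative):

```python
import numpy as np

def haar_dwt(x):
    """One-level Haar transform: approximation and detail coefficients
    (x is assumed to have even length)."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return a, d

def haar_idwt(a, d):
    """Inverse one-level Haar transform (perfect reconstruction)."""
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

def denoise(x, thresh):
    """Soft-threshold the detail coefficients, then reconstruct: small
    (noise-dominated) details are shrunk toward zero."""
    a, d = haar_dwt(x)
    d = np.sign(d) * np.maximum(np.abs(d) - thresh, 0.0)
    return haar_idwt(a, d)
```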
Abstract: One of the key problems in the analysis of Computed Tomography (CT) images is their poor contrast. Image enhancement can be used to improve the visual clarity and quality of the images or to provide a better representation for further processing. Contrast enhancement is one of the accepted methods of image enhancement in various medical applications and helps to visualize and extract details of brain infarctions, tumors, and cancers from the CT image. This paper presents a comparison of five contrast enhancement techniques suitable for CT images: Power Law Transformation, Logarithmic Transformation, Histogram Equalization, Contrast Stretching, and Laplacian Transformation. All these techniques are compared with each other to find which provides the better contrast for CT images, using the Peak Signal-to-Noise Ratio (PSNR) and Mean Square Error (MSE) as comparison parameters. Logarithmic Transformation provided the clearest and best-quality image compared with all the other techniques studied and achieved the highest PSNR. The comparison concludes with the better approach for future research, especially for mapping abnormalities in CT images resulting from brain injuries.
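The comparison metrics, and one of the compared enhancements, can be sketched directly from their definitions (an 8-bit peak value is assumed):

```python
import numpy as np

def mse(a, b):
    """Mean squared error between two images."""
    return float(np.mean((a.astype(float) - b.astype(float)) ** 2))

def psnr(a, b, max_val=255.0):
    """Peak signal-to-noise ratio in dB for images with peak value max_val."""
    e = mse(a, b)
    return float("inf") if e == 0.0 else 10.0 * np.log10(max_val ** 2 / e)

def log_transform(img, max_val=255.0):
    """Logarithmic transformation: expands contrast in dark regions while
    compressing bright ones; c is chosen so the peak maps to itself."""
    c = max_val / np.log(1.0 + max_val)
    return c * np.log(1.0 + img.astype(float))
```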
Abstract: Natural disasters are inevitable, and disasters such as floods, tsunamis, and tornadoes can be brutal, harsh, and devastating. In Australia, flooding is a major issue experienced by different parts of the country. In such crises, delays in evacuation can decide the life and death of the people living in those regions, and congestion management becomes a mammoth task if no steps are taken beforehand. In the past, many strategies were used to manage congestion in such circumstances, such as converting road shoulders into extra lanes or changing the road geometry by adding more lanes. However, expanding roads to resolve congestion problems is no longer considered a viable option; the authorities avoid it for many reasons, such as a lack of financial support and land space. Instead, they tend to focus on optimising the resources they already possess and on using traffic signals to overcome congestion problems. A traffic signal management strategy was considered a viable option to alleviate congestion problems in the City of Geelong, Victoria. An arterial road with signalised intersections is considered in this paper, and the traffic data required for modelling were collected from VicRoads. The traffic signalling software SIDRA was used to model the road with the information gathered from VicRoads. Various signal parameters are then utilised to assess and improve corridor performance in order to achieve the best possible Level of Service (LOS) for the arterial road.
Abstract: This paper presents a dynamic architecture for digital duty-cycle modulation control drivers. Compared with most oversampling digital modulation schemes encountered in industrial electronics, its novelty rests on a number of relevant merits, including: embedded positive and negative feedback loops, an internal modulation clock, structural simplicity, elementary building operators, no explicit need for samples of the nonlinear duty-cycle function when computing the switched modulated signal, and a minimum number of design parameters. A prototype digital control driver is synthesized and thoroughly tested within the MATLAB/Simulink workspace. Virtual simulation results and the performance obtained on a sample of relevant instrumentation and control systems are then presented, in order to show the feasibility, reliability, and versatility of the target applications of the proposed class of low-cost, high-quality digital control drivers in industrial electronics.
Abstract: Space-time adaptive processing (STAP) techniques have emerged as a key enabling technology for advanced airborne radar applications. In this paper, the notion of cognitive radar is extended to STAP, and cognitive STAP is discussed. The principle for improving the signal-to-clutter-plus-noise ratio (SCNR) based
on slow-time coding is given, and the corresponding optimization
algorithm based on cyclic and power-like algorithms is presented.
Numerical examples show the effectiveness of the proposed method.