Abstract: The mitigation of crop loss due to damaging freezes
requires accurate air temperature prediction models. Previous work
established that the Ward-style artificial neural network (ANN) is a
suitable tool for developing such models. The current research
focused on developing ANN models with reduced average prediction
error by increasing the number of distinct observations used in
training, adding additional input terms that describe the date of an
observation, increasing the duration of prior weather data included in
each observation, and reexamining the number of hidden nodes used
in the network. Models were created to predict air temperature at
hourly intervals from one to 12 hours ahead. Each ANN model,
consisting of a network architecture and set of associated parameters,
was evaluated by instantiating and training 30 networks and
calculating the mean absolute error (MAE) of the resulting networks
for a given set of input patterns. The inclusion of seasonal input terms,
up to 24 hours of prior weather information, and a larger number of
processing nodes were some of the improvements that reduced
average prediction error compared to previous research across all
horizons. For example, the four-hour MAE of 1.40°C was 0.20°C, or
12.5%, less than the previous model. Prediction MAEs eight and 12
hours ahead improved by 0.17°C and 0.16°C, respectively,
improvements of 7.4% and 5.9% over the existing model at these
horizons. Networks instantiating the same model but with different
initial random weights often led to different prediction errors. These
results strongly suggest that ANN model developers should consider
instantiating and training multiple networks with different initial
weights to establish preferred model parameters.
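The evaluation protocol described in this abstract can be sketched in a few lines. The following is a hypothetical illustration rather than the paper's code: `train_and_predict` is a stand-in routine that mimics run-to-run variation from different initial random weights.

```python
import random

def mean_absolute_error(predictions, targets):
    # MAE over one trained network's predictions
    return sum(abs(p - t) for p, t in zip(predictions, targets)) / len(targets)

def train_and_predict(seed, targets):
    # Placeholder "network": adds seed-dependent noise to the targets to
    # mimic variation caused by different initial random weights.
    rng = random.Random(seed)
    return [t + rng.gauss(0.0, 1.5) for t in targets]

def evaluate_model(targets, n_networks=30):
    # Instantiate and train n_networks networks, then summarize their MAEs.
    maes = [mean_absolute_error(train_and_predict(seed, targets), targets)
            for seed in range(n_networks)]
    return sum(maes) / len(maes), min(maes), max(maes)

targets = [float(t) for t in range(24)]  # e.g. 24 hourly temperatures (made up)
avg_mae, best, worst = evaluate_model(targets)
print(best <= avg_mae <= worst)  # True: individual runs spread around the mean
```

The spread between `best` and `worst` is exactly the run-to-run variability that motivates the abstract's recommendation to train multiple networks per model.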
Abstract: One method for detecting the target position
error in laser tracking systems is the use of Four Quadrant (4Q)
detectors. If the coordinates of the target center are computed through the
usual relations among the detector outputs, the results are nonlinear and
depend on the shape and size of the target and on its position on the
detector screen. In this paper we design a neural-network-based
algorithm that calculates the coordinates of the target center in laser
tracking systems from detector outputs obtained through visual
modeling. With this method, the results, apart from the part attributable
to the detector's intrinsic limitations, are linear and independent of the
target's shape and size.
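The "usual relations of the detector outputs" referred to above are the standard four-quadrant centroid formulas, sketched below. The neural-network replacement itself is not shown, and the quadrant labeling (A top-right, B top-left, C bottom-left, D bottom-right) is one common convention assumed here:

```python
def quad_center(a, b, c, d):
    # Classic 4Q position estimate from the four quadrant intensities.
    total = a + b + c + d
    if total == 0:
        raise ValueError("no light on detector")
    x = ((a + d) - (b + c)) / total   # right half minus left half
    y = ((a + b) - (c + d)) / total   # top half minus bottom half
    return x, y

print(quad_center(1.0, 1.0, 1.0, 1.0))  # (0.0, 0.0): spot centered
```

For a real spot these estimates are linear only near the detector center, which is exactly the nonlinearity the abstract's neural-network approach targets.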
Abstract: A method is presented for obtaining the error probability for block codes. The method is based on the eigenvalue-eigenvector properties of the code correlation matrix. It is found that, under a unitary transformation and for an additive white Gaussian noise environment, the performance evaluation of a block code becomes a one-dimensional problem in which only one eigenvalue and its corresponding eigenvector are needed in the computation. The obtained error rate results show remarkable agreement between simulations and analysis.
Abstract: An evolutionary method whose selection and recombination
operations are based on generalization error-bounds of
support vector machine (SVM) can select a subset of potentially
informative genes for an SVM classifier very efficiently [7]. In this
paper, we will use the derivative of error-bound (first-order criteria)
to select and recombine gene features in the evolutionary process,
and compare the performance of the derivative of error-bound with
the error-bound itself (zero-order) in the evolutionary process. We
also investigate several error-bounds and their derivatives to compare
the performance, and find the best criterion for gene selection
and classification. We use 7 cancer-related human gene expression
datasets to evaluate the performance of the zero-order and first-order
criteria of error-bounds. Although both criteria follow the same strategy
in theory, the experimental results identify the best criterion
for microarray gene expression data.
Abstract: To compress 2D images while also improving bit error performance and enhancing the images, a new scheme called the Iterative Cellular-Turbo System (IC-TS) is introduced. In IC-TS, the original image is partitioned into 2^N quantization levels, where N denotes the number of bit planes. Each of the N bit planes is then coded by a Turbo encoder and transmitted over an Additive White Gaussian Noise (AWGN) channel. At the receiver side, the bit planes are re-assembled taking into account the neighborhood relationships of pixels in 2-D images. Each of the noisy bit-plane values of the image is evaluated iteratively using the IC-TS structure, which is composed of an equalization block, the Iterative Cellular Image Processing Algorithm (ICIPA), and a Turbo decoder. In IC-TS, there is an iterative feedback link between ICIPA and the Turbo decoder. ICIPA uses the mean and standard deviation of the estimated values in each pixel neighborhood. It yields highly satisfactory results in both Bit Error Rate (BER) and image enhancement performance for Signal-to-Noise Ratio (SNR) values below -1 dB, compared to the traditional Turbo coding scheme and 2-D filtering applied separately. Compression can also be achieved with the IC-TS system: less memory storage is used and the data rate is increased by up to N-1 times simply by choosing any number of bit slices, at the cost of resolution. Hence, it is concluded that the IC-TS system is a promising approach for 2-D image transmission, recovery of noisy signals, and image compression.
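The bit-plane partitioning step (2^N levels split into N binary planes) and the resolution-for-rate trade-off of keeping only some planes can be sketched as follows; the Turbo coding and ICIPA stages are not modeled here:

```python
def to_bit_planes(pixels, n_bits=8):
    # Split each pixel into its N binary planes, plane k = bit k of each pixel.
    return [[(p >> k) & 1 for p in pixels] for k in range(n_bits)]

def from_bit_planes(planes, keep=None):
    # Re-assemble pixels; keeping only the `keep` most significant planes
    # trades resolution for a smaller number of transmitted bit slices.
    n = len(planes)
    kept = range(n - keep, n) if keep else range(n)
    return [sum(planes[k][i] << k for k in kept) for i in range(len(planes[0]))]

pixels = [0, 17, 128, 255]               # tiny made-up 8-bit "image"
planes = to_bit_planes(pixels)
print(from_bit_planes(planes))           # [0, 17, 128, 255]: lossless
print(from_bit_planes(planes, keep=4))   # [0, 16, 128, 240]: top 4 planes only
```

Dropping low-order planes quantizes the gray levels coarsely, which is the "sacrificing resolution" compression the abstract describes.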
Abstract: In this paper, the detection of a fault in the Global Positioning System (GPS) measurement is addressed. The class of faults considered is a bias in the GPS pseudorange measurements. This bias is modeled as an unknown constant. The fault could be the result of a receiver fault or a signal fault such as multipath error. A bias bank is constructed based on a set of possible fault hypotheses. Initially, there is an equal probability of occurrence for each of the biases in the bank. Subsequently, as the measurements are processed, the probability of occurrence for each of the biases is sequentially updated. The fault with a probability approaching unity is declared as the current fault in the GPS measurement. The residual formed from the GPS and Inertial Measurement Unit (IMU) measurements is used to update the probability of each fault. Results are presented to show the performance of the proposed algorithm.
Abstract: A measurement system for pH array sensors is
introduced to increase accuracy and reduce non-ideal effects.
An array readout circuit reads eight potentiometric
signals at the same time and obtains an average value. Deviating
or extreme values are counteracted, so the output voltage is
relatively stable. The errors in measuring pH buffer solutions are
clearly decreased with this measurement system, and the non-ideal
effects of drift and hysteresis are lowered to 1.638 mV/hr and 1.118 mV,
respectively. The efficiency and stability are better than those of a single
sensor, and the overall sensing characteristics are improved.
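As a rough illustration of the averaging idea, here is a sketch in which the extreme readings are dropped before averaging; the exact rejection rule used by the paper's readout circuit is an assumption here (the hardware may counteract deviations differently):

```python
def array_voltage(readings):
    # Average the array after discarding the lowest and highest readings,
    # so a single deviating sensor cannot skew the reported voltage.
    if len(readings) < 3:
        raise ValueError("need at least three sensors to reject extremes")
    trimmed = sorted(readings)[1:-1]
    return sum(trimmed) / len(trimmed)

# Seven consistent readings (mV) plus one outlier from a drifting sensor:
v = array_voltage([410, 412, 411, 409, 413, 410, 412, 460])
print(round(v, 1))  # 411.3
```

A plain mean of the same readings would be pulled toward the 460 mV outlier, which is the instability the trimmed average avoids.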
Abstract: Maximal Ratio Combining (MRC) is considered the most complex combining technique, as it requires channel coefficient estimation. It results in the lowest bit error rate (BER) of all combining techniques. However, the BER starts to deteriorate as errors are introduced into the channel coefficient estimates. A novel combining technique, termed Generalized Maximal Ratio Combining (GMRC) with a polynomial kernel, yields a BER identical to that of MRC with perfect channel estimation, and a lower BER in the presence of channel estimation errors. We show that GMRC outperforms the optimal MRC scheme in general, and we hereinafter introduce it to the scientific community as a new "supraoptimal" algorithm. Since diversity combining is especially effective in small femto- and pico-cells, internet-associated wireless peripheral systems stand to benefit most from GMRC. As a result, many spinoff applications can be made to IP-based 4th-generation networks.
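For reference, classical MRC, the baseline the abstract compares against, weights each diversity branch by the conjugate of its channel coefficient before summing, which maximizes the post-combining SNR when the channel estimates are exact. This minimal BPSK sketch uses made-up channel and noise values; GMRC itself (the paper's polynomial-kernel generalization) is not shown:

```python
def mrc_combine(received, channel_estimates):
    # Weight each branch by the conjugate channel estimate and sum.
    return sum(r * h.conjugate() for r, h in zip(received, channel_estimates))

def detect_bpsk(symbol):
    # Hard decision on the real part of the combined symbol.
    return 1 if symbol.real >= 0 else -1

bit = -1                                        # transmitted BPSK symbol
channels = [1 + 1j, 0.5 - 0.5j, -0.8 + 0.6j]    # assumed branch coefficients
noise = [0.1 - 0.2j, -0.05 + 0.1j, 0.2 + 0.05j] # assumed additive noise
received = [bit * h + n for h, n in zip(channels, noise)]
print(detect_bpsk(mrc_combine(received, channels)))  # -1: bit recovered
```

After combining, the signal term becomes `bit * sum(|h|^2)`, so the branches add coherently while the noise does not; errors in the `channel_estimates` argument break this coherence, which is where GMRC claims its advantage.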
Abstract: In metal cutting industries, mathematical/statistical
models are typically used to predict tool replacement time. These
off-line methods usually result in less than optimum replacement
time thereby either wasting resources or causing quality problems.
The few online real-time methods that have been proposed use indirect
measurement techniques and are prone to similar errors. Our idea is based on
identifying the optimal replacement time using an electronic nose to
detect the airborne compounds released when tool wear reaches
a chemical substrate doped into the tool material during
fabrication. The study investigates the feasibility of the idea, possible
doping materials and methods, along with data stream mining
techniques for detecting and monitoring the different phases of tool
wear.
Abstract: The Petri net tool INA is well known to the
Petri net community. However, it lacks a graphical environment in which to
create and analyse INA models. Building a modelling tool for
design and analysis from scratch (for the INA tool, for example) is
generally a prohibitive task. The meta-modelling approach is useful for
dealing with such problems, since it allows the
formalisms themselves to be modelled. In this paper, we propose an approach based
on the combined use of meta-modelling and Graph Grammars to
automatically generate a visual modelling tool for INA for analysis
purposes. In our approach, the UML Class diagram formalism is
used to define a meta-model of INA models. The meta-modelling
tool ATOM3 is used to generate a visual modelling tool according to
the proposed INA meta-model. We have also proposed a graph
grammar to automatically generate the INA description of
graphically specified Petri net models. This allows the user to avoid
the errors that arise when this description is written manually. The INA tool
is then used to perform the simulation and analysis of the resulting INA
description. Our environment is illustrated through an example.
Abstract: The primary objective of this paper was to construct a
"kinematic parameter-independent modeling of three-axis machine
tools for geometric error measurement" technique. Improving the
geometric error accuracy of three-axis machine tools is one of
the core machine tool techniques. This paper first applied the
traditional HTM method to derive the geometric error model for
three-axis machine tools. This geometric error model was related to the
three-axis kinematic parameters, with the overall errors expressed relative
to the machine reference coordinate system. Given that the
measurement of the linear axis in this model should be made on the ideal
motion axis, there were practical difficulties. Through a measurement
method that consolidates the translational and rotational errors in the
geometric error model, we simplified the three-axis geometric error
model to a kinematic parameter-independent model. Finally, based on
a new measurement method corresponding to this error model, we
established a truly practical and more accurate error measuring
technique for three-axis machine tools.
Abstract: Flexible macroblock ordering (FMO), adopted in the
H.264 standard, allows all macroblocks (MBs) in a frame to be partitioned
into separate groups of MBs called Slice Groups (SGs). FMO can not
only support error resilience, but also control the size of video packets
for different network types. However, it is well known that the number
of bits required for encoding the frame is increased by adopting FMO.
In this paper, we propose a novel algorithm that can reduce the bitrate
overhead caused by utilizing FMO. In the proposed algorithm, all MBs
are grouped in SGs based on the similarity of the transform
coefficients. Experimental results show that our algorithm can reduce
the bitrate as compared with conventional FMO.
Abstract: Measurement of image compression quality is important for image processing applications. In this paper, we propose an objective image quality assessment for measuring the quality of gray-scale compressed images that correlates well with subjective quality measurement (MOS) and requires the least computation time. The new objective image quality measurement is developed from several fundamental objective measurements to evaluate the quality of images compressed with JPEG and JPEG2000. The reliability between each fundamental objective measurement and the subjective measurement (MOS) is determined. From the experimental results, we found that the Maximum Difference measurement (MD) and a newly proposed measurement, Structural Content Laplacian Mean Square Error (SCLMSE), are the measurements best suited to evaluating the quality of JPEG2000 and JPEG compressed images, respectively. In addition, the MD and SCLMSE measurements are scaled to make them equivalent to MOS, rating compressed image quality from 1 to 5 (unacceptable to excellent quality).
Abstract: The clinical usefulness of heart rate variability is
limited by the range of Holter monitoring software available. These
software algorithms require a normal sinus rhythm to accurately
acquire heart rate variability (HRV) measures in the frequency
domain. Premature ventricular contractions (PVCs), more
commonly referred to as ectopic beats and frequent in heart failure,
hinder this analysis and introduce ambiguity. This investigation
demonstrates an algorithm that automatically detects ectopic beats by
analyzing discrete wavelet transform coefficients. Two techniques
for filtering the ectopic beats out of the RR signal and replacing them are
compared: one applies wavelet hard thresholding,
and the other applies linear interpolation to replace the ectopic
cycles. The results demonstrate, through simulation and through signals
acquired from a 24-hour ambulatory recorder, that these techniques can
accurately detect PVCs and remove the noise and leakage effects
produced by ectopic cycles, retaining smooth spectra with
minimal error.
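The linear-interpolation replacement technique can be sketched as follows, assuming the wavelet-based detection step has already flagged the ectopic beats (the flagging itself is not shown, and the RR values are made up):

```python
def interpolate_ectopic(rr, ectopic_flags):
    # Replace each flagged run of RR intervals by linear interpolation
    # between the surrounding normal intervals.
    rr = list(rr)
    i = 0
    while i < len(rr):
        if ectopic_flags[i]:
            start = i
            while i < len(rr) and ectopic_flags[i]:
                i += 1
            left = rr[start - 1] if start > 0 else rr[i]
            right = rr[i] if i < len(rr) else left
            span = i - start + 1
            for k in range(start, i):
                frac = (k - start + 1) / span
                rr[k] = left + (right - left) * frac
        else:
            i += 1
    return rr

rr = [800, 810, 400, 1200, 820, 805]   # RR intervals in ms; beats 2-3 ectopic
flags = [False, False, True, True, False, False]
print(interpolate_ectopic(rr, flags))
```

The short-long compensatory pair (400, 1200) is replaced by values ramping smoothly from 810 toward 820 ms, which is what keeps the resulting HRV spectrum free of the leakage the abstract mentions.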
Abstract: A new OTA-based logarithmic-control variable gain
current amplifier (LCCA) is presented. It consists of two Operational
Transconductance Amplifiers (OTAs) and two PMOS transistors
biased in the weak inversion region. The circuit operates from a 0.6 V DC
power supply and consumes 0.6 μW. The linear-in-dB controllable
output range is 43 dB with a maximum error of less than 0.5 dB. The
functionality of the proposed design was confirmed using HSPICE in
a 0.35 μm CMOS process technology.
Abstract: We investigate efficient spreading codes for transmitter-based techniques in code division multiple access (CDMA) systems. The channel is considered to be known at the transmitter, which is usual in a time division duplex (TDD) system where the channel is assumed to be the same on the uplink and downlink. For such a TDD/CDMA system, both bitwise and blockwise multiuser transmission schemes are taken up, where complexity is transferred to the transmitter side so that the receiver has minimum complexity. Different spreading codes are considered at the transmitter to spread the signal efficiently over the entire spectrum. The bit error rate (BER) curves portray the efficiency of the codes in the presence of multiple access interference (MAI) as well as inter-symbol interference (ISI).
Abstract: IEEE has designed 802.11i protocol to address the
security issues in wireless local area networks. Formal analysis is
important to ensure that protocols work properly without having
to resort to tedious testing and debugging, which can only show the
presence of errors, never their absence. In this paper, we present
the formal verification of an abstract protocol model of 802.11i.
We translate the 802.11i protocol into the Strand Space Model and
then prove the authentication property of the resulting model using
the Strand Space formalism. The intruder in our model is imbued
with powerful capabilities, and the repercussions of possible attacks are
evaluated. Our analysis proves that the authentication of 802.11i is
not compromised in the presented model. We further demonstrate
how changes in our model will yield a successful man-in-the-middle
attack.
Abstract: This paper deals with wireless relay communication
systems in which multiple sources transmit information to the
destination node with the help of multiple relays. We consider a
signal forwarding technique based on the minimum mean-square
error (MMSE) approach with multiple antennas for each relay. A
source-relay-destination joint design strategy is proposed with power
constraints at the destination and the source nodes. Simulation results
confirm that the proposed joint design method improves the average
MSE performance compared with that of conventional MMSE relaying
schemes.
Abstract: This paper proposes an effective adaptation learning
algorithm based on artificial neural networks for speed control of an
induction motor assumed to operate in a high-performance drives
environment. The control structure consists of a neural network
controller and an algorithm for changing the NN weights so that
the motor speed accurately tracks the reference command. This
paper also uses a realistic and practical scheme to
estimate and adaptively learn the noise content in the speed-load
torque characteristic of the motor. The validity of the proposed
controller is verified through a laboratory implementation and
through computer simulations with MATLAB software. The process is
also tested for its tracking property using different types of reference
signals. The performance and robustness of the proposed control
scheme have been evaluated under a variety of operating conditions of the
induction motor drive. The obtained results demonstrate the
effectiveness of the proposed control scheme: system performance,
in terms of both steady-state speed error and dynamic conditions,
was found to be excellent, with no overshoot.
Abstract: In this paper, a mixed method combining an evolutionary and a conventional technique is proposed for the reduction of Single Input Single Output (SISO) continuous systems into a Reduced Order Model (ROM). In the conventional technique, the combined advantages of the Mihailov stability criterion and the Continued Fraction Expansions (CFE) technique are exploited: the reduced denominator polynomial is derived using the Mihailov stability criterion, and the numerator is obtained by matching the quotients of the Cauer second form of continued fraction expansions. Then, retaining the numerator polynomial, the denominator polynomial is recalculated by an evolutionary technique. In the evolutionary method, the recently proposed Differential Evolution (DE) optimization technique is employed. The DE method is based on minimizing the Integral Squared Error (ISE) between the transient responses of the original higher-order model and the reduced-order model for a unit step input. The proposed method is illustrated through a numerical example and compared with a ROM whose numerator and denominator polynomials are both obtained by the conventional method, to show its superiority.
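The DE objective described here, the ISE between the step responses of the full and reduced models, can be illustrated with a small numerical sketch. First-order responses with made-up parameters stand in for actual transfer-function simulation:

```python
import math

def step_response_first_order(gain, tau, t):
    # Unit-step response of a first-order system gain / (tau*s + 1).
    return gain * (1.0 - math.exp(-t / tau))

def ise(original, reduced, t_end=10.0, dt=0.01):
    # Integral Squared Error between two responses, approximated by a
    # discrete sum over the transient.
    total, t = 0.0, 0.0
    while t <= t_end:
        e = original(t) - reduced(t)
        total += e * e * dt
        t += dt
    return total

full = lambda t: step_response_first_order(1.0, 2.0, t)
good = lambda t: step_response_first_order(1.0, 2.1, t)  # close reduced model
bad = lambda t: step_response_first_order(1.0, 4.0, t)   # poor reduced model
print(ise(full, good) < ise(full, bad))  # True: DE would prefer the closer model
```

In the paper's method, DE searches over the reduced denominator coefficients to minimize exactly this kind of objective, with the numerator held fixed from the CFE step.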