Abstract: A series of experimental tests was conducted on a section of a 660 kW wind turbine blade to measure the pressure distribution of this model oscillating in plunging motion. In order to minimize the amount of data required to predict the aerodynamic loads of the airfoil, a General Regression Neural Network (GRNN) was trained using the measured experimental data. Once the network proved to be sufficiently accurate, it was used to predict the flow behavior of the airfoil for the desired conditions.
Results showed that, using only a small subset of the acquired data, the trained neural network was able to produce accurate predictions with minimal errors when compared with the corresponding measured values. Therefore, by employing this trained network, the aerodynamic coefficients of the plunging airfoil can be predicted accurately at different oscillation frequencies, amplitudes, and angles of attack, hence reducing the cost of tests while achieving acceptable accuracy.
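As a rough illustration of the GRNN idea used above (a Gaussian-kernel weighted average of training targets, in the style of Specht's formulation): this is a minimal sketch, not the authors' trained network; the kernel width `sigma` and the toy data are invented for the example.

```python
import math

def grnn_predict(train_x, train_y, x, sigma=0.2):
    """GRNN prediction: Gaussian-kernel weighted average of training targets.
    Points closer to the query x contribute more to the prediction."""
    weights = [math.exp(-sum((a - b) ** 2 for a, b in zip(xi, x)) / (2 * sigma ** 2))
               for xi in train_x]
    total = sum(weights)
    return sum(w * y for w, y in zip(weights, train_y)) / total

# toy data: the target is simply the sum of the two inputs
train_x = [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0), (1.0, 1.0)]
train_y = [0.0, 1.0, 1.0, 2.0]
pred = grnn_predict(train_x, train_y, (0.0, 1.0), sigma=0.2)
```

Because the GRNN stores the training data and needs only one smoothing parameter, it can be "trained" and queried with very few measured samples, which is the cost-saving property the abstract describes.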
Abstract: The system development life cycle (SDLC) is a process used during the development of any system. The SDLC consists of four main phases: analysis, design, implementation, and testing. During the analysis phase, a context diagram and data flow diagrams are used to produce the process model of a system. Consistency between the context diagram and the lower-level data flow diagrams is very important for streamlining the development process of a system. However, manually checking the consistency of the context diagram against lower-level data flow diagrams using a checklist is a time-consuming process. At the same time, the limited human ability to spot errors is one of the factors that influence the correctness and balancing of the diagrams. This paper presents a tool that automates the
consistency check between Data Flow Diagrams (DFDs)
based on the rules of DFDs. The tool serves two purposes: as
an editor to draw the diagrams and as a checker to check the
correctness of the diagrams drawn. The consistency check
from context diagram to lower-level data flow diagrams is
embedded inside the tool to overcome the manual checking
problem.
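A minimal sketch of the balancing rule such a checker enforces (not the paper's tool): the external flows declared on the parent (context) diagram must reappear on its decomposition. Flow names and entities here are hypothetical.

```python
def check_balancing(parent_flows, child_flows):
    """Balancing rule for DFD levelling: every boundary flow of the parent
    diagram must appear in the child diagram, and vice versa."""
    parent, child = set(parent_flows), set(child_flows)
    return {"missing_in_child": parent - child,
            "extra_in_child": child - parent}

# flows are (external entity, flow label) pairs -- invented example data
context = {("Customer", "order"), ("order_system", "invoice")}
level1 = {("Customer", "order"), ("order_system", "invoice"),
          ("order_system", "receipt")}
report = check_balancing(context, level1)
```

An empty `missing_in_child` set means every context-level flow survives decomposition; any entries in `extra_in_child` flag boundary flows the context diagram never declared.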
Abstract: It is well known that the channel capacity of a Multiple-Input Multiple-Output (MIMO) system increases with the number of antenna pairs between transmitter and receiver, but each additional antenna requires an expensive RF chain. To reduce the cost of RF chains, the Antenna Selection (AS) method can offer a good tradeoff between expense and performance. In a transmit AS system, Channel State Information (CSI) feedback is required to choose the best subset of antennas, and the delays and errors that occur in the feedback channel are the dominant factors degrading the performance of the AS method. This paper presents the concept of an AS method that obtains CSI from channel reciprocity instead of from feedback. The reciprocity technique can easily acquire CSI by utilizing the reverse channel, where the forward and reverse channels are assumed symmetric in time, frequency, and location. In this work, the capacity performance of a MIMO system using transmit AS with reciprocity channels is investigated on our own developed testbed. The obtained results show that the reciprocity technique offers capacity close to that of a system with perfect CSI and gains 0.9 to 2.2 bps/Hz of capacity over a system without AS at an SNR of 10 dB.
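To make the selection criterion concrete, here is a sketch of the simplest transmit-AS case, choosing a single transmit antenna: the column of the channel matrix with the largest gain maximizes C = log2(1 + SNR * ||h_j||^2). The channel matrix below is invented; the paper's testbed and subset sizes may differ.

```python
import math

def column_capacity(H, j, snr):
    """Capacity when only transmit antenna j is active:
    C = log2(1 + SNR * ||h_j||^2), h_j = j-th column of H."""
    gain = sum(abs(H[r][j]) ** 2 for r in range(len(H)))
    return math.log2(1 + snr * gain)

def select_antenna(H, snr):
    """Pick the transmit antenna whose channel column maximizes capacity."""
    caps = [column_capacity(H, j, snr) for j in range(len(H[0]))]
    best = max(range(len(caps)), key=caps.__getitem__)
    return best, caps[best]

# hypothetical 2x3 channel: 2 receive antennas, 3 transmit antennas
H = [[1 + 1j, 0.2 + 0.1j, 0.5 - 0.3j],
     [0.8 - 0.5j, 0.1 + 0.4j, 1.2 + 0.2j]]
best, cap = select_antenna(H, snr=10.0)
```

With reciprocity, H would be estimated on the reverse link rather than fed back, but the selection step itself is unchanged.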
Abstract: The quality of short term load forecasting can improve the efficiency of planning and operation of electric utilities. Artificial Neural Networks (ANNs) are employed for nonlinear short term load forecasting owing to their powerful nonlinear mapping capabilities. At present, there is no systematic methodology for the optimal design and training of an artificial neural network; one often has to resort to trial and error. This paper describes the process of developing three-layer feed-forward large neural networks for short-term load forecasting and then presents a heuristic search algorithm for an important task in this process, namely optimal network structure design. Particle Swarm Optimization (PSO) is used to develop the optimal large neural network structure and connection weights for the one-day-ahead electric load forecasting problem. PSO is a random optimization method based on swarm intelligence with strong global optimization capability. Employing PSO algorithms in the design and training of ANNs allows the ANN architecture and parameters to be optimized easily. The proposed method is applied to STLF for a local utility. Data are clustered according to differences in their characteristics. Special days are extracted from the normal training sets and handled separately. In this way, a solution is provided for all load types, including working days, weekends, and special days. The experimental results show that the proposed method optimized by PSO can speed up the learning of the network and improve the forecasting precision compared with the conventional Back Propagation (BP) method. Moreover, it is not only simple to compute but also practical and effective. It also provides a greater degree of accuracy in many cases and consistently gives lower percentage errors for the STLF problem compared to the BP method. Thus, it can be applied to automatically design an optimal load forecaster based on historical data.
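A minimal PSO sketch, assuming the standard inertia-weight update; it is not the paper's forecaster. In the paper's setting the objective would be the network's training error as a function of the flattened weight vector, so a simple sphere function stands in for it here, and all coefficients are conventional textbook values.

```python
import random

def pso_minimize(f, dim, n_particles=20, iters=200, seed=1):
    """Minimal particle swarm: each particle tracks its personal best and the
    swarm best; velocities blend inertia, cognitive and social pulls."""
    rng = random.Random(seed)
    w, c1, c2 = 0.7, 1.5, 1.5          # inertia and acceleration coefficients
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=pbest_val.__getitem__)
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# stand-in objective: replace with the network's training error over weights
sphere = lambda x: sum(v * v for v in x)
best, best_val = pso_minimize(sphere, dim=4)
```

Because PSO needs only objective evaluations, it can also search over discrete structure choices (hidden-layer sizes) by rounding coordinates, which is how structure and weights can share one search.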
Abstract: Flash memory has become an important storage device
in many embedded systems because of its high performance, low
power consumption and shock resistance. Multi-level cell (MLC) is
developed as an effective solution for reducing the cost and increasing
the storage density in recent years. However, most flash file systems cannot handle error correction sufficiently. To correct more errors for MLC, we implement a Reed-Solomon (RS) code in YAFFS, which is widely used as a flash-based file system. RS codes have a longer computing time, but their correction capability is much higher than that of Hamming codes.
Abstract: Adapting wireless devices to communicate within grid networks empowers us by providing a range of possibilities. These devices create a mechanism for consumers and publishers to create modern networks with or without peer device utilization. Emerging mobile networks create new challenges in the areas of reliability, security, and adaptability. In this paper, we propose a system
encompassing mobility management using AAA context transfer for
mobile grid networks. This system ultimately results in seamless task
processing and reduced packet loss, communication delays,
bandwidth, and errors.
Abstract: The log periodogram regression is widely used in empirical applications because of its simplicity (only a least squares regression is required to estimate the memory parameter, d), its good asymptotic properties, and its robustness to misspecification of the short term behavior of the series. However, the asymptotic distribution is a poor approximation of the (unknown) finite sample distribution if the sample size is small. Here the finite sample performance of different
nonparametric residual bootstrap procedures is analyzed when
applied to construct confidence intervals. In particular, in addition to
the basic residual bootstrap, the local and block bootstrap that might
adequately replicate the structure that may arise in the errors of the
regression are considered when the series shows weak dependence in
addition to the long memory component. A bias-correcting bootstrap to adjust the bias caused by that structure is also considered. Finally, the performance of the bootstrap in log periodogram regression based confidence intervals is assessed in different types of models, together with how its performance changes as the sample size increases.
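To fix ideas, here is a sketch of the basic residual bootstrap for a regression slope, applied to a toy linear model rather than the log periodogram regression itself (the data and noise level are invented; in the paper's setting the regressand would be the log periodogram and the slope would involve d).

```python
import random

def ols(x, y):
    """Least squares fit of y = a + b*x; returns (a, b)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return my - b * mx, b

def residual_bootstrap_ci(x, y, n_boot=1000, alpha=0.05, seed=7):
    """Basic residual bootstrap: resample fitted residuals with replacement,
    rebuild pseudo-samples, refit, and take a percentile interval for b."""
    rng = random.Random(seed)
    a, b = ols(x, y)
    resid = [yi - (a + b * xi) for xi, yi in zip(x, y)]
    slopes = []
    for _ in range(n_boot):
        y_star = [a + b * xi + rng.choice(resid) for xi in x]
        slopes.append(ols(x, y_star)[1])
    slopes.sort()
    lo = slopes[int(n_boot * alpha / 2)]
    hi = slopes[int(n_boot * (1 - alpha / 2)) - 1]
    return (lo, hi), b

rng = random.Random(0)
x = [i / 10 for i in range(50)]
y = [0.5 + 2.0 * xi + rng.gauss(0, 0.3) for xi in x]   # true slope 2.0
(lo, hi), b_hat = residual_bootstrap_ci(x, y)
```

The local and block variants discussed in the abstract differ only in *how* residuals are resampled (from nearby frequencies, or in blocks) so as to preserve weak dependence; the surrounding refit-and-percentile machinery is the same.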
Abstract: In contrast to existing methods, which do not take multiconnectivity in the broad sense of this term into account, we develop mathematical models and a highly effective combined (BIEM and FDM) numerical method for calculating the stationary and quasi-stationary temperature field of the profile part of a blade with convective cooling, with a view to implementation on a PC. The theoretical substantiation of these methods is given by appropriate theorems. To this end, converging quadrature processes have been developed, and error estimates have been obtained in terms of A. Zygmund continuity moduli. For the visualization of profiles, the least squares method with automatic conjecture, spline devices, smooth replenishment, and neural nets are used. The boundary conditions of heat exchange are determined from the solution of the corresponding integral equations and from empirical relationships. The reliability of the designed methods is confirmed by computational and experimental investigations of the heat and hydraulic characteristics of the first-stage nozzle blade of a gas turbine.
Abstract: This paper addresses the problem of trajectory
tracking control of an underactuated autonomous underwater vehicle
(AUV) in the horizontal plane. The underwater vehicle under
consideration is not actuated in the sway direction, and the system
matrices are not assumed to be diagonal and linear, as often found in
the literature. In addition, the effect of constant bias of environmental
disturbances is considered. Using backstepping techniques and the
tracking error dynamics, the system states are stabilized by forcing
the tracking errors to an arbitrarily small neighborhood of zero. The
effectiveness of the proposed control method is demonstrated through
numerical simulations. Simulations are carried out for an experimental vehicle following smooth, inertial, two-dimensional (2D) reference trajectories: a constant velocity trajectory (a circle maneuver with constant yaw rate) and a time-varying velocity trajectory (a sinusoidal path with sinusoidal yaw rate).
Abstract: This paper presents a non-invasive 3D eye tracker for optometric clinical applications. Measurements of biomechanical variables in clinical practice have many sources of error associated with traditional procedures such as the cover test (CT), near point of accommodation (NPC), eye ductions (ED), eye vergences (EG), and eye versions (ES). Ocular motility should always be tested, but all evaluations rely on subjective interpretation by practitioners; the results are based on clinical experience, and neither repeatability nor accuracy can be guaranteed. Optometric-lab is a tool with three analog video cameras triggered and synchronized by one A/D acquisition board. The variables globe rotation angle and velocity can be quantified. Data were recorded at 27 Hz, camera calibration was performed in a known volume, and image radial distortion adjustments were applied.
Abstract: Although the STL (stereolithography) file format is widely used as a de facto industry standard in the rapid prototyping industry due to its simplicity and its ability to tessellate almost all surfaces, there are always some defects and shortcomings in its usage, many of which are difficult to correct manually. When processing complex models, the size of the file and its defects grow extremely large, and correcting STL files therefore becomes difficult. In this paper, by optimizing the existing algorithms, the size of the files and the computer memory needed to process them are reduced. Regardless of the type and extent of the errors in STL files, the tail-to-head searching method and analysis of the nearest distance between tails and heads were used. As a result, STL models are sliced rapidly, and fully closed contours are produced effectively and without errors.
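A minimal sketch of the tail-to-head idea (not the paper's optimized algorithm): slice segments arrive unordered, and each step appends the segment whose head lies nearest the current tail, closing the contour. The square slice below is invented example data.

```python
def chain_segments(segments, tol=1e-6):
    """Tail-to-head search: repeatedly append the segment whose head is
    nearest the current tail; report whether the chain closes on itself.
    Each segment is a (head, tail) pair of 2D points."""
    segs = list(segments)
    contour = [segs.pop(0)]
    while segs:
        tail = contour[-1][1]
        j = min(range(len(segs)),
                key=lambda k: (segs[k][0][0] - tail[0]) ** 2 +
                              (segs[k][0][1] - tail[1]) ** 2)
        contour.append(segs.pop(j))
    closed = (abs(contour[-1][1][0] - contour[0][0][0]) < tol and
              abs(contour[-1][1][1] - contour[0][0][1]) < tol)
    return contour, closed

# unordered edges of a unit-square slice
segments = [((0, 0), (1, 0)), ((1, 1), (0, 1)),
            ((1, 0), (1, 1)), ((0, 1), (0, 0))]
contour, closed = chain_segments(segments)
```

Using nearest distance rather than exact equality is what lets the chaining tolerate the small gaps and duplicated vertices typical of defective STL files.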
Abstract: In this research, forming limit diagrams (FLDs) for super-tension sheet metals used in the automobile industry have been obtained. The strains exerted on the sheet metals have been measured with four different methods, and the errors of each method are also reported. These methods have been compared with one another, and the most efficient and economical way of extracting the strains exerted on sheet metals is introduced. In this paper, the total error and uncertainty of FLD extraction procedures are derived. Determining the measurement uncertainty in extracting the FLD is of great importance in the design and analysis of sheet metal forming processes.
Abstract: Ratio and regression type estimators have been used by previous authors to estimate a population mean for the principal variable from samples in which both auxiliary x and principal y variable data are available. However, missing data are a common problem in statistical analyses with real data. Ratio and regression type estimators have also been used for imputing values of missing y data. In this paper, six new ratio and regression type estimators are proposed for imputing values for any missing y data and estimating a population mean for y from samples with missing x and/or y data. A simulation study has been conducted to compare the six ratio and regression type estimators with a previous estimator of Rueda. Two population sizes, N = 1,000 and 5,000, have been considered, with sample sizes of 10% and 30% and with correlation coefficients between the population variables X and Y of 0.5 and 0.8. In the simulations, 10 and 40 percent of sample y values and 10 and 40 percent of sample x values were randomly designated as missing. The new ratio and regression type estimators give similar mean absolute percentage errors that are smaller than those of the Rueda estimator in all cases. The new estimators give a large reduction in errors for the case of 40% missing y values and a sampling fraction of 30%.
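The classical forms these estimators build on can be sketched as follows; this is the textbook ratio and regression imputation, not any of the paper's six new estimators, and the data are invented.

```python
def ratio_impute(x_obs, y_obs, x_missing_y):
    """Ratio-type imputation: y_hat_i = (sum(y)/sum(x)) * x_i."""
    r = sum(y_obs) / sum(x_obs)
    return [r * xi for xi in x_missing_y]

def regression_impute(x_obs, y_obs, x_missing_y):
    """Regression-type imputation: y_hat_i = y_bar + b * (x_i - x_bar),
    with b the least squares slope on the complete pairs."""
    n = len(x_obs)
    mx, my = sum(x_obs) / n, sum(y_obs) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x_obs, y_obs)) / \
        sum((xi - mx) ** 2 for xi in x_obs)
    return [my + b * (xi - mx) for xi in x_missing_y]

# complete (x, y) pairs, plus a unit where only x = 5.0 was recorded
x_obs, y_obs = [2.0, 4.0, 6.0, 8.0], [4.1, 7.9, 12.2, 15.8]
filled = ratio_impute(x_obs, y_obs, [5.0])
filled_reg = regression_impute(x_obs, y_obs, [5.0])
```

Both exploit the x-y correlation: the stronger the correlation (0.5 and 0.8 in the simulations), the closer the imputed values sit to the unobserved y.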
Abstract: In this study, a mathematical model was proposed and its accuracy assessed for predicting the growth of Pseudomonas aeruginosa and rhamnolipid production under nitrogen-limiting (sodium nitrate) fed-batch fermentation. All of the parameters used in this model were obtained individually, without using any data from the literature.
The overall growth kinetics of the strain were evaluated using a dual-parallel substrate Monod equation fitted to several sets of batch experimental data. Fed-batch data under different glycerol (as the sole carbon source, C/N=10) concentrations and feed flow rates were used to establish the proposed fed-batch model and its remaining parameters. In order to verify the accuracy of the proposed model, several verification experiments were performed over a wide range of initial glycerol concentrations. While the results showed an acceptable prediction for rhamnolipid production (less than 10% error), for biomass prediction the errors were less than 23%. It was also found that rhamnolipid production by P. aeruginosa was more sensitive at low glycerol concentrations.
Based on the findings of this work, it was concluded that the proposed model can effectively be employed for rhamnolipid production by this strain under fed-batch fermentation on up to 80 g l-1 glycerol.
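For orientation, the shape of a fed-batch Monod balance can be sketched as below. This is a single-substrate simplification with invented parameter values, not the paper's fitted dual-parallel substrate model.

```python
def simulate_fed_batch(mu_max=0.2, Ks=2.0, Yxs=0.5, s_feed=80.0, F=0.05,
                       x0=0.1, s0=10.0, v0=1.0, t_end=24.0, dt=0.01):
    """Euler integration of a single-substrate Monod fed-batch balance:
      dX/dt = mu*X - (F/V)*X
      dS/dt = (F/V)*(Sf - S) - mu*X/Yxs
      dV/dt = F,   with  mu = mu_max * S / (Ks + S)."""
    x, s, v, t = x0, s0, v0, 0.0
    while t < t_end:
        mu = mu_max * s / (Ks + s)
        d = F / v                      # dilution rate
        x += dt * (mu * x - d * x)
        s += dt * (d * (s_feed - s) - mu * x / Yxs)
        s = max(s, 0.0)                # substrate cannot go negative
        v += dt * F
        t += dt
    return x, s, v

x_end, s_end, v_end = simulate_fed_batch()
```

A dual-parallel substrate version would replace `mu` with a sum or product of Monod terms over both substrates; the verification experiments in the abstract compare trajectories like these against measured biomass and rhamnolipid data.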
Abstract: Segmentation, filtering out of measurement errors and
identification of breakpoints are integral parts of any analysis of
microarray data for the detection of copy number variation (CNV).
Existing algorithms designed for these tasks have had some successes
in the past, but they tend to be O(N2) in either computation time or
memory requirement, or both, and the rapid advance of microarray
resolution has practically rendered such algorithms useless. Here we propose an algorithm, SAD, that is much faster, requires far less memory (O(N) in both computation time and memory requirement), and offers higher accuracy. The two key ingredients of SAD are the
fundamental assumption in statistics that measurement errors are
normally distributed and the mathematical relation that the product of
two Gaussians is another Gaussian (function). We have produced a
computer program for analyzing CNV based on SAD. In addition to
being fast and small it offers two important features: quantitative
statistics for predictions and, with only two user-decided parameters,
ease of use. Its speed shows little dependence on genomic profile.
Running on an average modern computer, it completes CNV analyses for a 262 thousand-probe array in ~1 second and for a 1.8 million-probe array in 9 seconds.
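The product-of-Gaussians relation the abstract names is a closed-form identity, sketched below; the example values are invented, and this is only the mathematical ingredient, not the SAD algorithm itself.

```python
def gaussian_product(m1, v1, m2, v2):
    """The product of two Gaussian densities N(m1, v1) * N(m2, v2) is,
    up to a scale factor, another Gaussian with
      1/v = 1/v1 + 1/v2   (precisions add)
      m   = v * (m1/v1 + m2/v2)   (precision-weighted mean)."""
    v = 1.0 / (1.0 / v1 + 1.0 / v2)
    m = v * (m1 / v1 + m2 / v2)
    return m, v

# fusing two noisy probe measurements assumed to share one true level
m, v = gaussian_product(1.0, 0.04, 1.2, 0.04)
```

Because each fusion step is O(1) and segments can be merged left to right in one pass, this identity is what lets the whole analysis stay O(N) in both time and memory.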
Abstract: The hidden-point bar method is useful in many
surveying applications. The method involves determining the
coordinates of a hidden point as a function of horizontal and vertical
angles measured to three fixed points on the bar. Using these
measurements, the procedure involves calculating the slant angles,
the distances from the station to the fixed points, the coordinates of
the fixed points, and then the coordinates of the hidden point. The
propagation of the measurement errors in this complex process has
not been fully investigated in the literature. This paper evaluates the effect of the bar geometry on the position accuracy of the hidden point, which depends on the measurement errors of the horizontal and vertical angles. The results are used to establish some guidelines
regarding the inclination angle of the bar and the location of the
observed points that provide the best accuracy.
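The geometric core of the method, reduced to 2D for illustration: the hidden point lies on the rigid bar's line, a known distance beyond the visible points, so coordinate errors on the visible points propagate, amplified, to the hidden point. This sketch is not the paper's full slant-angle derivation; the coordinates and noise level are invented.

```python
import math, random

def extend_to_hidden(a, b, d_bp):
    """Collinear extrapolation: hidden point P lies on line A->B,
    a known distance d_bp beyond B (the bar is rigid and straight)."""
    ab = math.hypot(b[0] - a[0], b[1] - a[1])
    ux, uy = (b[0] - a[0]) / ab, (b[1] - a[1]) / ab
    return (b[0] + d_bp * ux, b[1] + d_bp * uy)

def monte_carlo_mean(a, b, d_bp, sigma=0.002, n=2000, seed=3):
    """Empirical error propagation: jitter the visible points with
    Gaussian coordinate noise and average the resulting P positions."""
    rng = random.Random(seed)
    xs, ys = 0.0, 0.0
    for _ in range(n):
        an = (a[0] + rng.gauss(0, sigma), a[1] + rng.gauss(0, sigma))
        bn = (b[0] + rng.gauss(0, sigma), b[1] + rng.gauss(0, sigma))
        px, py = extend_to_hidden(an, bn, d_bp)
        xs, ys = xs + px, ys + py
    return (xs / n, ys / n)

p = extend_to_hidden((0.0, 0.0), (1.0, 0.0), 0.5)
p_mean = monte_carlo_mean((0.0, 0.0), (1.0, 0.0), 0.5)
```

Repeating the Monte Carlo run for different bar inclinations and extension ratios is one simple way to reproduce the kind of geometry-versus-accuracy guidelines the paper derives analytically.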
Abstract: Text mining is the application of knowledge discovery techniques to unstructured text, also termed knowledge discovery in text (KDT) or text data mining. The decision tree approach is most useful for classification problems. With this technique, a tree is constructed to model the classification process. There are two basic steps in the technique: building the tree and applying the tree to the database. This paper describes a proposed C5.0 classifier that applies rulesets, cross-validation, and boosting to the original C5.0 in order to reduce the error ratio. The feasibility and the benefits of the proposed approach are demonstrated on a medical data set, hypothyroid. It is shown that the performance of a classifier on the training cases from which it was constructed gives a poor estimate of its accuracy; by sampling or by using a separate test file, the classifier is instead evaluated on cases that were not used to build it. If the cases in hypothyroid.data and hypothyroid.test were shuffled and divided into a new 2772-case training set and a 1000-case test set, C5.0 might construct a different classifier with a lower or higher error rate on the test cases. An important feature of See5 is its ability to generate classifiers called rulesets. The ruleset has an error rate of 0.5% on the test cases. The standard errors of the means provide an estimate of the variability of the results. One way to get a more reliable estimate of predictive accuracy is f-fold cross-validation: the error rate of a classifier produced from all the cases is estimated as the ratio of the total number of errors on the hold-out cases to the total number of cases. The Boost option with x trials instructs See5 to construct up to x classifiers in this manner. Trials over numerous datasets, large and small, show that on average 10-classifier boosting reduces the error rate for test cases by about 25%.
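The f-fold cross-validation estimate described above can be sketched as follows; the trivial majority-vote "classifier" and the toy labels stand in for C5.0, which this example does not implement.

```python
def cross_validation_error(cases, labels, train_fn, f=10):
    """f-fold cross-validation: each block of cases is held out exactly once;
    the error rate is total errors on hold-out cases / total cases."""
    n = len(cases)
    errors = 0
    for i in range(f):
        hold = [j for j in range(n) if j % f == i]      # held-out block
        train_idx = [j for j in range(n) if j % f != i]
        clf = train_fn([cases[j] for j in train_idx],
                       [labels[j] for j in train_idx])
        errors += sum(clf(cases[j]) != labels[j] for j in hold)
    return errors / n

# stand-in learner: always predicts the majority training label
def majority_trainer(xs, ys):
    top = max(set(ys), key=ys.count)
    return lambda x: top

cases = list(range(20))
labels = ["neg"] * 15 + ["pos"] * 5
err = cross_validation_error(cases, labels, majority_trainer, f=5)
```

Every case is scored exactly once while held out, which is why the ratio of total hold-out errors to total cases is a far less optimistic estimate than resubstitution error.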
Abstract: The ideal sinc filter, ignoring the noise statistics, is often
applied for generating an arbitrary sample of a bandlimited signal by
using the uniformly sampled data. In this article, an optimal interpolator is proposed; it reaches a minimum mean square error (MMSE)
at its output in the presence of noise. The resulting interpolator is
thus a Wiener filter, and both the optimal infinite impulse response
(IIR) and finite impulse response (FIR) filters are presented. The mean square errors (MSEs) for interpolators with impulse responses of different lengths are obtained by computer simulations; the results show that the MSEs of the proposed interpolators with a reasonable length are improved by about 0.4 dB under flat power spectra in a noisy environment with a signal-to-noise ratio (SNR) equal to 10 dB. As expected, the results also demonstrate improvements in the MSEs for various fractional delays of the optimal interpolator over the ideal sinc filter at a fixed impulse response length.
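For reference, the baseline the paper improves on, a truncated-sinc fractional-delay FIR, can be sketched as below (here Hann-windowed to tame truncation; the tap count and signal are invented). The paper's Wiener interpolator would instead shape these taps using the noise statistics.

```python
import math

def sinc_fir(delay, n_taps=33):
    """Fractional-delay FIR from the ideal sinc, Hann-windowed and centered;
    the filter approximates a delay of (n_taps-1)/2 + delay samples."""
    c = (n_taps - 1) / 2
    taps = []
    for k in range(n_taps):
        t = k - c - delay
        s = 1.0 if t == 0 else math.sin(math.pi * t) / (math.pi * t)
        w = 0.5 - 0.5 * math.cos(2 * math.pi * k / (n_taps - 1))  # Hann window
        taps.append(s * w)
    return taps

def frac_delay_output(x, n, taps):
    """Convolution output y[n] = sum_k h[k] x[n-k]."""
    return sum(h * x[n - k] for k, h in enumerate(taps))

# bandlimited test signal: a sinusoid at 0.1 cycles/sample
x = [math.sin(0.2 * math.pi * n) for n in range(100)]
taps = sinc_fir(0.5)
y = frac_delay_output(x, 60, taps)
target = math.sin(0.2 * math.pi * (60 - 16 - 0.5))   # the delayed sample
```

In noise-free, perfectly bandlimited conditions this is near-optimal; the MMSE/Wiener design in the abstract wins precisely when noise makes the ideal sinc response suboptimal.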
Abstract: This paper proposes an efficient learning method for layered neural networks based on the selection of training data and on the input characteristics of an output layer unit. Compared with more recent neural network models such as pulse neural networks and quantum neuro-computation, the multilayer network is widely used due to its simple structure. When the learning objects are complicated, problems such as unsuccessful learning or the significant time required for learning remain unsolved. Focusing on the input data during the learning stage, we undertook an experiment to identify the data that produce large errors and interfere with the learning process. Our method divides the learning process into several stages. In general, the input characteristics of an output layer unit oscillate during the learning process for complicated problems. The multi-stage learning method proposed by the authors for function approximation problems classifies the learning data in a phased manner, focusing on their learnability prior to learning in the multilayer neural network, and the validity of the multi-stage learning method is demonstrated. Specifically, this paper verifies by computer experiments that both learning accuracy and learning time are improved over the BP method used as the learning rule within the multi-stage learning method. During learning, oscillatory phenomena in the learning curve play an important role in learning performance. The authors also discuss the mechanisms by which these oscillatory phenomena occur. Furthermore, by observing behaviors during learning, the authors discuss the reasons that the errors of some data remain large even after learning.
Abstract: Switched-mode converters now play a significant role in modern society. Their operation is often crucial in various electrical applications affecting everyday life. Therefore, the quality of
the converters needs to be reliably verified. Recent studies have
shown that the converters can be fully characterized by a set of
frequency responses which can be efficiently used to validate the
proper operation of the converters. Consequently, several methods
have been proposed to measure the frequency responses fast and
accurately. Most often correlation-based techniques have been applied.
The presented measurement methods are highly sensitive to
external errors and system nonlinearities. This fact has been often
forgotten and the necessary uncertainty analysis of the measured
responses has been neglected. This paper presents a simple approach
to analyze the noise and nonlinearities in the frequency-response
measurements of switched-mode converters. Coherence analysis is
applied to form a confidence interval characterizing the noise and
nonlinearities involved in the measurements. The presented method is
verified by practical measurements from a high-frequency switched-mode converter.
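A sketch of the coherence estimate underlying such an analysis: magnitude-squared coherence from segment-averaged spectra, gamma^2(f) = |<X Y*>|^2 / (<|X|^2><|Y|^2>). The plain DFT, segment length, and test signals below are invented for illustration, not taken from the paper's measurement setup.

```python
import cmath, random

def dft(x):
    """Plain O(n^2) discrete Fourier transform (adequate for short segments)."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * f * t / n) for t in range(n))
            for f in range(n)]

def coherence(x, y, seg=32):
    """Magnitude-squared coherence from spectra averaged over segments.
    Values near 1 indicate a clean linear input-output relation at that bin;
    drops flag noise or nonlinearity corrupting the response there."""
    n_seg = len(x) // seg
    pxy = [0j] * seg
    pxx = [0.0] * seg
    pyy = [0.0] * seg
    for s in range(n_seg):
        X = dft(x[s * seg:(s + 1) * seg])
        Y = dft(y[s * seg:(s + 1) * seg])
        for f in range(seg):
            pxy[f] += X[f] * Y[f].conjugate()
            pxx[f] += abs(X[f]) ** 2
            pyy[f] += abs(Y[f]) ** 2
    return [abs(pxy[f]) ** 2 / (pxx[f] * pyy[f] + 1e-30) for f in range(seg)]

rng = random.Random(0)
x = [rng.gauss(0, 1) for _ in range(256)]      # excitation
y_clean = x[:]                                  # ideal linear "system"
y_noisy = [v + rng.gauss(0, 1) for v in x]      # same system, heavy noise
g_clean = coherence(x, y_clean)
g_noisy = coherence(x, y_noisy)
```

A coherence-derived confidence interval then simply widens wherever gamma^2 drops, which is how the method flags frequency-response points corrupted by noise or nonlinearity.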