Abstract: Reduction of Single Input Single Output (SISO) continuous systems into Reduced Order Models (ROMs), using a conventional and an evolutionary technique, is presented in this paper. The conventional technique combines the advantages of the Mihailov stability criterion and the continued fraction expansion (CFE) technique: the reduced denominator polynomial is derived using the Mihailov stability criterion, and the numerator is obtained by matching the quotients of the Cauer second form of continued fraction expansion. In the evolutionary technique, Particle Swarm Optimization (PSO) is employed to reduce the higher-order model. The PSO method is based on minimizing the Integral Squared Error (ISE) between the transient responses of the original higher-order model and the reduced-order model for a unit step input. Both methods are illustrated through a numerical example.
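As an illustrative sketch of the evolutionary technique (not the paper's own numerical example, which is not reproduced here), the following Python code reduces a hypothetical third-order system G(s) = 24/((s+2)(s+3)(s+4)) to a first-order model k/(s+p) by minimizing the ISE between unit-step responses with a standard PSO. The system, the simulation step, and all PSO parameters are assumptions.

```python
import math, random

DT, T_END = 0.005, 5.0
STEPS = int(T_END / DT)

def step_response_original():
    # Euler simulation of 24 / (s^3 + 9 s^2 + 26 s + 24) in controllable
    # canonical form, driven by a unit step input.
    x1 = x2 = x3 = 0.0
    ys = []
    for _ in range(STEPS):
        dx1, dx2 = x2, x3
        dx3 = -24.0 * x1 - 26.0 * x2 - 9.0 * x3 + 1.0
        x1 += DT * dx1; x2 += DT * dx2; x3 += DT * dx3
        ys.append(24.0 * x1)
    return ys

Y_ORIG = step_response_original()

def ise(p, k):
    # Integral Squared Error between the step responses; the first-order
    # ROM response (k/p)(1 - exp(-pt)) is known in closed form.
    total = 0.0
    for i, y in enumerate(Y_ORIG):
        t = (i + 1) * DT
        yr = (k / p) * (1.0 - math.exp(-p * t))
        total += (y - yr) ** 2 * DT
    return total

def pso(n=30, iters=60, lo=0.1, hi=10.0, seed=1):
    # Standard PSO over the two ROM parameters (p, k).
    random.seed(seed)
    pos = [[random.uniform(lo, hi), random.uniform(lo, hi)] for _ in range(n)]
    vel = [[0.0, 0.0] for _ in range(n)]
    pbest = [list(x) for x in pos]
    pbest_f = [ise(*x) for x in pos]
    g = min(range(n), key=lambda i: pbest_f[i])
    gbest, gbest_f = list(pbest[g]), pbest_f[g]
    for _ in range(iters):
        for i in range(n):
            for d in range(2):
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * random.random() * (pbest[i][d] - pos[i][d])
                             + 1.5 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            f = ise(*pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = list(pos[i]), f
                if f < gbest_f:
                    gbest, gbest_f = list(pos[i]), f
    return gbest, gbest_f

(p, k), best_ise = pso()
```

Since the original system has unit DC gain, a good reduced model ends up with k/p close to 1; higher-order examples follow the same pattern with more ROM parameters in the swarm.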
Abstract: In this paper we present simulation results for the
application of a bandwidth efficient algorithm (mapping algorithm)
to an image transmission system. This system considers three
different real valued transforms to generate energy compact
coefficients. Results are first presented for gray-scale and color image
transmission in the absence of noise. The system is seen to perform
best when the discrete cosine transform is used. Moreover, the
performance of the system is governed more by the size of the
transform block than by the number of coefficients transmitted or
the number of bits used to represent each coefficient. Similar results
are obtained in the presence of additive white Gaussian noise. The
varying values of the bit error rate have little or no impact on
the performance of the algorithm. Optimum results are obtained
with an 8×8 transform block, transmitting 15 coefficients from
each block using 8 bits each.
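The block-transform pipeline evaluated above can be sketched as follows. This is a minimal, hypothetical Python illustration of the best-performing configuration (8×8 DCT block, 15 coefficients, 8 bits each); the uniform quantizer range and the zigzag-style coefficient ordering are assumptions, not taken from the paper.

```python
import math

N = 8  # 8x8 transform block

# Orthonormal DCT-II basis matrix.
C = [[(math.sqrt(1 / N) if u == 0 else math.sqrt(2 / N))
      * math.cos((2 * x + 1) * u * math.pi / (2 * N))
      for x in range(N)] for u in range(N)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(N)) for j in range(N)]
            for i in range(N)]

def transpose(A):
    return [list(r) for r in zip(*A)]

def dct2(X):  return matmul(matmul(C, X), transpose(C))
def idct2(D): return matmul(matmul(transpose(C), D), C)

# Zigzag-style ordering of the 64 coefficient positions (low frequencies first).
ZIGZAG = sorted(((u, v) for u in range(N) for v in range(N)),
                key=lambda t: (t[0] + t[1], t[1] if (t[0] + t[1]) % 2 else t[0]))

def transmit_block(X, n_coeff=15, bits=8):
    """Keep n_coeff low-frequency coefficients, quantize each to `bits` bits,
    discard the rest, and reconstruct the block (noise-free channel)."""
    D = dct2(X)
    levels, lim = 2 ** bits - 1, 2040.0  # |coeff| <= 8*255 for 8-bit pixels
    Dq = [[0.0] * N for _ in range(N)]
    for (u, v) in ZIGZAG[:n_coeff]:
        q = round((D[u][v] + lim) / (2 * lim) * levels)  # uniform quantizer
        Dq[u][v] = q / levels * 2 * lim - lim
    return idct2(Dq)
```

For a smooth 8×8 block, the 15 retained low-frequency coefficients capture nearly all of the energy, so the reconstruction error is dominated by quantization rather than truncation.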
Abstract: In this paper, we propose a dual version of the first
threshold ring signature scheme based on error-correcting code proposed
by Aguilar et. al in [1]. Our scheme uses an improvement of
Véron zero-knowledge identification scheme, which provide smaller
public and private key sizes and better computation complexity than
the Stern one. This scheme is secure in the random oracle model.
Abstract: This paper presents a generalized form of the
mechanistic deconvolution technique (GMD) to modeling image sensors applicable in various pan–tilt planes of view. The mechanistic deconvolution technique (UMD) is modified with the
given angles of a pan–tilt plane of view to formulate constraint parameters and characterize distortion effects, and thereby, determine
the corrected image data. As a result, no experimental setup or calibration is required. Owing to the mechanistic nature of
the sensor model, the necessity for the sensor image plane to be
orthogonal to its z-axis is eliminated, and the dependency on image data is reduced. An experiment was constructed to evaluate the
accuracy of a model created by GMD and its insensitivity to changes in sensor properties and in pan and tilt angles. This was compared
with a pre-calibrated model and a model created by UMD using two sensors with different specifications. It achieved similar accuracy
with one-seventh the number of iterations and attained a mean error lower by a factor of 2.4 compared to the pre-calibrated and
UMD models respectively. The model has also shown itself to be robust and, in comparison to the pre-calibrated and UMD models, improved the accuracy significantly.
Abstract: In this document, we propose a robust conceptual
strategy to improve the tolerance of logic CMOS circuits to manufacturing defects, and thus their reliability, in order to enable the use of future CMOS
technology nodes. This strategy combines several types of design
approaches: DFR (Design for Reliability), hardware-redundancy
fault-tolerance techniques such as TMR (Triple Modular Redundancy) for hard-error
tolerance, and DFT (Design for Testability). Results on the largest ISCAS and ITC benchmark circuits show that our approach
considerably improves reliability while reducing the key cost factors, namely area overhead and fault probability.
Abstract: On-board Error Detection and Correction (EDAC)
devices aim to secure data transmitted between the central
processing unit (CPU) of a satellite onboard computer and its local
memory. This paper presents a comparison of the performance of
four low complexity EDAC techniques for application in Random
Access Memories (RAMs) on-board small satellites. The
performance of a newly proposed EDAC architecture is measured
and compared with three different EDAC strategies, using the same
FPGA technology. A statistical analysis of single-event upset (SEU)
and multiple-bit upset (MBU) activity in commercial memories
onboard Alsat-1 is given for a period of 8 years.
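The abstract does not name the four EDAC techniques compared; as a generic illustration of a low-complexity EDAC scheme of the kind used in on-board RAM, the Python sketch below implements a Hamming(7,4) single-error-correcting code. Adding an overall parity bit would extend it to SEC-DED, able to detect (but not correct) the double-bit upsets counted in MBU statistics.

```python
def encode(d):
    """Encode 4 data bits as a 7-bit Hamming codeword (positions 1..7,
    parity bits at positions 1, 2 and 4)."""
    c = [0] * 8  # index 0 unused, for 1-based positions
    c[3], c[5], c[6], c[7] = d
    c[1] = c[3] ^ c[5] ^ c[7]
    c[2] = c[3] ^ c[6] ^ c[7]
    c[4] = c[5] ^ c[6] ^ c[7]
    return c[1:]

def decode(cw):
    """Return (corrected data bits, error position), where position 0 means
    no single-bit error was detected."""
    c = [0] + list(cw)
    s1 = c[1] ^ c[3] ^ c[5] ^ c[7]
    s2 = c[2] ^ c[3] ^ c[6] ^ c[7]
    s4 = c[4] ^ c[5] ^ c[6] ^ c[7]
    pos = s1 + 2 * s2 + 4 * s4  # syndrome = position of the flipped bit
    if pos:
        c[pos] ^= 1  # correct the single-event upset
    return [c[3], c[5], c[6], c[7]], pos
```

Any single bit flip in the stored codeword is located by the syndrome and corrected on read-back, which is the basic mechanism behind memory scrubbing on small satellites.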
Abstract: The electromagnetic spectrum is a natural resource,
and hence well-organized usage of this limited resource is a
necessity for better communication. The present static frequency
allocation schemes cannot accommodate demands of the rapidly
increasing number of higher data rate services. Therefore, dynamic
usage of the spectrum must be distinguished from the static usage to
increase the availability of frequency spectrum. Cognitive radio is not
a single piece of apparatus but it is a technology that can incorporate
components spread across a network. It offers great promise for
improving system efficiency, spectrum utilization, more effective
applications, reduction in interference and reduced complexity of
usage for users. Cognitive radio is aware of its environment,
internal state, and location, and autonomously adjusts its operations
to achieve designed objectives. It first senses its spectral environment
over a wide frequency band, and then adapts the parameters to
maximize spectrum efficiency with high performance. This paper
focuses on the analysis of the Bit Error Rate (BER) in cognitive radio
using the Particle Swarm Optimization algorithm. The BER is analyzed
and interpreted both theoretically and practically, in terms of its
advantages and drawbacks and of how it affects the efficiency and
performance of the communication system.
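The abstract analyzes BER but does not state the modulation or channel model; as a hypothetical baseline, the sketch below computes the theoretical BER of coherent BPSK over an AWGN channel and checks it against a small Monte Carlo simulation. In a PSO-based setup, such a BER evaluation would serve as the fitness function the swarm minimizes.

```python
import math, random

def ber_bpsk(ebn0_db):
    """Theoretical BER of coherent BPSK over AWGN: Pb = 0.5*erfc(sqrt(Eb/N0))."""
    return 0.5 * math.erfc(math.sqrt(10 ** (ebn0_db / 10)))

def ber_monte_carlo(ebn0_db, nbits=200_000, seed=0):
    """Empirical BER: unit-energy antipodal symbols plus Gaussian noise."""
    random.seed(seed)
    sigma = math.sqrt(0.5 / 10 ** (ebn0_db / 10))  # N0/2 per real dimension
    errors = 0
    for _ in range(nbits):
        s = 1.0 if random.random() < 0.5 else -1.0
        r = s + random.gauss(0.0, sigma)
        errors += (r > 0) != (s > 0)
    return errors / nbits
```

The simulated error rate tracks the closed-form curve, and the BER falls steeply as Eb/N0 grows, which is why spectrum decisions that preserve SNR dominate link performance.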
Abstract: A number of automated shot-change detection
methods for indexing a video sequence to facilitate browsing and
retrieval have been proposed in recent years. This paper focuses
on the simulation of video shot boundary detection using a
color histogram method in which scaling of the histogram
metric is an added feature. The difference between the histograms of
two consecutive frames is evaluated, yielding the metric. The
metric is then scaled to avoid ambiguity and to enable
the choice of an apt threshold for any type of video, including those
with minor errors due to flashlights, camera motion, etc. Two sample videos
with a resolution of 352 × 240 pixels are used with the color
histogram approach on uncompressed media. An attempt is made
at the retrieval of color video. The simulation is performed for
abrupt changes in video and yields recall and precision values of 90%.
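The histogram-difference-and-scaling procedure described above can be sketched in a few lines of Python. The bin count, scale factor and threshold below are illustrative assumptions; the abstract does not specify them.

```python
def histogram(frame, bins=8):
    """Per-channel color histogram of a frame given as a list of (r, g, b)
    pixels with 8-bit components."""
    h = [0] * (3 * bins)
    for r, g, b in frame:
        h[r * bins // 256] += 1
        h[bins + g * bins // 256] += 1
        h[2 * bins + b * bins // 256] += 1
    return h

def shot_boundaries(frames, scale=100.0, threshold=30.0):
    """Flag a cut wherever the scaled histogram difference of consecutive
    frames exceeds the threshold.  Scaling normalizes the metric by frame
    size so one threshold suits videos of any resolution (the scaling step
    described in the abstract)."""
    cuts = []
    for i in range(1, len(frames)):
        h1, h2 = histogram(frames[i - 1]), histogram(frames[i])
        diff = sum(abs(a - b) for a, b in zip(h1, h2))
        metric = scale * diff / (2 * 3 * len(frames[i]))  # in [0, scale]
        if metric > threshold:
            cuts.append(i)
    return cuts
```

On synthetic frames (e.g. a run of solid red frames followed by solid blue ones), the abrupt change is the only flagged boundary, while identical consecutive frames produce a zero metric.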
Abstract: In this paper we report a study aimed at determining
the effects of animation on usability and appeal of educational
software user interfaces. Specifically, the study compares 3
interfaces developed for the Mathsigner™ program: a static
interface, an interface with highlighting/sound feedback, and an
interface that incorporates five Disney animation principles. The
main objectives of the comparative study were to: (1) determine
which interface is the most effective for the target users of
Mathsigner™ (i.e., children ages 5-11), and (2) identify any Gender
and Age differences in using the three interfaces. To accomplish
these goals we have designed an experiment consisting of a
cognitive walkthrough and a survey with rating questions. Sixteen
children ages 7-11 participated in the study, ten males and six
females. Results showed no significant interface effect on user task
performance (e.g., task completion time and number of errors);
however, interface differences were seen in rating of appeal, with
the animated interface rated more 'likeable' than the other two.
Task performance and rating of appeal were not affected
significantly by Gender or Age of the subjects.
Abstract: Security has been an important issue and concern in
smart home systems. Since smart home networks consist of a wide range
of wired and wireless devices, there is a possibility that illegal access
to restricted data or devices may occur. Password-based
authentication is widely used to identify authorized users, because this
method is cheap, easy and quite accurate. In this paper, a neural
network is trained to store the passwords instead of using a verification
table. This method is useful in solving security problems that
occur in some authentication systems. The conventional way to
train the network, Backpropagation (BPN), requires a long
training time. Hence, a faster training algorithm, Resilient
Backpropagation (RPROP), is embedded in the MLP neural
network to accelerate the training process. For the data, 200
sets of user IDs and passwords were created and encoded into binary
as the input. Simulations were carried out to evaluate the
performance for different numbers of hidden neurons and combinations
of transfer functions. Mean Square Error (MSE), training time and
number of epochs are used to determine the network performance.
From the results obtained, using Tansig and Purelin in the hidden and
output layers with 250 hidden neurons gave the best performance. As
a result, a password-based user authentication system for smart homes
using a neural network was developed successfully.
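The speed advantage of RPROP over plain backpropagation comes from its sign-based, per-weight step-size adaptation. The sketch below shows a minimal RPROP-style update on a simple quadratic objective; embedding it in a full MLP with Tansig/Purelin layers, as the paper does, only changes where the gradient comes from. All hyperparameters are conventional defaults, not values from the paper.

```python
def rprop_minimize(grad, w0, steps=80, d0=0.1, dmin=1e-6, dmax=50.0,
                   eta_plus=1.2, eta_minus=0.5):
    """Minimal RPROP-style minimizer for a weight vector.  Only the SIGN of
    each partial derivative is used; the per-weight step size grows by
    eta_plus while the sign is stable and shrinks by eta_minus when it
    flips, which is what makes RPROP converge faster than plain
    gradient-descent backpropagation."""
    w = list(w0)
    delta = [d0] * len(w)
    prev = [0.0] * len(w)
    for _ in range(steps):
        g = grad(w)
        for i in range(len(w)):
            if g[i] * prev[i] > 0:
                delta[i] = min(delta[i] * eta_plus, dmax)    # sign stable: grow
            elif g[i] * prev[i] < 0:
                delta[i] = max(delta[i] * eta_minus, dmin)   # sign flip: shrink
            if g[i] > 0:
                w[i] -= delta[i]
            elif g[i] < 0:
                w[i] += delta[i]
            prev[i] = g[i]
    return w

# Toy objective f(w) = (w0 - 3)^2 + (w1 + 1)^2; its gradient is supplied
# directly, standing in for the backpropagated gradient of an MLP loss.
w = rprop_minimize(lambda w: [2 * (w[0] - 3), 2 * (w[1] + 1)], [0.0, 0.0])
```

Near the minimum the gradient sign flips on nearly every step, so the step sizes shrink geometrically and the iterate settles quickly, without any learning-rate tuning.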
Abstract: An adaptive software reliability prediction model
using evolutionary connectionist approach based on Recurrent Radial
Basis Function architecture is proposed. Based on the currently
available software failure time data, the Fuzzy Min-Max algorithm is
used to globally optimize the number k of Gaussian nodes. The
corresponding optimized neural network architecture is iteratively
and dynamically reconfigured in real-time as new actual failure time
data arrives. The performance of our proposed approach has been
tested on sixteen real-time software failure datasets. Numerical results
show that our proposed approach is robust across different software
projects and has better next-step predictability
than existing neural network models for failure
time prediction.
Abstract: This article establishes partial evaluation indexes and their
standards for sports aerobics, comprising the following 12 indexes: health
vitality, coordination, flexibility, accuracy, pace, endurance, elasticity,
self-confidence, form, control, uniformity and musicality. The
three-layer BP artificial neural network model including input layer,
hidden layer and output layer is established. The result shows that the
model can well reflect the non-linear relationship between the
performance of 12 indexes and the overall performance. The predicted
value of each sample is very close to the true value, with a relative
error fluctuating around 5%, and the network training is successful.
It shows that BP network has high prediction accuracy and good
generalization capacity if being applied in sports aerobics performance
evaluation after effective training.
Abstract: The use of mechanical simulation (in particular, finite element analysis) requires the management of assumptions in order to analyse a real, complex system. In finite element analysis (FEA), two modeling steps require assumptions in order to carry out the computations and obtain results: the building of the physical model and the building of the simulation model. The simplification assumptions made on the analysed system in these two steps can generate two kinds of errors: physical modeling errors (mathematical model, domain simplifications, material properties, boundary conditions and loads) and mesh discretization errors. This paper proposes an adaptive meshing method based on an h-adaptive scheme combined with an error estimator in order to choose the mesh of the simulation model, allowing the cost and the quality of the finite element analysis to be controlled.
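A one-dimensional toy version of the proposed h-adaptive loop can be sketched as follows: an a-posteriori error indicator is evaluated on every element, and elements whose indicator exceeds the prescribed tolerance are bisected until the estimate is under control everywhere. The midpoint-deviation indicator used here is a simplified stand-in for the paper's error estimator.

```python
def h_adapt(f, a, b, tol=1e-3, max_passes=30):
    """h-adaptive refinement sketch on [a, b]: estimate the error on each
    element from the midpoint deviation of the linear interpolant (a simple
    a-posteriori error indicator) and bisect every element whose indicator
    exceeds the tolerance.  Returns the final list of mesh nodes."""
    mesh = [a, b]
    for _ in range(max_passes):
        refined, new_mesh = False, [mesh[0]]
        for x0, x1 in zip(mesh, mesh[1:]):
            xm = 0.5 * (x0 + x1)
            err = abs(f(xm) - 0.5 * (f(x0) + f(x1)))  # interpolation error
            if err > tol:
                new_mesh.append(xm)  # bisect: h-refinement
                refined = True
            new_mesh.append(x1)
        mesh = new_mesh
        if not refined:
            return mesh  # every element satisfies the tolerance
    return mesh
```

Applied to a function with a steep interior layer, the loop concentrates elements near the layer, controlling accuracy without uniformly refining the whole mesh, which is exactly the cost/quality trade-off the adaptive scheme targets.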
Abstract: The purpose of this research is to develop and apply the
RSCMAC to enhance the dynamic accuracy of Global Positioning
System (GPS). GPS devices provide services of accurate positioning,
speed detection and a highly precise time standard for over 98% of
the earth's surface. The overall operation of the Global Positioning System includes
24 GPS satellites in space; signal transmission that includes 2
frequency carrier waves (Link 1 and Link 2) and 2 sets of random
telegraphic codes (C/A code and P code), and on-earth monitoring stations
or client GPS receivers. With only 4 satellites, the client position
and elevation can be determined rapidly. The more satellites
received, the more accurately the position can be decoded. Currently, the
standard positioning accuracy of the simplified GPS receiver is greatly
increased, but owing to satellite clock error, tropospheric
delay and ionospheric delay, current measurement
accuracy is at the level of 5~15 m. To increase dynamic GPS
positioning accuracy, most researchers mainly use inertial navigation
system (INS) and installation of other sensors or maps for the
assistance. This research utilizes the RSCMAC advantages of fast
learning, learning convergence assurance, solving capability of
time-related dynamic system problems with the static positioning
calibration structure to improve and increase the GPS dynamic
accuracy. Increased GPS dynamic positioning accuracy is
achieved by using the RSCMAC system with GPS receivers to collect
dynamic error data for error prediction, followed by using the
predicted error to correct the GPS dynamic positioning data. The
ultimate purpose of this research is to improve the dynamic positioning
error of inexpensive GPS receivers; the economic benefits will be
enhanced as the accuracy increases.
Abstract: Missing data is a persistent problem in almost all
areas of empirical research. The missing data must be treated very
carefully, as data plays a fundamental role in every analysis.
Improper treatment can distort the analysis or generate biased results.
In this paper, we compare and contrast various imputation techniques
on missing data sets and make an empirical evaluation of these
methods so as to construct quality software models. Our empirical
study is based on NASA's two public datasets, KC4 and KC1. The
complete datasets, of 125 cases and 2107 cases respectively, contain
no missing values. Each dataset is used to create
Missing at Random (MAR) data. Listwise Deletion (LD), Mean
Substitution (MS), interpolation, regression with an error term and
Expectation-Maximization (EM) approaches were used to compare
the effects of the various techniques.
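Three of the compared imputation techniques have straightforward reference implementations; the Python sketch below shows Listwise Deletion, Mean Substitution, and linear interpolation (MAR generation, regression with an error term, and EM are omitted). Missing values are represented by None.

```python
import statistics

def listwise_deletion(rows):
    """Drop every case (row) that contains at least one missing value."""
    return [r for r in rows if None not in r]

def mean_substitution(xs):
    """Replace each missing value with the mean of the observed values."""
    m = statistics.mean(v for v in xs if v is not None)
    return [m if v is None else v for v in xs]

def interpolate(xs):
    """Linear interpolation between the nearest observed neighbours; edge
    gaps are filled with the nearest observed value."""
    out = list(xs)
    obs = [i for i, v in enumerate(xs) if v is not None]
    for i, v in enumerate(xs):
        if v is None:
            lo = max((j for j in obs if j < i), default=None)
            hi = min((j for j in obs if j > i), default=None)
            if lo is None:
                out[i] = xs[hi]
            elif hi is None:
                out[i] = xs[lo]
            else:
                w = (i - lo) / (hi - lo)
                out[i] = xs[lo] * (1 - w) + xs[hi] * w
    return out
```

Listwise deletion shrinks the sample (and can bias it when data are not missing completely at random), whereas substitution and interpolation keep all cases at the cost of understating variance; that trade-off is what the empirical comparison measures.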
Abstract: This paper presents a study of the impact of reference
node locations on the accuracy of indoor positioning systems. In
particular, we analyze the localization accuracy of the RSSI database
mapping techniques deployed on IEEE 802.15.4 wireless
networks. The results show that the locations of the reference nodes
used in the positioning systems affect the signal propagation
characteristics in the service area. This in turn affects the accuracy of the wireless indoor positioning system. We found that suitable
locations of reference nodes could reduce the positioning error by up to 35%.
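An RSSI database-mapping (fingerprinting) positioner of the kind analyzed above can be sketched as follows. The reference-node layout, the log-distance path-loss parameters, and the 1 m fingerprint grid are all illustrative assumptions; real IEEE 802.15.4 measurements would replace the synthetic RSSI model.

```python
import math

# Assumed layout: four reference nodes at the corners of a 10 m x 10 m area.
REF_NODES = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]

def rssi(node, pos, p0=-40.0, n=3.0):
    """Log-distance path-loss model: RSSI falls off by 10*n*log10(d) dB."""
    d = max(math.dist(node, pos), 0.1)
    return p0 - 10.0 * n * math.log10(d)

def fingerprint_db(step=1.0, size=10):
    """Build the RSSI database: one fingerprint vector per grid point."""
    db = []
    for i in range(size + 1):
        for j in range(size + 1):
            p = (i * step, j * step)
            db.append((p, [rssi(nd, p) for nd in REF_NODES]))
    return db

def locate(measured, db):
    """RSSI database mapping: return the grid point whose stored fingerprint
    is closest (Euclidean distance in signal space) to the measured vector."""
    return min(db, key=lambda e: sum((a - b) ** 2
                                     for a, b in zip(e[1], measured)))[0]
```

Because the fingerprints depend on the reference-node geometry, moving the nodes reshapes the signal-space map, which is how node placement ends up driving the positioning error studied in the paper.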
Abstract: The accuracy of the stability and control
derivatives of a light aircraft estimated from flight test data was evaluated. The light aircraft, named ChangGong-91, is the first aircraft certified by
the Korean government. The output error method, a maximum likelihood estimation technique that considers measurement
noise only, was used to analyze the aircraft response measurements. Multi-step
control inputs were applied in order to excite the short-period mode for the longitudinal motion and the Dutch-roll mode for the lateral-directional motion. The estimated stability/control derivatives of ChangGong-91 were analyzed for the assessment of handling
qualities by comparing them with those of similar aircraft. The accuracy of the flight derivative estimates derived from flight test measurements
was examined in terms of engineering judgment, scatter and the Cramer-Rao bound, and turned out to be satisfactory with minor defects.
Abstract: In this paper, the performance of two adaptive
observers applied to interconnected systems is studied. The
nonlinearity of the systems can be written in a fractional form. The first
adaptive observer is an adaptive sliding mode observer for a Lipschitz
nonlinear system, and the second is an adaptive sliding mode
observer with a filtered error as the sliding surface. A comparison of
their performance on an inverted pendulum mounted on a
cart shows that the second observer is more robust in
estimating the state.
Abstract: Mobile users with laptops need efficient access to,
e.g., their home personal data or the Internet from
any place in the world, regardless of their location or point of
attachment, especially while roaming outside the home subnet. An
efficient interpretation of the packet-loss problem encountered
during such roaming is central to this work and is
highlighted throughout. The main previous works, such as BER-systems,
Amigos, and ns-2 implementation that are considered to be in
conjunction with that problem under study are reviewed and
discussed. Their drawbacks and limitations, namely stopping at
monitoring without providing an actual solution for eliminating or
even restricting these losses, are noted. In addition, the
framework around which we built a Triple-R sequence as a cost-effective
solution to eliminate the packet losses and bridge the gap
between subnets, an area that has until now been largely neglected, is
presented. The results show that, in addition to the high bit error rate
of wireless mobile networks, the low efficiency of the mobile-IP
registration procedure is a direct cause of these packet losses.
Furthermore, the interpretation of packet losses resulted in an
illustrated triangle of the registration process. This triangle will be
further researched and analyzed in our future work.
Abstract: The objective of this paper is to design a pattern
classification model based on the back-propagation (BP) algorithm for
a decision support system. The standard BP model fully connects
each node in each layer from the input to the output layer. Therefore, it
requires considerable computing time and many iterations to achieve
good performance and an acceptable error rate when
generating patterns or training the network.
In contrast, the proposed model uses exclusive connections between
hidden-layer nodes and output nodes. The advantages of this model are
fewer iterations and better performance compared with the standard
back-propagation model. We simulated several classification
cases with different settings of network factors (e.g. number of hidden
layers and nodes, number of classes and iterations). During our
simulations, we found that most cases were handled better by the
exclusive-connection BP network model than by
standard BP. We expect that this algorithm can be applied to
user face identification, data analysis, and mapping between
environmental data and information.