Abstract: Inherent complexity is one of the most difficult problems in the field of software engineering. Moreover, there are no physical laws or standard guidelines that suit the design of every type of software. Hence, for software engineering to mature into an engineering discipline like the others, it must develop its own theoretical frameworks and laws. Software design and development is a human effort that takes considerable time and must account for various parameters for successful completion of the software. Cognitive informatics plays an important role in understanding the essential characteristics of software. The aim of this work is to examine two fundamental characteristics of the source code of object-oriented software: complexity and understandability. The complexity of a program is analyzed with the help of important attributes extracted from its source code, and this analysis is then used to evaluate the understandability factor. Both characteristics are assessed on the basis of 16 C++ programs distributed to forty MCA students, each of whom tried to understand the source code of the given program; the mean time is taken as the actual time needed to understand the program. For validation of this work, Briand's framework is used, and the presented metric is also evaluated against an existing metric, which demonstrates its robustness.
Abstract: Image steganography is a leading approach to information hiding: the information is concealed within an image, and the image travels openly on the Internet. The Least Significant Bit (LSB) method is one of the most popular image steganography techniques; in this method, each information bit is hidden in the LSB of an image pixel, so in one-bit LSB steganography the number of cover pixels used equals the number of message bits. In this paper, the LSB method of image steganography is used for watermarking, an application of steganography. The watermark contains 80*88 pixels, and each pixel requires 8 bits in its binary form, so the total number of bits required to hide the watermark is 80*88*8 = 56,320. The experiments were performed on standard 256*256 and 512*512 images. After watermark insertion, histogram analysis was performed. Salt-and-pepper noise with a density of 0.02 was added to the stego image in order to evaluate the robustness of the method, and the watermark was successfully retrieved after the insertion of noise. A further experiment assessed the imperceptibility of the stego image and the quality of the retrieved watermark. The results show that the LSB watermarking scheme is robust to salt-and-pepper noise.
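The one-bit LSB embedding described above can be sketched in a few lines of Python. This is a generic LSB routine on a toy cover and watermark; the array sizes and the `embed_lsb`/`extract_lsb` names are illustrative, not the paper's implementation:

```python
import numpy as np

def embed_lsb(cover, bits):
    """Hide watermark bits in the least significant bit of consecutive pixels."""
    flat = cover.flatten().astype(np.uint8)       # flatten() returns a copy
    assert len(bits) <= flat.size, "cover image too small for the watermark"
    flat[:len(bits)] = (flat[:len(bits)] & 0xFE) | np.asarray(bits, dtype=np.uint8)
    return flat.reshape(cover.shape)

def extract_lsb(stego, n_bits):
    """Read the watermark bits back from the LSB plane."""
    return (stego.flatten()[:n_bits] & 1).astype(np.uint8)

# toy example: a hypothetical 4x4 watermark (8 bits per pixel) in a 16x16 cover
rng = np.random.default_rng(0)
cover = rng.integers(0, 256, size=(16, 16), dtype=np.uint8)
wm_bits = rng.integers(0, 2, size=4 * 4 * 8, dtype=np.uint8)
stego = embed_lsb(cover, wm_bits)
recovered = extract_lsb(stego, wm_bits.size)
```

Each pixel value changes by at most 1, which is why the stego image's histogram stays close to the cover's.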
Abstract: Mobile cloud computing (MCC) combined with wireless sensor network (WSN) technology is attracting growing attention from researchers because it combines the data-gathering ability of sensors with the data-processing capacity of the cloud. This approach overcomes the limited data storage capacity and computational ability of sensor nodes; the stored data are sent to mobile users when a user sends a request. Most integrated sensor-cloud schemes, however, fail to observe the following criteria: 1) mobile users request specific data from the cloud based on their present location, and 2) power consumption matters, since most sensors are equipped with non-rechargeable batteries and are mostly deployed in hazardous and remote areas. This paper focuses on the above observations and introduces an approach known as the collaborative location-based sleep scheduling (CLSS) scheme. The awake or asleep status of each sensor node is devised dynamically, and the scheduling is based purely on the mobile users' current locations; in this manner, a large amount of energy consumption at the WSN is avoided. CLSS comes in two variants: CLSS1 provides lower energy consumption, while CLSS2 provides scalability and robustness for the integrated WSN.
Abstract: Urban intersection control without the use of the traffic light has the potential to vastly improve the efficiency of the urban traffic flow. For most proposals in the literature, such lightless intersection control depends on the mass market commercialization of highly intelligent autonomous vehicles (AV), which limits the prospects of near future implementation. We present an efficient lightless intersection traffic control scheme that only requires Level 1 AV as defined by NHTSA. The technological barriers of such lightless intersection control are thus very low. Our algorithm can also accommodate a mixture of AVs and conventional vehicles. We also carry out large scale numerical analysis to illustrate the feasibility, safety and robustness, comfort level, and control efficiency of our intersection control scheme.
Abstract: Within geostatistics research, effective estimation of
the variogram points has been examined, particularly in developing
robust alternatives. The parametric fit of these variogram points which
eventually defines the kriging weights, however, has not received
the same attention from a robust perspective. This paper proposes
the use of the non-linear Wilcoxon norm over weighted non-linear
least squares as a robust variogram fitting alternative. First, we
introduce the concept of variogram estimation and fitting. Then, as
an alternative to non-linear weighted least squares, we discuss the
non-linear Wilcoxon estimator. Next, the robustness properties of the
non-linear Wilcoxon are demonstrated using a contaminated spatial
data set. Finally, under simulated conditions, spatial processes with increasing levels of contamination have their variogram points estimated and fit. In fitting these variogram points, both the non-linear weighted least squares and the non-linear Wilcoxon fits are examined for efficiency. At all levels of contamination (including 0%), when a robust estimation and robust fitting procedure are used, the unweighted Wilcoxon outperforms weighted least squares.
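As a rough illustration of the fitting step, the sketch below minimizes the Wilcoxon norm of the residuals (residuals weighted by centred rank scores) for an assumed exponential variogram model on synthetic points with a single outlier. Because the Wilcoxon norm is invariant to an additive constant, the nugget is recovered afterwards as the median residual, as in rank-based regression; the model, data, and optimizer choices are assumptions, not the paper's setup.

```python
import numpy as np
from scipy.optimize import minimize

def exp_model(h, sill, range_):
    """Exponential variogram shape; the nugget is handled separately."""
    return sill * (1.0 - np.exp(-h / range_))

def wilcoxon_norm(resid):
    """Sum of residuals weighted by centred, scaled rank scores."""
    n = resid.size
    ranks = np.argsort(np.argsort(resid)) + 1.0
    scores = np.sqrt(12.0) * (ranks / (n + 1.0) - 0.5)
    return float(np.sum(scores * resid))

def fit_wilcoxon(h, gamma_hat, theta0):
    # shape parameters minimize the Wilcoxon norm of the residuals; the nugget
    # (an additive constant the norm cannot see) is the median residual
    obj = lambda th: wilcoxon_norm(gamma_hat - exp_model(h, *th))
    th = minimize(obj, theta0, method="Nelder-Mead").x
    nugget = float(np.median(gamma_hat - exp_model(h, *th)))
    return nugget, th[0], th[1]

# synthetic variogram points from an assumed model, with one gross outlier
h = np.linspace(1.0, 20.0, 15)
gamma_true = 0.5 + exp_model(h, 2.0, 5.0)
gamma_hat = gamma_true.copy()
gamma_hat[7] += 10.0                       # a single contaminated point
nugget, sill, rng_par = fit_wilcoxon(h, gamma_hat, [1.5, 4.0])
```

Because the rank scores are bounded, the single contaminated point pulls the fit far less than a least-squares fit would be pulled, which is the bounded-influence behaviour exploited here.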
Abstract: In this paper, the problem of posture stabilization for a kinematic model of differential drive robots is studied. A more detailed model of the kinematics of differential drive robots is used for the design of the stabilizing control; this model is formulated in terms of the physical parameters of the system, such as the radius of the wheels, and the wheel velocities are its control inputs. The framework of Lyapunov-based control design is used to solve the posture stabilization problem for this comprehensive model. Simulation results show that the devised controller successfully solves the posture regulation problem. Finally, the robustness and performance of the controller are studied under system parameter uncertainty.
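A minimal sketch of a Lyapunov-based posture controller of this general kind, using the standard polar-coordinate scheme with illustrative gains and assumed wheel parameters (not necessarily the paper's model or controller), with the wheel angular velocities acting as the actual control inputs:

```python
import numpy as np

def wrap(a):
    """Wrap an angle to [-pi, pi)."""
    return (a + np.pi) % (2 * np.pi) - np.pi

# illustrative gains satisfying the usual stability conditions, and assumed
# wheel radius / axle length; the goal posture is the origin (0, 0, 0)
k_rho, k_alpha, k_beta = 3.0, 8.0, -1.5
r_wheel, axle = 0.05, 0.30

x, y, th = -2.0, -1.0, 0.0
dt = 0.01
for _ in range(2000):
    rho = np.hypot(x, y)
    if rho < 0.01:
        break
    alpha = wrap(np.arctan2(-y, -x) - th)    # heading error toward the goal
    beta = wrap(-th - alpha)                 # goal orientation error
    v = k_rho * rho                          # body velocity commands from the
    w = k_alpha * alpha + k_beta * beta      # Lyapunov-based polar control law
    # wheel angular velocities are the model's actual control inputs
    w_r = (2.0 * v + w * axle) / (2.0 * r_wheel)
    w_l = (2.0 * v - w * axle) / (2.0 * r_wheel)
    v_body = r_wheel * (w_r + w_l) / 2.0     # back to the body twist
    w_body = r_wheel * (w_r - w_l) / axle
    x += v_body * np.cos(th) * dt
    y += v_body * np.sin(th) * dt
    th = wrap(th + w_body * dt)

final_rho = float(np.hypot(x, y))
```

Starting from (-2, -1, 0), the simulated robot converges to the goal posture; expressing v and w through the wheel speeds mirrors the physical-parameter formulation above.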
Abstract: This paper compares parameter estimation of the mean of a normal distribution by the Maximum Likelihood (ML), Bayes, and Markov Chain Monte Carlo (MCMC) methods. The ML estimator is the average of the data; the Bayes estimator incorporates a prior distribution; and the MCMC estimator is approximated by Gibbs sampling from the posterior distribution. After each method estimates the parameter, hypothesis testing is used to check the robustness of the estimators. Data are simulated from a normal distribution with a true mean of 2 and variance of 4, 9, and 16, with sample sizes set to 10, 20, 30, and 50. From the results, it can be seen that the ML and MCMC estimates differ perceivably from the true parameter when the sample size is 10 or 20 and the variance is 16. Furthermore, the Bayes estimator, computed from a prior distribution with mean 1 and variance 12, showed a significant difference in the mean for variance 9 at sample sizes 10 and 20.
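The three estimators can be sketched as follows, mirroring the setup above (true mean 2, prior mean 1 and variance 12); the inverse-gamma hyperparameters for the Gibbs sampler are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
n, mu_true, var_true = 30, 2.0, 4.0
x = rng.normal(mu_true, np.sqrt(var_true), n)   # simulated sample

# 1) Maximum likelihood: the sample mean
mu_ml = x.mean()

# 2) Conjugate Bayes with a N(mu0, tau2) prior on the mean, variance known
mu0, tau2, sigma2 = 1.0, 12.0, var_true
mu_bayes = (n * x.mean() / sigma2 + mu0 / tau2) / (n / sigma2 + 1 / tau2)

# 3) Gibbs sampling for (mu, sigma2): same normal prior on mu, and a vague
#    inverse-gamma(a, b) prior on sigma2 (a, b are illustrative assumptions)
a, b = 0.01, 0.01
mu, sig2 = 0.0, 1.0
draws = []
for it in range(3000):
    prec = n / sig2 + 1 / tau2                    # mu | sig2, x is normal
    m = (n * x.mean() / sig2 + mu0 / tau2) / prec
    mu = rng.normal(m, np.sqrt(1 / prec))
    sse = np.sum((x - mu) ** 2)                   # sig2 | mu, x is inverse-gamma
    sig2 = 1 / rng.gamma(a + n / 2, 1 / (b + sse / 2))
    if it >= 500:                                 # discard burn-in
        draws.append(mu)
mu_mcmc = float(np.mean(draws))
```

With a diffuse prior and a moderate sample, all three estimates land close to the sample mean, which is consistent with the differences only becoming perceivable at small n and large variance.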
Abstract: This paper demonstrates dynamic performance evaluation of load frequency control (LFC) with different intelligent techniques. All non-linearities and physical constraints, such as governor dead band (GDB), generation rate constraint (GRC) and boiler dynamics, have been considered in the simulation studies. The conventional integral time absolute error (ITAE) has been considered as the objective function. The design problem is formulated as an optimisation problem, and particle swarm optimisation (PSO), bacterial foraging optimisation algorithm (BFOA) and differential evolution (DE) are employed to search for the optimal controller parameters. The superiority of the proposed approach is shown by comparing the results with a published fuzzy logic controller (FLC) for the same interconnected power system. The comparison uses various performance measures, such as overshoot, undershoot, settling time and standard error criteria of frequency and tie-line power deviation following a step load perturbation (SLP). It is observed that the dynamic performance of the proposed controller is better than that of the FLC. Further, a robustness analysis is carried out by varying the time constants of the speed governor, turbine and tie-line power in the range of +40% to -40% to demonstrate the robustness of the proposed DE-optimised PID controller.
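The optimisation loop can be illustrated with DE tuning a PI controller to minimise ITAE on a stand-in first-order plant. The paper's power system model (with GDB, GRC and boiler dynamics) is far richer, so this sketch only shows the formulation of controller tuning as an optimisation problem; the plant, bounds and gains are assumptions:

```python
import numpy as np
from scipy.optimize import differential_evolution

def itae(gains, T=5.0, dt=0.05, t_end=20.0):
    """ITAE of a PI loop around a stand-in first-order lag 1/(T s + 1)."""
    kp, ki = gains
    y, integ, cost, t = 0.0, 0.0, 0.0, 0.0
    while t < t_end:
        e = 1.0 - y                  # unit step reference (stand-in for an SLP response)
        integ += e * dt
        u = kp * e + ki * integ      # PI control law
        y += dt * (u - y) / T        # Euler step of the lag
        cost += t * abs(e) * dt      # integral of time * |error|
        t += dt
    return cost

res = differential_evolution(itae, bounds=[(0.0, 10.0), (0.0, 5.0)],
                             seed=0, maxiter=30, tol=1e-6)
kp_opt, ki_opt = res.x
```

Swapping `differential_evolution` for a PSO or BFOA implementation changes only the search engine; the objective function stays the same.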
Abstract: This paper presents a performance analysis of a dynamic search space reduction (DSR) based gravitational search algorithm (GSA) for solving the dynamic economic dispatch of thermal generating units with valve-point effects. Dynamic economic dispatch determines the best settings of the generating units for an anticipated load demand over a definite period of time. The presented technique incorporates an inequality-constraint treatment mechanism, the DSR strategy, to accelerate the optimization process. The method is demonstrated on five-unit test systems to verify its effectiveness and robustness, and the simulation results are compared with other existing evolutionary methods reported in the literature. The comparison shows that the presented approach yields fruitful results in fuel cost and other performance measures with a marginal amount of simulation time.
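The non-smooth fuel-cost model that motivates a heuristic search like GSA can be written directly; the rectified-sine term models the valve-point effect, and the coefficients below are illustrative, not the paper's unit data:

```python
import numpy as np

def fuel_cost(p, a, b, c, e, f, p_min):
    """Quadratic fuel cost plus the rectified-sine valve-point term, which
    makes the cost curve non-smooth and multimodal."""
    return a + b * p + c * p**2 + np.abs(e * np.sin(f * (p_min - p)))

# cost of one hypothetical unit over a sweep of output levels
p = np.linspace(100.0, 500.0, 5)
cost = fuel_cost(p, a=240.0, b=7.0, c=0.007, e=300.0, f=0.032, p_min=100.0)
```

The ripples introduced by the absolute-sine term create many local minima, which is precisely why gradient-free searches are favoured for this dispatch problem.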
Abstract: For those who have lost the ability to move their hand, going through repetitive motions with the assistance of a therapist is the main method of recovery. We have developed a robotic assistive device to rehabilitate hand motions in place of the traditional therapy. The developed assistive device (RAD-HR) comprises four degrees of freedom, enabling basic movements and hand function, and assists in supporting the hand during rehabilitation. We used a nonlinear computed torque control technique to control the RAD-HR. The accuracy of the controller was evaluated in simulations (MATLAB/Simulink environment). To assess the robustness of the controller, external disturbances in the form of modelling uncertainty (±10% of the joint torques) were added to each joint.
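The computed torque law can be sketched on a 1-DOF stand-in joint (the RAD-HR itself is 4-DOF; the parameters and gains here are assumed for illustration only):

```python
import numpy as np

# assumed 1-DOF joint parameters: mass, link length, gravity, viscous friction
m, l, g, b = 1.0, 0.5, 9.81, 0.1
M = m * l**2                               # inertia
kp, kd = 100.0, 20.0                       # error-dynamics gains

dt, t_end = 0.001, 3.0
q, qd = 0.0, 0.0
errs = []
for k in range(int(t_end / dt)):
    t = k * dt
    q_des = 0.5 * np.sin(t)                # desired joint trajectory
    qd_des, qdd_des = 0.5 * np.cos(t), -0.5 * np.sin(t)
    e, ed = q_des - q, qd_des - qd
    # computed torque: cancel the nonlinear model, impose linear error dynamics
    tau = M * (qdd_des + kd * ed + kp * e) + b * qd + m * g * l * np.sin(q)
    qdd = (tau - b * qd - m * g * l * np.sin(q)) / M   # plant dynamics
    qd += qdd * dt
    q += qd * dt
    errs.append(abs(e))
final_err = errs[-1]
```

With an exact model the law reduces the tracking error to the linear dynamics e'' + kd e' + kp e = 0; the ±10% torque uncertainty mentioned above would appear as an imperfect cancellation term in `tau`.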
Abstract: Repetitive training movement is an efficient method to improve the ability and movement performance of stroke survivors, helping them recover their lost motor function and acquire new skills. The ETS-MARSE is a seven-degrees-of-freedom (DOF) exoskeleton robot developed to be worn on the lateral side of the right upper extremity to assist and rehabilitate patients with upper-extremity dysfunction resulting from stroke. In practice, rehabilitation activities are repetitive tasks, which cause such assistive/robotic systems to suffer from repetitive/periodic uncertainties and external perturbations, induced by the high-order dynamic model (seven DOF) and the interaction with human muscle, that impact the tracking performance and even the stability of the exoskeleton. To ensure the robustness and stability of the robot, a new nonlinear backstepping control was implemented, with designed tests performed by healthy subjects. In order to limit and reject the periodic/repetitive disturbances, an iterative estimator was integrated into the control of the system. The estimator does not need a precise dynamic model of the exoskeleton. Experimental results confirm the robustness and accuracy of the controller in dealing with external perturbation, and the effectiveness of the iterative estimator in rejecting the repetitive/periodic disturbances.
Abstract: Aerodynamic coefficients are important in the evaluation of an aircraft's performance and stability-control characteristics. These coefficients can also be used in automatic flight control systems and in the mathematical models of flight simulators. The study of the aerodynamic aspects of flying systems is a restricted domain, largely inaccessible to developers: wind tunnel tests to extract aerodynamic forces and moments require specific and expensive means, and the glaring lack of published documentation in this field makes the determination of aerodynamic coefficients complicated. This work is devoted to the identification of an aerodynamic model using an aircraft in a virtual simulated environment. We address the identification of the system, present an environment framework based on the Software In the Loop (SIL) methodology, and use Microsoft™ Flight Simulator (FS-2004) as the plane simulation environment. We propose the Total Least Squares Estimation technique (TLSE) to identify the aerodynamic parameters, which are unknown, variable, classified and used in the expression of the piloting law. In this paper, we define each aerodynamic coefficient as the mean of its numerical values; all other variations are considered modeling uncertainties to be compensated by the robustness of the piloting control.
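The TLSE step can be sketched with the classical SVD solution of the total least squares problem, which allows errors in both the regressor matrix and the observations; the toy coefficients below are illustrative stand-ins, not actual aerodynamic data:

```python
import numpy as np

def tls(A, y):
    """Total least squares via SVD (classical solution): the parameter vector
    comes from the right singular vector of [A | y] associated with the
    smallest singular value, allowing errors in both A and y."""
    n = A.shape[1]
    Z = np.hstack([A, y.reshape(-1, 1)])
    _, _, Vt = np.linalg.svd(Z)
    v = Vt[-1]
    return -v[:n] / v[n]

# toy identification: recover two stand-in coefficients from noisy data
rng = np.random.default_rng(2)
theta_true = np.array([1.5, -0.8])
A = rng.normal(size=(200, 2))
y = A @ theta_true
A_noisy = A + 0.01 * rng.normal(size=A.shape)   # noise in the regressors too
y_noisy = y + 0.01 * rng.normal(size=y.shape)
theta_hat = tls(A_noisy, y_noisy)
```

Ordinary least squares assumes the regressors are exact; in simulator-based identification both the measured states and the outputs are noisy, which is the motivation for preferring TLS here.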
Abstract: Traditionally, the dimensioning of storage tanks is conducted with a deterministic approach based on partial safety coefficients. These coefficients are applied to take into account the uncertainties related to hazards in the properties of the materials used and in the applied loads. However, the use of these safety factors in the design process does not ensure an optimal and reliable solution and can sometimes lead to a lack of robustness of the structure. Reliability theory, based on a probabilistic formulation of construction safety, can respond in an adapted manner: it allows constructing a model in which uncertain data are represented by random variables, and therefore allows a better appreciation of safety margins through confidence indicators. The work presented in this paper consists of a mechano-reliability analysis of a concrete storage tank placed on the ground. The classical Monte Carlo simulation method is used to evaluate the failure probability of the concrete tank by considering the seismic acceleration as a random variable.
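The Monte Carlo step can be sketched with a toy limit-state function g = R - S (resistance minus load effect); the distributions below are illustrative stand-ins, not the paper's tank model:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 200_000                              # number of Monte Carlo samples
R = rng.normal(30.0, 3.0, N)             # resistance (illustrative distribution)
S = rng.normal(20.0, 4.0, N)             # load effect driven by the random seismic acceleration
g = R - S                                # limit state: failure when g <= 0
pf = float(np.mean(g <= 0.0))            # crude Monte Carlo failure probability
```

For this toy case the answer is known analytically (R - S is normal with mean 10 and standard deviation 5, so pf is about 0.0228), which lets the crude estimator N_fail/N be checked; the paper applies the same principle with the tank's mechanical model inside the limit-state function.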
Abstract: The purpose of this article is to find a method of comparing designs for ordinal regression models using quantile dispersion graphs in the presence of linear predictor misspecification. The true relationship between the response variable and the corresponding control variables is usually unknown, and the experimenter assumes a certain form of the linear predictor of the ordinal regression model. This assumed form may not always be correct; thus, the maximum likelihood estimates (MLE) of the unknown parameters of the model may be biased due to misspecification of the linear predictor. In this article, the uncertainty in the linear predictor is represented by an unknown function. An algorithm is provided to estimate this unknown function at the design points where observations are available, and the function is then estimated at all points in the design region using multivariate parametric kriging. The comparison of the designs is based on a scalar-valued function of the mean squared error of prediction (MSEP) matrix, which incorporates both the variance and the bias of the prediction caused by the misspecification in the linear predictor. The designs are compared using the quantile dispersion graphs approach; the graphs also visually depict the robustness of the designs to changes in the parameter values. Numerical examples are presented to illustrate the proposed methodology.
Abstract: Connected vehicles are equipped with wireless sensors
that aid in Vehicle to Vehicle (V2V) and Vehicle to Infrastructure
(V2I) communication. These vehicles will in the near future
provide road safety, improve transport efficiency, and reduce traffic
congestion. One of the challenges for connected vehicles is how
to ensure that information sent across the network is secure. If
security of the network is not guaranteed, several attacks can occur,
thereby compromising the robustness, reliability, and efficiency of
the network. This paper discusses existing security mechanisms and
unique properties of connected vehicles. The methodology employed
in this work is exploratory. The paper reviews existing security
solutions for connected vehicles. More concretely, it discusses
various cryptographic mechanisms available, and suggests areas
of improvement. The study proposes a combination of symmetric
key encryption and public key cryptography to improve security.
The study further proposes message aggregation as a technique to
overcome message redundancy. This paper offers a comprehensive
overview of connected vehicles technology, its applications, its
security mechanisms, open challenges, and potential areas of future
research.
Abstract: In this paper, numerous robust fitting procedures are considered for estimating spatial variograms. In spatial statistics, the conventional variogram fitting procedure (non-linear weighted least squares) suffers from the same outlier problem that has plagued least squares methods from their inception. Even a 3-parameter model like the variogram can be adversely affected by a single outlier. This paper uses Hogg-type adaptive procedures to select an optimal score function for a rank-based estimator for these non-linear models. Numeric examples and simulation studies demonstrate the robustness, utility, efficiency, and validity of these estimates.
Abstract: This paper presents a subband adaptive filter (SAF) for system identification where the impulse response is sparse and disturbed by impulsive noise. Benefiting from the use of l1-norm optimization and an l0-norm penalty on the weight vector in the cost function, the proposed l0-norm sign SAF (l0-SSAF) achieves both robustness against impulsive noise and much better convergence behavior than classical adaptive filters. Simulation results in the system identification scenario confirm that the proposed l0-norm SSAF is not only more robust but also faster and more accurate than its counterparts for sparse system identification in the presence of impulsive noise.
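A heavily simplified, fullband sketch of the two ingredients named above: a sign-type update, whose step size is bounded regardless of the impulse amplitude, and an l0-like zero attractor for the sparse weights. The paper's filter is subband and its cost function differs in detail, so treat this as an assumed single-band illustration with made-up constants:

```python
import numpy as np

rng = np.random.default_rng(4)
L = 16
h = np.zeros(L); h[3], h[9] = 1.0, -0.5          # sparse unknown system
w = np.zeros(L)                                   # adaptive weights
mu, rho, beta = 0.002, 1e-5, 5.0                  # step size, l0 weight, l0 sharpness

x_buf = np.zeros(L)
for k in range(20000):
    x_buf = np.roll(x_buf, 1)
    x_buf[0] = rng.normal()
    v = 0.01 * rng.normal()
    if rng.random() < 0.01:                       # occasional large impulse
        v += 10.0 * rng.normal()
    d = h @ x_buf + v
    e = d - w @ x_buf
    w += mu * np.sign(e) * x_buf                  # sign update: bounded step
    w -= rho * beta * np.sign(w) * np.exp(-beta * np.abs(w))  # l0-like attractor

misalign = float(np.linalg.norm(w - h) / np.linalg.norm(h))
```

Because only the sign of the error enters the update, an impulse a thousand times larger than the background noise moves the weights no further than an ordinary sample would, while the attractor term gently pulls the many near-zero taps to exactly zero.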
Abstract: We present a normalized LMS (NLMS) algorithm with robust regularization. Unlike conventional NLMS with a fixed regularization parameter, the proposed approach dynamically updates the regularization parameter. By exploiting a gradient descent direction, we derive a computationally efficient and robust update scheme for the regularization parameter. In simulations, we demonstrate that the proposed algorithm outperforms conventional NLMS algorithms in terms of convergence rate and misadjustment error.
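A sketch of the idea: standard NLMS, but with the regularization parameter updated by a stochastic gradient of the squared error taken through the previous weight update. The exact recursion of the paper may differ; the derivation, step sizes and clipping bounds below are assumptions:

```python
import numpy as np

rng = np.random.default_rng(5)
L = 8
h = rng.normal(size=L)                  # unknown system
w = np.zeros(L)
mu, eta = 0.5, 1e-3                     # NLMS step size, regularization step size
delta = 1.0                             # regularization parameter, now adaptive
x_buf = np.zeros(L)
prev = None
for k in range(5000):
    x_buf = np.roll(x_buf, 1)
    x_buf[0] = rng.normal()
    d = h @ x_buf + 0.01 * rng.normal()
    e = d - w @ x_buf
    if prev is not None:
        # gradient of e_k^2/2 w.r.t. delta, taken through the previous update
        # w_k = w_{k-1} + mu * e_{k-1} * x_{k-1} / (||x_{k-1}||^2 + delta)
        e_prev, x_prev, den_prev = prev
        grad = e * mu * e_prev * (x_buf @ x_prev) / den_prev**2
        delta = float(np.clip(delta - eta * grad, 1e-4, 10.0))
    denom = x_buf @ x_buf + delta
    prev = (e, x_buf.copy(), denom)
    w += mu * e * x_buf / denom          # NLMS weight update

misalign = float(np.linalg.norm(w - h) / np.linalg.norm(h))
```

Clipping keeps delta strictly positive, so the update can never divide by a vanishing norm even when the input is briefly silent.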
Abstract: Information security plays a major role in raising the standard of secure communications over global media. In this paper, we suggest a technique of encryption followed by insertion before transmission, implementing two different concepts to carry out these tasks. We use the two-point crossover technique of the genetic algorithm to facilitate the encryption process. For each uniquely identified row of pixels, different mathematical methods are applied under several conditions in order to identify the parent pixels on which the crossover operation is performed; two crossover points are selected within the pixels, producing the newly encrypted child pixels and hence the encrypted cover image. In the next stage, first- and second-order derivative operators are evaluated to increase security and robustness. The final stage reapplies the crossover procedure to form the final stego-image. The complexity of this system as a whole is high, deterring third-party interference, and the embedding capacity is very high, so a large amount of secret image information can be hidden. The imperceptibility of the obtained stego-image clearly demonstrates the proficiency of this approach.
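The core two-point crossover operation can be sketched as follows; the parent rows and crossover points are fixed here for illustration, whereas in the scheme above they come from the row-wise selection rules:

```python
import numpy as np

def two_point_crossover(p1, p2, c1, c2):
    """Swap the segment [c1:c2] between two parent pixel sequences, producing
    two child (encrypted) pixel sequences."""
    ch1, ch2 = p1.copy(), p2.copy()
    ch1[c1:c2], ch2[c1:c2] = p2[c1:c2], p1[c1:c2]
    return ch1, ch2

# hypothetical parent pixel rows with fixed crossover points
p1 = np.array([10, 20, 30, 40, 50, 60], dtype=np.uint8)
p2 = np.array([70, 80, 90, 100, 110, 120], dtype=np.uint8)
child1, child2 = two_point_crossover(p1, p2, 2, 4)
# applying the identical crossover again restores the parents
back1, back2 = two_point_crossover(child1, child2, 2, 4)
```

The operation is its own inverse: applying the same crossover with the same points restores the parents, which is what makes it usable for decryption when the crossover points are derivable by the receiver.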
Abstract: In this paper, we present a pedestrian detection descriptor called Fused Structure and Texture (FST) features, based on the combination of local phase information with texture features. Since the phase of a signal conveys more structural information than the magnitude, the phase congruency concept is used to capture the structural features, while the Center-Symmetric Local Binary Pattern (CSLBP) approach is used to capture the texture information of the image. The dimensionless quantity of the phase congruency and the robustness of the CSLBP operator on flat images, as well as under blur and illumination changes, make the proposed descriptor more robust and less sensitive to light variations. The descriptor is formed by extracting the phase congruency and the CSLBP value of each pixel of the image with respect to its neighborhood. The histogram of the oriented phase and the histogram of the CSLBP values for local regions of the image are computed and concatenated to construct the FST descriptor. Several experiments were conducted on the INRIA and the low-resolution DaimlerChrysler datasets to evaluate the detection performance of a pedestrian detection system based on the FST descriptor, with a linear Support Vector Machine (SVM) used to train the pedestrian classifier. These experiments showed that the proposed FST descriptor achieves better detection performance than a set of state-of-the-art feature extraction methodologies.
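The per-pixel CSLBP code can be sketched directly: the four centre-symmetric neighbour pairs of a 3x3 neighbourhood are compared against a small threshold, giving a 4-bit code (16 histogram bins); the threshold value and the sample patch are illustrative:

```python
import numpy as np

def cslbp_pixel(patch, T=0.01):
    """CS-LBP code of the centre pixel of a 3x3 patch: compare the four
    centre-symmetric neighbour pairs instead of neighbour-vs-centre, giving
    a compact 4-bit code that stays stable on flat regions."""
    # the 8 neighbours in circular order, starting right, counter-clockwise
    g = np.array([patch[1, 2], patch[0, 2], patch[0, 1], patch[0, 0],
                  patch[1, 0], patch[2, 0], patch[2, 1], patch[2, 2]], dtype=float)
    bits = (g[:4] - g[4:] > T).astype(int)
    return int(bits @ (1 << np.arange(4)))

patch = np.array([[5, 5, 9],
                  [5, 7, 9],
                  [1, 1, 9]], dtype=float)
code = cslbp_pixel(patch)
```

Comparing opposing neighbours rather than each neighbour against the centre halves the number of bits (4 instead of 8) and avoids the noise sensitivity of the centre pixel, which is the robustness on flat images mentioned above.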