Abstract: This paper proposes a new approach for image encryption
using a combination of different permutation techniques.
The main idea behind the present work is that an image can be
viewed as an arrangement of bits, pixels and blocks. The intelligible
information present in an image is due to the correlations among the
bits, pixels and blocks in a given arrangement. This perceivable information
can be reduced by decreasing the correlation among the bits,
pixels and blocks using certain permutation techniques. This paper
presents an approach for a random combination of the aforementioned
permutations for image encryption. From the results, it is observed
that the permutation of bits is effective in significantly reducing the
correlation thereby decreasing the perceptual information, whereas
the permutations of pixels and blocks are better at producing
higher-level security than bit permutation. A random combination
method employing all three techniques is thus observed to be
useful for tactical security applications, where protection is needed
only against a casual observer.
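As a minimal illustration of the permutation idea, the sketch below shuffles the pixels of a flattened grayscale image with a keyed permutation; it is an assumed, simplified stand-in for the paper's combined bit/pixel/block scheme, using Python's seeded `random.Random` as the permutation source.

```python
import random

def permute_pixels(image, key):
    """Shuffle the pixels of a flattened grayscale image with a keyed
    permutation; the same key regenerates the permutation for decryption."""
    rng = random.Random(key)
    order = list(range(len(image)))
    rng.shuffle(order)
    return [image[i] for i in order]

def unpermute_pixels(cipher, key):
    """Invert permute_pixels by regenerating the same keyed permutation."""
    rng = random.Random(key)
    order = list(range(len(cipher)))
    rng.shuffle(order)
    out = [0] * len(cipher)
    for dst, src in enumerate(order):
        out[src] = cipher[dst]
    return out

image = [10, 20, 30, 40, 50, 60, 70, 80]
enc = permute_pixels(image, key=42)
dec = unpermute_pixels(enc, key=42)
assert dec == image
```

The histogram of pixel values is unchanged (only positions move), which is why pixel permutation alone mainly breaks spatial correlation rather than concealing intensity statistics.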
Abstract: Reliable secure multicast communication in mobile
ad hoc networks is challenging due to inherent characteristics such
as an infrastructure-less architecture lacking a central authority,
high packet loss rates and limited resources such as bandwidth, time
and power. Many emerging commercial and military applications require
secure multicast communication in ad hoc environments. Key
management is therefore the fundamental challenge in achieving
reliable secure communication using multicast key distribution for
mobile ad hoc networks. Thus, in designing a reliable multicast key
distribution scheme, reliability and congestion control over
throughput are essential components. This paper proposes and
evaluates the performance of an enhanced optimized multicast cluster
tree algorithm with destination sequenced distance vector routing
protocol to provide reliable multicast key distribution. Simulation
results in NS2 accurately predict the performance of the proposed
scheme in terms of key delivery ratio and packet loss rate under
varying network conditions. This proposed scheme achieves
reliability, while exhibiting low packet loss rate with high key
delivery ratio compared with the existing scheme.
Abstract: Design for cost (DFC) is a method that reduces life
cycle cost (LCC) from the angle of designers. Multiple domain
features mapping (MDFM) methodology was introduced in DFC. Using
MDFM, design features can be used to estimate the LCC. From the
angle of DFC, the design features of family cars were obtained, such
as overall dimensions, engine power and emission volume. At the
conceptual design stage, the cars' LCC was estimated using the back
propagation (BP) artificial neural network (ANN) method and
case-based reasoning (CBR). Hamming space was used to measure the
similarity among cases in the CBR method. Levenberg-Marquardt (LM)
algorithm and genetic algorithm (GA) were used in ANN. The
differences between the LCC estimation models of CBR and ANN
were provided. Each of the two methods has its own shortcomings;
combining ANN and CBR yielded improved accuracy. Firstly, ANN was
used to select design features that affect LCC. Secondly, the LCC
estimation results of ANN were used to raise the accuracy of LCC
estimation in the CBR method. Thirdly, ANN was used to estimate the
LCC errors and correct the errors in CBR's estimation results when
the accuracy was insufficient. Finally, economical family cars and a
sport utility vehicle (SUV) were given as LCC estimation cases using
this hybrid approach combining ANN and CBR.
Abstract: This study deals with the experimental investigation
and theoretical modeling of semi-crystalline polymeric materials with
a rubbery amorphous phase (HDPE) subjected to uniaxial cyclic
tests with various maximum strain levels, even at large deformation.
Each cycle is loaded in tension up to a certain maximum strain and
then unloaded down to zero stress, for N cycles. This work focuses
on measuring the volume strain due to damage phenomena during this
kind of test. On the basis of
thermodynamics of relaxation processes, a constitutive model for
large strain deformation has been developed, taking into account the
damage effect, to predict the complex elasto-viscoelastic-viscoplastic
behavior of the material. A direct comparison between the model
predictions and the experimental data shows that the model accurately
captures the material response. The model is also capable of
predicting the influence of damage, which causes volume variation.
Abstract: In this paper, a Bayesian Network (BN) based system
is presented for providing clinical decision support to healthcare
practitioners in rural or remote areas of India for young infants or
children up to the age of 5 years. The government is unable to
appoint child specialists in rural areas because of the inadequate
number of available pediatricians. This leads to a high Infant
Mortality Rate (IMR). In such a scenario, an Intelligent Pediatric
System provides a
realistic solution. The prototype of an intelligent system has been
developed that involves a knowledge component called an Intelligent
Pediatric Assistant (IPA); and User Agents (UA) along with their
Graphical User Interfaces (GUI). The GUI of UA provides the
interface to the healthcare practitioner for submitting sign-symptoms
and displaying the expert opinion as suggested by IPA. Depending
upon the observations, the IPA decides the diagnosis and the
treatment plan. The UA and IPA form a client-server architecture for
knowledge sharing.
Abstract: Variable channel conditions in underwater networks,
and variable distances between sensors due to water currents, lead
to a variable bit error rate (BER). This variability in BER greatly
affects the energy efficiency of the error correction techniques
used. In this paper an energy-efficient adaptive hybrid error
correction technique (AHECT) is proposed. AHECT adaptively changes
the error correction technique from pure retransmission (ARQ) in
low-BER cases to a hybrid technique with variable encoding rates
(ARQ & FEC) in high-BER cases. An adaptation algorithm is proposed
that depends on a precalculated packet acceptance rate (PAR) look-up
table, the current BER, the packet size and the error correction
technique in use. Based on this adaptation algorithm, a periodic
3-bit feedback is added to the acknowledgment packet to state which
error correction technique is suitable for the current channel
conditions and distance. Comparative studies were done between this
technique and other techniques, and the results show that AHECT is
more energy efficient and has a higher probability of success than
all those techniques.
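The adaptation step can be sketched as a table lookup; the PAR values, BER thresholds and code rates below are illustrative assumptions, not figures from the paper.

```python
# Hypothetical sketch of the AHECT adaptation step: given the current BER,
# pick the error correction mode with an acceptable precalculated packet
# acceptance rate (PAR). Table values and thresholds are invented.
PAR_TABLE = {
    # (mode, FEC code rate) -> PAR per BER band
    ("ARQ", 1.0):  {"low": 0.98, "mid": 0.80, "high": 0.30},
    ("HYB", 0.75): {"low": 0.95, "mid": 0.90, "high": 0.60},
    ("HYB", 0.5):  {"low": 0.90, "mid": 0.88, "high": 0.75},
}

def ber_band(ber):
    if ber < 1e-5:
        return "low"
    if ber < 1e-3:
        return "mid"
    return "high"

def select_mode(ber):
    band = ber_band(ber)
    # Energy cost grows as the code rate drops (more redundancy is sent),
    # so prefer the highest code rate whose PAR is acceptable.
    for mode in sorted(PAR_TABLE, key=lambda m: -m[1]):
        if PAR_TABLE[mode][band] >= 0.9:
            return mode
    return min(PAR_TABLE, key=lambda m: m[1])  # strongest FEC as fallback

low_choice = select_mode(1e-6)    # pure ARQ in a low-BER channel
high_choice = select_mode(1e-2)   # strongest hybrid ARQ+FEC at high BER
```

The chosen `(mode, rate)` pair is what a periodic 3-bit feedback field in the acknowledgment could encode, since three bits suffice to index a small mode table.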
Abstract: The full search block matching algorithm is widely used for hardware implementation of motion estimators in video compression algorithms. In this paper we propose a new architecture, which consists of a 2D parallel processing unit and a 1D unit, both working in parallel. The proposed architecture reduces both data access power and computational power, which are the main causes of power consumption in integer motion estimation. It also completes the operations with nearly the same number of clock cycles as a 2D systolic array architecture. In this work the sum of absolute differences (SAD), the most repeated operation in block matching, is calculated in two steps. The first step is to calculate the SAD for alternate rows with the 2D parallel unit. If the SAD calculated by the parallel unit is less than the stored minimum SAD, the SAD of the remaining rows is calculated by the 1D unit. Early termination, which stops avoidable computations, has been achieved with the help of the alternate rows method proposed in this paper and by finding a low initial SAD value based on motion vector prediction. Data reuse has been applied to the reference blocks in the same search area, which significantly reduces memory access.
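The two-step SAD computation with early termination described above can be sketched as follows; the block contents and the stored minimum are illustrative.

```python
def sad(rows_a, rows_b):
    """Sum of absolute differences over two lists of pixel rows."""
    return sum(abs(a - b) for ra, rb in zip(rows_a, rows_b)
               for a, b in zip(ra, rb))

def two_step_sad(cur, ref, best_so_far):
    """Alternate-rows early termination, as in the abstract: step 1
    computes SAD over even rows only (the 2D parallel unit); step 2 adds
    the odd rows (the 1D unit) only if step 1 beats the current minimum,
    since SAD can only grow as more rows are added."""
    partial = sad(cur[::2], ref[::2])
    if partial >= best_so_far:
        return None            # early termination: candidate rejected
    return partial + sad(cur[1::2], ref[1::2])

cur = [[1, 2], [3, 4], [5, 6], [7, 8]]
ref = [[1, 2], [3, 5], [5, 6], [7, 9]]
full = two_step_sad(cur, ref, best_so_far=10)   # -> 2
rejected = two_step_sad(cur, ref, best_so_far=0)  # -> None
```

A low initial `best_so_far` (e.g. from the predicted motion vector's SAD) makes the rejection in step 1 fire more often, which is where the computational saving comes from.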
Abstract: Opportunistic Data Forwarding (ODF) has drawn much attention in mobile ad hoc networking research in recent years. The effectiveness of ODF in a MANET depends on a suitable routing protocol which provides powerful source routing services. PLSR features source routing, loop freedom and small routing overhead. The update messages in PLSR are integrated into a tree structure, and routing updates need not be time-stamped, which reduces the routing overhead.
Abstract: Data security in u-Health system can be an important
issue because wireless network is vulnerable to hacking. However, it is
not easy to implement a proper security algorithm in an embedded
u-Health monitoring system because of hardware constraints such as
low performance, power consumption and limited memory size. To
secure data that contain personal and biosignal information, we
implemented several security algorithms such as Blowfish, data
encryption standard (DES), advanced encryption standard (AES) and
Rivest Cipher 4 (RC4) for our u-Health monitoring system and the
results were successful. Under the same experimental conditions, we
compared these algorithms. RC4 had the fastest execution time.
Memory usage was the most efficient for DES. However, considering
both performance and safety capability, we concluded that AES
was the most appropriate algorithm for a personal u-Health monitoring
system.
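Of the compared ciphers, RC4 is simple enough to sketch in full; this is the standard KSA/PRGA construction in Python, shown only to illustrate why its execution cost is low, not the paper's embedded implementation.

```python
def rc4(key, data):
    """Standard RC4: key-scheduling (KSA) then keystream generation (PRGA).
    XORing the keystream means the same call encrypts and decrypts."""
    # Key-scheduling algorithm (KSA)
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation algorithm (PRGA)
    i = j = 0
    out = bytearray()
    for byte in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

msg = b"heart rate: 72 bpm"
ct = rc4(b"secret", msg)
assert rc4(b"secret", ct) == msg   # round-trip
```

The cipher is just byte swaps and XORs with a 256-byte state, which explains its fast execution on constrained hardware; its known keystream biases are also why AES was judged the safer choice.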
Abstract: In this work, we present a new strategy of direct adaptive control denoted Extended Minimal Controller Synthesis (EMCS). This algorithm is designed for an induction motor, which includes both electrical and mechanical dynamics, under the assumption of linear magnetic circuits. The main motivation of the EMCS control is to enhance the robustness of MRAC algorithms, i.e. the rejection of bounded effects of rapidly varying external disturbances.
Abstract: This paper introduces two decoders for binary linear
codes based on metaheuristics. The first one uses a genetic algorithm
and the second is based on a combination of a genetic algorithm with
a feed-forward neural network. The decoder based on genetic
algorithms (DAG), applied to BCH and convolutional codes, gives good
performance compared to the Chase-2 and Viterbi algorithms
respectively, and reaches the performance of OSD-3 for some Residue
Quadratic (RQ) codes. This algorithm is less complex for linear
block codes of large block length; furthermore its performance
can be improved by tuning the decoder's parameters, in particular the
number of individuals per population and the number of generations.
In the second algorithm, the search space, in contrast to DAG which
was limited to the code word space, now covers the whole binary
vector space. It tries to elude a great number of coding operations
by using a neural network. This reduces greatly the complexity of
the decoder while maintaining comparable performances.
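The DAG search principle, fitness as (negative) Hamming distance over the codeword space, can be sketched on a toy code; the (7,4) Hamming code and all GA parameters below are illustrative assumptions, far smaller than the BCH, convolutional and RQ codes evaluated in the paper.

```python
import random

# Toy (7,4) Hamming code generator matrix in systematic form; it only
# illustrates the search principle of a DAG-style decoder.
G = [
    [1, 0, 0, 0, 1, 1, 0],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def encode(info):
    return [sum(info[i] * G[i][j] for i in range(4)) % 2 for j in range(7)]

def fitness(info, received):
    # Negative Hamming distance between the candidate codeword and the
    # received word: the search space is the codeword space, as in DAG.
    return -sum(a != b for a, b in zip(encode(info), received))

def ga_decode(received, pop_size=20, generations=30, rng=None):
    rng = rng or random.Random(1)
    pop = [[rng.randint(0, 1) for _ in range(4)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda ind: fitness(ind, received), reverse=True)
        next_pop = pop[:2]                      # elitism
        while len(next_pop) < pop_size:
            a, b = rng.sample(pop[:10], 2)      # select among the fittest
            cut = rng.randint(1, 3)
            child = a[:cut] + b[cut:]           # one-point crossover
            if rng.random() < 0.1:              # bit-flip mutation
                child[rng.randrange(4)] ^= 1
            next_pop.append(child)
        pop = next_pop
    return max(pop, key=lambda ind: fitness(ind, received))

info = [1, 0, 1, 1]
received = encode(info)
received[2] ^= 1                                # single-bit channel error
decoded = ga_decode(received)
```

Population size and generation count are exactly the tuning knobs the abstract names; for real block lengths the population explores only a vanishing fraction of the codeword space, which is where the complexity advantage over exhaustive soft-decision decoders comes from.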
Abstract: Globalization, and therefore increasingly tight competition among companies, has increased the importance of making well-timed decisions. Devising and employing effective strategies that are flexible and adaptive to a changing market stands a greater chance of being effective in the long term. On the other side, a clear focus on managing the entire product lifecycle has emerged as a critical area for investment. Therefore, applying well-organized tools to employ past experience in new cases helps to make proper managerial decisions. Case based reasoning (CBR) is a means of solving a new problem by using or adapting solutions to old problems. In this paper, an adapted CBR model with k-nearest neighbor (K-NN) is employed to provide suggestions for better decision making, adopted for a given product in its middle-of-life phase. The set of solutions is weighted by CBR on the principle of group decision making. A wrapper approach with a genetic algorithm is employed to generate optimal feature subsets. A dataset from a department store, covering various products collected over two years, has been used. A k-fold approach is used to evaluate the classification accuracy rate. Empirical results are compared with the classical case based reasoning algorithm, which has no special process for feature selection, the CBR-PCA algorithm based on filter-approach feature selection, and an Artificial Neural Network. The results indicate that the predictive performance of the model, compared with the two CBR algorithms, is more effective in this specific case.
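The retrieval core of such a CBR model can be sketched with distance-weighted k-NN voting; the feature names, case base and weighting below are invented for illustration and are not the paper's dataset.

```python
import math

# Hypothetical case base: (features, past solution). Features are assumed
# names, e.g. [price_index, promotion_level, season_code].
case_base = [
    ([0.8, 0.2, 1.0], "discount"),
    ([0.3, 0.9, 0.0], "bundle"),
    ([0.7, 0.1, 1.0], "discount"),
    ([0.2, 0.8, 0.0], "bundle"),
]

def knn_suggest(query, k=3):
    """Retrieve the k nearest past cases and let them vote, each vote
    weighted by inverse distance (a simple group-decision rule)."""
    nearest = sorted(
        (math.dist(query, feats), sol) for feats, sol in case_base
    )[:k]
    votes = {}
    for d, sol in nearest:
        votes[sol] = votes.get(sol, 0.0) + 1.0 / (d + 1e-9)
    return max(votes, key=votes.get)

suggestion = knn_suggest([0.75, 0.15, 1.0])
```

In the paper's full pipeline, a GA wrapper would first prune the feature vector before distances are computed; here the features are used as-is.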
Abstract: The success of IT projects concerning the
implementation of business application software strongly
depends upon efficient requirements
management, to understand the business requirements and to realize
them in the IT. In fact, however, the potentials of requirements
management are not fully exploited by small and medium-sized
enterprises (SMEs) of the IT sector. To work out recommendations for
action and, furthermore, a possible solution allowing these
potentials to be better exploited, a scientific research project
shall examine which problems occur and from which causes. At the
same time, the storage of knowledge from requirements management,
and its later reuse, are important to achieve sustainable
improvements of the competitiveness of the IT-SMEs. Requirements
Engineering is one of the
most important topics in Product Management for Software to
achieve the goal of optimizing the success of the software product.
Abstract: This paper is a review of the aspects and approaches of designing an image cryptosystem. First, a general introduction is given to cryptography and image encryption, followed by different techniques in image encryption, with related works surveyed for each technique. Finally, general security analysis methods for encrypted images are mentioned.
Abstract: This paper deals with condition monitoring of electric switch machines for railway points. The point machine, a complex electro-mechanical device, switches the track between two alternative routes. There has been an increasing interest in railway safety and the optimal management of railway equipment maintenance, e.g. point machines, in order to enhance railway service quality and reduce system failures. This paper explores the development of the Kolmogorov-Smirnov (K-S) test to detect some point failures (external to the machine: slide chairs, fixing, stretchers, etc.), while the point machine (inside the machine) is in its proper condition. Time-domain stator current signatures of normal (healthy) and faulty points are taken by 3 Hall effect sensors and are analyzed by the K-S test. The test is simulated by creating three types of such failures, namely putting a hard stone and a soft stone between the stock rail and switch blades as obstacles, and also slide chairs' friction. The test has been applied to those three faults, and the results show that the K-S test can effectively be developed for the detection of other point failures whose current signatures deviate parametrically from the healthy current signature. The K-S test, as an analysis technique, assumes that any defect has a specific probability distribution. Empirical cumulative distribution functions (ECDF) are used to differentiate these probability distributions. The test works on the null hypothesis that the ECDF of the target distribution is statistically similar to the ECDF of the reference distribution. Therefore, by comparing a given current signature (as the target signal) from an unknown switch state to a number of template signatures (as reference signals) from known switch states, it is possible to identify the most likely state of the point machine under analysis.
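The ECDF comparison at the heart of the K-S test can be sketched directly; the signatures and values below are illustrative, not measured stator currents.

```python
# Minimal sketch of the two-sample K-S statistic: the maximum gap between
# the empirical CDFs of a target current signature and a reference
# (template) signature. The signals are invented for illustration.
def ecdf(sample):
    xs = sorted(sample)
    n = len(xs)
    return lambda x: sum(v <= x for v in xs) / n

def ks_statistic(target, reference):
    f, g = ecdf(target), ecdf(reference)
    points = sorted(set(target) | set(reference))
    return max(abs(f(x) - g(x)) for x in points)

healthy = [1.0, 1.1, 0.9, 1.05, 0.95, 1.02]    # template signature
obstacle = [1.5, 1.6, 1.4, 1.55, 1.45, 1.52]   # shifted faulty signature
similar = [1.01, 0.99, 1.04, 0.96, 1.08, 0.92]

# A fault that shifts the current distribution yields a large statistic;
# a signature from the same state stays close to the template.
assert ks_statistic(obstacle, healthy) > ks_statistic(similar, healthy)
```

Comparing one target signature against each known-state template and picking the smallest statistic is the classification step the abstract describes.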
Abstract: Wireless sensor networks consist of hundreds or
thousands of small sensors that have limited resources.
Energy-efficient techniques are the main issue in wireless sensor
networks. This paper proposes an energy-efficient agent-based
framework for wireless sensor networks, adopting biologically
inspired approaches. Agents operate autonomously, with their
behavior policies acting as genes. An agent aggregates other agents
to reduce communication, and gives high priority to nodes that have
enough energy to communicate. Agent behavior policies are
optimized by genetic operation at the base station. Simulation results
show that our proposed framework increases the lifetime of each node.
Each agent selects a next-hop node with neighbor information and
behavior policies. Our proposed framework provides self-healing,
self-configuration, and self-optimization properties to sensor nodes.
Abstract: This work studies the role of the fluctuating density
gradient in compressible flows for computational fluid dynamics
(CFD). A new anisotropy tensor with the fluctuating density
gradient is introduced, and is used in an invariant modeling technique
to model the turbulent density gradient correlation equation derived
from the continuity equation. The modeling equation is decomposed
into three groups: one proportional to the mean velocity, one
proportional to the mean strain rate, and one proportional to the
mean density. The characteristics of the correlation in a wake are
extracted from the results of a two-dimensional direct simulation,
and show a strong correlation with the vorticity in the wake near the body.
Thus, it can be concluded that the correlation of the density gradient
is a significant parameter to describe the quick generation of the
turbulent property in the compressible flows.
Abstract: Modeling of distributed systems allows us to
represent the whole of their functionality. A working system instance
rarely fulfils the whole functionality represented by the model;
usually some parts of this functionality are needed only periodically.
A reporting system based on the Data Warehouse concept seems to
be an intuitive example of a system in which some functionality is
required only from time to time. Analyzing the enterprise risk
associated with a periodical change of system functionality, we
should consider not only the inaccessibility of the components
(objects) but also their functions (methods), and the impact of such
a situation on the system functionality from the business point of view.
In the paper we suggest that the risk attributes should be estimated
from the risk attributes specified at the requirements level (Use
Cases in the UML model), on the basis of information about the
structure of the model (presented at other levels of the UML model).
We argue that it is desirable to consider the influence of periodical
changes in requirements on the enterprise risk estimation. Finally,
a proposition of such a solution, based on the UML system model, is
presented.
Abstract: The Deoxyribonucleic Acid (DNA), a double-stranded helix of nucleotides, consists of: Adenine (A), Cytosine (C), Guanine (G) and Thymine (T). In this work, we convert this genetic code into an equivalent digital signal representation. Applying a wavelet transform, such as the Haar wavelet, we are able to extract details that are not so clear in the original genetic code. We compare different organisms using the results of the Haar wavelet transform. This is achieved by using the trend part of the signal, since the trend part bears most of the energy of the digital signal representation. Consequently, we are able to quantitatively reconstruct different biological families.
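A minimal sketch of the pipeline, with an assumed numeric mapping for the bases (the abstract does not specify one): convert a DNA string to a digital signal, then take one Haar transform step; the trend (approximation) part keeps most of the energy.

```python
import math

BASE_MAP = {"A": 0.0, "C": 1.0, "G": 2.0, "T": 3.0}  # illustrative mapping

def dna_to_signal(seq):
    """Map a DNA string to a numeric signal under the assumed BASE_MAP."""
    return [BASE_MAP[b] for b in seq]

def haar_step(signal):
    """One level of the orthonormal Haar transform: pairwise averages
    (trend) and pairwise differences (detail), each scaled by 1/sqrt(2)."""
    trend = [(signal[i] + signal[i + 1]) / math.sqrt(2)
             for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / math.sqrt(2)
              for i in range(0, len(signal), 2)]
    return trend, detail

sig = dna_to_signal("ACGTACGT")
trend, detail = haar_step(sig)

energy = lambda xs: sum(x * x for x in xs)
# The transform preserves energy, and the trend part carries most of it,
# which is why the abstract compares organisms on the trend coefficients.
assert math.isclose(energy(trend) + energy(detail), energy(sig))
assert energy(trend) > energy(detail)
```

Repeating `haar_step` on the trend gives coarser views of the sequence; comparing organisms then reduces to comparing these low-dimensional trend vectors.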
Abstract: In the effort to reduce water consumption for resorts,
more water conservation practices need to be implemented. Hence
water audits need to be performed to obtain a baseline of water
consumption, before planning water conservation practices. In this
study, a water audit framework specifically for resorts was created,
and the audit was performed on two resorts: Resort A in Langkawi,
Malaysia; and Resort B in Miri, Malaysia. From the audit, the total
daily water consumption for Resorts A and B was estimated to be
180 m3 and 330 m3 respectively, while the actual water consumption
(based on water meter readings) was 175 m3 and 325 m3. This
suggests that the audit framework is reasonably accurate and may be
used to account for most of the water consumption sources in a
resort. The daily water consumption per guest is about 500 litres. The
water consumption of both resorts is poorly rated compared with
established benchmarks. Water conservation measures were
suggested for both resorts.