Abstract: The challenge in image authentication is that images often need to undergo non-malicious operations such as compression, so authentication techniques must be compression tolerant. In this paper we propose an image authentication system that tolerates JPEG lossy compression. A scheme for JPEG greyscale images is proposed, built on a data-embedding method that uses a secret key and a secret mapping vector in the frequency domain. An encrypted feature vector extracted from the image DCT coefficients is embedded redundantly and invisibly in the marked image. On the receiver side, the feature vector is derived again from the received image and compared against the extracted watermark to verify the image's authenticity. The proposed scheme is robust against JPEG compression up to a maximum compression of approximately 80%, but sensitive to malicious attacks such as cutting and pasting.
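The idea of a compression-tolerant DCT feature vector can be sketched in a few lines. The following is a minimal, hypothetical illustration (a 1-D DCT on one block, sign bits of low-frequency coefficients as the feature, and a Hamming-distance threshold), not the paper's actual key- and mapping-vector-based scheme:

```python
import math

def dct_1d(block):
    """Naive DCT-II of a 1-D block (stand-in for the 8x8 JPEG DCT)."""
    n_len = len(block)
    return [sum(x * math.cos(math.pi * (n + 0.5) * k / n_len)
                for n, x in enumerate(block)) for k in range(n_len)]

def feature_bits(pixels, n_coeffs=4):
    """Signs of the first few low-frequency AC coefficients: a crude,
    compression-tolerant feature vector (illustrative only)."""
    coeffs = dct_1d(pixels)
    return [1 if c >= 0 else 0 for c in coeffs[1:1 + n_coeffs]]

def authenticate(original, received, max_mismatch=1):
    """Recompute the feature vector and compare it to the embedded one."""
    a, b = feature_bits(original), feature_bits(received)
    return sum(x != y for x, y in zip(a, b)) <= max_mismatch

block = [52, 55, 61, 66, 70, 61, 64, 73]
compressed = [p + 1 for p in block]       # mild, JPEG-like distortion
tampered = list(reversed(block))          # cut-and-paste style change
print(authenticate(block, compressed), authenticate(block, tampered))
```

Because a uniform brightness shift leaves the AC coefficients untouched, the mild distortion passes, while reversing the block flips the odd-frequency signs and fails the check.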
Abstract: For the past couple of decades, weak signal detection has been of crucial importance in various engineering and scientific applications, among them wireless communication, radar, aerospace engineering, and control systems. Weak signal detection usually requires a phase-sensitive detector and a demodulation module to detect and analyze the signal. This article gives a preamble to an intrusion detection system that can effectively detect a weak signal within a multiplexed signal. By carefully inspecting and analyzing the respective signal, the system can successfully indicate any perimeter intrusion. The intrusion detection system (IDS) is a comprehensive and straightforward approach to detecting and analyzing any signal that is weakened and garbled due to a low signal-to-noise ratio (SNR). This approach is of significant importance in applications such as perimeter security systems.
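The phase-sensitive detection the abstract mentions can be sketched as lock-in correlation: multiplying the multiplexed signal by in-phase and quadrature references at the frequency of interest and averaging. The frequencies, amplitudes, and noise level below are illustrative, not taken from the paper:

```python
import math
import random

random.seed(0)
FS, N = 1000, 8000   # sample rate (Hz) and record length

def phase_sensitive_detect(samples, freq, fs):
    """Correlate with in-phase and quadrature references and return the
    estimated amplitude of the component at freq (lock-in style)."""
    i = sum(s * math.cos(2 * math.pi * freq * n / fs) for n, s in enumerate(samples))
    q = sum(s * math.sin(2 * math.pi * freq * n / fs) for n, s in enumerate(samples))
    return 2.0 * math.hypot(i, q) / len(samples)

weak = [0.05 * math.sin(2 * math.pi * 50 * n / FS) for n in range(N)]    # weak tone
strong = [1.00 * math.sin(2 * math.pi * 120 * n / FS) for n in range(N)] # interferer
noise = [random.gauss(0.0, 0.5) for _ in range(N)]
multiplexed = [w + s + v for w, s, v in zip(weak, strong, noise)]

amp_50 = phase_sensitive_detect(multiplexed, 50, FS)   # weak tone present here
amp_75 = phase_sensitive_detect(multiplexed, 75, FS)   # nothing at 75 Hz
print(amp_50, amp_75)
```

Even though the 50 Hz tone is 20 dB below the interferer and buried in noise, the correlation recovers its amplitude (about 0.05) while an empty frequency yields only a small residual.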
Abstract: Traditional development of wireless sensor network motes is generally based on a System-on-Chip (SoC) platform. This method of development faces three main drawbacks: lack of flexibility in development, due to the low resources and rigid architecture of the SoC; poor evolvability and portability versus performance when specific micro-controller architecture features are used; and the rapid obsolescence of the micro-controller compared to the long lifetime of power plants and other industrial installations. To overcome these drawbacks, we have explored a new approach to developing wireless sensor network motes using hybrid FPGA technology. The application of this approach is illustrated through the implementation of an innovative wireless sensor network protocol called OCARI.
Abstract: Verification of real-time software systems can be expensive in terms of time and resources. Testing is the main method of establishing correctness but has been shown to be a long and time-consuming process. Practising engineers are often unwilling to adopt formal approaches to correctness because of the overhead of developing their knowledge of such techniques. Performance modelling techniques allow systems to be evaluated with respect to timing constraints. This paper describes PARTES, a framework which guides the extraction of performance models from programs written in an annotated subset of C.
Abstract: Nowadays, the focus on renewable energy and alternative fuels has increased due to rising oil prices, environmental pollution, and concern for preserving nature. Biodiesel is known as an attractive alternative fuel, although biodiesel produced from edible oil is much more expensive than conventional diesel; biodiesel produced from non-edible oils is therefore a much better option. Currently, Jatropha biodiesel (JBD) is receiving attention as an alternative fuel for diesel engines. Biodiesel is non-toxic, biodegradable, highly lubricating, and highly renewable, and its use therefore produces a real reduction in petroleum consumption and carbon dioxide (CO2) emissions. Despite these advantages, biodiesel still has several properties that need improvement, such as a lower calorific value, lower effective engine power, higher emission of nitrogen oxides (NOX), and greater sensitivity to low temperature. Exhaust gas recirculation (EGR) is an effective technique for reducing NOX emission from diesel engines because it lowers the flame temperature and oxygen concentration in the combustion chamber. Some studies have succeeded in reducing NOX emission from biodiesel by EGR, but they observed increased soot emission. The aim of this study was to investigate engine performance and soot emission using blended Jatropha biodiesel at different EGR rates. A water-cooled, turbocharged CI engine with an indirect injection system was used for the investigation. Soot, NOX, CO2, and carbon monoxide (CO) emissions were recorded, and various engine performance parameters were also evaluated.
Abstract: Initially, research and development on RFID focused on the manufacturing and retail sectors, because RFID can improve supply chain efficiency. Now, however, a variety of other fields are being considered as the next research areas for Radio Frequency Identification (RFID). Although RFID is in its infancy, the technology has great potential in the power industry to significantly reduce costs and improve the quality of power supply. To complement the limitations of RFID, we adopt WSN (Wireless Sensor Network) technology. However, relevant experience is limited, so the challenge is to derive requirements from business practice and to determine whether such a system is feasible. To explore this issue, we conducted a case study on implementing a power facility management system using RFID/WSN at Korea Electric Power Corporation (KEPCO). In this paper we describe the requirements from the power industry and introduce the design and implementation of the test bed.
Abstract: In this paper, a new secure watermarking scheme for color images is proposed. It splits the watermark into two shares using a (2, 2)-threshold Visual Cryptography Scheme (VCS) with an Adaptive Order Dithering technique and embeds one share into a highly textured subband of the luminance channel of the color image. The other share is used as the key and is available only to the super-user or the author of the image; in this scheme only the super-user can reveal the original watermark. The proposed scheme is dynamic in the sense that, to maintain the perceptual similarity between the original and the watermarked image, the selected subband coefficients are modified by varying the watermark scaling factor. The experimental results demonstrate the effectiveness of the proposed scheme. Further, the proposed scheme is able to resist all common attacks, even at strong amplitudes.
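The (2, 2)-threshold share splitting at the heart of such a scheme can be sketched for binary pixels (the paper works on dithered color images; this is the textbook black-and-white construction, shown only to illustrate the principle):

```python
import random

# Subpixel pattern pair for a (2, 2)-VCS: each secret pixel expands to two
# subpixels per share, so a single share is indistinguishable from noise.
PATTERNS = [(0, 1), (1, 0)]

def split(secret_bits, rng):
    """Split a binary secret into two visually random shares."""
    share1, share2 = [], []
    for bit in secret_bits:
        p = rng.choice(PATTERNS)
        share1.append(p)
        # white pixel (0): identical patterns; black pixel (1): complementary
        share2.append(p if bit == 0 else tuple(1 - s for s in p))
    return share1, share2

def stack(share1, share2):
    """Physically stacking transparencies is a pixelwise OR."""
    return [tuple(a | b for a, b in zip(p1, p2)) for p1, p2 in zip(share1, share2)]

rng = random.Random(42)
secret = [0, 1, 1, 0, 1]
s1, s2 = split(secret, rng)
# black secret pixels stack to (1, 1); white pixels keep one white subpixel
decoded = [1 if pix == (1, 1) else 0 for pix in stack(s1, s2)]
print(decoded)
```

Only the stacked pair reveals the secret; each share on its own is a uniformly random sequence of the two subpixel patterns, which is why the key share can safely stay with the super-user.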
Abstract: Indices summarizing community structure are used to evaluate fundamental community ecology, species interaction, biogeographical factors, and environmental stress. Some of these indices are insensitive to gross community changes induced by pollutants. Diversity indices and similarity indices are reviewed with regard to their ecological application, both theoretical and practical. For some useful indices, empirical equations are given to calculate the expected maximum value of the index, to which the observed values can be related at any combination of sample sizes at the experimental sites. This paper examines the effects of sample size and diversity on the expected values of diversity indices and similarity indices, using various formulae. It is shown that all the indices are strongly affected by sample size and diversity; in some indices this influence is greater than in others, and an attempt has been made to deal with these influences.
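The sample-size sensitivity discussed above is easy to demonstrate with two standard indices, Shannon diversity and Sorensen similarity. The community below is synthetic and the "small sample" is deliberately unrepresentative, purely to show the effect:

```python
import math
from collections import Counter

def shannon(counts):
    """Shannon diversity H' = -sum(p_i * ln p_i) over species abundances."""
    n = sum(counts)
    return -sum((c / n) * math.log(c / n) for c in counts if c > 0)

def sorensen(sample_a, sample_b):
    """Sorensen similarity: 2 * shared species / (species in A + species in B)."""
    a, b = set(sample_a), set(sample_b)
    return 2 * len(a & b) / (len(a) + len(b))

community = ["sp1"] * 40 + ["sp2"] * 30 + ["sp3"] * 20 + ["sp4"] * 10
h_full = shannon(list(Counter(community).values()))

small = community[:20]                 # a small, biased sample: only sp1 seen
h_small = shannon(list(Counter(small).values()))
sim = sorensen(community, small)
print(h_full, h_small, sim)
```

The full community scores H' of about 1.28, while the small sample, having missed three species entirely, scores 0; the similarity between the community and its own subsample is likewise depressed. This is exactly the kind of sample-size artefact the expected-maximum-value corrections are meant to absorb.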
Abstract: A novel calibration approach that aims to reduce ASM2d parameter subsets and decrease model complexity is presented. This approach does not require high computational demand, and reduces the number of modeling parameters required to calibrate the ASMs by employing a sensitivity-and-iteration methodology. Parameter sensitivity is a crucial factor, and the iteration methodology enables refinement of the simulated parameter values. During the iteration process, parameter values are determined in descending order of their sensitivities; the number of iterations required equals the number of model parameters in the parameter significance ranking. This approach was applied to the ASM2d model to evaluate enhanced biological phosphorus removal (EBPR), and it was successful. The simulation results provide the calibration parameters YPAO, YPO4, YPHA, qPHA, qPP, μPAO, bPAO, bPP, bPHA, KPS, YA, μAUT, bAUT, KO2,AUT, and KNH4,AUT, whose values correspond to the available experimental data.
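The sensitivity-then-iteration idea (rank parameters by how much a small perturbation changes the fit, then refine them one per iteration in descending order) can be sketched on a toy model. The quadratic model, parameter names, and data below are illustrative stand-ins, not ASM2d:

```python
# Toy coordinate-calibration sketch: rank parameters by sensitivity, then
# fit one parameter per iteration in descending order of sensitivity.

def model(params, x):
    return params["a"] * x * x + params["b"] * x + params["c"]

def sse(params, data):
    """Sum of squared errors against the calibration data."""
    return sum((model(params, x) - y) ** 2 for x, y in data)

def sensitivity(params, data, name, delta=0.01):
    """Absolute change in SSE for a small perturbation of one parameter."""
    perturbed = dict(params)
    perturbed[name] += delta
    return abs(sse(perturbed, data) - sse(params, data))

data = [(x, 2.0 * x * x + 1.0 * x + 0.5) for x in range(6)]   # synthetic "measurements"
params = {"a": 1.0, "b": 0.0, "c": 0.0}                        # initial guesses

ranking = sorted(params, key=lambda n: sensitivity(params, data, n), reverse=True)
for name in ranking:                     # one iteration per ranked parameter
    best = min((sse({**params, name: v}, data), v)
               for v in [params[name] + s * 0.05 * k
                         for s in (-1, 1) for k in range(200)])
    params[name] = best[1]

print(ranking, params)
```

The most sensitive parameter (here the quadratic coefficient) is fixed first, so later, less sensitive parameters are tuned against an already-improved fit, which is the essence of the significance-ranked iteration the abstract describes.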
Abstract: In this paper we consider the problem of distributed adaptive estimation in wireless sensor networks under two different observation noise conditions. In the first case, we assume that some sensors in the network have high observation noise variance (noisy sensors). In the second case, the observation noise variance is assumed to differ among the sensors, which is closer to a real scenario. In both cases, an initial estimate of each sensor's observation noise is obtained. For the first case, we show that when such sensors are present in the network, the performance of conventional distributed adaptive estimation algorithms, such as the incremental distributed least mean square (IDLMS) algorithm, drastically decreases; moreover, detecting and ignoring these sensors leads to better estimation performance. We then propose a simple algorithm to detect these noisy sensors and modify the IDLMS algorithm to deal with them. For the second case, we propose a new algorithm in which the step-size parameter is adjusted for each sensor according to its observation noise variance. As the simulation results show, the proposed methods outperform the IDLMS algorithm under the same conditions.
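The benefit of a noise-variance-dependent step size can be shown with a toy incremental LMS loop over a ring of sensors estimating a scalar. The model, constants, and the specific step-size rule below are illustrative, not the paper's algorithm:

```python
import random

random.seed(1)
TRUE_W = 3.0
NOISE_STD = [0.1, 0.1, 2.0, 0.1, 0.1]     # sensor 2 is a "noisy sensor"

def run(adaptive_mu, n_cycles=600, settle=300):
    """One incremental pass per cycle: each sensor refines the estimate w."""
    w, sq_errs = 0.0, []
    for cycle in range(n_cycles):
        for std in NOISE_STD:
            u = random.uniform(0.5, 1.5)          # regressor sample
            d = TRUE_W * u + random.gauss(0.0, std)
            # uniform step size, or one scaled by the sensor's noise variance
            mu = 0.05 / (std * std) if adaptive_mu else 0.05
            mu = min(mu, 0.5)                     # keep the update stable
            w += mu * u * (d - w * u)             # LMS update at this sensor
        if cycle >= settle:
            sq_errs.append((w - TRUE_W) ** 2)
    return sum(sq_errs) / len(sq_errs)            # steady-state MSE

mse_uniform = run(adaptive_mu=False)
mse_adaptive = run(adaptive_mu=True)
print(mse_uniform, mse_adaptive)
```

With a uniform step size the noisy sensor injects large errors into every pass; shrinking its step size (and letting clean sensors adapt faster) cuts the steady-state MSE substantially, mirroring the second case in the abstract.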
Abstract: The nature of wireless ad hoc and sensor networks makes them very attractive to attackers. One of the most popular and serious attacks in wireless ad hoc networks is the wormhole attack, and most protocols proposed to defend against it rely on positioning devices, synchronized clocks, or directional antennas. This paper analyzes the nature of the wormhole attack and existing defence mechanisms, and then proposes a wormhole detection mechanism based on round-trip time (RTT) and neighbor numbers. The mechanism considers the RTT between two successive nodes and those nodes' neighbor numbers, which are compared against the values of other successive nodes. The identification of wormhole attacks is based on two observations. The first is that the transmission time between two nodes affected by a wormhole attack is considerably higher than that between two normal neighbor nodes. The second is that, by introducing new links into the network, the adversary increases the number of neighbors of the nodes within its radius. The system does not require any specific hardware, has good performance with little overhead, and does not consume extra energy. The proposed system is designed within the ad hoc on-demand distance vector (AODV) routing protocol, and analysis and simulations of the system are performed in the network simulator ns-2.
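The two checks (anomalous RTT between successive nodes and inflated neighbor counts) can be sketched as simple threshold tests against the network-wide medians. The topology, RTT values, and threshold factors below are made up for illustration:

```python
def detect_wormhole(links, neighbor_counts, rtt_factor=2.0, degree_factor=2.0):
    """Flag links whose RTT exceeds rtt_factor * median RTT, and nodes whose
    neighbor count exceeds degree_factor * median degree."""
    rtts = sorted(rtt for _, _, rtt in links)
    degrees = sorted(neighbor_counts.values())
    med_rtt = rtts[len(rtts) // 2]
    med_deg = degrees[len(degrees) // 2]
    bad_links = [(a, b) for a, b, rtt in links if rtt > rtt_factor * med_rtt]
    bad_nodes = [n for n, d in neighbor_counts.items() if d > degree_factor * med_deg]
    return bad_links, bad_nodes

# RTTs (ms) between successive nodes; the E-F link is tunnelled by a wormhole,
# so its round trip is far longer, and both endpoints see many extra neighbors.
links = [("A", "B", 2.1), ("B", "C", 1.9), ("C", "D", 2.2),
         ("D", "E", 2.0), ("E", "F", 9.5)]
neighbors = {"A": 3, "B": 4, "C": 3, "D": 4, "E": 9, "F": 10}

bad_links, bad_nodes = detect_wormhole(links, neighbors)
print(bad_links, bad_nodes)
```

Both tests agree here: the tunnelled link stands out on RTT, and its endpoints stand out on neighbor count, which is why combining the two checks keeps false positives low without extra hardware.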
Abstract: This paper deals with a periodic-review substitutable
inventory system for a finite and an infinite number of periods. Here
an upward substitution structure, a substitution of a more costly item
by a less costly one, is assumed, with two products. At the beginning
of each period, a stochastic demand comes for the first item only,
which is quality-wise better and hence costlier. Whenever an arriving
demand finds zero inventory of this product, a fraction of unsatisfied
customers goes for its substitutable second item. An optimal ordering
policy has been derived for each period. The results are illustrated
with numerical examples. A sensitivity analysis has been done to
examine how sensitive the optimal solution and the maximum profit
are to the values of the discount factor, when there is a large number
of periods.
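A single-period version of the substitution structure described above can be sketched numerically: stochastic demand arrives for item 1 only, a fraction of unmet customers accept item 2, and the order pair is chosen by grid search over Monte Carlo profit estimates. Prices, costs, the substitution fraction, and the demand distribution are illustrative, not the paper's:

```python
import random

random.seed(5)
PRICE = {1: 10.0, 2: 6.0}
COST = {1: 6.0, 2: 3.0}
SUB_FRACTION = 0.5                       # share of unmet demand that substitutes
DEMANDS = [random.randint(20, 80) for _ in range(2000)]   # demand scenarios

def expected_profit(q1, q2):
    """Monte Carlo estimate of one period's profit for order pair (q1, q2)."""
    total = 0.0
    for d in DEMANDS:
        sold1 = min(q1, d)
        sub_demand = SUB_FRACTION * (d - sold1)   # overflow toward item 2
        sold2 = min(q2, sub_demand)
        total += (PRICE[1] * sold1 + PRICE[2] * sold2
                  - COST[1] * q1 - COST[2] * q2)
    return total / len(DEMANDS)

best = max((expected_profit(q1, q2), q1, q2)
           for q1 in range(0, 101, 5) for q2 in range(0, 51, 5))
print(best)
```

The search lands near the newsvendor quantity for item 1, with a small hedge of item 2 to capture substituting customers; a multi-period, discounted version of this evaluation is what the dynamic-programming analysis in the paper optimizes exactly.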
Abstract: Today, higher education worldwide is subordinated to ever-greater institutional controls through Quality of Education policies. These include processes of over-evaluation of all academic activities: students' and professors' performance, educational logistics, managerial standards for the administration of institutions of higher education, as well as the establishment of imaginaries of excellence and prestige as the foundations on which universities of the XXI century will focus their present and future goals and interests. At the same time, higher education systems worldwide are facing a profound crisis of sense and meaning and are undergoing enormous mutations in their identity. Based on a qualitative research approach, this paper shows the social configurations that scholars at universities in Mexico build around the discourse of the Quality of Education, and how these policies put at risk the social recognition of these individuals.
Abstract: This study examined a habitat-suitability assessment method, the Ecological Niche Factor Analysis (ENFA). A virtual species was created and then dispatched in a geographic information system model of a real landscape under three historic scenarios: (1) spreading, (2) equilibrium, and (3) overabundance. In each scenario, the virtual species was sampled and these simulated data sets were used as inputs for the ENFA to reconstruct the habitat suitability model. The 'equilibrium' scenario gave the highest quantity and quality among the three scenarios. ENFA was sensitive to the distribution scenarios but not to sample size. The use of a virtual species proved to be a very efficient method, allowing one to fully control the quality of the input data as well as to accurately evaluate the predictive power of the analyses.
Abstract: Noise level has critical effects on the diagnostic performance of the signal-averaged electrocardiogram (SAECG), because the true start and end points of the QRS complex can be masked by residual noise and are sensitive to the noise level. Several studies and commercial machines have used a fixed number of heart beats (typically between 200 and 600) or a predefined noise level (typically between 0.3 and 1.0 μV) in each of the X, Y and Z leads to perform SAECG analysis. However, the different criteria or methods used to perform SAECG cause discrepancies in the noise levels among study subjects. According to the recommendations of the 1991 ESC, AHA and ACC Task Force Consensus Document for the use of SAECG, the determinations of onset and offset are related closely to the mean and standard deviation of the noise sample. Hence, this study performed SAECG using consistent root-mean-square (RMS) noise levels among study subjects and analyzed the effects of noise level on SAECG. The study also evaluated the differences between normal subjects and chronic renal failure (CRF) patients in the time-domain SAECG parameters.
The study subjects comprised 50 normal Taiwanese and 20 CRF patients. During signal-averaged processing, different RMS noise levels were set to evaluate their effects on three time-domain parameters: (1) filtered total QRS duration (fQRSD), (2) RMS voltage of the last 40 ms of the QRS (RMS40), and (3) duration of the low-amplitude signals below 40 μV (LAS40). The results demonstrated that reducing the RMS noise level increases fQRSD and LAS40, decreases RMS40, and further increases the differences in fQRSD and RMS40 between normal subjects and CRF patients. The SAECG may also become abnormal owing to the reduction of the RMS noise level. In conclusion, it is essential to establish diagnostic criteria for SAECG using consistent RMS noise levels to reduce noise level effects.
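The consistent-RMS-noise idea (keep averaging beats until the residual noise in a quiet window falls below a preset level, rather than averaging a fixed beat count) can be sketched on a synthetic beat. The template, noise level, and target below are illustrative:

```python
import math
import random

random.seed(7)
TEMPLATE = [math.exp(-((t - 100) ** 2) / 50.0) for t in range(200)]  # toy QRS shape

def noisy_beat(noise_std=0.5):
    return [s + random.gauss(0.0, noise_std) for s in TEMPLATE]

def rms(window):
    return math.sqrt(sum(x * x for x in window) / len(window))

def average_to_noise_target(target_rms, max_beats=5000):
    """Average beats until residual RMS noise in the isoelectric segment
    (first 40 samples, where the template is flat) reaches the target."""
    acc = [0.0] * len(TEMPLATE)
    for n in range(1, max_beats + 1):
        acc = [a + b for a, b in zip(acc, noisy_beat())]
        avg = [a / n for a in acc]
        if rms(avg[:40]) <= target_rms:
            return n, avg
    return max_beats, [a / max_beats for a in acc]

n_beats, averaged = average_to_noise_target(target_rms=0.05)
print(n_beats)
```

Because averaged noise shrinks roughly as 1/sqrt(N), a 10x noise reduction needs on the order of 100 beats here; fixing the target RMS rather than the beat count is what makes the residual noise, and hence the measured onset and offset, comparable across subjects.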
Abstract: The dispersion of a line of heavy particles in an isotropic, incompressible, three-dimensional turbulent flow has been studied using Kinematic Simulation techniques to find the evolution of the line's fractal dimension. The fractal dimension of the line is found for different values of the heavy particles' inertia (different Stokes numbers) in the absence of particle gravity, and is compared with the fractal dimension obtained for the diffusion of a material line at the same Reynolds number. It can be concluded that particle inertia affects the fractal dimension of a line released in a turbulent flow for Stokes numbers 0.02 < St < 2. At small times, most of the cases are unaffected by inertia until a certain time, the particle response time τa, which grows with the particles' inertia; at larger times, the fractal dimension of the line increases as the particles become more sensitive to the small scales, which change the line's shape during its journey.
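The fractal dimension of such a line is typically measured by box counting: count occupied boxes N(eps) at several box sizes and fit log N against log(1/eps). The two curves below are synthetic stand-ins (a straight line and a highly convoluted one), not Kinematic Simulation output:

```python
import math

def box_count(points, eps):
    """Number of eps-sized grid boxes touched by the point set."""
    return len({(math.floor(x / eps), math.floor(y / eps)) for x, y in points})

def box_dimension(points, sizes=(0.1, 0.05, 0.025, 0.0125)):
    """Least-squares slope of log N(eps) versus log(1/eps)."""
    xs = [math.log(1.0 / e) for e in sizes]
    ys = [math.log(box_count(points, e)) for e in sizes]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

straight = [(t / 10000.0, 0.5) for t in range(10001)]
wiggly = [(t / 10000.0, 0.3 * math.sin(200.0 * math.pi * t / 10000.0))
          for t in range(10001)]

d_straight = box_dimension(straight)
d_wiggly = box_dimension(wiggly)
print(d_straight, d_wiggly)
```

Over this range of scales the straight line measures close to dimension 1, while the convoluted curve fills its band and measures much higher, which is the same diagnostic used to quantify how inertia wrinkles the particle line.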
Abstract: Virtual Reality (VR) is becoming increasingly important for business, education, and entertainment, and VR technology has therefore been applied for training purposes in areas such as the military, safety training, and flight simulators. In particular, a highly reliable VR training system depends strongly on immersion. Manipulation training in immersive virtual environments is difficult partly because users must do without the haptic contact with real objects that they rely on in the real world to orient themselves and their manipulated objects.
In this paper, we create a questionnaire on immersion and an experiment to assess the influence of immersion on performance in a VR training system. The Immersion Questionnaire (IQ) covers spatial immersion, psychological immersion, and sensory immersion, and the training tasks involve visual attention and signal detection. Twenty subjects were allocated to a factorial design consisting of two different VR systems (desktop VR and projector VR). The results indicated that the different VR presentation methods significantly affected the participants' immersion dimensions.
Abstract: One of the determinants of a firm's prosperity is its customers' perceived service quality and satisfaction. While service quality is wide in scope and consists of various dimensions, these dimensions may differ in their relative importance in shaping customers' overall satisfaction with service quality. Identifying the relative rank of the different dimensions of service quality is very important, since it can help managers find out which service dimensions have the greater effect on customers' overall satisfaction. Such insight leads to more effective resource allocation, and ultimately to higher levels of customer satisfaction. Despite its criticality, this issue has not received enough attention so far. Therefore, using a sample of 240 bank customers in Iran, an artificial neural network is developed to address this gap in the literature. As customers' evaluation of service quality is a subjective process, artificial neural networks, as a brain metaphor, appear well suited to modelling such a complicated process. Proposing a neural network able to predict customers' overall satisfaction with service quality at a promising level of accuracy is the first contribution of this study. In addition, prioritizing the service quality dimensions according to their effect on customers' overall satisfaction, by means of a sensitivity analysis of the neural network, is the second important finding of this paper.
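The train-then-perturb workflow can be sketched with a minimal single-hidden-layer network: after training, each input (standing in for a service-quality dimension) is perturbed and the mean change in the predicted satisfaction is recorded as its sensitivity. The data are synthetic (dimension 0 is made to matter most) and the architecture is a deliberate simplification of the paper's model:

```python
import math
import random

random.seed(3)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# synthetic data: "satisfaction" depends strongly on dim 0, weakly on dim 2
data = []
for _ in range(200):
    x = [random.random() for _ in range(3)]
    data.append((x, sigmoid(3.0 * x[0] + 1.0 * x[1] + 0.2 * x[2] - 2.0)))

H = 4
w1 = [[random.uniform(-0.5, 0.5) for _ in range(3)] for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-0.5, 0.5) for _ in range(H)]
b2 = 0.0

def forward(x):
    h = [sigmoid(sum(w * xi for w, xi in zip(w1[j], x)) + b1[j]) for j in range(H)]
    return sigmoid(sum(w * hj for w, hj in zip(w2, h)) + b2), h

lr = 0.5
for _ in range(300):                      # plain per-sample gradient descent
    for x, y in data:
        out, h = forward(x)
        d_out = (out - y) * out * (1 - out)
        for j in range(H):
            d_h = d_out * w2[j] * h[j] * (1 - h[j])
            w2[j] -= lr * d_out * h[j]
            for i in range(3):
                w1[j][i] -= lr * d_h * x[i]
            b1[j] -= lr * d_h
        b2 -= lr * d_out

mse = sum((forward(x)[0] - y) ** 2 for x, y in data) / len(data)

def sensitivity(i, delta=0.1):
    """Mean absolute output change when input dimension i is perturbed."""
    return sum(abs(forward([xj + (delta if k == i else 0.0)
                            for k, xj in enumerate(x)])[0] - forward(x)[0])
               for x, _ in data) / len(data)

sens = [sensitivity(i) for i in range(3)]
print(mse, sens)
```

After training, the perturbation sensitivities recover the planted importance ordering, which is the mechanism behind ranking the service-quality dimensions.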
Abstract: An autonomous environmental monitoring system (Smart Landfill) has been constructed for the quantitative measurement of the components of landfill gas found in borehole wells at the perimeter of landfill sites. The main components of landfill gas, the greenhouse gases methane and carbon dioxide, have been monitored in the range 0-5% by volume. This monitoring system has not only been tested in the laboratory but has also been deployed in multiple field trials, and the collected data compared successfully with on-site monitors. This success shows the potential of this system for application in environments where reliable gas monitoring is crucial.
Abstract: The information revealed by derivatives can help to
better characterize digital near-end crosstalk signatures with the
ultimate goal of identifying the specific aggressor signal.
Unfortunately, derivatives tend to be very sensitive to even low
levels of noise. In this work we approximated the derivatives of both
quiet and noisy digital signals using a wavelet-based technique. The
results are presented for Gaussian digital edges, IBIS Model digital
edges, and digital edges in oscilloscope data captured from an actual
printed circuit board. Tradeoffs between accuracy and noise
immunity are presented. The results show that the wavelet technique
can produce first derivative approximations that are accurate to
within 5% or better, even under noisy conditions. The wavelet
technique can be used to calculate the derivative of a digital signal
edge when conventional methods fail.
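The noise-robustness trade-off described above can be sketched by comparing a plain finite difference with a smoothing, wavelet-like differentiator. Here a derivative-of-Gaussian kernel stands in for the paper's wavelet technique, and the edge shape, noise level, and kernel width are illustrative:

```python
import math
import random

random.seed(11)
DT = 0.01        # sample spacing
SIGMA = 5        # kernel width in samples

def edge(t):     # Gaussian-type digital edge, slope 1.0 at its centre t = 5
    return 1.0 / (1.0 + math.exp(-4.0 * (t - 5.0)))

ts = [i * DT for i in range(1001)]
clean = [edge(t) for t in ts]
noisy = [v + random.gauss(0.0, 0.01) for v in clean]

# derivative-of-Gaussian weights, normalized for unit response to a ramp
KS = list(range(-3 * SIGMA, 3 * SIGMA + 1))
WEIGHTS = [k * math.exp(-(k * k) / (2.0 * SIGMA * SIGMA)) for k in KS]
NORM = DT * sum(k * w for k, w in zip(KS, WEIGHTS))

def smoothed_derivative(signal, i):
    return sum(w * signal[i + k] for k, w in zip(KS, WEIGHTS)) / NORM

i_mid = 500                                  # centre of the edge
fd_noisy = (noisy[i_mid + 1] - noisy[i_mid - 1]) / (2 * DT)   # finite difference
wv_noisy = smoothed_derivative(noisy, i_mid)
wv_clean = smoothed_derivative(clean, i_mid)
print(fd_noisy, wv_noisy, wv_clean)
```

The finite difference divides the noise by a tiny interval and can be wildly wrong, while the smoothed estimate stays close to the true slope of 1.0 at a small cost in bias, the accuracy-versus-noise-immunity trade-off the abstract reports.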