Abstract: Signature amortization schemes have been introduced for authenticating multicast streams, in which a single signature is amortized over several packets. The hash value of each packet is computed, and some hash values are appended to other packets, forming what is known as a hash chain. These schemes divide the stream into blocks, each consisting of a number of packets; the signature packet in these schemes is either the first or the last packet of the block. Amortization schemes are efficient solutions in terms of computation and communication overhead, especially in real-time environments. The main factor in the effectiveness of an amortization scheme is its hash chain construction. Some studies show that signing the first packet of each block reduces the receiver's delay and prevents DoS attacks; other studies show that signing the last packet reduces the sender's delay. To our knowledge, there are no studies that show which is better, signing the first or the last packet, in terms of authentication probability and resistance to packet loss.
In this paper we introduce another scheme for authenticating multicast streams that is robust against packet loss, reduces the overhead, and at the same time prevents the DoS attacks experienced by the receiver. Our scheme, the Multiple Connected Chain signing the First packet (MCF), appends the hash values of specific packets to other packets, then appends some hashes to the signature packet, which is sent as the first packet in the block. This scheme is especially efficient in terms of the receiver's delay. We discuss and evaluate the performance of our proposed scheme against those that sign the last packet of the block.
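As a hedged illustration of the general hash-chain idea (a minimal single chain, not the specific MCF multi-chain layout, which is not detailed here), the sketch below builds a block in which each packet carries the hash of its successor and only the first packet's hash is signed; `sign` and `verify_sig` are placeholders for any digital signature primitive:

```python
import hashlib

def build_block(payloads, sign):
    """Amortize one signature over a block: each packet embeds the
    SHA-256 hash of its successor; only the head hash is signed."""
    packets = []
    next_hash = b""
    # Walk backwards so each packet can embed the hash of its successor.
    for payload in reversed(payloads):
        packet = payload + next_hash
        next_hash = hashlib.sha256(packet).digest()
        packets.append(packet)
    packets.reverse()
    # Signing the first packet's hash (as in first-packet schemes) lets
    # the receiver verify packets as they arrive.
    signature = sign(next_hash)
    return signature, packets

def verify_block(signature, packets, verify_sig):
    """Check the signature on the first packet, then the chain links."""
    expected = hashlib.sha256(packets[0]).digest()
    if not verify_sig(signature, expected):
        return False
    # Each packet's trailing 32 bytes must hash-link to the next packet.
    for i in range(len(packets) - 1):
        if packets[i][-32:] != hashlib.sha256(packets[i + 1]).digest():
            return False
    return True
```

Because the signature arrives first, a receiver can authenticate packets as they stream in, which is the receiver-delay advantage the abstract attributes to first-packet signing; a single chain like this, unlike MCF's multiple connected chains, loses authenticity of all subsequent packets after one loss.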
Abstract: In this paper, an efficient method for personal identification based on the pattern of the human iris is proposed. It is composed of image acquisition, image preprocessing to unwrap the iris into a flat representation, conversion into an eigeniris, and a decision stage that uses only a one-dimensional reduction of the iris. By comparing eigenirises, it is determined whether two irises are similar. The results show that the proposed method is quite effective.
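A minimal PCA-style sketch of the eigeniris idea, assuming irises have already been unwrapped and flattened into equal-length vectors; the basis size `k` and the distance `threshold` are illustrative, not the paper's values:

```python
import numpy as np

def eigeniris_basis(training, k=3):
    """Build an eigeniris basis via PCA over flattened iris images,
    one image per row of `training`."""
    mean = training.mean(axis=0)
    centered = training - mean
    # SVD of the centered data: rows of vt are the principal directions.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:k]

def project(iris, mean, basis):
    """Reduce an iris to its k eigen-coefficients (a 1-D reduction)."""
    return basis @ (iris - mean)

def is_match(a, b, mean, basis, threshold=1.0):
    # Two irises are deemed similar when their eigen-coefficient
    # vectors are close in Euclidean distance.
    d = np.linalg.norm(project(a, mean, basis) - project(b, mean, basis))
    return d < threshold
```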
Abstract: Short-Term Load Forecasting (STLF) plays an important role in the economic and secure operation of power systems. In this paper, a Continuous Genetic Algorithm (CGA) is employed to evolve the optimal structure and connection weights of large neural networks for the one-day-ahead electric load forecasting problem. This study describes the process of developing three-layer feed-forward large neural networks for load forecasting and then presents a heuristic search algorithm for performing an important task of this process, i.e., optimal network structure design. The proposed method is applied to STLF for the local utility. Data are clustered according to the differences in their characteristics. Special days are extracted from the normal training sets and handled separately. In this way, a solution is provided for all load types, including working days, weekends, and special days. We find good performance for the large neural networks: the proposed methodology consistently gives lower percentage errors. Thus, it can be applied to automatically design an optimal load forecaster based on historical data.
Abstract: The performance of the Optical Code Division Multiplexing/Wavelength Division Multiplexing (OCDM/WDM) technique for an Optical Packet Switch is investigated. The impact on performance of the impairment due to both Multiple Access Interference and beat noise is studied. The Packet Loss Probability due to output packet contentions is evaluated as a function of the main switch and traffic parameters when Gold coherent optical codes are adopted. The Packet Loss Probability of the OCDM/WDM switch can reach 10⁻⁹ when M=16 wavelengths, a Gold code of length L=511, and only 24 wavelength converters are used in the switch.
Abstract: This paper presents a predictive model of sensor readings for a mobile robot. The model predicts sensor readings for a given time horizon based on the current sensor readings and the wheel velocities assumed for this horizon. Similar models for such anticipation have been proposed in the literature. The novelty of the model presented in this paper comes from the fact that its structure takes physical phenomena into account and is not just a black box such as a neural network. From this point of view it may be regarded as a semi-phenomenological model. The model is developed for the Khepera robot, but after certain modifications it may be applied to any robot with distance sensors such as infrared or ultrasonic sensors.
Abstract: Controlling software evolution requires a deep understanding of changes and their impact on the system's different heterogeneous artifacts, and an understanding of descriptive knowledge of the developed software artifacts is a prerequisite for the success of the evolutionary process.
Implementing an evolutionary process means making changes, more or less significant, to many heterogeneous software artifacts such as source code, analysis and design models, unit tests, XML deployment descriptors, user guides, and others. These changes can be a source of degradation of the modified software in functional, qualitative, or behavioral terms. Hence the need for a unified approach to extracting and representing the different heterogeneous artifacts, in order to ensure a unified and detailed description of them that is exploitable by several software tools and allows those responsible for the evolution to reason about the changes concerned.
Abstract: Problems of high complexity have long been the challenge in combinatorial optimization. Due to their non-deterministic polynomial (NP) characteristics, these problems usually entail an unreasonable search budget. Hence, combinatorial optimization has attracted numerous researchers to develop better algorithms. Most recent academic research focuses on enhancing conventional evolutionary algorithms and incorporating local heuristics such as VNS, 2-opt, and 3-opt. Although the performance gains from introducing local strategies are significant, these improvements do not carry over when solving different problems. Therefore, this research proposes a meta-heuristic evolutionary algorithm (BBEA) that can be applied to solve several types of problems. The results validate that BBEA has the ability to solve these problems even without the design of local strategies.
Abstract: A challenging control problem arises when performance is pushed to the limit. The state-derivative feedback control strategy directly uses acceleration information for feedback and state estimation. The derivative part is concerned with the rate of change of the error with time. If the measured variable approaches the set point rapidly, then the actuator is backed off early to allow it to coast to the required level. Derivative action makes a control system behave much more intelligently. A sensor measures the variable to be controlled, and the measured information is fed back to the controller to influence the controlled variable. A high-gain problem can also be formulated for the proportional-plus-derivative feedback transformation. Using the MATLAB Simulink dynamic simulation tool, this paper examines a system with proportional-plus-derivative feedback and presents an automatic implementation for finding an acceptable controlled system. Using feedback transformations, the system is transformed into another system.
Abstract: Chromite is one of the principal ores of chromium, in which the metal exists as a complex oxide (FeO·Cr2O3). The prepared chromite can be widely used as a refractory in high-temperature applications. This study describes the use of local chromite ore as a refractory material. To study the feasibility of the local chromite, its chemical analysis and refractoriness are first measured. To produce a chromite refractory brick, it is pressed under a 400-ton press, dried, and fired at 1580°C for fifty-two hours. Then, the standard properties that the chromite brick should possess, such as cold crushing strength, apparent porosity, apparent specific gravity, bulk density, and water absorption, were measured. According to the results obtained, the brick made from the local chromite ore was suitable for use as a refractory brick.
Abstract: Most known methods for measuring the structural similarity of document structures are based on, e.g., tag measures, path metrics, and tree measures in terms of their DOM-Trees. Other methods measure the similarity in the framework of the well-known vector space model. In contrast to these, we present a new approach to measuring the structural similarity of web-based documents represented by so-called generalized trees, which are more general than DOM-Trees, which represent only directed rooted trees. We will design a new similarity measure for graphs representing web-based hypertext structures. Our similarity measure is mainly based on a novel representation of a graph as strings of linear integers, whose components represent structural properties of the graph. The similarity of two graphs is then defined as the optimal alignment of the underlying property strings. In this paper we apply the well-known technique of sequence alignment to solve a novel and challenging problem: measuring the structural similarity of generalized trees. More precisely, we first transform our graphs, considered as high-dimensional objects, into linear structures. Then we derive similarity values from the alignments of the property strings in order to measure the structural similarity of generalized trees. Hence, we transform a graph similarity problem into a string similarity problem. We demonstrate that our similarity measure captures important structural information by applying it to two different test sets consisting of graphs representing web-based documents.
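The core alignment step can be sketched with a standard Needleman-Wunsch global alignment over integer property strings; the scoring parameters and the normalization below are illustrative, not the paper's:

```python
def align_score(s, t, match=1, mismatch=-1, gap=-1):
    """Global (Needleman-Wunsch) alignment score of two integer
    'property strings' derived from graphs."""
    n, m = len(s), len(t)
    # dp[i][j] = best score aligning s[:i] with t[:j]
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        dp[i][0] = i * gap
    for j in range(1, m + 1):
        dp[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = match if s[i - 1] == t[j - 1] else mismatch
            dp[i][j] = max(dp[i - 1][j - 1] + sub,   # substitute/match
                           dp[i - 1][j] + gap,       # gap in t
                           dp[i][j - 1] + gap)       # gap in s
    return dp[n][m]

def similarity(s, t):
    # Normalize against a perfect self-alignment of the longer string.
    return align_score(s, t) / max(len(s), len(t))
```

The structural similarity of two generalized trees is then the (normalized) alignment score of their property strings, turning the graph comparison into a string comparison as the abstract describes.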
Abstract: The present work involves measurements to examine
the effects of initial conditions on aerodynamic and acoustic
characteristics of a jet at M = 0.8 by changing the orientation of a sharp-edged orifice plate. A thick plate with a chamfered orifice presented divergent and convergent openings when it was flipped over. The centerline velocity was found to decay more rapidly for the divergent
orifice and that was consistent with the enhanced mass entrainment
suggesting quicker spread of the jet compared with that from the convergent orifice. The mixing layer region elucidated this effect of
initial conditions at an early stage – the growth was found to be comparatively more pronounced for the divergent orifice resulting in
reduced potential core size. The acoustic measurements, carried out in the near field noise region outside the jet within potential core
length, showed the jet from the divergent orifice to be less noisy. The frequency spectra of the noise signal exhibited that in the initial
region of comparatively thin mixing layer for the convergent orifice,
the peak registered a higher SPL and a higher frequency as well. The noise spectra and the mixing layer development suggested a direct correlation between the coherent structures developing in the initial
region of the jet and the noise captured in the surrounding near field.
Abstract: In this paper the influence of heterogeneous traffic on
the temporal variation of ambient PM10, PM2.5 and PM1
concentrations at a busy arterial route (Sardar Patel Road) in
Chennai city has been analyzed. The hourly PM concentration, traffic
counts and average speed of the vehicles have been monitored at the
study site for one week (19th-25th January 2009). Results indicated that the concentrations of coarse (PM10) and fine PM (PM2.5 and PM1) at SP Road show a similar trend during peak and non-peak hours, irrespective of the day. The PM concentrations showed two daily peaks corresponding to the morning (8 to 10 am) and evening (7 to 9 pm) peak-hour traffic flow. The PM10 concentration is dominated by fine particles (53% of PM2.5 and 45% of PM1). The high PM2.5/PM10 ratio indicates that the majority of PM10 particles originate from the re-suspension of road dust. The analysis of traffic flow at the study site showed that 2W, 3W, and 4W vehicles follow a diurnal trend similar to the PM concentrations. This confirms that the 2W, 3W, and 4W vehicles are the main emission sources contributing to the ambient PM concentration at SP Road. The speed measurements at SP Road showed that the average speeds of 2W, 3W, 4W, LCV, and HCV vehicles are 38, 40, 38, 40, and 38 km/hr for weekdays and 43, 41, 42, 40, and 41 km/hr for weekends, respectively.
Abstract: In recent years, fuel cell vehicles have been rapidly appearing all over the globe. In less than 10 years, fuel cell vehicles have gone from mere research novelties to operating prototypes and demonstration models. At the same time, government and industry in developed countries have teamed up to invest billions of dollars in partnerships intended to commercialize fuel cell vehicles within the early years of the 21st century.
The purpose of this study is to evaluate the model and performance of a fuel cell hybrid electric vehicle in different drive cycles. The fuel cell system model developed in this work is a semi-experimental model that allows users to apply theoretical and experimental relationships in a fuel cell system. The model can be used as part of a complete fuel cell vehicle model in the advanced vehicle simulator (ADVISOR). This work reveals that fuel consumption and energy efficiency vary across drive cycles. Higher acceleration and speed in a drive cycle lead to increased fuel consumption. In addition, the energy losses in a drive cycle relate to the fuel cell system's power request: the parasitic power in different parts of the fuel cell system increases when the power request increases. Finally, most of the energy losses in a drive cycle occur in the fuel cell system, because the fuel cell stack produces most of the energy.
Abstract: The conventional assessment of human semen is highly subjective, with considerable intra- and inter-laboratory variability. Computer-Assisted Sperm Analysis (CASA)
systems provide a rapid and automated assessment of the sperm
characteristics, together with improved standardization and quality
control. However, the outcome of CASA systems is sensitive to the
method of experimentation. While conventional CASA systems use
digital microscopes with phase-contrast accessories, producing
higher contrast images, we have used raw semen samples (no
staining materials) and a regular light microscope, with a digital
camera directly attached to its eyepiece, to ensure cost benefits and simple assembly of the system. However, since the accurate detection
of sperms in the semen image is the first step in the examination and
analysis of the semen, any error in this step can affect the outcome of
the analysis. This article introduces and explains an algorithm for
finding sperms in low contrast images: First, an image enhancement
algorithm is applied to remove extra particles from the image. Then,
the foreground particles (including sperms and round cells) are segmented from the background. Finally, based on certain features
and criteria, sperms are separated from other cells.
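The three stages above can be sketched on a grayscale numpy array; the background percentile, the threshold, and the area bounds are illustrative placeholders, not the paper's calibrated values, and only area is used here as the separating criterion:

```python
import numpy as np

def find_cells(image, bg_percentile=80, min_area=5, max_area=50):
    """Enhance, segment, then keep components of sperm-like area."""
    # 1. Enhancement: subtract a coarse background estimate.
    background = np.percentile(image, bg_percentile)
    enhanced = np.clip(image.astype(float) - background, 0, None)
    # 2. Segmentation: foreground = pixels above the background level.
    mask = enhanced > 0
    # 3. Connected-component labeling (4-connectivity, flood fill),
    #    filtering components by area to separate sperms from debris.
    labels = np.zeros(mask.shape, dtype=int)
    kept, current = [], 0
    for y in range(mask.shape[0]):
        for x in range(mask.shape[1]):
            if mask[y, x] and labels[y, x] == 0:
                current += 1
                labels[y, x] = current
                stack, area = [(y, x)], 0
                while stack:
                    cy, cx = stack.pop()
                    area += 1
                    for ny, nx in ((cy-1, cx), (cy+1, cx),
                                   (cy, cx-1), (cy, cx+1)):
                        if (0 <= ny < mask.shape[0]
                                and 0 <= nx < mask.shape[1]
                                and mask[ny, nx] and labels[ny, nx] == 0):
                            labels[ny, nx] = current
                            stack.append((ny, nx))
                if min_area <= area <= max_area:
                    kept.append(current)
    return kept  # labels of sperm-sized components
```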
Abstract: Creative design requires new approaches to assessment
in vocational and technological education. To date, there has been little
discussion on instruments used to evaluate dies produced by students
in vocational and technological education. Developing a generic
instrument has been very difficult due to the diversity of creative
domains, the specificity of content, and the subjectivity involved in
judgment. This paper presents an instrument for measuring the
creativity in the design of products by expanding the Consensual
Assessment Technique (CAT). The content-based scale was evaluated
for content validity by 5 experts. The scale comprises 5 criteria:
originality; practicability; precision; aesthetics; and exchangeability.
Nine experts were invited to evaluate the dies produced by 38 college
students who enrolled in a Product Design and Development course.
To further explore the degree of rater agreement, inter-rater reliability
was calculated for each dimension using Kendall's coefficient of
concordance test. The inter-judge reliability scores achieved
significance, with coefficients ranging from 0.53 to 0.71.
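Kendall's W for a rater-by-item rank matrix can be computed directly from its definition; this minimal sketch assumes complete rankings with no ties (a tie-correction term would otherwise be needed):

```python
def kendalls_w(ratings):
    """Kendall's coefficient of concordance for ratings[rater][item],
    where each rater's row is a ranking of the items."""
    m = len(ratings)          # number of raters
    n = len(ratings[0])       # number of rated items
    # Sum of ranks received by each item across raters.
    totals = [sum(r[i] for r in ratings) for i in range(n)]
    mean_total = sum(totals) / n
    # S = sum of squared deviations of the rank sums.
    s = sum((t - mean_total) ** 2 for t in totals)
    # W = 12 S / (m^2 (n^3 - n)); 1 = perfect agreement, 0 = none.
    return 12 * s / (m ** 2 * (n ** 3 - n))
```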
Abstract: Debates on residential satisfaction topic have been
vigorously discussed in family house setting. Nonetheless, less or
lack of attention was given to survey on student residential
satisfaction in the campus house setting. This study, however, tried to
fill in the gap by focusing more on the relationship between students-
socio-economic backgrounds and student residential satisfaction with
their on-campus student housing facilities. Two-stage cluster
sampling method was employed to classify the respondents. Then,
self-administered questionnaires were distributed face-to-face to the
students. In general, it was confirmed that the students- socioeconomic
backgrounds have significantly influence the students-
satisfaction with their on-campus student housing facilities. The main
influential factors were revealed as the economic status, sense of
sharing, and the ethnicity of roommates. Likewise, this study could
also provide some useful feedback for the universities administration
in order to improve their student housing facilities.
Abstract: Microscopic emission and fuel consumption models
have been widely recognized as an effective method to quantify real
traffic emission and energy consumption when they are applied with
microscopic traffic simulation models. This paper presents a
framework for developing the Microscopic Emission (HC, CO, NOx,
and CO2) and Fuel consumption (MEF) models for light-duty
vehicles. The variable of composite acceleration is introduced into
the MEF model with the purpose of capturing the effects of historical
accelerations interacting with current speed on emission and fuel
consumption. The MEF model is calibrated by multivariate
least-squares method for two types of light-duty vehicle using
on-board data collected in Beijing, China by a Portable Emission
Measurement System (PEMS). The instantaneous validation results show that the MEF model performs better, with a lower Mean Absolute Percentage Error (MAPE), than the other two models. Moreover, the aggregate validation results indicate that the MEF model produces reasonable estimations compared to actual measurements, with prediction errors within 12%, 10%, 19%, and 9% for HC, CO, and NOx emissions and fuel consumption, respectively.
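For reference, the MAPE metric used in the instantaneous validation is a one-liner (a generic sketch, assuming non-zero measured values):

```python
def mape(actual, predicted):
    """Mean Absolute Percentage Error over paired measurements."""
    return 100.0 * sum(abs((a - p) / a)
                       for a, p in zip(actual, predicted)) / len(actual)
```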
Abstract: An adaptive Chinese hand-talking system is presented
in this paper. By analyzing three data-collection strategies for new
users, the adaptation framework including supervised and unsupervised
adaptation methods is proposed. For supervised adaptation,
affinity propagation (AP) is used to extract exemplar subsets, and enhanced
maximum a posteriori / vector field smoothing (eMAP/VFS)
is proposed to pool the adaptation data among different models. For
unsupervised adaptation, polynomial segment models (PSMs) are
used to help hidden Markov models (HMMs) to accurately label
the unlabeled data; then the "labeled" data together with signer-independent models are input to the MAP algorithm to generate signer-adapted models. Experimental results show that the proposed framework can perform both supervised adaptation with a small amount of labeled data and unsupervised adaptation with a large amount of unlabeled data to tailor the original models, and both achieve improvements in recognition rate.
Abstract: We propose a phenomenological model for the
process of polymer desorption. In so doing, we omit the usual
theoretical approach of incorporating a fictitious viscoelastic
stress term into the flux equation. As a result, we obtain a
model that captures the essence of the phenomenon of trapping
skinning, while preserving the integrity of the experimentally
verified Fickian law for diffusion. An appropriate asymptotic
analysis is carried out, and a parameter is introduced to represent
the speed of the desorption front. Numerical simulations are
performed to illustrate the desorption dynamics of the model.
Recommendations are made for future modifications of the
model, and provisions are made for the inclusion of experimentally
determined frontal speeds.
Abstract: This paper presents a new quality-controlled, wavelet-based compression method for electrocardiogram (ECG) signals. Initially, an ECG signal is decomposed using the wavelet transform. Then, the resulting coefficients are iteratively thresholded to guarantee that a predefined goal percent root-mean-square difference (GPRD) is matched within tolerable boundaries. The quantization strategy for the extracted non-zero wavelet coefficients (NZWC), combining RLE, Huffman, and arithmetic encoding of the NZWC with a resulting lookup table, allows the achievement of high compression ratios with good-quality reconstructed signals.
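A hedged sketch of the iterative-thresholding idea: a one-level Haar transform stands in for the paper's unspecified wavelet, PRD is taken as 100·sqrt(Σ(x−x̂)²/Σx²), and the threshold sweep is illustrative; quantization and entropy coding of the surviving coefficients are omitted:

```python
import numpy as np

def haar(x):
    """One-level orthonormal Haar transform (even-length signal)."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail coefficients
    return np.concatenate([a, d])

def ihaar(c):
    """Inverse of the one-level Haar transform."""
    n = len(c) // 2
    a, d = c[:n], c[n:]
    x = np.empty(2 * n)
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def prd(x, xr):
    """Percent root-mean-square difference between x and its
    reconstruction xr."""
    return 100 * np.sqrt(np.sum((x - xr) ** 2) / np.sum(x ** 2))

def threshold_to_prd(x, goal_prd, steps=50):
    """Raise the coefficient threshold until the reconstruction's PRD
    approaches, without exceeding, the goal PRD."""
    c = haar(x)
    best = c.copy()
    for t in np.linspace(0, np.max(np.abs(c)), steps):
        ct = np.where(np.abs(c) >= t, c, 0.0)   # zero small coefficients
        if prd(x, ihaar(ct)) > goal_prd:
            break                               # last threshold went too far
        best = ct
    return best  # the NZWC to quantize and entropy-code
```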