Abstract: Since communication between tag and reader in an RFID
system takes place over radio, anyone can access a tag and obtain its
information. Moreover, a tag always replies with the same ID, making
it hard to distinguish a real tag from a fake one. Today's RFID
systems therefore suffer from several security problems. First, an
unauthorized reader can easily read the ID information of any tag.
Second, an adversary can easily cheat a legitimate reader using
collected tag ID information, impersonating any legitimate tag. These
security problems are typically solved by encrypting the messages
transmitted between tag and reader and by authenticating the tag.
In this paper, to solve these security problems in RFID systems, we
propose a tag authentication scheme based on the self-shrinking
generator (SSG). The SSG algorithm used in our scheme was proposed by
W. Meier and O. Staffelbach at EUROCRYPT '94. It consists of only one
LFSR and selection logic for generating a random stream. It is
therefore well suited to hardware implementation on devices with
extremely limited resources; moreover, since the SSG output at each
step acts as a random stream, it allows us to design a lightweight
authentication scheme that is secure against several network attacks.
We therefore propose a novel tag authentication scheme that uses the
SSG to encrypt the tag ID transmitted from tag to reader and to
authenticate the tag.
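As a rough illustration of the primitive (not the parameters or protocol of the proposed scheme), the self-shrinking generator can be sketched in a few lines: an LFSR produces a bit stream, and the selection logic consumes it in pairs (a, b), outputting b only when a = 1.

```python
def lfsr_bits(state, taps, n):
    """Fibonacci LFSR: emit n bits; feedback is the XOR of the tapped
    state positions. The state and taps below are arbitrary toy values."""
    s = list(state)
    out = []
    for _ in range(n):
        out.append(s[-1])
        fb = 0
        for t in taps:
            fb ^= s[t]
        s = [fb] + s[:-1]
    return out

def self_shrink(bits):
    """SSG selection logic: read bit pairs (a, b) and keep b only
    when a == 1; pairs with a == 0 are discarded."""
    return [bits[i + 1] for i in range(0, len(bits) - 1, 2) if bits[i] == 1]

# Toy keystream from an 8-bit LFSR (illustrative seed and taps only).
keystream = self_shrink(lfsr_bits([1, 0, 0, 1, 0, 1, 1, 0], [0, 4, 7], 64))
```

Because roughly half of the LFSR output is discarded, the keystream is shorter than the raw LFSR stream, which is part of what makes the generator hard to reconstruct from its output.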
Abstract: In this paper, an efficient local appearance feature
extraction method based on the multi-resolution Curvelet transform is
proposed in order to further enhance the performance of the well
known Linear Discriminant Analysis (LDA) method when applied
to face recognition. Each face is described by a subset of band
filtered images containing block-based Curvelet coefficients. These
coefficients characterize the face texture and a set of simple statistical
measures allows us to form compact and meaningful feature vectors.
The proposed method is compared with some related feature extraction
methods such as Principal Component Analysis (PCA), Linear
Discriminant Analysis (LDA), and Independent Component Analysis
(ICA). Two different multi-resolution transforms, Wavelet
(DWT) and Contourlet, were also compared against the Block Based
Curvelet-LDA algorithm. Experimental results on ORL, YALE and
FERET face databases convince us that the proposed method provides
a better representation of the class information and obtains much
higher recognition accuracies.
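The block-based statistics idea can be illustrated generically (a sketch over an arbitrary coefficient map; the mean/std/energy triple is an assumed example of "simple statistical measures", not necessarily the paper's exact set):

```python
import numpy as np

def block_stats(coeffs, block=8):
    """Split a 2-D coefficient map into non-overlapping blocks and
    keep (mean, std, energy) of each block, concatenated into one
    compact feature vector."""
    h, w = coeffs.shape
    feats = []
    for r in range(0, h - block + 1, block):
        for c in range(0, w - block + 1, block):
            b = coeffs[r:r + block, c:c + block]
            feats.extend([b.mean(), b.std(), np.sum(b ** 2)])
    return np.array(feats)
```

Feature vectors of this form can then be passed to LDA (or PCA/ICA) for discriminant projection and classification.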
Abstract: Pipe inspection is a difficult detection task. Most
applications rely mainly on manual recognition of defective areas,
with detection carried out by an engineer. Automating this task
therefore becomes necessary in order to avoid the cost incurred by
such a manual process. An
automated monitoring method to obtain a complete picture of the
sewer condition is proposed in this work. The focus of the research is
the automated identification and classification of discontinuities in
the internal surface of the pipe. The methodology consists of several
processing stages, including segmentation of the image into potential
defect regions and extraction of geometrical characteristic features. Automatic
recognition and classification of pipe defects are carried out by means
of an artificial neural network (ANN) technique based on Radial
Basis Function (RBF) networks. Experiments in a realistic environment
have been conducted and results are presented.
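A minimal sketch of an RBF classifier of the kind described, assuming Gaussian basis functions, fixed centers, and least-squares output weights (the paper's exact training procedure may differ):

```python
import numpy as np

def rbf_design(X, centers, sigma):
    """Gaussian RBF activations of samples X w.r.t. the given centers."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def train_rbf(X, y_onehot, centers, sigma):
    """Fit the output layer by linear least squares, a common shortcut
    for RBF networks once the centers are fixed."""
    H = rbf_design(X, centers, sigma)
    W, *_ = np.linalg.lstsq(H, y_onehot, rcond=None)
    return W

def classify(X, centers, sigma, W):
    """Predict the class index with the largest output activation."""
    return np.argmax(rbf_design(X, centers, sigma) @ W, axis=1)
```

In the defect-classification setting, the rows of `X` would be the geometrical feature vectors extracted from the segmented defect regions.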
Abstract: The design of a pattern classifier includes an attempt
to select, among a set of possible features, a minimum subset of
weakly correlated features that better discriminate the pattern classes.
This is usually a difficult task in practice, normally requiring the
application of heuristic knowledge about the specific problem
domain. The selection and quality of the features representing each
pattern have a considerable bearing on the success of subsequent
pattern classification. Feature extraction is the process of deriving
new features from the original features in order to reduce the cost of
feature measurement, increase classifier efficiency, and allow higher
classification accuracy. Many current feature extraction techniques
involve linear transformations of the original pattern vectors to new
vectors of lower dimensionality. While this is useful for data
visualization and increasing classification efficiency, it does not
necessarily reduce the number of features that must be measured
since each new feature may be a linear combination of all of the
features in the original pattern vector. In this paper a new approach is
presented to feature extraction in which feature selection, feature
extraction, and classifier training are performed simultaneously using
a genetic algorithm. In this approach each feature value is first
normalized by a linear equation, then scaled by the associated weight
prior to training, testing, and classification. A k-nearest-neighbors (kNN) classifier is used to
evaluate each set of feature weights. The genetic algorithm optimizes
a vector of feature weights, which are used to scale the individual
features in the original pattern vectors in either a linear or a nonlinear
fashion. With this approach, the number of features used in
classification can be significantly reduced.
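A toy sketch of the simultaneous weight-optimization idea (illustrative GA operators and parameters, not the authors' exact configuration), where the leave-one-out accuracy of a kNN classifier on weighted features serves as the fitness:

```python
import random

def knn_accuracy(weights, X, y, k=3):
    """Leave-one-out kNN accuracy on feature-weighted squared
    Euclidean distances; this is the fitness of a weight vector."""
    correct = 0
    for i in range(len(X)):
        dists = []
        for j in range(len(X)):
            if i == j:
                continue
            d = sum(w * (a - b) ** 2 for w, a, b in zip(weights, X[i], X[j]))
            dists.append((d, y[j]))
        dists.sort()
        votes = [lbl for _, lbl in dists[:k]]
        correct += max(set(votes), key=votes.count) == y[i]
    return correct / len(X)

def ga_feature_weights(X, y, pop=20, gens=30, seed=0):
    """Toy GA over weight vectors: keep the fitter half as elites,
    then refill with uniform crossover plus Gaussian mutation."""
    rng = random.Random(seed)
    n = len(X[0])
    popn = [[rng.random() for _ in range(n)] for _ in range(pop)]
    for _ in range(gens):
        elite = sorted(popn, key=lambda w: -knn_accuracy(w, X, y))[:pop // 2]
        popn = list(elite)
        while len(popn) < pop:
            a, b = rng.sample(elite, 2)
            child = [ai if rng.random() < 0.5 else bi for ai, bi in zip(a, b)]
            popn.append([max(0.0, c + rng.gauss(0.0, 0.1)) for c in child])
    return max(popn, key=lambda w: knn_accuracy(w, X, y))
```

A weight driven to zero effectively deselects its feature, which is how selection, extraction, and classifier tuning happen in one search.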
Abstract: A new approach, based on the consideration that electroencephalogram (EEG) signals are chaotic signals, is presented for the automated diagnosis of electroencephalographic changes. This consideration was tested successfully using nonlinear dynamics tools, such as the computation of Lyapunov exponents. This paper presents the use of statistics over the set of Lyapunov exponents in order to reduce the dimensionality of the extracted feature vectors. Since classification is more accurate when the pattern is simplified through representation by important features, feature extraction and selection play an important role in classifying systems such as neural networks. Multilayer perceptron neural network (MLPNN) architectures were formulated and used as the basis for detection of electroencephalographic changes. Three types of EEG signals (EEG signals recorded from healthy volunteers with eyes open, epilepsy patients in the epileptogenic zone during a seizure-free interval, and epilepsy patients during epileptic seizures) were classified. The selected Lyapunov exponents of the EEG signals were used as inputs to the MLPNN trained with the Levenberg-Marquardt algorithm. The classification results confirmed that the proposed MLPNN has potential in detecting electroencephalographic changes.
Abstract: This paper describes the experimental efficiency of a
compact organic Rankine cycle (ORC) system with a compact
rotary-vane-type expander. The compact ORC system can be used for
power generation from low-temperature heat sources such as waste
heat from various small-scale heat engines, fuel cells, electric devices,
and solar thermal energy. The purpose of this study is to develop an
ORC system with a low power output of less than 1 kW with a hot
temperature source ranging from 60°C to 100°C and a cold
temperature source ranging from 10°C to 30°C. The power output of
the system is rather low due to the limited thermal efficiency. Therefore, the
system should have an economically optimal efficiency. In order to
realize such a system, an efficient and low-cost expander is
indispensable. An experimental ORC system was developed using the
rotary-vane-type expander, which is one possible candidate for the
expander. The experimental results revealed the expander
performance for various rotation speeds, expander efficiencies, and
thermal efficiencies. Approximately 30 W of expander power output,
with 48% expander efficiency and 4% thermal efficiency, was achieved
at a temperature difference of 80°C between the hot and cold
sources.
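As a consistency check on the reported figures, assuming the usual definitions of thermal efficiency (net power over heat input) and expander efficiency (actual over ideal expander work); the paper's exact definitions may differ:

```python
# Variable names are illustrative, not from the paper:
#   thermal efficiency   eta_th  = W_out / Q_in
#   expander efficiency  eta_exp = W_out / W_ideal
W_out = 30.0     # W, reported expander power output
eta_th = 0.04    # reported thermal efficiency
eta_exp = 0.48   # reported expander efficiency

Q_in = W_out / eta_th      # implied heat input from the hot source
W_ideal = W_out / eta_exp  # implied ideal (isentropic) expander work

print(Q_in)     # → 750.0 (implied heat input, W)
print(W_ideal)  # approx. 62.5 (ideal expander work, W)
```

So under these definitions the reported output corresponds to roughly 750 W of heat drawn from the hot source, consistent with a sub-1-kW low-temperature system.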
Abstract: The intention of this study is to design a probability-optimized sewing-sack workstation based on ergonomics, for productivity improvement and for decreasing musculoskeletal disorders. The physical dimensions of the two workers were used to design the new workstation. The physical dimensions are (1) sitting height, (2) mid-shoulder height sitting, (3) shoulder breadth, (4) knee height, (5) popliteal height, (6) hip breadth and (7) buttock-knee length. The 5th percentile of buttock-knee length sitting (51 cm), the 50th percentile of mid-shoulder height sitting (62 cm), and the 95th percentiles of popliteal height (43 cm) and hip breadth (45 cm) were applied to design the workstation for the sewing-sack operator, and the others were used to adjust the components of this workstation. The risk assessment scores by RULA before and after using the probability-optimized workstation were 7 and 7, while the REBA scores were 11 and 5, respectively. A body discomfort index was used to assess the operators' muscle fatigue before the workstation adjustment; fatigue was found in the neck muscles, the arm muscles, the back muscles, and the lower back muscles. Therefore, extension and flexion exercises were applied to relieve musculoskeletal stresses. The workers exercised for 15 minutes before the beginning and at the end of work for 5 days. After that, the workers' flexion and extension capability increased in 3 muscle groups (arm, leg, and back muscles).
Abstract: In multi-parameter family of distributions, conditions
for a modified maximum likelihood estimator to be second order
admissible are given. Applying these results to the multi-parameter
logistic regression model, it is shown that the maximum likelihood
estimator is always second order inadmissible. Also, conditions for
the Berkson estimator to be second order admissible are given.
Abstract: The rapid adoption of the Internet has transformed Millennial Teens' lives at lightning speed. Empirical evidence has illustrated that Pathological Internet Use (PIU) among them ensures long-term success for the market players in the children's industry. However, it creates concern among their caretakers, as it generates mental disorders among some of them. The purpose of this paper is to examine the determinants of PIU and identify its outcomes among urban Millennial Teens. It aims to develop a theoretical framework based on a modified Media System Dependency (MSD) Theory that integrates the important systems and components that determine, and result from, PIU.
Abstract: Efficient classification methods are necessary for an automatic fingerprint recognition system. This paper introduces a new structural approach to fingerprint classification that uses the directional image of fingerprints to increase the number of subclasses. In this method, the directional image of a fingerprint is segmented into regions consisting of pixels with the same direction. Afterwards, the relational graph of the segmented image is constructed, and from it a super-graph containing the prominent information of this graph is formed. Ultimately, we apply a matching technique with a cost function to compare the obtained graph with the model graphs in order to classify fingerprints. The increased number of subclasses with acceptable classification accuracy, together with faster processing in fingerprint recognition, makes this system superior.
Abstract: Rational Emotive Behaviour Therapy is the first
cognitive behavior therapy which was introduced by Albert Ellis.
This is a systematic and structured psychotherapy which is effective
in treating various psychological problems. A 25-year-old male
patient experienced intense fear and situational panic attacks on
returning to his faculty and facing his classmates after a long
absence (2 years). This social anxiety disorder was a major factor
that impeded the progress of his study. He was treated with
behavioural techniques such as the relaxation breathing technique and
cognitive techniques such as imagery, cognitive restructuring, the
rationalization technique and systematic desensitization. The patient
reported positive improvement in the anxiety disorder, was able to
progress well in his studies, and leads a better quality of life as a
student.
Abstract: The SOM has several beneficial features which make
it a useful method for data mining. One of the most important
features is the ability to preserve the topology in the projection.
There are several measures that can be used to quantify the goodness
of the map in order to obtain the optimal projection, including the
average quantization error and various topological errors. Much
research has studied how topology preservation should be
measured. One option is to use the topographic error, which
considers the ratio of data vectors for which the first and second best
BMUs are not adjacent. In this work we present a study of the
behaviour of the topographic error in different kinds of maps. We
have found that this error penalizes rectangular maps, and we have
studied the reasons why this happens. Finally, we suggest a new
topological error that remedies the deficiency of the topographic error.
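The topographic error described above can be computed directly from the codebook. A minimal sketch follows; the 8-neighbourhood adjacency rule for a rectangular grid is one common convention, and the helper names are illustrative:

```python
import numpy as np

def topographic_error(data, weights, grid):
    """Fraction of samples whose best and second-best matching units
    (BMUs) are not neighbours on the map grid.
    weights: (n_units, dim) codebook; grid: (n_units, 2) unit coords."""
    errors = 0
    for x in data:
        d = np.linalg.norm(weights - x, axis=1)
        bmu1, bmu2 = np.argsort(d)[:2]
        # Adjacent means the grid positions differ by at most one step
        # in each direction (8-neighbourhood on a rectangular grid).
        if np.max(np.abs(grid[bmu1] - grid[bmu2])) > 1:
            errors += 1
    return errors / len(data)
```

An error of 0 means every sample's two best units are map neighbours, i.e. the projection preserves topology well by this measure.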
Abstract: There are several means to measure the oxidation of edible oils, such as the acid value, the peroxide value, and the anisidine value. However, these means require large quantities of reagents and are time-consuming tasks. Therefore, a more convenient and time-saving way to measure the oxidation of edible oils is required. In this report, an edible oil condition sensor was fabricated by using single-walled nanotubes (SWNT). In order to test the sensor, oxidized edible oils, each one at a different acid value, were prepared. The SWNT sensors were immersed into these oxidized oils and the resistance changes in the sensors were measured. It was found that the conductivity of the sensors decreased as the oxidation level of oil increased. This result suggests that a change of the oil components induced by the oxidation process in edible oils is related to the conductivity change in the SWNT sensor.
Abstract: Implicit block methods based on the backward
differentiation formulae (BDF) for the solution of stiff initial value
problems (IVPs) using a variable step size are derived. We construct
variable step size block methods that store all the coefficients of
the method, with a simplified strategy for controlling the step size
with the intention of optimizing the performance in terms of
precision and computation time. The strategy involves keeping the
step size constant, halving it, or increasing it to 1.9 times the
previous step size. The decision to change the step size is
determined by the local truncation error (LTE). Numerical results are
provided to demonstrate the effectiveness of the method.
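The constant/halving/1.9x strategy can be sketched as a simple controller (the threshold for enlarging the step is a hypothetical choice here; the paper's acceptance criteria may differ):

```python
def next_step(h, lte, tol):
    """Sketch of the step-size strategy described above: keep, halve,
    or grow the step by a factor of 1.9, driven by the local
    truncation error estimate (names are illustrative)."""
    if lte > tol:
        return h / 2.0       # error too large: halve the step
    elif lte < 0.1 * tol:    # assumed safety threshold for growth
        return 1.9 * h       # error comfortably small: enlarge step
    return h                 # otherwise keep the current step
```

Capping growth at 1.9 (rather than doubling) keeps the new step's estimated error safely inside the tolerance after an increase.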
Abstract: Delamination between layers in composite materials is a major structural failure. Delamination resistance is quantified by the critical strain energy release rate (SERR). The present investigation deals with the strain energy release rate of two woven fabric composites. The materials used are made of two types of glass fiber (360 gsm and 600 gsm) of plain weave, with epoxy as the matrix. The fracture behavior is studied using the mode I double cantilever beam test and the mode II end notched flexure test, in order to determine the energy required for the initiation and growth of an artificial crack. The delamination energies of the two materials are compared in order to study the effect of weave and reinforcement on mechanical properties. The fracture mechanism is also analyzed by means of scanning electron microscopy (SEM). It is observed that the plain weave fabric composite with smaller strand width has higher interlaminar fracture properties than the plain weave fabric composite with larger strand width.
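For reference, mode I SERR data from a DCB test are commonly reduced with the simple beam theory expression G_I = 3*P*delta / (2*B*a); this is a standard formula, and the authors may use a corrected variant:

```python
def g1_dcb(P, delta, a, B):
    """Mode I strain energy release rate from a double cantilever beam
    test via simple beam theory (a standard data-reduction formula).
    P: load [N], delta: opening displacement [m],
    a: crack length [m], B: specimen width [m]. Returns G_I in J/m^2."""
    return 3.0 * P * delta / (2.0 * B * a)
```

For example, a 50 N load at 4 mm opening on a 20 mm wide specimen with a 50 mm crack gives G_I = 300 J/m^2 (illustrative numbers, not the paper's data).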
Abstract: In this paper, a way of hiding a text message in a gray-scale image (steganography) is presented. The method first finds the binary value of each character of the text message; in the next stage, it finds the dark (black) areas of the gray image by converting the original image to a binary image and labeling each object of the image using 8-connectivity. These images are then converted to RGB images in order to locate the dark areas, because in this way each sequence of gray levels is mapped to an RGB color and the dark level of the gray image can be identified. If the gray image is very light, the histogram must be adjusted manually to isolate the dark areas. In the final stage, each group of 8 dark-area pixels is treated as a byte, and the binary value of each character is placed in the low bit of each byte formed from the dark-area pixels, increasing the security of the basic steganography method (LSB).
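The bit-embedding step itself (independent of the dark-region selection) is standard LSB steganography, sketched here on a flat array of the selected pixels (helper names are illustrative):

```python
import numpy as np

def embed_lsb(pixels, message):
    """Write each bit of an ASCII message (MSB first) into the least
    significant bit of successive pixel values."""
    bits = []
    for ch in message.encode("ascii"):
        bits.extend((ch >> i) & 1 for i in range(7, -1, -1))
    out = pixels.copy()
    for i, b in enumerate(bits):
        out[i] = (out[i] & 0xFE) | b
    return out

def extract_lsb(pixels, n_chars):
    """Recover n_chars ASCII characters from the pixel LSBs."""
    chars = []
    for c in range(n_chars):
        byte = 0
        for i in range(8):
            byte = (byte << 1) | int(pixels[c * 8 + i] & 1)
        chars.append(byte)
    return bytes(chars).decode("ascii")
```

Since only the lowest bit of each pixel changes, every modified pixel differs from the original by at most 1 gray level, which is what makes the embedding visually imperceptible in dark regions.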
Abstract: This research studies the application of an immobilized
TiO2 layer and a Cu-TiO2 layer on a graphite substrate as a negative
electrode, or anode, for a Li-ion battery. The titania layer was
produced through the chemical bath deposition method, while Cu
particles were deposited electrochemically. A material can be used as
an electrode if it has the capability to intercalate Li ions into its crystal
structure. The Li intercalation into TiO2/Graphite and Cu-
TiO2/Graphite were analyzed from the changes of its XRD pattern
after it was used as electrode during discharging process. The XRD
patterns were refined by Le Bail method in order to determine the
crystal structure of the prepared materials. A specific capacity and the
cycle ability measurement were carried out to study the performance
of the prepared materials as negative electrode of the Li-ion battery.
The specific capacity was measured during discharging process from
fully charged until the cut off voltage. A 300 was used as a load.
The result shows that the specific capacity of the Li-ion battery with
TiO2/Graphite as the negative electrode is 230.87 ± 1.70 mAh.g-1,
which is higher than the specific capacity of the Li-ion battery with
pure graphite as the negative electrode, i.e. 140.75 ± 0.46 mAh.g-1.
Meanwhile, deposition of Cu onto the TiO2 layer does not increase the
specific capacity; the value is even lower than that of the battery
with TiO2/Graphite as the electrode. The cycle ability of the prepared
battery is only two cycles, because the Li ribbon used as the cathode
became fragile and easily broken.
Abstract: Multimedia distributed systems deal with heterogeneous
data, such as texts, images, graphics, video and audio. The specification
of temporal relations among different data types and distributed
sources is an open research area. This paper proposes a fully
distributed synchronization model to be used in multimedia systems.
One original aspect of the model is that it avoids the use of a common
reference (e.g. wall clock and shared memory). To achieve this, all
possible multimedia temporal relations are specified according to
their causal dependencies.
Abstract: The IDR(s) method, based on an extended IDR theorem, was proposed by Sonneveld and van Gijzen. The original IDR(s) method has excellent properties compared with conventional iterative methods in terms of efficiency and its small memory requirements. The IDR(s) method, however, has the unexpected property that the relative residual 2-norm stagnates at a level of less than 10^-12. In this paper, an effective strategy for stagnation detection, stagnation avoidance that adaptively uses information on the parameter s, and an improvement of the convergence rate of the IDR(s) method itself are proposed in order to attain high accuracy in the approximate solution of the IDR(s) method. Through numerical experiments, the effectiveness of the adaptively tuned IDR(s) method is verified and demonstrated.
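One simple form a stagnation detector can take (a hypothetical sketch, not the paper's exact criterion) is to flag convergence histories whose relative residual norm stops improving over a sliding window:

```python
def stagnated(res_history, window=5, tol=1e-2):
    """Report stagnation when the relative residual norm has improved
    by less than a factor (1 - tol) over the last `window` iterations.
    The window length and tolerance are illustrative choices."""
    if len(res_history) <= window:
        return False
    recent, past = res_history[-1], res_history[-1 - window]
    return recent > (1.0 - tol) * past
```

A solver can then react to a positive detection, for example by adjusting the parameter s, instead of iterating uselessly at the stagnation level.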
Abstract: Most existing text mining approaches are proposed with the
transaction database model in mind. Thus, the mined dataset is
structured using just one concept, the "transaction", whereas the
whole dataset is modeled using the "set" abstract type. In such
cases, the structure of the whole dataset and the relationships among
the transactions themselves are not modeled and, consequently, not
considered in the mining process.
We believe that taking into account the structural properties of
hierarchically structured information (e.g. textual documents) in the
mining process can lead to better results. For this purpose, a
hierarchical association rule mining approach for textual documents
is proposed in this paper, and the classical set-oriented mining
approach is reconsidered in favour of a Directed Acyclic Graph (DAG)
oriented approach. Natural language processing techniques are used
to obtain the DAG structure. Based on this graph model, a
hierarchical bottom-up algorithm is proposed. The main idea is that
each node is mined together with its parent node.