Abstract: Phishing, the stealing of sensitive information on the
web, has dealt a major blow to Internet security in recent times. Most
of the existing anti-phishing solutions fail to handle the fuzziness
involved in phish detection, thus leading to a large number of false
positives. This fuzziness is attributed to the use of the highly
flexible and, at the same time, highly ambiguous HTML language. We
introduce a new perspective on phish detection that systematically
establishes whether a given page is phished, using the corresponding
original page as the basis of comparison. It analyzes the layout of
the pages under consideration to determine the percentage distortion
between them, indicative of any form of malicious alteration. The
system design represents an intelligent approach, employing dynamic
assessment, which accurately identifies brand-new phishing attacks
and proves effective in reducing the number of false positives.
This framework could potentially be used as a knowledge base for
educating Internet users about phishing.
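A minimal sketch of the layout-comparison idea follows. It compares pages by their HTML tag sequences and reports the fraction not covered by the longest common subsequence as a percentage distortion; the signature and the names are illustrative assumptions, not the paper's implementation.

```python
from html.parser import HTMLParser

class TagSequence(HTMLParser):
    """Collects the sequence of opening tags as a crude layout signature."""
    def __init__(self):
        super().__init__()
        self.tags = []
    def handle_starttag(self, tag, attrs):
        self.tags.append(tag)

def layout_distortion(original_html, suspect_html):
    """Percentage distortion between two pages' tag sequences (0 = identical),
    measured via the longest common subsequence of the two signatures."""
    a, b = TagSequence(), TagSequence()
    a.feed(original_html)
    b.feed(suspect_html)
    m, n = len(a.tags), len(b.tags)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            dp[i + 1][j + 1] = (dp[i][j] + 1 if a.tags[i] == b.tags[j]
                                else max(dp[i][j + 1], dp[i + 1][j]))
    return 100.0 * (1 - dp[m][n] / max(m, n, 1))

print(layout_distortion("<html><body><p>hi</p></body></html>",
                        "<html><body><form><p>hi</p></form></body></html>"))
# -> 25.0 (one injected <form> among four tags)
```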
Abstract: Proper management of residues originating from
industrial activities is considered one of the serious challenges
faced by industrial societies, owing to their potential hazards to the
environment. Common disposal methods for industrial solid wastes
(ISWs) encompass various combinations of individual management
options, i.e., recycling, incineration, composting, and sanitary
landfilling. The procedure used to evaluate and nominate the
best practical methods should be based on environmental, technical,
economic, and social assessments. In this paper an environmental-technical
assessment model is developed using the analytic network
process (ANP) to facilitate decision making for ISWs
generated in Gilan province, Iran. Using the results of surveys
performed on industrial units located in Gilan, the various groups of
solid wastes in the research area were characterized, and four
different ISW management scenarios were studied. The evaluation
was conducted using the above-mentioned model in the
Super Decisions software (version 2.0.8) environment. The results
indicate that the best ISW management scenario for Gilan province
consists of recycling the residues of the metal industries, composting
the putrescible portion of the ISWs, combusting paper, wood, fabric
and polymeric wastes with energy recovery in the incineration
plant, and finally landfilling the rest of the waste stream together
with the rejects from the recycling and compost production plants
and the ashes from the incineration unit.
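For context, the computational core of ANP is raising a column-stochastic weighted supermatrix to powers until it converges to a limit matrix whose columns hold the global priorities. The sketch below uses a toy 3x3 supermatrix of our own invention, not the paper's Gilan model.

```python
import numpy as np

def anp_limit_priorities(W, tol=1e-9, max_iter=10_000):
    """Raise a column-stochastic supermatrix to powers until convergence;
    any column of the limit matrix holds the global priorities."""
    assert np.allclose(W.sum(axis=0), 1.0), "W must be column-stochastic"
    M = W.copy()
    for _ in range(max_iter):
        M_next = M @ W
        if np.abs(M_next - M).max() < tol:
            break
        M = M_next
    return M_next[:, 0]

# Toy supermatrix over three interdependent alternatives (invented numbers).
W = np.array([[0.6, 0.2, 0.3],
              [0.3, 0.5, 0.3],
              [0.1, 0.3, 0.4]])
print(anp_limit_priorities(W))   # -> priority vector summing to 1
```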
Abstract: Octree compression techniques have been used
for several years for compressing large three-dimensional data
sets into homogeneous regions. This compression technique
is ideally suited to datasets which have similar values in
clusters. Oil engineers represent reservoirs as a three-dimensional
grid in which hydrocarbons occur naturally in clusters. This
research looks at the efficiency of storing these grids using
octree compression techniques where grid cells are broken
into active and inactive regions. Initial experiments yielded
high compression ratios, as only the active leaf nodes and their
ancestor (header) nodes are stored as a bitstream in a file on
disk. Savings in computational time and memory were possible
at decompression, as only the active leaf nodes are sent to the
graphics card, eliminating the need to reconstruct the original
matrix. This results in a more compact vertex table, which can
be loaded onto the graphics card more quickly, giving shorter
refresh delay times.
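A minimal sketch of the core idea follows: homogeneous octants of an active/inactive grid collapse into single octree leaves. It assumes a cubic grid whose side is a power of two, and the nested-list representation is illustrative rather than the authors' bitstream format.

```python
import numpy as np

def build_octree(grid):
    """Nested octree of an active/inactive grid: a bool for a homogeneous
    block, or a list of 8 children for a mixed block."""
    if grid.min() == grid.max():        # homogeneous region -> single leaf
        return bool(grid.flat[0])
    h = grid.shape[0] // 2              # split into 8 octants
    return [build_octree(grid[x:x + h, y:y + h, z:z + h])
            for x in (0, h) for y in (0, h) for z in (0, h)]

g = np.zeros((4, 4, 4), dtype=bool)
g[:2, :2, :2] = True                    # one active cluster
print(build_octree(g))                  # -> [True, False, ..., False]
```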
Abstract: Whole genome duplication (WGD) increased the
number of chromosomes of the yeast Saccharomyces cerevisiae from 8 to
16. Although the number of chromosomes in the genome of this
organism has been retained since the WGD, chromosomal rearrangement
events have created an evolutionary distance between the current
genome and its ancestor. Studies applying evolution-based approaches
to eukaryotic genomes have shown that the rearrangement distance is an
approximable problem. In the case of S. cerevisiae, we show that the
rearrangement distance is accessible by using a dedoubled adjacency
graph drawn for 55 large paired chromosomal regions originating
from the WGD. We then provide a program, extracted from a C program
database, to draw a dedoubled genome adjacency graph for S.
cerevisiae. From a bioinformatics perspective, using the duplicated
blocks of the current S. cerevisiae genome, we infer that the genomic
organization of eukaryotes has the potential to provide valuable
detailed information about their ancestral genome.
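As a much-simplified illustration of the adjacency-graph idea (not the paper's dedoubled construction), the sketch below counts conserved gene-extremity adjacencies between two signed gene orders; the toy gene orders are our own.

```python
def extremities(g):
    """A signed gene contributes (tail, head) if forward, (head, tail) if reversed."""
    return [(g, 't'), (g, 'h')] if g > 0 else [(-g, 'h'), (-g, 't')]

def adjacencies(order):
    """Signed gene order -> set of adjacent gene-extremity pairs."""
    ends = [e for g in order for e in extremities(g)]
    return {frozenset(ends[i:i + 2]) for i in range(1, len(ends) - 1, 2)}

ancestor = [1, 2, 3, 4, 5]
current = [1, -3, -2, 4, 5]          # genes 2..3 inverted
broken = adjacencies(ancestor) - adjacencies(current)
print(len(broken))                   # -> 2 breakpoints flank the inversion
```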
Abstract: Mathematical justifications are given for a simulation technique for multivariate non-Gaussian random processes and fields based on Rosenblatt's transformation of Gaussian processes. Several types of convergence are established for the approximating sequence. Moreover, an original numerical method is proposed to solve the functional equation yielding the autocorrelation function of the underlying Gaussian process.
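For context only (our scalar notation; the paper treats the multivariate case), the translation construction maps a standard Gaussian process through its marginal transform, and the functional equation mentioned above relates the two autocorrelation functions lag by lag:

\[
Y(t) = F^{-1}\big(\Phi(X(t))\big), \qquad
\rho_Y(\tau) = \mathrm{E}\big[h(X(0))\, h(X(\tau))\big], \quad h = F^{-1}\circ \Phi,
\]

where \(F\) is the target marginal distribution, \(\Phi\) is the standard normal distribution function, and \(h\) is standardized so that \(\mathrm{E}[h(X)] = 0\) and \(\mathrm{Var}[h(X)] = 1\); for each lag \(\tau\) this is a scalar equation to be solved numerically for the underlying Gaussian autocorrelation \(\rho_X(\tau)\).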
Abstract: This paper presents EART (Extract Association Rules
from Text), a system for discovering association rules from
collections of unstructured documents. The EART system treats text
only, not images or figures. EART discovers association rules amongst
keywords labeling the collection of textual documents. The main
characteristic of EART is that it integrates XML technology (to
transform unstructured documents into structured documents) with an
Information Retrieval scheme (TF-IDF) and a Data Mining technique for
association-rule extraction. EART relies on word features to extract
association rules. It consists of four phases: a structuring phase, an
indexing phase, a text mining phase and a visualization phase. Our
work relies on analyzing the keywords in the extracted association
rules, distinguishing keywords that co-occur in a single sentence of
the original text from keywords that appear in single sentences
without co-occurring. Experiments were conducted on a collection of
scientific documents, selected from MEDLINE, related to the outbreak
of the H5N1 avian influenza virus.
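As a toy illustration of the rule-mining phase only (the XML structuring and TF-IDF weighting are omitted, and the sentences and thresholds are invented), keyword-to-keyword rules can be mined over sentence-level transactions by support and confidence:

```python
from itertools import permutations

# Each transaction is the keyword set of one sentence (toy data).
sentences = [{"h5n1", "avian", "influenza"},
             {"h5n1", "outbreak"},
             {"avian", "influenza", "outbreak"},
             {"h5n1", "avian"}]

def support(itemset):
    return sum(itemset <= s for s in sentences) / len(sentences)

keywords = set().union(*sentences)
for a, b in permutations(keywords, 2):
    supp = support({a, b})
    conf = supp / support({a})
    if supp >= 0.5 and conf >= 0.8:
        print(f"{a} -> {b} (support={supp:.2f}, confidence={conf:.2f})")
# -> influenza -> avian (support=0.50, confidence=1.00)
```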
Abstract: Recently, an enhanced hexagon-based search (EHS)
algorithm was proposed to speed up the original hexagon-based search
(HS) by exploiting the group-distortion information of some evaluated
points. In this paper, a second version of the EHS is proposed with a
new point-oriented inner search technique which can further speed up
the HS in both large- and small-motion environments. Experimental
results show that the enhanced hexagon-based search version 2
(EHS2) is up to 34% faster than the HS with negligible PSNR
degradation.
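For reference, a sketch of the plain HS baseline (not the proposed EHS2 refinement) follows: the six-point hexagon moves while any vertex improves the block distortion, then a small inner search refines the result. Block size and patterns are the conventional choices, assumed here.

```python
import numpy as np

HEX = [(-2, 0), (2, 0), (-1, -2), (1, -2), (-1, 2), (1, 2)]
INNER = [(-1, 0), (1, 0), (0, -1), (0, 1)]

def sad(ref, cur, bx, by, dx, dy, B=16):
    """Sum of absolute differences of a BxB block displaced by (dx, dy)."""
    x, y = bx + dx, by + dy
    if x < 0 or y < 0 or x + B > ref.shape[1] or y + B > ref.shape[0]:
        return float("inf")
    return int(np.abs(ref[y:y+B, x:x+B].astype(int) -
                      cur[by:by+B, bx:bx+B].astype(int)).sum())

def hexagon_search(ref, cur, bx, by, B=16):
    """Plain HS: move the hexagon while it improves, then inner search."""
    mv, best = (0, 0), sad(ref, cur, bx, by, 0, 0, B)
    while True:
        s, p = min((sad(ref, cur, bx, by, mv[0]+dx, mv[1]+dy, B),
                    (mv[0]+dx, mv[1]+dy)) for dx, dy in HEX)
        if s >= best:           # no hexagon vertex beats the current center
            break
        best, mv = s, p
    s, p = min((sad(ref, cur, bx, by, mv[0]+dx, mv[1]+dy, B),
                (mv[0]+dx, mv[1]+dy)) for dx, dy in INNER)
    return p if s < best else mv
```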
Abstract: Medical imaging takes advantage of digital
technology in imaging and teleradiology. In teleradiology systems,
large amounts of data are acquired, stored and transmitted. A major
technology that may help to solve the problems associated with the
massive data storage and data transfer capacity is data compression
and decompression. There are many methods of image compression
available. They are classified as lossless and lossy compression
methods. In a lossy compression method, the decompressed image
contains some distortion. Fractal image compression (FIC) is a lossy
compression method. In fractal image compression an image is
coded as a set of contractive transformations in a complete metric
space. The set of contractive transformations is guaranteed to
produce an approximation to the original image. In this paper, FIC is
achieved by partitioned iterated function systems (PIFS) using
quadtree partitioning. PIFS is applied to images of different
modalities: ultrasound, CT scan, angiogram, X-ray and mammogram. In
each modality approximately twenty images are considered, and the
average values of the compression ratio and PSNR are computed. In
this method of fractal encoding, one parameter, the tolerance factor
Tmax, is varied from 1 to 10, keeping the other standard parameters
constant. For all image modalities the compression ratio and Peak
Signal to Noise Ratio (PSNR) are computed and studied. The quality of
the decompressed image is assessed by its PSNR value. From the
results it is observed that the compression ratio increases with the
tolerance factor, and the mammogram has the highest compression
ratio. Owing to the properties of fractal compression, the image
quality is not degraded up to an optimum tolerance factor value of
Tmax equal to 8.
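An illustrative sketch of the quadtree partitioning step in PIFS-based coding follows; splitting on block standard deviation against Tmax is our simplification of the range-block acceptance test, and the names and thresholds are assumptions.

```python
import numpy as np

def quadtree_partition(img, x, y, size, Tmax, min_size=4, blocks=None):
    """Recursively split a square block until its standard deviation falls
    below the tolerance Tmax (or the minimum block size is reached)."""
    if blocks is None:
        blocks = []
    block = img[y:y + size, x:x + size]
    if size <= min_size or block.std() <= Tmax:
        blocks.append((x, y, size))       # keep as one range block
        return blocks
    h = size // 2
    for dx, dy in ((0, 0), (h, 0), (0, h), (h, h)):
        quadtree_partition(img, x + dx, y + dy, h, Tmax, min_size, blocks)
    return blocks

img = (np.random.rand(64, 64) * 255).astype(np.uint8)
ranges = quadtree_partition(img, 0, 0, 64, Tmax=8.0)
# Larger Tmax -> fewer, larger range blocks -> higher compression ratio.
```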
Abstract: In this paper, we present video quality estimation
via a neural network. The network predicts the MOS (mean
opinion score) from eight parameters extracted from the
original and coded videos. The eight parameters used are: the
average of the DFT differences, the standard deviation of the DFT
differences, the average of the DCT differences, the standard
deviation of the DCT differences, the variance of the color energy,
the luminance Y, the chrominance U and the chrominance V. We chose
the Euclidean distance to compare the calculated and estimated
outputs.
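A minimal sketch of such a mapping follows: a small fully connected regressor from the eight features to a predicted MOS, evaluated with the Euclidean distance. The architecture, weights and data are toy assumptions (the network is untrained), not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 16)) * 0.1, np.zeros(16)
W2, b2 = rng.normal(size=(16, 1)) * 0.1, np.zeros(1)

def predict_mos(features):                 # features: (n, 8) array
    h = np.tanh(features @ W1 + b1)        # hidden layer
    return h @ W2 + b2                     # predicted MOS, shape (n, 1)

feats = rng.normal(size=(5, 8))            # DFT/DCT stats, color energy, YUV
mos_true = rng.uniform(1, 5, size=(5, 1))  # subjective scores (toy)
err = np.linalg.norm(predict_mos(feats) - mos_true)   # Euclidean distance
```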
Abstract: In order to protect original data, watermarking is the first consideration for digital information copyright. However, algorithms that achieve high image quality often cannot run on embedded systems because their computation is very complex. At the same time, most of today's algorithms must be built into consumer products, since integrated circuits have made huge progress at a low price. In this paper, we propose a novel algorithm that efficiently inserts a watermark into a digital image and is very easy to implement on a digital signal processor. Furthermore, we select a general-purpose, inexpensive digital signal processor made by Analog Devices to suit consumer applications. The experimental results show that the watermarked image quality reaches 46 dB, which is acceptable to human vision, and that the algorithm executes in real time on the digital signal processor.
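For illustration only, a sketch of a simple spatial-domain watermark insertion (LSB substitution) follows; this is a generic textbook technique, not the paper's algorithm, and the sizes are arbitrary.

```python
import numpy as np

def embed_lsb(image, bits):
    """Replace the least significant bits of the first len(bits) pixels."""
    out = image.copy().ravel()
    out[:bits.size] = (out[:bits.size] & 0xFE) | bits
    return out.reshape(image.shape)

img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
bits = np.random.randint(0, 2, 128, dtype=np.uint8)
marked = embed_lsb(img, bits)
mse = np.mean((img.astype(float) - marked) ** 2)
psnr = 10 * np.log10(255 ** 2 / mse)   # very high, as only LSBs change
```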
Abstract: Designing modern machine tools is a complex task. A
simulation tool to aid the design work, a virtual machine, has
therefore been developed in earlier work. The virtual machine
considers the interaction between the mechanics of the machine
(including structural flexibility) and the control system. This paper
exemplifies the usefulness of the virtual machine as a tool for product
development. An optimisation study is conducted aiming at
improving the existing design of a machine tool regarding weight and
manufacturing accuracy at maintained manufacturing speed. The
problem can be categorised as constrained, multidisciplinary,
multi-objective, multivariable optimisation. Parameters of the
control system and geometric quantities of the machine are used as
design variables. This results in a mix of continuous and discrete
variables, and an optimisation approach using a genetic algorithm is
therefore deployed. The accuracy objective is evaluated according to
international standards. The complete system model shows
non-deterministic behaviour; a strategy to handle this, based on
statistical analysis, is suggested. The weight of the main moving
parts is reduced by more than 30 per cent and the manufacturing
accuracy is improved by more than 60 per cent compared to the
original design, with no reduction in manufacturing speed. It is also
shown that interaction effects exist between the mechanics and the
control, i.e. this improvement would most likely not have been
possible with a conventional sequential design approach within the
same time, cost and general resource frame. This indicates the
potential of the virtual machine concept for contributing to improved
efficiency of both complex products and the development process for
such products. Companies incorporating such advanced simulation tools
in their product development could thus improve their own
competitiveness as well as contribute to improved resource efficiency
of society at large.
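A hedged sketch of the optimisation set-up described above follows: a genetic algorithm over a mixed continuous/discrete design vector. The variable names, bounds and toy objective are our assumptions, standing in for the paper's machine model and accuracy evaluation.

```python
import random

CONT_BOUNDS = [(0.1, 10.0)] * 3         # e.g. controller gains (assumed)
DISC_CHOICES = [[5, 8, 10], [1, 2, 3]]  # e.g. plate thickness, rib count

def random_design():
    cont = [random.uniform(lo, hi) for lo, hi in CONT_BOUNDS]
    disc = [random.choice(c) for c in DISC_CHOICES]
    return cont + disc

def fitness(x):
    # Placeholder weighted sum of "weight" and "inaccuracy" objectives.
    return -(sum(v * v for v in x[:3]) + sum(x[3:]))

def evolve(pop_size=40, gens=100, mut=0.1):
    pop = [random_design() for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = [ai if random.random() < 0.5 else bi
                     for ai, bi in zip(a, b)]      # uniform crossover
            if random.random() < mut:              # mutate one gene
                i = random.randrange(len(child))
                child[i] = random_design()[i]
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

print(evolve())
```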
Abstract: In this paper, the processing of sonar signals is
carried out using a Minimal Resource Allocation Network (MRAN)
and a Probabilistic Neural Network (PNN) to differentiate
commonly encountered features in indoor environments. The
stability-plasticity behaviors of both networks have been
investigated. The experimental results show that MRAN possesses
lower network complexity but experiences higher plasticity than
PNN. An enhanced version called parallel MRAN (pMRAN) is
proposed to solve this problem; it proves stable in
prediction and also outperforms the original MRAN.
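For context, a minimal PNN sketch follows (a standard Parzen-window classifier, not the authors' pMRAN): each class is scored by the mean Gaussian kernel to its training points, and the toy data are ours.

```python
import numpy as np

def pnn_classify(x, X_train, y_train, sigma=0.5):
    """Return the class whose training points give the highest mean kernel."""
    scores = {}
    for c in np.unique(y_train):
        Xc = X_train[y_train == c]
        d2 = ((Xc - x) ** 2).sum(axis=1)
        scores[c] = np.exp(-d2 / (2 * sigma ** 2)).mean()
    return max(scores, key=scores.get)

X_train = np.array([[0.0, 0.1], [0.2, 0.0], [1.0, 0.9], [0.9, 1.1]])
y_train = np.array([0, 0, 1, 1])
print(pnn_classify(np.array([0.95, 1.0]), X_train, y_train))  # -> 1
```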
Abstract: Eye localization is necessary for face recognition and
related application areas. Most eye localization algorithms reported
so far still need improvement in precision and computational time for
successful applications. In this paper, we propose an eye localization
method based on multi-scale Gabor feature vectors which is more
robust with respect to the initial points. Eye localization based on
Gabor feature vectors first constructs an Eye Model Bunch for each
eye (left or right), consisting of n Gabor jets and the average eye
coordinates obtained from n model face images, and then tries to
localize the eyes in an incoming face image by exploiting the fact
that the true eye coordinates are most likely to be very close to the
position whose Gabor jet best matches a Gabor jet in the Eye Model
Bunch. Similar ideas have already been proposed, for example in EBGM
(Elastic Bunch Graph Matching). However, the method used in EBGM is
known not to be robust with respect to initial values and may need an
extensive search range to achieve the required performance, and
extensive search ranges cause a much greater computational burden. In
this paper, we propose a multi-scale approach with only a small
increase in computational burden: one first localizes the eyes based
on Gabor feature vectors in a coarse face image obtained by
down-sampling the original face image, and then localizes the eyes
based on Gabor feature vectors in the original-resolution face image,
using the eye coordinates found in the coarse-scale image as initial
points. Several experiments and comparisons with other eye
localization methods reported in the literature show the efficiency
of our proposed method.
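A hedged sketch of the coarse-to-fine strategy follows; the Gabor-jet similarity is abstracted behind a `similarity` callback, and `toy_similarity` below is only a stand-in assumption so the example runs.

```python
import numpy as np

def local_search(img, start, similarity, radius=3):
    """Best (x, y) position within `radius` of `start` under `similarity`."""
    best, best_s = start, -np.inf
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            p = (start[0] + dx, start[1] + dy)
            s = similarity(img, p)
            if s > best_s:
                best, best_s = p, s
    return best

def localize_eye(img, init, similarity, factor=4):
    # 1) coarse pass on a down-sampled image, seeded by the initial point
    coarse = img[::factor, ::factor]
    c = local_search(coarse, (init[0] // factor, init[1] // factor), similarity)
    # 2) fine pass at full resolution, seeded by the coarse result
    return local_search(img, (c[0] * factor, c[1] * factor), similarity)

def toy_similarity(img, p):
    """Stand-in for Gabor-jet similarity: responds to bright pixels."""
    x, y = p
    if 0 <= y < img.shape[0] and 0 <= x < img.shape[1]:
        return img[y, x]
    return -np.inf

img = np.zeros((64, 64))
img[40, 36] = 1.0                                   # "eye" at x=36, y=40
print(localize_eye(img, (32, 32), toy_similarity))  # -> (36, 40)
```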
Abstract: Educational games (EG) seem to have great potential due to the popularity of digital games and the preferences of our younger generations of learners. However, most studies focus on game design and its effectiveness, while little is known about the factors that lead users to accept or reject EG for their learning. User acceptance research tries to understand the determinants of information systems (IS) adoption among users by investigating both system factors and user factors. Given the lack of knowledge about acceptance factors for educational games, we seek to understand the issue. This study proposes a model of acceptance factors based on the Unified Theory of Acceptance and Use of Technology (UTAUT). We use the original model's determinants (performance expectancy, effort expectancy and social influence) together with two new determinants (learning opportunities and enjoyment). We will also investigate the moderating effects of gender and gaming experience on the proposed factors.
Abstract: The paper reflects the current state of awareness of the
static elasticity modulus of concrete. This parameter is undoubtedly
very important for the design of concrete structures, yet it is often
neglected and rarely determined before the concrete technology itself
is designed. The paper describes the assessment and comparison of
four mix designs with an almost constant dosage of the individual
components. The only difference is the area of origin of the
small-size aggregate fraction 0/4. The development of compressive
strength and static elasticity modulus at the ages of 7, 28 and 180
days was observed. As the experiment showed, the design of the
individual components and their quality are the basic factors
influencing the elasticity modulus of present-day concrete.
Abstract: The construction of a portable device for the fast analysis of energetic materials is described in this paper. The developed analytical system consists of two main parts: a miniaturized microcolumn liquid chromatograph of unique construction and an original chemiluminescence detector. This novel portable device is able to selectively determine most nitramine- and nitroester-based explosives as well as inorganic nitrates at trace concentrations in water or soil extracts in less than 8 minutes.
Abstract: The paper considers the effect of feed plate location
on the interactions in a seven-plate binary distillation column. The
mathematical model of the distillation column is derived from the
mass and energy balance equations for each stage, a detailed model
of both the reboiler and the condenser, and heat transfer equations.
The Dynamic Relative Magnitude Criterion (DRMC) is used to assess
the interactions for different feed plate locations in a seven-plate
(benzene-toluene) binary distillation column (the feed plate is
originally at stage 4). The results show that the farther the feed
plate moves from its optimum position, the greater the level of
interaction becomes.
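For reference, stage-wise models of this kind typically rest on component mass balances of the following standard form (our notation, not necessarily the paper's; the feed term enters only on the feed stage \(n_F\)):

\[
\frac{d\,(M_n x_n)}{dt} = L_{n-1} x_{n-1} + V_{n+1} y_{n+1} - L_n x_n - V_n y_n + F z_F\,\delta_{n, n_F},
\]

where \(M_n\) is the liquid holdup on stage \(n\), \(L_n\) and \(V_n\) are the liquid and vapour flows, \(x_n\) and \(y_n\) are the liquid and vapour mole fractions of the light component, and \(F, z_F\) are the feed flow and composition. Moving the feed plate changes which stage balance receives the feed term, which is how the feed plate location enters the model.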
Abstract: Overcurrent (OC) relays are the major protection
devices in a distribution system. The operating times of the OC
relays must be coordinated properly to avoid mal-operation of the
backup relays. OC relay time coordination in ring-fed
distribution networks is a highly constrained optimization problem
which can be stated as a linear programming problem (LPP). The
purpose is to find an optimum relay setting to minimize the time of
operation of relays and at the same time, to keep the relays properly
coordinated to avoid the mal-operation of relays.
This paper presents a two-phase simplex method for the optimum time
coordination of OC relays. The method is based on the simplex
algorithm, which is used to find the optimum solution of an LPP. The
method introduces artificial variables to obtain an initial basic
feasible solution (IBFS). The artificial variables are then removed
by the iterative process of the first phase, which minimizes an
auxiliary objective function. The second phase minimizes the original
objective function and gives the optimum time coordination of the OC
relays.
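A toy sketch of the LP formulation follows (our two-relay example, not the paper's network data): minimize the sum of relay operating times subject to the backup relay operating a coordination time interval (CTI) after the primary. For brevity it uses SciPy's LP solver rather than a hand-written two-phase simplex.

```python
from scipy.optimize import linprog

CTI = 0.3                     # coordination time interval (s), assumed
# Variables: t1 (primary), t2 (backup). Objective: minimize t1 + t2.
c = [1.0, 1.0]
# Coordination constraint t2 >= t1 + CTI, written as A_ub @ x <= b_ub.
A_ub, b_ub = [[1.0, -1.0]], [-CTI]
bounds = [(0.1, None), (0.1, None)]   # minimum operating times
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print(res.x)                  # -> [0.1, 0.4]
```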
Abstract: The Support Vector Machine (SVM) is a statistical learning tool that was initially developed by Vapnik in 1979 and later extended into the more general concept of structural risk minimization (SRM). SVMs are playing an increasing role in detection applications across various engineering problems, notably in statistical signal processing, pattern recognition, image analysis, and communication systems. In this paper, the SVM was applied to the detection of medical ultrasound images in the presence of partially developed speckle noise. The simulation was carried out for single-look and multi-look speckle models to give a complete overview of, and insight into, the newly proposed SVM-based detector. The structure of the SVM was derived and applied to clinical ultrasound images, and its performance was calculated in terms of the mean square error (MSE) metric. We show that the SVM-detected ultrasound images have a very low MSE and are of good quality, and that the quality of the processed speckled images improves for the multi-look model. Furthermore, the contrast of the SVM-detected images was higher than that of the original noise-free images, indicating that the SVM approach increases the distance between the pixel reflectivity levels (the detection hypotheses) in the original images.
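A hedged sketch of the general set-up follows: SVM detection of two reflectivity levels under synthetic single-look speckle (multiplicative exponential noise), using scikit-learn as a generic stand-in for the paper's derived detector; the data and parameters are toy assumptions.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Two reflectivity levels (the detection hypotheses) under single-look
# speckle, modeled as multiplicative exponential noise.
clean = rng.choice([0.3, 0.9], size=2000)
speckled = clean * rng.exponential(1.0, size=2000)

X, y = speckled.reshape(-1, 1), (clean > 0.5).astype(int)
svm = SVC(kernel="rbf").fit(X[:1500], y[:1500])
decisions = svm.predict(X[1500:])
mse = np.mean((decisions - y[1500:]) ** 2)  # MSE of decisions vs. truth
print(f"test MSE = {mse:.3f}")
```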