Abstract: Random Access Memory (RAM) is an important
device in a computer system; it provides a snapshot of how the
computer has been used. With its growing importance, computer
memory has become a much-discussed topic in digital forensics.
A number of tools have been developed to retrieve information
from memory; however, most are limited in their ability to
retrieve the important information from computer memory.
Hence, this paper discusses the limitations and setbacks of two
main techniques, process signature search and process
enumeration. A new hybrid approach is then presented that
minimizes the setbacks of both individual techniques. The new
approach combines the two techniques to retrieve information
from the process block and other objects in computer memory.
In addition, the basic theory of address translation on x86
platforms is demonstrated in this paper.
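The x86 address translation referred to above can be illustrated with a short sketch. The decomposition below follows the classic 32-bit non-PAE paging scheme; reading the actual page directory and page table entries from a captured memory image is omitted.

```python
# Sketch of 32-bit x86 (non-PAE) virtual address decomposition, the
# first step of the address translation a memory-forensics tool performs
# when walking page tables in a memory image.

def split_virtual_address(va: int):
    """Split a 32-bit virtual address into paging indices."""
    pde_index = (va >> 22) & 0x3FF   # bits 31..22: page directory entry
    pte_index = (va >> 12) & 0x3FF   # bits 21..12: page table entry
    offset    = va & 0xFFF           # bits 11..0 : offset in 4 KiB page
    return pde_index, pte_index, offset

pde, pte, off = split_virtual_address(0xC0380004)
print(pde, pte, off)
```

The physical address is then obtained by reading the page directory entry at `pde_index`, following it to the page table, reading the entry at `pte_index`, and adding `offset` to the page frame base.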
Abstract: In this paper, a back-propagation artificial neural
network (BPANN) with the Levenberg–Marquardt algorithm is
employed to predict the deformation of the upsetting process. To
prepare a training set for the BPANN, a number of finite element
simulations were carried out. The input data for the artificial
neural network are a set of randomly generated parameters
(aspect ratio d/h, material properties, temperature and coefficient
of friction). The output data are the coefficients of the polynomial
fitted to the barreling curves. The neural network was trained
using barreling curves generated by finite element simulations of
the upsetting process and the corresponding material parameters.
The technique was tested on three different specimens and can be
successfully employed to predict the deformation of the upsetting
process.
Abstract: The IEEE 802.16 standard, which has emerged as a
Broadband Wireless Access (BWA) technology, promises to
deliver high data rates over large areas to a large number of
subscribers in the near future. This paper analyzes the effect of
overheads on the downlink (DL) capacity of the orthogonal
frequency division multiple access (OFDMA)-based IEEE
802.16e mobile WiMAX system, with and without overheads.
The analysis focuses in particular on the impact of Adaptive
Modulation and Coding (AMC), and derives an algorithm to
determine the maximum number of subscribers that each
WiMAX sector can support. An analytical study of the WiMAX
propagation channel using the COST-231 Hata model is
presented. Numerical results, obtained by simulating the
algorithm in Matlab for different multi-user parameters, are
presented and discussed.
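The COST-231 Hata model used in the channel study has a closed-form median path loss expression; a minimal sketch follows (the parameter values in the example call are illustrative, not taken from the paper).

```python
import math

def cost231_hata_path_loss(f_mhz, h_base_m, h_mobile_m, d_km,
                           metropolitan=False):
    """COST-231 Hata median path loss in dB (valid roughly for
    1500-2000 MHz, base height 30-200 m, mobile height 1-10 m,
    distance 1-20 km)."""
    # Mobile antenna correction term for small/medium-sized cities.
    a_hm = (1.1 * math.log10(f_mhz) - 0.7) * h_mobile_m \
           - (1.56 * math.log10(f_mhz) - 0.8)
    c_m = 3.0 if metropolitan else 0.0   # metropolitan correction
    return (46.3 + 33.9 * math.log10(f_mhz)
            - 13.82 * math.log10(h_base_m) - a_hm
            + (44.9 - 6.55 * math.log10(h_base_m)) * math.log10(d_km)
            + c_m)

# Example: 2000 MHz carrier, 30 m base station, 1.5 m mobile, 5 km cell.
print(round(cost231_hata_path_loss(2000, 30, 1.5, 5), 1))
```

Such a path loss value, combined with the link budget, determines the AMC mode available at a given distance and hence the per-subscriber capacity of the sector.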
Abstract: In this paper, we propose a new architecture for the implementation of the N-point Fast Fourier Transform (FFT), based on the Radix-2 Decimation in Frequency algorithm. This architecture is based on a pipeline circuit that can process a stream of samples and produce two FFT transform samples every clock cycle. Compared to existing implementations, the proposed architecture achieves double the processing speed with the same circuit complexity.
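The radix-2 decimation-in-frequency recursion that underlies such pipelines can be sketched in software. This is the textbook DIF algorithm, not the proposed hardware architecture: each stage feeds the butterfly sums to the even-index outputs and the twiddled differences to the odd-index outputs.

```python
import cmath

def fft_dif(x):
    """Recursive radix-2 decimation-in-frequency FFT.
    len(x) must be a power of 2."""
    n = len(x)
    if n == 1:
        return list(x)
    half = n // 2
    # DIF butterfly: sums drive the even outputs,
    # twiddled differences drive the odd outputs.
    top = [x[i] + x[i + half] for i in range(half)]
    bottom = [(x[i] - x[i + half]) * cmath.exp(-2j * cmath.pi * i / n)
              for i in range(half)]
    even = fft_dif(top)
    odd = fft_dif(bottom)
    out = [0] * n
    out[0::2] = even   # X[2r]   = FFT(top)
    out[1::2] = odd    # X[2r+1] = FFT(bottom)
    return out
```

In the pipeline circuit, the two branches of each butterfly are exactly what allows two transform samples to be produced per clock cycle.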
Abstract: Cluster analysis divides data into groups that are
meaningful, useful, or both. Analysis of biological data is creating
a new generation of epidemiologic, prognostic, diagnostic and
treatment modalities. Clustering of protein sequences is one of the
current research topics in the field of computer science. Linear
relations are valuable in rule discovery for a given data set, such
as "if value X goes up by 1, value Y will go down by 3". Classical
linear regression models the linear relation of two sequences
perfectly. However, if we need to cluster a large repository of
protein sequences into groups in which sequences have a strong
linear relationship with each other, it is prohibitively expensive to
compare sequences one by one. In this paper, we propose a new
technique named the General Regression Model Technique
Clustering Algorithm (GRMTCA) to handle the problem of
clustering linear sequences efficiently. GRMTCA provides a
measure, GR*, that indicates the degree of linearity of multiple
sequences without having to compare each pair of them.
Abstract: This paper describes the development of a numerical finite element algorithm for the analysis of reinforced concrete structures equipped with a seismic energy-absorbing (damper) device and subjected to earthquake excitation. For this purpose, a finite element program code for the analysis of reinforced concrete frame buildings was developed. The performance of the developed code is evaluated by analyzing a reinforced concrete frame building model. The results show that using a damper device as a seismic energy dissipation system can effectively reduce the structural response of a framed structure during an earthquake.
Abstract: This paper presents an efficient emission constrained
hydrothermal scheduling algorithm that deals with nonlinear
functions such as the water discharge characteristics, thermal cost,
and transmission loss. It is then incorporated into the hydrothermal
coordination program. The program has been tested on a practical
utility system having 32 thermal and 12 hydro generating units. Test
results show that a slight increase in production cost causes a
substantial reduction in emission.
Abstract: This paper presents a particle swarm optimization
(PSO) approach for multiple object tracking based on histogram
matching. To start with, gray-level histograms are calculated to
establish a feature model for each of the target objects. The
difference between the gray-level histogram corresponding to
each particle in the search space and that of the target object is
used as the fitness value. Multiple swarms are created according
to the number of target objects being tracked. Owing to the
efficiency and simplicity of the PSO algorithm for global
optimization, target objects can be tracked as iterations continue.
Experimental results confirm that the proposed PSO algorithm
converges rapidly, allowing real-time tracking of each target
object. When a tracked object moves outside the tracking range,
the global search capability of the PSO allows it to re-acquire the
target object.
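The histogram-matching fitness can be sketched as follows. The distance measure is assumed here to be a sum of absolute bin differences; the paper's exact measure and bin count may differ.

```python
def gray_histogram(region, bins=16):
    """Normalized gray-level histogram of a list of pixel
    intensities in 0..255."""
    hist = [0] * bins
    for p in region:
        hist[min(p * bins // 256, bins - 1)] += 1
    total = len(region)
    return [h / total for h in hist]

def fitness(candidate_region, target_hist, bins=16):
    """Fitness of a particle = histogram distance between the image
    region under the particle and the target feature model (smaller
    is better). Sum of absolute differences is an assumption."""
    cand = gray_histogram(candidate_region, bins)
    return sum(abs(c - t) for c, t in zip(cand, target_hist))
```

Each particle's position selects a candidate region; the swarm minimizes this distance, and one swarm per target object is maintained.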
Abstract: Heuristic and meta-heuristic approaches are among
the most robust search techniques for large-scale search spaces.
They are especially useful when the combinatorial explosion of
the search space prevents a solution by classical computation
methods, as is the case for NP-complete problems. In this
research, the winner determination problem in combinatorial
auctions is formulated and, after assessing earlier heuristic
functions, solved using a genetic algorithm. We show that this
new method performs better than other heuristics such as
simulated annealing and the greedy approach.
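A fitness function for a GA chromosome in the winner determination problem can be sketched as below. A chromosome is a bit vector selecting bids; selections in which two accepted bids share an item are infeasible. The example bids and the zero-fitness penalty are illustrative assumptions, not taken from the paper.

```python
def wdp_fitness(chromosome, bids):
    """bids: list of (item_set, price) pairs. Returns the total
    revenue of the selected bids, or 0 if any two selected bids
    share an item (infeasible allocation)."""
    taken = set()
    revenue = 0
    for gene, (items, price) in zip(chromosome, bids):
        if not gene:
            continue
        if taken & items:      # item already sold -> infeasible
            return 0
        taken |= items
        revenue += price
    return revenue

bids = [({"a", "b"}, 10), ({"b", "c"}, 8), ({"c"}, 5)]
print(wdp_fitness([1, 0, 1], bids))  # bids 0 and 2 are disjoint
```

The GA then evolves the bit vectors toward the revenue-maximizing feasible allocation.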
Abstract: A novel method of individual-level adaptive mutation rate control, called the rank-scaled mutation rate, is introduced for genetic algorithms. The rank-scaled mutation rate controlled genetic algorithm varies the mutation parameters based on the rank of each individual within the population, so the distribution of the fitness of the population is taken into consideration when forming the new mutation rates. The best-fit individuals mutate at the lowest rate and the least-fit mutate at the highest rate. The complexity of the algorithm is of the order of an individual adaptation scheme and is lower than that of a self-adaptation scheme. The proposed algorithm is tested on two common problems, namely, numerical optimization of a function and the traveling salesman problem. The results show that the proposed algorithm outperforms both the fixed and deterministic mutation rate schemes. It is best suited for problems with several local optimum solutions without a high demand for excessive mutation rates.
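A rank-based mutation rate assignment can be sketched as below. The linear scaling between the rate bounds is an illustrative assumption; the paper's exact scaling function may differ.

```python
def rank_scaled_mutation_rates(fitnesses, p_min=0.001, p_max=0.1):
    """Assign each individual a mutation rate scaled by its fitness
    rank: the fittest individual mutates at p_min, the least fit at
    p_max, with the rest spaced linearly in between."""
    n = len(fitnesses)
    # rank 0 = best (highest fitness), rank n-1 = worst
    order = sorted(range(n), key=lambda i: fitnesses[i], reverse=True)
    rates = [0.0] * n
    for rank, i in enumerate(order):
        rates[i] = p_min + (p_max - p_min) * rank / max(n - 1, 1)
    return rates
```

Recomputing the rates each generation costs only a sort of the population, which is consistent with the stated individual-adaptation-level complexity.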
Abstract: The Iterated Function System (IFS) is a scheme for describing and manipulating complex fractal attractors using simple mathematical models. More precisely, the most popular "fractal-based" algorithms for both representation and compression of computer images have involved some implementation of the method of Iterated Function Systems (IFS) on complete metric spaces. In this paper, a new generalized space called the Multi-Fuzzy Fractal Space is constructed. On this space a distance function is defined, and its completeness is proved. The completeness property of this space ensures the existence of a fixed-point theorem for the family of continuous mappings. This theorem is the fundamental result on which the IFS methods are based and on which the fractals are built. The defined mappings are proved to satisfy some generalizations of the contraction condition.
Abstract: Evolvable hardware (EHW) is a developing field that
applies evolutionary algorithms (EAs) to automatically design
circuits, antennas, robot controllers, etc. A great deal of research
has been done in this area, and several different EAs have been
introduced to tackle numerous problems, such as scalability and
evolvability. However, every time a specific EA is chosen for
solving a particular task, all its components, such as population
size, initialization, selection mechanism, mutation rate, and
genetic operators, should be selected in order to achieve the best
results. In the last three decades the selection of the right
parameters for the EA components for solving different "test
problems" has been investigated. In this paper the behaviour of
the mutation rate for designing logic circuits, which has not been
analyzed before, is deeply analyzed. The mutation rate for an
EHW system modifies the number of inputs of each logic gate,
the functionality (for example from AND to NOR) and the
connectivity between logic gates. The behaviour of the mutation
is analyzed based on the number of generations, genotype
redundancy and number of logic gates of the evolved circuits.
The experimental results characterize the behaviour of the
mutation rate during evolution for the design and optimization of
simple logic circuits, and suggest the best mutation rate to be used
for designing combinational logic circuits. The research presented
is particularly important for those who would like to implement a
dynamic mutation rate inside an evolutionary algorithm for
evolving digital circuits. Research on the mutation rate during the
last 40 years is also summarized.
Abstract: The fault-proneness of a software module is the
probability that the module contains faults. Different techniques
have been proposed to predict the fault-proneness of modules,
including statistical methods, machine learning techniques, neural
network techniques and clustering techniques. The aim of the
proposed study is to explore whether metrics available early in
the lifecycle (i.e. requirement metrics), metrics available late in
the lifecycle (i.e. code metrics), and a combination of the two can
be used to identify fault-prone modules using a Genetic
Algorithm technique. The approach has been tested on real-time
defect datasets of NASA software projects written in the C
programming language. The results show that the fusion of
requirement and code metrics yields the best prediction model for
detecting faults, compared with the commonly used code-based
model.
Abstract: The increasing growth of the volume of information
on the Internet creates an increasing need for new
(semi)automatic methods for retrieving documents and ranking
them according to their relevance to the user query. In this paper,
after a brief review of ranking models, a new ontology-based
approach for ranking HTML documents is proposed and
evaluated in various circumstances. Our approach is a
combination of conceptual, statistical and linguistic methods.
This combination preserves the precision of ranking without
losing speed. Our approach exploits natural language processing
techniques for extracting phrases and stemming words. An
ontology-based conceptual method is then used to annotate
documents and expand the query. To expand a query, the spread
activation algorithm is improved so that the expansion can be
done along various aspects. The annotated documents and the
expanded query are processed to compute the relevance degree
using statistical methods. The outstanding features of our
approach are (1) combining conceptual, statistical and linguistic
features of documents, (2) expanding the query with its related
concepts before comparing it to documents, (3) extracting and
using both words and phrases to compute the relevance degree,
(4) improving the spread activation algorithm to do the expansion
based on a weighted combination of different conceptual
relationships and (5) allowing variable document vector
dimensions. A ranking system called ORank has been developed
to implement and test the proposed model. The test results are
included at the end of the paper.
Abstract: This paper deals with the four-item assembly process
of a linear drive. The assembly will be realized in a flexible
assembly cell at the Institute of Manufacturing Systems and
Applied Mechanics. The manufacturing cell is defined, along with
the individual actuators that make up the flexible cell. The next
chapter concerns the control type, describing in detail the
sequence control that will be used in the flexible assembly cell.
The cell control is divided into individual step instructions, which
are illustrated in Table III.
Abstract: Repeated observation of a given area over time yields
potential for many forms of change detection analysis. These
repeated observations are confounded in terms of radiometric
consistency due to changes in sensor calibration over time,
differences in illumination, observation angles and variation in
atmospheric effects.
This paper demonstrates the applicability of an empirical relative
radiometric normalization method to a set of multitemporal
cloudy images acquired by the Resourcesat-1 LISS III sensor. The
objective of this study is to detect and remove cloud cover and
normalize the images radiometrically. Cloud detection is achieved
using the Average Brightness Threshold (ABT) algorithm. The
detected cloud is removed and replaced with data from another
image of the same area. After cloud removal, the proposed
normalization method is applied to reduce the radiometric
influence caused by non-surface factors. This process identifies
landscape elements whose reflectance values are nearly constant
over time, i.e. the subset of non-changing pixels is identified
using a frequency-based correlation technique. The quality of the
radiometric normalization is statistically assessed by the R2 value
and the mean square error (MSE) between each pair of analogous
bands.
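Brightness-threshold cloud masking can be sketched as below. The threshold here is a fixed multiple of the scene's mean brightness; this factor is an illustrative assumption, as the paper's ABT algorithm derives its threshold from the scene statistics.

```python
def detect_clouds_abt(image, k=1.5):
    """Flag likely cloud pixels using an average-brightness
    threshold. image: 2-D list of brightness values. Pixels brighter
    than k times the scene mean are flagged as cloud (the factor k
    is a hypothetical stand-in for the ABT threshold rule)."""
    pixels = [p for row in image for p in row]
    mean = sum(pixels) / len(pixels)
    threshold = k * mean
    return [[p > threshold for p in row] for row in image]
```

The resulting mask marks the pixels to be replaced with cloud-free data from another acquisition of the same area.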
Abstract: Single nucleotide polymorphisms (SNPs) hold much promise as a basis for disease-gene association. However, research is limited by the cost of genotyping the tremendous number of SNPs. Therefore, it is important to identify a small subset of informative SNPs, the so-called tag SNPs. This subset consists of selected SNPs of the genotypes, and accurately represents the rest of the SNPs. Furthermore, an effective evaluation method is needed to evaluate prediction accuracy of a set of tag SNPs. In this paper, a genetic algorithm (GA) is applied to tag SNP problems, and the K-nearest neighbor (K-NN) serves as a prediction method of tag SNP selection. The experimental data used was taken from the HapMap project; it consists of genotype data rather than haplotype data. The proposed method consistently identified tag SNPs with considerably better prediction accuracy than methods from the literature. At the same time, the number of tag SNPs identified was smaller than the number of tag SNPs in the other methods. The run time of the proposed method was much shorter than the run time of the SVM/STSA method when the same accuracy was reached.
Abstract: In this paper, a novel deinterlacing algorithm is
proposed. The proposed algorithm approximates the distribution
of the luminance with a polynomial function. Instead of using one
polynomial function for all pixels, different polynomial functions
are used for the uniform, texture, and directional edge regions.
The function coefficients for each region are computed by matrix
multiplications. Experimental results demonstrate that the
proposed method performs better than conventional algorithms.
Abstract: Serial Analysis of Gene Expression (SAGE) is a
powerful quantification technique for generating cell or tissue
gene expression data. The gene expression profile of a cell or
tissue in several different states is difficult for biologists to
analyze because of the large number of genes typically involved.
However, feature selection in machine learning can successfully
reduce this problem. The method reduces the number of features
(genes) in specific SAGE data and retains only the relevant genes.
In this study, we used a genetic algorithm to implement feature
selection, and evaluated the classification accuracy of the selected
features with the K-nearest neighbor method. In order to validate
the proposed method, we used two SAGE data sets for testing.
The results of this study show that the number of features in the
original SAGE data set can be significantly reduced and higher
classification accuracy can be achieved.
Abstract: Wavelet transforms are multiresolution
decompositions that can be used to analyze signals and images.
Image compression is one of the major applications of wavelet
transforms in image processing, and is considered one of the most
powerful methods for achieving a high compression ratio.
However, its implementation is very time-consuming. On the
other hand, parallel computing technologies offer an efficient
means of image compression using wavelets. In this paper, we
propose a parallel wavelet compression algorithm based on
quadtrees. We implement the algorithm using MatlabMPI (a
parallel, message-passing version of Matlab), compute its
isoefficiency function, and show that it is scalable. Our
experimental results also confirm the efficiency of the algorithm.
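The wavelet decomposition at the core of such compressors can be sketched with one level of a 2-D transform. Haar is used below as an illustrative wavelet; the paper's quadtree partitioning and MatlabMPI distribution of blocks across processes are not reproduced.

```python
def haar_2d_one_level(block):
    """One level of a 2-D Haar wavelet decomposition of an
    even-sized block (list of lists), returning the LL, LH, HL and
    HH sub-bands."""
    def rows_pass(mat):
        # Pairwise averages (low-pass) and differences (high-pass)
        # along each row.
        lo, hi = [], []
        for row in mat:
            lo.append([(row[i] + row[i + 1]) / 2
                       for i in range(0, len(row), 2)])
            hi.append([(row[i] - row[i + 1]) / 2
                       for i in range(0, len(row), 2)])
        return lo, hi

    def cols_pass(mat):
        # Same pass along columns, via transpose.
        t = [list(c) for c in zip(*mat)]
        lo, hi = rows_pass(t)
        return ([list(c) for c in zip(*lo)],
                [list(c) for c in zip(*hi)])

    L, H = rows_pass(block)
    LL, LH = cols_pass(L)
    HL, HH = cols_pass(H)
    return LL, LH, HL, HH
```

In a quadtree scheme, each sub-band (or image quadrant) can be handed to a separate process, which is what makes the message-passing parallelization natural.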