Abstract: The most common forensic activity is searching a hard
disk for strings of data. Nowadays, investigators and analysts are
increasingly encountering large, even terabyte-sized data sets when
conducting digital investigations, so a sequential search can
take weeks to complete. There are two primary search
methods: index-based search and bitwise search. Index-based
searching is very fast after the initial indexing but initial indexing
takes a long time. In this paper, we discuss a high-speed bitwise
search model for large-scale digital forensic investigations. We used
a pattern-matching board, which is generally used for network security,
to search for strings and complex regular expressions. Our results
indicate that, in many cases, the use of a pattern-matching board can
substantially increase the performance of digital forensic search tools.
Abstract: Although the STL (stereolithography) file format is
widely used as a de facto industry standard in the rapid prototyping
industry, due to its simplicity and its ability to tessellate almost all
surfaces, there are always defects and shortcomings in its use, many
of which are difficult to correct manually. In processing complex
models, the size of the file and its defects grow rapidly; therefore,
correcting STL files becomes difficult. In this paper, by optimizing
the existing algorithms, the size of the files and the computer memory
needed to process them are reduced. Regardless of the type and extent
of the errors in the STL files, the tail-to-head searching method and
analysis of the nearest distance between tails and heads were used.
As a result, STL models are sliced rapidly, and fully closed contours
are produced effectively and without error.
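The tail-to-head chaining idea can be illustrated with a small, hypothetical sketch (not the paper's implementation): given slice segments as (tail, head) point pairs, an open contour is repeatedly extended with the remaining segment whose tail lies nearest its current head, until the loop closes.

```python
from math import dist, isclose

def close_contours(segments, tol=1e-6):
    """Chain (tail, head) segments into closed contours by repeatedly
    attaching the unused segment whose tail is nearest the open head."""
    remaining = list(segments)
    contours = []
    while remaining:
        tail, head = remaining.pop(0)
        contour = [tail, head]
        # keep extending until the head returns to the starting tail
        while not isclose(dist(contour[-1], contour[0]), 0.0, abs_tol=tol):
            i = min(range(len(remaining)),
                    key=lambda k: dist(remaining[k][0], contour[-1]))
            _, h = remaining.pop(i)
            contour.append(h)
        contours.append(contour)
    return contours

# Three segments forming a triangle, given in arbitrary order:
tri = [((0, 0), (1, 0)), ((1, 1), (0, 0)), ((1, 0), (1, 1))]
loops = close_contours(tri)
print(len(loops))  # → 1 (one fully closed contour)
```

The nearest-distance rule makes the chaining tolerant of small gaps between segment endpoints, which is one way errors in STL facets manifest during slicing.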
Abstract: This paper describes an efficient and practical method
for the economic dispatch problem in one- and two-area electrical
power systems, considering the capacity constraint of the tie
transmission line. The direct search method (DSM) is used with
equality and inequality constraints of the production units for any
kind of fuel-cost function. With this method, it is possible to handle
several inequality constraints without difficulty, even for complex
cost functions or when the derivative of the cost function is
unavailable. To minimize the total number of iterations in the search
process, multi-level convergence is incorporated into the DSM. An
enhanced direct search method (EDSM) for the two-area power system
is investigated. An initial calculation step size that causes fewer
iterations, and hence less calculation time, is presented. The effect
of the transmission tie-line capacity between areas on the economic
dispatch problem and on the total generation cost is studied; line
compensation and combined active and reactive power dispatch are
proposed to overcome the high generation costs of this multi-area
system.
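A minimal sketch of derivative-free dispatch with step refinement (not the paper's DSM; the quadratic cost curves, unit limits and 400 MW demand below are hypothetical) shows the idea: the step size is halved whenever no improving move exists, playing the role of the multi-level convergence described above.

```python
def dispatch_direct_search(cost1, cost2, demand, limits, step=64.0, tol=1e-6):
    """Coordinate direct search for a two-unit economic dispatch:
    choose P1 (with P2 = demand - P1) to minimize total cost.
    No derivatives are used; the step is halved when stuck."""
    lo1, hi1 = limits
    p1 = (lo1 + hi1) / 2
    total = lambda p: cost1(p) + cost2(demand - p)
    best = total(p1)
    while step > tol:
        moved = False
        for cand in (p1 + step, p1 - step):
            if lo1 <= cand <= hi1 and total(cand) < best:
                p1, best, moved = cand, total(cand), True
                break
        if not moved:
            step /= 2  # refine: move to the next convergence level
    return p1, demand - p1, best

# Hypothetical quadratic fuel-cost curves ($/h) and a 400 MW demand:
c1 = lambda p: 0.004 * p * p + 5.3 * p + 500
c2 = lambda p: 0.006 * p * p + 5.5 * p + 400
p1, p2, cost = dispatch_direct_search(c1, c2, 400, limits=(50, 350))
print(round(p1), round(p2))  # → 250 150 (equal incremental costs)
```

For these curves the equal-incremental-cost condition gives P1 = 250 MW and P2 = 150 MW analytically, which the search reaches without ever evaluating a derivative.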
Abstract: As the information age matures, major social
infrastructures such as communication, finance, military and energy
have become ever more dependent on information communication
systems. Since these infrastructures are connected to the Internet,
electronic intrusions such as hacking and viruses have become a new
security threat. In particular, the disturbance or neutralization of a
major social infrastructure can result in extensive material damage and
social disorder. To address this issue, many nations around the world
are researching and developing various techniques and information
security policies, as a government-wide effort, to protect their
infrastructures from newly emerging threats. This paper proposes an
evaluation method for the information security levels of CIIP (Critical
Information Infrastructure Protection), which can enhance the security
level of critical information infrastructure by checking the current
security status and establishing security measures accordingly to
protect infrastructures effectively.
Abstract: As hardware technology advances, the cost of storage
is decreasing and ever larger volumes of data are being accumulated.
Thus there is an urgent need for new techniques
and tools that can intelligently and automatically assist us in
transforming this data into useful knowledge. Different data mining
techniques have been developed that are helpful for handling these
large databases [7]. Data mining is also finding its role in the field of
biotechnology. Pedigree means the associated ancestry of a crop
variety. Genetic diversity is the variation in the genetic composition
of individuals within or among species. Genetic diversity depends
upon the pedigree information of the varieties. Parents at lower
hierarchic levels have more weightage for predicting genetic
diversity as compared to the upper hierarchic levels. The weightage
decreases as the level increases. For crossbreeding, the two varieties
should be as genetically diverse as possible, so as to incorporate the
useful characters of both varieties into the newly developed variety.
This paper discusses the searching and analysis of different possible
pairs of varieties, selected on the basis of morphological characters,
climatic conditions and nutrients, so as to obtain the optimal pair
that can produce the required crossbreed variety. An algorithm
was developed to determine the genetic diversity between the
selected wheat varieties. Cluster analysis technique is used for
retrieving the results.
Abstract: Nowadays e-learning is increasingly popular, especially
in Vietnam. In e-learning, materials for studying are very important.
It is necessary to design knowledge base systems and expert
systems that support searching, querying and the solving of
problems. The ontology, called Computational Object
Knowledge Base Ontology (COKB-ONT), is a useful tool for
designing knowledge base systems in practice. In this paper, a
design method for knowledge base systems in education using
COKB-ONT will be presented. We also present the design of a
knowledge base system that supports studying knowledge and
solving problems in higher mathematics.
Abstract: In this paper, a block code to minimize the peak-to-average
power ratio (PAPR) of orthogonal frequency division
multiplexing (OFDM) signals is proposed. It is shown that cyclic
shifts and codeword inversion cause no change to the peak envelope
power. The encoding rule for the proposed code comprises
searching for a seed codeword, shifting the register elements, and
determining codeword inversion, eliminating the look-up table for
one-to-one correspondence between the source and the coded data.
Simulation results show that OFDM systems with the proposed code
always have the minimum PAPR.
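The invariance claimed above can be checked numerically: cyclically shifting the frequency-domain codeword only multiplies the time-domain samples by a unit-magnitude exponential, and inversion only flips their sign, so the PAPR is unchanged. A minimal sketch with a hypothetical BPSK codeword:

```python
import cmath

def papr(codeword):
    """PAPR of the OFDM time-domain signal given by an N-point IDFT."""
    N = len(codeword)
    x = [sum(codeword[k] * cmath.exp(2j * cmath.pi * k * n / N)
             for k in range(N)) / N
         for n in range(N)]
    powers = [abs(s) ** 2 for s in x]
    return max(powers) / (sum(powers) / N)  # peak over average power

c = [1, -1, 1, 1, -1, 1, -1, -1]   # hypothetical BPSK codeword
shifted = c[3:] + c[:3]            # cyclic shift by 3 positions
inverted = [-s for s in c]         # codeword inversion
print(abs(papr(c) - papr(shifted)) < 1e-9,
      abs(papr(c) - papr(inverted)) < 1e-9)  # → True True
```

This is why the encoder can store only one seed codeword per equivalence class: every cyclic shift or inversion of it has exactly the same peak envelope power.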
Abstract: In this paper, an enhanced ground proximity warning simulation and validation system is designed and implemented. First, based on a square-grid and sub-grid structure, the global digital terrain database is designed and constructed. Terrain data searching is implemented by querying the latitude and longitude bands and the separated zones of the global terrain database with the current aircraft position. A combination of dynamic scheduling and hierarchical scheduling is adopted to schedule the terrain data, so that the terrain data can be read and deleted dynamically in memory. Secondly, according to the scope, distance and approach-speed information of the dangerous terrain ahead, and using a security-profile calculation method, collision threat detection is executed in real time, providing caution and warning alarms. According to this scheme, the enhanced ground proximity warning simulation system is implemented. Simulations are carried out to verify good real-time performance in terrain display and alarm triggering, and the results show that the simulation system operates correctly, reasonably and stably.
Abstract: An accident is an unexpected and unplanned situation
that happens and affects humans with a negative outcome. An accident
can cause an injury to a human biological organism. Thus, the
provision of initial care for an illness or injury is a very important
step in preparing patients/victims before sending them to the doctor.
In this paper, a First Aid Application is developed to give some
directions for the preliminary care of a patient/victim via an Android
mobile device. In addition, a navigation function using the Google
Maps API is implemented for searching for a suitable path to the
nearest hospital. In an emergency, this function can be activated to
navigate patients/victims to the hospital along the shortest path.
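Behind any such navigation function lies a shortest-path computation. A minimal sketch using Dijkstra's algorithm on a small road graph (the node names and distances below are invented for illustration; the app itself delegates routing to the Google Maps API):

```python
import heapq

def shortest_path(graph, start, targets):
    """Dijkstra's algorithm: nearest node in `targets` and its distance."""
    dist = {start: 0}
    pq = [(0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u in targets:          # first target popped is the nearest one
            return u, d
        if d > dist.get(u, float("inf")):
            continue              # stale queue entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return None, float("inf")

# Hypothetical road network; edge weights are travel distances in km.
roads = {
    "accident": [("A", 2), ("B", 5)],
    "A": [("hospital1", 4)],
    "B": [("hospital2", 1)],
}
print(shortest_path(roads, "accident", {"hospital1", "hospital2"}))
```

Because Dijkstra settles nodes in order of increasing distance, stopping at the first settled hospital simultaneously finds the nearest hospital and the shortest path to it.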
Abstract: The emergence of the Internet has brought about a
revolution in information storage and retrieval. As most of the
data on the web is unstructured and contains a mix of text,
video, audio etc., there is a need to mine information to cater to
the specific needs of users without loss of important
hidden information. Thus developing user-friendly and
automated tools for providing relevant information quickly
becomes a major challenge in web mining research. Most of
the existing web mining algorithms have concentrated on
finding frequent patterns while neglecting the less frequent
ones that are likely to contain outlying data such as noise,
irrelevant and redundant data. This paper mainly focuses on
the Signed approach and full-word matching on an organized
domain dictionary for mining web content outliers. The
Signed approach yields the relevant web documents as well as the
outlying web documents. As the dictionary is organized based
on the number of characters in a word, searching and retrieval
of documents take less time and less space.
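Organizing the dictionary by word length can be sketched as follows (the domain terms are hypothetical): full-word matching then only inspects candidate words of exactly the token's length, rather than scanning the whole dictionary.

```python
from collections import defaultdict

def build_length_index(words):
    """Organize a domain dictionary by word length."""
    index = defaultdict(set)
    for w in words:
        index[len(w)].add(w)
    return index

def full_word_match(index, token):
    # only words of exactly the same length need to be inspected
    return token in index[len(token)]

dictionary = ["gene", "genome", "protein", "cell"]  # hypothetical terms
idx = build_length_index(dictionary)
print(full_word_match(idx, "genome"), full_word_match(idx, "genomes"))
# → True False
```

With sets per length bucket, each lookup is expected O(1), and a token whose length has no bucket is rejected without any comparison at all.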
Abstract: Artificial Immune Systems have been applied as heuristic
algorithms for decades. Nevertheless, many of these applications
took advantage of the benefits of the algorithm but seldom proposed
approaches for enhancing its efficiency. In this paper, a
Self-evolving Artificial Immune System is proposed by developing
the T and B cells of the immune system and building a self-evolving
mechanism for the complexities of different problems. This
research focuses on enhancing the efficiency of clonal selection,
which is responsible for producing high-affinity antibodies to resist
invading antigens. T and B cells are the main mechanisms by which
clonal selection produces different combinations of antibodies.
Therefore, the development of the T and B cells will influence the
efficiency of clonal selection in searching for better solutions.
Furthermore, for better cooperation between the two cells, a
co-evolutionary strategy is applied to coordinate them for more
effective production of antibodies. This work finally adopts flow-shop
scheduling instances from the OR-Library to validate the proposed
algorithm.
Abstract: Music segmentation is a key issue in music information
retrieval (MIR) as it provides an insight into the
internal structure of a composition. Structural information about
a composition can improve several tasks related to MIR such
as searching and browsing large music collections, visualizing
musical structure, lyric alignment, and music summarization.
The authors of this paper present the MTSSM framework, a two-layer
framework for the multi-track segmentation of symbolic
music. The strength of this framework lies in the combination of
existing methods for local track segmentation and the application
of global structure information spanning multiple tracks.
The first layer of the MTSSM uses various string matching
techniques to detect the best candidate segmentations for each
track of a multi-track composition independently. The second
layer combines all single-track results and determines the best
segmentation for each track with respect to the global structure of
the composition.
Abstract: With the explosive increase in information published
on the Web, researchers have to filter information when searching for
conference-related information. To make it easier for users to search
for related information, this paper uses Topic Maps and social
information to implement an ontology, since ontologies can provide the
formalisms and knowledge structuring for the comprehensive and
transportable machine understanding that digital information requires.
Besides enhancing information in Topic Maps, this paper proposes a
method of constructing research Topic Maps that considers social
information. First, conference data are extracted from the web. Then
conference topics and the relationships between them are extracted
through the proposed method. Finally, the results are visualized for
users to search and browse. This paper uses an ontology, containing an
abundant knowledge hierarchy structure, to help researchers obtain
useful search results. However, most previous ontology construction
methods did not take “people” into account. So this paper also
analyzes the social information that helps researchers find
possibilities of cooperation/combination as well as associations
between research topics, and tries to offer better results.
Abstract: Continuous measurements and multivariate methods are applied in researching the effects of energy consumption on indoor air quality (IAQ) in a Finnish one-family house. The measured data used in this study were collected continuously in a house in Kuopio, Eastern Finland, over a fourteen-month period. The consumption parameters measured were the consumption of district heat, electricity and water. The indoor parameters gathered were temperature, relative humidity (RH), the concentrations of carbon dioxide (CO2) and carbon monoxide (CO), and differential air pressure. In this study, the self-organizing map (SOM) and Sammon's mapping were applied to resolve the effects of energy consumption on indoor air quality. The SOM proved to be a suitable method, as it summarizes multivariable dependencies into an easily observable two-dimensional map. In addition, Sammon's mapping was used to cluster the pre-processed data to find similarities among the variables, expressing distances and groups in the data. The methods used were able to distinguish 7 different clusters characterizing indoor air quality and energy efficiency in the study house. The results indicate that the cost, in euros, of heating and electricity varies according to the differential pressure, the concentration of carbon dioxide, the temperature and the season.
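A minimal SOM sketch (the toy two-dimensional data and the 2x2 grid below are hypothetical, not the study's configuration) shows the core mechanism: each input sample pulls its best-matching unit and that unit's grid neighbors toward it, with decaying learning rate and neighborhood width.

```python
import math
import random

def train_som(data, rows=2, cols=2, iters=200, seed=0):
    """Minimal SOM: a rows x cols grid of weight vectors fitted to data."""
    rng = random.Random(seed)
    dim = len(data[0])
    w = {(r, c): [rng.random() for _ in range(dim)]
         for r in range(rows) for c in range(cols)}
    for t in range(iters):
        lr = 0.5 * (1 - t / iters)            # decaying learning rate
        sigma = 1.0 * (1 - t / iters) + 0.1   # decaying neighborhood width
        x = rng.choice(data)
        bmu = min(w, key=lambda k: sum((a - b) ** 2
                                       for a, b in zip(w[k], x)))
        for k in w:                           # pull neighbors toward x
            d2 = (k[0] - bmu[0]) ** 2 + (k[1] - bmu[1]) ** 2
            h = math.exp(-d2 / (2 * sigma ** 2))
            w[k] = [a + lr * h * (b - a) for a, b in zip(w[k], x)]
    return w

# Two artificial clusters standing in for scaled indoor-climate readings:
data = [[0.1, 0.2], [0.15, 0.1], [0.9, 0.8], [0.85, 0.95]]
som = train_som(data)
bmu = lambda x: min(som, key=lambda k: sum((a - b) ** 2
                                           for a, b in zip(som[k], x)))
print(bmu([0.1, 0.15]), bmu([0.9, 0.9]))
```

After training, nearby inputs map to the same or adjacent grid units, which is the property that lets the SOM summarize multivariable dependencies on a two-dimensional map.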
Abstract: In terms of total online audience, newspapers are the most successful form of online content to date. The online audience for newspapers continues to demand higher-quality services, including personalized news services. News providers should be able to offer appropriate content to suitable users. In this paper, a news article recommender system is suggested based on a user's preferences when he or she visits an Internet news site and reads the published articles. This system helps raise the user's satisfaction and increase customer loyalty toward the content provider.
Abstract: Similar-document search and document management are
important subjects in text mining. One of the most important parts of
similar-document research is the process of classifying or clustering
the documents. In this study, a similar-document search approach that
addresses the case of documents belonging to multiple categories (the
multiple-categories problem) has been carried out. The proposed
method, based on Fuzzy Similarity Classification (FSC), has been
compared with the Rocchio algorithm and the naive Bayes method, which
are widely used in text mining. Empirical results show that the
proposed method is quite successful and can be applied effectively.
For the second stage, a multiple-categories vector method, based on
information about how frequently categories are seen together, has
been used. Empirical results show that performance almost doubles when
the proposed method is compared with the classical approach.
Abstract: In this paper, an approach to reduce the computation steps required by fast neural networks for the searching process is presented. The principle of the divide-and-conquer strategy is applied through image decomposition. Each image is divided into small sub-images, and then each one is tested separately using a fast neural network. The operation of fast neural networks is based on applying cross correlation in the frequency domain between the input image and the weights of the hidden neurons. Compared to conventional and fast neural networks, experimental results show that a speed-up ratio is achieved when applying this technique to locate human faces automatically in cluttered scenes. Furthermore, faster face detection is obtained by using parallel processing techniques to test the resulting sub-images at the same time using the same number of fast neural networks. In contrast to using only fast neural networks, the speed-up ratio increases with the size of the input image when using fast neural networks and image decomposition.
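The frequency-domain trick underlying fast neural networks can be illustrated in one dimension by verifying that circular cross-correlation computed via the DFT matches the direct computation (a small pure-Python sketch, not the paper's implementation; in practice the FFT makes the frequency-domain route much faster for large inputs):

```python
import cmath

def dft(x, inverse=False):
    """Naive O(N^2) DFT, sufficient for a small demonstration."""
    N, s = len(x), (1 if inverse else -1)
    out = [sum(x[m] * cmath.exp(s * 2j * cmath.pi * m * n / N)
               for m in range(N))
           for n in range(N)]
    return [v / N for v in out] if inverse else out

def xcorr_freq(a, b):
    """Circular cross-correlation computed in the frequency domain:
    r = IDFT(conj(DFT(a)) * DFT(b))."""
    A, B = dft(a), dft(b)
    prod = [Ak.conjugate() * Bk for Ak, Bk in zip(A, B)]
    return [v.real for v in dft(prod, inverse=True)]

def xcorr_direct(a, b):
    """Direct circular cross-correlation for comparison."""
    N = len(a)
    return [sum(a[m] * b[(m + n) % N] for m in range(N)) for n in range(N)]

a = [1.0, 2.0, 0.0, -1.0]
b = [0.5, -1.0, 2.0, 1.0]
print(all(abs(x - y) < 1e-9
          for x, y in zip(xcorr_freq(a, b), xcorr_direct(a, b))))  # → True
```

Because the correlation theorem turns the sliding-window product into one element-wise multiplication of spectra, testing a neuron's weights against every image position costs a few transforms instead of one dot product per position.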
Abstract: In the context of spectrum surveillance, a new method
to recover the code of a spread spectrum signal is presented, where
the receiver has no knowledge of the transmitter's spreading sequence.
In our previous paper, we used a genetic algorithm (GA) to recover the
spreading code. Although genetic algorithms (GAs) are well known
for their robustness in solving complex optimization problems,
increasing the length of the code often leads to an unacceptably slow
convergence speed. To solve this problem we introduce Particle Swarm
Optimization (PSO) into code estimation in spread spectrum
communication systems. In the search process for code estimation, the
PSO algorithm has the merits of rapid convergence to the global
optimum without being trapped in local suboptima, and good robustness
to noise. In this paper we describe how to implement PSO as a
component of a search algorithm for code estimation. Swarm
intelligence boasts a number of advantages due to the use of mobile
agents, among them scalability, fault tolerance, adaptation, speed,
modularity, autonomy and parallelism. These properties make swarm
intelligence very attractive for spread spectrum code estimation. They
also make swarm intelligence suitable for a variety of other kinds of
channels. Our results compare swarm-based algorithms with genetic
algorithms, and also show the PSO algorithm's performance in the code
estimation process.
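A minimal PSO sketch illustrates the update rule (global-best topology; the parameter values are typical defaults, and the sphere function stands in for the code-estimation fitness, which the paper does not specify here):

```python
import random

def pso(f, dim, n_particles=20, iters=100, bounds=(-5.0, 5.0), seed=1):
    """Minimal particle swarm optimization (gbest topology) minimizing f."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    w, c1, c2 = 0.7, 1.5, 1.5  # inertia and acceleration coefficients
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:           # update personal best
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:          # update global best
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

sphere = lambda x: sum(v * v for v in x)
best, best_val = pso(sphere, dim=4)
print(best_val < 1.0)
```

Each particle is pulled toward both its own best position and the swarm's best, which is the mechanism behind the rapid convergence mentioned above; for code estimation the sphere objective would be replaced by a correlation-based fitness over candidate codes.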
Abstract: The point quadtree is considered one of the most common
data organizations for dealing with spatial data and can be used to
increase the efficiency of searching for point features. As the
efficiency of the search depends on the height of the tree, arbitrary
insertion of the point features may make the tree unbalanced and lead
to longer search times. This paper attempts to design an algorithm to
build a nearly balanced quadtree. A point pattern analysis technique
has been applied for this purpose, which shows a significant
enhancement of the performance; the results are also included in the
paper for the sake of completeness.
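The effect of insertion order on point-quadtree height can be sketched as follows. The "balanced" ordering here (inserting a median-like point first) only illustrates the idea; it is not the paper's point-pattern-analysis algorithm.

```python
class QuadNode:
    """Point quadtree node: each point splits the plane into 4 quadrants."""
    def __init__(self, x, y):
        self.x, self.y = x, y
        self.child = {}  # quadrant label ('NE','NW','SE','SW') -> QuadNode

    def insert(self, x, y):
        q = ("N" if y >= self.y else "S") + ("E" if x >= self.x else "W")
        if q in self.child:
            self.child[q].insert(x, y)
        else:
            self.child[q] = QuadNode(x, y)

def height(node):
    return 1 + max((height(c) for c in node.child.values()), default=0)

pts = [(1, 1), (2, 2), (3, 3), (4, 4), (5, 5)]  # worst case: a diagonal

skewed = QuadNode(*pts[0])            # arbitrary (sorted) insertion order
for p in pts[1:]:
    skewed.insert(*p)

balanced = QuadNode(*pts[len(pts) // 2])  # insert the median point first
for p in pts[:2] + pts[3:]:
    balanced.insert(*p)

print(height(skewed), height(balanced))  # → 5 3
```

On this diagonal point pattern, sorted insertion degenerates the tree into a height-5 chain, while starting from the median point cuts the height to 3, which is exactly the kind of gain a balancing insertion order aims for.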
Abstract: Bioinformatics and cheminformatics are computer-based disciplines providing tools for the acquisition, storage, processing, analysis and integration of data, and for the development of potential applications of biological and chemical data. A chemical database is a database designed exclusively to store chemical information. NMRShiftDB is one of the main databases used to represent chemical structures in 2D or 3D. The SMILES format is one of many ways to write a chemical structure in a linear format. In this study we extracted antimicrobial structures in SMILES format from NMRShiftDB and stored them in our Local Data Warehouse with their corresponding information. Additionally, we developed a search tool that responds to a user's query using the JME Editor tool, which allows the user to draw or edit molecules and converts the drawn structure into SMILES format. We applied the Quick Search algorithm to search for antimicrobial structures in our Local Data Warehouse.
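The Quick Search (Sunday) algorithm shifts the search window using the character just past it. A minimal sketch applied to a SMILES string (the query fragment below is illustrative; the study's actual queries come from the JME Editor):

```python
def quick_search(text, pattern):
    """Sunday's Quick Search: all start indices of `pattern` in `text`."""
    n, m = len(text), len(pattern)
    # bad-character shift, keyed by the character just past the window;
    # later occurrences overwrite earlier ones, giving the smallest shift
    shift = {c: m - i for i, c in enumerate(pattern)}
    hits, i = [], 0
    while i <= n - m:
        if text[i:i + m] == pattern:
            hits.append(i)
        if i + m >= n:
            break
        i += shift.get(text[i + m], m + 1)  # char absent: skip whole window
    return hits

# Searching a hypothetical SMILES string for an aromatic-ring fragment:
print(quick_search("CC(=O)Oc1ccccc1C(=O)O", "c1ccccc1"))  # → [7]
```

Characters absent from the pattern let the window jump m + 1 positions at once, which is what makes Quick Search fast over long linear strings such as SMILES.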