Abstract: This paper outlines the development of a learning retrieval agent. The task of this agent is to extract knowledge from the Active Semantic Network in response to user requests. Based on a reinforcement learning approach, the agent learns to interpret the user's intention. In particular, the learning algorithm focuses on the retrieval of complex long-distance relations. By increasing its learnt knowledge with every request-result-evaluation sequence, the agent enhances its capability of finding the intended information.
Abstract: Aerial and satellite images are information-rich. They are also complex to analyze. Many GIS features depend on fast and reliable extraction of roads and intersections. In this paper, we study efficient and reliable automatic extraction algorithms to address some difficult issues that are commonly seen in high-resolution aerial and satellite images yet not well addressed in existing solutions, such as blurring, broken or missing road boundaries, lack of road profiles, heavy shadows, and interfering surrounding objects. The new scheme is based on a new method, the reference circle, which properly identifies the pixels that belong to the same road and uses this information to recover the whole road network. This method is invariant to the shape and direction of roads and tolerates heavy noise and disturbances. Road extraction based on reference circles is much more noise-tolerant and flexible than previous edge-detection-based algorithms. The scheme is able to extract roads reliably from images with complex contents and heavy obstructions, such as the high-resolution aerial/satellite images available from Google Maps.
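The abstract does not define the reference circle construction precisely; one plausible reading is the largest circle inscribed in the candidate road region at each pixel, which a distance transform gives directly. The Python sketch below follows that reading; the binary mask, thresholds and grouping rule are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np
from scipy import ndimage

def reference_circle_radii(road_mask):
    """Radius of the largest circle centred at each pixel that fits
    inside the candidate road region (a binary mask). This is one
    plausible reading of the 'reference circle', not the paper's."""
    return ndimage.distance_transform_edt(road_mask)

def group_road_pixels(road_mask, min_r=2.0, rel_tol=0.5):
    """Keep pixels whose inscribed-circle radius is consistent with
    the typical road half-width, then label connected components as
    road segments. Thresholds here are arbitrary placeholders."""
    radii = reference_circle_radii(road_mask)
    med = np.median(radii[road_mask > 0])
    keep = (radii >= min_r) & (np.abs(radii - med) <= rel_tol * med)
    labels, n = ndimage.label(keep)
    return labels, n
```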
Abstract: Using a plug flow model in conjunction with experimental solute concentration profiles, the overall volumetric mass transfer coefficient based on the continuous phase (Koca) in a packed liquid-liquid extraction column has been optimized. Twelve experiments were performed using the standard water/acetic acid/toluene system in a column of 6 cm diameter and 120 cm height. Through consideration of the influencing parameters, we correlated the dimensionless parameters in terms of an overall Sherwood number, which has an acceptable average error of about 15.8%.
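For context, the plug flow balance conventionally used to back out Koca from measured axial concentration profiles, together with the usual overall Sherwood number definition, can be written as follows; the symbols (continuous-phase velocity u_c, Sauter mean drop diameter d_32, continuous-phase diffusivity D_c) are standard conventions, not quantities taken from the abstract:

```latex
% Conventional plug-flow balance and Sherwood number: a sketch of the
% standard formulation, not the paper's exact correlation.
\begin{align}
  u_c \frac{\mathrm{d}C_c}{\mathrm{d}z} &= K_{oc}a\,\bigl(C_c^{*} - C_c\bigr), \\
  \mathrm{Sh}_{oc} &= \frac{K_{oc}\, d_{32}}{D_c}
\end{align}
```

Here \(C_c^{*}\) is the continuous-phase concentration in equilibrium with the dispersed phase, so fitting the measured profile \(C_c(z)\) yields \(K_{oc}a\).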
Abstract: XML files contain data in a well-formatted manner. Studying the format or semantics of the grammar helps in fast retrieval of the data. There are many algorithms that describe how to search for data in XML files. A number of approaches use data structures or relate to the contents of the document. In these cases the user must know the structure of the document, while information retrieval techniques using NLP relate to the content of the document. Hence the results may be irrelevant or unsuccessful and the search may take more time. This paper presents fast XML retrieval techniques using a new indexing technique and the concept of RXML. When indexing an XML document, the system takes into account both the document content and the document structure and assigns a value to each tag in the file. To query the system, a user is not constrained to a fixed query format.
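As an illustration of indexing both structure and content in one pass, here is a minimal Python sketch; the tag-path index and the free-form keyword query are assumptions for illustration, since the abstract does not specify the RXML format or the tag-value assignment.

```python
import xml.etree.ElementTree as ET
from collections import defaultdict

def build_tag_index(path):
    """Walk an XML file once and index both structure (tag paths)
    and content (text values) so later queries avoid a full re-parse."""
    index = defaultdict(list)          # tag path -> list of text values
    tree = ET.parse(path)

    def walk(elem, prefix):
        tag_path = f"{prefix}/{elem.tag}"
        if elem.text and elem.text.strip():
            index[tag_path].append(elem.text.strip())
        for child in elem:
            walk(child, tag_path)

    walk(tree.getroot(), "")
    return index

def query(index, keyword):
    """A free-form query is just a substring match on indexed paths."""
    return {p: v for p, v in index.items() if keyword.lower() in p.lower()}
```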
Abstract: Extraction of lactic acid by emulsion liquid membrane (ELM) technology using n-trioctylamine (TOA) in n-heptane as the carrier within the organic membrane, along with sodium carbonate as the acceptor phase, was optimized using response surface methodology (RSM). A three-level Box-Behnken design was employed for experimental design, analysis of the results, and depiction of the combined effect of five independent variables, viz. lactic acid concentration in the aqueous phase (cl), sodium carbonate concentration in the stripping phase (cs), carrier concentration in the membrane phase (ψ), treat ratio, and batch extraction time (τ), with equal volumes of organic and external aqueous phase, on the lactic acid extraction efficiency. The maximum lactic acid extraction efficiency (ηext) of 98.21% from the aqueous phase in a batch reactor using ELM was found at the optimized values of the test variables cl, cs, ψ, treat ratio, and τ of 0.06 [M], 0.18 [M], 4.72 (%, v/v), 1.98 (v/v) and 13.36 min, respectively.
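The second-order model underlying a Box-Behnken design can be fitted generically by least squares. The following Python sketch builds the intercept, linear, squared and two-factor interaction terms for any number of factors; it illustrates the RSM fit only, not the paper's actual design matrix or data.

```python
import numpy as np
from itertools import combinations

def quadratic_design_matrix(X):
    """Second-order RSM model terms: intercept, linear, squared and
    two-factor interactions, as fitted on Box-Behnken designs.
    X is an (n_runs, n_factors) array of coded factor levels."""
    n, k = X.shape
    cols = [np.ones(n)]
    cols += [X[:, i] for i in range(k)]
    cols += [X[:, i] ** 2 for i in range(k)]
    cols += [X[:, i] * X[:, j] for i, j in combinations(range(k), 2)]
    return np.column_stack(cols)

def fit_rsm(X, y):
    """Least-squares coefficients of the response surface; the fitted
    surface can then be maximized to locate the optimum settings."""
    A = quadratic_design_matrix(X)
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return beta
```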
Abstract: Hand gesture recognition is an active area of research in the vision community, mainly for the purposes of sign language recognition and human-computer interaction. In this paper, we propose a system to recognize alphabet characters (A-Z) and numbers (0-9) in real time from stereo color image sequences using Hidden Markov Models (HMMs). Our system is based on three main stages: automatic segmentation and preprocessing of the hand regions, feature extraction, and classification. In the automatic segmentation and preprocessing stage, color and a 3D depth map are used to detect the hands, and the hand trajectory is obtained in a further step using the mean-shift algorithm and a Kalman filter. In the feature extraction stage, 3D combined features of location, orientation and velocity with respect to Cartesian coordinates are used, and k-means clustering is then employed to build the HMM codewords. In the final, classification stage, the Baum-Welch algorithm is used to fully train the HMM parameters. The gestures of alphabets and numbers are recognized using a left-right banded model in conjunction with the Viterbi algorithm. Experimental results demonstrate that our system can successfully recognize hand gestures with a 98.33% recognition rate.
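Two of these stages lend themselves to a compact sketch: vector quantization of the 3D features into HMM codewords, and Viterbi decoding over the resulting discrete symbols. The Python below is a minimal illustration; the number of symbols and the banded structure of the transition matrix are assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.cluster import KMeans

def to_codewords(features, n_symbols=32, seed=0):
    """Quantize continuous (location, orientation, velocity) feature
    vectors into discrete codewords before HMM training/decoding."""
    km = KMeans(n_clusters=n_symbols, n_init=10, random_state=seed)
    return km.fit_predict(features), km

def viterbi(obs, log_pi, log_A, log_B):
    """Log-domain Viterbi for a discrete HMM. A left-right banded
    model is obtained by setting transitions outside the band to
    -inf in log_A. Returns the best state path and its log score."""
    T, N = len(obs), len(log_pi)
    delta = log_pi + log_B[:, obs[0]]
    psi = np.zeros((T, N), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + log_A          # scores[from, to]
        psi[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + log_B[:, obs[t]]
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(psi[t][path[-1]]))
    return path[::-1], float(delta.max())
```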
Abstract: An automatic method for the extraction of feature points for face-based applications is proposed. The system is based upon volumetric feature descriptors, which in this paper have been extended to incorporate scale space. The method is robust to noise and has the ability to extract local and holistic features simultaneously from faces stored in a database. Extracted features are stable over a range of faces, with results indicating that, in terms of intra-ID variability, the technique has the ability to outperform manual landmarking.
Abstract: In this study, forty Thai medicinal plants were screened for antibacterial activity against Campylobacter jejuni. Crude 95% ethanolic extracts of each plant were prepared. Antibacterial activity was investigated by the disc diffusion assay, and MICs and MBCs were determined by broth microdilution. The antibacterial screening showed that five plants have activity against C. jejuni: Adenanthera pavonina L., Moringa oleifera Lam., Annona squamosa L., Hibiscus sabdariffa L. and Eupatorium odoratum L. The extracts of A. pavonina L. and A. squamosa L. showed outstanding activity against C. jejuni, inhibiting growth at 62.5-125 and 250-500 μg/mL, respectively. The MBCs of the two extracts were just 4-fold higher than their MICs against C. jejuni, suggesting the extracts are bactericidal against this species. These results indicate that A. pavonina and A. squamosa could potentially be used in modern applications aimed at treatment or prevention of foodborne disease caused by C. jejuni.
Abstract: Today, modern simulation solutions in the wind turbine industry have achieved a high degree of complexity and detail in their results. Limitations arise when model results must be validated against measurements. For model validation it is of special interest to identify mode frequencies and to differentiate them from the various excitations. A wind turbine is a complex device, and measurements of any part of the assembly show a lot of noise. Input excitations are difficult or even impossible to measure due to the stochastic nature of the environment. Traditional techniques for frequency analysis or feature extraction are widely used to analyze wind turbine sensor signals, but they have several limitations, especially for non-stationary signals (events). A new technique based on autoregressive analysis is introduced here for this specific application, and a comparison and examples related to different events in wind turbine operation are presented.
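The abstract does not detail the autoregressive technique, but a generic version of the idea, fitting an AR model by Yule-Walker and reading candidate mode frequencies from the pole angles, can be sketched in a few lines of Python; the model order, pole-radius cutoff and autocorrelation estimator are illustrative choices.

```python
import numpy as np

def ar_mode_frequencies(x, order, fs):
    """Fit an AR model to a sensor signal via the Yule-Walker
    equations and return candidate mode frequencies (Hz) from the
    angles of lightly damped AR poles."""
    x = np.asarray(x, float) - np.mean(x)
    # Biased autocorrelation estimates r[0..order]
    r = np.array([np.dot(x[:len(x) - k], x[k:]) / len(x)
                  for k in range(order + 1)])
    # Toeplitz system R a = r  ->  AR coefficients a
    R = np.array([[r[abs(i - j)] for j in range(order)]
                  for i in range(order)])
    a = np.linalg.solve(R, r[1:order + 1])
    # Poles are roots of 1 - a1 z^-1 - ... - ap z^-p
    poles = np.roots(np.concatenate(([1.0], -a)))
    freqs = np.angle(poles) * fs / (2 * np.pi)
    keep = (freqs > 0) & (np.abs(poles) > 0.9)   # lightly damped only
    return np.sort(freqs[keep])
```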
Abstract: Support vector machines (SVMs) have shown superior performance compared to other machine learning techniques, especially in classification problems. Yet one limitation of SVMs is the lack of an explanation capability, which is crucial in some applications, e.g. in the medical and security domains. In this paper, a novel approach for eclectic rule extraction from support vector machines is presented. This approach utilizes the knowledge acquired by the SVM and represented in its support vectors, as well as the parameters associated with them. The approach includes three stages: training, propositional rule extraction, and rule quality evaluation. Results from four different experiments have demonstrated the value of the approach for extracting comprehensible rules of high accuracy and fidelity.
Abstract: This paper proposes a three-phase four-wire current-controlled Voltage Source Inverter (CC-VSI) for both power quality improvement and PV energy extraction. For power quality improvement, the CC-VSI works as a grid current-controlling shunt active power filter to compensate for the harmonic and reactive power of loads. The PV array is then coupled to the DC bus of the CC-VSI and supplies active power to the grid. The MPPT controller employs the particle swarm optimization (PSO) technique. The output of the MPPT controller is a DC voltage that determines the DC-bus voltage according to the PV maximum power. The PSO method is simple and effective, especially for a partially shaded PV array. Computer simulation results show that the grid currents are sinusoidal and in phase with the grid voltages, while the PV maximum active power is delivered to the loads.
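A minimal PSO-based MPPT loop is easy to sketch; in the Python below the P-V curve is a synthetic stand-in with two peaks (mimicking partial shading), and all swarm parameters are illustrative rather than taken from the paper.

```python
import numpy as np

def pv_power(v):
    """Stand-in P-V curve with two peaks; a partially shaded array has
    several local maxima, which is why PSO beats hill-climbing MPPT."""
    return np.maximum(0.0, 4.0 * v * np.exp(-((v - 30) / 12) ** 2)
                           + 2.5 * v * np.exp(-((v - 60) / 8) ** 2))

def pso_mppt(f, v_min=0.0, v_max=80.0, n=10, iters=40,
             w=0.6, c1=1.5, c2=1.5):
    """Particles are candidate DC-bus voltage setpoints; the global
    best converges to the voltage of the global power peak."""
    rng = np.random.default_rng(0)
    v = rng.uniform(v_min, v_max, n)
    vel = np.zeros(n)
    pbest, pval = v.copy(), f(v)
    g = pbest[np.argmax(pval)]
    for _ in range(iters):
        r1, r2 = rng.random(n), rng.random(n)
        vel = w * vel + c1 * r1 * (pbest - v) + c2 * r2 * (g - v)
        v = np.clip(v + vel, v_min, v_max)
        val = f(v)
        better = val > pval
        pbest[better], pval[better] = v[better], val[better]
        g = pbest[np.argmax(pval)]
    return g  # voltage reference handed to the DC-bus controller

v_ref = pso_mppt(pv_power)  # should land near the global peak
```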
Abstract: This article presents results obtained using a parametric approach and the Wavelet Transform to analyse signals emitted by the sperm whale. The extraction of the intrinsic characteristics of these unique signals emitted by marine mammals remains a difficult exercise for various reasons: firstly, they are non-stationary signals, and secondly, they are obstructed by interfering background noise. In this article, we compare the advantages and disadvantages of both methods: autoregressive models and the Wavelet Transform. These approaches serve as an alternative to the commonly used estimators based on the Fourier Transform, for which the hypotheses necessary for its application are, in certain cases, not sufficiently proven. These modern approaches provide effective results, particularly for the periodic tracking of the signal's characteristics, notably when the signal-to-noise ratio negatively affects signal tracking. Our objectives are twofold. Our first goal is to identify the animal through its acoustic signature, including recognition of the marine mammal species and ultimately of the individual animal within the species. The second is much more ambitious and directly involves the intervention of cetologists to study the sounds emitted by marine mammals in an effort to characterize their behaviour. We are working on an approach based on recordings of marine mammal signals, with the findings derived from the Wavelet Transform, and this article explores the reasons for using this approach. In addition, thanks to new processors, these algorithms, once computationally heavy, can now be integrated in a real-time system.
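On the wavelet side, a standard way to handle such noisy transients is wavelet-threshold denoising, sketched below with PyWavelets; the wavelet choice, decomposition level and universal threshold are conventional defaults, not the article's settings.

```python
import numpy as np
import pywt

def denoise_click(signal, wavelet="db4", level=5):
    """Wavelet-threshold denoising of a transient click: decompose,
    soft-threshold the detail coefficients with the universal
    threshold, reconstruct."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # Noise level estimated from the finest detail coefficients
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = sigma * np.sqrt(2 * np.log(len(signal)))
    coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[:len(signal)]
```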
Abstract: Supercritical carbon dioxide (SC-CO2) was used as a solvent to extract oil from wheat bran. Extractions were carried out in a semi-batch process at temperatures ranging from 40 to 60 °C and pressures ranging from 10 to 30 MPa, with a carbon dioxide (CO2) flow rate of 26.81 g/min. The oil obtained from wheat bran at different extraction conditions was quantitatively measured to investigate the solubility of oil in SC-CO2. The solubility of wheat bran oil was found to be enhanced at high temperature and pressure. The composition of fatty acids in wheat bran oil was measured by gas chromatography (GC). Linoleic, palmitic, oleic and γ-linolenic acids were the major fatty acids of wheat bran oil. Tocopherol contents in the oil were analyzed by high performance liquid chromatography (HPLC). The highest amounts of phenolics and tocopherols (α and β) were found at a temperature of 60 °C and a pressure of 30 MPa.
Abstract: Iris localization is a very important step in biometric identification systems. The identification process is usually implemented in three stages: iris localization, feature extraction, and finally pattern matching. The accuracy of iris localization, as the first step, affects all subsequent stages, which shows its importance in an iris-based biometric system. In this paper, we take Daugman's iris localization method as a standard, propose a new method in this field, and then analyze and compare the results of both on a standard set of iris images. The proposed method is based on the detection of the circular edge of the iris, improved by fuzzy circles and surface energy difference contexts. The method is easy to implement and, compared to other methods, offers rather high accuracy and speed. Test results show that the accuracy of our proposed method is comparable to that of the Daugman method, while its computation speed is 10 times faster.
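Since Daugman's method serves as the baseline, a simplified version of his integro-differential operator, searching for the centre and radius that maximize the radial derivative of the circular contour integral, is sketched below in Python; the full operator also applies Gaussian smoothing to the radial derivative, which is omitted here, and the candidate grids are assumptions.

```python
import numpy as np

def circle_mean(img, cx, cy, r, n=128):
    """Mean grey level along a circle: the contour integral in
    Daugman's integro-differential operator."""
    t = np.linspace(0, 2 * np.pi, n, endpoint=False)
    xs = np.clip((cx + r * np.cos(t)).astype(int), 0, img.shape[1] - 1)
    ys = np.clip((cy + r * np.sin(t)).astype(int), 0, img.shape[0] - 1)
    return img[ys, xs].mean()

def daugman_locate(img, centers, radii):
    """Pick the (cx, cy, r) that maximises |d/dr| of the circular
    mean, i.e. the sharpest circular edge (the iris boundary)."""
    best, best_score = None, -np.inf
    for cx, cy in centers:
        means = np.array([circle_mean(img, cx, cy, r) for r in radii])
        grad = np.abs(np.diff(means))
        k = int(grad.argmax())
        if grad[k] > best_score:
            best, best_score = (cx, cy, radii[k]), float(grad[k])
    return best
```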
Abstract: Modernizing legacy applications is a key issue facing IT managers today, because there is enormous pressure on organizations to change the way they run their business to meet new requirements. The importance of software maintenance and reengineering is ever increasing. Understanding the architecture of existing legacy applications is the most critical issue for maintenance and reengineering. Artifact recovery can be facilitated with different recovery approaches, methods and tools. Existing methods provide static and dynamic sets of techniques for extracting architectural information, but they are not suitable for all users in different domains. This paper presents a simple and lightweight pattern extraction technique to extract different artifacts from legacy systems using regular-expression pattern specifications with multiple-language support. We used our custom-built tool DRT to recover artifacts from an existing system at different levels of abstraction. To evaluate our approach, a case study is conducted.
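A regular-expression artifact extractor of this kind is straightforward to sketch in Python; the patterns below are hypothetical examples in the spirit of the approach, since DRT's actual pattern specifications are not given in the abstract.

```python
import re

# Hypothetical pattern specifications for a few languages; DRT's real
# patterns are not publicly specified in the abstract.
PATTERNS = {
    "c_function": re.compile(r"^\s*\w[\w\s\*]+?(\w+)\s*\([^;]*\)\s*\{", re.M),
    "java_class": re.compile(r"\bclass\s+(\w+)"),
    "sql_table":  re.compile(r"\bCREATE\s+TABLE\s+(\w+)", re.I),
}

def extract_artifacts(source_text):
    """Run every language pattern over a legacy source file and
    collect the named artifacts it matches."""
    return {name: pat.findall(source_text) for name, pat in PATTERNS.items()}
```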
Abstract: Nowadays, the rapid development of multimedia and the internet allows for wide distribution of digital media data. It has become much easier to edit, modify and duplicate digital information. Besides that, digital documents are also easy to copy and distribute, so they face many threats. This is a significant security and privacy issue: with the large flood of information and the development of digital formats, it has become necessary to find appropriate protection because of the significance, accuracy and sensitivity of the information. Protection systems today can be classified into information hiding, information encryption, and combinations of hiding and encryption that increase information security. The strength of the science of information hiding lies in the non-existence of standard algorithms for hiding secret messages. There is also randomness in hiding methods, such as combining several media (covers) with different methods to pass a secret message. In addition, there are no formal methods for discovering hidden data. For these reasons, the task of this research is difficult. In this paper, a new information hiding system is presented. The proposed system aims to hide information (a data file) in an executable file (EXE) and to detect the hidden file; we present an implementation of a steganography system which embeds information in an executable file. The system tries to solve the problem of the size of the cover file and to make the result undetectable by anti-virus software. The system includes two main functions: the first is hiding information in a Portable Executable file (EXE) through four processes (specify the cover file, specify the information file, encrypt the information, and hide the information); the second is extracting the hidden information through three processes (specify the stego file, extract the information, and decrypt the information). The system achieves its main goals: the size of the cover file is independent of the size of the hidden information, and the resulting file does not raise any conflict with anti-virus software.
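One simple way to keep an EXE runnable while carrying a payload is to append the encrypted data after the PE image with a marker and length field. The Python sketch below illustrates that overlay idea; the toy XOR cipher stands in for the paper's unspecified encryption, and the marker and on-disk format are invented for illustration.

```python
MAGIC = b"STEG"  # invented marker; the paper's on-disk format is unknown

def hide(cover_exe, data_file, out_file, key=0x5A):
    """Append an XOR-'encrypted' payload after the PE image; the EXE
    still runs, and payload size is independent of cover size."""
    cover = open(cover_exe, "rb").read()
    payload = bytes(b ^ key for b in open(data_file, "rb").read())
    with open(out_file, "wb") as f:
        f.write(cover + MAGIC + len(payload).to_bytes(8, "little") + payload)

def extract(stego_file, out_file, key=0x5A):
    """Locate the marker, read the length field, decrypt the payload."""
    blob = open(stego_file, "rb").read()
    i = blob.rfind(MAGIC)
    n = int.from_bytes(blob[i + 4:i + 12], "little")
    payload = bytes(b ^ key for b in blob[i + 12:i + 12 + n])
    open(out_file, "wb").write(payload)
```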
Abstract: On-line handwritten scripts are usually represented as pen-tip traces from pen-down to pen-up positions. The time evolution of the pen coordinates is also considered along with the trajectory information. However, the data obtained needs a lot of preprocessing, including filtering, smoothing, slant removal and size normalization, before the recognition process. Instead of such lengthy preprocessing, this paper presents a simple approach to extract the useful character information. This work evaluates the use of the counter-propagation neural network (CPN) and presents the feature extraction mechanism in full detail for on-line handwriting recognition. The obtained recognition rates were 60% to 94% using the CPN for different sets of character samples. This paper also describes a performance study in which a recognition mechanism with multiple thresholds is evaluated for the counter-propagation architecture. The results indicate that applying multiple thresholds has a significant effect on the recognition mechanism. The method is applicable to off-line character recognition as well. The technique was tested on upper-case English alphabets in a number of different styles from different people.
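A forward-only counter-propagation network is small enough to sketch directly: a Kohonen winner-take-all layer followed by a Grossberg outstar layer. The Python below is a minimal textbook version; the learning rates, initialization and single-winner rule are conventional choices, not the paper's configuration.

```python
import numpy as np

class CounterPropagationNet:
    """Minimal forward-only CPN: Kohonen (winner-take-all) layer
    followed by a Grossberg outstar layer."""
    def __init__(self, n_in, n_hidden, n_out, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(size=(n_hidden, n_in))   # Kohonen weights
        self.V = np.zeros((n_out, n_hidden))         # Grossberg weights

    def _winner(self, x):
        return int(np.argmin(np.linalg.norm(self.W - x, axis=1)))

    def fit(self, X, Y, epochs=20, lr_k=0.1, lr_g=0.1):
        for _ in range(epochs):
            for x, y in zip(X, Y):
                j = self._winner(x)
                self.W[j] += lr_k * (x - self.W[j])        # move winner
                self.V[:, j] += lr_g * (y - self.V[:, j])  # outstar update
        return self

    def predict(self, x):
        # Class scores; take argmax, or apply per-class thresholds to
        # reject uncertain characters as in the multiple-threshold study.
        return self.V[:, self._winner(x)]
```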
Abstract: As the world changes ever more rapidly, the demand for up-to-date information for resource management, environmental monitoring and planning is increasing exponentially. Integration of remote sensing with GIS technology will significantly improve our ability to address these concerns. This paper presents an alternative way of updating GIS applications using image processing and high-resolution images. We show a method of high-resolution image segmentation using graphs and morphological operations, where a preprocessing step (the watershed operation) is required. A morphological process is then applied using the opening and closing operations. After this segmentation we can extract significant cartographic elements such as urban areas, streets or green areas. The results of this segmentation and extraction are then used to update GIS applications. Some examples are shown using aerial photography.
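The watershed-plus-morphology pipeline can be illustrated with scikit-image; the marker thresholds and structuring-element sizes below are arbitrary placeholders, since the abstract does not give the graph construction or parameter values.

```python
import numpy as np
from scipy import ndimage
from skimage.filters import sobel
from skimage.morphology import opening, closing, disk
from skimage.segmentation import watershed

def segment(gray, low=0.2, high=0.8):
    """Watershed preprocessing followed by morphological opening and
    closing, mirroring the pipeline in the abstract.
    gray: float image scaled to [0, 1]."""
    gradient = sobel(gray)
    markers = np.zeros_like(gray, dtype=int)
    markers[gray < low] = 1          # e.g. dark streets / shadows
    markers[gray > high] = 2         # e.g. bright built-up areas
    labels = watershed(gradient, markers)
    mask = labels == 2
    mask = closing(opening(mask, disk(2)), disk(2))  # clean specks/gaps
    regions, n = ndimage.label(mask)
    return regions, n                # candidate cartographic elements
```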
Abstract: Information retrieval has the objective of studying models and building systems that allow a user to find the relevant documents adapted to his information need. Information search remains a difficult problem because of the difficulty of representing and processing natural language phenomena such as polysemy. Intentional structures promise to be a new paradigm for extending existing document structures and enhancing the different phases of document processing, such as creation, editing, search and retrieval. Recognizing the intentions of the authors of texts can reduce the scale of this problem. In this article, we present an intention recognition system based on a semi-automatic method for extracting intentional information from a corpus of text. This system is also able to update the ontology of intentions to enrich the knowledge base containing all possible intentions of a domain. The approach uses the construction of a semi-formal ontology, which is considered as the conceptualization of the intentional information contained in a text. An experiment on scientific publications in the field of computer science was conducted to validate this approach.
Abstract: In this paper we present a new method for over-height vehicle detection in low-headroom streets and highways using digital video processing. Its accuracy, its lower price compared to existing detectors such as laser radars, and its capability of providing extra information such as speed and height measurements make this method more reliable and efficient. In this algorithm, features are selected and tracked using the KLT algorithm. A blob extraction algorithm is also applied using background estimation and subtraction. The world coordinates of the features that lie inside the blobs are then estimated using a novel calibration method. Once the heights of the features are calculated, we apply a threshold to select over-height features and eliminate the others. The over-height features are segmented using association criteria, grouped using an undirected graph, and then tracked through sequential frames. The resulting groups correspond to over-height vehicles in the scene.
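The tracking front-end (background-subtraction blobs plus KLT feature tracks) can be sketched with OpenCV as below; the calibration and height-thresholding steps are omitted, and all detector parameters are illustrative.

```python
import cv2
import numpy as np

# Background model for blob extraction; parameters are placeholders.
bg = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=25)
prev_gray, prev_pts = None, None

def process_frame(frame):
    """Return (previous, current) point pairs for KLT-tracked features
    that fall inside foreground blobs."""
    global prev_gray, prev_pts
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    fg = bg.apply(frame)                              # blob mask
    fg = cv2.morphologyEx(fg, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    pts = cv2.goodFeaturesToTrack(gray, maxCorners=200,
                                  qualityLevel=0.01, minDistance=7)
    tracks = []
    if prev_gray is not None and prev_pts is not None:
        nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray,
                                                  prev_pts, None)
        for p0, p1, ok in zip(prev_pts, nxt, status.ravel()):
            x, y = p1.ravel().astype(int)
            # Keep only features lying inside foreground blobs.
            if ok and 0 <= y < fg.shape[0] and 0 <= x < fg.shape[1] \
                  and fg[y, x] > 0:
                tracks.append((p0.ravel(), p1.ravel()))
    prev_gray, prev_pts = gray, pts
    return tracks
```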