Abstract: Phorbol-12-myristate-13-acetate (TPA) is a synthetic analogue of phorbol ester (PE), a natural toxic compound of Euphorbiaceae plants. The oil extracted from plants of this family is primarily a useful source of biofuel. However, this oil might also be used as a foodstuff owing to its significant nutritional content. The main limitation on utilizing the oil as a foodstuff is the toxicity of PE. Currently, most PE detoxification processes are expensive because they involve multi-step alcohol extraction sequences.
Ozone is a strong oxidizing agent. It reacts with PE by attacking the carbon-carbon double bonds of PE, and this modification of the PE molecular structure yields a non-toxic ester with high lipid content.
This report presents data on the development of a simple and inexpensive PE detoxification process that uses water as a buffer and ozone as the reactive component. The core of this new technique is the application of a new microscale plasma unit for ozone production; the technology permits ozone to be injected into the water-TPA mixture in the form of microbubbles.
The efficacy of a heterogeneous process depends on the rate of diffusion, which can be controlled through contact time and interfacial area. The low velocity of rising microbubbles and their high surface-to-volume ratio allow efficient mass transfer to be achieved during the process. Direct injection of ozone is the most efficient way to work with such a highly reactive and short-lived chemical.
Data on the behavior of the plasma unit are presented, and the influence of gas oscillation technology on the microbubble production mechanism is discussed. Data on the overall efficacy of the process for TPA degradation are also shown.
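The effect of the low rise velocity noted above can be made concrete with Stokes' law for small spheres at low Reynolds number. The following is a minimal sketch with assumed room-temperature water properties; none of the numbers come from the study:

```python
# Terminal rise velocity of a small spherical bubble via Stokes' law
# (low-Reynolds-number approximation; property values are assumptions).

def stokes_rise_velocity(d, rho_l=998.0, rho_g=1.2, mu=1.0e-3, g=9.81):
    """Terminal velocity in m/s of a bubble of diameter d (m) in a liquid
    of density rho_l (kg/m^3) and dynamic viscosity mu (Pa*s)."""
    return g * d**2 * (rho_l - rho_g) / (18.0 * mu)

# A 50-micron microbubble rises ~400x more slowly than a 1-mm bubble,
# which greatly extends the gas-liquid contact time.
v_micro = stokes_rise_velocity(50e-6)   # ~1.4 mm/s
v_large = stokes_rise_velocity(1e-3)    # ~0.54 m/s
```

Because velocity scales with the square of the diameter, halving bubble size quarters the rise speed, which is why microbubbles dwell so much longer in the liquid.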
Abstract: This paper presents a text clustering system developed on the basis of a k-means type subspace clustering algorithm to cluster large, high-dimensional and sparse text data. In this algorithm, a new step is added to the k-means clustering process to automatically calculate the weights of keywords in each cluster, so that the important words of a cluster can be identified by their weight values. For understanding and interpretation of the clustering results, a few keywords that best represent the semantic topic are extracted from each cluster. Two methods are used to extract the representative words. The candidate words are first selected according to their weights calculated by our new algorithm. Then, the candidates are fed to WordNet to identify the set of nouns and to consolidate synonymous and hyponymous words. Experimental results have shown that the clustering algorithm is superior to other subspace clustering algorithms, such as PROCLUS and HARP, and to k-means type algorithms such as Bisecting-KMeans. Furthermore, the word extraction method is effective in selecting words to represent the topics of the clusters.
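The per-cluster keyword weighting step described above can be sketched as follows. This is a minimal illustration in the spirit of entropy-weighted k-means, not the paper's exact algorithm: terms with low within-cluster dispersion receive high weight. The smoothing parameter gamma, the toy term-frequency matrix, and the function name are all assumptions for illustration:

```python
import numpy as np

def cluster_feature_weights(X, labels, k, gamma=0.5):
    """Per-cluster feature weights: terms with low within-cluster
    dispersion get high weight (entropy-weighting style sketch)."""
    W = np.zeros((k, X.shape[1]))
    for c in range(k):
        Xc = X[labels == c]
        disp = ((Xc - Xc.mean(axis=0)) ** 2).sum(axis=0)  # per-term dispersion
        e = np.exp(-disp / gamma)
        W[c] = e / e.sum()           # weights in each cluster sum to 1
    return W

# Toy term-frequency matrix (rows: documents, columns: terms)
X = np.array([[3., 0., 1.],
              [3., 0., 0.],
              [0., 3., 1.],
              [0., 2., 0.]])
labels = np.array([0, 0, 1, 1])
W = cluster_feature_weights(X, labels, k=2)
# In cluster 0, term 0 is used consistently (low dispersion) and term 2
# is noisy, so term 0 receives the larger weight.
```

In a full system this weighting step would alternate with the usual assignment and centroid updates of k-means, and the top-weighted terms per cluster would become the candidate topic keywords.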
Abstract: Segmentation and quantification of stenosis is an
important task in assessing coronary artery disease. One of the main
challenges is measuring the real diameter of curved vessels.
Moreover, uncertainty in segmentation of different tissues in the
narrow vessel is an important issue that affects accuracy. This paper
proposes an algorithm to extract coronary arteries and measure the
degree of stenosis. A Markovian fuzzy clustering method is applied to
model the uncertainty that arises from the partial volume effect. The
algorithm comprises four steps: segmentation, centreline extraction,
estimation of the plane orthogonal to the centreline, and measurement
of the degree of stenosis. To evaluate the accuracy and reproducibility, the approach
has been applied to a vascular phantom and the results compared
with the real diameters. The results for 10 patient datasets have been
visually judged by a qualified radiologist. The results reveal the
superiority of the proposed method over the conventional
thresholding method (CTM) on both datasets.
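Once the minimal lumen diameter in the orthogonal plane and a reference diameter from a healthy segment are available, the degree of stenosis reduces to a simple ratio. A minimal sketch using the common percent-diameter convention; the function name and example values are hypothetical, and the paper's exact definition may differ:

```python
def degree_of_stenosis(d_min, d_ref):
    """Percent diameter stenosis from the minimal lumen diameter d_min at
    the lesion and a reference diameter d_ref from a healthy segment, both
    measured in the plane orthogonal to the vessel centreline."""
    if d_ref <= 0:
        raise ValueError("reference diameter must be positive")
    return 100.0 * (1.0 - d_min / d_ref)

# e.g. a 1.2 mm residual lumen in a 3.0 mm reference vessel
severity = degree_of_stenosis(1.2, 3.0)   # 60% stenosis
```

Measuring both diameters in the plane orthogonal to the centreline is what avoids the overestimation that occurs when a curved vessel is sliced obliquely.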
Abstract: The excellent suitability of the externally excited synchronous
machine (EESM) in automotive traction drive applications
is justified by its high efficiency over the whole operation range and
the high availability of materials. Usually, maximum efficiency is
obtained by modelling each individual loss and minimizing the sum of all
losses. As a result, the quality of the optimization highly depends on
the precision of the model. Moreover, it requires accurate knowledge
of the saturation dependent machine inductances. Therefore, the
present contribution proposes a method to minimize the overall losses
of a salient pole EESM and its inverter in steady state operation based
on measurement data only. Since this method does not require any
manufacturer data, it is well suited for automated measurement
data evaluation and inverter parametrization. The field oriented control
(FOC) of an EESM provides three current components, i.e., three
degrees of freedom (DOF). An analytic minimization of the copper
losses in the stator and the rotor (assuming constant inductances) is
performed and serves as a first approximation of how to choose the
optimal current reference values. After a numeric offline minimization
of the overall losses based on measurement data, the results are
compared to a control strategy that satisfies cos(ϕ) = 1.
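The offline minimization over the current references can be illustrated with a brute-force search: for each candidate (i_d, i_f) pair, i_q follows from the standard salient-pole EESM torque equation, and the stator and rotor copper losses are compared. All machine constants below are illustrative assumptions rather than measurement data, and the scaling factors depend on the chosen dq-transform convention:

```python
import numpy as np

# Illustrative machine constants (assumed, not from the paper)
p, Ld, Lq, Lmd = 3, 4e-3, 2e-3, 80e-3   # pole pairs; inductances in H
Rs, Rf = 0.02, 1.5                       # stator / field resistance in ohm

def copper_losses(i_d, i_q, i_f):
    """Stator plus rotor copper losses (amplitude-invariant dq scaling assumed)."""
    return 1.5 * Rs * (i_d**2 + i_q**2) + Rf * i_f**2

def iq_for_torque(T, i_d, i_f):
    """Solve the torque equation T = 1.5*p*(Lmd*i_f + (Ld - Lq)*i_d)*i_q for i_q."""
    flux = Lmd * i_f + (Ld - Lq) * i_d
    return T / (1.5 * p * flux) if abs(flux) > 1e-9 else None

def min_loss_currents(T, id_range, if_range):
    """Grid search for the current references minimizing copper losses at torque T."""
    best = None
    for i_d in id_range:
        for i_f in if_range:
            i_q = iq_for_torque(T, i_d, i_f)
            if i_q is None:
                continue
            loss = copper_losses(i_d, i_q, i_f)
            if best is None or loss < best[0]:
                best = (loss, i_d, i_q, i_f)
    return best

best = min_loss_currents(50.0, np.linspace(-50, 50, 101), np.linspace(0.5, 10, 96))
```

The paper minimizes the overall losses, including iron and inverter losses, from measurement data; the grid above only reproduces the analytic copper-loss first approximation numerically.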
Abstract: The explosive growth of the World Wide Web has posed
a challenging problem in extracting relevant data. Traditional web
crawlers focus only on the surface web while the deep web keeps
expanding behind the scenes. Deep web pages are created
dynamically as a result of queries posed to specific web databases.
The structure of the deep web pages makes it impossible for
traditional web crawlers to access deep web contents. This paper
presents Deep iCrawl, a novel vision-based approach for extracting
data from the deep web. Deep iCrawl splits the process into two
phases. The first phase includes query analysis and query translation,
and the second covers vision-based extraction of data from the
dynamically created deep web pages. There are several established
approaches for the extraction of deep web pages, but the proposed
method aims at overcoming their inherent limitations.
This paper also aims at comparing the data items and presenting them
in the required order.
Abstract: A research project dealing with the phytoremediation
of a soil polluted by heavy metals is currently under way. The
case study is represented by a mining area in Hamedan province in
the central west part of Iran. The potential of phytoextraction and
phytostabilization of plants was evaluated considering the
concentration of heavy metals in the plant tissues and also the
bioconcentration factor (BCF) and the translocation factor (TF).
Several established criteria were also applied to define
hyperaccumulator plants in the studied area. Results showed that
none of the collected plant species were suitable for phytoextraction
of Cu, Zn, Fe and Mn, but among the plants, Euphorbia macroclada
was the most efficient in phytostabilization of Cu and Fe, while
Ziziphora clinopodioides, Cousinia sp. and Chenopodium botrys
were the most suitable for phytostabilization of Zn, and Chondrilla
juncea and Stipa barbata had the potential for phytostabilization of
Mn. Using the most common criterion, Euphorbia macroclada and
Verbascum speciosum were Fe hyperaccumulator plants. The present
study showed that native plant species growing on contaminated sites
may have the potential for phytoremediation.
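The two screening indices used above are simple concentration ratios. A minimal sketch with a common classification rule from the phytoremediation literature; the concentrations, thresholds and function names are illustrative assumptions, not data or criteria from this study:

```python
def bcf(c_root, c_soil):
    """Bioconcentration factor: metal concentration in root tissue over soil."""
    return c_root / c_soil

def tf(c_shoot, c_root):
    """Translocation factor: metal concentration in shoot over root."""
    return c_shoot / c_root

def screen(c_shoot, c_root, c_soil):
    """One common screening rule: BCF > 1 with TF > 1 suggests
    phytoextraction potential; BCF > 1 with TF < 1 suggests
    phytostabilization potential."""
    b, t = bcf(c_root, c_soil), tf(c_shoot, c_root)
    if b > 1 and t > 1:
        return "phytoextraction"
    if b > 1 and t < 1:
        return "phytostabilization"
    return "neither"

# Illustrative concentrations in mg/kg
role = screen(c_shoot=40.0, c_root=150.0, c_soil=100.0)   # "phytostabilization"
```

Intuitively, a plant that concentrates a metal in its roots but does not move it to shoots immobilizes the metal in place (phytostabilization), whereas one that moves it upward can be harvested to remove the metal (phytoextraction).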
Abstract: Abstraction of water from dry river sand-beds is a
well-known alternative source of water during dry seasons.
Because of the shape of sand particles, voids are created within
the riverbed that can store water. Large rivers are rare in South
Africa. Many rivers are of the sand-river type and carry no water during the
prolonged dry periods. South Africa has not taken full advantage of
water storage in sand as a solution to the growing water scarcity both
in urban and rural areas. The paper reviews the benefits of run-off
storage in sand reservoirs gained from other arid areas and the need
for its adoption in rural areas of South Africa as an alternative water
supply where feasible.
Abstract: Optimization of extraction of phenolic compounds
from Avicennia marina using response surface methodology was
carried out during the present study. A five-level, three-factor central
composite rotatable design (CCRD) was utilized to examine the optimum
combination of extraction variables based on the total phenolic content
(TPC) of Avicennia marina leaves. The best combination for the response
function was a drying temperature of 78.41 °C, an extraction temperature
of 26.18 °C, and an extraction time of 36.53 minutes. The procedure can
be readily extended to the study of several other pharmaceutical processes such as
purification of bioactive substances, drying of extracts and
development of the pharmaceutical dosage forms for the benefit of
consumers.
Abstract: Partial discharge (PD) detection is an important
method to evaluate the insulation condition of metal-clad apparatus.
Non-intrusive sensors, which are easy to install and do not
interrupt operation, are preferred in onsite PD detection.
However, such detection often lacks accuracy due to interference in PD
signals. In this paper, a novel PD extraction method that uses frequency
analysis and entropy based time-frequency (TF) analysis is introduced.
The repetitive pulses from the converter are first removed via frequency
analysis. Then, the relative entropy and relative peak-frequency of
each pulse (i.e. time-indexed vector TF spectrum) are calculated and
all pulses with similar parameters are grouped. According to the
characteristics of non-intrusive sensor and the frequency distribution
of PDs, the PD pulses and interference pulses are separated. Finally, the
PD signal and interferences are recovered via inverse TF transform.
The de-noising results on noisy PD data demonstrate that the
combination of frequency and time-frequency techniques can
discriminate PDs from interferences with various frequency
distributions.
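The grouping step can be illustrated with simple spectral features. The sketch below uses plain spectral (Shannon) entropy and peak frequency as stand-ins for the paper's relative entropy and relative peak-frequency, with synthetic pulses, ad-hoc tolerances, and a greedy grouping rule; all of these are assumptions for illustration:

```python
import numpy as np

def pulse_features(pulse, fs):
    """Spectral Shannon entropy and peak frequency of one pulse; simple
    stand-ins for the paper's relative entropy and relative peak-frequency."""
    spec = np.abs(np.fft.rfft(pulse)) ** 2
    prob = spec / spec.sum()
    entropy = -np.sum(prob * np.log2(prob + 1e-12))
    freqs = np.fft.rfftfreq(len(pulse), d=1.0 / fs)
    return entropy, freqs[np.argmax(spec)]

def group_pulses(pulses, fs, ent_tol=1.0, freq_tol=5e4):
    """Greedily group pulses whose (entropy, peak-frequency) are similar."""
    groups = []  # each entry: (ref_entropy, ref_peak_freq, member_indices)
    for i, pulse in enumerate(pulses):
        ent, pk = pulse_features(pulse, fs)
        for g in groups:
            if abs(g[0] - ent) < ent_tol and abs(g[1] - pk) < freq_tol:
                g[2].append(i)
                break
        else:
            groups.append((ent, pk, [i]))
    return groups

# Two synthetic families: narrow-band 100 kHz bursts (PD-like, low entropy)
# versus broadband noise pulses (high entropy).
fs = 1e6
t = np.arange(256) / fs
rng = np.random.default_rng(0)
bursts = [np.sin(2 * np.pi * 1e5 * t) * np.exp(-2e4 * t) for _ in range(3)]
noise = [rng.standard_normal(256) for _ in range(3)]
groups = group_pulses(bursts + noise, fs)   # bursts land in the first group
```

Narrow-band PD-like bursts concentrate their energy in a few frequency bins and thus have low spectral entropy, so they separate cleanly from broadband interference even before the frequency distribution of the sensor is taken into account.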
Abstract: The third phase of the web, the semantic web, requires many web pages annotated with metadata. Thus, a crucial question is where to acquire this metadata. In this paper we propose a semi-automatic approach to annotate the text of documents and web pages, employing a fairly comprehensive knowledge base to categorize instances with respect to an ontology. The approach is evaluated against manual annotations and against one of the most popular annotation tools, which works in the same way as ours. The approach is implemented in the .NET framework and uses WordNet as its knowledge base, yielding an annotation tool for the Semantic Web.
Abstract: The use of machine vision to inspect the outcome of
surgical tasks is investigated, with the aim of incorporating this
approach in robotic surgery systems. Machine vision is a non-contact
form of inspection, i.e., no part of the vision system is in direct contact
with the patient, and it is therefore well suited for surgery, where
sterility is an important consideration. As a proof of concept, three
primary surgical tasks for a common neurosurgical procedure were
inspected using machine vision. Experiments were performed on
cadaveric pig heads to simulate the two possible outcomes, i.e.,
satisfactory or unsatisfactory, for the tasks involved in making a burr
hole, namely incision, retraction, and drilling. We identify low-level
image features to distinguish the two outcomes, as well as report on
results that validate our proposed approach. The potential of using
machine vision in a surgical environment, and the challenges that
must be addressed, are identified and discussed.
Abstract: This paper outlines the development of a learning retrieval agent. The task of this agent is to extract knowledge from the Active Semantic Network in response to user requests. Based on a reinforcement learning approach, the agent learns to interpret the user's intention. In particular, the learning algorithm focuses on the retrieval of complex, long-distance relations. By increasing its learnt knowledge with every request-result-evaluation sequence, the agent enhances its capability to find the intended information.
Abstract: Aerial and satellite images are information rich. They are also complex to analyze. For GIS systems, many features require fast and reliable extraction of roads and intersections. In this paper, we study efficient and reliable automatic extraction algorithms to address some difficult issues that are commonly seen in high-resolution aerial and satellite images, yet not well addressed by existing solutions, such as blurring, broken or missing road boundaries, lack of road profiles, heavy shadows, and interfering surrounding objects. The new scheme is based on a new method, namely the reference circle, to properly identify the pixels that belong to the same road and to use this information to recover the whole road network. This feature is invariant to the shape and direction of roads and tolerates heavy noise and disturbances. Road extraction based on reference circles is much more noise-tolerant and flexible than previous edge-detection-based algorithms. The scheme is able to extract roads reliably from images with complex contents and heavy obstructions, such as the high-resolution aerial/satellite images available from Google Maps.
Abstract: Using a plug flow model in conjunction with
experimental solute concentration profiles, the overall volumetric mass
transfer coefficient based on the continuous phase (Koca) in a packed
liquid-liquid extraction column has been optimized. Twelve
experiments were performed using the standard water/acetic
acid/toluene system in a column of 6 cm diameter and 120 cm height.
Through consideration of the influencing parameters, the
dimensionless groups were correlated in terms of an overall Sherwood
number, with an acceptable average error of about 15.8%.
Abstract: This work presents the formulation and initial applications of an anisotropic plastic-damage constitutive model proposed for the non-linear analysis of reinforced concrete structures subjected to loading with sign reversal. The original constitutive model is based on the fundamental hypothesis of energy equivalence between the real and the continuous medium, following the concepts of Continuum Damage Mechanics. The concrete is assumed to be an initially elastic isotropic medium that develops anisotropy, permanent strains and bimodularity (distinct elastic responses depending on whether tension or compression stress states prevail) induced by damage evolution. In order to account for the bimodularity, two damage tensors governing the stiffness in the tension and compression regimes are introduced. Some conditions are then added to the original version of the model in order to simulate the unilateral damage effect. The three-dimensional version of the proposed model is analyzed in order to validate its formulation against micromechanical theory. The one-dimensional version of the model is applied to the analysis of a reinforced concrete beam subjected to loading with sign reversal. Despite parametric identification problems, the initial applications show the good performance of the model.
Abstract: XML files contain data in a well-formatted manner, and studying the format or semantics of the grammar helps in fast retrieval of the data. Many algorithms describe searching for data in XML files. A number of approaches rely on data structures or on the contents of the document; in the former case the user must know the structure of the document, while information retrieval techniques using NLP relate to the content of the document. Hence the results may be irrelevant or unsuccessful and the search may take more time. This paper presents fast XML retrieval techniques based on a new indexing technique and the concept of RXML. When indexing an XML document, the system takes into account both the document content and the document structure and assigns a value to each tag in the file. To query the system, the user is not constrained to a fixed query format.
Abstract: This paper proposes Fuzzy Sliding Mode Control (FSMC) as a control strategy for a Buck-Boost DC-DC converter. The proposed fuzzy controller specifies changes in the control signal based on knowledge of the sliding surface and its rate of change so as to satisfy the sliding mode stability and attraction conditions. The performance of the proposed fuzzy sliding controller is compared to that obtained with a classical sliding mode controller. The satisfactory simulation results show the efficiency of the proposed control law, which reduces the chattering phenomenon. Moreover, the obtained results prove the robustness of the proposed control law against variations of the load resistance and the input voltage of the studied converter.
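The role of the sliding surface and its change can be sketched in a few lines. The linear stand-in below for the fuzzy rule base, the gains, and the boundary-layer width are all illustrative assumptions, not the proposed controller; it only shows how softening sign(s) inside a boundary layer reduces chattering:

```python
def sliding_surface(error, d_error, lam=1.0):
    """Sliding surface s = lam*e + de/dt for the output-voltage error."""
    return lam * error + d_error

def smoothed_smc(s, ds, k=1.0, phi=0.5):
    """Control correction from the surface s and its change ds. The
    discontinuous sign(s) is replaced by a saturation inside a boundary
    layer of width phi, which tames chattering; gains are assumed."""
    sat = max(-1.0, min(1.0, s / phi))
    return -k * sat - 0.1 * ds

# Inside the boundary layer the action is proportional; outside it saturates.
u_small = smoothed_smc(0.1, 0.0)   # -0.2
u_large = smoothed_smc(5.0, 0.0)   # -1.0
```

A fuzzy rule base plays a similar role to the saturation here, mapping (s, ds) to a smooth control correction instead of a hard switch, which is why the chattering is reduced.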
Abstract: Extraction of lactic acid by emulsion liquid membrane (ELM) technology using n-trioctylamine (TOA) in n-heptane as the carrier within the organic membrane, with sodium carbonate as the acceptor phase, was optimized using response surface methodology (RSM). A three-level Box-Behnken design was employed for the experimental design and analysis of the results, and to depict the combined effect of five independent variables, viz. lactic acid concentration in the aqueous phase (cl), sodium carbonate concentration in the stripping phase (cs), carrier concentration in the membrane phase (ψ), treat ratio, and batch extraction time (τ),
with equal volumes of organic and external aqueous phase, on the lactic acid extraction efficiency. The maximum lactic acid extraction efficiency (ηext) of 98.21% from the aqueous phase in a batch reactor using ELM was found at the optimized values of the test variables cl, cs, ψ, treat ratio, and τ of 0.06 M, 0.18 M, 4.72% (v/v), 1.98 (v/v), and 13.36 min, respectively.
Abstract: Hand gesture recognition is an active research area in the vision
community, mainly for the purpose of sign language recognition and
Human Computer Interaction. In this paper, we propose a system to
recognize alphabet characters (A-Z) and numbers (0-9) in real-time
from stereo color image sequences using Hidden Markov Models
(HMMs). Our system is based on three main stages: automatic
segmentation and preprocessing of the hand regions, feature extraction,
and classification. In the automatic segmentation and preprocessing
stage, color and a 3D depth map are used to detect the hands, whose
trajectory is then tracked using the mean-shift algorithm and a Kalman
filter. In the feature extraction stage, combined 3D features of location,
orientation, and velocity with respect to Cartesian coordinate systems
are used. Then, k-means clustering is employed to derive the HMM
codewords. In the final stage, classification, the Baum-Welch algorithm
is used to fully train the HMM parameters.
Gestures of alphabet characters and numbers are recognized using a
left-right banded model in conjunction with the Viterbi algorithm.
Experimental results demonstrate that our system can successfully
recognize hand gestures with a 98.33% recognition rate.
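The Viterbi decoding step named above is compact enough to sketch in full. This is the textbook algorithm; the toy two-state left-right model and its probabilities are illustrative assumptions, not the paper's trained HMMs:

```python
import numpy as np

def viterbi(pi, A, B, obs):
    """Most likely state path for a discrete HMM: pi initial probs (N,),
    A transition matrix (N, N), B emission matrix (N, M), obs a list of
    observed symbol indices. Works in log space for numerical stability."""
    N, T = len(pi), len(obs)
    delta = np.zeros((T, N))            # best log-prob of a path ending in each state
    psi = np.zeros((T, N), dtype=int)   # backpointers
    delta[0] = np.log(pi) + np.log(B[:, obs[0]])
    for t in range(1, T):
        scores = delta[t - 1][:, None] + np.log(A)   # scores[from, to]
        psi[t] = np.argmax(scores, axis=0)
        delta[t] = scores[psi[t], np.arange(N)] + np.log(B[:, obs[t]])
    path = [int(np.argmax(delta[-1]))]
    for t in range(T - 1, 0, -1):       # backtrack
        path.append(int(psi[t, path[-1]]))
    return path[::-1]

# Toy 2-state left-right banded HMM: a state may stay or move right only
# (tiny values stand in for zero probabilities to keep log() finite).
pi = np.array([1.0, 1e-12])
A = np.array([[0.7, 0.3],
              [1e-12, 1.0]])
B = np.array([[0.9, 0.1],    # state 0 mostly emits symbol 0
              [0.1, 0.9]])   # state 1 mostly emits symbol 1
path = viterbi(pi, A, B, [0, 0, 1, 1])
```

For the observation sequence [0, 0, 1, 1] the decoded path switches states exactly where the observations do, which is the behavior a left-right banded model enforces on gesture codeword sequences.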
Abstract: An automatic method for the extraction of feature points for face-based applications is proposed. The system is based upon volumetric feature descriptors, which in this paper have been extended to incorporate scale space. The method is robust to noise and has the ability to extract local and holistic features simultaneously from faces stored in a database. Extracted features are stable over a range of faces, with results indicating that, in terms of intra-ID variability, the technique has the ability to outperform manual landmarking.