Abstract: Graphs have become increasingly important in modeling complicated structures and schemaless data such as proteins, chemical compounds, and XML documents. Given a graph query, it is desirable to retrieve matching graphs quickly from a large database via graph-based indices. Unlike existing methods, our approach, called VFM (Vertex to Frequent Feature Mapping), makes use of vertices and decision features as the basic indexing features. VFM constructs two mappings between vertices and frequent features to answer graph queries. The VFM approach not only provides an elegant solution to the graph indexing problem, but also demonstrates how database indexing and query processing can benefit from data mining, especially frequent pattern mining. The results show that the proposed method not only avoids enumerating the subgraphs of the query graph, but also effectively reduces the number of subgraph isomorphism tests between the query graph and the graphs in the candidate answer set during the verification stage.
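Purely as an illustration of feature-based candidate filtering (the exact VFM mappings are not reproduced here), the Python sketch below assumes two hypothetical index dictionaries, vertex_to_features and feature_to_graphs; a query's candidate answer set is the intersection of the graph lists of the features implied by its vertex labels, and only these candidates need subgraph isomorphism verification.

    # Hypothetical sketch of feature-based candidate filtering for graph queries.
    # 'vertex_to_features' and 'feature_to_graphs' are assumed index structures,
    # not the authors' exact VFM mappings.
    def candidate_graphs(query_vertex_labels, vertex_to_features, feature_to_graphs):
        """Return ids of graphs that contain every frequent feature implied by the query."""
        candidates = None
        for label in query_vertex_labels:
            for feature in vertex_to_features.get(label, []):
                graphs_with_feature = feature_to_graphs.get(feature, set())
                candidates = (graphs_with_feature if candidates is None
                              else candidates & graphs_with_feature)
        # if no feature applies, a real index would fall back to the whole database
        return candidates if candidates is not None else set()

    # Toy usage: only graphs 2 and 5 survive the filter and need verification.
    vertex_to_features = {"C": ["ring6"], "N": ["amine"]}
    feature_to_graphs = {"ring6": {1, 2, 5}, "amine": {2, 5, 9}}
    print(candidate_graphs(["C", "N"], vertex_to_features, feature_to_graphs))  # {2, 5}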
Abstract: The concept of privacy, seen in connection with the consumer's private space and personalization, has recently gained greater importance as a consequence of organizations' increasing marketing efforts based on the capturing, processing and usage of consumers' personal data. The paper intends to provide a definition of the consumer's private space based on the types of personal data the consumer is willing to disclose, to assess the attitude toward personalization, and to identify the means preferred by consumers to control their personal data and defend their private space. Several implications generated by the definition of the consumer's private space are identified and weighted from both the consumers' and the organizations' perspectives.
Abstract: Every day, usage of the Internet increases and a whole world of data becomes accessible. Network providers do not want the services they provide to be used for harmful or terrorist purposes, so they use a variety of methods to protect special regions from harmful data. One of the most important of these methods is the firewall. A firewall stops the transfer of such packets in several ways, but in some cases firewalls are not used because of their blind packet blocking, the high processing power they require, and their expensive price. Here we propose a method for finding a discriminant function that distinguishes usual packets from harmful ones by statistical processing of network router logs, so that an administrator can alert the user. This method is very fast and can be deployed simply alongside Internet routers.
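As a minimal sketch of the idea, the code below fits Fisher's linear discriminant to two hypothetical log-derived features (packet size and inter-arrival time); the features, data and threshold are illustrative placeholders, not the statistics actually used in the paper.

    import numpy as np

    # Toy log-derived features: [packet size, inter-arrival time]; labels 0 = usual, 1 = harmful.
    X = np.array([[512, 0.10], [480, 0.12], [1400, 0.01], [1350, 0.02]], dtype=float)
    y = np.array([0, 0, 1, 1])

    # Fisher's linear discriminant: w ~ Sw^-1 (mu1 - mu0); threshold at midpoint of projected means.
    mu0, mu1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
    Sw = np.cov(X[y == 0].T) + np.cov(X[y == 1].T)
    w = np.linalg.solve(Sw, mu1 - mu0)
    threshold = 0.5 * (X[y == 0] @ w).mean() + 0.5 * (X[y == 1] @ w).mean()

    def is_harmful(packet_features):
        """Flag a packet so that an administrator can alert the user."""
        return float(np.dot(packet_features, w)) > threshold

    print(is_harmful([1390, 0.015]))  # True for the toy data above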
Abstract: The en bloc approach models all phases of the orthostatic test with a single mathematical model, which allows a complex parametric view of the orthostatic response. This work presents the implementation of a mathematical model for processing measurements of systolic blood pressure, diastolic blood pressure and heart rate performed on volunteers during the orthostatic test. The original model hypothesis, that every postural change represents only one stressor, did not comply with the measured time profiles of the physiological circulation factors. Results of the identification support the hypothesis that the second postural change of the orthostatic test causes induced stressors, together with the observation of a physiological regulation mechanism. The effect is most pronounced in the heart rate and diastolic blood pressure time profiles and least pronounced in the systolic blood pressure measurements. The presented study gives a new view of the orthostatic test, with an impact on clinical practice.
Abstract: Nanostructured materials have attracted many
researchers due to their outstanding mechanical and physical
properties. For example, carbon nanotubes (CNTs) or carbon
nanofibres (CNFs) are considered to be attractive reinforcement
materials for light weight and high strength metal matrix composites.
These composites are being projected for use in structural applications for their high specific strength, as well as in functional materials for their attractive thermal and electrical characteristics. The
critical issues of CNT-reinforced MMCs include processing
techniques, nanotube dispersion, interface, strengthening mechanisms
and mechanical properties. One of the major obstacles to the effective
use of carbon nanotubes as reinforcements in metal matrix
composites is their agglomeration and poor distribution/dispersion
within the metallic matrix. In order to exploit the properties of CNTs (or CNFs) in composites, high dispersion of the CNTs (or CNFs) and strong interfacial bonding are the key issues, and both remain challenging. Processing techniques used for synthesis
of the composites have been studied with an objective to achieve
homogeneous distribution of carbon nanotubes in the matrix.
Modified mechanical alloying (ball milling) techniques have emerged
as promising routes for the fabrication of carbon nanotube (CNT)
reinforced metal matrix composites. In order to obtain a
homogeneous product, good control of the milling process, in
particular control of the ball movement, is essential. The control of
the ball motion during the milling leads to a reduction in grinding
energy and a more homogeneous product. Also, the critical inner
diameter of the milling container at a particular rotational speed can
be calculated. In the present work, we use conventional and modified mechanical alloying to generate a homogeneous distribution of 2 wt.% CNT within Al powders. Aluminium powder of 99% purity (Acros, 200 mesh) was used along with two types of multiwall carbon nanotubes (MWCNTs) with different aspect ratios to produce Al-CNT composites. The composite powders were processed into bulk material by compaction and sintering, using cylindrical compaction and a tube furnace. Field emission scanning electron microscopy (FESEM), X-ray diffraction (XRD), Raman spectroscopy and Vickers macrohardness testing were used to evaluate CNT dispersion, powder morphology, CNT damage, phase composition, mechanical properties and crystallite size.
Despite the success of ball milling in dispersing CNTs in Al powder, it is often accompanied by considerable strain hardening of the Al powder, which may have implications for the final properties of the composite. The results show that particle size and morphology vary with milling time. Also, mixing and sonication before mechanical alloying and modified ball milling improve the dispersion of the CNTs in the Al matrix.
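For reference, the critical rotational speed of a ball mill is commonly related to the container geometry by the textbook expression below (the authors' exact calculation may differ); rearranged, it gives the critical inner diameter at a given speed:

\[
n_c \;\approx\; \frac{42.3}{\sqrt{D - d}}\ \text{rpm}
\qquad\Longleftrightarrow\qquad
D_{\mathrm{crit}} \;\approx\; d + \left(\frac{42.3}{n}\right)^{2},
\]

where \(D\) is the inner diameter of the milling container and \(d\) the ball diameter (both in metres), and \(n\) is the rotational speed in rpm; above the critical speed the balls centrifuge instead of cascading, so grinding efficiency drops.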
Abstract: This paper proposes new hybrid approaches for face recognition. Gabor wavelet representation of face images is an effective approach for both facial action recognition and face identification. Performing dimensionality reduction and linear discriminant analysis on the downsampled Gabor wavelet faces can increase the discriminative ability. The nearest feature space is extended to various similarity measures. In our experiments, the proposed Gabor wavelet faces combined with the extended nearest feature space classifier show very good performance, achieving a maximum correct recognition rate of 93% on the ORL data set without any preprocessing step.
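A simplified sketch of the Gabor-plus-LDA part of such a pipeline is given below; it uses scikit-image and scikit-learn, LDA's built-in classifier stands in for the extended nearest feature space classifier, and the filter frequencies, orientations and down-sampling size are illustrative choices.

    import numpy as np
    from skimage.filters import gabor
    from skimage.transform import resize
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    def gabor_face_features(img, frequencies=(0.1, 0.2), n_orientations=4, size=(16, 16)):
        """Down-sampled Gabor magnitude responses concatenated into one feature vector."""
        feats = []
        for f in frequencies:
            for k in range(n_orientations):
                real, imag = gabor(img, frequency=f, theta=k * np.pi / n_orientations)
                mag = np.hypot(real, imag)
                feats.append(resize(mag, size, anti_aliasing=True).ravel())
        return np.concatenate(feats)

    def train_and_predict(X_train, y_train, X_test):
        """X_train/X_test: grayscale face images (e.g. ORL); y_train: subject labels."""
        F_train = np.array([gabor_face_features(im) for im in X_train])
        F_test = np.array([gabor_face_features(im) for im in X_test])
        lda = LinearDiscriminantAnalysis().fit(F_train, y_train)   # discriminant projection
        return lda.predict(F_test)                                  # classify in LDA space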
Abstract: In this paper we present a way of controlling concurrent access to data in a distributed application using the Pessimistic Offline Lock design pattern. In our case, the application processes a complex entity that contains, in a hierarchical structure, various other entities (objects). It is shown how the complex entity and the contained entities must be locked in order to control concurrent access to the data.
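A minimal sketch of the pattern is shown below; the lock records are kept in an in-memory dictionary for brevity (a real distributed application would keep them in a shared database table so all processes see them), and entity_id and children are hypothetical attributes of the hierarchical entity.

    import threading

    class LockManager:
        """Minimal in-memory stand-in for a Pessimistic Offline Lock table."""
        def __init__(self):
            self._locks = {}               # entity_id -> owner
            self._guard = threading.Lock()

        def acquire(self, entity_id, owner):
            with self._guard:
                holder = self._locks.get(entity_id)
                if holder is not None and holder != owner:
                    return False           # another session already edits this entity
                self._locks[entity_id] = owner
                return True

        def release_all(self, owner):
            with self._guard:
                self._locks = {e: o for e, o in self._locks.items() if o != owner}

    def lock_complex_entity(manager, root, owner):
        """Lock the root entity and, recursively, every contained entity."""
        to_lock = [root]
        while to_lock:
            node = to_lock.pop()
            if not manager.acquire(node.entity_id, owner):
                manager.release_all(owner)   # roll back partial locking on conflict
                return False
            to_lock.extend(node.children)
        return True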
Abstract: Wheel-driven mobile robots are restricted in their moving range by obstacles and stairs. To overcome this weakness, we studied the development of a mobile robot using an airship. Our airship robot moves by recognizing arrow marks on the path. To have the airship robot recognize the arrow marks, we used edge-based template matching. To control the propeller units, we used PID and PD controllers. The results of experiments demonstrated that the airship robot can move along the marks and can go up and down stairs. This shows the possibility that an airship robot can become a robot that can move throughout wide-ranging facilities.
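A textbook discrete PID controller of the kind used for such propeller units might look as follows; the gains, sample time and setpoint are illustrative, not the values tuned for the airship.

    class PID:
        """Textbook discrete PID controller; set ki = 0 to obtain a PD controller."""
        def __init__(self, kp, ki, kd, dt):
            self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
            self.integral = 0.0
            self.prev_error = 0.0

        def update(self, setpoint, measurement):
            error = setpoint - measurement
            self.integral += error * self.dt
            derivative = (error - self.prev_error) / self.dt
            self.prev_error = error
            return self.kp * error + self.ki * self.integral + self.kd * derivative

    # Illustrative usage for an altitude loop driving a propeller unit.
    altitude_pid = PID(kp=1.2, ki=0.3, kd=0.05, dt=0.02)
    thrust = altitude_pid.update(setpoint=2.0, measurement=1.8)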
Abstract: Liver segmentation is the first significant step in liver diagnosis from computed tomography. It separates the liver structure from the other abdominal organs. Sophisticated filtering techniques are indispensable for a proper segmentation. In this paper, we employ 3D anisotropic diffusion as a preprocessing step. While removing image noise, this technique preserves the significant parts of the image, typically edges, lines or other details that are important for the interpretation of the image. The segmentation task is done by thresholding with automatic threshold value selection, and finally the false liver regions are eliminated using 3D connected component analysis. The results show that, by employing 3D anisotropic filtering, better liver segmentation results can be achieved even though a simple segmentation technique is used.
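A rough sketch of this pipeline, assuming a simple Perona-Malik diffusion scheme and a fixed intensity band in place of the paper's automatic threshold selection, could look like this:

    import numpy as np
    from scipy import ndimage

    def anisotropic_diffusion_3d(vol, n_iter=10, kappa=30.0, lam=0.1):
        """Simple 3D Perona-Malik diffusion: smooths noise while preserving edges."""
        vol = vol.astype(float).copy()
        for _ in range(n_iter):
            grads = [np.roll(vol, -1, axis=a) - vol for a in range(3)] + \
                    [np.roll(vol, 1, axis=a) - vol for a in range(3)]
            vol += lam * sum(np.exp(-(g / kappa) ** 2) * g for g in grads)
        return vol

    def segment_liver(ct_volume, lo, hi):
        """Threshold to a liver-like intensity band (lo, hi is a placeholder for the
        automatic selection), then keep the largest 3D connected component to
        discard false liver regions."""
        smoothed = anisotropic_diffusion_3d(ct_volume)
        mask = (smoothed > lo) & (smoothed < hi)
        labels, n = ndimage.label(mask)
        if n == 0:
            return mask
        sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
        return labels == (np.argmax(sizes) + 1)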
Abstract: Accurate timing alignment and stability are important to maximize the true counts and minimize the random counts in positron emission tomography. The signals output from the detectors must therefore be centered with the two isotopes before operation and fed into four pulse-processing units, each of which can accept up to eight inputs. The dual-source computed tomography system consists of two units on the left for the 15 detector signals of the Cs-137 isotope and two units on the right for the 15 detector signals of the Co-60 isotope. The gamma spectrum consists of either single or multiple photopeaks. This allows the energy-discrimination electronic hardware associated with the data acquisition system to acquire photon count data at a specific energy, even if detectors with poor energy resolution are used. This also helps to avoid counting Compton-scattered photons, especially if a single discrete gamma photopeak is emitted by the source, as in the case of Cs-137. In this study, the polyenergetic version of the alternating minimization algorithm is applied to the dual-energy gamma computed tomography problem.
Abstract: Raisin concentrate (RC) is among the most important products obtained in the raisin processing industry. RC products are now used to make syrups, drinks and confectionery products and are introduced as a natural substitute for sugar in food applications. Iran is one of the biggest raisin exporters in the world, but unfortunately, despite good raw material, no serious effort to extract RC has been made in Iran. Therefore, in this paper, we determined and analyzed the parameters affecting the RC extraction process and then optimized these parameters for designing the RC extraction process for two types of raisin (round and long) produced in the Khorasan region. Two levels of solvent (1:1 and 2:1), three levels of extraction temperature (60°C, 70°C and 80°C), and three levels of concentration temperature (50°C, 60°C and 70°C) were the treatments. Finally, physicochemical characteristics of the obtained concentrate, such as color, viscosity, percentage of reducing sugar and acidity, as well as microbial tests (mould and yeast), were measured. The analysis was performed on the basis of a factorial experiment in a completely randomized design (CRD), and Duncan's multiple range test (DMRT) was used for the comparison of the means. Statistical analysis of the results showed that the optimal conditions for concentrate production are round raisins with a solvent ratio of 2:1, an extraction temperature of 60°C, and a concentration temperature of 50°C. Round raisins are cheaper than long ones, which makes concentrate production more economical. Furthermore, round raisins have more aroma and a lower color degree with increasing concentration and extraction temperatures. Finally, according to the mentioned factors, concentrate from round raisins is recommended.
Abstract: Response surface methodology was used for quantitative investigation of water and solids transfer during osmotic dehydration of beetroot in an aqueous salt solution. The effects of temperature (25–45°C), processing time (30–150 min), salt concentration (5–25%, w/w) and solution-to-sample ratio (5:1–25:1) on the osmotic dehydration of beetroot were estimated. Quadratic regression equations describing the effects of these factors on the water loss and solids gain were developed. It was found that the effects of temperature and salt concentration on water loss were more significant than the effects of processing time and solution-to-sample ratio. As for solids gain, processing time and salt concentration were the most significant factors. The osmotic dehydration process was optimized for water loss, solute gain, and weight reduction. The optimum conditions were found to be: temperature 35°C, processing time 90 min, salt concentration 14.31% and solution-to-sample ratio 8.5:1. At these optimum values, water loss, solids gain and weight reduction were found to be 30.86, 9.43 and 21.43 g/100 g initial sample, respectively.
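The quadratic regression equations referred to above follow the standard second-order response surface form (the fitted coefficient values are reported in the paper, not here):

\[
Y \;=\; \beta_0 \;+\; \sum_{i=1}^{4}\beta_i x_i \;+\; \sum_{i=1}^{4}\beta_{ii} x_i^{2} \;+\; \sum_{i<j}\beta_{ij} x_i x_j ,
\]

where \(Y\) is the response (water loss or solids gain), \(x_1,\dots,x_4\) are the coded levels of temperature, processing time, salt concentration and solution-to-sample ratio, and the \(\beta\)'s are the fitted regression coefficients.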
Abstract: LabVIEW and SIMULINK are two of the most widely used graphical programming environments for designing digital signal processing and control systems. Unlike conventional text-based programming languages such as C, Cµ and MATLAB, graphical programming involves block-based code development, allowing a more efficient mechanism to build and analyze control systems. In this paper, a LabVIEW environment has been employed as a graphical user interface for monitoring the operation of a controlled distillation column, by visualizing both the closed-loop performance and the user-selected control conditions, while the column dynamics have been modeled in the SIMULINK environment. This tool has been applied to the PID-based decoupled control of a binary distillation column. By means of such integrated environments, the control designer is able to monitor and control the plant behavior and optimize the response when both the quality improvement of distillation products and the operating efficiency are considered.
Abstract: This research uses computational linguistics, an area of study that employs a computer to process natural language, and aims at discerning the patterns that exist in the declarative sentences used in technical texts. The approach is mathematical, and the focus is on instructional texts found on web pages. The technique developed by the author, named the MAYA Semantic Technique, is used here and organized into four stages. In the first stage, the parts of speech in each sentence are identified. In the second stage, the subject of the sentence is determined. In the third stage, MAYA performs a frequency analysis on the remaining words to determine the verb and its object. In the fourth stage, MAYA performs a statistical analysis to determine the content of the web page. The advantage of the MAYA Semantic Technique lies in its use of mathematical principles to represent grammatical operations, which assists processing and accuracy when performed on unambiguous text. The MAYA Semantic Technique is part of a proposed architecture for an entire web-based intelligent tutoring system. On a sample set of sentences, the partial semantics derived using the MAYA Semantic Technique were approximately 80% accurate. The system currently processes technical text in one domain, namely Cµ programming. In this domain all the keywords and programming concepts are known and understood.
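The four-stage flow can be illustrated with the hypothetical Python sketch below, which uses NLTK's tagger and simple frequency counts; the actual MAYA Semantic Technique applies its own mathematical rules, which are not reproduced here.

    from collections import Counter
    import nltk   # assumes the punkt and averaged_perceptron_tagger NLTK data are installed

    def maya_like_pipeline(sentences):
        """Illustrative four-stage flow only, not the MAYA mathematics itself."""
        page_terms = Counter()
        sentence_semantics = []
        for sent in sentences:
            tagged = nltk.pos_tag(nltk.word_tokenize(sent))       # stage 1: parts of speech
            nouns = [w for w, t in tagged if t.startswith("NN")]
            subject = nouns[0] if nouns else None                  # stage 2: crude subject guess
            rest = Counter(w.lower() for w, t in tagged
                           if t.startswith(("VB", "NN")) and w != subject)
            top = [w for w, _ in rest.most_common(2)]              # stage 3: frequency analysis
            sentence_semantics.append((subject, top))              # candidate verb and object
            page_terms.update(rest)
        return sentence_semantics, page_terms.most_common(5)       # stage 4: page-level content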
Abstract: Wireless capsule endoscopy provides real-time images of the digestive tract. Capsule images are usually of low resolution and are diverse because the capsule travels through various regions of the human body. Color information has been a primary reference in predicting abnormalities such as bleeding, but color alone is often not sufficient for this purpose. In this study, we took morphological shapes into account as an additional but important criterion. First, we processed gastric images in order to identify the various objects in the image. Then, we analyzed the color information within each object. In this way, we could remove unnecessary information and increase the accuracy. Compared to our previous investigations, we could handle images of various degrees of brightness and improve our diagnostic algorithm.
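As an illustration of combining a colour criterion with morphological shape information (the thresholds and shape descriptor here are placeholders, not the paper's values), one might write:

    import numpy as np
    from scipy import ndimage

    def candidate_bleeding_regions(rgb, red_ratio=1.5, min_size=50):
        """Segment bright-red regions by a colour ratio, clean them with a
        morphological opening, then report a simple shape measure per object."""
        r = rgb[..., 0].astype(float)
        g = rgb[..., 1].astype(float)
        b = rgb[..., 2].astype(float)
        mask = (r > red_ratio * g) & (r > red_ratio * b)      # colour criterion
        mask = ndimage.binary_opening(mask, iterations=2)     # morphological cleanup
        labels, n = ndimage.label(mask)
        regions = []
        for i, sl in enumerate(ndimage.find_objects(labels), start=1):
            obj = labels[sl] == i
            area = int(obj.sum())
            if area >= min_size:
                box_area = obj.shape[0] * obj.shape[1]
                regions.append({"area": area, "compactness": area / box_area})
        return regions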
Abstract: This paper presents a new vision technique for robotic manipulation of randomly oriented objects in industrial applications. The proposed approach uses 2D and 3D vision to efficiently extract the 3D pose of an object in the presence of multiple randomly positioned objects. 2D vision permits quick selection of the objects of interest for 3D processing with a new modified ICP algorithm (FaR-ICP), thus significantly reducing the processing time. The extracted 3D pose is then sent to the robot manipulator for picking. The tests show that the proposed system achieves high performance.
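The FaR-ICP variant is not detailed in the abstract; the sketch below is the standard point-to-point ICP it modifies, using a k-d tree for correspondence search and an SVD for the rigid transform.

    import numpy as np
    from scipy.spatial import cKDTree

    def icp(source, target, n_iter=30):
        """Standard point-to-point ICP (not the paper's FaR-ICP): iteratively match
        nearest neighbours and solve for the rigid transform with an SVD."""
        src = source.copy()
        R_total, t_total = np.eye(3), np.zeros(3)
        tree = cKDTree(target)
        for _ in range(n_iter):
            _, idx = tree.query(src)               # closest target point for each source point
            matched = target[idx]
            mu_s, mu_t = src.mean(axis=0), matched.mean(axis=0)
            H = (src - mu_s).T @ (matched - mu_t)
            U, _, Vt = np.linalg.svd(H)
            R = Vt.T @ U.T
            if np.linalg.det(R) < 0:               # guard against reflections
                Vt[-1] *= -1
                R = Vt.T @ U.T
            t = mu_t - R @ mu_s
            src = src @ R.T + t
            R_total, t_total = R @ R_total, R @ t_total + t
        return R_total, t_total                    # object pose to send to the manipulator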
Abstract: Processing data by computers and performing reasoning tasks is an important aim in computer science. The Semantic Web is one step towards it. The use of ontologies to enhance information semantically is the current trend. Huge amounts of domain-specific, unstructured online data need to be expressed in a machine-understandable and semantically searchable format. Currently, users are often forced to search manually through the results returned by keyword-based search services. They also want to use their native languages to express what they are searching for. In this paper, an ontology-based automated question answering system for the software test documents domain is presented. The system allows users to enter a question about the domain in natural language and returns the exact answer to the question. Conversion of the natural language question into the ontology-based query is the challenging part of the system. To achieve this, a new algorithm for converting free text into an ontology-based search engine query is proposed. The algorithm is based on determining the appropriate question type and parsing the words of the question sentence.
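A toy illustration of mapping a detected question type and parsed subject to an ontology query is given below; the SPARQL template and the property and class names (test:TestCase, test:hasResult) are hypothetical, not taken from the paper's ontology.

    # Illustrative template-based conversion only.
    QUESTION_TEMPLATES = {
        "what_result": """
            PREFIX test: <http://example.org/testdoc#>
            SELECT ?result WHERE {{
                ?case a test:TestCase ;
                      test:name "{subject}" ;
                      test:hasResult ?result .
            }}""",
    }

    def question_to_query(question_type, subject):
        """Map a detected question type and parsed subject to an ontology query string."""
        return QUESTION_TEMPLATES[question_type].format(subject=subject)

    print(question_to_query("what_result", "login test"))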
Abstract: In order to accelerate similarity search in high-dimensional databases, we propose a new hierarchical indexing method. It is composed of offline and online phases, and our contribution concerns both. In the offline phase, after gathering the whole of the data into clusters and constructing a hierarchical index, the main originality of our contribution is a method for constructing bounding forms of clusters that avoid overlapping. For the online phase, our idea considerably improves the performance of similarity search; for this second phase, we have also developed an adapted search algorithm. Our method, named NOHIS (Non-Overlapping Hierarchical Index Structure), uses Principal Direction Divisive Partitioning (PDDP) as the clustering algorithm. The principle of PDDP is to divide the data recursively into two sub-clusters; the division is done by using the hyperplane orthogonal to the principal direction derived from the covariance matrix and passing through the centroid of the cluster being divided. The data of each of the two resulting sub-clusters are enclosed in a minimum bounding rectangle (MBR). The two MBRs are oriented along the principal direction; consequently, non-overlapping between the two forms is assured. Experiments use databases containing image descriptors. The results show that the proposed method outperforms sequential scan and the SR-tree in processing k-nearest neighbor queries.
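A minimal sketch of one PDDP division step with direction-oriented MBRs, assuming the data are rows of a NumPy array, could look like this:

    import numpy as np

    def pddp_split(points):
        """Split a cluster by the hyperplane orthogonal to its principal direction and
        passing through its centroid, then enclose each sub-cluster in an MBR expressed
        in the principal-axis coordinate frame."""
        centroid = points.mean(axis=0)
        centered = points - centroid
        # principal direction = leading right singular vector of the centered data
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        direction = vt[0]
        side = centered @ direction            # signed distance to the splitting hyperplane
        left, right = points[side <= 0], points[side > 0]

        def oriented_mbr(cluster):
            proj = (cluster - centroid) @ vt.T     # coordinates along the principal axes
            return proj.min(axis=0), proj.max(axis=0)

        return (left, oriented_mbr(left)), (right, oriented_mbr(right))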
Abstract: The identification and classification of weeds are of major technical and economic importance in the agricultural industry. To automate these activities, a weed control system based on features such as shape, color and texture is feasible. The goal of this paper is to build a real-time, machine vision weed control system that can detect weed locations. In order to accomplish this objective, a real-time robotic system is developed to identify and locate outdoor plants using machine vision technology and pattern recognition. An algorithm is developed to classify images into broad and narrow classes for real-time selective herbicide application. The developed algorithm has been tested on weeds at various locations, and the tests have shown the algorithm to be very effective in weed identification. Furthermore, the results show very reliable performance on weeds under varying field conditions. The analysis of the results shows over 90 percent classification accuracy over 140 sample images (broad and narrow), with 70 samples from each category of weeds.
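A toy sketch of the broad/narrow distinction using an excess-green vegetation mask and region eccentricity is shown below; the thresholds and the shape cue are illustrative assumptions, not the paper's actual decision rule.

    import numpy as np
    from skimage.measure import label, regionprops

    def classify_weed(rgb, ecc_threshold=0.9):
        """Segment vegetation by excess green, then use the eccentricity of the
        largest region as a leaf-shape cue for broad vs. narrow weeds."""
        r = rgb[..., 0].astype(float)
        g = rgb[..., 1].astype(float)
        b = rgb[..., 2].astype(float)
        vegetation = (2 * g - r - b) > 20            # excess-green vegetation mask
        regions = regionprops(label(vegetation))
        if not regions:
            return "no weed"
        biggest = max(regions, key=lambda p: p.area)
        # elongated (grass-like) blobs -> narrow class, rounder blobs -> broad class
        return "narrow" if biggest.eccentricity > ecc_threshold else "broad"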
Abstract: In this paper we propose a segmentation approach based on the vector quantization technique. Here we have used Kekre's fast codebook generation algorithm for segmenting low-altitude aerial images. This is used as a preprocessing step to form segmented homogeneous regions. Further, to merge adjacent regions, color similarity and volume difference criteria are used. Experiments performed with real aerial images of varied nature demonstrate that this approach does not result in over-segmentation or under-segmentation. Vector quantization seems to give far better results compared to the conventional on-the-fly watershed algorithm.
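A rough sketch of vector-quantization-based segmentation is given below; k-means is used here as a stand-in for Kekre's fast codebook generation algorithm, and the merging step checks only colour similarity (adjacency and volume checks are omitted for brevity).

    import numpy as np
    from sklearn.cluster import KMeans

    def vq_segment(rgb, codebook_size=8):
        """Quantize pixel colour vectors against a codebook to form homogeneous regions."""
        h, w, _ = rgb.shape
        vectors = rgb.reshape(-1, 3).astype(float)            # one colour vector per pixel
        km = KMeans(n_clusters=codebook_size, n_init=4, random_state=0).fit(vectors)
        return km.labels_.reshape(h, w)                        # label image of segmented regions

    def merge_similar_regions(labels, rgb, color_tol=20.0):
        """Merge regions whose mean colours are close (colour-similarity criterion only)."""
        means = {i: rgb[labels == i].reshape(-1, 3).mean(axis=0) for i in np.unique(labels)}
        mapping = {}
        for i in means:
            for j in set(mapping.values()):
                if np.linalg.norm(means[i] - means[j]) < color_tol:
                    mapping[i] = j
                    break
            else:
                mapping[i] = i
        return np.vectorize(mapping.get)(labels)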