Abstract: Classifying biomedical literature is a challenging task,
especially when a large number of biomedical articles must be
organized into a hierarchical structure. In this paper, we present an
approach for classifying a collection of biomedical text abstracts
downloaded from the Medline database with the help of ontology
alignment. To accomplish our goal, we construct two types
of hierarchies, the OHSUMED disease hierarchy and the Medline
abstract disease hierarchies from the OHSUMED dataset and the
Medline abstracts, respectively. Then, we enrich the OHSUMED
disease hierarchy before adapting it to the ontology alignment process
for finding probable concepts or categories. Subsequently, we compute
the cosine similarity between the vector in probable concepts (in the
“enriched" OHSUMED disease hierarchy) and the vector in Medline
abstract disease hierarchies. Finally, we assign a category to each new
Medline abstract based on the similarity score. The experimental
results show that the performance of our proposed approach for
hierarchical classification is slightly better than that of
multi-class flat classification.
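The similarity step described above can be sketched with cosine similarity between term-frequency vectors; the term vectors and category names below are hypothetical illustrations, not taken from the OHSUMED data:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two term-frequency dictionaries."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def assign_category(abstract_vec, concept_vecs):
    """Assign the probable concept whose vector is most similar to the abstract."""
    return max(concept_vecs,
               key=lambda c: cosine_similarity(abstract_vec, concept_vecs[c]))
```

For example, an abstract vector `{"hepatitis": 3, "liver": 2}` matched against concept vectors for "liver diseases" and "heart diseases" would be assigned to "liver diseases".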
Abstract: In an era of knowledge explosion, data grow rapidly day by
day. Since data storage is a limited resource, reducing the data space
used in the process becomes a challenging issue.
Data compression provides a good solution which can lower the
required space. Data mining has many useful applications in recent
years because it can help users discover interesting knowledge in large
databases. However, existing compression algorithms are not
appropriate for data mining. In [1, 2], two different approaches were
proposed to compress databases and then perform the data mining
process. However, both lack the ability to decompress the data to
their original state and to improve data mining performance. In this
research, a new approach called Mining Merged Transactions with the
Quantification Table (M2TQT) is proposed to solve these problems.
M2TQT uses the relationship of transactions to merge related
transactions and builds a quantification table to prune candidate
itemsets that cannot become frequent, in order to improve
the performance of mining association rules. The experiments show
that M2TQT performs better than existing approaches.
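The abstract does not give M2TQT's internal details, so the following is only a loose sketch of the two ideas it names: merging identical transactions (with a multiplicity count) and pruning candidate itemsets that cannot become frequent; the pruning here uses the standard Apriori property rather than the paper's quantification table:

```python
from collections import Counter
from itertools import combinations

def merge_transactions(transactions):
    """Merge identical transactions, keeping a multiplicity count
    (a hypothetical stand-in for M2TQT's merging step)."""
    return Counter(frozenset(t) for t in transactions)

def frequent_pairs(merged, minsup):
    """Count single items over the merged form, then prune candidate
    pairs containing any infrequent item (Apriori property)."""
    item_count = Counter()
    for tx, mult in merged.items():
        for item in tx:
            item_count[item] += mult
    frequent_items = {i for i, c in item_count.items() if c >= minsup}
    pair_count = Counter()
    for a, b in combinations(sorted(frequent_items), 2):  # pruned candidates
        for tx, mult in merged.items():
            if a in tx and b in tx:
                pair_count[(a, b)] += mult
    return {p: c for p, c in pair_count.items() if c >= minsup}
```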
Abstract: One major difficulty facing developers of
concurrent and distributed software is the analysis of concurrency-based
faults such as deadlocks. Petri nets are used extensively in the
verification of correctness of concurrent programs. ECATNets [2] are
a category of algebraic Petri nets based on a sound combination of
algebraic abstract data types and high-level Petri nets. ECATNets have
'sound' and 'complete' semantics owing to their integration into
rewriting logic [12] and its programming language Maude [13].
Rewriting logic is considered one of the most powerful logics for the
description, verification, and programming of concurrent systems.
In [4], we proposed a method for translating Ada-95 tasking
programs into the ECATNets formalism (Ada-ECATNet). In this paper,
we show that the ECATNets formalism provides a more compact
translation of Ada programs compared to other approaches based
on simple Petri nets or Colored Petri nets (CPNs). Such a translation
not only reduces the size of the program but also the number
of program states. We also show how this compact Ada-ECATNet
can be reduced further by applying reduction rules to it. This double
reduction of the Ada-ECATNet considerably reduces the
memory space and run time of the corresponding Maude program.
Abstract: In this paper, we propose a novel algorithm for
delineating the endocardial wall from a human heart ultrasound scan.
We assume that the gray levels in the ultrasound images are
independent and identically distributed random variables with
different Rician Inverse Gaussian (RiIG) distributions. Both synthetic
and real clinical data will be used for testing the algorithm. Algorithm
performance will be evaluated using, first, an expert radiologist's
assessment of a soft copy of an ultrasound scan during the scanning
process and, second, the doctor's conclusion after reviewing a printed
copy of the same scan. Successful implementation of this algorithm should
make it possible to differentiate normal from abnormal soft tissue and
help identify the disease, determine its stage, and decide how best
to treat the patient. We hope that an automated system that uses this
algorithm will be useful in public hospitals especially in Third World
countries where problems such as shortage of skilled radiologists and
shortage of ultrasound machines are common. These public hospitals
are usually the first and last stop for most patients in these countries.
Abstract: In this paper, we propose an effective system for digital music retrieval. The proposed system is divided into a client part and a server part. The client part consists of pre-processing and content-based feature extraction stages. In the pre-processing stage, we minimize the time-code gap that occurs among identical music contents. As content-based features, first-order differentiated MFCCs are used; these approximately represent the envelope of the music feature sequences. The server part includes the music server and the music matching stage. Features extracted from 1,000 digital music files are stored in the music server. In the music matching stage, we find the retrieval result through a similarity measure based on DTW. In the experiments, we used 450 queries, made by mixing different compression standards and sound qualities from 50 digital music files. Retrieval accuracy was 97%, and the average retrieval time was 15 ms per query. Our experiments prove that the proposed system is effective for digital music retrieval and robust in various web user environments.
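The matching step can be illustrated with a plain Dynamic Time Warping (DTW) distance over 1-D feature sequences; the short integer sequences below stand in for the first-order differentiated MFCC features, which in practice are multi-dimensional:

```python
def dtw_distance(seq_a, seq_b):
    """Dynamic Time Warping distance between two 1-D feature sequences."""
    n, m = len(seq_a), len(seq_b)
    inf = float("inf")
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(seq_a[i - 1] - seq_b[j - 1])
            # extend the cheapest of the three admissible warping moves
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]

def best_match(query, database):
    """Return the id of the stored feature sequence closest to the query."""
    return min(database, key=lambda k: dtw_distance(query, database[k]))
```

Because the warping path can repeat samples, a query played at a slightly different rate still aligns with its stored counterpart.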
Abstract: This paper presents a remote on-line diagnostic system
for vehicles via the use of On-Board Diagnostic (OBD), GPS, and 3G
techniques. The main parts of the proposed system are on-board
computer, a vehicle monitor server, and a vehicle status browser. First,
the on-board computer obtains the location of the driver and the vehicle
status from the GPS receiver and the OBD interface, respectively. Then the
on-board computer will connect with the vehicle monitor server
through 3G network to transmit the real time vehicle system status.
Finally, the vehicle status browser shows the remote vehicle status,
including vehicle speed, engine rpm, battery voltage, engine coolant
temperature, and diagnostic trouble codes. According to the
experimental results, the proposed system helps fleet managers and
mechanics understand the remote vehicle status. The system can
therefore decrease fleet management and vehicle repair time,
because fleet managers and mechanics can find the diagnostic
trouble messages in time.
Abstract: Monitoring tool flank wear without affecting
throughput is considered a prudent method in production
technology; the examination has to be done without affecting the
machining process. In this paper we propose a novel method to
determine tool flank wear by observing the sound signals
emitted during the turning process. The work-piece materials used
here are steel and aluminum, and the cutting insert was of carbide
material. Two different cutting speeds were used in this work. The
feed rate and the cutting depth were constant whereas the flank wear
was a variable. The sound signals emitted by a fresh tool (0 mm flank
wear), a slightly worn tool (0.2-0.25 mm flank wear), and a severely
worn tool (0.4 mm and above flank wear) during the turning process were
recorded separately using a highly sensitive microphone. Analysis
using Singular Value Decomposition was done on these sound
signals to extract the characteristic sound components. Observation of
the results showed that an increase in tool flank wear correlates with
an increase in the values of the SVD features extracted from the sound
signals for both materials. Hence it can be concluded that monitoring
tool flank wear during the turning process using SVD features
with Fuzzy C-means classification on the emitted sound signal is
a promising and relatively simple method.
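A toy version of the SVD feature-extraction step might look as follows; the 2-row trajectory matrix and the closed-form 2x2 singular-value computation are simplifying assumptions for illustration, not the paper's actual matrix construction:

```python
import math

def svd_features(frame):
    """Singular values of the 2-row trajectory matrix built from a signal
    frame (rows are the frame and its one-sample shift). Singular values
    are the square roots of the eigenvalues of A A^T, a 2x2 matrix here."""
    top, bottom = frame[:-1], frame[1:]
    p = sum(x * x for x in top)            # (A A^T)[0][0]
    q = sum(x * y for x, y in zip(top, bottom))  # off-diagonal entry
    r = sum(y * y for y in bottom)         # (A A^T)[1][1]
    tr, det = p + r, p * r - q * q
    disc = math.sqrt(max(tr * tr - 4.0 * det, 0.0))
    s1 = math.sqrt((tr + disc) / 2.0)
    s2 = math.sqrt(max((tr - disc) / 2.0, 0.0))
    return s1, s2
```

A louder (higher-energy) frame yields a larger leading singular value, which is the kind of monotone behavior the wear-correlation observation relies on.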
Abstract: Forecasting the values of the indicators that
characterize the effectiveness of an organization's performance is of
great importance for its successful development. Such forecasting
is necessary in order to assess the current state and to foresee future
developments, so that measures to improve the organization's
activity could be undertaken in time. The article presents an
overview of the applied mathematical and statistical methods for
developing forecasts. Special attention is paid to artificial neural
networks as a forecasting tool. Their strengths and weaknesses are
analyzed and a synopsis is made of the application of artificial neural
networks in the field of forecasting of the values of different
education efficiency indicators. A method of evaluation of the
activity of universities using the Balanced Scorecard is proposed and
Key Performance Indicators for assessment of e-learning are
selected. Resulting indicators for the evaluation of efficiency of the
activity are proposed. An artificial neural network is constructed and
applied in the forecasting of the values of indicators for e-learning
efficiency on the basis of the KPI values.
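As an illustration of the forecasting idea only (not the paper's actual network architecture or KPI data), a minimal one-hidden-layer network trained by stochastic gradient descent to predict the next value of an indicator series could be sketched as:

```python
import math, random

def train_forecaster(series, window=3, hidden=4, epochs=2000, lr=0.1, seed=0):
    """Train a tiny tanh-hidden-layer network to predict the next value of a
    KPI series from the previous `window` values. All sizes are assumptions."""
    rng = random.Random(seed)
    w1 = [[rng.uniform(-0.5, 0.5) for _ in range(window)] for _ in range(hidden)]
    b1 = [0.0] * hidden
    w2 = [rng.uniform(-0.5, 0.5) for _ in range(hidden)]
    b2 = 0.0
    samples = [(series[i:i + window], series[i + window])
               for i in range(len(series) - window)]
    for _ in range(epochs):
        for x, target in samples:
            h = [math.tanh(sum(w * xj for w, xj in zip(w1[i], x)) + b1[i])
                 for i in range(hidden)]
            y = sum(w2[i] * h[i] for i in range(hidden)) + b2
            err = y - target
            # backpropagate the squared-error gradient
            for i in range(hidden):
                grad_h = err * w2[i] * (1.0 - h[i] * h[i])
                w2[i] -= lr * err * h[i]
                b1[i] -= lr * grad_h
                for j in range(window):
                    w1[i][j] -= lr * grad_h * x[j]
            b2 -= lr * err
    def predict(x):
        h = [math.tanh(sum(w * xj for w, xj in zip(w1[i], x)) + b1[i])
             for i in range(hidden)]
        return sum(w2[i] * h[i] for i in range(hidden)) + b2
    return predict
```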
Abstract: In this paper, Speed Sensorless Indirect Field Oriented Control (IFOC) of a Permanent Magnet Synchronous Machine (PMSM) is studied. The closed-loop scheme of the drive system utilizes fuzzy speed and current controllers. Due to the well-known drawbacks of the speed sensor, an algorithm is proposed in this paper to eliminate it. In fact, based on the model of the PMSM, the stator currents and rotor speed are estimated simultaneously using an adaptive Luenberger observer for the currents and an MRAS (Model Reference Adaptive System) observer for the rotor speed. To overcome the sensitivity of this algorithm to parameter variation, an adaptive scheme for on-line stator resistance tuning is proposed. The validity of the proposed method is verified by extensive simulation work.
Abstract: Abrasive waterjet is a novel machining process capable of processing a wide range of hard-to-machine materials. This research addresses modeling and optimization of the process parameters for this machining technique. To model the process, a set of experimental data has been used to evaluate the effects of various parameter settings on cutting 6063-T6 aluminum alloy. The process variables considered here include nozzle diameter, jet traverse rate, jet pressure, and abrasive flow rate. Depth of cut, as one of the most important output characteristics, has been evaluated for different parameter settings. The Taguchi method and regression modeling are used to establish the relationships between input and output parameters. The adequacy of the model is evaluated using the analysis of variance (ANOVA) technique. The pairwise effects of process parameter settings on process response outputs are also shown graphically. The proposed model is then embedded into a Simulated Annealing algorithm to optimize the process parameters. The optimization is carried out for any desired value of depth of cut, the objective being to determine the proper levels of process parameters needed to obtain a certain depth of cut. Computational results demonstrate that the proposed solution procedure is quite effective in solving such multi-variable problems.
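A minimal sketch of the optimization step follows; the linear `predict` model in the test is a made-up stand-in for the paper's fitted regression model, and the linear cooling schedule and step size are assumptions:

```python
import math, random

def simulated_annealing(predict, bounds, target_depth, iters=5000, t0=1.0, seed=1):
    """Search for parameter settings whose predicted depth of cut is closest
    to a target. `predict` is any model mapping a parameter vector to depth."""
    rng = random.Random(seed)
    x = [rng.uniform(lo, hi) for lo, hi in bounds]
    err = abs(predict(x) - target_depth)
    best, best_err = list(x), err
    for k in range(iters):
        t = t0 * (1.0 - k / iters) + 1e-9  # linear cooling schedule
        # Gaussian perturbation, clamped to the parameter bounds
        cand = [min(max(xi + rng.gauss(0.0, 0.1 * (hi - lo)), lo), hi)
                for xi, (lo, hi) in zip(x, bounds)]
        cand_err = abs(predict(cand) - target_depth)
        # accept improvements always, worse moves with Boltzmann probability
        if cand_err < err or rng.random() < math.exp((err - cand_err) / t):
            x, err = cand, cand_err
            if err < best_err:
                best, best_err = list(x), err
    return best, best_err
```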
Abstract: In this work we investigated the behavior of methane
hydrates dispersed in crude oils from different fields at temperatures
below 0°C. In the case of a crude oil emulsion, the size of the water
droplets is in the range of 50-100 µm, and the hydrate particles formed
from the droplets are of the same size. Self-preservation is not
expected in this size range. However, self-preservation of hydrates
with a particle size of 24±18 µm (electron microscopy data) in
suspensions is observed. Similar results were obtained for four
different kinds of crude oil and for a model system of asphaltenes,
resins, and wax in n-decane.
This result can help in developing effective methods to prevent
the formation of gas-hydrate plugs in pipelines, and to eliminate them,
under low-temperature conditions (e.g. in Eastern Siberia). The
experimental results may also be used in working out a technology
for associated petroleum gas recovery.
Abstract: This paper considers a scheduling problem in a flexible
flow shop environment with the aim of minimizing two important
criteria: makespan and cumulative tardiness of jobs. Since
the proposed problem is known to be NP-hard in the literature,
we have to develop a meta-heuristic to solve it. We consider the
general structure of the Genetic Algorithm (GA) and develop a new
version of it based on Data Envelopment Analysis (DEA). The two
objective functions are treated as two different inputs for each
Decision Making Unit (DMU). In this paper we focus on the efficiency
scores of the DMUs and the efficient-frontier concept of the DEA
technique. After introducing the method, we define two different
scenarios considering two types of mutation operator. We also provide
an experimental design with some computational results to show the
performance of the algorithm. The results show that the algorithm
runs in reasonable time.
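The abstract leaves the DEA computation unspecified; the sketch below shows only the efficient-frontier idea via Pareto dominance over the two minimized objectives (makespan, cumulative tardiness), not the DEA linear programs that yield the actual efficiency scores:

```python
def dominates(a, b):
    """a dominates b if it is no worse in both objectives and strictly
    better in at least one (both objectives are minimized)."""
    return a[0] <= b[0] and a[1] <= b[1] and (a[0] < b[0] or a[1] < b[1])

def efficient_frontier(solutions):
    """Non-dominated (makespan, tardiness) pairs among the GA population."""
    return [s for s in solutions
            if not any(dominates(o, s) for o in solutions if o != s)]
```

In a DEA-guided GA, individuals on (or near) this frontier would receive the highest efficiency scores and hence the best selection fitness.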
Abstract: Image retrieval is a topic of currently high scientific interest. The important steps in an image retrieval system are the extraction of discriminative features and a feasible similarity metric for retrieving the database images that are similar in content to the query image. Gabor filtering is a widely adopted technique for feature extraction from texture images. The recently proposed sparsity-promoting l1-norm minimization technique finds the sparsest solution of an under-determined system of linear equations. In the present paper, the l1-norm minimization technique is used as a similarity metric for image retrieval. It is demonstrated through simulation results that the l1-norm minimization technique provides a promising alternative to existing similarity metrics. In particular, the cases where the l1-norm minimization technique works better than the Euclidean distance metric are singled out.
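The l1-minimization step can be approximated with coordinate-descent LASSO, a common practical surrogate for basis pursuit (an illustrative stand-in, not necessarily the solver used in the paper); the columns of `A` would hold database image features and `y` the query features, with large coefficients indicating the most relevant database images:

```python
def lasso_cd(A, y, lam=0.01, iters=500):
    """Coordinate-descent LASSO: minimizes ||A x - y||^2 / 2 + lam * ||x||_1
    for a small dense under-determined system, via soft thresholding."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    col_norm = [sum(A[i][j] ** 2 for i in range(m)) for j in range(n)]
    for _ in range(iters):
        for j in range(n):
            # correlation of column j with the residual excluding coordinate j
            rho = sum(A[i][j] * (y[i] - sum(A[i][k] * x[k]
                                            for k in range(n) if k != j))
                      for i in range(m))
            if rho < -lam:
                x[j] = (rho + lam) / col_norm[j]
            elif rho > lam:
                x[j] = (rho - lam) / col_norm[j]
            else:
                x[j] = 0.0  # soft-thresholded to exactly zero
    return x
```

For `A = [[1,0,1],[0,1,1]]` and `y = [1,0]`, the sparsest solution is concentrated on the first column, and the LASSO solution recovers it.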
Abstract: In recent years, Radio Frequency Identification (RFID)
has attracted the interest of many researchers, especially for
indoor positioning, since the innate properties of RFID are
well suited to this task. Many algorithms and schemes have been
proposed for RFID-based positioning systems, but most of them
lack environmental considerations, which induces inaccuracy in
application. In this research, a number of algorithms and schemes for
RFID indoor positioning are discussed to assess their effectiveness
in application, and some rules for achieving accurate positioning
are summarized. In addition, a new term, "Noise Factor", is introduced
to describe the signal loss between the target and an obstacle. As a
result, experimental data, and not only simulation results, can be
obtained, and the performance of the positioning system can be
characterized more substantially.
Abstract: One of the major disadvantages of the minimally
invasive surgery (MIS) is the lack of tactile feedback to the surgeon.
In order to identify and avoid any damage to the grasped complex
tissue by endoscopic graspers, it is important to measure the local
softness of tissue during MIS. One way to display the measured
softness to the surgeon is a graphical method. In this paper, a new
tactile sensor is reported. It consists of an
array of four softness sensors, which are integrated into the jaws of a
modified commercial endoscopic grasper. Each individual softness
sensor consists of two piezoelectric polymer Polyvinylidene Fluoride
(PVDF) films, which are positioned below a rigid and a compliant
cylinder. The compliant cylinder is fabricated using a micro molding
technique. The combination of output voltages from PVDF films is
used to determine the softness of the grasped object. The theoretical
analysis of the sensor is also presented.
A method has been developed with the aim of reproducing the
tactile softness to the surgeon by using a graphical method. In this
approach, the proposed system, including the interfacing and the data
acquisition card, receives signals from the array of softness sensors.
After the signals are processed, the tactile information is displayed
by means of a color coding method. It is shown that the degrees of
softness of the grasped objects/tissues can be visually differentiated
and displayed on a monitor.
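One possible realization of such a color-coding step is a linear map from a normalized softness score to an RGB triple; the red-to-green palette here is an assumption for illustration, not necessarily the paper's exact scheme:

```python
def softness_to_rgb(s):
    """Map a normalized softness score in [0, 1] to an RGB triple,
    shading from red (hard, 0) to green (soft, 1)."""
    s = min(max(s, 0.0), 1.0)  # clamp out-of-range sensor readings
    return (int(round(255 * (1.0 - s))), int(round(255 * s)), 0)
```

Each of the four softness sensors in the grasper jaw would then drive one colored region on the monitor, letting the surgeon compare tissue softness at a glance.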
Abstract: In this paper, a new method of controlling the position of an AC servomotor using a Field Programmable Gate Array (FPGA) is presented. The FPGA controller is used to generate the direction signal and the number of pulses required to rotate through a given angle. The pulses are sent as a square wave: the number of pulses determines the angle of rotation, and the frequency of the square wave determines the speed of rotation. The proposed control scheme has been realized using a XILINX FPGA SPARTAN XC3S400 and tested using a MUMA012PIS model Alternating Current (AC) servomotor. Experimental results show that the position of the AC servomotor can be controlled effectively.
Keywords: Alternating Current (AC), Field Programmable Gate Array (FPGA), Liquid Crystal Display (LCD).
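The pulse-count and frequency arithmetic described above can be sketched as follows; the `pulses_per_rev` value is an assumed drive setting, not a figure from the paper:

```python
def pulses_for_angle(angle_deg, pulses_per_rev=2000):
    """Direction bit and pulse count needed to rotate through a signed angle."""
    direction = 1 if angle_deg >= 0 else 0
    count = round(abs(angle_deg) * pulses_per_rev / 360.0)
    return direction, count

def pulse_frequency(speed_rpm, pulses_per_rev=2000):
    """Square-wave frequency (Hz) that yields a target speed in rpm."""
    return speed_rpm / 60.0 * pulses_per_rev
```

For example, with 2000 pulses per revolution, a 90° rotation needs 500 pulses, and 60 rpm requires a 2 kHz square wave.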
Abstract: Road traffic accidents are a major cause of death worldwide. In an attempt to reduce accidents, some research efforts have focused on creating Advanced Driver Assistance Systems (ADAS) able to detect vehicle, driver and environmental conditions and to use this information to identify cues for potential accidents. This paper presents continued work on a novel Non-intrusive Intelligent Driver Assistance and Safety System (Ni-DASS) for assessing driver point of regard within vehicles. It uses an on-board CCD camera to observe the driver's face. A template-matching approach is used to compare the driver's eye-gaze pattern with a set of eye-gesture templates of the driver looking at different focal points within the vehicle. The windscreen is divided into cells, and comparison of the driver's eye-gaze pattern with templates of the driver's eyes looking at each cell is used to determine the driver's point of regard on the windscreen. Results indicate that the proposed technique could be useful in situations where low-resolution estimates of the driver's point of regard are adequate, for instance to allow ADAS systems to alert the driver if he/she has failed to observe a hazard.
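The cell-matching step can be sketched with a sum-of-squared-differences template comparison; the tiny grayscale patches below are hypothetical stand-ins for real eye-region images:

```python
def ssd(a, b):
    """Sum of squared differences between two equal-size grayscale patches
    (lists of pixel rows)."""
    return sum((pa - pb) ** 2
               for ra, rb in zip(a, b) for pa, pb in zip(ra, rb))

def point_of_regard(eye_patch, cell_templates):
    """Return the windscreen cell whose eye-gesture template best matches
    the observed eye patch."""
    return min(cell_templates, key=lambda cell: ssd(eye_patch, cell_templates[cell]))
```

The resolution of the estimate is set by the windscreen cell grid: more cells give finer gaze localization at the cost of more templates to compare.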
Abstract: Assembly line balancing is a very important issue in
mass production systems due to production cost. Although many
studies have been done on this topic, assembly line
balancing problems are so complex that they are categorized as
NP-hard, and researchers strongly recommend using heuristic
methods. This paper presents a new heuristic approach called the
critical task method (CTM) for solving U-shape assembly line
balancing problems. The performance of the proposed heuristic
method is tested by solving a number of test problems and comparing
them with 12 other heuristics available in the literature to confirm the
superior performance of the proposed heuristic. Furthermore, to
prove the efficiency of the proposed CTM, the objectives are
extended to minimizing the number of workstations (or, equivalently,
maximizing line efficiency) and minimizing the smoothness index.
Finally, it is shown that the proposed heuristic is more efficient than
the others in solving the U-shape assembly line balancing problem.
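The two extended objectives can be computed with the standard line-balancing formulas below; the station times and cycle time in the test are purely illustrative:

```python
import math

def smoothness_index(station_times):
    """Smoothness index: root of the summed squared gaps between each
    workstation time and the maximum (bottleneck) station time."""
    t_max = max(station_times)
    return math.sqrt(sum((t_max - t) ** 2 for t in station_times))

def line_efficiency(station_times, cycle_time):
    """Line efficiency: total task time over (number of stations x cycle time)."""
    return sum(station_times) / (len(station_times) * cycle_time)
```

A perfectly balanced line has smoothness index 0; fewer stations with the same total task time raise line efficiency, which is why the two objectives are equivalent.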
Abstract: An adaptive Fuzzy Inference Perceptual model has
been proposed for watermarking of digital images. The model
depends on the human visual characteristics of image sub-regions in
the frequency multi-resolution wavelet domain. In the proposed
model, a multi-variable fuzzy based architecture has been designed to
produce a perceptual membership degree for both the candidate
embedding sub-regions and the watermark embedding strength factor.
Benchmark images of different sizes with watermarks of different
sizes have been applied to the model. Several experimental
attacks, such as JPEG compression, noise, and rotation, have been
applied to ensure the robustness of the scheme. In addition, the
model has been compared with different watermarking schemes. The
proposed model showed its robustness to attacks and at the same time
achieved a high level of imperceptibility.
Abstract: Nowadays companies strive to survive in a
competitive global environment. To speed up product
development/modifications, it is suggested to adopt a collaborative
product development approach. However, despite advances in IT,
many CAx systems still work separately and
locally. Collaborative design and manufacture requires a product
information model that supports related CAx product data models. To
solve this problem many solutions have been proposed, of which the
most successful is adopting the STEP standard as a product data model
for developing a collaborative CAx platform. However, several factors
that usually slow down the implementation of the STEP standard in
collaborative data exchange, management, and integration should be
considered: the evolution of STEP's Application Protocols (APs) over
time, the huge number of STEP APs and conformance classes (CCs), the
high cost of implementation, the costly process of converting older
CAx software files to the STEP neutral file format, and the lack of
STEP knowledge. In this paper the requirements for a successful
collaborative CAx system are discussed. The STEP standard's
capability for product data integration
and its shortcomings as well as the dominant platforms for supporting
CAx collaboration management and product data integration are
reviewed. Finally, a platform named LAYMOD is proposed to fulfil the
requirements of a collaborative CAx environment and to integrate the
product data. It is a layered platform designed to enable
global collaboration among different CAx software
packages/developers. It also adopts the STEP modular architecture
and the XML data structures to enable collaboration between CAx
software packages as well as overcoming the STEP standard
limitations. The architecture and procedures of the LAYMOD platform
for managing collaboration and avoiding conflicts in product data
integration are introduced.