Abstract: Electromyography (EMG) signal processing has been investigated extensively for various applications such as rehabilitation systems. In particular, the wavelet transform has served as a powerful technique for scrutinizing EMG signals, since it is consistent with the non-stationary nature of EMG. In this paper, the efficiency of the wavelet transform in surface EMG feature extraction is investigated across four levels of wavelet decomposition, and a comparative study between different mother wavelets is conducted. To identify the best function and level of wavelet analysis, two evaluation criteria, the scatter plot and the RES index, are employed. Four wavelet families, namely Daubechies, Coiflets, Symlets and Biorthogonal, are studied in the wavelet decomposition stage. The results show that only features from the first and second levels of wavelet decomposition yield good performance, and that some functions of the various wavelet families can improve the class separability of different hand movements.
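The multi-level decomposition step can be sketched as follows. This minimal NumPy implementation uses the Haar wavelet for brevity (the abstract studies the Daubechies, Coiflets, Symlets and Biorthogonal families); the synthetic signal and the RMS-per-band feature are illustrative assumptions, not the paper's exact feature set.

```python
import numpy as np

def haar_dwt(x):
    """One level of the Haar DWT: approximation and detail coefficients."""
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return approx, detail

def wavedec(x, levels):
    """Multi-level decomposition: returns [cA_n, cD_n, ..., cD_1]."""
    details = []
    approx = np.asarray(x, dtype=float)
    for _ in range(levels):
        approx, d = haar_dwt(approx)
        details.append(d)
    return [approx] + details[::-1]

rng = np.random.default_rng(0)
emg = rng.standard_normal(1024)          # stand-in for a surface EMG segment
coeffs = wavedec(emg, levels=4)          # four decomposition levels
features = [float(np.sqrt(np.mean(c ** 2))) for c in coeffs]  # RMS per band
print(len(features))                     # 5: one approximation + 4 details
```

Swapping in a different mother wavelet only changes the filter pair inside `haar_dwt`; the feature-extraction pipeline stays the same.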
Abstract: This paper describes the effort of a service-oriented engineering department to improve the sharing and transfer of knowledge. Although the department consists of only six employees, it provides services for various chemical applications in the oil and gas business. The services provided span the Asia Pacific region, mainly Indonesia, Myanmar, Vietnam, Brunei, Thailand and Singapore. Currently there are no effective tools or integrated systems that support the sharing, transfer and maintenance of knowledge, so the department has decided to preserve this valuable knowledge by developing a Knowledge Management System (KMS). This paper presents the development of a KMS to support knowledge sharing in a service-oriented engineering department of an oil and gas company. Embedded features of the KMS, such as a blog and a forum, encourage an iterative process of knowledge sharing among the employees of the department. The information and knowledge being shared, discussed and communicated is then archived for future re-use. Re-use of this knowledge allows the department to reduce redundant effort in providing consistent, up-to-date and cost-effective best solutions to its clients.
Abstract: This study proposes a materials procurement contract model to which the zero-cost collar option is applied for hedging price fluctuation risks in construction. The contract model is based on the collar option, which consists of the call option striking zone of the construction company (the buyer) following a materials price increase, and the put option striking zone of the material vendor (the supplier) following a materials price decrease. This study first determined the call option strike price Xc of the construction company by a simple approach: it uses the profit predicted at the project starting point, and then determines the put option strike price Xp that has an identical option value, which completes the zero-cost material contract. The analysis results indicate that the cost saving of the construction company increased as Xc decreased, because the critical level of the steel materials price increase was set at a low level. However, as Xc decreased, the Xp of a put option with an identical option value gradually increased. Cost saving increased as Xc decreased; however, as Xp gradually increased, the construction company's risk of loss increased when the steel materials price decreased. Meanwhile, the construction company's cost saving was not affected by volatility. This result originated in the zero-cost feature of the two-way collar option contract. In the case of a regular one-way option, the transaction cost had to be subtracted from the cost saving; this transaction cost originated from an option value that fluctuated with the volatility, so the cost saving of the one-way option was affected by the volatility. Meanwhile, even though the collar option with zero transaction cost cut the connection between volatility and cost saving, there remained a risk of the put option being exercised.
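The zero-cost construction can be illustrated as follows, assuming Black-Scholes pricing and purely illustrative market inputs (the paper's pricing model and parameters may differ): given the call strike Xc, bisection finds the put strike Xp whose option value matches the call's, so the two-way collar carries zero net premium.

```python
import math

def _norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_price(S, K, T, r, sigma, kind):
    """Black-Scholes price of a European call or put."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    if kind == "call":
        return S * _norm_cdf(d1) - K * math.exp(-r * T) * _norm_cdf(d2)
    return K * math.exp(-r * T) * _norm_cdf(-d2) - S * _norm_cdf(-d1)

def zero_cost_put_strike(S, Xc, T, r, sigma, lo=1e-6):
    """Bisect for Xp with put(Xp) == call(Xc); put value rises with strike."""
    target = bs_price(S, Xc, T, r, sigma, "call")
    hi = Xc
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if bs_price(S, mid, T, r, sigma, "put") < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Illustrative steel-price inputs: spot 100, one-year horizon (assumptions).
S, T, r, sigma = 100.0, 1.0, 0.03, 0.25
Xc = 110.0
Xp = zero_cost_put_strike(S, Xc, T, r, sigma)
```

Lowering Xc raises the call's value, which forces a higher Xp to keep the premiums equal, matching the trade-off described in the abstract.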
Abstract: In this paper, a new approach for target recognition based on the Empirical Mode Decomposition (EMD) algorithm of Huang et al. [11] and the energy tracking operator of Teager [13]-[14] is introduced. The conjunction of these two methods is called Teager-Huang analysis. This approach is well suited to the analysis of non-stationary signals. The impulse response (IR) of the target is first band-pass filtered into sub-signals (components) called Intrinsic Mode Functions (IMFs) with well-defined Instantaneous Frequency (IF) and Instantaneous Amplitude (IA). Each IMF is a zero-mean AM-FM component. In the second step, the energy of each IMF is tracked using the Teager Energy Operator (TEO). The IF and IA, useful for describing the time-varying characteristics of the signal, are estimated using the Energy Separation Algorithm (ESA) of Maragos et al. [16]-[17]. In the third step, a set of features such as skewness and kurtosis is extracted from the IF, IA and IMF energy functions. The Teager-Huang analysis is tested on a set of synthetic IRs of Sonar targets with different physical characteristics (density, velocity, shape, etc.). PCA is first applied to the features to discriminate between manufactured and natural targets. The manufactured patterns are then classified into spheres and cylinders. One hundred percent correct recognition is achieved with twenty-three echoes, where the sixteen IRs used for training are noise-free and the seven IRs used for the testing phase are corrupted with white Gaussian noise.
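The Teager energy operator at the heart of the second step has a simple discrete form, Psi[x](n) = x(n)^2 - x(n-1)*x(n+1). The sketch below applies it to a synthetic tone, for which the operator is known to be exactly A^2 sin^2(omega); the tone itself is an illustrative stand-in for an IMF.

```python
import numpy as np

def teager_energy(x):
    """Discrete Teager energy operator; output is two samples shorter."""
    x = np.asarray(x, dtype=float)
    return x[1:-1] ** 2 - x[:-2] * x[2:]

n = np.arange(1000)
omega, a = 0.2, 1.0
tone = a * np.cos(omega * n)        # zero-mean AM-FM component (constant case)
psi = teager_energy(tone)

# For a pure tone the TEO is constant: a^2 * sin(omega)^2.
expected = a ** 2 * np.sin(omega) ** 2
```

Because the operator needs only three adjacent samples, it tracks energy with essentially instantaneous time resolution, which is what makes it attractive for the IF/IA estimation done by the ESA.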
Abstract: This paper proposes a new facial feature extraction approach, the Walsh-Hadamard Transform (WHT). This approach is based on the correlation between local pixels of the face image. Its primary advantage is the simplicity of its computation. The paper compares the proposed approach, the WHT, which has traditionally been used in data compression, with two other well-known approaches: Principal Component Analysis (PCA) and the Discrete Cosine Transform (DCT), using the face database of the Olivetti Research Laboratory (ORL). In spite of its simple computation, the proposed WHT algorithm gave results very close to those obtained by PCA and the DCT. This paper initiates research into the WHT and the family of frequency transforms and examines their suitability for feature extraction in face recognition applications.
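The computational simplicity claimed for the WHT comes from its butterfly structure, which needs only additions and subtractions. A minimal sketch of the fast Walsh-Hadamard transform follows; the 4-sample input is illustrative, not an ORL face image.

```python
import numpy as np

def fwht(x):
    """Fast Walsh-Hadamard transform; len(x) must be a power of 2."""
    x = np.asarray(x, dtype=float).copy()
    h = 1
    while h < len(x):
        for i in range(0, len(x), 2 * h):
            for j in range(i, i + h):
                a, b = x[j], x[j + h]
                x[j], x[j + h] = a + b, a - b   # butterfly: add/subtract only
        h *= 2
    return x

print(fwht([1.0, 0.0, 1.0, 0.0]))  # → [2. 2. 0. 0.]
```

The transform is its own inverse up to a factor of n, so `fwht(fwht(x)) / len(x)` recovers the input; in a face recognition pipeline the low-sequency coefficients would typically serve as the feature vector.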
Abstract: This paper describes a methodology for remote performance monitoring of retail refrigeration systems. The proposed framework starts with monitoring of the whole refrigeration circuit, which allows detection of deviations from expected behavior caused by various faults and degradations. The subsequent diagnostic methods drill deeper into the equipment hierarchy to determine root causes more specifically. An important feature of the proposed concept is that it does not require any additional sensors, and thus the performance monitoring solution can be deployed at a low installation cost. Moreover, only a minimum of contextual information is required, which also substantially reduces the time and cost of the deployment process.
Abstract: This paper presents a text clustering system developed on the basis of a k-means type subspace clustering algorithm to cluster large, high-dimensional and sparse text data. In this algorithm, a new step is added to the k-means clustering process to automatically calculate the weight of each keyword in each cluster, so that the important words of a cluster can be identified by their weight values. To aid understanding and interpretation of the clustering results, a few keywords that best represent the semantic topic are extracted from each cluster. Two methods are used to extract the representative words. The candidate words are first selected according to the weights calculated by the new algorithm. Then, the candidates are fed to WordNet to identify the set of nouns and to consolidate synonyms and hyponyms. Experimental results have shown that the clustering algorithm is superior to other subspace clustering algorithms, such as PROCLUS and HARP, and to the k-means type algorithm Bisecting-KMeans. Furthermore, the word extraction method is effective in selecting words that represent the topics of the clusters.
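The added weighting step can be sketched as follows. The exact update rule in the paper may differ; this softmax-style rule, in which a keyword's weight grows as its within-cluster dispersion shrinks, is one common entropy-weighted variant, and the toy term matrix and `gamma` parameter are illustrative assumptions.

```python
import numpy as np

def keyword_weights(X, centroid, gamma=1.0):
    """X: docs-in-cluster x keywords matrix; returns weights summing to 1."""
    dispersion = ((X - centroid) ** 2).sum(axis=0)   # per-keyword cost
    w = np.exp(-dispersion / gamma)                  # low dispersion -> high weight
    return w / w.sum()

X = np.array([[3.0, 0.1, 0.0],
              [2.5, 0.0, 0.1],
              [3.2, 0.2, 0.0]])    # tiny toy term matrix for one cluster
c = X.mean(axis=0)
w = keyword_weights(X, c)
# keyword 0 varies most within the cluster, so it receives the lowest weight
```

In the full algorithm this weight update would alternate with the usual k-means assignment and centroid steps, and the top-weighted keywords per cluster would become the candidates passed to WordNet.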
Abstract: This paper presents the application of an Enhanced Particle Swarm Optimization (EPSO) combined with Gaussian Mutation (GM) for solving the Dynamic Economic Dispatch (DED) problem considering the operating constraints of generators. The EPSO consists of the standard PSO and a modified heuristic search approach; namely, the ability of the traditional PSO is enhanced by applying the modified heuristic search to prevent solutions from violating the constraints. In addition, the Gaussian Mutation is aimed at increasing the diversity of the global search, while also preventing the search from being trapped in suboptimal points. To illustrate its efficiency and effectiveness, the developed EPSO-GM approach is tested on 3-unit and 10-unit 24-hour systems considering the valve-point effect. From the experimental results, it can be concluded that the proposed EPSO-GM provides accurate solutions efficiently and robustly compared with the other algorithms under consideration.
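The Gaussian Mutation step can be sketched as below: a fraction of particle coordinates is perturbed with zero-mean Gaussian noise and clipped back into the generators' operating limits. The mutation rate, noise scale, and swarm dimensions are illustrative assumptions, not values from the paper.

```python
import numpy as np

def gaussian_mutation(positions, lower, upper, rate=0.1, sigma=0.05, rng=None):
    """Perturb a random fraction of coordinates with N(0, sigma*range) noise."""
    if rng is None:
        rng = np.random.default_rng()
    span = upper - lower
    mask = rng.random(positions.shape) < rate          # which coords mutate
    noise = rng.normal(0.0, sigma * span, positions.shape)
    mutated = np.where(mask, positions + noise, positions)
    return np.clip(mutated, lower, upper)              # respect generator limits

rng = np.random.default_rng(42)
swarm = rng.uniform(100.0, 500.0, size=(20, 3))        # 20 particles, 3 units (MW)
mutated = gaussian_mutation(swarm, 100.0, 500.0, rng=rng)
```

Applied after each PSO velocity/position update, this keeps some particles exploring even after the swarm has begun to converge, which is the diversity role the abstract describes.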
Abstract: This study introduces a new method for detecting, sorting, and localizing spikes from multiunit EEG recordings. The method combines the wavelet transform, which localizes distinctive spike features, with the Super-Paramagnetic Clustering (SPC) algorithm, which allows automatic classification of the data without assumptions such as low variance or Gaussian distributions. Moreover, the method is capable of setting amplitude thresholds for spike detection. The method is applied to several real EEG data sets, in which the spikes are detected and clustered and their firing times identified.
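One common way to set an amplitude threshold automatically is from a robust noise estimate based on the median absolute deviation; whether this is the paper's exact rule is not stated in the abstract, so the multiplier `k` and the synthetic trace below are illustrative assumptions.

```python
import numpy as np

def detect_spikes(x, k=4.0):
    """Return sample indices where |x| first crosses k times the noise sigma."""
    x = np.asarray(x, dtype=float)
    sigma = np.median(np.abs(x)) / 0.6745      # robust std-dev estimate
    above = np.abs(x) > k * sigma
    # keep only rising edges so each threshold crossing is counted once
    return np.flatnonzero(above & ~np.r_[False, above[:-1]])

rng = np.random.default_rng(1)
trace = rng.normal(0.0, 1.0, 5000)             # synthetic background activity
trace[[500, 2500, 4000]] += 12.0               # three injected "spikes"
idx = detect_spikes(trace)
```

The median-based estimate is preferred over the plain standard deviation because the spikes themselves inflate the latter, which would push the threshold up and miss low-amplitude units.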
Abstract: The main goal of this paper is to quantify the quality of different radiation treatment planning techniques. To this end, a back-propagation artificial neural network (ANN) combined with biomedicine theory was used to model thirteen dosimetric parameters and to calculate two dosimetric indices. The correlations between the dosimetric indices and quality of life were extracted as features and used in the ANN model to support clinical decisions. The simulation results show that a trained multilayer back-propagation neural network model can help a doctor accept or reject a plan efficiently. In addition, the models are flexible: whenever a new treatment technique enters the market, the feature variables simply need to be imported and the model re-trained for it to be ready for use.
Abstract: The rapid growth of e-Commerce services has been significant over the past decade. However, the methods used to verify authenticated users still depend widely on numeric approaches, so the search for other verification methods suitable for online e-Commerce is an interesting issue. In this paper, a new online signature-verification method using an angular transformation is presented. Delay shifts existing in online signatures are estimated by an estimation method relying on angle representation. In the proposed signature-verification algorithm, all components of the input signature are extracted by considering the discontinuous break points in the stream of angular values. The estimated delay shift is then captured by comparison with the selected reference signature, and the matching error can be computed as the main feature used in the verification process. The threshold offsets are calculated from the two error characteristics of the signature verification problem: the False Rejection Rate (FRR) and the False Acceptance Rate (FAR). The level of these two error rates depends on the decision threshold, whose value is chosen to realize the Equal Error Rate (EER; FAR = FRR). The experimental results, obtained with a simple program deployed on the Internet to demonstrate e-Commerce services, show that the proposed method achieves 95.39% correct verification, 7% better than a DP-matching-based signature-verification method. In addition, signature verification with extracted components provides more reliable results than making a decision on the whole signature.
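The EER threshold selection described above can be sketched as follows: sweep candidate thresholds and pick the one where FAR and FRR meet. The toy score distributions (genuine signatures with low matching error, forgeries with high) are illustrative assumptions, not the paper's data.

```python
import numpy as np

def eer_threshold(genuine, forged):
    """Scan thresholds; return (threshold, rate) where FAR is closest to FRR.
    Scores are matching errors: a signature is accepted when score <= threshold."""
    grid = np.sort(np.concatenate([genuine, forged]))
    best_t, best_gap, best_rate = None, np.inf, None
    for t in grid:
        frr = np.mean(genuine > t)    # genuine signatures rejected
        far = np.mean(forged <= t)    # forgeries accepted
        if abs(far - frr) < best_gap:
            best_t, best_gap, best_rate = t, abs(far - frr), 0.5 * (far + frr)
    return best_t, best_rate

rng = np.random.default_rng(7)
genuine = rng.normal(1.0, 0.3, 500)   # low matching error (assumed model)
forged = rng.normal(2.0, 0.4, 500)    # high matching error (assumed model)
t, eer = eer_threshold(genuine, forged)
```

Moving the threshold below `t` trades a lower FAR for a higher FRR and vice versa; the EER point is simply the conventional single-number summary of that trade-off.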
Abstract: The aim of this study is to highlight the opportunities that HCI offers to space design as a performance area. HCI is a multidisciplinary approach that can be identified in many different fields. The aesthetic reflections of HCI through virtual reality in space design are high-tech outcomes of new innovations, combining computational facilities with artistic features. The method of this paper is to treat the subject in three main parts: the first gives a general approach to, and definition of, interactivity on the basis of space design; the second examines the concept of multimedia interactive theatre through selected examples from around the world, together with interactive design aspects; the third analyzes examples from Turkey in terms of stage design principles. The results suggest that the multimedia database is the virtual counterpart of theatre stage design with respect to interactive means, provided by computational facilities according to aesthetic aspects. In theatre stages, HCI is most often identified as computational intelligence under the effect of interactivity.
Abstract: This paper considers the influence of promotion instruments for renewable energy sources (RES) on a multi-energy modeling framework. In Europe, so-called Feed-in Tariffs are successfully used as incentive structures to increase the amount of energy produced by RES. Because of the stochastic nature of large-scale integration of distributed generation, many problems have occurred regarding the quality and stability of supply. Hence, a macroscopic model was developed in order to optimize the power supply of the local energy infrastructure, which includes electricity, natural gas, fuel oil and district heating as energy carriers. Unique features of the model are the integration of RES and the adoption of Feed-in Tariffs into one optimization stage. Sensitivity studies are carried out to examine the system behavior under changing profits for the feed-in of RES. With a setup of three energy-exchanging regions and a multi-period optimization, the impact of costs and profits is determined.
Abstract: A hybrid knowledge model is suggested as an underlying framework for product development management. It can support hybrid features such as ontologies and rules. Effective collaboration in a product development environment depends on sharing and reasoning over product information as well as engineering knowledge. Many studies have considered product information and engineering knowledge; however, most previous research has focused either on building ontologies of product information or on rule-based systems of engineering knowledge. This paper shows that an F-logic-based knowledge model can support both of these desirable features in a hybrid way.
Abstract: In visual servoing systems, data obtained by vision is used for controlling robots. In this project, the simulator previously proposed for simulating the performance of a 6R robot was first examined in terms of software and testing, and its existing defects were corrected. In the first version of the simulation, the robot was directed toward the target object only by a position-based method using two cameras in the environment. In the new version of the software, three cameras are used simultaneously. The camera installed as eye-in-hand on the end-effector of the robot is used for visual servoing by a feature-based method: the target object is recognized according to its characteristics, and the robot is directed toward the object following an algorithm similar to the function of human eyes. The function and accuracy of the robot's operation are then examined through the position-based visual servoing method using the two cameras installed as eye-to-hand in the environment. Finally, the obtained results are tested against the ANSI/RIA R15.05-2 standard.
Abstract: The new idea of this research is the application of a new fault detection and isolation (FDI) technique for the supervision of sensor networks in a transportation system. In measurement systems, it is necessary to detect all types of faults and failures based on a predefined algorithm. Recent improvements in artificial neural network (ANN) research have led to their use for FDI purposes. In this paper, the application of new probabilistic neural network features for data approximation and data classification is considered for the plausibility check in temperature measurement. For this purpose, a two-phase FDI mechanism is considered for residual generation and evaluation.
Abstract: The application of neural networks to disease diagnosis has made great progress and is widely used by physicians. An electrocardiogram (ECG) carries vital information about heart activity, and physicians use this signal for cardiac disease diagnosis, which was the main motivation for our study. In our work, the extracted tachycardia features are used for training and testing a neural network. In this study we use Fuzzy Probabilistic Neural Networks as an automatic technique for ECG signal analysis. As every real signal recorded by the equipment can contain different artifacts, several preprocessing steps are needed before feeding it to our system. The wavelet transform is used for extracting the morphological parameters of the ECG signal. The outcome of the approach for a variety of arrhythmias shows that the presented approach is superior to previously presented algorithms, with an average accuracy of about 95% for more than 7 tachyarrhythmias.
Abstract: Object Relational Databases (ORDB) are more complex in nature than traditional relational databases because they combine the characteristics of object-oriented concepts with the relational features of conventional databases. The design of an ORDB demands an efficient, quality schema that considers structural, functional and componential traits. This internal quality of the schema is assured by metrics that measure the relevant attributes, and is extended to substantiate the understandability, usability and reliability of the schema, thus assuring its external quality. This work institutes a formalization of ORDB metrics: metric definition, evaluation methodology, and calibration of the metrics. Three ORDB schemas were used to conduct the evaluation and the formalization of the metrics. The metrics are calibrated using content- and criteria-related validity based on their measurability, consistency and reliability. Nominal and summative scales are derived from the evaluated metric values and are standardized. Future work pertaining to ORDB metrics forms the concluding note.
Abstract: A theory for optimal filtering of infinite sets of random
signals is presented. There are several new distinctive features of the
proposed approach. First, a single optimal filter for processing any
signal from a given infinite signal set is provided. Second, the filter is
presented in the special form of a sum with p terms where each term
is represented as a combination of three operations. Each operation
is a special stage of the filtering aimed at facilitating the associated
numerical work. Third, an iterative scheme is incorporated into the
filter structure to improve the filter performance at each step of the
scheme. The final step of the scheme concerns signal
compression and decompression. This step is based on the solution of
a new rank-constrained matrix approximation problem. The solution
to the matrix problem is described in this paper. A rigorous error
analysis is given for the new filter.
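The compression step rests on rank-constrained matrix approximation. The paper solves a new variant of this problem; the sketch below shows only the classical building block (the Eckart-Young result), in which the best rank-p approximation in the Frobenius norm is obtained by truncating the singular value decomposition, with an illustrative random matrix as input.

```python
import numpy as np

def best_rank_p(A, p):
    """Best rank-p approximation of A in the Frobenius norm (Eckart-Young)."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return (U[:, :p] * s[:p]) @ Vt[:p, :]   # keep the p largest singular values

rng = np.random.default_rng(3)
A = rng.standard_normal((8, 6))             # illustrative full-rank matrix
A2 = best_rank_p(A, 2)
err2 = np.linalg.norm(A - A2)               # equals sqrt(sum of dropped s_i^2)
```

Because the approximation error is the norm of the discarded singular values, allowing a larger p can only decrease the error, which is the basic trade-off any rank-constrained compression stage negotiates.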