Abstract: In this paper, different approaches to solve the
forward kinematics of a three DOF actuator redundant hydraulic
parallel manipulator are presented. In contrast to serial
manipulators, the forward kinematic map of parallel manipulators
involves highly coupled nonlinear equations that are almost
impossible to solve analytically. The proposed methods use neural
network identification with different structures to solve the
problem. The accuracy of the results of each method is analyzed in
detail, and the advantages and disadvantages of each in
computing the forward kinematic map of the given mechanism are
discussed. It is concluded that ANFIS presents the best
performance compared to MLP, RBF and PNN networks in this
particular application.
Abstract: Heart failure is the most common cause of death
today, but if medical help is given promptly, the patient's life
may be saved in many cases. Numerous heart diseases can be
detected by means of analyzing electrocardiograms (ECG). Artificial
Neural Networks (ANN) are computer-based expert systems that
have proved to be useful in pattern recognition tasks. ANN can be
used in different phases of the decision-making process, from
classification to diagnostic procedures. This work concentrates on a
review followed by a novel method.
The purpose of the review is to assess the evidence of healthcare
benefits involving the application of artificial neural networks to the
clinical functions of diagnosis, prognosis and survival analysis, in
ECG signals. The developed method is based on a compound neural
network (CNN), to classify ECGs as normal or carrying an
AtrioVentricular heart Block (AVB). This method uses three
different feedforward multilayer neural networks. A single output
unit encodes the probability of AVB occurrence. A value between 0
and 0.1 is the desired output for a normal ECG; a value between 0.1
and 1 indicates an occurrence of an AVB. The results show that
this compound network has a good performance in detecting AVBs,
with a sensitivity of 90.7% and a specificity of 86.05%. The accuracy
value is 87.9%.
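The decision rule and reported metrics follow standard definitions. A minimal Python sketch (the counts below are illustrative, not the paper's data) shows the 0.1 output threshold and how sensitivity, specificity and accuracy are computed from confusion-matrix counts:

```python
def classify(output):
    """Map the network's single output unit to a label using the
    0.1 threshold described in the abstract."""
    return "AVB" if output > 0.1 else "normal"

def metrics(tp, tn, fp, fn):
    """Standard definitions of sensitivity, specificity, accuracy
    from true/false positive and negative counts."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    return sensitivity, specificity, accuracy
```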
Abstract: In this study, we propose a network architecture for
providing secure access to information resources of enterprise
network from remote locations in a wireless fashion. Our proposed
architecture offers a promising solution for organizations that
need a secure, flexible and cost-effective remote access
methodology. Security of the proposed architecture is based on
Virtual Private Network technology and a special role based access
control mechanism with location and time constraints. The flexibility
mainly comes from the use of the Internet as the communication medium
and cost-effectiveness is due to the possibility of in-house
implementation of the proposed architecture.
Abstract: This paper presents a system overview of Mobile to Server Face Recognition, a face recognition application developed specifically for mobile phones. Images taken with mobile phone cameras lack quality due to the low resolution of the cameras; thus, a prototype was developed to test the chosen method. However, this paper reports results for the system backbone without the face recognition functionality. The results demonstrate that the interaction between mobile phones and the server works successfully; they were obtained before the database was completely ready. System testing is currently underway using real images and a mock-up database to test the functionality of the face recognition algorithm used in this system. An overview of the whole system, including screenshots and a system flow chart, is presented in this paper, along with the motivation and justification for developing this system.
Abstract: This article presents the simulation, parameterization and optimization of an electromagnet with a C-shaped configuration, intended for the study of the magnetic properties of materials. The electromagnet consists of a C-shaped yoke, which provides self-shielding to minimize losses of magnetic flux density, two poles of high magnetic permeability and power coils wound on the poles. The main physical variable studied was the static magnetic flux density in a column within the gap between the poles, with a square cross section of 4 cm^2 and a length of 5 cm, seeking a set of parameters that achieves a uniform magnetic flux density of 1x10^4 Gauss or above in the column when the system operates at room temperature with a current consumption not exceeding 5 A. By means of a magnetostatic analysis using the finite element method, the magnetic flux density and the distribution of the magnetic field lines were visualized and quantified. From the results obtained by simulating an initial configuration of the electromagnet, a structural optimization of the geometry of the adjustable caps for the ends of the poles was performed. The effect of the magnetic permeability of the soft magnetic materials used in the pole system, such as low-carbon steel (0.08% C), Permalloy (45% Ni, 54.7% Fe) and Mumetal (21.2% Fe, 78.5% Ni), was also evaluated. The intensity and uniformity of the magnetic field in the gap showed a high dependence on the factors described above. The magnetic field achieved in the column was uniform and its magnitude ranged between 1.5x10^4 Gauss and 1.9x10^4 Gauss depending on the pole material, with the possibility of increasing the magnetic field by choosing a suitable cap geometry, introducing a cooling system for the coils and adjusting the spacing between the poles.
This makes the device a versatile and scalable tool to generate the magnetic field necessary to perform magnetic characterization of materials by techniques such as vibrating sample magnetometry (VSM), Hall-effect, Kerr-effect magnetometry, among others. Additionally, a CAD design of the modules of the electromagnet is presented in order to facilitate the construction and scaling of the physical device.
Abstract: This paper presents a comparative study of recent
integer DCTs and a new method to construct a low-sensitivity structure
of integer DCT for colored input signals. The method uses the
sensitivity of multiplier coefficients to finite word length as an
indicator of how word-length truncation affects the quality of the output
signal. The sensitivity is also evaluated theoretically as a function of the
auto-correlation and covariance matrix of the input signal. The structure of
the integer DCT algorithm is optimized by combining lower-sensitivity
lifting structure types of IRT. It is evaluated by the sensitivity of the
multiplier coefficients to finite word length, expressed as a function of the
covariance matrix of the input signal. The effectiveness of the optimum
combination of IRT in the integer DCT algorithm is confirmed by the quality
improvement over the existing case. As a result, the optimum
combination of IRT in each integer DCT algorithm evidently improves
output signal quality while remaining compatible with the existing one.
Abstract: The increasing importance of data streams arising in a
wide range of advanced applications has led to extensive study of
mining frequent patterns. Mining data streams poses many new
challenges, among which are the one-scan nature, the unbounded
memory requirement and the high arrival rate of data streams. In this
paper, we propose a new approach for mining frequent itemsets over
data streams. Our approach, SFIDS, has been developed based on the FIDS
algorithm. The main aim was to keep some advantages of the
previous approach and resolve some of its drawbacks, and
consequently to improve run time and memory consumption. Our
approach has the following advantages: it uses a lattice-like data
structure for keeping frequent itemsets; it separates regions from each
other by deleting common nodes, which decreases the search
space, memory consumption and run time; and finally, considering
the CPU constraint, when an increasing data arrival rate
overloads the system, SFIDS automatically detects this situation and
discards some of the unprocessed data. Based on a probabilistic
technique, we guarantee that the error of the results is bounded by a
user pre-specified threshold. Final results show that the SFIDS
algorithm attains about a 50% run-time improvement over the FIDS approach.
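The SFIDS and FIDS algorithms themselves are not reproduced here; as background, the core task they address, finding itemsets whose support over a stream of transactions meets a minimum threshold, can be sketched as a one-pass toy counter (names and the restriction to 1- and 2-itemsets are illustrative choices, not part of the paper):

```python
from collections import Counter
from itertools import combinations

def frequent_itemsets(stream, min_support):
    """One-pass toy sketch of support-threshold itemset mining over a
    stream of transactions. Counts 1- and 2-itemsets only and keeps
    those whose support (fraction of transactions) meets min_support.
    This is NOT the SFIDS algorithm, just the underlying task."""
    counts = Counter()
    n = 0
    for transaction in stream:
        n += 1
        items = sorted(set(transaction))
        for item in items:
            counts[(item,)] += 1
        for pair in combinations(items, 2):
            counts[pair] += 1
    return {itemset for itemset, c in counts.items() if c / n >= min_support}
```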
Abstract: Border Gateway Protocol (BGP) is the standard routing protocol between autonomous systems (AS) in the Internet. In the event of a failure, empirical measurements have shown a considerable delay in BGP convergence. During the convergence time, BGP repeatedly advertises new routes to some destination and withdraws old ones until it reaches a stable state. It has been found that the KEEPALIVE message timer and the HOLD time are two parameters affecting the convergence speed. This paper aims to find the optimum values for the KEEPALIVE timer and the HOLD time that maximally reduce the convergence time without increasing the traffic. The optimal KEEPALIVE message timer value found in this paper is 30 seconds instead of 60 seconds, and the optimal HOLD time is 90 seconds instead of 180 seconds.
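On Cisco IOS, for example, these two timers can be set per BGP process with the `timers bgp` command; the sketch below (AS number hypothetical) mirrors the values the paper recommends:

```
router bgp 65001
 ! keepalive 30 s, hold time 90 s (the paper's recommended values,
 ! keeping the conventional 1:3 keepalive-to-hold ratio)
 timers bgp 30 90
```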
Abstract: The performance of a cobalt-doped sol-gel derived silica (Co/SiO2) catalyst for Fischer-Tropsch synthesis (FTS) in a slurry-phase reactor was studied using paraffin wax as the initial liquid medium. The reactive gas mixture, hydrogen (H2) and carbon monoxide (CO) in a molar ratio of 2:1, was fed at 50 ml/min. Brunauer-Emmett-Teller (BET) surface area and X-ray diffraction (XRD) techniques were employed to characterize the specific surface area and crystallinity of the catalyst, respectively. The reduction behavior of the Co/SiO2 catalyst was investigated using the Temperature-Programmed Reduction (TPR) method. Operating temperatures were varied from 493 to 533 K to find the optimum conditions for maximizing the production of liquid fuels, gasoline and diesel.
Abstract: Avalanche velocity (from the start to the track zone) has been estimated in the present model for an avalanche triggered artificially by an explosive device. The initial development of the model draws on the concepts of micro-continuum theories [1], underwater explosions [2] and fracture mechanics [3], with appropriate changes for the present model. The model has been computed for different values of the slab depth R, slope angle θ, snow density ρ, viscosity μ, eddy viscosity η* and couple stress parameter η. The applicability of the present model to avalanche forecasting has been highlighted.
Abstract: With the advent of emerging personal computing paradigms such as ubiquitous and mobile computing, Web contents are becoming accessible from a wide range of mobile devices. Since these devices do not have the same rendering capabilities, Web contents need to be adapted for transparent access from a variety of client agents. Such content adaptation is applied to either an individual element or a set of consecutive elements in a Web document and results in better rendering and faster delivery to the client device. Nevertheless, Web content adaptation sets new challenges for semantic markup. This paper presents an advanced component platform, called SMC, enabling the development of mobility applications and services according to a channel model based on the principles of Service-Oriented Architecture (SOA). It then describes the potential for integration with the Semantic Web through a novel framework of external semantic annotation that prescribes a scheme for representing semantic markup files and a way of associating Web documents with these external annotations. The role of semantic annotation in this framework is to describe the contents of the individual documents themselves, ensuring the preservation of the semantics during the process of adapting content rendering. Semantic Web content adaptation is a way of adding value to Web contents and facilitates their repurposing (enhanced browsing, Web Services location and access, etc.).
Abstract: This work consists of three parts. First, the alias-free
condition for the conventional two-channel quadrature mirror filter
bank is analyzed using complex arithmetic. Second, the approach
developed in the first part is applied to the complex quadrature mirror
filter bank. Accordingly, the structure is simplified and the theory is
easier to follow. Finally, a new class of complex quadrature mirror
filter banks is proposed. Interesting properties of this new structure
are also discussed.
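As background (standard two-channel filter bank theory, not notation taken from the paper), with analysis filters H0, H1 and synthesis filters F0, F1, the reconstructed signal is

```latex
\hat{X}(z) = \tfrac{1}{2}\left[H_0(z)F_0(z) + H_1(z)F_1(z)\right]X(z)
           + \tfrac{1}{2}\left[H_0(-z)F_0(z) + H_1(-z)F_1(z)\right]X(-z),
```

so the alias term in X(-z) vanishes when H0(-z)F0(z) + H1(-z)F1(z) = 0, which is satisfied, for example, by F0(z) = H1(-z) and F1(z) = -H0(-z); the classical QMF choice is H1(z) = H0(-z).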
Abstract: The ability of UML to handle the modeling of complex industrial software applications has increased its popularity to the extent of becoming the de facto language serving the design purpose. Although its rich graphical notation, naturally oriented towards the object-oriented concept, facilitates understandability, it hardly succeeds in capturing all domain-specific aspects in a satisfactory way. OCL, as the standard language for expressing additional constraints on UML models, has great potential to help improve expressiveness. Unfortunately, it suffers from a weak formalism due to its poor semantics, resulting in many obstacles to the building of tool support and thus to its application in industry. For this reason, much research has been devoted to formalizing OCL expressions using a more rigorous approach. Our contribution joins this work in a complementary way, since it focuses specifically on OCL predefined properties, which constitute an important part of the construction of OCL expressions. Using formal methods, we succeed in rigorously expressing OCL's predefined functions.
Abstract: This paper is motivated by the aspect of uncertainty in
financial decision making, and how artificial intelligence and soft
computing, with their uncertainty-reducing aspects, can be used for
algorithmic trading applications that trade at high frequency.
This paper presents an optimized high frequency trading system that
has been combined with various moving averages to produce a hybrid
system that outperforms trading systems that rely solely on moving
averages. The paper optimizes an adaptive neuro-fuzzy inference
system that takes both the price and its moving average as input,
learns to predict price movements from training data consisting of
intraday data, dynamically switches between the best performing
moving averages, and decides when to buy or sell a given
currency at high frequency.
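The paper's system feeds the price and its moving average into an ANFIS; the sketch below illustrates only the moving-average component, a plain crossover signal, in Python (window sizes and names are illustrative assumptions, not the paper's parameters):

```python
def sma(prices, window):
    """Simple moving average over the trailing `window` prices."""
    return sum(prices[-window:]) / window

def crossover_signal(prices, fast=3, slow=5):
    """Toy moving-average crossover: buy when the fast average is
    above the slow one, sell when below, else hold. Stands in for
    the moving-average input of the paper's hybrid system."""
    if len(prices) < slow:
        return "hold"
    fast_ma, slow_ma = sma(prices, fast), sma(prices, slow)
    if fast_ma > slow_ma:
        return "buy"
    if fast_ma < slow_ma:
        return "sell"
    return "hold"
```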
Abstract: The decoding of Low-Density Parity-Check (LDPC) codes operates over a redundant structure known as the bipartite graph, meaning that the full set of bit nodes is not absolutely necessary for decoder convergence. In 2008, Soyjaudah and Catherine designed a recovery algorithm for LDPC codes based on this assumption and showed that the error-correcting performance of their codes outperformed that of conventional LDPC codes. In this work, the use of the recovery algorithm is further explored to test the performance of LDPC codes as the number of iterations is progressively increased. For experiments conducted with small block lengths of up to 800 bits and up to 2000 iterations, the results demonstrate that, contrary to conventional wisdom, the error-correcting performance keeps improving with an increasing number of iterations.
Abstract: Interpretation of aerial images is an important task in
various applications. Image segmentation can be viewed as the essential
step for extracting information from aerial images. Among many
developed segmentation methods, the technique of clustering has been
extensively investigated and used. However, determining the number
of clusters in an image is inherently a difficult problem, especially
when a priori information on the aerial image is unavailable. This
study proposes a support vector machine approach for clustering
aerial images. Three cluster validity indices, distance-based index,
Davies-Bouldin index, and Xie-Beni index, are utilized as quantitative
measures of the quality of clustering results. Comparisons of the
effectiveness of these indices and of various parameter settings for the
proposed method are conducted. Experimental results are provided
to illustrate the feasibility of the proposed approach.
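Of the three validity indices, the Davies-Bouldin index has a compact closed form: the average, over clusters, of the worst-case ratio of summed intra-cluster scatters to centroid separation (lower is better). A plain-Python sketch, an illustration rather than the paper's implementation:

```python
import math

def _centroid(pts):
    """Component-wise mean of a list of points."""
    return [sum(coord) / len(pts) for coord in zip(*pts)]

def davies_bouldin(clusters):
    """Davies-Bouldin index for a list of clusters, each a list of
    points. Scatter s_i is the mean distance of a cluster's points
    to its centroid; DB = (1/k) * sum_i max_{j!=i} (s_i+s_j)/d(c_i,c_j)."""
    cents = [_centroid(c) for c in clusters]
    scatter = [sum(math.dist(p, c) for p in pts) / len(pts)
               for pts, c in zip(clusters, cents)]
    k = len(clusters)
    total = 0.0
    for i in range(k):
        total += max((scatter[i] + scatter[j]) / math.dist(cents[i], cents[j])
                     for j in range(k) if j != i)
    return total / k
```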
Abstract: The number of features required to represent an image
can be very large. Using all available features to recognize objects
can suffer from the curse of dimensionality. Feature selection and
extraction is the pre-processing step of image mining. The main issues in
analyzing images are the effective identification of features and
their extraction. The mining problem addressed here is the
grouping of features for different shapes. Experiments
have been conducted using the shape outline as the feature. Shape
outline readings are put through normalization and a dimensionality
reduction process using an eigenvector-based method to produce a
new set of readings. After this pre-processing step, the data are
grouped by shape. Through statistical analysis of these
readings together with peak measures, a robust classification and
recognition process is achieved. Tests showed that the suggested
methods are able to automatically recognize objects through their
shapes. Finally, experiments also demonstrate the system invariance
to rotation, translation, scale, reflection and to a small degree of
distortion.
Abstract: In order to make surfing the Internet faster, and to save redundant processing load on each request for the same web page, many caching techniques have been developed to reduce the latency of retrieving data on the World Wide Web. In this paper we give a quick overview of existing web caching techniques used for dynamic web pages, then introduce a design and implementation model that takes advantage of the "URL Rewriting" feature in some popular web servers, e.g. Apache, to provide an effective approach to caching dynamic web pages.
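In Apache, for instance, the URL Rewriting feature is provided by mod_rewrite; a minimal illustrative configuration (the `/cache` path is a hypothetical example, not the paper's layout) serves a pre-generated static copy of a dynamic page whenever one exists on disk:

```
# Serve a cached static copy of a dynamic page if one exists.
RewriteEngine On
# -f: only rewrite when the cached file is actually present
RewriteCond %{DOCUMENT_ROOT}/cache%{REQUEST_URI}.html -f
RewriteRule ^/?(.*)$ /cache/$1.html [L]
```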
Abstract: There are three approaches to complete Bayesian
Network (BN) model construction: total expert-centred, total
data-centred, and semi data-centred. These three approaches constitute the
basis of the empirical investigation undertaken and reported in this
paper. The objective is to determine, amongst these three
approaches, which is the optimal approach for the construction of a
BN-based model for the performance assessment of students'
laboratory work in a virtual electronic laboratory environment. BN
models were constructed using all three approaches, with respect to
the focus domain, and compared using a set of optimality criteria. In
addition, the impact of the size and source of the training data on the
performance of total data-centred and semi data-centred models was
investigated. The results of the investigation provide additional
insight for BN model constructors and contribute to the literature
by providing supportive evidence for the conceptual feasibility and
efficiency of structure and parameter learning from data. In addition,
the results highlight other interesting themes.
Abstract: The purpose of this study is to examine the self-esteem and
decision-making levels of students receiving education in schools of
physical education and sports. The population of the study consisted
of 258 students, of whom 152 were male and 106 were female
(mean age 19.3713 ± 1.6968 years), who received education in the schools of
physical education and sports of Selcuk University, Inonu University,
Gazi University and Karamanoglu Mehmetbey University. To
achieve the purpose of the study, the Melbourne Decision Making
Questionnaire developed by Mann et al. (1998) [1] and adapted to
Turkish by Deniz (2004) [2], and the Self-Esteem Scale developed by
Aricak (1999) [3], were utilized. For analyzing and interpreting the data,
the Kolmogorov-Smirnov test, t-test and one-way ANOVA were used,
while for determining differences between the groups the Tukey test
and multiple linear regression were employed and significance
was accepted at P