Abstract: In recent years, a number of works proposing the
combination of multiple classifiers to produce a single
classification have been reported in the remote sensing literature. The
resulting classifier, referred to as an ensemble classifier, is
generally found to be more accurate than any of the individual
classifiers making up the ensemble. As accuracy is the primary
concern, much of the research in the field of land cover
classification is focused on improving classification accuracy. This
study compares the performance of four ensemble approaches
(boosting, bagging, DECORATE and random subspace) with a
univariate decision tree as the base classifier. Two training datasets, one without any noise and the other with 20 percent noise, were used to judge the performance of the different ensemble approaches. Results with the noise-free dataset suggest an improvement of about 4% in classification accuracy with all ensemble approaches in comparison to the results provided by the univariate decision tree classifier. The highest classification accuracy, 87.43%, was achieved by the boosted decision tree. A comparison of results with the noisy dataset suggests that the bagging, DECORATE and random subspace approaches work well with these data, whereas the performance of the boosted decision tree degrades: a classification accuracy of 79.7% is achieved, which is even lower than the 80.02% achieved by the unboosted decision tree classifier.
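The comparison above can be made concrete with a minimal sketch of bagging, the simplest of the four ensemble approaches: decision stumps (one-level univariate trees) are trained on bootstrap resamples and combined by majority vote. The data, stump learner and parameters below are toy constructions for illustration, not the study's land cover dataset.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_stump(X, y):
    """Exhaustively pick the (feature, threshold, sign) stump with the
    lowest training error -- a one-level univariate decision tree."""
    best = None
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            for s in (1, -1):
                err = np.mean((s * (X[:, j] - t) > 0).astype(int) != y)
                if best is None or err < best[0]:
                    best = (err, j, t, s)
    return best[1:]

def stump_predict(stump, X):
    j, t, s = stump
    return (s * (X[:, j] - t) > 0).astype(int)

def bagging_fit(X, y, n_estimators=15):
    stumps = []
    for _ in range(n_estimators):
        idx = rng.integers(0, len(X), len(X))  # bootstrap resample
        stumps.append(fit_stump(X[idx], y[idx]))
    return stumps

def bagging_predict(stumps, X):
    votes = np.mean([stump_predict(s, X) for s in stumps], axis=0)
    return (votes > 0.5).astype(int)  # majority vote

# Toy two-class problem: the label depends on a noisy sum of two
# features, so no single univariate split captures the boundary well.
X = rng.normal(size=(200, 4))
y = (X[:, 0] + X[:, 1] + 0.5 * rng.normal(size=200) > 0).astype(int)

acc_single = np.mean(stump_predict(fit_stump(X, y), X) == y)
acc_bagged = np.mean(bagging_predict(bagging_fit(X, y), X) == y)
```

Boosting, DECORATE and random subspace differ only in how the training sets (or feature subsets) for the base classifiers are generated; the voting step is analogous.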
Abstract: Minimally invasive surgery (MIS) is now widely used as a preferred choice for various types of operations. The need to detect various tactile properties justifies the key role of tactile sensing, which is currently missing in MIS. Laparoscopy is one minimally invasive method that can be used in kidney stone removal surgeries. At present, determining the exact location of the stone during laparoscopy is one of the limitations of this method, for which no scientific solution has been found so far. Artificial tactile sensing is a new method for obtaining the characteristics of a hard object embedded in a soft tissue. Artificial palpation is an important application of artificial tactile sensing that can be used in different types of surgeries. In this study, a new method for determining the exact location of the stone during laparoscopy is presented. The effects of the stone's presence on the surface of the kidney were investigated using a conceptual 3D model of the kidney containing a simulated stone. Palpation was imitated and modeled conceptually, and the indications of the stone's presence that appear on the surface of the kidney were determined. A number of different cases were created and solved by the software, and using stress distribution contours and stress graphs it is shown that the stress patterns created on the surface of the kidney reveal not only the presence of a stone inside, but also its exact location. Thus, three-dimensional analysis leads to a novel method of predicting the exact location of the stone and can be directly applied to the incorporation of tactile sensing in artificial palpation, helping surgeons in non-invasive procedures.
Abstract: In this paper, different approaches to solve the
forward kinematics of a three DOF actuator redundant hydraulic
parallel manipulator are presented. In contrast to serial manipulators, the forward kinematic map of parallel manipulators involves highly coupled nonlinear equations, which are almost impossible to solve analytically. The proposed methods use neural network identification with different structures to solve the problem. The accuracy of the results of each method is analyzed in detail, and their advantages and disadvantages in computing the forward kinematic map of the given mechanism are discussed. It is concluded that ANFIS presents the best
performance compared to MLP, RBF and PNN networks in this
particular application.
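As a rough illustration of the identification idea only (not the paper's manipulator, networks, or data), the sketch below trains a scikit-learn MLP to approximate a forward-kinematic map from sampled input/output pairs. A toy planar two-link serial arm with unit link lengths stands in for the mechanism, since its map is easy to generate; the hyperparameters are illustrative.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Sample joint angles and the resulting end-effector positions for a
# toy 2-link planar arm (unit link lengths) -- the "measured" data.
q = rng.uniform(-np.pi / 2, np.pi / 2, size=(1000, 2))
xy = np.column_stack([np.cos(q[:, 0]) + np.cos(q[:, 0] + q[:, 1]),
                      np.sin(q[:, 0]) + np.sin(q[:, 0] + q[:, 1])])

# Identify the forward-kinematic map from data with an MLP.
net = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
net.fit(q[:800], xy[:800])

# Mean absolute error on held-out samples.
err = np.mean(np.abs(net.predict(q[800:]) - xy[800:]))
```

For a parallel manipulator the training pairs would come from the (easy) inverse kinematics or from measurements, and the network would be evaluated against them in the same way.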
Abstract: Heart failure is the most common cause of death nowadays, but if medical help is given immediately, the patient's life may be saved in many cases. Numerous heart diseases can be
detected by means of analyzing electrocardiograms (ECG). Artificial
Neural Networks (ANN) are computer-based expert systems that
have proved to be useful in pattern recognition tasks. ANN can be
used in different phases of the decision-making process, from
classification to diagnostic procedures. This work concentrates on a
review followed by a novel method.
The purpose of the review is to assess the evidence of healthcare
benefits involving the application of artificial neural networks to the
clinical functions of diagnosis, prognosis and survival analysis, in
ECG signals. The developed method is based on a compound neural
network (CNN), to classify ECGs as normal or carrying an
AtrioVentricular heart Block (AVB). This method uses three
different feed-forward multilayer neural networks. A single output unit encodes the probability of AVB occurrence. A value between 0 and 0.1 is the desired output for a normal ECG; a value between 0.1 and 1 indicates an occurrence of an AVB. The results show that
this compound network has a good performance in detecting AVBs,
with a sensitivity of 90.7% and a specificity of 86.05%. The accuracy
value is 87.9%.
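The decision rule and the reported metrics can be made concrete with a short sketch; the network outputs and labels below are invented for illustration and are not the study's ECG data.

```python
# Map the network's single output unit to a label: values above the
# 0.1 threshold are read as AVB, values below as normal.
def classify(output, threshold=0.1):
    return "AVB" if output > threshold else "normal"

def sensitivity_specificity(y_true, y_pred, positive="AVB"):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    tn = sum(t != positive and p != positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    return tp / (tp + fn), tn / (tn + fp)

outputs = [0.03, 0.8, 0.95, 0.05, 0.4, 0.07]          # invented
y_true = ["normal", "AVB", "AVB", "normal", "AVB", "AVB"]
y_pred = [classify(o) for o in outputs]
sens, spec = sensitivity_specificity(y_true, y_pred)
```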
Abstract: This paper presents a system overview of Mobile to Server Face Recognition, a face recognition application developed specifically for mobile phones. Images taken with mobile phone cameras lack quality due to the low resolution of the cameras, so a prototype was developed to experiment with the chosen method. However, this paper reports results for the system backbone without the face recognition functionality. The results demonstrated in this paper indicate that the interaction between mobile phones and the server works successfully. These results were obtained before the database was completely ready. System testing is currently under way using real images and a mock-up database to test the functionality of the face recognition algorithm used in this system. An overview of the whole system, including screenshots and a system flow chart, is presented in this paper, together with the motivation and justification for developing this system.
Abstract: The study in this paper underlines the importance of correctly and jointly selecting the spreading codes for the uplink of multicarrier code division multiple access (MC-CDMA) at the transmitter side and the detector at the receiver side in the presence of nonlinear
distortion due to high power amplifier (HPA). The bit error rate
(BER) of system for different spreading sequences (Walsh code, Gold
code, orthogonal Gold code, Golay code and Zadoff-Chu code) and
different kinds of receivers (minimum mean-square error receiver
(MMSE-MUD) and microstatistic multi-user receiver (MSF-MUD))
is compared by means of simulations for an MC-CDMA transmission system. Finally, the results of the analysis show that the application of the MSF-MUD in combination with Golay codes significantly outperforms the other tested spreading codes and receivers for all commonly used HPA models.
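As a side illustration of one of the code families compared above, Walsh codes can be generated as the rows of a Sylvester-type Hadamard matrix. The sketch below builds the length-8 set and checks the mutual orthogonality that MC-CDMA spreading relies on; it does not reproduce the paper's BER simulations.

```python
import numpy as np

def hadamard(n):
    """Sylvester construction of a Hadamard matrix; n must be a
    power of two. The rows are the length-n Walsh codes."""
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

W = hadamard(8)     # 8 Walsh spreading codes of length 8
gram = W @ W.T      # cross-correlations between all code pairs
# Orthogonality: gram equals 8 times the identity matrix, i.e. every
# pair of distinct codes has zero cross-correlation.
```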
Abstract: This article presents the simulation, parameterization and optimization of an electromagnet with a C-shaped configuration, intended for the study of the magnetic properties of materials. The electromagnet studied consists of a C-shaped yoke, which provides self-shielding for minimizing losses of magnetic flux density, two poles of high magnetic permeability, and power coils wound on the poles. The main physical variable studied was the static magnetic flux density in a column within the gap between the poles, with a square cross section of 4 cm² and a length of 5 cm, seeking a suitable set of parameters that allow us to achieve a uniform magnetic flux density of 1×10⁴ Gauss or above in the column, when the system operates at room temperature and with a current consumption not exceeding 5 A. By means of a magnetostatic analysis by the finite element method, the magnetic flux density and the distribution of the magnetic field lines were visualized and quantified. From the results obtained by simulating an initial configuration of the electromagnet, a structural optimization of the geometry of the adjustable caps for the ends of the poles was performed. The magnetic permeability effect of the soft magnetic materials used in the pole system, such as low-carbon steel (0.08% C), Permalloy (45% Ni, 54.7% Fe) and Mumetal (21.2% Fe, 78.5% Ni), was also evaluated. The intensity and uniformity of the magnetic field in the gap showed a high dependence on the factors described above. The magnetic field achieved in the column was uniform and its magnitude ranged between 1.5×10⁴ Gauss and 1.9×10⁴ Gauss according to the pole material used, with the possibility of increasing the magnetic field by choosing a suitable cap geometry, introducing a cooling system for the coils and adjusting the spacing between the poles.
This makes the device a versatile and scalable tool to generate the magnetic field necessary to perform magnetic characterization of materials by techniques such as vibrating sample magnetometry (VSM), Hall-effect, Kerr-effect magnetometry, among others. Additionally, a CAD design of the modules of the electromagnet is presented in order to facilitate the construction and scaling of the physical device.
Abstract: In this paper, an adaptive radio resource allocation (RRA) algorithm for a multiple-traffic OFDMA system is proposed, which distributes sub-carriers and loading bits among users according to their different QoS requirements and traffic classes. By classifying and prioritizing the users based on their traffic characteristics and reserving resources for higher-priority users, the scheme substantially decreases the outage probability of users requiring real-time transmission without impacting the spectral efficiency of the system, while the outage probability of data users is not increased compared with previously published RRA methods.
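A toy sketch of the classify-and-prioritize idea follows. The channel gains, demands and user classes are invented, and the paper's actual algorithm also performs bit loading against QoS targets; here, real-time users simply pick their best sub-carriers first and data users share what remains.

```python
import numpy as np

rng = np.random.default_rng(1)
n_subcarriers = 16
users = ["rt1", "rt2", "d1", "d2"]
priority = {"rt1": 0, "rt2": 0, "d1": 1, "d2": 1}  # 0 = real-time class
demand = {"rt1": 4, "rt2": 4, "d1": 6, "d2": 6}    # sub-carriers wanted
gain = {u: rng.random(n_subcarriers) for u in users}  # invented channel gains

allocation = {u: [] for u in users}
free = set(range(n_subcarriers))
for u in sorted(users, key=lambda u: priority[u]):  # higher priority first
    # Give each user its best remaining sub-carriers, up to its demand;
    # lower-priority users may therefore be left short.
    best = sorted(free, key=lambda k: -gain[u][k])[: demand[u]]
    allocation[u] = best
    free -= set(best)
```

In this run both real-time users are fully served, while the last data user absorbs the shortfall, which is the qualitative behavior the abstract describes.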
Abstract: The increasing importance of data streams arising in a wide range of advanced applications has led to the extensive study of mining frequent patterns. Mining data streams poses many new challenges, amongst which are the one-scan nature, the unbounded memory requirement and the high arrival rate of data streams. In this paper, we propose a new approach for mining itemsets on data streams. Our approach, SFIDS, was developed based on the FIDS algorithm. The main aims were to keep some advantages of the previous approach, resolve some of its drawbacks, and consequently improve run time and memory consumption. Our approach has the following advantages: it uses a data structure similar to a lattice for keeping frequent itemsets; it separates regions from each other by deleting common nodes, which decreases search space, memory consumption and run time; and finally, considering the CPU constraint, when an increasing arrival rate of data would overload the system, SFIDS automatically detects this situation and discards some of the unprocessed data. We guarantee, based on a probabilistic technique, that the error of the results is bounded by a user pre-specified threshold. Final results show that the SFIDS algorithm attains about a 50% run-time improvement over the FIDS approach.
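The kind of error guarantee stated above is the one provided by classic lossy counting, sketched below for single frequent items (not itemsets, and not the SFIDS algorithm itself): counts are pruned at bucket boundaries, and any true count is undercounted by at most eps * N, where N is the number of items seen.

```python
def lossy_count(stream, eps=0.1):
    """Classic lossy counting for frequent items: each reported count
    undercounts the true count by at most eps * N."""
    width = round(1 / eps)          # bucket width
    counts, deltas = {}, {}
    n = 0
    for item in stream:
        n += 1
        bucket = (n - 1) // width + 1
        if item in counts:
            counts[item] += 1
        else:
            counts[item] = 1
            deltas[item] = bucket - 1   # max possible missed count so far
        if n % width == 0:              # prune at bucket boundaries
            for it in list(counts):
                if counts[it] + deltas[it] <= bucket:
                    del counts[it]
                    del deltas[it]
    return counts

# 100 items: "a" and "b" are frequent, the rest are rare and get pruned.
stream = ["a"] * 50 + ["b"] * 30 + list("cdefghij") * 2 + ["a"] * 4
result = lossy_count(stream, eps=0.1)
```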
Abstract: This paper describes an algorithm to estimate real-time vehicle velocity using image processing techniques and known camera calibration parameters. The presented algorithm involves several main steps. First, the moving object is extracted using a frame differencing technique. Second, an object tracking method is applied and the speed is estimated based on the displacement of the object's centroid. Several assumptions are listed to simplify the transformation from the 3D real-world scene to 2D images. The results obtained from the experiment have been compared to the estimated ground truth. The experiment shows that the proposed algorithm achieves a velocity estimation accuracy of about ±1.7 km/h.
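A minimal sketch of the two steps, simplified to differencing each frame against a static background to extract the object and then deriving speed from the centroid displacement. The frames, pixel-to-metre scale and frame rate below are synthetic.

```python
import numpy as np

def object_centroid(frame, background, thresh=30):
    """Segment the moving object by differencing against a background
    frame, then return the centroid (x, y) of the changed pixels."""
    mask = np.abs(frame.astype(int) - background.astype(int)) > thresh
    ys, xs = np.nonzero(mask)
    return np.array([xs.mean(), ys.mean()])

def speed_kmh(frame1, frame2, background, m_per_px=0.05, fps=25):
    """Centroid displacement in pixels per frame, converted to km/h."""
    d_px = np.linalg.norm(object_centroid(frame2, background)
                          - object_centroid(frame1, background))
    return d_px * m_per_px * fps * 3.6

# Synthetic frames: a bright 5x5 block moving 4 px to the right
# between two consecutive frames.
bg = np.zeros((50, 50), dtype=np.uint8)
f1, f2 = bg.copy(), bg.copy()
f1[20:25, 10:15] = 255
f2[20:25, 14:19] = 255
v = speed_kmh(f1, f2, bg)
```

With 0.05 m per pixel and 25 fps, the 4-pixel displacement corresponds to 18 km/h.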
Abstract: Avalanche velocity (from the start to the track zone) has been estimated in the present model for an avalanche triggered artificially by an explosive device. The initial development of the model draws on the concepts of micro-continuum theories [1], underwater explosions [2] and fracture mechanics [3], with appropriate changes for the present model. The model has been computed for different values of slab depth R, slope angle θ, snow density ρ, viscosity μ, eddy viscosity η* and couple stress parameter η. The applicability of the present model to avalanche forecasting is highlighted.
Abstract: With the advent of emerging personal computing paradigms such as ubiquitous and mobile computing, Web contents are becoming accessible from a wide range of mobile devices. Since these devices do not have the same rendering capabilities, Web contents need to be adapted for transparent access from a variety of client agents. Such content adaptation is applied either to an individual element or to a set of consecutive elements in a Web document, and results in better rendering and faster delivery to the client device. Nevertheless, Web content adaptation sets new challenges for semantic markup. This paper presents an advanced component platform, called SMC, enabling the development of mobility applications and services according to a channel model based on the principles of Service Oriented Architecture (SOA). It then goes on to describe the potential for integration with the Semantic Web through a novel framework of external semantic annotation that prescribes a scheme for representing semantic markup files and a way of associating Web documents with these external annotations. The role of semantic annotation in this framework is to describe the contents of the individual documents themselves, assuring the preservation of the semantics during the process of adapting content rendering. Semantic Web content adaptation is a way of adding value to Web contents and facilitates the repurposing of Web contents (enhanced browsing, Web Services location and access, etc.).
Abstract: This work consists of three parts. First, the alias-free
condition for the conventional two-channel quadrature mirror filter
bank is analyzed using complex arithmetic. Second, the approach
developed in the first part is applied to the complex quadrature mirror
filter bank. Accordingly, the structure is simplified and the theory is
easier to follow. Finally, a new class of complex quadrature mirror
filter banks is proposed. Interesting properties of this new structure
are also discussed.
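For reference, the textbook input/output relation behind the alias-free condition discussed above, for a two-channel analysis/synthesis filter bank with analysis filters $H_0, H_1$ and synthesis filters $F_0, F_1$, can be written as:

```latex
\hat{X}(z) = \tfrac{1}{2}\bigl[H_0(z)F_0(z) + H_1(z)F_1(z)\bigr]\,X(z)
           + \tfrac{1}{2}\bigl[H_0(-z)F_0(z) + H_1(-z)F_1(z)\bigr]\,X(-z)
```

The alias term in $X(-z)$ vanishes for the standard choice $F_0(z) = H_1(-z)$, $F_1(z) = -H_0(-z)$, which together with the QMF relation $H_1(z) = H_0(-z)$ leaves a single prototype filter $H_0$ to design. This is the conventional real-coefficient case; the paper's contribution extends the analysis to complex quadrature mirror filter banks.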
Abstract: This paper presents the modeling and optimization of two NP-hard problems in flexible manufacturing systems (FMS): the part type selection problem and the loading problem. Due to the complexity and extent of the problems, the work was split into two parts. The first part discussed the modeling of the problems and showed how real-coded genetic algorithms (RCGA) can be applied to solve them. This second part discusses the effectiveness of the RCGA, which uses an array of real numbers as chromosome representation. The novel proposed chromosome representation produces only feasible solutions, which minimizes the computational time the GA would otherwise need to push its population toward the feasible search space or repair infeasible chromosomes. The proposed RCGA improves FMS performance by considering two objectives: maximizing system throughput and maintaining the balance of the system (minimizing system unbalance). The resulting objective values are compared to the optimum values produced by the branch-and-bound method. The experiments show that the proposed RCGA reaches near-optimum solutions in a reasonable amount of time.
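A minimal real-coded GA sketch follows, showing the core ingredients (real-valued chromosomes, arithmetic crossover, Gaussian mutation). The objective is a toy function, not the paper's FMS throughput/balance objectives, and the feasibility-preserving encoding described above is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(x):
    # Toy objective, maximized at x = 0.3 in every coordinate.
    return -np.sum((x - 0.3) ** 2)

def rcga(dim=4, pop_size=30, generations=60):
    pop = rng.random((pop_size, dim))          # real-valued chromosomes
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        # Truncation selection: keep the better half as parents.
        parents = pop[np.argsort(scores)][-pop_size // 2:]
        children = []
        while len(children) < pop_size:
            a, b = parents[rng.integers(0, len(parents), 2)]
            w = rng.random()
            child = w * a + (1 - w) * b        # arithmetic crossover
            child += rng.normal(0, 0.05, dim)  # Gaussian mutation
            children.append(np.clip(child, 0, 1))
        pop = np.array(children)
    return max(pop, key=fitness)

best = rcga()
```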
Abstract: The ability of UML to handle the modeling process of complex industrial software applications has increased its popularity to the extent of becoming the de facto language serving the design purpose. Although its rich graphical notation, naturally oriented towards the object-oriented concept, facilitates understandability, it hardly succeeds in capturing all domain-specific aspects in a satisfactory way. OCL, as the standard language for expressing additional constraints on UML models, has great potential to help improve expressiveness. Unfortunately, it suffers from a weak formalism due to its poor semantics, resulting in many obstacles to the building of tool support and thus to its application in industry. For this reason, much research has been conducted to formalize OCL expressions using a more rigorous approach. Our contribution joins this work in a complementary way, since it focuses specifically on OCL predefined properties, which constitute an important part of the construction of OCL expressions. Using formal methods, we succeed in rigorously expressing the OCL predefined functions.
Abstract: This paper is motivated by the aspect of uncertainty in financial decision making, and by how artificial intelligence and soft computing, with their uncertainty-reducing aspects, can be used for algorithmic trading applications that trade at high frequency.
This paper presents an optimized high frequency trading system that
has been combined with various moving averages to produce a hybrid
system that outperforms trading systems that rely solely on moving
averages. The paper optimizes an adaptive neuro-fuzzy inference
system that takes both the price and its moving average as input,
learns to predict price movements from training data consisting of
intraday data, dynamically switches between the best performing
moving averages, and performs decision making of when to buy or
sell a certain currency in high frequency.
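The moving-average input described above is straightforward to compute; the sketch below pairs it with a naive price-above-average rule as a baseline, whereas the system in the abstract learns its decisions with ANFIS instead of hard-coding them. The prices are synthetic.

```python
import numpy as np

def moving_average(prices, window):
    """Simple moving average over a sliding window."""
    kernel = np.ones(window) / window
    return np.convolve(prices, kernel, mode="valid")

# Synthetic currency prices (invented for illustration).
prices = np.array([1.10, 1.11, 1.12, 1.11, 1.13, 1.15, 1.14, 1.16])
ma3 = moving_average(prices, 3)

# Naive baseline rule: buy when the price is above its moving average.
# prices[2:] aligns each price with the moving average ending at it.
signals = np.where(prices[2:] > ma3, "buy", "sell")
```

An ANFIS-based system would instead take `prices` and `ma3` as inputs and learn the buy/sell mapping from intraday training data.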
Abstract: Interpretation of aerial images is an important task in
various applications. Image segmentation can be viewed as the essential
step for extracting information from aerial images. Among many
developed segmentation methods, the technique of clustering has been
extensively investigated and used. However, determining the number
of clusters in an image is inherently a difficult problem, especially
when a priori information on the aerial image is unavailable. This
study proposes a support vector machine approach for clustering
aerial images. Three cluster validity indices, distance-based index,
Davies-Bouldin index, and Xie-Beni index, are utilized as quantitative
measures of the quality of clustering results. Comparisons of the effectiveness of these indices and of various parameter settings for the proposed method are conducted. Experimental results are provided
to illustrate the feasibility of the proposed approach.
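One of the validity indices mentioned above, the Davies-Bouldin index, can be sketched directly: it averages, over clusters, the worst-case ratio of within-cluster scatter to between-centroid separation, with lower values indicating better clustering. The data below are toy points, not aerial images.

```python
import numpy as np

def davies_bouldin(X, labels):
    """Davies-Bouldin index: mean over clusters of the maximum
    (scatter_i + scatter_j) / distance(centroid_i, centroid_j)."""
    clusters = np.unique(labels)
    centroids = np.array([X[labels == c].mean(axis=0) for c in clusters])
    scatter = np.array([
        np.mean(np.linalg.norm(X[labels == c] - centroids[i], axis=1))
        for i, c in enumerate(clusters)
    ])
    db = 0.0
    for i in range(len(clusters)):
        ratios = [
            (scatter[i] + scatter[j])
            / np.linalg.norm(centroids[i] - centroids[j])
            for j in range(len(clusters)) if j != i
        ]
        db += max(ratios)
    return db / len(clusters)

# Two well-separated toy clusters -> a small (good) index value.
X = np.array([[0., 0.], [0., 1.], [5., 5.], [5., 6.]])
labels = np.array([0, 0, 1, 1])
score = davies_bouldin(X, labels)
```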
Abstract: In order to make surfing the Internet faster, and to avoid the redundant processing load of each request for the same web page, many caching techniques have been developed to reduce the latency of retrieving data on the World Wide Web. In this paper we give a quick overview of existing web caching techniques used for dynamic web pages, and then introduce a design and implementation model that takes advantage of the "URL Rewriting" feature of some popular web servers, e.g. Apache, to provide an effective approach to caching dynamic web pages.
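A minimal illustration, assuming Apache's mod_rewrite, of the kind of rule such a model builds on: serve a cached static copy when one exists and fall through to the dynamic script otherwise. The paths, cache layout and script name here are hypothetical, not the paper's implementation.

```apache
# If a cached static copy of the requested page exists, serve it;
# otherwise fall through to the dynamic script. Paths are illustrative.
RewriteEngine On
RewriteCond %{DOCUMENT_ROOT}/cache/$1.html -f
RewriteRule ^page/([^/]+)$ /cache/$1.html [L]
RewriteRule ^page/([^/]+)$ /render.php?id=$1 [L]
```

The dynamic script would write its output into the cache directory so that subsequent requests are served statically until the entry is invalidated.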
Abstract: There are three approaches to complete Bayesian Network (BN) model construction: total expert-centred, total data-centred, and semi data-centred. These three approaches constitute the
basis of the empirical investigation undertaken and reported in this
paper. The objective is to determine, amongst these three
approaches, which is the optimal approach for the construction of a
BN-based model for the performance assessment of students' laboratory work in a virtual electronic laboratory environment. BN
models were constructed using all three approaches, with respect to
the focus domain, and compared using a set of optimality criteria. In
addition, the impact of the size and source of the training data on the performance of total data-centred and semi data-centred models was
investigated. The results of the investigation provide additional
insight for BN model constructors and contribute to literature
providing supportive evidence for the conceptual feasibility and
efficiency of structure and parameter learning from data. In addition,
the results highlight other interesting themes.
Abstract: The modeling paradigm places models at the center of the development process. These models are represented by languages such as UML, the language standardized by the OMG, which has become essential for development. Similarly, the ontology engineering paradigm places ontologies at the center of the development process; in this paradigm we find OWL, the principal language for knowledge representation. Building ontologies from scratch is generally a difficult task. The bridge between UML and OWL appears in several regards, such as classes and associations. In this paper, we profit from the convergence between UML and OWL to propose an approach, based on meta-modeling and graph grammars and situated within the MDA architecture, for the automatic generation of OWL ontologies from UML class diagrams. The transformation is based on transformation rules; the level of abstraction in these rules is kept close to the application in order to obtain usable ontologies. We illustrate this approach with an example.