Abstract: The exponential increase in the volume of medical image databases has imposed new challenges on clinical routine in maintaining patient history, diagnosis, treatment and monitoring. With the advent of data mining and machine learning techniques it is possible to automate and/or assist physicians in clinical diagnosis. In this research a medical image classification framework using data mining techniques is proposed. It involves feature extraction, feature selection, feature discretization and classification. In the classification phase, the performance of the traditional kNN (k-nearest neighbor) classifier is improved using a feature weighting scheme and distance-weighted voting instead of simple majority voting. Feature weights are calculated using the interestingness measures used in association rule mining. Experiments on retinal fundus images show that the proposed framework improves the classification accuracy of traditional kNN from 78.57% to 92.85%.
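A minimal sketch of the classification step described above, in Python. The feature weights are taken as given inputs here; computing them from association-rule interestingness measures, as the abstract describes, is not shown.

```python
import math
from collections import defaultdict

def weighted_knn_predict(train, labels, feature_weights, query, k=3):
    """Classify `query` with a kNN variant that (a) scales each feature
    by a precomputed weight and (b) replaces simple majority voting with
    distance-weighted voting (closer neighbours count more)."""
    # Feature-weighted Euclidean distance to every training sample.
    dists = []
    for x, y in zip(train, labels):
        d = math.sqrt(sum(w * (a - b) ** 2
                          for w, a, b in zip(feature_weights, x, query)))
        dists.append((d, y))
    dists.sort(key=lambda t: t[0])

    # Distance-weighted voting: each of the k neighbours votes with 1/d.
    votes = defaultdict(float)
    for d, y in dists[:k]:
        votes[y] += 1.0 / (d + 1e-9)  # epsilon avoids division by zero
    return max(votes, key=votes.get)
```

With unit feature weights this reduces to distance-weighted kNN; non-uniform weights let informative features dominate the distance.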
Abstract: Thermite welding is widely used worldwide. The
reasons are that the equipment is highly mobile and the total
working time on site is shorter than that of the enclosed arc
welding method. Moreover, the operating skill required for
thermite welding is less than that required for enclosed arc
welding. In the present research work, heat treatment and a
combined 'expulsion and heat treatment' technique were used
to improve the mechanical properties and weldment structure.
Specimens were cut in the transverse direction from
'expulsion with heat treatment' and heat-treated thermite-welded
rails, prepared according to the AWS standard, and subjected
to tensile, impact and hardness tests; the results were
tabulated. Microstructural analysis was carried out with the
help of SEM. The effects of heat treatment and 'expulsion
with heat treatment' on the properties of the thermite-welded
rails were then analyzed, and the mechanical and
microstructural properties of the two sets of rails were
compared. The mechanical and microstructural response of the
'expulsion with heat treatment' thermite-welded rail is higher
than that of heat treatment alone.
Abstract: The main goal of this article is to find efficient
methods for elemental and molecular analysis of living
microorganisms (algae) under defined environmental conditions and
cultivation processes. The overall knowledge of chemical
composition is obtained utilizing laser-based techniques, Laser-
Induced Breakdown Spectroscopy (LIBS) for acquiring information
about elemental composition and Raman Spectroscopy for gaining
molecular information, respectively. Algal cells were suspended in
liquid media and characterized using their spectra. Results obtained
employing LIBS and Raman Spectroscopy techniques will help to
elucidate algae biology (nutrition dynamics depending on cultivation
conditions) and to identify algal strains, which have the potential for
applications in metal-ion absorption (bioremediation) and biofuel
industry. Moreover, bioremediation can be readily combined with
production of 3rd generation biofuels. In order to use algae for
efficient fuel production, the optimal cultivation parameters have to
be determined, leading to high production of oil in selected
cells without significant inhibition of the photosynthetic activity and
the culture growth rate, e.g. it is necessary to distinguish conditions
for algal strain containing high amount of higher unsaturated fatty
acids. Measurements employing LIBS and Raman Spectroscopy were
utilized in order to give information about the alga Trachydiscus minutus
with emphasis on the amount of the lipid content inside the algal cell
and the ability of algae to withdraw nutrients from its environment
and bioremediation (elemental composition), respectively. This
article can serve as the reference for further efforts in describing
the complete chemical composition of algal samples employing laser-ablation
techniques.
Abstract: Nowadays, more engineering systems are using some
kind of Artificial Intelligence (AI) for the development of their
processes. Some well-known AI techniques include artificial neural
nets, fuzzy inference systems, and neuro-fuzzy inference systems
among others. Furthermore, many decision-making applications base
their intelligent processes on Fuzzy Logic; due to the Fuzzy
Inference Systems (FIS) capability to deal with problems that are
based on user knowledge and experience. Also, knowing that users
have a wide variety of distinctiveness, and generally, provide
uncertain data, this information can be used and properly processed
by a FIS. To properly consider uncertainty and inexact system input
values, FIS normally use Membership Functions (MF) that represent
a degree of user satisfaction on certain conditions and/or constraints.
In order to define the parameters of the MFs, the knowledge from
experts in the field is very important. This knowledge defines the MF
shape to process the user inputs and through fuzzy reasoning and
inference mechanisms, the FIS can provide an "appropriate" output.
However, an important issue immediately arises: How can it be
assured that the obtained output is the optimum solution? How can it
be guaranteed that each MF has an optimum shape? A viable solution
to these questions is MF parameter optimization. In this
paper a novel parameter optimization process is presented. The
process for FIS parameter optimization consists of five simple
steps that can easily be carried out off-line. Here the proposed
process of FIS parameter optimization is demonstrated by its
implementation on an Intelligent Interface section dealing with the
on-line customization / personalization of internet portals applied to
E-commerce.
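The abstract does not detail the five optimization steps, so the following is only a hedged illustration of the core idea: fitting the parameters of a single triangular MF to hypothetical expert-rated satisfaction scores with an off-line random search. The `ratings` data and the search ranges are invented for the example.

```python
import random

def tri_mf(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Hypothetical expert ratings: (input value, satisfaction degree).
ratings = [(1.0, 0.1), (3.0, 0.6), (5.0, 1.0), (7.0, 0.55), (9.0, 0.05)]

def cost(params):
    """Squared error between the MF and the expert ratings."""
    a, b, c = params
    return sum((tri_mf(x, a, b, c) - y) ** 2 for x, y in ratings)

def random_search(iters=5000, seed=0):
    """Off-line random-search optimization of the MF parameters."""
    rng = random.Random(seed)
    best, best_cost = None, float("inf")
    for _ in range(iters):
        a = rng.uniform(0.0, 4.0)
        c = rng.uniform(6.0, 10.0)
        b = rng.uniform(a, c)          # keep a <= b <= c
        cand_cost = cost((a, b, c))
        if cand_cost < best_cost:
            best, best_cost = (a, b, c), cand_cost
    return best, best_cost
```

The same pattern extends to several MFs at once by concatenating their parameters into one search vector.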
Abstract: Over the early years of the 21st century, cities
throughout the Middle East, particularly in the Gulf region, have
expanded more rapidly than ever before. Given the presence of a
large volume of high-rise buildings all over the region, the local
authorities aim to set a new standard for sustainable development,
with an integrated approach to maintain a balance between economy,
quality, environmental protection and safety of life. In the very near
future, sustainability will become a mandatory criterion that must
be included in all building projects. It is well known in
building sustainability discussions that structural design engineers do
not have a key role in this matter. In addition, LEED (Leadership
in Energy and Environmental Design) has focused almost exclusively
on environmental components and material specifications. The
objective of this paper is to focus and establish groundwork for
sustainability techniques and applications related to the RC high-rise
buildings design, from the structural point of view. A set of
recommendations related to local conditions, structural modeling and
analysis is given, and some helpful suggestions for structural design
team work are addressed. This paper attempts to help structural
engineers in identifying the building sustainability design, in order to
meet local needs and achieve alternative solutions at an early stage of
project design.
Abstract: National economic development affects vehicle
ownership, which ultimately increases fuel consumption. The rise of
the vehicle ownership is dominated by the increasing number of
motorcycles. This research aims to analyze and identify the
characteristics of fuel consumption, the city transportation system,
and to analyze the relationship and the effect of the city
transportation system on fuel consumption. A multivariable
analysis, carried out with the R software, is used in this study.
More than 84% of the fuel on Java is consumed in metropolitan and
large cities. The city transportation system variables that strongly
affect fuel consumption are population, public vehicles, private
vehicles and private buses. This method can be developed to control
fuel consumption by taking the urban transport system and city
typology into account, which can in turn reduce fuel subsidies and
strengthen the national economy.
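The study's multivariable analysis was done in R; as a rough illustration of the underlying computation, the following pure-Python sketch fits an ordinary least squares model via the normal equations. The predictors here are placeholders, not the study's actual variables.

```python
def ols_fit(X, y):
    """Ordinary least squares via the normal equations, solved with
    Gaussian elimination (standard library only). Returns
    [intercept, coefficients...]."""
    n, p = len(X), len(X[0])
    A = [[1.0] + list(row) for row in X]   # add intercept column
    m = p + 1
    # Build X^T X and X^T y.
    XtX = [[sum(A[r][i] * A[r][j] for r in range(n)) for j in range(m)]
           for i in range(m)]
    Xty = [sum(A[r][i] * y[r] for r in range(n)) for i in range(m)]
    # Forward elimination with partial pivoting.
    for col in range(m):
        piv = max(range(col, m), key=lambda r: abs(XtX[r][col]))
        XtX[col], XtX[piv] = XtX[piv], XtX[col]
        Xty[col], Xty[piv] = Xty[piv], Xty[col]
        for r in range(col + 1, m):
            f = XtX[r][col] / XtX[col][col]
            for c in range(col, m):
                XtX[r][c] -= f * XtX[col][c]
            Xty[r] -= f * Xty[col]
    # Back substitution.
    beta = [0.0] * m
    for i in range(m - 1, -1, -1):
        beta[i] = (Xty[i] - sum(XtX[i][j] * beta[j]
                                for j in range(i + 1, m))) / XtX[i][i]
    return beta
```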
Abstract: In this paper, we investigated the characteristics of a
clinical dataset with respect to feature selection and classification
in the presence of missing values, and we propose appropriate
techniques to achieve the aims of the study: to find the features
that have a strong effect on mortality and on the mortality time
frame. We quantify the complexity of the clinical dataset and,
according to that complexity, propose a data mining process to cope
with missing values, high dimensionality, and the prediction
problem, using methods of missing value replacement, feature
selection, and classification. The experimental results will be
extended to develop a prediction model for cardiology.
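A toy sketch of two of the three steps named above: missing value replacement by column means, and a simple variance-based stand-in for feature selection. The abstract does not specify which concrete methods were used, so these choices are illustrative only.

```python
from statistics import mean, pvariance

def impute_mean(rows):
    """Replace missing values (None) with the column mean."""
    cols = list(zip(*rows))
    means = [mean(v for v in col if v is not None) for col in cols]
    return [[means[j] if v is None else v for j, v in enumerate(row)]
            for row in rows]

def select_top_variance(rows, k):
    """Keep the k features with the highest variance; a simple
    stand-in for the feature selection step."""
    cols = list(zip(*rows))
    order = sorted(range(len(cols)), key=lambda j: pvariance(cols[j]),
                   reverse=True)[:k]
    keep = sorted(order)
    return [[row[j] for j in keep] for row in rows], keep
```

In practice the imputed, reduced dataset would then feed into a classifier to predict mortality.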
Abstract: A novel behavioral detection framework is proposed
to detect zero-day buffer overflow vulnerabilities (based on network
behavioral signatures) using zero-day exploits, instead of the
signature-based or anomaly-based detection solutions currently
available for IDPS techniques. First, we present a detection
model that uses a shadow honeypot. Our system is used for the online
processing of network attacks and generating a behavior detection
profile. The detection profile represents the dataset of 112 types of
metrics describing the exact behavior of malware in the network. In
this paper we present the examples of generating behavioral
signatures for two attacks – a buffer overflow exploit on FTP server
and well known Conficker worm. We demonstrated the visualization
of important aspects by showing the differences between valid
behavior and the attacks. Based on these metrics we can detect
attacks with a very high probability of success; the detection
process is, however, very expensive.
Abstract: Fine-grained data replication over the Internet allows duplication of frequently accessed data objects, as opposed to entire sites, at certain locations so as to improve the performance of large-scale content distribution systems. In a distributed system, agents representing their sites try to maximize their own benefit, since they are driven by different goals, such as minimizing their communication costs, latency, etc. In this paper, we use game-theoretical techniques, and in particular auctions, to identify a bidding mechanism that encapsulates the selfishness of the agents while keeping a controlling hand over them. In essence, the proposed game-theory-based mechanism is the study of what happens when independent agents act selfishly and how to control them to maximize the overall performance. A bidding mechanism asks how one can design systems so that the agents' selfish behavior results in the desired system-wide goals. Experimental results reveal that this mechanism provides excellent solution quality while maintaining fast execution time. The comparisons are recorded against some well-known techniques such as greedy, branch and bound, game-theoretical auctions and genetic algorithms.
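The abstract does not name the exact auction form, so the sketch below uses a second-price (Vickrey) sealed-bid auction, a standard mechanism in which truthful bidding is a dominant strategy, as one plausible way to encapsulate agent selfishness in a bidding step.

```python
def vickrey_auction(bids):
    """Second-price sealed-bid auction: the highest bidder wins the
    right to host the replica but pays the second-highest bid. Since
    truthful bidding is a dominant strategy under this rule, selfish
    agents have no incentive to misreport their valuations."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner, _ = ranked[0]
    price = ranked[1][1] if len(ranked) > 1 else 0.0
    return winner, price
```

For example, agents bidding their true benefit from hosting an object would yield an allocation to the agent that values it most, at the runner-up's price.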
Abstract: The IVE toolkit has been created for facilitating research, education and development in the field of virtual storytelling and computer games. Primarily, the toolkit is intended for modelling action selection mechanisms of virtual humans, investigating level-of-detail AI techniques for large virtual environments, and for exploring joint behaviour and the role-passing technique (Sec. V). Additionally, the toolkit can be used as an AI middleware without any changes. The main facility of IVE is that it serves for prototyping both the AI and the virtual worlds themselves. The purpose of this paper is to describe IVE's features in general and to present our current work on this platform, including an educational game.
Keywords: AI middleware, simulation, virtual world.
Abstract: The goal of a network-based intrusion detection
system is to classify activities of network traffics into two major
categories: normal and attack (intrusive) activities. Nowadays, data
mining and machine learning play an important role in many
sciences, including intrusion detection systems (IDS), using both
supervised and unsupervised techniques. However, one of the
essential steps of data mining is feature selection, which helps in
improving the efficiency, performance and prediction rate of the
proposed approach. This paper applies the unsupervised K-means
clustering algorithm with information gain (IG) for feature selection
and reduction to build a network intrusion detection system. For our
experimental analysis, we have used the new NSL-KDD dataset,
which is a modified version of the KDDCup 1999 intrusion detection
benchmark dataset. With a split of 60.0% for the training set and the
remainder for the testing set, a two-class classification (Normal,
Attack) has been implemented. The Weka framework, a Java-based
open-source collection of machine learning algorithms for data
mining tasks, has been used in the testing process. The experimental
results show that the proposed approach is
very accurate with low false positive rate and high true positive rate
and it takes less learning time in comparison with using the full
features of the dataset with the same algorithm.
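A small sketch of the information gain computation used here for feature ranking; the K-means clustering step and the NSL-KDD data handling are omitted, and the features are assumed already discretized.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a label sequence, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n)
                for c in Counter(labels).values())

def information_gain(feature_values, labels):
    """IG of a discrete feature with respect to the class labels:
    H(labels) minus the weighted entropy of each feature-value group."""
    total = entropy(labels)
    n = len(labels)
    by_value = {}
    for v, y in zip(feature_values, labels):
        by_value.setdefault(v, []).append(y)
    remainder = sum(len(ys) / n * entropy(ys)
                    for ys in by_value.values())
    return total - remainder
```

Ranking features by this score and keeping the top-scoring ones is the reduction step the abstract describes.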
Abstract: Artificial neural networks (ANN) have the ability to model input-output relationships from processing raw data. This characteristic makes them invaluable in industry domains where such knowledge is scarce at best. In recent decades, in order to overcome the black-box characteristic of ANNs, researchers have attempted to extract the knowledge embedded within ANNs in the form of rules that can be used in inference systems. This paper presents a new technique that is able to extract a small set of rules from a two-layer ANN. The extracted rules yield high classification accuracy when implemented within a fuzzy inference system. The technique targets industry domains that possess less complex problems for which no expert knowledge exists and for which a simpler solution is preferred to a complex one. The proposed technique is more efficient, simple, and applicable than most of the previously proposed techniques.
Abstract: The most important subtype of non-Hodgkin's
lymphoma is the Diffuse Large B-Cell Lymphoma. Approximately
40% of the patients suffering from it respond well to therapy,
whereas the remainder needs a more aggressive treatment, in order to
better their chances of survival. Data Mining techniques have helped
to identify the class of the lymphoma in an efficient manner. Despite
that, thousands of genes should be processed to obtain the results.
This paper presents a comparison of the use of various attribute
selection methods aiming to reduce the number of genes to be
searched, looking for a more effective procedure as a whole.
Abstract: The objective of this work was to examine the
changes in the microstructure and macro physical properties caused
by the carbonation of normalised CEM II mortar. Samples were
prepared and subjected to accelerated carbonation at 20°C, 65%
relative humidity and 20% CO2 concentration. On the microstructure
scale, the evolutions of the cumulative pore volume, pore size
distribution, and specific surface area during carbonation were
calculated from the adsorption-desorption isotherms of nitrogen. We
also examined the evolution of macro physical properties such as the
porosity accessible to water, the gas permeability, and thermal
conductivity. The conflict between the results of nitrogen porosity
and water porosity indicated that the porous domains explored using
these two techniques are different and help to complementarily
evaluate the effects of carbonation. This is a multi-scale study where
results on microstructural changes can help to explain the evolution
of macro physical properties.
Abstract: Current proposals for the E-passport or ID-Card are similar to a regular passport, with the addition of a tiny contactless integrated circuit (computer chip) inserted in the back cover, which acts as a secure storage device for the same data visually displayed on the photo page of the passport. In addition, it includes a digital photograph that enables biometric comparison through the use of facial recognition technology at international borders. Moreover, the e-passport has a new interface incorporating additional antifraud and security features. However, its problems are reliability, security and privacy. Privacy is a serious issue since there is no encryption between the readers and the E-passport. Furthermore, security issues such as authentication, data protection and control techniques cannot be embedded in one process. In this paper, the design and prototype implementation of an improved E-passport reader is presented. The passport holder is authenticated online using the GSM network, which is the main interface between the identification center and the e-passport reader. The communication data between the server and the e-passport reader is protected using AES encryption while being transferred through the GSM network. Performance measurements indicate a 19% improvement in encryption cycles versus previously reported results.
Abstract: Pure-phase gallosilicate nitrite sodalite has been synthesized in a single step by a low-temperature (373 K) hydrothermal technique. The product obtained was characterized using a combination of techniques including X-ray powder diffraction, IR and Raman spectroscopy, SEM, MAS NMR spectroscopy, as well as thermogravimetry. Sodalite with an ideal composition was obtained after synthesis at 373 K over seven days in an alkaline medium. The structural features of the Na8[GaSiO4]6(NO2)2 sodalite were investigated by IR, by MAS NMR spectroscopy of the 29Si and 23Na nuclei, and by Rietveld refinement of the X-ray powder diffraction data. The crystal structure of this sodalite has been refined in the space group P-43n, with a cell parameter of 8.98386 Å and V = 726.9 ų (Rwp = 0.077 and Rp = 0.0537), and the Si-O-Ga angle is found to be 132.92°. The MAS NMR study confirms complete ordering of Si and Ga in the gallosilicate framework. The surface area of a single entity with stoichiometry Na8[GaSiO4]6(NO2)2 was found to be 8.083 × 10⁻¹⁵ cm²/g.
Abstract: This paper proposes the use of metrics in design space exploration that highlight where in the structure of the model, and at what point in the behaviour, prevention is needed against transient faults. Previous approaches to tackling transient faults focused on recovery after detection; almost no research has been directed towards preventive measures. But in real-time systems, hard deadlines are performance requirements that absolutely must be met, and a missed deadline constitutes an erroneous action and a possible system failure. This paper proposes the use of metrics to assess the system design to flag where transient faults may have significant impact. These tools then allow the design to be changed to minimize that impact, and they also flag where particular design techniques, such as coding of communications or memories, need to be applied in later stages of design.
Abstract: This paper summarizes and compares approaches to
solving the knapsack problem and its known application in capital
budgeting. The first approach uses deterministic methods and can be
applied to small-size tasks with a single constraint. We can also
apply commercial software systems such as the GAMS modelling
system. However, because of NP-completeness of the problem, more
complex problem instances must be solved by means of heuristic
techniques to achieve an approximation of the exact solution in a
reasonable amount of time. We show the problem representation and
parameter settings for a genetic algorithm framework.
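A compact illustration of the heuristic approach mentioned above: a genetic algorithm for the 0/1 knapsack problem with tournament selection, one-point crossover, bit-flip mutation and elitism. The operator choices and parameter values are illustrative, not the paper's actual settings.

```python
import random

def knapsack_ga(weights, values, capacity, pop_size=40, gens=120, seed=1):
    """Genetic algorithm for the 0/1 knapsack problem. Chromosomes are
    bit lists; infeasible (overweight) individuals get zero fitness."""
    rng = random.Random(seed)
    n = len(weights)

    def fitness(ind):
        w = sum(wi for wi, b in zip(weights, ind) if b)
        v = sum(vi for vi, b in zip(values, ind) if b)
        return v if w <= capacity else 0   # penalize overweight packs

    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(gens):
        new_pop = [max(pop, key=fitness)]          # elitism
        while len(new_pop) < pop_size:
            p1 = max(rng.sample(pop, 3), key=fitness)  # tournament
            p2 = max(rng.sample(pop, 3), key=fitness)
            cut = rng.randrange(1, n)              # one-point crossover
            child = p1[:cut] + p2[cut:]
            if rng.random() < 0.1:                 # bit-flip mutation
                i = rng.randrange(n)
                child[i] ^= 1
            new_pop.append(child)
        pop = new_pop
    best = max(pop, key=fitness)
    return fitness(best), best
```

On small instances the GA typically recovers the exact optimum; its value is in scaling to instances where deterministic methods become intractable.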
Abstract: An effective approach for extracting document images from a noisy background is introduced. The entire scheme is divided into three sub-techniques: the initial preprocessing operations for noise cluster tightening, the introduction of a new thresholding method by maximizing the ratio of standard deviations of the combined effect on the image to the sum of weighted classes, and finally the image restoration phase by image binarization utilizing the proposed optimum threshold level. The proposed method is found to be efficient compared to the existing schemes in terms of computational complexity as well as speed, with better noise rejection.
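The exact ratio criterion proposed in the abstract is not fully specified there, so the sketch below shows the closely related classic Otsu method (maximizing between-class variance) as a stand-in for the thresholding step that precedes binarization.

```python
def otsu_threshold(pixels):
    """Classic Otsu thresholding: pick the gray level that maximizes
    the between-class variance w0 * w1 * (mu0 - mu1)^2. This is a
    close relative of the ratio-of-deviations criterion the abstract
    proposes, shown here because the exact weighting is not given."""
    best_t, best_score = None, -1.0
    levels = sorted(set(pixels))
    n = len(pixels)
    for t in levels[:-1]:
        lo = [p for p in pixels if p <= t]   # background candidate class
        hi = [p for p in pixels if p > t]    # foreground candidate class
        w0, w1 = len(lo) / n, len(hi) / n
        mu0 = sum(lo) / len(lo)
        mu1 = sum(hi) / len(hi)
        score = w0 * w1 * (mu0 - mu1) ** 2   # between-class variance
        if score > best_score:
            best_t, best_score = t, score
    return best_t
```

Binarization then maps every pixel at or below the returned level to 0 and the rest to 1.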
Abstract: Unmanned Aerial Vehicles (UAVs) have gained tremendous importance, in both military and civil domains, during the first decade of this century. In a UAV, an onboard computer (autopilot) autonomously controls the flight and navigation of the aircraft. Based on the aircraft's role and flight envelope, controllers ranging from basic to complex and sophisticated are used to stabilize the aircraft's flight parameters. These controllers constitute the autopilot system for UAVs. The autopilot systems most commonly provide lateral and longitudinal control through Proportional-Integral-Derivative (PID) controllers or phase-lead or lag compensators. Various techniques are commonly used to 'tune' the gains of these controllers, such as in-flight step-by-step tuning and software-in-loop or hardware-in-loop tuning methods. Subsequently, numerous in-flight tests are required to actually 'fine-tune' these gains. However, an optimization-based tuning of these PID controllers or compensators, as presented in this paper, can greatly minimize the requirement for in-flight 'tuning' and substantially reduce the risks and cost involved in flight testing.
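A hedged sketch of optimization-based PID tuning: gains are searched off-line to minimize the integral of squared error of a unit-step response on a simple first-order plant. The plant model, cost function and random search are illustrative assumptions, not the paper's actual aircraft dynamics or optimizer.

```python
import random

def step_response_cost(kp, ki, kd, tau=1.0, dt=0.01, t_end=5.0):
    """Integral of squared error (ISE) for a unit-step setpoint on a
    first-order plant dx/dt = (u - x) / tau under PID control, with
    forward-Euler integration."""
    x, integ, prev_err, cost = 0.0, 0.0, 1.0, 0.0
    for _ in range(int(t_end / dt)):
        err = 1.0 - x
        integ += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integ + kd * deriv   # PID control law
        x += dt * (u - x) / tau                  # Euler plant update
        prev_err = err
        cost += err * err * dt
    return cost

def tune_pid(iters=300, seed=0):
    """Random-search 'optimization-based tuning' of the PID gains;
    unstable gain sets accumulate huge (or NaN) cost and are rejected."""
    rng = random.Random(seed)
    best, best_cost = None, float("inf")
    for _ in range(iters):
        gains = (rng.uniform(0, 10), rng.uniform(0, 5), rng.uniform(0, 1))
        c = step_response_cost(*gains)
        if c < best_cost:
            best, best_cost = gains, c
    return best, best_cost
```

Any gradient-free optimizer can replace the random search; the point is that each candidate gain set is evaluated in simulation rather than in flight.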