Abstract: Bio-chips are used for experiments on genes and contain various kinds of information, such as genes and samples. Two-dimensional bio-chips, in which one axis represents genes and the other represents samples, are widely used these days. Instead of experimenting with real genes, which costs a great deal of money and time, bio-chips are used for biological experiments. Extracting data from bio-chips with high accuracy and finding patterns or useful information in those data are therefore very important. Bio-chip analysis systems extract data from various kinds of bio-chips and mine the data in order to obtain useful information. One of the commonly used methods of mining the data is classification. The algorithm used to classify the data varies depending on the data types, the characteristics of the data, and so on. Considering that bio-chip data are extremely large, an algorithm that imitates an ecosystem, such as the ant algorithm, is well suited for classification. This paper focuses on finding classification rules in bio-chip data using the Ant Colony algorithm, which imitates an ecosystem. The developed system takes into consideration the accuracy of the discovered rules when it applies them to bio-chip data in order to predict the classes.
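The abstract does not include pseudo-code, but the rule-discovery loop of Ant-Miner-style classifiers follows a well-known pattern: each ant assembles an IF-THEN rule term by term under the guidance of pheromone levels, and the pheromone on the terms of good rules is reinforced. A deliberately simplified sketch of that loop (the toy dataset, attribute names, quality function and parameter values are our own illustrative assumptions, not the authors' implementation):

```python
import random

# Toy bio-chip-style dataset: categorical gene-expression levels mapped to
# a class label. Attribute names and records are hypothetical.
DATA = [
    {"gene_a": "high", "gene_b": "low",  "class": "tumor"},
    {"gene_a": "high", "gene_b": "high", "class": "tumor"},
    {"gene_a": "low",  "gene_b": "low",  "class": "normal"},
    {"gene_a": "low",  "gene_b": "high", "class": "normal"},
]
TERMS = [("gene_a", "high"), ("gene_a", "low"),
         ("gene_b", "high"), ("gene_b", "low")]

def rule_quality(rule, data):
    """Fraction of covered records whose class matches the rule's majority
    class (a simplified stand-in for Ant-Miner's sensitivity*specificity)."""
    covered = [r for r in data if all(r[a] == v for a, v in rule)]
    if not covered:
        return 0.0, None
    labels = [r["class"] for r in covered]
    majority = max(set(labels), key=labels.count)
    return labels.count(majority) / len(covered), majority

def ant_miner(data, n_ants=50, evaporation=0.1, seed=0):
    rng = random.Random(seed)
    pheromone = {t: 1.0 for t in TERMS}
    best_rule, best_cls, best_q = [], None, 0.0
    for _ in range(n_ants):
        rule, used_attrs = [], set()
        # Each ant adds at most one term per attribute, chosen with
        # probability proportional to pheromone; it may stop early.
        while True:
            choices = [t for t in TERMS if t[0] not in used_attrs]
            if not choices or rng.random() < 0.3:
                break
            term = rng.choices(choices,
                               weights=[pheromone[t] for t in choices])[0]
            rule.append(term)
            used_attrs.add(term[0])
        q, cls = rule_quality(rule, data)
        if q > best_q:
            best_rule, best_cls, best_q = rule, cls, q
        # Evaporate everywhere, then reinforce the terms just used.
        for t in TERMS:
            pheromone[t] *= (1 - evaporation)
        for t in rule:
            pheromone[t] += q
    return best_rule, best_cls, best_q

rule, cls, q = ant_miner(DATA)
print("IF", " AND ".join(f"{a}={v}" for a, v in rule),
      "THEN", cls, f"(quality {q:.2f})")
```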
Abstract: Knowledge Discovery in Databases (KDD) is the process of extracting previously unknown but useful and significant information from massive volumes of data in databases. Data mining is a stage in the overall KDD process that applies an algorithm to extract interesting patterns. Usually, such algorithms generate a huge volume of patterns. These patterns have to be evaluated using interestingness measures that reflect the user's requirements. Interestingness is defined in two ways: (i) objective measures and (ii) subjective measures. Objective measures such as support and confidence extract meaningful patterns based on the structure of the patterns, while subjective measures such as unexpectedness and novelty reflect the user's perspective. In this report, we briefly survey the most widespread and successful subjective measures and propose a new subjective measure of interestingness, namely shocking.
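For reference, the two objective measures named above have standard definitions: for a rule X → Y, support is the fraction of transactions containing X ∪ Y, and confidence is supp(X ∪ Y)/supp(X). A minimal sketch over a hypothetical transaction list:

```python
# Support and confidence of an association rule X -> Y over a toy
# transaction database (the transactions are hypothetical examples).
transactions = [
    {"milk", "bread"},
    {"milk", "bread", "butter"},
    {"bread", "butter"},
    {"milk", "butter"},
]

def support(itemset, db):
    """Fraction of transactions containing every item in `itemset`."""
    return sum(itemset <= t for t in db) / len(db)

def confidence(antecedent, consequent, db):
    """conf(X -> Y) = supp(X union Y) / supp(X)."""
    return support(antecedent | consequent, db) / support(antecedent, db)

X, Y = {"milk"}, {"bread"}
print("support:", support(X | Y, transactions))       # 0.5
print("confidence:", confidence(X, Y, transactions))  # ~0.667
```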
Abstract: The heuristic decision rules used for project scheduling will vary depending upon the project's size, complexity, duration, personnel, and owner requirements. The concept of project complexity has received little detailed attention. The need to differentiate between easy and hard problem instances and the interest in isolating the fundamental factors that determine the computing effort required by these procedures inspired a number of researchers to develop various complexity measures.
In this study, the most common measures of project complexity are presented, and a new measure of project complexity is developed. The main advantage of the proposed measure is that it considers size, shape and logic characteristics, time characteristics, resource demands and availability characteristics, as well as the number of critical activities and critical paths. The sensitivity of the proposed measure to the complexity of project networks has been tested and evaluated against the other complexity measures on the fifty project networks considered in this study. The developed measure showed greater sensitivity to changes in the network data and gave accurate quantified results when comparing the complexities of networks.
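The measures themselves are not reproduced in the abstract; as a concrete example of the size-and-logic family that the proposed measure extends, the classic coefficient of network complexity (CNC) is simply the ratio of precedence arcs to activities. A small illustrative computation (the example network is hypothetical):

```python
# Coefficient of network complexity: CNC = number of arcs / number of nodes,
# one of the classic size/logic complexity measures. The activity-on-node
# network (precedence arcs) below is a hypothetical example.
arcs = [("A", "B"), ("A", "C"), ("B", "C"), ("B", "D"), ("C", "D")]
nodes = {n for arc in arcs for n in arc}

cnc = len(arcs) / len(nodes)
print(f"CNC = {len(arcs)}/{len(nodes)} = {cnc:.2f}")  # 5/4 = 1.25
```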
Abstract: This research paper evaluates and compares the performance of equal-cost adaptive multi-path routing algorithms under the transport protocols TCP (Transmission Control Protocol) and UDP (User Datagram Protocol) using the network simulator ns2, and concludes which one performs better.
Abstract: Next generation networks, with the idea of converging the service and control layers of existing networks (fixed, mobile and data) and with the intention of providing services in an integrated network, have opened new horizons for telecom operators. On the other hand, economic problems have caused operators to look for new sources of income, including new services, the subscription of more users and their encouragement to use more network resources, and easy participation of service providers or third-party operators in utilizing networks. These requirements call for a service-layer architecture based on next generation objectives. In this paper, a new architecture based on the IMS model is presented that explains the participation of third-party operators in the creation and implementation of services on an integrated telecom network.
Abstract: The paper presents a preliminary study on the modeling and estimation of basic wind speed (extreme wind gusts) for the vulnerability assessment and design of buildings in Ayeyarwady Region. The establishment of appropriate design wind speeds is a critical step towards the calculation of design wind loads for structures. In this paper, the extreme value analysis for this prediction is based on anemometer data (1970-2009) maintained by the Department of Meteorology and Hydrology at Pathein. Statistical and probabilistic approaches are used to derive formulas for estimating 3-second gusts from the recorded data (10-minute sustained mean wind speeds).
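A common way to implement the extreme value analysis described here is to fit a Gumbel (Type I) distribution to annual maximum wind speeds and to convert 10-minute means into 3-second gusts with a gust factor. A hedged sketch of that workflow (the sample data, the 50-year return period, the gust factor of 1.42 and the use of scipy are illustrative assumptions, not the paper's exact procedure):

```python
import numpy as np
from scipy.stats import gumbel_r

# Hypothetical annual-maximum 10-minute mean wind speeds (m/s), standing in
# for the 1970-2009 anemometer record used in the paper.
annual_max_10min = np.array([18.2, 21.5, 19.8, 24.1, 20.3, 22.7, 19.1,
                             23.4, 21.0, 25.6, 20.8, 22.1, 19.5, 23.9])

# Fit a Gumbel (Type I) extreme-value distribution:
# F(v) = exp(-exp(-(v - mu) / beta)).
mu, beta = gumbel_r.fit(annual_max_10min)

# Design wind speed for a T-year return period satisfies F(v) = 1 - 1/T.
T = 50.0
v10min_T = gumbel_r.ppf(1.0 - 1.0 / T, loc=mu, scale=beta)

# Convert the 10-minute mean to a 3-second gust with a gust factor
# (1.42 is a typical open-terrain value, assumed here for illustration).
gust_factor = 1.42
v3s_T = gust_factor * v10min_T

print(f"50-yr 10-min mean: {v10min_T:.1f} m/s, 3-s gust: {v3s_T:.1f} m/s")
```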
Abstract: This paper presents a new approach using a Combined Artificial Neural Network (CANN) module for daily peak load forecasting. Five different computational techniques, namely the constrained method, the unconstrained method, Evolutionary Programming (EP), Particle Swarm Optimization (PSO), and the Genetic Algorithm (GA), have been used to identify the CANN module for peak load forecasting. In this paper, a set of neural networks has been trained with different architectures and training parameters. The networks are trained and tested on actual load data of Chennai city (India). A set of better-trained conventional ANNs is selected to develop the CANN module using the different algorithms, instead of using one best conventional ANN. The results obtained using the CANN module confirm its validity.
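The combination step can be pictured as choosing weights for the member networks' forecasts; in a constrained formulation, for instance, the weights are non-negative and sum to one. A minimal sketch of such a constrained combination (the member forecasts and the use of scipy's solver are hypothetical illustrations, not the authors' formulation):

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical peak-load forecasts (MW) from three trained ANNs over five
# days, plus the actual loads; stand-ins for the Chennai data in the paper.
forecasts = np.array([[2010., 2105., 1990., 2210., 2055.],
                      [1985., 2090., 2015., 2180., 2070.],
                      [2030., 2120., 1975., 2230., 2040.]])
actual = np.array([2000., 2100., 2000., 2200., 2060.])

def mse(w):
    """Mean squared error of the weighted combination of member ANNs."""
    return np.mean((w @ forecasts - actual) ** 2)

n = forecasts.shape[0]
# Constrained method: weights are non-negative and sum to one.
result = minimize(mse, x0=np.full(n, 1.0 / n),
                  bounds=[(0.0, 1.0)] * n,
                  constraints=[{"type": "eq",
                                "fun": lambda w: w.sum() - 1.0}])
print("combination weights:", np.round(result.x, 3))
print("combined forecast:", np.round(result.x @ forecasts, 1))
```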
Abstract: In this paper, we propose an architecture for easily
constructing a robot controller. The architecture is a multi-agent
system which has eight agents: the Man-machine interface, Task
planner, Task teaching editor, Motion planner, Arm controller,
Vehicle controller, Vision system and CG display. The controller has
three databases: the Task knowledge database, the Robot database and
the Environment database. Based on this controller architecture, we
are constructing an experimental power distribution line maintenance
robot system and are conducting experiments on maintenance tasks, for
example, the bolt-insertion task.
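One way to picture the architecture is as a set of agents exchanging messages over a shared bus while consulting the three databases. A deliberately simplified skeleton showing three of the eight agents (all class names, the message protocol and the database contents are our own illustrative assumptions, not the authors' implementation):

```python
from collections import defaultdict

class MessageBus:
    """Shared publish/subscribe bus over which the agents communicate."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)

    def publish(self, topic, payload):
        for handler in self.subscribers[topic]:
            handler(payload)

class TaskPlanner:
    """Decomposes a task (e.g. bolt insertion) into motion requests,
    consulting the Task knowledge database."""
    def __init__(self, bus, task_db):
        self.bus, self.task_db = bus, task_db
        bus.subscribe("task", self.on_task)

    def on_task(self, task):
        for step in self.task_db.get(task, []):
            self.bus.publish("motion", step)

class MotionPlanner:
    """Turns motion requests into arm commands, consulting the Robot DB."""
    def __init__(self, bus, robot_db):
        self.bus, self.robot_db = bus, robot_db
        bus.subscribe("motion", self.on_motion)

    def on_motion(self, step):
        self.bus.publish("arm", f"move: {step} ({self.robot_db['arm']})")

class ArmController:
    def __init__(self, bus):
        bus.subscribe("arm", lambda cmd: print("ArmController:", cmd))

# Three databases (contents hypothetical).
task_db = {"bolt insertion": ["approach bolt", "align tool", "insert bolt"]}
robot_db = {"arm": "6-DOF manipulator"}

bus = MessageBus()
TaskPlanner(bus, task_db)
MotionPlanner(bus, robot_db)
ArmController(bus)
bus.publish("task", "bolt insertion")
```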
Abstract: A feed-forward, back-propagation Artificial Neural
Network (ANN) model has been used to forecast the occurrences of
wastewater overflows in a combined sewerage reticulation system.
This approach was tested to evaluate its applicability as an alternative
to the common practice of developing a complete conceptual, mathematical
hydrological-hydraulic model of the sewerage system to enable such
forecasts. The ANN approach obviates the need for a priori understanding
and representation of the underlying hydrological-hydraulic phenomena in
mathematical terms, but enables learning the characteristics of a sewer
overflow from historical data.
The performance of the standard feed-forward, back-propagation
of error algorithm was enhanced by a modified data normalizing
technique that enabled the ANN model to extrapolate into territory
unseen in the training data. The algorithm and the
data normalizing method are presented along with the ANN model
output results that indicate a good accuracy in the forecasted sewer
overflow rates. However, it was revealed that accurate
forecasting of the overflow rates is heavily dependent on the
availability of real-time flow monitoring at the overflow structure
to provide antecedent flow rate data. The ability of the ANN to
forecast the overflow rates without the antecedent flow rates (as is
the case with traditional conceptual reticulation models) was found to
be quite poor.
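A widely used form of such a modified normalization maps the training range onto an inner sub-interval such as [0.1, 0.9] rather than the full [0, 1], leaving headroom for the network to represent values beyond the extremes seen in training. A hedged sketch of that idea (the interval and the flow data are illustrative assumptions; the paper's exact technique may differ):

```python
import numpy as np

def fit_norm(x, lo=0.1, hi=0.9):
    """Record scale parameters that map the training range of x onto
    [lo, hi] instead of [0, 1], leaving headroom for extrapolation."""
    return x.min(), x.max(), lo, hi

def normalize(x, params):
    xmin, xmax, lo, hi = params
    return lo + (hi - lo) * (x - xmin) / (xmax - xmin)

def denormalize(y, params):
    xmin, xmax, lo, hi = params
    return xmin + (y - lo) * (xmax - xmin) / (hi - lo)

# Hypothetical training flow rates (L/s). A later observation of 105 L/s
# lies outside the training range but still maps to 0.95, inside [0, 1],
# so a sigmoid-output network can still represent it.
train = np.array([20., 45., 60., 85., 100.])
params = fit_norm(train)
print(normalize(np.array([105.]), params))  # [0.95]
```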
Abstract: The advances in wireless communication have opened unlimited horizons, but there are some challenges as well. The natural air medium between the MS (Mobile Station) and the BS (Base Station) is beyond human control and produces channel impairments. The impact of natural conditions on the air medium is the biggest issue in wireless communication. Natural conditions make reliability more cumbersome; here reliability refers to the efficient recovery of lost or erroneous data. The SR-ARQ (Selective Repeat-Automatic Repeat Request) protocol, with its standard reliability features, is a de facto standard for any wireless technology at the air interface. Our focus in this research is on the reliability of the control, or feedback, signal of the SR-ARQ protocol. The proposed mechanism, RSR-ARQ (Reliable SR-ARQ), is an enhancement of the SR-ARQ protocol that ensures the reliability of the control signals through a channel-impairment-sensitive mechanism. We have modeled the system under a two-state discrete-time Markov channel. The simulation results demonstrate improved recovery of lost or erroneous data, which increases the overall system performance.
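The two-state discrete-time Markov channel used in the evaluation is commonly known as a Gilbert-Elliott model: a good and a bad state with different loss probabilities and fixed transition probabilities between them. A minimal simulation sketch (all parameter values are illustrative assumptions, not those used in the paper):

```python
import random

def gilbert_elliott(n_slots, p_gb=0.05, p_bg=0.3,
                    loss_good=0.01, loss_bad=0.4, seed=1):
    """Simulate per-slot packet loss on a two-state discrete-time Markov
    channel: GOOD <-> BAD with transition probabilities p_gb and p_bg,
    and per-state loss probabilities loss_good and loss_bad."""
    rng = random.Random(seed)
    state = "GOOD"
    losses = []
    for _ in range(n_slots):
        loss_p = loss_good if state == "GOOD" else loss_bad
        losses.append(rng.random() < loss_p)
        # State transition at the end of each slot.
        if state == "GOOD" and rng.random() < p_gb:
            state = "BAD"
        elif state == "BAD" and rng.random() < p_bg:
            state = "GOOD"
    return losses

losses = gilbert_elliott(10000)
print("overall loss rate:", sum(losses) / len(losses))
```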
Abstract: Medical imaging takes advantage of digital
technology in imaging and teleradiology. In teleradiology systems,
large amounts of data are acquired, stored and transmitted. A major
technology that may help to solve the problems associated with the
massive data storage and data transfer capacity is data compression
and decompression. There are many methods of image compression
available. They are classified as lossless and lossy compression
methods. In lossy compression methods the decompressed image
contains some distortion. Fractal image compression (FIC) is a lossy
compression method. In fractal image compression an image is
coded as a set of contractive transformations in a complete metric
space. The set of contractive transformations is guaranteed to
produce an approximation to the original image. In this paper FIC is
achieved by a partitioned iterated function system (PIFS) using
quadtree partitioning. PIFS is applied to different kinds of images:
ultrasound, CT scan, angiogram, X-ray and mammogram. For each
modality approximately twenty images are considered, and the average
values of the compression ratio and PSNR are computed. In this method
of fractal encoding, one parameter, the tolerance factor Tmax, is
varied from 1 to 10, keeping the
other standard parameters constant. For all modalities of images the
compression ratio and Peak Signal to Noise Ratio (PSNR) are
computed and studied. The quality of the decompressed image is
assessed by its PSNR value. From the results it is observed that the
compression ratio increases with the tolerance factor and that
mammograms have the highest compression ratio. Owing to the
properties of fractal compression, the quality of the image is not
degraded up to an optimum tolerance factor value of Tmax = 8.
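The role of the tolerance factor can be illustrated with the quadtree partitioning step: a block is accepted when it can be approximated within the tolerance, otherwise it is split into four quadrants, so larger tolerances yield fewer blocks and higher compression at the cost of PSNR. A toy sketch of that trade-off (the range-based split criterion and mean-value reconstruction are drastic simplifications of a real PIFS encoder):

```python
import numpy as np

def quadtree(block, y, x, tol, min_size=2, leaves=None):
    """Recursively split a block into quadrants until its intensity
    variation is within the tolerance (a simplified stand-in for the
    domain-block matching error of a full PIFS encoder)."""
    if leaves is None:
        leaves = []
    h, w = block.shape
    if h <= min_size or block.max() - block.min() <= tol:
        leaves.append((y, x, h, w))
        return leaves
    h2, w2 = h // 2, w // 2
    quadtree(block[:h2, :w2], y,      x,      tol, min_size, leaves)
    quadtree(block[:h2, w2:], y,      x + w2, tol, min_size, leaves)
    quadtree(block[h2:, :w2], y + h2, x,      tol, min_size, leaves)
    quadtree(block[h2:, w2:], y + h2, x + w2, tol, min_size, leaves)
    return leaves

def reconstruct(img, leaves):
    """Replace each leaf block by its mean intensity."""
    out = np.empty_like(img)
    for y, x, h, w in leaves:
        out[y:y + h, x:x + w] = img[y:y + h, x:x + w].mean()
    return out

def psnr(original, decompressed, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB."""
    mse = np.mean((original - decompressed) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

# Toy 16x16 "image": a smooth horizontal gradient, so larger tolerances
# yield fewer (larger) blocks, i.e. higher compression and lower PSNR.
img = np.tile(np.arange(16), (16, 1)).astype(float)
for tmax in (1, 4, 8):
    leaves = quadtree(img, 0, 0, tmax)
    print(f"Tmax={tmax}: {len(leaves)} blocks, "
          f"PSNR={psnr(img, reconstruct(img, leaves)):.1f} dB")
```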
Abstract: In this paper, we use a nonlinear system identification method to predict and detect process faults in a cement rotary kiln. After selecting proper inputs and outputs, an input-output model is identified for the plant. To identify the various operating points of the kiln, a Locally Linear Neuro-Fuzzy (LLNF) model is used. This model is trained by the LOLIMOT algorithm, an incremental tree-structured algorithm. Using this method, we obtained three distinct models: one for the normal condition of the kiln, with a 15-minute prediction horizon, and two for the faulty situations in the kiln, with 7-minute prediction horizons. Finally, we detect these faults in validation data. Data collected from the White Saveh Cement Company are used in this study.
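An LLNF model of the kind LOLIMOT identifies computes its output as a sum of local linear models, each weighted by a normalized Gaussian validity function over the operating space. A minimal sketch of that prediction equation (two hypothetical local models on a scalar input; the paper's kiln models are multivariable):

```python
import numpy as np

def llnf_predict(u, centers, sigmas, thetas):
    """Locally Linear Neuro-Fuzzy output:
    y(u) = sum_i phi_i(u) * (w_i0 + w_i1 * u),
    where phi_i are normalized Gaussian validity functions."""
    u = np.asarray(u, dtype=float)
    # Gaussian membership of each local model at each input point.
    mu = np.exp(-0.5 * ((u[None, :] - centers[:, None])
                        / sigmas[:, None]) ** 2)
    phi = mu / mu.sum(axis=0)                  # normalize over the models
    local = thetas[:, 0:1] + thetas[:, 1:2] * u[None, :]  # local outputs
    return (phi * local).sum(axis=0)

# Two hypothetical local models covering two operating points.
centers = np.array([0.0, 5.0])
sigmas = np.array([2.0, 2.0])
thetas = np.array([[1.0, 0.5],    # y ~ 1 + 0.5u near u = 0
                   [8.0, -0.3]])  # y ~ 8 - 0.3u near u = 5
print(llnf_predict([0.0, 2.5, 5.0], centers, sigmas, thetas))
```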
Abstract: In this paper, the spatial variability of some chemical and physical soil properties was investigated in the mountain rangelands of Nesho, Mazandaran province, Iran. 110 soil samples from 0-30 cm depth were taken systematically on a 30 m × 30 m grid in regions with different vegetation cover and transported to the laboratory. Soil chemical and physical parameters, including acidity (pH), electrical conductivity, CaCO3, bulk density, particle density, total phosphorus, total nitrogen, available potassium, organic matter, saturation moisture, soil texture (percentages of sand, silt and clay), sodium, calcium and magnesium, were then measured in the laboratory. After data normalization, statistical analysis was performed to describe the soil properties, geostatistical analysis was performed to identify the spatial correlation between these properties, and maps of the spatial distribution of the soil properties were prepared using the kriging method. Results indicated that, in the study area, saturation moisture and sand percentage had the highest and lowest spatial correlation, respectively.
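The geostatistical analysis that underlies kriging starts from the empirical semivariogram, γ(h) = (1/2N(h)) Σ [z(xᵢ) − z(xᵢ+h)]², which quantifies how dissimilarity between samples grows with separation distance. A minimal computation sketch on hypothetical sample points (not the paper's data):

```python
import numpy as np

def empirical_semivariogram(coords, values, bins):
    """gamma(h): mean of 0.5*(z_i - z_j)^2 over point pairs whose
    separation distance falls into each lag bin."""
    n = len(values)
    dists, semivars = [], []
    for i in range(n):
        for j in range(i + 1, n):
            dists.append(np.linalg.norm(coords[i] - coords[j]))
            semivars.append(0.5 * (values[i] - values[j]) ** 2)
    dists, semivars = np.array(dists), np.array(semivars)
    gamma = []
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (dists >= lo) & (dists < hi)
        gamma.append(semivars[mask].mean() if mask.any() else np.nan)
    return np.array(gamma)

# Hypothetical sample locations on a 30 m grid with a spatially trending
# soil property plus noise, mimicking the sampling design in the paper.
rng = np.random.default_rng(0)
coords = np.array([(x, y) for x in range(0, 150, 30)
                          for y in range(0, 150, 30)], dtype=float)
values = coords[:, 0] * 0.01 + rng.normal(0, 0.2, len(coords))
print(empirical_semivariogram(coords, values,
                              bins=np.array([0, 35, 65, 95])))
```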
Abstract: Knowledge Discovery in Databases (KDD) is the process of extracting previously unknown, hidden and interesting patterns from a huge amount of data stored in databases. Data mining is a stage of the KDD process that aims at selecting and applying a particular data mining algorithm to extract interesting and useful knowledge. It is highly expected that data mining methods will find interesting patterns in databases according to some measures. It is of vital importance to define good measures of interestingness that would allow the system to discover only the useful patterns. Measures of interestingness are divided into objective and subjective measures. Objective measures are those that depend only on the structure of a pattern and can be quantified using statistical methods, while subjective measures depend on the subjectivity and understandability of the user who examines the patterns. These subjective measures are further divided into actionable, unexpected and novel. A key issue facing the data mining community is how to take actions on the basis of discovered knowledge. For a pattern to be actionable, the user's subjectivity is captured by providing his/her background knowledge about the domain. Here, we consider the actionability of the discovered knowledge as a measure of interestingness and raise important issues which need to be addressed to discover actionable knowledge.
Abstract: Organizational innovation favors technological
innovation, but does it also influence technological innovation
persistence? This article investigates empirically the pattern of
technological innovation persistence and tests the potential impact of
organizational innovation using firm-level data from three waves of
the French Community Innovation Surveys. Evidence shows a
positive effect of organizational innovation on technological
innovation persistence, according to various measures of
organizational innovation. Moreover, this impact is more significant
for complex innovators (i.e., those who innovate in both products and
processes). These results highlight the complexity of managing
organizational practices with regard to the firm's technological
innovation. They also add to our understanding of the drivers of
innovation persistence, through a focus on an often-forgotten
dimension of innovation in a broader sense.
Abstract: The groundlessness of applying probabilistic-statistical methods is especially apparent at the early stages of diagnosing the technical condition of an aviation gas turbine engine (GTE), when the available information is fuzzy, limited and uncertain; at these diagnosing stages, the new soft computing technology, using fuzzy logic and neural network methods, can be applied effectively. Multiple linear and nonlinear models (regression equations), derived on the basis of statistical fuzzy data, are trained with high accuracy. When sufficient information is available, it is proposed to use a recurrent algorithm for identifying the aviation GTE technical condition from measurements of the input and output parameters of the multiple linear and nonlinear generalized models in the presence of measurement noise (a new recursive least squares method (LSM)). As an application of the given technique, the technical condition of an operating D30KU-154 aviation engine was estimated at an altitude of H = 10600 m.
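The recursive least squares identification referred to here updates the parameter estimate after each new measurement instead of re-solving the whole regression. A standard-form sketch (the two-parameter model and noise level are illustrative; the paper's new recursive LSM may differ in detail):

```python
import numpy as np

def rls_update(theta, P, phi, y, lam=1.0):
    """One recursive least squares step:
    K = P*phi / (lam + phi'.P.phi)
    theta <- theta + K*(y - phi'.theta)
    P <- (P - K*phi'.P) / lam  (lam = forgetting factor)."""
    Pphi = P @ phi
    K = Pphi / (lam + phi @ Pphi)
    theta = theta + K * (y - phi @ theta)
    P = (P - np.outer(K, phi @ P)) / lam
    return theta, P

# Identify y = 2.0*u1 - 1.0*u2 from noisy measurements (true parameters
# and noise level are hypothetical stand-ins for the engine model).
rng = np.random.default_rng(2)
theta = np.zeros(2)
P = np.eye(2) * 1000.0           # large initial covariance
for _ in range(200):
    phi = rng.normal(size=2)     # regressor: input/output measurements
    y = 2.0 * phi[0] - 1.0 * phi[1] + rng.normal(0, 0.1)
    theta, P = rls_update(theta, P, phi, y)
print("estimated parameters:", np.round(theta, 3))  # ~ [2.0, -1.0]
```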
Abstract: Data migration is currently a highly topical area. Current tools for data migration between relational databases have several disadvantages, which are presented in this paper. We propose a methodology for migrating database tables and their data between various types of relational database management systems (RDBMS). The proposed methodology includes an expert system whose knowledge base is composed of IF-THEN rules and which, based on the input data, suggests appropriate data types for the columns of the database tables. The proposed tool also offers the possibility of optimizing the data types in the target RDBMS database tables based on the processed data of the source RDBMS database tables. The proposed expert system is demonstrated on the migration of a selected database from the source RDBMS to the target RDBMS.
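The knowledge base of IF-THEN rules can be pictured as a mapping from source-column metadata to target data types. A minimal illustrative sketch (the rules and dialect names are hypothetical examples, not the tool's actual knowledge base):

```python
# Minimal rule-based sketch of data-type mapping between RDBMS dialects.
# Each rule: IF the source column matches the condition THEN use the type.
# The rules and dialects shown are hypothetical illustrations.
RULES = [
    # (condition on the source column, target PostgreSQL type template)
    (lambda c: c["type"] == "NUMBER" and c.get("scale", 0) == 0
               and c.get("precision", 38) <= 9, "INTEGER"),
    (lambda c: c["type"] == "NUMBER", "NUMERIC({precision},{scale})"),
    (lambda c: c["type"] == "VARCHAR2", "VARCHAR({length})"),
    (lambda c: c["type"] == "DATE", "TIMESTAMP"),
]

def suggest_type(column):
    """Fire the first matching IF-THEN rule and fill in its parameters."""
    for condition, target in RULES:
        if condition(column):
            return target.format(**column)
    return "TEXT"  # fallback when no rule matches

# Example: migrating Oracle-style column definitions to PostgreSQL.
columns = [
    {"name": "id",    "type": "NUMBER",   "precision": 9,  "scale": 0},
    {"name": "price", "type": "NUMBER",   "precision": 10, "scale": 2},
    {"name": "label", "type": "VARCHAR2", "length": 80},
]
for col in columns:
    print(col["name"], "->", suggest_type(col))
```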
Abstract: Since the majority of faults are found in a few of a system's modules, there is a need to investigate the modules that are affected most severely compared to the others, and proper maintenance needs to be performed on time, especially for critical applications. In this paper, we have applied different predictor models to NASA's public domain defect dataset, coded in the Perl programming language. Different machine learning algorithms belonging to different learner categories of the WEKA project, including a Mamdani-based fuzzy inference system and a neuro-fuzzy based system, have been evaluated for modeling maintenance severity, or the impact of fault severity. The results are recorded in terms of accuracy, Mean Absolute Error (MAE) and Root Mean Squared Error (RMSE). The results show that the neuro-fuzzy based model provides relatively better prediction accuracy than the other models and can hence be used for predicting the maintenance severity of software.
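The two error measures used to record the results have standard definitions, shown here for reference on hypothetical values:

```python
import numpy as np

def mae(actual, predicted):
    """Mean Absolute Error: mean(|a_i - p_i|)."""
    return np.mean(np.abs(actual - predicted))

def rmse(actual, predicted):
    """Root Mean Squared Error: sqrt(mean((a_i - p_i)^2))."""
    return np.sqrt(np.mean((actual - predicted) ** 2))

# Hypothetical fault-severity labels vs. model predictions.
actual = np.array([3, 1, 4, 2, 5], dtype=float)
predicted = np.array([2.5, 1.2, 3.6, 2.4, 4.4])
print(f"MAE = {mae(actual, predicted):.3f}, "
      f"RMSE = {rmse(actual, predicted):.3f}")
```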
Abstract: Problem Statement: Rapid technological developments of the 21st century have advanced our daily lives in various ways. Particularly in education, students frequently utilize technological resources to aid their homework and to access information, to listen to the radio or watch television (26.9%) and to use e-mail (34.2%) [26]. Not surprisingly, the increase in the use of technologies also resulted in an increase in the use of e-mail, instant messaging, chat rooms, mobile phones, mobile phone cameras and web sites by adolescents to bully peers. As cyber bullying occurs in cyber space, lesser access to technologies would mean lesser cyber-harm. Therefore, the frequency of technology use is a significant predictor of cyber bullying and cyber victimization. Cyber bullies try to harm the victim using various media. These tools include sending derogatory texts via mobile phones, sending threatening e-mails and forwarding confidential e-mails to everyone on the contacts list. Another way of cyber bullying is to set up a humiliating website and invite others to post comments. In other words, cyber bullies use e-mail, chat rooms, instant messaging, pagers, mobile texts and online voting tools to humiliate and frighten others and to create a sense of helplessness. No matter what type of bullying it is, it negatively affects its victims. Children who bully exhibit more emotional inhibition and attribute more negative self-statements to themselves compared to non-bullies. Students whose families are not sympathetic and who receive lower emotional support are more prone to bully their peers. Bullies have authoritarian families and do not get along well with them. The family is the place where children's physical, social and psychological needs are satisfied and where their personalities develop. As the use of the internet became prevalent, so did parents' restrictions on their children's internet use. However, parents are unaware of the real harm. Studies that explain the relationship between parental attitudes and cyber bullying are scarce in the literature. Thus, this study aims to investigate the relationship between cyber bullying and parental attitudes in primary school. Purpose of Study: This study aimed to investigate the relationship between cyber bullying and parental attitudes. A second aim was to determine whether parental attitudes could predict cyber bullying, and if so, which variables could predict it significantly. Methods: The study had a cross-sectional and relational survey model. A demographics information form, questions about cyber bullying and a Parental Attitudes Inventory were administered to a total of 346 students (189 females and 157 males) registered at various primary schools. Data were analysed by multiple regression analysis using the software package SPSS 16.
Abstract: The dynamic or complex modulus test is considered
to be a mechanistically based laboratory test to reliably characterize
the strength and load-resistance of Hot-Mix Asphalt (HMA) mixes
used in the construction of roads. The most common observation is
that the data collected from these tests are often noisy and somewhat
non-sinusoidal. This hampers accurate analysis of the data to obtain
engineering insight. The goal of the work presented in this paper is to
develop and compare automated evolutionary computational
techniques to filter test noise in the collection of data for the HMA
complex modulus test. The results showed that the Covariance
Matrix Adaptation-Evolutionary Strategy (CMA-ES) approach is
computationally efficient for filtering data obtained from the HMA
complex modulus test.
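Filtering the noisy, somewhat non-sinusoidal response can be framed as fitting an ideal sinusoid to the raw signal by minimizing the squared error with CMA-ES. A hedged sketch using the third-party cma package (the synthetic signal, the known-frequency assumption and the objective are our illustrative choices, not the authors' setup):

```python
import numpy as np
import cma  # pip install cma

# Hypothetical noisy complex-modulus response: a sinusoid plus noise,
# standing in for the raw HMA test data described in the paper.
rng = np.random.default_rng(3)
t = np.linspace(0.0, 1.0, 200)
raw = (1.5 * np.sin(2 * np.pi * 10 * t + 0.4) + 0.1
       + rng.normal(0, 0.3, t.size))

def sse(params):
    """Squared error between the raw signal and a candidate sinusoid
    parameterized by (amplitude, phase, offset); the loading frequency
    is assumed known here for simplicity."""
    amp, phase, offset = params
    model = amp * np.sin(2 * np.pi * 10 * t + phase) + offset
    return float(np.sum((raw - model) ** 2))

# CMA-ES search from a rough initial guess with step size 0.5.
es = cma.CMAEvolutionStrategy([1.0, 0.0, 0.0], 0.5)
es.optimize(sse)
amp, phase, offset = es.result.xbest
print(f"fitted: amp={amp:.2f}, phase={phase:.2f}, offset={offset:.2f}")
```

The fitted sinusoid then serves as the filtered signal from which modulus and phase angle can be read off without the measurement noise.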