Abstract: Knowledge is indispensable, but voluminous knowledge becomes a bottleneck for efficient processing. A great challenge for data mining is the generation of a large number of potential rules as a result of the mining process; in fact, the result size is sometimes comparable to that of the original data. Traditional pruning measures such as support do not sufficiently reduce the huge rule space. Moreover, many practical applications are characterized by continual change of data and knowledge, making the knowledge more voluminous with each change. The most predominant representation of discovered knowledge is the standard Production Rule (PR) of the form If P Then D. Michalski and Winston proposed Censored Production Rules (CPRs) as an extension of production rules that exhibit variable precision and support an efficient mechanism for handling exceptions. A CPR is an augmented production rule of the form If P Then D Unless C, where C (the censor) is an exception to the rule. Such rules are employed in situations where the conditional statement If P Then D holds frequently and the assertion C holds rarely. With a rule of this type, we are free to ignore the exception condition when the resources needed to establish its presence are tight or when there is simply no information available as to whether it holds. Thus the If P Then D part of a CPR expresses the important information, while the Unless C part acts only as a switch that changes the polarity of D to ~D. In this paper, a scheme based on a Dempster-Shafer Theory (DST) interpretation of a CPR is suggested for discovering CPRs from already discovered flat PRs. Discovering CPRs from flat rules results in a considerable reduction of the discovered rule set. The proposed scheme incrementally incorporates new knowledge and also reduces the size of the knowledge base considerably with each episode. Examples are given to demonstrate the behaviour of the proposed scheme. The suggested cumulative learning scheme would be useful in mining data streams.
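To make the variable-precision semantics concrete, here is a minimal, hypothetical sketch (the rule encoding and function names are illustrative, not from the paper) of evaluating a CPR of the form If P Then D Unless C: when resources permit, the censor is checked and flips the conclusion; otherwise the rule fires as a plain production rule.

```python
# Minimal sketch of Censored Production Rule (CPR) evaluation:
#   If P Then D Unless C
# The encoding and names are illustrative, not the paper's scheme.

def eval_cpr(p_holds, censor_check, check_censor=True):
    """Evaluate a CPR given the truth of premise P.

    p_holds:      bool, whether premise P holds
    censor_check: callable returning True/False (truth of censor C)
    check_censor: False when resources are tight, so the censor is skipped
    """
    if not p_holds:
        return None                    # rule does not fire
    if check_censor and censor_check():
        return False                   # Unless C: polarity of D flips to ~D
    return True                        # conclude D (possibly ignoring C)

# Example: "If bird Then flies Unless penguin"
is_bird = True
print(eval_cpr(is_bird, lambda: True))          # False: censor holds, ~D
print(eval_cpr(is_bird, lambda: True, False))   # True: censor skipped, D
```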
Abstract: A curved liquid jet has many applications in industrial
and engineering processes, such as the prilling process for
generating small spherical pellets (of fertilizer or magnesium). The
liquids used are usually molten and contain small quantities of
polymers, and can therefore be modelled as non-Newtonian liquids. In
this paper, we model the viscoelastic liquid jet using the Oldroyd-B
model. An asymptotic analysis is used to simplify the governing
equations. Furthermore, the trajectory is determined and a linear
temporal stability analysis is carried out in the presence of gravity
and rotation.
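For reference, the Oldroyd-B constitutive relation in its standard form (conventional notation, which may differ from the paper's) relates the extra-stress tensor to the rate-of-strain tensor through a relaxation time and a retardation time:

```latex
% Standard Oldroyd-B constitutive equation (conventional notation, not
% necessarily the paper's):
%   \mathbf{T} : extra-stress tensor,  \mathbf{D} : rate-of-strain tensor,
%   \mu : total viscosity, \lambda_1 : relaxation time,
%   \lambda_2 : retardation time,
%   \overset{\nabla}{(\cdot)} : upper-convected derivative
\[
  \mathbf{T} + \lambda_1 \overset{\nabla}{\mathbf{T}}
    = 2\mu \left( \mathbf{D} + \lambda_2 \overset{\nabla}{\mathbf{D}} \right)
\]
```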
Abstract: This study investigates the empirical relationships
between risk preference, internet preference, and internet knowledge,
known collectively as user characteristics, together with perceived
risk, and their effect on customers' internet purchase intention. To
test the relationships between the variables of the model, 174
questionnaires were collected from students with previous online
experience. For data analysis, confirmatory factor analysis (CFA) and
structural equation modelling (SEM) were used.
Test results show that perceived risk affects internet purchase
intention: an increase or decrease in perceived risk influences
purchase intention when the customer shops online. Other factors,
such as internet preference, knowledge of the internet, and risk
preference, also affect internet purchase intention.
Abstract: This method reduces power consumption (expenditure) in networks on chip (NoC). It applies data coding to data transfers in order to reduce power, and uses data compression to reduce data size. Power estimation in the NoC is carried out inside the NoC, based on established models and the transition activity at the input ports. The goal of the simulation is to assess the power cost of encoding, decoding, and compression in Baseline networks, and the reduction of switching activity in this type of network. Keywords: Networks on chip, Compression, Encoding, Baseline networks, Banyan networks.
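The abstract does not name its coding scheme; purely as an illustration of the kind of data coding used to cut switching power on interconnects, here is a minimal sketch of classic bus-invert coding (an assumption for illustration, not necessarily the paper's method): a word is sent inverted, with an extra invert line, whenever that reduces the number of bit transitions on the bus.

```python
# Illustrative bus-invert coding (a classic low-power coding scheme; the
# paper's actual coding method is not specified in the abstract). The
# receiver restores the word by XOR-ing with the invert flag.

WIDTH = 8  # bus width in bits (hypothetical)

def hamming(a, b):
    """Number of differing bits between two words."""
    return bin(a ^ b).count("1")

def bus_invert(prev, data, width=WIDTH):
    """Return (word_on_bus, invert_flag) minimizing bit transitions."""
    mask = (1 << width) - 1
    if hamming(prev, data) > width // 2:
        return data ^ mask, 1          # send inverted word, raise invert line
    return data, 0                     # send word as-is

prev = 0b00000000
for word in (0b11111110, 0b11111111, 0b00000001):
    on_bus, inv = bus_invert(prev, word)
    print(f"send {on_bus:08b} invert={inv} transitions={hamming(prev, on_bus)}")
    prev = on_bus
```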
Abstract: In this paper we introduce a new data-oriented model of
the uniform random variable that is well matched with computing systems. Owing to this conformity with the structure of current computers, the model can be used efficiently in statistical inference.
Abstract: Message Passing Interface (MPI) is widely used for parallel
and distributed computing. MPICH and LAM are popular open-source
MPI implementations available to the parallel computing community;
there are also commercial MPIs that perform better than MPICH.
In this paper, we discuss a commercial Message Passing Interface,
C-MPI (C-DAC Message Passing Interface). C-MPI is an MPI
implementation optimized for CLUMPS. It is found to be faster and
more robust than MPICH. We have compared the performance of C-MPI
and MPICH on a Gigabit Ethernet network.
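As context for how such comparisons are typically made, here is a minimal ping-pong latency microbenchmark (a generic sketch using mpi4py, not the benchmark suite used in the paper); run with two ranks, it times round trips of small messages.

```python
# Generic MPI ping-pong latency microbenchmark (illustrative; not the
# paper's benchmark). Run with: mpiexec -n 2 python pingpong.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
reps = 1000
buf = bytearray(64)            # small message; vary size to probe bandwidth

comm.Barrier()
t0 = MPI.Wtime()
for _ in range(reps):
    if rank == 0:
        comm.Send(buf, dest=1, tag=0)
        comm.Recv(buf, source=1, tag=0)
    else:
        comm.Recv(buf, source=0, tag=0)
        comm.Send(buf, dest=0, tag=0)
t1 = MPI.Wtime()

if rank == 0:
    # one-way latency = half the average round-trip time
    print(f"one-way latency: {(t1 - t0) / (2 * reps) * 1e6:.1f} us")
```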
Abstract: Congestion control is one of the fundamental issues in computer networks. Without proper congestion control mechanisms there is the possibility of inefficient utilization of resources, ultimately leading to network collapse. Hence congestion control is an effort to adapt the performance of a network to changes in the traffic load without adversely affecting users' perceived utilities. AIMD (Additive Increase Multiplicative Decrease) is the best algorithm among the set of linear algorithms because it achieves both good efficiency and good fairness. Our control model is based on the assumptions of the original AIMD algorithm; we show that both the efficiency and the fairness of AIMD can be improved. We call our approach New AIMD. We present experimental results with TCP that match the expectations of our theoretical analysis.
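For reference, the classic AIMD window update (the baseline the paper modifies, not its New AIMD; the alpha/beta values below are the common TCP choices, assumed for illustration) increases the congestion window by a fixed increment per round trip and halves it on loss:

```python
# Classic AIMD congestion-window update (the baseline, not the paper's
# "New AIMD"; alpha/beta are the common TCP choices).
ALPHA = 1.0   # additive increase per round-trip time
BETA = 0.5    # multiplicative decrease factor on congestion

def aimd_update(cwnd, loss_detected):
    """Return the new congestion window after one round trip."""
    if loss_detected:
        return max(1.0, cwnd * BETA)   # multiplicative decrease
    return cwnd + ALPHA                # additive increase

cwnd = 1.0
for rtt, loss in enumerate([False] * 5 + [True] + [False] * 3):
    cwnd = aimd_update(cwnd, loss)
    print(f"RTT {rtt}: cwnd = {cwnd}")
```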
Abstract: A supply chain consists of all stages and functions
involved, directly or indirectly, in fulfilling a customer demand.
In the two-stage transportation supply chain problem, transportation
costs form a significant proportion of final product costs. For
successful decision making in a two-stage supply chain, it is often
crucial to account explicitly for non-linear transportation costs.
In this paper, deterministic demand and a finite supply of products
are considered. The optimal distribution level and the routing
structure from the manufacturing plants to the distribution centres
and on to the end customers are determined using the developed
mathematical model, which is solved by the proposed particle swarm
optimization based genetic algorithm. A numerical analysis of a case
study is carried out to validate the model.
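The hybrid algorithm itself is not detailed in the abstract; as background, here is a minimal sketch of the standard PSO velocity and position update that such hybrids build on (the coefficients are conventional values, assumed for illustration, and the cost function is a toy stand-in):

```python
# Standard PSO update step (background only; the paper's PSO-based genetic
# algorithm hybrid is not specified in the abstract). Minimizes a toy cost.
import random

W, C1, C2 = 0.7, 1.5, 1.5   # conventional inertia/cognitive/social weights

def pso_step(x, v, pbest, gbest):
    """One velocity-and-position update for a single 1-D particle."""
    r1, r2 = random.random(), random.random()
    v = W * v + C1 * r1 * (pbest - x) + C2 * r2 * (gbest - x)
    return x + v, v

cost = lambda x: (x - 3.0) ** 2     # toy stand-in for a transportation cost
xs = [random.uniform(-10, 10) for _ in range(5)]
vs = [0.0] * 5
pbests = xs[:]
for _ in range(50):
    gbest = min(pbests, key=cost)
    for i in range(5):
        xs[i], vs[i] = pso_step(xs[i], vs[i], pbests[i], gbest)
        if cost(xs[i]) < cost(pbests[i]):
            pbests[i] = xs[i]
print(f"best x = {min(pbests, key=cost):.3f}")   # should approach 3.0
```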
Abstract: Analysis of blood vessel mechanics in normal and
diseased conditions is essential for disease research, medical device
design and treatment planning. In this work, 3D finite element
models of normal vessel and atherosclerotic vessel with 50% plaque
deposition were developed. These models were meshed using a finite
number of tetrahedral elements and simulated using actual blood
pressure signals. Based on the transient analysis performed on the
models, parameters such as total displacement, strain energy density
and entropy per unit volume were obtained. Further, these parameters
were used to
develop artificial neural network models for analyzing normal and
atherosclerotic blood vessels. In this paper, the objectives of the
study, methodology and significant observations are presented.
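As an illustration of the final step (the network architecture is assumed, and the feature values below are random placeholders, not study data), a small feed-forward classifier could map the three simulated parameters to a normal/atherosclerotic label:

```python
# Illustrative sketch: a small neural network classifying vessels from the
# three simulated parameters (total displacement, strain energy density,
# entropy per unit volume). Features are random placeholders, NOT study
# data, and the architecture is an assumption, not the paper's.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
# placeholder feature rows: [displacement, strain energy density, entropy]
X_normal = rng.normal([0.2, 1.0, 0.5], 0.05, size=(50, 3))
X_athero = rng.normal([0.4, 1.8, 0.9], 0.05, size=(50, 3))
X = np.vstack([X_normal, X_athero])
y = np.array([0] * 50 + [1] * 50)    # 0 = normal, 1 = atherosclerotic

clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```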
Abstract: There are two common methodologies for verifying
signatures: the functional approach and the parametric approach. This
paper presents a new approach to dynamic handwritten signature
verification (HSV) using a neural network, with verification by a
conjugate gradient neural network (NN). It is yet another avenue in
the approach to HSV, and it is found to produce excellent results
when compared with other dynamic methods. Experimental results show
that the system is insensitive to the order of base classifiers and
achieves a high verification ratio.
Abstract: This paper presents the convergence analysis
of a prediction based blind equalizer for IIR channels.
Predictor parameters are estimated by using the recursive
least squares algorithm. It is shown that the prediction
error converges almost surely (a.s.) toward a scalar
multiple of the unknown input symbol sequence. It is
also proved that the convergence rate of the parameter
estimation error is of the same order as that in the iterated
logarithm law.
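As background on the estimation step (a generic sketch; the equalizer's exact regressor structure and notation are not given in the abstract), the recursive least squares update for a linear predictor looks like this:

```python
# Generic recursive least squares (RLS) update for a linear predictor
# (background sketch; the paper's predictor structure is not specified
# in the abstract). lam is the forgetting factor.
import numpy as np

def rls_update(theta, P, phi, y, lam=0.99):
    """One RLS step: predict y from regressor phi, update parameters theta."""
    phi = phi.reshape(-1, 1)
    e = y - float(theta.T @ phi)                    # a priori prediction error
    k = P @ phi / (lam + float(phi.T @ P @ phi))    # gain vector
    theta = theta + k * e                           # parameter update
    P = (P - k @ phi.T @ P) / lam                   # covariance update
    return theta, P, e

# demo: identify a 2-tap linear predictor from a noisy AR(2) signal
rng = np.random.default_rng(0)
n = 500
y = np.zeros(n)
for t in range(2, n):
    y[t] = 0.6 * y[t-1] - 0.2 * y[t-2] + 0.1 * rng.standard_normal()

theta, P = np.zeros((2, 1)), 1000 * np.eye(2)
for t in range(2, n):
    theta, P, e = rls_update(theta, P, np.array([y[t-1], y[t-2]]), y[t])
print("estimated taps:", theta.ravel())   # should approach [0.6, -0.2]
```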
Abstract: Many difficulties are faced in the process of learning
computer programming. This paper proposes a system framework
intended to reduce cognitive load in learning programming. The first
section focuses on the process of learning and the shortcomings of
current approaches to learning programming. Finally, the proposed
prototype is presented along with its justification. In the proposed
prototype, a concept map is used as the visualization metaphor.
Concept maps are similar to the mental schemata in long-term memory
and hence can reduce cognitive load well. In addition, other methods,
such as the part-code method, are also proposed in this framework to
reduce cognitive load.
Abstract: Routing plays an important role in determining the
quality of service in wireless networks, and the routing methods
adopted in wireless networks have many drawbacks. This paper reviews
the current routing methods used in wireless networks and proposes
an innovative solution to overcome their problems, aimed at
improving the Quality of Service. The solution differs from others
in that it involves the reuse of part of the virtual circuits. This
improvement in quality of service is especially important for the
propagation of multimedia applications such as video and animations.
Hence there is a dire need for a new solution that improves the
quality of service in ATM wireless networks for multimedia
applications, especially in this era of multimedia-based
applications.
Abstract: Glaucoma diagnosis involves extracting three features
of the fundus image: the optic cup, the optic disc and the
vasculature. Present manual diagnosis is expensive, tedious and time
consuming. A number of studies have been conducted to automate this
process. However, the variability between the diagnostic capability
of an automated system and that of an ophthalmologist has yet to be
established. This paper discusses the efficiency of, and the
variability between, ophthalmologists' opinions and a digital
thresholding technique. The efficiency and variability measures are
based on image quality grading: poor, satisfactory or good. The
images are separated into four channels: gray, red, green and blue.
Three ophthalmologists graded the images based on image quality. The
images are also thresholded using multi-thresholding and graded in
the same manner as by the ophthalmologists. The grades from the
ophthalmologists and from thresholding are then compared. The
results show only small variability between the ophthalmologists'
results and the digital thresholding.
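As an illustration of the channel-separation and multi-thresholding step (a generic sketch; the threshold values and the input image below are assumptions, since the paper's parameters are not given in the abstract):

```python
# Illustrative channel separation and multi-thresholding of a fundus image
# (generic sketch; the thresholds and the input image are assumptions).
import numpy as np

def split_channels(rgb):
    """Return gray, red, green, blue channels of an HxWx3 uint8 image."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    gray = (0.299 * r + 0.587 * g + 0.114 * b).astype(np.uint8)
    return {"gray": gray, "red": r, "green": g, "blue": b}

def multi_threshold(channel, levels=(85, 170)):
    """Quantize a channel into len(levels)+1 bands (threshold levels assumed)."""
    return np.digitize(channel, bins=levels)

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)  # placeholder
for name, ch in split_channels(image).items():
    bands = multi_threshold(ch)
    print(name, "band pixel counts:", np.bincount(bands.ravel(), minlength=3))
```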