Abstract: In this paper we propose a class of second derivative multistep methods for solving some well-known classes of Lane-Emden type equations, which are nonlinear ordinary differential equations on the semi-infinite domain. These methods, which have good stability and accuracy properties, are useful for dealing with stiff ODEs. We demonstrate the superiority of these methods by applying them to some well-known Lane-Emden type equations.
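As a point of reference (not the authors' multistep scheme), a minimal sketch that integrates the standard Lane-Emden equation y'' + (2/x)y' + y^n = 0, y(0) = 1, y'(0) = 0, with SciPy's stiff-capable BDF integrator; starting just off x = 0 with the series expansion y ≈ 1 - x²/6 is a common workaround for the coordinate singularity.

```python
# Minimal reference sketch: integrate the standard Lane-Emden equation
# y'' + (2/x) y' + y^n = 0, y(0)=1, y'(0)=0 with a stiff-capable solver.
import numpy as np
from scipy.integrate import solve_ivp

n = 1  # polytropic index; n = 1 has the exact solution y = sin(x)/x

def lane_emden(x, u):
    y, dy = u
    return [dy, -2.0 * dy / x - np.sign(y) * np.abs(y) ** n]

eps = 1e-6                                 # start just off the singular origin
u0 = [1.0 - eps ** 2 / 6.0, -eps / 3.0]    # series values near x = 0
sol = solve_ivp(lane_emden, (eps, 10.0), u0, method="BDF", dense_output=True)

x = 3.0
print("numerical:", sol.sol(x)[0], "exact (n=1):", np.sin(x) / x)
```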
Abstract: Segmentation of a color image composed of different kinds of regions can be a hard problem, particularly the computation of exact texture fields and the decision on the optimum number of segmentation areas when the image contains similar and/or non-stationary texture fields. A novel neighborhood-based segmentation approach is proposed. A genetic algorithm is used in the proposed segment-pass optimization process; in this pass, an energy function defined on Markov Random Fields is minimized. In this paper we use an adaptive threshold estimation method for image thresholding in the wavelet domain, based on generalized Gaussian distribution (GGD) modeling of subband coefficients. This method, called NormalShrink, is computationally efficient and adaptive because the parameters required for estimating the threshold depend on the subband data energy used in the pre-stage of segmentation. A quadtree is employed to implement the multiresolution framework, which enables the use of different strategies at different resolution levels and hence accelerates the computation. The experimental results using the proposed segmentation approach are very encouraging.
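For illustration, a minimal sketch of NormalShrink-style subband thresholding, assuming PyWavelets; the threshold formula follows one commonly published form of NormalShrink and may differ in detail from the paper's variant.

```python
# NormalShrink-style soft thresholding of wavelet detail subbands.
import numpy as np
import pywt

def normal_shrink(image, wavelet="db4", levels=2):
    coeffs = pywt.wavedec2(image, wavelet, level=levels)
    # robust noise estimate from the finest diagonal subband (MAD rule)
    sigma_n = np.median(np.abs(coeffs[-1][2])) / 0.6745
    out = [coeffs[0]]                        # keep the approximation untouched
    for detail in coeffs[1:]:
        shrunk = []
        for band in detail:
            beta = np.sqrt(np.log(band.size / levels))
            sigma_y = band.std() + 1e-12     # subband standard deviation
            T = beta * sigma_n ** 2 / sigma_y  # subband-adaptive threshold
            shrunk.append(pywt.threshold(band, T, mode="soft"))
        out.append(tuple(shrunk))
    return pywt.waverec2(out, wavelet)
```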
Abstract: The present work deals with the structural analysis and modeling of turbine blades. A common failure mode for turbomachines is high-cycle fatigue of compressor and turbine blades, due to high dynamic stresses caused by blade vibration and resonance within the operating range of the machinery. In this work, a proper damping system is analyzed to reduce blade vibration. The main focus of the work is the modeling of an under-platform damper to evaluate the dynamics of turbine-blade vibration. The system is analyzed using the bond graph technique. A bond graph is one of the most convenient ways to represent a system with its physical aspects in the foreground; it has the advantage of bringing the multiple energy domains of a system together in a single, unified representation. The bond graph model of the dry friction damper is simulated in the SYMBOLS-shakti® software. In this work, the blades are modeled as Timoshenko beams. Blade vibrations under different working conditions are analyzed numerically.
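As a simplified stand-in for the bond-graph model simulated in SYMBOLS-shakti®, the following sketch integrates a single blade mode with a Coulomb dry-friction (under-platform) damper; all parameter values are illustrative assumptions.

```python
# One blade mode with a Coulomb dry-friction damper under harmonic forcing.
import numpy as np
from scipy.integrate import solve_ivp

m, k, c = 1.0, 1.0e4, 2.0        # modal mass, stiffness, viscous damping
muN = 5.0                        # friction force (mu * normal load), assumed
F0, w = 20.0, 95.0               # excitation amplitude and frequency (rad/s)

def rhs(t, u):
    x, v = u
    friction = muN * np.tanh(v / 1e-3)   # smoothed sign(v) to avoid chatter
    return [v, (F0 * np.cos(w * t) - c * v - k * x - friction) / m]

sol = solve_ivp(rhs, (0.0, 5.0), [0.0, 0.0], max_step=1e-3)
print("late-time vibration amplitude ~", np.abs(sol.y[0][-2000:]).max())
```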
Abstract: This paper proposes a new model to support user
queries on postgraduate research information at Universiti Tenaga
Nasional. The ontology to be developed will contribute towards
shareable and reusable domain knowledge that makes knowledge
assets intelligently accessible to both people and software. This work
adapts a methodology for ontology development based on the
framework proposed by Uschold and King. The concepts and
relations in this domain are represented in a class diagram using the
Protégé software. The ontology will be used to support a menu-driven
query system for assisting students in searching for
information related to postgraduate research at the university.
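As a small illustration of such shareable domain knowledge, the sketch below encodes a few concepts and relations as RDF triples using rdflib; the namespace IRI, classes, properties, and individuals are hypothetical, not the actual UNITEN ontology.

```python
# Hypothetical postgraduate-research concepts encoded as RDF triples.
from rdflib import Graph, Namespace, RDF, RDFS

UNITEN = Namespace("http://example.org/uniten/research#")  # hypothetical IRI
g = Graph()
g.add((UNITEN.Student, RDF.type, RDFS.Class))
g.add((UNITEN.ResearchProject, RDF.type, RDFS.Class))
g.add((UNITEN.worksOn, RDFS.domain, UNITEN.Student))
g.add((UNITEN.worksOn, RDFS.range, UNITEN.ResearchProject))
g.add((UNITEN.alice, RDF.type, UNITEN.Student))
g.add((UNITEN.alice, UNITEN.worksOn, UNITEN.thesis42))

# a menu-style query: which projects does this student work on?
for project in g.objects(UNITEN.alice, UNITEN.worksOn):
    print(project)
```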
Abstract: In this paper, an alternating implicit block method for solving the two-dimensional scalar wave equation is presented. The new method consists of two stages for each time step, implemented in alternating directions, which keeps the computation very simple. To increase the speed of computation, a group of adjacent points is computed simultaneously. It is shown that the presented method increases the maximum time step size and is more accurate than the conventional finite difference time domain (FDTD) method and other existing methods based on natural ordering.
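For context, the conventional explicit FDTD baseline that the proposed implicit block method is compared against can be sketched as follows; grid sizes and the CFL number here are illustrative.

```python
# Explicit FDTD update for the 2-D scalar wave equation u_tt = c^2 (u_xx + u_yy).
import numpy as np

nx = ny = 101
dx = 1.0 / (nx - 1)
c = 1.0
dt = 0.5 * dx / (c * np.sqrt(2.0))   # within the explicit limit c*dt/dx <= 1/sqrt(2)
r2 = (c * dt / dx) ** 2

u_prev = np.zeros((nx, ny))
u = np.zeros((nx, ny))
u[nx // 2, ny // 2] = 1.0            # point disturbance at the center

for _ in range(200):
    lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
           np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u)
    u_next = 2.0 * u - u_prev + r2 * lap
    u_next[0, :] = u_next[-1, :] = u_next[:, 0] = u_next[:, -1] = 0.0  # fixed walls
    u_prev, u = u, u_next

print("max |u| after 200 steps:", np.abs(u).max())
```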
Abstract: Automatic reusability appraisal could be helpful in evaluating the quality of developed or developing reusable software components and in identifying reusable components in existing legacy systems, which can save the cost of developing software from scratch. But the issue of how to identify reusable components from existing systems has remained relatively unexplored. In this paper, we present a two-tier approach that studies the structural attributes of a component as well as its usability or relevancy to a particular domain. Latent semantic analysis is used for the feature vector representation of various software domains. It exploits the fact that FeatureVector codes can be seen as documents containing terms (the identifiers present in the components), so text modeling methods that capture co-occurrence information in low-dimensional spaces can be used. Further, we devised a Neuro-Fuzzy hybrid Inference System, which takes structural metric values as input and calculates the reusability of the software component. A decision tree algorithm is used to decide the initial set of fuzzy rules for the Neuro-Fuzzy system. The results obtained are convincing enough to propose the system for economical identification and retrieval of reusable software components.
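A minimal sketch of the latent-semantic step, assuming scikit-learn: identifier "documents" extracted from components are projected into a low-dimensional concept space where domain-related components cluster. The identifiers are invented for illustration.

```python
# LSA over identifier "documents" extracted from software components.
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

components = [
    "socket_open packet_send packet_recv socket_close",
    "socket_connect packet_send frame_recv socket_close",
    "image_load image_resize image_save buffer_alloc",
]

X = TfidfVectorizer().fit_transform(components)
Z = TruncatedSVD(n_components=2).fit_transform(X)   # the LSA projection
print(cosine_similarity(Z))   # the two networking components project close together
```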
Abstract: Data mining techniques have been used in medical research for many years and have been known to be effective. In order to solve such problems as long waiting times, congestion, and delayed patient care faced by emergency departments, this study concentrates on building a hybrid methodology combining data mining techniques such as association rules and classification trees. The methodology is applied to real-world emergency data collected from a hospital and is evaluated by comparison with other techniques. The methodology is expected to help physicians make a faster and more accurate classification of chest pain diseases.
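A sketch of the classification-tree half of such a hybrid, assuming scikit-learn; all features and labels below are synthetic stand-ins, not the hospital data used in the study.

```python
# A small classification tree for chest-pain triage on synthetic data.
from sklearn.tree import DecisionTreeClassifier, export_text

# columns: age, systolic blood pressure, troponin elevated, pain duration (min)
X = [[62, 150, 1, 40], [35, 120, 0, 10], [70, 160, 1, 90],
     [28, 115, 0, 5],  [55, 140, 1, 60], [41, 125, 0, 15]]
y = ["cardiac", "non_cardiac", "cardiac", "non_cardiac", "cardiac", "non_cardiac"]

clf = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(clf, feature_names=["age", "sbp", "troponin", "duration"]))
print(clf.predict([[58, 145, 1, 30]]))   # classify a new chest-pain case
```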
Abstract: The number of documents being created increases at an accelerating pace, while most of them remain within already known topics and few introduce new concepts. This fact has started a new era in the information retrieval discipline, with its own special requirements: digging into topics and concepts and discovering subtopics or relations between topics. Until now, IR research has focused on retrieving documents about a general topic or clustering documents under generic subjects. However, these conventional approaches cannot go deep into the content of documents, which makes it difficult for people to reach the right documents they are searching for. So we need new ways of mining document sets, where the critical point is knowing much about the contents of the documents. As a solution, we propose to enhance LSI, one of the proven IR techniques, by supporting its vector space with n-gram forms of words. The positive results we have obtained are shown in two different application areas of the IR domain: querying a document database and clustering the documents in it.
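A minimal sketch, assuming scikit-learn, of LSI over a vector space built from character n-grams rather than whole words, as proposed; the corpus and parameters are illustrative.

```python
# LSI (truncated SVD) over a character n-gram vector space.
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "information retrieval and document clustering",
    "retrieving documents by latent semantic indexing",
    "protein folding in computational biology",
]

# n-grams let morphological variants (retrieval / retrieving) overlap
vec = TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 4))
X = vec.fit_transform(docs)
lsi = TruncatedSVD(n_components=2).fit_transform(X)
print(cosine_similarity(lsi))   # the two IR documents project close together
```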
Abstract: The functioning of a biometric system depends in large part on the performance of the similarity measure function. Frequently a generalized similarity distance measure such as the Euclidean or Mahalanobis distance is applied to the task of matching biometric feature vectors. However, the accuracy of a biometric system can often be greatly improved by designing a customized matching algorithm optimized for a particular biometric application. In this paper we propose a tailored similarity measure function for behavioral biometric systems, based on expert knowledge of the feature-level data in the domain. We compare the performance of the proposed matching algorithm to that of other well-known similarity distance functions and demonstrate its superiority with respect to the chosen domain.
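For comparison, a sketch contrasting the generic distances named above with a simple expert-weighted one, assuming SciPy; the weights are illustrative assumptions, not the paper's tailored similarity function.

```python
# Generic vs. simple expert-weighted distances over feature vectors.
import numpy as np
from scipy.spatial.distance import euclidean, mahalanobis

X = np.random.default_rng(0).normal(size=(200, 4))   # enrolled feature vectors
VI = np.linalg.inv(np.cov(X, rowvar=False))          # inverse covariance matrix

a, b = X[0], X[1]
print("euclidean  :", euclidean(a, b))
print("mahalanobis:", mahalanobis(a, b, VI))

# a simple expert-weighted distance: more reliable features weigh more
w = np.array([0.4, 0.3, 0.2, 0.1])                   # assumed expert weights
print("weighted   :", np.sqrt(np.sum(w * (a - b) ** 2)))
```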
Abstract: The distillation process in the general sense is a relatively simple technique from the standpoint of its principles. When dedicating distillation to water treatment, specifically producing fresh water from sea, ocean, and/or briny waters, it is interesting to note that distillation has no limitations or domains of applicability regarding the nature or type of the feedstock water. This is not the case, however, for other techniques, which are technologically quite complex, necessitate bigger capital investments, and are limited in their usability. In a previous paper we explored some of the effects of temperature on yield. In this paper, we continue building on that knowledge base and focus on the
effects of several additional engineering and design variables on
productivity.
Abstract: Well-developed strategic marketing planning is the essential prerequisite for establishing the right and unique competitive advantage. A typical market, however, is a heterogeneous and decentralized structure with the natural involvement of individual or group subjectivity and irrationality. These features cannot be fully expressed with one-shot rigorous formal models based on, e.g., mathematics, statistics, or empirical formulas. We present an innovative solution, extending the domain of agent-based computational economics towards the concept of hybrid modeling in a service provider and consumer market such as telecommunications. The behavior of the market is described by two classes of agents, consumer and service provider agents, whose internal dynamics are fundamentally different. Customers are rather free multi-state structures, adjusting their behavior and preferences quickly with time and a changing environment. Producers, on the contrary, are traditionally structured companies with comparable internal processes and specific managerial policies; their business momentum is higher and their possibilities for immediate reaction limited. This limitation underlines the importance of proper strategic planning as the main process advising managers in time whether to continue with more or less the same business or to consider the need for future structural changes that would ensure retention of existing customers or acquisition of new ones.
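A toy sketch of the two-class agent idea: fast-adapting consumers repeatedly choose between slowly adjusting providers; all dynamics and numbers are illustrative assumptions, not the calibrated hybrid model.

```python
# Consumers (fast, subjective) choosing between providers (slow, fixed prices).
import random

random.seed(0)
prices = {"ProviderA": 10.0, "ProviderB": 12.0}   # providers move slowly
choices = {name: 0 for name in prices}

for _ in range(1000):                             # consumers react quickly
    sensitivity = random.uniform(0.5, 2.0)        # individual subjectivity
    scores = {p: -sensitivity * c + random.gauss(0.0, 1.0)
              for p, c in prices.items()}
    choices[max(scores, key=scores.get)] += 1

print(choices)   # the cheaper provider retains most, but not all, customers
```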
Abstract: The development of distributed systems has been affected by the need to accommodate an increasing degree of flexibility, adaptability, and autonomy. Mobile Agent technology is emerging as an alternative for building a smart generation of highly distributed systems. In this work, we investigate the performance aspect of agent-based technologies for information retrieval. We present a comparative performance evaluation model of Mobile Agents versus Remote Method Invocation by means of an analytical approach. We demonstrate the effectiveness of mobile agents for dynamic code deployment and remote data processing in reducing total latency while producing minimum network traffic. We argue that exploiting agent-based technologies significantly enhances the performance of distributed systems in the domain of information retrieval.
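A back-of-the-envelope sketch of this kind of analytical comparison (every parameter value below is assumed for illustration, not taken from the paper's model): total latency for n remote interactions via RMI versus shipping a mobile agent once.

```python
# Analytical latency comparison: RMI round-trips vs. one agent migration.
req, rep = 512, 4096        # bytes per RMI request / reply
agent = 20_000              # mobile agent code + state, in bytes
bandwidth = 1e6             # bytes per second
rtt = 0.05                  # network round-trip time, in seconds
n = 100                     # number of remote data interactions

t_rmi = n * (rtt + (req + rep) / bandwidth)   # every call crosses the network
t_ma = 2 * rtt + 2 * agent / bandwidth        # migrate out, compute locally, return
print(f"RMI: {t_rmi:.2f} s   Mobile Agent: {t_ma:.2f} s")
```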
Abstract: The objective of this paper is to introduce a unified optimization framework for research and education. The OPTILIB framework implements different general-purpose algorithms for combinatorial optimization and for minimum search on standard continuous test functions. The strengths of this library are the straightforward integration of new optimization algorithms and problems, as well as the visualization of the optimization process, either for a single method exploring the search space or for several methods in parallel in real time. Furthermore, the usage of several implemented methods is presented on the basis of two use cases, with a particular focus on algorithm visualization. First, it is demonstrated how different methods can be conveniently compared using OPTILIB on the example of different iterative improvement schemes for the TRAVELING SALESMAN PROBLEM. A second study emphasizes how the framework can be used to find global minima in the continuous domain.
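As an example of the kind of iterative improvement scheme such a comparison would cover, here is a minimal 2-opt sketch for a random TSP instance; OPTILIB's own API is not reproduced here.

```python
# 2-opt iterative improvement on a random TSP instance.
import math
import random

random.seed(1)
pts = [(random.random(), random.random()) for _ in range(30)]
dist = lambda i, j: math.dist(pts[i], pts[j])
tour = list(range(len(pts)))

improved = True
while improved:
    improved = False
    for i in range(1, len(tour) - 2):
        for j in range(i + 1, len(tour) - 1):
            a, b, c, d = tour[i - 1], tour[i], tour[j], tour[j + 1]
            # reverse the segment if reconnecting shortens the tour
            if dist(a, c) + dist(b, d) < dist(a, b) + dist(c, d):
                tour[i:j + 1] = reversed(tour[i:j + 1])
                improved = True

length = sum(dist(tour[k], tour[(k + 1) % len(tour)]) for k in range(len(tour)))
print("2-opt tour length:", round(length, 3))
```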
Abstract: Smoke discharge is a main cause of the air pollution problem from industrial plants. The obstacle posed by a building affects the discharge of air pollutants. In this research, a mathematical model of smoke dispersion from two sources, and from one source with a structural obstacle, is considered. The governing equation of the model is an isothermal mass transfer model in a viscous fluid. The finite element method is used to approximate the solutions of the model. Triangular linear elements have been used for discretizing the domain, and time integration has been carried out by a semi-implicit finite difference method. Simulations of smoke dispersion in the cases of one chimney and two chimneys are presented. The maximum calculated smoke concentrations of the two cases are compared. The results can then be used to make decisions for smoke discharge and air pollution control problems in industrial areas.
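For reference, a standard form of the isothermal mass-transfer (advection-diffusion) model for the smoke concentration c in a viscous flow field (u, v) with diffusivity D and chimney source term S is given below; the paper's exact boundary conditions and source terms are not reproduced here.

```latex
\frac{\partial c}{\partial t}
  + u\,\frac{\partial c}{\partial x}
  + v\,\frac{\partial c}{\partial y}
  = D\left(\frac{\partial^{2} c}{\partial x^{2}}
        + \frac{\partial^{2} c}{\partial y^{2}}\right)
  + S(x, y, t)
```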
Abstract: The rapid expansion of the web is causing constant growth of information, leading to several problems such as the increased difficulty of extracting potentially useful knowledge. Web content mining confronts this problem by gathering explicit information from different web sites for access and knowledge discovery. Query interfaces of web databases share common building blocks. After extracting information with a parsing approach, we use a new data mining algorithm to match a large number of database schemas at a time. Using this algorithm increases the speed of information matching; in addition, instead of simple 1:1 matching, it performs complex (m:n) matching between query interfaces. In this paper we present a novel correlation mining algorithm that matches correlated attributes at smaller cost. The algorithm uses the Jaccard measure to distinguish positively and negatively correlated attributes. After that, the system matches the user query with the different query interfaces in a specific domain and finally chooses the query interface nearest to the user query to answer it.
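A minimal sketch of the Jaccard measure applied to attribute co-occurrence across query interfaces; the interfaces are invented for illustration. Attributes that rarely co-occur (synonym candidates) score low, while attributes that frequently appear together (grouping candidates) score high.

```python
# Jaccard over the sets of interfaces in which each attribute occurs.
interfaces = [
    {"title", "author", "isbn"},
    {"title", "author", "publisher"},
    {"title", "first name", "last name"},
    {"first name", "last name", "isbn"},
]

def jaccard(attr_a, attr_b):
    has_a = {i for i, f in enumerate(interfaces) if attr_a in f}
    has_b = {i for i, f in enumerate(interfaces) if attr_b in f}
    return len(has_a & has_b) / len(has_a | has_b)

print("author vs first name:", jaccard("author", "first name"))  # low -> synonyms
print("title  vs author    :", jaccard("title", "author"))       # high -> grouping
```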
Abstract: Mendelian Disease Genes represent a collection of single points of failure for the various systems they constitute. Such genes have been shown, on average, to encode longer proteins than 'non-disease' genes do. Existing models suggest that this results from the increased likelihood of longer genes undergoing mutations. Here, we show that in saturated mutagenesis experiments performed on model organisms, where the likelihood of each gene mutating is one, a similar relationship between length and the probability of a gene being lethal was observed. We thus suggest an extended model demonstrating that the likelihood of a mutated gene producing a severe phenotype is length-dependent. Using the occurrence of conserved domains, we provide evidence that this dependency results from a correlation between the length of a protein and the number of functions it performs. We propose that protein length thus serves as a proxy for protein cardinality in the different networks required for the organism's survival and well-being. We use this example to argue that the collection of Mendelian Disease Genes can, and should, be used to study the rules governing systems vulnerability in living organisms.
Abstract: The study aimed to identify the nature of autistic talent, the manifestations of weak central coherence in talented autistic individuals, and their sensory characteristics. The case study consisted of four talented autistic males: two talented in drawing, one in clay formation, and one in jigsaw puzzles. The data collection tools were the Group Embedded Figures Test, the Block Design Test, the Sensory Profile Checklist Revised, interview forms, and direct observation. Results indicated that talent among autistic individuals emerges in a limited domain, is extraordinary in each case, and shows overlapping construction properties. Indeed, the cases show three perceptual aspects of weak central coherence: weakness in visual-spatial constructional coherence, weakness in perceptual coherence, and weakness in verbal-semantic coherence. Moreover, the majority of the study cases used the three strategies of weak central coherence (segmentation, obliqueness, and rotation). As for the sensory characteristics, all study cases show a number of these characteristics, emerging especially in the visual system.
Abstract: This paper addresses an efficient technique to embed and detect digital fingerprint codes. Orthogonal modulation is a straightforward and widely used approach for digital fingerprinting, but it shows several limitations in computational cost and signal efficiency. Coded modulation can resolve these limitations in theory; however, it is difficult for it to perform well in practice if host signals are not available while tracing colluders, if other kinds of attacks are applied, or if the fingerprint code becomes large. In this paper, we propose a hybrid modulation method in which the merits of orthogonal modulation and coded modulation are combined so that we can achieve low computational cost and high signal efficiency. To analyze the performance, we design a new fingerprint code based on GD-PBIBD theory and modulate this code into images by our method, using spread-spectrum watermarking in the frequency domain. The results show that the proposed method can efficiently handle large fingerprint codes and trace colluders under averaging attacks.
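A minimal sketch of additive spread-spectrum embedding and non-blind detection of one fingerprint bit in DCT coefficients, assuming SciPy; the GD-PBIBD code construction itself is not reproduced here, and the coefficient slots are illustrative.

```python
# Spread-spectrum embedding of one bit in DCT coefficients, non-blind detection.
import numpy as np
from scipy.fft import dct, idct

def dct2(a):
    return dct(dct(a, axis=0, norm="ortho"), axis=1, norm="ortho")

def idct2(a):
    return idct(idct(a, axis=1, norm="ortho"), axis=0, norm="ortho")

rng = np.random.default_rng(42)
image = rng.uniform(0, 255, (64, 64))           # stand-in host image

C0 = dct2(image)
pn = rng.choice([-1.0, 1.0], size=200)          # pseudo-noise chip sequence
idx = np.unravel_index(np.arange(200, 400), C0.shape)  # coefficient slots (illustrative)

bit, alpha = 1, 2.0                             # fingerprint bit and embedding strength
C = C0.copy()
C[idx] += alpha * (1.0 if bit else -1.0) * pn   # spread the bit over many coefficients
marked = idct2(C)

# non-blind detection: correlate the coefficient difference with the chips
corr = float(np.dot(dct2(marked)[idx] - C0[idx], pn))
print("recovered bit:", int(corr > 0))
```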
Abstract: Repeated observation of a given area over time yields potential for many forms of change detection analysis. These repeated observations are confounded in terms of radiometric consistency due to changes in sensor calibration over time, differences in illumination and observation angles, and variation in atmospheric effects.
This paper demonstrates the applicability of an empirical relative radiometric normalization method to a set of multitemporal cloudy images acquired by the Resourcesat1 LISS III sensor. The objective of this study is to detect and remove cloud cover and normalize the images radiometrically. Cloud detection is achieved using the Average Brightness Threshold (ABT) algorithm. The detected cloud is removed and replaced with data from another image of the same area. After cloud removal, the proposed normalization method is applied to reduce the radiometric influence caused by non-surface factors. This process identifies landscape elements whose reflectance values are nearly constant over time, i.e., the subset of non-changing pixels is identified using a frequency-based correlation technique. The quality of the radiometric normalization is statistically assessed by the R² value and the mean square error (MSE) between each pair of analogous bands.
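A minimal sketch of the relative radiometric normalization step: a linear gain and offset are fitted on assumed non-changing pixels, and the fit is assessed with R² and MSE as in the abstract; the pixel data here are synthetic.

```python
# Relative radiometric normalization via regression on invariant pixels.
import numpy as np

rng = np.random.default_rng(0)
reference = rng.uniform(50, 200, 10_000)            # reference-date band
subject = 0.8 * reference + 15 + rng.normal(0, 2, reference.size)

# indices assumed to have been selected as temporally invariant pixels
invariant = rng.choice(reference.size, 2_000, replace=False)
gain, offset = np.polyfit(subject[invariant], reference[invariant], 1)
normalized = gain * subject + offset

residual = reference - normalized
r2 = 1.0 - np.sum(residual ** 2) / np.sum((reference - reference.mean()) ** 2)
mse = np.mean(residual ** 2)
print(f"gain={gain:.3f} offset={offset:.2f} R2={r2:.4f} MSE={mse:.3f}")
```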
Abstract: Software and applications are subjected to serious and damaging security threats, and these threats are increasing as a result of the growing number of potential vulnerabilities. Security testing is an indispensable process for validating software security requirements and identifying security-related vulnerabilities. In this paper we analyze and compare different available vulnerability testing techniques against predefined criteria using the analytic hierarchy process (AHP). We have selected five testing techniques: source code analysis, fault code injection, robustness testing, stress testing, and penetration testing. These techniques have been evaluated against five criteria: cost, thoroughness, ease of use, effectiveness, and efficiency. The outcome of the study helps researchers, testers, and developers to understand the effectiveness of each technique in its respective domain. The study also helps to compare the inner workings of the testing techniques against the selected criteria to achieve optimum testing results.
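A minimal sketch of the AHP weighting step, assuming NumPy: criterion weights are derived from a pairwise comparison matrix via its principal eigenvector. The judgment values below are illustrative, not those elicited in the study.

```python
# AHP: priority weights from a pairwise comparison matrix (Saaty 1-9 scale).
import numpy as np

criteria = ["cost", "thoroughness", "ease of use", "effectiveness", "efficiency"]
# A[i][j]: how strongly criterion i is preferred over criterion j
A = np.array([
    [1,     1 / 3, 2,     1 / 5, 1 / 2],
    [3,     1,     4,     1 / 2, 2    ],
    [1 / 2, 1 / 4, 1,     1 / 6, 1 / 3],
    [5,     2,     6,     1,     3    ],
    [2,     1 / 2, 3,     1 / 3, 1    ],
])

vals, vecs = np.linalg.eig(A)
k = np.argmax(vals.real)
w = vecs[:, k].real
w = w / w.sum()                              # normalized priority vector

ci = (vals.real[k] - len(A)) / (len(A) - 1)  # consistency index
for name, weight in zip(criteria, w):
    print(f"{name:13s} {weight:.3f}")
print("consistency index:", round(ci, 4))
```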