Abstract: The temporal nature of negative selection is an under-exploited area. In a negative selection system, newly generated antibodies go through a maturing phase, and the survivors of that phase then wait to be activated by incoming antigens after a certain number of matches. Those without enough matches will age and die, while those with enough matches (i.e., those that are activated) will become active detectors. A currently active detector may also age and die if it cannot find any match within a pre-defined (lengthy) period of time. Therefore, what matters in a negative selection system is the dynamics of the involved parties in the current time window, not the whole time duration, which may extend to eternity. This property has the potential to define the uniqueness of negative selection in comparison with other approaches. On the other hand, a negative selection system is trained only with "normal" data samples. It has to learn and discover unknown "abnormal" data patterns on the fly by itself. Consequently, it is more appropriate to utilize negative selection as a system for pattern discovery and recognition rather than pattern recognition alone. In this paper, we study the potential of using negative selection to discover unknown temporal patterns.
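The detector life cycle described above (maturing phase, activation after enough matches, aging and death) can be sketched in a few lines. Everything below — the exact-match rule, the thresholds, and the self set — is an illustrative assumption, not the paper's actual configuration:

```python
SELF_SET = {"0101", "0110"}          # "normal" training samples
ACTIVATION_THRESHOLD = 2             # matches needed to activate a detector
MATURE_LIFESPAN = 5                  # idle time steps before a detector dies

def matches(detector, antigen):
    # Illustrative matching rule: exact string equality.
    return detector == antigen

def negative_selection(candidates):
    # Maturing phase: candidates that match any self sample are eliminated.
    return [d for d in candidates if not any(matches(d, s) for s in SELF_SET)]

class Detector:
    def __init__(self, pattern):
        self.pattern = pattern
        self.matches = 0
        self.age = 0
        self.active = False

    def step(self, antigen):
        # Each time step the detector either matches the incoming antigen
        # (resetting its age) or ages toward death.
        if matches(self.pattern, antigen):
            self.matches += 1
            self.age = 0
            if self.matches >= ACTIVATION_THRESHOLD:
                self.active = True
        else:
            self.age += 1
        return self.age < MATURE_LIFESPAN   # False => the detector dies

# Candidates that survive the maturing phase become mature detectors.
survivors = negative_selection(["0101", "1111", "1000"])
```

The sketch captures the time-window property the abstract emphasizes: only matches within the detector's current lifespan matter, not its whole history.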
Abstract: Mining sequential patterns from large customer transaction databases has been recognized as a key research topic in database systems. However, previous work has focused mainly on mining sequential patterns at a single concept level. In this study, we introduce concept hierarchies into the problem and present several algorithms for discovering multiple-level sequential patterns based on these hierarchies. An experiment was conducted to assess the performance of the proposed algorithms, measured by the relative time spent on completing the mining tasks on two different datasets. The experimental results showed that performance depends on the characteristics of the dataset and the pre-defined minimum-support threshold for each level of the concept hierarchy. Based on the experimental results, suggestions are also given on how to select an appropriate algorithm for a given dataset.
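The core idea — generalizing items through a concept hierarchy before support counting, so that a pattern too rare at the item level becomes frequent at the category level — can be sketched as follows. The hierarchy, sequences, and subsequence test are illustrative assumptions, not the paper's algorithms:

```python
# Illustrative two-level concept hierarchy: item -> category.
HIERARCHY = {"skim milk": "milk", "2% milk": "milk",
             "white bread": "bread", "wheat bread": "bread"}

def generalize(sequence, level):
    # Level 0 keeps raw items; level 1 replaces each item by its category.
    if level == 0:
        return sequence
    return [HIERARCHY.get(item, item) for item in sequence]

def support(sequences, pattern):
    # Fraction of customer sequences containing the pattern as a subsequence.
    def contains(seq, pat):
        it = iter(seq)
        return all(p in it for p in pat)
    return sum(contains(s, pattern) for s in sequences) / len(sequences)

sequences = [["skim milk", "white bread"], ["2% milk", "wheat bread"],
             ["skim milk", "wheat bread"]]
# At the raw item level the pattern is rare; at the category level it is frequent.
raw = support(sequences, ["skim milk", "white bread"])
general = support([generalize(s, 1) for s in sequences], ["milk", "bread"])
```

A per-level minimum-support threshold, as the abstract describes, would then be applied to the counts at each hierarchy level.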
Abstract: In the present paper, some recommendations for the use of the software package "Mathematica" in a basic numerical analysis course are presented. The methods covered in the course include the solution of systems of linear equations, nonlinear equations and systems of nonlinear equations, numerical integration, interpolation, and the solution of ordinary differential equations. A set of individual assignments developed for the course, covering all the topics, is discussed in detail.
Abstract: The purpose of this paper is to conceptualize a future-oriented human work environment and organizational activity in deep mines that entails a vision of a good and safe workplace. Future-oriented technological challenges and the mental images required for modern work organization design were appraised. It is argued that an intelligent deep mine covering the entire value chain, including environmental issues, and with a work organization that supports good working and social conditions towards increased human productivity, could be designed. With such an intelligent system and work organization in place, the mining industry could be seen as a place where cooperation, skills development and gender equality are key components. From this perspective, both youth and women might view mining as an attractive job and its work environment as a safe one, and this could go a long way towards redressing the unequal gender balance that exists in most mines today.
Abstract: In the present paper, a set of parametric FE stress
analyses is carried out for two-planar welded tubular DKT-joints
under two different axial load cases. Analysis results are used to
present general remarks on the effect of geometrical parameters on
the stress concentration factors (SCFs) at the inner saddle, outer
saddle, toe, and heel positions on the main (outer) brace. Then a new
set of SCF parametric equations is developed through nonlinear
regression analysis for the fatigue design of two-planar DKT-joints.
An assessment study of these equations is conducted against the experimental data, and the satisfaction of the acceptance criteria for parametric equations is checked. Significant effort has
been devoted by researchers to the study of SCFs in various uniplanar
tubular connections. Nevertheless, for multi-planar joints
covering the majority of practical applications, very few
investigations have been reported due to the complexity and high
cost involved.
Abstract: A co-generation system in an automobile can improve the thermal efficiency of the vehicle to some degree. The waste heat from the engine exhaust and coolant remains an attractive energy source, amounting to around 60% of the total energy converted from fuel. To maximize the effectiveness of the heat exchangers recovering this waste heat, it is vital to select the most suitable working fluid for the system, and it is likewise important to find the optimum design for the heat exchangers. The design of the heat exchanger is out of the scope of this study; rather, the main focus is on the right selection of the working fluid for the co-generation system. A simulation study was carried out to find the most suitable working fluid allowing the system to achieve the optimum efficiency in terms of heat recovery rate and thermal efficiency.
Abstract: The goal of this research is to discover the determinants of the success or failure of external cooperation in small and medium enterprises (SMEs). To this end, a survey was given to 190 SMEs that had experienced external cooperation within the last 3 years. A logistic regression model was used to derive the organizational and strategic characteristics that significantly influence whether the external collaboration of domestic SMEs succeeds. The results suggest that research and development (R&D)-related general characteristics (both idea creation and the discovery of market opportunities) and innovative strategic characteristics that focus on and emphasize indirect-market stakeholders (such as complementary companies and affiliates) raise the probability of successful external cooperation. These findings can be used to build policies or strategies for inducing successful external cooperation, and to understand the innovation of SMEs.
Abstract: Knowledge is indispensable, but voluminous knowledge becomes a bottleneck for efficient processing. A great challenge for data mining is the generation of a large number of potential rules by the mining process; in fact, the result is sometimes comparable in size to the original data. Traditional pruning measures such as support do not sufficiently reduce the huge rule space. Moreover, many practical applications are characterized by continual change of data and knowledge, making the knowledge more voluminous with each change. The most predominant representation of discovered knowledge is the standard Production Rule (PR) of the form If P Then D. Michalski & Winston proposed Censored Production Rules (CPRs) as an extension of production rules that exhibits variable precision and supports an efficient mechanism for handling exceptions. A CPR is an augmented production rule of the form If P Then D Unless C, where C (the censor) is an exception to the rule. Such rules are employed in situations in which the conditional statement 'If P Then D' holds frequently and the assertion C holds rarely. Using a rule of this type, we are free to ignore the exception conditions when the resources needed to establish their presence are tight, or when there is simply no information available as to whether they hold. Thus the 'If P Then D' part of a CPR expresses the important information, while the Unless C part acts only as a switch that changes the polarity of D to ~D. In this paper, a scheme based on the Dempster-Shafer Theory (DST) interpretation of a CPR is suggested for discovering CPRs from already-discovered flat PRs. Deriving CPRs from flat rules results in a considerable reduction of the discovered rules. The proposed scheme incrementally incorporates new knowledge and also reduces the size of the knowledge base considerably with each episode. Examples are given to demonstrate the behaviour of the proposed scheme.
The suggested cumulative learning scheme would be useful in mining data streams.
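The 'If P Then D Unless C' semantics described above can be sketched in a few lines. The rule and facts are illustrative, and the sketch deliberately omits the paper's DST machinery — it only shows how a CPR behaves when the censor is true, false, or unknown:

```python
def evaluate_cpr(P, D, C, facts):
    """Evaluate 'If P Then D Unless C' against a partial fact base.

    `facts` maps proposition names to True/False; a missing name means
    the truth value is unknown (e.g. too costly to establish).
    Returns the concluded polarity of D, or None if P does not hold.
    """
    if not facts.get(P, False):
        return None                  # precondition fails: rule does not fire
    if facts.get(C, False):
        return "~" + D               # censor holds: polarity of D flips
    # Censor false or unknown: conclude D, ignoring the rare exception.
    return D

# Illustrative rule: "If bird Then flies Unless penguin".
conclusion = evaluate_cpr("bird", "flies", "penguin", {"bird": True})
```

Note that the unknown-censor case and the false-censor case yield the same conclusion, which is exactly the resource-saving behaviour the abstract describes.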
Abstract: The interrelationship between international stock
markets has been a key study area among the financial market
researchers for international portfolio management and risk
measurement. The characteristics of security returns and their
dynamics play a vital role in the financial market theory. This study
is an attempt to find out the dynamic linkages among the equity
market of USA and emerging markets of Pakistan and India using
daily data covering the period January 2003 to December 2009. The study utilizes the Johansen (Journal of Economic Dynamics and Control, 12, 1988) and Johansen and Juselius (Oxford Bulletin of Economics and Statistics, 52, 1990) cointegration procedures for the long-run relationship, and Granger-causality tests based on the Toda and Yamamoto (Journal of Econometrics, 66, 1995) methodology.
No cointegration was found among the stock markets of the USA, Pakistan and India, while the Granger-causality test showed evidence of unidirectional causality running from the New York Stock Exchange to the Bombay and Karachi stock exchanges.
Abstract: One problem in object-oriented software development is the difficulty of finding appropriate and suitable objects with which to start the system. In this work, ontologies play a supporting role in object discovery at the initial stage of object-oriented software development. Many studies have tried to demonstrate that there is great potential in connecting object models and ontologies. Constructing an ontology from an object model, known as ontology engineering, can be done; conversely, this research aims to support the idea that building an object model from an ontology is also promising and practical. Ontology classes are available online in many specific areas and can be found by semantic search engines. There are also many helping tools; those used in this research are the Protégé ontology editor and Visual Paradigm, which together give a good outcome. This research shows how the approach works with a real case study using ontology classes in the travel/tourism domain, where classes, properties, and relationships from more than two ontologies must be combined to generate the object model. This paper presents a simple methodology framework that explains the process of discovering objects. The results show that this framework has value, with room for expansion. Reusing existing ontologies offers a much cheaper alternative than building new ones from scratch. More ontologies are becoming available on the web, online ontology libraries for storing and indexing ontologies are increasing in number and demand, and semantic and ontology search engines have also started to appear to facilitate the search and retrieval of online ontologies.
Abstract: This paper attempts to identify the significance of
Information and Communications Technology (ICT) and
competitiveness to the profit efficiency of commercial banks in
Malaysia. The profit efficiency of commercial banks in Malaysia, the
dependent variable, was estimated using the Stochastic Frontier
Approach (SFA) on a sample of unbalanced panel data covering 23 commercial banks between 1995 and 2007. Based on the empirical results, ICT was not found to exert a significant impact on profit efficiency, whereas competitiveness, non-ICT stock expenditure and ownership were significant contributors. On the other hand, the size
of banks was found to have significantly reduced profit efficiency,
opening up for various interpretations of the interrelated role of ICT
and competition.
Abstract: Modernizing legacy applications is a key issue facing IT managers today because there is enormous pressure on organizations to change the way they run their business to meet new requirements. The importance of software maintenance and reengineering is ever increasing, and understanding the architecture of existing legacy applications is the most critical issue for both. Artifact recovery can be facilitated by different recovery approaches, methods and tools. Existing methods provide static and dynamic techniques for extracting architectural information, but are not suitable for all users in all domains. This paper presents a simple and lightweight pattern extraction technique that extracts different artifacts from legacy systems using regular-expression pattern specifications with multiple-language support. We used our custom-built tool DRT to recover artifacts from existing systems at different levels of abstraction. To evaluate our approach, a case study was conducted.
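The pattern-extraction idea can be sketched as follows: a small catalogue of regular-expression specifications is applied to legacy source to recover artifact names. The patterns and the sample source below are illustrative assumptions, not the DRT tool's actual specifications:

```python
import re

# Illustrative pattern specifications; a real catalogue would carry one
# set of patterns per supported language, as the approach allows.
PATTERNS = {
    "class":  re.compile(r"\bclass\s+(\w+)"),
    "method": re.compile(r"\b(\w+)\s*\([^)]*\)\s*\{"),
}

def extract_artifacts(source):
    # Apply every pattern and collect the captured artifact names.
    return {kind: pat.findall(source) for kind, pat in PATTERNS.items()}

legacy = """
class Account {
    deposit(amount) { balance += amount; }
}
"""
artifacts = extract_artifacts(legacy)
```

Because the extraction is driven entirely by the pattern table, supporting another language means adding patterns rather than writing a parser — which is what makes the technique lightweight.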
Abstract: The implementation of electronic government in Malaysia started with the initiation of the Multimedia Super Corridor (MSC) by the Malaysian government. The introduction of ICT in the public sector, especially through e-Government initiatives, opens a new chapter in government administration throughout the world. The aim of this paper is to discuss the implementation of e-government in Malaysia, covering the results of a public-user self-assessment of Malaysia's electronic government applications. E-services, e-procurement, the Generic Office Environment (GOE), the Human Resources Management Information System (HRMIS), the Project Monitoring System (PMS), the Electronic Labor Exchange (ELX) and e-syariah (religion) were the seven flagship applications assessed. The study adopted a cross-sectional survey research approach drawing on the information systems literature. The analysis covered 35 respondents in a pilot test, and there was evidence from the public users' perspective to suggest that the e-government applications were generally successful.
Abstract: Due to their high power-to-weight ratio and low cost, pneumatic actuators are attractive for robotics and automation applications; however, achieving fast and accurate control of their position has long been known to be a complex control problem. This paper presents a methodology for obtaining controllers that achieve high position accuracy and preserve the closed-loop characteristics over a broad operating range. Experimentation with a number of conventional (or "classical") three-term controllers shows that, as repeated operations accumulate, the characteristics of the pneumatic actuator change, requiring frequent re-tuning of the controller parameters (PID gains). Furthermore, three-term controllers are found to perform poorly in recovering the closed-loop system after the application of load or other external disturbances. The key reason for these problems lies in the non-linear exchange of energy inside the cylinder, relating in particular to the complex friction forces that develop at the piston-wall interface. To overcome this problem while still remaining within the boundaries of classical control methods, we designed an auto-selective classical controller so that the system performance would benefit from all three control gains (Kp, Ki, Kd) according to the system requirements and the characteristics of each type of controller. This challenging experimentation targeted consistent performance in the face of modelling imprecision and disturbances. In the work presented, a selective PID controller is demonstrated on an experimental rig comprising an air cylinder driven by a variable-opening pneumatic valve and equipped with position and pressure sensors. The paper reports on tests carried out to investigate the capability of this controller to achieve consistent control performance under repeated operations and other changes in operating conditions.
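The gain-selection idea can be sketched as a PID loop that switches its (Kp, Ki, Kd) set according to the current error, applied here to a toy first-order plant. All gains, thresholds, and plant dynamics below are illustrative assumptions, not the paper's experimental rig or tuning:

```python
def selective_pid(setpoint, steps=200, dt=0.01):
    # Toy first-order plant: tau * y' = u - y (stand-in for the cylinder).
    tau, y = 0.1, 0.0
    integral, prev_error = 0.0, setpoint
    for _ in range(steps):
        error = setpoint - y
        # Selection logic: far from the setpoint, rely on P for a fast rise;
        # near it, bring in I (steady-state accuracy) and D (damping).
        if abs(error) > 0.5 * abs(setpoint):
            kp, ki, kd = 2.0, 0.0, 0.0
        else:
            kp, ki, kd = 1.0, 5.0, 0.05
        integral += error * dt
        derivative = (error - prev_error) / dt
        u = kp * error + ki * integral + kd * derivative
        prev_error = error
        y += dt * (u - y) / tau      # integrate the plant one time step
    return y

final = selective_pid(1.0)
```

The point of the selection is that each gain contributes only in the operating region where its characteristic helps, rather than all three being active with fixed weights everywhere.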
Abstract: As global industry develops rapidly, energy demand rises with it. A great deal of energy is consumed in production processes, mostly in generating heat. Of the total energy consumption, 40% goes into process heat, mechanical work, chemical energy and electricity, while the remaining 50% is released into the environment, causing energy waste and environmental pollution. There are many ways to recover waste heat in a factory. An Organic Rankine Cycle (ORC) system can produce electricity and reduce energy costs by recovering low-temperature waste heat in the factory; indeed, ORC is the technology with the highest power-generation efficiency in low-temperature heat recycling. However, most factory executives still hesitate because of the high implementation cost of an ORC system, even though a great deal of heat is wasted. This study therefore constructs a nonlinear mathematical model of waste-heat-recovery equipment configuration to maximize profits, and a particle swarm optimization algorithm is developed to generate the optimal facility installation plan for the ORC system.
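A bare-bones particle swarm optimization loop can be sketched as below. The objective is a stand-in sphere function; the paper's actual model maximizes ORC profit over equipment configurations, and all swarm parameters here are illustrative defaults:

```python
import random

def pso(objective, dim, bounds, n_particles=20, iters=100, seed=1):
    """Minimize `objective` over a box with a minimal PSO."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                  # personal best positions
    gbest = min(pbest, key=objective)[:]         # global best position
    w, c1, c2 = 0.7, 1.5, 1.5                    # inertia, cognitive, social
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                # Keep each particle inside the feasible box.
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            if objective(pos[i]) < objective(pbest[i]):
                pbest[i] = pos[i][:]
                if objective(pbest[i]) < objective(gbest):
                    gbest = pbest[i][:]
    return gbest

# Stand-in objective: sphere function, minimum at the origin.
best = pso(lambda x: sum(v * v for v in x), dim=2, bounds=(-5.0, 5.0))
```

For the configuration problem, a particle would encode the installation decision variables and the objective would be the negated profit from the nonlinear model.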
Abstract: Document clustering has become an essential technology with the popularity of the Internet, which makes fast, high-quality document clustering techniques a core topic. Text clustering, or simply clustering, is about discovering semantically related groups in an unstructured collection of documents. Clustering has long been popular because it provides unique ways of digesting and generalizing large amounts of information. One of the issues in clustering is extracting proper features (concepts) of a problem domain. Existing clustering technology mainly focuses on term-weight calculation, but to achieve more accurate document clustering, more informative features, including concept weights, are important. Feature selection is important for the clustering process because irrelevant or redundant features may misguide the clustering results. To counteract this issue, the proposed system introduces concept weights into a text clustering system developed on a k-means algorithm in accordance with the principles of ontology, so that the important words of a cluster can be identified by their weight values. To a certain extent, this resolves the semantic problem in specific areas.
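The concept-weight idea can be sketched as a k-means variant whose distance function weights each feature, so that ontology-derived concept weights can dominate the grouping. The vectors, weights, k, and seeding below are illustrative assumptions, not the proposed system's actual ontology-derived values:

```python
def weighted_kmeans(points, weights, k, iters=20):
    # Concept weights scale each dimension's contribution to the distance.
    def dist(a, b):
        return sum(w * (x - y) ** 2 for w, x, y in zip(weights, a, b))

    centroids = [p[:] for p in points[:k]]       # simple deterministic seeding
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            clusters[min(range(k), key=lambda i: dist(p, centroids[i]))].append(p)
        for i, members in enumerate(clusters):
            if members:                          # recompute centroid as the mean
                centroids[i] = [sum(c) / len(members) for c in zip(*members)]
    return centroids, clusters

# Two term dimensions; the first (high concept weight) dominates the grouping.
docs = [[0.1, 9.0], [0.2, 1.0], [5.0, 9.1], [5.1, 1.1]]
centroids, clusters = weighted_kmeans(docs, weights=[10.0, 0.1], k=2)
```

With uniform weights the second dimension would split the documents differently; the concept weight is what makes the first dimension decide cluster membership.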
Abstract: The present study presents a new approach to automatic
data clustering and classification problems in large and complex
databases and, at the same time, derives specific types of explicit rules
describing each cluster. The method works well in both sparse and
dense multidimensional data spaces. The members of the data space
can be of the same nature or represent different classes. A number
of N-dimensional ellipsoids are used for enclosing the data clouds.
Due to the geometry of an ellipsoid and its free rotation in space, the detection of clusters becomes very efficient. The method is based
on genetic algorithms that are used for the optimization of location,
orientation and geometric characteristics of the hyper-ellipsoids. The
proposed approach can serve as a basis for the development of
general knowledge systems for discovering hidden knowledge and
unexpected patterns and rules in various large databases.
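The core geometric test — whether a point lies inside an ellipsoid given its center, semi-axes, and orientation — can be sketched in 2-D as follows; these are exactly the parameters the paper's genetic algorithm would optimize. The point cloud and ellipse parameters below are illustrative:

```python
import math

def inside_ellipsoid(point, center, axes, angle):
    """True if a 2-D point lies inside an ellipse with the given
    center, semi-axis lengths, and rotation angle (radians)."""
    dx, dy = point[0] - center[0], point[1] - center[1]
    # Rotate the point into the ellipse's own coordinate frame.
    c, s = math.cos(angle), math.sin(angle)
    u, v = c * dx + s * dy, -s * dx + c * dy
    return (u / axes[0]) ** 2 + (v / axes[1]) ** 2 <= 1.0

def enclosure_fitness(points, center, axes, angle):
    # A GA fitness could reward enclosed points while penalizing volume;
    # here we simply count how many points the ellipse encloses.
    return sum(inside_ellipsoid(p, center, axes, angle) for p in points)

cloud = [(1.0, 1.0), (1.5, 1.2), (4.0, 4.0)]
score = enclosure_fitness(cloud, center=(1.2, 1.1), axes=(1.0, 0.5), angle=0.0)
```

In N dimensions the same test generalizes to a quadratic form with an N-dimensional rotation, which is what lets freely rotated ellipsoids fit elongated, oblique data clouds efficiently.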
Abstract: Grid environments aggregate geographically distributed resources. Grids come in three types: computational grids, data grids and storage grids. This paper presents research on data grids. A data grid is used for covering and securing access to data from many heterogeneous sources: users need not worry about where the data is located, provided they can access it. Metadata is used to access data in a data grid, and at present an application metadata catalogue and the SRB middleware package are used in data grids for metadata management. In this paper, simultaneous and rapid updating, streamlining and searching are made possible through a classified table preserving the metadata and the conversion of each table into numerous tables. Meanwhile, with regard to the specific application, the most appropriate division is determined. Concurrent handling of some requests and pipelined execution are further results of this technique.
Abstract: Knowledge management (KM) is the process of taking whatever steps are needed to get the most out of available knowledge resources. KM involves several steps: capturing knowledge, discovering new knowledge, sharing knowledge, and applying knowledge in the decision-making process. In applying knowledge, it is not necessary for the individual using it to comprehend it, as long as the available knowledge is used to guide decision making and actions. When an expert is called and provides a step-by-step procedure for solving a problem to the caller, the expert is transferring knowledge, or giving directions, to the caller, and the caller is 'applying' the knowledge by following the expert's instructions. An appropriate mechanism is needed to ensure effective knowledge transfer, which in this case happens by telephone or email. The problem with email and telephone is that the knowledge is not fully circulated and disseminated to all users. In this paper, drawing on the experience of a local university Help Desk, the use of Information Technology (IT) to effectively support knowledge transfer in the organization is proposed. The issues covered include the existing knowledge, related work, the methodology used in defining the knowledge management requirements, and an overview of the prototype.
Abstract: Knowledge Discovery in Databases (KDD) has
evolved into an important and active area of research because of
theoretical challenges and practical applications associated with the
problem of discovering (or extracting) interesting and previously
unknown knowledge from very large real-world databases. Rough
Set Theory (RST) is a mathematical formalism for representing
uncertainty that can be considered an extension of the classical set
theory. It has been used in many different research areas, including
those related to inductive machine learning and reduction of
knowledge in knowledge-based systems. One important concept related to RST is that of a rough relation. In this paper, we present the current status of research on applying rough set theory to KDD, which is helpful for handling the characteristics of real-world databases. The main aim is to show how rough sets and rough set analysis can be effectively used to extract knowledge from large databases.
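The rough-set approximation at the heart of this line of work can be sketched as follows: given the indiscernibility (equivalence) relation induced by a set of condition attributes, compute the lower and upper approximations of a target concept. The toy decision table below is illustrative:

```python
from collections import defaultdict

def approximations(objects, attributes, target):
    """Lower/upper approximations of `target` (a set of object ids)
    under the indiscernibility relation induced by `attributes`."""
    # Group objects into equivalence classes by their attribute values.
    classes = defaultdict(set)
    for oid, values in objects.items():
        classes[tuple(values[a] for a in attributes)].add(oid)
    lower, upper = set(), set()
    for eq_class in classes.values():
        if eq_class <= target:       # entirely inside: certainly in the concept
            lower |= eq_class
        if eq_class & target:        # overlaps: possibly in the concept
            upper |= eq_class
    return lower, upper

# Toy decision table: two condition attributes per object.
objects = {1: {"color": "red", "size": "big"},
           2: {"color": "red", "size": "big"},
           3: {"color": "red", "size": "small"},
           4: {"color": "blue", "size": "small"}}
target = {1, 3}                      # objects with the "positive" decision
lower, upper = approximations(objects, ["color", "size"], target)
```

The boundary region (upper minus lower) is exactly where the data are inconsistent — objects 1 and 2 are indiscernible yet decided differently — which is the uncertainty rough-set-based KDD methods exploit when inducing rules.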