Abstract: The solid-fuel flow of an iron ore sinter plant usually
consists of several types of solid fuel with differing properties.
Information about the composition of the solid-fuel flow typically
arrives only every 8-24 hours, so it cannot be used to control the
sintering process in real time. We therefore propose an expert
system which uses indirect measurements from the process to
estimate the composition of the solid-fuel flow by solving an
optimization task. This estimate can then be used to control the
sintering process. The proposed technique can improve sinter
quality and reduce the amount of solid fuel used by the process.
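The abstract does not specify the optimization model, so the following is only a minimal sketch under stated assumptions: a linear mixing of two hypothetical fuel types whose measurement signatures are known, estimated by closed-form least squares. The fuel names and all numbers are illustrative.

```python
# Hypothetical sketch of the optimization step: assume the indirect
# measurements are a linear mix of two known per-fuel signatures.
# Signatures and values below are illustrative, not the paper's data.

def estimate_mix(sig_a, sig_b, measured):
    """Least-squares estimate of the fraction t of fuel A (fuel B gets 1-t),
    minimizing ||t*sig_a + (1-t)*sig_b - measured||^2 in closed form."""
    d = [a - b for a, b in zip(sig_a, sig_b)]        # direction sig_a - sig_b
    r = [m - b for m, b in zip(measured, sig_b)]     # residual vs pure fuel B
    t = sum(di * ri for di, ri in zip(d, r)) / sum(di * di for di in d)
    return min(1.0, max(0.0, t))                     # clamp to a valid fraction

# Illustrative signatures: (fixed carbon %, volatile matter %, ash %)
coke_breeze = [85.0, 2.0, 12.0]
anthracite  = [90.0, 6.0, 4.0]

# A measurement synthesized from a 70/30 mix is recovered:
mix = [0.7 * a + 0.3 * b for a, b in zip(coke_breeze, anthracite)]
print(round(estimate_mix(coke_breeze, anthracite, mix), 3))  # -> 0.7
```

A real system would extend this to more fuel types and noisy measurements, but the inversion idea is the same.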
Abstract: In this contribution an innovative platform is presented
that integrates intelligent agents and evolutionary computation
techniques into legacy e-learning environments. It introduces the
design and development of a scalable and interoperable integration
platform supporting:
I) various assessment agents for e-learning environments,
II) a specific resource retrieval agent for the provision of
additional information from Internet sources matching the
needs and profile of the specific user, and
III) a genetic algorithm designed to extract efficient information
(classifying rules) from the students' answering data.
The agents are implemented in order to provide intelligent
assessment services based on computational intelligence techniques
such as Bayesian Networks and Genetic Algorithms.
The proposed Genetic Algorithm (GA) is used to extract efficient
information (classifying rules) from the students' answering data.
The idea of using a GA to fulfil this difficult task stems from the
fact that GAs have been widely used in applications involving the
classification of unknown data.
The use of new and emerging technologies such as web services
allows the provided services to be integrated into any web-based
legacy e-learning environment.
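The abstract does not give the GA's encoding or operators, so the following is only a toy sketch under assumptions: a chromosome is a bit-mask over answer positions, the induced rule is "predict pass if all masked answers are correct", and fitness is accuracy on illustrative labeled data.

```python
import random

# Minimal GA sketch for extracting a classifying rule from students' answers.
# Encoding, data, and parameters are illustrative, not the paper's.

random.seed(0)
ANSWERS = [(1,1,1,0), (1,0,1,1), (0,1,1,1), (1,1,0,0),
           (0,0,1,0), (1,1,1,1), (0,1,0,1), (1,0,0,0)]
LABELS  = [1 if a[0] and a[2] else 0 for a in ANSWERS]  # hidden target rule

def fitness(mask):
    hits = 0
    for ans, lab in zip(ANSWERS, LABELS):
        pred = 1 if any(mask) and all(ans[i] for i in range(4) if mask[i]) else 0
        hits += (pred == lab)
    return hits / len(ANSWERS)

def evolve(pop_size=10, gens=30, p_mut=0.2):
    pop = [[random.randint(0, 1) for _ in range(4)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        elite = pop[:pop_size // 2]                    # truncation selection
        children = []
        for _ in range(pop_size - len(elite)):
            a, b = random.sample(elite, 2)
            cut = random.randint(1, 3)                 # one-point crossover
            child = a[:cut] + b[cut:]
            child = [g ^ (random.random() < p_mut) for g in child]  # mutation
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)

best = evolve()
print(best, fitness(best))
```

Elitism guarantees the best rule found is never lost between generations; on this tiny search space the GA reliably recovers the hidden "answers 1 and 3 both correct" rule.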
Abstract: Document retrieval in Information Retrieval
Systems (IRS) is essentially about understanding the
information in the documents concerned. The better a system
understands the contents of documents, the more
effective the retrieval outcomes will be. But understanding the
contents is a very complex task. Conventional IRS apply algorithms
that can only approximate the meaning of document contents through
a keyword approach using the vector space model. Keywords may be
unstemmed or stemmed. When keywords are stemmed and conflated
in the retrieval process, we move a step forward in applying semantic
technology in IRS. Word stemming is a process in morphological
analysis under natural language processing, before syntactic and
semantic analysis. We have developed algorithms for Malay and
Arabic and incorporated stemming in our experimental systems in
order to measure retrieval effectiveness. The results have shown that
the retrieval effectiveness has increased when stemming is used in
the systems.
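The authors' Malay and Arabic stemming algorithms are not reproduced here; the following toy English suffix-stripping stemmer (illustrative rules only) merely shows why conflating word forms lets a query match more relevant documents.

```python
# Toy suffix stripper: rules are illustrative, not the paper's algorithms.
SUFFIXES = ["ations", "ation", "ings", "ing", "ions", "ion", "ed", "s"]

def stem(word):
    for suf in SUFFIXES:                               # longest suffix first
        if word.endswith(suf) and len(word) - len(suf) >= 3:
            return word[: -len(suf)]
    return word

def raw_matches(query, doc):
    """Query terms found in the document with no conflation at all."""
    words = set(doc.lower().split())
    return sum(q in words for q in query.lower().split())

def matches(query, doc):
    """Query terms found after stemming both query and document."""
    doc_stems = {stem(w) for w in doc.lower().split()}
    return sum(stem(q) in doc_stems for q in query.lower().split())

doc = "indexing stemmed words improves retrieval of documents"
print(raw_matches("stemming improve", doc))   # -> 0: no exact word overlap
print(matches("stemming improve", doc))       # -> 2: both terms conflate
```

With conflation, "stemming" and "stemmed" map to the same stem, so the document is retrieved; without it, the query misses entirely, which is the effectiveness gap the experiments measure.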
Abstract: In this study, the Scots pine (Pinus sylvestris L.) C
needles (i.e. the current-year-needles) were used as bioindicators in
determining the aerial distribution pattern of sulphur emissions
around industrial point sources at Kemi, Northern Finland. The
average sulphur concentration in the C needles was 897 mg/kg
(d.w.), with a standard deviation of 118 mg/kg (d.w.) and range 740 –
1350 mg/kg (d.w.). According to the results of this study, Scots pine
needles (Pinus sylvestris L.) appear to be an ideal bioindicator for
identifying atmospheric sulphur pollution derived from industrial
plants and can complement the information provided by plant
mapping studies around industrial plants.
Abstract: The work presented in this paper focuses on Knowledge Management services that enable CSCW (Computer Supported Cooperative Work) applications to provide an appropriate adaptation to the user and the situation in which the user is working. In this paper, we explain how a knowledge management system can be designed to support users in different situations by exploiting contextual data, users' preferences, and profiles of the involved artifacts (e.g., documents, multimedia files, mockups...). The presented work is rooted in the experience we gained in the MILK project and in early steps made in the MAIS project.
Abstract: Today, the Genetic Algorithm is used to solve a
wide range of optimization problems. Several studies have applied
the Genetic Algorithm to text classification, summarization and
information retrieval in the text mining process, and they report
better performance due to the nature of the Genetic Algorithm. In
this paper a new algorithm for using the Genetic Algorithm in
concept weighting and topic identification, based on concept
standard deviation, is explored.
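The paper's GA wrapper is not reproduced here; the following only sketches the concept-standard-deviation idea itself: a concept whose frequency varies strongly across documents discriminates topics better than one that is evenly spread. Concept names and counts are illustrative.

```python
from math import sqrt

# Sketch of std-based concept weighting (the GA layer is omitted).
def std_weight(freqs):
    """Population standard deviation of a concept's per-document frequencies."""
    mean = sum(freqs) / len(freqs)
    return sqrt(sum((f - mean) ** 2 for f in freqs) / len(freqs))

concept_freqs = {
    "genetics":  [9, 0, 8, 0],   # concentrated in two documents -> high weight
    "the-like":  [5, 5, 5, 5],   # evenly spread -> weight 0, no topical signal
}
weights = {c: std_weight(f) for c, f in concept_freqs.items()}
print(weights)
```

A GA such as the one proposed could then search over such weights to optimize topic identification; here only the deviation-based scoring is shown.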
Abstract: Texture information plays an increasingly important
role in remotely sensed imagery classification and many pattern
recognition applications. However, selecting the relevant textural
features to improve classification accuracy is not a straightforward
task. This work investigates the effectiveness of two Mutual
Information Feature Selector (MIFS) algorithms to select salient
textural features that contain highly discriminatory information for
multispectral imagery classification. The input candidate features are
extracted from a SPOT High Resolution Visible (HRV) image using
Wavelet Transform (WT) at levels (l = 1,2).
The experimental results show that the textural features selected
by the MIFS algorithms contribute more to improving the
classification accuracy than classical approaches such
as Principal Components Analysis (PCA) and Linear Discriminant
Analysis (LDA).
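The greedy MIFS criterion can be sketched on discrete toy features (not the actual SPOT/wavelet features): at each step the feature maximizing I(f;C) − β·Σ I(f;s) over already-selected features s is added, so redundant features are penalized.

```python
from collections import Counter
from math import log2

# Minimal MIFS sketch on discrete toy data; features are illustrative.
def mutual_info(xs, ys):
    """I(X;Y) in bits from empirical joint/marginal frequencies."""
    n = len(xs)
    pxy = Counter(zip(xs, ys)); px = Counter(xs); py = Counter(ys)
    return sum((c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

def mifs(features, labels, k, beta=1.0):
    selected, remaining = [], list(range(len(features)))
    for _ in range(k):
        best = max(remaining, key=lambda f: mutual_info(features[f], labels)
                   - beta * sum(mutual_info(features[f], features[s])
                                for s in selected))
        selected.append(best)
        remaining.remove(best)
    return selected

labels = [0, 0, 0, 0, 1, 1, 1, 1]
features = [
    [0, 0, 0, 0, 1, 1, 1, 1],   # f0: identical to the class, I(f0;C) = 1 bit
    [0, 1, 0, 1, 0, 1, 0, 1],   # f1: independent of the class, I(f1;C) = 0
    [0, 0, 0, 0, 1, 1, 1, 1],   # f2: redundant copy of f0, penalized by beta
]
print(mifs(features, labels, k=2))   # -> [0, 1]
```

The informative feature f0 is picked first; the β-penalty then cancels the redundant copy f2's apparent relevance, which is the behavior that distinguishes MIFS from ranking by relevance alone.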
Abstract: In this paper the Analytic Network Process (ANP) is
applied to the selection of photovoltaic (PV) solar power projects.
These projects follow a long management and execution process
from plant site selection to plant start-up. As a consequence, there are
many risks of time delays and even of project stoppage.
In the case study presented in this paper a top manager of an
important Spanish company that operates in the power market has to
decide on the best PV project (from four alternative projects) in
which to invest, based on risk minimization. The manager identified
50 risks of project execution delay and/or stoppage.
The influences among elements of the network (groups of risks
and alternatives) were identified and analyzed using the ANP
multicriteria decision analysis method. After analyzing the results the
main conclusion is that the network model can manage all the
information of the real-world problem and thus it is a decision
analysis model recommended by the authors. The strengths and
weaknesses of ANP as a multicriteria decision analysis tool are also
described in the paper.
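The ANP's limit-supermatrix step can be illustrated on a toy scale (the 50-risk case study is far larger; the 3×3 matrix below is purely illustrative). Columns hold local priorities; raising the column-stochastic supermatrix to powers until its columns agree yields the global priorities of the network elements.

```python
# Toy sketch of the ANP limit supermatrix; numbers are illustrative.
def mat_mul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def limit_matrix(w, tol=1e-10, max_iter=50):
    """Square the supermatrix repeatedly until it stops changing."""
    n, m = len(w), w
    for _ in range(max_iter):
        nxt = mat_mul(m, m)
        if max(abs(nxt[i][j] - m[i][j]) for i in range(n) for j in range(n)) < tol:
            return nxt
        m = nxt
    return m

# Column-stochastic supermatrix: column j distributes element j's influence.
W = [[0.1, 0.5, 0.3],
     [0.6, 0.1, 0.4],
     [0.3, 0.4, 0.3]]
L = limit_matrix(W)
priorities = [row[0] for row in L]    # all columns of the limit coincide
print([round(p, 4) for p in priorities])
```

Because the matrix is positive, the powers converge to a rank-one limit whose identical columns are the global priority vector, which is what ANP software reports for ranking the alternatives.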
Abstract: The goal of Gene Expression Analysis is to understand the processes that underlie the regulatory networks and pathways controlling inter-cellular and intra-cellular activities. Microarray datasets have been extensively used for this purpose. The scope of such analysis has broadened in recent times towards the reconstruction of gene networks and other holistic approaches of Systems Biology. Evolutionary methods are proving successful in such problems, and a number of such methods have been proposed. However, all these methods are based on the processing of genotypic information. Hence, there is a need to develop evolutionary methods that address phenotypic interactions together with genotypic interactions. We present a novel evolutionary approach, called the Phenomic algorithm, wherein the focus is on phenotypic interaction. We use the expression profiles of genes to model the interactions between them at the phenotypic level. We apply this algorithm to the yeast sporulation dataset and show that it can identify gene networks with relative ease.
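The abstract does not detail the Phenomic algorithm, so the following only sketches its core ingredient: scoring pairwise interaction between genes from their expression profiles. The profiles are illustrative, and a simple thresholded Pearson correlation stands in for whatever interaction model the algorithm actually uses.

```python
from math import sqrt

# Illustrative sketch: expression-profile correlation as an interaction score.
def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

profiles = {
    "gene_a": [1.0, 2.0, 3.0, 4.0],
    "gene_b": [2.1, 4.0, 6.2, 7.9],   # tracks gene_a closely
    "gene_c": [5.0, 1.0, 4.0, 2.0],   # unrelated profile
}

def edges(profiles, threshold=0.9):
    """Undirected network edges between strongly co-expressed genes."""
    names = sorted(profiles)
    return {(g, h) for i, g in enumerate(names) for h in names[i + 1:]
            if abs(pearson(profiles[g], profiles[h])) >= threshold}

print(edges(profiles))
```

An evolutionary search such as the Phenomic algorithm would explore candidate networks scored by such phenotype-level interaction measures rather than by genotypic operators alone.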
Abstract: During recent years, the traditional learning
approaches have undergone fundamental changes due to the
emergence of new technologies such as multimedia, hypermedia and
telecommunication. E-learning is a modern-world phenomenon that
has come into existence in the information age and in a
knowledge-based society. E-learning has developed significantly
within a short period of time. Thus it is of great significance to
secure information, allow confident access and prevent unauthorized
access. Making use of individuals' physiological or behavioral
(biometric) properties is a reliable method of securing information.
Among biometrics, the fingerprint is the most widely accepted, and
most countries use it as an efficient method of identification. This
article provides a new method of fingerprint comparison using
pattern recognition and image processing techniques. To verify
fingerprints, the shortest-distance method is used together with a
multilayer perceptron neural network operating on minutiae. This
method is highly accurate in the extraction of minutiae, accelerates
comparison through the elimination of false minutiae, and is more
reliable than methods that merely use directional images.
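Only the shortest-distance matching step is sketched below (the multilayer perceptron verification stage is not reproduced). Minutiae are simplified to (x, y) points; each enrolled minutia is greedily paired with its nearest unused query minutia, and pairs within a tolerance count toward the match score. Coordinates and tolerance are illustrative.

```python
from math import hypot

# Sketch of shortest-distance minutiae matching; values are illustrative.
def match_score(template, query, tol=5.0):
    """Fraction of template minutiae with a near, unused query minutia."""
    unused = list(query)
    matched = 0
    for tx, ty in template:
        if not unused:
            break
        nearest = min(unused, key=lambda q: hypot(q[0] - tx, q[1] - ty))
        if hypot(nearest[0] - tx, nearest[1] - ty) <= tol:
            matched += 1
            unused.remove(nearest)       # one-to-one pairing
    return matched / len(template)

enrolled = [(12, 30), (45, 60), (70, 22), (33, 80)]
same_finger = [(13, 31), (44, 62), (71, 21), (34, 79)]   # small displacements
other_finger = [(90, 10), (15, 95), (60, 55), (5, 5)]

print(match_score(enrolled, same_finger), match_score(enrolled, other_finger))
```

Removing false minutiae before this pairing shrinks the candidate set, which is why the abstract reports faster comparisons; real systems also match minutia type and ridge orientation, omitted here.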
Abstract: The Kawabata Evaluation System (KES) measures the
friction properties of textiles and fabrics, but its output is limited to
the surface friction factor of the fabric; no other surface data are
generated. This research was therefore conducted to gain
information about surface roughness from the surface friction
factor. To assess the roughness properties of light nonwovens, a
three-dimensional model of an ideal surface with regular sinusoidal
waves was simulated. A new factor, the Surface Roughness Factor,
was defined by comparing the roughness properties of the simulated
surface with those of real specimens. The relation between the
proposed factor and the friction factor of the specimens was
analyzed by regression, and the results showed a significant
correlation between them. It can be inferred that the new factor can
be used as an acceptable criterion for evaluating the roughness
properties of light nonwoven fabrics.
Abstract: In this paper, image compression using a hybrid vector
quantization scheme combining Multistage Vector Quantization
(MSVQ) and Pyramid Vector Quantization (PVQ) is introduced. The
combination of MSVQ and PVQ exploits the advantages provided
by both of them. In the wavelet decomposition of the image, most of
the information often resides in the lowest frequency subband.
MSVQ is applied to significant low frequency coefficients. PVQ is
utilized to quantize the coefficients of other high frequency
subbands. The wavelet coefficients are derived using lifting scheme.
The main aim of the proposed scheme is to achieve high compression
ratio without much compromise in the image quality. The results are
compared with the existing image compression scheme using MSVQ.
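The paper derives its wavelet coefficients via the lifting scheme; as a stand-in for whichever filters the authors use, the simple one-level 1-D Haar lifting steps are sketched below. The approximation band s is the kind of low-frequency content MSVQ would quantize, and the detail band d the high-frequency content handled by PVQ.

```python
# One-level 1-D Haar lifting sketch (illustrative; the paper's filters
# and 2-D subband structure may differ).
def haar_lift(x):
    even, odd = x[0::2], x[1::2]
    d = [o - e for o, e in zip(odd, even)]          # predict: detail coeffs
    s = [e + di / 2 for e, di in zip(even, d)]      # update: approximation
    return s, d

def haar_unlift(s, d):
    even = [si - di / 2 for si, di in zip(s, d)]    # undo update
    odd = [di + e for di, e in zip(d, even)]        # undo predict
    x = [0.0] * (2 * len(s))
    x[0::2], x[1::2] = even, odd
    return x

signal = [4.0, 6.0, 5.0, 9.0, 2.0, 2.0, 7.0, 3.0]
s, d = haar_lift(signal)
print(s, d)
print(haar_unlift(s, d) == signal)   # lifting is exactly invertible -> True
```

Lifting's in-place predict/update structure is what makes it cheaper than convolution-based wavelet transforms, and inverting the steps in reverse order reconstructs the signal exactly.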
Abstract: The segmentation of mouth and lips is a fundamental
problem in facial image analysis. In this paper we propose a method
for lip segmentation based on the rg-color histogram. Statistical
analysis shows that the rg color space is optimal for this purely
color-based segmentation. Initially, a rough adaptive threshold
selects a histogram region that ensures that all pixels in that region
are skin pixels. Based on those pixels we build a Gaussian model
that represents the skin pixel distribution and is used to obtain a
refined, optimal threshold. We do not incorporate shape or edge
information. In experiments we show the performance of our lip
pixel segmentation method compared to the ground truth of our
dataset and a conventional watershed algorithm.
Abstract: The 2008 Candlelight Protests of Korea were highly
significant in portraying the political environment among South
Korean youth. Many challenges and new advanced technologies
have drawn the youth community into the political arena, shifting
them from a traditional Korean youth into a much broader
community. Owing to their historical relationship with the people of
North Korea, the young generation has embraced a different view of
ethnic nationalism. This study examines youth involvement in
politics in line with their level of acceptance of the practice of
democracy. The survey results show that youth increasingly use new
media as a platform to gain political information, which has raised
the degree of sociopolitical interest among them. Furthermore, the
rise of nationalism and patriotism is discussed in this paper in
relation to the dynamism of the political approaches used by the
Korean government.
Abstract: In blended learning environments, the Internet can be combined with other technologies. The aim of this research was to design, introduce and validate a model to support synchronous and asynchronous activities by managing content domains in an Adaptive Hypermedia System (AHS). The application is based on information recovery techniques, clustering algorithms and adaptation rules to adjust the user's model to contents and objects of study. This system was applied to blended learning in higher education. The research strategy used was the case study method. Empirical studies were carried out on courses at two universities to validate the model. The results of this research show that the model had a positive effect on the learning process. The students indicated that the synchronous and asynchronous scenario is a good option, as it involves a combination of work with the lecturer and the AHS. In addition, they gave positive ratings to the system and stated that the contents were adapted to each user profile.
Abstract: Many natural language expressions are ambiguous and
need to draw on other sources of information to be interpreted.
Whether the word تعاون is interpreted as a noun or a verb depends
on the presence of contextual cues. To interpret words we need to
be able to discriminate between different usages. This paper
proposes a hybrid of rule-based and machine-learning methods for
tagging Arabic words. Because an Arabic word may be composed of
a stem plus affixes and clitics, a small number of rules dominates
the performance (affixes include inflectional markers for tense,
gender and number; clitics include some prepositions, conjunctions
and others). Tagging is closely related to the notion of word class
used in syntax. The method is based firstly on rules (considering the
post-position, the ending of a word, and patterns); anomalies are
then corrected by adopting a memory-based learning (MBL)
method. Memory-based learning is an efficient method for
integrating various sources of information and handling exceptional
data in natural language processing tasks. Secondly, the exceptional
cases of the rules are checked, and more information is made
available to the learner for treating those exceptional cases. To
evaluate the proposed method, a number of experiments were run,
assessing the contribution of the various sources of information to
learning.
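The rule-then-memory pipeline can be sketched as follows, using transliterated toy examples; the paper's actual suffix rules and MBL features are far richer. Suffix rules propose a tag, and words the rules get wrong are stored in an exception memory that is consulted first, mimicking memory-based correction of rule anomalies.

```python
# Toy sketch of hybrid rule + memory-based tagging; rules and words
# are illustrative transliterations, not the paper's rule set.
SUFFIX_RULES = [("at", "NOUN"), ("un", "NOUN"), ("ta", "VERB")]

def rule_tag(word):
    for suffix, tag in SUFFIX_RULES:
        if word.endswith(suffix):
            return tag
    return "NOUN"                      # default class

class HybridTagger:
    def __init__(self):
        self.memory = {}               # exceptional word -> correct tag

    def train(self, examples):
        for word, tag in examples:
            if rule_tag(word) != tag:  # store only what the rules get wrong
                self.memory[word] = tag

    def tag(self, word):
        return self.memory.get(word, rule_tag(word))

tagger = HybridTagger()
tagger.train([("kataba", "VERB"), ("maktabat", "NOUN"), ("taawun", "NOUN"),
              ("darasat", "VERB")])   # "darasat" violates the "at"->NOUN rule
print(tagger.tag("darasat"), tagger.tag("maktabat"))   # -> VERB NOUN
```

A full MBL component would generalize from stored exceptions via nearest-neighbor features rather than exact lookup, but the division of labor, rules for the regular cases and memory for the exceptions, is the same.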
Abstract: Quantitative trait loci (QTL) experiments have yielded
important biological and biochemical information necessary for
understanding the relationship between genetic markers and
quantitative traits. For many years, most QTL algorithms only
allowed one observation per genotype. Recently, there has been an
increasing demand for QTL algorithms that can accommodate more
than one observation per genotypic distribution. The Bayesian
hierarchical model is very flexible and can easily incorporate this
information into the model. Herein a methodology is presented that
uses a Bayesian hierarchical model to capture the complexity of the
data. Furthermore, the Markov chain Monte Carlo model composition
(MC3) algorithm is used to search for and identify important markers.
An extensive simulation study illustrates that the method captures the
true QTL, even under non-normal noise and with up to 6 QTL.
Abstract: Interoperability is the ability of information systems to operate in conjunction with each other, encompassing communication protocols, hardware, software, application, and data compatibility layers. There has been considerable work in industry on the development of component interoperability models, such as CORBA, (D)COM and JavaBeans. These models are intended to reduce the complexity of software development and to facilitate the reuse of off-the-shelf components. The focus of these models is syntactic interface specification, component packaging, inter-component communication, and bindings to a runtime environment. What these models lack is a consideration of architectural concerns – specifying systems of communicating components, explicitly representing loci of component interaction, and exploiting architectural styles that provide well-understood global design solutions. The development of complex business applications is now focused on an assembly of components available on a local area network or on the net. These components must be localized and identified in terms of available services and communication protocols before any request. The first part of the article introduces the base concepts of components and middleware, while the following sections describe the different up-to-date models of communication and interaction; the last section shows how the different models can communicate among themselves.
Abstract: Calibration estimation is a method of adjusting the
original design weights to improve the survey estimates by using
auxiliary information such as the known population total (or mean)
of the auxiliary variables. A calibration estimator uses calibrated
weights that are determined to minimize a given distance measure to
the original design weights while satisfying a set of constraints
related to the auxiliary information. In this paper, we propose a new
multivariate calibration estimator for the population mean in the
stratified sampling design, which incorporates information available
for more than one auxiliary variable. The problem of determining the
optimum calibrated weights is formulated as a Mathematical
Programming Problem (MPP) that is solved using the Lagrange
multiplier technique.
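The paper solves a multivariate stratified MPP; the following single-auxiliary, single-stratum sketch (with illustrative numbers) shows the underlying Lagrange-multiplier idea. Minimizing the chi-square distance Σ(w−d)²/d subject to Σ w·x = X gives the closed form w_k = d_k·(1 + λ·x_k).

```python
# Sketch of chi-square distance calibration; weights and totals illustrative.
def calibrate(d, x, x_total):
    """Calibrated weights w_k = d_k*(1 + lam*x_k) from the Lagrange solution."""
    t_hat = sum(dk * xk for dk, xk in zip(d, x))        # design-weighted estimate
    lam = (x_total - t_hat) / sum(dk * xk * xk for dk, xk in zip(d, x))
    return [dk * (1 + lam * xk) for dk, xk in zip(d, x)]

d = [10.0, 10.0, 20.0, 20.0]      # original design weights
x = [3.0, 5.0, 2.0, 4.0]          # auxiliary variable for sampled units
X_TOTAL = 220.0                   # known population total of x

w = calibrate(d, x, X_TOTAL)
print(round(sum(wk * xk for wk, xk in zip(w, x)), 6))   # constraint met: 220.0
```

The calibrated weights reproduce the known auxiliary total exactly while staying as close as possible to the design weights; the paper's multivariate estimator imposes one such constraint per auxiliary variable within each stratum.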
Abstract: In this paper, we present the information life cycle and analyze the importance of managing the corporate application portfolio across this life cycle. The approach presented here is not just an extension of the traditional information system development life cycle; it is based on the generic life cycle employed in other contexts, such as manufacturing or marketing. The paper proposes a model of the information system life cycle, supported by the assumption that a system has a limited life which may, however, be extended. The model is also applied in several cases; two examples of the framework's application, in a construction enterprise and in a manufacturing enterprise, are reported here.