Abstract: In this paper we describe the recognition process of Greek compound words using the PC-KIMMO software. We try to show certain limitations of the system with respect to the principles of compound formation in Greek. Moreover, we discuss the computational processing of phenomena such as stress and syllabification which are indispensable for the analysis of such constructions and we try to propose linguistically-acceptable solutions within the particular system.
Abstract: Fuel cells have become one of the major areas of
research in academia and industry. The goal of most fish
farmers is to maximize production and profit while holding labor
and management effort to a minimum. Given the risk of fish kills,
disease outbreaks, and poor water quality in most pond culture
operations, aeration offers the most immediate and practical
solution to the water quality problems encountered at higher
stocking and feeding rates. Many aeration units are electrical,
so a continuous, highly reliable, affordable, and environmentally
friendly power source is necessary. Aerating water with PEM fuel
cell power is not only a new application of renewable energy; it
also provides an affordable way to promote biodiversity in
stagnant ponds and lakes. This paper presents a new design and
control scheme for a PEM fuel cell powering a diffused-air
aeration system for a shrimp farm in Mersa Matruh, Egypt.
Artificial intelligence (AI) control techniques are used to
regulate the fuel cell output power by controlling the input gas
flow rates. Moreover, the mathematical modeling and simulation of
the PEM fuel cell are introduced. A comparative study of the
performance of fuzzy logic control (FLC) and neural network
control (NNC) is carried out. The results show the effectiveness
of NNC over FLC.
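The kind of PEM fuel cell model the abstract refers to can be sketched with a standard static polarization curve: open-circuit voltage minus activation, ohmic, and concentration losses. All parameter values below are illustrative placeholders, not the paper's identified model.

```python
import math

def pem_cell_voltage(i, E0=1.23, A=0.06, i0=1e-4, r=0.25, m=3e-5, n=8.0):
    """Single-cell voltage (V) at current density i (A/cm^2).

    V = E0 - activation loss - ohmic loss - concentration loss.
    Parameter values are illustrative, not taken from the paper.
    """
    v_act = A * math.log(i / i0)   # Tafel activation loss
    v_ohm = r * i                  # ohmic loss (r in ohm*cm^2)
    v_conc = m * math.exp(n * i)   # empirical concentration loss
    return E0 - v_act - v_ohm - v_conc

def stack_voltage(i, n_cells=35):
    """Stack voltage for n_cells identical cells in series."""
    return n_cells * pem_cell_voltage(i)
```

The monotonically falling curve this produces is what the AI controller exploits: adjusting gas flow (and hence usable current density) moves the operating point along it.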
Abstract: CO2 is the primary anthropogenic greenhouse gas,
accounting for 77% of the human contribution to the greenhouse
effect in 2004. In recent years, the global concentration of CO2
in the atmosphere has been increasing rapidly. CO2 emissions have
an impact on global climate change. Anthropogenic CO2 is emitted
primarily from fossil fuel combustion. Carbon capture and storage
(CCS) is one option for reducing CO2 emissions. There are three
major approaches to CCS: post-combustion capture, pre-combustion
capture, and the oxyfuel process. Post-combustion capture offers
some advantages, as existing combustion technologies can still be
used without radical changes to them.
There are several post-combustion gas separation and capture
technologies being investigated, namely: (a) absorption, (b)
cryogenic separation, (c) membrane separation, (d) microalgal
biofixation, and (e) adsorption. Apart from establishing new
techniques, the exploration of capture materials with high
separation performance and low capital cost is of paramount
importance. However, the application of adsorption technology
requires easily regenerable and durable adsorbents with a high
CO2 adsorption capacity. It has recently been reported that the
cost of CO2 capture can be reduced by using this technology. In
this paper, the research progress (from experimental results) in
adsorbents for CO2 adsorption, storage, and separation is
reviewed, and future research directions are suggested as well.
Abstract: In this paper a new approach to prioritizing urban planning projects in an efficient and reliable way is presented. It is based on environmental pressure indices and multicriteria decision methods. The paper introduces a rigorous method, of acceptable complexity, for rank-ordering urban development proposals according to their environmental pressure. The technique combines the use of Environmental Pressure Indicators, the aggregation of indicators into an Environmental Pressure Index by means of the Analytic Network Process (ANP) method, and the interpretation of the information obtained from the experts during the decision-making process. The ANP method allows the experts' judgments on each of the indicators to be aggregated into one Environmental Pressure Index. In addition, ANP is based on utility ratio functions, which are the most appropriate for the analysis of uncertain data such as experts' estimations. Finally, unlike other multicriteria techniques, ANP allows the decision problem to be modelled using the relationships among dependent criteria. The method has been applied to the proposal for urban development of La Carlota airport in Caracas (Venezuela). The Venezuelan Government would like to see a recreational project developed on the abandoned area that would mean a significant improvement for the capital. Three options are currently under evaluation: a Health Club, a Residential area, and a Theme Park. The participating experts agreed that the method proposed in this paper is useful and an improvement over traditional techniques such as environmental impact studies and life-cycle analysis. They found the results coherent, the process sufficiently rigorous and precise, and the use of resources significantly lower than in other methods.
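The aggregation step described above rests on deriving priority weights from expert pairwise judgments. A minimal sketch of that core computation, assuming a Saaty-style pairwise comparison matrix with made-up values (the full ANP supermatrix treatment of dependent criteria is more involved):

```python
# Hypothetical pairwise comparison matrix for three environmental
# pressure indicators (Saaty 1-9 scale); the values are illustrative,
# not the experts' actual judgments from the La Carlota case.
M = [
    [1.0, 3.0, 5.0],
    [1 / 3, 1.0, 2.0],
    [1 / 5, 1 / 2, 1.0],
]

def priority_vector(M, iters=100):
    """Principal eigenvector of a positive pairwise comparison matrix,
    obtained by power iteration and normalized to sum to 1."""
    n = len(M)
    w = [1.0 / n] * n
    for _ in range(iters):
        w = [sum(M[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(w)
        w = [x / s for x in w]
    return w

weights = priority_vector(M)  # relative importance of each indicator
```

An indicator judged more important in every comparison ends up with the larger weight, which is then used to combine the indicator scores into a single index.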
Abstract: One of the major challenges in the Information
Retrieval field is handling the massive amount of information
available to Internet users. Existing ranking techniques and strategies
that govern the retrieval process fall short of expected accuracy.
Often relevant documents are buried deep in the list of documents
returned by the search engine. In order to improve retrieval accuracy
we examine the issue of language effect on the retrieval process.
We then propose a solution that provides a more user-biased relevance
for retrieved data. The results demonstrate that using indices based
on variations of the same language enhances the accuracy of search
engines for individual users.
Abstract: In this paper, test generation methods and appropriate fault models for the testing and analysis of embedded systems described as (extended) finite state machines ((E)FSMs) are presented. Compared to simple FSMs, EFSMs specify not only the control flow but also the data flow. Thus, we define a two-level fault model to cover both aspects. The goal of this paper is to reuse well-known FSM-based test generation methods for the automation of embedded system testing. These methods have been widely used in the testing and validation of protocols and communicating systems. In particular, (E)FSM-based specification and testing is advantageous because (E)FSMs support the formal semantics of already standardised formal description techniques (FDTs), in addition to being popular in the design of hardware and software systems.
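As an illustration of the kind of FSM-based test generation being reused, a transition tour (an input sequence exercising every transition at least once, a basic control-flow coverage criterion) can be computed for a toy Mealy machine. The machine and the BFS strategy below are illustrative and assume a deterministic, strongly connected FSM:

```python
from collections import deque

# Minimal Mealy machine: transitions[state][input] = (next_state, output)
transitions = {
    "s0": {"a": ("s1", "x"), "b": ("s0", "y")},
    "s1": {"a": ("s0", "y"), "b": ("s1", "x")},
}

def transition_tour(transitions, start):
    """Input sequence covering every transition at least once.
    Repeatedly BFS to the nearest state with an uncovered transition."""
    uncovered = {(s, i) for s in transitions for i in transitions[s]}
    state, seq = start, []
    while uncovered:
        queue, seen = deque([(state, [])]), {state}
        tgt, path = None, []
        while queue:
            s, p = queue.popleft()
            hit = [i for i in transitions[s] if (s, i) in uncovered]
            if hit:
                tgt, path = (s, hit[0]), p + [hit[0]]
                break
            for i, (nxt, _out) in transitions[s].items():
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, p + [i]))
        if tgt is None:      # remaining transitions unreachable
            break
        uncovered.discard(tgt)
        for i in path:       # replay the path to advance the state
            state = transitions[state][i][0]
        seq.extend(path)
    return seq

tour = transition_tour(transitions, "s0")
```

For the data-flow level of an EFSM fault model, such control-flow tours would additionally be constrained by guards and variable updates, which this sketch omits.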
Abstract: Data mining techniques have been used in medical
research for many years and are known to be effective. In order
to solve such problems as long waiting times, congestion, and
delayed patient care faced by emergency departments, this study
concentrates on building a hybrid methodology combining data
mining techniques such as association rules and classification
trees. The methodology is applied to real-world emergency data
collected from a hospital and is evaluated by comparison with
other techniques. The methodology is expected to help physicians
make a faster and more accurate classification of chest pain
diseases.
Abstract: The number of documents being created increases at an
ever-faster pace, while most of them address already known topics
and few of them introduce new concepts. This fact has started a
new era in the information retrieval discipline, one whose
requirements have their own specialties: digging into topics and
concepts and finding out subtopics or relations between topics.
Up to now, IR research has been interested in retrieving
documents about a general topic or clustering documents under
generic subjects. However, these conventional approaches cannot
go deep into the content of documents, which makes it difficult
for people to reach the right documents they are searching for.
So we need new ways of mining document sets, where the critical
point is to know much about the contents of the documents. As a
solution, we propose to enhance LSI, one of the proven IR
techniques, by supporting its vector space with n-gram forms of
words. The positive results we have obtained are shown in two
different application areas of the IR domain: querying a document
database, and clustering the documents in the document database.
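The proposed enhancement, LSI over a vector space of n-gram forms, can be sketched as follows. The documents, the trigram choice, and the rank k are illustrative, not the paper's experimental setup:

```python
import numpy as np

def char_ngrams(text, n=3):
    """Character n-grams used as pseudo-terms (illustrative choice)."""
    t = text.lower()
    return [t[i:i + n] for i in range(len(t) - n + 1)]

docs = [
    "latent semantic indexing",
    "semantic indexing of documents",
    "particle swarm optimization",
]

# Term-document matrix over the n-gram vocabulary
vocab = sorted({g for d in docs for g in char_ngrams(d)})
index = {g: i for i, g in enumerate(vocab)}
A = np.zeros((len(vocab), len(docs)))
for j, d in enumerate(docs):
    for g in char_ngrams(d):
        A[index[g], j] += 1

# Rank-k LSI: truncated SVD of the term-document matrix
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
doc_vecs = (np.diag(s[:k]) @ Vt[:k]).T      # documents in latent space

def query_vec(q):
    """Fold a query into the same k-dimensional latent space."""
    v = np.zeros(len(vocab))
    for g in char_ngrams(q):
        if g in index:
            v[index[g]] += 1
    return U[:, :k].T @ v

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

sims = [cosine(query_vec("semantic index"), d) for d in doc_vecs]
```

Because the pseudo-terms are character n-grams rather than whole words, morphological variants ("index", "indexing") share most of their features, which is the intuition behind the enhancement.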
Abstract: This paper presents a physics-based model for
high-voltage fast-recovery diodes. The model provides a good
trade-off between reverse recovery time and forward voltage drop
realized through a combination of lifetime control and emitter
efficiency reduction techniques. The minority carrier lifetime can be
extracted from the reverse recovery transient response and forward
characteristics. This paper also shows that decreasing the amount of
the excess carriers stored in the drift region will result in softer
characteristics which can be achieved using a lower doping level. The
developed model is verified by experiment and the measurement data
agrees well with the model.
Abstract: The growth and interconnection of power networks in many regions has called for sophisticated techniques for energy management services (EMS). State estimation techniques have become a powerful tool in power system control centers, where more information is required to achieve the objectives of EMS. For an online state estimator, assuming the continuous time is equidistantly sampled with period Δt, the processing of events must be finished within this period. The advantage of the Kalman filtering (KF) algorithm, namely its use of system information to improve estimation precision, is exploited. Computational power is a major issue in achieving this objective, i.e. obtaining the estimator's solution within a small sampling period. This paper presents the optimum utilization of processors in a state estimator based on KF. The model used is presented using Petri net (PN) theory.
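For reference, one predict/update cycle of the KF on which such an estimator is built looks like this in the scalar case; the model constants are illustrative, not the paper's power network model:

```python
def kalman_step(x, P, z, A=1.0, Q=1e-4, H=1.0, R=0.04):
    """One predict/update cycle of a scalar Kalman filter.
    x, P: state estimate and its variance; z: new measurement.
    A, Q, H, R are illustrative model constants."""
    # predict
    x_pred = A * x
    P_pred = A * P * A + Q
    # update
    K = P_pred * H / (H * P_pred * H + R)    # Kalman gain
    x_new = x_pred + K * (z - H * x_pred)
    P_new = (1 - K * H) * P_pred
    return x_new, P_new

# Smooth noisy measurements of a constant state (true value 1.0)
measurements = [1.2, 0.8, 1.1, 0.9, 1.05, 0.95]
x, P = 0.0, 1.0
for z in measurements:
    x, P = kalman_step(x, P, z)
```

In the multivariate network case each step becomes a set of matrix operations, and it is these per-sample matrix computations that the paper distributes across processors so that the cycle finishes within Δt.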
Abstract: The typical coupled-tanks process, a TITO plant, is
difficult to control because its system dynamics change and its
process loops interact. This paper presents a design methodology
for an auto-adjustable PI controller using the MRAC technique.
The proposed method can adjust the controller parameters in real
time in response to changes in the plant and disturbances, by
referring to a reference model that specifies the properties of
the desired control system.
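A minimal sketch of the MRAC idea, using the classic MIT rule on a scalar first-order plant; the actual coupled-tanks TITO design in the paper is more elaborate, and all numbers below are illustrative:

```python
# Discrete-time MRAC sketch (MIT rule) for a first-order plant
# y[k+1] = a*y[k] + b*u[k]; all values are illustrative, not the
# coupled-tanks model from the paper.
a, b = 0.9, 0.5        # "unknown" plant (used only to simulate it)
am, bm = 0.8, 0.2      # reference model: ym[k+1] = am*ym[k] + bm*r
gamma = 0.05           # adaptation gain
theta = 0.0            # adjustable feedforward gain
y = ym = 0.0
r = 1.0                # constant set-point
for _ in range(2000):
    u = theta * r                  # control law
    e = y - ym                     # model-following error
    theta -= gamma * e * r         # MIT-rule gradient update
    y = a * y + b * u              # plant response
    ym = am * ym + bm * r          # desired response
```

The controller parameter theta is driven until the plant output tracks the reference model; for these numbers the steady-state value is theta = bm(1 - a) / (b(1 - am)) = 0.2. A PI version adapts two parameters per loop in the same gradient fashion.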
Abstract: The distillation process in the general sense is a
relatively simple technique from the standpoint of its principles.
When dedicating distillation to water treatment, and specifically
to producing fresh water from sea, ocean, and/or briny waters, it
is interesting to note that distillation has no limitations or
domains of applicability regarding the nature or type of the
feedstock water.
This is not the case however for other techniques that are
technologically quite complex, necessitate bigger capital investments
and are limited in their usability. In a previous paper we have
explored some of the effects of temperature on yield. In this paper,
we continue building onto that knowledge base and focus on the
effects of several additional engineering and design variables on
productivity.
Abstract: Determining how many virtual machines a Linux host
can run is a challenge. One of the tough tasks is to find the
balance among performance, density, and usability. The KVM
hypervisor has become the most popular open-source full
virtualization solution. It supports several ways of running
guests with more memory than the host really has. Because of the
large differences between minimum and maximum guest memory
requirements, this paper presents initial results on same-page
merging, ballooning, and live migration techniques that aim at
optimum memory usage on a KVM-based cloud platform. Given the
design of the initial experiments, the resulting data is a useful
reference for system administrators. The results of these
experiments show that each method offers a different reliability
tradeoff.
Abstract: Since the presentation of the backpropagation algorithm, a vast variety of improvements to the technique for training feed-forward neural networks have been proposed. This article focuses on two classes of acceleration techniques. One is known as Local Adaptive Techniques, which are based on weight-specific information only, such as the temporal behavior of the partial derivative of the current weight. The other, known as Dynamic Adaptation Methods, dynamically adapts the momentum factor, α, and the learning rate, η, with respect to the iteration number or the gradient. Some of the most popular learning algorithms are described. These techniques have been implemented and tested on several problems and measured in terms of gradient and error function evaluations and percentage of success. Numerical evidence shows that these techniques improve the convergence of the backpropagation algorithm.
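The two ingredients discussed, a momentum factor α and a dynamically adapted learning rate η, can be illustrated on a one-dimensional toy error function. The bold-driver-style η adaptation below is one simple representative of such schemes, not a specific algorithm from the article:

```python
def error(w):
    """Toy error surface E(w) = (w - 3)^2 with minimum at w = 3."""
    return (w - 3.0) ** 2

def grad(w):
    """Gradient of the toy error."""
    return 2.0 * (w - 3.0)

# Gradient descent with momentum (alpha) and a bold-driver style
# learning-rate (eta) adaptation: grow eta while the error falls,
# cut it when a step overshoots. Values are illustrative.
w, eta, alpha, dw = -4.0, 0.1, 0.5, 0.0
prev_e = error(w)
for _ in range(100):
    dw = -eta * grad(w) + alpha * dw      # momentum-smoothed step
    w += dw
    e = error(w)
    eta *= 1.05 if e < prev_e else 0.5    # adapt eta to progress
    prev_e = e
```

In a real network the same update is applied per weight, with the partial derivative of the error replacing the scalar gradient; local adaptive techniques additionally keep a separate η for each weight.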
Abstract: In this paper a comparison of reflector antenna
analysis techniques based on the wave and ray nature of optics is
presented for an offset reflector antenna using the GRASP (General
Reflector antenna Analysis Software Package) software. The results
obtained using PO (Physical Optics), PTD (Physical theory of
Diffraction), and GTD (Geometrical Theory of Diffraction) are
compared. The validity of the PO and GTD techniques in regions
around the antenna, the caustic behavior of GTD in the main beam,
and the deviation of GTD for near-in sidelobes of the radiation
pattern are discussed.
The comparison for far-out sidelobes predicted by PO, PO + PTD
and GTD is described. The effect of direct radiation from the feed,
which informs feed selection for the system, is addressed.
Abstract: We present here the results of a comparative study of
some techniques, available in the literature, related to the
relevance feedback mechanism in the case of short-term learning.
Only one method among those considered here belongs to the data
mining field, namely the K-nearest neighbors (KNN) algorithm,
while the rest of the methods are related purely to the
information retrieval field and fall under three major axes:
query shifting, feature weighting, and optimization of the
parameters of the similarity metric. As a contribution, and in
addition to the comparative purpose, we propose a new version of
the KNN algorithm, referred to as incremental KNN, which is
distinct from the original version in that, besides the influence
of the seeds, the rating of the current target image is also
influenced by the images already rated. The results presented
here have been obtained from experiments conducted on the Wang
database for one iteration, utilizing color moments on the RGB
space. This compact descriptor, color moments, is adequate for
the efficiency required by interactive systems. The results
obtained allow us to claim that the proposed algorithm yields
good results; it even outperforms a wide range of techniques
available in the literature.
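The color-moments descriptor mentioned above is simple to state: mean, standard deviation, and (signed) skewness per channel, giving a 9-dimensional vector for RGB. A sketch on a toy 4-pixel image:

```python
def channel_moments(values):
    """Mean, standard deviation and signed-cube-root skewness of
    one channel's pixel values."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / n
    std = var ** 0.5
    skew_core = sum((v - mean) ** 3 for v in values) / n
    # signed cube root keeps the sign of the skewness
    skew = (abs(skew_core) ** (1 / 3)) * (1 if skew_core >= 0 else -1)
    return [mean, std, skew]

def color_moments(pixels):
    """9-D descriptor: three moments for each of the R, G, B channels."""
    feats = []
    for c in range(3):
        feats.extend(channel_moments([p[c] for p in pixels]))
    return feats

# Toy 2x2 RGB image, flattened to a pixel list (illustrative values)
img = [(10, 200, 30), (12, 210, 32), (11, 190, 28), (13, 220, 34)]
desc = color_moments(img)
```

The compactness (9 numbers per image) is what keeps distance computations cheap enough for the interactive feedback loop.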
Abstract: In this contribution an innovative platform is
presented that integrates intelligent agents into legacy
e-learning environments. It introduces the design and development
of a scalable and interoperable integration platform supporting
various assessment agents for e-learning environments. The agents
are implemented in order to provide intelligent assessment
services based on computational intelligence techniques such as
Bayesian Networks and Genetic Algorithms. The utilization of new
and emerging technologies like web services allows the provided
services to be integrated into any web-based legacy e-learning
environment.
Abstract: Clustering techniques have received attention in many areas, including engineering, medicine, biology, and data mining. The purpose of clustering is to group together data points which are close to one another. The K-means algorithm is one of the most widely used techniques for clustering. However, K-means has two shortcomings: it depends on the initial state and converges to local optima, and globally optimal solutions of large problems cannot be found with a reasonable amount of computational effort. Many studies have been carried out on clustering in order to overcome the local optima problem. This paper presents an efficient hybrid evolutionary optimization algorithm based on combining Particle Swarm Optimization (PSO) and Ant Colony Optimization (ACO), called PSO-ACO, for optimally clustering N objects into K clusters. The new PSO-ACO algorithm is tested on several data sets, and its performance is compared with those of ACO, PSO, and K-means clustering. The simulation results show that the proposed evolutionary optimization algorithm is robust and suitable for handling data clustering.
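For context, the baseline K-means algorithm, whose dependence on the randomly chosen initial centroids is the first shortcoming noted above, can be sketched as follows; the data set and seed are illustrative:

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Plain 2-D K-means. The result depends on the randomly sampled
    initial centroids, the weakness the hybrid PSO-ACO approach targets."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:  # assign each point to its nearest centroid
            j = min(range(k),
                    key=lambda c: (p[0] - centroids[c][0]) ** 2
                                  + (p[1] - centroids[c][1]) ** 2)
            clusters[j].append(p)
        for j, cl in enumerate(clusters):  # recompute centroids
            if cl:
                centroids[j] = (sum(p[0] for p in cl) / len(cl),
                                sum(p[1] for p in cl) / len(cl))
    return centroids, clusters

# Two well-separated blobs
pts = [(0, 0), (1, 0), (0, 1), (10, 10), (11, 10), (10, 11)]
centroids, clusters = kmeans(pts, 2)
```

In the hybrid scheme, candidate centroid sets like these play the role of particle positions (PSO) refined by pheromone-guided assignment (ACO), rather than being updated by the plain mean step alone.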
Abstract: Software reuse can be considered the most realistic
and promising way to improve software engineering productivity
and quality. Automated assistance for software reuse involves the
representation, classification, retrieval, and adaptation of
components. The representation and retrieval of components are
important to software reuse in Component-Based Software
Development (CBSD). However, current industrial component models
mainly focus on implementation techniques and ignore the semantic
information about components, so it is difficult to retrieve the
components that satisfy users' requirements. This paper presents
a method of business component retrieval based on specification
matching to support the software reuse of enterprise information
systems. First, a reuse-oriented business component model is
proposed. In our model, the business data type is represented as
a sign data type based on XML, which can express the variable
business data types that describe the variety of business
operations. Based on this model, we propose specification match
relationships at two levels: the business operation level and the
business component level. At the business operation level, we use
input business data types, output business data types, and the
taxonomy of business operations to evaluate the similarity
between business operations. At the business component level, we
propose five specification matches between business components.
To retrieve reusable business components, we propose a measure of
similarity degrees to calculate the similarities between business
components. Finally, an SQL-like business component retrieval
command is proposed to help users retrieve approximate business
components from the component repository.
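One plausible reading of operation-level similarity, hedged as an illustration only (the paper's XML sign-data-type matching is richer), is a weighted set overlap of input and output business data types:

```python
# Hypothetical business-operation signatures: sets of input/output
# business data type names. Names and weights are illustrative, not
# the paper's sign data types or similarity measure.
def type_similarity(types_a, types_b):
    """Jaccard similarity between two sets of business data types."""
    a, b = set(types_a), set(types_b)
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def operation_similarity(op_a, op_b, w_in=0.5, w_out=0.5):
    """Weighted match of the input and output types of two operations."""
    return (w_in * type_similarity(op_a["in"], op_b["in"])
            + w_out * type_similarity(op_a["out"], op_b["out"]))

query = {"in": {"Order", "Customer"}, "out": {"Invoice"}}
candidate = {"in": {"Order", "Customer", "Discount"}, "out": {"Invoice"}}
score = operation_similarity(query, candidate)
```

Component-level matching would then aggregate such operation scores (and the operation taxonomy) across the operations a business component exposes, which is where the paper's five specification matches come in.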
Abstract: This research seeks to investigate the frequency and
profitability of index arbitrage opportunities involving the SET50
futures, SET50 component stocks, and the ThaiDEX SET50 ETF
(ticker symbol: TDEX). In particular, the frequency and profit of
arbitrage are measured in the following three arbitrage tests: (1)
SET50 futures vs. ThaiDEX SET50 ETF, (2) SET50 futures vs.
SET50 component stocks, and (3) ThaiDEX SET50 ETF vs. SET50
component stocks. For tests (2) and (3), the problems involve
conic optimization and quadratic programming as subproblems. This
research is the first to apply conic optimization and quadratic
programming techniques in the context of index arbitrage and the
first to investigate such index arbitrage in the Thai equity and
derivatives markets. Thus, the contribution of this study is
twofold. First, its results help in understanding the contribution
of derivative securities to the efficiency of the Thai markets. Second,
the methodology employed in this study can be applied to other
geographical markets, with minor adjustments.
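As background, the simplest futures-vs-index arbitrage test can be expressed with the cost-of-carry fair value; the paper's tests (2) and (3) additionally require conic/quadratic optimization over the component basket, which this sketch omits. All figures are hypothetical:

```python
import math

def futures_fair_value(spot, r, q, t):
    """Cost-of-carry fair value: F = S * exp((r - q) * t),
    with risk-free rate r, dividend yield q, time to expiry t (years)."""
    return spot * math.exp((r - q) * t)

def arbitrage_signal(spot, futures, r, q, t, cost):
    """'long'  = buy index, sell futures (cash-and-carry);
    'short' = the reverse; None = mispricing within transaction costs.
    Illustrative only; the paper's tests also handle the ETF leg."""
    fair = futures_fair_value(spot, r, q, t)
    if futures - fair > cost:
        return "long"
    if fair - futures > cost:
        return "short"
    return None

sig = arbitrage_signal(spot=900.0, futures=915.0,
                       r=0.03, q=0.02, t=0.25, cost=2.0)
```

Counting how often such signals fire, and the profit net of costs when they do, gives the frequency and profitability measures the study reports for each of the three instrument pairs.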