Abstract: In this paper, the implementation of a rule-based
intuitive reasoner is presented. The implementation included two
parts: the rule induction module and the intuitive reasoner. A large
weather database was acquired as the data source. Twelve weather
variables from those data were chosen as the “target variables”
whose values were predicted by the intuitive reasoner. A “complex”
situation was simulated by making only subsets of the data available
to the rule induction module. As a result, the rules induced were
based on incomplete information with variable levels of certainty.
The certainty level was modeled by a metric called “Strength of
Belief”, which was assigned to each rule or datum as ancillary
information about the confidence in its accuracy. Two techniques
were employed to induce rules from the data subsets: decision tree
and multi-polynomial regression, respectively for the discrete and the
continuous type of target variables. The intuitive reasoner was tested
for its ability to use the induced rules to predict the classes of the
discrete target variables and the values of the continuous target
variables. The intuitive reasoner implemented two types of
reasoning, fast and broad, where, by analogy to human thought, the
former corresponds to quick decision making and the latter to
deeper contemplation. For reference, a weather data analysis
approach that had been applied to similar tasks was adopted to
analyze the complete database and create predictive models for the
same 12
target variables. The values predicted by the intuitive reasoner and
the reference approach were compared with actual data. The intuitive
reasoner reached near-100% accuracy for two continuous target
variables. For the discrete target variables, the intuitive reasoner
predicted at least 70% as accurately as the reference reasoner. Since
the intuitive reasoner operated on rules derived from only about 10%
of the total data, it demonstrated a potential advantage over
conventional methods in dealing with sparse data sets.
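As a minimal sketch of the discrete-variable rule-induction step described above (the data set, features, and class boundary below are invented, and leaf class purity is only a rough stand-in for the paper's "Strength of Belief" metric), a decision tree induced from roughly 10% of a synthetic data set:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for the weather data: two features, one discrete target.
rng = np.random.default_rng(42)
X = rng.uniform(-1, 1, size=(5000, 2))
y = (X[:, 0] + 0.3 * X[:, 1] > 0).astype(int)   # hypothetical class boundary

# Simulate the "complex" situation: induce rules from only ~10% of the data.
subset = rng.choice(len(X), size=500, replace=False)
tree = DecisionTreeClassifier(max_depth=4, random_state=0)
tree.fit(X[subset], y[subset])

# Leaf class purity serves here as a rough analogue of "Strength of Belief":
belief = tree.predict_proba(X).max(axis=1)       # confidence per prediction

acc = (tree.predict(X) == y).mean()              # accuracy on the full set
print(round(acc, 2), round(belief.mean(), 2))
```

Even with 90% of the data withheld, the induced rules generalize well on this simple synthetic task, which is the effect the abstract reports for sparse data.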
Abstract: In this paper, the variation of the spot price and the
total profits of generating companies through wholesale electricity
trading are discussed with and without the Central Generating
Stations (CGS) share, and seasonal variations are also considered.
It demonstrates how proper analysis of generators' efficiencies and
capabilities, the types of generators owned, fuel costs,
transmission losses, and settling price variation, using the
solutions of Optimal Power Flow (OPF), can allow companies to
maximize overall revenue. It illustrates how solutions of OPF can
be used to maximize companies' revenue under different scenarios.
The work is also extended to the computation of Available Transfer
Capability (ATC), which is very important for transmission system
security and market forecasting. From these results it is observed
how crucial it is for companies to plan their daily operations, and
the approach is certainly useful in an online environment of a
deregulated power system. In this paper, the above tasks are
demonstrated on the 124-bus real-life Indian utility power system
of the Andhra Pradesh State Grid, and the results are presented and
analyzed.
Abstract: Knowledge is attributed to humans, whose problem-solving
behavior is subjective and complex. In today's knowledge economy,
the need to manage knowledge produced by a community of actors
cannot be overemphasized. This is due to the fact that actors
possess some level of tacit knowledge, which is generally
difficult to articulate. Problem-solving requires searching and
sharing of knowledge among a group of actors in a particular
context. Knowledge expressed within the context of a problem
resolution must be capitalized for future reuse. In this paper, an
approach that permits dynamic capitalization of relevant and
reliable actors' knowledge in solving decision problems following
the Economic Intelligence process is proposed. A knowledge annotation method and
temporal attributes are used for handling the complexity in the
communication among actors and in contextualizing expressed
knowledge. A prototype is built to demonstrate the functionalities of
a collaborative Knowledge Management system based on this
approach. It was tested with sample cases, and the results showed
that dynamic capitalization leads to knowledge validation, hence
increasing the reliability of captured knowledge for reuse. The system
can be adapted to various domains.
Abstract: A new design of a planar passive T-micromixer with fin-shaped baffles in the mixing channel is presented. The mixing efficiency and the level of pressure loss in the channel have been investigated by numerical simulations in the Reynolds number (Re) range of 1 to 50. A mixing index (Mi) has been defined to quantify the mixing efficiency; it exceeds 85% at both ends of the Re range, which demonstrates that the micromixer can enhance mixing through the mechanisms of diffusion (lower Re) and convection (higher Re). Three geometric dimensions (baffle radius, baffle pitch, and channel height) define the design parameters, and the mixing index and pressure loss are the performance parameters used to optimize the micromixer geometry with a multi-criteria optimization method. The Pareto front of designs with the optimum trade-offs, maximum mixing index with minimum pressure loss, is obtained. Experiments for qualitative and quantitative validation have been carried out.
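The abstract does not give its exact definition of Mi, but a common variance-based normalisation for micromixers (an assumption here, not the paper's formula) can be sketched as:

```python
import numpy as np

def mixing_index(c, c_mean=0.5):
    """Mi = 1 - sigma/sigma_max over a cross-section's concentration field.

    sigma is the RMS deviation of concentration from the mean;
    sigma_max is its value for two fully segregated streams.
    """
    sigma = np.sqrt(np.mean((c - c_mean) ** 2))
    sigma_max = np.sqrt(c_mean * (1.0 - c_mean))   # fully segregated limit
    return 1.0 - sigma / sigma_max

unmixed = np.array([0.0] * 50 + [1.0] * 50)   # two segregated streams
mixed = np.full(100, 0.5)                     # perfectly homogeneous
print(mixing_index(unmixed), mixing_index(mixed))  # 0.0 and 1.0
```

With this normalisation, Mi runs from 0 (no mixing) to 1 (complete mixing), so an Mi above 85% corresponds to a nearly homogeneous concentration field.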
Abstract: The struggle between modern and postmodern
understanding is also displayed in terms of the superiorities of
quantitative and qualitative methods to each other which are
evaluated within the scope of these understandings. By assuming
that quantitative research (modern) is able to account for
structure while qualitative research (postmodern) explains process,
these methods are turned into a means for worldviews specific to a
period. In fact, process is not a functioning
independent of structure. In addition to this issue, the ability of
quantitative methods to provide scientific knowledge is also
controversial so long as they exclude the dialectical method. For this
reason, the critiques charged against modernism in terms of
quantitative methods are, in a sense, legitimate. Nevertheless, the
main issue is the parameters within which the postmodernist
critique tries to legitimize itself, and whether these parameters
represent a point of view enabling democratic solutions.
In this respect, scientific knowledge covered in the Turkish media,
as a means through which ordinary people access scientific
knowledge, will be evaluated by means of content analysis within a
new conception of objectivity.
Abstract: This paper includes a review of three physics simulation packages that can provide researchers with a virtual ground for modeling, implementing, and simulating complex models, as well as testing their control methods with less cost and development time. The inverted pendulum model was used as a test bed for comparing ODE, DANCE, and Webots, while linear state feedback was used to control its behavior. The packages were compared with respect to model creation, solving systems of differential equations, data storage, setting system variables, controlling the experiment, and ease of use. The purpose of this paper is to give an overview of our experience with these environments and to demonstrate some of the benefits and drawbacks involved in practice for each package.
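A minimal sketch of the control method named above, independent of any of the three packages (the simplified pendulum model, parameters, and pole-placement gains below are assumptions, not taken from the paper): a linearized inverted pendulum stabilized by linear state feedback.

```python
import numpy as np

# Linearized inverted pendulum about the upright position (hypothetical
# simplified model, ignoring the cart):  theta_ddot = (g/l)*theta + u
g, l = 9.81, 1.0
A = np.array([[0.0, 1.0], [g / l, 0.0]])
B = np.array([[0.0], [1.0]])

# State-feedback gains placing both closed-loop poles at s = -2:
# char. poly of A - B K is s^2 + k2*s + (k1 - g/l) = s^2 + 4s + 4
K = np.array([[4.0 + g / l, 4.0]])

# Forward-Euler simulation of the closed loop x' = (A - B K) x
dt, steps = 1e-3, 5000
x = np.array([[0.1], [0.0]])           # start 0.1 rad from upright
for _ in range(steps):
    u = -K @ x                         # linear state-feedback control law
    x = x + dt * (A @ x + B @ u)

print(abs(x[0, 0]))                    # angle driven close to zero
```

The open-loop system is unstable (poles at ±√(g/l)); the feedback gains move both poles into the left half-plane, so the angle decays to the upright equilibrium.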
Abstract: The ElectroEncephaloGram (EEG) is useful for
clinical diagnosis and biomedical research. EEG signals often
contain strong ElectroOculoGram (EOG) artifacts produced
by eye movements and eye blinks especially in EEG recorded
from frontal channels. These artifacts obscure the underlying
brain activity, making its visual or automated inspection
difficult. The goal of ocular artifact removal is to remove
ocular artifacts from the recorded EEG, leaving the underlying
background signals due to brain activity. In recent times,
Independent Component Analysis (ICA) algorithms have
demonstrated superior potential in obtaining the least
dependent source components. In this paper, the independent
components are obtained by using the JADE algorithm (best
separating algorithm) and are classified into either artifact
component or neural component. A neural network is used for
the classification of the obtained independent components.
The neural network requires input features that exactly represent
the true character of the input signals so that the neural
network could classify the signals based on those key
characters that differentiate between various signals. In this
work, Auto Regressive (AR) coefficients are used as the input
features for classification. Two neural network approaches
are used to learn classification rules from EEG data. First, a
Polynomial Neural Network (PNN) trained by the GMDH (Group
Method of Data Handling) algorithm is used; second, a
feed-forward neural network classifier trained by a standard
back-propagation algorithm is used. The results show that
JADE-FNN performs better than JADE-PNN.
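A hedged sketch of the feature-extraction step only (the JADE separation stage is not shown; the signal, AR order, and coefficients below are synthetic assumptions): estimating AR coefficients via the Yule-Walker equations.

```python
import numpy as np

def ar_coefficients(signal, order):
    """Estimate AR(p) coefficients via the Yule-Walker equations."""
    x = signal - signal.mean()
    n = len(x)
    # Biased autocorrelation estimates r[0..order]
    r = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(order + 1)])
    # Toeplitz system R a = r[1:], solved for the coefficients a_1..a_p
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    return np.linalg.solve(R, r[1:])

# Synthetic AR(2) process: x_t = 0.6 x_{t-1} - 0.3 x_{t-2} + noise
rng = np.random.default_rng(0)
x = np.zeros(5000)
for t in range(2, len(x)):
    x[t] = 0.6 * x[t - 1] - 0.3 * x[t - 2] + rng.standard_normal()

print(ar_coefficients(x, 2))   # close to [0.6, -0.3]
```

The recovered coefficients form a compact feature vector for each independent component, which is what the classifiers above consume.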
Abstract: This paper proposes a new concept for developing a
collaborative design system. The conceptual framework involves
applying a simulation of supply chain management to collaborative
design, called the 'SCM-Based Design Tool'. The system is
developed particularly to support design activities and to
integrate all facilities together. The system aims to increase
design productivity and creativity, so that designers and
customers can collaborate through the system from the conceptual
design stage. JAG, a Jewelry Art Generator based on artificial
intelligence techniques, is integrated into the system. Moreover,
the proposed system can support users as a decision tool and with
data propagation. The system covers the process from raw material
supply to product delivery. Data management and information
sharing are visually presented to designers and customers via the
user interface. The system is developed in a Web-assisted product
development environment. The prototype system is presented for the
Thai jewelry industry as a demonstration, but it is applicable to
other industries.
Abstract: The RF performance of SOI CMOS devices has attracted a
significant amount of interest recently. In order to improve RF
parameters, strained Si on relaxed Si0.8Ge0.2 is investigated as a
replacement for Si technology. The enhancement of carrier mobility
associated with strain engineering makes strained Si a promising
candidate for improving the RF performance of CMOS technology.
From the simulation, the cut-off frequency is estimated to be 224
GHz, whereas in SOI at a similar bias it is about 188 GHz.
Therefore, strained Si exhibits a 19% improvement in cut-off
frequency over its Si counterpart. In this paper, the Ion/Ioff
ratio is studied as one of the key parameters in logic and digital
applications. Strained Si/SiGe demonstrates a better Ion/Ioff
characteristic than SOI at a similar channel length of 100 nm.
Other important analog figures of merit, such as the Early Voltage
(VEA) and the transconductance-to-drain-current ratio (gm/Ids),
are also studied; they indicate the efficiency of the devices in
converting dc power into ac frequency.
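The 19% figure follows directly from the two simulated cut-off frequencies quoted above:

```python
# Relative improvement in cut-off frequency, from the values in the abstract.
f_strained, f_soi = 224.0, 188.0        # GHz, simulated values
improvement = (f_strained - f_soi) / f_soi
print(f"{improvement:.1%}")             # prints 19.1%
```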
Abstract: Ethanol is generally used as a therapeutic reagent against hepatocellular carcinoma (HCC or hepatoma) worldwide, as it can induce hepatocellular carcinoma cell apoptosis at low concentration through a multifactorial process regulated by several unknown proteins. This paper provides a simple and accessible proteomic strategy for exploring differentially expressed proteins in the apoptotic pathway. The appropriate concentrations of ethanol required to induce HepG2 cell apoptosis were first assessed by MTT assay, Giemsa and fluorescence staining. Next, the central proteins involved in the apoptosis pathway were determined using 2D-PAGE, SDS-PAGE, and bio-software analysis. Finally, the downregulation of two proteins, AFP and survivin, was determined by immunocytochemistry and reverse transcriptase PCR (RT-PCR) technology. The simple, useful method demonstrated here provides a new approach to proteomic analysis of key bio-regulating processes including proliferation, differentiation, apoptosis, immunity, and metastasis.
Abstract: We have measured the pressure drop and convective heat
transfer coefficient of water-based Al (25 nm), Al2O3 (30 nm), and
CuO (50 nm) nanofluids flowing through a uniformly heated circular
tube in the fully developed laminar flow regime. The experimental
friction-factor data for the nanofluids show good agreement with
the analytical prediction from the Darcy equation for single-phase
flow. After reducing the experimental results to the form of
Reynolds, Rayleigh, and Nusselt numbers, the results show how the
local Nusselt number and temperature are distributed along the
non-dimensional axial distance from the tube entry. The study
established that the nanofluids behave as Newtonian fluids, on the
basis of the linear relationship between shear stress and strain
rate, for the three nanofluid series (Al, Al2O3, and CuO in water)
at concentrations ranging from 0.25 to 2.5 vol%. In addition,
properties of the nanofluids, namely viscosity, specific heat, and
density, were measured in order to verify the property equations
developed by researchers in this area; the difference between the
experimental equations and the measurements did not exceed 3.5%.
The study also demonstrated that the increases in heat transfer
coefficient for the three nanofluids (Al, Al2O3, and CuO in water)
are 45%, 32%, and 25%, respectively, with insulation, and 36%,
23%, and 19% without insulation, proving that using insulation is
better than not using it. Three types of nanoparticles were used,
one metallic and two oxides, to determine which gives the best
increase in heat transfer.
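The single-phase baseline against which the friction-factor data are compared is the Darcy relation for fully developed laminar pipe flow, f = 64/Re; the fluid properties and geometry below are illustrative water-like values, not the paper's measurements.

```python
# Darcy-Weisbach pressure drop for fully developed laminar pipe flow.
def darcy_pressure_drop(rho, mu, v, D, L):
    """Return (pressure drop [Pa], Reynolds number, Darcy friction factor)."""
    Re = rho * v * D / mu            # Reynolds number
    f = 64.0 / Re                    # laminar Darcy friction factor
    dp = f * (L / D) * 0.5 * rho * v**2
    return dp, Re, f

# Illustrative water-like values: rho [kg/m^3], mu [Pa.s], v [m/s], D, L [m]
dp, Re, f = darcy_pressure_drop(rho=998.0, mu=1.0e-3, v=0.05, D=4e-3, L=1.0)
print(round(Re), round(f, 4), round(dp, 2))
```

For laminar flow this reduces to the Hagen-Poiseuille result dp = 32 mu L v / D^2, which is the analytical prediction the abstract's data agree with.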
Abstract: The problem of robust stability and robust stabilization for a class of discrete-time uncertain systems with time delay is investigated. Based on the Tchebychev inequality and by constructing a new augmented Lyapunov function, some improved sufficient conditions ensuring exponential stability and stabilization are established. These conditions are expressed in the form of linear matrix inequalities (LMIs), whose feasibility can be easily checked using the Matlab LMI Toolbox. Compared with some previous results derived in the literature, the newly obtained criteria are less conservative. Two numerical examples are provided to demonstrate the improvement and effectiveness of the proposed method.
Abstract: Mathematical programming has been applied to various
problems. For many real problems, the assumption that the
parameters involved are deterministically known is often
unjustified. In such cases, the data contain uncertainty and are
thus represented as random variables, since they represent
information about the
future. Decision-making under uncertainty involves potential risk.
Stochastic programming is a commonly used method for optimization
under uncertainty. A stochastic programming problem with recourse
is referred to as a two-stage stochastic problem. In this study, we
consider a stochastic programming problem with simple integer
recourse in which the value of the recourse variable is restricted to a
multiple of a nonnegative integer. The algorithm of a dynamic slope
scaling procedure for solving this problem is developed by using a
property of the expected recourse function. Numerical experiments
demonstrate that the proposed algorithm is quite efficient. The
stochastic programming model defined in this paper is quite useful
for a variety of design and operational problems.
Abstract: Many existing studies use Markov decision
processes (MDPs) to model optimal route choice in
stochastic, time-varying networks. However, taking many
variable traffic data and transforming them into optimal route
decisions is a computational challenge when employing MDPs in
real transportation networks. In this paper we model finite
horizon MDPs using directed hypergraphs. It is shown that the
problem of route choice in stochastic, time-varying networks
can be formulated as a minimum cost hyperpath problem, and
it also can be solved in linear time. We finally demonstrate the
significant computational advantages of the introduced
methods.
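The finite-horizon MDP underlying the hyperpath formulation can be solved by backward induction over the horizon; the small network, costs, and transition probabilities below are invented for illustration and are not the paper's data.

```python
# Backward induction for a finite-horizon route-choice MDP.
# transitions[s] -> list of (action_cost, {next_state: probability})
transitions = {
    "A": [(1.0, {"B": 0.8, "A": 0.2}),   # cheap but may fail to advance
          (4.0, {"D": 1.0})],            # expensive direct route
    "B": [(1.0, {"D": 0.9, "B": 0.1})],
    "D": [],                             # destination (absorbing, cost 0)
}
T = 30                                   # horizon length
value = {s: 0.0 for s in transitions}    # terminal cost-to-go
for _ in range(T):                       # sweep backwards through time
    new = {}
    for s, acts in transitions.items():
        if not acts:                     # destination costs nothing
            new[s] = 0.0
            continue
        # Bellman minimization over the available actions at s
        new[s] = min(cost + sum(p * value[ns] for ns, p in dist.items())
                     for cost, dist in acts)
    value = new
print(round(value["A"], 4))              # expected cost of the best policy
```

Each Bellman sweep touches every (state, action, successor) triple once, which is the per-stage linear-time behavior that the hyperpath view makes explicit.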
Abstract: Injection molding is a very complicated process to
monitor and control. With its high complexity and many process
parameters, the optimization of these systems is a very challenging
problem. To meet the requirements and costs demanded by the
market, there has been an intense development and research with the
aim of keeping the process under control. The main topic of this
paper is to outline the latest advances in the algorithms needed
for plastic injection process control and monitoring, together
with a flexible data acquisition system that allows rapid
implementation of complex algorithms, so that their performance
can be assessed and integrated into the quality control process.
Finally, to demonstrate the performance achieved by this
combination, a real use case is presented.
Abstract: A novel PDE solver using the multidimensional wave
digital filtering (MDWDF) technique to achieve the solution of a 2D
seismic wave system is presented. In essence, the continuous physical
system served by a linear Kirchhoff circuit is transformed to an
equivalent discrete dynamic system implemented by a MD wave
digital filtering (MDWDF) circuit. This amounts to numerically
approximating the differential equations used to describe elements
of an MD passive electronic circuit by grid-based difference
equations implemented by the so-called state quantities within the
passive MDWDF circuit. The digital model can thus track the wave
field on a dense 3D grid of points. Details of how to transform the continuous
system into a desired discrete passive system are addressed. In
addition, initial and boundary conditions are properly embedded into
the MDWDF circuit in terms of state quantities. Graphic results have
clearly demonstrated some physical effects of seismic wave (P-wave
and S-wave) propagation, including radiation, reflection, and
refraction from and across the hard boundaries. Comparison between
the MDWDF technique and the finite difference time domain (FDTD)
approach is also made in terms of the computational efficiency.
Abstract: Enzymatic saccharification of biomass for reducing
sugar production is one of the crucial processes in biofuel production
through biochemical conversion. In this study, enzymatic
saccharification of dilute potassium hydroxide (KOH) pre-treated
Tetraselmis suecica biomass was carried out by using cellulase
enzyme obtained from Trichoderma longibrachiatum. Initially, the
pre-treatment conditions were optimised by changing alkali reagent
concentration, reaction retention time, and temperature. The T.
suecica biomass after pre-treatment was also characterized using
Fourier Transform Infrared Spectroscopy and Scanning Electron
Microscopy. These analyses revealed that functional groups such
as acetyl and hydroxyl groups, as well as the structure and
surface of the T. suecica biomass, were changed by pre-treatment,
which is favourable for the enzymatic saccharification process. Comparison of enzymatic
saccharification of untreated and pre-treated microalgal biomass
indicated that a higher level of reducing sugar can be obtained
from pre-treated T. suecica. Enzymatic saccharification of
pre-treated T. suecica biomass was optimised by changing the
temperature, pH, and enzyme-to-solid ratio ([E]/[S]). The highest
conversion of carbohydrate into reducing sugar, 95%, corresponding
to a reducing sugar yield of 20 wt% from pre-treated T. suecica,
was obtained at a temperature of 40°C, pH 4.5, and an [E]/[S] of
0.1 after 72 h of incubation. Hydrolysate obtained from enzymatic
saccharification of pretreated T. suecica biomass was further
fermented into biobutanol using Clostridium saccharoperbutyliticum
as biocatalyst. The results from this study demonstrate a positive
prospect of application of dilute alkaline pre-treatment to enhance
enzymatic saccharification and biobutanol production from
microalgal biomass.
Abstract: This paper studies the mean square exponential synchronization problem for a class of stochastic neutral-type chaotic neural networks with mixed delay. On the basis of Lyapunov stability theory, some sufficient conditions ensuring the mean square exponential synchronization of two identical chaotic neural networks are obtained using stochastic analysis and inequality techniques. These conditions are expressed in the form of linear matrix inequalities (LMIs), whose feasibility can be easily checked using the Matlab LMI Toolbox. The feedback controller used in this paper is more general than those used in the previous literature. One simulation example is presented to demonstrate the effectiveness of the derived results.
Abstract: Every organization is continually subject to new damages and threats, which can result from its operations or the pursuit of its goals. Methods of securing a space and the tools applied have changed widely with the increasing application and development of information technology (IT). From this viewpoint, information security management systems evolved to build on experienced methods rather than reiterate them. In general, the correct response in information security management systems requires correct decision making, which in turn requires the comprehensive effort of managers and everyone involved in each plan or decision. Obviously, not all aspects of a task or decision are defined under all decision-making conditions; therefore, the possible or certain risks should be considered when making decisions. This is the subject of risk management, and it can influence decisions. Investigation of different approaches in the field of risk management demonstrates their progress from quantitative to qualitative methods with a process approach.
Abstract: Voltage flicker problems have long existed in several
of the distribution areas served by the Taiwan Power Company.
Past research results indicate that the estimated ΔV10 value
based on the conventional method is significantly smaller than
the surveyed value. This paper studies the relationship between
voltage flicker problems and harmonic power variation for power
systems with electric arc furnaces. The investigation discusses
the effect of harmonic power fluctuation on the flicker estimate
value. The methods of field measurement, statistics, and
simulation are used. The survey results demonstrate that the ΔV10
estimate must account for the effect of harmonic power variation.