Abstract: The join dependency provides the basis for obtaining
lossless join decomposition in a classical relational schema. The
existence of a join dependency ensures that the tables
represent the correct data after being joined. Since classical
relational databases cannot handle imprecise data, they were
extended to fuzzy relational databases so that uncertain, ambiguous,
imprecise and partially known information can also be stored in
databases in a formal way. However, like classical databases,
fuzzy relational databases also undergo decomposition during
normalization, so the issue of joining the decomposed fuzzy relations
remains open. The present paper emphasizes this
issue. In this paper we define fuzzy join dependency in the
framework of type-1 and type-2 fuzzy
relational databases using the concept of fuzzy equality which is
defined using fuzzy functions. We use the fuzzy equi-join operator
for computing the fuzzy equality of two attribute values. We also
discuss the dependency preservation property on execution of this
fuzzy equi-join and derive the necessary condition for the fuzzy
functional dependencies to be preserved on joining the decomposed
fuzzy relations. We also derive the conditions for fuzzy join
dependency to exist in the context of both type-1 and type-2 fuzzy
relational databases. We find that, unlike in classical relational
databases, even the existence of a trivial join dependency does not
ensure lossless join decomposition in type-2 fuzzy relational
databases. Finally, we derive the conditions for the fuzzy equality to
be nonzero and for an attribute to qualify as a fuzzy key.
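The fuzzy equi-join described above can be illustrated with a small sketch. The linear resemblance measure and the 0.5 acceptance threshold below are assumptions for illustration only; the paper defines fuzzy equality via fuzzy functions, which is not reproduced here.

```python
def fuzzy_eq(a, b, d=10.0):
    """Illustrative fuzzy equality of two numeric attribute values.

    Returns a degree in [0, 1]; 1 means crisp equality. This linear
    resemblance form is an assumption, not the paper's
    fuzzy-function-based definition.
    """
    return max(0.0, 1.0 - abs(a - b) / d)


def fuzzy_equi_join(r1, r2, attr, d=10.0, threshold=0.5):
    """Join tuples whose fuzzy equality on `attr` reaches `threshold`."""
    joined = []
    for t1 in r1:
        for t2 in r2:
            mu = fuzzy_eq(t1[attr], t2[attr], d)
            if mu >= threshold:
                joined.append({**t1, **t2, "mu": mu})
    return joined
```

Each joined tuple carries its membership degree `mu`, so a crisp equi-join is recovered when `threshold` is set to 1.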
Abstract: The POD-assisted projective integration method based on the equation-free framework is presented in this paper. The method is essentially based on the slow manifold governing the given system. We have applied two variants, the “on-line” and “off-line” methods, to solve the one-dimensional viscous Burgers equation. For the on-line method, we compute the slow manifold by extracting the POD modes and use them on-the-fly along the projective integration process, without assuming knowledge of the underlying slow manifold. In contrast, for the off-line method the underlying slow manifold must be computed prior to the projective integration process. The projective step is performed by the forward Euler method. Numerical experiments show that, for a nonperiodic system, the on-line method is more efficient than the off-line method. Moreover, the on-line approach is more realistic when applying the POD-assisted projective integration method to arbitrary systems. The critical value of the projective time step, which directly limits the efficiency of both methods, is also shown.
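The projective forward Euler step used above can be sketched in isolation. This is a minimal illustration of the equation-free projective step only; the POD mode extraction, the Burgers discretisation, and the parameter values are all omitted or assumed.

```python
import numpy as np


def projective_forward_euler(f, y0, dt, K, M, n_outer):
    """Projective forward Euler (equation-free style).

    Takes K small inner Euler steps of size dt to damp the fast
    modes, then extrapolates ('projects') over M*dt using the chord
    of the last inner step. K, M and dt are illustrative choices.
    """
    y = np.asarray(y0, dtype=float)
    for _ in range(n_outer):
        y_prev = y
        for _ in range(K):               # inner damping steps
            y_prev, y = y, y + dt * f(y)
        # projective leap of length M*dt using the last-step slope
        y = y + M * (y - y_prev)
    return y
```

On the stiff test system dy/dt = -y the scheme advances (K + M) * dt of simulated time per outer step while evaluating f only K times, which is the efficiency gain the projective step buys.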
Abstract: Freeways are originally designed to provide high
mobility to road users. However, the increase in population and
vehicle numbers has led to increasing congestion around the world.
Daily recurrent congestion substantially reduces the freeway capacity
when it is most needed. Building new highways and expanding the
existing ones is an expensive solution and impractical in many
situations. Intelligent and vision-based techniques can, however, be
efficient tools in monitoring highways and increasing the capacity of
the existing infrastructures. The crucial step for highway monitoring
is vehicle detection. In this paper, we propose one such
technique. The approach is based on artificial neural networks
(ANNs) for vehicle detection and counting. The detection process
uses the freeway video images and starts by automatically extracting
the image background from the successive video frames. Once the
background is identified, subsequent frames are used to detect
moving objects through image subtraction. The result is segmented
using the Sobel operator for edge detection. The ANN is then used in
the detection and counting phase. Applying this technique to the
busiest freeway in Riyadh (King Fahd Road) achieved higher than
98% detection accuracy despite the light intensity changes, the
occlusion situations, and shadows.
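The pipeline described above (background extraction, image subtraction, Sobel segmentation) can be sketched in a few lines. The median background model, the difference threshold, and the plain 3x3 Sobel kernels are illustrative assumptions, and the ANN detection/counting stage is not reproduced.

```python
import numpy as np


def estimate_background(frames):
    """Per-pixel median over a stack of grayscale frames approximates
    the static background (a common simple choice; the paper's exact
    extraction scheme may differ)."""
    return np.median(np.stack(frames), axis=0)


def moving_object_mask(frame, background, thresh=25.0):
    """Background subtraction: pixels differing strongly from the
    background are flagged as moving objects."""
    return np.abs(frame.astype(float) - background) > thresh


def sobel_edges(img):
    """Gradient magnitude with 3x3 Sobel kernels (the edge step that
    precedes the ANN phase)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    img = img.astype(float)
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    for i in range(1, img.shape[0] - 1):
        for j in range(1, img.shape[1] - 1):
            patch = img[i - 1:i + 2, j - 1:j + 2]
            gx[i, j] = np.sum(kx * patch)
            gy[i, j] = np.sum(ky * patch)
    return np.hypot(gx, gy)
```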
Abstract: Many difficulties are faced in the process of learning
computer programming. This paper will propose a system framework
intended to reduce cognitive load in learning programming. The first
section focuses on the process of learning and the
shortcomings of current approaches to learning programming.
Finally, the proposed prototype is presented along with its
justification. In the proposed prototype, the concept
map is used as a visualization metaphor. Concept maps are similar to
the mental schemas in long-term memory and hence can effectively
reduce cognitive load. In addition, other methods, such as the
part-code method, are also proposed in this framework to further
reduce cognitive load.
Abstract: Cancers are typically marked by a number of
differentially expressed genes, which show enormous potential as
biomarkers for a given disease. In recent years, cancer classification
based on the investigation of gene expression profiles derived by
high-throughput microarrays has been widely used. The selection of
discriminative genes is, therefore, an essential preprocessing step in
carcinogenesis studies. In this paper, we have proposed a novel gene
selector using information-theoretic measures for biological
discovery. This multivariate filter is a four-stage framework through
the analyses of feature relevance, feature interdependence, feature
redundancy-dependence and subset ranking, and has been
evaluated on the colon cancer data set. Our experimental results show
that the proposed method outperforms other information-theoretic
filters in terms of both classification error and overall classification
performance.
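The first-stage relevance analysis of such an information-theoretic filter can be sketched with empirical mutual information. The discretisation of expression values and the simple ranking below are illustrative assumptions, not the paper's four-stage framework.

```python
import math
from collections import Counter


def mutual_information(x, y):
    """Empirical mutual information I(X;Y), in nats, between two
    discrete sequences of equal length."""
    n = len(x)
    px, py, pxy = Counter(x), Counter(y), Counter(zip(x, y))
    mi = 0.0
    for (a, b), c in pxy.items():
        p_ab = c / n
        mi += p_ab * math.log(p_ab * n * n / (px[a] * py[b]))
    return mi


def rank_genes(expr, labels):
    """Stage-one relevance filter: rank (already discretised) gene
    columns of the sample-by-gene matrix `expr` by their mutual
    information with the class labels, highest first."""
    scores = [(mutual_information(col, labels), g)
              for g, col in enumerate(zip(*expr))]
    return sorted(scores, reverse=True)
```

A gene whose discretised expression perfectly tracks the class label scores I(X;Y) = H(Y), while an independent gene scores zero, so the ranking separates relevant from irrelevant features.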
Abstract: Characteristics and sonocatalytic activity of zeolite
Y catalysts loaded with TiO2 using impregnation and ion exchange
methods for the degradation of amaranth dye were investigated.
The ion-exchange method was used to encapsulate the TiO2 into
the internal pores of the zeolite while the incorporation of TiO2
mostly on the external surface of zeolite was carried out using the
impregnation method. Different characterization techniques were
used to elucidate the physicochemical properties of the produced
catalysts. The framework of zeolite Y remained virtually
unchanged after the encapsulation of TiO2 while the crystallinity of
zeolite decreased significantly after the incorporation of 15 wt% of
TiO2. The sonocatalytic activity was enhanced by TiO2
incorporation, with maximum degradation efficiencies of 50% and
68% for the encapsulated titanium and the titanium loaded onto the
zeolite, respectively, after 120 min of reaction. Catalyst
characteristics and sonocatalytic behavior were significantly
affected by the preparation method and the location of the TiO2
introduced into the zeolite structure. Behavior in the sonocatalytic
process was successfully correlated with the characteristics of the
catalysts used.
Abstract: The paper presents the optimization problem for the
multi-element synthetic transmit aperture method (MSTA) in
ultrasound imaging applications. The optimal choice of the transmit
aperture size is performed as a trade-off between the lateral
resolution, penetration depth and the frame rate. Results of the
analysis obtained by a developed optimization algorithm are
presented. Maximum penetration depth and the best lateral resolution
at given depths are chosen as the optimization criteria. The
optimization algorithm was tested using synthetic aperture data of
point reflectors simulated by the Field II program for Matlab® for the
case of a 5 MHz, 128-element linear transducer array with 0.48 mm
pitch. The visualization of experimentally obtained
synthetic aperture data of a tissue mimicking phantom and in vitro
measurements of beef liver are also shown. The data were
obtained using the SonixTOUCH Research system equipped with a
linear 4 MHz, 128-element transducer with 0.3 mm element pitch, 0.28
mm element width and 70% fractional bandwidth, excited by a
one-cycle sine burst at the transducer's center frequency.
Abstract: The number of frameworks conceived for e-learning
constantly increases. Unfortunately, the creators of learning materials
and the educational institutions engaged in e-training adopt a
“proprietary” approach, where the developed products (courses,
activities, exercises, etc.) can be exploited only in the framework
where they were conceived; their use in other learning
environments requires a costly adaptation in terms of time and
effort. Each one proposes courses whose organization, contents,
modes of interaction and presentation are the same for all learners;
unfortunately, learners are heterogeneous and are not interested
in the same information, but only in services or documents adapted to
their needs. The current trend for frameworks
conceived for e-learning is the interoperability of learning materials.
Several standards exist (DCMI (Dublin Core Metadata Initiative) [2],
LOM (Learning Objects Metadata) [1], SCORM (Shareable Content
Object Reference Model) [6][7][8], ARIADNE (Alliance of Remote
Instructional Authoring and Distribution Networks for Europe) [9],
CANCORE (Canadian Core Learning Resource Metadata
Application Profiles) [3]); they all converge toward the idea of learning
objects. They are also concerned with the adaptation of learning
materials according to the learners' profiles. This article proposes an
approach for the composition of courses adapted to the various
profiles (knowledge, preferences, objectives) of learners, based on
two ontologies (the domain to teach and an educational ontology) and
on learning objects.
Abstract: More and more home videos are being generated with the ever-growing popularity of digital cameras and camcorders. For many home videos, a photo rendering, whether capturing a moment or a scene within the video, provides a complementary representation to the video. In this paper, a video motion mining framework for creative rendering is presented. The user's capture intent is derived by analyzing video motions, and respective metadata is generated for each capture type. The metadata can be used in a number of applications, such as creating video thumbnails, generating panorama posters, and producing video slideshows.
Abstract: This article outlines conceptualization and
implementation of an intelligent system capable of extracting
knowledge from databases. The use of hybridized features of both
Rough and Fuzzy Set theory lends the developed system flexibility
in dealing with discrete as well as continuous datasets. A raw data set
provided to the system is initially transformed into a computer-legible
format, followed by pruning of the data set. The refined data set is
then processed through various Rough Set operators which enable
discovery of parameter relationships and interdependencies. The
discovered knowledge is automatically transformed into a rule base
expressed in Fuzzy terms. Two exemplary cancer repository datasets
(for Breast and Lung Cancer) have been used to test and implement
the proposed framework.
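The Rough Set operators mentioned above rest on lower and upper approximations, which a short sketch can illustrate. The set-based form below is a generic textbook rendering, not the paper's implementation.

```python
def approximations(partition, target):
    """Rough-set lower and upper approximations of `target` with
    respect to an indiscernibility `partition` of the universe.

    The lower approximation collects blocks fully inside the target
    (certain membership); the upper approximation collects blocks
    that merely intersect it (possible membership). The boundary
    region is their difference.
    """
    target = set(target)
    lower, upper = set(), set()
    for block in partition:
        block = set(block)
        if block <= target:          # block entirely in target
            lower |= block
        if block & target:           # block overlaps target
            upper |= block
    return lower, upper
```

Parameter relationships are then read off from the boundary region: a nonempty `upper - lower` marks the objects the chosen attributes cannot classify with certainty.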
Abstract: The purpose of this study is i) to investigate the driving factors and barriers of the adoption of Information and Communication Technology (ICT) in Halal logistics and ii) to develop an ICT adoption framework for Halal logistics service providers. The Halal LSPs selected for the study currently use ICT service platforms, such as accounting and management systems, for their Halal logistics business. The study categorizes the factors influencing the adoption decision and process by LSPs into four groups: technology-related factors, organizational and environmental factors, Halal-assurance-related factors, and government-related factors. The major contribution of this study is the discovery that technology-related factors (ICT compatibility with Halal requirements) and Halal-assurance-related factors are the most crucial factors among the Halal LSPs applying ICT for Halal control in transportation operations. Among the government-related factors, the ICT requirements for monitoring Halal included in the Halal Logistics Standard on Transportation (MS2400:2010) are the most influential in the adoption of ICT with the support of the government. In addition, the government-related factors are very important in reducing the main barriers and in creating an atmosphere of ICT adoption in the Halal LSP sector.
Abstract: Evolutionary Algorithms are population-based,
stochastic search techniques, widely used as efficient global
optimizers. However, many real-life optimization problems
require finding optimal solutions to complex, high-dimensional,
multimodal problems involving computationally very expensive
fitness function evaluations. Use of evolutionary algorithms in such
problem domains is thus practically prohibitive. An attractive
alternative is to build meta models or use an approximation of the
actual fitness functions to be evaluated. These meta models are orders
of magnitude cheaper to evaluate compared to the actual function
evaluation. Many regression and interpolation tools are available to
build such meta models. This paper briefly discusses the
architectures and use of such meta-modeling tools in an evolutionary
optimization context. We further present two evolutionary algorithm
frameworks which involve use of meta models for fitness function
evaluation. The first framework, namely the Dynamic Approximate
Fitness based Hybrid EA (DAFHEA) model [14], reduces
computation time by controlled use of meta-models (in this case
approximate model generated by Support Vector Machine
regression) to partially replace the actual function evaluation by
approximate function evaluation. However, the underlying
assumption in DAFHEA is that the training samples for the metamodel
are generated from a single uniform model. This does not take
into account uncertain scenarios involving noisy fitness functions.
The second model, DAFHEA-II, an enhanced version of the original
DAFHEA framework, incorporates a multiple-model based learning
approach for the support vector machine approximator to handle
noisy functions [15]. Empirical results obtained by evaluating the
frameworks using several benchmark functions demonstrate their
efficiency.
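A surrogate-assisted loop of the kind DAFHEA uses can be sketched as follows. For self-containedness, the SVM-regression meta-model of the papers is replaced here by a quadratic least-squares fit, and the sphere objective, population sizes and pre-screening rule are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)


def true_fitness(x):
    """Expensive objective (the sphere function stands in here)."""
    return float(np.sum(x ** 2))


def fit_surrogate(X, y):
    """Cheap quadratic least-squares meta-model fitted to all points
    evaluated so far (DAFHEA uses SVM regression instead)."""
    A = np.column_stack([X ** 2, X, np.ones(len(X))])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)

    def model(x):
        feats = np.concatenate([x ** 2, x, [1.0]])
        return float(feats @ coef)
    return model


def surrogate_assisted_ea(dim=2, pop=20, gens=15):
    X = rng.uniform(-5, 5, (pop, dim))
    y = np.array([true_fitness(x) for x in X])
    for _ in range(gens):
        model = fit_surrogate(X, y)
        # mutate random parents to produce offspring
        children = X[rng.integers(pop, size=pop)] + rng.normal(0, 0.5, (pop, dim))
        # pre-screen with the cheap meta-model; truly evaluate only the best few
        scores = np.array([model(c) for c in children])
        elite = children[np.argsort(scores)[:5]]
        ey = np.array([true_fitness(c) for c in elite])
        X, y = np.vstack([X, elite]), np.concatenate([y, ey])
        keep = np.argsort(y)[:pop]          # survival of the fittest
        X, y = X[keep], y[keep]
    return float(y.min())
```

Only the pre-screened elite incur expensive evaluations (5 per generation here instead of 20), which is the saving the meta-model provides.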
Abstract: The aim of this paper is to present a new method
which can be used for progressive transmission of electrocardiogram
(ECG). The idea consists in transforming any ECG signal to an
image, containing one beat in each row. In the first step, the beats are
synchronized in order to reduce the high frequencies due to inter-beat
transitions. The obtained image is then transformed using a discrete
version of the Radon Transform (DRT). Transmitting the ECG
then amounts to transmitting the most significant energy of the
transformed image in the Radon domain. For decoding, the receiver
needs to use the inverse Radon Transform as well as the two
synchronization frames.
The presented protocol can be adapted for lossy to lossless
compression systems. In lossy mode we show that the compression
ratio can be multiplied by an average factor of 2 for an acceptable
quality of the reconstructed signal. These results have been obtained
on real signals from the MIT database.
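The beat-per-row image construction that precedes the Radon transform can be sketched as below. The padding-based alignment and the given beat boundaries are simplifying assumptions, and the DRT / inverse-DRT stages are not reproduced.

```python
import numpy as np


def beats_to_image(ecg, beat_starts, width):
    """Stack one beat per row of an image, padding or truncating each
    beat to `width` samples (a crude stand-in for the paper's
    synchronisation step, which reduces inter-beat high frequencies)."""
    rows = []
    bounds = list(beat_starts) + [len(ecg)]
    for s, e in zip(bounds[:-1], bounds[1:]):
        beat = np.asarray(ecg[s:e], dtype=float)
        row = np.zeros(width)
        n = min(width, len(beat))
        row[:n] = beat[:n]
        rows.append(row)
    return np.vstack(rows)
```

The resulting 2-D array is what a discrete Radon transform would then be applied to, one beat per row.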
Abstract: We present a new method to reconstruct a temporally
coherent 3D animation from single or multi-view RGB-D video data
using unbiased feature point sampling. Given RGB-D video data, in
form of a 3D point cloud sequence, our method first extracts feature
points using both color and depth information. In the subsequent
steps, these feature points are used to match two 3D point clouds in
consecutive frames independent of their resolution. Our new
motion-vector-based dynamic alignment method then fully reconstructs
a spatio-temporally coherent 3D animation. We perform extensive
quantitative validation using novel error functions to analyze the
results. We show that despite the limiting factors of temporal and
spatial noise associated with RGB-D data, it is possible to extract
temporal coherence to faithfully reconstruct a temporally coherent
3D animation from RGB-D video data.
Abstract: Studies in neuroscience suggest that both global and
local feature information are crucial for perception and recognition of
faces. It is widely believed that local features are less sensitive to
variations caused by illumination and expression. In
this paper, we focus on designing and learning local features for face
recognition. We designed three types of local features: the
semi-global feature, the local patch feature and the tangent shape feature.
The semi-global feature is designed to take advantage of
global-like features while avoiding hampering the AdaBoost
algorithm in boosting weak classifiers built from small local
patches. The local patch feature is designed to automatically
select discriminative features, and is thus different from traditional
approaches, in which local patches are usually selected manually to cover
the salient facial components. A shape feature is also considered in
this paper for frontal-view face recognition. These features are
selected and combined under the framework of boosting algorithm
and cascade structure. The experimental results demonstrate that the
proposed approach outperforms the standard eigenface method and
Bayesian method. Moreover, the selected local features and the
observations from the experiments are instructive for research on
local feature design in face recognition.
Abstract: Two-stage compensator designs for linear systems are
investigated in the framework of the factorization approach. First, we
give a “full feedback” two-stage compensator design. Based on this
result, various types of the two-stage compensator designs with partial
feedbacks are derived.
Abstract: This paper introduces a process for the module level integration of computer based systems. It is based on the Six Sigma Process Improvement Model, where the goal of the process is to improve the overall quality of the system under development. We also present a conceptual framework that shows how this process can be implemented as an integration solution. Finally, we provide a partial implementation of key components in the conceptual framework.
Abstract: The production of a plant can be measured in terms of
seeds. The generation of seeds plays a critical role in our social and
daily life. The fruit production which generates seeds depends on
various parameters of the plant, such as shoot length, leaf number,
root length, root number, etc. When the plant is growing, some leaves
may be lost and new leaves may appear, so it is very difficult to
use the number of leaves to calculate the growth of the
plant. It is also cumbersome to measure the number of roots and
length of growth of root in several time instances continuously after
certain initial period of time, because roots grow deeper and deeper
under ground in course of time. On the contrary, the shoot length of
the tree grows in course of time which can be measured in different
time instances. So the growth of the plant can be measured using the
data of shoot length which are measured at different time instances
after plantation. Environmental parameters like temperature,
rainfall, humidity and pollution also play a role in yield
production. Soil, crop and spacing management are undertaken to
maximize the plant's yield. Data on the growth
of shoot length of some mustard plants at the initial stage (7, 14, 21
and 28 days after plantation) are available from a statistical survey by
a group of scientists under the supervision of Prof. Dilip De. In this
paper, the initial shoot length of Ken (one type of mustard plant) has
been used as the initial data. Statistical models and methods of
fuzzy logic and neural networks have been tested on this mustard
plant and, based on error analysis (calculation of the average error),
the model with the minimum error has been selected and can be used
for the assessment of shoot length at maturity. Finally, all these
methods have been tested on other types of mustard plants, and the
soft computing model with the minimum error across all types has
been selected for predicting the growth of shoot length.
The shoot length at the stage of maturity of all types of mustard
plants has been calculated using the statistical method on the
predicted data of shoot length.
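A minimal sketch of fitting one candidate statistical model to early shoot-length data follows. The readings and the straight-line model are hypothetical illustrations, not the survey data of Prof. De's group or the soft computing model the paper selects.

```python
import numpy as np

# Hypothetical shoot-length readings (cm) at 7, 14, 21 and 28 days
# after plantation; the actual survey values are not reproduced here.
days = np.array([7.0, 14.0, 21.0, 28.0])
length = np.array([3.1, 6.0, 8.7, 11.2])

# One simple candidate model: a least-squares straight line
# length = a*day + b, extrapolated toward maturity.
a, b = np.polyfit(days, length, 1)


def predicted_length(day):
    """Extrapolate shoot length (cm) at a given day after plantation."""
    return a * day + b
```

In practice the paper compares several such models (statistical, fuzzy, neural) and keeps the one with the smallest average error on the observed points.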
Abstract: The growing daily use of agents in software environments, owing to qualities such as their independence and intelligence, is no secret anymore. One environment in which agents have a prominent role is the e-marketplace, where a user can delegate to agents the responsibility of buying and selling instead of searching the e-marketplace themselves. Constructing a framework that pays sufficient attention to the required roles and their relations is the first step toward achieving such e-markets. In this paper, we suggest a framework for establishing such e-markets, and we investigate roles such as seller and buyer and their relations in the JADE environment in detail.
Abstract: This paper presents a time-controlled liquid mixing
system for tanks as an application of a fuzzy time control discrete
model. The system is designed for a wide range of industrial
applications. The simulation design of control system has three
inputs: volume, viscosity, and selection of product, along with the
three external control adjustments for the system calibration or to
take over the control of the system autonomously in local or
distributed environment. There are four controlling elements: rotatory
motor, grinding motor, heating and cooling units, and valves
selection, each with a time-frame limit. The system measures three
controlled variables through its sensing mechanism for
feedback control. This design also enables the liquid mixing
system to grind certain materials in the tanks and mix them with
fluids in a temperature-controlled environment to achieve a certain
viscosity level. The design of the fuzzifier, inference engine, rule base,
defuzzifier, and discrete-event control system is discussed. Time
control fuzzy rules are formulated, applied and tested using
MATLAB simulation for the system.
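A toy version of the fuzzification/inference/defuzzification chain described above, using triangular membership functions and a weighted-average defuzzifier. The two rules, the universes of discourse and the output times are illustrative assumptions, not the paper's rule base.

```python
def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)


def mixing_time(volume, viscosity):
    """Two illustrative rules with a weighted-average defuzzifier:
       R1: volume SMALL and viscosity LOW  -> time SHORT (10 min)
       R2: volume LARGE or  viscosity HIGH -> time LONG  (40 min)
    Volume on [0, 100] litres, viscosity on a [0, 10] scale (assumed)."""
    small = tri(volume, -1, 0, 100)       # fuzzification
    large = tri(volume, 0, 100, 201)
    low = tri(viscosity, -1, 0, 10)
    high = tri(viscosity, 0, 10, 21)
    w1 = min(small, low)                  # AND -> min
    w2 = max(large, high)                 # OR  -> max
    if w1 + w2 == 0:
        return 25.0                       # fallback midpoint
    return (w1 * 10 + w2 * 40) / (w1 + w2)
```

The same pattern extends to the paper's full controller by adding rules for the grinding motor, heating/cooling units and valve selection, each with its own time frame.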