Abstract: In this work, social stratification is considered one of
the significant factors that generate the phenomenon of terrorism,
and the accent is placed on the correlation between them, with the
objective of creating an info-logical model of the generation of the
phenomenon of terrorism based on the stratification process.
Abstract: The coastal sediments of the West Port of Malaysia were monitored from Nov. 2009 to Oct. 2010 to assess the spatial distribution of the heavy metals As, Cu, Cd, Cr, Hg, Ni, Zn and Pb. Sediment samples were collected from 10 stations in the dry and rainy seasons in the West Port. The measured concentration ranges (µg/g dry weight) were 23.4 to 98.3 for Zn, 22.3 to 80 for Pb, 7.4 to 27.6 for Cu, 0.244 to 3.53 for Cd, 7.2 to 22.2 for Ni, 20.2 to 162 for As, 0.11 to 0.409 for Hg and 11.5 to 61.5 for Cr. The geochemical indexes used in this study were the Geoaccumulation Index (Igeo), Contamination Factor (CF) and Pollution Load Index (PLI); these indexes were used to evaluate the levels of sediment contamination. The results of these indexes show that the West Port sediments are moderately polluted by heavy metals, except for arsenic, which shows a high level of pollution.
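The three indexes named in this abstract have standard textbook forms; as a hedged sketch (the background values below are illustrative assumptions, not the study's), they can be computed as:

```python
import math

# Sketch of the three sediment-quality indexes named in the abstract.
# The background (crustal) values are illustrative assumptions only.
BACKGROUND = {"Zn": 95.0, "Pb": 20.0, "As": 13.0}  # assumed values, µg/g dry weight

def igeo(concentration, background):
    """Geoaccumulation index: Igeo = log2(Cn / (1.5 * Bn))."""
    return math.log2(concentration / (1.5 * background))

def contamination_factor(concentration, background):
    """Contamination factor: CF = Cn / Bn."""
    return concentration / background

def pollution_load_index(cfs):
    """Pollution load index: the n-th root of the product of the CFs."""
    product = 1.0
    for cf in cfs:
        product *= cf
    return product ** (1.0 / len(cfs))

# Upper-range concentrations quoted in the abstract for three of the metals.
measured = {"Zn": 98.3, "Pb": 80.0, "As": 162.0}
cfs = [contamination_factor(measured[m], BACKGROUND[m]) for m in measured]
pli = pollution_load_index(cfs)
```

A PLI above 1 indicates polluted sediment, which is the threshold behavior the abstract's conclusion rests on.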
Abstract: Power consumption of nodes in ad hoc networks is a
critical issue as they predominantly operate on batteries. In order to
improve the lifetime of an ad hoc network, all the nodes must be
utilized evenly and the power required for connections must be
minimized. In this project, a link-layer algorithm known as the Power
Aware Medium Access Control (PAMAC) protocol is proposed,
which enables the network layer to select a route with minimum total
power requirement among the possible routes between a source and a
destination provided all nodes in the routes have battery capacity
above a threshold. When the battery capacity goes below a
predefined threshold, routes going through these nodes will be
avoided and these nodes will act only as source and destination.
Further, the first few nodes whose battery power has drained to the set
threshold value are pushed to the exterior part of the network and the
nodes in the exterior are brought to the interior, since less total
power is then required to forward packets for each connection. The
network layer protocol AOMDV is basically an extension of the
AODV routing protocol. AOMDV is designed to form multiple
routes to the destination, and it also avoids loop formation, thereby
reducing unnecessary congestion on the channel. In this project, the
performance of AOMDV is evaluated using PAMAC as a MAC layer
protocol and the average power consumption, throughput and
average end to end delay of the network are calculated and the results
are compared with that of the other network layer protocol AODV.
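The route-selection rule described above (exclude routes through nodes at or below the battery threshold, then minimize total transmit power) can be sketched as follows; the node names, power values and threshold are illustrative assumptions, not the paper's simulation setup:

```python
def select_route(routes, battery, link_power, threshold):
    """Pick the minimum-total-power route whose intermediate nodes all have
    battery capacity above the threshold; source and destination may always
    participate, as the abstract describes."""
    best_route, best_cost = None, float("inf")
    for route in routes:
        # nodes at or below the threshold may act only as source or destination
        if any(battery[n] <= threshold for n in route[1:-1]):
            continue
        cost = sum(link_power[(u, v)] for u, v in zip(route, route[1:]))
        if cost < best_cost:
            best_route, best_cost = route, cost
    return best_route, best_cost

# Illustrative topology: node B has drained to 15 units, below the threshold,
# so the cheaper route through B is avoided.
battery = {"S": 90, "A": 80, "B": 15, "D": 95}
link_power = {("S", "A"): 2, ("A", "D"): 3, ("S", "B"): 1, ("B", "D"): 1}
routes = [["S", "A", "D"], ["S", "B", "D"]]
chosen, total_power = select_route(routes, battery, link_power, threshold=20)
```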
Abstract: Sensor networks are often deployed in unattended
environments, thus leaving these networks vulnerable to false data
injection attacks in which an adversary injects forged reports into the
network through compromised nodes, with the goal of deceiving the
base station or depleting the resources of forwarding nodes. Several
research solutions have been recently proposed to detect and drop such
forged reports during the forwarding process. Each design can provide
equivalent resilience in terms of node compromise; however,
their energy consumption characteristics differ from each other. Thus,
employing only a single filtering scheme for a network is not a
recommendable strategy in terms of energy saving. Determining the
threshold for message authentication is therefore very important. We
propose recursive contract net protocols that take into account the
energy level of terminal nodes in a wireless sensor network.
Abstract: Using Dynamic Bayesian Networks (DBN) to model genetic regulatory networks from gene expression data is one of the major paradigms for inferring the interactions among genes. Averaging over a collection of models for predicting the network is preferable to relying on a single high-scoring model. In this paper, two kinds of model searching approaches are compared: Greedy hill-climbing Search with Restarts (GSR) and Markov Chain Monte Carlo (MCMC) methods. The GSR is preferred in many papers, but there has been no comparative study of which one is better for DBN models. Different types of experiments have been carried out to provide a benchmark test of these approaches. Our experimental results demonstrate that, on average, the MCMC methods outperform the GSR in the accuracy of the predicted network while having comparable time efficiency. By proposing different variations of MCMC and employing a simulated annealing strategy, the MCMC methods become more efficient and stable. Apart from the comparisons between these approaches, another objective of this study is to investigate the feasibility of using DBN modeling approaches for inferring gene networks from a few snapshots of high-dimensional gene profiles. Through synthetic data experiments as well as systematic data experiments, the results reveal how the performance of these approaches is influenced as the target gene network varies in network size, data size and system complexity.
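As a hedged illustration of the GSR strategy compared in this abstract, the following toy sketch hill-climbs over directed edge sets with random restarts; the scoring function is a stand-in, not a real DBN score such as BDe or BIC, and the gene names are illustrative:

```python
import random

def hill_climb(nodes, score, restarts=5, seed=0):
    """Greedy hill-climbing with random restarts over edge sets: from a random
    start, repeatedly toggle (add/delete) the single edge that improves the
    score, until no single-edge move helps; keep the best restart."""
    rng = random.Random(seed)
    candidates = [(u, v) for u in nodes for v in nodes if u != v]
    best_edges, best_score = frozenset(), score(frozenset())
    for _ in range(restarts):
        edges = frozenset(e for e in candidates if rng.random() < 0.3)
        current = score(edges)
        improved = True
        while improved:
            improved = False
            for e in candidates:  # try adding or deleting one edge
                neighbor = edges - {e} if e in edges else edges | {e}
                s = score(neighbor)
                if s > current:
                    edges, current, improved = neighbor, s, True
        if current > best_score:
            best_edges, best_score = edges, current
    return best_edges, best_score

# Toy score: reward edges of a hidden "true" network, penalize spurious ones.
true_edges = {("g1", "g2"), ("g2", "g3")}
score = lambda es: len(es & true_edges) - len(es - true_edges)
edges, s = hill_climb(["g1", "g2", "g3"], score)
```

With this separable toy score every restart converges to the true edge set; real DBN scores have local optima, which is exactly why the paper contrasts GSR with MCMC sampling.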
Abstract: In rail vehicles, air springs are very important isolating components, which guarantee good ride comfort for passengers during their trip. In most new rail-vehicle models developed by researchers, the thermo-dynamical effects of air springs are ignored and the secondary suspension is modeled by simple springs and dampers. As the performance of suspension components has significant effects on rail-vehicle dynamics and the ride comfort of passengers, a complete nonlinear thermo-dynamical air spring model, which is a combination of two different models, is introduced. Results from a field test show remarkable agreement between the proposed model and experimental data. The effects of air suspension parameters on system performance are investigated here, and these parameters are then tuned to minimize the Sperling ride comfort index during the trip. The results showed that by modification of the air suspension parameters, passenger comfort is improved and the ride comfort index is reduced by about 10%.
Abstract: This paper suggests an algorithm for the evaluation
and selection of suppliers. At the beginning, all the needed materials and services used by the organization were identified and categorized
with regard to their nature by the ABC method. Afterwards, in order to reduce risk factors and maximize the organization's profit, purchase strategies were determined. Then, appropriate criteria were identified for the primary evaluation of suppliers applying to the organization. The output of this stage was a list of suppliers qualified by the organization to participate in its tenders. Subsequently, considering a particular material, appropriate criteria for ordering the mentioned material were determined, taking into account the material's specifications as well as the organization's needs. Finally, for the purpose of validation and verification of the proposed model, it was applied to Mobarakeh Steel Company (MSC), and the qualified suppliers of this company were ranked by means of a Hierarchical Fuzzy TOPSIS method. The obtained results show that the proposed algorithm is quite effective, efficient and easy to apply.
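The final ranking step can be illustrated with the crisp form of TOPSIS; the paper's Hierarchical Fuzzy TOPSIS replaces these crisp scores with fuzzy numbers, and the decision matrix and weights below are illustrative assumptions:

```python
import math

def topsis(matrix, weights):
    """Crisp TOPSIS for benefit criteria: vector-normalize, weight, then rank
    each alternative by its closeness coefficient to the ideal solution."""
    n = len(weights)
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(n)]
    v = [[w * x / nz for x, w, nz in zip(row, weights, norms)] for row in matrix]
    ideal = [max(col) for col in zip(*v)]  # positive ideal solution
    anti = [min(col) for col in zip(*v)]   # negative ideal solution
    def dist(row, ref):
        return math.sqrt(sum((p - q) ** 2 for p, q in zip(row, ref)))
    # closeness coefficient in [0, 1]: higher means closer to the ideal supplier
    return [dist(r, anti) / (dist(r, anti) + dist(r, ideal)) for r in v]

# Three hypothetical suppliers scored on three weighted benefit criteria.
scores = topsis([[7, 9, 9], [8, 7, 8], [9, 6, 8]], [0.5, 0.3, 0.2])
```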
Abstract: All over the world, including the Middle and East
European countries, sustainable tillage and sowing technologies are
applied increasingly broadly with a view to optimising soil resources,
mitigating soil degradation processes, saving energy resources,
preserving biological diversity, etc. As a result, altered conditions of
tillage and sowing technological processes are inevitably encountered. The
purpose of this study is to determine the seedbed topsoil hardness
when using a combined sowing coulter in different sustainable tillage
technologies. The research involved a combined coulter consisting
of two dissected blade discs and a shoe coulter. In order to determine
soil hardness at the seedbed area, a multipenetrometer was used. It
was found by experimental studies that in loosened soil, a combined
sowing coulter equally compacts the furrow bottom, walls and soil
near the furrow; therefore, soil hardness there was similar at all
researched depths and no significant differences were established. In
loosened and compacted (double-rolled) soil, the impact of a
combined coulter on the hardness of seedbed soil surface was more
considerable at a depth of 2 mm. Soil hardness at the furrow bottom
and walls to a distance of up to 26 mm was 1.1 MPa. At a depth of 10
mm, the greatest hardness was established at the furrow bottom. In
loosened and heavily compacted (rolled for 6 times) soil, at a depth
of 2 and 10 mm a combined coulter most of all compacted the furrow
bottom, which had a hardness of 1.8 MPa. At a depth of 20 mm, soil
hardness within the whole investigated area varied insignificantly,
fluctuating around 2.0 MPa. The hardness of the furrow walls and the
soil near the furrow was approximately 1.0 MPa lower than that at the
furrow bottom.
Abstract: This article presents the development of efficient
algorithms for the comparison of tablet copies. Image recognition has
specialized use in digital systems such as medical imaging,
computer vision, defense, communication etc. Comparison between
two images that look indistinguishable is a formidable task. Two
images taken from different sources might look identical, but due to
different digitizing properties they are not. Moreover, small variations
in image information, such as cropping, rotation, and slight
photometric alteration, defeat direct matching techniques. In this
paper we introduce different matching algorithms designed to help
art centers distinguish real painting images from fake ones.
Different vision algorithms for
local image features are implemented using MATLAB. In this
framework, a Table Comparison Computer Tool (TCCT) is designed
to facilitate our research. The TCCT is a Graphical User
Interface (GUI) tool used to identify images by their shapes and
objects. The parameters of the vision system are fully accessible to the
user through this graphical user interface. For matching, it
applies different description techniques that can identify the exact
figures of objects.
Abstract: This paper presents the modeling and analysis of a 12-phase distribution static compensator (DSTATCOM), which is capable of balancing the source currents in spite of unbalanced loading and phase outages. In addition to balancing the supply current, the power factor can be set to a desired value. The theory of instantaneous symmetrical components is used to generate the twelve-phase reference currents. These reference currents are then tracked using a current-controlled voltage source inverter operated in a hysteresis band control scheme. An ideal compensator is used in place of a physical realization of the compensator. The performance of the proposed DSTATCOM is validated through MATLAB simulation, and detailed simulation results are given.
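The symmetrical-components transform the abstract relies on can be shown for the familiar three-phase case (the paper generalizes the idea to twelve phases); this is the standard textbook construction, not the paper's twelve-phase derivation:

```python
import cmath

# Rotation operator a = e^{j 2*pi/3} for the three-phase case.
a = cmath.exp(2j * cmath.pi / 3)

def symmetrical_components(ia, ib, ic):
    """Instantaneous symmetrical components of three phase currents."""
    i0 = (ia + ib + ic) / 3             # zero sequence
    i1 = (ia + a * ib + a**2 * ic) / 3  # positive sequence
    i2 = (ia + a**2 * ib + a * ic) / 3  # negative sequence
    return i0, i1, i2

# A balanced three-phase set has only a positive-sequence component.
i0, i1, i2 = symmetrical_components(1, a**2, a)
```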
Abstract: Stream Control Transmission Protocol (SCTP) has been
proposed to provide reliable transport of real-time communications.
Due to its attractive features, such as multi-streaming and multihoming,
the SCTP is often expected to be an alternative protocol
to TCP and UDP. In the original SCTP standard, the secondary path
is mainly regarded as a redundancy. Recently, most research
has focused on extending the SCTP to enable a host to send its
packets to a destination over multiple paths simultaneously. In order
to transfer packets concurrently over the multiple paths, the SCTP
should be well designed to avoid unnecessary fast retransmission
and the mis-estimation of congestion window size across the paths.
Therefore, we propose an Enhanced Cooperative ACK SCTP (ECA-SCTP)
to improve the path recovery efficiency of a multi-homed host
operating in concurrent multipath transfer mode. We evaluated the
performance of our proposed scheme using ns-2 simulation in terms
of cwnd variation, path recovery time, and goodput. Our scheme
provides better performance in lossy and path asymmetric networks.
Abstract: A new dual-fluid concept was studied that could eventually find application for cold-gas propulsion for small space satellites or other constant flow applications. In basic form, the concept uses two different refrigerant working fluids, each having a different saturation vapor pressure. The higher vapor pressure refrigerant remains in the saturation phase and is used to pressurize the lower saturation vapor pressure fluid (the propellant) which remains in the compressed liquid phase. A demonstration thruster concept based on this principle was designed and built to study its operating characteristics. An automotive-type electronic fuel injector was used to meter and deliver the propellant. Ejected propellant mass and momentum were measured for several combinations of refrigerants and hydrocarbon fluids. The thruster has the advantage of delivering relatively large total impulse at low tank pressure within a small volume.
Abstract: Human head representations are usually based on
the morphological and structural components of a real model. Over
time, it has become increasingly necessary to achieve fully virtual
models that comply very rigorously with the specifications of human
anatomy. Still, making and using a model perfectly fitted to the
real anatomy is a difficult task, because it requires large hardware
resources and significant processing time. That is why it is
necessary to choose the best compromise solution, one which keeps the
right balance between the perfection of details and the consumption
of resources, in order to obtain facial animations with real-time
rendering. We will present here the way in which we achieved such a
3D system that we intend to use as a base point in order to create
facial animations with real-time rendering, used in medicine to find
and to identify different types of pathologies.
Abstract: This paper addresses the development of an intelligent vision system for human-robot interaction. The two novel contributions of this paper are 1) detection of human faces and 2) localization of the eye. The method is based on the visual attributes of human skin colors and geometrical analysis of the face skeleton. This paper introduces a spatial domain filtering method named the 'Fuzzily skewed filter', which incorporates fuzzy rules for deciding the gray level of pixels from their neighborhoods and takes advantage of both the median and averaging filters. The effectiveness of the method has been demonstrated by implementing eye tracking commands on an entertainment robot named AIBO.
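The abstract states that the 'Fuzzily skewed filter' combines the median and averaging filters via fuzzy rules; as a hedged stand-in (the fixed blending weight below replaces the fuzzy rules, whose exact form the abstract does not give), such a blend might look like:

```python
def blended_filter(image, w=0.5):
    """3x3 blend of median and mean filtering on a 2D grid (list of lists);
    border pixels are left unchanged. The constant weight w is an assumed
    stand-in for the paper's fuzzy decision rules."""
    h, wd = len(image), len(image[0])
    out = [row[:] for row in image]
    for i in range(1, h - 1):
        for j in range(1, wd - 1):
            window = sorted(image[a][b]
                            for a in (i - 1, i, i + 1)
                            for b in (j - 1, j, j + 1))
            median, mean = window[4], sum(window) / 9
            out[i][j] = w * median + (1 - w) * mean
    return out
```

A weight leaning toward the median preserves edges against impulse noise, while leaning toward the mean suppresses Gaussian noise; the paper's fuzzy rules presumably adapt this trade-off per pixel.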
Abstract: Covering-based rough sets are an extension of rough
sets, based on a covering instead of a partition of the
universe. They are therefore more powerful than rough sets in
describing some practical problems. However, by extending rough sets,
covering-based rough sets can increase the roughness of each model
in recognizing objects. How to obtain better approximations from
covering-based rough sets models is an important issue.
In this paper, two concepts, determinate elements and indeterminate
elements in a universe, are proposed and given precise definitions
respectively. This research makes a reasonable refinement of the
covering elements from a new viewpoint, and the refinement may
generate better approximations in covering-based rough sets models.
To test the theory above, it is applied to eight major covering-based
rough sets models adapted from the literature.
The result is that, in all these models, the lower approximation
increases effectively. Correspondingly, in all models, the upper
approximation decreases, with the exception of two models in some
special situations.
Therefore, the roughness of recognizing objects is reduced. This
research provides a new approach to the study and application of
covering-based rough sets.
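One common covering-based model (often called the first type) defines the lower approximation as the union of covering blocks contained in the set and the upper approximation as the union of blocks meeting it; the universe and covering below are illustrative, and the paper studies eight such models:

```python
def lower_approx(cover, x):
    """Union of the covering blocks entirely contained in x."""
    return set().union(*(b for b in cover if b <= x))

def upper_approx(cover, x):
    """Union of the covering blocks that intersect x."""
    return set().union(*(b for b in cover if b & x))

# A covering, not a partition: blocks may overlap.
cover = [{1, 2}, {2, 3}, {3, 4}]
x = {1, 2, 3}
```

The gap between the two approximations measures the roughness the abstract discusses; refining the covering elements can only shrink that gap.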
Abstract: In the proposed method for Web page-ranking, a
novel theoretic model is introduced and tested by examples of order
relationships among IP addresses. Ranking is induced using a
convexity feature, which is learned according to these examples
using a self-organizing procedure. We consider the problem of
self-organizing learning from IP data to be represented by a semi-random
convex polygon procedure, in which the vertices correspond to IP
addresses. Based on recent developments in our regularization
theory for convex polygons and corresponding Euclidean distance
based methods for classification, we develop an algorithmic
framework for learning ranking functions based on a Computational
Geometric Theory. We show that our algorithm is generic, and
present experimental results explaining the potential of our approach.
In addition, we explain the generality of our approach by showing its
possible use as a visualization tool for data obtained from diverse
domains, such as Public Administration and Education.
Abstract: Little research has examined working memory
capacity (WMC) in signed language interpreters and deaf signers.
This paper presents the findings of a study that investigated WMC in
professional Australian Sign Language (Auslan)/English interpreters
and deaf signers. Thirty-one professional Auslan/English interpreters
(14 hearing native signers and 17 hearing non-native signers)
completed an English listening span task and then an Auslan working
memory span task, which tested their English WMC and their Auslan
WMC, respectively. Moreover, 26 deaf signers (6 deaf native signers
and 20 deaf non-native signers) completed the Auslan working
memory span task. The results revealed a non-significant difference
between the hearing native signers and the hearing non-native signers
in their English WMC, and a non-significant difference between the
hearing native signers and the hearing non-native signers in their
Auslan WMC. Moreover, the results yielded a non-significant
difference between the hearing native signers' English WMC and
their Auslan WMC, and a non-significant difference between the
hearing non-native signers' English WMC and their Auslan WMC.
Furthermore, a non-significant difference was found between the deaf
native signers and the deaf non-native signers in their Auslan WMC.
Abstract: The purpose of this paper is to study Database Models
to use them efficiently in E-commerce websites. In this paper we aim
to find a method that can save and retrieve information in E-commerce
websites in a form that semantic web applications can work with,
and we also study different technologies for E-commerce
databases. Since one of the most important deficits of the
semantic web is the shortage of semantic data, as most of the
information is still stored in relational databases, we present an
approach to map legacy data stored in relational databases into the
Semantic Web using virtually any modern RDF query language, as
long as it is closed within RDF. To achieve this goal we study XML
structures for the relational databases of old websites, and eventually we
move one level above XML and look for a mapping from the relational
model to RDF. Noting that a large number of semantic web
applications take advantage of the relational model, opening up ways
in which it can be converted to XML and RDF in modern (semantic web)
systems is important.
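The kind of relational-to-RDF mapping described here can be sketched in the spirit of the W3C Direct Mapping, with each row becoming a subject URI and each column a predicate; the table name, base URI and data below are illustrative assumptions, not the paper's mapping:

```python
BASE = "http://example.org/"  # assumed base URI for minted identifiers

def rows_to_triples(table, key, rows):
    """Map relational rows (dicts) to RDF-style triples: the primary key mints
    the subject URI, every other column becomes a predicate with a literal
    object. A simplified, hedged take on direct mapping."""
    triples = []
    for row in rows:
        subject = f"<{BASE}{table}/{row[key]}>"
        for column, value in row.items():
            if column != key:
                triples.append((subject, f"<{BASE}{table}#{column}>", f'"{value}"'))
    return triples

# One hypothetical row of a "product" table from a legacy E-commerce database.
products = [{"id": 1, "name": "lamp", "price": 9}]
triples = rows_to_triples("product", "id", products)
```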
Abstract: Sweet cherries (Prunus avium L.) contain various
phenolic compounds which contribute to total antioxidant activity.
Total polyphenols, tannins, flavonoids and anthocyanins, and
antioxidant capacity in the fruits of a number of selected sweet cherry
genotypes were investigated. Total polyphenols content ranged from
4.12 to 8.34 mg gallic acid equivalents/g dry fruit weight and total
tannins content ranged from 0.19 to 1.95 mg gallic acid equivalents/g
dry fruit weight. Total flavonoids were within the range 0.42-1.56 mg
of rutin equivalents/g dry fruit weight, and total anthocyanins content
was between 0.35 and 0.69 mg cyanidin 3-glucoside equivalents/g
dry fruit weight. Although sweet cherry fruits are a significant source
of different phenolic compounds, the antioxidant activity of sweet
cherries is not related only to the total polyphenols, flavonoids or
anthocyanins.
Abstract: Microarray data profile gene expression on a whole-genome
scale and therefore provide a good way to study associations
between gene expression and the occurrence or progression of cancer.
More and more researchers have realized that microarray data are helpful
for predicting cancer samples. However, the dimensionality of gene
expression data is much larger than the sample size, which makes this
task very difficult. Therefore, how to identify the significant genes
causing cancer has become an urgent, popular and hard research
topic. Many feature selection algorithms proposed in
the past focus on improving cancer predictive accuracy at the
expense of ignoring the correlations between the features. In this
work, a novel framework (named SGS) is presented for stable gene
selection and efficient cancer prediction. The proposed framework
first performs a clustering algorithm to find gene groups in which
the genes have high correlation coefficients, then
selects the significant genes in each group with the Bayesian Lasso and
the important gene groups with the group Lasso, and finally builds a
prediction model on the shrunken gene space with an efficient
classification algorithm (such as SVM, 1NN or regression). Experimental
results on real-world data show that the proposed framework often
outperforms existing feature selection and prediction methods
such as SAM, IG and Lasso-type prediction models.
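As a hedged, much-simplified sketch of the SGS pipeline described here (the greedy grouping rule and per-group scoring below are toy stand-ins for the paper's clustering, Bayesian Lasso and group Lasso steps, and all data are synthetic):

```python
import numpy as np

def group_genes(X, threshold=0.8):
    """Greedily group genes whose absolute pairwise correlation exceeds the
    threshold; a stand-in for the paper's clustering step."""
    corr = np.abs(np.corrcoef(X.T))
    n = X.shape[1]
    groups, assigned = [], set()
    for g in range(n):
        if g in assigned:
            continue
        members = [g] + [h for h in range(n)
                         if h != g and h not in assigned and corr[g, h] > threshold]
        assigned.update(members)
        groups.append(members)
    return groups

def select_representatives(X, y, groups):
    """Keep, per group, the gene most correlated with the class label; a
    stand-in for the paper's Bayesian Lasso / group Lasso selection."""
    reps = []
    for members in groups:
        scores = [abs(np.corrcoef(X[:, g], y)[0, 1]) for g in members]
        reps.append(members[int(np.argmax(scores))])
    return reps

# Synthetic data: genes 0 and 1 are identical (one group), gene 2 independent.
rng = np.random.default_rng(0)
base = rng.normal(size=40)
X = np.column_stack([base, base, rng.normal(size=40)])
y = (base > 0).astype(float)
groups = group_genes(X)
reps = select_representatives(X, y, groups)
```

The shrunken gene space (`reps`) would then be fed to a classifier such as SVM or 1NN, as the abstract describes.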