Abstract: An Open Agent System platform based on the High Level
Architecture (HLA) is proposed to support applications involving
heterogeneous agents. The basic idea is to develop different wrappers
for different agent systems, which are wrapped as federates to join a
federation. Because the platform is based on the High Level
Architecture, the advantages of this open standard, such as system
interoperability and reuse, are naturally inherited. In particular,
the federal architecture allows federates to be heterogeneous,
supporting the integration of different agent systems. Furthermore,
both implicit and explicit communication between agents can be
supported. Then, taking the wrapper RTI_JADE as an example, its
components are discussed. Finally, the performance of RTI_JADE is
analyzed; the results show that it works very efficiently.
Abstract: In this paper, we investigate a blind channel estimation method for multi-carrier CDMA systems that uses a subspace decomposition technique. The technique exploits the orthogonality between the noise subspace and the received user codes to obtain the channel of each user. Previous work used the Singular Value Decomposition (SVD), but the SVD has high computational complexity; in this paper we instead use the URV decomposition, which serves as an intermediary between the QR decomposition and the SVD, to track the noise subspace of the received data. The URV decomposition has almost the same estimation performance as the SVD, but lower computational complexity.
Abstract: This paper discusses a new model of an Islamic code of
ethics for directors. Several corporate scandals and collapses, both
local (e.g., Transmile and Megan Media) and overseas (e.g., Parmalat
and Enron), show that current corporate governance and regulatory
reforms are unable to prevent such events from recurring. Arguably,
the code of ethics for directors is under-researched, and current
codes of ethics concentrate only on governing the work of the
employees of the organization as a whole, without paying direct
attention to the directors, the group of people responsible for the
performance of the company. This study used semi-structured
interviews with well-known Islamic scholars, such as the Mufti, to
develop the model. The expected outcome of the research is a
comprehensive model of a code of ethics based on Islamic principles
that companies can apply and use to construct a code of ethics for
their directors.
Abstract: Active vibration control is an important problem in
structures. Its objective is to reduce the vibration of a system by
automatic modification of the system's structural response. This
paper proposes the modeling and design of a fast output sampling
(FOS) feedback controller for a smart flexible beam system embedded
with shear sensors and actuators, treated as a SISO system using
Timoshenko beam theory. FEM, Timoshenko beam theory and state-space
techniques are used to model the aluminum cantilever beam. For the
SISO case, the beam is divided into 5 finite elements; the control
actuator is placed at finite element position 1, whereas the sensor
is varied from position 2 to 5, i.e., from near the fixed end to the
free end. Controllers are designed using the FOS method, and the
performance of the designed FOS controller is evaluated for vibration
control on 4 SISO models of the same plant. The effect of placing the
sensor at different locations on the beam is observed, and the
performance of the controller is evaluated for vibration control.
Limitations of the Euler-Bernoulli theory, such as the neglect of
shear and axial displacement, are addressed here, giving rise to a
more accurate beam model. Embedded shear sensors and actuators are
considered instead of surface-mounted ones for vibration suppression
because of their many advantages. In controlling the vibration, the
first three dominant modes of the system are considered.
Abstract: The proper operation of common active filters depends on the accuracy of identifying system distortion. In addition, using a suitable method for current injection and reactive power compensation increases filter performance. Accordingly, this paper presents a method based on predictive current control theory for shunt active filter applications. The harmonics of the load current are identified by applying the 0–d–q reference frame to the load current and eliminating the DC part of the d–q components. The remaining components are then delivered to the predictive current controller as a three-phase reference current via the inverse Park transformation. The system is modeled in the discrete time domain. The proposed method has been tested with a MATLAB model for a nonlinear load (with a Total Harmonic Distortion of 20%). The simulation results indicate that the proposed filter causes a sinusoidal current (THD = 0.15%) to flow through the source. In addition, the results show that the filter tracks the reference current accurately.
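The harmonic-identification step above relies on the Park transform between phase quantities and the rotating d–q frame. As a minimal illustrative sketch (not the paper's implementation), the amplitude-invariant forward and inverse transforms can be written as:

```python
import math

def abc_to_dq0(ia, ib, ic, theta):
    """Amplitude-invariant Park transform: phase currents -> (d, q, 0)."""
    k = 2.0 / 3.0
    d = k * (ia * math.cos(theta)
             + ib * math.cos(theta - 2 * math.pi / 3)
             + ic * math.cos(theta + 2 * math.pi / 3))
    q = -k * (ia * math.sin(theta)
              + ib * math.sin(theta - 2 * math.pi / 3)
              + ic * math.sin(theta + 2 * math.pi / 3))
    z = k * 0.5 * (ia + ib + ic)   # zero-sequence component
    return d, q, z

def dq0_to_abc(d, q, z, theta):
    """Inverse Park transform back to phase quantities."""
    ia = d * math.cos(theta) - q * math.sin(theta) + z
    ib = d * math.cos(theta - 2 * math.pi / 3) \
        - q * math.sin(theta - 2 * math.pi / 3) + z
    ic = d * math.cos(theta + 2 * math.pi / 3) \
        - q * math.sin(theta + 2 * math.pi / 3) + z
    return ia, ib, ic
```

A balanced fundamental-frequency sinusoid maps to constant d–q values, which is why removing the DC part of d and q isolates the harmonics.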
Abstract: In this paper, a model for forecasting project cost
performance is developed based on past project cost and time
performance. This study presents a probabilistic project control
concept to assure an acceptable forecast of project cost performance.
In this concept, project activities are classified into sub-groups
called control accounts. The Stochastic S-Curve (SS-Curve) is then
obtained for each sub-group, and the project SS-Curve is obtained by
summing the sub-groups' SS-Curves. In this model, project cost
uncertainties are considered through Beta distribution functions of
the project activity costs required to complete the project, at
selected time sections through project accomplishment, which are
extracted from a variety of sources. Based on this model, after a
percentage of project progress, project performance is measured via
Earned Value Management to adjust the primary cost probability
distribution functions. The future project cost performance is then
predicted using Monte Carlo simulation.
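The forecasting step above, Beta-distributed control-account costs aggregated by Monte Carlo simulation, can be sketched as follows. The account parameters and the reported percentile are illustrative assumptions, not values from the paper:

```python
import random

def simulate_project_cost(accounts, n_trials=10000, seed=42):
    """Monte Carlo estimate of total project cost.

    accounts: list of (alpha, beta, low, high) tuples; each control
    account's cost is drawn from Beta(alpha, beta) scaled to [low, high].
    Returns (mean, p90) of the simulated total-cost distribution.
    """
    rng = random.Random(seed)
    totals = []
    for _ in range(n_trials):
        total = 0.0
        for a, b, low, high in accounts:
            # scale a Beta(a, b) draw from [0, 1] to the cost range
            total += low + (high - low) * rng.betavariate(a, b)
        totals.append(total)
    totals.sort()
    mean = sum(totals) / n_trials
    p90 = totals[int(0.9 * n_trials)]   # 90th-percentile cost
    return mean, p90
```

Summing the per-account draws trial by trial is what produces the project-level SS-Curve from the sub-group curves.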
Abstract: Principal component analysis (PCA) is often combined with
state-of-the-art classification algorithms to recognize human faces.
However, as a global feature selection algorithm, PCA can only
capture features contributing to the global characteristics of the
data. It misses features contributing to the local characteristics of
the data, because each principal component contains only some level
of the data's global characteristics. In this study, we present a
novel face recognition approach using non-negative principal
component analysis, which adds a non-negativity constraint to improve
data locality and help elucidate latent data structures. Experiments
are performed on the Cambridge ORL face database. We demonstrate the
strong performance of the algorithm in recognizing human faces in
comparison with the PCA and NREMF approaches.
Abstract: Improving performance measures in construction processes
has been a major concern for managers and decision makers in the
industry. They seek ways to recognize the key factors that have the
largest effect on the process. Identifying such factors can guide
them to focus on the right parts of the process in order to obtain
the best possible result. In the present study, design of experiments
(DOE) has been applied to a computer simulation model of a
bricklaying process to determine significant factors, with
productivity chosen as the response of the experiment. To this end,
four controllable factors and their interactions have been tested,
and the best level has been calculated for each factor. The results
indicate that three factors, namely, labor of brick, labor of mortar
and inter-arrival time of mortar, along with the interaction of labor
of brick and labor of mortar, are significant.
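As an illustrative sketch of the DOE idea (not the simulation model of the paper), a two-level full-factorial experiment estimates each factor's main effect as the difference between the mean response at its high and low levels:

```python
from itertools import product

def main_effects(n_factors, response):
    """Two-level full-factorial main effects.

    n_factors: number of controllable factors, each coded -1 (low) / +1 (high).
    response: function mapping a run (tuple of codes) to the observed response,
    e.g. simulated productivity.
    Returns a list of main effects, one per factor.
    """
    runs = list(product([-1, 1], repeat=n_factors))
    effects = []
    for f in range(n_factors):
        hi = [response(r) for r in runs if r[f] == 1]
        lo = [response(r) for r in runs if r[f] == -1]
        effects.append(sum(hi) / len(hi) - sum(lo) / len(lo))
    return effects
```

Factors whose effect estimates are large relative to the experimental noise are the significant ones; interactions are estimated the same way using products of the coded columns.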
Abstract: Radio propagation from point to point is affected by the
physical channel in many ways. A signal arriving at a destination
travels through a number of different paths, referred to as
multipaths. Research in this area of wireless communications has
progressed well over the years, taking different angles of focus:
some researchers focus on ways of reducing or eluding multipath
effects, whilst others focus on mitigating the effects of multipath
through compensation schemes. Baseband processing is one field of
signal processing that is cardinal to the advancement of
software-defined radio technology, which has led to wide research
into carrying out certain algorithms at baseband. This paper
considers compensating for multipath for frequency-modulated signals.
The compensation process is carried out at radio frequency (RF) and
at quadrature baseband (QBB), and the results are compared.
Simulations are carried out using MATLAB to show the benefits of
working at the lower QBB frequencies rather than at RF.
Abstract: The paper presents an optimization study based on genetic
algorithms (GAs) for a radio-frequency applicator used in heating
dielectric band products. The weakly coupled electro-thermal problem
is analyzed using 2D FEM. The design variables in the optimization
process are the voltage of a supplementary "guard" electrode and six
geometric parameters of the applicator. Two objective functions are
used: temperature uniformity and total active power absorbed by the
dielectric. Both mono-objective and multi-objective formulations are
implemented in the GA optimization.
Abstract: Heart disease (HD) is a major cause of morbidity and mortality in modern society. Medical diagnosis is an important but complicated task that should be performed accurately and efficiently, and its automation would be very useful. Unfortunately, not all doctors are equally skilled in every subspecialty, and in many places they are a scarce resource. A system for automated medical diagnosis would enhance medical care and reduce costs. In this paper, a new approach based on the coactive neuro-fuzzy inference system (CANFIS) is presented for the prediction of heart disease. The proposed CANFIS model combines the adaptive capabilities of neural networks with the qualitative approach of fuzzy logic, and is integrated with a genetic algorithm to diagnose the presence of the disease. The performance of the CANFIS model was evaluated in terms of training performance and classification accuracy, and the results showed that the proposed CANFIS model has great potential in predicting heart disease.
Abstract: Rotation or tilt present in an image captured by digital
means can be detected and corrected using an Artificial Neural
Network (ANN) for application in a Face Recognition System (FRS).
Principal Component Analysis (PCA) features of faces at different
angles are used to train an ANN that detects the rotation of an input
image, which is then corrected using a set of operations implemented
by another ANN-based system. The work also deals with the recognition
of human faces using features from the forehead, eyes, nose and mouth
as decision support entities of the system, configured using a
Generalized Feed Forward Artificial Neural Network (GFFANN). These
features are combined to provide a reinforced decision for
verification of a person's identity despite illumination variations.
The complete system, performing facial image rotation detection,
correction and recognition with reinforced decision support, provides
a success rate in the high 90s (percent).
Abstract: Human amniotic membrane (HAM) is a useful biological
material for the reconstruction of a damaged ocular surface. The
processing and preservation of HAM are critical to protect patients
undergoing amniotic membrane transplant (AMT) from cross-infection.
For HAM preparation, a human placenta is obtained after an elective
cesarean delivery. Before collection, the donor is screened for
seronegativity for HCV, HBsAg, HIV and syphilis. After collection,
the placenta is washed in balanced salt solution (BSS) in a sterile
environment. The amniotic membrane is then separated from the
placenta and chorion while keeping the preparation in BSS. Scraping
of the HAM is then carried out manually until all debris is removed
and a clear, transparent membrane is acquired. Nitrocellulose
membrane filters are then placed on the stromal side of the HAM and
cut around the edges, with a little membrane folded towards the other
side to make it easy to separate during surgery. The HAM is finally
stored in a 1:1 solution of glycerine and Dulbecco's Modified Eagle
Medium (DMEM) containing antibiotics. The capped borosilicate vials
containing HAM are kept at -80 °C until use; at the time of surgery,
a vial is thawed to room temperature and opened under sterile
operating theatre conditions.
Abstract: In the present paper, the three-dimensional temperature
field of the tool during machining is determined and compared with
experimental work on a C45 workpiece using carbide cutting tool
inserts. During metal cutting operations, high temperatures are
generated at the tool cutting edge, which influences the rate of tool
wear. Temperature is one of the most important characteristics of
machining processes, since many parameters, such as cutting speed,
surface quality and cutting forces, depend on it, and high
temperatures can cause high mechanical stresses, which lead to early
tool wear and reduced tool life. Considerable attention is therefore
paid to determining tool temperatures. The experiments are carried
out under dry, orthogonal machining conditions. The results show that
the increase in tool temperature depends on the depth of cut and
especially on the cutting speed in the high range of cutting
conditions.
Abstract: Many factors affect the success of Machine Learning (ML)
on a given task. The representation and quality of the instance data
are first and foremost: if there is much irrelevant and redundant
information, or noisy and unreliable data, then knowledge discovery
during the training phase is more difficult. It is well known that
data preparation and filtering steps take a considerable amount of
processing time in ML problems. Data pre-processing includes data
cleaning, normalization, transformation, feature extraction and
selection, etc.; its product is the final training set. It would be
ideal if a single sequence of data pre-processing algorithms had the
best performance for every data set, but this is not the case. Thus,
we present the best-known algorithms for each step of data
pre-processing, so that one can achieve the best performance for a
given data set.
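Two of the pre-processing steps mentioned, data cleaning and normalization, can be sketched as follows (an illustrative example, not a specific algorithm from the paper):

```python
def impute_mean(column, missing=None):
    """Data cleaning: replace missing entries (the `missing` sentinel)
    with the mean of the observed values."""
    present = [x for x in column if x is not missing]
    mean = sum(present) / len(present)
    return [mean if x is missing else x for x in column]

def min_max_normalize(column):
    """Normalization: rescale a numeric column to [0, 1];
    a constant column maps to all zeros."""
    lo, hi = min(column), max(column)
    if hi == lo:
        return [0.0 for _ in column]
    return [(x - lo) / (hi - lo) for x in column]
```

Chaining such steps, cleaning first, then normalization, then feature extraction/selection, yields the final training set described above.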
Abstract: A Wireless Sensor Network (WSN) comprises sensor nodes
designed to sense the environment and transmit the sensed data back
to the base station via multi-hop routing in order to reconstruct
physical phenomena. Since the sensed data exhibit significant
temporal and spatial redundancy, Redundancy Suppression Algorithms
(RSAs) are necessary to lower sensor-node energy consumption by
reducing the transmission of redundant data. A conventional class of
RSAs is threshold-based RSA, which sets a threshold to suppress
redundant data. Although many temporal and spatial RSAs have been
proposed, combined temporal-spatial RSAs are rare, because it is
difficult to determine when to apply temporal versus spatial
suppression. In this paper, we propose a novel temporal-spatial
redundancy suppression algorithm, the Codebook-based Redundancy
Suppression Mechanism (CRSM). CRSM adopts vector quantization to
generate a codebook, which is easily used to implement a
temporal-spatial RSA. CRSM not only achieves power saving and
reliability for the WSN, but also provides predictability of the
network lifetime. Simulation results show that the network lifetime
of CRSM exceeds that of other RSAs by at least 23%.
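As an illustrative sketch of the vector-quantization idea behind CRSM (the codebook construction here is plain 1-D Lloyd/k-means, an assumption rather than CRSM's actual procedure), a node can transmit a small codeword index instead of a raw reading:

```python
def quantize(value, codebook):
    """Return the index of the nearest codeword (what the node transmits)."""
    return min(range(len(codebook)), key=lambda i: abs(value - codebook[i]))

def build_codebook(samples, k, iters=20):
    """Lloyd's algorithm (1-D k-means) to build a k-entry codebook, k >= 2."""
    samples = sorted(samples)
    # initialize codewords spread evenly over the sample range
    codebook = [samples[i * (len(samples) - 1) // (k - 1)] for i in range(k)]
    for _ in range(iters):
        buckets = [[] for _ in range(k)]
        for s in samples:
            buckets[quantize(s, codebook)].append(s)
        # move each codeword to the centroid of its bucket
        codebook = [sum(b) / len(b) if b else c
                    for b, c in zip(buckets, codebook)]
    return codebook
```

Transmitting an index (a few bits) instead of a full sample is what suppresses the redundant data; neighboring nodes sharing a codebook can exploit spatial redundancy the same way.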
Abstract: The paper discusses the results obtained in predicting the
reinforcement in a singly reinforced beam using Neural Networks (NN),
Support Vector Machines (SVMs) and tree-based models. A major
advantage of SVMs over NNs is that they minimize a bound on the
generalization error of the model, rather than minimizing a bound on
the mean square error over the data set, as NNs do. The tree-based
approach divides the problem into a small number of sub-problems to
reach a conclusion. Data were generated for different beam
parameters, calculating the reinforcement using the limit state
method, for model creation and validation. The results of this study
suggest remarkably good performance of the tree-based and SVM models;
indeed, these two techniques work well and even better than neural
network methods. A comparison of predicted values with actual values
suggests a very good correlation coefficient for all four techniques.
Abstract: This paper addresses a stock-cutting problem with rotation of items and without the guillotine cutting constraint. In order to solve the large-scale problem effectively and efficiently, we propose a simple but fast heuristic algorithm. It is shown that this heuristic outperforms the latest published algorithms for large-scale problem instances.
Abstract: In this paper, a design methodology for implementing a low-power, high-speed 2nd-order recursive digital Infinite Impulse Response (IIR) filter is proposed. Since IIR filters require a large number of constant multiplications, the proposed method replaces the constant multiplications with addition/subtraction and shift operations. The proposed new 6T adder cell is used as the Carry-Save Adder (CSA) to implement the addition/subtraction operations in the design of the recursive section of the IIR filter, reducing the propagation delay. Furthermore, high-level algorithms designed to optimize the number of CSA blocks are used to reduce the complexity of the IIR filter. The DSCH3 tool is used to generate the schematic of the proposed 6T CSA-based shift-adds architecture, which is analyzed using the Microwind CAD tool to synthesize low-complexity, high-speed IIR filters. The proposed design outperforms MUX-12T and MCIT-7T based CSA adder filter designs in terms of power, propagation delay, area and throughput. The experimental results show that the proposed 6T-based design method can find better IIR filter designs, in terms of power and delay, than those obtained using efficient general multipliers.
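The shift-adds idea, replacing a constant multiplication with shifts and additions/subtractions, can be sketched as follows (a behavioral illustration, not the paper's hardware design):

```python
def shift_add_multiply(x, constant):
    """Multiply x by a non-negative integer constant using only shifts
    and adds: one shifted add per set bit of the constant."""
    result = 0
    bit = 0
    c = constant
    while c:
        if c & 1:
            result += x << bit   # add x shifted by this bit position
        c >>= 1
        bit += 1
    return result

def times15(x):
    """Allowing subtraction reduces the operation count further:
    15 = 16 - 1, so x*15 needs one shift and one subtraction
    instead of four shifted adds."""
    return (x << 4) - x
```

In hardware, each shifted operand feeds a carry-save adder stage, which is why minimizing the number of add/subtract terms (as the high-level CSA optimization does) directly reduces area and delay.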
Abstract: The objective of this study was to investigate hydrogen production from alcohol wastewater by an anaerobic sequencing batch reactor (ASBR) under thermophilic operation. The ASBR unit used in this study had a liquid holding volume of 4 L and was operated at 6 cycles per day. The seed sludge, taken from an upflow anaerobic sludge blanket unit treating the same wastewater, was boiled at 95 °C for 15 min before being fed to the ASBR unit. The ASBR system was operated at different COD loading rates at a thermophilic temperature (55 °C) and a controlled pH of 5.5. When the system was operated under optimum conditions (providing maximum hydrogen production performance), at a feed COD of 60,000 mg/L and a COD loading rate of 68 kg/m³·d, the produced gas contained 43% H2. Moreover, the hydrogen yield and the specific hydrogen production rate (SHPR) were 130 mL H2/g COD removed and 2100 mL H2/L·d, respectively.