Abstract: Wireless Sensor Networks consist of inexpensive, low power sensor nodes deployed to monitor the environment and collect
data. Gathering information in an energy-efficient manner is critical to prolonging the network lifetime, and clustering algorithms have the advantage of enhancing it. Current clustering algorithms usually treat global re-clustering and local re-clustering separately. This paper proposes a combination of these two re-clustering methods to reduce the energy consumption of the network. Furthermore, the proposed algorithm can be applied to homogeneous as well as heterogeneous wireless sensor networks. In addition, cluster head rotation happens only when the cluster head's energy drops below a dynamic threshold value computed by the algorithm. The simulation results show that the proposed algorithm prolongs the network lifetime compared to existing algorithms.
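The paper computes its own dynamic threshold; as a minimal sketch only, the rotation rule can be illustrated with a hypothetical threshold equal to a fraction of the cluster's mean residual energy:

```python
def needs_rotation(head_energy, member_energies, fraction=0.5):
    """Rotate the cluster head only when its residual energy drops below a
    dynamic threshold. The threshold here -- `fraction` of the cluster's mean
    residual energy -- is a hypothetical stand-in for the paper's formula."""
    threshold = fraction * sum(member_energies) / len(member_energies)
    return head_energy < threshold
```

Deferring rotation until the threshold is crossed avoids the overhead of re-clustering every round.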
Abstract: Many advanced routing protocols for wireless sensor networks have been implemented for the effective routing of data. Energy awareness is an essential design issue; almost all of these routing protocols are considered energy efficient, and their ultimate objective is to maximize the whole network's lifetime. However, the introduction of video and imaging sensors has posed additional challenges. Transmission of video and imaging data requires both energy-aware and QoS-aware routing in order to ensure efficient usage of the sensors and effective access to the gathered measurements. In this paper, the performance of the energy-aware QoS routing protocol is analyzed against different performance metrics, such as average lifetime of a node, average delay per packet, and network throughput. The parameters considered in this study are end-to-end delay, real-time data generation/capture rates, packet drop probability, and buffer size. The network throughput for real-time and non-real-time data has also been analyzed. The simulation was done in the NS2 simulation environment, and the results were analyzed with respect to the different metrics.
Abstract: The African Great Lakes Region refers to the zone
around lakes Victoria, Tanganyika, Albert, Edward, Kivu, and
Malawi. The main source of electricity in this region is hydropower
whose systems are generally characterized by relatively weak,
isolated power schemes, poor maintenance and technical deficiencies
with limited electricity infrastructures. Most of the hydro sources are
rain fed, and as such there is normally a deficiency of water during
the dry seasons and extended droughts. In such calamities fossil fuels
sources, in particular petroleum products and natural gas, are
normally used to rescue the situation, but apart from being non-renewable,
they also release huge amounts of greenhouse gases into the
environment, which in turn accelerate global warming, now at an
alarming level. Wind power is an ample, renewable,
widely distributed, clean, and free energy source that does not
consume or pollute water. Wind-generated electricity is one of the
most practical and commercially viable options for grid-quality and
utility scale electricity production. However, the main shortcoming
associated with electric wind power generation is fluctuation in its
output both in space and time. Before making a decision to establish
a wind park at a site, the wind speed features there should therefore
be known thoroughly as well as local demand or transmission
capacity. The main objective of this paper is to utilise monthly
average wind speed data collected from one prospective site within
the African Great Lakes Region to demonstrate that the available
wind power there is high enough to generate electricity. The mean
monthly values were calculated from records gathered on hourly
basis for a period of 5 years (2001 to 2005) from a site in Tanzania.
The records, which were collected at a height of 2 m, were
extrapolated to a height of 50 m, the standard hub height of
wind turbines. The overall monthly average wind speed was found to
be 12.11 m/s, and June to November was established to be the
windy season, as the wind speed during this period is above the
overall monthly average. The available wind power density
corresponding to the overall mean monthly wind speed was evaluated
to be 1072 W/m², a potential worth harvesting for the purpose of
electricity generation.
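The two calculations behind these figures, vertical extrapolation of the measured speeds and the available power density P/A = ½ρv³, can be sketched as follows. The power-law shear exponent of 1/7 and the air density of 1.225 kg/m³ are assumed standard values, not taken from the paper:

```python
def extrapolate_wind_speed(v_ref, h_ref, h_target, alpha=1 / 7):
    """Power-law vertical wind profile: v(h) = v_ref * (h / h_ref)^alpha.
    alpha = 1/7 is a commonly assumed shear exponent, not the paper's value."""
    return v_ref * (h_target / h_ref) ** alpha

def wind_power_density(v, rho=1.225):
    """Available wind power per unit swept area, P/A = 0.5 * rho * v^3 (W/m^2),
    with rho = 1.225 kg/m^3 (assumed standard sea-level air density)."""
    return 0.5 * rho * v ** 3
```

With the reported overall mean speed of 12.11 m/s, this formula gives roughly 1.1 kW/m², of the same order as the 1072 W/m² reported.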
Abstract: A new design approach for three-stage operational
amplifiers (op-amps) is proposed. It allows the actual implementation of a
symmetrical push-pull class-AB amplifier output stage for well-established
three-stage amplifiers using a feedforward
transconductance stage. Compared with the conventional design
practice, the proposed approach leads to a significant
improvement of the symmetry between the positive and the
negative op-amp step response, resulting in similar values of the
positive/negative settling times. The new approach proves very
useful for fully exploiting the op-amp's potential in terms of speed
performance. Design examples in a commercial 0.35-μm CMOS
process prove the effectiveness of the proposed strategy.
Abstract: Measurement is the process by which numbers or symbols
are assigned to attributes of entities in the real world in such a way as
to describe them according to clearly defined rules. Software metrics
are instruments for measuring all aspects of a software
product. These metrics are used throughout a software project to
assist in estimation, quality control, productivity assessment, and
project control. Object-oriented software metrics focus on
measurements that are applied to classes and other characteristics.
These measurements give the software engineer insight into the behavior
of the software and how changes can be made to reduce
complexity and improve the software's continuing capability.
Object-oriented software metrics can be classified into two types: static
and dynamic. Static metrics are concerned with all the aspects of
measuring by static analysis of software and dynamic metrics are
concerned with all the measuring aspect of the software at run time.
Most prior work focused on static metrics. Some
work has also been done on the dynamic nature of software
measurement, but this area demands further research.
In this paper we give a set of dynamic metrics specifically for
polymorphism in object-oriented systems.
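As a hedged illustration of what a dynamic polymorphism metric can look like (this is not the specific metric set defined in the paper), one can count at run time how many distinct receiver classes a call site actually dispatches to:

```python
from collections import defaultdict

class DispatchCounter:
    """Records which concrete receiver classes each call site dispatches to."""
    def __init__(self):
        self._receivers = defaultdict(set)

    def record(self, site, obj):
        """Log one dynamic dispatch observed at the named call site."""
        self._receivers[site].add(type(obj).__name__)

    def polymorphism_degree(self, site):
        """Number of distinct receiver classes seen at the call site so far."""
        return len(self._receivers[site])

# Hypothetical classes used only to exercise the metric.
class Circle:
    def area(self):
        return 3.14159

class Square:
    def area(self):
        return 4.0

dc = DispatchCounter()
for shape in (Circle(), Square(), Circle()):
    dc.record("render_loop", shape)
    shape.area()
```

A static metric would only see one call site; the dynamic count reveals that two concrete classes are actually exercised at run time.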
Abstract: This paper presents a cold-flow simulation study of a small gas turbine combustor performed using a laboratory-scale test rig. The main objective of this investigation is to obtain physical insight into the main vortex, responsible for the efficient mixing of fuel and air. Such models are necessary for prediction and optimization of real gas turbine combustors. An air swirler can control combustor performance by assisting in the fuel-air mixing process and by producing a recirculation region which can act as a flame holder and influence residence time. Thus, proper selection of a swirler is needed to enhance combustor performance and to reduce NOx emissions. Three different axial air swirlers were used, based on their vane angles, i.e., 30°, 45°, and 60°. Three-dimensional, viscous, turbulent, isothermal flow characteristics of the combustor model operating at room temperature were simulated via a Reynolds-Averaged Navier-Stokes (RANS) code. The model geometry was created using a solid modeler, and the meshing was done using the GAMBIT preprocessing package. Finally, the solution and analysis were carried out in the FLUENT solver. This serves to demonstrate the capability of the code for design and analysis of real combustors. The effects of swirlers and mass flow rate were examined. Details of the complex flow structure, such as vortices and recirculation zones, were obtained by the simulation model. The computational model predicts a major recirculation zone in the central region immediately downstream of the fuel nozzle and a second recirculation zone in the upstream corner of the combustion chamber. It is also shown that changes in swirler angle have significant effects on the combustor flowfield as well as on pressure losses.
Abstract: This article demonstrates the development of a
controlled-release system for an NSAID drug, diclofenac
sodium, employing different ratios of ethyl cellulose.
Diclofenac sodium and ethyl cellulose in different proportions
were processed by microencapsulation based on phase
separation technique to formulate microcapsules. The
prepared microcapsules were then compressed into tablets to
obtain controlled-release oral formulations. In-vitro evaluation
was performed by a dissolution test of each preparation,
conducted in 900 ml of phosphate buffer solution of pH 7.2
maintained at 37 ± 0.5 °C and stirred at 50 rpm, with samples
collected at predetermined time intervals (0, 0.5, 1.0, 1.5, 2, 3, 4,
6, 8, 10, 12, 16, 20 and 24 hrs). The drug concentration in the collected
samples was determined by UV spectrophotometer at 276 nm.
The physical characteristics of the diclofenac sodium
microcapsules were within the accepted range: they were
off-white, free flowing and spherical in shape. The release
profile of diclofenac sodium from microcapsules was found to
be directly proportional to the proportion of ethylcellulose and
coat thickness. The in-vitro release pattern showed that with
ratio of 1:1 and 1:2 (drug: polymer), the percentage release of
drug at first hour was 16.91 and 11.52 %, respectively as
compared to only 6.87 % for the 1:3 ratio within the same time. The
release pattern followed the Higuchi model. The tablet
formulation (F2) of the present study was found
comparable in release profile to the marketed brand Phlogin-SR, and the
microcapsules showed an extended release beyond 24 h.
Further, a good correlation was found between drug release
and proportion of ethylcellulose in the microcapsules.
Microencapsulation based on coacervation was found to be a good
technique for controlling the release of diclofenac sodium in
controlled-release formulations.
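The Higuchi model predicts cumulative release proportional to the square root of time, Q(t) = k√t. A minimal least-squares fit of the rate constant can be sketched as follows; the data below are synthetic, generated only to exercise the fit, not the paper's measurements:

```python
import math

def higuchi_k(times, release):
    """Least-squares estimate of k in the Higuchi model Q(t) = k * sqrt(t):
    minimizing sum (Q_i - k*sqrt(t_i))^2 gives
    k = sum(Q_i * sqrt(t_i)) / sum(t_i)."""
    num = sum(q * math.sqrt(t) for t, q in zip(times, release))
    den = sum(times)
    return num / den

# Synthetic check: data generated exactly from k = 5, so the fit recovers 5.
ts = [0.5, 1, 2, 4, 8, 12, 24]
qs = [5.0 * math.sqrt(t) for t in ts]
```

Plotting release against √t and checking linearity is the usual visual test for Higuchi-type kinetics.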
Abstract: In any trust model, the two information sources that a peer relies on to predict the trustworthiness of another peer are direct experience and reputation. These two vital components evolve over time. Trust evolution is an important issue, where the objective is to observe a sequence of past values of a trust parameter and determine the future estimates. Unfortunately, trust evolution algorithms have received little attention, and the algorithms proposed in the literature do not comply with the conditions and the nature of trust. This paper contributes to this important problem in the following ways: (a) it presents an algorithm that manages and models trust evolution in a P2P environment, (b) it devises new mechanisms for effectively maintaining trust values based on the conditions that influence trust evolution, and (c) it introduces a new methodology for incorporating trust-nurture incentives into the trust evolution algorithm. Simulation experiments are carried out to evaluate our trust evolution algorithm.
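The paper devises its own maintenance mechanisms; as a common baseline only, direct-experience trust is often maintained with an exponentially weighted update that gives recent observations more influence than old ones:

```python
def update_trust(current, observation, alpha=0.2):
    """Exponentially weighted trust update: blend the current trust value with
    a new direct observation in [0, 1]. alpha is an assumed learning weight;
    this is a generic baseline, not the algorithm proposed in the paper."""
    return (1 - alpha) * current + alpha * observation
```

Because the weight on each past observation decays geometrically, the estimate tracks changes in a peer's behavior over time, which is the essence of trust evolution.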
Abstract: Solar power plants (SPPs) have shown good outcomes
in providing various functions demanded by industry by
deploying ad-hoc networks with the help of lightly loaded, battery-powered
sensor nodes. In particular, there is a strong need for an algorithm to
deliver the sensing data from the end nodes of solar power plants to the sink node
on time. In this paper, based on the above observation, we propose an
IEEE 802.15.4-based self-routing scheme for solar power plants. The proposed
Beacon-based Priority Routing Algorithm (BPRA) utilizes beacon
periods to send messages embedding high-priority data and thus
provides a high quality of service (QoS) under the given criteria. The performance
measures are packet throughput, delivery ratio, latency, and total energy
consumption. Simulation results under the TinyOS Simulator (TOSSIM) have
shown that the proposed scheme outperforms conventional Ad hoc On-Demand
Distance Vector (AODV) routing in solar power plants.
Abstract: The expectation of network performance from the
early days of ARPANET until now has changed significantly.
Every day, new advances in technological infrastructure open
the doors for better quality of service, and accordingly the level of
perceived quality of network services has increased over
time. Nowadays, for many applications, late information has no value
or may even result in financial or catastrophic loss; on the other hand,
demands for some level of guarantee in providing and maintaining
quality of service are ever increasing. Based on this history, having a
QoS aware routing system which is able to provide today's required
level of quality of service in the networks and effectively adapt to the
future needs, seems to be a key requirement for the future Internet. In this
work we have extended the traditional AntNet routing system to
support QoS with multiple metrics, such as bandwidth and delay;
the result is named Q-Net. This novel scalable QoS routing system aims
to provide different types of services in the network simultaneously.
Each type of service can be provided for a period of time in the
network and network nodes do not need to have any previous
knowledge about it. When a type of quality of service is requested,
Q-Net will allocate required resources for the service and will
guarantee QoS requirement of the service, based on target objectives.
Abstract: The elution process for the removal of Co and Cu from clinoptilolite as an ion-exchanger was investigated using three parameters: bed volume, pH, and contact time. The present study has shown quantitatively that acid concentration has a significant effect on the elution process. The favorable eluant concentrations were found to be 2 M HCl and 2 M H2SO4, respectively. The multi-component equilibrium relationship in the process can be very complex, and perhaps ill-defined. In such circumstances, it is preferable to use a non-parametric technique such as a neural network to represent the equilibrium relationship.
Abstract: In this paper, we propose a fast and efficient method for drawing very large-scale graph data. The conventional force-directed method proposed by Fruchterman and Reingold (FR method) is well known. It defines repulsive forces between every pair of nodes and attractive forces between nodes connected by an edge, and calculates the corresponding potential energy. An optimal layout is obtained by iteratively updating node positions to minimize the potential energy. Here, the positions of all nodes are updated at the same time, every global timestep. In the proposed method, each node has its own individual time and time step, and nodes are updated at different frequencies depending on the local situation. The proposed method is inspired by the hierarchical individual time step method used for high-accuracy calculations of dense particle fields, such as star clusters, in astrophysical dynamics. Experiments show that the proposed method outperforms the original FR method in both speed and accuracy. We implement the proposed method on the MDGRAPE-3 PCI-X special-purpose parallel computer and realize a speed enhancement of several hundred times.
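A minimal sketch of one global-timestep FR update, using the standard force laws (repulsion k²/d between all pairs, attraction d²/k along edges); the explicit step size `dt` stands in for FR's temperature schedule and is an illustrative choice, not the paper's implementation:

```python
import math

def fr_step(pos, edges, k, dt=0.05):
    """One global-timestep Fruchterman-Reingold update on 2-D positions."""
    n = len(pos)
    disp = [[0.0, 0.0] for _ in range(n)]
    # Repulsive forces between every pair of nodes.
    for i in range(n):
        for j in range(i + 1, n):
            dx, dy = pos[i][0] - pos[j][0], pos[i][1] - pos[j][1]
            d = math.hypot(dx, dy) or 1e-9
            f = k * k / d
            disp[i][0] += f * dx / d; disp[i][1] += f * dy / d
            disp[j][0] -= f * dx / d; disp[j][1] -= f * dy / d
    # Attractive forces along edges.
    for i, j in edges:
        dx, dy = pos[i][0] - pos[j][0], pos[i][1] - pos[j][1]
        d = math.hypot(dx, dy) or 1e-9
        f = d * d / k
        disp[i][0] -= f * dx / d; disp[i][1] -= f * dy / d
        disp[j][0] += f * dx / d; disp[j][1] += f * dy / d
    # All nodes advance by the same global step dt.
    return [[p[0] + dt * v[0], p[1] + dt * v[1]] for p, v in zip(pos, disp)]
```

The proposed method replaces the single global `dt` in the last line with a per-node time step chosen from the local force situation, so rapidly moving nodes are updated more often than settled ones.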
Abstract: The future of business intelligence (BI) is to integrate
intelligence into operational systems that work in real time,
analyzing small chunks of data continuously as requirements
dictate. This is a move away from the traditional approach of doing
analysis on an ad-hoc basis, or sporadically in a passive, off-line mode,
analyzing huge amounts of data. Various AI techniques such as expert
systems, case-based reasoning, and neural networks play an important role
in building business intelligence systems. Since BI involves various
tasks and models various types of problems, hybrid intelligent
techniques can be a better choice. Intelligent systems accessible
through web services make it easier to integrate them into existing
operational systems to add intelligence to every business process.
These can be built to be invoked in modular and distributed way to
work in real time. Functionality of such systems can be extended to
get external inputs compatible with formats like RSS. In this paper,
we describe a framework that uses effective combinations of these
techniques, is accessible through web services, and works in real time.
We have successfully developed various prototype systems and completed a
few commercial deployments in the area of personalization and
recommendation on mobile platforms and websites.
Abstract: Functional near-infrared spectroscopy (fNIRS) is a
practical non-invasive optical technique for detecting characteristics of
the hemoglobin density dynamics during functional activation of
the cerebral cortex. In this paper, fNIRS measurements were made in
the area of motor cortex from C4 position according to international
10-20 system. Three subjects, aged 23 - 30 years, participated in
the experiment.
The aim of this paper was to evaluate the effects of different motor
activation tasks on the hemoglobin density dynamics of the fNIRS signal.
The chaotic concept based on deterministic dynamics is an important
feature in biological signal analysis. This paper employs chaotic
properties, in a novel method of nonlinear analysis, to analyze
and quantify the chaos in the time series of the
hemoglobin dynamics for the various motor imagery tasks of the fNIRS
signal. Usually, hemoglobin density in the human brain cortex is
found to change slowly in time. Inevitable noise caused by various
factors is included in the signal, so principal component analysis
(PCA) is utilized to remove high-frequency components. The
phase space is reconstructed, and the Lyapunov spectrum and
Lyapunov dimensions are evaluated. From the experimental results, it can be
concluded that the signals measured by fNIRS are chaotic.
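Phase-space reconstruction from a scalar time series is conventionally done by time-delay embedding; a minimal sketch follows (the embedding dimension and delay would in practice be chosen from the data, e.g. by false-nearest-neighbor and mutual-information criteria):

```python
def delay_embed(x, dim, tau):
    """Time-delay embedding: build phase-space vectors
    [x(i), x(i + tau), ..., x(i + (dim - 1) * tau)] from a scalar series x."""
    n = len(x) - (dim - 1) * tau
    return [[x[i + d * tau] for d in range(dim)] for i in range(n)]
```

The Lyapunov spectrum is then estimated from how nearby points in this reconstructed space diverge over time.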
Abstract: Flash floods are natural disasters that can
cause casualties and destruction of infrastructure. The problem is
that flash floods, particularly in arid and semi-arid zones, take place
in a very short time. So, it is important to forecast flash floods ahead of
their occurrence, with a lead time of up to 48 hours, to give early warning alerts
to avoid or minimize disasters. The flash flood that took place over Wadi
Watier, Sinai Peninsula, on October 24th, 2008, has been simulated,
investigated, and analyzed using a state-of-the-art regional weather
model. The Weather Research and Forecasting (WRF) model, which is a
reliable short term forecasting tool for precipitation events, has been
utilized over the study area. The model results have been calibrated
against the real rainfall measurements, for the same date and time,
recorded at the Sorah gauging station. The WRF model
forecasted a total rainfall of 11.6 mm, while the measured value was
10.8 mm. The calibration shows significant consistency between the
WRF model and the real measurements.
Abstract: Several studies have been carried out, using various techniques, including neural networks, to discriminate vigilance states in humans from electroencephalographic (EEG) signals, but we are still far from satisfactorily usable results. The work presented in this paper aims at improving this status in two respects. Firstly, we introduce an original procedure combining two neural networks, a self-organizing map (SOM) and a learning vector quantization (LVQ) network, that automatically detects artefacted states and separates the different levels of vigilance, a major breakthrough in the field. Secondly, and more importantly, our study has been oriented toward real-world situations, and the resulting model can easily be implemented as a wearable device: it has restricted computational and memory requirements, and data access is very limited in time. Furthermore, ongoing work indicates that this study should shortly result in the design and realization of a non-invasive wearable electronic device.
Abstract: Data mining has been used very frequently to extract
hidden information from large databases. This paper suggests the use
of decision trees for continuously extracting the clinical reasoning, in
the form of medical experts' actions, that is inherent in a large number
of EMRs (Electronic Medical Records). In this way the extracted data
could be used to teach students of oral medicine a number of orderly
processes for dealing with patients who present with different
problems within the practice context over time.
Abstract: A direct adaptive controller for a class of unknown nonlinear discrete-time systems is presented in this article. The proposed controller is constructed from a fuzzy rules emulated network (FREN). With its simple structure, human knowledge about the plant is transferred into if-then rules for setting up the network. The adjustable parameters inside FREN are tuned by a learning mechanism with a time-varying step size, or learning rate. The variation of the learning rate is introduced by the main theorem to improve the system's performance and stabilization. Furthermore, the boundedness of the adjustable parameters is guaranteed through the on-line learning and the properties of the membership functions. The validation of the theoretical findings is demonstrated by some illustrative examples.
Abstract: This work presents the Risk Threshold RED (RTRED)
congestion control strategy for TCP networks. In addition to the
maximum and minimum thresholds in existing RED-based strategies,
we add a third dropping level. This new dropping level is the risk
threshold which works with the actual and average queue sizes to
detect imminent congestion in gateways. RTRED reacts to congestion
on time: neither too early, which would cause unfair packet losses,
nor too late, which would cause packet drops from time-outs.
We compared our novel strategy with the RED
and ARED strategies for TCP congestion handling using an NS-2
simulation script. We found that the RTRED strategy outperformed
RED and ARED.
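The exact RTRED decision rule is defined in the paper; a hedged sketch of a three-threshold RED-style gateway, in which the risk level consults the instantaneous queue while the classic RED zone uses the average, might look like:

```python
def should_drop(avg_q, actual_q, min_th, risk_th, max_th, rand, max_p=0.1):
    """Three-level drop decision (assumed semantics, not the paper's exact rule):
    - avg_q >= max_th: hard congestion, always drop;
    - actual_q >= risk_th while avg_q is already elevated: imminent congestion;
    - min_th <= avg_q < max_th: classic RED probabilistic drop;
    - otherwise: accept. `rand` is a uniform [0, 1) sample passed in explicitly."""
    if avg_q >= max_th:
        return True
    if actual_q >= risk_th and avg_q >= min_th:
        return True
    if avg_q >= min_th:
        return rand < max_p * (avg_q - min_th) / (max_th - min_th)
    return False
```

Consulting the instantaneous queue lets the gateway react to a sudden burst before the slow-moving average crosses the maximum threshold.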
Abstract: Composite pins of rubber dust, collected from tyre
retreading centres for trucks, cars, buses, etc., and epoxy, with
weight percentages of 10, 15, and 20 % of rubber (weight fractions of
9, 13 and 17 %, respectively), have been prepared in house with the
help of a split wooden mould. The pins were tested in a pin-on-disc
wear monitor to determine the co-efficient of friction and weight
losses with varying speeds, loads and time. The wear volume and
wear rates have also been found for all three specimens. It
is observed that all the specimens exhibited a very low coefficient
of friction and low wear rates under dry sliding conditions. Of the
three samples tested, the specimen with 10 % rubber dust by
weight showed the lowest wear rates. However, a peculiar result, i.e.
a decreasing trend, was obtained with 20 % reinforcement of rubber
in epoxy when rubbed against steel at varying speeds. This might
have occurred due to the high surface finish of the disc and the
formation of a thin transfer layer from the composite.