Abstract: At present, it is very common to find renewable energy resources, especially wind power, connected to distribution systems. The impact of this wind power on distribution voltage levels has been addressed in the literature. The majority of these works deal with determining the maximum active and reactive power that can be connected at a system load bus before the voltage at that bus reaches the voltage collapse point, using the traditional PV-curve methods reported in many references. Here, a theoretical expression for the maximum power transfer through a grid, as limited by voltage stability, is formulated using an exact representation of the distribution line with ABCD parameters. The expression is used to plot PV curves of a radial system at various power factors, from which the limiting values of reactive power can be obtained. This paper thus presents a method to study the relationship between active power and voltage (PV) at the load bus in order to identify the voltage stability limit. It provides a foundation for building a permitted operating region that complies with the voltage stability limit at the point of common coupling (PCC) of a connected wind farm.
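The PV-curve construction described above can be illustrated with a simplified two-bus model (a hypothetical short, lossless line, not the paper's full ABCD representation): for a load P at power-factor angle φ fed through reactance X from a source E, the receiving-end voltage V satisfies V⁴ + (2QX − E²)V² + X²(P² + Q²) = 0 with Q = P tan φ, and the nose of the PV curve is where the real solutions vanish.

```python
import math

def receiving_voltages(P, E=1.0, X=0.1, pf_angle=0.0):
    """Solve the two-bus power-flow quadratic for the receiving-end voltage.

    Returns the (high, low) voltage solutions in per unit, or None past the
    nose (voltage-collapse) point.  Q = P * tan(pf_angle), lagging positive.
    """
    Q = P * math.tan(pf_angle)
    b = 2 * Q * X - E ** 2
    disc = b ** 2 - 4 * X ** 2 * (P ** 2 + Q ** 2)
    if disc < 0:
        return None                      # beyond the voltage-stability limit
    hi = math.sqrt((-b + math.sqrt(disc)) / 2)
    lo = math.sqrt((-b - math.sqrt(disc)) / 2)
    return hi, lo

def nose_point(E=1.0, X=0.1, pf_angle=0.0, dP=1e-3):
    """Sweep P until the quadratic loses real solutions: the PV-curve nose."""
    P = 0.0
    while receiving_voltages(P + dP, E, X, pf_angle) is not None:
        P += dP
    return P
```

Setting pf_angle > 0 (a lagging load) pulls the nose point in, consistent with the abstract's point that reactive power limits the transferable active power; for unity power factor the model recovers the textbook limit P_max = E²/(2X).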
Abstract: Traditionally, the Internet has provided best-effort service to every user regardless of their requirements. However, as the Internet becomes universally available, users demand more bandwidth, applications require more and more resources, and interest has developed in having the Internet provide some degree of Quality of Service (QoS). Although QoS is an important issue, the question of how it will be brought into the Internet has not yet been solved. Owing to rapid advances in technology, researchers are proposing new and more desirable capabilities for the next generation of IP infrastructures. But not all applications demand the same amount of resources, nor are all users service providers. This paper is therefore the first of a series that presents an architecture as a first step towards the optimization of QoS in the Internet environment, as a solution for an SMSE whose objective is to provide public Internet service with certain Quality of Service expectations. The service opens new business opportunities, but also presents new challenges. We have designed and implemented a scalable service framework that supports adaptive bandwidth based on user demands, and billing based on usage and on QoS. The developed application has been evaluated, and the results show that traffic limiting performs optimally, as does the distribution of excess bandwidth. Research is currently under way in two basic areas: (i) developing and testing new transfer protocols, and (ii) developing new strategies for traffic improvement based on service differentiation.
Abstract: Multi-Radio Multi-Channel Wireless Mesh Networks (MRMC-WMNs) operate at the backbone to access and route high volumes of traffic simultaneously. Such roles demand high network capacity and long "online" time, at the expense of accelerated transmission-energy depletion and poor connectivity; this is the problem of transmission power control. Numerous power control methods for wireless networks exist in the literature; however, contributions towards MRMC configurations still face many challenges. In this paper, an energy-efficient power selection protocol called PMMUP is proposed at the link layer. The protocol first divides the MRMC-WMN into a set of unified channel graphs (UCGs), where a UCG consists of multiple radios interconnected via a common wireless channel. In each UCG, a stochastic linear quadratic cost function is formulated, and each user minimizes this cost function, which trades off the size of the unification states against the control action. The unification state variables come from independent UCGs and from higher layers of the protocol stack. PMMUP coordinates power optimization across the network interface cards (NICs) of wireless mesh routers. The proposed PMMUP-based algorithm is shown analytically to converge quickly, at a linear rate, and performance evaluations through simulations confirm the efficacy of the proposed dynamic power control.
Abstract: Careful design and selection of daylighting systems can greatly help not only in reducing artificial lighting use, but also in decreasing cooling energy consumption and, therefore, in creating the potential for downsizing air-conditioning systems. This paper evaluates the energy performance of two types of top-light daylighting systems, integrating daylight with artificial lighting, in an existing examination hall at University Kebangsaan Malaysia under a hot and humid climate. Computer simulation models have been created for the building case study (base case) and for the two top-light daylighting designs, and their energy performance has been evaluated with the VisualDOE 4.0 building energy simulation program. The findings reveal that daylighting through top-light systems is a very beneficial design strategy for reducing both annual lighting energy consumption and overall total annual energy consumption.
Abstract: This paper presents a new problem-solving approach that is able to generate an optimal policy for finite-state stochastic sequential decision-making problems with high data efficiency. The proposed algorithm iteratively builds and improves an approximate Markov Decision Process (MDP) model, along with cost-to-go value approximations, by generating finite-length trajectories through the state space. The approach creates a synergy between the approximate evolving model and the approximate cost-to-go values to produce a sequence of improving policies that finally converges to the optimal policy through an intelligent and structured search of the policy space. It modifies the policy update step of policy iteration so as to achieve speedy and stable convergence to the optimal policy. We apply the algorithm to a non-holonomic mobile robot control problem and compare its performance with other Reinforcement Learning (RL) approaches, e.g., a) Q-learning, b) Watkins' Q(λ), and c) SARSA(λ).
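For reference, classic policy iteration (the baseline whose policy-update step the paper modifies) can be sketched on a small hand-built MDP; this is the generic textbook procedure, not the authors' trajectory-based algorithm:

```python
import numpy as np

def policy_iteration(P, R, gamma=0.9, max_iter=100):
    """Classic policy iteration on a finite MDP.

    P: (A, S, S) transition probabilities, R: (A, S) expected rewards.
    Returns a deterministic optimal policy and its value function.
    """
    A, S, _ = P.shape
    policy = np.zeros(S, dtype=int)
    for _ in range(max_iter):
        # Policy evaluation: solve (I - gamma * P_pi) V = R_pi exactly.
        P_pi = P[policy, np.arange(S), :]
        R_pi = R[policy, np.arange(S)]
        V = np.linalg.solve(np.eye(S) - gamma * P_pi, R_pi)
        # Policy improvement: act greedily w.r.t. the one-step lookahead.
        Q = R + gamma * P @ V            # shape (A, S)
        new_policy = Q.argmax(axis=0)
        if np.array_equal(new_policy, policy):
            break
        policy = new_policy
    return policy, V

# Toy 3-state chain: action 1 ("right") moves toward absorbing goal state 2;
# the only reward is 1 for entering the goal from state 1.
P = np.zeros((2, 3, 3))
P[0, [0, 1, 2], [0, 0, 2]] = 1.0    # action 0: "left"/stay
P[1, [0, 1, 2], [1, 2, 2]] = 1.0    # action 1: "right"
R = np.zeros((2, 3))
R[1, 1] = 1.0
policy, V = policy_iteration(P, R)  # policy -> [1, 1, 0], V -> [0.9, 1.0, 0.0]
```

The exact linear-solve evaluation step is what model-free RL baselines such as Q-learning replace with sampled updates; the paper's contribution sits between the two, learning an approximate model from trajectories.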
Abstract: Research shows that the application of probability-statistical methods, especially at the early stages of diagnosing the technical condition of an aviation Gas Turbine Engine (GTE), is unfounded when the flight information is fuzzy, limited and uncertain. Hence the efficiency of applying Soft Computing technology at these diagnosing stages, using Fuzzy Logic and Neural Network methods, is considered. To this end, fuzzy multiple linear and non-linear models (fuzzy regression equations), obtained from statistical fuzzy data, are trained with high accuracy. To build a more adequate model of GTE technical condition, the dynamics of changes in the skewness and kurtosis coefficients is analysed. Studies of these changes show that the distributions of GTE operating parameters have a fuzzy character, so consideration of fuzzy skewness and kurtosis coefficients is expedient. Investigation of the dynamics of the basic characteristics of GTE operating parameters leads to the conclusion that Fuzzy Statistical Analysis is necessary for preliminary identification of the engines' technical condition. Studies of the changes in correlation coefficient values also reveal their fuzzy character; therefore, for model selection, the use of Fuzzy Correlation Analysis results is proposed. When the information is sufficient, a recurrent algorithm for identifying aviation GTE technical condition (using Hard Computing technology) is proposed, based on measurements of the input and output parameters of the multiple linear and non-linear generalised models in the presence of measurement noise (a new recursive Least Squares Method (LSM)). The developed GTE condition monitoring system provides stage-by-stage estimation of engine technical condition. As an application of the given technique, the technical condition of a new operating aviation engine was estimated.
Abstract: Isobaric vapor-liquid equilibrium measurements are reported for the binary mixtures of n-Butylamine and Triethylamine with Cumene at 97.3 kPa. The measurements have been performed using a vapor recirculating type (modified Othmer's) equilibrium still. The binary mixture of n-Butylamine + Cumene shows positive deviation from ideality. Triethylamine + Cumene mixture shows negligible deviation from ideality. None of the systems form an azeotrope. The activity coefficients have been calculated taking into consideration the vapor phase nonideality. The data satisfy the thermodynamic consistency test of Herington. The activity coefficients have been satisfactorily correlated by means of the Margules, NRTL, and Black equations. The activity coefficient values obtained by the UNIFAC model are also reported.
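The Margules correlation mentioned in the abstract can be illustrated with the standard two-parameter form (the parameter values below are illustrative, not the paper's fitted constants): ln γ₁ = x₂²[A₁₂ + 2(A₂₁ − A₁₂)x₁] and ln γ₂ = x₁²[A₂₁ + 2(A₁₂ − A₂₁)x₂].

```python
import math

def margules_gammas(x1, A12, A21):
    """Two-parameter Margules activity coefficients for a binary mixture.

    x1 is the liquid mole fraction of component 1; A12 and A21 are the
    dimensionless Margules constants (ln-gamma at infinite dilution).
    """
    x2 = 1.0 - x1
    ln_g1 = x2 ** 2 * (A12 + 2.0 * (A21 - A12) * x1)
    ln_g2 = x1 ** 2 * (A21 + 2.0 * (A12 - A21) * x2)
    return math.exp(ln_g1), math.exp(ln_g2)
```

Positive constants give γ > 1, i.e. positive deviation from ideality as reported for n-Butylamine + Cumene; constants near zero reproduce the near-ideal behaviour of Triethylamine + Cumene.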
Abstract: Sedimentation resulting from soil erosion in a water basin, especially in arid and semi-arid regions where vegetation cover on the upstream mountain slopes is poor, contributes to sediment formation. Sedimentation not only causes considerable change in the morphology of the river and in its hydraulic characteristics, but also poses a major challenge for the operation and maintenance of the canal networks that depend on the water flow to meet stakeholders' requirements.
For this reason, mathematical modeling can be used to simulate the factors affecting scouring, sediment transport and settling along waterways. This is particularly important behind reservoirs, where it enables operators to estimate the useful life of these hydraulic structures. The aim of this paper is to simulate sedimentation and erosion in the eastern and western water intake structures of the Dez Diversion weir using the GSTARS-3 software, in order to estimate sedimentation and to investigate ways to optimize the process and minimize operational problems. Results indicated that the coarser sediment grains tended to settle at the furthest point upstream of the diversion weir. The reason for this is the construction of the phantom bridge and the outstanding rocks just upstream of the structure: these works along the river course have reduced the momentum required to push the sediment load, making it possible for the sediment to settle wherever the river regime allows. Results further indicated a trend in sediment size, with the grains becoming finer as the focus of study shifts downstream, and vice versa. It was also found that the GSTARS-3 results agreed closely with the observed data, suggesting that the software is a powerful analytical tool that can be applied in river engineering projects at minimum cost and with relatively accurate results.
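The link between grain size and where sediment settles can be illustrated with the classic Stokes settling-velocity law for fine grains (a textbook relation used here for orientation only, not part of the GSTARS-3 model): w = g d² (ρₛ − ρ_w) / (18 μ).

```python
def stokes_settling_velocity(d, rho_s=2650.0, rho_w=1000.0, mu=1.0e-3, g=9.81):
    """Stokes settling velocity (m/s) of a grain of diameter d (m) in water.

    Valid for small grains in the laminar regime; quartz density and the
    viscosity of water at ~20 C are assumed.  Coarser grains settle faster,
    which is why they drop out first where the flow loses momentum.
    """
    return g * d ** 2 * (rho_s - rho_w) / (18.0 * mu)
```

For a 0.1 mm grain this gives roughly 9 mm/s, and the quadratic dependence on diameter explains the observed downstream fining: slower-settling fine grains stay in suspension longer.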
Abstract: Oxidative stress and the overwhelming free radicals associated with diabetes mellitus are likely to be linked with the development of complications such as retinopathy, nephropathy and neuropathy. Treating diabetic subjects with antioxidants may therefore help attenuate these complications. Olive leaf (Olea europaea) has been endowed with many beneficial, health-promoting properties, mostly linked to its antioxidant activity. This study aimed to evaluate the significance of supplementation with olive leaf extract (OLE) in reducing oxidative stress, hyperglycemia and hyperlipidemia in Streptozotocin (STZ)-induced diabetic rats. After induction of diabetes, a significant rise in plasma glucose, lipid profiles except high-density lipoprotein cholesterol (HDLc), and malondialdehyde (MDA), together with a significant decrease in plasma insulin, HDLc and plasma reduced glutathione (GSH), as well as alterations in enzymatic antioxidants, was observed in all diabetic animals. On treating diabetic rats with 0.5 g/kg body weight of OLE, the levels of plasma MDA, GSH, insulin and lipid profiles, along with blood glucose and erythrocyte antioxidant enzymes, were significantly restored to values not different from those of normal control rats. Untreated diabetic rats, on the other hand, demonstrated persistent alterations in the oxidative stress marker (MDA), blood glucose, insulin, lipid profiles and the antioxidant parameters. These results demonstrate that OLE may help inhibit the hyperglycemia, hyperlipidemia and oxidative stress induced by diabetes, and suggest that administration of OLE may be helpful in preventing, or at least reducing, the diabetic complications associated with oxidative stress.
Abstract: The quality of short-term load forecasting (STLF) can improve the efficiency of planning and operation of electric utilities. Artificial Neural Networks (ANNs) are employed for nonlinear short-term load forecasting owing to their powerful nonlinear mapping capabilities. At present, there is no systematic methodology for the optimal design and training of an artificial neural network; one often has to resort to trial and error. This paper describes the process of developing three-layer feed-forward large neural networks for short-term load forecasting, and then presents a heuristic search algorithm for an important task in this process, namely optimal network structure design. Particle Swarm Optimization (PSO) is used to develop the optimum large neural network structure and connection weights for the one-day-ahead electric load forecasting problem. PSO is a stochastic optimization method based on swarm intelligence with a powerful global optimization capability; employing PSO for the design and training of ANNs allows the ANN architecture and parameters to be optimized easily. The proposed method is applied to STLF for a local utility. Data are clustered according to differences in their characteristics, and special days are extracted from the normal training sets and handled separately; in this way, a solution is provided for all load types, including working days, weekends and special days. The experimental results show that the proposed PSO-optimized method speeds up network learning and improves forecasting precision compared with the conventional Back Propagation (BP) method. Moreover, it is simple to compute, practical and effective; it provides a greater degree of accuracy in many cases and consistently gives lower percentage errors for the STLF problem than the BP method. Thus, it can be applied to automatically design an optimal load forecaster from historical data.
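As a generic illustration of the optimizer used above (a minimal global-best PSO minimizing a test function, not the paper's network-structure encoding or its clustering pipeline):

```python
import numpy as np

def pso(objective, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5,
        bounds=(-5.0, 5.0), seed=0):
    """Minimal global-best particle swarm optimizer (minimization)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, size=(n_particles, dim))   # positions
    v = np.zeros_like(x)                               # velocities
    pbest = x.copy()                                   # personal bests
    pbest_f = np.array([objective(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()                 # global best
    g_f = pbest_f.min()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # Inertia + cognitive pull (pbest) + social pull (gbest).
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([objective(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        if pbest_f.min() < g_f:
            g_f = pbest_f.min()
            g = pbest[pbest_f.argmin()].copy()
    return g, g_f
```

In the paper's setting the particle would encode hidden-layer sizes and connection weights, and the objective would be forecasting error on validation data; here any vector-to-scalar objective (e.g. the sphere function) demonstrates the mechanics.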
Abstract: Gastric ulceration is a discontinuity in the gastric mucosa that usually occurs due to an imbalance between the gastric mucosal protective factors, collectively called the gastric mucosal barrier, and the aggressive factors to which the mucosa is exposed. This study was carried out on sixty male Sprague-Dawley rats (12-16 weeks old) allocated to two groups: a control group, and a gastric-lesion group in which lesions were induced by oral administration of a single daily dose of aspirin at 300 mg/kg body weight for 7 consecutive days (a 6% aspirin solution was prepared, and each rat was given 5 ml of that solution per kg body weight). Blood was collected 1, 2 and 3 weeks after induction of gastric ulceration. A significant increase in serum copper, nitric oxide and prostaglandin E2 was found over the whole experimental period, together with a significant decrease in erythrocyte superoxide dismutase (t-SOD) activity and in serum calcium, phosphorus, glucose and insulin levels. Non-significant changes in serum sodium and potassium levels were obtained.
Abstract: With the surge of stream processing applications, novel techniques are required for the generation and analysis of association rules in streams. Traditional rule mining solutions cannot handle streams because they generally require multiple passes over the data and do not guarantee results within a predictable, small time. Though researchers have proposed algorithms for generating rules from streams, there has not been much focus on their analysis.
We propose association rule profiling, a user-centric process for analyzing association rules and attaching suitable profiles to them depending on their changing frequency behavior over a previous snapshot of time in a data stream.
Association rule profiles provide insights into the changing nature of associations and can be used to characterize them. We discuss the importance of characteristics such as the predictability of linkages present in the data and propose a metric to quantify it. We also show how association rule profiles can aid in the generation of user-specific, more understandable and actionable rules.
The framework is implemented as SUPAR: System for User-centric Profiling of Association Rules in streaming data. The proposed system offers the following capabilities:
i) Continuous monitoring of the frequency of streaming item-sets and detection of significant changes therein for association rule profiling.
ii) Computation of metrics for quantifying the predictability of associations present in the data.
iii) User-centric control of the characterization process: the user can control the framework through a) constraint specification and b) non-interesting rule elimination.
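Capability (i) can be sketched as a comparison of item-set supports between two window snapshots of the stream; the function name and thresholds below are illustrative, not SUPAR's actual interface:

```python
from collections import Counter

def frequency_change_profile(prev_window, curr_window,
                             min_support=0.1, delta=0.5):
    """Label each item-set whose relative support changed by more than
    `delta` between two stream snapshots as 'rising' or 'falling'.

    Windows are sequences of (hashable) item-sets; item-sets below
    min_support in both snapshots are skipped as uninteresting.
    """
    n_prev, n_curr = len(prev_window), len(curr_window)
    prev, curr = Counter(prev_window), Counter(curr_window)
    profiles = {}
    for item in set(prev) | set(curr):
        s_prev = prev[item] / n_prev
        s_curr = curr[item] / n_curr
        if max(s_prev, s_curr) < min_support:
            continue
        if s_curr >= s_prev * (1 + delta):
            profiles[item] = "rising"
        elif s_curr <= s_prev * (1 - delta):
            profiles[item] = "falling"
        else:
            profiles[item] = "stable"
    return profiles
```

A real stream profiler would maintain these counts incrementally over a sliding window rather than re-counting, but the snapshot comparison conveys the profiling idea.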
Abstract: An attempt has been made to investigate the machinability of AISI 4340 steel when turned with zirconia-toughened alumina (ZTA) inserts. The inserts were prepared by a powder metallurgy route, and the machining experiments were performed according to a Response Surface Methodology (RSM) design called Central Composite Design (CCD). Mathematical models of flank wear, cutting force and surface roughness have been developed using second-order regression analysis, and the adequacy of the models has been checked using analysis of variance (ANOVA). It can be concluded that cutting speed and feed rate are the two most influential factors for predicting flank wear and cutting force, while for surface roughness both cutting speed and depth of cut make significant contributions. The effect of the key parameters on each response is also presented as graphical contours for choosing the operating parameters precisely. A desirability level of 83% has been achieved under the optimized conditions.
Abstract: In this paper, a periodic surveillance scheme is proposed for any convex region using mobile wireless sensor nodes. A sensor network typically consists of a fixed number of sensor nodes that report measurements of sensed data, such as temperature, pressure and humidity, in their immediate proximity (the area within their sensing range). To sense an area of interest, an adequate number of fixed sensor nodes is required to cover the entire region, and this number depends on the sensing range of the sensors as well as the deployment strategy employed. Here it is assumed instead that the sensors are mobile within the region of surveillance, e.g., mounted on moving bodies such as robots or vehicles. In our scheme, therefore, the surveillance time period determines the number of sensor nodes that must be deployed in the region of interest. The proposed scheme comprises three algorithms, namely Hexagonalization, Clustering, and Scheduling. The first algorithm partitions the coverage area into fixed-size hexagons that approximate the sensing range (cell) of an individual sensor node. The clustering algorithm groups the cells into clusters, each of which is covered by a single sensor node. The scheduling algorithm determines a schedule for each sensor to serve its respective cluster: each sensor node traverses all the cells belonging to its assigned cluster, oscillating between the first and last cell for the duration of its lifetime. Simulation results show that our scheme provides full coverage within a given period of time using a small number of sensors, with minimal movement, lower power consumption, and relatively low infrastructure cost.
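The relationship between the surveillance period and the node count can be sketched with a back-of-the-envelope sizing (geometry only, under the assumption that each hexagonal cell has side length equal to the sensing range so the sensing disc covers it; the paper's actual clustering and scheduling details are not reproduced):

```python
import math

def sensors_required(region_area, sensing_range, period, visit_time):
    """Estimate mobile sensors needed to sweep a region once per period.

    Hexagonal cells inscribed in the sensing disc tile the region; one
    node serves as many cells as it can visit within the period.
    """
    # Area of a regular hexagon with side length = sensing_range.
    cell_area = 3 * math.sqrt(3) / 2 * sensing_range ** 2
    n_cells = math.ceil(region_area / cell_area)
    cells_per_cluster = max(1, period // visit_time)
    return math.ceil(n_cells / cells_per_cluster)
```

Lengthening the period (or shortening the per-cell visit time) lets each node serve a larger cluster, so fewer nodes are needed, which is the trade-off the abstract exploits.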
Abstract: One main drawback of intrusion detection systems is their inability to detect new attacks that do not have known signatures. In this paper we discuss an intrusion detection method that uses independent component analysis (ICA)-based feature selection heuristics and rough-fuzzy clustering of the data. ICA separates the independent components (ICs) from the monitored variables; rough sets reduce the amount of data and remove redundancy; and fuzzy methods allow objects to belong to several clusters simultaneously, with different degrees of membership. Our approach allows us not only to recognize known attacks but also to detect activity that may be the result of a new, unknown attack. Experimental results are reported on the Knowledge Discovery and Data Mining (KDD Cup 1999) dataset.
Abstract: The design and implementation of a novel B-ACOSD CFAR algorithm is presented in this paper. It is proposed for detecting radar targets in a log-normal clutter environment. The B-ACOSD detector is capable of automatically detecting the number of interfering targets in the reference cells and of detecting the real target with an adaptive threshold. The detector is implemented as a system-on-chip on an Altera Stratix II FPGA using parallelism and pipelining techniques. For a reference window of 16 cells, the experimental results showed that the processor works properly at a processing speed of up to 115.13 MHz and a processing time of 0.29 µs, thus meeting the real-time requirements of a typical radar system.
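As background for the detector family the abstract builds on, a plain ordered-statistic CFAR test can be sketched as follows (a generic OS-CFAR with illustrative parameter values, not the B-ACOSD censoring logic or its log-normal threshold factor):

```python
import numpy as np

def os_cfar(samples, cell_idx, guard=2, ref=8, k=6, alpha=5.0):
    """Ordered-statistic CFAR test for one cell under test.

    Takes `ref` reference cells on each side (skipping `guard` guard
    cells), sorts them, and scales the k-th ordered sample by `alpha`
    to form the adaptive threshold.  Returns (detected, threshold).
    """
    left = samples[max(0, cell_idx - guard - ref): cell_idx - guard]
    right = samples[cell_idx + guard + 1: cell_idx + guard + 1 + ref]
    reference = np.sort(np.concatenate([left, right]))
    threshold = alpha * reference[min(k, len(reference) - 1)]
    return samples[cell_idx] > threshold, threshold
```

Choosing an ordered sample (rather than the mean, as in cell-averaging CFAR) makes the threshold robust to interfering targets in the reference window, which is the problem the automatic censoring in B-ACOSD addresses more systematically.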
Abstract: In this paper, we propose an improvement of the pattern-growth-based PrefixSpan algorithm, called I-PrefixSpan. The general idea of I-PrefixSpan is to use efficient data structures for the Seq-Tree framework, together with a separator database, to reduce execution time and memory usage. With I-PrefixSpan there is thus no in-memory database stored after the index set is constructed. The experimental results, obtained using Java 2, show that this method improves the speed of PrefixSpan by up to almost two orders of magnitude and its memory usage by more than one order of magnitude.
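For orientation, the baseline PrefixSpan pattern-growth idea (recursively projecting the database on each frequent prefix) can be sketched for sequences of single items; this is the plain algorithm, not the I-PrefixSpan Seq-Tree optimization:

```python
def prefixspan(db, min_support):
    """Minimal PrefixSpan for sequences of single items.

    db: list of sequences (lists of hashable items).
    Returns (pattern, support) pairs for all frequent sequential patterns.
    """
    results = []

    def mine(prefix, projected):
        # Count items occurring anywhere in the projected postfixes.
        counts = {}
        for seq in projected:
            for item in set(seq):
                counts[item] = counts.get(item, 0) + 1
        for item, sup in counts.items():
            if sup < min_support:
                continue
            new_prefix = prefix + [item]
            results.append((new_prefix, sup))
            # Project: keep the postfix after the item's first occurrence.
            new_projected = []
            for seq in projected:
                if item in seq:
                    idx = seq.index(item)
                    if idx + 1 < len(seq):
                        new_projected.append(seq[idx + 1:])
            mine(new_prefix, new_projected)

    mine([], db)
    return results
```

The repeated physical projection shown here is exactly the memory cost that pseudo-projection and, per the abstract, I-PrefixSpan's index-set construction are designed to avoid.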
Abstract: This paper presents a solution for the behavioural animation of autonomous virtual agent navigation in virtual environments. We focus on using Dempster-Shafer's theory of evidence to develop a visual sensor for the virtual agent. The role of the visual sensor is to capture information about the virtual environment and to identify which part of an obstacle can be seen from the position of the virtual agent. This information is required for the virtual agent to coordinate navigation in the virtual environment. The virtual agent uses a fuzzy controller as its navigation system and fuzzy α-level as the action selection method. The results clearly demonstrate that the path produced is reasonably smooth, despite a few sharp turns, and does not diverge far from the potential shortest path. This indicates the benefit of our method, which produces more reliable and accurate paths during the navigation task.
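The evidence-fusion step underlying such a visual sensor can be illustrated with Dempster's rule of combination (the rule itself is standard; the frame of discernment and mass values below are made-up examples, not the paper's sensor model):

```python
def dempster_combine(m1, m2):
    """Combine two basic mass assignments with Dempster's rule.

    Each argument maps frozenset hypotheses to masses summing to 1.
    Products with empty intersection are conflict and are renormalized
    away; raises on total conflict.
    """
    combined, conflict = {}, 0.0
    for a, w1 in m1.items():
        for b, w2 in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + w1 * w2
            else:
                conflict += w1 * w2
    if conflict >= 1.0:
        raise ValueError("total conflict: sources are incompatible")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Hypothetical example: two sensor readings over {free, blocked}.
m1 = {frozenset({"free"}): 0.6, frozenset({"free", "blocked"}): 0.4}
m2 = {frozenset({"free"}): 0.5, frozenset({"free", "blocked"}): 0.5}
fused = dempster_combine(m1, m2)   # belief in "free" strengthens to 0.8
```

Two weakly committed readings reinforce each other after combination, which is how an evidential visual sensor sharpens its picture of which parts of an obstacle are visible.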
Abstract: It is important to remove manganese from water because of its effects on humans and the environment. Human activities are among the biggest contributors to excessive manganese concentrations in the environment. The proposed method removes manganese from aqueous solution by adsorption onto carbon nanotubes (CNTs), studied at different values of four parameters: CNT dosage, pH, agitation speed and contact time. The pH values are 6.0, 6.5, 7.0, 7.5 and 8.0; the CNT dosages are 5 mg, 6.25 mg, 7.5 mg, 8.75 mg and 10 mg; the contact times are 10 min, 32.5 min, 55 min, 87.5 min and 120 min; and the agitation speeds are 100 rpm, 150 rpm, 200 rpm, 250 rpm and 300 rpm. The parameter settings were chosen according to a Central Composite Design built in Design Expert 6.0 with 4 parameters, 5 levels and 2 replications. Based on the results, the condition of pH 7.0, agitation speed 300 rpm, CNT dosage 7.5 mg and contact time 55 minutes gives the highest removal, 75.5%. ANOVA analysis in Design Expert 6.0 shows that the residual concentration is most strongly affected by pH and CNT dosage. The initial manganese concentration is 1.2 mg/L, while the lowest residual concentration achieved is 0.294 mg/L, which almost satisfies the DOE Malaysia Standard B requirement. Therefore, further experiments must be done to remove manganese from model water to the required standard (0.2 mg/L), with the initial concentration set to 0.294 mg/L.
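The reported removal figure follows directly from the initial and residual concentrations:

```python
def removal_efficiency(c_initial, c_residual):
    """Percentage of solute removed from solution (concentrations in mg/L)."""
    return (c_initial - c_residual) / c_initial * 100.0

# The best run above: 1.2 mg/L in, 0.294 mg/L out.
print(round(removal_efficiency(1.2, 0.294), 1))  # -> 75.5
```
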
Abstract: In non-destructive testing by radiography, accurate knowledge of the weld defect shape is an essential step in assessing the quality of the weld and deciding on its acceptance or rejection. Because of the complex nature of the images considered, and so that the detected defect region represents the real defect as accurately as possible, thresholding methods must be chosen judiciously. In this paper, performance criteria are used to conduct a comparative study of four non-parametric histogram thresholding methods for the automatic extraction of weld defects in radiographic images.
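As an example of the class of methods compared (the four methods are not named in the abstract, so this is illustrative), Otsu's classic non-parametric histogram thresholding chooses the gray level maximizing the between-class variance of the resulting foreground/background split:

```python
import numpy as np

def otsu_threshold(image):
    """Otsu's method on an 8-bit image: the gray level maximizing
    between-class variance of the two resulting pixel classes."""
    hist = np.bincount(image.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    omega = np.cumsum(p)                    # class-0 probability
    mu = np.cumsum(p * np.arange(256))      # class-0 first moment
    mu_t = mu[-1]                           # global mean gray level
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b[~np.isfinite(sigma_b)] = 0.0    # empty classes carry no variance
    return int(np.argmax(sigma_b))
```

On a radiograph with a dark defect against a brighter weld background, the returned level separates the two intensity modes; the paper's comparative criteria would then score how well the thresholded region matches the true defect shape.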