Abstract: Unintentional islanding remains a challenge in grid-connected photovoltaic (PV) systems. This paper presents an overview of the anti-islanding detection methods most commonly applied in grid-connected PV systems. Anti-islanding methods can generally be classified into four major groups: passive methods, active methods, hybrid methods, and communication-based methods. Active methods have been the preferred detection technique over the years because of their very small non-detection zone (NDZ) in small-scale distributed generation. Passive methods are comparatively simpler than active methods in terms of circuitry and operation; however, they suffer from a large NDZ that significantly reduces their performance. Communication-based methods inherit the advantages of both active and passive methods with fewer drawbacks. Hybrid methods, which combine active and passive methods, have been shown by many researchers to achieve accurate anti-islanding detection. For each of the studied anti-islanding methods, the operating principle is described, and the advantages and disadvantages are compared and discussed. It is difficult to pinpoint a generic method for a specific application, because most of the methods discussed are governed by the nature of the application and system-dependent elements. This study concludes that setup and operating cost is the vital factor in anti-islanding method selection, in order to minimize the compromise between cost and system quality.
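Passive schemes of the kind surveyed typically monitor quantities such as voltage magnitude and frequency at the point of common coupling and trip when they leave preset windows; readings that stay inside the windows during an island are exactly what produces the NDZ. A minimal sketch of such an over/under-voltage and over/under-frequency check (the threshold values are typical illustrative limits, not figures from the paper):

```python
# Illustrative passive anti-islanding check: trip when voltage (p.u.) or
# frequency (Hz) leaves its permitted window. Threshold values are typical
# interconnection-standard limits, used here only as an example.
V_MIN, V_MAX = 0.88, 1.10   # voltage window in per-unit
F_MIN, F_MAX = 59.3, 60.5   # frequency window in Hz

def should_trip(voltage_pu: float, freq_hz: float) -> bool:
    """Return True if the inverter should disconnect (possible island)."""
    return not (V_MIN <= voltage_pu <= V_MAX and F_MIN <= freq_hz <= F_MAX)

# An island whose load closely matches the PV output keeps both quantities
# inside the windows -- this is the non-detection zone (NDZ) of the method.
print(should_trip(1.00, 60.0))  # matched island: not detected
print(should_trip(0.80, 60.0))  # voltage sag: detected
```

The large NDZ criticized in the abstract corresponds to the first case: a balanced island produces no measurable deviation, so the purely passive check never fires.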
Abstract: Understanding how and where NOx formation occurs in an industrial burner is very important for the efficient and clean operation of utility burners. The problem is also important because of its relation to the pollutants produced by the many burners widely used in gas turbines in thermal power plants and in the glass and steel industries.
In this article, a numerical model of an industrial burner operating in MILD combustion is validated against experimental data. The influence of air flow rate and air temperature on combustor temperature profiles and NOx production is then investigated. This study also reports on the effects of fuel and air dilution (with the inert gases H2O, CO2, and N2), and of lean premixing of the fuel, on the temperature profiles and NOx emission.
Conservation equations of mass, momentum, and energy, together with transport equations for species concentrations, turbulence, combustion, and radiation, in addition to NO modeling equations, were solved to obtain the temperature and NO distributions inside the burner.
The results show that dilution reduces both the temperature and the NOx emission, suppresses flame propagation inside the furnace, and makes the flame inside the furnace invisible. Dilution with H2O lowers NOx further than dilution with N2 or CO2. As the lean-premix level rises, the local burner temperature and the NOx production both decrease, because premixing prevents local "hot spots" within the combustor volume that can lead to significant NOx formation. Lean premixing of the fuel with air also brings more air into the reaction zone than is actually needed to burn the fuel, which limits NOx formation.
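The strong temperature sensitivity behind the "hot spots" remark is usually attributed to the thermal (extended Zeldovich) mechanism, which the abstract does not spell out; as a reminder, its chain and near-exponential rate are (the Arrhenius form is shown generically, not with values from the paper):

```latex
\begin{align*}
\mathrm{N_2 + O} &\rightleftharpoons \mathrm{NO + N},\\
\mathrm{N + O_2} &\rightleftharpoons \mathrm{NO + O},\\
\mathrm{N + OH} &\rightleftharpoons \mathrm{NO + H},\\
\frac{d[\mathrm{NO}]}{dt} &\approx 2\,k_1\,[\mathrm{O}][\mathrm{N_2}],
\qquad k_1 = A \exp\!\left(-\frac{T_a}{T}\right),
\end{align*}
```

with an activation temperature $T_a$ on the order of $10^4$ K, so even modest local temperature peaks raise the NO formation rate sharply. This is why dilution and lean premixing, which flatten the temperature field, suppress NOx.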
Abstract: The speech signal conveys information about the identity of the speaker. The area of speaker identification is concerned with extracting the identity of the person speaking an utterance. As speech interaction with computers becomes more pervasive in activities such as telephone use, financial transactions, and information retrieval from speech databases, it becomes useful to identify a speaker automatically, based solely on vocal characteristics. This paper focuses on text-dependent speaker identification, which deals with detecting a particular speaker from a known population. The system prompts the user to provide a speech utterance, identifies the user by comparing the codebook of the utterance with those stored in the database, and lists the most likely speakers who could have given that utterance. The speech signal is recorded for N speakers, and features are then extracted. Feature extraction is performed by means of LPC coefficients, the AMDF, and the DFT. A neural network is trained with these features as input parameters, and the features are stored in templates for later comparison. The features of the speaker to be identified are extracted and compared with the stored templates using the back-propagation algorithm: the trained network's output corresponds to the speaker, while its input is the extracted feature set of the speaker to be identified. The network adjusts its weights, and the best match is found to identify the speaker. The number of epochs required to reach the target decides the network performance.
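Of the features named above, the average magnitude difference function (AMDF) is the simplest to illustrate: it dips at lags equal to the signal's period, which is why it serves as a pitch-related feature. A minimal sketch (the toy signal and lag range are invented for the example):

```python
def amdf(x, max_lag):
    """Average magnitude difference function D(k) = mean |x[n] - x[n+k]|."""
    n = len(x)
    return [sum(abs(x[i] - x[i + k]) for i in range(n - k)) / (n - k)
            for k in range(1, max_lag + 1)]

# A toy periodic "voiced" signal with a period of 5 samples.
signal = [0, 2, 4, 2, 0] * 10
d = amdf(signal, 12)

# The AMDF reaches its minimum (zero here) at the lag equal to the period.
period = d.index(min(d)) + 1
print(period)  # -> 5
```

In a real front end the AMDF would be computed per frame of windowed speech, alongside the LPC and DFT features the abstract mentions.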
Abstract: In this paper, we propose a novel limited-feedback scheme for task planning with service robots. Instead of sending the full service-robot state information for task planning, the proposed scheme sends the best-M indices of the service robots together with an indicator. With this indicator, the proposed scheme significantly reduces the communication overhead for task planning while mitigating the degradation of system performance in terms of utility. In addition, we analyze the system performance of the proposed scheme and compare it with other schemes.
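The best-M feedback idea can be sketched as follows: instead of reporting every robot's state, report only the indices of the M most useful robots plus a coarse indicator. The utility values, the quantized-best-value indicator, and all parameters below are invented for illustration; the paper's actual indicator design is not given in the abstract.

```python
import heapq

def best_m_feedback(utilities, m, levels=4, u_max=1.0):
    """Report only the indices of the M most useful robots plus a coarse
    indicator (here: the best utility quantized to `levels` bins),
    instead of the full per-robot state vector."""
    best = heapq.nlargest(m, range(len(utilities)), key=utilities.__getitem__)
    indicator = min(levels - 1, int(utilities[best[0]] / u_max * levels))
    return best, indicator

utilities = [0.21, 0.87, 0.35, 0.62, 0.90, 0.14]   # one value per robot
indices, indicator = best_m_feedback(utilities, m=3)
print(indices)     # -> [4, 1, 3]
print(indicator)   # -> 3
```

Only M indices and one small integer cross the channel, which is the source of the overhead reduction the abstract claims.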
Abstract: Data stream analysis is the process of computing various summaries and derived values from large amounts of data that are continuously generated at a rapid rate. The nature of a stream does not allow a revisit of each data element. Furthermore, data processing must be fast enough to produce timely analysis results. These requirements impose constraints on the design of the algorithms, which must balance correctness against timely responses. Several techniques have been proposed over the past few years to address these challenges. They can be categorized as either data-oriented or task-oriented. The data-oriented approach analyzes a subset of the data or a smaller transformed representation, whereas the task-oriented approach solves the problem directly via approximation techniques. We propose a hybrid approach to tackle the data stream analysis problem: the data stream is both statistically transformed to a smaller size and computationally approximated in its characteristics. We adopt a Monte Carlo method in the approximation step. The data reduction is performed horizontally and vertically through our EMR sampling method. The proposed method is analyzed in a series of experiments, and we apply our algorithm to clustering and classification tasks to evaluate the utility of our approach.
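The EMR sampling method itself is not described in the abstract; as a point of comparison, the classic single-pass reservoir sampling algorithm satisfies the same "no revisit" constraint on streams. A minimal sketch (names and parameters are illustrative):

```python
import random

def reservoir_sample(stream, k, seed=0):
    """Keep a uniform random sample of k items from a one-pass stream
    (classic Algorithm R; each element is seen exactly once, as a data
    stream requires)."""
    rng = random.Random(seed)
    sample = []
    for i, item in enumerate(stream):
        if i < k:
            sample.append(item)          # fill the reservoir first
        else:
            j = rng.randint(0, i)        # keep item with probability k/(i+1)
            if j < k:
                sample[j] = item
    return sample

sample = reservoir_sample(range(10_000), 5)
print(len(sample))  # -> 5
```

Any summary (clustering, classification) computed over the reservoir is then a Monte Carlo approximation of the same summary over the full stream.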
Abstract: We propose a multi-agent based utilitarian approach
to model and understand information flows in social networks that
lead to Pareto optimal informational exchanges. We model the
individual expected utility function of the agents to reflect the net
value of information received. We show how this model, adapted
from a theorem by Karl Borch dealing with an actuarial Risk
Exchange concept in the Insurance industry, can be used for social
network analysis. We develop a utilitarian framework that allows us
to interpret Pareto optimal exchanges of value as potential
information flows, while achieving a maximization of a sum of
expected utilities of information of the group of agents. We examine
some interesting conditions on the utility function under which the
flows are optimal. We illustrate the promise of this new approach to
attach economic value to information in networks with a synthetic
example.
Abstract: In this research, the authors analyze network stability using agent-based simulation. First, the authors focus on analyzing large networks (eight agents) formed by connecting two different stable small social networks (a small stable network consists of four agents). Second, the authors analyze the shape of the network (eight agents) obtained by adding one agent to a stable network (seven agents). Third, the authors analyze interpersonal comparison of utility. The "star network" was not found as a result of interaction between the two stable small networks; on the other hand, "decentralized networks" were formed from several combinations. In the case of adding one agent to a stable network (seven agents), the larger the value of "c" (the maintenance cost per link), the larger the number of patterns of stable networks. In this case, the authors identified the characteristics of a large stable network. The authors also discovered cases of decreasing personal utility under conditions of increasing total utility.
Abstract: Multi-energy systems can enhance system reliability and power quality. This paper presents an integrated approach to the design and operation of distributed energy resource (DER) systems, based on energy hub modeling. A multi-objective optimization model is developed that takes an integrated view of the electricity and natural gas networks to analyze the optimal design and operating condition of DER systems under two conflicting objectives: minimization of total cost and minimization of environmental impact, the latter assessed in terms of CO2 emissions. The mathematical model considers the energy demands of the site, local climate data, and the utility tariff structure, as well as the technical and financial characteristics of the candidate DER technologies. To meet the energy demands, energy systems including photovoltaic and co-generation systems, a boiler, and the central power grid are considered. As an illustrative example, a hotel in Iran demonstrates potential applications of the proposed method. The results show that increasing the satisfaction degree of the environmental objective leads to increased total cost.
Abstract: Public awareness of green energy is on the rise, and this can be seen in the many products being manufactured, or required to be made, as energy-saving devices, mainly to save consumers from spending more on their utility bills. Such schemes are popular nowadays, and many home appliances are turned into energy-saving gadgets that attract the attention of consumers. Knowing the public's demands and purchasing patterns for home appliances, the idea of an "energy saving suction hood (ESSH)" is proposed. The ESSH can be used in many places that require smoke ventilation, or even to reduce room temperature, as many conventional suction hoods (CSH) do, but this device works automatically through sensors that detect smoke/temperature and automatically spin the exhaust fan. As the fan turns, its mechanical rotation drives the AC generator coupled to it, which then charges the battery. The innovations of this product are, first, that it does not rely on the utility supply, since it is also hooked up to a solar panel that likewise charges the battery; second, that it generates energy as the exhaust fan mechanically rotates; and third, that an energy loop-back feature is introduced into the system to supply the ventilator fan. Another major innovation is the interfacing of this device with an in-house production generator, designed with a proper stator and rotor to reduce losses. A comparison is made between the ESSH and the CSH, and the results show that the ESSH saves the 172.8 kWh/year of utility supply used by the CSH. This amount of energy saves RM 3.14 from the monthly utility bill, for a total of RM 37.67 per year. In fact, this product can generate 175 W of power from the generator (75 W) and solar panel (100 W), which can be used either to supply other household appliances and/or to loop back to supply the fan's motor.
The innovation of this system is essential for the future production of other equipment using the loop-back power method, turning most equipment into standalone systems.
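The quoted savings figures are mutually consistent under an implied tariff of about RM 0.218/kWh (the tariff itself is inferred here; it is not stated in the abstract):

```python
# Check the quoted ESSH savings figures against an implied tariff.
energy_saved_kwh_per_year = 172.8          # from the abstract
implied_tariff_rm_per_kwh = 0.218          # inferred, not stated in the paper

annual_saving_rm = energy_saved_kwh_per_year * implied_tariff_rm_per_kwh
monthly_saving_rm = annual_saving_rm / 12

print(round(annual_saving_rm, 2))   # -> 37.67 (matches the quoted RM 37.67/year)
print(round(monthly_saving_rm, 2))  # -> 3.14  (matches the quoted RM 3.14/month)
```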
Abstract: SQL injection on web applications is a very popular kind of attack. Mechanisms such as intrusion detection systems exist to detect this attack, but these strategies often rely on
techniques implemented at high layers of the application but do not
consider the low level of system calls. The problem of only
considering the high level perspective is that an attacker can
circumvent the detection tools using certain techniques such as URL
encoding. One technique currently used for detecting low-level
attacks on privileged processes is the tracing of system calls. System
calls act as a single gate to the Operating System (OS) kernel; they
allow catching the critical data at an appropriate level of detail. Our
basic assumption is that any type of application, be it a system
service, utility program or Web application, “speaks” the language of
system calls when having a conversation with the OS kernel. At this
level we can see the actual attack while it is happening. We conduct
an experiment in order to demonstrate the suitability of system call
analysis for detecting SQL injection. We are able to detect the attack.
Therefore, we conclude that system calls are not only powerful for detecting low-level attacks but also enable us to detect high-level attacks such as SQL injection.
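The experiment itself is not reproduced in the abstract; a toy illustration of the underlying idea, looking for injection payloads in the data arguments of traced write/send system calls, might look like this (the trace format and signature list are invented for the example, not taken from the paper):

```python
import re

# Hypothetical signatures of SQL injection payloads as they would appear
# in the byte arguments of traced write()/sendto() system calls.
SIGNATURES = [r"\bUNION\b.+\bSELECT\b", r"\bOR\b\s+1\s*=\s*1", r"--\s*$"]
PATTERN = re.compile("|".join(SIGNATURES), re.IGNORECASE)

def is_injection(syscall_data: str) -> bool:
    """Flag a traced syscall data buffer that carries an injection payload."""
    return PATTERN.search(syscall_data) is not None

# One benign and one malicious buffer, as a tracer might record them.
print(is_injection('write(5, "SELECT name FROM users WHERE id=7", 34)'))
print(is_injection('write(5, "id=7 UNION SELECT passwd FROM shadow", 40)'))
```

Because the scan happens at the syscall boundary, URL-encoding tricks applied at the HTTP layer have already been decoded by the time the application hands data to the kernel, which is the point the abstract makes.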
Abstract: Protective relays are components of a protection system in the power system domain that provide the decision-making element for correct protection and fault-clearing operations. Failure of these protection devices may reduce the integrity and reliability of the power system protection, which in turn affects the overall performance of the power system. Hence, it is imperative for power utilities to assess the reliability of protective relays to ensure that they will perform their intended function without failure. This paper discusses the application of reliability analysis using a statistical method called Life Data Analysis at Tenaga Nasional Berhad (TNB), a government-linked power utility company in Malaysia, specifically its Transmission Division, to assess and evaluate the reliability of numerical overcurrent protective relays from two different manufacturers.
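Life Data Analysis commonly fits a Weibull distribution to observed times-to-failure. The abstract gives no data, so the sketch below applies median-rank regression to synthetic relay lifetimes (all numbers are illustrative, not TNB data):

```python
import math

def weibull_mrr(failure_times):
    """Fit Weibull shape (beta) and scale (eta) by median-rank regression:
    linearize F(t) = 1 - exp(-(t/eta)**beta) to
    ln(-ln(1-F)) = beta*ln(t) - beta*ln(eta), then least squares."""
    t = sorted(failure_times)
    n = len(t)
    xs, ys = [], []
    for i, ti in enumerate(t, start=1):
        f = (i - 0.3) / (n + 0.4)          # Bernard's median-rank estimate
        xs.append(math.log(ti))
        ys.append(math.log(-math.log(1.0 - f)))
    mx, my = sum(xs) / n, sum(ys) / n
    beta = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
           sum((x - mx) ** 2 for x in xs)
    eta = math.exp(mx - my / beta)         # from the fitted intercept
    return beta, eta

# Synthetic relay lifetimes (years), drawn deterministically from a
# Weibull(beta=2.0, eta=15.0) via its inverse CDF at the median ranks.
n = 20
times = [15.0 * (-math.log(1 - (i - 0.3) / (n + 0.4))) ** (1 / 2.0)
         for i in range(1, n + 1)]
beta, eta = weibull_mrr(times)
print(round(beta, 3), round(eta, 3))  # recovers beta=2.0, eta=15.0
```

A shape parameter beta > 1 would indicate wear-out failures, beta < 1 infant mortality; comparing fitted parameters between the two manufacturers is the kind of conclusion such an analysis supports.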
Abstract: The objective of this research was to find the relationship between auspicious meanings in Eastern wisdom and their interpretation, as a guideline for the design and development of community souvenirs. The sample group comprised 400 customers in Bangkok who had bought community souvenir products. The information was applied to the design of souvenirs, whose appropriateness was assessed by five design specialists. The data were analyzed for frequency, percentage, and standard deviation, with the following results. 1) The factor best conveying the auspicious meaning is color; the application of auspicious meaning can add value to the product and bring good fortune to the recipients. 2) The effectiveness of integrating auspicious meaning into the design of community souvenir products was at a high level. Considering each aspect, the interpretation aspect was at a high level, and the congruency of the auspicious meaning with the utility of the product was at a high level. The attractiveness and good design were at a very high level, while the potential for added value in the product design was at a high level, and the suitability of the application to community souvenir product design was at a high level.
Abstract: The project was undertaken to determine the effects of modified tissue culture protocols, e.g., age of culture and hormone levels (2,4-D), in generating somaclonal variation. Moreover, the utility of molecular markers (SSR and MSAP) in sorting off-types/somaclones was investigated.
Results show that somaclonal variation is indeed due to prolonged subculture and high 2,4-D concentration. The resultant variation was observed to be due to a high level of methylation events, specifically cytosine methylation at either the internal or the external cytosine, and was identified by methylation-sensitive amplification polymorphism (MSAP). Simple sequence repeats (SSR), on the other hand, were able to associate a marker with a trait of interest.
These results show that molecular markers can be an important tool in sorting out variants/mutants at an early stage.
Abstract: Electromyography (EMG) signal processing has been investigated extensively for applications such as rehabilitation systems. In particular, the wavelet transform has served as a powerful technique for scrutinizing EMG signals, since it is consistent with the nature of EMG as a non-stationary signal. In this paper, the efficiency of the wavelet transform in surface EMG feature extraction is investigated over four levels of wavelet decomposition, and a comparative study between different mother wavelets is carried out. To identify the best function and level of wavelet analysis, two evaluation criteria, the scatter plot and the RES index, are employed. Four wavelet families, namely Daubechies, Coiflets, Symlets, and Biorthogonal, are studied in the wavelet decomposition stage. The results show that only features from the first and second levels of wavelet decomposition yield good performance, and that some functions of the various wavelet families can improve the separability of classes of different hand movements.
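Multi-level wavelet decomposition with per-level features can be sketched with the simplest wavelet, the Haar wavelet, rather than the Daubechies/Coiflet/Symlet/Biorthogonal families studied in the paper; the toy signal and the choice of mean absolute value (MAV) as the per-level feature are illustrative assumptions:

```python
import math

def haar_step(x):
    """One level of the Haar DWT: approximation and detail coefficients."""
    a = [(x[2*i] + x[2*i+1]) / math.sqrt(2) for i in range(len(x) // 2)]
    d = [(x[2*i] - x[2*i+1]) / math.sqrt(2) for i in range(len(x) // 2)]
    return a, d

def wavelet_features(x, levels=4):
    """Mean absolute value (MAV) of the detail coefficients at each level --
    one commonly used per-level feature for surface EMG."""
    feats = []
    for _ in range(levels):
        x, d = haar_step(x)          # recurse on the approximation
        feats.append(sum(abs(c) for c in d) / len(d))
    return feats

# Toy "EMG" burst, 64 samples.
sig = [math.sin(0.9 * i) * math.exp(-((i - 32) / 12) ** 2) for i in range(64)]
feats = wavelet_features(sig, levels=4)
print(len(feats))  # -> 4 (one MAV feature per decomposition level)
```

The paper's finding that only levels 1-2 help corresponds, in this sketch, to the first two entries of `feats` carrying most of the discriminative information.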
Abstract: The utility of expert system generators has been widely recognized in many applications. Several generators based on the object paradigm have recently been proposed. Generators of object-oriented expert systems (GSEOO) offer languages that are often complex and difficult to use. We propose in this paper an extension of the expert system generator JESS that permits a friendlier use of this expert system. The new tool, called VISUAL JESS, brings two main improvements to JESS. The first improvement concerns ease of use, while keeping the syntactic and semantic aspects of the JESS programming language transparent. The second improvement permits easy access to and modification of the JESS knowledge base. VISUAL JESS is implemented so that it is extensible and portable.
Abstract: Influence diagrams (IDs) are one of the most commonly used graphical decision models for reasoning under uncertainty. The quantification of IDs, which consists in defining conditional probabilities for chance nodes and utility functions for value nodes, is not always obvious. In fact, decision makers cannot always provide exact numerical values, and in some cases it is easier for them to specify qualitative preference orders. This work proposes an adaptation of standard IDs to the qualitative framework, based on possibility theory.
Abstract: This paper presents a novel methodology for Maximum Power Point Tracking (MPPT) of a grid-connected 20 kW photovoltaic (PV) system using a neuro-fuzzy network. The proposed method predicts the reference PV voltage guaranteeing optimal power transfer between the PV generator and the main utility grid. The neuro-fuzzy network is composed of a fuzzy rule-based classifier and three Radial Basis Function Neural Networks (RBFNN). The inputs of the network (irradiance and temperature) are classified before being fed into the appropriate RBFNN for either the training or the estimation process, while the output is the reference voltage. The main advantage of the proposed methodology, compared to a conventional single-neural-network approach, is its distinct generalization ability with respect to the nonlinear and dynamic behavior of a PV generator. In fact, the neuro-fuzzy network is a neural-network-based multi-model machine learning scheme that defines a set of local models emulating the complex and nonlinear behavior of a PV generator under a wide range of operating conditions. Simulation results under several rapid irradiance variations show that the proposed MPPT method achieves the highest efficiency compared to a conventional single neural network.
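The classify-then-route structure described above can be sketched as follows. A crisp classifier stands in for the paper's fuzzy rule base, and the RBF centers, widths, and weights are invented placeholder numbers, not trained values from the paper:

```python
import math

def rbf_predict(model, x):
    """Gaussian RBF network output: sum_i w_i * exp(-||x-c_i||^2 / (2*s^2))."""
    return sum(w * math.exp(-sum((a - b) ** 2 for a, b in zip(x, c))
                            / (2 * model["s"] ** 2))
               for w, c in zip(model["w"], model["c"]))

MODELS = {  # one local model per operating-condition class (invented values)
    "low":  {"c": [(200, 20), (300, 25)], "w": [390.0, 395.0], "s": 150.0},
    "mid":  {"c": [(500, 25), (600, 30)], "w": [400.0, 405.0], "s": 150.0},
    "high": {"c": [(800, 30), (950, 35)], "w": [410.0, 415.0], "s": 150.0},
}

def classify(irradiance, temperature):
    """Crisp stand-in for the fuzzy rule-based classifier."""
    if irradiance < 400:
        return "low"
    return "mid" if irradiance < 700 else "high"

def reference_voltage(irradiance, temperature):
    """Route the input to its local RBFNN and return the reference voltage."""
    cls = classify(irradiance, temperature)
    return rbf_predict(MODELS[cls], (irradiance, temperature))

print(classify(850, 32))  # -> high
print(round(reference_voltage(850, 32), 1))
```

The multi-model benefit claimed in the abstract comes from each local RBFNN only having to fit a narrow slice of the PV generator's operating range.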
Abstract: The aim of this article is to describe the utility of a novel simulation approach, the convolution method, to predict the blood concentration of a drug from dissolution data of salbutamol sulphate microparticulate formulations with different release patterns (1:1, 1:2, and 1:3 drug:polymer). Dissolution apparatus II (USP 2007) with 900 ml of double-distilled water stirred at 50 rpm was employed for the dissolution analysis. From the dissolution data, the blood drug concentration was determined, and the predicted blood drug concentration data were in turn used to calculate the pharmacokinetic parameters Cmax, Tmax, and AUC. Convolution is a good biowaiver technique; however, its utility would be improved by applying it under conditions where biorelevant dissolution media are used.
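In discrete form, the convolution method predicts the concentration profile as the convolution of the amount dissolved per interval (from in-vitro data) with a unit impulse response (UIR). The dissolution increments and the one-compartment UIR parameters below are invented for illustration, not the salbutamol sulphate data from the paper:

```python
import math

dt = 0.5                                   # hours per sampling interval
dissolved = [5.0, 12.0, 18.0, 15.0, 9.0, 4.0, 2.0]   # mg released per interval
ke, vd = 0.3, 30.0                         # elimination rate (1/h), volume (L)
uir = [math.exp(-ke * i * dt) / vd for i in range(24)]  # conc. per unit dose

# C(t_n) = sum_k dissolved[k] * UIR[n-k]  (discrete convolution)
conc = [sum(dissolved[k] * uir[n - k]
            for k in range(len(dissolved)) if 0 <= n - k < len(uir))
        for n in range(len(dissolved) + len(uir) - 1)]

# Pharmacokinetic parameters read off the predicted profile.
cmax = max(conc)
tmax = conc.index(cmax) * dt
auc = sum(conc) * dt                       # simple rectangle-rule AUC
print(round(tmax, 1))  # -> 2.0 (hours to peak for these invented inputs)
```

Cmax, Tmax, and AUC extracted this way from each formulation's dissolution curve are exactly the parameters the abstract reports computing.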
Abstract: In an electric power system, spinning reserve requirements can be determined using deterministic and/or probabilistic measures. Although deterministic methods are usual in many systems, the application of probabilistic methods becomes increasingly important in the new environment of the electric power utility industry, because of the increased uncertainty associated with competition. In this paper, 1) a new probabilistic method is presented that considers the reliability of the transmission system in a simplified manner, and 2) deterministic and probabilistic methods are compared. The studied methods are applied to the Roy Billinton Test System (RBTS).
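The paper's specific method is not given in the abstract, but the standard probabilistic building block for reserve assessment is the capacity outage probability table (COPT), from which the risk of outages exceeding the carried reserve can be read off. A minimal sketch (the unit sizes and forced outage rates are invented, not RBTS data):

```python
# Build a capacity outage probability table (COPT) for independent two-state
# units and compute the probability that outages exceed the spinning reserve.
units = [(40, 0.03), (40, 0.03), (30, 0.02), (20, 0.02)]  # (MW, forced outage rate)

copt = {0: 1.0}                    # outage level (MW) -> probability
for cap, q in units:
    nxt = {}
    for out, p in copt.items():
        nxt[out] = nxt.get(out, 0.0) + p * (1 - q)        # unit available
        nxt[out + cap] = nxt.get(out + cap, 0.0) + p * q  # unit on outage
    copt = nxt

def risk(reserve_mw):
    """Probability that lost capacity exceeds the carried reserve."""
    return sum(p for out, p in copt.items() if out > reserve_mw)

print(round(risk(30), 6))   # risk with 30 MW of spinning reserve
print(round(risk(60), 6))   # more reserve -> lower risk
```

A deterministic rule (e.g. reserve equal to the largest unit) picks one reserve level regardless of unit reliabilities; the probabilistic view instead chooses the smallest reserve whose risk falls below a target, which is where the transmission-reliability refinement described in the abstract enters.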
Abstract: Urbanization and related anthropogenic modifications cause extensive habitat fragmentation and directly lead to a decline in local biodiversity. Conservation biologists advocate corridor creation as one approach to rescuing biodiversity. Here, we examine the utility of roads as corridors for preserving plant diversity by investigating roadside vegetation in the Yellow River Delta (YRD), China. We examined the spatio-temporal distribution patterns of plant species richness, diversity, and composition along roadsides. The results suggest that roads, as dispersal conduits, increase the probability that new settlers reach a new area; meanwhile, roads accumulate greater propagule pressure and provide favorable survival conditions during the operation phase. As a result, more species, including native and alien plants, non-halophyte and halophyte species, and threatened and cosmopolitan species, were found to prosper at roadsides. Roadsides may thus be a refuge for many species, and the pattern of vegetation distribution is affected by road age and the distance from the road verge.