Abstract: A lightning surge produces traveling waves and a temporary rise in voltage in a transmission line system. Lightning is among the most damaging events for transmission lines and substation equipment, so the temporary overvoltage must be studied and analyzed when designing and siting surge arresters. This paper analyzes the shape of the lightning wave on a 115 kV transmission line in Thailand, using the ATP/EMTP program to model the transmission line and the lightning surge. Because of the program's limitations, the transmission line geometry and the surge parameters were calculated by hand, following the manual, to obtain the closest parameter values. Regarding the effect on the surge protector when lightning strikes, the surge arrester model must be correct and conform to the Metropolitan Electricity Authority's standard. Field data were also compared against the calculated results. The analysis shows that the temporary overvoltage rises to 326.59 kV on the struck line when no surge arrester is installed in the system, whereas it reaches only 182.83 kV when a surge arrester is installed, and the duration of the traveling wave is also reduced. The surge arrester should be installed as close to the transformer as possible. Knowing the correct installation distance and the correct arrester rating is therefore essential for effective protection against temporary overvoltages.
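To make the surge model concrete, the sketch below generates a standard double-exponential lightning impulse of the kind one would configure as the surge source in ATP/EMTP. The 1.2/50 µs waveshape constants are the usual textbook values, and only the peak is taken from the abstract; none of this reproduces the paper's actual ATP/EMTP line model.

```python
# Sketch only: a standard double-exponential lightning impulse, the kind of
# surge source one configures in ATP/EMTP. alpha and beta are the usual
# 1.2/50 us waveshape constants; the peak is the abstract's 326.59 kV value.
import numpy as np

def lightning_impulse(t, v_peak=326.59e3, alpha=1.473e4, beta=2.08e6):
    """v(t) = k * V_peak * (exp(-alpha*t) - exp(-beta*t)), normalised so the
    waveform actually peaks at v_peak."""
    raw = np.exp(-alpha * t) - np.exp(-beta * t)
    return v_peak * raw / raw.max()

t = np.linspace(0, 100e-6, 2000)          # 0..100 microseconds
v = lightning_impulse(t)
print(f"peak = {v.max() / 1e3:.2f} kV at t = {t[np.argmax(v)] * 1e6:.2f} us")
```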
Abstract: The theory of Groebner Bases, which has recently been
honored with the ACM Paris Kanellakis Theory and Practice Award,
has become a crucial building block of computer algebra, and is
widely used in science, engineering, and computer science. It is well-known
that Groebner bases computation is EXPSPACE in the general
setting. In this paper, we give an algorithm showing that Groebner
bases computation is PSPACE in Boolean rings. We also show that
with this discovery, the Groebner bases method can theoretically be
as efficient as other methods for automated verification of hardware
and software. Additionally, Groebner bases enjoy many useful and
interesting properties, including the ability to efficiently convert a
basis between different orders of variables, which makes Groebner bases
a promising method in automated verification.
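As a minimal illustration of Groebner bases computation restricted to a Boolean ring (a generic sympy example, not the paper's PSPACE algorithm), one can adjoin the field equations x**2 - x over GF(2); the constraint polynomials below are hypothetical:

```python
# Generic sympy example (not the paper's algorithm): a Groebner basis over
# GF(2) with the field equations x**2 - x adjoined, which restricts the
# computation to a Boolean ring. The constraint polynomials are hypothetical.
from sympy import symbols, groebner

x, y, z = symbols('x y z')
F = [x*y + z,                        # example Boolean constraints
     y*z + y,
     x**2 - x, y**2 - y, z**2 - z]   # field equations of the Boolean ring
G = groebner(F, x, y, z, order='lex', modulus=2)
print(G)
```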
Abstract: Querying a data source and routing data toward the sink
become serious challenges in static wireless sensor networks when the
sink and/or the data source is mobile. Often, the event to be observed
either moves or spreads across a wide area, making it difficult to
maintain a continuous path between source and sink. The sink may also
move while a query is being issued or while data is on its way toward it.
In this paper, we extend our previously proposed Grid Based Data
Dissemination (GBDD) scheme, a virtual-grid-based topology management
scheme that restricts the impact of sink and event movement to specific
cells of the grid. This obviates the need for frequent path modifications
and hence maintains a continuous flow of data while minimizing network
energy consumption. Simulation experiments show significant improvements
in network energy savings and in the average delay for a packet to reach
the sink.
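A minimal sketch of the virtual-grid idea is given below; the cell size and the greedy per-cell forwarding rule are illustrative assumptions, not the exact GBDD specification:

```python
# Illustration of the virtual-grid idea (cell size and the greedy forwarding
# rule are assumptions, not GBDD's exact specification).
def cell_of(x, y, cell_size=100.0):
    """Map a node position to its (row, col) grid cell."""
    return (int(y // cell_size), int(x // cell_size))

def next_cell(cur, sink):
    """Greedy per-cell forwarding: step one cell toward the sink's cell, so
    sink movement only perturbs routing state in cells near its location."""
    dr = (sink[0] > cur[0]) - (sink[0] < cur[0])
    dc = (sink[1] > cur[1]) - (sink[1] < cur[1])
    return (cur[0] + dr, cur[1] + dc)

cell = cell_of(340.0, 75.0)      # source node -> cell (0, 3)
sink = cell_of(20.0, 460.0)      # sink node   -> cell (4, 0)
while cell != sink:
    cell = next_cell(cell, sink)
    print("forward via cell", cell)
```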
Abstract: Every commercial bank optimises its asset portfolio
depending on the profitability of assets and chosen or imposed
constraints. This paper proposes and applies a stylized model for
optimising banks' asset and liability structure, reflecting profitability
of different asset categories and their risks as well as costs associated
with different liability categories and reserve requirements. The level
of detail for asset and liability categories is chosen to create a
suitably parsimonious model and to include the most important
categories in the model. It is shown that the most appropriate
optimisation criterion for the model is the maximisation of the ratio
of net interest income to assets. The maximisation of this ratio is
subject to several constraints. Some are accounting identities or
dictated by legislative requirements; others vary depending on the
market objectives for a particular bank. The model predicts a variable
amount of assets allocated to loan provision.
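The sketch below shows how such a model can be posed as a linear programme once total assets are normalised to one, so that maximising the ratio of net interest income to assets becomes linear. The categories, rates, and limits are invented for illustration and far coarser than the paper's model:

```python
# Hypothetical, stylised instance: with total assets normalised to 1,
# maximising net interest income over assets is a linear programme over
# portfolio shares. Categories, rates and limits are invented here.
from scipy.optimize import linprog

rates = [0.07, 0.03, 0.00]          # yields on [loans, securities, reserves]
cost_of_deposits = 0.02             # liabilities fixed at 1 unit of deposits

c = [-r for r in rates]             # maximise rates.x  <=>  minimise -rates.x
A_eq, b_eq = [[1, 1, 1]], [1.0]     # accounting identity: shares sum to 1
A_ub, b_ub = [[0, 0, -1]], [-0.10]  # reserve requirement: reserves >= 10%
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, 0.8), (0, 1), (0, 1)])   # loan cap as a risk limit

print(dict(zip(["loans", "securities", "reserves"], res.x.round(3))))
print("net interest income / assets:", round(-res.fun - cost_of_deposits, 4))
```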
Abstract: Image synthesis is an important area in image processing.
Various systems for synthesizing images have been proposed in
the literature. In this paper, we propose a bio-inspired system to
synthesize images, and to study the generative power of the system we
define the class of languages it generates. Throughout the paper an image
is treated as an array, and a primitive called the iso-array is used to
synthesize the image/array. The operation employed is double splicing on
iso-arrays; double splicing is used in DNA computing, and we apply it
here to synthesize images. A comparison of the family of languages
generated by the proposed self-restricted double splicing systems on
iso-arrays with the existing family of local iso-picture languages is
made. Certain closure properties such as union, concatenation and
rotation are studied for the family of languages generated by the
proposed model.
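For readers unfamiliar with splicing, the following one-dimensional analogue shows the recombination step on strings; the paper itself works on two-dimensional iso-arrays, so this is only a simplified illustration:

```python
# Hedged illustration: a one-dimensional analogue of the splicing operation
# from DNA computing (the paper works on 2D iso-arrays; this string version
# merely shows the recombination step).
def splice(x, y, rule):
    """Splicing rule (u1, u2, u3, u4): if x = p + u1 + u2 + q and
    y = r + u3 + u4 + s, produce the recombinant p + u1 + u4 + s."""
    u1, u2, u3, u4 = rule
    i = x.find(u1 + u2)
    j = y.find(u3 + u4)
    if i < 0 or j < 0:
        return None
    return x[:i] + u1 + u4 + y[j + len(u3 + u4):]

# Double splicing = applying the splicing step twice.
z = splice("aabbcc", "ddbbee", ("b", "b", "b", "b"))
print(z)                                           # "aabbee"
print(splice(z, "aabbcc", ("b", "b", "b", "b")))   # back to "aabbcc"
```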
Abstract: The paper describes a new approach for fingerprint
classification, based on the distribution of local features (minute
details or minutiae) of the fingerprints. The main advantage is that
fingerprint classification provides an indexing scheme to facilitate
efficient matching in a large fingerprint database. A set of rules based
on a heuristic approach is proposed. The area around the core point is
treated as the region of interest for extracting minutiae features, since
the variation around the core point is substantially greater than in
areas away from it. The core point of a fingerprint is located at the
point of maximum ridge curvature. The experimental results report an
overall average accuracy of 86.57% in fingerprint classification.
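A rough sketch of the core-detection and region-of-interest step is given below. The per-pixel orientation-change measure is a crude curvature proxy of my own (real systems average gradient products over blocks), not the paper's detector:

```python
# Crude sketch, not the paper's detector: take the pixel where the ridge
# orientation field changes fastest as a core-like point (a curvature proxy;
# real systems smooth gradient products over blocks first), then crop the
# region of interest around it.
import numpy as np

def core_point(img):
    gy, gx = np.gradient(img.astype(float))
    theta = 0.5 * np.arctan2(2 * gx * gy, gx**2 - gy**2)  # ridge orientation
    dty, dtx = np.gradient(theta)
    curvature = np.hypot(dtx, dty)
    return np.unravel_index(np.argmax(curvature), curvature.shape)

def roi_around(img, center, size=64):
    r, c = center
    h = size // 2
    return img[max(r - h, 0):r + h, max(c - h, 0):c + h]

img = np.random.rand(256, 256)     # stand-in for a fingerprint image
print(roi_around(img, core_point(img)).shape)
```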
Abstract: Small satellites have recently become increasingly popular as a means of giving educational institutions the chance to design, construct, and test their own spacecraft from initial design through to a possible launch, owing to the low launch cost. The approach is remarkably cost-effective because of the weight and size reduction of such satellites. Weight can be reduced by using electromagnetic coils alone, instead of several types of actuators. This paper describes the restrictions of using only electromagnetic actuation for three-dimensional stabilisation and shows how magnetorquer-based attitude control can be made feasible using Fuzzy Logic Control (FLC). The design is developed to stabilise the spacecraft against gravity-gradient disturbances with three-axis stabilising capability.
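The fundamental restriction referred to above can be shown in a few lines: the magnetic torque τ = m × B is always perpendicular to the local geomagnetic field B, so the torque component along B is unachievable at any given instant. The cross-product dipole law below is a generic textbook result, not the paper's fuzzy controller:

```python
# Generic cross-product magnetorquer law (not the paper's FLC design):
# m = (B x tau_desired) / |B|^2 produces tau = m x B, which equals the
# projection of tau_desired onto the plane perpendicular to B; the component
# of the desired torque along B is lost.
import numpy as np

def magnetorquer_dipole(tau_desired, B):
    return np.cross(B, tau_desired) / np.dot(B, B)

B = np.array([1.8e-5, -0.4e-5, 2.6e-5])    # geomagnetic field, tesla
tau_d = np.array([1e-6, 0.0, 0.0])         # desired torque, N*m
m = magnetorquer_dipole(tau_d, B)
print("achieved torque:", np.cross(m, B))  # differs from tau_d along B
```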
Abstract: This work explores blind image deconvolution by recursive function approximation based on supervised learning of neural networks, under the assumption that a degraded image is the linear convolution of an original source image with a linear shift-invariant (LSI) blurring matrix. Supervised learning of radial basis function (RBF) neural networks is employed to construct an embedded recursive function within a blurred image, to extract the non-deterministic component of the original source image, and to use it to estimate the hyperparameters of a linear image degradation model. Based on the estimated blurring matrix, reconstruction of the original source image from the blurred image is then carried out by an annealed Hopfield neural network. Numerical simulations show that the proposed method is effective for faithful estimation of an unknown blurring matrix and restoration of the original source image.
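The assumed degradation model can be written compactly as g = h * f + n. The sketch below instantiates it with a hypothetical Gaussian point-spread function; the paper's blur and images are of course different:

```python
# The assumed LSI degradation model with a hypothetical Gaussian blur kernel:
# a degraded image g is the convolution of the source f with a point-spread
# function h, plus noise n.
import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(0)
f = rng.random((64, 64))                        # stand-in source image

x = np.arange(-3, 4)
h = np.exp(-0.5 * (x[:, None]**2 + x[None, :]**2) / 1.5**2)
h /= h.sum()                                    # normalised 7x7 Gaussian PSF

n = 0.01 * rng.standard_normal(f.shape)
g = convolve2d(f, h, mode="same", boundary="symm") + n    # g = h * f + n
print(g.shape)
```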
Abstract: Human activity analysis is a major concern in a wide variety of
applications, such as video surveillance, human-computer interfaces,
and face image database management. Detecting and recognizing
faces is a crucial step in these applications. Furthermore, major
advancements and initiatives in security applications in the past years
have propelled face recognition technology into the spotlight. The
performance of existing face recognition systems declines significantly
if the resolution of the face image falls below a certain level.
This is especially critical in surveillance imagery where often, due to
many reasons, only low-resolution video of faces is available. If these
low-resolution images are passed to a face recognition system, the
performance is usually unacceptable. Hence, resolution plays a key
role in face recognition systems. In this paper we introduce a new
low resolution face recognition system based on mixture of expert
neural networks. To produce the low-resolution input images, we
down-sample the 48 × 48 ORL images to 12 × 12 using nearest-neighbor
interpolation; applying bicubic interpolation then yields enhanced
images, which are fed to a Principal Component Analysis (PCA) feature
extractor. Comparison with closely related methods indicates that the
proposed model yields an excellent recognition rate in low-resolution
face recognition: 100% on the training set and 96.5% on the test set.
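The preprocessing pipeline described above is easy to reproduce in outline. In the sketch below the face data are random stand-ins (the ORL images are assumed already cropped to 48 × 48), and the mixture-of-experts classifier itself is omitted:

```python
# Outline of the described preprocessing (random stand-ins replace the ORL
# images, which are assumed already cropped to 48x48; the mixture-of-experts
# classifier is omitted): NN down-sampling, bicubic up-sampling, then PCA.
import numpy as np
from PIL import Image
from sklearn.decomposition import PCA

def preprocess(img48):
    low = img48.resize((12, 12), Image.NEAREST)     # simulate low resolution
    return np.asarray(low.resize((48, 48), Image.BICUBIC), dtype=float)

faces = [Image.fromarray((np.random.rand(48, 48) * 255).astype(np.uint8))
         for _ in range(40)]
X = np.stack([preprocess(f).ravel() for f in faces])

features = PCA(n_components=20).fit_transform(X)    # inputs to the experts
print(features.shape)                               # (40, 20)
```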
Abstract: How to coordinate the behaviors of the agents through
learning is a challenging problem within multi-agent domains.
Because of its complexity, recent work has focused on how
coordinated strategies can be learned. Here we are interested in using
reinforcement learning techniques to learn the coordinated actions of a
group of agents, without requiring explicit communication among
them. However, traditional reinforcement learning methods are based
on the assumption that the environment can be modeled as a Markov
Decision Process, an assumption that usually cannot be satisfied when
multiple agents coexist in the same environment. Moreover, to effectively
coordinate each agent's behavior so as to achieve the goal, it is
necessary to augment the state of each agent with information about the
other existing agents. Furthermore, as the number of agents in a
multi-agent environment increases, the state space of each agent grows
exponentially, which causes a combinatorial explosion.
Profit sharing is one of the reinforcement learning methods that allow
agents to learn effective behaviors from their experiences even within
non-Markovian environments. In this paper, to remedy the drawback
of the original profit sharing approach, which needs considerable memory
to store each state-action pair during learning, we first present an
on-line rational profit sharing algorithm. We then integrate the
advantages of a modular learning architecture with the on-line rational
profit sharing algorithm, and propose a new modular
reinforcement learning model. The effectiveness of the technique is
demonstrated using the pursuit problem.
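For concreteness, the following is a minimal sketch of profit-sharing credit assignment in its generic episodic form (not the paper's on-line variant): reward arriving at the end of an episode is propagated backwards with geometrically decaying credit, and a decay ratio below 1/L, for L possible actions, satisfies the usual rationality condition:

```python
# Generic episodic profit-sharing credit assignment (not the paper's on-line
# variant): reinforce each (state, action) on the trace with geometrically
# decaying credit. A decay ratio below 1/L, for L possible actions, meets
# the standard rationality condition for non-Markovian settings.
from collections import defaultdict

def profit_sharing_update(weights, episode, reward, decay=0.2):
    """episode is a list of (state, action) pairs, oldest first."""
    credit = reward
    for state, action in reversed(episode):
        weights[(state, action)] += credit
        credit *= decay                  # f(t-1) = decay * f(t)
    return weights

w = defaultdict(float)
trace = [("s0", "right"), ("s1", "right"), ("s2", "up")]  # toy pursuit trace
profit_sharing_update(w, trace, reward=1.0)
print(dict(w))                           # the latest pair gets most credit
```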
Abstract: Term Extraction, a key data preparation step in Text
Mining, extracts the terms, i.e. relevant collocations of words,
attached to specific concepts (e.g. genetic algorithms and decision
trees are terms associated with the concept “Machine Learning"). In
this paper, the task of extracting interesting collocations is achieved
through a supervised learning algorithm, exploiting a few
collocations manually labelled as interesting/not interesting. From
these examples, the ROGER algorithm learns a numerical function,
inducing some ranking on the collocations. This ranking is optimized
using genetic algorithms, maximizing the trade-off between the false
positive and true positive rates (Area Under the ROC curve). This
approach uses a particular representation for the word collocations,
namely the vector of values corresponding to the standard statistical
interestingness measures attached to this collocation. As this
representation is general (over corpora and natural languages),
generality tests were performed by applying the ranking function
learned from an English corpus in biology to a French corpus of
curricula vitae, and vice versa, showing good robustness of the
approach compared to the state-of-the-art Support Vector Machine
(SVM).
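A toy version of the underlying idea, simplified to a plain evolutionary loop that tunes a linear ranking function for AUC on synthetic features, is sketched below; ROGER's actual representation and operators differ:

```python
# Toy simplification of the idea: an evolutionary loop tuning a linear
# ranking function to maximise AUC over synthetic "interestingness" features.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 5))        # 5 hypothetical statistics
y = (X @ np.array([1.0, -2.0, 0.5, 0.0, 0.0])
     + rng.normal(0, 1, 200)) > 0        # synthetic interesting/not labels

pop = rng.standard_normal((40, 5))       # population of weight vectors
for generation in range(50):
    fitness = np.array([roc_auc_score(y, X @ w) for w in pop])
    parents = pop[np.argsort(fitness)[-10:]]           # truncation selection
    children = parents[rng.integers(0, 10, 30)] + rng.normal(0, 0.1, (30, 5))
    pop = np.vstack([parents, children])               # elitism + mutation

print("best AUC:", round(max(roc_auc_score(y, X @ w) for w in pop), 3))
```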
Abstract: Network management techniques have long been of
interest to the networking research community. The queue size plays
a critical role in network performance: an adequately sized queue
maintains Quality of Service (QoS) requirements within a limited
network capacity for as many users as possible. The
appropriate estimation of the queuing model parameters is crucial for
both initial size estimation and during the process of resource
allocation. The accurate resource allocation model for the
management system increases the network utilization. The present
paper demonstrates the results of empirical observation of memory
allocation for packet-based services.
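As an illustration of how queue-size estimation trades memory against loss, the sketch below uses a textbook M/M/1/K model; the abstract does not name its queueing model, so this choice is an assumption:

```python
# Textbook M/M/1/K illustration (the model choice is an assumption): how
# buffer size K trades queue memory against packet loss at a given load.
def mm1k_loss(rho, K):
    """Blocking probability of an M/M/1/K queue with utilisation rho."""
    if rho == 1.0:
        return 1.0 / (K + 1)
    return (1 - rho) * rho**K / (1 - rho**(K + 1))

for K in (10, 20, 50):
    print(f"K={K:>2}  loss={mm1k_loss(rho=0.9, K=K):.2e}")
```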
Abstract: The traditional software product and process metrics
are neither suitable nor sufficient in measuring the complexity of
software components, which ultimately is necessary for quality and
productivity improvement within organizations adopting CBSE.
Researchers have proposed a wide range of complexity metrics for
software systems. However, these metrics are not sufficient for
components and component-based systems, being restricted to
module-oriented and object-oriented systems. This study proposes to
measure the complexity of JavaBean software components as a reflection
of their quality, so that a component can be adapted accordingly to make
it more reusable. The proposed metric covers only the design issues of a
component and does not consider packaging and deployment complexity. In
this way, the complexity of software components can be kept within
limits, which in turn helps enhance quality and productivity.
Abstract: This study was a case study analysis of the Thai Asia
Pacific Brewery Company. The purpose was to analyze the
company’s marketing objective, marketing strategy at company level,
and marketing mix before liquor liberalization in 2000. The study used
a qualitative, descriptive research approach, and its results were as
follows: (1) the marketing objective was to increase the market share of
Heineken and Amstel; (2) the company's marketing strategies were
brand building strategy and distribution strategy. Additionally, the
company also pursued a marketing mix strategy as follows. Product
strategy: the company added more beer brands, namely Amstel and Tiger,
to provide additional choice to consumers, and carried out product and
marketing research and product development. Price strategy: the company
took cost, competitors, the market, the economic situation, and tax into
consideration. Promotion strategy:
the company conducted sales promotion and advertising. Distribution
strategy: the company extended its channels of distribution into food
shops, pubs, and various entertainment venues. This strategy
benefited interested persons and people who were engaged in the beer
business.
Abstract: Matrix metalloproteinase-3 (MMP-3) is a key member
of the MMP family, and is known to be present in coronary
atherosclerotic lesions. Several studies have demonstrated that the
MMP-3 5A/6A polymorphism modifies transcriptional activity in an
allele-specific manner. We hypothesized that this polymorphism may be
a risk factor for the development of coronary stenosis. The aim of
our study was to estimate the effect of the MMP-3 (5A/6A) gene
polymorphism on interindividual variability in the risk of coronary
stenosis in an Iranian population. DNA was extracted from white blood
cells and genotypes
were obtained from coronary stenosis cases (n=95) and controls
(n=100) by PCR (polymerase chain reaction) and restriction
fragment length polymorphism techniques. Significant differences
between cases and controls were observed for MMP-3 genotype
frequencies (χ² = 199.305, p < 0.001); the 6A allele was seen less
frequently in the control group than in the disease group
(85.79% vs. 78%, 6A/6A+5A/6A vs. 5A/5A, P ≤ 0.001). These data
imply the involvement of -1612 5A/6A polymorphism in coronary
stenosis, and suggest that probably the 6A/6A MMP-3 genotype is a
genetic susceptibility factor for coronary stenosis.
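The mechanics of the reported association test can be illustrated with scipy's chi-square test on a genotype-by-group contingency table. The counts below are invented placeholders chosen only to match the stated group sizes; they are not the study's data:

```python
# Mechanics of the association test only; these genotype counts are invented
# placeholders matching the stated group sizes, NOT the study's data.
from scipy.stats import chi2_contingency

#         5A/5A  5A/6A  6A/6A
table = [[10,    40,    45],     # cases    (n = 95, hypothetical split)
         [22,    48,    30]]     # controls (n = 100, hypothetical split)
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.3f}, dof = {dof}, p = {p:.4f}")
```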
Abstract: A design method for fractional delay FIR filters based on
the differential evolution algorithm is presented. Differential
evolution is an evolutionary algorithm for solving global optimization
problems in a continuous search space. In the proposed approach, the
evolutionary algorithm is used to determine the coefficients of a
fractional delay FIR filter based on the Farrow structure. Basic
differential evolution is enhanced with a restricted mating technique,
which improves the algorithm's performance in terms of convergence
speed and quality of the obtained solution. The evolutionary
optimization minimizes an objective function based on the amplitude
response and phase delay errors. Experimental results show that the
proposed algorithm achieves smaller amplitude response and phase delay
errors than the Least-Squares method.
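A compact sketch of the optimisation step is given below, using scipy's stock differential evolution in place of the paper's restricted-mating variant; the filter length, target delay, and error weighting are assumptions:

```python
# Sketch of the optimisation step with scipy's stock differential evolution
# (the paper uses a restricted-mating variant); tap count, delay and error
# weighting are assumptions. Find h so H(w) tracks the ideal exp(-j*w*D).
import numpy as np
from scipy.optimize import differential_evolution

N, D = 9, 4.3                            # 9 taps, 4.3-sample fractional delay
w = np.linspace(0.01, 0.9 * np.pi, 200)  # design band

def objective(h):
    H = np.exp(-1j * np.outer(w, np.arange(N))) @ h
    amp_err = np.abs(np.abs(H) - 1.0)                  # amplitude error
    pd_err = np.abs(-np.unwrap(np.angle(H)) / w - D)   # phase delay error
    return amp_err.max() + pd_err.max()

res = differential_evolution(objective, bounds=[(-1, 1)] * N, seed=0,
                             maxiter=300, tol=1e-8)
print("max combined error:", res.fun)
```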
Abstract: Microcirculation is essential for the proper supply of
oxygen and nutritive substances to the biological tissue and the
removal of waste products of metabolism. The determination of
blood flow in the capillaries is therefore of great interest to clinicians.
A comparison between the developed non-invasive, non-contact,
whole-field laser speckle contrast imaging (LSCI) technique and a
commercially available laser Doppler blood flowmeter (LDF) in
evaluating blood flow at the fingertip and elbow is presented
here. The LSCI technique gives more
quantitative information on the velocity of blood when compared to
the perfusion values obtained using the LDF. Measurement of blood
flow in capillaries can be of great interest to clinicians in the
diagnosis of vascular diseases of the upper extremities.
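The core LSCI quantity is the local speckle contrast K = σ/⟨I⟩ computed over a small sliding window; faster flow blurs the speckle pattern and lowers K. The sketch below shows this standard definition, not the authors' full processing chain:

```python
# Standard speckle contrast K = sigma / mean over a sliding window (the core
# LSCI quantity, not the authors' full processing chain).
import numpy as np
from scipy.ndimage import uniform_filter

def speckle_contrast(img, win=7):
    mean = uniform_filter(img, win)
    mean_sq = uniform_filter(img**2, win)
    sigma = np.sqrt(np.maximum(mean_sq - mean**2, 0))
    return sigma / np.maximum(mean, 1e-12)

raw = np.random.rand(128, 128)       # stand-in for a raw speckle frame
K = speckle_contrast(raw)
print(K.mean())                      # faster flow blurs speckle => lower K
```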
Abstract: Wireless sensor networks have recently attracted increasing
interest; they are widely used in many commercial and military
applications and may be deployed in critical scenarios (e.g. where a
malfunctioning network endangers human life or causes great financial
loss). Such networks must be protected against intrusion by using
secret keys to encrypt the messages exchanged between communicating
nodes. Both symmetric and asymmetric methods have drawbacks when used
for key management. We therefore avoid the weaknesses of these two
cryptosystems and exploit their advantages to establish a secure
environment by developing a new encryption method based on the idea of
code conversion. The code conversion equations are used as the key for
the proposed system, which is designed on the principles of logic
gates. Using our security architecture, we show how significant attacks
on wireless sensor networks can be mitigated.
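Since the abstract does not spell out its code-conversion equations, the following sketch is a loudly hypothetical reading: the classic binary-to-Gray conversion (pure XOR logic, hence "logic-gate principles") serves as a stand-in keyed transform combined with a shared secret key:

```python
# Loudly hypothetical stand-in for the unspecified "code conversion":
# binary<->Gray conversion is pure XOR logic (logic-gate principles) and is
# combined here with a shared key as a toy keyed transform.
def to_gray(n):
    return n ^ (n >> 1)

def from_gray(g):
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

def encrypt(byte, key):
    return to_gray(byte) ^ key

def decrypt(c, key):
    return from_gray(c ^ key)

key = 0b10110100
for m in (0x00, 0x5A, 0xFF):
    assert decrypt(encrypt(m, key), key) == m
print("round-trip ok")
```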
Abstract: Music has a great effect on the human body and mind; it
can have a positive effect on the hormone system. The objective of this
study is to analyze the effect of music (Carnatic, hard rock, and jazz)
on brain activity during mental workload using electroencephalography
(EEG). Eight healthy subjects without special musical education
participated in the study. EEG signals were acquired at the frontal
(Fz), parietal (Pz), and central (Cz) electrode sites while the subjects
listened to music under three experimental conditions (rest, music
without a mental task, and music with a mental task). Spectral power
features were extracted for the alpha, theta, and beta brain rhythms.
While listening to jazz, the alpha and theta powers at Cz were
significantly (p < 0.05) higher at rest than during music with or
without the mental task. While listening to Carnatic music, the beta
power at the Cz and Fz locations was significantly (p < 0.05) higher
with the mental task than at rest or during music without the mental
task. These findings corroborate that attention-based activities are
enhanced while listening to jazz and Carnatic music, compared to hard
rock, during a mental task.
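The spectral power features can be computed with a standard Welch band-power estimate, as sketched below; the sampling rate and band edges are common conventions assumed here rather than taken from the paper:

```python
# Standard Welch band-power features (sampling rate and band edges are
# common conventions assumed here, not taken from the paper).
import numpy as np
from scipy.signal import welch

def band_power(eeg, fs, lo, hi):
    f, psd = welch(eeg, fs=fs, nperseg=2 * fs)
    mask = (f >= lo) & (f < hi)
    return psd[mask].sum() * (f[1] - f[0])     # approximate band integral

fs = 256
eeg = np.random.randn(30 * fs)                 # stand-in 30 s Cz recording
for name, (lo, hi) in {"theta": (4, 8), "alpha": (8, 13),
                       "beta": (13, 30)}.items():
    print(name, round(float(band_power(eeg, fs, lo, hi)), 4))
```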
Abstract: The healthcare environment is generally perceived as
being information rich yet knowledge poor. However, there is a lack
of effective analysis tools to discover hidden relationships and trends
in data. In fact, valuable knowledge can be discovered from
the application of data mining techniques in healthcare systems. In this
study, a methodology for extracting significant patterns from coronary
heart disease data warehouses for heart attack prediction, a condition
that unfortunately remains a leading cause of mortality worldwide, is
presented. For this purpose, we propose to dynamically enumerate the
optimal subsets of reduced features of high interest by using the rough
sets technique combined with dynamic programming, and then to validate
the classification with a Random Forest (RF) classifier to identify
risky heart disease cases. This work is based on a large amount of data
collected from several clinical institutions, reflecting the medical
profiles of patients. Moreover, experts' knowledge in this field has
been taken into consideration in order to define the disease and its
risk factors, and to establish significant knowledge relationships among
the medical factors. A computer-aided system has been developed for this
purpose based on a population of 525 adults. The performance of the
proposed model is analyzed and evaluated against a set of benchmark
techniques applied to this classification problem.
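The validation step can be outlined with a generic scikit-learn Random Forest on stand-in data, as below; the rough-set feature reduction and the real clinical features are not reproduced:

```python
# Generic validation sketch: a scikit-learn Random Forest on stand-in data
# (the rough-set reduction and the real clinical features are not shown).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.standard_normal((525, 8))    # 525 adults x 8 reduced risk factors
y = (X[:, 0] + 0.5 * X[:, 1]
     + rng.normal(0, 1, 525)) > 0    # synthetic at-risk / not-at-risk label

rf = RandomForestClassifier(n_estimators=200, random_state=0)
print("CV accuracy:", round(cross_val_score(rf, X, y, cv=5).mean(), 3))
```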