Abstract: Energy Efficiency Management is at the heart of a
worldwide problem. The capability of a multi-agent system as a
technology to manage micro-grid operation has already been
proven. This paper deals with the implementation of a decisional
pattern applied to a multi-agent system that provides intelligence to
a distributed local energy network considered at the local consumer
level. Development of a multi-agent application involves agent
specification, analysis, design, and realization, and it can be
implemented by following several decisional patterns. The purpose
of the present article is to suggest a new approach to a decisional
pattern involving a multi-agent system to control a distributed local
energy network in a decentralized competitive system. The proposed
solution is the result of a dichotomous approach based on
environment observation. It uses an iterative process to solve
automatic learning problems and converges monotonically and
rapidly to the system's attracting operating point.
Abstract: Efficient handoff algorithms are a cost-effective way
of enhancing the capacity and QoS of cellular systems. A higher
hysteresis value effectively prevents unnecessary handoffs but
causes undesired cell dragging. This undesired cell dragging causes
interference or can lead to dropped calls in a microcellular
environment. The problems are further exacerbated by the corner
effect phenomenon, which causes the signal level to drop by 20-30 dB
over 10-20 meters. Thus, in order to maintain reliable communication
in a microcellular system, new and better handoff algorithms must be
developed. A fuzzy-based handoff algorithm is proposed in this paper
as a solution to this problem. Handoff based on the ratio of the slope
of the normal signal loss to that of the actual signal loss is
presented. The fuzzy-based solution is supported by comparing its
results with those obtained from an analytical solution.
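The slope-ratio criterion described above can be illustrated with a small
sketch. This is not the paper's fuzzy algorithm: the crisp thresholds, the
hysteresis check, and all parameter names below are assumptions for
illustration only.

```python
# Illustrative sketch of a slope-ratio handoff test (not the paper's
# fuzzy algorithm). Thresholds and names are assumed for illustration.

def slope(samples, dt=1.0):
    """Average rate of change of the signal level across the window."""
    return (samples[-1] - samples[0]) / ((len(samples) - 1) * dt)

def handoff_decision(serving_dbm, normal_slope_db=-0.5,
                     ratio_threshold=3.0, hysteresis_db=5.0,
                     neighbor_dbm=None):
    """Trigger a handoff when the measured decay is much steeper than
    the normal path-loss decay (e.g. a corner effect) and the neighbor
    cell exceeds the serving cell by the hysteresis margin."""
    measured = slope(serving_dbm)
    if measured >= 0:                     # signal not decaying: stay
        return False
    ratio = measured / normal_slope_db    # > 1 means steeper than normal
    strong_neighbor = (neighbor_dbm is not None and
                       neighbor_dbm - serving_dbm[-1] >= hysteresis_db)
    return ratio >= ratio_threshold and strong_neighbor

# Corner effect: a ~20 dB drop within a few samples triggers the handoff
print(handoff_decision([-70, -75, -82, -90], neighbor_dbm=-78))
```

In the fuzzy version, the crisp ratio and hysteresis comparisons would be
replaced by membership functions and inference rules.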
Abstract: The present investigation is concerned with the
sub-impacts that take place when a rigid hemispherical-head block
transversely impacts a beam at different locations. A dynamic
substructure technique for elastic-plastic impact is applied to solve
this problem numerically. The time histories of the impact force and
of the energy exchange between the block and the beam are obtained.
The process of sub-impacts is analyzed from the energy exchange point
of view. The results verify the influence of the impact location on
the impact duration, the first sub-impact, and the energy exchange
between the beam and the block.
Abstract: One of the difficulties of vibration-based damage identification methods is the nonuniqueness of the damage identification results: different damage locations and severities may produce identical response signals, a problem that is even more severe in multiple-damage detection. This paper proposes a new damage detection strategy to avoid this nonuniqueness. The strategy first determines the approximate damage area with a statistical pattern recognition method using dynamic strain signals measured by distributed fiber Bragg gratings, and then accurately evaluates the damage information with a Bayesian model updating method using experimental modal data. A stochastic simulation method is used to compute the high-dimensional integral in the Bayesian problem. Finally, an experiment on a plate structure, simulating one part of a mechanical structure, is used to verify the effectiveness of this approach.
Abstract: Visualizing sound and noise often helps in determining
an appropriate control through source localization. Near-field
acoustic holography (NAH) is a powerful tool for this ill-posed
problem. However, in practice, due to the small finite aperture size,
discrete Fourier transform (FFT) based NAH cannot predict the active
region of interest (AROI) beyond the edges of the plane. A few
theoretical approaches have been proposed for solving the finite
aperture problem. However, most of these methods are not well suited
to practical implementation, especially near the edges of the source.
In this paper, a zip-stuffing extrapolation approach with a 2D Kaiser
window is suggested. It operates in the complex wavenumber space
to localize the predicted sources. We numerically construct a test
environment with touch impact databases to test the localization of
the sound source. It is observed that the zip-stuffing aperture
extrapolation and the 2D window with evanescent components provide
better accuracy, especially for the small aperture and its derivatives.
Abstract: This paper reports work done to improve the modeling of complex processes when only small experimental data sets are available. Neural networks are used to capture the nonlinear underlying phenomena contained in the data set and to partly eliminate the burden of having to specify completely the structure of the model. Two different types of neural networks were used for the application of the Pulping of Sugar Maple problem. Three-layer feed-forward neural networks, trained with Preconditioned Conjugate Gradient (PCG) methods, were used in this investigation. Preconditioning is a method to improve convergence by lowering the condition number and increasing the clustering of the eigenvalues. The idea is to solve the modified problem M⁻¹Ax = M⁻¹b, where M is a positive-definite preconditioner that is closely related to A. We mainly focused on PCG-based training methods originating from optimization theory, namely Preconditioned Conjugate Gradient with Fletcher-Reeves Update (PCGF), Preconditioned Conjugate Gradient with Polak-Ribiere Update (PCGP), and Preconditioned Conjugate Gradient with Powell-Beale Restarts (PCGB). The behavior of the PCG methods in the simulations proved to be robust against phenomena such as oscillations due to large step sizes.
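The preconditioned iteration described above can be sketched in a few
lines. This is a generic Jacobi-preconditioned conjugate gradient solver
for a linear system, not the paper's PCGF/PCGP/PCGB network-training code;
it only illustrates how the preconditioner M enters the iteration.

```python
# Minimal preconditioned conjugate gradient (PCG) sketch in NumPy.
# It solves A x = b through the preconditioned system M^{-1} A x = M^{-1} b,
# here with a Jacobi (diagonal) preconditioner M = diag(A).
import numpy as np

def pcg(A, b, tol=1e-10, max_iter=200):
    n = len(b)
    x = np.zeros(n)
    r = b - A @ x                  # residual
    M_inv = 1.0 / np.diag(A)       # Jacobi preconditioner, applied as M^{-1}
    z = M_inv * r                  # preconditioned residual
    p = z.copy()                   # search direction
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)      # step size along p
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p  # beta update of the search direction
        rz = rz_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = pcg(A, b)
print(np.allclose(A @ x, b))       # True
```

Lowering the condition number of M⁻¹A is what clusters the eigenvalues and
speeds up convergence, as the abstract notes.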
Abstract: The need for multilingual communication in Japan has
increased due to an increase in the number of foreigners in the
country. When people communicate in their nonnative language,
the differences in language prevent mutual understanding among
the communicating individuals. In the medical field, communication
between the hospital staff and patients is a serious problem. Currently,
medical translators accompany patients to medical care facilities, and
the demand for medical translators is increasing. However, medical
translators cannot necessarily provide support, especially in cases in
which round-the-clock support is required or in cases of emergency.
The medical field has high expectations from information technology.
Hence, a system that supports accurate multilingual communication is
required. Despite recent advances in machine translation technology,
it is very difficult to obtain highly accurate translations. We have
developed a support system called M3 for multilingual medical
reception. M3 provides support functions that aid foreign patients in
the following respects: conversation, questionnaires, reception procedures,
and hospital navigation; it also has a Q&A function. Users
can operate M3 using a touch screen and receive text-based support.
In addition, M3 uses accurate translation tools called parallel texts
to facilitate reliable communication through conversations between
the hospital staff and the patients. However, if there is no parallel
text that expresses what users want to communicate, the users cannot
communicate. In this study, we have developed a circulating support
environment for multilingual medical communication using parallel
texts. The proposed environment can circulate necessary parallel texts
through the following procedure: (1) a user provides feedback about
the necessary parallel texts, following which (2) these parallel texts
are created and evaluated.
Abstract: Cryptographic algorithms play a crucial role in the
information society by providing protection from unauthorized
access to sensitive data. It is clear that information technology
will become increasingly pervasive; hence we can expect the emergence
of ubiquitous or pervasive computing and ambient intelligence. These
new environments and applications will present new security
challenges, and there is no doubt that cryptographic algorithms and
protocols will form part of the solution. The efficiency of a public
key cryptosystem is mainly measured in terms of computational
overhead, key size, and bandwidth. In particular, the RSA algorithm
is used in many applications for providing security. Although the
security of RSA is beyond doubt, the evolution of computing power has
caused a growth in the necessary key length. The fact that most chips
on smart cards cannot process keys exceeding 1024 bits shows that
there is a need for an alternative. NTRU is such an alternative: a
collection of mathematical algorithms based on manipulating lists of
very small integers and polynomials. This allows NTRU to achieve high
speeds with minimal computing power. NTRU (Nth degree Truncated
Polynomial Ring Unit) is the first secure public key cryptosystem not
based on the factorization or discrete logarithm problem. This means
that, given sufficient computational resources and time, an adversary
should still not be able to break the key. Multi-party communication
and the requirement of optimal resource utilization have created a
present-day demand for applications that need security enforcement
techniques and can be enhanced with high-end computing. This has
prompted us to develop high-performance NTRU schemes using approaches
such as high-end computing hardware. Peer-to-peer (P2P) and
enterprise grids have proven to be one approach for developing
high-end computing systems. By utilizing them, one can improve the
performance of NTRU through parallel execution. In this paper we
propose and develop an application for NTRU using the enterprise grid
middleware Alchemi. An analysis and comparison of its performance for
various text files is presented.
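NTRU's arithmetic takes place in the truncated polynomial ring
Z[x]/(x^N - 1), where multiplication is a cyclic convolution of
small-integer coefficient lists; this is the core operation a parallel
implementation distributes. A minimal sketch with toy (insecure)
parameters, not a real NTRU parameter set:

```python
# NTRU's core operation: multiplication in Z[x]/(x^N - 1) with
# coefficients reduced mod q, i.e. a cyclic convolution of
# small-integer coefficient lists. N=5, q=32 are toy values.

def ring_multiply(a, b, N, q):
    """Cyclic convolution of two degree-<N polynomials, coefficients mod q."""
    c = [0] * N
    for i in range(N):
        for j in range(N):
            c[(i + j) % N] = (c[(i + j) % N] + a[i] * b[j]) % q
    return c

a = [1, -1, 0, 1, 0]   # small coefficients, as in NTRU
b = [0, 1, 1, 0, -1]
print(ring_multiply(a, b, 5, 32))   # [2, 1, 31, 31, 0]
```

The O(N²) double loop over independent output coefficients is what makes
the operation amenable to the parallel execution discussed above.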
Abstract: Noise has adverse effects on human health and
comfort. Noise not only causes hearing impairment, but also acts as
a causal factor for stress and raised systolic pressure. Additionally,
it can be a causal factor in work accidents, both by masking hazards
and warning signals and by impeding concentration. Industry
workers also suffer psychological and physical stress as well as
hearing loss due to industrial noise. This paper proposes an approach
to enable engineers to quantitatively point out the noisiest source
for modification while multiple machines are operating simultaneously.
A model with a point source and spherical radiation in a free field
was adopted to formulate the problem. The procedure works very
well in ideal cases (point source and free field). However, most
industrial noise problems are complicated by the fact that the noise
is confined in a room. Reflections from the walls, floor, ceiling, and
equipment in a room create a reverberant sound field that alters the
sound wave characteristics from those of the free field. The model
was therefore validated for a relatively low-absorption room at the
NIT Kurukshetra Central Workshop. The validation showed that the
estimated sound powers of noise sources under simultaneous operating
conditions were on the lower side, within error limits of
3.56-6.35%, suggesting that this methodology is suitable for
practical implementation in industry. To demonstrate the application
of the above analytical procedure for estimating the sound power of
noise sources under simultaneous operating conditions, a manufacturing
facility (the Railway Workshop at Yamunanagar, India) having five
sound sources (machines) on its workshop floor is considered in this
study. The case study identified the two most effective candidates
(noise sources) for noise control in the Railway Workshop,
Yamunanagar, India. The study suggests that modification of the
design and/or replacement of these two identified noisiest sources
(machines) would be necessary to achieve an effective reduction in
noise levels. Further, the estimated data allow engineers to better
understand the noise situation of the workplace and to revise the
noise map when changes occur due to a workplace re-layout.
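Under the point-source, free-field model with spherical radiation used
above, the sound power level Lw follows from a pressure level Lp measured
at distance r as Lw = Lp + 20·log10(r) + 11 dB, and simultaneous
incoherent sources combine by energy summation. A sketch with
illustrative values, not the case-study data:

```python
# Free-field, spherical-radiation point-source model:
#   Lp = Lw - 20*log10(r) - 11   =>   Lw = Lp + 20*log10(r) + 11
# plus energy summation of simultaneous incoherent sources.
import math

def sound_power_level(lp_db, r_m):
    """Estimate Lw (dB re 1 pW) from Lp measured at r metres, free field."""
    return lp_db + 20 * math.log10(r_m) + 11

def combine_levels(levels_db):
    """Total SPL of incoherent simultaneous sources (energy summation)."""
    return 10 * math.log10(sum(10 ** (l / 10) for l in levels_db))

print(round(sound_power_level(85.0, 2.0), 1))   # 85 dB measured at 2 m -> 102.0
print(round(combine_levels([90.0, 90.0]), 1))   # two equal sources: +3 dB -> 93.0
```

In a reverberant room these estimates drift from the free-field values,
which is why the abstract reports validation in a low-absorption room.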
Abstract: Data is available in abundance in any business
organization. It includes records for finance, maintenance,
inventory, progress reports, etc. As time progresses, the data keeps
accumulating, and the challenge is to extract information from
this data bank. Knowledge discovery from these large and complex
databases is a key problem of this era. Data mining and machine
learning techniques are needed that can scale to the size of the
problems and can be customized to the business application. To
develop accurate and relevant information for a particular problem,
business analysts need multidimensional models that give reliable
information so that they can make the right decisions. If the
multidimensional model does not possess advanced features, accuracy
cannot be expected. The present work involves the development of a
multidimensional data model incorporating advanced features. The
criterion of computation is based on data precision and on the
inclusion of a slowly changing time dimension. The final results are
displayed in graphical form.
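The slowly changing time dimension mentioned above is commonly handled
with a Type 2 slowly-changing-dimension scheme, in which each attribute
change closes the current row and inserts a new one with validity dates.
The sketch below is a generic illustration with an assumed schema, not
the paper's model:

```python
# Type 2 slowly changing dimension sketch: history is preserved by
# closing the current row and appending a new one on each change.
from datetime import date

dimension = []  # rows: key, value, valid_from, valid_to, current flag

def apply_change(business_key, new_value, change_date):
    """Close the current row for the key (if any) and insert a new row."""
    for row in dimension:
        if row["key"] == business_key and row["current"]:
            row["valid_to"] = change_date
            row["current"] = False
    dimension.append({"key": business_key, "value": new_value,
                      "valid_from": change_date, "valid_to": None,
                      "current": True})

apply_change("C001", "Region-North", date(2020, 1, 1))
apply_change("C001", "Region-South", date(2021, 6, 1))
print(len(dimension), dimension[-1]["value"])   # 2 Region-South
```

Queries can then join facts to the dimension row whose validity interval
contains the fact's date, so historical reports stay accurate.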
Abstract: In this research work, investigations are carried out on a
Continuous Wave (CW) Nd:YAG laser welding system after
preliminary experimentation to understand the influencing parameters
associated with laser welding of AISI 304. The experimental
procedure involves a series of laser welding trials on AISI 304
stainless steel sheets with various combinations of process parameters
such as beam power, welding speed, and beam incident angle. An
industrial 2 kW CW Nd:YAG laser system, available at Welding
Research Institute (WRI), BHEL Tiruchirappalli, is used for
conducting the welding trials for this research. After proper tuning of
laser beam, laser welding experiments are conducted on AISI 304
grade sheets to evaluate the influence of various input parameters on
weld bead geometry i.e. bead width (BW) and depth of penetration
(DOP). From the laser welding results, it is noticed that beam
power and welding speed are the two most influential parameters for
the depth and width of the bead. A three-dimensional finite element
simulation of the high-density heat source has been performed for the
laser welding technique using the finite element code ANSYS to
predict the temperature profile of the laser beam heat source on AISI
304 stainless steel sheets. The temperature-dependent material
properties of AISI 304 stainless steel, which have a great influence
on the computed temperature profiles, are taken into account in the
simulation. The latent heat of fusion is accounted for through the
thermal enthalpy of the material when treating the phase transition.
A Gaussian distribution of heat flux from a moving heat source with a
conical shape is used for analyzing the temperature profiles.
Experimental and simulated weld bead profiles are analyzed for the
stainless steel material for different beam powers, welding speeds,
and beam incident angles. The results obtained from the simulation
are compared with the experimental data, and it is observed that the
results of the numerical analysis (FEM) are in good agreement with
the experimental results, with an overall percentage error estimated
to be within ±6%.
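A common surface form of the Gaussian heat-flux distribution for a moving
source is q(x, y, t) = (3Q / (π r0²)) · exp(-3((x - v t)² + y²) / r0²).
Whether the paper uses exactly this form is not stated, and the parameter
values below are illustrative only:

```python
# Moving Gaussian surface heat source sketch (a common form, not
# necessarily the paper's exact model; parameters are illustrative).
import math

def gaussian_flux(x, y, t, Q=2000.0, r0=0.5e-3, v=10e-3):
    """Surface heat flux (W/m^2) of a moving Gaussian source.
    Q: absorbed beam power (W), r0: beam radius (m), v: welding speed (m/s).
    The source centre travels along the x axis at speed v."""
    peak = 3.0 * Q / (math.pi * r0 ** 2)
    return peak * math.exp(-3.0 * ((x - v * t) ** 2 + y ** 2) / r0 ** 2)

# Flux at the moving beam centre equals the peak value
print(gaussian_flux(x=0.0, y=0.0, t=0.0)
      == 3.0 * 2000.0 / (math.pi * (0.5e-3) ** 2))
```

In a finite element model such a function is evaluated at each surface
integration point and time step as the source traverses the weld line.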
Abstract: Graphene-metal contact resistance limits the performance of graphene-based electrical devices. In this work, we have fabricated both graphene field-effect transistors (GFET) and transfer length measurement (TLM) test devices with titanium contacts. The purpose of this work is to compare the contact resistances that can be numerically extracted from the GFETs and measured from the TLM structures. We also provide a brief review of the work done in the field to solve the contact resistance problem.
Abstract: An unstructured finite volume numerical model is
presented here for simulating shallow-water flows with wetting and
drying fronts. The model is based on Green's theorem in
combination with Chorin's projection method. A second-order upwind
scheme coupled with a least-squares technique is used to handle the
convection terms. A wetting and drying treatment is used in the
present model to ensure total mass conservation. To test its
capability and reliability, the present model is used to solve the
Parabolic Bowl problem. We compare our numerical solutions with
the corresponding analytical and existing standard numerical results.
Excellent agreement is found in all cases.
Abstract: Complex engineering design problems consist of
numerous factors of varying criticality. Considering fundamental design features and minor details alike will result in an extensive
waste of time and effort. Design parameters should be introduced gradually as appropriate based on their significance relevant to the
problem context. This motivates the representation of design parameters at multiple levels of an abstraction hierarchy. However, developing abstraction hierarchies is an area that is not well
understood. Our research proposes a novel hierarchical abstraction methodology to plan effective engineering designs and processes. It
provides a theoretically sound foundation to represent, abstract and stratify engineering design parameters and tasks according to causality and criticality. The methodology creates abstraction
hierarchies in a recursive and bottom-up approach that guarantees no
backtracking across any of the abstraction levels. The methodology consists of three main phases: representation, abstraction, and layering into multiple hierarchical levels. The effectiveness of the
developed methodology is demonstrated by a design problem.
Abstract: Whole genome duplication (WGD) increased the
number of chromosomes in the yeast Saccharomyces cerevisiae from 8 to
16. Although the number of chromosomes in the genome of this organism
has been retained since the WGD, chromosomal rearrangement
events have created an evolutionary distance between the current
genome and its ancestor. Studies using evolutionary-based approaches
on eukaryotic genomes have shown that the rearrangement distance is
an approximable problem. In the case of S. cerevisiae, we describe
how the rearrangement distance is accessible by using a dedoubled
adjacency graph drawn for 55 large paired chromosomal regions
originating from the WGD. We then provide a program, extracted from a
C program database, to draw a dedoubled genome adjacency graph for S.
cerevisiae. From a bioinformatics perspective, using the duplicated
blocks of the current genome of S. cerevisiae, we infer that the
genomic organization of eukaryotes has the potential to provide
valuable detailed information about their ancestral genome.
Abstract: This study created new graphical icons and operating
functions in a CAD/CAM software system by analyzing icons in some
of the popular systems, such as AutoCAD, AlphaCAM, Mastercam
and the 1st edition of LiteCAM. These software systems all focus on
geometric design and editing; thus, how to transmit messages
intuitively from the icon itself to the user is an important function of
graphical icons. The primary purpose of this study is to design
innovative icons and commands for new software.
This study employed the TRIZ method, an innovative design
method, to generate new concepts systematically. Through literature
review, it then investigated and analyzed the relationship between
TRIZ and idea development. Contradiction Matrix and 40 Principles
were used to develop an assisting tool suitable for icon design in
software development. We first gathered icon samples from the
selected CAD/CAM systems, then grouped these icons by meaningful
function and compared their useful and harmful properties. Finally,
we developed new icons for the new software systems in order to
avoid intellectual property problems.
Abstract: This paper presents an analytical solution for reliably estimating the hydrodynamic pressure induced on gravity dams by the vertical component of earthquake ground motion when solving the fluid-dam interaction problem. The analytical technique is presented for calculating earthquake-induced hydrodynamic pressure in the reservoir of gravity dams, allowing for water compressibility and wave absorption at the reservoir bottom. This new analytical solution can take into account the effect of the bottom material on the seismic response of gravity dams. It is concluded that, because the vertical component of ground motion causes significant hydrodynamic forces in the horizontal direction on a vertical upstream face, responses to the vertical component of ground motion are of special importance in the analysis of concrete gravity dams subjected to earthquakes.
Abstract: Mining sequential patterns in large databases has become
an important data mining task with broad applications; it describes
potential sequenced relationships among items in a database. Many
different algorithms have been introduced for this task. Conventional
algorithms can find the exact optimal sequential pattern rule, but
they take a long time, particularly when applied to large databases.
Recently, evolutionary algorithms such as Particle Swarm Optimization
and the Genetic Algorithm have been proposed and applied to solve this
problem. This paper introduces a new kind of hybrid evolutionary
algorithm that combines the Genetic Algorithm (GA) with Particle Swarm
Optimization (PSO) to mine sequential patterns, in order to improve
the convergence speed of evolutionary algorithms. This algorithm is
referred to as SP-GAPSO.
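The combination of GA operators with PSO updates can be illustrated on a
toy continuous problem. This is not SP-GAPSO itself, which operates on
sequential-pattern encodings; the sketch only shows the hybrid loop: a
PSO velocity/position update followed by GA crossover and mutation,
accepted on improvement. All parameter values are assumptions.

```python
# Toy hybrid GA-PSO loop: PSO update, then GA crossover/mutation
# accepted only on improvement. Minimises a sphere function.
import random

def fitness(x):
    return sum(v * v for v in x)          # objective to minimise

def hybrid_gapso(dim=3, swarm=20, iters=100, seed=0):
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(swarm)]
    vel = [[0.0] * dim for _ in range(swarm)]
    pbest = [p[:] for p in pos]
    gbest = min(pos, key=fitness)[:]
    for _ in range(iters):
        for i in range(swarm):
            for d in range(dim):          # PSO velocity/position update
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * rng.random() * (pbest[i][d] - pos[i][d])
                             + 1.5 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            mate = pos[rng.randrange(swarm)]            # GA crossover
            child = [pos[i][d] if rng.random() < 0.5 else mate[d]
                     for d in range(dim)]
            if rng.random() < 0.1:                      # GA mutation
                child[rng.randrange(dim)] += rng.gauss(0, 0.1)
            if fitness(child) < fitness(pos[i]):        # accept on improvement
                pos[i] = child
            if fitness(pos[i]) < fitness(pbest[i]):
                pbest[i] = pos[i][:]
            if fitness(pos[i]) < fitness(gbest):
                gbest = pos[i][:]
    return gbest

best = hybrid_gapso()
print(fitness(best))
```

The GA step injects diversity that plain PSO lacks, which is the
convergence-speed rationale the abstract gives for the hybrid.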
Abstract: Schema matching plays a key role in many different
applications, such as schema integration, data integration, data
warehousing, data transformation, E-commerce, peer-to-peer data
management, ontology matching and integration, semantic Web,
semantic query processing, etc. Manual matching is expensive and
error-prone, so it is important to develop techniques to
automate the schema matching process. In this paper, we present a
solution for the XML schema automated matching problem which
produces semantic mappings between corresponding schema
elements of given source and target schemas. This solution
contributes to solving the XML schema automated matching problem
more comprehensively and efficiently. Our solution is based on
combining the linguistic similarity, data type compatibility, and
structural similarity of XML schema elements. After describing our solution,
we present experimental results that demonstrate the effectiveness of
this approach.
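The combination of the three similarity measures can be sketched as a
weighted sum; the weights, threshold, and element representation below
are assumptions for illustration, not the paper's tuned values.

```python
# Sketch of combining linguistic similarity, data type compatibility,
# and structural similarity into one matching score. Weights and the
# threshold are assumed values for illustration.

def combined_similarity(ling, dtype, struct,
                        w_ling=0.4, w_dtype=0.2, w_struct=0.4):
    """Each input similarity is in [0, 1]; returns the weighted combination."""
    return w_ling * ling + w_dtype * dtype + w_struct * struct

def best_match(source_elem, target_elems, sims, threshold=0.5):
    """Pick the target element with the highest combined similarity,
    if it exceeds the acceptance threshold. sims[t] holds the three
    component similarities for target element t."""
    scored = [(combined_similarity(*sims[t]), t) for t in target_elems]
    score, target = max(scored)
    return target if score >= threshold else None

# Hypothetical similarities of a source element against two targets
sims = {"customerName": (0.9, 1.0, 0.7), "custAddress": (0.3, 1.0, 0.4)}
print(best_match("clientName", list(sims), sims))   # customerName
```

A real matcher would compute the three component similarities from name
tokens, declared XML types, and schema-tree context before this step.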
Abstract: A few decades ago, electronic and sensor technologies
were merged into vehicles as Advanced Driver Assistance
Systems (ADAS). However, sensor-based ADASs have limitations
due to weather interference and their line-of-sight nature. In our
project, we investigate a Relative Position based ADAS (RP-ADAS).
We divide the RP-ADAS into four main research areas: GNSS,
VANET, Security/Privacy, and Application. In this paper, we study
the GNSS technologies and determine the most appropriate one. Based
on the performance evaluation, we find that C/A code based
GPS technologies are inappropriate for 'which lane'-level applications;
however, they can be used for 'which road'-level applications.