Abstract: The existing image coding standards generally degrade at low bit-rates because of the underlying block-based Discrete Cosine Transform scheme. Over the past decade, the success of wavelets in solving many different problems has contributed to their unprecedented popularity. Due to implementation constraints, scalar wavelets do not simultaneously possess all the properties, such as orthogonality, short support, linear phase symmetry, and a high order of approximation through vanishing moments, that are essential for signal processing. A new class of wavelets called 'multiwavelets', which possess more than one scaling function, overcomes this problem. This paper presents a new image coding scheme based on nonlinear approximation of multiwavelet coefficients along with multistage vector quantization. The performance of the proposed scheme is compared with the results obtained from scalar wavelets.
Abstract: Eight difference schemes and five limiters are applied to the numerical computation of the Riemann problem. The resolution of discontinuities produced by each scheme is compared. Numerical dissipation and its estimation are discussed. The results show that the numerical dissipation of each scheme is vital to the scheme's accuracy and stability. The MUSCL methodology is an effective approach to increasing computational efficiency and resolution. The limiter should be selected appropriately by balancing compressive and diffusive performance.
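The compressive/diffusive trade-off among limiters mentioned above can be sketched with two classic TVD slope limiters used in MUSCL-type reconstruction. The abstract does not name the five limiters it compares, so minmod and van Leer here are only illustrative assumptions.

```python
# Two common slope limiters for MUSCL reconstruction. `a` and `b` are
# the one-sided slope estimates on either side of a cell; the limiter
# returns the slope actually used, vanishing at extrema (sign change)
# to suppress spurious oscillations near discontinuities.

def minmod(a, b):
    """Most diffusive TVD limiter: the smaller slope, or zero at extrema."""
    if a * b <= 0.0:
        return 0.0
    return a if abs(a) < abs(b) else b

def van_leer(a, b):
    """Smoother, more compressive limiter: harmonic mean of the slopes."""
    if a * b <= 0.0:
        return 0.0
    return 2.0 * a * b / (a + b)
```

Minmod favors stability (more numerical dissipation), while van Leer gives sharper discontinuities, which is exactly the balance the abstract recommends weighing.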
Abstract: Trends in business intelligence, e-commerce and
remote access make it necessary and practical to store data in
different ways on multiple systems with different operating systems.
As businesses evolve and grow, they require efficient computerized
solutions to perform data updates and to access data from diverse
enterprise business applications. The objective of this paper is to
demonstrate the capability of DTS [1] as a database solution for
automatic data transfer and update in solving business problems.
This DTS package is developed for the sale of a variety of plants and
eventually expanded into a commercial supply and landscaping
business. Dimensional data modeling is used in the DTS package to
extract, transform and load data from heterogeneous database
systems such as MySQL, Microsoft Access and Oracle, consolidating
it into a Data Mart residing in SQL Server. The data transfer from
the various databases is scheduled to run automatically every quarter
of the year to support efficient sales analysis. DTS is therefore an
attractive solution for automatic data transfer and update that meets
today's business needs.
Abstract: A numerical study of a two-dimensional supersonic
hydrogen-air mixing layer is performed to investigate the effect of
turbulence and a chemical additive on ignition distance. The
chemical reaction is treated using detailed kinetics. The advection
upstream splitting method is used to calculate the fluxes, and a
one-equation turbulence model is chosen to simulate the considered
problem. Hydrogen peroxide is used as the additive, and the results
show that inflow turbulence and the chemical additive may
drastically decrease the ignition delay in supersonic combustion.
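The flux calculation named in the abstract can be illustrated by a minimal one-dimensional Liou-Steffen AUSM flux for the Euler equations. The paper applies the method to a two-dimensional reacting mixing layer, so this single-species version with an assumed gamma = 1.4 is only an illustrative reduction.

```python
import math

# 1-D AUSM (Advection Upstream Splitting Method) flux sketch.
GAMMA = 1.4

def ausm_flux(rho_l, u_l, p_l, rho_r, u_r, p_r):
    def split(rho, u, p):
        a = math.sqrt(GAMMA * p / rho)             # speed of sound
        m = u / a                                  # Mach number
        h = a * a / (GAMMA - 1.0) + 0.5 * u * u    # total enthalpy
        phi = (rho * a, rho * a * u, rho * a * h)  # a * convective vector
        if abs(m) <= 1.0:                          # subsonic splittings
            m_plus = 0.25 * (m + 1.0) ** 2
            m_minus = -0.25 * (m - 1.0) ** 2
            p_plus = 0.25 * p * (m + 1.0) ** 2 * (2.0 - m)
            p_minus = 0.25 * p * (m - 1.0) ** 2 * (2.0 + m)
        else:                                      # supersonic: pure upwind
            m_plus, m_minus = max(m, 0.0), min(m, 0.0)
            p_plus = p if m > 0.0 else 0.0
            p_minus = p if m < 0.0 else 0.0
        return m_plus, m_minus, p_plus, p_minus, phi

    mp_l, _, pp_l, _, phi_l = split(rho_l, u_l, p_l)
    _, mm_r, _, pm_r, phi_r = split(rho_r, u_r, p_r)
    m_half = mp_l + mm_r                           # interface Mach number
    phi = phi_l if m_half >= 0.0 else phi_r        # upwind convective part
    return tuple(m_half * c + p_term
                 for c, p_term in zip(phi, (0.0, pp_l + pm_r, 0.0)))

# Consistency check: with identical left/right states the AUSM flux must
# reduce to the exact Euler flux (rho*u, rho*u^2 + p, rho*u*H).
f = ausm_flux(1.0, 100.0, 101325.0, 1.0, 100.0, 101325.0)
```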
Abstract: Assembly line balancing is a very important issue in
mass production systems due to production cost. Although many
studies have been done on this topic, assembly line balancing
problems are so complex that they are categorized as NP-hard
problems, and researchers strongly recommend using heuristic
methods. This paper presents a new heuristic approach called the
critical task method (CTM) for solving U-shape assembly line
balancing problems. The performance of the proposed heuristic
method is tested by solving a number of test problems and comparing
the results with 12 other heuristics available in the literature to
confirm the superior performance of the proposed heuristic.
Furthermore, to prove the efficiency of the proposed CTM, the
objectives are extended to minimizing the number of workstations
(or equivalently maximizing line efficiency) and minimizing the
smoothness index. Finally, it is shown that the proposed heuristic is
more efficient than the others at solving the U-shape assembly line
balancing problem.
Abstract: Air conditioning is mainly used as a medium for human
comfort. It is used most often in countries where daily temperatures
are high. Scientifically, air conditioning is defined as the process of
controlling the moisture, cooling, heating and cleaning of air.
Without proper estimation of the cooling load, a large amount of
energy is wasted because an unsuitable air conditioning system does
not overcome the heat gains from the surroundings: either the room
is too big and the air conditioning has to use more energy to cool it,
or the air conditioning is too small for the room. This study develops
a program to calculate the cooling load, making cooling load
estimation easy. Furthermore, it helps to compare hourly and yearly
cooling load estimates. Based on previous work, the software
developed so far is not user-friendly, which can be a problem for
individuals without proper knowledge of cooling load estimation.
Easy access and user-friendliness should be the main design
objectives. This program will allow the cooling load to be estimated
by any user rather than by rule of thumb. Several limitations of the
case study are assessed to ensure that it meets Malaysian building
specifications. Finally, validation is done by comparing manual
calculation with the developed program.
Abstract: In this paper, we present an innovative scheme for
blindly extracting message bits from an image distorted by an attack.
Support Vector Machine (SVM) is used to nonlinearly classify the
bits of the embedded message. Traditionally, a hard decoder is used
with the assumption that the underlying modeling of the Discrete
Cosine Transform (DCT) coefficients does not appreciably change.
In case of an attack, the distribution of the image coefficients is
heavily altered. The distributions of the sufficient statistics at the
receiving end corresponding to the antipodal signals overlap, and a
simple hard decoder fails to classify them properly. We treat
message retrieval of the antipodal signal as a binary classification
problem. Machine learning techniques such as SVM are used to
retrieve the message when a certain specific class of attacks is most
probable. In order to validate the SVM-based decoding scheme, we
have taken Gaussian noise as a test case. We generate a data set
using 125 images and 25 different keys. A polynomial-kernel SVM
achieved 100 percent accuracy on the test data.
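The decoding-as-classification idea can be sketched with a linear SVM trained by the Pegasos sub-gradient method on antipodal bits under Gaussian noise. The paper uses a polynomial-kernel SVM on DCT-domain statistics; the one-dimensional feature standing in for the receiver's sufficient statistic, the noise level, and all hyperparameters below are assumptions for illustration.

```python
import random

def pegasos_train(xs, ys, lam=0.01, epochs=50, seed=0):
    """Linear SVM via the Pegasos sub-gradient method (1-D features)."""
    rng = random.Random(seed)
    w, b, t = 0.0, 0.0, 0
    for _ in range(epochs):
        for i in rng.sample(range(len(xs)), len(xs)):
            t += 1
            eta = 1.0 / (lam * t)                 # decaying step size
            if ys[i] * (w * xs[i] + b) < 1.0:     # hinge margin violated
                w = (1.0 - eta * lam) * w + eta * ys[i] * xs[i]
                b += eta * ys[i]
            else:
                w = (1.0 - eta * lam) * w         # regularization shrink only
    return w, b

def decode_bit(w, b, x):
    """Classify a received statistic as a +1 or -1 message bit."""
    return 1 if w * x + b >= 0.0 else -1

# Gaussian noise as the test case, as in the abstract: antipodal bits
# +-1 distorted by additive noise (sigma = 0.3 is an assumed level).
rng = random.Random(1)
ys = [rng.choice([-1, 1]) for _ in range(200)]
xs = [y + rng.gauss(0.0, 0.3) for y in ys]
w, b = pegasos_train(xs, ys)
acc = sum(decode_bit(w, b, x) == y for x, y in zip(xs, ys)) / len(ys)
```

A soft classifier of this kind replaces the hard decoder precisely because the learned boundary adapts to the attack-shifted distributions.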
Abstract: This paper presents a new heuristic algorithm for the classical symmetric traveling salesman problem (TSP). The idea of the algorithm is to cut a TSP tour into overlapped blocks and then improve each block separately. It is conjectured that the chance of improving a good solution by moving a node to a position far away from its original one is small. By doing intensive search in each block, it is possible to further improve a TSP tour that cannot be improved by other local search methods. To test the performance of the proposed algorithm, computational experiments are carried out on benchmark problem instances. The computational results show that the algorithm proposed in this paper is efficient for solving TSPs.
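The overlapped-block idea can be sketched as follows: slide overlapping windows along the tour and improve each window separately with a local search. The block size, the half-size overlap, and the choice of 2-opt as the inner move are illustrative assumptions; the paper's exact intra-block search may differ.

```python
import math
import random

def tour_length(tour, dist):
    """Total length of a closed tour under distance matrix `dist`."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]]
               for i in range(len(tour)))

def two_opt_block(tour, dist, start, size):
    """2-opt search restricted to tour positions [start, start + size)."""
    end = min(start + size, len(tour) - 1)
    improved = True
    while improved:
        improved = False
        for i in range(start, end - 1):
            for j in range(i + 2, end):
                a, b = tour[i], tour[i + 1]
                c, d = tour[j], tour[j + 1]
                if dist[a][c] + dist[b][d] < dist[a][b] + dist[c][d]:
                    tour[i + 1:j + 1] = reversed(tour[i + 1:j + 1])
                    improved = True

def block_improve(tour, dist, size=6):
    """Slide overlapped blocks (half-size step) along the tour."""
    for start in range(0, len(tour), max(1, size // 2)):
        two_opt_block(tour, dist, start, size)
    return tour

# Usage: improve a shuffled tour over ten points on a circle.
pts = [(math.cos(2 * math.pi * k / 10), math.sin(2 * math.pi * k / 10))
       for k in range(10)]
dist = [[math.dist(p, q) for q in pts] for p in pts]
tour = list(range(10))
random.Random(3).shuffle(tour)
before = tour_length(tour, dist)
after = tour_length(block_improve(tour, dist), dist)
```

Restricting moves to a block keeps each node near its original position, matching the conjecture stated in the abstract.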
Abstract: Multiple criteria decision making (MCDM) is an approach to ranking solutions and finding the best one when two or more solutions are provided. In this study, an MCDM approach is proposed to select the most suitable scheduling rule for robotic flexible assembly cells (RFACs). Two MCDM methods, the Analytic Hierarchy Process (AHP) and the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS), are proposed for solving the scheduling rule selection problem. The AHP method is employed to determine the weights of the evaluation criteria, while the TOPSIS method is employed to obtain the final ranking of the scheduling rules. Four criteria are used to evaluate the scheduling rules. Also, four scheduling policies of the RFAC are examined to choose the most appropriate one for this purpose. A numerical example illustrates applications of the suggested methodology. The results show that the methodology is practical and works in RFAC settings.
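The TOPSIS step described above can be sketched directly: vector-normalize the decision matrix, apply the criterion weights, and rank alternatives by their closeness to the positive ideal solution. The decision matrix, the AHP-derived weights, and the assumption that all four criteria are benefit-type are made-up illustrations, not the paper's data.

```python
import math

def topsis(matrix, weights):
    """Closeness coefficients for each alternative (row) of `matrix`."""
    n_alt, n_crit = len(matrix), len(matrix[0])
    # Vector-normalize each criterion column, then apply the weights.
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix))
             for j in range(n_crit)]
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(n_crit)]
         for i in range(n_alt)]
    ideal = [max(col) for col in zip(*v)]        # positive ideal solution
    anti = [min(col) for col in zip(*v)]         # negative ideal solution
    scores = []
    for row in v:
        d_pos = math.dist(row, ideal)
        d_neg = math.dist(row, anti)
        scores.append(d_neg / (d_pos + d_neg))   # closeness coefficient
    return scores

# Four scheduling rules scored on four benefit criteria (made-up data);
# the weights stand in for values an AHP pairwise comparison would give.
weights = [0.4, 0.3, 0.2, 0.1]
matrix = [[7, 9, 9, 8],
          [8, 7, 8, 7],
          [9, 6, 8, 9],
          [6, 7, 8, 6]]
scores = topsis(matrix, weights)
best = scores.index(max(scores))
```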
Abstract: Several methods are available for weight and shape
optimization of structures, among which Evolutionary Structural
Optimization (ESO) is one of the most widely used methods. In ESO,
however, the optimization criterion is completely case-dependent.
Moreover, only the improving solutions are accepted during the
search. In this paper, a Simulated Annealing (SA) algorithm is used
for a structural optimization problem. This algorithm differs from
other random search methods by accepting non-improving solutions.
The SA algorithm is implemented so as to reduce the number of
finite element analyses (function evaluations).
Computational results show that SA can efficiently and effectively
solve such optimization problems within short search time.
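The distinguishing feature the abstract highlights, accepting non-improving solutions, is the Metropolis criterion: a worse candidate is accepted with probability exp(-delta/T). The toy objective, neighborhood move, and cooling schedule below are assumptions standing in for the structural problem and its finite element evaluations.

```python
import math
import random

def simulated_annealing(f, x0, neighbor, t0=1.0, cooling=0.95,
                        steps=500, seed=0):
    """Minimize f from x0; each f(x) stands in for one function evaluation."""
    rng = random.Random(seed)
    x, fx, t = x0, f(x0), t0
    best, fbest = x, fx
    for _ in range(steps):
        y = neighbor(x, rng)
        fy = f(y)
        delta = fy - fx
        # Metropolis criterion: always accept improvements, sometimes
        # accept deteriorations so the search can escape local minima.
        if delta <= 0.0 or rng.random() < math.exp(-delta / t):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling                      # geometric cooling schedule
    return best, fbest

# Toy multimodal objective with local minima away from the global one.
f = lambda x: x * x + 3.0 * math.sin(5.0 * x)
best, fbest = simulated_annealing(
    f, 4.0, lambda x, rng: x + rng.uniform(-0.5, 0.5))
```

Because only improving moves update `best`, the returned value never exceeds the starting objective, while the occasional uphill acceptances let the walk leave local basins.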
Abstract: Several numerical schemes utilizing central difference
approximations have been developed to solve the Goursat problem.
However, in recent years, compact discretization methods, which
lead to high-order finite difference schemes, have been used, since
they are capable of achieving better accuracy as well as preserving
certain features of the equation, e.g. linearity. The basic idea of the
new
scheme is to find the compact approximations to the derivative terms
by differentiating centrally the governing equations. Our primary
interest is to study the performance of the new scheme when applied
to two Goursat partial differential equations against the traditional
finite difference scheme.
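The traditional finite difference scheme referred to above can be sketched on a linear Goursat problem. The assumed example is u_xy = u with characteristic data u(x,0) = e^x, u(0,y) = e^y, whose exact solution is u(x,y) = e^(x+y); the four-point central scheme averages the source over the cell corners and is solved algebraically for the new corner value.

```python
import math

def goursat_fd(n=16, length=1.0):
    """Four-point central scheme for u_xy = u on [0, length]^2."""
    h = length / n
    q = h * h / 4.0                       # weight of the source average
    u = [[0.0] * (n + 1) for _ in range(n + 1)]
    for i in range(n + 1):                # characteristic data on the axes
        u[i][0] = math.exp(i * h)
        u[0][i] = math.exp(i * h)
    for i in range(1, n + 1):
        for j in range(1, n + 1):
            a, b, c = u[i - 1][j], u[i][j - 1], u[i - 1][j - 1]
            # (u_ij - a - b + c)/h^2 = (u_ij + a + b + c)/4, solved for u_ij.
            u[i][j] = ((1.0 + q) * (a + b) + (q - 1.0) * c) / (1.0 - q)
    return u, h

u, h = goursat_fd()
err = abs(u[16][16] - math.exp(2.0))      # error at the far corner (1, 1)
```

The marching order matters: each new corner uses only already-computed values on the previous row and column, mirroring how both the traditional and compact schemes sweep the domain.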
Abstract: Many multimedia communication applications require a
source to transmit messages to multiple destinations subject to quality
of service (QoS) delay constraint. To support delay constrained
multicast communications, computer networks need to guarantee an
upper bound end-to-end delay from the source node to each of
the destination nodes. This is known as the multicast delay problem.
On the other hand, if the same message fails to arrive at each
destination node at the same time, there may arise inconsistency and
unfairness problem among users. This is related to the multicast
delay-variation problem. The problem of finding a minimum cost
multicast
tree with delay and delay-variation constraints has been proven to
be NP-Complete. In this paper, we propose an efficient heuristic
algorithm, namely, Economic Delay and Delay-Variation Bounded
Multicast (EDVBM) algorithm, based on a novel heuristic function,
to construct an economic delay and delay-variation bounded multicast
tree. A noteworthy feature of this algorithm is that it has very high
probability of finding the optimal solution in polynomial time with
low computational complexity.
Abstract: In this paper, the American exchange option (AEO) valuation problem is modelled as a free boundary problem. The critical stock price for an AEO implicitly satisfies an integral equation. When the remaining time is large enough, an asymptotic formula is provided for pricing an AEO. The numerical results reveal that our asymptotic pricing formula is robust and accurate for long-term AEOs.
Abstract: Research in distributed artificial intelligence and multiagent systems considers how a set of distributed entities can interact and coordinate their actions in order to solve a given problem. In this paper, an overview of this concept and its evolution is presented, particularly its application in the design of intelligent tutoring systems. An intelligent tutor based on the concept of agents, and centered specifically on the design of a pedagogue agent, is illustrated. Our work has two goals: the first concerns the architectural aspect and the design of a tutor using a multiagent approach. The second deals particularly with the design of one part of the tutor system: the pedagogue agent.
Abstract: This study deals with a multi-criteria optimization
problem which has been transformed into a single objective
optimization problem using Response Surface Methodology (RSM),
Artificial Neural Network (ANN) and Grey Relational Analysis
(GRA) approaches. Grey-RSM and Grey-ANN are hybrid techniques
which can be used for solving multi-criteria optimization problem.
This research has two main purposes:
1. To determine optimum and robust fiber dyeing process
conditions by using RSM and ANN based on GRA,
2. To obtain the best suitable model by comparing models
developed by different methodologies.
The design variables for fiber dyeing process in textile are
temperature, time, softener, anti-static, material quantity, pH,
retarder, and dispergator. The quality characteristics to be evaluated
are nominal color consistency of the fiber, maximum strength of the
fiber, and minimum color of the dyeing solution. GRA-RSM with
exact level
value, GRA-RSM with interval level value and GRA-ANN models
were compared based on GRA output value and MSE (Mean Square
Error) performance measurements of the outputs. As a result, the
GRA-ANN model with interval values appears to be the most
suitable for reducing the variation of the dyeing process in terms of
the GRA output value of the model.
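The GRA step that collapses the multi-criteria problem into a single objective can be sketched directly: normalize each quality response, compute grey relational coefficients against the ideal sequence, and average them into a grey relational grade. The sample dyeing responses and the distinguishing coefficient zeta = 0.5 are assumptions, not the paper's data.

```python
def grey_relational_grades(responses, larger_better, zeta=0.5):
    """Grey relational grade of each experimental run (row)."""
    n_runs, n_resp = len(responses), len(responses[0])
    # Normalize each response column to [0, 1], orienting it so that
    # 1.0 is always best (larger-the-better or smaller-the-better).
    norm = [[0.0] * n_resp for _ in range(n_runs)]
    for j in range(n_resp):
        col = [row[j] for row in responses]
        lo, hi = min(col), max(col)
        for i in range(n_runs):
            x = (col[i] - lo) / (hi - lo)
            norm[i][j] = x if larger_better[j] else 1.0 - x
    grades = []
    for i in range(n_runs):
        # Deviation from the ideal sequence (all ones); here the minimum
        # and maximum deviations are 0 and 1 by construction, so the grey
        # relational coefficient reduces to zeta / (delta + zeta).
        deltas = [1.0 - norm[i][j] for j in range(n_resp)]
        coeffs = [zeta / (d + zeta) for d in deltas]
        grades.append(sum(coeffs) / n_resp)     # equal criterion weights
    return grades

# Three dyeing runs, two responses: fiber strength (larger better) and
# residual dye in the solution (smaller better) -- made-up numbers.
grades = grey_relational_grades(
    [[42.0, 0.8], [45.0, 0.7], [40.0, 1.1]], [True, False])
best_run = grades.index(max(grades))
```

The single grade per run is what RSM or an ANN then models as the one response to optimize.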
Abstract: This paper discusses an approach to real-time control
of an energy management system using the data acquisition tools of
LabVIEW. The main idea was to interface a station (PC) with the
system and publish the data on the internet using LabVIEW. In this
venture, the controlling and switching of 3-phase AC loads are done
effectively and efficiently. The phases are also sensed through
devices. In case of any failure, the attached generator starts
functioning automatically. The computer sends commands to the
system, and the system responds to the requests. A modern feature is
the ability to access and control the system worldwide over the
World Wide Web. This control can be exercised at any time from
anywhere to use energy effectively, especially in developing
countries where energy management is a big problem. In this system,
fully integrated devices are used so that it can be operated from a
remote location.
Abstract: There are several approaches in trying to solve the
Quantitative Structure-Activity Relationship (QSAR) problem.
These approaches are based either on statistical methods or on
predictive data mining. Among the statistical methods, one should
consider regression analysis, pattern recognition (such as cluster
analysis, factor analysis and principal components analysis) or partial
least squares. Predictive data mining techniques use either neural
networks, or genetic programming, or neuro-fuzzy knowledge. These
approaches have low explanatory capability or none at all. This
paper attempts to establish a new approach in solving QSAR
problems using descriptive data mining. This way, the relationship
between the chemical properties and the activity of a substance
would be comprehensibly modeled.
Abstract: In this paper, the melting of a semi-infinite body as a
result of a moving laser beam has been studied. Because the Fourier
heat transfer equation does not have sufficient accuracy at short
times and large dimensions, a non-Fourier form of the heat transfer
equation has been used. Because the beam is moving in the x
direction, the temperature distribution and the melt pool shape are
not symmetric. As a result, the problem is a transient
three-dimensional problem. Moreover, thermophysical properties
such as
heat conductivity coefficient, density and heat capacity are functions
of temperature and material states. The enthalpy technique, used for
the solution of phase change problems, has been used in an explicit
finite volume form for the hyperbolic heat transfer equation. This
technique has been used to calculate the transient temperature
distribution in the semi-infinite body and the growth rate of the melt
pool. In order to validate the numerical results, comparisons were
made with experimental data. Finally, the results of this paper were
compared with a similar problem solved using Fourier theory. The
comparison shows the influence of infinite speed of heat propagation
in Fourier theory on the temperature distribution and the melt pool
size.
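The enthalpy technique named above can be sketched in one dimension: march the volumetric enthalpy explicitly, then recover temperature through the enthalpy-temperature relation, which pins the temperature at the melting point across the latent-heat interval. For brevity this sketch uses the classical Fourier conduction law with constant properties, whereas the paper uses the non-Fourier (hyperbolic) equation with temperature-dependent properties; all numbers below are illustrative assumptions.

```python
def temperature(h, c, L, Tm):
    """Invert the enthalpy-temperature relation across the phase change."""
    if h < c * Tm:                    # solid
        return h / c
    if h <= c * Tm + L:               # mushy zone: temperature pinned at Tm
        return Tm
    return (h - L) / c                # liquid

def enthalpy_step(H, dx, dt, k=1.0, rho=1.0, c=1.0, L=1.0, Tm=0.0):
    """One explicit finite volume update of enthalpy from conduction."""
    T = [temperature(h, c, L, Tm) for h in H]
    Hn = H[:]
    for i in range(1, len(H) - 1):    # interior finite volumes only
        Hn[i] += dt * k * (T[i + 1] - 2.0 * T[i] + T[i - 1]) / (rho * dx * dx)
    return Hn

# Bar initially solid at T = -1; the left end is held hot and liquid.
n, dx, dt = 21, 0.05, 0.001           # dt below the explicit stability limit
H = [-1.0] * n
H[0] = 2.0                            # boundary enthalpy -> T = 1 (liquid)
for _ in range(200):
    H = enthalpy_step(H, dx, dt)
    H[0] = 2.0                        # re-impose the hot boundary
```

Tracking enthalpy rather than temperature is what lets the melt front emerge from the solution without explicitly tracking the moving boundary.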
Abstract: In recent years, global warming has become a
worldwide problem. The reduction of carbon dioxide emissions is a
top priority for many companies in the manufacturing industry. In the
automobile industry as well, the reduction of carbon dioxide emissions
is one of the most important issues. Technology to reduce the weight
of automotive parts improves the fuel economy of automobiles, and is
an important technology for reducing carbon dioxide. Also, even if
this weight reduction technology is applied to electric automobiles
rather than gasoline automobiles, reducing energy consumption
remains an important issue. Plastic processing of hollow pipes is one
important technology for realizing the weight reduction of automotive
parts. Ohashi et al. [1],[2] present an example of research on pipe
formation in which a process was carried out to enlarge a pipe
diameter using a lost core, achieving the suppression of wall thickness
reduction and greater pipe expansion than hydroforming.
In this study, we investigated a method to increase the wall
thickness of a pipe through pipe compression using planetary rolls.
The establishment of a technology whereby the wall thickness of a
pipe can be controlled without buckling the pipe is an important
technology for the weight reduction of products. Using the finite
element analysis method, we predicted that it would be possible to
compress an aluminum pipe with a 3 mm wall thickness by
approximately 20%, increasing the wall thickness by approximately
20%, by pressing the hollow pipe with planetary rolls.
Abstract: The electronically available Urdu data is in image form,
which is very difficult to process. Printed Urdu data is the root
cause of the problem. So, for the rapid progress of the Urdu
language, we need an OCR system that can help make Urdu data
available to the common person. Research has been carried out for
years to automate the recognition of Arabic and Urdu script, but the
biggest hurdle in the development of Urdu OCR is the challenge of
recognizing the Nastalique script, which is taken as the standard for
writing Urdu. The Nastalique script is
written diagonally with no fixed baseline, which makes the script
somewhat complex. Overlap is present not only in characters but in
ligatures as well. This paper proposes a method that allows
successful recognition of the Nastalique script.