Abstract: Many approaches have been proposed for solving
Sudoku puzzles. One of them models the puzzles as block
world problems. There have been three model for Sudoku solvers
based on this approach. Each model expresses Sudoku solver as
a parameterized multi agent systems. In this work, we propose a
new model which is an improvement over the existing models. This
paper presents the development of a Sudoku solver that implements
all the proposed models. Some experiments have been conducted to
determine the performance of each model.
Abstract: This paper proposes the use of Bayesian belief
networks (BBN) as a higher level of health risk assessment for a
dumping site of a lead battery smelter factory. On the basis of
epidemiological studies, actual hospital attendance records, and
expert experiences, the BBN is capable of capturing the probabilistic
relationships between the hazardous substances and their adverse
health effects, and accordingly inferring the morbidity of the adverse
health effects. The provision of the morbidity rates of the related
diseases is more informative and can alleviate the drawbacks of
conventional methods.
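The kind of inference the BBN performs can be illustrated with a minimal sketch in Python; the exposure prior and conditional probability values below are hypothetical placeholders, not figures from the study:

```python
# Minimal two-node belief network sketch (illustrative only; all
# probabilities below are hypothetical, not from the study).

# Prior over lead exposure level at the dumping site
p_exposure = {"high": 0.3, "low": 0.7}

# CPT: P(adverse health effect | exposure) -- hypothetical values
p_effect_given_exposure = {"high": 0.25, "low": 0.04}

# Marginal morbidity: sum out the exposure states
morbidity = sum(p_exposure[e] * p_effect_given_exposure[e]
                for e in p_exposure)
print(round(morbidity, 4))  # 0.3*0.25 + 0.7*0.04 = 0.103
```

A full BBN adds more nodes and conditional tables, but inference reduces to the same sum-product over conditional probabilities.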
Abstract: In this work, we developed the concept of
supercompression, i.e., compression above the compression standard
used. In this context, both compression rates are multiplied. In fact,
supercompression is based on super-resolution. That is to say,
supercompression is a data compression technique that superposes
spatial image compression on top of bit-per-pixel compression to
achieve very high compression ratios. If the compression ratio is very
high, then we use a convolutive mask inside the decoder that restores
the edges, eliminating the blur. Finally, both the encoder and the
complete decoder are implemented on General-Purpose computation
on Graphics Processing Units (GPGPU) cards. Specifically, the
mentioned mask is coded inside the texture memory of a GPGPU.
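An edge-restoring convolutive mask of the kind applied in the decoder can be sketched as a plain 2-D convolution; the 3x3 kernel below is a generic Laplacian-based sharpener and the flat test image is illustrative, not the paper's actual mask or data:

```python
import numpy as np

# Generic Laplacian-based sharpening kernel (illustrative, not the
# paper's actual convolutive mask).
kernel = np.array([[ 0, -1,  0],
                   [-1,  5, -1],
                   [ 0, -1,  0]], dtype=float)

def sharpen(img):
    """Convolve a grayscale image with the mask (edge padding at borders)."""
    padded = np.pad(img, 1, mode="edge")
    out = np.zeros_like(img, dtype=float)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(padded[i:i+3, j:j+3] * kernel)
    return out

flat = np.full((4, 4), 10.0)
print(sharpen(flat))  # a flat region passes through unchanged
```

On a GPGPU the same kernel would be stored in texture memory and the per-pixel loop would run as one thread per output pixel.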
Abstract: The sensitivity of orifice plate metering to disturbed
flow (either asymmetric or swirling) is a subject of great concern to
flow meter users and manufacturers. The distortions caused by pipe
fittings and pipe installations upstream of the orifice plate are major
sources of such non-standard flows. These distortions can alter
the accuracy of metering to an unacceptable degree. In this work, a
multi-scale object known as metal foam has been used to generate a
predetermined turbulent flow upstream of the orifice plate. The
experimental results showed that the combination of an orifice plate
and metal foam flow conditioner is broadly insensitive to upstream
disturbances. This metal foam demonstrated a good performance in
terms of removing swirl and producing a repeatable flow profile
within a short distance downstream of the device. The results of using
a combination of a metal foam flow conditioner and an orifice plate
under non-standard flow conditions, including swirling flow and
asymmetric flow, show that this package can preserve the accuracy of
metering up to the level required by the standards.
Abstract: Competitive learning is an adaptive process in
which the neurons in a neural network gradually become sensitive to
different input pattern clusters. The basic idea behind Kohonen's
Self-Organizing Feature Maps (SOFM) is competitive learning.
SOFM can generate mappings from high-dimensional signal spaces
to lower-dimensional topological structures. The main features of such
mappings are topology preservation, feature mapping, and approximation
of the probability distribution of the input patterns. To overcome
some limitations of SOFM, e.g., a fixed number of neural units and a
topology of fixed dimensionality, Growing Self-Organizing Neural
Network (GSONN) can be used. GSONN can change its topological
structure during learning. It grows by learning and shrinks by
forgetting. To speed up the training and convergence, a new variant
of GSONN, twin growing cell structures (TGCS) is presented here.
This paper first gives an introduction to competitive learning, SOFM
and its variants. Then, we discuss some GSONNs with fixed
dimensionality, including growing cell structures, their variants,
and the author's model, TGCS. The paper ends with a comparison of
test results and conclusions.
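The competitive learning process underlying SOFM and GSONN can be sketched as a winner-take-all weight update; the data, unit count, and learning rate below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
x_data = rng.random((200, 2))   # hypothetical 2-D input patterns
weights = rng.random((5, 2))    # 5 competing neural units
eta = 0.1                       # learning rate

for x in x_data:
    # Competition: the unit whose weight vector is closest to the input wins
    winner = np.argmin(np.linalg.norm(weights - x, axis=1))
    # Adaptation: only the winner moves toward the input pattern
    weights[winner] += eta * (x - weights[winner])
```

SOFM additionally updates the winner's topological neighbors, and a growing network such as GSONN inserts or removes units during this loop; the core competition-and-adaptation step is the same.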
Abstract: In this work, bending fatigue life of notched
specimens with various notch geometries and dimensions is
investigated by experiment and by the Manson-Coffin theoretical
method. In this method, the fatigue life of notched specimens is
calculated using the fatigue life obtained from the experiments for
plain specimens (without a notch). Three notch geometries, namely
U-shaped, V-shaped, and C-shaped notches, are considered in this
investigation. The experiments are conducted on a rotary bending
Moore machine. The specimens are made of a low carbon steel alloy,
which has wide application in industry. The stress-life curves are
obtained for all notched specimens by experiment. The results indicate
that the Manson-Coffin analytical method cannot adequately predict
the fatigue life of notched specimens. However, it seems that the
difference between the experiments and the Manson-Coffin predictions
can be compensated by a proportional factor.
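The Manson-Coffin strain-life relation referred to above can be sketched numerically as follows; the material coefficients are typical textbook values for a low-carbon steel, not the ones measured in this study:

```python
import math

# Hypothetical strain-life coefficients for a low-carbon steel
# (illustrative, not the values from this study).
E = 200e3        # Young's modulus, MPa
sigma_f = 900.0  # fatigue strength coefficient, MPa
b = -0.09        # fatigue strength exponent
eps_f = 0.35     # fatigue ductility coefficient
c = -0.55        # fatigue ductility exponent

def strain_amplitude(n_rev):
    """Manson-Coffin relation: elastic plus plastic strain amplitude
    as a function of reversals to failure (2N)."""
    return (sigma_f / E) * n_rev**b + eps_f * n_rev**c

def life_for_strain(target, lo=1.0, hi=1e9):
    """Invert the relation by bisection (in log space) to get the
    reversals to failure for a given total strain amplitude."""
    for _ in range(200):
        mid = math.sqrt(lo * hi)
        if strain_amplitude(mid) > target:
            lo = mid
        else:
            hi = mid
    return mid

print(round(life_for_strain(0.005)))  # reversals to failure
```

A proportional correction factor of the kind the abstract suggests would simply scale the predicted life for each notch geometry.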
Abstract: In this work, a new method for low-complexity
image coding is presented that permits different settings and great
scalability in the generation of the final bit stream. This coding
scheme is a continuous-tone still-image compression system that
combines lossy and lossless compression, making use of finite
arithmetic reversible transforms. Both the color-space transform and
the wavelet transform are reversible. The transformed coefficients
are coded by means of a coding system based on a subdivision into
smaller components (CFDS), similar to bit-importance codification.
The subcomponents so obtained are reordered by means of a highly
configurable alignment system that, depending on the application,
makes it possible to rearrange the elements of the image and to
obtain different importance levels from which the bit stream will be
generated. The subcomponents of each importance level are coded
using a variable-length entropy coding system (VBLm) that permits
the generation of an embedded bit stream. This bit stream itself
encodes a compressed still image. However, the use of a packing
system on the bit stream after the VBLm stage allows the creation of
a final, highly scalable bit stream comprising a basic image level and
one or several improvement levels.
Abstract: Testable software has two inherent properties – observability and controllability. Observability facilitates observation of the internal behavior of software to the required degree of detail. Controllability allows creation of difficult-to-achieve states prior to the execution of various tests. In this paper, we describe COTT, a Controllability and Observability Testing Tool, to create testable object-oriented software. COTT provides a framework that helps the user instrument object-oriented software to build the required controllability and observability. During testing, the tool facilitates the creation of difficult-to-achieve states required for testing difficult-to-test conditions and the observation of internal details of execution at the unit, integration, and system levels. The execution observations are logged in a test log file, which is used for post-analysis and to generate test coverage reports.
Abstract: This research proposes a Preemptive Possibilistic
Linear Programming (PPLP) approach for solving multiobjective
Aggregate Production Planning (APP) problem with interval demand
and imprecise unit price and related operating costs. The proposed
approach attempts to maximize profit and minimize changes of
workforce. It transforms the total profit objective that has imprecise
information to three crisp objective functions, which are maximizing
the most possible value of profit, minimizing the risk of obtaining the
lower profit and maximizing the opportunity of obtaining the higher
profit. The change of workforce level objective is also converted.
Then, the problem is solved according to the objective priorities. This
is easier than solving the multiobjective problem simultaneously, as
performed in the existing approach. The possible range of the interval
demand is also used to increase the flexibility of obtaining a better
production plan. A practical application to an electronics company is
illustrated to
show the effectiveness of the proposed model.
Abstract: In this paper, the effects of the thermodynamic,
hydrodynamic, and geometric parameters of an air-cooled condenser on
the COP of a vapor compression cycle are investigated for a fixed
condenser facing surface area. The system uses a scroll compressor
and is modeled based on thermodynamic and heat transfer equations
in Matlab. The working refrigerant is R134a, whose thermodynamic
properties are obtained from the Engineering Equation Solver (EES)
software. The simulation shows that the vapor compression cycle can
be designed with different configurations and COPs, and that
economical and optimal working conditions can be obtained by
considering these parameters.
Abstract: The purpose of this study is to analyze the Green IT industry in major developed countries and to suggest overall directions for the IT-Energy convergence industry. Recently, the IT industry has been blamed for problems such as environmental pollution, energy exhaustion, and high energy consumption. Green IT has therefore attracted attention as a solution to these problems. However, since this convergence area is at a beginning stage, there are only a few studies of the IT-Energy convergence industry. Accordingly, this study examined the major developed countries in terms of institutional arrangements, resources, markets, and companies, based on Van de Ven's (1999) social system framework, which shows the relationships among the key components of industrial infrastructure. Subsequently, directions for future study of the convergence of the IT and Energy industries are proposed.
Abstract: One way to optimally load overdimensioned
conveyors is to decrease their speed (capacity), with attention to
production capabilities and demands. For conveyors driven by
three-phase slip-ring induction motors, a technically reasonable
solution for speed regulation of the driving motors is a
constant-torque subsynchronous cascade with a static semiconductor
converter and a transformer for energy recovery to the power network.
The paper describes a mathematical model for the parameter calculation
of a two-motor 6 kV subsynchronous cascade. It is also demonstrated
that applying this cascade yields several benefits, foremost
electrical energy savings, as well as improvements in other energy
indexes, which ultimately reduces the cost of the complete
electric motor drive.
Abstract: This work presents a method for calculating the
ductility of rectangular beam sections considering the nonlinear
behavior of concrete and steel. This calculation procedure allows us
to trace the curvature of the section according to the bending
moment, and consequently deduce ductility. It also allowed us to
study the various parameters that affect the value of the ductility. A
comparison of the effect of the maximum rates of tension steel
adopted by the codes ACI [1], EC8 [2], and RPA [3] on the value of
the ductility was made. It was concluded that the maximum rates of
steel permitted by the ACI [1] and RPA [3] codes are almost similar
in their effect on the ductility and are too high. Consequently, the
ductility mobilized in the event of an earthquake is low, unlike with
code EC8 [2]. Recommendations have been made in this direction.
Abstract: Bridges are one of the main components of
transportation networks. They should be functional before and after
an earthquake for emergency services. Therefore, we need to assess the
seismic performance of bridges under different seismic loadings.
The fragility curve is one of the popular tools in seismic evaluation. The
fragility curves are conditional probability statements, which give the
probability of a bridge reaching or exceeding a particular damage
level for a given intensity level. In this study, the seismic
performance of a two-span simply supported concrete bridge is
assessed. Due to the usual lack of empirical data, the analytical
fragility curve was developed from the results of dynamic analyses of
the bridge subjected to different time histories in the near-fault area.
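A fragility curve of the kind described above is commonly expressed as a lognormal conditional probability; the following sketch assumes that common form, with hypothetical median and dispersion parameters:

```python
import math

def fragility(im, median, beta):
    """P(damage >= state | IM = im): lognormal CDF with the given
    median intensity and logarithmic dispersion beta."""
    z = (math.log(im) - math.log(median)) / beta
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Hypothetical parameters for one damage state (intensity as PGA in g)
print(round(fragility(0.4, median=0.4, beta=0.6), 3))  # 0.5 at the median
```

In an analytical study, the median and dispersion would be fitted to the damage outcomes of the dynamic analyses over the suite of time histories.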
Abstract: This paper presents the design and prototype implementation of an intelligent data processing framework for ubiquitous sensor networks. Much focus is put on how to handle the sensor data stream, as well as on the interoperability between the low-level sensor data and application clients. Our framework first provides a systematic middleware that mediates the interaction between the application layer and the low-level sensors, for the sake of analyzing a great volume of sensor data by filtering and integrating it to create value-added context information. Then, an agent-based architecture is proposed for real-time data distribution, to efficiently forward a specific event to the appropriate application registered in the directory service via the open interface. The prototype implementation demonstrates that our framework can host a sophisticated application on a ubiquitous sensor network and can autonomously evolve to new middleware, taking advantage of promising technologies such as software agents, XML, cloud computing, and the like.
Abstract: In this paper, a reliable cooperative multipath routing
algorithm is proposed for data forwarding in wireless sensor networks
(WSNs). In this algorithm, data packets are forwarded towards the
base station (BS) through a number of paths, using a set of relay
nodes. In addition, the Rayleigh fading model is used to calculate
the evaluation metric of links. Here, the quality of reliability is
guaranteed by selecting an optimal relay set with which the probability
of correct packet reception at the BS will exceed a predefined
threshold. Therefore, the proposed scheme ensures reliable packet
transmission to the BS. Furthermore, in the proposed algorithm,
energy efficiency is achieved by energy balancing (i.e. minimizing
the energy consumption of the bottleneck node of the routing path)
at the same time. This work also demonstrates that the proposed
algorithm outperforms existing algorithms in extending longevity of
the network with respect to the quality of reliability. Given this, the
obtained results make reliable path selection with minimum energy
consumption possible in real time.
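The link-quality reasoning above can be sketched under the Rayleigh fading model, where the received SNR is exponentially distributed; the threshold values and the independence assumption between relay paths are illustrative simplifications, not the paper's exact formulation:

```python
import math

def link_success(gamma_th, gamma_avg):
    """P(received SNR >= gamma_th) over a Rayleigh fading link:
    the SNR is exponentially distributed with mean gamma_avg."""
    return math.exp(-gamma_th / gamma_avg)

def relays_needed(p_link, target):
    """Smallest number of independent relay paths so that the
    probability of at least one correct reception exceeds target."""
    n, p_fail = 0, 1.0
    while 1.0 - p_fail < target:
        p_fail *= (1.0 - p_link)
        n += 1
    return n

p = link_success(gamma_th=5.0, gamma_avg=10.0)  # per-link success, ~0.607
print(relays_needed(p, target=0.99))            # paths needed for 99%
```

Energy balancing would then choose, among all relay sets meeting the reliability target, the one that minimizes the consumption of the bottleneck node.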
Abstract: Cognitive models allow predicting some aspects of utility
and usability of human machine interfaces (HMI), and simulating
the interaction with these interfaces. The action of predicting is based
on a task analysis, which investigates what a user is required to do
in terms of actions and cognitive processes to achieve a task. Task
analysis facilitates the understanding of the system's functionalities.
Cognitive models belong to the analytical approaches, which do not
involve users during the development process of the interface.
This article presents a study about the evaluation of a human
machine interaction with a contextual assistant's interface using the
ACT-R and GOMS cognitive models. The present work shows how these
techniques may be applied in the evaluation of HMI, design and
research by emphasizing firstly the task analysis and secondly the
execution time of the task. In order to validate and support our
results, an experimental study of user performance is conducted at
the DOMUS laboratory during the interaction with the contextual
assistant's interface. The results of our models show that the GOMS
and ACT-R models give good and excellent predictions, respectively,
of users' performance at the task level, as well as at the object level.
Therefore, the simulated results are very close to the results obtained
in the experimental study.
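A GOMS-style prediction of task execution time can be sketched with the keystroke-level model, which sums standard operator times; the operator sequence below is a hypothetical example, not the DOMUS task:

```python
# Standard KLM operator times in seconds (Card, Moran & Newell):
# K = keystroke (average typist), P = point with mouse,
# H = home hands on device, M = mental preparation.
KLM = {"K": 0.28, "P": 1.10, "H": 0.40, "M": 1.35}

def predict_time(ops):
    """Predicted execution time for a sequence of KLM operators."""
    return sum(KLM[op] for op in ops)

# e.g., mentally prepare, point at a button, click it
print(round(predict_time(["M", "P", "K"]), 2))  # 2.73
```

ACT-R makes finer-grained predictions by simulating the cognitive architecture itself, but both approaches yield a per-task time estimate that can be compared with measured user performance.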
Abstract: This paper proposes a framework for product
development including hardware and software components. It
provides separation of hardware dependent software, modifications of
current product development process, and integration of software
modules with existing product configuration models and assembly
product structures. In order to identify the dependent software, the
framework considers product configuration modules and engineering
changes of the associated software and hardware components. In order to
support efficient integration of the two different development processes
for hardware and software, a modified product development process is
proposed. The process integrates the dependent software development
into product development through the interchanges of specific product
information. By using existing product data models in Product Data
Management (PDM), the framework represents software as modules
for product configurations and software parts for product structure.
The framework is applied to development of a robot system in order to
show its effectiveness.
Abstract: Information technology managers nowadays are
facing tremendous pressure to plan, implement, and adopt new
technology solutions due to the rapidity of technological change.
Given the lack of studies on this topic, the aim of this paper is to
provide a comparative review of the tools currently being used to
respond to technological changes. The study is based on an extensive
literature review of published works, the majority of which range
from 2000 to the first part of 2011. The works were gathered from
journals, books, and other information sources available on the Web.
Findings show that each tool has a different focus and that none of
the tools provides a holistic framework, which should include the
technical, people, process, and business environment aspects. Hence,
this result provides information about the currently available
tools that IT managers could use to manage changes in technology.
Further, the result reveals a research gap in the area, where the
industry is short of such a framework.
Abstract: This paper aims to develop a NOx emission model of
an acid gas incinerator using Nelder-Mead least squares support
vector regression (LS-SVR). The Malaysian DOE is actively enforcing
the Clean Air Regulation to mandate the installation of analytical
instrumentation known as a Continuous Emission Monitoring System
(CEMS) to report emission levels online to the DOE. As a
hardware-based analyzer, a CEMS is expensive, maintenance-intensive,
and often unreliable. Therefore, a software-based predictive technique
is often preferred and considered a feasible alternative to replace
the CEMS for regulatory compliance. The LS-SVR model is built based
on the emissions from an acid gas incinerator that operates in an LNG
Complex. Simulated Annealing (SA) is first used to determine the
initial hyperparameters which are then further optimized based on the
performance of the model using Nelder-Mead simplex algorithm.
The LS-SVR model is shown to outperform a benchmark model
based on backpropagation neural networks (BPNN) in both training
and testing data.
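The two-stage hyperparameter search described above (a global stage followed by Nelder-Mead simplex refinement) can be sketched as follows; a simple quadratic stands in for the LS-SVR validation error, and a coarse random search stands in for Simulated Annealing, so every name and value here is illustrative:

```python
import numpy as np
from scipy.optimize import minimize

# Proxy for the model's validation error as a function of the two LS-SVR
# hyperparameters (regularization gamma, kernel width sigma) in log space.
# This quadratic stand-in is illustrative; the paper evaluates the actual
# LS-SVR on NOx emission data instead.
def validation_error(theta):
    log_gamma, log_sigma = theta
    return (log_gamma - 2.0) ** 2 + 0.5 * (log_sigma + 1.0) ** 2

# Stage 1 (stand-in for Simulated Annealing): a coarse global search
rng = np.random.default_rng(0)
candidates = rng.uniform(-5, 5, size=(50, 2))
theta0 = min(candidates, key=validation_error)

# Stage 2: Nelder-Mead simplex refinement from the stage-1 starting point
result = minimize(validation_error, theta0, method="Nelder-Mead")
print(np.round(result.x, 2))  # near the optimum [2.0, -1.0]
```

The simplex stage is derivative-free, which suits hyperparameter objectives that are only available through (noisy) cross-validation scores.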