Abstract: With rapid technology scaling, static power consumption is gradually catching up with dynamic power consumption, so reducing leakage power is becoming increasingly important in low-power design. This paper presents a power-gating scheme for P-DTGAL (p-type dual transmission gate adiabatic logic) circuits to reduce leakage power dissipation in deep-submicron processes. The energy dissipation of P-DTGAL circuits with the power-gating scheme is investigated across different processes, frequencies, and activity ratios. The BSIM4 model is adopted to capture the characteristics of the leakage currents. HSPICE simulations show that leakage loss is greatly reduced by using P-DTGAL with power-gating techniques.
Abstract: Both minimum energy consumption and smoothness, which is quantified as a function of jerk, are desirable in many dynamic systems, such as automobiles and pick-and-place robot manipulators that handle fragile equipment. Nevertheless, most researchers have focused on either the minimum-energy or the minimum-jerk trajectory alone. This paper proposes a simple yet effective approach that combines the minimum-energy and indirect minimum-jerk criteria in designing a time-dependent system, yielding an alternative optimal solution. Extremal solutions for the cost functions of minimum energy, minimum jerk, and their combination are found using dynamic optimization methods together with numerical approximation. This allows us to simulate and compare, visually and statistically, the time histories of the state inputs produced by the combined minimum-energy-and-jerk design. The numerical solutions of the minimum direct-jerk problem and the combined problem are exactly the same; the minimum-energy problem yields a similar solution, especially in terms of its tendency.
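For point-to-point motion, the classical minimum-jerk trajectory has a closed-form quintic solution. The sketch below illustrates that textbook result for a scalar coordinate with zero boundary velocity and acceleration; it is a general example, not the paper's combined energy-jerk formulation.

```python
def min_jerk(x0, xf, T, t):
    """Classical minimum-jerk point-to-point trajectory with zero boundary
    velocity and acceleration: x(t) = x0 + (xf - x0)(10 s^3 - 15 s^4 + 6 s^5),
    where s = t/T is normalized time."""
    s = t / T
    return x0 + (xf - x0) * (10 * s**3 - 15 * s**4 + 6 * s**5)
```

By symmetry of the quintic, the trajectory passes through the midpoint of the motion at t = T/2.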
Abstract: Model Predictive Control (MPC) is increasingly being proposed for real-time applications and embedded systems. However, compared to the PID controller, implementation of MPC on miniaturized devices such as Field Programmable Gate Arrays (FPGAs) and microcontrollers has historically been very limited, owing to its implementation complexity and computation-time requirements. At the same time, such embedded technologies have become an enabler for future manufacturing enterprises as well as a transformer of organizations and markets. Recently, advances in microelectronics and software have allowed such techniques to be implemented in embedded systems. In this work, we take advantage of these recent advances to deploy one of the most studied and widely applied control techniques in industrial engineering. Specifically, we propose an efficient framework for implementing Generalized Predictive Control (GPC) on the STM32 microcontroller. The STM32 Keil starter kit, based on a JTAG interface and the STM32 board, was used to implement the proposed GPC firmware. Besides the GPC, a PID anti-windup algorithm was also implemented using the Keil development tools designed for ARM processor-based microcontroller devices, working in the C language. A performance comparison between the two firmwares shows good execution speed and low computational burden. These results encourage the development of simple predictive algorithms programmed on industry-standard hardware. The main features of the proposed framework are illustrated through two examples and compared with the anti-windup PID controller.
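The anti-windup PID used as the comparison baseline can be sketched in a few lines. The abstract does not specify which anti-windup variant was implemented; the minimal Python sketch below assumes the common back-calculation scheme, with illustrative gains and saturation limits.

```python
def pid_aw_step(e, state, kp=1.0, ki=0.5, kd=0.0, dt=0.01,
                u_min=-1.0, u_max=1.0, k_aw=1.0):
    """One control step of a PID with back-calculation anti-windup.
    state = (integral, previous_error). When the output saturates, the
    tracking term k_aw * (u_sat - u) unwinds the integrator."""
    integ, e_prev = state
    deriv = (e - e_prev) / dt
    u = kp * e + ki * integ + kd * deriv
    u_sat = max(u_min, min(u_max, u))        # actuator saturation
    integ += dt * (e + k_aw * (u_sat - u))   # anti-windup correction
    return u_sat, (integ, e)
```

When the output is inside the limits, u_sat equals u and the integrator behaves normally; under saturation the correction term stops the integral from winding up.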
Abstract: This paper is intended to assist anyone with some general technical experience, but perhaps limited specific knowledge of heat-transfer equipment. A characteristic of heat exchanger design is the procedure of specifying a design, heat-transfer area, and pressure drops, and then checking whether the assumed design satisfies all requirements. The purpose of this paper is to show how to design an oil cooler (heat exchanger), in particular a shell-and-tube heat exchanger, the most common type of liquid-to-liquid heat exchanger. General design considerations and the design procedure are also illustrated, and a flow diagram is provided as an aid to the design procedure. The design calculations use MATLAB and AutoCAD software. Fundamental heat-transfer concepts and the complex relationships involved in such exchangers are also presented. The primary aim of this design is to obtain a high heat-transfer rate without exceeding the allowable pressure drop. The resulting computer program is highly useful for designing shell-and-tube heat exchangers and for modifying existing designs.
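The core sizing step in such a design procedure is the log-mean temperature difference (LMTD) method, A = Q / (U · F · LMTD). The sketch below shows that step for a counter-flow arrangement; the numbers in the usage note are illustrative, not from the paper.

```python
import math

def lmtd(th_in, th_out, tc_in, tc_out):
    """Log-mean temperature difference for a counter-flow exchanger."""
    dt1 = th_in - tc_out
    dt2 = th_out - tc_in
    if abs(dt1 - dt2) < 1e-12:      # equal end differences: LMTD -> dt1
        return dt1
    return (dt1 - dt2) / math.log(dt1 / dt2)

def required_area(q_watts, u, f=1.0, **ends):
    """Heat-transfer area from Q = U * A * F * LMTD (F = correction factor)."""
    return q_watts / (u * f * lmtd(**ends))
```

For example, with Q = 100 kW, U = 500 W/m²K, hot stream 100 → 60 °C and cold stream 20 → 40 °C, the LMTD is about 49.3 K and the required area about 4.05 m².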
Abstract: Flow movement in unsaturated soil can be expressed by a partial differential equation known as the Richards equation. The objective of this study is to find an appropriate implicit numerical solution of the head-based Richards equation. Several well-known finite-difference schemes (fully implicit, Crank-Nicolson, and Runge-Kutta) are utilized in this study. In addition, the effects of different approximations of the moisture capacity function, convergence criteria, and time-stepping methods are evaluated. Two different infiltration problems are solved to investigate the performance of the schemes; these involve vertical water flow into a wet soil and into a very dry soil. The numerical solutions of the two problems are compared using four evaluation criteria, and the comparisons show that the fully implicit scheme outperforms the other schemes. In addition, using the standard chord-slope method to approximate the moisture capacity function, an automatic time-stepping method, and the difference between two successive iterations as the convergence criterion in the fully implicit scheme leads to better and more reliable results for simulating fluid movement in different unsaturated soils.
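The chord-slope approximation of the moisture capacity C = dθ/dh, recommended above, can be sketched as follows. The van Genuchten retention curve and its parameter values are illustrative assumptions, not the soils used in the study.

```python
def theta_vg(h, theta_r=0.102, theta_s=0.368, alpha=0.0335, n=2.0):
    """van Genuchten water-retention curve; h is pressure head (negative
    when unsaturated). Parameter values here are purely illustrative."""
    if h >= 0:
        return theta_s
    m = 1.0 - 1.0 / n
    return theta_r + (theta_s - theta_r) / (1.0 + (alpha * abs(h)) ** n) ** m

def chord_slope_capacity(h_new, h_old, theta=theta_vg):
    """Standard chord-slope approximation of the moisture capacity
    C = dtheta/dh between two successive iterates of the head."""
    if abs(h_new - h_old) < 1e-9:
        eps = 1e-6  # iterates coincide: fall back to a centred difference
        return (theta(h_new + eps) - theta(h_new - eps)) / (2 * eps)
    return (theta(h_new) - theta(h_old)) / (h_new - h_old)
```

Using the chord slope between iterates, rather than the analytic derivative at a single point, is what makes the implicit scheme mass-conservative in practice.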
Abstract: The sensitivity of orifice plate metering to disturbed
flow (either asymmetric or swirling) is a subject of great concern to
flow meter users and manufacturers. The distortions caused by pipe
fittings and pipe installations upstream of the orifice plate are major
sources of this type of non-standard flows. These distortions can alter
the accuracy of metering to an unacceptable degree. In this work, a
multi-scale object known as metal foam has been used to generate a
predetermined turbulent flow upstream of the orifice plate. The
experimental results showed that the combination of an orifice plate
and metal foam flow conditioner is broadly insensitive to upstream
disturbances. This metal foam demonstrated a good performance in
terms of removing swirl and producing a repeatable flow profile
within a short distance downstream of the device. The results of using a combination of a metal foam flow conditioner and an orifice plate under non-standard flow conditions, including swirling and asymmetric flow, show that this package can preserve the accuracy of metering up to the level required by the standards.
Abstract: Competitive learning is an adaptive process in which the neurons in a neural network gradually become sensitive to different input pattern clusters. The basic idea behind Kohonen's Self-Organizing Feature Maps (SOFM) is competitive learning. SOFM can generate mappings from high-dimensional signal spaces to lower-dimensional topological structures. The main features of such mappings are topology preservation, feature mapping, and approximation of the probability distribution of the input patterns. To overcome some limitations of SOFM, e.g., a fixed number of neural units and a topology of fixed dimensionality, Growing Self-Organizing Neural Networks (GSONN) can be used. A GSONN can change its topological structure during learning: it grows by learning and shrinks by forgetting. To speed up training and convergence, a new variant of GSONN, twin growing cell structures (TGCS), is presented here. This paper first gives an introduction to competitive learning, SOFM, and its variants. Then, we discuss some GSONNs with fixed dimensionality, including growing cell structures, their variants, and the author's model, TGCS. The paper ends with a comparison of test results and conclusions.
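The competitive-learning rule underlying SOFM can be illustrated in one dimension: each input pulls its best-matching unit (and, early in training, that unit's neighbours) toward itself. The sketch below is a toy illustration of this general rule, not the TGCS model of the paper.

```python
import random

def train_som_1d(data, n_units=4, epochs=40, lr=0.3, seed=0):
    """Toy 1-D self-organizing map trained by competitive learning: the
    best-matching unit (and, early in training, its immediate neighbours
    at half strength) is moved toward each input sample."""
    rng = random.Random(seed)
    w = [rng.random() for _ in range(n_units)]
    for epoch in range(epochs):
        nb = 0.5 if epoch < epochs // 2 else 0.0  # neighbourhood shrinks away
        for x in data:
            bmu = min(range(n_units), key=lambda i: abs(w[i] - x))
            for i in range(n_units):
                h = 1.0 if i == bmu else (nb if abs(i - bmu) == 1 else 0.0)
                w[i] += lr * h * (x - w[i])       # pull weight toward input
    return w
```

Trained on data concentrated at 0.1 and 0.9, at least one unit converges onto each cluster, illustrating how the map approximates the input distribution.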
Abstract: This paper applies an anthropological approach to illuminate the dynamic cultural geography of Kazakhstani Korean ethnicity, focusing on its turning point, the historic 1988 Seoul Olympic Games. The Korean ethnic group was easily regarded by outsiders as a harmonious and homogeneous community, but deep-seated conflicts and hostilities existed within it. The majority's oppositional dichotomy of superiority and inferiority toward the minority was continuously reorganized and reinforced by differences in experience, memory, and sentiment. However, this chronic exclusive boundary collapsed under the patriotism ignited by the Olympics held in the mother country. This paper explores the fluidity of the subject through the formation of a boundary in which constructed cultural differences are continuously essentialized and reproduced, and through the dissolution of the cultural barrier in certain contexts.
Abstract: In this work, the bending fatigue life of notched specimens with various notch geometries and dimensions is investigated experimentally and with the Manson-Coffin theoretical method. In this theoretical method, the fatigue life of notched specimens is calculated using the fatigue life obtained from experiments on plain (unnotched) specimens. Three notch geometries, U-shaped, V-shaped, and C-shaped notches, are considered in this investigation. The experiments are conducted on a Moore rotating-bending machine. The specimens are made of a low-carbon steel alloy, which has wide application in industry. Stress-life curves are obtained experimentally for all notched specimens. The results indicate that the Manson-Coffin analytical method cannot adequately predict the fatigue life of notched specimens. However, it appears that the difference between the experiments and the Manson-Coffin predictions can be compensated for by a proportionality factor.
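The Manson-Coffin method referred to above rests on the strain-life relation Δε/2 = (σ′f/E)(2N)^b + ε′f(2N)^c. The sketch below evaluates that relation and inverts it for life; the material constants are generic illustrative values, not those of the paper's steel.

```python
def strain_amplitude(two_n, sf=900e6, e_mod=200e9, b=-0.1, ef=0.5, c=-0.6):
    """Manson-Coffin/Basquin strain-life relation: total strain amplitude
    = (sigma_f'/E)(2N)^b + eps_f'(2N)^c, with 2N the number of reversals."""
    return (sf / e_mod) * two_n ** b + ef * two_n ** c

def reversals_to_failure(eps_a, lo=1.0, hi=1e9, **mat):
    """Invert the strain-life curve for 2N by bisection in log space
    (the curve decreases monotonically with life, since b and c < 0)."""
    for _ in range(200):
        mid = (lo * hi) ** 0.5          # geometric midpoint
        if strain_amplitude(mid, **mat) > eps_a:
            lo = mid                    # life must be longer
        else:
            hi = mid
    return lo
```

The first term (Basquin) dominates at long, elastic-strain-controlled lives; the second (Coffin-Manson) dominates at short, plastic-strain-controlled lives.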
Abstract: In this work a new method for low-complexity image coding is presented that permits different settings and great scalability in the generation of the final bit stream. This coder implements a continuous-tone still-image compression system that combines lossy and lossless compression by making use of finite-arithmetic reversible transforms: both the color-space transformation and the wavelet transformation are reversible. The transformed coefficients are coded by a coding system based on a subdivision into smaller components (CFDS), similar to bit-significance coding. The subcomponents so obtained are reordered by a highly configurable alignment system that, depending on the application, makes it possible to rearrange the elements of the image and obtain different importance levels from which the bit stream will be generated. The subcomponents of each importance level are coded using a variable-length entropy coding system (VBLm) that permits the generation of an embedded bit stream. This bit stream by itself encodes a compressed still image; moreover, applying a packing system to the bit stream after the VBLm stage yields a final, highly scalable bit stream consisting of a basic image level and one or several refinement levels.
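Reversibility of the color-space transform is what makes the lossless path possible. The sketch below illustrates the idea with the well-known integer reversible color transform of lossless JPEG 2000, given as a general example; the abstract does not specify the paper's exact transform.

```python
def rct_forward(r, g, b):
    """Integer reversible color transform (as in lossless JPEG 2000):
    maps RGB to a luma/chroma triple using only integer arithmetic."""
    y = (r + 2 * g + b) >> 2     # >> 2 is floor division by 4
    return y, b - g, r - g       # y, Cb, Cr

def rct_inverse(y, cb, cr):
    """Exact inverse: recovers the original RGB with no rounding loss."""
    g = y - ((cb + cr) >> 2)
    return cr + g, g, cb + g
```

Although the forward transform rounds, the rounding error is recovered exactly on inversion because (r + 2g + b) and (cb + cr) = (r + b - 2g) differ by exactly 4g.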
Abstract: Testable software has two inherent properties: observability and controllability. Observability facilitates observation of the internal behavior of software to the required degree of detail. Controllability allows creation of difficult-to-achieve states prior to the execution of various tests. In this paper, we describe COTT, a Controllability and Observability Testing Tool, used to create testable object-oriented software. COTT provides a framework that helps the user instrument object-oriented software to build in the required controllability and observability. During testing, the tool facilitates the creation of difficult-to-achieve states required for testing difficult-to-test conditions and the observation of internal details of execution at the unit, integration, and system levels. The execution observations are logged in a test log file, which is used for post-analysis and to generate test coverage reports.
Abstract: For higher-order multiplications, a huge number of adders or compressors must be used to perform the partial-product addition. We have reduced the number of adders by introducing a special kind of adder capable of adding five, six, or seven bits per decade; these adders are called compressors. The binary counter property has been merged with the compressor property to develop high-order compressors. Use of these compressors permits the reduction of the vertical critical paths. A 16×16-bit multiplier has been developed using these compressors, which make the multiplier faster compared to the conventional design that uses 4-2 and 3-2 compressors.
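The counter property referred to above can be shown functionally: a (7,3) counter takes up to seven same-weight partial-product bits and emits their population count as three binary outputs. The sketch below is a behavioral model of that idea, not the paper's gate-level design.

```python
def counter_7_3(bits):
    """Functional model of a (7,3) counter: up to seven same-weight
    partial-product bits are compressed into a 3-bit binary count
    (output weights 1, 2 and 4)."""
    assert len(bits) <= 7 and all(b in (0, 1) for b in bits)
    s = sum(bits)                                  # population count
    return (s & 1, (s >> 1) & 1, (s >> 2) & 1)     # sum, carry, higher carry
```

Replacing trees of 3-2 compressors (full adders) with such wider counters reduces the number of levels in the partial-product reduction and hence the vertical critical path.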
Abstract: This research proposes a Preemptive Possibilistic
Linear Programming (PPLP) approach for solving multiobjective
Aggregate Production Planning (APP) problem with interval demand
and imprecise unit price and related operating costs. The proposed approach attempts to maximize profit and minimize changes in workforce level. It transforms the total-profit objective, which carries imprecise information, into three crisp objective functions: maximizing the most possible value of the profit, minimizing the risk of obtaining a lower profit, and maximizing the opportunity of obtaining a higher profit. The change-of-workforce-level objective is converted similarly. The problem is then solved according to the objective priorities, which is easier than solving the multiobjective problem simultaneously, as done in existing approaches. The possible range of the interval demand is also used to increase the flexibility of obtaining a better production plan. A practical application to an electronics company is presented to show the effectiveness of the proposed model.
Abstract: The aim of the study was to follow changes in the power-velocity relationship in female volleyball players during an annual training cycle. The study was conducted on eleven female volleyball players: age 21.6±1.7 years, body height 177.9±4.7 cm, body mass 71.3±6.6 kg, and training experience 8.6±3.3 years. The power-velocity relationship was determined from five maximal 10-second cycloergometer efforts with external loads equal to 2.5, 5.0, 7.5, 10.0, and 12.5% of body weight (BW), measured before (I) and after (II) the preparatory period and after the first (III) and second (IV) competitive seasons. The maximal power output increased from 9.30±0.85 W·kg-1 (I) to 9.50±0.96 W·kg-1 (II), 9.77±0.96 W·kg-1 (III) and 9.95±1.13 W·kg-1 (IV, p
Abstract: The purpose of this study is to find a natural gait for a biped robot, similar to a human gait, by analyzing the COG (Center of Gravity) trajectory of human walking. Human gait naturally maintains stability while using minimum energy. This paper seeks a natural gait pattern for a biped robot that uses minimum energy while maintaining stability, by analyzing the human gait pattern measured from gait images in the sagittal plane and the COG trajectory in the frontal plane. The joint torques of a human cannot be applied directly to a biped robot because the two have different degrees of freedom; nonetheless, a human and a 5-link biped robot are kinematically similar. We therefore generate the gait pattern of the 5-link biped robot using a genetic algorithm (GA) that adapts the gait pattern using the human's ZMP (Zero Moment Point) and the joint torques measured from the human gait pattern. The proposed algorithm creates a fluent, human-like gait pattern for the biped robot and minimizes energy consumption, because the 5-link biped robot model takes into account the torque of each human joint in the sagittal plane and the ZMP trajectory in the frontal plane. The paper demonstrates the superiority of the proposed algorithm by evaluating the 5-link biped robot on two kinds of gait patterns: one generated in the conventional way using inverse kinematics, and one generated by the proposed method considering visual appearance and efficiency.
Abstract: In this paper we investigate a number of Internet congestion control algorithms that have been developed in recent years. We found that many of these algorithms were designed to treat Internet traffic merely as a train of consecutive packets. A few other algorithms were specifically tailored to handle Internet congestion caused by media traffic carrying audiovisual content; this latter set of algorithms is considered aware of the nature of the media content. In this context, we briefly explain a number of congestion control algorithms and categorize them into two groups: i) media congestion control algorithms and ii) common congestion control algorithms. We recommend the use of the media congestion control algorithms because they are media content-aware, unlike the common algorithms, which manipulate such traffic blindly. We argue that the spread of such media content-aware algorithms over the Internet will lead to better congestion control in the coming years, given the observed emergence of the era of digital convergence, in which media traffic will form the majority of Internet traffic.
Abstract: The present work presents a method for calculating the ductility of rectangular beam sections, considering the nonlinear behavior of concrete and steel. This calculation procedure allows us to trace the curvature of the section as a function of the bending moment, and consequently to deduce the ductility. It also allows us to study the various parameters that affect the value of the ductility. A comparison was made of the effect of the maximum tension-steel ratios adopted by the ACI [1], EC8 [2], and RPA [3] codes on the value of the ductility. It was concluded that the maximum steel ratios permitted by the ACI [1] and RPA [3] codes are almost similar in their effect on the ductility and are too high; therefore, the ductility mobilized in the case of an earthquake is low, unlike with code EC8 [2]. Recommendations are made in this direction.
Abstract: This paper presents the design and prototype implementation of an intelligent data-processing framework for ubiquitous sensor networks. Much focus is put on how to handle the sensor data stream as well as on interoperability between low-level sensor data and application clients. Our framework first provides systematic middleware that mediates the interaction between the application layer and low-level sensors, analyzing a great volume of sensor data by filtering and integration to create value-added context information. Then, an agent-based architecture is proposed for real-time data distribution, to efficiently forward a specific event to the appropriate application registered in the directory service via an open interface. The prototype implementation demonstrates that our framework can host a sophisticated application on a ubiquitous sensor network and can autonomously evolve into new middleware, taking advantage of promising technologies such as software agents, XML, cloud computing, and the like.
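The register-and-forward pattern between the directory service and applications can be sketched as a minimal publish/subscribe registry. The class and method names below are illustrative assumptions, not the paper's actual interface.

```python
class DirectoryService:
    """Toy sketch of the register-and-forward idea: applications register
    interest in an event type, and each sensor event is forwarded to the
    matching subscribers (names and interface are illustrative)."""

    def __init__(self):
        self._subscribers = {}   # event type -> list of callbacks

    def register(self, event_type, callback):
        """An application registers a handler for one event type."""
        self._subscribers.setdefault(event_type, []).append(callback)

    def dispatch(self, event_type, payload):
        """Forward the event; returns how many applications received it."""
        handlers = self._subscribers.get(event_type, [])
        for cb in handlers:
            cb(payload)
        return len(handlers)
```

Decoupling producers from consumers this way is what lets the middleware route a specific event only to the applications that registered for it.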
Abstract: Biomimicry has many potential benefits as many
technologies found in nature are superior to their man-made
counterparts. As technological device components approach the micro- and nanoscale, surface properties such as adhesion and friction may need to be taken into account. Lowering surface adhesion by manipulating chemistry alone may no longer be sufficient for such components, and thus physical manipulation may be required.
Adhesion reduction is only one of the many surface functions
displayed by micro/nano-structured cuticles of insects. Here, we
present a mini review of our understanding of insect cuticle structures
and the relationship between the structure dimensions and the
corresponding functional mechanisms. It may be possible to introduce
additional properties to material surfaces (indeed multi-functional
properties) based on the design of natural surfaces.
Abstract: Cognitive models allow predicting some aspects of the utility and usability of human-machine interfaces (HMI), and simulating the interaction with these interfaces. Prediction is based on a task analysis, which investigates what a user is required to do, in terms of actions and cognitive processes, to achieve a task. Task analysis facilitates understanding of the system's functionalities. Cognitive models belong to the analytical approaches, which do not involve users during the development process of the interface. This article presents a study on the evaluation of human-machine interaction with a contextual assistant's interface using the ACT-R and GOMS cognitive models. The present work shows how these techniques may be applied in HMI evaluation, design, and research, by emphasizing first the task analysis and second the execution time of the task. To validate and support our results, an experimental study of user performance was conducted at the DOMUS laboratory during interaction with the contextual assistant's interface. The results show that the GOMS and ACT-R models give good and excellent predictions, respectively, of user performance at the task level as well as the object level; the simulated results are very close to those obtained in the experimental study.
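The kind of execution-time prediction GOMS makes can be illustrated with its simplest member, the Keystroke-Level Model (KLM), which sums fixed operator times over a task's action sequence. The sketch below uses the standard published operator values as an illustration; it is not the specific GOMS model built for the contextual assistant.

```python
# Keystroke-Level Model (KLM) operator times in seconds, after Card,
# Moran and Newell: K = keystroke (average typist), P = point with a
# pointing device, H = home hands between devices, M = mental preparation.
KLM_TIMES = {"K": 0.28, "P": 1.10, "H": 0.40, "M": 1.35}

def klm_estimate(operators):
    """Predict task execution time as the sum of the KLM operator times,
    e.g. "MPKK" = mentally prepare, point, then two keystrokes."""
    return sum(KLM_TIMES[op] for op in operators)
```

For example, "mentally prepare then press a key" ("MK") is predicted to take about 1.63 s.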