Abstract: Human pose estimation can be performed using Active Shape Models. Existing techniques that apply Active Shape Models to human-body research, such as human detection, primarily model the silhouette of the human body. A silhouette represents only the rough outline of the shape, so this approach cannot accurately estimate poses in which the two arms and the legs must be distinguished. To solve this problem, we applied a stick-figure "skeleton" model of the human body, which can accommodate a wide variety of poses. To obtain effective estimation results, we applied background subtraction and a modified matching algorithm of the original Active Shape Models in the fitting process. The model was built from images of 600 human bodies and has 17 landmark points that indicate body joints and key features of human pose. The maximum number of iterations in the fitting process was 30, and the execution time was less than 0.03 s.
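The core fitting step of an Active Shape Model constrains observed landmarks to plausible shapes by projecting the deviation from the mean shape onto learned shape modes and clamping the coefficients. The abstract does not describe the implementation, so the following is a minimal Python sketch of that projection-and-clamp step on a toy three-landmark shape with a single assumed orthonormal mode; the mean shape, mode, and ±3σ limit are all illustrative values, not the paper's trained model.

```python
import math

def fit_shape(mean, modes, limits, observed):
    """One ASM-style constraint step: project the observed landmark
    deviation onto the shape modes, clamp each coefficient to its
    +/- 3-sigma limit, then rebuild the shape from the mean."""
    dev = [o - m for o, m in zip(observed, mean)]
    coeffs = []
    for mode, lim in zip(modes, limits):
        b = sum(d * p for d, p in zip(dev, mode))  # projection (modes orthonormal)
        coeffs.append(max(-lim, min(lim, b)))
    rebuilt = mean[:]
    for b, mode in zip(coeffs, modes):
        rebuilt = [r + b * p for r, p in zip(rebuilt, mode)]
    return rebuilt, coeffs

# Toy shape: three landmarks stored as (x0, y0, x1, y1, x2, y2).
mean_shape = [0.0, 0.0, 1.0, 0.0, 2.0, 0.0]
a = 1.0 / math.sqrt(3.0)            # unit-norm mode entries
mode = [0.0, a, 0.0, -a, 0.0, a]    # one assumed shape mode
observed = [m + 2.0 * p for m, p in zip(mean_shape, mode)]
rebuilt, coeffs = fit_shape(mean_shape, [mode], [3.0], observed)
```

Because the observed deviation lies within the mode's ±3σ limit, the rebuilt shape reproduces it exactly; a larger deviation would be clamped back toward the mean, which is what keeps ASM fits anatomically plausible.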
Abstract: A dehydration process was carried out for tomato slices of var. Avinash after giving different pre-treatments such as calcium chloride (CaCl2), potassium metabisulphite (KMS), calcium chloride plus potassium metabisulphite (CaCl2 + KMS), and sodium chloride (NaCl). Untreated samples served as the control. A solar drier and a continuous conveyor (tunnel) drier were used for dehydration. Quality characteristics of the tomato slices, viz. moisture content, sugar, titratable acidity, lycopene content, dehydration ratio, rehydration ratio and non-enzymatic browning, as affected by the dehydration process, were studied. A storage study was also carried out over a period of six months for tomato powder packed in different types of packaging materials, viz. metalized polyester (MP) film and low-density polyethylene (LDPE). Changes in lycopene content and non-enzymatic browning (NEB) were estimated during storage at room temperature. Pre-treatment of 5 mm thick tomato slices with calcium chloride in combination with potassium metabisulphite, drying in a tunnel drier, and subsequent storage of the product in metalized polyester bags was selected as the best process.
Abstract: FACTS devices are used to control power flow, to increase transmission capacity and to improve the stability of the power system. One of the most widely used FACTS devices is the Unified Power Flow Controller (UPFC). The controller used in the control mechanism has a significant effect on how well the UPFC controls power flow and enhances system stability. Accordingly, in this study the capability of the UPFC is examined using different control mechanisms based on P, PI, PID and fuzzy logic controllers (FLC). The FLC was developed using the Takagi-Sugeno inference system in the decision process and Sugeno's weighted average method in the defuzzification process. Case studies with different operating conditions are applied to demonstrate the ability of the UPFC to control power flow and the effectiveness of the controllers on the performance of the UPFC. The PSCAD/EMTDC program is used to create the FLC and to simulate the UPFC model.
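Sugeno's weighted-average defuzzification, mentioned above, computes the controller output as the firing-strength-weighted mean of crisp rule consequents. Below is a minimal zero-order Takagi-Sugeno sketch in Python with an illustrative three-rule base on a normalized error input; the membership shapes and consequent values are assumptions for the example, not the paper's actual rule base (which runs inside PSCAD/EMTDC).

```python
def triangular(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def sugeno_output(error):
    """Zero-order Takagi-Sugeno inference with Sugeno's weighted-average
    defuzzification (illustrative three-rule controller)."""
    # Rule firing strengths from the error's membership degrees.
    w_neg = triangular(error, -1.0, -0.5, 0.0)
    w_zero = triangular(error, -0.5, 0.0, 0.5)
    w_pos = triangular(error, 0.0, 0.5, 1.0)
    # Each rule's crisp consequent (assumed control actions).
    num = w_neg * (-1.0) + w_zero * 0.0 + w_pos * 1.0
    den = w_neg + w_zero + w_pos
    return num / den if den else 0.0

u = sugeno_output(0.25)   # error halfway between the "zero" and "pos" rules
```

With the error at 0.25, the "zero" and "pos" rules fire equally, so the weighted average lands midway between their consequents.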
Abstract: (Bi0.5Na0.5)TiO3 powder doped with 8 mol % BaTiO3 (BNT-BT0.08), prepared by the sol-gel method, was compacted and sintered by the Spark Plasma Sintering (SPS) process. The influence of the SPS temperature on the densification of the BNT-BT0.08 ceramic was investigated. Starting from a sol-gel BNT-BT nanopowder containing 8 mol % BaTiO3 with an average particle size of about 30 nm, ceramics with a density of around 98 % of the theoretical value were obtained when the SPS temperature was about 850 °C. The average grain size of the resulting ceramics was 80 nm. The BNT-BT0.08 ceramic sample obtained by the SPS method showed good electrical properties at various frequencies.
Abstract: In recent years, increased competition and lower profit margins have necessitated a focus on improving the performance of the product development process, an area that has traditionally been excluded from detailed steering and evaluation. Systematic improvement requires a good understanding of current performance, which is why interest in product development performance measurement has increased dramatically. This paper presents a case study that evaluates the product development performance measurement system used in a Swedish company that is part of a global corporate group. The study is based on internal documentation and eighteen in-depth interviews with stakeholders involved in the product development process. The results of the case study include a description of which metrics are in use, how they are employed, and their effect on the quality of the performance measurement system. In particular, the importance of having a well-defined process proved to have a major impact on the quality of the performance measurement system in this case.
Abstract: In the context of spectrum surveillance, a new method to recover the code of a spread spectrum signal is presented, where the receiver has no knowledge of the transmitter's spreading sequence. In our previous paper, we used a genetic algorithm (GA) to recover the spreading code. Although genetic algorithms are well known for their robustness in solving complex optimization problems, increasing the length of the code often leads to an unacceptably slow convergence speed. To solve this problem, we introduce Particle Swarm Optimization (PSO) into code estimation for spread spectrum communication systems. In the search process for code estimation, the PSO algorithm has the merits of rapid convergence to the global optimum without being trapped in local suboptima, and good robustness to noise. In this paper we describe how to implement PSO as a component of a search algorithm for code estimation. Swarm intelligence offers a number of advantages due to the use of mobile agents, including scalability, fault tolerance, adaptation, speed, modularity, autonomy, and parallelism. These properties make swarm intelligence very attractive for spread spectrum code estimation, and also make it suitable for a variety of other kinds of channels. Our results compare swarm-based algorithms with genetic algorithms, and also show the performance of the PSO algorithm in the code estimation process.
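As a rough illustration of how PSO can drive code estimation, the sketch below searches for a ±1 chip sequence that maximizes correlation with a noisy observation of the code. The particle count, inertia and acceleration coefficients, code length, and the correlation fitness are illustrative assumptions, not the paper's actual configuration.

```python
import random

def fitness(candidate, received):
    # Correlation between a candidate code and the received chips.
    return sum(c * r for c, r in zip(candidate, received))

def binarize(position):
    # Map continuous particle positions to a +/-1 chip sequence.
    return [1 if x >= 0 else -1 for x in position]

def pso_code_estimate(received, n_particles=30, iters=100,
                      w=0.7, c1=1.5, c2=1.5):
    """Estimate a spreading code by PSO (illustrative sketch)."""
    n = len(received)
    pos = [[random.uniform(-1, 1) for _ in range(n)] for _ in range(n_particles)]
    vel = [[0.0] * n for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [fitness(binarize(p), received) for p in pos]
    g = max(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(n):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            f = fitness(binarize(pos[i]), received)
            if f > pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f > gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return binarize(gbest)

# Noisy observation of a known +/-1 code (illustration only).
random.seed(1)
true_code = [random.choice([-1, 1]) for _ in range(31)]
received = [c + random.gauss(0, 0.3) for c in true_code]
estimate = pso_code_estimate(received)
```

Because the correlation fitness is separable per chip, the global optimum is simply the sign of the received samples; the PSO swarm converges toward it without any gradient information, which is the property that makes it attractive when the code is long.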
Abstract: This paper focuses on a critical component of situational awareness (SA): the neural control of autonomous constant-depth flight of an autonomous underwater vehicle (AUV). Autonomous constant-depth flight is a challenging but important task for AUVs to achieve a high level of autonomy under adverse conditions. The fundamental requirements for constant-depth flight are knowledge of the depth and a properly designed controller to govern the process. The AUV named VORAM is used as a model for the verification of the proposed hybrid control algorithm. Three neural network controllers, namely NARMA-L2 controllers, are designed for fast and stable diving maneuvers of the chosen AUV model. This hybrid control strategy has been verified by simulation of diving maneuvers using the software package Simulink, and demonstrated good performance for fast SA in real-time search-and-rescue operations.
Abstract: Nanowire arrays of copper with uniform diameters have been synthesized by potentiostatic electrochemical metal deposition (EMD) from a copper sulphate and potassium chloride solution within the nano-channels of porous Indium-Tin Oxide (ITO, also known as tin-doped indium oxide) templates. The nanowires developed were fairly continuous, with diameters ranging from 110-140 nm along the entire length. Single- as well as poly-crystalline copper wires were prepared by application of the appropriate potential during the EMD process. Scanning electron microscopy (SEM), high resolution transmission electron microscopy (HRTEM), selected area electron diffraction (SAED) and atomic force microscopy (AFM) were used to characterize the synthesized nanowires at room temperature. The electrochemical response of the synthesized products was evaluated by cyclic voltammetry, while surface energy analysis was carried out using a goniometer.
Abstract: The gel-supported precipitation (GSP) process can be used to make spherical particles (spherules) of nuclear fuel, particularly for very high temperature reactors (VHTR) and even for implementing the process called SPHEREPAC. In these different cases, the main requirements are the sphericity of the particles to be manufactured and control over their grain size. Nonetheless, depending on the specifications defined for these spherical particles, the GSP process has intrinsic limits, particularly when fabricating very small particles. This paper describes the use of secondary fragmentation (water, water/PVA and uranyl nitrate) on solid surfaces under varying temperature and vibration conditions to assess the relevance of this new technique for manufacturing very small spherical particles by means of a modified GSP process. The fragmentation mechanisms are monitored and analysed, and trends for their subsequent optimised application are then described.
Abstract: The copper flotation tailings from the Konkola Copper mine in Nchanga, Zambia were used in this study. The purpose of the study was to determine the leaching characteristics of the tailings material before and after a physical beneficiation process was employed. A Knelson gravity concentrator (KC-MD3) was used for the beneficiation process. The copper leaching efficiencies and impurity co-extraction percentages of both the upgraded and the raw feed material were determined at different pH levels and temperatures. It was observed that copper extraction increased with an increase in temperature and a decrease in pH. In comparison to the raw feed sample, the upgraded sample showed a maximum copper extraction of 69%, which was 9% higher than the extraction from the raw feed. Impurity carry-over was reduced from 18% to 4% in the upgraded sample. This reduction in impurity co-extraction was a result of the removal of reactive gangue elements during the upgrading process, which minimized the number of side reactions occurring during leaching.
Abstract: In order to maximize the efficiency of an information management platform and to assist in decision making, the collection, storage and analysis of performance-relevant data have become of fundamental importance. This paper addresses the merits and drawbacks of the OLAP paradigm for efficiently navigating large volumes of performance measurement data hierarchically. System managers or database administrators navigate through adequately (re)structured measurement data in order to detect performance bottlenecks, identify causes of performance problems, or assess the impact of configuration changes on the system and its representative metrics. Of particular importance is finding the root cause of an imminent problem that threatens the availability and performance of an information system. By leveraging OLAP techniques, in contrast to traditional static reporting, this can be accomplished within a moderate amount of time and with little processing complexity. It is shown how OLAP techniques can help improve the understandability and manageability of measurement data and, hence, improve the whole performance analysis process.
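As a toy illustration of the hierarchical navigation described above, the snippet below rolls measurement rows up along chosen dimensions and drills back down by adding a dimension; the data model, dimension names, and values are invented for the example and are not the paper's schema.

```python
from collections import defaultdict

# Toy performance measurements: (host, metric, hour, value).
rows = [
    ("db1", "cpu", 9, 70), ("db1", "cpu", 10, 90),
    ("db2", "cpu", 9, 40), ("db2", "cpu", 10, 50),
    ("db1", "io", 9, 30),  ("db1", "io", 10, 35),
]

def roll_up(rows, keys):
    """Aggregate (average) measurements along the chosen dimensions,
    mimicking an OLAP roll-up / drill-down."""
    groups = defaultdict(list)
    for host, metric, hour, value in rows:
        dims = {"host": host, "metric": metric, "hour": hour}
        groups[tuple(dims[k] for k in keys)].append(value)
    return {k: sum(v) / len(v) for k, v in groups.items()}

by_host = roll_up(rows, ["host"])          # coarse view: one figure per host
drill = roll_up(rows, ["host", "hour"])    # drill down to hourly detail
```

Starting from the coarse per-host view, an administrator who spots an anomalous host drills down by hour (or metric) to localize the bottleneck, which is exactly the navigation pattern static reports cannot support.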
Abstract: This paper presents work on signal discrimination, specifically for the electrocardiogram (ECG) waveform. An ECG signal is composed of P, QRS, and T waves in each normal heartbeat, describing the pattern of heart rhythms that corresponds to a specific individual. Further medical diagnosis can then be performed to detect heart-related disease using the ECG information. The emphasis on QRS complex classification is discussed further to illustrate its importance. The Pan-Tompkins algorithm, a widely known technique, has been adapted to realize the QRS complex classification process. There are eight steps involved, namely sampling, normalization, low-pass filtering, high-pass filtering (together forming a band-pass filter), differentiation, squaring, averaging, and lastly QRS detection. The simulation results obtained are presented in a Graphical User Interface (GUI) developed using MATLAB.
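The eight steps listed above can be sketched end to end. The following simplified Python pipeline (the paper's implementation is in MATLAB) runs normalization, a crude band-pass, differentiation, squaring, moving-window averaging, and threshold detection on a synthetic spike train; the moving-average filters and the fixed threshold are simplifications standing in for the actual Pan-Tompkins filter designs.

```python
def moving_average(x, window):
    # Backward-looking moving-window average (the integration step).
    out = []
    for i in range(len(x)):
        lo = max(0, i - window + 1)
        out.append(sum(x[lo:i + 1]) / (i - lo + 1))
    return out

def pan_tompkins_sketch(ecg, fs):
    """Simplified Pan-Tompkins pipeline: band-pass -> derivative ->
    squaring -> averaging -> threshold detection."""
    # Normalization to the [-1, 1] range.
    peak = max(abs(v) for v in ecg) or 1.0
    x = [v / peak for v in ecg]
    # Crude band-pass: moving-average low-pass, then subtract a slower
    # moving average to remove baseline wander (high-pass).
    lp = moving_average(x, max(1, fs // 40))
    hp = [a - b for a, b in zip(lp, moving_average(lp, max(1, fs // 2)))]
    # Five-point derivative emphasises the steep QRS slopes.
    d = [0.0, 0.0] + [
        (2 * hp[i + 2] + hp[i + 1] - hp[i - 1] - 2 * hp[i - 2]) / 8
        for i in range(2, len(hp) - 2)
    ] + [0.0, 0.0]
    sq = [v * v for v in d]                             # squaring
    integ = moving_average(sq, max(1, int(0.15 * fs)))  # ~150 ms window
    thr = 0.5 * max(integ)                              # simple fixed threshold
    # Detect rising threshold crossings with a 200 ms refractory period.
    beats, last = [], -fs
    for i in range(1, len(integ)):
        if integ[i] >= thr > integ[i - 1] and i - last > 0.2 * fs:
            beats.append(i)
            last = i
    return beats

# Synthetic "ECG": one narrow spike per second on a 200 Hz grid.
fs = 200
ecg = [0.0] * (5 * fs)
for b in range(5):
    ecg[b * fs + 50] = 1.0
beats = pan_tompkins_sketch(ecg, fs)
```

On this synthetic input the detector finds one beat per spike; the real algorithm replaces the fixed threshold with adaptive signal/noise thresholds and search-back, which is what makes it robust on clinical recordings.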
Abstract: The current trend of increasing quality and demands on the final product is driving time analysis of the entire manufacturing process. The primary requirement of manufacturing is to produce as many products as possible, as quickly as possible, at the lowest possible cost, but of course with the highest quality. Such requirements can be satisfied only if all the elements entering into and affecting the production cycle are in a fully functional condition. These elements include the sensory equipment and intelligent control elements that are essential for building intelligent manufacturing systems. The intelligent manufacturing paradigm includes a new approach to the design of production system structures. Intelligent behaviour is based on monitoring important parameters of the system and its environment, and on reacting flexibly to changes. Realizing and utilizing this design paradigm as an "intelligent manufacturing system" enables a flexible system reaction to changes in production requirements as well as to changes in the environment. The results of these flexible reactions are a smaller layout space, decreased production and investment costs, and increased productivity. An intelligent manufacturing system should itself be a system that can flexibly respond to changes in what enters and exits the process, in interaction with its surroundings.
Abstract: The time-interleaved sigma-delta (TIΣΔ) architecture is a potential candidate for high-bandwidth analog-to-digital converters (ADC), which remain a bottleneck for software and cognitive radio receivers. However, the performance of the TIΣΔ architecture is limited by the unavoidable gain and offset mismatches resulting from the manufacturing process. This paper presents a novel digital calibration method to compensate for the gain and offset mismatch effects. The proposed method takes advantage of the reconstruction digital signal processing on each channel and requires only a few logic components for implementation. The run-time calibration is estimated at 10 and 15 clock cycles for offset cancellation and gain mismatch calibration, respectively.
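The paper's calibration method is not detailed in the abstract; as a hedged illustration of the underlying idea, the sketch below estimates one interleaved channel's offset from its sample mean and its gain from its RMS ratio to a reference, then corrects the samples. The test signal and the 0.05 offset and 1.02 gain errors are assumed values for the example, not measured mismatches.

```python
import math

def calibrate_channel(samples, ref_rms):
    """Estimate the offset (mean) and gain (RMS ratio to a reference)
    of one time-interleaved channel."""
    offset = sum(samples) / len(samples)
    centered = [s - offset for s in samples]
    rms = math.sqrt(sum(c * c for c in centered) / len(centered))
    return offset, rms / ref_rms

def correct(sample, offset, gain):
    # Remove the estimated offset, then normalise the gain.
    return (sample - offset) / gain

# One channel observing a known sinusoid, with assumed offset and gain errors.
n = 1024
ideal = [math.sin(2 * math.pi * 5 * i / n) for i in range(n)]
ref_rms = math.sqrt(sum(v * v for v in ideal) / n)
ch = [0.05 + 1.02 * v for v in ideal]          # offset 0.05, gain 1.02
off, g = calibrate_channel(ch, ref_rms)
fixed = [correct(s, off, g) for s in ch]
```

In hardware the same mean and scale estimates reduce to an accumulator and a multiplier per channel, which is consistent with the paper's claim of needing only a few logic components.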
Abstract: Numerical design optimization is a powerful tool that can be used by engineers during any stage of the design process. There are many different applications for structural optimization; a specific application discussed in this paper is experimental data matching. Data obtained through tests on a physical structure are matched with data from a numerical model of the same structure. The data of interest are the dynamic characteristics of an antenna structure, focusing on the mode shapes and modal frequencies. The structure used was a scaled and simplified model of the Karoo Array Telescope-7 (KAT-7) antenna structure. This kind of data matching is a complex and difficult task. This paper discusses how optimization can assist an engineer in the process of correlating a finite element model with vibration test data.
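As a one-parameter stand-in for this kind of model updating, the sketch below tunes the stiffness of a single spring-mass model by bisection until its natural frequency matches a "measured" value; the mass, stiffness range, and target frequency are invented for the illustration, and a real finite element correlation would optimize many parameters against several modes at once.

```python
import math

def natural_freq(k, m):
    # Natural frequency (Hz) of a single spring-mass oscillator.
    return math.sqrt(k / m) / (2 * math.pi)

def match_stiffness(f_measured, m, k_lo, k_hi, tol=1e-9):
    """Bisection on stiffness so the model frequency matches the test
    frequency -- a one-parameter stand-in for FE model updating."""
    while k_hi - k_lo > tol:
        k_mid = 0.5 * (k_lo + k_hi)
        if natural_freq(k_mid, m) < f_measured:
            k_lo = k_mid
        else:
            k_hi = k_mid
    return 0.5 * (k_lo + k_hi)

# "Measured" frequency generated from an assumed true stiffness.
m, k_true = 2.0, 5.0e4
f_meas = natural_freq(k_true, m)
k_fit = match_stiffness(f_meas, m, 1.0e4, 1.0e5)
```

The bisection works because frequency increases monotonically with stiffness; multi-parameter updating replaces it with a general optimizer minimizing the mismatch in frequencies and mode shapes simultaneously.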
Abstract: Fuller’s earth is a fine-grained, naturally occurring substance that has a substantial ability to adsorb impurities. In the present study, Fuller’s earth was characterized and used for the removal of Pb(II) from aqueous solution. The effects of various physicochemical parameters such as pH, adsorbent dosage and shaking time on adsorption were studied. The equilibrium studies showed that the solution pH was the key factor affecting adsorption, with an optimum pH of 5. Kinetic data for the adsorption of Pb(II) were best described by a pseudo-second-order model. The effective diffusion coefficient for Pb(II) adsorption was of the order of 10^-8 m^2/s. The adsorption data were well described by the Langmuir adsorption isotherm, with a maximum metal uptake of 103.3 mg/g of adsorbent. Mass transfer analysis was also carried out for the adsorption process; the mass transfer coefficients obtained indicate that the transport of adsorbate from the bulk to the solid phase was quite fast. The mean sorption energy calculated from the Dubinin-Radushkevich isotherm indicated that the metal adsorption process was chemical in nature.
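A Langmuir fit of the kind reported above can be reproduced from equilibrium data via the standard linearization Ce/qe = Ce/qm + 1/(qm·KL). The sketch below fits synthetic data generated from the reported qm of 103.3 mg/g and an assumed KL of 0.05 L/mg; the equilibrium concentrations are illustrative, not the study's measurements.

```python
def linear_fit(x, y):
    # Ordinary least-squares fit y = a*x + b.
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxx = sum(v * v for v in x)
    sxy = sum(a * b for a, b in zip(x, y))
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return a, (sy - a * sx) / n

def langmuir_params(ce, qe):
    """Fit the linearized Langmuir isotherm Ce/qe = Ce/qm + 1/(qm*KL)."""
    slope, intercept = linear_fit(ce, [c / q for c, q in zip(ce, qe)])
    qm = 1.0 / slope           # maximum uptake (mg/g)
    kl = slope / intercept     # Langmuir constant (L/mg)
    return qm, kl

# Synthetic equilibrium data from the assumed parameters (Ce in mg/L).
qm_true, kl_true = 103.3, 0.05
ce = [5.0, 10.0, 25.0, 50.0, 100.0, 200.0]
qe = [qm_true * kl_true * c / (1 + kl_true * c) for c in ce]
qm_fit, kl_fit = langmuir_params(ce, qe)
```

Because the data are generated exactly from the isotherm, the linearized fit recovers both parameters; with real measurements the same regression gives qm and KL along with a correlation coefficient for judging the fit.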
Abstract: Multi-criteria decision making (MCDM) methods such as the analytic hierarchy process, ELECTRE and multi-attribute utility theory are critically studied. They exhibit irregularities in the reliability of the ranking of the best alternatives. The Routing Decision Support (RDS) algorithm attempts to improve on some of these deficiencies. This paper gives a mathematical verification that the RDS algorithm conforms to the test criteria for an effective MCDM method when a linear preference function is considered.
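A linear preference function of the kind considered here scores each alternative as a weighted sum of its criteria. The minimal sketch below ranks invented routing alternatives this way; it illustrates the linear preference model only and is not the RDS algorithm itself.

```python
def weighted_sum_rank(alternatives, weights):
    """Rank alternatives by a linear (weighted-sum) preference function."""
    scores = {name: sum(w * c for w, c in zip(weights, crits))
              for name, crits in alternatives.items()}
    order = sorted(scores, key=scores.get, reverse=True)
    return order, scores

# Invented alternatives scored on (bandwidth, reliability, cost-benefit),
# each criterion already normalized to [0, 1].
alts = {
    "route_a": (0.9, 0.7, 0.4),
    "route_b": (0.6, 0.9, 0.8),
    "route_c": (0.5, 0.5, 0.9),
}
weights = (0.5, 0.3, 0.2)
order, scores = weighted_sum_rank(alts, weights)
```

With a fixed linear preference function the ranking is unambiguous; the irregularities the abstract criticizes (e.g. rank reversal) arise in methods whose pairwise comparisons or outranking steps do not reduce to such a single consistent score.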
Abstract: When an assignable cause manifests itself in a multivariate process and the process shifts to an out-of-control condition, a root-cause analysis should be initiated by quality engineers to identify and eliminate the assignable cause affecting the process. A root-cause analysis in a multivariate process is more complex than in a univariate process. For a process involving several correlated variables, an effective root-cause analysis is only possible when all the required knowledge can be identified simultaneously: the out-of-control condition, the change point, and the variable(s) responsible for the out-of-control condition. Although the literature addresses different schemes for monitoring multivariate processes, few scientific reports focus on all of this required knowledge. To the best of the author’s knowledge, this is the first time that a multi-task model based on an artificial neural network (ANN) is reported that provides all the required knowledge at the same time for a multivariate process with more than two correlated quality characteristics. The performance of the proposed scheme is evaluated numerically for different step shifts affecting the mean vector, with the average run length used to investigate the performance of the proposed multi-task model. The simulation results indicate that the multi-task scheme provides all the required knowledge effectively.
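The abstract does not detail the ANN model; for context, the classical monitoring statistic that such multivariate schemes extend is Hotelling's T², sketched below for two correlated variables with assumed in-control parameters. T² flags the out-of-control condition but, unlike the multi-task model described above, says nothing about the change point or which variable is responsible.

```python
def hotelling_t2(x, mean, cov):
    """Hotelling T^2 for a bivariate observation x against an
    in-control mean vector and covariance matrix."""
    d0, d1 = x[0] - mean[0], x[1] - mean[1]
    (s00, s01), (s10, s11) = cov
    det = s00 * s11 - s01 * s10
    # Inverse of the 2x2 covariance, applied to the deviation vector.
    inv00, inv01 = s11 / det, -s01 / det
    inv10, inv11 = -s10 / det, s00 / det
    return d0 * (inv00 * d0 + inv01 * d1) + d1 * (inv10 * d0 + inv11 * d1)

# Assumed in-control parameters for two correlated quality characteristics.
mean = (0.0, 0.0)
cov = ((1.0, 0.8), (0.8, 1.0))
t2_in = hotelling_t2((0.1, 0.2), mean, cov)    # near the mean: small T^2
t2_out = hotelling_t2((3.0, -1.0), mean, cov)  # mean shift: large T^2
```

An alarm is raised when T² exceeds a control limit derived from the chi-squared (or F) distribution; the average run length used in the abstract's evaluation counts how many samples pass, on average, before such an alarm.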
Abstract: In this paper, the energy performance of a selected UHDE ammonia plant is optimized by conducting heat integration through waste heat recovery and the synthesis of a heat exchanger network (HEN). Minimum hot and cold utility requirements were estimated using an IChemE spreadsheet, and supporting simulation was carried out using HYSYS software. The results showed that no heating utility is needed, while the required cold utility was found to be around 268,714 kW; hence a threshold pinch case was faced. The hot and cold streams were then matched appropriately. The recovered waste heat resulted in savings in HP and LP steam of approximately 51.0% and 99.6%, respectively. An economic analysis of the proposed HEN showed a very attractive overall payback period not exceeding 3 years. In general, a net saving approaching 35% was achieved by implementing heat optimization of the studied UHDE ammonia process.
Abstract: At a time when electronic books, or e-books, offer students a fun way of learning, teachers who are used to paper textbooks may find it a new challenge to use them as part of the learning process. There are various types of e-books available to suit students' knowledge, characteristics, abilities, and interests. This paper discusses teachers' perceptions of the use of e-books in place of paper textbooks in the classroom. A survey was conducted of 72 teachers who use e-books as textbooks. It was discovered that a majority of these teachers had good perceptions of the use of e-books; however, they had some minor problems using the devices, which can be overcome with some strategies and a suggested framework.