Abstract: In this paper we propose a novel method for human
face segmentation using the elliptical structure of the human head. It
makes use of the information present in the edge map of the image.
In this approach we use the fact that the eigenvalues of the
covariance matrix represent the elliptical structure: the large and
small eigenvalues are associated with the major and minor axial
lengths of an ellipse. The other elliptical parameters are used to
identify the centre and orientation of the face. Since an elliptical
Hough transform would require a five-dimensional Hough space, the
Circular Hough Transform (CHT) is used instead to evaluate the
elliptical parameters. A sparse matrix technique is used to perform
the CHT: it squeezes out zero elements and stores only the small
number of non-zero elements, reducing both storage space and
computation time. A neighborhood suppression scheme is used to
identify the valid Hough peaks.
Hough peaks. The accurate position of the circumference pixels for
occluded and distorted ellipses is identified using Bresenham's
raster scan algorithm, which uses the geometrical symmetry
properties. This method does not require the evaluation of tangents
for curvature contours, which are very sensitive to noise. The method
has been evaluated on several images with different face orientations.
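As a hedged illustration of the covariance idea above (an illustrative outline with synthetic sample points and an assumed scaling convention, not the authors' code), an eigen-decomposition of the covariance matrix of contour coordinates yields centre, axial lengths and orientation:

```python
import numpy as np

def ellipse_from_edges(points):
    """Estimate centre, axes and orientation of an ellipse from 2D points.

    The eigenvector of the larger covariance eigenvalue gives the
    major-axis direction; the square roots of the eigenvalues are
    proportional to the axial lengths (the factor 2 is illustrative).
    """
    pts = np.asarray(points, dtype=float)
    centre = pts.mean(axis=0)                         # ellipse centre
    cov = np.cov((pts - centre).T)                    # 2x2 covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)            # ascending eigenvalues
    minor, major = 2.0 * np.sqrt(eigvals)             # proportional axial lengths
    angle = np.arctan2(eigvecs[1, 1], eigvecs[0, 1])  # major-axis orientation
    return centre, major, minor, angle

# Synthetic "edge pixels" on an axis-aligned ellipse centred at (1, 2).
t = np.linspace(0, 2 * np.pi, 400, endpoint=False)
pts = np.stack([1 + 4 * np.cos(t), 2 + 2 * np.sin(t)], axis=1)
centre, major, minor, angle = ellipse_from_edges(pts)
```

For this synthetic contour the recovered centre is (1, 2) and the major axis comes out horizontal, matching the larger spread along x.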
Abstract: Decision feedback equalizers are commonly employed to reduce the error caused by intersymbol interference. Here, an adaptive decision feedback equalizer with a new adaptation algorithm is presented. The algorithm follows a block-based approach to the normalized least mean square (NLMS) algorithm with set-membership filtering and achieves significantly lower computational complexity than its conventional NLMS counterpart with set-membership filtering. The results show that the proposed algorithm yields similar bit error rate performance to the latter over a reasonable range of signal-to-noise ratios.
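A minimal sketch of the set-membership idea (a per-sample update rather than the proposed block-based algorithm; the bound gamma, filter length and training data are assumptions):

```python
import numpy as np

def sm_nlms_update(w, x, d, gamma=0.1, eps=1e-8):
    """One set-membership NLMS step for taps w, input vector x, desired d.

    The coefficients change only when the a-priori error exceeds the
    bound gamma; skipped updates are what save computation relative to
    plain NLMS.
    """
    e = d - w @ x                        # a-priori error
    if abs(e) <= gamma:                  # already inside the membership set
        return w, False
    mu = 1.0 - gamma / abs(e)            # data-dependent step size
    return w + mu * e * x / (x @ x + eps), True

rng = np.random.default_rng(0)
w_true = np.array([0.5, -0.3, 0.1])      # unknown system to identify
w = np.zeros(3)
updates = 0
for _ in range(500):
    x = rng.standard_normal(3)
    d = w_true @ x                       # noiseless desired signal
    w, did_update = sm_nlms_update(w, x, d)
    updates += did_update
```

The filter stops adapting whenever the a-priori error is already within the bound, which is where the computational saving over plain NLMS comes from.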
Abstract: In this paper a comprehensive model of a fossil fueled
power plant (FFPP) is developed in order to evaluate the
performance of a newly designed turbine follower controller.
Considering the drawbacks of previous works, an overall model is
developed to minimize the error between each subsystem model
output and the experimental data obtained at the actual power plant.
The developed model is organized into two main subsystems, namely
the boiler and the turbine. Considering the characteristics of each
FFPP subsystem, different modeling approaches are developed. For the
economizer, evaporator, superheater and reheater, first-order models
are determined based on the principles of mass and energy conservation.
Simulations verify the accuracy of the developed models. Due to the
nonlinear characteristics of the attemperator, a new model based on
a genetic-fuzzy system utilizing the Pittsburgh approach is
developed, showing promising performance compared with models
derived by other methods such as ANFIS. The optimization constraints are handled
utilizing penalty functions. The effect of increasing the number of
rules and membership functions on the performance of the proposed
model is also studied and evaluated. The turbine model is developed
based on the equation of adiabatic expansion. Parameters of all
evaluated models are tuned by means of evolutionary algorithms.
Based on the developed model, a fuzzy PI controller is designed. It
is then successfully implemented in the turbine follower control
strategy of the plant. In this control strategy, instead of keeping
the control parameters constant, they are adjusted on-line according to
the error and the error rate. It is shown that the response of the
system improves significantly. It is also shown that fuel consumption
decreases considerably.
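For the heat-exchanger subsystems, the abstract describes first-order models derived from mass and energy conservation. A hedged sketch of such a model follows; the gain K, time constant tau and explicit Euler integration are illustrative assumptions, not identified plant values:

```python
import numpy as np

def first_order_response(u, K, tau, dt, y0=0.0):
    """Simulate the first-order model tau*dy/dt + y = K*u with explicit Euler.

    K and tau are the parameters that, in the paper's setting, an
    evolutionary algorithm would tune against plant data.
    """
    y = np.empty(len(u))
    yk = y0
    for i, uk in enumerate(u):
        yk = yk + dt * (K * uk - yk) / tau   # one Euler step
        y[i] = yk
    return y

u = np.ones(2000)                            # unit step input, 20 s at dt=0.01
y = first_order_response(u, K=2.0, tau=5.0, dt=0.01)
```

After roughly four time constants the response settles near the steady-state value K, the familiar first-order step behaviour.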
Abstract: In this work, a new approach is proposed to control
the manipulators of a humanoid robot. The kinematics of the
manipulators, in terms of the position, velocity, acceleration and
torque of each joint, is computed using the Denavit-Hartenberg (D-H)
notations. These variables are used to design the manipulator control
system, which has been proposed in this work. To support the
development of the controller, a simulation of the manipulator is
designed for the humanoid robot. This simulation is developed through
the use of the Virtual Reality Toolbox and Simulink in Matlab. The
Virtual Reality Toolbox in Matlab provides the interfacing and
controls to an environment which is developed based on the Virtual
Reality Modeling Language (VRML). Chains of bones were used to
represent the robot.
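The D-H computation mentioned above can be sketched as follows; the link parameters here describe a generic two-link planar arm, not the humanoid's actual manipulator:

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Homogeneous transform for one link (classic D-H parameters)."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

# Two unit links in a plane (alpha = d = 0): chaining one transform per
# joint gives the end-effector pose used for control.
T = dh_transform(np.pi / 2, 0.0, 1.0, 0.0) @ dh_transform(-np.pi / 2, 0.0, 1.0, 0.0)
x, y = T[0, 3], T[1, 3]
```

Here the two joint rotations cancel, so the rotation part of T is the identity and the end effector lands at (1, 1), matching the planar forward-kinematics formulas.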
Abstract: As the amount of data stored in storage subsystems
increases tremendously, data protection techniques have become more
important than ever for providing data availability and reliability. In this
paper, we present the file system-based data protection (WOWSnap)
that has been implemented using WORM (Write-Once-Read-Many)
scheme. In WOWSnap, once WORM files have been created, only
privileged read requests to them are allowed, protecting the data
against intentional or accidental intrusions. Furthermore, every
WORM file is associated with a protection cycle, a time period during
which the file must be securely protected. Once the protection cycle
has expired, WORM files are automatically moved to the
general-purpose data section without any user intervention. This
prevents the WORM data section from being consumed by
unnecessary files. We evaluated the performance of WOWSnap on a
Linux cluster.
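A hedged sketch of the protection-cycle check described above (illustrative logic only; WOWSnap's actual metadata layout and policy engine are not shown):

```python
import time

def is_protection_expired(created_at, protection_cycle_s, now=None):
    """Return True once a WORM file's protection cycle has elapsed.

    In the scheme described, an expired file becomes eligible to move
    back to the general-purpose data section.
    """
    now = time.time() if now is None else now
    return now - created_at >= protection_cycle_s

# A file created 10 days ago: a 7-day cycle has expired, a 30-day
# cycle has not. (Timestamps are arbitrary example values.)
day = 86400
now = 1_700_000_000
expired = is_protection_expired(now - 10 * day, 7 * day, now=now)
still_protected = not is_protection_expired(now - 10 * day, 30 * day, now=now)
```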
Abstract: This paper aims at numerically analysing the effect
of an active flow control (AFC) by a vortex generator jet (VGJ)
submerged in a boundary layer via Chimera Grids and Detached-
Eddy Simulation (DES). The DES results are judged against
Reynolds-Averaged Navier-Stokes (RANS) results and compared with
experiments that showed an unsteady vortex motion downstream of the
VGJ. The experimental results showed that embedding a longitudinal
vortex structure in the main stream flow is quite effective in
increasing the near-wall momentum over a separated aircraft
wing. In order to simulate such
a flow configuration together with the VGJ, an efficient numerical
approach is required. This requirement is fulfilled by performing
the DES simulation over the flat plate using the DLR TAU Code.
The DES predictions identify the vortex region via smooth hybrid
length scale and predict the unsteady vortex motion observed in
the experiments. The DES results also showed that sufficient
grid refinement in the vortex region resolves the turbulent scales
downstream of the VGJ, the spatial vortex core position and the
non-dimensional momentum coefficient RVx.
Abstract: Purpose: To develop a method for automatic segmentation of adipose and muscular tissue in thighs from magnetic resonance images. Materials and methods: Thirty obese women were scanned on a Siemens Impact Expert 1T resonance machine, and 1500 images were finally used in the tests. The developed segmentation method is a recursive, multilevel process that makes use of several concepts such as shaped histograms, adaptive thresholding and connectivity. The segmentation process was implemented in Matlab and operates without any user interaction. The whole set of images was segmented with the developed method. An expert radiologist segmented the same set of images following a manual procedure with the aid of the SliceOmatic software (Tomovision); these segmentations constituted our 'gold standard'. Results: The number of pixels on which the automatic and manual segmentations coincided was measured. On average, agreement exceeded 90% in most of the images. Conclusions: The proposed approach allows effective automatic segmentation of thigh MRIs, comparable to expert manual performance.
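A hedged sketch of the histogram-thresholding ingredient: an Otsu-style global threshold computed from a synthetic bimodal intensity distribution. The paper's method is recursive and multilevel, which is not reproduced here:

```python
import numpy as np

def otsu_threshold(image, bins=256):
    """Pick the intensity threshold maximizing between-class variance."""
    hist, edges = np.histogram(image, bins=bins)
    p = hist.astype(float) / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2.0
    w0 = np.cumsum(p)                    # class-0 probability up to each bin
    m = np.cumsum(p * centers)           # cumulative mean
    mg = m[-1]                           # global mean
    w1 = 1.0 - w0
    valid = (w0 > 0) & (w1 > 0)
    between = np.zeros_like(w0)
    between[valid] = (mg * w0[valid] - m[valid]) ** 2 / (w0[valid] * w1[valid])
    return centers[np.argmax(between)]

# Synthetic "image": two tissue-like intensity populations near 50 and 200.
rng = np.random.default_rng(1)
img = np.concatenate([rng.normal(50, 10, 5000), rng.normal(200, 10, 5000)])
t = otsu_threshold(img)
```

The threshold lands between the two modes, separating the two intensity classes the way a tissue/background split would.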
Abstract: The motion planning technique described in this paper has been developed to eliminate or reduce the residual vibrations of belt-driven rotary platforms, while maintaining unchanged the motion time and the total angular displacement of the platform. The proposed approach is based on a suitable choice of the motion command given to the servomotor that drives the mechanical device; this command is defined by some numerical coefficients which determine the shape of the displacement, velocity and acceleration profiles. Using a numerical optimization technique, these coefficients can be changed without altering the continuity conditions imposed on the displacement and its time derivatives at the initial and final time instants. The proposed technique can be easily and quickly implemented on an actual device, since it requires only a simple modification of the motion command profile mapped in the memory of the electronic motion controller.
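A sketch of a motion command of the kind described: a quintic displacement profile whose boundary conditions (zero velocity and acceleration at both endpoints) are the continuity constraints the optimization must preserve. The specific polynomial and its values are illustrative; the paper's profile carries additional free coefficients:

```python
import numpy as np

def quintic_profile(theta, T, t):
    """Displacement, velocity and acceleration of a quintic motion profile.

    Satisfies s(0)=0, s(T)=theta and zero velocity/acceleration at both
    endpoints, so the command is continuous up to the second derivative.
    """
    tau = np.asarray(t) / T
    s = theta * (10 * tau**3 - 15 * tau**4 + 6 * tau**5)
    v = theta / T * (30 * tau**2 - 60 * tau**3 + 30 * tau**4)
    a = theta / T**2 * (60 * tau - 180 * tau**2 + 120 * tau**3)
    return s, v, a

t = np.linspace(0.0, 2.0, 201)
s, v, a = quintic_profile(np.pi, 2.0, t)   # half-turn of the platform in 2 s
```

Any reshaping of this profile that keeps the endpoint conditions intact leaves the motion time and total displacement unchanged, which is the degree of freedom the optimization exploits.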
Abstract: We introduce an extended resource leveling model that abstracts real-life projects with specific work ranges for each resource. Contrary to traditional resource leveling problems, this model considers scarce resources and multiple objectives: minimization of the project makespan and leveling of each resource's usage over time. We formulate this model as a multiobjective optimization problem and propose a multiobjective genetic algorithm-based solver to optimize it. This solver consists of a two-stage process: a main stage where we obtain non-dominated solutions for all the objectives, and a post-processing stage where we specifically seek to improve the resource leveling of these solutions. We propose an intelligent encoding for the solver that allows domain-specific knowledge to be included in the solving mechanism. The chosen encoding proves effective for solving leveling problems with scarce resources and multiple objectives. The outcomes of the proposed solver represent optimized trade-offs (alternatives) that can later be evaluated by a decision maker; this multi-solution approach is an advantage over the traditional single-solution approach. We compare the proposed solver with state-of-the-art resource leveling methods and report competitive results.
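The pair of objectives can be illustrated with a small sketch; the task data and the squared-deviation leveling metric are assumptions (the actual solver is a multiobjective genetic algorithm):

```python
def leveling_objectives(starts, durations, demands):
    """Return (makespan, sum of squared usage deviations from the mean).

    Builds the resource-usage profile from task start times, durations
    and per-task resource demands, then scores how uneven it is.
    """
    makespan = max(s + d for s, d in zip(starts, durations))
    usage = [0.0] * makespan
    for s, d, r in zip(starts, durations, demands):
        for t in range(s, s + d):
            usage[t] += r
    mean = sum(usage) / makespan
    return makespan, sum((u - mean) ** 2 for u in usage)

# Two schedules of the same three unit-demand tasks: a serial, perfectly
# level schedule vs one that stacks two tasks at the start.
level = leveling_objectives([0, 2, 4], [2, 2, 2], [1, 1, 1])
stacked = leveling_objectives([0, 0, 4], [2, 2, 2], [1, 1, 1])
```

Both schedules have the same makespan, but only the serial one has zero leveling penalty; a multiobjective solver returns such trade-offs as non-dominated alternatives.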
Abstract: In syntactic pattern recognition a pattern can be
represented by a graph. Given an unknown pattern represented by
a graph g, the problem of recognition is to determine if the graph g
belongs to a language L(G) generated by a graph grammar G. The
so-called IE graphs have been defined in [1] for a description of
patterns. The IE graphs are generated by so-called ETPL(k) graph
grammars defined in [1]. An efficient parsing algorithm for ETPL(k)
graph grammars for syntactic recognition of patterns represented by
IE graphs has been presented in [1]. In practice, structural
descriptions may contain pattern distortions, so that the assignment
of a graph g, representing an unknown pattern, to
a graph language L(G) generated by an ETPL(k) graph grammar G is
rejected by the ETPL(k) type parsing. Therefore, there is a need for
constructing effective parsing algorithms for recognition of distorted
patterns. The purpose of this paper is to present a new approach to
syntactic recognition of distorted patterns. To take into account all
variations of a distorted pattern under study, a probabilistic
description of the pattern is needed. A random IE graph approach is
proposed here for such a description ([2]).
Abstract: In this paper we propose a novel Run Time Interface
(RTI) technique to provide an efficient environment for MPI jobs on
the heterogeneous architecture of PARAM Padma. It suggests an
innovative, unified framework for the job management interface
system in parallel and distributed computing. This approach employs
a proxy scheme. The implementation shows that the proposed RTI is
highly scalable and stable. Moreover, the RTI provides storage access
for MPI jobs on various operating system platforms and improves
data access performance through the high-performance C-DAC
Parallel File System (C-PFS). The performance of the RTI is
evaluated using standard HPC benchmark suites, and the
simulation results show that the proposed RTI performs well on a
large-scale supercomputing system.
Abstract: This paper presents a wavelet transform and Support
Vector Machine (SVM) based algorithm for estimating fault location
on transmission lines. The discrete wavelet transform (DWT) is used
for data pre-processing, and the resulting data are used for training
and testing the SVM. Five mother wavelets are applied in the signal
processing to identify the wavelet family most appropriate for
estimating fault location. The results demonstrated the ability of the SVM
to generalize the situation from the provided patterns and to
accurately estimate the location of faults with varying fault resistance.
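A hedged sketch of the DWT pre-processing step, using a hand-rolled one-level Haar transform (the simplest of the mother-wavelet families such a comparison would include; the synthetic fault signal and the detail-energy feature are assumptions):

```python
import numpy as np

def haar_dwt(signal):
    """One-level Haar DWT: returns (approximation, detail) coefficients."""
    x = np.asarray(signal, dtype=float)
    x = x[: len(x) // 2 * 2]             # truncate to even length
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)
    return approx, detail

# A clean sinusoidal record vs one with an abrupt fault-like jump: the
# discontinuity shows up as extra detail-coefficient energy, the kind of
# feature a fault-location SVM could be trained on.
n = np.arange(256)
clean = np.sin(2 * np.pi * n / 64)
faulty = clean.copy()
faulty[129:] += 1.0                      # abrupt change mid-record
_, d_clean = haar_dwt(clean)
_, d_faulty = haar_dwt(faulty)
```

The detail coefficients localize the discontinuity in time, which is why wavelet features are attractive for transient fault records.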
Abstract: Modeling complex dynamic systems, for which mathematical
models are very difficult to establish, requires new and
modern methodologies that exploit existing expert
knowledge, human experience and historical data. Fuzzy cognitive
maps are very suitable, simple, and powerful tools for simulation and
analysis of these kinds of dynamic systems. However, human experts
are subjective and can handle only relatively simple fuzzy cognitive
maps; therefore, there is a need for new approaches to the
automated generation of fuzzy cognitive maps from historical data.
In this study, a new learning algorithm, called Big Bang-Big
Crunch, is proposed for the first time in the literature for the
automated generation of fuzzy cognitive maps from data. Two
real-world examples, namely a process control system and a radiation
therapy process, and one synthetic model are used to demonstrate the
effectiveness and usefulness of the proposed methodology.
Abstract: Efficient modulo 2^n+1 adders are important for
several applications including residue number systems, digital signal
processors and cryptographic algorithms. In this paper we present a
novel modulo 2^n+1 addition algorithm for a recently presented
number system. The proposed approach is introduced to reduce
the power dissipated. In a conventional modulo 2^n+1
adder, all operands have (n+1)-bit length. To avoid using (n+1)-bit
circuits, the diminished-1 and carry-save diminished-1 number
systems can be used effectively in applications. In the paper, we also
derive two new architectures for designing modulo 2^n+1 adders, based
on an n-bit ripple-carry adder. The first architecture is a faster design,
whereas the second one uses less hardware. In the proposed method,
the special treatment required for zero operands in the diminished-1
number system is removed. The fastest modulo 2^n+1 adders in the
normal binary system require three-operand adders; this problem is
also resolved in this paper. The proposed architectures are compared
with some efficient adders based on ripple-carry and high-speed
adders. It is shown that the hardware overhead and power
consumption are reduced. Besides power reduction, in some
cases the power-delay product is also reduced.
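A behavioural sketch of diminished-1 modulo 2^n+1 addition, the representation such architectures build on (this shows the arithmetic only, not the proposed circuits, and keeps the zero-operand exclusion that the paper's method removes):

```python
def dim1_add(a_star, b_star, n):
    """Diminished-1 modulo 2**n + 1 addition.

    A nonzero operand A is stored as a_star = A - 1 on n bits; the sum
    is corrected with a complemented end-around carry: add 1 only when
    the n-bit addition produces no carry out.
    """
    t = a_star + b_star
    carry = t >> n                               # carry out of the n-bit adder
    return (t + (1 - carry)) & ((1 << n) - 1)    # +1 only when no carry

# Exhaustive check for n = 4 (modulo-17 arithmetic), skipping the
# zero-result case that needs the special treatment.
n = 4
for A in range(1, 2**n + 1):
    for B in range(1, 2**n + 1):
        if (A + B) % (2**n + 1) == 0:
            continue                             # zero needs the special case
        s_star = dim1_add(A - 1, B - 1, n)
        assert s_star == (A + B) % (2**n + 1) - 1
```

The appeal of the representation is visible here: all datapaths stay n bits wide, with the extra (n+1)-th bit replaced by a single carry correction.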
Abstract: Text similarity measurement is a fundamental issue in
many textual applications such as document clustering, classification,
summarization and question answering. However, prevailing approaches
based on the Vector Space Model (VSM) suffer to varying degrees
from the limitation of the Bag of Words (BOW) representation, which
ignores the semantic relationships among words. Enriching document
representation with background knowledge from Wikipedia has proven
to be an effective way to address this problem, but most existing
methods still cannot avoid similar BOW flaws in the new vector space. In this
paper, we propose a novel text similarity measurement which goes
beyond VSM and can find semantic affinity between documents.
Specifically, it is a unified graph model that exploits Wikipedia as
background knowledge and synthesizes both document representation
and similarity computation. The experimental results on two different
datasets show that our approach significantly improves VSM-based
methods in both text clustering and classification.
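The BOW limitation can be seen in a few lines (illustrative sentences; the graph model itself is not reproduced): two near-synonymous phrases share no terms, so their VSM cosine similarity is zero:

```python
from collections import Counter
from math import sqrt

def cosine_bow(doc_a, doc_b):
    """Cosine similarity of raw term-frequency (bag-of-words) vectors."""
    ta, tb = Counter(doc_a.lower().split()), Counter(doc_b.lower().split())
    dot = sum(ta[w] * tb[w] for w in ta)
    na = sqrt(sum(v * v for v in ta.values()))
    nb = sqrt(sum(v * v for v in tb.values()))
    return dot / (na * nb)

# Semantically close, lexically disjoint: BOW scores them as unrelated.
sim = cosine_bow("the automobile maker", "a car manufacturer")
# Sharing one literal term is the only way BOW produces similarity.
sim_shared = cosine_bow("car maker", "car manufacturer")
```

Relating "car" to "automobile" requires external knowledge such as Wikipedia, which is exactly the gap the graph-based measurement targets.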
Abstract: Customer-supplier collaboration enables firms to
achieve greater success than acting independently. Nevertheless, not
many firms have fully utilized the potential of collaboration. This
paper presents organizational and human related success factors for
collaboration in manufacturing supply chains in casting industry. Our
research approach was a case study including multiple cases. Data
was gathered by interviews and group discussions in two different
research projects. In the first research project we studied seven firms
and in the second five. It was found that the success factors are
interrelated; in other words, organizational and human factors
enable success together, but none of them alone. Among the identified
success factors are a culture of honoring agreements and promptness
in informing the partner about changes affecting the product or the
delivery chain.
Abstract: This paper examines the influence of economic and Information and Communication Technology (ICT) development on the recent increase in Internet purchases by individuals in European Union member states. After a growing trend in Internet purchases across the EU27 was observed, an all-possible-regressions analysis was applied using nine independent variables for 2011. Finally, two linear regression models were studied in detail. The simple linear regression analysis confirmed the research hypothesis that Internet purchasing in the analyzed EU countries is positively correlated with the statistically significant variable Gross Domestic Product per capita (GDPpc). The multiple linear regression model with four regressors reflecting ICT development level likewise indicates that ICT development is crucial for explaining Internet purchases by individuals, confirming the research hypothesis.
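The simple-regression step can be sketched as follows; the data points are synthetic stand-ins, not the EU figures used in the study:

```python
import numpy as np

def fit_simple_ols(x, y):
    """Ordinary least squares fit of y on x: (slope, intercept, correlation)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    slope, intercept = np.polyfit(x, y, 1)   # degree-1 least-squares fit
    r = np.corrcoef(x, y)[0, 1]              # Pearson correlation
    return slope, intercept, r

# Synthetic country-level data: GDP per capita (thousand EUR) vs share
# of individuals making Internet purchases (%).
gdp_pc = np.array([12, 18, 25, 31, 38, 46])
purchases = np.array([20, 28, 35, 45, 52, 60])
slope, intercept, r = fit_simple_ols(gdp_pc, purchases)
```

A positive slope with a correlation near 1 is the shape of evidence the hypothesis test in the paper relies on.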
Abstract: A variety of new technology-based services have
emerged with the development of Information and Communication
Technologies (ICTs). Since technology-based services have technology-driven characteristics, the identification of relationships
between technology-based services and ICTs would give meaningful implications. Thus, this paper proposes an approach for identifying the
relationships between technology-based services and ICTs by
analyzing patent documents. First, business model (BM) patents are
classified into relevant service categories. Second, patent citation
analysis is conducted to investigate the technological linkage and
impacts between technology-based services and ICTs at the macro level.
Third, as a micro-level analysis, patent co-classification analysis is
employed to identify the technological linkage and coverage. The
proposed approach could guide and help managers and designers of
technology-based services to discover opportunities for developing new
technology-based services in emerging service sectors.
Abstract: Cryo-electron microscopy (CEM) in combination with
single particle analysis (SPA) is a widely used technique for
elucidating structural details of macromolecular assemblies at
close-to-atomic resolutions. However, development of automated software
for SPA processing is still vital since thousands to millions of
individual particle images need to be processed. Here, we present our
workflow for automated particle picking. Our approach adds peak-shape
analysis to the classical correlation and uses an iterative
classification approach to separate macromolecules from the
background. This particle selection workflow furthermore provides
a robust means for SPA with little user interaction. The performance
of the presented tools is assessed by processing simulated and
experimental data.
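A hedged sketch of the classical correlation stage the workflow extends (synthetic micrograph and template; the peak-shape analysis and iterative classification are not reproduced):

```python
import numpy as np

def correlation_map(image, template):
    """Mean-removed correlation score of a template at every valid position."""
    th, tw = template.shape
    t = template - template.mean()
    H = image.shape[0] - th + 1
    W = image.shape[1] - tw + 1
    scores = np.empty((H, W))
    for i in range(H):
        for j in range(W):
            patch = image[i:i + th, j:j + tw]
            scores[i, j] = np.sum((patch - patch.mean()) * t)
    return scores

# Synthetic noisy micrograph with one planted "particle".
rng = np.random.default_rng(2)
template = np.zeros((9, 9))
template[2:7, 2:7] = 1.0                 # blocky particle template
image = rng.normal(0.0, 0.1, (64, 64))
image[20:29, 40:49] += template          # plant the particle at (20, 40)
scores = correlation_map(image, template)
peak = np.unravel_index(np.argmax(scores), scores.shape)
```

The correlation peak recovers the planted particle position; peak-shape analysis would then be used to reject spurious maxima from background.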
Abstract: Efficient storage, transmission and use of video information are key requirements in many multimedia applications currently being addressed by MPEG-4. To fulfill these requirements, a new approach for representing video information, which relies on an object-based representation, has been adopted. Object-based watermarking schemes are therefore needed for copyright protection. This paper proposes a novel blind object watermarking scheme for images and video using the in-place lifting shape-adaptive discrete wavelet transform (SA-DWT). To make the watermark robust and transparent, it is embedded in the average of wavelet blocks using a visual model based on the human visual system. The n least significant bits (LSBs) of the wavelet coefficients are adjusted in concert with the average. Simulation results show that the proposed watermarking scheme is perceptually invisible and robust against many attacks such as lossy image/video compression (e.g. JPEG, JPEG2000 and MPEG-4), scaling, noise addition and filtering.
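A minimal sketch of the LSB-adjustment idea on plain integer coefficients (illustrative only; the scheme described embeds in block averages of SA-DWT coefficients under a human-visual-system model, which is not reproduced here):

```python
import numpy as np

def embed_lsb(coeffs, bits):
    """Set the LSB of each integer coefficient to a watermark bit."""
    c = np.asarray(coeffs, dtype=np.int64).copy()
    b = np.asarray(bits, dtype=np.int64)
    return (c & ~1) | b                  # clear LSB, then write the bit

def extract_lsb(coeffs):
    """Read the watermark bits back from the LSBs."""
    return np.asarray(coeffs) & 1

coeffs = np.array([14, 7, 22, 9, 31, 4])     # illustrative coefficients
bits = np.array([1, 0, 1, 1, 0, 0])          # watermark payload
marked = embed_lsb(coeffs, bits)
recovered = extract_lsb(marked)
```

Each coefficient moves by at most 1, which is why LSB-level adjustments can stay below the perceptual threshold that the visual model enforces.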