Abstract: In this paper we present a novel approach for human
body configuration based on the silhouette. We address this
problem within a Bayesian framework, using an effective
model-based MCMC (Markov Chain Monte Carlo) method in which the
best configuration is defined as the MAP (maximum a posteriori)
estimate. The model-based MCMC utilizes the human body model to
drive the sampling of the solution space: it converts the
original high-dimensional space into a restricted subspace
constructed from the human model and uses a hybrid sampling
algorithm. We choose an explicit human model and carefully
select the likelihood functions to represent the best
configuration solution. The experiments show that this method
obtains accurate configurations efficiently for different
humans from multiple views.
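The MAP-by-MCMC idea above can be sketched with a plain Metropolis-Hastings random walk; the silhouette likelihood and body-model prior are replaced here by a toy Gaussian log-posterior, and all names and values are illustrative, not the paper's:

```python
import math
import random

def metropolis_map(log_posterior, x0, steps=5000, scale=0.1, seed=0):
    """Generic Metropolis-Hastings search; returns the best (MAP) sample seen."""
    rng = random.Random(seed)
    x, lp = list(x0), log_posterior(x0)
    best_x, best_lp = list(x), lp
    for _ in range(steps):
        cand = [xi + rng.gauss(0.0, scale) for xi in x]   # random-walk proposal
        lp_c = log_posterior(cand)
        if math.log(rng.random() + 1e-300) < lp_c - lp:   # accept/reject
            x, lp = cand, lp_c
            if lp > best_lp:
                best_x, best_lp = list(x), lp
    return best_x, best_lp

# Toy posterior: Gaussian centred on a "true" pose vector (a stand-in for a
# silhouette likelihood times a body-model prior).
true_pose = [0.5, -1.0, 2.0]
def log_posterior(p):
    return -sum((a - b) ** 2 for a, b in zip(p, true_pose))

pose, score = metropolis_map(log_posterior, [0.0, 0.0, 0.0])
```

A restricted subspace, as in the paper, would simply replace the free parameter vector with the model-constrained parameterization.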
Abstract: This paper presents an advance in the monitoring and
process control of surface roughness on CNC machines for the turning
and milling processes. An integration of the in-process monitoring
and process control of the surface roughness is proposed and
developed during the machining process by using the cutting force
ratio. The author's previously developed surface roughness models
for the turning and milling processes are adopted to predict the
in-process surface roughness from the cutting speed, the
feed rate, the tool nose radius, the depth of cut, the rake angle, and
the cutting force ratio. The cutting force ratios obtained from the
turning and the milling are utilized to estimate the in-process surface
roughness. The dynamometers are installed on the tool turret of CNC
turning machine and the table of 5-axis machining center to monitor
the cutting forces. In-process control of the surface roughness
has been developed to keep the predicted surface roughness under
control. The cutting tests have proved that the proposed
integration of in-process monitoring and process control can be
used to check the surface roughness during cutting by utilizing
the cutting force ratio.
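The abstract names the model inputs but not the model's functional form; a common choice in the surface-roughness literature is a power-law regression, sketched below with invented coefficients purely for illustration:

```python
import math

# Hypothetical power-law roughness model (all coefficients invented):
# Ra = C * V^a * f^b * r^c * d^e * exp(k*rake) * Fratio^g
def predicted_ra(speed, feed, nose_radius, depth, rake_deg, force_ratio,
                 C=135.0, a=-0.30, b=0.80, c=-0.45, e=0.10, k=-0.01, g=0.25):
    return (C * speed**a * feed**b * nose_radius**c * depth**e
            * math.exp(k * rake_deg) * force_ratio**g)

# One prediction from in-process measurements (values illustrative).
ra = predicted_ra(speed=180.0, feed=0.15, nose_radius=0.8,
                  depth=1.0, rake_deg=6.0, force_ratio=0.6)
```

In-process control would compare such a prediction against the target roughness each cycle and, for example, reduce the feed rate when the prediction exceeds the target.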
Abstract: In this paper, a novel approach for the multidisciplinary design optimization (MDO) of complex mechatronic systems is presented. The approach is part of a global project aiming to include the MDO aspect in an innovative design process. As a first step, the paper treats MDO as a redesign approach limited to parametric optimization. After defining and introducing the relevant keywords, we describe the proposed method, which is based on the V-Model commonly used in mechatronics.
Abstract: One of the main advantages of the LO paradigm is to
allow the availability of good quality, shareable learning material
through the Web. The effectiveness of the retrieval process requires a
formal description of the resources (metadata) that closely fits the
user's search criteria; in spite of the huge international efforts in this
field, educational metadata schemata often fail to fulfil this
requirement. This work aims to improve the situation, by the
definition of a metadata model capturing specific didactic features of
shareable learning resources. It classifies LOs into “teacher-oriented”
and “student-oriented” categories, in order to describe the role a LO
is to play when it is integrated into the educational process. This
article describes the model and a first experimental validation process
that has been carried out in a controlled environment.
Abstract: The problem of robust stability and robust stabilization for a class of discrete-time uncertain systems with time delay is investigated. Based on the Tchebychev inequality, and by constructing a new augmented Lyapunov function, improved sufficient conditions ensuring exponential stability and stabilization are established. These conditions are expressed as linear matrix inequalities (LMIs), whose feasibility can be easily checked using the Matlab LMI Toolbox. Compared with previous results derived in the literature, the new criteria are less conservative. Two numerical examples are provided to demonstrate the improvement and effectiveness of the proposed method.
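The paper's delay-dependent LMI conditions are not reproduced here; as a simplified, delay-free stand-in for the Lyapunov machinery, exponential stability of a nominal discrete-time system x(k+1) = A x(k) can be checked by solving the discrete Lyapunov equation AᵀPA − P = −Q for P ≻ 0 (the system matrix below is illustrative):

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

# Nominal delay-free system; stable iff some P > 0 solves A^T P A - P = -Q
# for a chosen Q > 0 (here Q = I).
A = np.array([[0.5, 0.1],
              [0.0, 0.4]])

# solve_discrete_lyapunov(a, q) solves a X a^T - X + q = 0, so pass a = A^T
# to obtain A^T P A - P + I = 0.
P = solve_discrete_lyapunov(A.T, np.eye(2))
stable = bool(np.all(np.linalg.eigvalsh(P) > 0))
```

Handling uncertainty and delay, as in the paper, replaces this single equation with a feasibility problem over a family of LMIs.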
Abstract: In this paper, we propose novel algorithmic models
based on information fusion and feature transformation in a
cross-modal subspace for different types of residue features
extracted from several intra-frame and inter-frame pixel
sub-blocks in video sequences, for detecting digital video
tampering or forgery. An
evaluation of proposed residue features – the noise residue features
and the quantization features, their transformation in cross-modal
subspace, and their multimodal fusion, for emulated copy-move
tamper scenario shows a significant improvement in tamper detection
accuracy as compared to single mode features without transformation
in cross-modal subspace.
Abstract: In single trial analysis, when using Principal
Component Analysis (PCA) to extract Visual Evoked Potential
(VEP) signals, the selection of principal components (PCs) is an
important issue. We propose a new method here that selects only
the appropriate PCs. We denote the method as selective eigen-rate
(SER). In the method, the VEP is reconstructed based on the rate
of the eigenvalues of the PCs. When this technique is applied to
emulated VEP signals with added background
electroencephalogram (EEG), with a focus on extracting the
evoked P3 parameter, it is found to be feasible. The improvement
in signal-to-noise ratio (SNR) is superior to that of two other existing
methods of PC selection: Kaiser (KSR) and Residual Power (RP).
Though another PC selection method, Spectral Power Ratio (SPR),
gives a comparable SNR at high noise levels (i.e. strong EEG), SER
gives more impressive results in such cases. Next, we applied the
SER method to real VEP signals to analyse the P3 responses for
matched and non-matched stimuli. The P3 parameters extracted
through our proposed SER method showed higher P3 response for
matched stimulus, which conforms to the existing neuroscience
knowledge. Single trial PCA using KSR and RP methods failed to
indicate any difference for the stimuli.
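The exact SER criterion is not given in the abstract; the sketch below keeps only PCs whose eigenvalue share ("eigen-rate") exceeds a threshold and reconstructs the trials, on an emulated P3-plus-EEG signal. The threshold, signal shape, and noise model are all illustrative assumptions:

```python
import numpy as np

def ser_reconstruct(trials, rate_threshold=0.1):
    """Keep PCs whose eigenvalue share exceeds a threshold, then
    reconstruct -- a simplified stand-in for the SER idea."""
    cov = trials.T @ trials / len(trials)
    evals, evecs = np.linalg.eigh(cov)
    rates = evals / evals.sum()             # eigen-rate of each PC
    W = evecs[:, rates > rate_threshold]    # selective eigen-rate
    return (trials @ W) @ W.T

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 128)
p3 = np.exp(-((t - 0.45) ** 2) / 0.005)          # P3-like evoked bump
trials = p3 + 0.5 * rng.normal(size=(40, 128))   # emulated VEP + EEG noise
clean = ser_reconstruct(trials)
```

On this toy data the evoked component dominates one eigenvalue, so the reconstruction suppresses most of the background-EEG-like noise.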
Abstract: The P-Bigram method is a string comparison method
based on an internal two-character similarity measure. The edit
distance between two strings is the minimal number of elementary
editing operations required to transform one string into the
other; the elementary operations are deletion, insertion, and
substitution of characters. In this paper, we apply the P-Bigram
method to solve the similarity problem for DNA sequences. The
method provides an efficient algorithm that locates the minimal
sequence of operations between two strings. We implemented the
algorithm and found that it computes smaller distances than the
single-character approach. We develop the P-Bigram edit distance,
relate it to string similarity, and implement it using dynamic
programming. The performance of the proposed approach is
evaluated using the number of edits and a percentage similarity
measure.
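The single-character edit distance that the abstract builds on is the standard dynamic program below; the P-Bigram variant over character pairs is the paper's contribution and is not reproduced here:

```python
def edit_distance(a, b):
    """Classic DP edit distance with unit-cost insert/delete/substitute."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i                               # delete all of a[:i]
    for j in range(n + 1):
        d[0][j] = j                               # insert all of b[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution / match
    return d[m][n]

# Toy DNA comparison plus the percentage-similarity measure from the abstract.
dist = edit_distance("ACGTAC", "AGTTC")
similarity = 1 - dist / max(len("ACGTAC"), len("AGTTC"))
```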
Abstract: In this paper we propose a family of algorithms based
on 3rd and 4th order cumulants for blind single-input single-output
(SISO) Non-Minimum Phase (NMP) Finite Impulse Response (FIR)
channel estimation driven by non-Gaussian signal. The input signal
represents the signal used in 10GBASE-T (or IEEE 802.3an-2006)
as a Tomlinson-Harashima Precoded (THP) version of random
Pulse-Amplitude Modulation with 16 discrete levels (PAM-16). The
proposed algorithms are tested using three non-minimum phase
channels, for different Signal-to-Noise Ratios (SNR) and different
input data lengths. Numerical simulation results are presented to
illustrate the performance of the proposed algorithms.
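The sample fourth-order cumulant that such blind estimators rely on can be sketched as follows; the key property is that cumulants of order above two vanish for Gaussian noise but not for PAM-like inputs (the toy sequence below is illustrative, not 10GBASE-T data):

```python
import numpy as np

def c4(x):
    """Fourth-order cumulant of a zero-mean real sequence:
    c4 = E[x^4] - 3 (E[x^2])^2."""
    x = np.asarray(x, dtype=float)
    return np.mean(x**4) - 3.0 * np.mean(x**2) ** 2

# Symmetric PAM-like levels are sub-Gaussian, so c4 < 0, which is what lets
# cumulant-based methods separate the non-Gaussian signal from Gaussian
# channel noise.
pam = np.array([1.0, -1.0, 3.0, -3.0] * 250)
c4_pam = c4(pam)
```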
Abstract: Flash memory has many advantages, such as low power consumption, strong shock resistance, fast I/O, and non-volatility, and it is increasingly used in mobile storage devices. YAFFS, one of the NAND flash file systems, is widely used in embedded devices. However, the existing YAFFS takes a long time to mount the file system because it scans the whole spare area in every page of the NAND flash memory. In order to solve this problem, we propose a new content-based flash file system using a mounting-time reduction technique. The proposed method only scans partial spare areas of some special pages by using content-based block management. The experimental results show that the proposed method reduces the average mounting time by 87.2% compared with JFFS2 and 69.9% compared with YAFFS.
Abstract: Recent years have seen a growing trend towards the
integration of multiple information sources to support large-scale
prediction of protein-protein interaction (PPI) networks in model
organisms. Despite advances in computational approaches, the
combination of multiple “omic” datasets representing the same type
of data, e.g. different gene expression datasets, has not been
rigorously studied. Furthermore, there is a need to further investigate
the inference capability of powerful approaches, such as fully-connected
Bayesian networks, in the context of the prediction of PPI
networks. This paper addresses these limitations by proposing a
Bayesian approach to integrate multiple datasets, some of which
encode the same type of “omic” data to support the identification of
PPI networks. The case study reported involved the combination of
three gene expression datasets relevant to human heart failure (HF).
In comparison with two traditional methods, Naive Bayesian and
maximum likelihood ratio approaches, the proposed technique can
accurately identify known PPIs and can be applied to infer potentially
novel interactions.
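One of the baselines named above, the naive Bayesian approach, combines per-dataset evidence under a conditional-independence assumption; a minimal sketch with invented prior and likelihood ratios:

```python
import math

def naive_bayes_posterior(prior, likelihood_ratios):
    """Combine per-dataset likelihood ratios P(e_i | PPI) / P(e_i | no PPI)
    in log-odds space under the naive (independence) assumption."""
    log_odds = math.log(prior / (1.0 - prior))
    log_odds += sum(math.log(lr) for lr in likelihood_ratios)
    return 1.0 / (1.0 + math.exp(-log_odds))

# Hypothetical evidence for one candidate protein pair: three expression
# datasets plus one other "omic" source, each summarised as a likelihood
# ratio (all numbers invented).
p = naive_bayes_posterior(prior=0.01, likelihood_ratios=[8.0, 5.0, 6.0, 3.0])
```

The paper's point is precisely that treating datasets of the same "omic" type as independent, as this sketch does, is questionable; its Bayesian integration relaxes that assumption.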
Abstract: This paper presents an algorithm that
combines ant colony optimization with dynamic
programming for solving a dynamic facility layout problem.
The problem is separated into two phases, a static and a
dynamic phase. In the static phase, ant colony optimization
is used to find the best-ranked layouts for each period. Then
the dynamic programming (DP) procedure is performed in the
dynamic phase to evaluate the layout set over the multi-period planning
horizon. The proposed algorithm is tested over many
problems with size ranging from 9 to 49 departments, 2 and 4
periods. The experimental results show that the proposed
method provides an alternative way for the plant layout designer to
determine the layouts over a multi-period planning horizon.
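The dynamic-phase DP over candidate layouts can be sketched as follows; the flow and rearrangement costs are invented, and in the paper the per-period candidates would come from ant colony optimization:

```python
def dp_layout_plan(flow_cost, rearrange_cost):
    """flow_cost[t][l]: material-handling cost of layout l in period t;
    rearrange_cost[a][b]: cost of switching layout a -> b between periods.
    Returns the minimum-cost layout sequence over the horizon."""
    T, L = len(flow_cost), len(flow_cost[0])
    best = list(flow_cost[0])                  # best cost ending period 0 in l
    back = [[0] * L for _ in range(T)]
    for t in range(1, T):
        new = []
        for l in range(L):
            costs = [best[k] + rearrange_cost[k][l] for k in range(L)]
            k = min(range(L), key=costs.__getitem__)
            back[t][l] = k
            new.append(costs[k] + flow_cost[t][l])
        best = new
    # Backtrack the optimal layout sequence.
    l = min(range(L), key=best.__getitem__)
    plan = [l]
    for t in range(T - 1, 0, -1):
        l = back[t][l]
        plan.append(l)
    return plan[::-1], min(best)

# Toy instance: 3 candidate layouts per period, 3 periods (costs invented).
flow = [[10, 12, 15], [14, 9, 13], [11, 16, 8]]
rearr = [[0, 4, 6], [4, 0, 5], [6, 5, 0]]
plan, total = dp_layout_plan(flow, rearr)
```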
Abstract: Since 1984 many schemes have been proposed for
digital signature protocols, among them those based on the
discrete logarithm and on factorization. Recently, a new
identification scheme based on iterated function systems (IFS)
was proposed and proved to be more efficient. In this study the
proposed identification scheme is transformed into a digital
signature scheme by using a one-way hash function. It is a
generalization of the GQ signature schemes. The
attractor of the IFS is used to obtain the public key from a private one,
and in the encryption and decryption of a hash function. Our aim is
to provide techniques and tools which may be useful towards
developing cryptographic protocols. Comparisons between the
proposed scheme and a fractal digital signature scheme based on the
RSA setting, as well as the conventional Guillou-Quisquater
signature and RSA signature schemes, are performed to show that the
proposed scheme is efficient and offers high performance.
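The signature scheme itself is not specified in the abstract, but the IFS attractor it relies on can be illustrated with the standard chaos game for the Sierpinski-triangle IFS; the maps below are the textbook ones, not the paper's key-generating system:

```python
import random

# Sierpinski-triangle IFS: three contractions toward the triangle's vertices.
VERTS = [(0.0, 0.0), (1.0, 0.0), (0.5, 1.0)]

def ifs_attractor(n_points=2000, seed=42):
    """Approximate the IFS attractor with the chaos game: repeatedly apply
    a randomly chosen contraction w_i(p) = (p + v_i) / 2."""
    rng = random.Random(seed)
    x, y = 0.25, 0.25
    pts = []
    for _ in range(n_points):
        vx, vy = rng.choice(VERTS)
        x, y = (x + vx) / 2.0, (y + vy) / 2.0
        pts.append((x, y))
    return pts

points = ifs_attractor()
```

In the scheme described above, the private key would parameterize the contraction maps and the resulting attractor would serve as public data.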
Abstract: The correct design of a regulator's structure requires complete prediction of the ultimate dimensions of the scour hole profile formed downstream of the solid apron. Scour downstream of regulators is studied either on solid aprons, by means of the velocity distribution, or on movable beds, by studying the topography of the scour hole formed downstream. In this paper, a new technique was developed to study the scour hole downstream of regulators on movable beds. The study was divided into two categories: the first finds the sum of the length of the rigid apron behind the gates and the length of the scour hole formed downstream of it, while the second finds the minimum length of rigid apron behind the gates that prevents erosion downstream of it. The study covers free and submerged hydraulic jump conditions in both symmetrical and asymmetrical under-gated operations. From the comparison between the two categories, we found that the minimum length of rigid apron that prevents scour (Ls) is greater than the sum of the length of the rigid apron and that of the scour hole formed behind it (L+Xs). On the other hand, the scour hole dimensions in the case of a submerged hydraulic jump are always greater than in the free case, and the scour hole dimensions in asymmetrical operation are greater than in symmetrical operation.
Abstract: In this paper we propose a computational model for the representation and processing of morpho-phonological phenomena in a natural language, such as Modern Greek. We aim at a unified treatment of inflection, compounding, and word-internal phonological changes, in a model that is used for both analysis and generation. After discussing certain difficulties caused by well-known finite-state approaches, such as Koskenniemi's two-level model [7], when applied to a computational treatment of compounding, we argue that a morphology-based model provides a more adequate account of word-internal phenomena. Contrary to the finite-state approaches, which cannot handle hierarchical word constituency in a satisfactory way, we propose a unification-based word grammar as the nucleus of our strategy, which takes into consideration word representations based on affixation and on [stem stem] or [stem word] compounds. In our formalism, feature-passing operations are formulated with the use of the unification device, and phonological rules modeling the correspondence between lexical and surface forms apply at morpheme boundaries. Examples from Modern Greek illustrate our approach: morpheme structures, stress, and morphologically conditioned phoneme changes are analyzed and generated in a principled way.
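The unification device used for feature passing can be sketched as recursive unification of feature structures (nested dictionaries); the feature structures below are hypothetical illustrations, not the paper's grammar:

```python
def unify(fs1, fs2):
    """Unify two feature structures (nested dicts); return None on a
    feature clash -- a minimal stand-in for the unification device."""
    if isinstance(fs1, dict) and isinstance(fs2, dict):
        out = dict(fs1)
        for key, val in fs2.items():
            if key in out:
                merged = unify(out[key], val)
                if merged is None:
                    return None            # incompatible values: clash
                out[key] = merged
            else:
                out[key] = val
        return out
    return fs1 if fs1 == fs2 else None     # atomic values must match

# Hypothetical feature passing in a [stem stem] compound: compatible
# structures merge, incompatible agreement features fail to unify.
merged = unify({"cat": "N", "agr": {"gender": "neut"}},
               {"agr": {"gender": "neut", "number": "sg"}})
clash = unify({"agr": {"gender": "fem"}}, {"agr": {"gender": "neut"}})
```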
Abstract: Mathematical programming has been applied to various
problems. For many actual problems, the assumption that the parameters
involved are deterministically known is often unjustified. In
such cases, these data contain uncertainty and are thus represented
as random variables, since they represent information about the
future. Decision-making under uncertainty involves potential risk.
Stochastic programming is a commonly used method for optimization
under uncertainty. A stochastic programming problem with recourse
is referred to as a two-stage stochastic problem. In this study, we
consider a stochastic programming problem with simple integer
recourse in which the value of the recourse variable is restricted to a
multiple of a nonnegative integer. The algorithm of a dynamic slope
scaling procedure for solving this problem is developed by using a
property of the expected recourse function. Numerical experiments
demonstrate that the proposed algorithm is quite efficient. The
stochastic programming model defined in this paper is quite useful
for a variety of design and operational problems.
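A minimal sketch of a two-stage problem with simple integer recourse, in which the second-stage shortfall must be covered in whole batches of a fixed unit size; all data and costs are invented, and the brute-force search stands in for the paper's dynamic slope scaling procedure:

```python
import math

def expected_integer_recourse(x, scenarios, probs, unit=1.0, penalty=3.0):
    """E[ penalty * ceil(max(0, h - x) / unit) ]: the random demand h minus
    the first-stage decision x must be covered in whole batches."""
    exp = 0.0
    for h, p in zip(scenarios, probs):
        batches = math.ceil(max(0.0, h - x) / unit)
        exp += p * penalty * batches
    return exp

def total_cost(x, c=1.0):
    # First-stage cost plus expected second-stage (recourse) cost over
    # three discrete demand scenarios (values invented).
    return c * x + expected_integer_recourse(x, [4.0, 6.0, 9.0],
                                             [0.5, 0.3, 0.2])

best_x = min(range(0, 11), key=total_cost)
```

The staircase shape of the expected recourse function (from the ceiling) is the property that a slope scaling procedure exploits.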
Abstract: In this paper, we discuss the paradigm shift in bank
capital from the “gone concern” to the “going concern” mindset. We
then propose a methodology for pricing a product of this shift called
Contingent Capital Notes (“CoCos"). The Merton Model can
determine a price for credit risk by using the firm-s equity value as a
call option on those assets. Our pricing methodology for CoCos also
uses the credit spread implied by the Merton Model in a subsequent
derivative form created by John Hull et al. Here, a market-implied
asset volatility is calculated by using observed market CDS spreads.
This implied asset volatility is then used to estimate the probability of
triggering a predetermined “contingency event” given the distance-
to-trigger (DTT). The paper then investigates the effect of varying
DTTs and recovery assumptions on the CoCo yield. We conclude
with an investment rationale.
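The Merton step can be sketched with a Black-Scholes call (equity as a call on assets) plus a normal-CDF trigger probability; treating the DTT as a number of asset-volatility standard deviations is an assumption here, and all inputs are illustrative:

```python
import math

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(S, K, r, sigma, T):
    """Black-Scholes call; in the Merton model the firm's equity is a call
    on its assets with the debt face value as strike."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

def trigger_probability(dtt):
    """Probability of breaching the contingency trigger, with DTT expressed
    in asset-volatility standard deviations (an illustrative convention)."""
    return norm_cdf(-dtt)

equity = bs_call(S=100.0, K=100.0, r=0.0, sigma=0.2, T=1.0)
p_trigger = trigger_probability(dtt=2.0)
```

In the methodology above, the asset volatility entering these formulas would be implied from observed CDS spreads rather than assumed.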
Abstract: The application of high-frequency signal injection as a speed and position observer in PMSM drives has been a research focus. At present, the precision of this method is nearly as good as that of a ten-bit encoder, but open questions remain in estimating the position polarity. Based on high-frequency signal injection, this paper presents a method to compensate the position polarity for a permanent magnet synchronous motor (PMSM). Experiments were performed to test the effectiveness of the proposed algorithm, and the results demonstrate good performance.
Abstract: To investigate the correspondence of theory and
practice, a successfully implemented Knowledge Management
System (KMS) is explored through the lens of Alavi and Leidner's
proposed KMS framework for the analysis of an information system
in knowledge management (Framework-AISKM). The applied KMS
was designed to manage curricular knowledge in a distributed
university environment. The motivation for the KMS is discussed
along with the types of knowledge necessary in an academic setting.
Elements of the KMS involved in all phases of capturing and
disseminating knowledge are described. As the KMS matures, the
resulting data stores form the precursor to, and the potential
for, knowledge mining. The findings from this exploratory study indicate
substantial correspondence between the successful KMS and the
theory-based framework, providing provisional confirmation for the
framework while suggesting factors that contributed to the system's
success. Avenues for future work are described.
Abstract: In a Knowledge Structure Graph, each course unit
represents a phase of learning activities. Both learning portfolios and
Knowledge Structure Graphs contain learning information about students
and let teachers know which content is difficult and where students
fail. This study proposes a "Dual Mode On-line Learning Diagnosis
System" that integrates two search methods: learning portfolio and
knowledge structure. Teachers can operate the proposed system and
obtain information on specific students without any computer science
background. Teachers can thus identify struggling students in advance
and provide remedial learning resources.