Abstract: Surface flattening plays a vital role in computer-aided design and manufacturing. It enables the production of 2D patterns by developing a 3D surface onto a 2D plane, which is especially useful in fashion design. This study describes surface flattening based on minimum-energy methods that account for the properties of different fabrics. First, using the geometric features of the 3D surface, the least-deformed area is flattened onto the 2D plane along geodesics. Then, the strain energy accumulated in the mesh is stably released by an approximate implicit method with a revised error function. In some cases, cutting the mesh is a common way to further release the energy and enhance the accuracy of the flattening, which makes the obtained 2D pattern naturally exhibit significant cracks. Applied to a 3D mannequin constructed with feature lines, this methodology raises the level of computer-aided fashion design. Moreover, when different fabrics are used in a design, the shape of the 2D pattern must be revised according to the properties of the fabric; with this model, the outline of the 2D pattern can be revised by redistributing the strain energy, yielding different results for different fabric properties. Finally, several common design cases are used to illustrate and verify the feasibility of the methodology.
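The strain-energy-release idea in the abstract above can be sketched with a minimal spring model. This is an illustrative stand-in only: it uses explicit gradient descent on edge-spring energy rather than the approximate implicit method and revised error function of the study, and the names (`release_strain_energy`, `strain_energy`) and parameters are hypothetical.

```python
import numpy as np

def strain_energy(v, edges, rest_len):
    """Total spring energy: 0.5 * (current length - rest length)^2 per edge."""
    return 0.5 * sum((np.linalg.norm(v[i] - v[j]) - L) ** 2
                     for (i, j), L in zip(edges, rest_len))

def release_strain_energy(verts2d, edges, rest_len, step=0.1, iters=200):
    """Iteratively move 2D vertices to reduce total spring strain energy.

    verts2d  : (n, 2) initial planar positions of the flattened mesh
    edges    : list of (i, j) vertex index pairs
    rest_len : per-edge rest lengths, as measured on the 3D surface
    """
    v = verts2d.astype(float).copy()
    for _ in range(iters):
        grad = np.zeros_like(v)
        for (i, j), L in zip(edges, rest_len):
            d = v[i] - v[j]
            cur = np.linalg.norm(d)
            if cur < 1e-12:
                continue
            # gradient of 0.5 * (cur - L)^2 with respect to both endpoints
            g = (cur - L) * d / cur
            grad[i] += g
            grad[j] -= g
        v -= step * grad
    return v
```

As a usage example, a distorted square with unit rest lengths relaxes toward the unit square, and its strain energy decreases accordingly.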
Abstract: It is known that incorporating prior knowledge into support
vector regression (SVR) can improve its approximation performance.
Most research has been concerned with incorporating knowledge in the
form of numerical relationships. Little work, however, has been done
on incorporating prior knowledge about the structural relationships
among the variables (referred to as Structural Prior Knowledge,
SPK). This paper explores the incorporation of SPK into SVR by
constructing appropriate admissible support vector kernels (SV
kernels) based on the properties of reproducing kernels (RKs).
Three levels of SPK specification are studied, together with the
corresponding sub-levels of prior knowledge that the method can
accommodate: Hierarchical SPK (HSPK); Interactional SPK (ISPK),
consisting of independence, global interaction, and local interaction;
and Functional SPK (FSPK), composed of exterior-FSPK and
interior-FSPK. A convenient tool for describing SPK, the Description
Matrix of SPK, is introduced. Subsequently, a new SVR, termed
Motivated Support Vector Regression (MSVR), whose structure is
motivated in part by the SPK, is proposed. Synthetic examples show
that a wide variety of SPK can be incorporated and that doing so
improves the approximation performance in complex cases. The benefits
of MSVR are finally demonstrated on a real-life military application,
air-to-ground battle simulation, which shows the great potential of
MSVR for complex military applications.
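The general mechanism of encoding structural knowledge in an admissible kernel can be sketched as an additive kernel driven by a binary description matrix: each row selects one group of interacting variables, and the kernel is a sum of RBF terms over those groups (an independent variable gets a row of its own). This is a hedged illustration of the idea, not the paper's MSVR construction; `structured_rbf_kernel` and the group encoding are assumptions.

```python
import numpy as np

def structured_rbf_kernel(D, gamma=1.0):
    """Build k(x, z) = sum_g exp(-gamma * ||x_g - z_g||^2), where each row of
    the binary description matrix D selects one group of interacting
    variables. A sum of RBF kernels is again positive semi-definite,
    hence an admissible SV kernel."""
    D = np.asarray(D, dtype=bool)

    def k(x, z):
        x, z = np.asarray(x, float), np.asarray(z, float)
        total = 0.0
        for g in D:                      # one RBF term per variable group
            d = x[g] - z[g]
            total += np.exp(-gamma * np.dot(d, d))
        return total

    return k
```

For example, `D = [[1, 1, 0], [0, 0, 1]]` states that the first two variables interact while the third enters independently, so perturbing the third variable leaves the first kernel term untouched.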
Abstract: Nanostructured iron oxides with rod-like and granular
morphologies have been successfully prepared via a solid-state
reaction in the presence of NaCl, NaBr, NaI, and NaN3, respectively.
The added salts not only prevent a drastic increase in the size of the
products but also provide suitable conditions for the oriented growth
of the primary nanoparticles. Formation mechanisms for these materials
by solid-state reaction at ambient temperature are proposed. The
photocatalytic experiments on congo red (CR) have demonstrated that
the mixture of α-Fe2O3 and Fe3O4 nanostructures was more efficient
than α-Fe2O3 nanostructures alone.
Abstract: This paper proposes an improved integer frequency offset
(IFO) estimation scheme using the P1 symbol for the orthogonal
frequency division multiplexing (OFDM) based second-generation
terrestrial digital video broadcasting (DVB-T2) system. The proposed
IFO estimator is a low-complexity blind estimation scheme implemented
with complex additions only. We also propose an active carrier (AC)
selection scheme to prevent performance degradation in blind IFO
estimation. Simulation results show that, under AWGN and TU6
channels, the proposed method has lower complexity than the
conventional method while achieving nearly the same performance.
Abstract: In this study, a thermodynamic performance analysis of a
combined organic Rankine cycle and ejector refrigeration cycle is
carried out for the use of a low-grade heat source in the form of
sensible energy. Special attention is paid to the effects of system
parameters, including the turbine inlet temperature and turbine inlet
pressure, on the characteristics of the system, such as the mass flow
rate ratios, net work production, and refrigeration capacity, as well
as the coefficient of performance and exergy efficiency of the system.
Results show that, for a given source, the coefficient of performance
increases with the turbine inlet pressure, whereas the exergy
efficiency has an optimum with respect to the turbine inlet pressure.
Abstract: Camera calibration is an indispensable step for augmented
reality and image-guided applications in which quantitative
information must be derived from the images. Usually, a camera
calibration is obtained by taking images of a special calibration
object and extracting the image coordinates of the projected
calibration marks, enabling calculation of the projection from 3D
world coordinates to 2D image coordinates. Such a procedure thus
comprises typical steps, including feature point localization in the
acquired images, camera model fitting, correction of the distortion
introduced by the optics, and finally optimization of the model's
parameters. In this paper, we propose to extend this list by a
further step: identifying the optimal subset of images that yields
the smallest overall calibration error. To this end, we present a
Monte Carlo based algorithm, along with a deterministic extension,
that automatically determines the images yielding an optimal
calibration. Finally, we present results demonstrating that the
calibration can be significantly improved by automated image
selection.
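The Monte Carlo selection step can be sketched as below, with `calib_error` standing in for a full recalibrate-and-score routine (in a real pipeline it would refit the camera model on the subset and return, e.g., the mean reprojection error); the function name and parameters are assumptions for illustration.

```python
import random

def select_images_monte_carlo(n_images, calib_error, subset_size,
                              trials=500, seed=0):
    """Randomly sample image subsets and keep the one for which the
    calibration error reported by calib_error(subset) is smallest."""
    rng = random.Random(seed)
    best_subset, best_err = None, float("inf")
    for _ in range(trials):
        subset = tuple(sorted(rng.sample(range(n_images), subset_size)))
        err = calib_error(subset)  # e.g. mean reprojection error after refit
        if err < best_err:
            best_subset, best_err = subset, err
    return best_subset, best_err
```

With a toy error model in which two images are degraded (say, blurred) and the subset error is the mean per-image error, the search reliably avoids the degraded images.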
Abstract: Evolutionary Algorithms are population-based,
stochastic search techniques, widely used as efficient global
optimizers. However, many real life optimization problems often
require finding optimal solution to complex high dimensional,
multimodal problems involving computationally very expensive
fitness function evaluations. Use of evolutionary algorithms in such
problem domains is thus practically prohibitive. An attractive
alternative is to build meta models or use an approximation of the
actual fitness functions to be evaluated. These meta-models are orders
of magnitude cheaper to evaluate than the actual function evaluation.
Many regression and interpolation tools are available to
build such meta models. This paper briefly discusses the
architectures and use of such meta-modeling tools in an evolutionary
optimization context. We further present two evolutionary algorithm
frameworks which involve use of meta models for fitness function
evaluation. The first framework, namely the Dynamic Approximate
Fitness based Hybrid EA (DAFHEA) model [14] reduces
computation time by controlled use of meta-models (in this case
approximate model generated by Support Vector Machine
regression) to partially replace the actual function evaluation by
approximate function evaluation. However, the underlying
assumption in DAFHEA is that the training samples for the meta-model
are generated from a single uniform model. This does not take
into account uncertain scenarios involving noisy fitness functions.
The second model, DAFHEA-II, an enhanced version of the original
DAFHEA framework, incorporates a multiple-model based learning
approach for the support vector machine approximator to handle
noisy functions [15]. Empirical results obtained by evaluating the
frameworks on several benchmark functions demonstrate their
efficiency.
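A generic surrogate-assisted EA loop in the spirit of controlled meta-model use can be sketched as follows. This is not DAFHEA itself: the surrogate here is a simple least-squares quadratic model rather than SVM regression, the test function is a stand-in for an expensive fitness, and all names and parameter choices are assumptions.

```python
import numpy as np

def expensive_f(x):
    """Stand-in for a computationally expensive fitness (sphere, min at 0)."""
    return float(np.sum(x ** 2))

def surrogate_ea(dim=2, pop=20, gens=40, screen_keep=5, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(-5, 5, (pop, dim))
    y = np.array([expensive_f(x) for x in X])
    archive_X, archive_y = list(X), list(y)        # all exactly evaluated points
    for _ in range(gens):
        # offspring by Gaussian mutation of randomly chosen parents
        parents = X[rng.integers(0, pop, pop)]
        off = parents + rng.normal(0.0, 0.5, parents.shape)
        # cheap surrogate: y ~ sum_i a_i * x_i^2 + b, fitted to the archive
        A = np.column_stack([np.array(archive_X) ** 2, np.ones(len(archive_X))])
        coef, *_ = np.linalg.lstsq(A, np.array(archive_y), rcond=None)
        pred = np.column_stack([off ** 2, np.ones(len(off))]) @ coef
        # evaluate only the surrogate's most promising offspring exactly
        for i in np.argsort(pred)[:screen_keep]:
            fi = expensive_f(off[i])
            archive_X.append(off[i]); archive_y.append(fi)
            worst = int(np.argmax(y))
            if fi < y[worst]:                      # elitist replacement
                X[worst], y[worst] = off[i], fi
    best = int(np.argmin(y))
    return X[best], float(y[best])
```

The controlled use of the surrogate is the key point: only `screen_keep` of the `pop` offspring per generation incur an exact evaluation, and every exact evaluation also enlarges the surrogate's training archive.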
Abstract: This conference paper discusses a risk allocation problem for subprime investing banks involving investment in subprime structured mortgage products (SMPs) and Treasuries. In order to solve this problem, we develop a Lévy process-based model of jump-diffusion type for investment choice in subprime SMPs and Treasuries. This model incorporates subprime SMP losses for which credit default insurance in the form of credit default swaps (CDSs) can be purchased. In essence, we solve a mean swap-at-risk (SaR) optimization problem for investment which determines the optimal allocation between SMPs and Treasuries subject to credit risk protection via CDSs. In this regard, SaR indicates how much protection investors must purchase from swap protection sellers in order to cover possible losses from SMP default. Here, SaR is defined in terms of value-at-risk (VaR). Finally, we provide an analysis of the aforementioned optimization problem and its connections with the subprime mortgage crisis (SMC).
Abstract: Using mini modules of Tmotes, it is possible to automate a small personal area network. This idea can be extended to large networks as well by implementing multi-hop routing. By linking the various Tmotes using programming languages such as nesC and Java, and having transmitter and receiver sections, a network can be monitored. It is foreseen that, depending on the application, a long range at a low data transfer rate or average throughput may be an acceptable trade-off. To reduce the overall costs involved, the optimum number of Tmotes to be used under various conditions (indoor/outdoor) is to be deduced. By analyzing the data rates or throughputs at various locations of the Tmotes, it is possible to deduce an optimal number of Tmotes for a specific network. This paper deals with the determination of optimum distances to reduce the cost and increase the reliability of the entire sensor network with Wireless Local Loop (WLL) capability.
Abstract: The aim of this paper is to present a new method for the
progressive transmission of electrocardiogram (ECG) signals. The idea
consists of transforming an ECG signal into an image containing one
beat per row. In the first step, the beats are synchronized in order
to reduce the high frequencies due to inter-beat transitions. The
obtained image is then transformed using a discrete version of the
Radon Transform (DRT). Hence, transmitting the ECG amounts to
transmitting the most significant energy of the transformed image in
the Radon domain. For decoding, the receiver needs the inverse Radon
Transform as well as the two synchronization frames.
The presented protocol can be adapted for lossy through lossless
compression systems. In lossy mode, we show that the compression
ratio can be multiplied by an average factor of 2 while retaining
acceptable quality in the reconstructed signal. These results were
obtained on real signals from the MIT database.
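The first step, mapping an ECG to a beat-per-row image, can be sketched as below, assuming the beat onsets are already known (in practice they would come from a QRS detector); the DRT coding stage would then operate on this image. The function name and the fixed-width resampling are illustrative assumptions.

```python
import numpy as np

def ecg_to_image(signal, beat_starts, width):
    """Stack one beat per row, resampling each beat to a common width so
    that corresponding waves line up column-wise, reducing the high
    frequencies caused by inter-beat transitions."""
    rows = []
    bounds = list(beat_starts) + [len(signal)]
    for a, b in zip(bounds[:-1], bounds[1:]):
        beat = np.asarray(signal[a:b], dtype=float)
        t_old = np.linspace(0.0, 1.0, len(beat))
        t_new = np.linspace(0.0, 1.0, width)
        rows.append(np.interp(t_new, t_old, beat))
    return np.vstack(rows)
```

On a perfectly periodic toy "ECG", consecutive rows of the image are identical, which is exactly the redundancy the Radon-domain coding then exploits.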
Abstract: Environmental factors affect agricultural productivity and
efficiency, resulting in changes in profit efficiency. This paper
estimates the impact of environmental factors on the profitability of
rice farmers in the Red River Delta of Vietnam. The dataset was
collected from 349 rice farmers through personal interviews. Both OLS
and MLE trans-log profit functions were used in this study, with five
production inputs and four environmental factors included in these
functions. The stochastic profit frontier was estimated with a
two-stage approach to measure profitability. The results show that
profit efficiency was about 75% on average and that, besides
farm-specific characteristics, environmental factors change profit
efficiency significantly. Plant disease, soil fertility, irrigation
application, and water pollution were the four environmental factors
causing profit loss in rice production. The results indicate that
farmers should reduce household size and the number of farm plots,
apply the row-seeding technique, and improve environmental factors to
obtain high profit efficiency, with special consideration given to
improving irrigation water quality.
Abstract: We present a new method to reconstruct a temporally
coherent 3D animation from single or multi-view RGB-D video data
using unbiased feature point sampling. Given RGB-D video data, in
form of a 3D point cloud sequence, our method first extracts feature
points using both color and depth information. In the subsequent
steps, these feature points are used to match two 3D point clouds in
consecutive frames independently of their resolution. Our new
motion-vector-based dynamic alignment method then fully reconstructs
a spatio-temporally coherent 3D animation. We perform extensive
quantitative validation using novel error functions to analyze the
results. We show that, despite the limiting factors of temporal and
spatial noise associated with RGB-D data, it is possible to exploit
temporal coherence to faithfully reconstruct a temporally coherent
3D animation from RGB-D video data.
Abstract: All practical real-time scheduling algorithms for multiprocessor systems present a trade-off between computational complexity and performance. In real-time systems, tasks have to be performed both correctly and on time. Finding a minimal schedule in multiprocessor systems with real-time constraints is known to be NP-hard. Although some optimal algorithms have been employed in uni-processor systems, they fail when applied to multiprocessor systems. Practical scheduling algorithms for real-time systems do not have deterministic response times, yet deterministic timing behavior is an important parameter for system robustness analysis. The intrinsic uncertainty in dynamic real-time systems increases the difficulty of the scheduling problem. To alleviate these difficulties, we propose a fuzzy scheduling approach to arrange real-time periodic and non-periodic tasks in multiprocessor systems. Static and dynamic optimal scheduling algorithms fail under non-critical overload; in contrast, our approach balances the task loads of the processors successfully while providing starvation prevention and fairness, so that higher-priority tasks have a higher running probability. A simulation is conducted to evaluate the performance of the proposed approach. Experimental results show that the proposed fuzzy scheduler creates feasible schedules for homogeneous and heterogeneous tasks. It also considers task priorities, which yields higher system utilization and lower deadline miss times. According to the results, it performs very close to the optimal schedule of uni-processor systems.
Abstract: This paper presents a novel two-phase hybrid optimization algorithm with hybrid genetic operators to solve the optimal control problem of a single-stage hybrid manufacturing system. The proposed hybrid real-coded genetic algorithm (HRCGA) is developed so that a simple real-coded GA acts as a base-level search, quickly directing the search towards the optimal region, after which a local search method is employed for fine tuning. The hybrid genetic operators involved in the proposed algorithm improve both the quality of the solution and the convergence speed. Phase 1 uses a conventional real-coded genetic algorithm (RCGA), while Phase 2 employs optimization by direct search with systematic reduction of the size of the search region. A typical numerical example of an optimal control problem, with the number of jobs varying from 10 to 50, is included to illustrate the efficacy of the proposed algorithm. Several statistical analyses are carried out to compare the proposed algorithm with the conventional RCGA and PSO techniques. A hypothesis t-test and an analysis of variance (ANOVA) test are also carried out to validate the effectiveness of the proposed algorithm. The results clearly demonstrate that the proposed algorithm not only improves solution quality but also converges to the optimal value faster. It outperforms the conventional real-coded GA (RCGA) and the efficient particle swarm optimization (PSO) algorithm both in the quality of the optimal solution and in convergence to the actual optimum.
Abstract: The C control chart assumes that process nonconformities follow a Poisson distribution. In practice, however, this assumption does not always hold. A process control scheme for semiconductors based on a Poisson distribution then underestimates the true average number of nonconformities and the process variance. Quality is described more accurately if a compound Poisson process is used for process control in this case. A cumulative sum (CUSUM) control chart is much better than a C control chart when a small shift is to be detected. This study calculates one-sided CUSUM average run lengths (ARLs) using a Markov chain approach to construct a CUSUM control chart with an underlying Poisson-Gamma compound distribution for the failure mechanism. Moreover, an actual data set from a wafer plant is used to demonstrate the operation of the proposed model. The results show that the CUSUM control chart achieves significantly better performance than the EWMA chart.
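The Markov chain ARL computation can be sketched for the plain Poisson case; substituting the Poisson-Gamma compound pmf follows the same pattern. Integer-valued reference value k and decision interval h are assumed here so that the CUSUM statistic lives on a finite state space.

```python
import math
import numpy as np

def poisson_pmf(x, lam):
    return math.exp(-lam) * lam ** x / math.factorial(x) if x >= 0 else 0.0

def cusum_arl(lam, k, h):
    """Average run length of the one-sided CUSUM
    S_t = max(0, S_{t-1} + X_t - k), signalling when S_t >= h,
    with X_t ~ Poisson(lam) and integer k, h. The transient states are
    S = 0..h-1; starting from S = 0, ARL = first entry of (I - Q)^(-1) 1."""
    Q = np.zeros((h, h))
    for s in range(h):
        # every count X <= k - s is absorbed back to state 0
        Q[s, 0] = (sum(poisson_pmf(x, lam) for x in range(k - s + 1))
                   if s <= k else 0.0)
        for s2 in range(1, h):
            Q[s, s2] = poisson_pmf(s2 - s + k, lam)
    return float(np.linalg.solve(np.eye(h) - Q, np.ones(h))[0])
```

As expected for an upward chart, the in-control ARL (small mean) is much larger than the out-of-control ARL (shifted mean).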
Abstract: This paper evaluates the performance of an adaptive noise
cancelling (ANC) based target detection algorithm on a set of real test
data provided by the Defence Evaluation and Research Agency (DERA,
UK) for a multi-target wideband active sonar echolocation system. The
hybrid algorithm proposed is a combination of an adaptive ANC
neuro-fuzzy scheme in the first instance, followed by an iterative
optimum target motion estimation (TME) scheme. The neuro-fuzzy
scheme is based on the adaptive noise cancelling concept with the
core processor of ANFIS (adaptive neuro-fuzzy inference system) to
provide an effectively fine-tuned signal. The resultant output is then
sent as input to the optimum TME scheme, composed of two-gauge
trimmed-mean (TM) levelization, discrete wavelet denoising
(WDeN), and an optimal continuous wavelet transform (CWT) for
further denoising and target identification. Its aim is to recover the
contact signals effectively and efficiently and then determine the
Doppler motion (radial range, velocity, and acceleration) at very
low signal-to-noise ratio (SNR). Quantitative results show that
the hybrid algorithm has excellent performance in predicting targets'
Doppler motion across various target strengths, with a maximum
false detection rate of 1.5%.
Abstract: In this paper, an effective approach for segmenting
human skin regions in images taken in different environments is
proposed. The proposed method uses a color distance map that is
flexible enough to reliably detect skin regions even when the
illumination conditions of the image vary. Local image conditions are
also considered, which helps the technique adaptively detect
differently illuminated skin regions of an image. Moreover, the use
of local information also helps the skin detection process avoid
picking up noisy pixels.
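The color-distance-map idea can be sketched as a per-pixel distance to a reference skin chromaticity in normalized-rg space, which discounts the overall illumination level. The reference values and threshold below are illustrative assumptions, not the paper's trained model, and the sketch applies one global threshold where the proposed method adapts to local image conditions.

```python
import numpy as np

# Illustrative reference skin chromaticity in normalized-rg space;
# these values are assumptions for the sketch, not measured constants.
SKIN_RG = np.array([0.45, 0.31])

def skin_distance_map(img):
    """Per-pixel Euclidean distance to the reference skin chromaticity.
    Normalized rg = (R, G) / (R + G + B) removes the brightness component,
    so differently illuminated skin maps to nearby chromaticities."""
    img = img.astype(float)
    s = img.sum(axis=2, keepdims=True) + 1e-9   # avoid division by zero
    rg = (img / s)[..., :2]
    return np.linalg.norm(rg - SKIN_RG, axis=2)

def skin_mask(img, threshold=0.1):
    return skin_distance_map(img) < threshold
```

A skin-like RGB value lands close to the reference locus while a saturated green pixel lands far from it, so thresholding the distance map separates the two.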
Abstract: Motivated by Berman et al. [Sign patterns that allow eventual positivity, ELA, 19 (2010): 108-120], we concentrate on the potential eventual positivity of irreducible tridiagonal sign patterns. The minimal potentially eventually positive irreducible tridiagonal sign patterns of order less than six are established; that is, all the minimal potentially eventually positive tridiagonal sign patterns of order at most 5 are identified. Our results indicate that if an irreducible tridiagonal sign pattern A of order less than six is minimal potentially eventually positive, then A requires eventual positivity.
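Eventual positivity of a particular matrix realization can be probed numerically: A is eventually positive when A^k is entrywise positive for all sufficiently large k. The finite check below is only a proxy over a bounded range of powers (it looks for a run of consecutive entrywise-positive powers) and says nothing at the sign-pattern level, where all realizations must be considered; the function name and bounds are assumptions.

```python
import numpy as np

def powers_eventually_positive(A, k_max=50, run=5):
    """Finite proxy for eventual positivity: return True if A^k is entrywise
    positive for `run` consecutive powers within k <= k_max. (True eventual
    positivity requires A^k > 0 for ALL sufficiently large k, so this is
    only numerical evidence, not a proof.)"""
    A = np.asarray(A, dtype=float)
    P = np.eye(A.shape[0])
    streak = 0
    for _ in range(k_max):
        P = P @ A
        P /= np.abs(P).max()   # rescale to avoid overflow; signs are preserved
        streak = streak + 1 if np.all(P > 0) else 0
        if streak >= run:
            return True
    return False
```

For an irreducible tridiagonal realization with all entries positive the check succeeds quickly (the matrix is primitive), while a diagonal matrix, whose powers never fill the off-diagonal, fails it.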
Abstract: The purpose of this study was to evaluate and
compare new indices based on the discrete wavelet transform
with other spectral parameters proposed in the literature, such
as mean average voltage, median frequency, and ratios between
spectral moments, applied to estimate acute exercise-induced
changes in power output, i.e., to assess peripheral muscle
fatigue during a dynamic fatiguing protocol. Fifteen trained
subjects performed 5 sets of 10 leg presses, with 2
minutes rest between sets. Surface electromyography was
recorded from vastus medialis (VM) muscle. Several surface
electromyographic parameters were compared to detect
peripheral muscle fatigue. These were: mean average voltage
(MAV), median spectral frequency (Fmed), Dimitrov spectral
index of muscle fatigue (FInsm5), as well as five other
parameters obtained from the discrete wavelet transform
(DWT) as ratios between different scales. The new wavelet
indices achieved the best results in Pearson correlation
coefficients with power output changes during acute dynamic
contractions. Their regressions were significantly different
from those of MAV and Fmed. Moreover, they showed the
highest robustness in the presence of additive white Gaussian
noise for different signal-to-noise ratios (SNRs). Therefore,
peripheral impairments assessed by sEMG wavelet indices
may be a relevant factor in the loss of power output
after a dynamic high-loading fatiguing task.
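The classical comparison indices named above can be sketched from one signal epoch as follows. The FInsm5 index is taken here as the ratio of the order -1 to order 5 spectral moments (Dimitrov's formulation, as commonly stated); the unnormalized periodogram and other implementation details are assumptions of the sketch.

```python
import numpy as np

def emg_indices(x, fs):
    """Compute MAV, median frequency (Fmed), and the FInsm5 spectral index
    (ratio of the order -1 to order 5 spectral moments) for one sEMG epoch
    sampled at fs Hz."""
    x = np.asarray(x, dtype=float)
    mav = np.mean(np.abs(x))                          # mean average voltage
    spec = np.abs(np.fft.rfft(x - x.mean())) ** 2     # one-sided periodogram
    f = np.fft.rfftfreq(len(x), d=1.0 / fs)
    spec, f = spec[1:], f[1:]                         # drop DC for the f**-1 moment
    cum = np.cumsum(spec)
    fmed = f[np.searchsorted(cum, 0.5 * cum[-1])]     # half-power frequency
    finsm5 = np.sum(spec * f ** -1.0) / np.sum(spec * f ** 5)
    return mav, fmed, finsm5
```

On a pure 50 Hz tone the median frequency lands at the tone and the MAV equals the rectified-sine mean of about 2/pi, which makes a convenient sanity check.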
Abstract: In this work, we improve a previously developed
segmentation scheme aimed at extracting edge information from
speckled images using a maximum likelihood edge detector. The
scheme was based on finding a threshold for the probability density
function of a new kernel defined as the arithmetic mean-to-geometric
mean ratio field over a circular neighborhood set and, in a general
context, is founded on a likelihood random field model (LRFM). The
segmentation algorithm was applied to discriminated speckle areas
obtained using simple elliptic discriminant functions based on
measures of the signal-to-noise ratio with fractional order moments.
A rigorous stochastic analysis was used to derive an exact expression
for the cumulative distribution function of the
random field. Based on this, an accurate probability
of error was derived and the performance of the scheme was
analysed. The improved segmentation scheme performed well for
both simulated and real images and showed superior results to those
previously obtained using the original LRFM scheme and standard
edge detection methods. In particular, the false alarm probability was
markedly lower than that of the original LRFM method with
oversegmentation artifacts virtually eliminated. The importance of
this work lies in the development of a stochastic-based segmentation,
allowing an accurate quantification of the probability of false
detection. Non-visual quantification and misclassification in medical
ultrasound speckle images is a relatively new topic and is of interest
to clinicians.
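The arithmetic mean-to-geometric mean ratio kernel at the heart of the scheme is straightforward to compute. The sketch below uses a square sliding window as a stand-in for the circular neighborhood set of the method, and omits the thresholding and LRFM machinery; the function name and window size are assumptions.

```python
import numpy as np

def am_gm_ratio_field(img, radius=2):
    """Arithmetic-mean-to-geometric-mean ratio over a sliding (2r+1)^2 window.
    By the AM-GM inequality the ratio is >= 1, growing where the neighborhood
    is inhomogeneous (e.g. straddling an edge), which is what makes it a
    useful edge statistic for speckled images."""
    img = np.asarray(img, dtype=float)
    assert np.all(img > 0), "geometric mean requires positive intensities"
    h, w = img.shape
    out = np.ones((h, w))   # border pixels left at the homogeneous value 1
    r = radius
    for y in range(r, h - r):
        for x in range(r, w - r):
            win = img[y - r:y + r + 1, x - r:x + r + 1]
            am = win.mean()
            gm = np.exp(np.log(win).mean())   # geometric mean via log-mean
            out[y, x] = am / gm
    return out
```

On a synthetic step edge the ratio stays at 1 inside each homogeneous region and rises sharply on the edge itself.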