Abstract: Pairwise testing, which requires that every
combination of valid values of each pair of system factors be covered
by at least one test case, plays an important role in software testing,
since many faults are caused by unexpected 2-way interactions among
system factors. Although meta-heuristic strategies such as simulated
annealing (SA) can generally discover smaller pairwise test suites, they
may require more search time than greedy algorithms. We propose a
new method, improved Extremal Optimization (EO), based on the
Bak-Sneppen (BS) model of biological evolution, for constructing
pairwise test suites, and define a fitness function that meets the
requirements of improved EO. Experimental results show that
improved EO produces pairwise test suites of similar size while
yielding an 85% reduction in solution time over SA.
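The pairwise coverage criterion described above can be made concrete with a short sketch (this is not the paper's EO algorithm; the factor values and test suite below are hypothetical):

```python
from itertools import combinations, product

def uncovered_pairs(factors, suite):
    """Return the set of 2-way interactions not covered by the test suite.

    factors: list of lists, factors[i] = valid values of factor i.
    suite:   list of tuples, one value per factor.
    """
    # Every pair of factors (i, j) with every value combination must appear.
    required = {
        (i, j, vi, vj)
        for i, j in combinations(range(len(factors)), 2)
        for vi, vj in product(factors[i], factors[j])
    }
    covered = {
        (i, j, t[i], t[j])
        for t in suite
        for i, j in combinations(range(len(t)), 2)
    }
    return required - covered

# Three binary factors: 4 tests suffice to cover all 12 value pairs,
# whereas exhaustive testing would need 2**3 = 8 tests.
factors = [[0, 1], [0, 1], [0, 1]]
suite = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]
print(uncovered_pairs(factors, suite))  # set() -> full pairwise coverage
```

Any search strategy, greedy, SA, or EO, can use the size of this uncovered set as (part of) its objective.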
Abstract: A hybrid-feature-based adaptive particle filter algorithm is presented for object tracking in real scenarios with a static camera.
The hybrid feature combines two effective features: the Grayscale Arranging Pairs (GAP) feature and the color histogram feature. The GAP feature has high discriminative ability even under severe illumination variation and dynamic background
elements, while the color histogram feature has high reliability for identifying the detected objects. The combination of the two features compensates for the shortcomings of either feature alone. Furthermore, we adopt an updating
target model so that external problems such as changes in visual angle can be handled well. An automatic initialization algorithm is introduced which provides precise initial positions of objects. The experimental
results show the good performance of the proposed method.
Abstract: The literature reports a large number of approaches for
measuring the similarity between protein sequences. Most of these
approaches estimate this similarity using alignment-based techniques
that do not necessarily yield biologically plausible results, for two
reasons.
First, for the case of non-alignable (i.e., not yet definitively aligned
and biologically approved) sequences such as multi-domain, circular
permutation and tandem repeat protein sequences, alignment-based
approaches do not succeed in producing biologically plausible results.
This is due to the nature of the alignment, which is based on the
matching of subsequences in equivalent positions, while non-alignable
proteins often have similar and conserved domains in non-equivalent
positions.
Second, the alignment-based approaches lead to similarity measures
that depend heavily on the parameters set by the user for the alignment
(e.g., gap penalties and substitution matrices). For easily alignable
protein sequences, it is possible to supply a suitable combination of
input parameters that allows such an approach to yield biologically
plausible results. However, for difficult-to-align protein sequences,
supplying different combinations of input parameters yields different
results. Such variable results create ambiguities and complicate the
similarity measurement task.
To overcome these drawbacks, this paper describes a novel and
effective approach for measuring the similarity between protein
sequences, called SAF for Substitution and Alignment Free. Without
resorting either to the alignment of protein sequences or to substitution
relations between amino acids, SAF is able to efficiently detect the
significant subsequences that best represent the intrinsic properties of
protein sequences, those underlying the chronological dependencies of
structural features and biochemical activities of protein sequences.
Moreover, by using a new efficient subsequence matching scheme,
SAF more efficiently handles protein sequences that contain similar
structural features with significant meaning in chronologically
non-equivalent positions. To show the effectiveness of SAF, extensive
experiments were performed on protein datasets from different
databases, and the results were compared with those obtained by
several mainstream algorithms.
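Since the abstract does not detail SAF's subsequence matching scheme, the following is only a generic alignment-free sketch: a k-mer count profile compared by cosine similarity, which, like SAF, needs neither alignment nor substitution matrices and detects shared subsequences at non-equivalent positions. The sequences are hypothetical.

```python
from collections import Counter
from math import sqrt

def kmer_profile(seq, k=3):
    """Count all overlapping k-mers of a sequence."""
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def kmer_similarity(a, b, k=3):
    """Cosine similarity of k-mer count vectors; position-independent,
    so conserved subsequences are detected even after rearrangement."""
    pa, pb = kmer_profile(a, k), kmer_profile(b, k)
    dot = sum(pa[m] * pb[m] for m in pa)
    norm = sqrt(sum(v * v for v in pa.values())) * sqrt(sum(v * v for v in pb.values()))
    return dot / norm if norm else 0.0

# A circular permutation keeps most k-mers, so similarity stays high
# even though an alignment in equivalent positions would fail.
s1 = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"
s2 = s1[10:] + s1[:10]   # circularly permuted copy
print(kmer_similarity(s1, s2) > 0.8)
```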
Abstract: Bonding has become a routine procedure in several
dental specialties – from prosthodontics to conservative dentistry and
even orthodontics. In many of these fields it is important to be able to
investigate the bonded interfaces to assess their quality. All currently
employed investigative methods are invasive, meaning that samples
are destroyed in the testing procedure and cannot be used again. We
have investigated the interface between human enamel and bonded
ceramic brackets non-invasively, introducing a combination of new
investigative methods – optical coherence tomography (OCT),
fluorescence OCT and confocal microscopy (CM). Brackets were
conventionally bonded on conditioned buccal surfaces of teeth. The
bonding was assessed using these methods. Three-dimensional
reconstructions of the detected material defects were developed using
manual and semi-automatic segmentation. The results clearly prove
that OCT, fluorescence OCT and CM are useful in orthodontic
bonding investigations.
Abstract: Power loss reduction is one of the main targets in the power industry, and so in this paper the problem of finding the optimal configuration of a radial distribution system for loss reduction is considered. Optimal reconfiguration involves selecting the best set of branches to be opened, one from each loop, to reduce resistive line losses and relieve overloads on feeders by shifting load to adjacent feeders. However, since there are many candidate switching combinations in the system, feeder reconfiguration is a complicated problem. In this paper a new approach is proposed, based on a simple optimum loss calculation, by determining optimal trees of the given network. From graph theory, a distribution network can be represented by a graph consisting of a set of nodes and branches. This problem can thus be viewed as determining an optimal tree of the graph that simultaneously ensures the radial structure of each candidate topology. In this method a refined genetic algorithm is also set up, with improvements to the chromosome coding. An implementation of the algorithm presented in [7], in which the choice of switches to be opened is based on simple heuristic rules, is applied with a modified load flow program and compared with the proposed method. That algorithm reduces the number of load flow runs, narrows the switching combinations to a smaller number, and gives the optimum solution. To demonstrate the validity of these methods, computer simulations with PSAT and MATLAB are carried out on a 33-bus test system. The results show that the performance of the proposed method is better than the method of [7] and other methods.
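The radial-structure constraint (each candidate topology must form a spanning tree of the network graph) can be checked with a standard union-find sketch. This is not the paper's refined genetic algorithm; the branch lists below are hypothetical.

```python
def is_radial(n_nodes, closed_branches):
    """A configuration is radial iff its closed branches form a spanning
    tree: exactly n_nodes - 1 branches, no loops, every node connected."""
    if len(closed_branches) != n_nodes - 1:
        return False
    parent = list(range(n_nodes))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    for u, v in closed_branches:
        ru, rv = find(u), find(v)
        if ru == rv:        # closing this branch would create a loop
            return False
        parent[ru] = rv
    return True

# Hypothetical 5-node feeder: branches given as (u, v) node pairs.
tree = [(0, 1), (1, 2), (1, 3), (3, 4)]
loop = [(0, 1), (1, 2), (2, 0), (3, 4)]   # contains a loop, leaves a node isolated
print(is_radial(5, tree), is_radial(5, loop))  # True False
```

In a GA-based search, a chromosome that fails this check can be repaired or discarded before any load flow is run.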
Abstract: In this paper, an improved edge detection algorithm
based on fuzzy combination of mathematical morphology and
wavelet transform is proposed. The combined method is proposed to
overcome the limitation of wavelet based edge detection and
mathematical morphology based edge detection in noisy images.
Experimental results show the superiority of the proposed method
over the traditional Prewitt, wavelet-based and morphology-based
edge detection methods. The proposed method is effective for noisy
images and preserves clear and continuous edges.
Abstract: This paper presents an experiment to estimate the
influence of cutting conditions on microstructure changes when
machining austenitic 304 stainless steel, with particular attention to
insert wear. The wear inserts were prefabricated with a width of
0.5 mm. The forces, temperature distribution, residual stress (RS),
and microstructure changes were measured by a force dynamometer,
an infrared thermal camera, X-ray diffraction (XRD), and SEM,
respectively. The results show that different combinations of
machining conditions have a significant influence on machined-surface
microstructure changes. In addition, ANOVA and AOM were used to
distinguish the respective influences of cutting speed, feed rate, and
insert wear.
Abstract: Heat-inducible gene expression vectors are useful for hyperthermia-induced cancer gene therapy, because the combination
of hyperthermia and gene therapy can considerably improve the therapeutic effects. In the present study, we developed an enhanced
heat-inducible transgene expression system in which a heat-shock
protein (HSP) promoter and tetracycline-responsive transactivator
were combined. When the transactivator plasmid containing the
tetracycline-responsive transactivator gene was co-transfected with
the reporter gene expression plasmid, a high level of heat-induced gene expression was observed compared with that using the HSP
promoter without the transactivator. In vitro evaluation of the
therapeutic effect using HeLa cells showed that heat-induced therapeutic gene expression caused cell death in a high percentage of
these cells, indicating that this strategy is promising for cancer gene therapy.
Abstract: There have been different approaches to computing the
analytic instantaneous frequency, with a variety of background
reasoning, practical applicability, and restrictions. This paper presents
an instantaneous frequency computation approach based on adaptive
Fourier decomposition and α-counting. The adaptive Fourier
decomposition is a recently proposed signal decomposition approach;
the instantaneous frequency can be computed through the
mono-components it produces. Due to its fast energy convergence, the
adaptive Fourier decomposition discards the highest-frequency content
of the signal, which in most situations represents noise. A new
instantaneous frequency definition for a large class of so-called simple
waves is also proposed in this paper; simple waves cover a wide range
of signals for which the concept of instantaneous frequency has a clear
physical sense. The α-counting instantaneous frequency can be used to
compute the highest frequency of a signal. By combining these two
approaches, one can obtain the instantaneous frequencies of the whole
signal. An experiment demonstrates the computation procedure with
promising results.
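As a minimal illustration of the instantaneous-frequency concept (not the AFD or α-counting method itself), the IF of a mono-component analytic signal can be computed as the derivative of its unwrapped phase; the linear chirp below is a synthetic example.

```python
import cmath
import math

def instantaneous_frequency(analytic, dt):
    """IF of a mono-component analytic signal z(t): finite-difference
    derivative of the unwrapped phase, divided by 2*pi to give Hz."""
    phase = []
    prev = 0.0
    for z in analytic:
        p = cmath.phase(z)
        # unwrap: keep successive phase differences within (-pi, pi]
        while p - prev > math.pi:
            p -= 2 * math.pi
        while p - prev <= -math.pi:
            p += 2 * math.pi
        phase.append(p)
        prev = p
    return [(phase[i + 1] - phase[i]) / (2 * math.pi * dt)
            for i in range(len(phase) - 1)]

# Linear chirp z(t) = exp(i*2*pi*(f0*t + 0.5*c*t^2)): its IF is f0 + c*t.
dt, f0, c = 0.001, 5.0, 40.0
ts = [k * dt for k in range(1000)]
z = [cmath.exp(1j * 2 * math.pi * (f0 * t + 0.5 * c * t * t)) for t in ts]
freqs = instantaneous_frequency(z, dt)
print(round(freqs[0], 1), round(freqs[-1], 1))  # 5.0 44.9
```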
Abstract: The purpose of this study is mainly to predict collision
frequency on the horizontal tangents combined with vertical curves
using artificial neural network methods. The proposed ANN models
are compared with existing regression models. First, the variables
that affect collision frequency were investigated. It was found that
only the annual average daily traffic, section length, access density,
the rate of vertical curvature, and the smaller curve radius before and
after the tangent were statistically significant for the related
combinations. Second, three statistical models (negative binomial,
zero inflated Poisson and zero inflated negative binomial) were
developed using the significant variables for three alignment
combinations. Third, ANN models are developed by applying the
same variables for each combination. The results clearly show that
the ANN models have lower mean square error values than the
statistical models. Similarly, the AIC values of the ANN models are
smaller than those of the regression models for all the combinations.
Consequently, the ANN models have better statistical performance
than the statistical models for estimating collision frequency. The
ANN models presented in this paper are recommended for evaluating
the safety impacts of 3D alignment elements on horizontal tangents.
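The AIC comparison mentioned above can be sketched as follows, assuming the least-squares form AIC = n·ln(SSE/n) + 2k for Gaussian errors; the error sums and parameter counts below are hypothetical, not the paper's results.

```python
from math import log

def aic(sse, n, k):
    """Akaike information criterion for a least-squares fit with
    Gaussian errors: n*ln(SSE/n) + 2*k; lower values are better."""
    return n * log(sse / n) + 2 * k

# Hypothetical fits on n = 100 road sections: a model with more
# parameters still wins if its error reduction outweighs the penalty.
n = 100
aic_nb  = aic(sse=250.0, n=n, k=4)    # e.g. negative binomial, 4 parameters
aic_ann = aic(sse=150.0, n=n, k=12)   # e.g. ANN, 12 effective parameters
print(aic_ann < aic_nb)  # True
```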
Abstract: The behavior of Radial Basis Function (RBF) networks greatly depends on how the center points of the basis functions are selected. In this work we investigate the use of instance reduction techniques, originally developed to reduce the storage requirements of instance-based learners, for this purpose. Five instance-based reduction techniques were used to determine the set of center points, and RBF networks were trained using these sets of centers. The performance of the RBF networks is studied in terms of classification accuracy and training time. The results obtained were compared with two reference networks: RBF networks that use all instances of the training set as center points (RBF-ALL) and Probabilistic Neural Networks (PNN). The former achieves high classification accuracy and the latter requires little training time. Results showed that RBF networks trained using sets of centers located by noise-filtering techniques (ALLKNN and ENN), rather than pure reduction techniques, produce the best results in terms of classification accuracy. These networks require less training time than RBF-ALL and achieve higher classification accuracy than PNN. Thus, using ALLKNN and ENN to select center points gives a better combination of classification accuracy and training time. Our experiments also show that using the reduced sets to train the networks is beneficial, especially in the presence of noise in the original training sets.
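A minimal sketch of one of the noise-filtering techniques named above, ENN (Wilson editing): an instance is kept only if the majority of its k nearest neighbours share its label, and the retained instances could then serve as RBF center points. The toy data are hypothetical.

```python
def enn_filter(points, labels, k=3):
    """Edited Nearest Neighbour (Wilson) filtering: return the indices
    of instances whose label agrees with the majority of their k
    nearest neighbours; disagreeing instances are treated as noise."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    kept = []
    for i, (p, y) in enumerate(zip(points, labels)):
        neigh = sorted((j for j in range(len(points)) if j != i),
                       key=lambda j: dist2(p, points[j]))[:k]
        votes = sum(1 for j in neigh if labels[j] == y)
        if votes * 2 > k:            # majority agrees -> keep as a center
            kept.append(i)
    return kept

# Two well-separated clusters plus one mislabeled point inside cluster 0.
points = [(0, 0), (0, 1), (1, 0), (0.5, 0.5), (5, 5), (5, 6), (6, 5)]
labels = [0, 0, 0, 1, 1, 1, 1]       # (0.5, 0.5) carries a noisy label
print(enn_filter(points, labels, k=3))  # [0, 1, 2, 4, 5, 6]
```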
Abstract: In contrast to existing methods, which do not take multiconnectivity in the broad sense of this term into account, we develop mathematical models and a highly effective combined (BIEM and FDM) numerical method for calculating the stationary and quasi-stationary temperature field of the profile part of a blade with convective cooling, with a view to implementation on a PC. The theoretical substantiation of these methods is proved by appropriate theorems; convergent quadrature processes have been developed and error estimates obtained in terms of A. Zygmund continuity moduli. For visualization of the profiles, the method of least squares with automatic conjecture, spline devices, smooth replenishment and neural nets are used. Boundary conditions of heat exchange are determined from the solution of the corresponding integral equations and from empirical relationships. The reliability of the designed methods is confirmed by computational and experimental investigations of the heat and hydraulic characteristics of the first-stage nozzle blade of a gas turbine.
Abstract: Variable ordering heuristics are used in constraint satisfaction algorithms. The characteristics of different variable ordering heuristics are complementary, so we have tried to exploit the advantages of all of them to improve the performance of search algorithms for solving constraint satisfaction problems. This paper considers combinations based on products and quotients, and then a newer form of combination based on weighted sums of ratings from a set of base heuristics, some of which result in definite improvements in performance.
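A weighted-sum combination of base heuristics can be sketched as follows; the base heuristics (smallest domain, maximum degree), the rank-based rating scheme, and the weights are illustrative assumptions, not the paper's tuned combination.

```python
def rank_ratings(scores, reverse=False):
    """Convert raw heuristic scores into ratings (rank positions), so
    heuristics measured on different scales can be summed fairly."""
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=reverse)
    ratings = [0] * len(scores)
    for rank, i in enumerate(order):
        ratings[i] = rank
    return ratings

def pick_variable(domain_sizes, degrees, w_dom=0.6, w_deg=0.4):
    """Weighted-sum combination: prefer small domains (fail-first) and
    high degree; the weights here are illustrative, not tuned values."""
    r_dom = rank_ratings(domain_sizes)            # small domain -> low rating
    r_deg = rank_ratings(degrees, reverse=True)   # high degree  -> low rating
    combined = [w_dom * a + w_deg * b for a, b in zip(r_dom, r_deg)]
    return min(range(len(combined)), key=combined.__getitem__)

# Three unassigned variables: var 1 has the smallest domain and the
# highest degree, so both base heuristics agree on it.
print(pick_variable(domain_sizes=[4, 2, 3], degrees=[2, 3, 1]))  # 1
```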
Abstract: A theory for optimal filtering of infinite sets of random
signals is presented. There are several new distinctive features of the
proposed approach. First, a single optimal filter for processing any
signal from a given infinite signal set is provided. Second, the filter is
presented in the special form of a sum with p terms where each term
is represented as a combination of three operations. Each operation
is a special stage of the filtering aimed at facilitating the associated
numerical work. Third, an iterative scheme is implemented into the
filter structure to provide an improvement in the filter performance at
each step of the scheme. The final step of the scheme concerns signal
compression and decompression. This step is based on the solution of
a new rank-constrained matrix approximation problem. The solution
to the matrix problem is described in this paper. A rigorous error
analysis is given for the new filter.
Abstract: Animation is simply defined as the sequencing of a
series of static images to generate the illusion of movement. Most
people believe that actual drawings or creation of the individual
images is the animation, when in actuality it is the arrangement of
those static images that conveys the motion. It is often assumed that
becoming an animator requires the ability to quickly design
masterpiece after masterpiece. Although some semblance of artistic
skill is a necessity for the job, the real key to becoming a great
animator is the comprehension of timing. This paper will use a
combination of sprite animation, frame animation, and some other
techniques to cause a group of multi-colored static images to slither
around in the bounded area. In addition to slithering, the images
will also change the color of different parts of their bodies, much
like real-world creatures that can change their colors. This paper
was implemented using Java 2 Standard Edition (J2SE).
It is both time-consuming and expensive to create animations,
regardless of whether they are created by hand or with
motion-capture equipment. If animators could reuse old animations
and even blend different animations together, a lot of work would be
saved in the process. The main objective of this paper is to examine
a method for blending several animations together in real time. This
paper presents and analyses a solution using Weighted Skeleton
Animation (WSA) that limits CPU time and memory waste while
saving time for the animators. The idea presented is described in
detail and implemented. In this paper, text animation,
vertex animation, sprite part animation and whole sprite animation
were tested.
In this research, the resolution, smoothness and movement of the
animated images are evaluated using parameters obtained from the
experimental implementation of this work.
Abstract: Sleep spindles are the most interesting hallmark of
stage 2 sleep EEG. Their accurate identification in a
polysomnographic signal is essential for sleep professionals to help
them mark Stage 2 sleep. Sleep Spindles are also promising objective
indicators for neurodegenerative disorders. Visual spindle scoring,
however, is a tedious task. In this paper, three different
approaches are used for the automatic detection of sleep spindles:
Short Time Fourier Transform, Wavelet Transform and Wave
Morphology for Spindle Detection. In order to improve the results, a
combination of the three detectors is presented and comparison with
human expert scorers is performed. The best performance is obtained
with a combination of the three algorithms which resulted in a
sensitivity and specificity of 94% when compared to human expert
scorers.
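One simple way to combine the three detectors is a per-epoch majority vote over their boolean outputs, sketched below; the paper's actual combination rule may differ, and the detector outputs shown are hypothetical.

```python
def combine_detectors(stft, wavelet, morpho, min_votes=2):
    """Per-epoch majority vote over three boolean detector outputs
    (STFT, wavelet transform, wave morphology); a spindle is reported
    only where at least min_votes detectors agree."""
    return [int(a + b + c >= min_votes)
            for a, b, c in zip(stft, wavelet, morpho)]

# Five epochs: epochs 1 and 3 are flagged by at least two detectors,
# so isolated single-detector hits (epochs 0 and 2) are suppressed.
stft    = [0, 1, 0, 1, 0]
wavelet = [0, 1, 1, 1, 0]
morpho  = [1, 0, 0, 1, 0]
print(combine_detectors(stft, wavelet, morpho))  # [0, 1, 0, 1, 0]
```

Requiring agreement in this way trades a little sensitivity for specificity, which matches the reported benefit of combining the three algorithms.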
Abstract: Thirty-three re-wetting tests were conducted on barley
at different combinations of temperature (5.7-46.3 °C) and relative
humidity (48.2-88.6%). The two most commonly used thin-layer
drying and re-wetting models, i.e., the Page and Diffusion models,
were compared for their ability to fit the experimental re-wetting
data based on the standard error of estimate (SEE) of the measured
and simulated moisture contents. The comparison shows that both
the Page and Diffusion models fit the re-wetting experimental data
of barley well. The average SEE values for the Page and Diffusion
models were 0.176% d.b. and 0.199% d.b., respectively. The Page
and Diffusion models were found to be the most suitable equations
to describe the thin-layer re-wetting characteristics of barley over a
typical five-day re-wetting period. These two models can be used for
the simulation of deep-bed re-wetting of barley occurring during
ventilated storage and deep-bed drying.
Abstract: Removal of PCP by a system combining
biodegradation by biofilm and adsorption was investigated here.
Three studies were conducted employing batch tests, sequencing
batch reactor (SBR) and continuous biofilm activated carbon
column reactor (BACCOR). The combined biofilm-GAC batch
process removed about 30% more PCP than GAC adsorption
alone. For the SBR processes, both the suspended and attached
biomass could remove more than 90% of the PCP after
acclimatisation. BACCOR was able to remove more than 98% of
PCP-Na at concentrations ranging from 10 to 100 mg/L, at empty
bed contact time (EBCT) ranging from 0.75 to 4 hours. Pure and
mixed cultures from BACCOR were tested for use of PCP as sole
carbon and energy source under aerobic conditions. The isolates
were able to degrade up to 42% of PCP under aerobic conditions in
pure cultures. However, mixed cultures were found able to degrade
more than 99% of the PCP, indicating interdependence of the species.
Abstract: In this paper, an analysis is presented that
demonstrates the effect pre-logic factoring can have on the
automated combinational logic synthesis process succeeding it. The
impact of pre-logic factoring for some arbitrary combinatorial
circuits synthesized within a FPGA based logic design environment
has been analyzed previously. This paper explores a similar effect,
but with the non-regenerative logic synthesized using elements of a
commercial standard cell library. On an overall basis, the results
obtained pertaining to the analysis on a variety of MCNC/IWLS
combinational logic benchmark circuits indicate that pre-logic
factoring has the potential to facilitate simultaneous power, delay and
area optimized synthesis solutions in many cases.
Abstract: Energy intensity (energy consumption intensity) is a
global index which computes the energy required to produce a
specific value of goods and services in each country. It is computed
in terms of initial energy supply or final energy consumption. In this
study the Divisia method is used to decompose energy consumption
and energy intensity into production, structural and net intensity
effects; the decomposition can be carried out as a time series or
between two periods. This study analytically investigates changes in
consumption and energy intensity in the economic sectors of Iran,
focusing on transportation (rail and road). Our results show that the
contribution of the structural effect (change in the combination of
economic activities) is very low, and that the net intensity effect
makes the larger contribution to changes in consumption and energy
intensity. In other words, the high consumption of energy is due to
the intensity of energy consumption rather than to the structural
effect of the transportation sector.
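The Divisia decomposition described above can be sketched with the widely used additive LMDI form, in which the activity, structural and intensity effects sum exactly to the total change in energy use. All numbers below are hypothetical, not the study's data for Iran.

```python
from math import log

def lmean(a, b):
    """Logarithmic mean, the weighting function used in LMDI."""
    return a if a == b else (a - b) / (log(a) - log(b))

def lmdi(Q0, QT, S0, ST, I0, IT):
    """Additive LMDI decomposition of sectoral energy use
    E_i = Q * S_i * I_i (total activity Q, sector share S_i, sector
    intensity I_i) into activity, structural and intensity effects."""
    E0 = [Q0 * s * i for s, i in zip(S0, I0)]
    ET = [QT * s * i for s, i in zip(ST, IT)]
    w = [lmean(eT, e0) for eT, e0 in zip(ET, E0)]
    act   = sum(wi * log(QT / Q0) for wi in w)
    strc  = sum(wi * log(sT / s0) for wi, sT, s0 in zip(w, ST, S0))
    inten = sum(wi * log(iT / i0) for wi, iT, i0 in zip(w, IT, I0))
    return act, strc, inten, sum(ET) - sum(E0)

# Two sectors: output grows, shares barely move, intensity rises in
# sector 1, so the intensity effect dominates the structural effect.
act, strc, inten, dE = lmdi(Q0=100, QT=110,
                            S0=[0.6, 0.4], ST=[0.58, 0.42],
                            I0=[2.0, 1.0], IT=[2.4, 1.0])
print(abs(act + strc + inten - dE) < 1e-9)  # True: decomposition is exact
```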