Abstract: Factoring Boolean functions is one of the basic operations in algorithmic logic synthesis. This paper presents a novel algebraic factorization heuristic for single-output combinational logic functions, developed on a set-theoretic paradigm. The impact of factoring is analyzed mainly from a low-power design perspective for standard-cell-based digital designs. The physical implementations of a number of MCNC/IWLS combinational benchmark functions and sub-functions are compared before and after factoring, based on a simple technology-mapping procedure utilizing only standard gate primitives (readily available as standard cells in a technology library) rather than cells corresponding to optimized complex logic. The power results were obtained at the gate level by means of an industry-standard power analysis tool from Synopsys, targeting a 130nm (0.13μm) UMC CMOS library, for the typical case. The wire loads were inserted automatically and the simulations were performed with maximum input activity. The gate-level simulations demonstrate the advantage of the proposed factoring technique over other existing methods from a low-power perspective, for arbitrary examples. Although the benchmark experiments report mixed results, the mean savings in total power and dynamic power for the factored solution over a non-factored solution were 6.11% and 5.85% respectively. In terms of leakage power, the average savings for the factored forms were a significant 23.48%. The factored solution is also expected to better its non-factored counterpart in terms of the power-delay product, as it is well known that factoring, in general, yields a delay-efficient multi-level solution.
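As a minimal illustration of the saving that algebraic factoring provides (the expression below is ours for illustration, not one of the paper's benchmarks):

```latex
F = ab + ac + ad \;\longrightarrow\; F = a(b + c + d)
```

This reduces the literal count from six to four; fewer literals generally map to fewer or smaller standard cells, which is the source of the area and power savings discussed above.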
Abstract: In this study, numerical simulations of laminar flow in sinusoidally wavy tubes were conducted at a mean Reynolds number of 250, which is in the physiological flow-rate range, and the flow structures, pressure distribution, and particle trajectories were investigated under both steady and periodic inflow conditions. For extensive comparison, tube models with various wavelengths and amplitudes of the sine function were employed. The results showed that small-amplitude secondary curvature has a significant influence on the nature of the flow patterns and the particle-mixing mechanism. This implies that accurate characterization of the geometry is essential for accurate prediction of in vivo hemodynamics, and may motivate further study of the possible influence of secondary flow on vascular remodeling and pathophysiology.
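A plausible parameterization of such a tube (our notation; the paper's exact geometry definition may differ) takes the local wall radius as a sinusoidal function of the axial coordinate:

```latex
R(z) = R_0 + A \sin\!\left(\frac{2\pi z}{\lambda}\right), \qquad
\mathrm{Re} = \frac{\rho\,\bar{u}\,(2R_0)}{\mu} = 250
```

Here R_0 is the mean radius, and the amplitude A and wavelength lambda are the parameters varied across the tube models.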
Abstract: The study of the fouling deposition of pink guava juice (PGJ) is a relatively new research area compared to the study of milk fouling deposits. In this work, a new experimental set-up was developed to imitate fouling formation in a heat exchanger, namely a continuous-flow experimental heat exchanger. The new set-up was operated at the industrial pasteurization temperature of PGJ, 93°C, while the flow rates (0.5 and 1 liter/min) and the pasteurization period (1 hour) were set according to the experimental capacity. The fouling deposit was characterized using various methods. The microstructure of the deposits was examined using ESEM. Proximate analyses were performed to determine the moisture, fat, protein, fiber, ash and carbohydrate content. The hardness and stickiness of the fouling deposit were studied using a texture analyzer, and the presence of seedstone in pink guava juice was analyzed using a particle analyzer. The findings showed that the seedstone in pink guava juice ranged from 168 to 200 μm and that carbohydrate was the major component (47.7% of the fouling deposit). Comparison of the hardness and stickiness of the deposits at the two flow rates showed that the fouling deposits were harder and denser at the higher flow rate. The findings of this work provide a basis for further study on the fouling and cleaning of PGJ.
Abstract: Droplet size distributions in the cold spray of a fuel are important to the observed combustion behavior. Specification of the droplet size and velocity distributions immediately downstream of injectors is also essential as a boundary condition for advanced computational fluid dynamics (CFD) and two-phase spray transport calculations. This paper describes the development of a new model to be incorporated into the maximum entropy principle (MEP) formalism for predicting the droplet size distribution in the droplet formation region. The MEP approach can predict the most likely droplet size and velocity distributions under a set of constraints expressing the available information related to the distribution.
In this article, by considering the mechanisms of turbulence generation inside the nozzle and wave growth on the jet surface, we attempt to provide a logical framework coupling the flow inside the nozzle to the resulting atomization process. The purpose of this paper is to describe the formulation of this new model and to incorporate it into the MEP formalism by coupling the sub-models together using source terms of momentum and energy. Comparison between the model predictions and experimental data for a gas turbine swirling nozzle and an annular spray indicates good agreement between model and experiment.
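For reference, the general MEP formalism into which the model is incorporated (a standard sketch, not the paper's specific sub-models) selects the joint size-velocity distribution that maximizes Shannon entropy subject to the available constraints:

```latex
\max_{p_i}\ S = -\sum_i p_i \ln p_i
\quad \text{subject to} \quad
\sum_i p_i = 1, \qquad \sum_i p_i\, f_j(D_i, u_i) = \bar{f}_j
```

The solution has the exponential form p_i = exp(-lambda_0 - sum_j lambda_j f_j(D_i, u_i)), with the Lagrange multipliers fixed by the mass, momentum, and energy constraints that the coupling source terms supply.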
Abstract: In this paper, penalized power-divergence test statistics are defined, and their exact size properties for testing a nested sequence of log-linear models are compared with those of the ordinary power-divergence test statistics for various penalization, λ, and main-effect values. Since the ordinary and penalized power-divergence test statistics have the same asymptotic distribution, comparisons are made only for small and moderate samples. Three-way contingency tables distributed according to a multinomial distribution are considered. Simulation results reveal that the penalized power-divergence test statistics perform much better than their ordinary counterparts.
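For context, the ordinary power-divergence family for observed counts O_i and expected counts E_i is the Cressie-Read statistic (the penalized versions studied here modify this form):

```latex
T_\lambda = \frac{2}{\lambda(\lambda + 1)} \sum_i O_i \left[\left(\frac{O_i}{E_i}\right)^{\lambda} - 1\right]
```

This reduces to the likelihood-ratio statistic G^2 as lambda tends to 0 and to Pearson's X^2 at lambda = 1.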
Abstract: Using Dynamic Bayesian Networks (DBN) to model genetic regulatory networks from gene expression data is one of the major paradigms for inferring interactions among genes. When predicting a network, averaging over a collection of models is preferable to relying on a single high-scoring model. In this paper, two kinds of model searching approaches are compared: Greedy hill-climbing Search with Restarts (GSR) and Markov Chain Monte Carlo (MCMC) methods. GSR is preferred in many papers, but there has been no comparative study of which is better for DBN models. Different types of experiments have been carried out to benchmark these approaches. Our experimental results demonstrate that on average the MCMC methods outperform GSR in the accuracy of the predicted network while having comparable time efficiency. By proposing different variations of MCMC and employing a simulated annealing strategy, the MCMC methods become more efficient and stable. Apart from the comparison between these approaches, another objective of this study is to investigate the feasibility of using DBN modeling approaches to infer gene networks from a few snapshots of high-dimensional gene profiles. Through synthetic data experiments as well as systematic data experiments, the results reveal how the performance of these approaches is influenced as the target gene network varies in network size, data size, and system complexity.
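A minimal sketch of the MCMC structure search compared here (function names and the proposal mechanism are ours; the actual DBN scoring, e.g. a marginal-likelihood score, is abstracted behind `score`):

```python
import math
import random

def mcmc_structure_search(score, propose, g0, n_iter=10000):
    """Metropolis-Hastings over network structures (illustrative sketch).

    score(g)   -> log marginal likelihood of structure g given the data
    propose(g) -> (g_new, log_q_ratio), a single-edge move (add/delete/
                  reverse) plus log q(g|g_new) - log q(g_new|g)
    """
    g, s = g0, score(g0)
    samples = []
    for _ in range(n_iter):
        g_new, log_q_ratio = propose(g)
        s_new = score(g_new)
        if math.log(random.random()) < s_new - s + log_q_ratio:
            g, s = g_new, s_new   # accept with probability min(1, alpha)
        samples.append(g)
    return samples                # edge features are averaged over samples
```

Greedy hill-climbing with restarts instead keeps only the best-scoring neighbor at each step and restarts from a random structure at a local optimum; model averaging is then performed over the structures found across restarts.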
Abstract: This article presents the development of efficient algorithms for the comparison of tablet copies. Image recognition has specialized uses in digital systems such as medical imaging, computer vision, defense, and communication. Comparison between two images that look indistinguishable is a formidable task: two images taken from different sources might look identical, but due to different digitizing properties they are not, and small variations in image information, such as cropping, rotation, and slight photometric alteration, defeat direct pixel-based matching techniques. In this paper we introduce different matching algorithms designed to help art centers identify real painting images from fake ones. Different vision algorithms for local image features are implemented using MATLAB. In this framework a Table Comparison Computer Tool "TCCT" is designed to facilitate our research. The TCCT is a Graphical User Interface (GUI) tool used to identify images by their shapes and objects. The parameters of the vision system are fully accessible to the user through this graphical interface. For matching, it then applies different description techniques that can identify the exact figures of objects.
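The paper's implementation is in MATLAB; as an illustrative analogue, a local-feature matching step of this kind might look as follows in Python/OpenCV (the detector choice, thresholds, and similarity score are our assumptions):

```python
import cv2

def feature_match_score(path_a, path_b, ratio=0.75):
    """Compare two images via local features (ORB + Lowe ratio test)."""
    img_a = cv2.imread(path_a, cv2.IMREAD_GRAYSCALE)
    img_b = cv2.imread(path_b, cv2.IMREAD_GRAYSCALE)
    orb = cv2.ORB_create(nfeatures=2000)
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    pairs = [p for p in matcher.knnMatch(des_a, des_b, k=2) if len(p) == 2]
    good = [m for m, n in pairs if m.distance < ratio * n.distance]
    # Local features tolerate the cropping/rotation/photometric changes
    # that defeat direct pixel comparison.
    return len(good) / max(1, min(len(kp_a), len(kp_b)))
```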
Abstract: Names are important in many societies, even in technologically oriented ones which use, e.g., ID systems to identify individual people. Names such as surnames are the most important, as they are used in many processes, such as the identification of people and genealogical research. On the other hand, variation in names can be a major problem for the identification of and search for people, e.g. in web search or for security reasons. Name matching presumes a priori that the recorded name written in one alphabet reflects the phonetic identity of two samples or some transcription error in copying a previously recorded name; we add to this the premise that the two names imply the same person. This paper describes name variations and gives a basic description of various name matching algorithms developed to overcome name variation and to find reasonable variants of names, which can be used to further reduce mismatches in record linkage and name search. The implementation contains algorithms for computing a range of fuzzy matches based on different types of algorithms, e.g. composite and hybrid methods, allowing us to test and measure the algorithms for accuracy. NYSIIS, LIG2 and Phonex have been shown to perform well and provide sufficient flexibility to be included in the linkage/matching process for optimising name searching.
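As an illustration of the composite/hybrid matching described above (the library, thresholds, and combination rule are our choices; Phonex and LIG2 are not shown), a phonetic code can be combined with an approximate string similarity:

```python
import jellyfish  # pip install jellyfish

def names_match(a, b, jw_threshold=0.85):
    """Hybrid matcher: names must agree phonetically AND be close in
    string-similarity terms."""
    phonetic = (jellyfish.nysiis(a) == jellyfish.nysiis(b)
                or jellyfish.soundex(a) == jellyfish.soundex(b))
    similar = jellyfish.jaro_winkler_similarity(a, b) >= jw_threshold
    return phonetic and similar

print(names_match("Smith", "Smyth"))  # True: same Soundex code, high similarity
```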
Abstract: Decision support based on risk analysis for comparing electricity generation from different renewable energy technologies can provide information about their effects on the environment and society. The aim of this paper is to develop an assessment framework covering the risks to health and the environment, and the societal benefits, of electric power generation from different renewable sources. A multicriteria framework combining the multiattribute risk analysis technique and the decision analysis interview technique is applied to support the decision-making process for implementing renewable energy projects in the Bangkok case study. Having analysed the local conditions and appropriate technologies, five renewable power plants are postulated as options. As this work demonstrates, the analysis can provide a tool to aid decision-makers in achieving targets related to promoting a sustainable energy system.
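A minimal sketch of the additive multiattribute evaluation underlying such a framework (the options, attributes, weights, and scores below are hypothetical placeholders, not the paper's elicited data):

```python
# Hypothetical attribute scores on a 0-1 scale; higher risk is worse.
options = {
    "solar PV": {"health_risk": 0.2, "env_risk": 0.3, "social_benefit": 0.7},
    "wind":     {"health_risk": 0.1, "env_risk": 0.2, "social_benefit": 0.6},
    "biomass":  {"health_risk": 0.5, "env_risk": 0.6, "social_benefit": 0.8},
}
weights = {"health_risk": -0.4, "env_risk": -0.3, "social_benefit": 0.3}

def utility(attrs):
    """Additive multiattribute utility: weighted sum of attribute scores."""
    return sum(weights[k] * v for k, v in attrs.items())

for name in sorted(options, key=lambda o: utility(options[o]), reverse=True):
    print(name, round(utility(options[name]), 3))
```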
Abstract: Developing an accurate classifier for high-dimensional microarray datasets is a challenging task due to the small available sample size. It is therefore important to determine a set of relevant genes that classify the data well. Traditional gene selection methods often select the top-ranked genes according to their discriminatory power, but these genes are often correlated with each other, resulting in redundancy. In this paper, we propose a hybrid method using feature ranking and a wrapper method (a Genetic Algorithm with a multiclass SVM) to identify a set of relevant genes that classify the data more accurately. A new fitness function for the genetic algorithm is defined that focuses on selecting the smallest set of genes that provides maximum accuracy. Experiments have been carried out on four well-known datasets. The proposed method provides better results than those found in the literature in terms of both classification accuracy and the number of genes selected.
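A sketch of the kind of fitness function described (the penalty weight alpha and the cross-validation setup are our assumptions, not the paper's exact formulation):

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def fitness(mask, X, y, alpha=0.01):
    """Fitness of a binary gene mask: maximize accuracy, prefer few genes.

    mask : 0/1 vector over genes (a GA chromosome)
    X, y : expression matrix (samples x genes) and class labels
    """
    if mask.sum() == 0:
        return 0.0  # an empty gene set cannot classify
    acc = cross_val_score(SVC(kernel="linear"), X[:, mask.astype(bool)],
                          y, cv=5).mean()
    return acc - alpha * mask.sum() / mask.size  # penalize large gene sets
```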
Abstract: The objective of the research was to study the foot anthropometry of children aged 7-12 years in the South of Thailand. Thirty-three dimensions were measured on 305 male and 295 female subjects in 3 age ranges (7-12 years old). The instrumentation consisted of four types of anthropometer, a digital vernier caliper, a digital height gauge and a measuring tape. The mean values and standard deviations of the age, height, and weight of the male subjects were 9.52 (±1.70) years, 137.80 (±11.55) cm, and 37.57 (±11.65) kg; those of the female subjects were 9.53 (±1.70) years, 137.88 (±11.55) cm, and 34.90 (±11.57) kg, respectively. Comparison of the 33 measured anthropometric dimensions between male and female subjects showed sex differences in size, with females smaller in almost all areas of significance (p
Abstract: Wind energy has been shown to be one of the most viable sources of renewable energy. With current technology, the low cost of wind energy is competitive with more conventional sources of energy such as coal. Most blades available for commercial-grade wind turbines incorporate a straight span-wise profile and airfoil-shaped cross sections. These blades are found to be very efficient at lower wind speeds in comparison with the potential energy that can be extracted. However, as the oncoming wind speed increases, the efficiency of the blades decreases as they approach a stall point. This paper explores the possibility of increasing the efficiency of the blades at higher wind speeds while maintaining efficiency at the lower wind speeds. The design intends to maintain efficiency at lower wind speeds by selecting the appropriate orientation and size of the airfoil cross sections based on a low oncoming wind speed and a given constant rotation rate. The blades are made more efficient at higher wind speeds by implementing a swept blade profile. Performance was investigated using computational fluid dynamics (CFD).
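The underlying trade-off can be stated in standard turbine terms (our notation, for illustration): at a constant rotation rate omega, a rising wind speed v lowers the tip-speed ratio and raises the local angle of attack toward stall,

```latex
\lambda = \frac{\omega R}{v}, \qquad
P = \tfrac{1}{2}\,\rho A v^{3}\, C_P(\lambda)
```

so the extracted power P falls away from the available (1/2) rho A v^3 as the power coefficient C_P degrades; the swept profile aims to delay that degradation.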
Abstract: Horseradish (Armoracia rusticana) is a perennial herb belonging to the Brassicaceae family and contains biologically active substances. The aim of the current research was to determine the best method for extracting phenolic compounds with high antiradical activity from horseradish roots. Three genotypes (No. 105, No. 106 and the variety ‘Turku’) of horseradish roots were extracted with eight different solvents: n-hexane, ethyl acetate, diethyl ether, 2-propanol, acetone, ethanol (95%), ethanol/water/acetic acid (80/20/1 v/v/v) and ethanol/water (80/20 by volume), using two extraction methods (conventional and Soxhlet). Ethanol and ethanol/water solutions can be chosen as the best solvents. Although the total phenolic content (TPC) was higher in the Soxhlet extracts, the scavenging activity towards DPPH˙ radicals did not increase. It can be concluded that the Soxhlet extraction method extracts more compounds, but ones that are not effective antioxidants.
Abstract: As a simple method to estimate the plant heating energy capacity of an apartment complex, a new load calculation method is proposed. The method, which can be called the unit building method, predicts the heating load of the entire complex instead of summing up that of each apartment belonging to the complex. Comparison of the unit heating load for various floor sizes between the present method and the conventional approach shows close agreement with a dynamic load calculation code. Some additional calculations are performed to demonstrate its application with examples.
Abstract: Support vector regression (SVR) has been regarded as a state-of-the-art method for approximation and regression. The importance of the kernel function, the so-called admissible support vector kernel (SV kernel) in SVR, has motivated many studies on its composition. The Gaussian kernel (RBF) is regarded as the "best" choice of SV kernel by non-experts in SVR, although there is no evidence, except for its superior performance in some practical applications, to support this statement. It is well known that a reproducing kernel (R.K.) is also an SV kernel and possesses many important properties, e.g. positive definiteness, the reproducing property, and the ability to compose complex R.K.s from simpler ones. However, there are a limited number of R.K.s with explicit forms, and consequently few quantitative comparison studies in practice. In this paper, two R.K.s, i.e. SV kernels, composed from the sum and the product of a translation-invariant kernel in a Sobolev space are proposed. An exploratory study of the performance of SVR based on the general R.K. is presented through a systematic comparison with RBF using multiple criteria and synthetic problems. The results show that the R.K. is an equivalent or even better SV kernel than RBF for problems with more input variables (more than 5, especially more than 10) and higher nonlinearity.
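For concreteness (standard examples, not necessarily the exact kernels constructed in the paper), the RBF kernel and the reproducing kernel of the first-order Sobolev space H^1(R) are

```latex
k_{\mathrm{RBF}}(x, y) = \exp\!\left(-\frac{\|x - y\|^2}{2\sigma^2}\right),
\qquad
k_{H^1}(x, y) = \tfrac{1}{2}\, e^{-|x - y|}
```

and, by the composition property noted above, sums and products of admissible SV kernels are again admissible SV kernels.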
Abstract: Although face recognition seems an easy task for humans, automatic face recognition is a much more challenging task due to variations in time, illumination and pose. In this paper, the influence of time-lapse on visible and thermal images is examined. Orthogonal moment invariants are used as feature extractors to analyze the effect of time-lapse on thermal and visible images, and the results are compared with conventional Principal Component Analysis (PCA). A new triangle-square-ratio criterion is employed instead of Euclidean distance to enhance the performance of the nearest-neighbor classifier. The results of this study indicate that feature vectors with high discrimination power can be obtained owing to the global characteristic of orthogonal moment invariants. Moreover, the effect of time-lapse is reduced, enhancing the accuracy of face recognition considerably in comparison with PCA. Furthermore, our experimental results based on moment invariants and the triangle-square-ratio criterion show that the proposed approach achieves on average a 13.6% higher recognition rate than PCA.
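As an illustration of this pipeline (Zernike moments are one common family of orthogonal moment invariants; the paper's exact moment family and its triangle-square-ratio formula are not reproduced here):

```python
import numpy as np
import mahotas  # pip install mahotas

def moment_features(gray_face, degree=8):
    """Orthogonal (Zernike) moment invariants as a global face descriptor."""
    radius = min(gray_face.shape) // 2
    return mahotas.features.zernike_moments(gray_face, radius, degree=degree)

def nearest_neighbor(query, gallery_features):
    """Baseline nearest-neighbor matching with Euclidean distance; the
    paper replaces this distance with its triangle-square-ratio criterion."""
    dists = [np.linalg.norm(query - g) for g in gallery_features]
    return int(np.argmin(dists))
```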
Abstract: In this paper we use exponential particle swarm optimization (EPSO) to cluster data and compare the EPSO clustering algorithm, which uses an exponential variation of the inertia weight, with the particle swarm optimization (PSO) clustering algorithm, which uses a linearly varying inertia weight. The comparison is evaluated on five data sets. The experimental results show that the EPSO clustering algorithm increases the possibility of finding the optimal positions, as it decreases the number of failures, and that it has a smaller quantization error than the PSO clustering algorithm, i.e. the EPSO clustering algorithm is more accurate than the PSO clustering algorithm.
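The difference between the two algorithms lies in the inertia-weight schedule used in the PSO velocity update v <- w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x). A sketch of the two schedules (the bounds and the exponential decay constant are assumed values; the paper's exact decay law may differ):

```python
import numpy as np

W_MAX, W_MIN = 0.9, 0.4  # common PSO bounds (assumed, not from the paper)

def linear_inertia(t, T):
    """PSO: inertia weight decreases linearly over the run of T iterations."""
    return W_MAX - (W_MAX - W_MIN) * t / T

def exponential_inertia(t, T, k=5.0):
    """EPSO-style: inertia decays exponentially toward W_MIN (assumed form)."""
    return W_MIN + (W_MAX - W_MIN) * np.exp(-k * t / T)
```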
Abstract: Testing is an activity required in both the development and maintenance phases of the software development life cycle, in which integration testing is an important activity. Integration testing is based on the specification and functionality of the software and could thus be called a black-box testing technique. The purpose of integration testing is to test the integration between software components. In function or system testing, by contrast, the concern is with overall behavior: whether the software meets its functional specifications or performance characteristics, and how well the software and hardware work together. This explains the importance and necessity of integration testing (IT), whose emphasis is on the interactions between modules and their interfaces. Software errors should be discovered early during IT to reduce the cost of correction. This paper introduces a new type of integration error and presents an overview of integration testing techniques, comparing the techniques and identifying which technique detects which type of error.
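A hypothetical example of the kind of interface error that unit tests miss but integration testing catches: two modules that each pass their own tests disagree about the units exchanged at their interface.

```python
import math

def heading_sensor():
    """Module A: reports heading in DEGREES (and passes its unit tests)."""
    return 90.0

def east_component(heading_rad):
    """Module B: expects heading in RADIANS (and passes its unit tests)."""
    return math.cos(heading_rad)

# Integration error: composing the modules without converting units.
print(east_component(heading_sensor()))                # wrong: cos(90 rad)
print(east_component(math.radians(heading_sensor())))  # correct: ~0.0
```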
Abstract: The Residue Number System (RNS) is a modular representation and has proved to be an instrumental tool in many digital signal processing (DSP) applications that require high-speed computations. RNS is an integer, non-weighted number system; it can support parallel, carry-free, high-speed and low-power arithmetic. A very interesting correspondence exists between the concepts of Multiple-Valued Logic (MVL) and residue number arithmetic: if the number of levels used to represent MVL signals is chosen to be consistent with the moduli which create the finite rings in the RNS, MVL becomes a very natural representation for the RNS. There are two concerns related to the application of this number system: achieving the highest possible speed and the largest dynamic range, and these goals conflict, since augmenting the dynamic range reduces the speed at the same time. To achieve the best performance, a method named the "One-Hot Residue Number System" (OHRNS) is considered; in this implementation the propagation delay equals only one transistor delay. The problem with this method is the huge increase in the number of transistors, which grows on the order of m². In real applications this is practically impossible. In this paper, combining Multiple-Valued Logic and the One-Hot Residue Number System, we present a new method to resolve both of these problems, along with a novel design of an OHRNS-based adder circuit. This circuit is usable with Multiple-Valued Logic moduli; in comparison to other RNS designs, it considerably improves the number of transistors and the power consumption.
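A minimal sketch of why RNS arithmetic is carry-free (the moduli are illustrative; the OHRNS transistor-level encoding is not modeled here):

```python
MODULI = (7, 11, 13)  # pairwise coprime -> dynamic range 7 * 11 * 13 = 1001

def to_rns(x, moduli=MODULI):
    """Residue representation: one small independent digit per modulus."""
    return tuple(x % m for m in moduli)

def rns_add(a, b, moduli=MODULI):
    """Addition proceeds per channel, mod m_i, with no carries between
    channels, which is what lets each OHRNS channel work in parallel."""
    return tuple((x + y) % m for x, y, m in zip(a, b, moduli))

assert rns_add(to_rns(123), to_rns(456)) == to_rns(579)  # 123 + 456 = 579
```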
Abstract: As the majority of faults are found in a few of a system's modules, there is a need to identify the modules that are affected severely compared to the others, and proper maintenance needs to be done in time, especially for critical applications. Neural networks have already been applied in software engineering to build reliability growth models and to predict gross change or reusability metrics. Neural networks are non-linear, sophisticated modeling techniques that are able to model complex functions; they are used when the exact nature of the inputs and outputs is not known, and a key feature is that they learn the relationship between input and output through training. In the present work, various neural-network-based techniques are explored and a comparative analysis is performed for predicting the level of maintenance needed, by predicting the severity level of the faults present in NASA's public domain defect dataset. The different algorithms are compared on the basis of Mean Absolute Error, Root Mean Square Error and accuracy values. It is concluded that the Generalized Regression Neural Network is the best algorithm for classifying the software components into different levels of severity of fault impact. The algorithm can be used to develop a model for identifying modules that are heavily affected by faults.
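For reference, the Generalized Regression Neural Network identified as best reduces to a kernel-weighted average of the training targets (a minimal sketch; the smoothing parameter sigma and the tooling used in the paper are not specified here):

```python
import numpy as np

def grnn_predict(X_train, y_train, x, sigma=0.5):
    """GRNN (Nadaraya-Watson form): Gaussian-weighted average of the
    training targets, weighted by distance from the query point x."""
    d2 = np.sum((X_train - x) ** 2, axis=1)  # squared distances to x
    w = np.exp(-d2 / (2.0 * sigma ** 2))     # pattern-layer activations
    return float(np.dot(w, y_train) / np.sum(w))
```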