Abstract: Finite impulse response (FIR) filters have the advantages of linear phase, guaranteed stability, smaller finite-precision errors, and efficient implementation. Their major disadvantage is that they require a higher order (more coefficients) than IIR counterparts of comparable performance. The high order demand imposes more hardware requirements, arithmetic operations, area usage, and power consumption when designing and fabricating the filter. Minimizing these parameters is therefore a major goal in the digital filter design task. This paper presents an algorithm for modifying the values and the number of non-zero coefficients used to represent the FIR digital pulse shaping filter response. With this algorithm, the FIR filter frequency and phase response can be represented with a minimum number of non-zero coefficients, thereby reducing the arithmetic complexity needed to compute the filter output. Consequently, the system characteristics, i.e., power consumption, area usage, and processing time, are also reduced. The proposed algorithm is more powerful when integrated with multiplierless techniques such as distributed arithmetic (DA) in designing high order digital FIR filters: DA eliminates the need for multipliers in the multiply-and-accumulate (MAC) unit, while the proposed algorithm reduces the number of adders and addition operations needed to obtain the filter output by minimizing the number of non-zero coefficients.
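As a minimal illustration of the coefficient-reduction idea (not the paper's specific algorithm), the sketch below prunes near-zero taps of a generic pulse-shaping FIR filter under an assumed tolerance and reports how many non-zero coefficients, and hence additions in a DA-style MAC, remain, along with the frequency-response deviation the pruning introduces.

```python
import numpy as np
from scipy.signal import firwin, freqz

# Design a stand-in pulse-shaping FIR filter; the filter and the
# pruning tolerance below are illustrative assumptions.
h = firwin(numtaps=101, cutoff=0.25)

tol = 1e-3
h_pruned = np.where(np.abs(h) > tol, h, 0.0)  # zero out tiny taps
print("non-zero taps:", np.count_nonzero(h), "->", np.count_nonzero(h_pruned))

# Quantify the frequency-response degradation caused by pruning.
w, H = freqz(h)
_, Hp = freqz(h_pruned)
print("max response deviation:", np.max(np.abs(H - Hp)))
```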
Abstract: Measuring competitiveness between countries or regions is an important topic of many economic analyses and scientific papers. In the European Union (EU), there is no mainstream approach to evaluating and measuring competitiveness. There are many opinions on, and methods for, measuring and evaluating the competitiveness of states or regions at the national and European level. The methods differ in the structure of the competitiveness indicators used and in the ways they are processed. The aim of the paper is to analyze the main sources of competitive potential of the EU Member States with the help of factor analysis (FA) and to classify the EU Member States into homogeneous units (clusters) according to the similarity of selected indicators of competitiveness factors by cluster analysis (CA) in the reference years 2000 and 2011. The theoretical part of the paper is devoted to the fundamentals of competitiveness and the methodology of the FA and CA methods. The empirical part deals with the evaluation of competitiveness factors in the EU Member States and a cluster comparison of the evaluated countries.
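A minimal sketch of the FA-then-CA pipeline on placeholder data (the real indicator set is the paper's; k-means with four clusters and three factors are assumptions standing in for the paper's choices):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import FactorAnalysis
from sklearn.cluster import KMeans

# Rows = member states, columns = competitiveness indicators
# (random placeholder values stand in for the paper's data).
rng = np.random.default_rng(0)
X = rng.normal(size=(27, 10))

Z = StandardScaler().fit_transform(X)                     # common scale
scores = FactorAnalysis(n_components=3).fit_transform(Z)  # factor scores
labels = KMeans(n_clusters=4, n_init=10).fit_predict(scores)
print(labels)                                             # cluster per state
```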
Abstract: Artifact rejection plays a key role in many signal processing applications. Artifacts are disturbances that can occur during signal acquisition and that can alter the analysis of the signals themselves. Our aim is to remove artifacts automatically, in particular from electroencephalographic (EEG) recordings. A technique for automatic artifact rejection, based on Independent Component Analysis (ICA) for artifact extraction and on higher-order statistics such as kurtosis and Shannon's entropy, was proposed some years ago in the literature. In this paper we enhance this technique by proposing a new method based on Rényi's entropy. The performance of our method was tested and compared with that of the method from the literature, and the former proved to outperform the latter.
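A minimal sketch of the pipeline, assuming FastICA for the decomposition, a histogram estimate of Rényi's entropy, and a simple deviation-from-median rejection rule (the paper's exact estimator and threshold are not specified in the abstract):

```python
import numpy as np
from sklearn.decomposition import FastICA

def renyi_entropy(x, alpha=2.0, bins=64):
    """Order-alpha Renyi entropy from a histogram estimate of x."""
    p, _ = np.histogram(x, bins=bins)
    p = p[p > 0] / p.sum()
    return float(np.log(np.sum(p ** alpha)) / (1.0 - alpha))

# channels x samples; random data stands in for an EEG recording.
rng = np.random.default_rng(0)
eeg = rng.normal(size=(8, 5000))

sources = FastICA(n_components=8, random_state=0).fit_transform(eeg.T).T

# Flag components whose entropy deviates strongly from the median
# as artifactual (rejection rule and factor of 2 are assumptions).
h = np.array([renyi_entropy(s) for s in sources])
reject = np.abs(h - np.median(h)) > 2 * np.std(h)
print("rejected components:", np.where(reject)[0])
```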
Abstract: In image processing and visualization, comparing two bitmapped images requires matching them pixel by pixel, which takes a lot of computational time, while the comparison of two vector-based images is significantly faster. These raster graphics images can sometimes be approximately converted into vector-based images by various techniques. After conversion, the problem of comparing two raster graphics images can be reduced to the problem of comparing vector graphics images; hence, pixel-by-pixel comparison can be reduced to polynomial comparison. In computer aided geometric design (CAGD), vector graphics images are compositions of curves and surfaces. Curves are defined by a sequence of control points and their polynomials. In this paper, the control points are used to compare curves. Curves that have only been translated or rotated are treated as equivalent, while two curves that differ only by a uniform scaling are considered similar. This paper proposes an algorithm for comparing polynomial curves, using the control points, for equivalence and similarity. In addition, the geometric object-oriented database used to keep the curve information has also been defined in XML format for further use in curve comparisons.
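A minimal sketch of control-point-based comparison under the abstract's notions of equivalence (translation/rotation) and similarity (uniform scaling). It relies on pairwise control-point distances, which are invariant under rigid motions (including reflection); this is an illustrative stand-in, not the paper's algorithm:

```python
import numpy as np

def _pairwise(P):
    # Distances between all ordered pairs of control points.
    return np.linalg.norm(P[:, None, :] - P[None, :, :], axis=-1)

def equivalent(P, Q, tol=1e-9):
    """Same shape and point ordering: equal pairwise distances mean the
    control polygons match up to translation/rotation (and reflection)."""
    return P.shape == Q.shape and np.allclose(_pairwise(P), _pairwise(Q),
                                              atol=tol)

def similar(P, Q, tol=1e-9):
    """Similarity: pairwise distances agree up to one uniform scale."""
    if P.shape != Q.shape:
        return False
    dp, dq = _pairwise(P), _pairwise(Q)
    s = dq.sum() / dp.sum()              # estimated scale factor
    return np.allclose(s * dp, dq, atol=tol)

# A cubic Bezier control polygon and a rotated + translated copy.
P = np.array([[0, 0], [1, 2], [3, 2], [4, 0]], dtype=float)
c, s = np.cos(0.7), np.sin(0.7)
Q = P @ np.array([[c, -s], [s, c]]).T + np.array([5.0, -1.0])
print(equivalent(P, Q), similar(P, 2 * Q))   # True True
```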
Abstract: As days go by, we hear more and more about HIV, Ebola, bird flu, and other dreadful viruses that were unknown a few decades ago. In both detecting and fighting viral diseases, ordinary methods have encountered some basic and important difficulties. Vaccination is, in a sense, an introduction of the virus to the immune system before a real infection occurs. It is very successful against some viruses (e.g., poliomyelitis), while totally ineffective against others (e.g., HIV or hepatitis C). On the other hand, antiviral drugs are mostly tools to control, not to cure, a viral disease. This is a good motivation to try alternative treatments. In this study, some key features of possible physics-based alternative treatments for viral diseases are presented. Electrification of body parts or fluids (especially blood) with micro electric signals of adjusted current or frequency is also studied. The main approach of this study is to find a suitable energy field, with appropriate parameters, that is able to kill or deactivate viruses. This would be a lengthy, multi-disciplinary research effort requiring the contribution of virology, physics, and signal processing experts. It should be mentioned that all claims made by alternative-cure researchers must be tested carefully and are not advisable for the time being.
Abstract: Many experimental results suggest that precise spike timing is significant in neural information processing. We construct a self-organization model using spatiotemporal spike patterns, in which spike-timing-dependent plasticity (STDP) tunes the conduction delays between neurons. We show that the fluctuation of conduction delays gives rise to globally continuous and locally distributed firing patterns through this self-organization.
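A minimal sketch of delay tuning under an STDP-like rule: the conduction delay is nudged so that the presynaptic spike's arrival time drifts toward the postsynaptic spike time. The exponential window, learning rate, and constants are assumptions, not the paper's exact model:

```python
import numpy as np

def update_delay(delay, t_pre, t_post, eta=0.1, tau=20.0):
    """Shift the delay so the arrival time (t_pre + delay) moves
    toward t_post; exponential STDP-like window (constants assumed)."""
    dt = t_post - (t_pre + delay)     # arrival-to-postsynaptic lag (ms)
    return delay + eta * np.sign(dt) * np.exp(-abs(dt) / tau)

d = 5.0
for _ in range(200):                  # repeated pre -> post pairings
    d = update_delay(d, t_pre=0.0, t_post=8.0)
print(round(d, 2))                    # delay converges toward 8 ms
```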
Abstract: Prime factorization based on a quantum approach is performed in two phases. The first phase is carried out on a quantum computer and the second phase on a classical computer (post-processing). In the second phase, the goal is to estimate the period r of the equation x^r ≡ 1 (mod N) and to find the prime factors of the composite integer N on the classical computer. In this paper we present a method based on a randomized approach for estimating the period r with a satisfactory probability, so that the composite integer N can be factorized; with the randomized approach, even when the estimate of the period is not exactly the real period, we can at least find one of the prime factors of the composite N. Finally, we present some important points for designing an emulator for quantum computer simulation.
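As a worked illustration of the classical post-processing step, the sketch below reads a factor of N off an estimated even period r via gcd computations; N = 15, x = 7, r = 4 is the textbook example (the randomized estimation itself is the paper's contribution and is not reproduced here):

```python
from math import gcd

def factor_from_period(x, r, N):
    """Given x and an estimated period r with x^r = 1 (mod N),
    try to recover a nontrivial factor of N."""
    if r % 2:                      # odd period: no usable square root of 1
        return None
    y = pow(x, r // 2, N)
    for f in (gcd(y - 1, N), gcd(y + 1, N)):
        if 1 < f < N:
            return f
    return None

# Worked example: 7^4 = 2401 = 1 (mod 15), so r = 4 and
# gcd(7^2 - 1, 15) = 3 is a prime factor of 15.
print(factor_from_period(7, 4, 15))   # -> 3
```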
Abstract: This research deals with a flexible flowshop scheduling problem in which jobs arrive and are delivered in groups but are processed individually. Due to the special characteristics of each job, only a subset of machines in each stage is eligible to process that job. The objective function is the minimization of the sum of the completion times of the groups on the one hand, and the minimization of the sum of the differences between the completion time of each job and the delivery time of the group containing it (the waiting period) on the other hand. The problem can be stated as FFc | rj, Mj | irreg, which has many applications in production and service industries. A mathematical model is proposed, the problem is proved to be NP-complete, and an effective heuristic method is presented to schedule the jobs efficiently. This algorithm can then be used within the body of any metaheuristic algorithm for solving the problem.
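A minimal sketch of the stated bi-criteria objective, assuming a group is delivered when its last job completes (consistent with delivery of jobs in groups, though the paper may define delivery times differently):

```python
def objective(completion, groups):
    """Sum of group completion times plus each job's waiting period
    (group delivery time minus the job's own completion time).
    completion: {job: completion time}; groups: list of job lists."""
    total = 0.0
    for g in groups:
        c_group = max(completion[j] for j in g)   # assumed delivery time
        total += c_group                          # group completion term
        total += sum(c_group - completion[j] for j in g)   # waiting term
    return total

# Group 1 delivered at 5 (job "a" waits 2), group 2 at 4: 5 + 2 + 4 = 11.
print(objective({"a": 3, "b": 5, "c": 4}, [["a", "b"], ["c"]]))
```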
Abstract: This paper explores the phenomenon of metaphorization in English newspaper headlines from the perspective of pragmatic investigation. With relevance theory as the guideline, the paper explains the processing of metaphor with a pragmatic approach and points out that metaphor is a stimulus adopted by journalists to achieve optimal relevance in this ostensive communication, as well as a strategy to fulfill their writing purpose.
Abstract: A method and apparatus for noninvasive measurement of blood glucose concentration, based on a laser beam transilluminated through the index finger, are reported in this paper. The method relies on an atomic gas (He-Ne) laser operating at a wavelength of 632.8 nm. During measurement, the index finger is inserted into the glucose sensing unit, the transilluminated optical signal is converted into an electrical signal and compared with a reference electrical signal, and the resulting difference signal is processed by a signal processing unit that presents the result in the form of a blood glucose concentration. This method would enable monitoring of the blood glucose level of a diabetic patient continuously, safely, and noninvasively.
Abstract: Finding minimal forms of logical functions has important applications in the design of logical circuits. This task is solved by many different methods, but, frequently, they are not suitable for a computer implementation. We briefly summarise the well-known Quine-McCluskey method, which gives a unique procedure of computing and thus can be simply implemented, but which, even for simple examples, does not guarantee an optimal solution. Since the Petrick extension of the Quine-McCluskey method does not give a generally usable method for finding an optimum for logical functions with a high number of values, we focus on the interpretation of the result of the Quine-McCluskey method and show that it represents a set covering problem, which, unfortunately, is an NP-hard combinatorial problem. Therefore it must be solved by heuristic or approximation methods. We propose an approach based on genetic algorithms and show suitable parameter settings.
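A minimal genetic-algorithm sketch for the set covering formulation: the minterms form the universe and each prime implicant covers a subset; the fitness first enforces full cover, then prefers fewer implicants. Population size, mutation rate, and the toy instance are illustrative assumptions:

```python
import random

def ga_set_cover(universe, subsets, pop=40, gens=200, pmut=0.05, seed=1):
    rnd = random.Random(seed)
    n = len(subsets)

    def fitness(bits):
        covered = set()
        for i in range(n):
            if bits[i]:
                covered |= subsets[i]
        # Uncovered minterms dominate; ties broken by implicant count.
        return len(universe - covered) * len(universe) + sum(bits)

    popn = [[rnd.randint(0, 1) for _ in range(n)] for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=fitness)                    # best individuals first
        elite = popn[: pop // 2]
        children = []
        while len(children) < pop - len(elite):
            a, b = rnd.sample(elite, 2)           # one-point crossover
            cut = rnd.randrange(1, n)
            child = a[:cut] + b[cut:]
            children.append([1 - g if rnd.random() < pmut else g
                             for g in child])     # bit-flip mutation
        popn = elite + children
    return min(popn, key=fitness)

# Minterms 0..5; each set is the coverage of one candidate prime implicant.
universe = {0, 1, 2, 3, 4, 5}
subsets = [{0, 1, 2}, {2, 3}, {3, 4, 5}, {0, 5}]
best = ga_set_cover(universe, subsets)
print([i for i, b in enumerate(best) if b])  # indices of chosen implicants
```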
Abstract: This paper proposes new optimization techniques for a gas processing plant with uncertain feed and product flows. The problem is first formulated using a continuous linear deterministic approach. Subsequently, single and joint chance-constraint models for a steady-state process with time-dependent uncertainties are developed. The solution approach is based on converting the probabilistic problems into their equivalent deterministic form and solving them at different confidence levels. A case study of a real plant operation is used to implement the proposed model effectively. The optimization results indicate that prior decisions have to be made for operating the plant under uncertain feed and product flows by satisfying all the constraints at a 95% confidence level for the single chance-constrained case and an 85% confidence level for the joint chance-constrained case.
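As a small illustration of the deterministic-equivalent conversion (assuming a normally distributed right-hand side, which the paper may model differently), a single chance constraint P(a·x ≤ b) ≥ α with b ~ N(μ, σ²) becomes a·x ≤ μ − z_α·σ:

```python
from scipy.stats import norm

def deterministic_rhs(mu, sigma, alpha):
    """Tightened right-hand side of the deterministic equivalent of
    P(a.x <= b) >= alpha with b ~ Normal(mu, sigma) (normality assumed)."""
    return mu - norm.ppf(alpha) * sigma

# Feed availability with mean 100 and std 8 at the 95% confidence level.
print(round(deterministic_rhs(100.0, 8.0, 0.95), 2))   # -> 86.84
```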
Abstract: An atmospheric pressure plasma torch with a direct current arc discharge stabilized by a water vapor vortex was experimentally investigated. Water vapor superheated to 450 K was used as the plasma-forming gas. The plasma torch design is one of the most important factors in the stable operation of the device. The electrical and thermal characteristics of the plasma torch were determined during the experimental investigations. The design and the basic characteristics of the water vapor plasma torch are presented in the paper.
Plasma torches with the electric arc stabilized by a water vapor vortex provide special performance characteristics in some plasma processing applications, such as the thermal plasma neutralization and destruction of organic wastes, enabling the extraction of high-calorific-value synthesis gas as a by-product of the process. The syngas could be used as a surrogate fuel, partly reducing dependence on fossil fuels, or as a feedstock for hydrogen or methanol production.
Abstract: Simple methods for planning and measuring a non-patterned production system are developed from the basic definition of working efficiency. Processing time is taken as the variable and used to write the equation of production efficiency. This equation is then used to develop a planning method for the production of interest, formulated as a one-dimensional stock cutting problem. The application of the developed method shows that production efficiency and production plans can be determined effectively.
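The abstract does not spell out its planning method; as an illustrative stand-in, the sketch below solves the one-dimensional stock cutting problem with the standard first-fit-decreasing heuristic and reports an efficiency measure assumed here to be used length over total stock length:

```python
def first_fit_decreasing(pieces, stock_len):
    """Assign piece lengths to stock bars, longest pieces first,
    each into the first bar with enough remaining length."""
    remaining, plan = [], []
    for p in sorted(pieces, reverse=True):
        for i, rem in enumerate(remaining):
            if p <= rem:
                remaining[i] -= p
                plan[i].append(p)
                break
        else:                                  # no bar fits: open a new one
            remaining.append(stock_len - p)
            plan.append([p])
    return plan

plan = first_fit_decreasing([2.1, 3.5, 1.2, 2.8, 0.9], stock_len=6.0)
used = sum(sum(bar) for bar in plan)
print(plan, round(used / (len(plan) * 6.0), 3))   # efficiency = 0.875
```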
Abstract: Data stream analysis is the process of computing various summaries and derived values from large amounts of data which are continuously generated at a rapid rate. The nature of a stream does not allow a revisit of each data element. Furthermore, data processing must be fast to produce timely analysis results. These requirements impose constraints on the design of the algorithms to balance correctness against timely responses. Several techniques have been proposed over the past few years to address these challenges. They can be categorized as either data-oriented or task-oriented. The data-oriented approach analyzes a subset of data or a smaller transformed representation, whereas the task-oriented scheme solves the problem directly via approximation techniques. We propose a hybrid approach to tackle the data stream analysis problem: the data stream is both statistically transformed to a smaller size and computationally approximated in its characteristics. We adopt a Monte Carlo method in the approximation step. The data reduction is performed both horizontally and vertically through our EMR sampling method. The proposed method is analyzed by a series of experiments. We apply our algorithm to clustering and classification tasks to evaluate the utility of our approach.
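The EMR sampling method itself is the paper's contribution and is not specified in the abstract; as a hedged illustration of the generic ingredients, the sketch below performs one-pass horizontal reduction with standard reservoir sampling and then a Monte Carlo estimate of a stream summary from the sample alone:

```python
import random

def reservoir_sample(stream, k, seed=0):
    """Keep a uniform random sample of size k from a stream in a single
    pass, so no data element is ever revisited."""
    rnd = random.Random(seed)
    sample = []
    for i, x in enumerate(stream):
        if i < k:
            sample.append(x)
        else:
            j = rnd.randrange(i + 1)
            if j < k:
                sample[j] = x          # replace with decreasing probability
    return sample

# Monte Carlo step: estimate the stream mean from the reservoir only.
stream = (0.5 * x for x in range(1_000_000))
sample = reservoir_sample(stream, k=1000)
print(sum(sample) / len(sample))       # close to the true mean 249999.75
```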
Abstract: Environmental awareness and the depletion of petroleum resources are among the vital factors that motivate a number of researchers to explore the potential of reusing natural fiber as an alternative composite material in industries such as packaging, automotive, and building construction. Natural fibers are abundant, low in cost, lightweight, and, most importantly, biodegradable, which is why the resulting polymer composites are often called "eco-friendly" materials. However, their applications are still limited due to several factors such as moisture absorption, poor wettability, and large scatter in mechanical properties. Among the main challenges of natural-fiber-reinforced composites is the fibers' inclination to entangle and form agglomerates during processing due to fiber-fiber interaction. This tends to prevent good dispersion of the fibers in the matrix, resulting in poor interfacial adhesion between the hydrophobic matrix and the hydrophilic reinforcing natural fiber. To overcome this challenge, fiber treatment is one common alternative that can be used to modify the fiber surface topology by chemical, physical, or mechanical techniques. This paper focuses on the effect of mercerization treatment on the enhancement of the mechanical properties of natural fiber reinforced composites, or so-called biocomposites. It specifically discusses mercerization parameters and the resulting enhancement of the mechanical properties of natural fiber reinforced composites.
Abstract: The commutation control of a switched reluctance (SR) motor has nominally depended on a physical position detector. The physical rotor position sensor limits robustness and increases the size and inertia of the SR drive system. This paper describes a method to overcome these limitations by using the magnetization characteristics of the motor to indicate the rotor and stator teeth overlap status. The method uses active current probing pulses of the same magnitude to estimate the flux linkage in the winding being probed. A microprocessor is used to process the magnetization data to deduce the rotor-stator teeth overlap status and hence the rotor position. However, back-of-core saturation and mutual coupling introduce overlap detection errors, and hence commutation control errors. This paper presents the concept of the detection scheme and the effects of back-of-core saturation.
Abstract: In pattern recognition applications, low-level segmentation and high-level object recognition are generally considered as two separate steps. This paper presents a method that bridges the gap between low-level and high-level object recognition. It is based on a Bayesian network representation and a network propagation algorithm. At the low level it uses a hierarchical structure of quadratic spline wavelet image bases. The method is demonstrated on a simple circuit diagram component identification problem.
Abstract: To successfully provide a fast FIR filter with FFT algorithms, overlap-save algorithms can be used to lower the computational complexity and achieve the desired real-time processing. As the length of the input block increases in order to improve the efficiency, a larger volume of zero padding will greatly increase the computation length of the FFT. In this paper, we use overlapped block digital filtering to construct a parallel structure. As long as the down-sampling (or up-sampling) factor is an exact multiple of the length of the impulse response of the FIR filter, we can process the input block using a parallel structure and thus achieve a low-complexity fast FIR filter with overlap-save algorithms. With a long filter length, the performance and the throughput of the digital filtering system are also greatly enhanced.
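A minimal overlap-save implementation, assuming an FFT block length of 256 (the paper's parallel structure is not reproduced here): each block reuses the last len(h)-1 input samples of the previous block and discards the circularly wrapped outputs:

```python
import numpy as np

def overlap_save(x, h, nfft=256):
    m = len(h)
    step = nfft - (m - 1)                       # new samples per block
    H = np.fft.rfft(h, nfft)
    xp = np.concatenate([np.zeros(m - 1), x])   # prime the initial overlap
    out = []
    for start in range(0, len(x), step):
        block = xp[start:start + nfft]
        if len(block) < nfft:
            block = np.pad(block, (0, nfft - len(block)))
        y = np.fft.irfft(np.fft.rfft(block) * H, nfft)
        out.append(y[m - 1:])                   # drop wrapped samples
    return np.concatenate(out)[: len(x)]

x = np.random.default_rng(0).normal(size=4096)
h = np.ones(32) / 32                            # simple averaging FIR
print(np.allclose(overlap_save(x, h), np.convolve(x, h)[: len(x)]))  # True
```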
Abstract: In this work a novel approach for color image segmentation is discussed, using higher order entropy as a textural feature for the determination of thresholds over a two-dimensional image histogram. A similar approach is applied to achieve multi-level thresholding in both grayscale and color images. The paper discusses two methods of color image segmentation using RGB space as the standard processing space. The threshold for segmentation is decided by the maximization of the conditional entropy in the two-dimensional histogram of the color image separated into three grayscale images of R, G, and B. The features are first developed independently for the three (R, G, B) planes, and then combined to obtain different color component segmentations. Considering local maxima instead of the global maximum of the conditional entropy yields multiple thresholds for the same image, which forms the basis for multilevel thresholding.
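A minimal sketch of the entropy-maximizing threshold on a single plane, using a one-dimensional histogram as a simplification of the paper's two-dimensional conditional-entropy criterion, applied independently to the R, G, and B planes; local maxima of the same score would give the multilevel thresholds:

```python
import numpy as np

def class_entropy(q):
    q = q[q > 0]
    if q.size == 0:
        return 0.0
    q = q / q.sum()
    return float(-np.sum(q * np.log(q)))

def entropy_threshold(plane, bins=256):
    """Threshold maximizing the summed entropies of the two classes
    of the plane's histogram (1-D stand-in for the 2-D criterion)."""
    hist, _ = np.histogram(plane, bins=bins, range=(0, bins))
    p = hist / hist.sum()
    scores = [class_entropy(p[:t]) + class_entropy(p[t:])
              for t in range(1, bins)]
    return 1 + int(np.argmax(scores))

# Apply per R/G/B plane, as the abstract describes (random stand-in image).
img = np.random.default_rng(0).integers(0, 256, size=(64, 64, 3))
print([entropy_threshold(img[..., c]) for c in range(3)])
```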