Abstract: This work presents a novel means of extracting fixed-length parameters from voice signals, such that words can be recognized in linear time. The power and the zero-crossing rate are first calculated segment by segment from a voice signal; by doing so, two feature sequences are generated. We then construct an FIR system across these two sequences. The parameters of this FIR system, used as the input of a multilayer perceptron recognizer, can be derived by recursive LSE (least-squares estimation), implying that the complexity of the overall process is linear in the signal size. In the second part of this work, we introduce a weighting factor λ to emphasize recent input; therefore, we can further recognize continuous speech signals. Experiments employ the voice signals of the numbers zero to nine, spoken in Mandarin Chinese. The proposed method is verified to recognize voice signals efficiently and accurately.
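As a concrete illustration of the segment-wise front end described above, the following sketch computes the short-time power and zero-crossing rate (ZCR) feature sequences; the segment length of 256 samples and the helper name are illustrative assumptions, not taken from the paper.

```python
# Sketch of the front end described above: short-time power and
# zero-crossing rate (ZCR) computed segment by segment, yielding the two
# feature sequences. The segment length of 256 samples is an assumption.

def segment_features(signal, seg_len=256):
    """Return (power, zcr) feature sequences, one value per segment."""
    powers, zcrs = [], []
    for start in range(0, len(signal) - seg_len + 1, seg_len):
        seg = signal[start:start + seg_len]
        # Short-time power: mean squared amplitude of the segment.
        powers.append(sum(x * x for x in seg) / seg_len)
        # ZCR: fraction of adjacent sample pairs whose signs differ.
        crossings = sum(1 for a, b in zip(seg, seg[1:]) if (a >= 0) != (b >= 0))
        zcrs.append(crossings / (seg_len - 1))
    return powers, zcrs
```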
Abstract: This paper presents a comparative study of the two most
popular control strategies for Permanent Magnet Synchronous Motor
(PMSM) drives: field-oriented control (FOC) and direct torque
control (DTC). The comparison is based on various criteria including
basic control characteristics, dynamic performance, and
implementation complexity. The study is done by simulation using
the Simulink Power System Blockset that allows a complete
representation of the power section (inverter and PMSM) and the
control system. The simulation and evaluation of both control
strategies are performed using actual parameters of Permanent
Magnet Synchronous Motor fed by an IGBT PWM inverter.
Abstract: Access control is a critical security service in Wireless
Sensor Networks (WSNs). To prevent malicious nodes from joining
the sensor network, access control is required. On the one hand, the WSN
must be able to authorize and grant users the right to access the
network. On the other hand, the WSN must organize data collected by
sensors in such a way that an unauthorized entity (the adversary)
cannot make arbitrary queries. This restricts the network access only
to eligible users and sensor nodes, while queries from outsiders will
not be answered or forwarded by nodes. In this paper we present
different access control schemes so as to find out their objectives,
provisioning, communication complexity, limits, etc. Using the node
density parameter, we also provide a comparison of these proposed
access control algorithms based on the network topology which can
be flat or hierarchical.
Abstract: This paper investigates a possible optimization of some linear algebra problems that can be solved by parallel processing using special arrays called systolic arrays. Some special types of transformations are used for the design of these arrays, and their characteristics are shown. The main focus is on the advantages of these arrays in the parallel computation of matrix products, with a special approach to the design of a systolic array for matrix multiplication. Multiplication of large matrices requires a lot of computational time, and its complexity is O(n³). Many algorithms (both sequential and parallel) have been developed with the purpose of minimizing the calculation time. Systolic arrays are well suited for this purpose. In this paper we show that using an appropriate transformation leads to more optimal arrays for calculations of this type.
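The recurrence that such a systolic array pipelines is the familiar triple-loop matrix product. The minimal Python sketch below (sequential, for reference only, not the systolic design itself) shows the O(n³) computation whose inner k-loop the array distributes across processing elements.

```python
def matmul(A, B):
    """Naive O(n^3) matrix product. Each C[i][j] accumulates
    A[i][k] * B[k][j]; this inner-product recurrence is exactly what a
    systolic array evaluates, with the k-loop pipelined across cells."""
    n, m, p = len(A), len(B), len(B[0])
    C = [[0] * p for _ in range(n)]
    for i in range(n):
        for j in range(p):
            for k in range(m):
                C[i][j] += A[i][k] * B[k][j]
    return C
```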
Abstract: The first stage of every microwave receiver is a Low Noise Amplifier (LNA) circuit, and this stage plays an important role in the quality of the receiver. The design of an LNA in a Radio Frequency (RF) circuit requires a trade-off among many important characteristics such as gain, Noise Figure (NF), stability, power consumption and complexity. This situation forces designers to make choices in the design of RF circuits. In this paper the aim is to design and simulate a single-stage LNA circuit with high gain and low noise using a MESFET for the frequency range of 5 GHz to 6 GHz. The design and simulation process is done using Advanced Design System (ADS). A single-stage LNA has been successfully designed with 15.83 dB forward gain and 1.26 dB noise figure at 5.3 GHz. The designed LNA also works stably over the frequency range of 5 GHz to 6 GHz.
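The emphasis above on the receiver's first stage reflects Friis' classical formula for the noise figure of cascaded stages (standard background, not stated in the abstract): the first stage's noise figure F₁ adds in full, while the noise of later stages is suppressed by the preceding gains, so a low-noise, high-gain first stage dominates the receiver's overall noise performance.

```latex
F_{\mathrm{total}} = F_1 + \frac{F_2 - 1}{G_1} + \frac{F_3 - 1}{G_1 G_2} + \cdots
```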
Abstract: In this paper we present a soft timing phase estimation (STPE) method for wireless mobile receivers operating at low signal-to-noise ratios (SNRs). Discrete Polyphase Matched (DPM) filters, a log-maximum a posteriori probability (Log-MAP) algorithm and/or a Soft-Output Viterbi Algorithm (SOVA) are combined to derive a new timing recovery (TR) scheme. We apply this scheme to a wireless cellular communication system model that comprises a raised cosine filter (RCF) and a bit-interleaved turbo-coded multi-level modulation (BITMM) scheme; the channel is assumed to be memoryless. Furthermore, no clock signals are transmitted to the receiver, contrary to the classical data-aided (DA) models. This new model ensures that both the bandwidth and the power of the communication system are conserved. However, the computational complexity of ideal turbo synchronization is increased by 50%. Several simulation tests on bit error rate (BER) and block error rate (BLER) versus low SNR reveal that the proposed iterative soft timing recovery (ISTR) scheme outperforms the conventional schemes.
Abstract: We consider the topological entropy of maps that, in general, cannot be described by one-dimensional dynamics. In particular, we show that for a multivalued map F generated by single-valued maps, the topological entropy of each of the single-valued maps bounds the topological entropy of F from below.
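Under the assumption (consistent with the abstract) that F is generated by single-valued maps f₁, …, fₘ, the stated lower bound can be written as:

```latex
h(F) \;\geq\; \max_{1 \le i \le m} h(f_i),
\qquad F(x) = \{\, f_1(x), \dots, f_m(x) \,\}.
```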
Abstract: LDPC codes could be used in magnetic storage devices because of their better decoding performance compared to other error-correction codes. However, their hardware implementation results in large and complex decoders. This is one of the main obstacles to incorporating such decoders in magnetic storage devices. We construct small, high-girth, column-weight-2 codes of a given rate from cage graphs. Though these codes have low performance compared to higher-column-weight codes, they are easier to implement. The ease of implementation makes them more suitable for applications such as magnetic recording. Cages are the smallest known regular graphs of a given degree and girth, which give us the smallest known column-weight-2 codes for a given size, girth and rate of the code.
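As an illustrative sketch (not necessarily the authors' exact construction): the vertex-edge incidence matrix of any graph has exactly two ones per column, so it can serve directly as the parity-check matrix of a column-weight-2 LDPC code, and taking the graph to be a cage keeps the code short for a given girth. The example below uses the Petersen graph, the (3,5)-cage.

```python
# Column-weight-2 parity-check matrix from a graph: H[v][e] = 1 iff
# vertex v is an endpoint of edge e, so every column has weight exactly 2.
# The Petersen graph (the (3,5)-cage) is used as a small worked example.

PETERSEN_EDGES = [
    (0, 1), (1, 2), (2, 3), (3, 4), (4, 0),   # outer 5-cycle
    (0, 5), (1, 6), (2, 7), (3, 8), (4, 9),   # spokes
    (5, 7), (7, 9), (9, 6), (6, 8), (8, 5),   # inner pentagram
]

def incidence_matrix(n_vertices, edges):
    """Vertex-edge incidence matrix: rows are checks, columns are bits."""
    H = [[0] * len(edges) for _ in range(n_vertices)]
    for e, (u, v) in enumerate(edges):
        H[u][e] = 1
        H[v][e] = 1
    return H

H = incidence_matrix(10, PETERSEN_EDGES)
```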
Abstract: The heuristic decision rules used for project
scheduling will vary depending upon the project's size, complexity,
duration, personnel, and owner requirements. The concept of project
complexity has received little detailed attention. The need to
differentiate between easy and hard problem instances and the
interest in isolating the fundamental factors that determine the
computing effort required by these procedures inspired a number of
researchers to develop various complexity measures.
In this study, the most common measures of project complexity are
presented. A new measure of project complexity is developed. The
main advantage of the proposed measure is that it considers size, shape and logic characteristics, time characteristics, resource demands and availability characteristics, as well as the number of critical activities and critical paths. The sensitivity of the proposed measure to the complexity of project networks has been tested and evaluated against the other complexity measures on the fifty project networks considered in this study. The developed measure showed more sensitivity to changes in the network data and gives accurate quantified results when comparing the complexities of networks.
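Since the proposed measure counts critical activities and critical paths, a standard CPM forward/backward pass (a textbook technique, not the paper's measure itself) shows how those critical activities are identified; the activity-network format and the example network are illustrative assumptions.

```python
# Standard CPM forward/backward pass identifying critical activities.
# Network format (an assumption): {name: (duration, [predecessor names])}.

def topo_order(activities):
    """Predecessor-first ordering of the activity network."""
    order, seen = [], set()

    def visit(a):
        if a not in seen:
            seen.add(a)
            for p in activities[a][1]:
                visit(p)
            order.append(a)

    for a in activities:
        visit(a)
    return order

def critical_activities(activities):
    """Return the set of activities with zero total float."""
    order = topo_order(activities)
    es, ef = {}, {}                      # earliest start / finish
    for a in order:
        dur, preds = activities[a]
        es[a] = max((ef[p] for p in preds), default=0)
        ef[a] = es[a] + dur
    makespan = max(ef.values())
    succs = {a: [] for a in activities}
    for a, (_, preds) in activities.items():
        for p in preds:
            succs[p].append(a)
    ls, lf = {}, {}                      # latest start / finish
    for a in reversed(order):
        lf[a] = min((ls[s] for s in succs[a]), default=makespan)
        ls[a] = lf[a] - activities[a][0]
    return {a for a in activities if ls[a] == es[a]}
```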
Abstract: Organizational innovation favors technological
innovation, but does it also influence technological innovation
persistence? This article investigates empirically the pattern of
technological innovation persistence and tests the potential impact of
organizational innovation using firm-level data from three waves of
the French Community Innovation Surveys. Evidence shows a
positive effect of organizational innovation on technological
innovation persistence, according to various measures of
organizational innovation. Moreover, this impact is more significant
for complex innovators (i.e., those who innovate in both products and
processes). These results highlight the complexity of managing organizational practices with regard to the firm's technological innovation. They also add to our understanding of the drivers of innovation persistence, through a focus on an often-forgotten dimension of innovation in a broader sense.
Abstract: This study focuses on examining why the range of
experience with respect to HIV infection is so diverse, especially in
regard to the latency period. An agent-based approach in modelling
the infection is used to extract high-level behaviour which cannot be
obtained analytically from the set of interaction rules at the cellular
level. A prototype model encompasses local variation in baseline
properties, contributing to the individual disease experience, and is
included in a network which mimics the chain of lymph nodes. The
model also accounts for stochastic events such as viral mutations.
The size and complexity of the model require major computational
effort and parallelisation methods are used.
Abstract: Detection, feature extraction and pose estimation of people in images and video are made challenging by the variability of human appearance, the complexity of natural scenes and the high dimensionality of articulated body models; they have also been important topics in image, signal and vision computing in recent years. In this paper, a system that classifies people in 2D images into four types is tested and proposed. The system extracts the human body from the image, determines its size, and classifies it into one of four categories: tall fat, short fat, tall thin and short thin. The system also extracts body dimensions, such as height and width, and shows them in the output.
Abstract: A new code synchronization algorithm is proposed in
this paper for the secondary cell-search stage in wideband CDMA
systems. Rather than using the Cyclically Permutable (CP) code in the
Secondary Synchronization Channel (S-SCH) to simultaneously
determine the frame boundary and scrambling code group, the new
synchronization algorithm implements the same function with less
system complexity and less Mean Acquisition Time (MAT). The
Secondary Synchronization Code (SSC) is redesigned by splitting into
two sub-sequences. We treat the information of scrambling code group
as data bits and use simple time diversity BCH coding for further
reliability. It avoids involved and time-costly Reed-Solomon (RS)
code computations and comparisons. Analysis and simulation results
show that the Synchronization Error Rate (SER) yielded by the new
algorithm in Rayleigh fading channels is close to that of the
conventional algorithm in the standard. This new synchronization
algorithm reduces system complexities, shortens the average
cell-search time and can be implemented in the slot-based cell-search
pipeline. By exploiting antenna diversity and pipelining correlation
processes, the new algorithm also lends itself to flexible application in
multiple-antenna systems.
Abstract: Multi-loop (De-centralized) Proportional-Integral-
Derivative (PID) controllers have been used extensively in process
industries due to their simple structure for control of multivariable
processes. The objective of this work is to design multiple-model
adaptive multi-loop PID strategy (Multiple Model Adaptive-PID)
and neural network based multi-loop PID strategy (Neural Net
Adaptive-PID) for the control of multivariable system. The first
method combines the output of multiple linear PID controllers,
each describing process dynamics at a specific level of operation.
The global output is an interpolation of the individual multi-loop
PID controller outputs weighted based on the current value of the
measured process variable. In the second method, a neural network is used to calculate the PID controller parameters based on the scheduling variable that corresponds to a major shift in the process dynamics. The proposed control schemes are simple in structure with low computational complexity. The effectiveness of the proposed control schemes has been demonstrated on the CSTR process, which exhibits dynamic non-linearity.
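The first scheme's interpolation step might be sketched as follows; the inverse-distance validity weighting, the operating points and the helper name `blended_output` are illustrative assumptions, not the paper's exact formulation.

```python
# Sketch of the interpolation described above: local PID outputs are
# blended with validity weights based on the distance between the measured
# process variable and each local model's operating point.

def blended_output(pv, controllers):
    """controllers: list of (operating_point, local_pid_output) pairs.
    Returns the weighted average of the local controller outputs."""
    # Inverse-distance validity weight for each local controller
    # (the small epsilon avoids division by zero at an operating point).
    weights = [1.0 / (abs(pv - op) + 1e-6) for op, _ in controllers]
    total = sum(weights)
    return sum(w * u for w, (_, u) in zip(weights, controllers)) / total
```

At an operating point, the corresponding local controller dominates; between operating points, the global output blends the neighbours smoothly.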
Abstract: The tree-structured approach to non-uniform filterbanks (NUFB) is normally used for perfect reconstruction (PR). PR is not always feasible due to certain limitations, i.e., constraints in selecting design parameters and design complexity; moreover, the output is sometimes severely affected by aliasing error if the necessary and sufficient conditions of PR are not satisfied perfectly. Therefore, researchers have shown a general interest in near-perfect reconstruction (NPR). In this work, an optimized tree-structure technique is used for the design of an NPR non-uniform filterbank. Window functions of the Blackman family are used to design the prototype FIR filter. A single-variable linear optimization is used to minimize the amplitude distortion. The main feature of the proposed design is its simplicity together with a linear-phase property.
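A Blackman-windowed, linear-phase FIR prototype of the kind described above can be sketched as follows; the filter length and cutoff are illustrative assumptions, and the paper's single-variable optimization of the band edge is not reproduced here.

```python
import math

def blackman_lowpass(num_taps, cutoff):
    """Windowed-sinc lowpass prototype; cutoff in cycles/sample (< 0.5),
    num_taps odd so the filter is symmetric (exact linear phase)."""
    M = num_taps - 1
    h = []
    for n in range(num_taps):
        t = n - M / 2                     # delay for causality
        if t == 0:
            ideal = 2 * cutoff            # sinc limit at t = 0
        else:
            ideal = math.sin(2 * math.pi * cutoff * t) / (math.pi * t)
        # Blackman window term.
        w = (0.42 - 0.5 * math.cos(2 * math.pi * n / M)
             + 0.08 * math.cos(4 * math.pi * n / M))
        h.append(ideal * w)
    return h
```

The symmetry h[n] = h[M−n] is what gives the linear-phase property noted in the abstract.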
Abstract: Authentication of multimedia contents has gained much attention in recent times. In this paper, we propose a secure semi-fragile watermarking scheme with a choice of two watermarks to be embedded. This technique operates in the integer wavelet domain and makes use of semi-fragile watermarks to achieve better robustness. A self-recovering algorithm is employed that hides the image digest in some wavelet subbands to detect possible malevolent object manipulation undergone by the image (object replacement and/or deletion). The semi-fragility makes the scheme tolerant of JPEG lossy compression down to a quality factor of 70% while locating the tampered area accurately. In addition, the system ensures more security because the embedded watermarks are protected with private keys. The computational complexity is reduced using a parameterized integer wavelet transform. Experimental results show that the proposed scheme guarantees the safety of the watermark, image recovery and accurate location of the tampered area.
Abstract: In order to protect original data, watermarking is the first direction to consider for digital information copyright. However, algorithms that achieve high image quality may not be able to run on embedded systems because their computation is too complex. At the same time, since integrated circuits have made huge progress at a cheap price, most of today's algorithms need to run on consumer products. In this paper, we propose a novel algorithm that efficiently inserts a watermark into a digital image and is very easy to implement on a digital signal processor. Furthermore, we select a general-purpose and cheap digital signal processor made by Analog Devices to fit consumer applications. The experimental results show that the image quality after watermark insertion reaches 46 dB, which is acceptable to human vision, and that the algorithm executes in real time on the digital signal processor.
Abstract: In this paper, the processing of sonar signals has been
carried out using Minimal Resource Allocation Network (MRAN)
and a Probabilistic Neural Network (PNN) to differentiate
commonly encountered features in indoor environments. The
stability-plasticity behaviors of both networks have been
investigated. The experimental results show that MRAN possesses
lower network complexity but exhibits higher plasticity than
PNN. An enhanced version called parallel MRAN (pMRAN) is
proposed to solve this problem; it is proven to be stable in
prediction and also outperforms the original MRAN.
Abstract: Unlike general-purpose processors, digital signal
processors (DSP processors) are strongly application-dependent. To
meet the needs for diverse applications, a wide variety of DSP
processors based on different architectures ranging from the
traditional to VLIW have been introduced to the market over the
years. The functionality, performance, and cost of these processors
vary over a wide range. In order to select a processor that meets the
design criteria for an application, processor performance is usually
the major concern for digital signal processing (DSP) application
developers. Performance data are also essential for the designers of
DSP processors to improve their design. Consequently, several DSP
performance benchmarks have been proposed over the past decade or
so. However, none of these benchmarks seem to have included recent
new DSP applications.
In this paper, we use a new benchmark that we recently developed
to compare the performance of popular DSP processors from Texas
Instruments and StarCore. The new benchmark is based on the
Selectable Mode Vocoder (SMV), a speech-coding program from the
recent third generation (3G) wireless voice applications. All
benchmark kernels are compiled by the compilers of the respective
DSP processors and run on their simulators. Weighted arithmetic
mean of clock cycles and arithmetic mean of code size are used to
compare the performance of five DSP processors.
In addition, we studied how the performance of a processor is
affected by code structure, processor architecture features and
compiler optimization. The extensive experimental data gathered,
analyzed, and presented in this paper should be helpful for DSP
processor and compiler designers to meet their specific design goals.
Abstract: Visualizing the “Courses-Pre-Required Architecture” on the screen has proven to be useful and helpful for university actors, and especially for students. Indeed, students can easily identify courses and their prerequisites, perceive the courses to follow in the future, and then rapidly choose the appropriate course to register for. Given a set of courses and their prerequisites, we present an algorithm for visualizing a graph, entitled the “Courses-Pre-Required-Graph”, that presents courses and their prerequisites in order to help students recognize, on their own, which courses to take in the future and perceive the content of all the courses they will study. Our algorithm visualizes the “Courses-Pre-Required-Graph” using the “Force Directed Placement” technique in such a way that courses are easily identifiable. The time complexity of our drawing algorithm is O(n²), where n is the number of courses in the “Courses-Pre-Required-Graph”.
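A minimal Fruchterman-Reingold-style force-directed placement, in the spirit of the algorithm described above, might look like this; all constants, the cooling schedule and the example course names are illustrative assumptions. Each iteration costs O(n²) because every pair of courses repels, matching the complexity stated in the abstract.

```python
import math
import random

def layout(nodes, edges, iters=100, k=0.2):
    """Fruchterman-Reingold-style layout; returns {node: [x, y]}."""
    random.seed(0)                       # reproducible initial placement
    pos = {v: [random.random(), random.random()] for v in nodes}
    for step in range(iters):
        disp = {v: [0.0, 0.0] for v in nodes}
        # Repulsion between every pair of nodes: the O(n^2) part.
        for i, u in enumerate(nodes):
            for v in nodes[i + 1:]:
                dx = pos[u][0] - pos[v][0]
                dy = pos[u][1] - pos[v][1]
                d = math.hypot(dx, dy) or 1e-9
                f = k * k / d            # repulsive magnitude
                disp[u][0] += f * dx / d; disp[u][1] += f * dy / d
                disp[v][0] -= f * dx / d; disp[v][1] -= f * dy / d
        # Attraction along prerequisite edges.
        for u, v in edges:
            dx = pos[u][0] - pos[v][0]
            dy = pos[u][1] - pos[v][1]
            d = math.hypot(dx, dy) or 1e-9
            f = d * d / k                # attractive magnitude
            disp[u][0] -= f * dx / d; disp[u][1] -= f * dy / d
            disp[v][0] += f * dx / d; disp[v][1] += f * dy / d
        # Cap displacement by a decreasing temperature (cooling).
        t = 0.1 * (1 - step / iters)
        for v in nodes:
            dlen = math.hypot(*disp[v]) or 1e-9
            pos[v][0] += disp[v][0] / dlen * min(dlen, t)
            pos[v][1] += disp[v][1] / dlen * min(dlen, t)
    return pos
```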