Abstract: Variable channel conditions in underwater networks, and variable distances between sensors due to water currents, lead to a variable bit error rate (BER). This variability in BER strongly affects the energy efficiency of the error correction techniques used. In this paper an energy-efficient adaptive hybrid error correction technique (AHECT) is proposed. AHECT adaptively switches from pure retransmission (ARQ) in low-BER conditions to a hybrid technique with variable encoding rates (ARQ & FEC) in high-BER conditions. An adaptation algorithm is proposed that depends on a precalculated packet acceptance rate (PAR) look-up table, the current BER, the packet size and the error correction technique in use. Based on this adaptation algorithm, a periodic 3-bit feedback is added to the acknowledgment packet to state which error correction technique suits the current channel conditions and distance. Comparative studies between this technique and others show that AHECT is more energy efficient and has a higher probability of success than all of them.
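The adaptation step can be illustrated with a small sketch. All table values, mode names and the BER threshold below are assumptions for illustration; the paper's actual PAR table also depends on packet size.

```python
# Hypothetical PAR look-up table: (mode, BER band) -> packet acceptance rate.
PAR_TABLE = {
    ("ARQ",     "low"):  0.98,
    ("ARQ",     "high"): 0.40,
    ("FEC-1/2", "low"):  0.90,
    ("FEC-1/2", "high"): 0.85,
    ("FEC-2/3", "low"):  0.93,
    ("FEC-2/3", "high"): 0.70,
}

# Assumed 3-bit feedback codes carried in the acknowledgment packet.
FEEDBACK_CODE = {"ARQ": 0b000, "FEC-2/3": 0b001, "FEC-1/2": 0b010}

def ber_band(ber, threshold=1e-4):
    """Quantize the measured BER into the bands used by the table."""
    return "low" if ber < threshold else "high"

def select_mode(ber):
    """Return (mode, 3-bit feedback) with the highest PAR for this BER."""
    band = ber_band(ber)
    candidates = {m: p for (m, b), p in PAR_TABLE.items() if b == band}
    mode = max(candidates, key=candidates.get)
    return mode, FEEDBACK_CODE[mode]
```

With these assumed table values, a clean channel keeps pure ARQ, while a noisy one switches to the stronger rate-1/2 FEC hybrid.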
Abstract: This paper introduces two decoders for binary linear codes based on metaheuristics. The first uses a genetic algorithm and the second combines a genetic algorithm with a feed-forward neural network. The genetic-algorithm decoder (DAG), applied to BCH and convolutional codes, performs well compared to the Chase-2 and Viterbi algorithms respectively, and reaches the performance of OSD-3 for some Quadratic Residue (QR) codes. The algorithm is less complex for linear block codes of large block length; furthermore, its performance can be improved by tuning the decoder's parameters, in particular the number of individuals per population and the number of generations. In the second algorithm the search space, which in DAG was limited to the codeword space, covers the whole binary vector space. It avoids a great number of coding operations by using a neural network, which greatly reduces the complexity of the decoder while maintaining comparable performance.
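The genetic-algorithm search over the codeword space can be sketched on a toy (7,4) Hamming code. This is an illustration of the idea, not the paper's DAG decoder; the code, population size and mutation rate are arbitrary choices.

```python
import random

G = [  # generator matrix of a systematic (7,4) Hamming code
    [1, 0, 0, 0, 1, 1, 0],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def encode(msg):
    return [sum(m * g for m, g in zip(msg, col)) % 2 for col in zip(*G)]

def fitness(msg, received):
    """Negative Hamming distance between the candidate's codeword and rx."""
    return -sum(c != r for c, r in zip(encode(msg), received))

def ga_decode(received, pop_size=12, generations=40, p_mut=0.15, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(4)] for _ in range(pop_size)]
    best = max(pop, key=lambda m: fitness(m, received))
    for _ in range(generations):
        pop.sort(key=lambda m: fitness(m, received), reverse=True)
        parents = pop[: pop_size // 2]               # truncation selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, 4)                # one-point crossover
            children.append([bit ^ (rng.random() < p_mut)  # bit-flip mutation
                             for bit in a[:cut] + b[cut:]])
        pop = parents + children
        best = max(pop + [best], key=lambda m: fitness(m, received))
    return best
```

Because the fitness is the distance to the received word, the best individual converges to the maximum-likelihood message for this toy code.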
Abstract: One of the major disadvantages of minimally invasive surgery (MIS) is the lack of tactile feedback to the surgeon. In order to identify and avoid damage to complex tissue grasped by endoscopic graspers, it is important to measure the local softness of tissue during MIS. One way to display the measured softness to the surgeon is a graphical method. In this paper, a new tactile sensor is reported. The sensor consists of an array of four softness sensors integrated into the jaws of a modified commercial endoscopic grasper. Each individual softness sensor consists of two piezoelectric polyvinylidene fluoride (PVDF) films, which are positioned below a rigid and a compliant cylinder. The compliant cylinder is fabricated using a micro-molding technique. The combination of output voltages from the PVDF films is used to determine the softness of the grasped object. A theoretical analysis of the sensor is also presented.
A method has also been developed to convey the measured tactile softness to the surgeon graphically. In this approach, the proposed system, including the interfacing and data acquisition card, receives signals from the array of softness sensors. After the signals are processed, the tactile information is displayed by means of a color-coding method. It is shown that the degrees of softness of the grasped objects or tissues can be visually differentiated and displayed on a monitor.
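The sensing and display principle can be sketched as follows. The actual voltage-to-softness relation is derived in the paper's theoretical analysis; the ratio and the red-to-green color map below are illustrative assumptions only.

```python
def softness_index(v_rigid, v_compliant):
    """Assumed monotone softness measure from the two PVDF film voltages:
    the force split between the rigid and compliant cylinders shifts with
    the softness of the grasped object."""
    return v_compliant / (v_rigid + v_compliant)

def color_code(softness):
    """Map a softness index in [0, 1] to an RGB triple:
    red for hard objects, green for soft ones."""
    return (int(255 * (1 - softness)), int(255 * softness), 0)
```

A display loop would evaluate `color_code(softness_index(...))` per sensing element and paint the four jaw regions accordingly.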
Abstract: In this paper, we present an innovative scheme for blindly extracting message bits from an image distorted by an attack. A Support Vector Machine (SVM) is used to nonlinearly classify the bits of the embedded message. Traditionally, a hard decoder is used under the assumption that the underlying model of the Discrete Cosine Transform (DCT) coefficients does not appreciably change. In the case of an attack, however, the distribution of the image coefficients is heavily altered: the distributions of the sufficient statistics at the receiving end corresponding to the antipodal signals overlap, and a simple hard decoder fails to classify them properly. We therefore treat message retrieval of the antipodal signal as a binary classification problem and use a machine learning technique, the SVM, to retrieve the message when a certain specific class of attacks is most probable. To validate the SVM-based decoding scheme, we take Gaussian noise as a test case. We generate a data set using 125 images and 25 different keys. The polynomial kernel of the SVM achieved 100 percent accuracy on the test data.
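The role of the polynomial kernel can be sketched with a dependency-free stand-in: a kernel perceptron using the same kernel, trained on a toy XOR-shaped set standing in for the overlapping sufficient statistics. The paper itself trains an SVM; the data and kernel degree here are illustrative.

```python
def sign(v):
    return 1 if v >= 0 else -1

def poly_kernel(x, y, degree=2, c=1.0):
    """Polynomial kernel (x.y + c)^degree, as used for nonlinear decisions."""
    return (sum(a * b for a, b in zip(x, y)) + c) ** degree

def train(samples, labels, epochs=10):
    """Kernel perceptron: alpha[i] counts the mistakes made on sample i."""
    alpha = [0] * len(samples)
    for _ in range(epochs):
        for i, (x, y) in enumerate(zip(samples, labels)):
            score = sum(a * yl * poly_kernel(s, x)
                        for a, yl, s in zip(alpha, labels, samples))
            if sign(score) != y:
                alpha[i] += 1
    return alpha

def predict(x, alpha, samples, labels):
    return sign(sum(a * y * poly_kernel(s, x)
                    for a, y, s in zip(alpha, labels, samples)))

# A toy set no linear (hard) decision rule separates, but the kernel does:
samples = [(1, 1), (-1, -1), (1, -1), (-1, 1)]
labels = [1, 1, -1, -1]
alpha = train(samples, labels)
```

The hard-decoder failure mode is exactly the linear-inseparability shown here: a single threshold cannot split the overlapping classes, while the degree-2 kernel can.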
Abstract: This paper presents an enhanced frame-based video coding scheme. The input source video to the enhanced frame-based video encoder consists of a rectangular-size video and the shapes of arbitrarily shaped objects on the video frames. The rectangular frame texture is encoded by a conventional frame-based coding technique and the video object's shape is encoded using contour-based vertex coding. Several useful content-based functionalities can be achieved by utilizing the shape information in the bitstream, at the cost of a very small overhead to the bitrate.
Abstract: Bangla vowel characterization determines the spectral properties of Bangla vowels for efficient synthesis as well as recognition. In this paper, Bangla vowels in isolated words are analyzed based on a speech production model within an analysis-by-synthesis framework. This leads to the extraction of spectral parameters for the production model in order to produce the different Bangla vowel sounds. The real and synthetic spectra are compared, and a weighted square error is computed, along with the error in the formant bandwidths, for an efficient representation of Bangla vowels. The extracted features give a good representation of the targeted Bangla vowels. Such a representation also plays an essential role in low-bit-rate speech coding and vocoders.
Abstract: This paper presents the design of source encoding calculator software which implements two famous algorithms from the field of information theory: the Shannon-Fano and Huffman schemes. The design makes it easy to apply the algorithms without resorting to a cumbersome, tedious and error-prone manual encoding of the signals during transmission. The work describes the design of the software, how it works, a comparison with related works, its efficiency, its usefulness in information technology studies, and its future prospects for engineers, students, technicians and others. The designed "Encodia" software has been developed, tested and found to meet the intended requirements. It is expected that this application will help students and teaching staff in their daily information-theory-related tasks. Work is ongoing to extend the tool so that it is also useful in research activities on source coding.
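The Huffman half of such a calculator reduces to repeatedly merging the two lowest-weight subtrees. A minimal sketch (an independent illustration, not the Encodia source; the Shannon-Fano scheme, a similar top-down split, is omitted):

```python
import heapq

def huffman_codes(freqs):
    """Build a Huffman code book {symbol: bitstring} from symbol -> weight.
    The middle tuple element is a tie-breaking counter so the heap never
    compares the codebook dicts."""
    heap = [(w, i, {s: ""}) for i, (s, w) in enumerate(sorted(freqs.items()))]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        w1, _, c1 = heapq.heappop(heap)   # two lowest-weight subtrees...
        w2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + b for s, b in c1.items()}   # ...merge under a
        merged.update({s: "1" + b for s, b in c2.items()})  # new parent
        heapq.heappush(heap, (w1 + w2, count, merged))
        count += 1
    return heap[0][2]
```

For weights a:5, b:2, c:1, d:1 this yields code lengths 1, 2, 3, 3, and the resulting code book is prefix-free by construction.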
Abstract: A fast adaptive Tomlinson-Harashima (T-H) precoder structure is presented for indoor wireless communications, where the channel may vary due to rotation and small movements of the mobile terminal. A frequency-selective slow-fading channel, time-invariant over a frame, is assumed. In this adaptive T-H precoder, the feedback coefficients are updated at the end of every uplink frame using a system identification technique for channel estimation, in contrast to the conventional T-H precoding concept, where the channel is estimated at the start of the uplink frame via the Wiener solution. The conventional T-H precoder assumes the channel is time-invariant over both the uplink and downlink frames. By assuming the channel is time-invariant over only one frame instead of two, the proposed adaptive T-H precoder yields better performance than the conventional one when the channel varies in the uplink after the training sequence is received.
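The modulo feedback operation underlying any T-H precoder can be checked numerically. The channel taps and symbol alphabet below are illustrative assumptions; the paper's contribution concerns when and how the feedback coefficients are re-estimated, not this core operation.

```python
M = 2                    # 4-PAM symbols lie in [-M, M)
taps = [0.5, -0.25]      # assumed post-cursor channel taps

def mod2M(v):
    """Fold v back into the interval [-M, M)."""
    return ((v + M) % (2 * M)) - M

def precode(symbols):
    """Subtract the known post-cursor ISI, then apply the modulo."""
    x = []
    for k, d in enumerate(symbols):
        isi = sum(h * x[k - 1 - i] for i, h in enumerate(taps)
                  if k - 1 - i >= 0)
        x.append(mod2M(d - isi))
    return x

def channel(x):
    """Channel 1 + 0.5 z^-1 - 0.25 z^-2: adds the post-cursor ISI back."""
    return [xk + sum(h * x[k - 1 - i] for i, h in enumerate(taps)
                     if k - 1 - i >= 0)
            for k, xk in enumerate(x)]

data = [-1.5, 0.5, 1.5, -0.5, 0.5]
recovered = [mod2M(y) for y in channel(precode(data))]
```

Since the channel only adds back the ISI the precoder subtracted (up to a multiple of 2M), the receiver-side modulo recovers the data symbols exactly.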
Abstract: Multirate multimedia delivery applications in multihop Wireless Mesh Networks (WMNs) are data-redundant and delay-sensitive, which poses many challenges for designing efficient transmission systems. In this paper, we propose a new cross-layer resource allocation scheme that minimizes the receiver-side distortion within the delay bound requirements by exploiting application-layer Position and Value (P-V) diversity as well as the multihop Effective Capacity (EC). We specifically consider image transmission optimization. First, the maximum supportable source traffic rate is identified using the multihop EC model. Then, the optimal source coding rate is selected according to the P-V diversity of the multirate media stream, which significantly increases the decoded media quality. Simulation results show that the proposed approach improves media quality significantly compared with traditional approaches under the same QoS requirements.
Abstract: This paper explores the implementation of adaptive coding and modulation schemes for Multiple-Input Multiple-Output Orthogonal Frequency Division Multiplexing (MIMO-OFDM) feedback systems. Adaptive coding and modulation enables robust and spectrally efficient transmission over time-varying channels. The basic premise is to estimate the channel at the receiver and feed this estimate back to the transmitter, so that the transmission scheme can be adapted to the channel characteristics. Two types of codebook-based channel feedback techniques are used in this work: long-term and short-term CSI at the transmitter are used for efficient channel utilization. OFDM is a powerful technique for communication systems suffering from frequency selectivity. Combined with multiple antennas at the transmitter and receiver, OFDM proves robust against delay spread. Moreover, it achieves significant data rates with improved bit error performance over links having only a single antenna at each end. The coded modulation increases the effective transmit power relative to uncoded variable-rate variable-power MQAM for the MIMO-OFDM feedback system. Hence the proposed arrangement is an attractive approach to achieving enhanced spectral efficiency and improved error rate performance for next-generation high-speed wireless communication systems.
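The adapt-to-channel step reduces, per feedback report, to choosing the highest-rate scheme whose SNR requirement the channel estimate meets. A sketch with assumed thresholds (the scheme list and SNR values are illustrative, not the paper's design):

```python
SCHEMES = [  # (name, bits per symbol incl. code rate, assumed min SNR in dB)
    ("BPSK 1/2",   0.5,  2.0),
    ("QPSK 1/2",   1.0,  5.0),
    ("16-QAM 1/2", 2.0, 11.0),
    ("64-QAM 3/4", 4.5, 18.0),
]

def adapt(snr_db):
    """Pick the highest-rate feasible scheme for the estimated SNR."""
    feasible = [s for s in SCHEMES if snr_db >= s[2]]
    return max(feasible, key=lambda s: s[1])[0] if feasible else "no transmission"
```

In a codebook-feedback system the receiver would quantize this choice into a few bits per report, much like the CSI codebook indices described above.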
Abstract: Nowadays, multimedia data is transmitted and processed in compressed formats. Due to the decoding procedure and the filtering required for edge detection, the feature extraction process of the MPEG-7 Edge Histogram Descriptor is time-consuming and computationally expensive. To improve the efficiency of compressed-image retrieval, we propose in this paper a new edge histogram generation algorithm that operates in the DCT domain. Using the edge information provided by only two AC coefficients of the DCT, we obtain edge directions and strengths directly in the DCT domain. The experimental results demonstrate that our system performs well in terms of retrieval efficiency and effectiveness.
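The core observation can be demonstrated on a toy block: in an 8x8 DCT, F(0,1) responds to horizontal intensity change (a vertical edge) and F(1,0) to vertical change. The block size, thresholds and reduced set of edge classes below are simplifications of the five MPEG-7 classes.

```python
import math

N = 8

def dct2(block):
    """Naive 2-D DCT-II of an N x N block (clarity over speed)."""
    def c(u):
        return math.sqrt(1 / N) if u == 0 else math.sqrt(2 / N)
    F = [[0.0] * N for _ in range(N)]
    for u in range(N):
        for v in range(N):
            s = sum(block[x][y]
                    * math.cos((2 * x + 1) * u * math.pi / (2 * N))
                    * math.cos((2 * y + 1) * v * math.pi / (2 * N))
                    for x in range(N) for y in range(N))
            F[u][v] = c(u) * c(v) * s
    return F

def edge_class(block, threshold=10.0):
    """Classify the block's edge from just two AC coefficients."""
    F = dct2(block)
    h, v = abs(F[0][1]), abs(F[1][0])  # horizontal / vertical variation
    if max(h, v) < threshold:
        return "no-edge"
    if h > 2 * v:
        return "vertical"
    if v > 2 * h:
        return "horizontal"
    return "diagonal"
```

In a real compressed-domain system these two coefficients are read directly from the entropy-decoded DCT blocks, skipping full decoding and spatial filtering.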
Abstract: Low power consumption is a major constraint for battery-powered systems such as notebook computers and PDAs. In the past, specialists designed specifically optimized hardware and code to address this concern. That approach worked for quite a long time; in the present era, however, there is another significant constraint: time to market. To meet the power constraint while launching products in shorter production periods, object-oriented programming (OOP) has entered this field. Although OOP carries much more overhead than assembly and procedural languages, the development trend still heads toward it, which conflicts with the target of low power consumption. Most prior power-related software research reported that OOP consumes considerable resources; however, since industry has had to accept it for business reasons, no paper has yet discussed how to choose the best OOP practice under such power constraints. This article is a first attempt to specify and propose an optimized strategy for writing OOP software in energy-constrained environments, based on quantitative measurements. The language chosen for the study is C# on the .NET Framework 2.0, one of the popular OOP development environments. The recommendations obtained from this research form a roadmap that can help developers write code that balances time to market against battery life.
Abstract: The vehicle fleets of public transportation companies are often equipped with intelligent on-board passenger information systems. A frequently used but time- and labor-intensive way to keep the on-board controllers up to date is a manual update using memory cards (e.g. flash cards) or portable computers. This paper describes a compression algorithm that enables data transmission over low-bandwidth wireless radio networks (e.g. GPRS) by minimizing the amount of data traffic. In typical cases it reaches a compression ratio an order of magnitude better than that of general-purpose compressors, and the compressed data can easily be expanded even by the low-performance controllers.
Abstract: We report in this paper an automatic speech recognition system based on dynamic programming techniques. Time warping (temporal retiming) is used to align the two patterns being compared, and we show how this technique is adapted to automatic speech recognition. We first expose the theory of the warping function, which is used to compare and align an unknown pattern against the set of reference patterns constituting the vocabulary of the application. We then give the algorithms necessary for its implementation. The algorithms presented were tested on part of the Arabic-word corpus Arabdic-10 [4] and gave fully satisfactory results. These algorithms are effective insofar as they are applied to small or medium vocabularies.
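The warping function described above corresponds, in modern terms, to dynamic time warping (DTW). A minimal sketch over 1-D feature sequences (real recognizers align frames of spectral vectors, not scalars):

```python
def dtw_distance(a, b):
    """Cost of the best monotone alignment between sequences a and b."""
    INF = float("inf")
    n, m = len(a), len(b)
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # stretch a
                                 D[i][j - 1],      # stretch b
                                 D[i - 1][j - 1])  # step both
    return D[n][m]

def recognize(unknown, vocabulary):
    """Pick the reference word whose template warps best onto the input."""
    return min(vocabulary, key=lambda w: dtw_distance(unknown, vocabulary[w]))
```

The quadratic table D is exactly the dynamic-programming structure that makes the comparison robust to speaking-rate differences between the unknown pattern and the references.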
Abstract: There have been numerous implementations of security systems using biometrics, especially for identification and verification. One pattern used in biometrics is the iris pattern of the human eye, which is considered unique for each person. Its use, however, poses problems in encoding the human iris.
In this research, an efficient iris recognition method is proposed. In the proposed method the iris segmentation is based on the observation that the pupil has lower intensity than the iris, and the iris has lower intensity than the sclera. By detecting the boundary between the pupil and the iris and the boundary between the iris and the sclera, the iris area can be separated from the pupil and sclera. A step is taken to reduce the effect of eyelashes and the specular reflection of the pupil. A four-level Coiflet wavelet transform is then applied to the extracted iris image, and a modified Hamming distance is employed to measure the similarity between two irises.
This research yields an identification success rate of 84.25% on the CASIA version 1.0 database. The method gives an accuracy of 77.78% for the left eyes of the MMU 1 database and 86.67% for the right eyes. The time required for the encoding process, from segmentation until the iris code is generated, is 0.7096 seconds. These results show that the accuracy and speed of the method are better than those of many other methods.
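The comparison step can be sketched as a masked ("modified") Hamming distance: bits flagged invalid by either mask (eyelashes, specular reflections) are excluded before the fractional disagreement is computed. The bit layout and mask convention here are illustrative.

```python
def masked_hamming(code_a, code_b, mask_a, mask_b):
    """Fraction of disagreeing bits over the positions valid in both codes."""
    valid = [ma and mb for ma, mb in zip(mask_a, mask_b)]
    usable = sum(1 for v in valid if v)
    if usable == 0:
        return 1.0  # nothing to compare: treat as maximal distance
    disagreements = sum(1 for a, b, v in zip(code_a, code_b, valid)
                        if v and a != b)
    return disagreements / usable
```

Two captures of the same iris give a distance near zero, while different irises cluster near 0.5; a threshold between the two clusters yields the accept/reject decision.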
Abstract: In this paper, we propose an effective relay communication scheme for layered video transmission, as a way to make the most of limited resources in a lossy wireless communication network. Relaying brings stable multimedia services to end clients compared with multiple description coding (MDC). In addition, retransmitting from the relay device to the end client only parity data for one or more video layers, generated by a channel coder, is paramount to robustness against loss. Using these methods in resource-constrained environments, such as real-time user-created content (UCC) with layered video transmission, can provide high-quality services even over a poor communication channel, and minimal services remain possible as well. The mathematical analysis shows that the proposed method reduces the GOP loss rate compared with MDC and a raptor code without relay: the GOP loss rate is about zero, while MDC and the raptor code without relay have GOP loss rates of 36% and 70% respectively at a 10% frame loss rate.
Abstract: In this paper we propose a simple and effective method to compress an image, reducing its size without significantly compromising its quality. We use the Haar wavelet transform to transform the original image; after quantization and thresholding of the DWT coefficients, run-length coding and Huffman coding are used to encode the image. The DWT is also the basis of the popular JPEG 2000 technique.
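One level of this pipeline can be sketched on a toy 1-D signal (the paper works on 2-D images): a Haar transform, dead-zone thresholding, then run-length coding of the resulting zero runs. The final Huffman stage of the RLE symbols is omitted here.

```python
def haar_step(signal):
    """One level of the (unnormalized) Haar transform:
    pairwise averages followed by pairwise differences."""
    avg = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    dif = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return avg + dif

def threshold(coeffs, t):
    """Zero out small detail coefficients (dead-zone quantization)."""
    return [0 if abs(c) < t else c for c in coeffs]

def run_length(coeffs):
    """(value, run) pairs; long zero runs collapse to single pairs."""
    out = []
    for c in coeffs:
        if out and out[-1][0] == c:
            out[-1] = (c, out[-1][1] + 1)
        else:
            out.append((c, 1))
    return out
```

Smooth regions produce long runs of zeroed detail coefficients, which is where the size reduction comes from before entropy coding.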
Abstract: The innovational development of regions in Russia is strongly influenced by federal and local authorities. The organization of an effective mechanism of innovation development (and self-development) is impossible without the establishment of defined institutional conditions in the analyzed field. Creative utilization of scientific concepts and information should merge, giving rise to continuing innovation and advanced production. The paper presents an analysis of institutional conditions in the field of the creation and development of innovation-activity infrastructure and the transfer of knowledge and skills between different economic agents in Russia. Knowledge is mainly privately owned, developed through R&D investments and incorporated into a technology or a product. Innovation infrastructure is a strong concentration mechanism of advanced facilities, mainly located inside large agglomerations or city-regions in order to benefit from scale effects in both input markets (human capital, private financial capital) and output markets (higher education services, research services). The empirical results of the paper show that, in the presence of a more efficient innovation and knowledge transfer and transcoding system and of a more open attitude of economic agents towards innovation, the innovation and knowledge capacity of a regional economy is much higher.
Abstract: Identifying protein coding regions in DNA sequences is a basic step in the location of genes. Several approaches based on signal processing tools have been applied to this problem, trying to achieve more accurate predictions. This paper presents a new predictor that improves the efficacy of three techniques that use the Fourier transform to predict coding regions, and that can be computed with an algorithm that reduces the computational load. Some ideas on combining the predictor with other methods are discussed. ROC curves demonstrate the efficacy of the proposed predictor, based on computations over 25 DNA sequences from three different organisms.
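Fourier-based coding-region predictors typically exploit the period-3 property of codons: the spectral content of the four base-indicator sequences peaks at frequency 1/3 inside coding regions. A minimal sketch of that single-frequency measure (the paper combines three such techniques; this is the shared core idea, simplified):

```python
import cmath

def spectral_content(seq, freq=1/3):
    """Sum over bases of |DFT at freq of that base's 0/1 indicator|^2."""
    total = 0.0
    for base in "ACGT":
        x = sum(cmath.exp(-2j * cmath.pi * freq * n)
                for n, s in enumerate(seq) if s == base)
        total += abs(x) ** 2
    return total
```

A sliding window of this measure along a genome rises over exonic (period-3) stretches and stays low elsewhere, which is what the ROC curves in such studies evaluate.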
Abstract: In this paper, LDPC codes based on defected fullerene graphs are generated. The codes generated are found to be fast in encoding and better in terms of error performance on the AWGN channel.