Abstract: In this paper, a fast motion compensation algorithm is
proposed that improves coding efficiency for video sequences with
brightness variations. We also propose a cross entropy measure
between histograms of two frames to detect brightness variations. The
framewise brightness variation parameters, a multiplier and an offset
field for image intensity, are estimated and compensated. Simulation
results show that the proposed method yields a higher peak signal-to-noise ratio (PSNR) than the conventional method, with a
greatly reduced computational load, when the video scene contains
illumination changes.
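As a rough illustration of the detection step, the cross-entropy measure between frame histograms might be sketched as follows (the function names, threshold, and histogram settings are our assumptions, not the paper's):

```python
import numpy as np

def cross_entropy(p_hist, q_hist, eps=1e-12):
    """Cross entropy H(p, q) between two normalized intensity histograms."""
    p = p_hist / p_hist.sum()
    q = q_hist / q_hist.sum()
    return float(-np.sum(p * np.log(q + eps)))

def brightness_change_detected(frame_a, frame_b, bins=256, threshold=0.1):
    """Flag a brightness variation when the cross entropy between the two
    frame histograms exceeds the first histogram's own entropy by a margin
    (i.e., when their divergence is large)."""
    ha, _ = np.histogram(frame_a, bins=bins, range=(0, 256))
    hb, _ = np.histogram(frame_b, bins=bins, range=(0, 256))
    return cross_entropy(ha, hb) - cross_entropy(ha, ha) > threshold
```

A frame rescaled by the multiplier-and-offset model described above shifts its histogram and drives the measure up, while an unchanged scene leaves it near zero.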
Abstract: The European Rail Traffic Management System (ERTMS) is the European reference for interoperable and safer signaling systems that efficiently manage running trains. Once implemented, it allows trains to cross national borders within Europe seamlessly. ERTMS defines a secure communication protocol, EURORADIO, based on open communication networks. Its RadioInfill function can improve the reaction of the signaling system to changes in line conditions, avoiding unnecessary braking; its advantages in terms of power saving and travel time have been analyzed. In this paper, a software implementation of the EURORADIO protocol with RadioInfill for ERTMS Level 1 using GSM-R is illustrated as part of the SR-Secure Italian project. In this building-block architecture the EURORADIO layers communicate through modular Application Programming Interfaces. Secure coding rules and the railway industry requirements specified by the EN 50128 standard have been respected. The proposed implementation has successfully passed conformity tests and has been tested on a computer-based simulator.
Abstract: In DMVC, more than one source is available for the construction of side information. Newer techniques use temporal and multiview interpolation simultaneously by constructing a bitmask that determines the source of every block or pixel of the side information, and considerable computation is spent determining each bit in that bitmask. In this paper, we try to define areas that can only be well predicted by temporal interpolation and not by multiview interpolation or synthesis. We argue that areas not covered by two cameras cannot be appropriately predicted by multiview synthesis, so if such areas can be identified in the first place, we do not need to run the full set of computations for the pixels that lie in them. Moreover, this paper also defines a KLT-based technique to mark the above-mentioned areas before any other processing is done on the side view.
Abstract: The H.264/AVC standard uses intra prediction with 9 directional modes for 4×4 and 8×8 luma blocks, and 4 directional modes for 16×16 luma macroblocks and 8×8 chroma blocks. This means that, for a single macroblock, the encoder has to perform 736 different RDO calculations before the best mode is determined. With this multiple intra-mode prediction, intra coding in H.264/AVC offers considerably higher coding efficiency than other compression standards, but its computational complexity increases significantly. This paper presents a fast intra prediction algorithm for H.264/AVC based on homogeneity information. In this study, a gradient prediction method is used to predict homogeneous areas and a quadratic prediction function to predict nonhomogeneous areas. Based on the correlation between homogeneity and block size, smaller blocks are predicted by both gradient and quadratic prediction, while larger blocks are predicted by gradient prediction alone. Experimental results show that the proposed method reduces the complexity by up to 76.07% while maintaining similar PSNR quality, with an average bit-rate increase of about 1.94%.
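The figure of 736 RDO calculations per macroblock can be reproduced from the mode counts above, assuming every luma mode is evaluated once for each chroma mode:

```python
# Mode counts from the H.264/AVC intra prediction design described above.
LUMA_4x4_MODES = 9     # per 4x4 block; 16 such blocks in a macroblock
LUMA_8x8_MODES = 9     # per 8x8 block; 4 such blocks in a macroblock
LUMA_16x16_MODES = 4   # per macroblock
CHROMA_MODES = 4       # per 8x8 chroma block

# Each luma mode evaluation is repeated for every chroma mode.
rdo_calculations = CHROMA_MODES * (16 * LUMA_4x4_MODES
                                   + 4 * LUMA_8x8_MODES
                                   + LUMA_16x16_MODES)
print(rdo_calculations)  # 736
```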
Abstract: In April 2009, a new variant of Influenza A virus subtype H1N1 emerged in Mexico and spread all over the world. Influenza A has three subtypes in humans (H1N1, H1N2, and H3N2), while types B and C influenza tend to be associated with local or regional epidemics. Preliminary genetic characterization of the influenza viruses identified them as swine influenza A (H1N1) viruses. Nucleotide sequence analysis shows that the Haemagglutinin (HA) and Neuraminidase (NA) genes are similar to each other and to the majority of the genes of swine influenza viruses; the two genes coding for the neuraminidase (NA) and matrix (M) proteins are similar to the corresponding genes of swine influenza. Sequence similarity between the 2009 A (H1N1) virus and its nearest relatives indicates that its gene segments have been circulating undetected for an extended period. Maximum Likelihood (MCL) analysis of the nucleic acid sequences and DNA empirical base frequencies reveal the phylogenetic relationships among the HA genes of H1N1 viruses in GenBank, which show high nucleotide sequence homology.
In this paper we used 16 HA nucleotide sequences from NCBI to compute the sequence similarity relationships of swine influenza A virus. Using MCL the result is 28%; 36.64% for the optimal tree with the sum of branch lengths; 35.62% for interior-branch phylogeny with a neighbor-joining tree; 1.85% for the overall transition/transversion ratio; and 8.28% for the overall mean distance.
Abstract: We demonstrate single-photon interference over 10 km using a plug-and-play system for quantum key distribution. The quality of the interferometer is measured by its visibility. The signal is encoded using phase coding, and the visibility is determined from the interference effect, which results in a photon count. The setup gives full control of polarization inside the interferometer. The quality measurement of the interferometer is based on the number of counts per second, and the system produces 94% visibility in one of the detectors.
Abstract: XML has become a popular standard for information exchange via the web. Each XML document can be represented as a rooted, ordered, labeled tree, in which a node's label shows the exact position of that node in the original document. Region and Dewey encoding are two well-known methods of labeling trees. In this paper, we propose a new insert-friendly labeling method named IFDewey, based on a recently proposed scheme called Extended Dewey. In Extended Dewey, many labels must be modified when a new node is inserted into the XML tree. Our method eliminates this problem by reserving even numbers for future insertions: numbers generated by Extended Dewey may be even or odd, whereas IFDewey modifies Extended Dewey so that only odd numbers are generated, and even numbers can then be used for much easier insertion of nodes.
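A minimal sketch of the odd/even reservation idea for one label component (function names are ours; a single insertion between neighbors is shown, while repeated insertions at the same spot would need the full IFDewey scheme):

```python
def initial_labels(n):
    """Assign odd component values 1, 3, 5, ... to n siblings,
    leaving even values free for later insertions."""
    return [2 * i + 1 for i in range(n)]

def insert_between(left, right):
    """Label for a node inserted between two existing siblings.
    With odd-only initial labels there is always a free even value,
    so no existing label needs to be modified."""
    assert right - left >= 2, "no free value between adjacent labels"
    return (left + right) // 2
```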
Abstract: In this paper, we propose a reversible watermarking
scheme based on histogram shifting (HS) to embed watermark bits
into the H.264/AVC standard videos by modifying the last nonzero
level in the context adaptive variable length coding (CAVLC) domain.
The proposed method collects all of the last nonzero coefficients (also called last-level coefficients) of the 4×4 sub-macroblocks in a macroblock and predicts the current last level from the last levels of neighboring blocks to embed the watermark bits. The proposed method has low computational cost and supports reversible recovery. Experimental results demonstrate that our scheme causes acceptable degradation in video quality and output bit rate for most test videos.
Abstract: Recent advances in computing have led to the massive use of web-based electronic documents. Current copyright protection laws are inadequate to prove ownership of electronic documents and do not provide strong protection against copying and manipulating information from the web. This has opened many channels for securing information, and significant advances have been made in the area of information security. Digital watermarking has developed into a very dynamic area of research and has addressed challenging issues for digital content. Watermarking can be visible (logos or signatures) or invisible (encoding and decoding). Many visible watermarking techniques have been studied for text documents, but there are very few for web-based text. XML files are used to exchange information on the internet and contain important information. In this paper, two invisible watermarking techniques using synonyms and acronyms are proposed for XML files to prove intellectual ownership and to achieve security. The methods are analyzed against different attacks, and the capacity that can be embedded in the XML file is also measured. A comparative capacity analysis is made for both methods. The system has been implemented in C#, and all tests were carried out in practice to obtain the results.
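As a toy illustration of the synonym-based technique (the word pairs and the bit convention are our assumptions, not the paper's actual dictionaries, and the paper targets XML content rather than plain strings):

```python
# Hypothetical synonym pairs: the first word of a pair encodes bit 0,
# the second encodes bit 1.
SYNONYM_PAIRS = [("big", "large"), ("quick", "fast"), ("buy", "purchase")]

def embed_bits(text, bits):
    """Invisibly watermark text by replacing each known word with the
    pair member that encodes the next watermark bit."""
    lookup = {w: pair for pair in SYNONYM_PAIRS for w in pair}
    out, i = [], 0
    for word in text.split():
        if word in lookup and i < len(bits):
            out.append(lookup[word][bits[i]])
            i += 1
        else:
            out.append(word)
    return " ".join(out)

def extract_bits(text):
    """Recover the embedded bits from which member of each pair occurs."""
    lookup = {w: idx for pair in SYNONYM_PAIRS for idx, w in enumerate(pair)}
    return [lookup[w] for w in text.split() if w in lookup]
```

The embedding capacity of such a scheme is bounded by how many dictionary words occur in the document, which is why the abstract reports a capacity analysis.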
Abstract: Users of computer systems may often require the
private transfer of messages/communications between parties across
a network. Information warfare and the protection and dominance of
information in the military context is a prime example of an
application area in which the confidentiality of data needs to be
maintained. The safe transportation of critical data is therefore often
a vital requirement for many private communications. However,
unwanted interception/sniffing of communications is also a
possibility. An elementary stealthy transfer scheme is therefore
proposed by the authors. This scheme makes use of encoding,
splitting of a message and the use of a hashing algorithm to verify the
correctness of the reconstructed message. As a proof of concept, the authors experimented with randomly sending encoded parts of a message and then reconstructing it, to demonstrate how data can be transferred stealthily across a network so as to prevent the obvious retrieval of the data.
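A minimal sketch of the split, send, and verify steps (SHA-256 stands in for whichever hashing algorithm the authors used, and explicit sequence tags stand in for their encoding of the parts):

```python
import hashlib
import random

def split_message(message, parts):
    """Split a message into roughly equal byte chunks for separate sending."""
    size = -(-len(message) // parts)  # ceiling division
    return [message[i:i + size] for i in range(0, len(message), size)]

def send_shuffled(chunks):
    """Simulate random-order sending by tagging each chunk with its
    sequence number and shuffling the tagged chunks."""
    tagged = list(enumerate(chunks))
    random.shuffle(tagged)
    return tagged

def reconstruct(tagged, expected_digest):
    """Reorder the received chunks and verify the hash of the result."""
    ordered = b"".join(chunk for _, chunk in sorted(tagged))
    if hashlib.sha256(ordered).hexdigest() != expected_digest:
        raise ValueError("reconstructed message failed hash check")
    return ordered
```

The hash check is what lets the receiver confirm that all randomly delivered parts arrived and were reassembled in the right order.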
Abstract: H.264/AVC offers a considerable improvement in coding efficiency compared to other compression standards such as MPEG-2, but its computational complexity is increased significantly.
In this paper, we propose selective mode decision schemes for fast
intra prediction mode selection. The objective is to reduce the
computational complexity of the H.264/AVC encoder without
significant rate-distortion performance degradation. In our proposed
schemes, the intra prediction complexity is reduced by limiting the
luma and chroma prediction modes using the directional information
of the 16×16 prediction mode. Experimental results show that the proposed schemes reduce the complexity by up to 78% while maintaining similar PSNR quality, with an average bit-rate increase of about 1.46%.
Abstract: The motivation for adaptive modulation and coding is
to adjust the method of transmission to ensure that the maximum
efficiency is achieved over the link at all times. The receiver
estimates the channel quality and reports it back to the transmitter.
The transmitter then maps the reported quality into a link mode. This
mapping however, is not a one-to-one mapping. In this paper we
investigate a method for selecting the proper modulation scheme.
This method can dynamically adapt the mapping of the Signal-to-
Noise Ratio (SNR) into a link mode. By incorporating feedback from errors in the received data, it enables use of the right modulation scheme despite changes in the channel conditions. We propose a Markov
model for this method, and use it to derive the average switching
thresholds and the average throughput. We show that the average
throughput of this method outperforms the conventional threshold
method.
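A toy sketch of adapting the SNR-to-mode mapping with error feedback (the mode names, thresholds, and adaptation step are illustrative assumptions, not the paper's Markov model or its derived switching thresholds):

```python
# Hypothetical link modes with nominal SNR switching thresholds (dB).
MODES = [("BPSK", 0.0), ("QPSK", 6.0), ("16-QAM", 12.0), ("64-QAM", 18.0)]

def select_mode(snr_db, offsets=None):
    """Map a reported SNR to the highest-rate mode whose (possibly
    adapted) threshold it meets."""
    offsets = offsets or [0.0] * len(MODES)
    chosen = MODES[0][0]
    for (name, thr), off in zip(MODES, offsets):
        if snr_db >= thr + off:
            chosen = name
    return chosen

def adapt(offsets, mode_index, frame_error, step=0.5):
    """Raise a mode's threshold after an error and lower it after a
    success, so the SNR-to-mode mapping tracks the actual channel."""
    offsets = offsets.copy()
    offsets[mode_index] += step if frame_error else -step
    return offsets
```

After an error in 16-QAM, the adapted threshold pushes a borderline SNR report down to QPSK, which is the sense in which the mapping is no longer one-to-one.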
Abstract: In this paper, a fragile watermarking scheme is proposed for the authentication of a specified object in a color image. The color image is first transformed from the RGB to the YST color space, which is suitable for watermarking color media. The T channel corresponds to the chrominance component of a color image and YS ⊥ T; it is therefore selected for embedding the watermark. The T channel is first divided into 2×2 non-overlapping blocks and the two LSBs are set to zero. The object to be authenticated is also divided into 2×2 non-overlapping blocks, and each block's intensity mean is computed and encoded in eight bits. The generated watermark is then embedded into the LSBs of randomly selected 2×2 blocks of the T channel using a 2D torus automorphism. The choice of block size is paramount for exact localization and recovery of the work. The proposed scheme is blind, efficient, and secure, with the ability to detect and locate even minor tampering applied to the image, with full recovery of the original work. The quality of the watermarked media is quite high both subjectively and objectively. The technique is suitable for the class of images in formats such as GIF, TIF, or bitmap.
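A simplified sketch of the watermark generation and LSB embedding steps (blocks are filled sequentially here in place of the paper's 2D torus automorphism selection; function names are ours):

```python
import numpy as np

def block_mean_watermark(obj, block=2):
    """Per-block intensity means of the object, encoded as 8-bit values
    and flattened into a watermark bit list."""
    h, w = obj.shape
    bits = []
    for i in range(0, h, block):
        for j in range(0, w, block):
            mean = int(round(obj[i:i + block, j:j + block].mean()))
            bits.extend(int(b) for b in format(mean & 0xFF, "08b"))
    return bits

def embed_two_lsbs(channel, bits):
    """Embed watermark bits pairwise into the two LSBs of the channel's
    pixels (sequentially here, rather than via torus-automorphism
    block selection)."""
    flat = channel.flatten()
    for k in range(0, min(len(bits), 2 * len(flat)), 2):
        pair = (bits[k] << 1) | bits[k + 1]
        flat[k // 2] = (int(flat[k // 2]) & 0xFC) | pair
    return flat.reshape(channel.shape)
```

During authentication, the extracted block means can be compared against the tampered image's means to localize changes, and they double as a coarse recovery of the original object.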
Abstract: The purpose of this paper is to show the efficiency and capability of LZWµ in data compression. The LZWµ technique is an enhancement of the existing LZW technique: LZW reads one character at a time, whereas LZWµ reads three characters at a time. This paper focuses on data compression and tests the efficiency and capability of LZWµ on different data formats, such as DOC, PDF, and plain text. Several experiments have been carried out on these formats. The results show that LZWµ outperforms the existing LZW technique in terms of compressed file size.
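For reference, the baseline LZW step that LZWµ modifies, reading one character at a time (the three-character LZWµ variant itself is not reproduced here):

```python
def lzw_compress(data):
    """Classic LZW: grow a dictionary of previously seen substrings and
    emit one code per longest known prefix."""
    table = {chr(i): i for i in range(256)}
    next_code = 256
    w, out = "", []
    for ch in data:
        if w + ch in table:
            w += ch          # extend the current match
        else:
            out.append(table[w])
            table[w + ch] = next_code  # learn the new substring
            next_code += 1
            w = ch
    if w:
        out.append(table[w])
    return out
```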
Abstract: This paper studies the design of a simple constellation
precoding for a multiple-input multiple-output orthogonal frequency
division multiplexing (MIMO-OFDM) system over Rayleigh fading
channels where OFDM is used to keep the diversity replicas orthogonal
and reduce ISI effects. A multi-user environment with K synchronous
co-channel users is considered. The proposed scheme provides bandwidth-efficient transmission for individual users while increasing the system throughput. In comparison with existing coded
MIMO-OFDM schemes, the precoding technique is designed under
the consideration of its low implementation complexity while providing
a comparable error performance to the existing schemes.
Analytical and simulation results are presented to demonstrate the resulting error performance.
Abstract: This paper presents a method to decrease power consumption in networks-on-chip (NoC). The method applies data coding to data transfers in order to reduce power consumption, and uses data compression to reduce the data size. Power consumption in an NoC is calculated from the adopted models and the transition activities at the entry ports. The goal of the simulation is to weigh the power cost of encoding, decoding, and compression in Baseline networks, and the reduction of switches in this type of network.
Keywords: Networks-on-chip, compression, encoding, Baseline networks, Banyan networks.
Abstract: In control theory one attempts to find a controller that provides the best possible performance with respect to some given measures of performance. There are many kinds of controllers, e.g., the typical PID controller, the LQR controller, the fuzzy controller, etc. This paper introduces a polynomial controller with a novel tuning method based on a special pole-placement encoding scheme and optimization by Genetic Algorithms (GA). Examples show the performance of the newly designed polynomial controller in comparison with a common PID controller.
Abstract: In this paper we propose the use of Huffman coding to reduce the PAR of an OFDM system as a distortionless scrambling technique, and we utilize the savings in the total bit rate achieved by Huffman coding to send the encoding table to the receiver for accurate decoding, without reducing the effective throughput. We found that the use of Huffman coding reduces the PAR by about 6 dB. We also investigated the effect of this PAR reduction by testing the spectral spreading and the in-band distortion due to the HPA at different IBO values. The simulation results fully matched our expectations for the proposed solution.
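As background, the Huffman table construction whose variable-length codes drive both the scrambling and the table that must be shipped to the receiver (a generic sketch, not the paper's OFDM-specific mapping):

```python
import heapq
from collections import Counter

def huffman_code(symbols):
    """Build a Huffman code table (symbol -> bit string) from symbol
    frequencies, by repeatedly merging the two least frequent subtrees."""
    freq = Counter(symbols)
    # Each heap entry: (frequency, tiebreaker, partial code table).
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tick = len(heap)
    while len(heap) > 1:
        f1, _, t1 = heapq.heappop(heap)
        f2, _, t2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in t1.items()}
        merged.update({s: "1" + c for s, c in t2.items()})
        heapq.heappush(heap, (f1 + f2, tick, merged))
        tick += 1
    return heap[0][2]
```

Frequent symbols get short codes, which is the bit-rate saving the abstract reallocates to transmitting the table itself.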
Abstract: Categorical data based on descriptions of the agricultural landscape impose some mathematical and analytical limitations. This problem, however, can be overcome by data transformation through a coding scheme and the use of a non-parametric multivariate approach. The present study describes data
transformation from qualitative to numerical descriptors. In a
collection of 103 random soil samples over a 60 hectare field,
categorical data were obtained from the following variables: levels of
nitrogen, phosphorus, potassium, pH, hue, chroma, value and data on
topography, vegetation type, and the presence of rocks. Categorical
data were coded, and Spearman's rho correlation was then calculated using PAST software ver. 1.78, on which the Principal Component Analysis was based. Results revealed successful data transformation,
generating 1030 quantitative descriptors. Visualization based on the
new set of descriptors showed clear differences among sites, and
amount of variation was successfully measured. Possible applications
of data transformation are discussed.
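A small sketch of the two computational steps described above, ordinal coding of categorical values followed by Spearman's rho (the category order and data are illustrative; the study itself used PAST ver. 1.78):

```python
import numpy as np

def code_categories(values, order):
    """Ordinal coding scheme: map qualitative descriptors to 1..k
    according to an explicit category order."""
    return np.array([order.index(v) + 1 for v in values], dtype=float)

def spearman_rho(x, y):
    """Spearman's rho: Pearson correlation of the rank-transformed data,
    with average ranks assigned over ties."""
    def ranks(a):
        r = np.empty(len(a))
        r[a.argsort()] = np.arange(1, len(a) + 1)
        for v in np.unique(a):          # average ranks over ties
            mask = a == v
            r[mask] = r[mask].mean()
        return r
    rx, ry = ranks(x), ranks(y)
    rx -= rx.mean()
    ry -= ry.mean()
    return float((rx * ry).sum() / np.sqrt((rx ** 2).sum() * (ry ** 2).sum()))
```

Because rho depends only on ranks, any order-preserving coding of the categories yields the same correlation, which is what makes the transformation safe for the subsequent multivariate analysis.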
Abstract: A recent neurospiking coding scheme for feature extraction from biosonar echoes of various plants is examined with a variety of stochastic classifiers. The derived feature vectors are employed in well-known stochastic classifiers, including nearest-neighborhood, single Gaussian, and a Gaussian mixture with EM optimization. Classifier performance is evaluated using cross-validation and bootstrapping techniques. It is shown that the various classifiers perform equivalently and that the modified preprocessing configuration yields considerably improved results.