Abstract: In this paper we propose three- and two-stage still grayscale image compressors based on Block Truncation Coding (BTC). In our schemes, we employ a combination of four techniques to reduce the bit rate: quad-tree segmentation, bit-plane omission, bit-plane coding using 32 visual patterns, and interpolative bit-plane coding. The experimental results show that the proposed schemes achieve an average bit rate of 0.46 bits per pixel (bpp) for standard grayscale images with an average PSNR value of 30.25 dB, which is better than the results from existing similar methods based on BTC.
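The base coder these schemes build on can be sketched as follows. This is a minimal implementation of classical two-level BTC (preserving block mean and standard deviation), not the proposed multi-stage pipeline with quad-tree segmentation or visual patterns; the function names are illustrative.

```python
import numpy as np

def btc_encode_block(block):
    """Classical BTC: quantize a block to two levels (a, b) chosen to
    preserve the block's sample mean and standard deviation."""
    mean = block.mean()
    std = block.std()
    bitplane = block >= mean            # 1 bit per pixel
    q = int(bitplane.sum())             # number of pixels above the mean
    m = block.size
    if q in (0, m):                     # flat block: one level suffices
        return bitplane, mean, mean
    a = mean - std * np.sqrt(q / (m - q))    # low reconstruction level
    b = mean + std * np.sqrt((m - q) / q)    # high reconstruction level
    return bitplane, a, b

def btc_decode_block(bitplane, a, b):
    """Reconstruct the block from its bit plane and the two levels."""
    return np.where(bitplane, b, a)
```

Transmitting one bit per pixel plus two levels per block is what yields BTC's characteristic bit rate; the four techniques listed in the abstract further reduce that cost.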
Abstract: This paper provides a flexible way of controlling the Variable Bit Rate (VBR) of compressed digital video, applicable to the new H.264 video compression standard. The entire video sequence is assessed in advance and the quantisation level is then set such that the bit rate (and thus the frame rate) remains within predetermined limits compatible with the bandwidth of the transmission system and the capabilities of the remote end, while at the same time providing constant quality similar to VBR encoding. A process for avoiding buffer starvation by selectively eliminating frames from the encoded output at times when the frame rate is slow (large number of bits per frame) is also described. Finally, the problem of buffer overflow is solved by selectively eliminating frames from the received input to the decoder. The decoder detects the omission of frames and resynchronizes the transmission by monitoring time stamps and repeating frames if necessary.
Abstract: Stochastic modeling of network traffic is an area of significant research activity for current and future broadband communication networks. Multimedia traffic is statistically characterized by a bursty variable bit rate (VBR) profile. In this paper, we develop an improved model for uniform-activity-level video sources in ATM using a doubly stochastic autoregressive model driven by an underlying spatial point process. We then examine a number of burstiness metrics, such as the peak-to-average ratio (PAR), the temporal autocovariance function (ACF) and the traffic measurements histogram, and find that the first of these is most suitable for capturing the burstiness of single-scene video traffic. In the last phase of this work, we analyse the statistical multiplexing of several constant-scene video sources. This proved, as expected, to be advantageous in reducing the burstiness of the traffic, as long as the sources are statistically independent. We observed that the burstiness diminished rapidly, with the largest gain occurring when only around 5 sources are multiplexed. The novel model used in this paper for characterizing uniform-activity video was thus found to be accurate.
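The burstiness metrics named above can be computed directly from a per-frame bit-rate trace. The sketch below uses illustrative names and synthetic lognormal traffic (standing in for the paper's ATM video model) and also exhibits the multiplexing gain: the PAR of a sum of independent sources falls below that of a single source.

```python
import numpy as np

def peak_to_average_ratio(trace):
    """PAR: peak of the bit-rate trace divided by its mean."""
    trace = np.asarray(trace, dtype=float)
    return trace.max() / trace.mean()

def autocovariance(trace, max_lag):
    """Temporal ACF: autocovariance of the trace at lags 0..max_lag."""
    x = np.asarray(trace, dtype=float)
    x = x - x.mean()
    n = len(x)
    return np.array([np.dot(x[:n - k], x[k:]) / n for k in range(max_lag + 1)])

# Synthetic bursty sources: lognormal marginals stand in for VBR video.
rng = np.random.default_rng(0)
sources = rng.lognormal(mean=0.0, sigma=1.0, size=(5, 2000))
par_single = peak_to_average_ratio(sources[0])
par_muxed = peak_to_average_ratio(sources.sum(axis=0))  # statistical multiplexing
```

Because peaks of independent sources rarely coincide, the aggregate trace is smoother, which is the effect the abstract reports for around 5 multiplexed sources.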
Abstract: Delivering streaming video over wireless links is an important component of many interactive multimedia applications running on personal wireless handset devices. Such personal devices have to be inexpensive, compact, and lightweight, yet wireless channels have a high channel bit error rate and limited bandwidth. Delay variation of packets due to network congestion and the high bit error rate greatly degrade the quality of video at the handheld device. Mobile access to multimedia content therefore requires video transcoding functionality at the edge of the mobile network for interworking with heterogeneous networks and services, and to guarantee the quality of service (QoS) delivered to the mobile user, a robust and efficient transcoding scheme should be deployed in the mobile multimedia transport network. Hence, this paper examines the challenges and limitations that video transcoding schemes in mobile multimedia transport networks face. A mobile and wireless video transcoding scheme based on handheld resources, network conditions and content is then proposed to provide high-QoS applications. Extensive experiments, designed to verify the robustness of the proposed approach, demonstrate its performance; results are provided for various video clips with different bit rates and frame rates.
Abstract: Multi-user interference (MUI) is the main cause of performance deterioration in Spectral Amplitude Coding Optical Code Division Multiple Access (SAC-OCDMA) systems. MUI increases with the number of simultaneous users, resulting in a higher bit error probability and limiting the maximum number of simultaneous users. On the other hand, phase-induced intensity noise (PIIN), which originates from the spontaneous emission of the broadband source and from MUI and severely limits system performance, should be addressed as well. Since MUI is caused by the interference of simultaneous users, keeping the MUI value as small as possible is desirable. In this paper, an extensive study of system performance in terms of MUI and PIIN reduction is carried out. Vector Combinatorial (VC) code families are adopted as signature sequences for the performance analysis, and a comparison with reported codes is performed. The results show that, when the received power increases, the PIIN noise for all the codes increases linearly. The results also show that the effect of PIIN can be minimized by increasing the code weight, which preserves an adequate signal-to-noise ratio and bit error probability. A comparison between the proposed code and existing codes such as Modified Frequency Hopping (MFH) and Modified Quadratic Congruence (MQC) has been carried out.
Abstract: This paper deals with the modeling and evaluation of the influence of multiplicative phase noise on the bit error ratio in a general space communication system. Our research focuses on systems with multi-state phase shift keying modulation techniques, and it turns out that the phase noise significantly affects the bit error rate, especially at higher signal-to-noise ratios. These results come from a system model created in the MATLAB environment and are shown in the form of constellation diagrams and bit error rate dependencies. The change of the user data bit rate is also considered and included in the simulation results. The obtained outcomes confirm the theoretical presumptions.
Abstract: The algorithm represents the DCT coefficients so as to concentrate signal energy, and proposes combination and dictator operations to eliminate the correlation within same-level subbands when encoding DCT-based images. This work adopts the DCT and modifies the SPIHT algorithm to encode DCT coefficients. The proposed algorithm also provides an enhancement function at low bit rates in order to improve perceptual quality. Experimental results indicate that the proposed technique improves the quality of the reconstructed image in terms of PSNR, with perceptual results close to JPEG2000 at the same bit rate.
Abstract: The Available Bit Rate (ABR) service is a lower-priority service well suited to the transmission of data. On wireline ATM networks, the ABR source continuously receives feedback from switches about increases or decreases of bandwidth according to changing network conditions, and a minimum bandwidth is guaranteed. In wireless networks, guaranteeing the minimum bandwidth is a challenging task, as the source is mobile and travels from one cell to another. Re-establishing virtual circuits from start to end every time causes delays in transmission. In our proposed solution, we present a mechanism to provide more available bandwidth to the ABR source by re-using part of the old Virtual Channels and establishing new ones. We want the ABR source to transmit data continuously (non-stop) in order to avoid delay. In the worst-case scenario, at least the minimum bandwidth is allocated. In order to keep the data flowing continuously, priority is given to the handoff ABR call over new ABR calls.
Abstract: This paper presents a vocoder that obtains high-quality synthetic speech at 600 bps. To reduce the bit rate, the algorithm is based on a sinusoidally excited linear prediction model that extracts a small set of coding parameters; three consecutive frames are grouped into a superframe and jointly vector quantized to obtain high coding efficiency. The inter-frame redundancy is exploited with distinct quantization schemes for the different unvoiced/voiced frame combinations in the superframe. Experimental results show that the quality of the proposed coder is better than that of 2.4 kbps LPC10e, and approximately the same as that of 2.4 kbps MELP, with high robustness.
Abstract: High-speed networks provide real-time variable bit rate service with diversified traffic flow characteristics and quality requirements. Variable bit rate traffic has stringent delay and packet loss requirements, and the burstiness of the correlated traffic makes dynamic buffer management highly desirable to satisfy the Quality of Service (QoS) requirements. This paper presents an algorithm for optimizing an adaptive buffer allocation scheme based on the loss of consecutive packets in the data stream and the buffer occupancy level. The buffer is designed to allow the input traffic to be partitioned into different priority classes, and it controls the threshold dynamically based on the input traffic behavior. The algorithm allows an input packet to enter the buffer if the occupancy level is less than the threshold value for that packet's priority. The threshold is varied dynamically at runtime based on the packet loss behavior. The simulation is run for two priority classes of input traffic, real-time and non-real-time. The simulation results show that Adaptive Partial Buffer Sharing (ADPBS) performs better than Static Partial Buffer Sharing (SPBS) and a First In First Out (FIFO) queue under the same traffic conditions.
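The admission rule described above, admitting a packet only while buffer occupancy is below its class threshold, can be sketched as follows. This toy version uses static thresholds and illustrative names; ADPBS additionally adapts the thresholds at runtime from consecutive-packet-loss statistics.

```python
from collections import deque

class PartialBufferSharing:
    """Threshold-based partial buffer sharing over one shared queue.

    A packet of a given priority class is admitted only while the
    buffer occupancy is below that class's threshold, so low-priority
    traffic is shed first as the buffer fills."""

    def __init__(self, capacity, thresholds):
        self.capacity = capacity
        self.thresholds = thresholds   # e.g. {"realtime": 4, "non-realtime": 2}
        self.queue = deque()
        self.dropped = 0

    def offer(self, packet, priority):
        """Admit the packet if occupancy is under its class threshold."""
        limit = min(self.thresholds[priority], self.capacity)
        if len(self.queue) < limit:
            self.queue.append(packet)
            return True
        self.dropped += 1
        return False

    def serve(self):
        """Dequeue the packet at the head of the buffer, FIFO order."""
        return self.queue.popleft() if self.queue else None
```

Making `thresholds` a function of observed consecutive losses, rather than a constant map, is the step that turns this static scheme into the adaptive one the abstract evaluates.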
Abstract: In this paper, we propose a Perceptually Optimized Foveation-based Embedded ZeroTree Image Coder (POEFIC) that applies perceptual weighting to wavelet coefficients prior to SPIHT encoding in order to reach a targeted bit rate with improved perceptual quality around a given fixation point, which determines the region of interest (ROI). The paper also introduces a new objective quality metric based on a psychovisual model that integrates the properties of the human visual system (HVS); this metric plays an important role in our POEFIC quality assessment. Our POEFIC coder is based on a vision model that incorporates various masking effects of HVS perception. Thus, our coder weights the wavelet coefficients based on that model and attempts to increase the perceptual quality for a given bit rate and observation distance. The perceptual weights for all wavelet subbands are computed based on 1) foveation masking, to remove or reduce considerable high frequencies from peripheral regions, 2) luminance and contrast masking, and 3) the contrast sensitivity function (CSF), to achieve the perceptual decomposition weighting. The new perceptually optimized codec has the same complexity as the original SPIHT technique, yet the experimental results show that our coder demonstrates very good performance in terms of quality measurement.
Abstract: This paper presents an approach for unequal error protection of facial features in the coding of personal ID images. We consider unequal error protection (UEP) strategies for the efficient progressive transmission of embedded image codes over noisy channels. This new method is based on the progressive embedded zerotree wavelet (EZW) image compression algorithm and a UEP technique with a defined region of interest (ROI); in this case, the ROI corresponds to the facial features within the personal ID image. The ROI technique is important in applications where different parts of the image have different importance: a chosen ROI is encoded with higher quality than the background (BG). Unequal error protection of the image is provided by different coding techniques and by encoding the LL band separately. In our proposed method, the image is divided into two parts (ROI, BG) that consist of more important bytes (MIB) and less important bytes (LIB). The proposed unequal error protection of image transmission has been shown to be well suited to low bit rate applications, producing better-quality output for the ROI of the compressed image. The experimental results verify the effectiveness of the design and compare UEP image transmission with a ROI defined on facial features against equal error protection (EEP) over an additive white Gaussian noise (AWGN) channel.
Abstract: This paper evaluates multilevel modulation for different techniques such as multilevel amplitude shift keying (M-ASK), differential phase shift keying (M-ASK-Bipolar), Quaternary Amplitude Shift Keying (QASK) and Quaternary Polarization-ASK (QPol-ASK) at a total bit rate of 107 Gbps. The aim is to find a cost-effective, very-high-speed transport solution. A numerical investigation was performed using Monte Carlo simulations. The obtained results indicate that some modulation formats can be operated at 100 Gbps in optical communication systems with low implementation effort and high spectral efficiency.
Abstract: This paper investigates the performance of a speech recognizer in an interactive voice response system for speech signals coded using a vector quantization technique, namely the Multi Switched Split Vector Quantization technique. The process of recognizing the coded output can be used in voice banking applications. The technique used for recognizing the coded speech signals is the Hidden Markov Model technique. The spectral distortion performance, computational complexity, and memory requirements of the Multi Switched Split Vector Quantization technique, and the performance of the speech recognizer at various bit rates, have been computed. The results show that the speech recognizer performs best at 24 bits/frame, with the recognition rate varying from 100% to 93.33% across the bit rates considered.
Abstract: The aim of this paper is to characterize a larger set of wavelet functions for implementation in a still image compression system using the SPIHT algorithm. This paper discusses important features of the wavelet functions and filters used in subband coding to convert an image into wavelet coefficients in MATLAB. Image quality is measured objectively using the peak signal-to-noise ratio (PSNR) and its variation with bit rate (bpp). The effect of different parameters is studied on different wavelet functions. Our results provide a good reference for application designers of wavelet-based coders.
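The objective quality measure used above is the standard one; a minimal PSNR implementation for 8-bit images (an illustrative sketch, not the paper's MATLAB code) is:

```python
import numpy as np

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB between two images,
    assuming a peak sample value of 255 (8-bit images)."""
    original = np.asarray(original, dtype=float)
    reconstructed = np.asarray(reconstructed, dtype=float)
    mse = np.mean((original - reconstructed) ** 2)
    if mse == 0.0:
        return float("inf")            # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```

Plotting this value against the coder's bit rate in bpp gives the rate-distortion curves the abstract describes.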
Abstract: In this paper we study the effect of scene changes on an image sequence coding system using the Embedded Zerotree Wavelet (EZW). The scene change considered here is the full-motion change which may occur. A special image sequence is generated in which scene changes occur randomly. Two scenarios are considered. In the first, the system must provide the best possible reconstruction quality by managing the bit rate (BR) while the scene change occurs. In the second, the system must keep the bit rate as constant as possible by managing the reconstruction quality. The first scenario is motivated by the availability of a wide-band transmission channel, where an increase of the bit rate is possible to keep the reconstruction quality above a given threshold. The second scenario concerns narrow-band transmission channels, where an increase of the bit rate is not possible; in this case, applications for which the reconstruction quality is not a constraint may be considered. The simulations are performed with a five-scale wavelet decomposition using the 9/7-tap biorthogonal filter bank. The entropy coding is performed using a specifically defined binary codebook and the EZW algorithm. Experimental results are presented and compared to LEAD H263 EVAL. It is shown that if the reconstruction quality is the constraint, the system increases the bit rate to obtain the required quality. In the case where the bit rate must be constant, the system is unable to provide the required quality when a scene change occurs; however, it is able to improve the quality once the scene change has passed.
Abstract: A BER analysis of Impulse Radio Ultra Wideband (IR-UWB) pulse modulations over the S-V channel model is proposed in this paper. The UWB pulse is a Gaussian monocycle modulated using Pulse Amplitude Modulation (PAM) and Pulse Position Modulation (PPM). The channel model is generated from a modified S-V model. The bit error rate (BER) is measured over several bit rates. The results show that both modulations are appropriate for LOS and NLOS channels, but PAM gives better performance in terms of bit rate and SNR. Moreover, given the speed requirements set for UWB, communication at high bit rates is appropriate in the LOS channel.
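As a point of reference for this kind of BER-versus-SNR measurement, the sketch below runs a Monte Carlo BER estimate for antipodal 2-PAM over a plain AWGN channel and checks it against the closed form 0.5·erfc(√(Eb/N0)). It deliberately omits the modified S-V multipath model the paper uses, and all names are illustrative.

```python
import numpy as np
from math import erfc, sqrt

def ber_bpam_monte_carlo(ebn0_db, n_bits=200_000, seed=1):
    """Monte Carlo BER of antipodal 2-PAM over AWGN (no multipath)."""
    rng = np.random.default_rng(seed)
    bits = rng.integers(0, 2, n_bits)
    symbols = 2.0 * bits - 1.0                     # map {0,1} -> {-1,+1}, Eb = 1
    ebn0 = 10.0 ** (ebn0_db / 10.0)
    noise = rng.normal(0.0, sqrt(1.0 / (2.0 * ebn0)), n_bits)
    decided = (symbols + noise) >= 0.0             # threshold detector
    return float(np.mean(decided != bits))

def ber_bpam_theory(ebn0_db):
    """Closed-form BER of antipodal 2-PAM: Q(sqrt(2 Eb/N0))."""
    ebn0 = 10.0 ** (ebn0_db / 10.0)
    return 0.5 * erfc(sqrt(ebn0))
```

Sweeping `ebn0_db` and plotting both functions gives the familiar waterfall curve against which measured UWB BER results are typically compared.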
Abstract: The H.264/AVC standard uses intra prediction with 9 directional modes for 4x4 and 8x8 luma blocks, and 4 directional modes for 16x16 macroblocks and 8x8 chroma blocks, respectively. This means that, for a macroblock, 736 different RDO calculations have to be performed before the best RDO mode is determined. With this multiple intra-mode prediction, the intra coding of H.264/AVC offers considerably higher coding efficiency compared to other compression standards, but the computational complexity is increased significantly. This paper presents a fast intra prediction algorithm for H.264/AVC based on homogeneity information. In this study, a gradient prediction method is used to predict homogeneous areas and a quadratic prediction function is used to predict non-homogeneous areas. Based on the correlation between homogeneity and block size, the smaller blocks are predicted by both gradient prediction and quadratic prediction, while the bigger blocks are predicted by gradient prediction alone. Experimental results show that the proposed method reduces the complexity by up to 76.07% while maintaining similar PSNR quality, with about a 1.94% bit rate increase on average.
Abstract: H.264/AVC offers considerably higher coding efficiency compared to other compression standards such as MPEG-2, but its computational complexity is increased significantly. In this paper, we propose selective mode decision schemes for fast intra prediction mode selection. The objective is to reduce the computational complexity of the H.264/AVC encoder without significant rate-distortion performance degradation. In our proposed schemes, the intra prediction complexity is reduced by limiting the luma and chroma prediction modes using the directional information of the 16×16 prediction mode. Experimental results show that the proposed schemes reduce the complexity by up to 78% while maintaining similar PSNR quality, with about a 1.46% bit rate increase on average.
Abstract: In this paper, we provide complete end-to-end delay analyses, including the relay nodes, for instant messages. The Message Session Relay Protocol (MSRP) is used to provide congestion control for large messages in the Instant Messaging (IM) service. Large messages are broken into several chunks, which may traverse at most two relay nodes before reaching the destination, according to the IETF specification of the MSRP relay extensions. We discuss the current solutions for sending large instant messages and introduce a proposal to reduce message flows in the IM service. We consider a virtual traffic parameter, i.e., the relay nodes are stateless and non-blocking for scalability, and each relay node is assumed to have a constant-bit-rate input. We provide a new scheduling policy that schedules chunks according to the delivery time stamp tags of their previous node. Validation and analysis are shown for this scheduling policy. The performance analysis with the model introduced in this paper is simple and straightforward, and leads to reduced message flows in the IM service.
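The proposed scheduling policy, ordering chunks by the delivery time stamp tagged by the previous node, can be sketched with a priority queue. The class and field names below are illustrative, and real MSRP chunk framing is not modeled.

```python
import heapq

class ChunkScheduler:
    """Sketch of timestamp-based chunk scheduling at a stateless relay:
    chunks are forwarded in order of the delivery time stamp tagged by
    the previous node, so earlier-dispatched chunks are served first."""

    def __init__(self):
        self._heap = []
        self._seq = 0              # tie-breaker for equal timestamps

    def enqueue(self, prev_node_timestamp, chunk):
        """Queue a chunk keyed by the previous node's delivery timestamp."""
        heapq.heappush(self._heap, (prev_node_timestamp, self._seq, chunk))
        self._seq += 1

    def dequeue(self):
        """Forward the chunk with the earliest timestamp, or None if idle."""
        return heapq.heappop(self._heap)[2] if self._heap else None
```

With a constant-bit-rate input at each relay, this ordering bounds how far a chunk's service can be deferred by later arrivals, which is what makes the end-to-end delay analysis tractable.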