Abstract: A subcarrier spectral amplitude coding optical code
division multiple access system using the Khazani-Syed code with the
complementary subtraction detection technique is proposed. The
proposed system has been analyzed by taking into account the effects
of phase-induced intensity noise, shot noise, thermal noise, and
intermodulation distortion noise. Its performance has been compared
with that of spectral amplitude coding optical code division multiple
access systems using the Hadamard code and the Modified Quadratic
Congruence code. The analysis shows that the proposed system can
eliminate the multiple access interference through the complementary
subtraction detection technique and hence improve the overall system
performance.
Abstract: In this paper we present a soft timing phase estimation (STPE) method for wireless mobile receivers operating at low signal-to-noise ratios (SNRs). Discrete polyphase matched (DPM) filters, a Log-maximum a posteriori (Log-MAP) algorithm, and/or a soft-output Viterbi algorithm (SOVA) are combined to derive a new timing recovery (TR) scheme. We apply this scheme to a wireless cellular communication system model that comprises a raised cosine filter (RCF) and a bit-interleaved turbo-coded multi-level modulation (BITMM) scheme; the channel is assumed to be memoryless. Furthermore, no clock signals are transmitted to the receiver, contrary to classical data-aided (DA) models. The new model thus conserves both the bandwidth and the power of the communication system, although the computational complexity of ideal turbo synchronization is increased by 50%. Several simulation tests of bit error rate (BER) and block error rate (BLER) versus low SNR reveal that the proposed iterative soft timing recovery (ISTR) scheme outperforms the conventional schemes.
Abstract: LDPC codes could be used in magnetic storage devices because of their better decoding performance compared to other error correction codes. However, their hardware implementation results in large and complex decoders, which is one of the main obstacles to incorporating the decoders in magnetic storage devices. We construct small, high-girth, column-weight-2 codes from cage graphs. Though these codes have lower performance than higher column-weight codes, they are easier to implement, which makes them more suitable for applications such as magnetic recording. Cages are the smallest known regular graphs of a given degree and girth, and hence give us the smallest known column-weight-2 codes for a given size, girth, and rate.
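The construction described above can be sketched concretely: the vertex-edge incidence matrix of any graph has exactly two 1s per column, so taking a cage graph as the underlying graph yields a column-weight-2 parity-check matrix of minimal size for its girth. The following minimal sketch (my own illustration, not code from the paper) uses the Petersen graph, which is the (3,5)-cage:

```python
# Sketch: building a column-weight-2 LDPC parity-check matrix as the
# vertex-edge incidence matrix of a cage graph. The Petersen graph is
# the (3,5)-cage: 10 vertices, 15 edges, 3-regular, girth 5.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0),      # outer cycle
         (5, 7), (7, 9), (9, 6), (6, 8), (8, 5),      # inner pentagram
         (0, 5), (1, 6), (2, 7), (3, 8), (4, 9)]      # spokes

n_vertices = 10
# H[i][j] = 1 iff vertex i is an endpoint of edge j.
H = [[0] * len(edges) for _ in range(n_vertices)]
for j, (u, v) in enumerate(edges):
    H[u][j] = 1
    H[v][j] = 1

# Every column has weight 2 (each edge has two endpoints);
# every row has weight 3 (the graph is 3-regular).
col_weights = [sum(H[i][j] for i in range(n_vertices)) for j in range(len(edges))]
row_weights = [sum(row) for row in H]
print(set(col_weights), set(row_weights))
```

Each check node corresponds to a vertex and each variable node to an edge, so the Tanner graph girth is tied to the girth of the underlying cage.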
Abstract: Medical imaging exploits the advantages of digital
technology in imaging and teleradiology. In teleradiology systems,
large amounts of data are acquired, stored, and transmitted. A major
technology that may help to solve the problems associated with
massive data storage and data transfer capacity is data compression
and decompression. Many image compression methods are available,
classified as lossless and lossy. In lossy compression the
decompressed image contains some distortion. Fractal image
compression (FIC) is a lossy method in which an image is coded as a
set of contractive transformations in a complete metric space; this
set of transformations is guaranteed to produce an approximation to
the original image. In this paper FIC is achieved by partitioned
iterated function systems (PIFS) using quadtree partitioning. PIFS is
applied to different image modalities: ultrasound, CT scan,
angiogram, X-ray, and mammogram. For each modality approximately
twenty images are considered, and the average compression ratio and
PSNR values are computed. In this fractal encoding method, the
tolerance factor Tmax is varied from 1 to 10 while the other standard
parameters are kept constant. For all modalities, the compression
ratio and Peak Signal-to-Noise Ratio (PSNR) are computed and studied;
the quality of the decompressed image is assessed by its PSNR value.
The results show that the compression ratio increases with the
tolerance factor and that mammograms have the highest compression
ratio. Owing to the properties of fractal compression, image quality
is not degraded up to an optimum tolerance factor of Tmax = 8.
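The two figures of merit used in the study above, compression ratio and PSNR, can be sketched as follows; the pixel values and file sizes here are illustrative placeholders, not the paper's data:

```python
import math

def psnr(original, decompressed, peak=255):
    """Peak Signal-to-Noise Ratio between two equal-size 8-bit images
    (given as flat lists of pixel values)."""
    mse = sum((o - d) ** 2 for o, d in zip(original, decompressed)) / len(original)
    if mse == 0:
        return float("inf")          # identical images
    return 10 * math.log10(peak ** 2 / mse)

def compression_ratio(original_bytes, compressed_bytes):
    """Ratio of uncompressed size to compressed size."""
    return original_bytes / compressed_bytes

# Toy example: a "decompressed" image differing slightly from the original.
orig = [100, 120, 130, 140]
dec  = [101, 119, 131, 139]
print(round(psnr(orig, dec), 2))         # MSE = 1, so 10*log10(255^2)
print(compression_ratio(262144, 16384))  # e.g. 256 KB compressed to 16 KB
```

A higher tolerance factor lets more block pairings pass the matching test, which raises the compression ratio and, past some point, lowers the PSNR.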
Abstract: Satellite communication channels are characterized, among
other factors, by their high bit error rate. We present a system for
still image transmission over noisy satellite channels. The system
couples image compression with error control codes to improve the
received image quality while maintaining the bandwidth requirements.
The proposed system is tested using high resolution satellite imagery
simulated over a Rician fading channel. Evaluation results show
improvement in overall system performance, including image quality
and bandwidth requirements, compared to similar systems with
different coding schemes.
Abstract: Machine translation (MT) of English text into its Urdu equivalent is a difficult challenge. Many attempts have been made, but only a few limited solutions have been provided so far. We present a direct approach, using an expert system to translate English text into its Urdu equivalent, using the Unicode Standard, Version 4.0 (ISBN 0-321-18578-1), Range: 0600–06FF. The expert system works with a knowledge base that contains grammatical patterns of English and Urdu, as well as a tense- and gender-aware dictionary of Urdu words (with their English equivalents).
Abstract: Since the majority of faults are found in only a few modules of a software system, there is a need to identify the modules that are affected most severely and to carry out proper maintenance on time, especially for critical applications. In this paper, we apply different predictor models to NASA's public-domain defect dataset, coded in the Perl programming language. Different machine learning algorithms belonging to different learner categories of the WEKA project, including a Mamdani-based fuzzy inference system and a neuro-fuzzy system, have been evaluated for modeling maintenance severity, i.e., the impact of fault severity. The results are recorded in terms of Accuracy, Mean Absolute Error (MAE), and Root Mean Squared Error (RMSE). They show that the neuro-fuzzy model provides relatively better prediction accuracy than the other models and hence can be used for maintenance severity prediction of software.
Abstract: A new method for color image segmentation using fuzzy logic is proposed in this paper. Our aim is to automatically produce a fuzzy system for color classification and image segmentation with the least number of rules and minimum error rate. Particle swarm optimization is a subclass of evolutionary algorithms inspired by the social behavior of fishes, bees, birds, and other animals that live together in colonies. We use the comprehensive learning particle swarm optimization (CLPSO) technique to find optimal fuzzy rules and membership functions, because it discourages premature convergence. Each particle of the swarm encodes a set of fuzzy rules, and during evolution each population member tries to maximize a fitness criterion, namely a high classification rate and a small number of rules. Finally, the particle with the highest fitness value is selected as the best set of fuzzy rules for image segmentation. Using this method for soccer field image segmentation in RoboCup contests, our results show 89% performance. Less computational load is needed with this method than with methods such as ANFIS, because it generates a smaller number of fuzzy rules. A large and varied training dataset makes the proposed method invariant to illumination noise.
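The fitness criterion described above, high classification rate combined with a small rule set, can be sketched as a scalar objective. This is a hypothetical illustration; the penalty weight `w_rules` and the sample numbers are assumptions, not values from the paper:

```python
# Hypothetical sketch of a CLPSO fitness function: reward classification
# accuracy, penalize large rule sets. w_rules is an assumed trade-off
# weight, not a parameter reported in the abstract.
def fitness(correct, total, n_rules, max_rules, w_rules=0.1):
    """Higher is better: classification rate minus a rule-count penalty."""
    classification_rate = correct / total
    rule_penalty = w_rules * (n_rules / max_rules)
    return classification_rate - rule_penalty

# Two candidate particles with equal accuracy: the one encoding fewer
# fuzzy rules gets the higher fitness and survives selection.
a = fitness(correct=890, total=1000, n_rules=20, max_rules=50)
b = fitness(correct=890, total=1000, n_rules=10, max_rules=50)
print(round(a, 3), round(b, 3))   # b > a: fewer rules wins at equal accuracy
```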
Abstract: In this paper, we present video quality measure estimation
via a neural network, which predicts the MOS (mean opinion score)
from eight parameters extracted from the original and coded videos.
The eight parameters used are: the average of DFT differences, the
standard deviation of DFT differences, the average of DCT
differences, the standard deviation of DCT differences, the variance
of the energy of color, the luminance Y, the chrominance U, and the
chrominance V. We chose the Euclidean distance to compare the
calculated and estimated outputs.
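The comparison step named above is a plain Euclidean distance between the vector of estimated MOS values and the vector of calculated (subjective) ones. A minimal sketch, with illustrative scores that are not measured data:

```python
import math

def euclidean_distance(estimated, reference):
    """Euclidean distance between two equal-length score vectors."""
    return math.sqrt(sum((e - r) ** 2 for e, r in zip(estimated, reference)))

# Illustrative MOS values on the usual 1-5 scale (assumed, not from the paper).
subjective_mos = [4.2, 3.8, 2.5, 4.0]
predicted_mos  = [4.0, 3.9, 2.7, 3.8]

print(round(euclidean_distance(predicted_mos, subjective_mos), 3))
```

A smaller distance means the network's MOS predictions track the subjective scores more closely.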
Abstract: A few decades ago, electronic and sensor technologies began
to be merged into vehicles as Advanced Driver Assistance Systems
(ADAS). However, sensor-based ADASs have limitations such as weather
interference and the line-of-sight nature of the sensors. In our
project, we investigate a Relative Position based ADAS (RP-ADAS). We
divide the RP-ADAS into four main research areas: GNSS, VANET,
security/privacy, and applications. In this paper, we study the
available GNSS technologies and determine the most appropriate one.
The performance evaluation shows that C/A code based GPS technologies
are inappropriate for 'which lane'-level applications, although they
can be used for 'which road'-level applications.
Abstract: A new code synchronization algorithm is proposed in
this paper for the secondary cell-search stage in wideband CDMA
systems. Rather than using the Cyclically Permutable (CP) code in the
Secondary Synchronization Channel (S-SCH) to simultaneously
determine the frame boundary and scrambling code group, the new
synchronization algorithm implements the same function with less
system complexity and less Mean Acquisition Time (MAT). The
Secondary Synchronization Code (SSC) is redesigned by splitting into
two sub-sequences. We treat the information of scrambling code group
as data bits and use simple time diversity BCH coding for further
reliability. It avoids involved and time-costly Reed-Solomon (RS)
code computations and comparisons. Analysis and simulation results
show that the Synchronization Error Rate (SER) yielded by the new
algorithm in Rayleigh fading channels is close to that of the
conventional algorithm in the standard. The new synchronization
algorithm reduces system complexity, shortens the average cell-search
time, and can be implemented in the slot-based cell-search pipeline.
By exploiting antenna diversity and pipelined correlation processes,
it also finds flexible application in multiple antenna systems.
Abstract: Reinforced concrete stair slabs with mid-landings, i.e.,
dog-legged stairs, are conventionally designed per the specifications
of standard codes of practice, which give the effective span for
varying support conditions. In the present work, the behavior of such
slabs has been investigated using the finite element method. A
single-flight stair slab with landings on both sides, supported at
its ends on walls, and a multi-flight stair slab with landings and
six different support arrangements have been analyzed. The results
obtained for stresses, strains, and deflections are used to describe
the behavior of such stair slabs, including the locations of critical
moments and deflections. Values of critical moments obtained by the
finite element analysis have also been compared with those obtained
from conventional analysis. The analytical results show that the
moments are also critical near the kinks, i.e., the junctions of the
mid-landing and the inclined waist slab. This change in the behavior
of the dog-legged stair slab may be due to the continuity of the
material in the transverse direction in the two landings adjoining
the waist slab, which provides additional stiffness; it is generally
not accounted for in the conventional method of design.
Abstract: Although achieving a zero-defect software release is
practically impossible, software industries should take maximum care
to detect defects/bugs well ahead of time, allowing only a bare
minimum to creep into the released version. This makes time an
important factor in bug detection. In addition, software quality is a
major factor in the software engineering process, and early detection
can be achieved only through static code analysis, as opposed to
conventional testing.
BugCatcher.Net is a static analysis tool, which detects bugs in .NET®
languages through MSIL (Microsoft Intermediate Language)
inspection. The tool utilizes a Parser based on Finite State Automata
to carry out bug detection. Once detected, bugs need to be corrected
immediately; BugCatcher.Net facilitates correction by proposing a
corrective solution for reported warnings/bugs to end users, with
minimum side effects. Moreover, the tool is also capable
of analyzing the bug trend of a program under inspection.
Abstract: Unlike general-purpose processors, digital signal
processors (DSP processors) are strongly application-dependent. To
meet the needs for diverse applications, a wide variety of DSP
processors based on different architectures ranging from the
traditional to VLIW have been introduced to the market over the
years. The functionality, performance, and cost of these processors
vary over a wide range. In order to select a processor that meets the
design criteria for an application, processor performance is usually
the major concern for digital signal processing (DSP) application
developers. Performance data are also essential for the designers of
DSP processors to improve their design. Consequently, several DSP
performance benchmarks have been proposed over the past decade or
so. However, none of these benchmarks includes recent DSP
applications.
In this paper, we use a new benchmark that we recently developed
to compare the performance of popular DSP processors from Texas
Instruments and StarCore. The new benchmark is based on the
Selectable Mode Vocoder (SMV), a speech-coding program from the
recent third generation (3G) wireless voice applications. All
benchmark kernels are compiled by the compilers of the respective
DSP processors and run on their simulators. Weighted arithmetic
mean of clock cycles and arithmetic mean of code size are used to
compare the performance of five DSP processors.
In addition, we studied how the performance of a processor is
affected by code structure, processor architecture features, and
compiler optimizations. The extensive experimental data gathered,
analyzed, and presented in this paper should be helpful for DSP
processor and compiler designers to meet their specific design goals.
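The scoring described in the abstract, a weighted arithmetic mean of per-kernel clock cycles alongside a plain mean of code sizes, can be sketched as follows. The kernel names, cycle counts, and weights below are invented for illustration and are not the benchmark's actual data:

```python
# Sketch of benchmark scoring: weight each kernel's cycle count by its
# relative importance (e.g., execution frequency in the vocoder), then
# combine. All numbers below are assumed placeholders.
def weighted_mean(values, weights):
    """Weighted arithmetic mean of `values` with the given `weights`."""
    return sum(v * w for v, w in zip(values, weights)) / sum(weights)

kernels    = ["fft", "lpc_analysis", "pitch_search"]   # hypothetical kernels
cycles     = [12000, 45000, 30000]   # clock cycles per kernel (assumed)
weights    = [0.2, 0.5, 0.3]         # relative execution frequency (assumed)
code_sizes = [1.2, 3.4, 2.1]         # code size in KB per kernel (assumed)

cycle_score = weighted_mean(cycles, weights)        # lower is better
size_score  = sum(code_sizes) / len(code_sizes)     # plain arithmetic mean
print(round(cycle_score, 1), round(size_score, 2))
```

Weighting the cycle counts keeps a rarely executed kernel from dominating the score, while code size is averaged unweighted since every kernel occupies memory regardless of how often it runs.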
Abstract: This paper presents a CFD analysis of the flow field
around a thin flat plate of infinite span inclined at 90° to a fluid
stream of infinite extent. Numerical predictions have been compared
to experimental measurements, in order to assess the potential of the
finite volume code for determining the aerodynamic forces acting on a
bluff body immersed in a fluid stream of infinite extent.
Several turbulence models and spatial node distributions have
been tested. Flow field characteristics in the neighborhood of the flat
plate have been investigated, allowing the development of a
preliminary procedure to be used as guidance in selecting the
appropriate grid configuration and the corresponding turbulence
model for the prediction of the flow field over a two-dimensional
vertical flat plate.
Abstract: Voice over Internet Protocol (VoIP) applications, commonly known as softphones, have been developing an increasingly large market in today's telecommunication world, and the trend is expected to continue with the addition of new features. This includes leveraging existing presence services, location, and contextual information to enable more ubiquitous and seamless communications. In this paper, we discuss the concept of seamless session transfer for real-time applications such as VoIP and IPTV, and our prototype implementation of this concept on a selected open source VoIP application. The first part of this paper conducts performance evaluations and assessments across some commonly found open source VoIP applications, namely Ekiga, KPhone, Linphone, and Twinkle, so as to identify one of them for implementing our design of seamless session transfer. Subjective testing has been carried out to evaluate the audio performance of these VoIP applications and rank them according to their Mean Opinion Score (MOS) results. The second part of this paper discusses the performance evaluation of our prototype implementation of session transfer using Linphone.
Abstract: The third generation (3G) of cellular systems adopted
spread spectrum as the solution for data transmission in the physical
layer. Contrary to IS-95 or CDMAOne systems (spread spectrum systems
of the preceding generation), the new standard, called the Universal
Mobile Telecommunications System (UMTS), uses long codes in the
downlink. The system is designed for both voice communication and
data transmission. The downlink is particularly important because of
the asymmetry of data requests, i.e., more downloading toward the
mobiles than uploading toward the base station. Moreover, for the
downlink the UMTS uses orthogonal spreading with a variable spreading
factor (OVSF, for Orthogonal Variable Spreading Factor). This
characteristic makes it possible to increase the data rate of one or
more users by reducing their spreading factor without changing the
spreading factor of the other users. In the current UMTS standard,
two techniques have been proposed to increase the performance of the
downlink: transmit antenna diversity and space-time codes. These two
techniques combat only fading. The receiver proposed for the mobile
station is the RAKE, but one can imagine a more sophisticated
receiver, able to reduce the interference between users and the
impact of colored noise and narrowband interference. In this context,
where the users have synchronized long codes with variable spreading
factors and the mobile is unaware of the other active codes/users,
the use of pseudo-noise code sequences of different lengths emerges
as one of the most appropriate solutions.
Abstract: The last decade has shown that the object-oriented concept
by itself is not powerful enough to cope with the rapidly changing
requirements of ongoing applications. Component-based systems achieve
flexibility by clearly separating the stable parts of systems (i.e.,
the components) from the specification of their composition. In order
to realize the reuse of components effectively in component-based
software development (CBSD), it is necessary to measure the
reusability of components. However, due to the black-box nature of
components, whose source code is not available, it is difficult to
use conventional metrics in component-based development, as these
metrics require analysis of source code. In this paper, we survey a
few existing component-based reusability metrics. These metrics give
a broader view of a component's understandability, adaptability, and
portability. We also describe the analysis, in terms of quality
factors related to reusability, contained in an approach that aids
significantly in assessing existing components for reusability.
Abstract: Management Control (MC) and Internal Control (IC): what is the relationship? This paper is an empirical study of the definitions of MC and IC. In the wider interpretation of the terms Internal Control and Management Control, attention is focused not only on the financial aspects but also on the soft aspects of the business, such as culture, behaviour, standards, and values. The narrower interpretation of Management Control focuses mainly on the hard, financial aspects of business operation. The definitions of Management Control and Internal Control are often used interchangeably, and the results of this empirical study reveal that Management Control is part of Internal Control; there is no causal link between the two concepts. Based on the interpretations of the respondents, the term Management Control has moved from a broad term to a more limited one concerned with the soft aspects of influencing behaviour, performance measurement, incentives, and culture. This paper is an exploratory study based on qualitative research and on a qualitative matrix-method analysis of the thematic definitions of the terms Management Control and Internal Control.
Abstract: The development of signal compression algorithms is making
impressive progress. These algorithms are continuously improved by
new tools and aim to reduce, on average, the number of bits necessary
for the signal representation while minimizing the reconstruction
error. This article proposes the compression of Arabic speech signals
by a hybrid method combining the wavelet transform and linear
prediction. The adopted approach rests, on the one hand, on the
decomposition of the original signal by analysis filters, followed by
the compression stage, and on the other hand, on the application of
order-5 linear prediction to the compressed signal coefficients. The
aim of this approach is the estimation of the prediction error, which
will be coded and transmitted. The decoding operation is then used to
reconstruct the original signal. Thus, an adequate choice of the
filter bank used in the transform is necessary to increase the
compression rate while inducing a distortion that is imperceptible
from an auditory point of view.
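The prediction step named above, an order-5 linear predictor whose residual error is what gets coded and transmitted, can be sketched as follows. This is a minimal least-squares illustration on a synthetic signal, not the paper's exact wavelet-plus-LPC pipeline:

```python
import math
import random

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small linear system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def lpc_residual(x, order=5):
    """Fit a linear predictor of x[n] from the previous `order` samples
    (least squares via the normal equations) and return the residual."""
    rows = [[x[n - k - 1] for k in range(order)] for n in range(order, len(x))]
    target = x[order:]
    RtR = [[sum(r[i] * r[j] for r in rows) for j in range(order)]
           for i in range(order)]
    Rty = [sum(r[i] * y for r, y in zip(rows, target)) for i in range(order)]
    a = solve(RtR, Rty)
    return [y - sum(ai * ri for ai, ri in zip(a, r))
            for r, y in zip(rows, target)]

# Synthetic, highly predictable test signal: a sinusoid plus slight noise.
random.seed(0)
signal = [math.sin(0.1 * n) + 0.01 * random.gauss(0, 1) for n in range(256)]
res = lpc_residual(signal, order=5)

rms_sig = math.sqrt(sum(s * s for s in signal) / len(signal))
rms_res = math.sqrt(sum(e * e for e in res) / len(res))
print(rms_res < 0.1 * rms_sig)   # predictable signal -> small residual
```

Because speech sub-band coefficients are strongly correlated, the residual carries far less energy than the signal itself, which is precisely why coding the prediction error is cheaper than coding the samples directly.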