Abstract: Compression algorithms reduce the redundancy in
data representation to decrease the storage required for that data.
Lossless compression researchers have developed highly
sophisticated approaches, such as Huffman encoding, arithmetic
encoding, the Lempel-Ziv (LZ) family, Dynamic Markov
Compression (DMC), Prediction by Partial Matching (PPM), and
Burrows-Wheeler Transform (BWT) based algorithms.
Decompression is likewise required to retrieve the original data losslessly. This paper presents a compression scheme for text files coupled with the principle of dynamic decompression, which decompresses only the section of the compressed text file required by the user instead of decompressing the entire file. Dynamically decompressed files offer better disk space utilization due to higher compression ratios compared to most currently available text file formats.
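As a hedged illustration of the dynamic decompression idea, the following Python sketch compresses a text file block by block and keeps an offset index so that only the blocks covering a requested range are decompressed; the block size and helper names are illustrative assumptions, not the scheme proposed here.

```python
# Minimal sketch of block-wise "dynamic decompression" (assumed design, not
# the paper's exact scheme): the text is split into fixed-size blocks, each
# block is compressed independently, and an offset index lets a reader
# decompress only the block(s) containing the requested region.
import zlib

BLOCK_SIZE = 4096  # assumed block size

def compress_blocks(text: bytes):
    """Compress text block by block and return (payload, index of offsets)."""
    blocks, index, offset = [], [], 0
    for i in range(0, len(text), BLOCK_SIZE):
        comp = zlib.compress(text[i:i + BLOCK_SIZE])
        blocks.append(comp)
        index.append(offset)          # start offset of this compressed block
        offset += len(comp)
    return b"".join(blocks), index

def read_range(payload: bytes, index, start: int, length: int) -> bytes:
    """Decompress only the blocks overlapping [start, start+length)."""
    first, last = start // BLOCK_SIZE, (start + length - 1) // BLOCK_SIZE
    out = b""
    for b in range(first, last + 1):
        lo = index[b]
        hi = index[b + 1] if b + 1 < len(index) else len(payload)
        out += zlib.decompress(payload[lo:hi])
    # trim to the exact requested range inside the decompressed blocks
    skip = start - first * BLOCK_SIZE
    return out[skip:skip + length]

payload, index = compress_blocks(b"example text " * 2000)
print(read_range(payload, index, start=5000, length=20))
```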
Abstract: A Wireless Sensor Network (WSN) comprises sensor nodes designed to sense the environment and transmit the sensed data back to the base station via multi-hop routing so that physical phenomena can be reconstructed. Since physical phenomena exhibit significant temporal and spatial redundancy, Redundancy Suppression Algorithms (RSAs) are needed on sensor nodes to lower energy consumption by reducing the transmission of redundant data. A conventional RSA is the threshold-based RSA, which sets a threshold to suppress redundant data. Although many temporal and spatial RSAs have been proposed, combined temporal-spatial RSAs are seldom proposed because it is difficult to determine when to apply the temporal or the spatial part. In this paper, we propose a novel temporal-spatial redundancy suppression algorithm, the Codebook-based Redundancy Suppression Mechanism (CRSM). CRSM adopts
vector quantization to generate a codebook, which is easily used to
implement a temporal-spatial RSA. CRSM not only achieves power saving and reliability for the WSN, but also makes the network lifetime predictable. Simulation results show that the network lifetime of CRSM exceeds that of other RSAs by at least 23%.
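For readers unfamiliar with the vector quantization step, the following Python sketch trains a small codebook with k-means-style updates and reports each sensor reading as the index of its nearest codeword; the codebook size, data and helper names are illustrative assumptions rather than CRSM itself.

```python
# Minimal vector-quantization sketch (assumed illustration, not CRSM itself):
# a codebook is trained offline with k-means-style updates, and each sensor
# reading vector is then reported as the index of its nearest codeword,
# suppressing temporally/spatially redundant raw transmissions.
import numpy as np

def train_codebook(samples: np.ndarray, k: int = 16, iters: int = 20) -> np.ndarray:
    """Lloyd/k-means iteration over historical sensor vectors."""
    rng = np.random.default_rng(0)
    codebook = samples[rng.choice(len(samples), k, replace=False)]
    for _ in range(iters):
        # assign every sample to its nearest codeword
        d = np.linalg.norm(samples[:, None, :] - codebook[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                codebook[j] = samples[labels == j].mean(axis=0)
    return codebook

def encode(reading: np.ndarray, codebook: np.ndarray) -> int:
    """A node sends this small index instead of the full reading vector."""
    return int(np.linalg.norm(codebook - reading, axis=1).argmin())

history = np.random.default_rng(1).normal(25.0, 2.0, size=(500, 4))  # e.g. 4 nearby nodes
cb = train_codebook(history)
print(encode(np.array([24.8, 25.1, 25.0, 24.9]), cb))  # transmit only this index
```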
Abstract: The purpose of this study was to investigate the effects of computer-based instructional designs, namely the modality and redundancy principles, on the attitude and learning of music theory among primary pupils of different music intelligence levels. The music theory lesson was developed in three different modes: audio and image (AI), text with image (TI), and audio with image and text (AIT). The independent variable was the mode of courseware. The moderator variable was music intelligence. The dependent variable was the post-test score. ANOVA was used to determine whether there were significant differences in the pretest scores among the three groups. Analysis of covariance (ANCOVA) and post hoc tests were carried out to examine the main effects as well as the interaction effects of the independent variable on the dependent variable. High music intelligence pupils performed significantly better than low music intelligence pupils in all three treatment modes. The AI mode was found to help pupils with low music intelligence significantly more than the TI and AIT modes.
Abstract: Among various testing methodologies, Built-in Self-
Test (BIST) is recognized as a low-cost, effective paradigm. Moreover,
full adders are one of the basic building blocks of most arithmetic
circuits in all processing units. In this paper, an optimized testable 2-
bit full adder as a test building block is proposed. Then, a BIST
procedure is introduced to scale up the building block and to generate self-testable n-bit full adders. The target design achieves 100% fault coverage using an insignificant amount of hardware redundancy. Moreover, overall test time is reduced by utilizing polymorphic gates and by testing the full adder building blocks in parallel.
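As a hedged illustration of the fault-coverage metric (not the proposed BIST hardware), the following Python sketch models a gate-level 1-bit full adder, injects every single stuck-at fault and counts how many are detected by an exhaustive test set.

```python
# Minimal stuck-at fault-coverage sketch for a gate-level 1-bit full adder
# (an illustration of the coverage metric, not the paper's BIST circuit).
from itertools import product

NETS = ["a", "b", "cin", "x1", "a1", "a2", "sum", "cout"]

def full_adder(a, b, cin, fault=None):
    """Gate-level full adder; `fault` = (net, stuck_value) forces that net."""
    def f(name, val):
        return fault[1] if fault and fault[0] == name else val
    a, b, cin = f("a", a), f("b", b), f("cin", cin)
    x1 = f("x1", a ^ b)
    a1 = f("a1", a & b)
    a2 = f("a2", x1 & cin)
    s = f("sum", x1 ^ cin)
    cout = f("cout", a1 | a2)
    return s, cout

tests = list(product([0, 1], repeat=3))          # exhaustive 8-vector test set
faults = [(n, v) for n in NETS for v in (0, 1)]  # all single stuck-at faults
detected = sum(
    any(full_adder(*t) != full_adder(*t, fault=flt) for t in tests)
    for flt in faults
)
print(f"fault coverage: {100.0 * detected / len(faults):.1f}%")
```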
Abstract: We suggest a novel method to incorporate long-term redundancy (LTR) into time-domain signal compression methods. The approach is based on block-sorting and curve simplification, and is illustrated on the ECG signal as a post-processor for the FAN method. Tests of the resulting FAN+ method on the MIT-BIH database show a substantial improvement of the compression ratio-distortion behavior, yielding a higher quality reconstructed signal.
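As a generic illustration of curve simplification only (the FAN/FAN+ post-processor itself is a different, ECG-specific scheme), the following Python sketch applies Ramer-Douglas-Peucker simplification to a sampled curve; the tolerance and test signal are placeholders.

```python
# Generic curve-simplification sketch (Ramer-Douglas-Peucker), given only as
# an illustration of the "curve simplification" idea; not the FAN+ method.
import numpy as np

def rdp(points: np.ndarray, eps: float) -> np.ndarray:
    """Keep only points whose removal would distort the curve by more than eps."""
    if len(points) < 3:
        return points
    start, end = points[0], points[-1]
    chord = end - start
    norm = np.linalg.norm(chord)
    if norm == 0.0:
        d = np.linalg.norm(points - start, axis=1)
    else:
        # perpendicular distance of every point to the chord start->end
        d = np.abs(chord[0] * (points[:, 1] - start[1])
                   - chord[1] * (points[:, 0] - start[0])) / norm
    i = int(np.argmax(d))
    if d[i] > eps:
        left = rdp(points[: i + 1], eps)
        right = rdp(points[i:], eps)
        return np.vstack([left[:-1], right])
    return np.vstack([start, end])

t = np.linspace(0, 2 * np.pi, 200)
curve = np.column_stack([t, np.sin(t)])          # stand-in signal, not an ECG
print(len(curve), "->", len(rdp(curve, eps=0.01)), "samples kept")
```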
Abstract: Fault tree analysis is a well-known method for
reliability and safety assessment of engineering systems. In the last three decades, a number of methods have been introduced in the literature for the automatic construction of fault trees. The main difference between these methods is the starting model from which the tree is constructed. This paper presents a new methodology for the construction of static and dynamic fault trees from a system Simulink model. The method is introduced and explained in detail, and its correctness and completeness are experimentally validated using an example taken from the literature. Advantages of the method are also mentioned.
Abstract: XML is an important standard for data exchange and representation. Since relational databases are mature systems, using them to support XML data can bring advantages. However, storing XML in a relational database introduces obvious redundancy that wastes disk space, bandwidth and disk I/O when querying XML data. To store and query XML efficiently, it is necessary to keep the XML data compressed in the relational database. In this paper, a compressed relational database technology supporting XML data is presented. The original relational storage structure is well suited to XPath query processing, and the compression method preserves this feature. Beyond traditional relational database techniques, additional query processing techniques for compressed relations and for the special structure of XML are presented, including techniques for XQuery processing over the compressed relational database.
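The abstract does not detail the compression method; as one hedged illustration of reducing relational XML-storage redundancy, the following Python/SQLite sketch dictionary-encodes repeated element paths into integer ids while still allowing path-based lookups. Table and column names are illustrative assumptions.

```python
# Hedged sketch of one way to reduce redundancy when storing XML relationally
# (dictionary-encoding repeated element paths into integer ids); the paper's
# actual compression scheme is not given in the abstract, and the schema
# below is an illustrative assumption.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE path_dict (path_id INTEGER PRIMARY KEY, path TEXT UNIQUE);
CREATE TABLE node (node_id INTEGER PRIMARY KEY,
                   path_id INTEGER REFERENCES path_dict,
                   value TEXT);
""")

def insert_node(path: str, value: str) -> None:
    # store each distinct element path only once
    con.execute("INSERT OR IGNORE INTO path_dict(path) VALUES (?)", (path,))
    (pid,) = con.execute("SELECT path_id FROM path_dict WHERE path = ?", (path,)).fetchone()
    con.execute("INSERT INTO node(path_id, value) VALUES (?, ?)", (pid, value))

for i in range(3):  # the repeated path string is stored once, not three times
    insert_node("/catalog/book/title", f"Book {i}")

# an XPath-like lookup still works via a join on the compact path id
rows = con.execute("""
    SELECT n.value FROM node n JOIN path_dict p USING (path_id)
    WHERE p.path = '/catalog/book/title'
""").fetchall()
print(rows)
```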
Abstract: High redundancy and strong uncertainty are two main characteristics of underwater robotic manipulators with unlimited workspace and mobility, but they also make motion planning and control difficult and complex. In order to lay the groundwork for research on control schemes, a mathematical representation is built using the Denavit-Hartenberg (D-H) method [9], [12], together with a study of the manipulator geometry, to establish the direct and inverse kinematics. The dynamic model is then developed using the Lagrange formulation. Furthermore, the derivation and computer simulation are carried out in the MATLAB environment, and the results obtained are compared with the ADAMS mechanical system dynamics analysis software. In addition, the creation of intelligent artificial skin using Interlink Force Sensing Resistor™ technology is presented as groundwork for future work.
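For reference, the following Python sketch shows the standard Denavit-Hartenberg convention used in such kinematic models: each joint contributes one homogeneous transform built from its (theta, d, a, alpha) parameters. The parameter values are placeholders, not the manipulator studied in the paper.

```python
# Minimal sketch of the standard Denavit-Hartenberg (D-H) convention: each
# joint contributes one homogeneous transform. Parameter values below are
# placeholders for illustration only.
import numpy as np

def dh_transform(theta: float, d: float, a: float, alpha: float) -> np.ndarray:
    """Homogeneous transform from link i-1 to link i (standard D-H)."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(dh_rows) -> np.ndarray:
    """Chain the per-joint transforms to get the end-effector pose."""
    T = np.eye(4)
    for row in dh_rows:
        T = T @ dh_transform(*row)
    return T

# illustrative 3-link planar arm: (theta, d, a, alpha) per joint
print(forward_kinematics([(0.3, 0.0, 0.5, 0.0),
                          (0.2, 0.0, 0.4, 0.0),
                          (0.1, 0.0, 0.3, 0.0)]))
```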
Abstract: The residue number system (RNS), due to its
properties, is used in applications in which high performance
computation is needed. Its carry-free nature, which keeps the arithmetic carry-bounded within each modulus, together with its inherent parallelism, accounts for its high-speed capability. Since carries are not
propagated between the moduli in this system, the performance is
only restricted by the speed of the operations in each modulus. In this
paper a novel method of number representation using redundancy is suggested, in which {r^n − 2, r^n − 1, r^n} is the reference moduli set, where r = 2k + 1 and k = 1, 2, 3, .... This method achieves fast computations and conversions and makes the corresponding circuits much simpler.
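As background, the following Python sketch shows plain RNS encoding and Chinese Remainder Theorem reconstruction with an illustrative moduli set; it demonstrates the carry-free, per-modulus arithmetic but not the redundant representation proposed in the paper.

```python
# Generic residue number system (RNS) sketch: encode an integer as residues
# modulo a pairwise-coprime moduli set and recover it with the Chinese
# Remainder Theorem. This illustrates plain RNS only, not the paper's
# redundant representation; the moduli below are illustrative.
from math import prod

MODULI = (7, 8, 9)  # pairwise coprime, dynamic range = 7 * 8 * 9 = 504

def to_rns(x: int, moduli=MODULI):
    return tuple(x % m for m in moduli)

def from_rns(residues, moduli=MODULI) -> int:
    """CRT reconstruction: sum of residue * M_i * (M_i^-1 mod m_i)."""
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)   # modular inverse of Mi mod m
    return x % M

a, b = 123, 77
# addition is carry-free across moduli: add each residue independently
s = tuple((ra + rb) % m for ra, rb, m in zip(to_rns(a), to_rns(b), MODULI))
print(from_rns(s), "==", (a + b) % prod(MODULI))
```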
Abstract: Effective estimation of just noticeable distortion (JND) for images is helpful for increasing the efficiency of a compression algorithm in which both the statistical redundancy and the perceptual redundancy should be accurately removed. In this paper, we design a DCT-based model for estimating JND profiles of color images. Based on a mathematical model measuring the base detection threshold for each DCT coefficient in each color component, the luminance masking adjustment, the contrast masking adjustment, and the cross masking adjustment are utilized for the luminance component, while a variance-based masking adjustment, derived from the coefficient variation within the block, is proposed for the chrominance components. In order to verify the proposed model, the JND estimator is incorporated into the conventional JPEG coder to improve the compression performance. A subjective and fair viewing test is designed to evaluate the visual quality of the coded image under the specified viewing condition. The simulation results show that the JPEG coder integrated with the proposed DCT-based JND model gives better coding bit rates at visually lossless quality for a variety of color images.
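As a hedged illustration of removing perceptual redundancy in the DCT domain, the following Python sketch zeroes DCT coefficients whose magnitude falls below a JND threshold; the flat threshold is a placeholder, whereas the paper derives per-coefficient thresholds from luminance, contrast, cross and variance-based masking.

```python
# Sketch of perceptual-redundancy removal with a JND threshold in the DCT
# domain: coefficients whose magnitude stays below the threshold are zeroed
# before quantization/entropy coding. The flat threshold used here is a
# placeholder, not the paper's masking-based per-coefficient model.
import numpy as np
from scipy.fft import dctn, idctn

def jnd_filter_block(block: np.ndarray, jnd: np.ndarray) -> np.ndarray:
    """Zero sub-threshold DCT coefficients of one 8x8 block and reconstruct."""
    coeffs = dctn(block, norm="ortho")
    coeffs[np.abs(coeffs) < jnd] = 0.0          # imperceptible -> dropped
    return idctn(coeffs, norm="ortho")

rng = np.random.default_rng(0)
block = rng.uniform(0, 255, size=(8, 8))        # stand-in image block
jnd = np.full((8, 8), 8.0)                      # placeholder uniform threshold
jnd[0, 0] = 0.0                                 # never drop the DC term
out = jnd_filter_block(block, jnd)
print(f"max pixel error: {np.abs(out - block).max():.2f}")
```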
Abstract: Evolvable hardware (EHW) is a developing field that
applies evolutionary algorithms (EAs) to automatically design circuits, antennas, robot controllers, etc. A lot of research has been done in this area and several different EAs have been introduced to tackle numerous problems, such as scalability and evolvability. However, every
time a specific EA is chosen for solving a particular task, all its
components, such as population size, initialization, selection
mechanism, mutation rate, and genetic operators, should be selected
in order to achieve the best results. In the last three decades, the selection of the right parameters for the EA's components for solving different "test problems" has been investigated. In this paper the
behaviour of the mutation rate for designing logic circuits, which has not been studied before, is analyzed in depth. Mutation in an EHW system modifies the number of inputs of each logic gate, the
functionality (for example from AND to NOR) and the connectivity
between logic gates. The behaviour of the mutation has been
analyzed based on the number of generations, genotype redundancy
and the number of logic gates of the evolved circuits. The experiments show the behaviour of the mutation rate during evolution for the design and optimization of simple logic circuits and suggest the best mutation rate to be used for designing combinational logic circuits. The research presented is particularly important for those who would like to implement a dynamic mutation rate inside an evolutionary algorithm for evolving digital circuits. Research on the mutation rate during the last 40 years is also summarized.
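As a hedged illustration of the mutation operator analyzed here, the following Python sketch mutates a gate-array genotype (a CGP-like encoding is an assumption, since the paper's exact encoding is not given in the abstract), changing gate functionality and connectivity with a given mutation rate.

```python
# Minimal sketch of a gate-array mutation operator (a CGP-like genotype is an
# illustrative assumption). Each gene is [function, input_a, input_b];
# mutation rewires connectivity or swaps the gate function with probability
# `rate`.
import random

FUNCS = ["AND", "OR", "XOR", "NAND", "NOR"]
N_INPUTS = 4          # number of primary circuit inputs (illustrative)

def random_gate(index: int):
    # a gate may connect to any primary input or any earlier gate (feed-forward)
    sources = N_INPUTS + index
    return [random.choice(FUNCS), random.randrange(sources), random.randrange(sources)]

def mutate(genotype, rate: float):
    child = [gate[:] for gate in genotype]
    for i, gate in enumerate(child):
        sources = N_INPUTS + i
        if random.random() < rate:             # change functionality, e.g. AND -> NOR
            gate[0] = random.choice(FUNCS)
        for j in (1, 2):                        # change connectivity / gate inputs
            if random.random() < rate:
                gate[j] = random.randrange(sources)
    return child

random.seed(0)
parent = [random_gate(i) for i in range(6)]
print(mutate(parent, rate=0.05))
```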
Abstract: We propose an improved version of elastic graph matching (EGM) as a face detector, called multi-scale EGM (MS-EGM). In this improvement, a Gabor wavelet-based pyramid reduces the computational complexity of the feature representation often used in the conventional EGM, while preserving a critical amount of information about the image. The MS-EGM achieves higher detection performance than the Viola-Jones object detection algorithm based on an AdaBoost cascade of Haar-like features. We also show that the detection speed of the MS-EGM is comparable to that of the Viola-Jones method. We find further benefits of the MS-EGM in terms of topological feature representation of a face.
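As a hedged illustration of the Gabor wavelet-based pyramid idea, the following Python/OpenCV sketch filters each pyramid level with a small bank of oriented Gabor kernels; the filter parameters are placeholders, not the MS-EGM feature set.

```python
# Sketch of a Gabor-filter image pyramid (illustrative parameters only; the
# paper's exact MS-EGM feature construction is not specified in the abstract).
# Each pyramid level is filtered with a small bank of oriented Gabor kernels.
import cv2
import numpy as np

def gabor_bank(n_orient: int = 4, ksize: int = 21):
    # args: ksize, sigma, theta, lambda, gamma, psi
    return [cv2.getGaborKernel((ksize, ksize), 4.0, np.pi * k / n_orient, 10.0, 0.5, 0.0)
            for k in range(n_orient)]

def gabor_pyramid(gray: np.ndarray, levels: int = 3):
    bank = gabor_bank()
    pyramid = []
    img = gray.astype(np.float32)
    for _ in range(levels):
        responses = [cv2.filter2D(img, cv2.CV_32F, kern) for kern in bank]
        pyramid.append(np.stack(responses))    # orientations x H x W
        img = cv2.pyrDown(img)                 # next, coarser scale
    return pyramid

dummy = np.random.randint(0, 256, (128, 128), dtype=np.uint8)  # stand-in image
for level, feat in enumerate(gabor_pyramid(dummy)):
    print(f"level {level}: feature tensor shape {feat.shape}")
```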
Abstract: In this paper, we propose a conceptual strategy to improve robustness against manufacturing defects and thus the reliability of logic CMOS circuits. In order to enable the use of future CMOS technology nodes, this strategy combines various design techniques: DFR (Design for Reliability), fault-tolerance techniques such as hardware redundancy with TMR (Triple Modular Redundancy) for hard-error tolerance, and DFT (Design for Testability). Results on the largest ISCAS and ITC benchmark circuits show that our approach considerably improves reliability while limiting the key cost factors, area overhead and failure probability.
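As a minimal illustration of the TMR component of the strategy, the following Python sketch triplicates a combinational block and masks a single injected hard error with a majority voter; the protected logic and the injected fault are placeholders.

```python
# Minimal Triple Modular Redundancy (TMR) sketch: three copies of the same
# combinational block run on the same inputs and a majority voter masks a
# single hard error. The faulty replica below is injected for illustration.
def block(a: int, b: int) -> int:
    return a ^ b                      # the protected logic (example: XOR)

def faulty_block(a: int, b: int) -> int:
    return 0                          # replica with an injected stuck-at-0 fault

def majority(x: int, y: int, z: int) -> int:
    return (x & y) | (y & z) | (x & z)

def tmr(a: int, b: int) -> int:
    return majority(block(a, b), faulty_block(a, b), block(a, b))

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    assert tmr(a, b) == block(a, b)   # the single fault is masked by voting
print("TMR masks the injected single fault on all input combinations")
```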
Abstract: Pipeline ADCs are becoming popular at high speeds
and with high resolution. This paper discusses the options for the number of bits resolved per stage in pipelined ADCs and their effect on area, speed, power dissipation and linearity. The basic building blocks, such as the op-amp, sample-and-hold circuit, sub-converter, DAC and residue amplifier used in every stage, are assumed to be identical. The sub-converters use flash architectures. The design is implemented using 0.18 µm technology.
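As a hedged behavioural illustration of a pipelined ADC stage (sub-ADC, DAC and residue amplifier), the following Python sketch chains several identical stages and reconstructs the input digitally; the bits per stage, reference voltage and stage count are illustrative parameters, not the reported design.

```python
# Behavioural sketch of a pipelined ADC with B bits resolved per stage
# (sub-ADC + DAC + residue amplifier); B, the reference voltage and the
# number of stages are illustrative parameters only.
B = 2          # bits resolved per stage
VREF = 1.0     # full-scale input range is [-VREF, +VREF]
GAIN = 2 ** B  # inter-stage residue gain

def sub_adc(v: float) -> int:
    """Flash sub-converter: quantize v into one of 2**B codes."""
    code = int((v + VREF) / (2 * VREF) * GAIN)
    return min(max(code, 0), GAIN - 1)

def dac(code: int) -> float:
    """Mid-rise DAC level for the resolved code."""
    return -VREF + (code + 0.5) * (2 * VREF) / GAIN

def stage(v: float):
    """One pipeline stage: digital code plus amplified residue for the next stage."""
    code = sub_adc(v)
    return code, GAIN * (v - dac(code))

def convert(vin: float, n_stages: int = 5):
    codes, v = [], vin
    for _ in range(n_stages):
        code, v = stage(v)
        codes.append(code)
    # digital reconstruction: each stage contributes dac(code) scaled by its weight
    estimate = sum(dac(c) / GAIN ** i for i, c in enumerate(codes))
    return codes, estimate

codes, est = convert(0.3217)
print(codes, f"reconstructed ~ {est:.4f} V (error < {VREF / GAIN**5:.1e} V)")
```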
Abstract: The main objective of this paper is to develop a graphical technique for the modeling, simulation and diagnosis of industrial systems. This is especially important for a complex system such as a pressurized water nuclear reactor, which exhibits various non-linearities and time scales. In this case the analytical approach is cumbersome and does not give a quick picture of the evolution of the system. The Bond Graph tool enabled us to transform the analytical model into a graphical model, and the SYMBOLS 2000 simulation software, dedicated to Bond Graphs, made it possible to validate and obtain the results given by the technical specifications. We introduce the analysis of the problems involved in fault localization and identification in complex industrial processes, propose a fault detection method applied to diagnosis and to determining the severity of a detected fault, and show how the new diagnosis approaches can be applied to the control of complex systems. Industrial systems have become increasingly complex, and fault diagnosis procedures for physical systems become very involved as soon as the systems considered are no longer elementary. Faced with this complexity, we chose to resort to a Fault Detection and Isolation (FDI) method, analyzed the associated control problem, and designed a reliable diagnosis system capable of handling spatially distributed complex dynamic systems, applied to a standard pressurized water nuclear reactor.
Abstract: Bond Graph as a unified multidisciplinary tool is widely
used not only for dynamic modelling but also for Fault Detection and
Isolation, because of its structural and causal properties. A binary Fault Signature Matrix is systematically generated, but making the final binary decision is not always feasible because of the problems inherent in such a method. The purpose of this paper is to introduce a methodology for improving the classical binary decision-making method, so that unknown and identical failure signatures can be handled and robustness improved. This approach consists of
associating the evaluated residuals with the components' reliability data
to build a Hybrid Bayesian Network. This network is used in two
distinct inference procedures: one for the continuous part and the
other for the discrete part. The continuous nodes of the network are
the prior probabilities of component failures, which are used by
the inference procedure on the discrete part to compute the posterior
probabilities of the failures. The developed methodology is applied
to a real steam generator pilot process.
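As a simplified, discrete stand-in for the hybrid Bayesian network (an assumption for illustration only), the following Python sketch combines component failure priors from reliability data with the likelihood of an observed residual pattern, derived from a softened fault signature matrix, to obtain posterior failure probabilities; all numbers are illustrative.

```python
# Simplified discrete stand-in for the Hybrid Bayesian Network: component
# failure priors (from reliability data) are combined with the likelihood of
# the observed residual pattern, derived from a softened fault signature
# matrix, to obtain posterior failure probabilities. All values below are
# illustrative assumptions; a single fault is assumed to have occurred.
SIGNATURES = {            # fault signature matrix: expected residual pattern
    "pump":   (1, 1, 0),
    "valve":  (1, 0, 1),
    "sensor": (0, 1, 1),
}
PRIOR = {"pump": 0.02, "valve": 0.01, "sensor": 0.05}   # reliability data
P_MATCH = 0.95            # probability a residual agrees with its signature bit

def posterior(observed):
    """P(component fault | observed residual pattern), normalized over faults."""
    scores = {}
    for comp, sig in SIGNATURES.items():
        like = 1.0
        for bit, obs in zip(sig, observed):
            like *= P_MATCH if bit == obs else 1.0 - P_MATCH
        scores[comp] = PRIOR[comp] * like
    z = sum(scores.values())
    return {c: s / z for c, s in scores.items()}

print(posterior(observed=(0, 1, 1)))   # residual pattern pointing at the sensor
```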
Abstract: Stream Control Transmission Protocol (SCTP) has been
proposed to provide reliable transport of real-time communications.
Due to its attractive features, such as multi-streaming and multihoming,
the SCTP is often expected to be an alternative protocol
to TCP and UDP. In the original SCTP standard, the secondary path is mainly regarded as a redundant backup. Recently, most research has focused on extending the SCTP to enable a host to send its
packets to a destination over multiple paths simultaneously. In order
to transfer packets concurrently over the multiple paths, the SCTP
should be well designed to avoid unnecessary fast retransmission
and the mis-estimation of congestion window size through the paths.
Therefore, we propose an Enhanced Cooperative ACK SCTP (ECA-SCTP) to improve the path recovery efficiency of a multi-homed host operating in concurrent multipath transfer mode. We evaluated the
performance of our proposed scheme using ns-2 simulation in terms
of cwnd variation, path recovery time, and goodput. Our scheme
provides better performance in lossy and path-asymmetric networks.
Abstract: Developing an accurate classifier for high dimensional microarray datasets is a challenging task due to the small sample size available. Therefore, it is important to determine a set of relevant genes that classify the data well. Traditionally, gene selection methods select the top-ranked genes according to their discriminatory power. Often these genes are correlated with each other, resulting in redundancy. In this paper, we propose a hybrid method using feature ranking and a wrapper method (a Genetic Algorithm with a multiclass SVM) to identify a set of relevant genes that classify the data more accurately. A new fitness function for the genetic algorithm is defined that focuses on selecting the smallest set of genes that provides maximum accuracy. Experiments have been carried out on four well-known datasets. The proposed method provides better results, in terms of both classification accuracy and the number of genes selected, than the results found in the literature.
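As a hedged sketch of the wrapper idea, the following Python code encodes a gene subset as a binary mask and evolves it with a small genetic algorithm whose fitness rewards cross-validated multiclass SVM accuracy while penalizing the number of selected genes; the weighting, GA settings and stand-in data are illustrative assumptions, not the paper's exact fitness definition.

```python
# Hedged sketch of a GA + multiclass SVM wrapper for gene selection. The
# fitness weighting, GA parameters and random stand-in data are illustrative
# assumptions only.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 200))                  # stand-in microarray data
y = rng.integers(0, 3, size=60)                 # three classes

def fitness(mask: np.ndarray) -> float:
    if mask.sum() == 0:
        return 0.0
    acc = cross_val_score(SVC(kernel="linear"), X[:, mask.astype(bool)], y, cv=3).mean()
    return acc - 0.001 * mask.sum()             # prefer the smallest accurate gene set

def ga_select(pop_size=20, gens=10, p_mut=0.02):
    pop = (rng.random((pop_size, X.shape[1])) < 0.05).astype(int)
    for _ in range(gens):
        scores = np.array([fitness(ind) for ind in pop])
        parents = pop[np.argsort(scores)[-pop_size // 2:]]       # truncation selection
        n_child = pop_size - len(parents)
        # uniform crossover between random parent pairs, then bit-flip mutation
        idx_a = rng.integers(0, len(parents), n_child)
        idx_b = rng.integers(0, len(parents), n_child)
        cross = rng.random((n_child, X.shape[1])) < 0.5
        children = np.where(cross, parents[idx_a], parents[idx_b])
        flip = rng.random(children.shape) < p_mut
        children = np.where(flip, 1 - children, children)
        pop = np.vstack([parents, children])
    return pop[np.argmax([fitness(ind) for ind in pop])]

best = ga_select()
print("genes selected:", int(best.sum()))
```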
Abstract: There is a lot of work on predicting the fault proneness of software systems. However, the severity of the faults matters more than the number of faults in the developed system, since the major faults matter most to a developer and need immediate attention. In this paper, we try to predict the level of impact of the existing faults in software systems. A Neuro-Fuzzy based predictor model is applied to NASA's public domain defect dataset, coded in the C programming language. Correlation-based Feature Selection (CFS) evaluates the worth of a subset of attributes by considering the individual predictive ability of each feature along with the degree of redundancy between them, so CFS is used to select the metrics that are most highly correlated with the level of severity of faults. The results are compared with the prediction results of Logistic Model Trees (LMT), earlier quoted as the best technique in [17]. The results are recorded in terms of accuracy, Mean Absolute Error (MAE) and Root Mean Squared Error (RMSE). They show that the Neuro-Fuzzy based model provides relatively better prediction accuracy than the other models and hence can be used for modeling the level of impact of faults in function-based systems.
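For reference, the following Python sketch computes the standard CFS merit of a metric subset, merit = k*r_cf / sqrt(k + k*(k-1)*r_ff), where r_cf is the mean feature-class correlation and r_ff the mean feature-feature correlation; Pearson correlations are used as a simplification, and the data are random stand-ins for the defect metrics.

```python
# Sketch of the Correlation-based Feature Selection (CFS) merit used to score
# a metric subset: merit = k * r_cf / sqrt(k + k*(k-1) * r_ff). Pearson
# correlation is a simplification, and the data below are random stand-ins.
import numpy as np

def cfs_merit(X: np.ndarray, y: np.ndarray, subset) -> float:
    k = len(subset)
    r_cf = np.mean([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in subset])
    if k == 1:
        return r_cf
    r_ff = np.mean([abs(np.corrcoef(X[:, a], X[:, b])[0, 1])
                    for i, a in enumerate(subset) for b in subset[i + 1:]])
    return k * r_cf / np.sqrt(k + k * (k - 1) * r_ff)

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 6))                      # six candidate software metrics
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=100) > 0).astype(float)  # severity label
print(cfs_merit(X, y, [0, 1]), cfs_merit(X, y, [3, 4]))  # relevant vs. irrelevant pair
```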
Abstract: This paper focuses on testing the database of an existing information system. At the beginning, we describe the basic problems
of implemented databases, such as data redundancy, poor design of
database logical structure or inappropriate data types in columns of
database tables. These problems are often the result of incorrect
understanding of the primary requirements for a database of an
information system. Then we propose an algorithm to compare the
conceptual model created from vague requirements for a database
with a conceptual model reconstructed from implemented database.
An algorithm also suggests steps leading to optimization of
implemented database. The proposed algorithm is verified by an
implemented prototype. The paper also describes a fuzzy system
which works with the vague requirements for a database of an
information system, procedure for creating conceptual from vague
requirements and an algorithm for reconstructing a conceptual model
from implemented database.