Abstract: Software reliability prediction offers a way to measure the software failure rate at any point during system test. A software reliability prediction model provides a technique for improving reliability. Software reliability is a very important factor in estimating overall system reliability, which depends on the individual component reliabilities. It differs from hardware reliability in that it reflects design perfection. The main cause of software reliability problems is the high complexity of software. Various approaches can be used to improve the reliability of software. In this article we focus on a software reliability model, assuming that there is time redundancy, whose value (the number of repeated transmissions of basic blocks) can serve as an optimization parameter. We consider the given mathematical model under the assumption that the system is subject not only to irreversible failures but also to failures that can be treated as self-repairing, which significantly affect the reliability and accuracy of information transfer. The main task of this paper is to find the time distribution function (DF) of the transmission of an instruction sequence consisting of a random number of basic blocks. We consider the system software unreliable; the time between adjacent failures has an exponential distribution.
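The time-redundancy model outlined in this abstract can be illustrated with a small Monte Carlo simulation. This is a hedged sketch under assumed parameters (block transmission time `t_b`, exponential failure rate `lam`, ten blocks per sequence), not the paper's actual derivation of the DF:

```python
import random

def block_time(t_b, lam, rng):
    """Time to successfully transmit one basic block when failures arrive
    with exponential inter-arrival times (rate lam) and each self-repairing
    failure forces a retransmission of the whole block."""
    total = 0.0
    while True:
        failure_at = rng.expovariate(lam)
        if failure_at >= t_b:          # block finishes before the next failure
            return total + t_b
        total += failure_at            # time lost before the failure occurred

def sequence_time(n_blocks, t_b, lam, rng):
    """Total transmission time for a sequence of basic blocks."""
    return sum(block_time(t_b, lam, rng) for _ in range(n_blocks))

def empirical_df(samples, t):
    """Empirical distribution function F(t) = P(T <= t)."""
    return sum(s <= t for s in samples) / len(samples)

rng = random.Random(1)
samples = [sequence_time(n_blocks=10, t_b=1.0, lam=0.1, rng=rng)
           for _ in range(5000)]
```

The empirical DF estimated from `samples` is non-decreasing and every sample is at least `n_blocks * t_b`, since retransmissions can only add time.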
Abstract: This paper proposes a method for modeling the control laws of manufacturing systems with temporal and non-temporal constraints. A methodology for constructing robust control that generates margins of passive and active robustness is elaborated. Two principal models are presented. The first uses P-time Petri Nets to manage flow-type disturbances. The second, the quality model, exploits the Intervals Constrained Petri Nets (ICPN) tool, which allows the system to preserve its quality specifications. The redundancy between the passive and active robustness of the elementary parameters is also exploited. The final model correlates temporal and non-temporal criteria by putting the two principal models in interaction. To this end, a set of definitions and theorems is stated and illustrated by application examples.
Abstract: Organizing video databases is becoming a difficult task as the amount of video content increases. Video classification based on content can significantly speed up tasks such as browsing and searching for a particular video in a database. In this paper, a content-based video classification system for the classes indoor and outdoor is presented. The system is intended to be used on a mobile platform with modest resources. The algorithm exploits the temporal redundancy in videos, which allows an uncomplicated classification model to be used while still achieving reasonable accuracy. Training and evaluation were done on a database of 443 videos downloaded from a video sharing service. A total accuracy of 87.36% was achieved.
Abstract: This paper studies the dependability of component-based applications, especially embedded ones, from the diagnosis point of view. The principle of the diagnosis technique is to implement inter-component tests in order to detect and locate faulty components without redundancy. The proposed approach for diagnosing faulty components consists of two main aspects. The first concerns the execution of the inter-component tests, which requires integrating test functionality within a component; this is the subject of this paper. The second is the diagnosis process itself, which consists of analyzing inter-component test results to determine the fault state of the whole system. The advantages of this diagnosis method over classical redundancy-based fault-tolerant techniques are application autonomy, cost-effectiveness and better usage of system resources. These advantages are very important for many systems, especially embedded ones.
Abstract: A novel algorithm is presented for constructing a seamless video mosaic of an entire panorama by automatically analyzing and managing feature points, including management of their quantity and quality, across the sequence. Since a video contains significant redundancy, not all consecutive video frames are required to create a mosaic; only some key frames need to be selected. Meanwhile, feature-based mosaicing methods rely heavily on correct feature point correspondences, and if the key frames have a large inter-frame interval, the mosaic is often interrupted by the scarcity of corresponding feature points. A unique characteristic of the method is its ability to handle all of these problems in video mosaicing. Experiments have been performed under various conditions, and the results show that our method achieves fast and accurate video mosaic construction. Keywords: video mosaic, feature points management, homography estimation.
Abstract: Evolvable hardware (EHW) refers to a self-reconfiguring hardware design in which the configuration is under the control of an evolutionary algorithm (EA). A lot of research has been done in this area, and several different EAs have been introduced. Every time a specific EA is chosen for solving a particular problem, all its components, such as population size, initialization, selection mechanism, mutation rate, and genetic operators, should be selected in order to achieve the best results. In the last three decades, much research has been carried out to identify the best parameters for the EA components on different "test problems", but different researchers propose different solutions. In this paper the behaviour of the mutation rate in a (1+λ) evolution strategy (ES) for designing logic circuits, which has not been studied before, is deeply analyzed. The mutation rate for an EHW system modifies the values of the logic cell inputs, the cell type (for example from AND to NOR) and the circuit output. The behaviour of the mutation has been analyzed based on the number of generations, genotype redundancy and the number of logic gates used in the evolved circuits. The experimental results indicate the mutation rate to be used during evolution for the design and optimization of logic circuits. Research on the best mutation rate during the last 40 years is also summarized.
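The (1+λ) ES with a per-gene mutation rate can be sketched as follows; the string genotype and matching-based fitness are toy stand-ins for a circuit genotype and truth-table fitness, not the encoding used in the paper:

```python
import random

def mutate(genome, rate, rng, symbols):
    """Replace each gene with a random symbol with probability `rate`."""
    return [rng.choice(symbols) if rng.random() < rate else g for g in genome]

def one_plus_lambda(target, lam=4, rate=0.05, max_gens=10000, seed=0):
    """(1+λ) ES: keep one parent, create λ mutants per generation, and
    promote the best mutant if it is at least as fit (neutral drift)."""
    rng = random.Random(seed)
    symbols = sorted(set(target))
    parent = [rng.choice(symbols) for _ in target]
    fitness = lambda g: sum(a == b for a, b in zip(g, target))
    for gen in range(max_gens):
        if fitness(parent) == len(target):
            return parent, gen
        children = [mutate(parent, rate, rng, symbols) for _ in range(lam)]
        best = max(children, key=fitness)
        if fitness(best) >= fitness(parent):   # accept equal fitness too
            parent = best
    return parent, max_gens

target = list("AND,NOR,XOR,OR")   # toy stand-in for a cell-type genotype
solution, gens = one_plus_lambda(target)
```

Accepting children of equal fitness lets the search drift across genotype redundancy (neutral networks), which is one of the quantities the paper analyzes.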
Abstract: Structural redundancy is an important issue in the seismic design of structures. Initially, structural redundancy was described as the degree of static indeterminacy of a system. Although many definitions of redundancy in structures have been presented, the definition has recently been related to the configuration of the structural system and the number of lateral load transferring directions in the structure. Steel frames with infill walls are common systems in the construction of ordinary residential buildings in some countries, and it is well established that the performance of structures is affected by adding masonry infill walls. In order to investigate the effect of infill walls on the redundancy of steel frames constructed with masonry walls, the components of redundancy, including the redundancy variation index, the redundancy strength index and the redundancy response modification factor, were extracted for frames with masonry infills. Several steel frames with a typical number of storeys and various numbers of bays were designed and considered. The redundancy of frames with and without infill walls was evaluated by the proposed method. The results showed that the presence of infill increases redundancy.
Abstract: A sensor network consists of densely deployed sensor nodes. Energy optimization is one of the most important aspects of sensor application design, and data acquisition and aggregation techniques for in-network processing should be energy efficient. Due to the cross-layer design and the resource-limited and noisy nature of Wireless Sensor Networks (WSNs), it is challenging to study the performance of these systems in a realistic setting. In this paper, we propose optimizing queries by aggregating data and exploiting data redundancy to reduce energy consumption without requiring all sensed data, and by using the directed diffusion communication paradigm to achieve power savings, robust communication and in-network processing. To estimate per-node power consumption, the PowerTOSSIM mica2 energy model is used, which provides scalable and accurate results. The performance analysis shows that the proposed methods outperform existing methods in terms of energy consumption in wireless sensor networks.
Abstract: In the MPEG and H.26x standards, motion estimation is used to eliminate temporal redundancy. Given that the motion estimation stage is very demanding in terms of computational effort, a hardware implementation on a reconfigurable circuit is crucial for the requirements of real-time multimedia applications. In this paper, we present a hardware architecture for motion estimation based on the Full Search Block Matching (FSBM) algorithm. The architecture achieves minimum latency, maximum throughput and full utilization of hardware resources such as embedded memory blocks, combining both pipelining and parallel processing techniques. Our design is described in VHDL, verified by simulation and implemented on a Stratix II EP2S130F1020C4 FPGA. Experimental results show that the optimum operating clock frequency of the proposed design is 89 MHz, which achieves a throughput of 160M pixels/sec.
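A minimal software sketch of the FSBM algorithm underlying such architectures, assuming the sum of absolute differences (SAD) as the matching cost and a search range of ±p pixels (the paper describes a hardware design, not this code):

```python
def sad(cur, ref, bx, by, dx, dy, n):
    """Sum of absolute differences between the n x n block of the current
    frame at (bx, by) and the reference block displaced by (dx, dy)."""
    return sum(abs(cur[by + j][bx + i] - ref[by + dy + j][bx + dx + i])
               for j in range(n) for i in range(n))

def full_search(cur, ref, bx, by, n=4, p=2):
    """Exhaustive (full) search over all displacements in [-p, p]^2;
    returns the motion vector minimizing the SAD and its cost."""
    h, w = len(ref), len(ref[0])
    best, best_mv = None, (0, 0)
    for dy in range(-p, p + 1):
        for dx in range(-p, p + 1):
            # candidate block must stay inside the reference frame
            if not (0 <= bx + dx and bx + dx + n <= w and
                    0 <= by + dy and by + dy + n <= h):
                continue
            cost = sad(cur, ref, bx, by, dx, dy, n)
            if best is None or cost < best:
                best, best_mv = cost, (dx, dy)
    return best_mv, best

# Reference frame with a bright 4x4 patch at (4, 4); in the current frame
# the same patch appears at (5, 6), i.e. it moved by (+1, +2).
ref = [[0] * 16 for _ in range(16)]
cur = [[0] * 16 for _ in range(16)]
for j in range(4):
    for i in range(4):
        ref[4 + j][4 + i] = 200
        cur[6 + j][5 + i] = 200
mv, cost = full_search(cur, ref, bx=5, by=6, n=4, p=2)
# mv is (-1, -2): the displacement from the current block back to its
# matching position in the reference frame, with a SAD of 0.
```

The exhaustive scan over all (2p+1)² candidates is exactly what makes FSBM regular and hence attractive for the pipelined, parallel hardware described in the abstract.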
Abstract: Scheduling algorithms are used in operating systems to optimize processor usage. One of the most efficient scheduling algorithms is the Multi-Level Feedback Queue (MLFQ) algorithm, which uses several queues with different quanta. Its most important weakness is the inability to determine the optimal number of queues and the quantum of each queue; this weakness is addressed by the IMLFQ scheduling algorithm. The number of queues and the quantum of each queue directly affect the response time. In this paper, we review the IMLFQ algorithm for solving these problems and minimizing the response time. In this algorithm, a recurrent neural network is utilized to find both the number of queues and the optimized quantum of each queue. In addition, to prevent probable faults in the computation of process response times, a new fault-tolerant approach based on combinational software redundancy is presented. The experimental results show that the IMLFQ algorithm yields better response times than other scheduling algorithms, and that the fault-tolerant mechanism further improves IMLFQ performance.
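The basic MLFQ mechanism that IMLFQ builds on can be sketched as a simple simulation; the queue count, quanta and zero-arrival-time assumption below are illustrative, not the paper's configuration:

```python
from collections import deque

def mlfq(bursts, quanta):
    """Simulate a multi-level feedback queue: a job that exhausts the
    quantum of its current level is demoted to the next (lower-priority)
    level; the lowest level keeps serving jobs round-robin with its own
    quantum. All jobs arrive at time 0. Returns per-job completion times."""
    queues = [deque() for _ in quanta]
    remaining = list(bursts)
    for job in range(len(bursts)):
        queues[0].append(job)          # every job starts at the top level
    time, done = 0, {}
    while any(queues):
        level = next(i for i, q in enumerate(queues) if q)
        job = queues[level].popleft()
        run = min(quanta[level], remaining[job])
        time += run
        remaining[job] -= run
        if remaining[job] == 0:
            done[job] = time
        else:
            # demote, or stay at the lowest level
            queues[min(level + 1, len(quanta) - 1)].append(job)
    return done

# Two short jobs and one long job; the short jobs finish first because
# the long job is demoted after exhausting the first-level quantum.
completion = mlfq(bursts=[3, 3, 12], quanta=[4, 8])
# completion == {0: 3, 1: 6, 2: 18}
```

Changing `quanta` directly changes the completion times, which is why choosing the number of queues and their quanta (the problem IMLFQ tackles) matters for response time.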
Abstract: This paper presents a vocoder that obtains high-quality synthetic speech at 600 bps. To reduce the bit rate, the algorithm is based on a sinusoidally excited linear prediction model that extracts a few coding parameters; three consecutive frames are grouped into a superframe and jointly vector quantized to obtain high coding efficiency. The inter-frame redundancy is exploited with distinct quantization schemes for the different unvoiced/voiced frame combinations in the superframe. Experimental results show that the quality of the proposed coder is better than that of 2.4 kbps LPC10e and approximately the same as that of 2.4 kbps MELP, with high robustness.
Abstract: Most routing protocols (DSR, AODV, etc.) designed for wireless ad hoc networks incorporate a broadcasting operation in their route discovery scheme. Probabilistic broadcasting techniques have been developed to optimize the broadcast operation, which is otherwise very expensive in terms of the redundancy and traffic it generates. In this paper we explore percolation theory to gain a different perspective on probabilistic broadcasting schemes, which have been actively researched in recent years. This theory has helped us estimate the broadcast probability in a wireless ad hoc network as a function of the size of the network. We also show that operating at these optimal values of broadcast probability yields at least a 25-30% reduction in packet regeneration during successful broadcasting.
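Probabilistic broadcasting can be sketched as follows; the ring topology, seed and rebroadcast probability are illustrative assumptions, not the percolation-derived values from the paper:

```python
import random
from collections import deque

def broadcast(adj, source, p, rng):
    """Probabilistic flooding: every node that receives the packet for the
    first time rebroadcasts it with probability p (the source always
    transmits). Returns the set of reached nodes and the transmission count."""
    reached, transmissions = {source}, 0
    queue = deque([source])
    while queue:
        node = queue.popleft()
        if node != source and rng.random() > p:
            continue                   # this node decides not to rebroadcast
        transmissions += 1
        for nb in adj[node]:
            if nb not in reached:
                reached.add(nb)
                queue.append(nb)
    return reached, transmissions

# Dense toy topology: 30 nodes on a ring, each linked to 3 neighbours on
# each side, so there is plenty of broadcast redundancy.
n = 30
adj = {i: [(i + d) % n for d in (-3, -2, -1, 1, 2, 3)] for i in range(n)}
rng = random.Random(7)
_, blind = broadcast(adj, 0, p=1.0, rng=rng)       # blind flooding
reached, prob = broadcast(adj, 0, p=0.6, rng=rng)  # probabilistic variant
```

With `p = 1.0` every node retransmits once (30 packet regenerations); lowering `p` cuts the transmission count, while a dense enough topology keeps coverage high, which is the trade-off that percolation theory makes precise.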
Abstract: Time-varying network-induced delays in networked control systems (NCS) are known to degrade the control system's quality of performance (QoP) and cause stability problems. In the literature, a control method that models communication delays as a probability distribution has proved to be a better approach, and this paper therefore focuses on modeling network-induced delays as probability distributions. CAN and MIL-STD-1553B are extensively used to carry periodic control and monitoring data in networked control systems, but the literature provides methods to estimate only the worst-case delays for these networks. In this paper, probabilistic network delay models for CAN and MIL-STD-1553B networks are given, together with a systematic method to estimate the model parameter values from network parameters. A method to predict the network delay in the next cycle based on the present network delay is presented. The effect of active network redundancy, and of redundancy at the node level, on network delay and system response time is also analyzed.
Abstract: Global concerns over energy security have steadily increased in recent years and are expected to become a major issue over the next few decades. Energy security refers to a resilient energy system, one capable of withstanding threats through a combination of active, direct security measures and passive or more indirect measures such as redundancy, duplication of critical equipment, diversity in fuel and other sources of energy, and reliance on less vulnerable infrastructure. Threats and disruptions (disturbances) to one part of the energy system affect other parts. The paper presents a methodology, with its theoretical background, that treats the energy system as an interconnected network and analyzes the impact of energy supply disturbances on the network. The proposed methodology uses a network flow approach to develop a mathematical model of the energy system as a network of nodes and arcs with energy flowing from node to node along paths in the network.
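A network flow model of this kind can be sketched with a standard Edmonds-Karp maximum-flow computation; the toy energy network below is an assumed example, not data from the paper:

```python
from collections import defaultdict, deque

def max_flow(edges, source, sink):
    """Edmonds-Karp maximum flow: `edges` is a list of (u, v, capacity)
    arcs. Repeatedly find a shortest augmenting path in the residual
    network (BFS) and push the bottleneck flow along it."""
    residual = defaultdict(int)
    graph = defaultdict(set)
    for u, v, c in edges:
        residual[(u, v)] += c
        graph[u].add(v)
        graph[v].add(u)                # reverse arcs for flow cancellation
    total = 0
    while True:
        parent = {source: None}
        queue = deque([source])
        while queue and sink not in parent:
            u = queue.popleft()
            for v in graph[u]:
                if v not in parent and residual[(u, v)] > 0:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:
            return total               # no augmenting path left
        path, v = [], sink
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        push = min(residual[e] for e in path)
        for u, v in path:
            residual[(u, v)] -= push
            residual[(v, u)] += push
        total += push

# Toy energy network: supply node S feeds demand node D over two
# parallel paths with a cross arc providing redundancy.
edges = [("S", "A", 10), ("S", "B", 5), ("A", "D", 8),
         ("B", "D", 5), ("A", "B", 4)]
flow = max_flow(edges, "S", "D")
# flow == 13: limited by the 8 + 5 capacity of the arcs entering D
```

Removing an arc and recomputing the maximum flow gives a simple measure of how a disturbance to one part of the network affects the deliverable energy, in the spirit of the methodology described above.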
Abstract: In this paper, we propose a hardware and software design method for automotive Electronic Control Units (ECU) that takes functional safety into account. The proposed ECU is intended for application to Electro-Mechanical Actuator systems, and the validity of the design method is demonstrated by applying it to the Electro-Mechanical Brake (EMB) control system, which is used as a brake actuator in Brake-By-Wire (BBW) systems. The importance of a
functional safety-based design approach to EMB ECU design has been
emphasized because of its safety-critical functions, which are executed
with the aid of many electric actuators, sensors, and application
software. Based on hazard analysis and risk assessment according to
ISO26262, the EMB system should be ASIL-D-compliant, the highest
ASIL level. To this end, an external signature watchdog and an
Infineon 32-bit microcontroller TriCore are used to reduce risks
considering common-cause hardware failure. Moreover, a software
design method is introduced for implementing functional
safety-oriented monitoring functions based on an asymmetric dual
core architecture considering redundancy and diversity. The validity
of the proposed ECU design approach is verified by using the EMB
Hardware-In-the-Loop (HILS) system, which consists of the EMB
assembly, actuator ECU, a host PC, and a few debugging devices.
Furthermore, it is shown that the existing sensor fault tolerant control
system can be used more effectively for mitigating the effects of
hardware and software faults by applying the proposed ECU design
method.
Abstract: Increasing the detection rate and reducing the false positive rate are important problems in Intrusion Detection Systems (IDS). Although preventative techniques such as access control and authentication attempt to keep intruders out, they can fail, and intrusion detection has been introduced as a second line of defence. Rare events are events that occur very infrequently, and their detection is a common problem in many domains. In this paper we propose an intrusion detection method that combines rough sets and fuzzy clustering. Rough set theory is used to reduce the amount of data and eliminate redundancy, while fuzzy c-means clustering allows objects to belong to several clusters simultaneously, with different degrees of membership. Our approach recognizes not only known attacks but also suspicious activity that may be the result of a new, unknown attack. Experimental results on the Knowledge Discovery and Data Mining (KDD Cup 1999) dataset show that the method is efficient and practical for intrusion detection systems.
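Standard fuzzy c-means, the clustering component of the proposed method, can be sketched in one dimension; the data and parameter values (fuzzifier m = 2, two clusters, range-spread initialization) are illustrative assumptions:

```python
def fcm(points, c=2, m=2.0, iters=50):
    """Fuzzy c-means on 1-D data: each point gets a degree of membership
    in every cluster (rows of u sum to 1), unlike hard k-means."""
    lo, hi = min(points), max(points)
    centers = [lo + i * (hi - lo) / (c - 1) for i in range(c)]
    u = []
    for _ in range(iters):
        # membership update: u_ij = 1 / sum_k (d_ij / d_ik)^(2/(m-1))
        u = []
        for x in points:
            d = [abs(x - cj) or 1e-12 for cj in centers]  # avoid div by 0
            u.append([1.0 / sum((d[j] / d[k]) ** (2.0 / (m - 1.0))
                                for k in range(c))
                      for j in range(c)])
        # center update: mean of the points weighted by u_ij^m
        centers = [sum(u[i][j] ** m * x for i, x in enumerate(points)) /
                   sum(u[i][j] ** m for i in range(len(points)))
                   for j in range(c)]
    return centers, u

# Two well-separated 1-D clusters
data = [0.9, 1.0, 1.1, 9.9, 10.0, 10.1]
centers, memberships = fcm(data, c=2)
```

The graded memberships are what let borderline network events belong partially to a "normal" and an "attack" cluster, supporting the detection of suspicious activity that is not a known attack signature.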
Abstract: The paper presents the design concept of a unit-selection text-to-speech synthesis system for the Slovenian language. Due to its modular and upgradable architecture, the system can be used in a variety of speech user interface applications, ranging from server carrier-grade voice portal applications and desktop user interfaces to specialized embedded devices. Since memory and processing power requirements are important factors for a possible implementation in embedded devices, lexica and speech corpora need to be reduced. We describe a simple and efficient implementation of a greedy subset selection algorithm that extracts a compact subset of text sentences with high coverage. An experiment on a reference text corpus showed that the subset selection algorithm produced a compact sentence subset with little redundancy. The adequacy of the spoken output was evaluated by several subjective tests, as recommended by the International Telecommunication Union (ITU).
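A greedy subset selection of the kind described can be sketched as greedy set cover; here character bigrams stand in for the phonetic units actually covered in the paper:

```python
def greedy_subset(sentences, units):
    """Greedy set cover: repeatedly pick the sentence covering the most
    not-yet-covered units, stopping when no sentence adds new coverage."""
    covered, chosen = set(), []
    pool = {s: units(s) for s in sentences}
    while True:
        best = max(pool, key=lambda s: len(pool[s] - covered), default=None)
        if best is None or not (pool[best] - covered):
            return chosen
        chosen.append(best)
        covered |= pool[best]
        del pool[best]

def bigrams(s):
    """Character bigrams: a toy stand-in for diphones or other units."""
    return {s[i:i + 2] for i in range(len(s) - 1)}

corpus = ["abcd", "cdef", "abef", "bc"]
subset = greedy_subset(corpus, bigrams)
# subset == ["abcd", "cdef", "abef"]: "bc" adds no new bigram and is dropped
```

Sentences whose units are already covered are discarded, which is exactly how the selection yields a compact subset with little redundancy.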
Abstract: Memory forensics is important in digital investigation. Forensics is based on the data stored in physical memory, which involves memory management and processing time. However, current forensic tools do not consider efficiency in terms of storage management and processing time. This paper shows that the high redundancy of data found in physical memory causes inefficiency in processing time and memory management. The experiment was done using the Borland C compiler on Windows XP with 512 MB of physical memory.
Abstract: The purpose of this study was to investigate the effects of the modality and redundancy principles on music theory learning among pupils of different anxiety levels. The music theory lesson was developed in three different modes: audio and image (AI), text with image (TI), and audio with image and text (AIT). The independent variables were the three modes of courseware, the moderator variable was the anxiety level, and the dependent variable was the post-test score. The study sample consisted of 405 third-grade pupils. Descriptive and inferential statistics were conducted to analyze the collected data. Analysis of covariance (ANCOVA) and post hoc tests were carried out to examine the main effects as well as the interaction effects of the independent variables on the dependent variable. The findings showed that medium-anxiety pupils performed significantly better than low- and high-anxiety pupils in all three treatment modes. The AI mode was found to help pupils with high anxiety significantly more than the TI and AIT modes.
Abstract: This paper describes a UDP-over-IP-based, server-oriented redundant host configuration protocol (RHCP) that can be used by collaborating embedded systems in an ad hoc network to acquire a dynamic IP address. The service is provided by a single network device at a time and is dynamically reassigned to one of the other network clients if the primary provider fails. The protocol also allows all participating clients to monitor the dynamic makeup of the network over time. So far, the algorithm has been implemented and tested on an 8-bit embedded system architecture with a 10 Mbit Ethernet interface.