Abstract: With the rapid growth of digital video and data
communications, video summarization, which provides a shorter version
of a video for fast browsing and retrieval, has become necessary.
Key frame extraction is one of the mechanisms to generate video
summary. In general, the extracted key frames should both represent
the entire video content and contain minimum redundancy. However,
most of the existing approaches heuristically select key frames; hence,
the selected key frames may not be sufficiently distinct and/or
may not cover the entire content of a video. In this paper, we propose
a method of video summarization which provides reasonable
objective functions for selecting key frames. In particular, we apply
a statistical dependency measure called quadratic mutual information
as our objective functions for maximizing the coverage of the
entire video content as well as minimizing the redundancy among
selected key frames. The proposed key frame extraction algorithm
finds key frames as an optimization problem. Through experiments,
we demonstrate that the proposed video summarization
approach produces summaries with better coverage of
the entire video content and less redundancy among key frames
compared to the state-of-the-art approaches.
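The selection objective above — maximize coverage of the whole video while minimizing redundancy among the chosen frames — can be sketched as a greedy optimization. This is a minimal illustration, not the paper's algorithm: a Gaussian-kernel similarity stands in for the quadratic-mutual-information terms, and the feature vectors, kernel width, and trade-off weight `lam` are all illustrative assumptions.

```python
import math

def similarity(a, b, sigma=1.0):
    # Gaussian-kernel similarity between two feature vectors
    # (an illustrative stand-in for the paper's QMI terms).
    d2 = sum((x - y) ** 2 for x, y in zip(a, b))
    return math.exp(-d2 / (2 * sigma ** 2))

def select_key_frames(frames, k, lam=0.5):
    """Greedy trade-off: maximize how well the selected set covers all
    frames (facility-location coverage) minus lam * redundancy among
    the selected frames."""
    selected = []

    def coverage(chosen):
        if not chosen:
            return 0.0
        return sum(max(similarity(f, frames[j]) for j in chosen)
                   for f in frames)

    for _ in range(min(k, len(frames))):
        cur = coverage(selected)
        best, best_gain = None, float("-inf")
        for i in range(len(frames)):
            if i in selected:
                continue
            gain = coverage(selected + [i]) - cur
            gain -= lam * sum(similarity(frames[i], frames[j])
                              for j in selected)
            if gain > best_gain:
                best, best_gain = i, gain
        selected.append(best)
    return sorted(selected)
```

On three well-separated clusters of frames, the greedy pass picks one representative per cluster rather than two near-duplicates, which is the coverage-versus-redundancy behavior the abstract describes.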
Abstract: With the development of HyperSpectral Imagery
(HSI) technology, the spectral resolution of HSI has become finer,
resulting in a large number of spectral bands, high correlation
between neighboring bands, and high data redundancy. However,
semantic interpretation is a challenging task for HSI analysis
due to the high dimensionality and the high correlation of the
different spectral bands. This work presents a dimensionality
reduction approach that overcomes these issues and improves
the semantic interpretation of HSI. First, in order
to preserve the spatial information, the Tensor Locality Preserving
Projection (TLPP) has been applied to transform the original HSI.
In the second step, knowledge has been extracted based on the
adjacency graph to describe the different pixels. Based on the
transformation matrix using TLPP, a weighted matrix has been
constructed to rank the different spectral bands based on their
contribution score. Thus, the relevant bands have been adaptively
selected based on the weighted matrix. The performance of the
presented approach has been validated by implementing several
experiments, and the obtained results demonstrate the efficiency
of this approach compared to various existing dimensionality
reduction techniques. Also, according to the experimental results,
we can conclude that this approach adaptively selects the
relevant spectral bands, improving the semantic interpretation of HSI.
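As a rough illustration of the band-ranking step, the sketch below scores each spectral band by the total absolute weight it contributes in a projection matrix and keeps the top-k bands. The weight matrix here is a hypothetical stand-in for the TLPP-derived weighted matrix, and the scoring rule is an assumption, not the paper's exact formula.

```python
def select_bands(weights, k):
    """Rank spectral bands by the sum of absolute weights each band
    contributes (one row per band), and keep the top-k band indices."""
    scores = [sum(abs(w) for w in row) for row in weights]
    ranked = sorted(range(len(scores)), key=lambda b: scores[b],
                    reverse=True)
    return sorted(ranked[:k])
```

For example, with hypothetical per-band weight rows, the two bands with the largest total weight are retained and the low-contribution bands are dropped.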
Abstract: Connected vehicles are equipped with wireless sensors
that aid in Vehicle to Vehicle (V2V) and Vehicle to Infrastructure
(V2I) communication. These vehicles will in the near future
provide road safety, improve transport efficiency, and reduce traffic
congestion. One of the challenges for connected vehicles is how
to ensure that information sent across the network is secure. If
security of the network is not guaranteed, several attacks can occur,
thereby compromising the robustness, reliability, and efficiency of
the network. This paper discusses the unique properties of connected
vehicles and existing security mechanisms. The methodology employed
in this work is exploratory: the paper reviews existing security
solutions for connected vehicles. More concretely, it discusses
various cryptographic mechanisms available, and suggests areas
of improvement. The study proposes a combination of symmetric
key encryption and public key cryptography to improve security.
The study further proposes message aggregation as a technique to
overcome message redundancy. This paper offers a comprehensive
overview of connected vehicles technology, its applications, its
security mechanisms, open challenges, and potential areas of future
research.
Abstract: To reduce cost, digital cameras use a single image sensor to
capture color images. The Color Filter Array (CFA) in digital cameras
permits only one of the three primary (red-green-blue) colors to be
sensed in a pixel and interpolates the two missing components
through a method named demosaicking. The captured data is interpolated
into a full color image and then compressed in applications. Color
interpolation before compression leads to data redundancy. This
paper proposes a new Vector Quantization (VQ) technique to
construct a VQ codebook with Differential Evolution (DE)
Algorithm. The new technique is compared to conventional Linde-
Buzo-Gray (LBG) method.
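For context, the conventional LBG baseline mentioned above trains a codebook by Lloyd-style iteration: assign each training vector to its nearest codeword, then move each codeword to the centroid of its cell. The sketch below is a minimal version with illustrative initialization (the first `size` training vectors) and data; the DE-based construction the paper proposes would replace this refinement loop with evolutionary search over codebooks.

```python
def train_codebook(vectors, size, iters=20):
    """LBG/Lloyd-style refinement: nearest-codeword assignment followed
    by centroid updates, repeated for a fixed number of iterations."""
    codebook = [list(v) for v in vectors[:size]]  # illustrative init
    for _ in range(iters):
        cells = [[] for _ in codebook]
        for v in vectors:
            j = min(range(len(codebook)),
                    key=lambda c: sum((a - b) ** 2
                                      for a, b in zip(v, codebook[c])))
            cells[j].append(v)
        for j, cell in enumerate(cells):
            if cell:  # move codeword to the mean of its cell
                dim = len(cell[0])
                codebook[j] = [sum(v[d] for v in cell) / len(cell)
                               for d in range(dim)]
    return codebook

def quantize(v, codebook):
    # Return the index of the nearest codeword (the transmitted symbol).
    return min(range(len(codebook)),
               key=lambda c: sum((a - b) ** 2
                                 for a, b in zip(v, codebook[c])))
```

Vectors from the same region of the input space quantize to the same codeword, which is what removes the redundancy left by color interpolation.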
Abstract: Compared with traditional distributed environments, the
net-centric environment poses more demanding challenges for
information sharing, given its ultra-large scale, strong
distribution, dynamism, autonomy, heterogeneity, and redundancy.
This paper realizes an information sharing model and a series of core
services, through which it provides an open, flexible, and scalable
information sharing platform.
Abstract: Early networked systems assume that connections are reliable, and a lost connection is treated as a fault. Transient connections, however, are typical of mobile devices. Application areas of data sharing systems such as these lead to the conclusion that network connections may not always be reliable and that conventional approaches can be improved. The Nigerian commercial banking industry is a critical system whose operation is increasingly dependent on information technology (IT) driven information systems. The proposed solution to this problem uses a hierarchically clustered network structure, chosen to reflect (as much as possible) the typical organizational structure of Nigerian commercial banks. Representative transactions, such as data updates and replication of the results of such updates, were used to simulate the proposed model and show its applicability.
Abstract: A Wireless Sensor Network (WSN) comprises sensor
nodes designed to sense the environment and transmit the sensed
data back to the base station via multi-hop routing to reconstruct
physical phenomena. Since physical phenomena exhibit significant
temporal and spatial redundancy, it is
necessary to use Redundancy Suppression Algorithms (RSAs) on sensor
nodes to lower energy consumption by reducing the transmission
of redundant data. A conventional RSA is threshold-based:
it sets a threshold to suppress redundant data. Although
many temporal and spatial RSAs have been proposed, temporal-spatial RSAs
are seldom proposed because it is difficult to determine when
to use temporal versus spatial suppression. In this paper, we propose a
novel temporal-spatial redundancy suppression algorithm, the Codebook-based
Redundancy Suppression Mechanism (CRSM). CRSM adopts
vector quantization to generate a codebook, which is easily used to
implement temporal-spatial RSA. CRSM not only achieves power
saving and reliability for WSN, but also provides the predictability
of network lifetime. Simulation results show that the network lifetime
under CRSM exceeds that of other RSAs by at least 23%.
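A threshold-based temporal RSA of the kind described above can be sketched in a few lines: a node retransmits only when a new reading departs from the last transmitted value by more than the threshold, and the sink reuses the previous value otherwise. The readings and threshold are illustrative; CRSM itself replaces this rule with vector-quantization codebook lookups.

```python
def suppress(readings, threshold):
    """Temporal redundancy suppression: transmit a reading only when it
    differs from the last transmitted value by more than `threshold`."""
    transmitted = []
    last = None
    for r in readings:
        if last is None or abs(r - last) > threshold:
            transmitted.append(r)
            last = r  # sink's view is updated only on transmission
    return transmitted
```

On a slowly drifting temperature trace, most samples are suppressed and only significant changes cost a radio transmission, which is where the energy saving comes from.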
Abstract: Among various testing methodologies, Built-in Self-
Test (BIST) is recognized as a low cost, effective paradigm. Also,
full adders are one of the basic building blocks of most arithmetic
circuits in all processing units. In this paper, an optimized testable
2-bit full adder is proposed as a test building block. A BIST
procedure is then introduced to scale up the building block and to generate
self-testable n-bit full adders. The target design achieves 100%
fault coverage using an insignificant amount of hardware redundancy.
Moreover, overall test time is reduced by utilizing polymorphic
gates and by testing full adder building blocks in parallel.
Abstract: Stream Control Transmission Protocol (SCTP) has been
proposed to provide reliable transport of real-time communications.
Due to its attractive features, such as multi-streaming and
multi-homing, SCTP is often expected to be an alternative
to TCP and UDP. In the original SCTP standard, the secondary path
is mainly regarded as a redundancy. Recently, most research
has focused on extending SCTP to enable a host to send its
packets to a destination over multiple paths simultaneously. In order
to transfer packets concurrently over the multiple paths, the SCTP
should be well designed to avoid unnecessary fast retransmission
and the mis-estimation of congestion window size through the paths.
Therefore, we propose an Enhanced Cooperative ACK SCTP (ECASCTP)
to improve the path recovery efficiency of a multi-homed host
operating in concurrent multipath transfer mode. We evaluated the
performance of our proposed scheme using ns-2 simulation in terms
of cwnd variation, path recovery time, and goodput. Our scheme
provides better performance in lossy and path asymmetric networks.
Abstract: Developing an accurate classifier for high dimensional microarray datasets is a challenging task due to the small sample size. It is therefore important to determine a set of relevant genes that classify the data well. Traditionally, gene selection methods select the top-ranked genes according to their discriminatory power; these genes are often correlated with each other, resulting in redundancy. In this paper, we propose a hybrid method combining feature ranking with a wrapper method (a Genetic Algorithm with multiclass SVM) to identify a set of relevant genes that classify the data more accurately. A new fitness function for the genetic algorithm is defined that focuses on selecting the smallest set of genes that provides maximum accuracy. Experiments have been carried out on four well-known datasets. The proposed method provides better results than those reported in the literature in terms of both classification accuracy and number of genes selected.
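A fitness function of the kind described — favoring the smallest gene subset that still classifies accurately — might be sketched as a weighted combination of accuracy and subset compactness. The weight `alpha` and the linear form below are illustrative assumptions, not the paper's definition.

```python
def fitness(accuracy, n_selected, n_total, alpha=0.9):
    """Illustrative GA fitness: reward classification accuracy and
    penalize the fraction of genes kept, so smaller accurate subsets
    score higher than larger ones of equal accuracy."""
    return alpha * accuracy + (1 - alpha) * (1 - n_selected / n_total)
```

With `alpha` close to 1, accuracy dominates, and compactness only breaks ties among comparably accurate chromosomes — the behavior the abstract attributes to its fitness function.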
Abstract: This paper studies the dependability of component-based
applications, especially embedded ones, from the diagnosis
point of view. The principle of the diagnosis technique is to
implement inter-component tests in order to detect and locate the
faulty components without redundancy. The proposed approach for
diagnosing faulty components consists of two main aspects. The first
one concerns the execution of the inter-component tests which
requires integrating test functionality within a component. This is the
subject of this paper. The second one is the diagnosis process itself
which consists of the analysis of inter-component test results to
determine the fault-state of the whole system. Advantages of this
diagnosis method, compared to classical redundancy-based fault-tolerant
techniques, are application autonomy, cost-effectiveness, and
better usage of system resources. These advantages are very important
for many systems, especially embedded ones.
Abstract: Structural redundancy is an important consideration in
the seismic design of structures. Initially, structural redundancy was
described as the degree of static indeterminacy of a system. Although many definitions of redundancy in structures have been presented, recently the
definition of structural redundancy has been related to the configuration of the structural system and the number of lateral load
transferring directions in the structure. Steel frames with infill walls are common systems in the construction of ordinary residential buildings in some countries. It is
well established that the performance of structures is affected by adding masonry infill walls. In order to investigate the effect of
infill walls on the redundancy of steel frames constructed
with masonry walls, the components of redundancy, including the redundancy variation index, the redundancy strength index, and the
redundancy response modification factor, were extracted for
frames with masonry infills. Several steel frames with typical storey counts and various numbers of bays were designed and considered.
The redundancy of frames with and without infill walls was evaluated by the proposed method. The results show that the presence of infill walls increases redundancy.
Abstract: Increasing detection rates and reducing false positive
rates are important problems in Intrusion Detection Systems (IDS).
Although preventative techniques such as access control and
authentication attempt to stop intruders, they can fail, so
intrusion detection has been introduced as a second line of defence. Rare
events are events that occur very infrequently, and their detection
is a common problem in many domains. In this paper we
propose an intrusion detection method that combines rough sets and
fuzzy clustering. Rough set theory is used to reduce the amount of data and
eliminate redundancy. Fuzzy c-means clustering allows objects to
belong to several clusters simultaneously, with different degrees of
membership. Our approach allows us to recognize not only known
attacks but also to detect suspicious activity that may be the result of
a new, unknown attack. Experimental results on the Knowledge
Discovery and Data Mining (KDD Cup 1999) dataset show that the
method is efficient and practical for intrusion detection systems.
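The degree-of-membership idea behind fuzzy c-means can be shown with the standard membership update for a single one-dimensional point; the cluster centers and fuzzifier `m` below are illustrative. Each point receives a membership degree in every cluster, and the degrees sum to one.

```python
def memberships(point, centers, m=2.0):
    """Fuzzy c-means membership degrees of one point in each cluster:
    u_j = 1 / sum_k (d_j / d_k)^(2/(m-1)), where d_j is the distance
    to center j.  Degrees sum to 1, so a point can belong to several
    clusters simultaneously with different strengths."""
    dists = [max(abs(point - c), 1e-12) for c in centers]
    return [1.0 / sum((dj / dk) ** (2 / (m - 1)) for dk in dists)
            for dj in dists]
```

A point near one center but not exactly on it still carries a small membership in the other clusters, which is what lets the method flag suspicious activity that fits no known-attack cluster cleanly.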
Abstract: The paper presents the design concept of a unit-selection
text-to-speech synthesis system for the Slovenian language.
Due to its modular and upgradable architecture, the system can be
used in a variety of speech user interface applications, ranging from
carrier-grade server voice portal applications and desktop user interfaces
to specialized embedded devices.
Since memory and processing power requirements are important
factors for a possible implementation in embedded devices, lexica
and speech corpora need to be reduced. We describe a simple and
efficient implementation of a greedy subset selection algorithm that
extracts a compact subset of high coverage text sentences. The
experiment on a reference text corpus showed that the subset
selection algorithm produced a compact sentence subset with little
redundancy.
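A greedy subset-selection algorithm of the kind described typically adds, at each step, the sentence that covers the most not-yet-covered units, stopping when no sentence adds new coverage. The sketch below uses words as the coverage unit for simplicity; a TTS corpus would normally cover diphones or other phonetic units.

```python
def greedy_subset(sentences):
    """Greedy set cover over sentences: repeatedly pick the sentence
    contributing the most uncovered units (words, illustratively)."""
    covered, chosen = set(), []
    remaining = list(sentences)
    while remaining:
        best = max(remaining, key=lambda s: len(set(s.split()) - covered))
        gain = set(best.split()) - covered
        if not gain:  # nothing left to cover; drop redundant sentences
            break
        chosen.append(best)
        covered |= gain
        remaining.remove(best)
    return chosen
```

Sentences whose units are already covered are never selected, which is how the algorithm yields a compact subset with little redundancy.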
The adequacy of the spoken output was evaluated by several
subjective tests, as recommended by the International
Telecommunication Union (ITU).
Abstract: This paper describes a UDP over IP based, server-oriented redundant host configuration protocol (RHCP) that can be used by collaborating embedded systems in an ad-hoc network to acquire a dynamic IP address. The service is provided by a single network device at a time and will be dynamically reassigned to one of the other network clients if the primary provider fails. The protocol also allows all participating clients to monitor the dynamic makeup of the network over time. So far the algorithm has been implemented and tested on an 8-bit embedded system architecture with a 10Mbit Ethernet interface.