Abstract: In the 3D-wavelet video coding framework, temporal filtering is performed along the trajectory of motion using Motion Compensated Temporal Filtering (MCTF). Hence, a computationally efficient motion estimation technique is essential for MCTF. In this paper, a predictive technique is proposed to reduce the computational complexity of the MCTF framework by exploiting the high correlation among the frames in a Group of Pictures (GOP). The proposed technique applies both the coarse and fine searches of any fast block-based motion estimation algorithm only to the first pair of frames in a GOP. The generated motion vectors are supplied to the subsequent frame pairs, even at subsequent temporal levels, and only a fine search is carried out around those predicted motion vectors. Hence, the coarse search is skipped for all motion estimation in a GOP except for the first pair of frames. The technique has been tested with different fast block-based motion estimation algorithms over several standard test sequences using MC-EZBC, a state-of-the-art scalable video coder. The simulation results reveal a substantial reduction (20.75% to 38.24%) in the number of search points during motion estimation, without compromising the quality of the reconstructed video compared to non-predictive techniques. Since the motion vectors of all frame pairs in a GOP except the first lie within ±1 of the motion vectors of the previous pair, the number of bits required for motion vectors is also reduced by 50%.
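The core of the proposed scheme, reusing motion vectors across frame pairs and refining them with only a ±1 fine search, can be sketched as follows. This is a minimal illustration using SAD-based block matching; the function and variable names are ours, not the paper's.

```python
import numpy as np

def sad(block, candidate):
    # Sum of absolute differences between two equal-size blocks.
    return int(np.abs(block.astype(np.int32) - candidate.astype(np.int32)).sum())

def fine_search(cur, ref, x, y, bs, pred_mv):
    # Refine a predicted motion vector with a +/-1 fine search only;
    # the coarse search is skipped, as in the proposed technique.
    block = cur[y:y + bs, x:x + bs]
    best_mv, best_cost = pred_mv, float("inf")
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            my, mx = pred_mv[0] + dy, pred_mv[1] + dx
            ry, rx = y + my, x + mx
            if 0 <= ry <= ref.shape[0] - bs and 0 <= rx <= ref.shape[1] - bs:
                cost = sad(block, ref[ry:ry + bs, rx:rx + bs])
                if cost < best_cost:
                    best_cost, best_mv = cost, (my, mx)
    return best_mv
```

Only the first frame pair of a GOP would run the full coarse-plus-fine search of the chosen fast algorithm; every later pair, including those at subsequent temporal levels, would call fine_search with the previous pair's vector as pred_mv.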
Abstract: In this paper, a novel multi-join algorithm for joining multiple relations is introduced. The algorithm is based on a hash-based join of two relations that produces a double index. This is done by scanning the two relations once; but instead of moving the records into buckets, a double index is built, which eliminates the collisions that can occur in a complete hash algorithm. The double index is divided into join buckets of similar categories from the two relations. The algorithm then joins buckets with similar keys to produce joined buckets, which ultimately yields a complete join index of the two relations without actually joining the relations themselves. The time complexity required to build the join index of two categories is O(m log m), where m is the size of each category, for a total time complexity of O(n log m) over all buckets. The join index is used to materialize the joined relation if required; otherwise, it is used along with the join indices of other relations to build a lattice for multi-join operations with minimal I/O requirements. The lattice of join indices can be fitted into main memory to reduce the time complexity of the multi-join algorithm.
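As a rough illustration of the join-index idea (our own sketch with hypothetical names; it uses exact-key buckets and omits the paper's category sorting and lattice construction), the index records pairs of record positions rather than materialized tuples:

```python
from collections import defaultdict

def build_join_index(r, s, r_key, s_key):
    # Build a join index (pairs of record positions) without
    # materializing the joined relation. r and s are lists of tuples;
    # r_key and s_key are the join-attribute positions.
    buckets = defaultdict(lambda: ([], []))  # key -> (positions in r, positions in s)
    for i, rec in enumerate(r):              # single scan of r
        buckets[rec[r_key]][0].append(i)
    for j, rec in enumerate(s):              # single scan of s
        buckets[rec[s_key]][1].append(j)
    join_index = []
    for _, (ri, sj) in buckets.items():      # join buckets with matching keys
        join_index.extend((i, j) for i in ri for j in sj)
    return join_index
```

The resulting position pairs can be kept in memory (the lattice of such indices) and used to materialize the join only when required.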
Abstract: With increasing circuit complexity and the growing demand for portable devices, power consumption is one of the most important design parameters today. Full adders are the basic building block of many circuits, so reducing the power consumption of full adders is very important in low-power design. One of the most power-consuming modules in a full adder is the XOR/XNOR circuit. This paper presents two new full adders based on two new logic approaches. The proposed approaches use a single XOR or XNOR gate to implement a full adder cell, thereby decreasing both delay and power. Using the two new approaches and the XOR and XNOR gates, two new full adders have been implemented. Simulations are carried out in HSPICE in 0.18 μm bulk technology with a 1.8 V supply voltage. The results show that the proposed ten-transistor full adder consumes 12% less power and is 5% faster than the MB12T full adder, while the 9T design is more area-efficient and is 24% better than a similar 10T full adder in terms of power consumption. The main drawback of the proposed circuits is the output threshold-loss problem.
Abstract: The practical implementation of audio-video coupled speech recognition systems is mainly limited by the hardware complexity of integrating two radically different information-capturing devices with good temporal synchronisation. In this paper, we propose a solution based on a smart CMOS image sensor that simplifies these hardware integration difficulties. Using on-chip image processing, the smart sensor calculates the X/Y projections of the captured image in real time. This on-chip projection considerably reduces the volume of the output data. The data-volume reduction permits transmission of the condensed visual information over the same audio channel, using the stereophonic input available on most standard computing devices such as PCs, PDAs, and mobile phones. A prototype called VMIKE (Visio-Microphone) has been designed and realised in standard 0.35 μm CMOS technology. A preliminary experiment gives encouraging results. Its efficiency will be further investigated in a variety of applications such as biometrics, speech recognition in noisy environments, and vocal control for military use or for disabled persons.
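The on-chip data reduction described above amounts to computing row and column sums of the frame. A minimal software equivalent (our illustration, not the sensor's actual circuitry):

```python
import numpy as np

def xy_projections(frame):
    # Collapse a 2-D grayscale frame into its X and Y projections,
    # i.e. per-column and per-row intensity sums.
    x_proj = frame.sum(axis=0)  # one value per column
    y_proj = frame.sum(axis=1)  # one value per row
    return x_proj, y_proj

# A W x H frame shrinks from W*H samples to W + H samples, small
# enough to carry over one channel of a stereophonic audio input.
```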
Abstract: Intelligent technologies are increasingly facilitating
sustainable water management strategies in Australia. While this
innovation can present clear cost benefits to utilities through
immediate leak detection and the deferral of capital costs, the impact of
this technology on households is less distinct. By offering real-time
engagement and detailed end-use consumption breakdowns, there is
significant potential for demand reduction as a behavioural response
to increased information. Despite this potential, passive
implementation without well-planned residential engagement
strategies is likely to result in a lost opportunity. This paper begins
this research process by exploring the effect of smart water meters
through the lens of three behaviour change theories. The Theory of
Planned Behaviour (TPB), Belief Revision theory (BR) and Practice
Theory emphasise different variables that can potentially influence
and predict household water engagements. In acknowledging the
strengths of each theory, the nuances and complexity of household
water engagement can be recognised which can contribute to
effective planning for residential smart meter engagement strategies.
Abstract: Scheduling for the flexible job shop is very important in both production management and combinatorial optimization. However, it is quite difficult to achieve an optimal solution to this problem with traditional optimization approaches owing to its high computational complexity. Combining several optimization criteria induces additional complexity and new problems. In this paper, a Pareto approach to solving the multi-objective flexible job shop scheduling problem is proposed. The objectives considered are minimizing the overall completion time (makespan) and the total weighted tardiness (TWT). An effective simulated annealing algorithm based on the proposed approach is presented to solve the multi-objective flexible job shop scheduling problem. An external memory of non-dominated solutions is maintained to save and update the non-dominated solutions found during the solution process. Numerical examples are used to evaluate and study the performance of the proposed algorithm, which can be applied easily under real factory conditions and to large-size problems. It should thus be useful to both practitioners and researchers.
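The external memory of non-dominated solutions can be maintained with a standard Pareto-archive update for the two objectives (makespan, TWT). The sketch below is generic and not taken from the paper:

```python
def dominates(a, b):
    # a and b are (makespan, twt) pairs; a dominates b if it is no
    # worse in both objectives and strictly better in at least one.
    return a[0] <= b[0] and a[1] <= b[1] and a != b

def update_archive(archive, candidate):
    # Insert candidate into the external memory if non-dominated,
    # removing any archived solutions that it dominates.
    if any(dominates(s, candidate) for s in archive):
        return archive  # candidate is dominated: discard it
    archive = [s for s in archive if not dominates(candidate, s)]
    archive.append(candidate)
    return archive
```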
Abstract: In this paper, a hybrid technique combining a Genetic Algorithm and Simulated Annealing (HGASA) is applied to Fractal Image Compression (FIC). With the help of this hybrid evolutionary algorithm, an effort is made to reduce the search complexity of matching between range blocks and domain blocks. The concept of Simulated Annealing (SA) is incorporated into the Genetic Algorithm (GA) in order to avoid premature convergence of the strings. Fractal Image Compression is a spatial-domain image compression technique, but its main drawback is the large computational time incurred by the global search. In order to improve the computational time while retaining acceptable quality of the decoded image, the HGASA technique is proposed. Experimental results show that the proposed HGASA is a better method than GA in terms of PSNR for Fractal Image Compression.
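One common way to embed SA into a GA, consistent with the premature-convergence motivation above, is to accept a worse offspring with a temperature-controlled probability instead of rejecting it outright. A hedged sketch (the names and details are ours, not the paper's):

```python
import math
import random

def sa_accept(parent_cost, child_cost, temperature):
    # Metropolis-style acceptance inside the GA loop: better offspring
    # are always kept; worse ones survive with probability
    # exp(-delta / T), which preserves diversity while T is high.
    delta = child_cost - parent_cost
    if delta <= 0:
        return True
    return random.random() < math.exp(-delta / temperature)
```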
Abstract: Using bottom-up image processing algorithms to predict human eye fixations and extract the relevant embedded information in images has been widely applied in the design of active machine vision systems. Scene text is an important feature to extract, especially in vision-based mobile robot navigation, as many potential landmarks such as nameplates and information signs contain text. This paper proposes an edge-based text region extraction algorithm that is robust with respect to font size, style, color/intensity, orientation, and the effects of illumination, reflections, shadows, perspective distortion, and complex image backgrounds. The performance of the proposed algorithm is compared against a number of widely used text localization algorithms, and the results show that this method can quickly and effectively localize and extract text regions from real scenes, and can be used in mobile robot navigation in indoor environments to detect text-based landmarks.
Abstract: The voice signal in a Voice over Internet Protocol (VoIP) system is carried over a best-effort IP network, which introduces network degradations including delay, packet loss, and jitter. This paper presents the implementation of a finite impulse response (FIR) filter for voice quality improvement in the VoIP system using the distributed arithmetic (DA) algorithm. VoIP simulations are conducted with the AMR-NB 6.70 kbps and G.729a speech coders at different packet loss rates, and the performance of the enhanced VoIP signal is evaluated using the perceptual evaluation of speech quality (PESQ) measure for narrowband signals. The results show a reduction in the computational complexity of the system and a significant improvement in the quality of the VoIP voice signal.
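Distributed arithmetic replaces the multipliers of a direct-form FIR filter with table look-ups over the bit planes of the input samples. The following minimal software model for unsigned B-bit samples is our illustration; a hardware DA design would also handle two's-complement inputs:

```python
def da_fir(samples, coeffs, bits=8):
    # FIR filtering via distributed arithmetic: precompute the sum of
    # coefficients for every bit pattern of the tap vector (LUT of
    # size 2**taps), then accumulate shifted LUT entries, one per bit
    # plane, instead of performing multiplications.
    taps = len(coeffs)
    lut = [sum(c for k, c in enumerate(coeffs) if (pattern >> k) & 1)
           for pattern in range(1 << taps)]
    out = []
    for n in range(len(samples)):
        window = [samples[n - k] if n - k >= 0 else 0 for k in range(taps)]
        acc = 0.0
        for b in range(bits):  # one LUT access per bit plane
            pattern = sum(((window[k] >> b) & 1) << k for k in range(taps))
            acc += lut[pattern] * (1 << b)
        out.append(acc)
    return out
```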
Abstract: In this article, we discuss the formulation of two explicit group iterative finite difference methods for the time-dependent two-dimensional Burgers' problem on a variable mesh. For the non-linear problem, the discretization leads to a non-linear system whose Jacobian is a tridiagonal matrix. We discuss Newton's explicit group iterative methods for a general Burgers' equation. The proposed explicit group methods are derived from the standard point and rotated point Crank-Nicolson finite difference schemes, and their computational complexity is analysed. Numerical results are given to justify the feasibility of the two proposed iterative methods.
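For reference, one common scalar form of the two-dimensional Burgers' equation treated in such studies is (our rendering; the paper may consider a coupled system in two velocity components):

\[
\frac{\partial u}{\partial t}
+ u\,\frac{\partial u}{\partial x}
+ u\,\frac{\partial u}{\partial y}
= \frac{1}{Re}\left(
    \frac{\partial^{2} u}{\partial x^{2}}
  + \frac{\partial^{2} u}{\partial y^{2}}\right),
\]

where Re is the Reynolds number; a Crank-Nicolson discretization of this equation produces the non-linear system with tridiagonal Jacobian mentioned above.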
Abstract: In recent years, sustainable supply chain management (SSCM) has been widely researched in the academic domain. However, due to the traditional operational role and the complexity of supply chain management in the cement industry, relatively little research has been conducted on cement supply chain simulation integrated with sustainability criteria. This paper analyses cement supply chain operations using the Push-Pull supply chain framework and the Life Cycle Assessment (LCA) methodology, and proposes three supply chain scenarios based on Make-To-Stock (MTS), Pack-To-Order (PTO) and Grind-To-Order (GTO) strategies. A Discrete-Event Simulation (DES) model of the SSCM is constructed using Arena software to implement the three target scenarios. We conclude from the simulation results that GTO is the optimal supply chain strategy, demonstrating the best economic, ecological and social performance in the cement industry.
Abstract: Many studies have shown that parallelization decreases efficiency [1], [2]. There are many reasons for this decrease. This paper investigates those which appear in the context of parallel data integration. Integration processes generally cannot be divided into packages of identical size (i.e., tasks of identical complexity); the reason is unknown, heterogeneous input data, which results in variable task lengths. Process delay is determined by the slowest processing node and has a detrimental effect on the total processing time. With a real-world example, this study shows that while process delay initially increases with the introduction of more nodes, it ultimately decreases again after a certain point. The example makes use of the cloud computing platform Hadoop and is run inside Amazon's EC2 compute cloud. A stochastic model is set up which can explain this effect.
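The slowest-node effect can be illustrated with a toy model of our own (it is not the paper's stochastic model): tasks of random, heterogeneous lengths are dealt to n nodes, and process delay is the gap between the slowest node and a perfectly even split.

```python
import random

def simulate_delay(task_lengths, n_nodes):
    # Round-robin the tasks onto n nodes and report the process delay:
    # the load of the slowest node minus the ideal (even) load.
    loads = [0.0] * n_nodes
    for i, t in enumerate(task_lengths):
        loads[i % n_nodes] += t
    ideal = sum(task_lengths) / n_nodes
    return max(loads) - ideal

tasks = [random.expovariate(1.0) for _ in range(1000)]  # variable task lengths
for n in (2, 4, 8, 16, 32):
    print(n, round(simulate_delay(tasks, n), 2))
```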
Abstract: This paper discusses the complexity of component-based development (CBD) of embedded systems. Although CBD has its merits, it must be augmented with methods to control the complexities that arise from resource constraints, timeliness, and the run-time deployment of components in embedded system development. Software component specification, system-level testing, and run-time reliability measurement are some ways to control this complexity.
Abstract: SDMA (Space-Division Multiple Access) is a MIMO (Multiple-Input, Multiple-Output) based wireless communication network architecture with the potential to significantly increase spectral efficiency and system performance. Maximum likelihood (ML) detection provides the optimal performance, but its complexity increases exponentially with the constellation size of the modulation and the number of users. QR decomposition (QRD) MUD can be a substitute for ML detection due to its low complexity and near-optimal performance. Minimum mean-squared-error (MMSE) multiuser detection (MUD) minimises the mean square error (MSE), which does not guarantee that the BER of the system is also minimised. The minimum bit error rate (MBER) MUD, however, performs better than the classic MMSE MUD in terms of minimum probability of error, by directly minimising the BER cost function. The MBER MUD is also able to support more users than the number of receive antennas, a scenario in which the other MUDs fail. In this paper, the performance of these MUD techniques is verified for correlated MIMO channel models based on the IEEE 802.16n standard.
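For context, the classic MMSE MUD referenced above computes the standard linear estimate

\[
\hat{\mathbf{x}}_{\mathrm{MMSE}}
= \left(\mathbf{H}^{H}\mathbf{H} + \sigma_{n}^{2}\mathbf{I}\right)^{-1}
  \mathbf{H}^{H}\mathbf{y},
\]

where \(\mathbf{H}\) is the MIMO channel matrix, \(\mathbf{y}\) the received vector, and \(\sigma_{n}^{2}\) the noise variance; the MBER MUD instead adjusts the detector weights to minimise the BER cost function directly.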
Abstract: Sensor relocation repairs coverage holes caused by node failures. One way to repair a coverage hole is to find a redundant node to replace the faulty node. Most previous approaches take a long time to find redundant nodes because the redundant nodes are randomly scattered around the sensing field. To record the precise positions of sensor nodes, most approaches also assume that GPS is installed in the sensor nodes; however, the high cost and power consumption of GPS are heavy burdens for sensor nodes. We therefore propose a fast sensor relocation algorithm that arranges redundant nodes into redundant walls without GPS. Redundant walls are constructed at the position where the average distance to each sensor node is shortest, and they guide sensor nodes to find redundant nodes in the minimum time. Simulation results show that our algorithm can find the proper redundant node in the minimum time and reduces the relocation time with low message complexity.
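The wall-placement rule described above, choosing the position whose average distance to all sensor nodes is shortest, can be approximated with a simple grid search. This sketch is ours and not the paper's algorithm:

```python
import math

def best_wall_position(nodes, width, height, step=1.0):
    # Scan candidate grid positions and return the one minimising the
    # average Euclidean distance to the given sensor nodes.
    best, best_avg = None, float("inf")
    y = 0.0
    while y <= height:
        x = 0.0
        while x <= width:
            avg = sum(math.hypot(x - nx, y - ny) for nx, ny in nodes) / len(nodes)
            if avg < best_avg:
                best, best_avg = (x, y), avg
            x += step
        y += step
    return best
```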
Abstract: Considering payload, reliability, security, and operational lifetime as major constraints in the transmission of images, we put forward in this paper a steganographic technique implemented at the physical layer. We suggest transmitting halftoned images (the payload constraint) in wireless sensor networks to reduce the amount of transmitted data. For low-power and interference-limited applications, turbo codes provide suitable reliability. Ensuring security is one of the highest priorities in many sensor networks, and the turbo code structure, apart from providing forward error correction, can be utilized to provide encryption. We first consider the halftoned image; the method of embedding a block of data (called the secret) in this halftoned image during the turbo encoding process is then presented, followed by the small modifications required at the turbo decoder to extract the embedded data. The implementation complexity and the degradation of the bit error rate (BER) in the turbo-based stego system are analyzed. Using entropy-based cryptanalytic techniques, we show that the strength of our turbo-based stego system approaches that of one-time pads (OTPs).
Abstract: Petri Nets (PNs) have proven to be an effective graphical, mathematical, simulation, and control tool for Discrete Event Systems (DES). However, with the growth in complexity of modern industrial and communication systems, PNs have proven inadequate for addressing the problems of uncertainty and imprecision in data. This gave rise to the amalgamation of fuzzy logic with Petri nets, and a new tool emerged under the name of Fuzzy Petri Nets (FPNs). Although there has been a great deal of research on FPNs and a number of applications have been proposed, their basic types and structure are still ambiguous. Therefore, in this work, an effort is made to categorize FPNs according to their structure and algorithms. Further, a literature review of the applications of FPNs in the light of this classification is presented.
Abstract: This paper presents an integer frequency offset (IFO) estimation scheme for the 3GPP Long Term Evolution (LTE) downlink system. First, the conventional joint detection method for IFO and sector cell index (CID) information is introduced. Second, an IFO estimation scheme that does not require explicit sector CID information is proposed; sector CID detection can then operate jointly with the proposed IFO estimation, reducing the time delay in comparison with the conventional joint method. The proposed method is also computationally efficient and achieves almost the same performance as the conventional method over the Pedestrian and Vehicular channel models.
Abstract: Today, computer systems are increasingly complex and face growing security risks. Security managers need effective security risk assessment methodologies that can model the increasing complexity of current computer systems well while keeping the complexity of the assessment procedure low. This paper provides a brief analysis of common security risk assessment methodologies, leading to the selection of a methodology that fulfills these requirements. A detailed analysis of the most effective methodology is then carried out, with numerical examples demonstrating its ease of use.
Abstract: We consider linear regression models where both the input data (the values of the independent variables) and the output data (the observations of the dependent variable) are interval-censored. We introduce a possibilistic generalization of the least squares estimator, the so-called OLS-set for the interval model. This set captures the impact of the loss of information on the OLS estimator caused by interval censoring and provides a tool for quantifying this effect. We study complexity-theoretic properties of the OLS-set. We also deal with restricted versions of the general interval linear regression model, in particular the crisp input – interval output model. We give an argument that natural descriptions of the OLS-set in the crisp input – interval output case cannot be computed in polynomial time. We then derive easily computable approximations of the OLS-set which can be used instead of the exact description, and illustrate the approach with an example.
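In our notation (the paper's exact definition may differ), the OLS-set collects every classical least squares estimate obtainable from data consistent with the interval observations:

\[
\mathrm{OLS} = \left\{\, \hat{\beta} = (X^{\mathsf{T}}X)^{-1}X^{\mathsf{T}}y
\;\middle|\; X \in \mathbf{X},\; y \in \mathbf{y},\;
X \text{ of full column rank} \,\right\},
\]

where \(\mathbf{X}\) and \(\mathbf{y}\) denote the interval matrix of inputs and the interval vector of outputs; in the crisp input – interval output model, \(X\) is fixed and only \(y\) ranges over its intervals.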