Abstract: Ad hoc networks are characterized by multi-hop
wireless connectivity and frequently changing network topology.
Forming security associations among a group of nodes in ad-hoc
networks is more challenging than in conventional networks due to the
lack of a central authority, i.e. fixed infrastructure. With that view in
mind, group key management is an important building block of
any secure group communication. The main contribution of this paper
is a low-complexity key management scheme that is suitable for fully
self-organized ad-hoc networks. The protocol is also password
authenticated, making it resilient against active attacks. Unlike other
existing key agreement protocols, ours makes no assumptions about the
structure of the underlying wireless network, making it suitable for
“truly ad-hoc” networks. Finally, we analyze our protocol to show
the computation and communication burden on individual nodes for
key establishment.
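The paper's protocol is not reproduced here, but the contributory idea behind group key agreement can be sketched with a chained Diffie-Hellman exchange, in which every node folds its secret into the shared value (a minimal sketch with toy parameters; `P`, `G` and the helper names are illustrative, not the authors' construction):

```python
# Minimal sketch of contributory group key agreement via chained
# Diffie-Hellman (illustrative parameters, not the paper's protocol).
P = 0xFFFFFFFFFFFFFFC5  # 2**64 - 59, a demo prime; real groups are 2048+ bits
G = 5                   # demo generator

def group_key(secrets):
    """Each node raises the running value to its secret exponent."""
    value = G
    for s in secrets:
        value = pow(value, s, P)
    return value

# Any ordering of the same secrets yields the same key, since
# ((g^a)^b)^c = g^(abc) mod p and exponent multiplication commutes.
k1 = group_key([11, 23, 37])
k2 = group_key([37, 11, 23])
```

The sketch deliberately omits the password-authentication layer; in the paper that layer is what protects the exchange against active attackers.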
Abstract: Different methods based on biometric algorithms are
presented for eigenface-based face detection and recognition,
including identification and verification. The theme of this
research is to manage the critical processing stages (accuracy, speed,
security and monitoring) of face-recognition activities, with the
flexibility of searching and editing the secure authorized database. In
this paper we implement different techniques, such as eigenface
vector reduction using texture and shape vector information for
complexity removal, while density matching scores with Face
Boundary Fixation (FBF) extract the most likely characteristics in the
media-processing content. We examine the development and
performance efficiency of the database by applying our algorithms in
both the recognition and detection phases. Our results show
encouraging gains in accuracy and security, outperforming a number
of previous approaches in all of the above processes.
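As background, the eigenface representation reduces to principal component analysis of mean-centered face vectors. A minimal pure-Python sketch (power iteration on a toy "image" matrix; the data and helper names are illustrative, not the paper's pipeline):

```python
# Toy eigenface sketch: leading principal component of flattened
# "face" vectors via power iteration (illustrative only).

def mean_center(faces):
    n, d = len(faces), len(faces[0])
    mean = [sum(f[j] for f in faces) / n for j in range(d)]
    return [[f[j] - mean[j] for j in range(d)] for f in faces], mean

def leading_eigenface(faces, iters=200):
    centered, _ = mean_center(faces)
    d = len(centered[0])
    v = [1.0] * d
    for _ in range(iters):
        # w = (X^T X) v, i.e. one power-iteration step on the covariance
        xv = [sum(row[j] * v[j] for j in range(d)) for row in centered]
        w = [sum(centered[i][j] * xv[i] for i in range(len(centered)))
             for j in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

def project(face, mean, v):
    """One-number signature of a face along the leading eigenface."""
    return sum((face[j] - mean[j]) * v[j] for j in range(len(v)))

faces = [[2.0, 0.1, 0.0], [4.0, 0.2, 0.1], [6.0, 0.0, 0.2]]
centered, mean = mean_center(faces)
v = leading_eigenface(faces)
```

Recognition then compares these low-dimensional projections instead of raw pixels, which is where the vector-reduction savings come from.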
Abstract: In the present paper, a set of parametric FE stress
analyses is carried out for two-planar welded tubular DKT-joints
under two different axial load cases. Analysis results are used to
present general remarks on the effect of geometrical parameters on
the stress concentration factors (SCFs) at the inner saddle, outer
saddle, toe, and heel positions on the main (outer) brace. Then a new
set of SCF parametric equations is developed through nonlinear
regression analysis for the fatigue design of two-planar DKT-joints.
An assessment study of these equations is conducted against
experimental data, and the satisfaction of the criteria for the
acceptance of parametric equations is checked. Significant effort has
been devoted by researchers to the study of SCFs in various uniplanar
tubular connections. Nevertheless, for multi-planar joints, which
cover the majority of practical applications, very few
investigations have been reported due to the complexity and high
cost involved.
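The form of the paper's SCF equations is not reproduced here; as an illustration of the regression step, a one-parameter power-law fit SCF = a·β^b can be obtained in closed form after a log transform (a hedged sketch with synthetic data points, not the authors' equations):

```python
import math

# Fit SCF = a * beta**b by linear least squares on
# log(SCF) = log(a) + b*log(beta). Data below is synthetic.
beta = [0.3, 0.4, 0.5, 0.6, 0.8]
scf  = [2.1, 2.9, 3.8, 4.7, 6.9]

x = [math.log(v) for v in beta]
y = [math.log(v) for v in scf]
n = len(x)
sx, sy = sum(x), sum(y)
sxx = sum(v * v for v in x)
sxy = sum(u * v for u, v in zip(x, y))

b = (n * sxy - sx * sy) / (n * sxx - sx * sx)  # fitted exponent
a = math.exp((sy - b * sx) / n)                # fitted coefficient

pred = [a * v ** b for v in beta]
```

Multi-parameter SCF equations are fitted the same way in principle, but require iterative nonlinear regression rather than a closed-form solve.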
Abstract: The linear methods of heart rate variability analysis,
both non-parametric (e.g. fast Fourier transform analysis) and
parametric (e.g. autoregressive modeling), have become
established non-invasive tools for assessing cardiac health, but their
sensitivity, specificity and positive predictive value were found to be
lower than expected.
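As a minimal illustration of the non-parametric branch, the spectral content of a tachogram can be estimated with a discrete Fourier transform (a pure-Python sketch on a synthetic signal; the sampling choices are illustrative, not the paper's method):

```python
import cmath, math

def power_spectrum(x):
    """Naive DFT power spectrum (O(N^2); an FFT is used in practice)."""
    n = len(x)
    return [abs(sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) ** 2 / n
            for k in range(n // 2)]

# Synthetic evenly resampled RR series (4 Hz) carrying a 0.25 Hz
# oscillation (respiratory/HF band) on a constant mean.
fs, n = 4.0, 256
rr = [0.8 + 0.05 * math.sin(2 * math.pi * 0.25 * t / fs) for t in range(n)]
mean = sum(rr) / n
spec = power_spectrum([v - mean for v in rr])
peak_hz = max(range(len(spec)), key=spec.__getitem__) * fs / n
```

Clinical HRV analysis then integrates this spectrum over the standard LF (0.04 to 0.15 Hz) and HF (0.15 to 0.4 Hz) bands.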
Abstract: The H.264/AVC standard is a highly efficient video
codec providing high-quality video at low bit rates. Because it
employs advanced techniques, its computational complexity is
high. This complexity is the major obstacle to the
implementation of a real-time encoder and decoder. Parallelism is
one approach to the problem, and it can be exploited on a multi-core
system. We analyze macroblock-level parallelism, which preserves the
bit rate while achieving high processor concurrency. To reduce the
encoding time, a dynamic data partition based on macroblock regions is
proposed. This data partition has advantages in load balancing and
data-communication overhead. Using the data partition, the encoder
achieves more than a 3.59x speed-up on a four-processor system. This
work can be applied to other multimedia processing applications.
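The paper's dynamic partition is not reproduced here; the general shape of macroblock-region parallelism can be sketched by dealing macroblock rows across worker threads (illustrative only, ignoring H.264's intra-prediction dependencies between neighboring macroblocks):

```python
from concurrent.futures import ThreadPoolExecutor

ROWS, COLS = 8, 12  # macroblock grid of a small frame (illustrative)

def encode_macroblock(r, c):
    # Stand-in for real macroblock encoding work.
    return (r * COLS + c) % 7

def encode_region(rows):
    """Encode one region: a set of macroblock rows."""
    return sum(encode_macroblock(r, c) for r in rows for c in range(COLS))

def parallel_encode(workers):
    # Deal rows round-robin so each region carries a balanced load.
    regions = [list(range(ROWS))[i::workers] for i in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(encode_region, regions))

serial = sum(encode_macroblock(r, c)
             for r in range(ROWS) for c in range(COLS))
parallel = parallel_encode(4)
```

The round-robin dealing stands in for the load-balancing role of the proposed dynamic partition; the result is bit-identical to serial encoding, matching the paper's same-bit-rate claim.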
Abstract: In this paper, a recursive algorithm for the
computation of 2-D DCT using Ramanujan Numbers is proposed.
With this algorithm, floating-point multiplications are completely
eliminated, and hence the multiplierless algorithm can be
implemented using shifts and additions only. The orthogonality of
the recursive kernel is well maintained through matrix factorization
to reduce the computational complexity. The inherent parallel
structure yields simpler programming and hardware implementation
and provides (3/2)N^2 log2 N - N + 1 additions and
(N^2/2) log2 N shifts, which is much less complex
than other recent multiplierless algorithms.
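The multiplierless principle can be illustrated independently of the DCT kernel: any multiplication by a known integer constant decomposes into shifts and additions (a generic sketch, not the paper's Ramanujan-number factorization):

```python
def shift_add_mul(x, c):
    """Multiply x by constant c using only shifts and adds:
    scan c bit by bit, adding x << i for every set bit i."""
    acc, i = 0, 0
    while c:
        if c & 1:
            acc += x << i  # contributes x * 2**i
        c >>= 1
        i += 1
    return acc

# e.g. x * 10 = (x << 3) + (x << 1): two shifts, one add
```

In hardware, each such decomposition costs only adders and wiring, which is why eliminating floating-point multiplications pays off so heavily.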
Abstract: The H.264/AVC standard uses intra prediction with 9
directional modes for 4x4 and 8x8 luma blocks, and 4
directional modes for 16x16 macroblocks and 8x8 chroma
blocks. This means that, for one macroblock, the encoder has to perform
736 different RDO calculations before the best RDO mode is determined.
With this multiple intra-mode prediction, intra coding in H.264/AVC
offers considerably higher coding efficiency
than other compression standards, but its computational
complexity is increased significantly. This paper presents a fast intra
prediction algorithm for H.264/AVC intra prediction based on
homogeneity information. In this study, a gradient
prediction method is used to predict homogeneous areas and a
quadratic prediction function is used to predict nonhomogeneous
areas. Based on the correlation between homogeneity and block
size, the smaller blocks are predicted by both gradient and
quadratic prediction, while the bigger blocks are predicted by gradient
prediction only. Experimental results are presented to show that the
proposed method reduces the complexity by up to 76.07% while
maintaining similar PSNR quality, with about a 1.94% bit-rate
increase on average.
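A homogeneity test of the kind described can be sketched as summing the absolute horizontal and vertical pixel differences over a block and comparing against a threshold (the threshold and block data are illustrative, not the paper's parameters):

```python
def block_gradient(block):
    """Sum of absolute horizontal + vertical differences in a 2-D block."""
    h = sum(abs(row[j + 1] - row[j])
            for row in block for j in range(len(row) - 1))
    v = sum(abs(block[i + 1][j] - block[i][j])
            for i in range(len(block) - 1) for j in range(len(block[0])))
    return h + v

def is_homogeneous(block, threshold=16):
    # Illustrative threshold; a real encoder tunes it per block size/QP.
    return block_gradient(block) < threshold

flat = [[128] * 4 for _ in range(4)]         # uniform block
edge = [[0, 0, 255, 255] for _ in range(4)]  # strong vertical edge
```

Blocks that pass the test can skip most of the 736 RDO evaluations, which is where the complexity reduction comes from.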
Abstract: In this paper, low-end Digital Signal Processors (DSPs)
are applied to accelerate integer neural networks. The use of DSPs
to accelerate neural networks has been a topic of study for some
time and has demonstrated significant performance improvements.
Recently, work has been done on integer-only neural networks, which
greatly reduce hardware requirements and thus allow for cheaper
hardware implementation. DSPs with Arithmetic Logic Units (ALUs)
that support floating- or fixed-point arithmetic are generally more
expensive than their integer-only counterparts due to increased circuit
complexity. However, if the need for floating- or fixed-point math
operations can be removed, then simpler, lower-cost DSPs can be
used. To achieve this, an integer-only neural network is created in
this paper and then accelerated using DSP instructions to
improve performance.
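An integer-only layer can be sketched as integer multiply-accumulates followed by a right-shift requantization in place of floating-point scaling (a generic quantized-inference sketch, not the paper's network; all values are made up):

```python
def int_dense(x, w, bias, shift):
    """Integer-only dense layer: int MACs, then >> shift as a
    power-of-two rescale, then an integer ReLU. No floats anywhere."""
    out = []
    for row, b in zip(w, bias):
        acc = b + sum(wi * xi for wi, xi in zip(row, x))  # int32-style MAC
        acc >>= shift             # requantize: floor-divide by 2**shift
        out.append(max(acc, 0))   # integer ReLU
    return out

x = [12, -3, 7]                  # quantized activations
w = [[4, 2, -1], [-8, 1, 3]]     # quantized weights
bias = [16, 0]
y = int_dense(x, w, bias, shift=4)
```

The shift replaces the per-layer floating-point scale factor, which is exactly the operation an integer-only ALU can execute cheaply.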
Abstract: A data warehouse (DW) is a system whose value and role lie in supporting decision-making through querying. Queries to a DW are critical with regard to their complexity and length. They often access millions of tuples and involve joins between relations and aggregations. Materialized views are able to provide better performance for DW queries. However, these views have a maintenance cost, so materialization of all views is not possible. An important challenge of the DW environment is materialized view selection, because we have to realize the trade-off between query performance and view maintenance cost. Therefore, in this paper, we introduce a new approach aimed at solving this challenge based on Two-Phase Optimization (2PO), which is a combination of Simulated Annealing (SA) and Iterative Improvement (II), with the use of a Multiple View Processing Plan (MVPP). Our experiments show that our method provides a further improvement in terms of query processing cost and view maintenance cost.
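The SA phase of such a selection can be sketched as follows: a candidate set of materialized views is perturbed by flipping one view's status, and worse states are accepted with a temperature-controlled probability (the cost model below is synthetic, not the paper's MVPP costs):

```python
import math, random

# Synthetic per-view costs: (query saving if materialized, maintenance).
VIEWS = [(50, 30), (40, 5), (25, 20), (60, 70), (35, 10)]

def cost(selected):
    """Total cost: unsaved query cost plus maintenance of chosen views."""
    base = sum(s for s, _ in VIEWS)
    saved = sum(VIEWS[i][0] for i in selected)
    maint = sum(VIEWS[i][1] for i in selected)
    return (base - saved) + maint

def simulated_annealing(temp=50.0, cooling=0.95, steps=400, seed=1):
    rng = random.Random(seed)
    state, best = set(), set()
    for _ in range(steps):
        i = rng.randrange(len(VIEWS))
        neighbor = state ^ {i}                  # flip one view's status
        delta = cost(neighbor) - cost(state)
        if delta < 0 or rng.random() < math.exp(-delta / temp):
            state = neighbor
        if cost(state) < cost(best):
            best = set(state)
        temp *= cooling
    return best, cost(best)

best, best_cost = simulated_annealing()
```

In 2PO this annealing run is followed by Iterative Improvement, a pure hill-climbing pass, to polish the final selection.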
Abstract: We have defined two suites of metrics, which cover
static and dynamic aspects of component assembly. The static
metrics measure complexity and criticality of component assembly,
wherein complexity is measured using Component Packing Density
and Component Interaction Density metrics. Further, four criticality
conditions, namely Link, Bridge, Inheritance and Size criticalities,
have been identified and quantified. The complexity and criticality
metrics are combined to form a Triangular Metric, which can be used
to classify the type and nature of applications. Dynamic metrics are
collected during the runtime of a complete application. Dynamic
metrics are useful for identifying super-components and for evaluating
the degree of utilisation of various components. In this paper, both
static and dynamic metrics are evaluated using Weyuker's set of
properties. The results show that the metrics provide a valid means of
measuring issues in component assembly. We relate our metrics suite
to McCall's Quality Model and illustrate their impact on product
quality and on the management of component-based product
development.
Abstract: This paper solves the environmental/economic dispatch
power system problem using the Non-dominated Sorting Genetic
Algorithm-II (NSGA-II) and its hybrid with a Convergence Accelerator
Operator (CAO), called the NSGA-II/CAO. These multiobjective
evolutionary algorithms were applied to the standard IEEE 30-bus
six-generator test system. Several optimization runs were carried out
on different cases of problem complexity. Different quality measures
that compare the performance of the two solution techniques were
considered. The results demonstrated that the inclusion of the CAO
in the original NSGA-II improves its convergence while preserving
the diversity properties of the solution set.
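The core operation underlying NSGA-II, extracting the non-dominated front of cost/emission trade-offs, can be sketched directly (synthetic objective pairs, both minimized; not the paper's test-system data):

```python
def dominates(a, b):
    """a dominates b if it is no worse in every objective, better in one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def nondominated_front(points):
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# (fuel cost, emission) pairs to be minimized; values illustrative.
solutions = [(600, 0.20), (620, 0.15), (610, 0.25),
             (650, 0.10), (640, 0.18)]
front = nondominated_front(solutions)
```

NSGA-II repeats this ranking layer by layer and adds crowding distance to preserve diversity; the CAO then accelerates convergence toward the front.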
Abstract: The complexity of today's software systems makes
collaborative development necessary to accomplish tasks.
Frameworks are necessary to allow developers to perform their tasks
independently yet collaboratively. Similarity detection is one of the
major issues to consider when developing such frameworks. It allows
developers to mine existing repositories when developing their own
views of a software artifact, and it is necessary for identifying the
correspondences between the views to allow merging them and
checking their consistency. Due to the importance of the
requirements specification stage in software development, this paper
proposes a framework for collaborative development of Object-
Oriented formal specifications along with a similarity detection
approach to support the creation, merging and consistency checking
of specifications. The paper also explores the impact of using
additional concepts on improving the matching results. Finally, the
proposed approach is empirically evaluated.
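Similarity detection between views often reduces, at its simplest, to set overlap of the identifiers the views contain; a minimal Jaccard sketch (the paper's matching is richer, using additional concepts, and these class/attribute names are made up):

```python
def jaccard(a, b):
    """Name-based similarity between two spec elements (0.0 .. 1.0)."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 1.0

# Identifiers extracted from two developers' views of the same class.
view1 = {"Account", "balance", "deposit", "withdraw"}
view2 = {"Account", "balance", "deposit", "close"}
sim = jaccard(view1, view2)  # 3 shared names out of 5 distinct
```

Pairs scoring above a threshold become candidate correspondences for merging and consistency checking.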
Abstract: We present a simplified equalization technique for a
π/4 differential quadrature phase shift keying (π/4-DQPSK) modulated
signal in a multipath fading environment. The proposed equalizer is
realized as a fractionally spaced adaptive decision feedback equalizer
(FS-ADFE), employing exponential step-size least mean square
(LMS) algorithm as the adaptation technique. The main advantage of
the scheme stems from the usage of exponential step-size LMS algorithm
in the equalizer, which achieves similar convergence behavior
as that of a recursive least squares (RLS) algorithm with significantly
reduced computational complexity. To investigate the finite-precision
performance of the proposed equalizer along with the π/4-DQPSK
modem, the entire system is evaluated in a 16-bit fixed-point digital
signal processor (DSP) environment. The proposed scheme is found
to be attractive even for those cases where equalization is to be
performed within a restricted number of training samples.
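The adaptation rule can be sketched as a standard LMS update whose step size decays exponentially over time, trading fast early convergence for a small steady-state error (a two-tap channel-identification sketch with illustrative parameters, not the full FS-ADFE):

```python
import random

# Identify an unknown 2-tap channel h with LMS whose step size
# decays exponentially: mu_n = mu0 * alpha**n (parameters illustrative).
h = [0.9, -0.4]
rng = random.Random(7)
w = [0.0, 0.0]
mu, alpha = 0.5, 0.999
x_prev = 0.0
for n in range(2000):
    x = rng.uniform(-1, 1)
    d = h[0] * x + h[1] * x_prev   # desired signal (channel output)
    y = w[0] * x + w[1] * x_prev   # adaptive filter output
    e = d - y
    w[0] += mu * e * x             # LMS tap updates
    w[1] += mu * e * x_prev
    mu *= alpha                    # exponential step-size decay
    x_prev = x
```

The decaying step is what lets the scheme approach RLS-like convergence without RLS's per-sample matrix updates.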
Abstract: The security of computer networks plays a strategic
role in modern computer systems. Intrusion Detection Systems (IDS)
act as the 'second line of defense' placed inside a protected
network, looking for known or potential threats in network traffic
and/or audit data recorded by hosts. We developed an Intrusion
Detection System using the LAMSTAR neural network to learn patterns
of normal and intrusive activities and to classify observed system
activities, and compared the performance of the LAMSTAR IDS with
other classification techniques using 5 classes of the KDDCup99 data.
The LAMSTAR IDS gives better performance at the cost of high
computational complexity, training time and testing time when
compared to the other classification techniques (binary tree classifier,
RBF classifier, Gaussian mixture classifier). We further reduced the
computational complexity of the LAMSTAR IDS by reducing the
dimension of the data using principal component analysis, which in
turn reduces the training and testing time with almost the same
performance.
Abstract: H.264/AVC offers a considerable improvement in coding
efficiency compared to other compression standards such as
MPEG-2, but its computational complexity is increased significantly.
In this paper, we propose selective mode decision schemes for fast
intra prediction mode selection. The objective is to reduce the
computational complexity of the H.264/AVC encoder without
significant rate-distortion performance degradation. In our proposed
schemes, the intra prediction complexity is reduced by limiting the
luma and chroma prediction modes using the directional information
of the 16×16 prediction mode. Experimental results are presented to
show that the proposed schemes reduce the complexity by up to 78%
while maintaining similar PSNR quality, with about a 1.46% bit-rate
increase on average.
Abstract: Preliminary studies of the Kuwait high-voltage
transmission system show a significant increase in the short-circuit (SC)
level at some of the grid substations and some generating stations.
This increase results from the growth in the power transmission
systems in size and complexity. New generating stations are expected
to be added to the system within the next few years. This paper
describes the analysis performed to evaluate the available and
potential solutions to control SC levels in the Kuwait power system. It
also presents a modified planning of the transmission network in
order to fulfill this task.
Abstract: This paper discusses the characteristics of the Urdu
script, Urdu Nastaleeq, and a simple but novel and robust technique
to recognize printed Urdu script without a lexicon. Urdu, belonging
to the Arabic script family, is cursive and complex in nature; the
main complexity of Urdu compound/connected text is not its
connections but the forms/shapes a character takes when it is
placed at the initial, middle or final position of a word. The character
recognition technique presented here uses the inherent
complexity of the Urdu script to solve the problem. A word is scanned
and analyzed for the level of its complexity; the point where the level
of complexity changes is marked as a character boundary, segmented
and fed to neural networks. A prototype of the system has been
tested on Urdu text and currently achieves 93.4% accuracy on
average.
Abstract: Today, modern simulation solutions in the wind turbine industry have achieved a high degree of complexity and detail in their results. Limitations appear when it is time to validate model results against measurements. Regarding model validation, it is of special interest to identify mode frequencies and to differentiate them from the different excitations. A wind turbine is a complex device, and measurements of any part of the assembly show a lot of noise. Input excitations are difficult or even impossible to measure due to the stochastic nature of the environment. Traditional techniques for frequency analysis or feature extraction are widely used to analyze wind turbine sensor signals, but have several limitations, especially for non-stationary signals (events). A new technique based on autoregressive analysis is introduced here for this specific application, and a comparison and examples related to different events in wind turbine operation are presented.
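The autoregressive idea can be sketched with a second-order Yule-Walker fit: AR coefficients are recovered from the signal's autocorrelations and encode its dominant mode frequency (a pure-Python sketch on a synthetic oscillation; the model order and data are illustrative, not the paper's method):

```python
import math

def autocorr(x, lag):
    n = len(x)
    return sum(x[t] * x[t + lag] for t in range(n - lag)) / n

def yule_walker_ar2(x):
    """Solve the order-2 Yule-Walker equations for a1, a2:
    r1 = a1*r0 + a2*r1 ;  r2 = a1*r1 + a2*r0"""
    r0, r1, r2 = (autocorr(x, k) for k in range(3))
    det = r0 * r0 - r1 * r1
    a1 = (r1 * r0 - r2 * r1) / det
    a2 = (r2 * r0 - r1 * r1) / det
    return a1, a2

# Synthetic sensor signal: a single mode at f = 0.1 cycles/sample.
f = 0.1
x = [math.cos(2 * math.pi * f * t) for t in range(400)]
a1, a2 = yule_walker_ar2(x)
# A pure cosine satisfies x[t] = 2*cos(2*pi*f)*x[t-1] - x[t-2],
# so the mode frequency can be read back from a1:
f_est = math.acos(a1 / 2) / (2 * math.pi)
```

Because the AR model is refit on short windows, it tracks frequency changes through non-stationary events, which is where fixed-window Fourier methods struggle.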
Abstract: One of the disadvantages of using OFDM is the large
peak-to-average power ratio (PAPR) of its time-domain signal. A
large-PAPR signal causes severe degradation of the bit error
rate (BER) performance due to inter-modulation noise in the nonlinear
channel. This paper proposes an improved DSI (Dummy
Sequence Insertion) method, which can achieve better PAPR and
BER performance. The feature of the proposed method is to optimize
the phase of each dummy sub-carrier so as to reduce the PAPR
by changing all predetermined phase coefficients in the
time-domain signal, which is calculated for the data sub-carriers and
dummy sub-carriers separately. To achieve better PAPR
performance, this paper also proposes to employ a time-frequency
domain swapping algorithm for fine adjustment of the phase
coefficients of the dummy sub-carriers, which has lower processing
complexity and achieves better PAPR and BER performance
than the conventional DSI method. This paper presents
various computer simulation results to verify the effectiveness of the
proposed method in comparison with the conventional methods in the
non-linear channel.
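PAPR itself is easy to state: the ratio of the peak instantaneous power of the time-domain OFDM symbol to its average power. A pure-Python sketch (naive inverse DFT over a tiny carrier count; the flipped sub-carrier stands in for dummy-phase optimization and is illustrative, not the DSI algorithm):

```python
import cmath, math

def ofdm_time_signal(symbols):
    """Naive inverse DFT mapping sub-carrier symbols to time samples."""
    n = len(symbols)
    return [sum(symbols[k] * cmath.exp(2j * math.pi * k * t / n)
                for k in range(n)) / n
            for t in range(n)]

def papr_db(x):
    powers = [abs(v) ** 2 for v in x]
    return 10 * math.log10(max(powers) / (sum(powers) / len(powers)))

# Worst case: all sub-carriers phase-aligned, so peaks add coherently.
aligned = [1 + 0j] * 8
# Perturbing even one sub-carrier's phase lowers the coherent peak.
flipped = [1 + 0j] * 8
flipped[3] = -1 + 0j

papr_aligned = papr_db(ofdm_time_signal(aligned))
papr_flipped = papr_db(ofdm_time_signal(flipped))
```

DSI-style methods search over such phase adjustments on dedicated dummy sub-carriers, leaving the data sub-carriers untouched.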
Abstract: As the electrical power industry is restructured, electrical power exchange is becoming more extensive. One of the key pieces of information used to determine how much power can be transferred through the network is known as the available transfer capability (ATC). To calculate ATC, the traditional deterministic approach is based on the severest case, but that approach involves a complex procedure. Therefore, a novel approach for ATC calculation using a cost-optimization method is proposed in this paper and compared with the well-being method and the risk-benefit method. This paper proposes the optimal transfer capability of the HVDC system between the mainland and a separated island in Korea through these three methods. These methods consider production cost, wheeling charge through the HVDC system, and outage cost with one depth (N-1 contingency).