Abstract: In this paper we present a novel method that reduces the computational complexity of abrupt cut detection. We propose a fast algorithm in which the similarity of frames within a defined step is evaluated instead of comparing successive frames. Simulations on a large video collection show that the proposed algorithm achieves an 80% reduction in the number of frame comparisons compared to currently used methods, without degrading shot cut detection accuracy.
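The step-based evaluation can be sketched as follows. This is an illustrative outline under assumed histogram-based frame similarity and a hypothetical threshold, not the paper's exact algorithm: a coarse pass compares frames a fixed step apart, and only intervals showing strong dissimilarity are searched pairwise for the cut position.

```python
def frame_diff(h1, h2):
    """Dissimilarity of two frames given as normalized grey-level
    histograms (L1 histogram distance: 0 = identical, 1 = disjoint)."""
    return sum(abs(a - b) for a, b in zip(h1, h2)) / 2

def detect_cuts(frames, step=5, threshold=0.5):
    """Coarse pass: compare frames `step` apart.  Only when the pair
    differs strongly is the interval searched pairwise to locate the
    exact cut, so most successive-frame comparisons are skipped."""
    cuts = []
    i = 0
    while i + step < len(frames):
        if frame_diff(frames[i], frames[i + step]) > threshold:
            for j in range(i, i + step):
                if frame_diff(frames[j], frames[j + 1]) > threshold:
                    cuts.append(j + 1)
                    break
        i += step
    return cuts

# Toy sequence with an abrupt cut at frame 10:
frames = [[1.0, 0.0]] * 10 + [[0.0, 1.0]] * 10
print(detect_cuts(frames))   # → [10]
```

Shots without a cut inside an interval cost a single comparison per `step` frames, which is where the reduction in comparisons comes from.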
Abstract: A reliability, availability and maintainability (RAM) model has been built for an acid gas removal plant for system analysis; it will play an important role in any process modifications required to achieve optimum performance. Due to the complexity of the plant, the model was based on a Reliability Block Diagram (RBD) with a Monte Carlo simulation engine. The model has been validated against actual plant data as well as local expert opinion, resulting in an acceptable simulation model. The results show that operation and maintenance can be further improved, reducing the annual production loss.
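As a rough illustration of the RBD-plus-Monte-Carlo idea (not the validated plant model), the sketch below estimates the steady-state availability of two hypothetical blocks in series, each modelled as a two-state Markov chain with assumed per-step failure and repair probabilities:

```python
import random

def simulate_series_availability(blocks, steps=100_000, seed=1):
    """Monte Carlo availability of a series RBD: the system is up only
    while every block is up.  Each block is given as a pair of
    (per-step failure probability, per-step repair probability)."""
    rng = random.Random(seed)
    up = [True] * len(blocks)
    up_steps = 0
    for _ in range(steps):
        for i, (p_fail, p_repair) in enumerate(blocks):
            if up[i]:
                if rng.random() < p_fail:
                    up[i] = False
            elif rng.random() < p_repair:
                up[i] = True
        if all(up):
            up_steps += 1
    return up_steps / steps

# Two hypothetical blocks in series; each block's steady-state
# availability is p_repair / (p_fail + p_repair) = 0.9 here, so the
# series system should sit near 0.9 * 0.9 = 0.81.
est = simulate_series_availability([(0.01, 0.09), (0.02, 0.18)])
print(round(est, 2))
```

A real plant model would add parallel/redundant blocks, repair-crew constraints and maintenance schedules on top of this skeleton.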
Abstract: In the modern era, the biggest challenge facing the software industry is the advent of new technologies. Software engineers are therefore gearing up to meet and manage change in large software systems. They also find it difficult to deal with software cognitive complexity. In the last few years many metrics have been proposed to measure the cognitive complexity of software. This paper aims at a comprehensive survey of metrics of software cognitive complexity. Some classic and efficient software cognitive complexity metrics, such as Class Complexity (CC), Weighted Class Complexity (WCC), Extended Weighted Class Complexity (EWCC), Class Complexity due to Inheritance (CCI) and Average Complexity of a program due to Inheritance (ACI), are discussed and analyzed. The comparison and the relationship of these software complexity metrics are also presented.
Abstract: In this paper, a Selective Adaptive Parallel Interference Cancellation (SA-PIC) technique is presented for the Multicarrier Direct Sequence Code Division Multiple Access (MC DS-CDMA) scheme. The motivation for using SA-PIC is that it gives high performance and, at the same time, reduces the computational complexity required to perform interference cancellation. An upper-bound expression for the bit error rate (BER) of SA-PIC under Rayleigh fading channel conditions is derived. Moreover, the implementation complexities of SA-PIC and Adaptive Parallel Interference Cancellation (APIC) are discussed and compared. The performance of SA-PIC is investigated analytically and validated via computer simulations.
Abstract: One-way functions are functions that are easy to compute but hard to invert. Their existence is an open conjecture; it would imply the existence of intractable problems (i.e., NP problems which are not in the complexity class P).
If true, the existence of one-way functions would have an impact on the theoretical framework of physics, in particular quantum mechanics. This aspect of one-way functions has never been shown before.
In the present work, we put forward the following. We can calculate the microscopic state (say, the particle spin in the z direction) of a macroscopic system (a measuring apparatus registering the particle z-spin) from the system's macroscopic state (the apparatus output); let us call this association the function F. The question is: can we compute the function F in the inverse direction? In other words, can we compute the macroscopic state of the system from its microscopic state (the preimage F^(-1))?
In the paper, we assume that the function F is a one-way function. This assumption implies that at the macroscopic level the Schrödinger equation becomes unfeasible to compute. This unfeasibility acts as a limit on the validity of the linear Schrödinger equation.
Abstract: This paper proposes an efficient lattice-reduction-aided detection (LRD) scheme to improve the detection performance of MIMO-OFDM systems. In the proposed scheme, V candidate symbols are considered at the first layer, and V probable streams are detected with the LRD scheme according to the first V detected candidate symbols. Then, the most probable stream is selected through an ML test. Since the proposed scheme can detect the initial symbol more accurately and can reduce error propagation to the remaining symbols, it shows better performance than conventional LRD at very low additional complexity.
Abstract: In general, class complexity is measured based on one of several factors, such as Lines of Code (LOC), Function Points (FP), Number of Methods (NOM), Number of Attributes (NOA) and so on. Several new techniques, methods and metrics with different factors have been developed by researchers for calculating the complexity of a class in Object Oriented (OO) software. Earlier, Arockiam et al. proposed a new complexity measure, Extended Weighted Class Complexity (EWCC), an extension of the Weighted Class Complexity proposed by Mishra et al. EWCC is the sum of the cognitive weights of the attributes and methods of the class and of its derived classes. In EWCC, the cognitive weight of each attribute is taken to be 1. The main problem with the EWCC metric is that every attribute holds the same weight, whereas in general the cognitive load in understanding different types of attributes is not the same. We therefore propose a new metric, Attribute Weighted Class Complexity (AWCC), in which cognitive weights are assigned to attributes based on the effort needed to understand their data types. Case studies and experiments show the proposed metric to be a better measure of the complexity of a class with attributes.
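A minimal sketch of the AWCC idea follows. The weight table here is hypothetical: the actual cognitive weights per data type are defined in the paper and may differ.

```python
# Hypothetical cognitive weights per attribute data type; the actual
# AWCC weight table is defined in the paper and may differ.
TYPE_WEIGHTS = {"int": 1, "float": 1, "bool": 1, "string": 1,
                "array": 2, "map": 3, "object_reference": 4}

def awcc(attribute_types, method_weights):
    """Attribute Weighted Class Complexity sketch: the sum of the
    attributes' type-based cognitive weights plus the cognitive
    weights of the class's methods."""
    attr = sum(TYPE_WEIGHTS.get(t, 1) for t in attribute_types)
    return attr + sum(method_weights)

# A class with an int counter, a map cache, and two methods whose
# cognitive weights are 2 and 3:
print(awcc(["int", "map"], [2, 3]))   # → 9
```

Under EWCC the same class would count both attributes with weight 1; AWCC's point is that the map contributes more to the cognitive load than the int.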
Abstract: An electrocardiogram (ECG) data compression algorithm is needed that reduces the amount of data to be transmitted, stored and analyzed without losing the clinical information content. A wavelet ECG data codec based on the Set Partitioning In Hierarchical Trees (SPIHT) compression algorithm is proposed in this paper. The SPIHT algorithm has achieved notable success in still image coding. We modified the algorithm for the one-dimensional (1-D) case and applied it to the compression of ECG data. This compression method achieves a small percent root-mean-square difference (PRD) and a high compression ratio with low implementation complexity. Experiments on selected records from the MIT-BIH arrhythmia database revealed that the proposed codec is significantly more efficient in compression and in computation than previously proposed ECG compression schemes. Compression ratios of up to 48:1 for ECG signals lead to acceptable results for visual inspection.
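The two quality figures quoted above can be computed as follows; this sketches only the evaluation metrics (PRD and compression ratio), not the SPIHT codec itself, and the sample values are illustrative:

```python
import math

def prd(original, reconstructed):
    """Percent root-mean-square difference between an ECG record and
    its reconstruction (lower means less distortion)."""
    num = sum((x - y) ** 2 for x, y in zip(original, reconstructed))
    den = sum(x ** 2 for x in original)
    return 100.0 * math.sqrt(num / den)

def compression_ratio(original_bits, compressed_bits):
    """E.g. 48:1 means the compressed stream is 48 times smaller."""
    return original_bits / compressed_bits

signal        = [1.0, 2.0, 3.0, 4.0]
reconstructed = [1.0, 2.1, 2.9, 4.0]
print(round(prd(signal, reconstructed), 2))     # → 2.58
print(compression_ratio(4800, 100))             # → 48.0
```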
Abstract: The mechanical design of the framed thin-film solar module and its mounting system is important for enhancing module reliability and broadening its areas of application. The stress induced by different mounting positions plays a major role in controlling the stability of the whole mechanical structure. Finite element analysis shows that, under pressure applied to the back of the module, the stress at Lc (the center point of the long frame) increases while the stresses at the Center, Corner and Sc (the center point of the short frame) decrease as the mounting position moves away from the center of the module. In addition, the stresses of both the glass and the frame decrease. Accordingly, it is safer to mount at a position away from the center of the module. The emphasis in designing the frame system of the module is on the upper support of the short frame. The strength of the overall structure and the design of the corner are also important due to the complexity of the stress in the long frame.
Abstract: Electromagnetic interference (EMI) is one of the serious problems in most electrical and electronic appliances, including fluorescent lamps. The electronic ballast used to regulate the power flow through the lamp is the major cause of EMI. The interference arises from the high-frequency switching operation of the ballast. Formerly, some EMI mitigation techniques were in practice, but they were not satisfactory because of hardware complexity in the circuit design, increased parasitic components, power consumption and so on. Most researchers have focused only on EMI mitigation without considering other constraints such as cost and effective operation of the equipment. In this paper, we propose a technique for EMI mitigation in fluorescent lamps by integrating Frequency Modulation and Evolutionary Programming. With the Frequency Modulation technique, switching at a single central frequency is spread over a range of frequencies, so the power is distributed throughout that range, leading to EMI mitigation. However, to meet the operating frequency of the ballast and the operating power of the fluorescent lamp, an optimal modulation index is necessary for Frequency Modulation. The optimal modulation index is determined using Evolutionary Programming. The proposed technique thereby mitigates the EMI to a satisfactory level without disturbing the operation of the fluorescent lamp.
Abstract: Facial expression analysis is rapidly becoming an area of intense interest in the computer science and human-computer interaction design communities. The most expressive way humans display emotions is through facial expressions. In this paper we present a method to analyze facial expressions in images by applying the Gabor wavelet transform (GWT) and the Discrete Cosine Transform (DCT) to face images. A Radial Basis Function (RBF) network is used to classify the facial expressions. As a preprocessing stage, the images are enhanced to sharpen edge details, and non-uniform down-sampling is performed to reduce the computational complexity and processing time. Our method works reliably even with faces that carry heavy expressions.
Abstract: In modern literary criticism the problem of genre is a subject of debate. Genre is a phenomenon located at the intersection of the synchronic and diachronic processes in the development of literature, and this accounts for the complexity of resolving the problem. Genre marks the point of contact between individual literary works and the literary process.
Abstract: Software complexity metrics are used to predict critical information about the reliability and maintainability of software systems. Object-oriented software development requires a different approach to software complexity metrics. Object-oriented software metrics can be broadly classified into static and dynamic metrics. Static metrics give information at the code level, whereas dynamic metrics provide information about actual runtime behavior. In this paper we discuss various complexity metrics and compare static and dynamic complexity.
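As an illustration of a static metric, the sketch below approximates McCabe's cyclomatic complexity by counting decision points in parsed Python source; the set of node types counted here is a simplification of the full definition:

```python
import ast

def cyclomatic_complexity(source):
    """Static-metric sketch: 1 + the number of decision points
    (branches, loops, boolean operators, exception handlers)
    found in the parsed source."""
    decision_nodes = (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.BoolOp)
    tree = ast.parse(source)
    return 1 + sum(isinstance(n, decision_nodes) for n in ast.walk(tree))

src = """
def clamp(x, lo, hi):
    if x < lo:
        return lo
    if x > hi:
        return hi
    return x
"""
print(cyclomatic_complexity(src))   # → 3
```

A dynamic metric, by contrast, would instrument the program and count which of these paths are actually exercised at runtime.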
Abstract: The MATCH project [1] entails the development of an automatic diagnosis system that aims to support the treatment of colon cancer by discovering mutations that occur in tumour suppressor genes (TSGs) and contribute to the development of cancerous tumours. The system is based on a) colon cancer clinical data and b) biological information derived by data mining techniques from genomic and proteomic sources. The core mining module will consist of popular, well-tested hybrid feature extraction methods and new combined algorithms designed especially for the project. Elements of rough sets, evolutionary computing, cluster analysis, self-organizing maps and association rules will be used to discover the associations between genes and their influence on tumours [2]-[11].
The methods used to process the data have to address its high complexity, its potential inconsistency and the problem of missing values. They must integrate all the information needed to answer the expert's question. For this purpose, the system has to learn from data, or allow a domain specialist to interactively specify, the part of the knowledge structure it needs to answer a given query. The program should also take into account the importance or rank of the particular parts of the data it analyses, and adjust the algorithms used accordingly.
Abstract: The paper provides an in-depth tutorial on the mathematical construction of maximal-length sequences (m-sequences) via primitive polynomials and on how to map these constructions onto shift-register implementations. It is equally important to check whether a polynomial is primitive so as to obtain proper m-sequences. A fast method to identify primitive polynomials over binary fields is proposed whose complexity is considerably lower than that of the standard procedures for the same purpose.
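One direct criterion (a brute-force check, not the paper's fast method): a degree-n binary polynomial with nonzero constant term is primitive iff the multiplicative order of x modulo the polynomial is 2^n - 1, which is exactly the period of the shift-register sequence it defines.

```python
def order_of_x(poly, n):
    """Multiplicative order of x modulo a degree-n binary polynomial,
    given as a bit mask (e.g. 0b10011 is x^4 + x + 1).  The polynomial
    must have a nonzero constant term, otherwise x is not invertible."""
    s, k = 1, 0
    while True:
        s <<= 1              # multiply the residue by x
        if (s >> n) & 1:
            s ^= poly        # reduce modulo the polynomial
        k += 1
        if s == 1:
            return k

def is_primitive(poly, n):
    """Primitive iff x generates all 2^n - 1 nonzero residues, i.e. the
    shift register it defines produces a maximal-length sequence."""
    return order_of_x(poly, n) == (1 << n) - 1

print(is_primitive(0b10011, 4))   # x^4 + x + 1            → True
print(is_primitive(0b11111, 4))   # x^4+x^3+x^2+x+1, order 5 → False
```

This test costs up to 2^n - 1 iterations, which is exactly the exponential cost a fast identification method seeks to avoid.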
Abstract: The paper deals with the estimation of the amplitudes and phases of an analogue multi-harmonic band-limited signal from irregularly spaced sampling values. To this end, assuming the signal's fundamental frequency is known in advance (i.e., estimated at an independent stage), a complexity-reduced algorithm for signal reconstruction in the time domain is proposed. The reduction in complexity is achieved through completely new analytical and summarized expressions that enable a quick estimation at a low numerical error. The proposed algorithm for the calculation of the unknown parameters requires O((2M+1)^2) flops, while the straightforward solution of the obtained equations takes O((2M+1)^3) flops, where M is the number of harmonic components. The algorithm applies to signal reconstruction, spectral estimation, system identification and other important signal processing problems. The proposed processing method can be used for precise RMS (power and energy) measurements of a periodic signal based on the presented signal reconstruction. The paper investigates the errors related to the signal parameter estimation, and computer simulations demonstrate the accuracy of the algorithms.
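For reference, the straightforward route the abstract compares against can be sketched as a least-squares fit of the harmonic model to the irregular samples. This baseline omits the DC term (so it has 2M rather than 2M+1 parameters), uses none of the paper's complexity-reducing expressions, and the sample times below are illustrative:

```python
import math

def solve(A, b):
    """Gaussian elimination with partial pivoting for small dense systems."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def estimate_harmonics(times, samples, w0, M):
    """Least-squares fit of sum_m a_m cos(m w0 t) + b_m sin(m w0 t),
    m = 1..M, to irregularly spaced samples; w0 is the (known)
    fundamental frequency.  Returns [a_1, b_1, ..., a_M, b_M]."""
    def basis(t):
        row = []
        for m in range(1, M + 1):
            row += [math.cos(m * w0 * t), math.sin(m * w0 * t)]
        return row
    G = [basis(t) for t in times]
    p = 2 * M
    # normal equations (G^T G) x = G^T y, solved by dense elimination
    GtG = [[sum(g[i] * g[j] for g in G) for j in range(p)] for i in range(p)]
    Gty = [sum(g[i] * y for g, y in zip(G, samples)) for i in range(p)]
    return solve(GtG, Gty)

# Irregular sample times over one period of a test signal
# cos(w0 t) + 0.5 sin(2 w0 t), with w0 = 2*pi:
w0 = 2 * math.pi
times = [0.0, 0.07, 0.13, 0.21, 0.33, 0.41, 0.56, 0.68, 0.79, 0.91]
samples = [math.cos(w0 * t) + 0.5 * math.sin(2 * w0 * t) for t in times]
coeffs = estimate_harmonics(times, samples, w0, 2)  # ≈ [1.0, 0.0, 0.0, 0.5]
```

The elimination step is the cubic-cost part the paper's summarized expressions replace with a quadratic-cost computation.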
Abstract: This paper describes an optimal approach for feature subset selection to classify leaves, based on a Genetic Algorithm (GA) and Kernel-based Principal Component Analysis (KPCA). Due to the high complexity of selecting optimal features, classification has become a critical task in analysing leaf image data. Initially, shape, texture and colour features are extracted from the leaf images. These extracted features are optimized through the separate application of GA and KPCA. The approach then performs an intersection operation over the subsets obtained from the optimization process. Finally, the common matching subset is used to train a Support Vector Machine (SVM). Our experimental results show that applying GA and KPCA for feature subset selection with an SVM classifier is computationally effective and improves the accuracy of the classifier.
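The intersection step can be sketched as follows; the feature names are purely illustrative:

```python
def common_subset(ga_selected, kpca_selected):
    """Features retained by both optimizers, keeping the GA ordering;
    only this common subset is forwarded to train the SVM."""
    kpca = set(kpca_selected)
    return [f for f in ga_selected if f in kpca]

ga_features   = ["area", "perimeter", "hue_mean", "texture_energy", "solidity"]
kpca_features = ["hue_mean", "aspect_ratio", "texture_energy", "area"]
print(common_subset(ga_features, kpca_features))
# → ['area', 'hue_mean', 'texture_energy']
```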
Abstract: The group mutual exclusion (GME) problem is a variant of the mutual exclusion problem. In the present paper a token-based group mutual exclusion algorithm, capable of handling transient faults, is proposed. The algorithm uses the concept of dynamic request sets. A time-out mechanism is used to detect token loss, and a distributed scheme is used to regenerate the token. The worst-case message complexity of the algorithm is n+1. The maximum concurrency and the forum-switch complexity of the algorithm are n and min(n, m) respectively, where n is the number of processes and m is the number of groups. The algorithm also satisfies another desirable property called smooth admission. The scheme can also be adapted to handle the extended group mutual exclusion problem.
Abstract: An optimal solution for a large number of constraint satisfaction problems can be found using the technique of substitution and elimination of variables, analogous to the technique used to solve systems of equations. A decision function f(A) = max(A^2) is used to determine which variables to eliminate. The algorithm can be expressed in six lines and is remarkable in both its simplicity and its ability to find an optimal solution. However, it is inefficient in that it needs to square the updated A matrix after each variable elimination. To overcome this inefficiency, the algorithm is analyzed and it is shown that the A matrix only needs to be squared once, at the first step of the algorithm, and then incrementally updated in subsequent steps, resulting in a significant improvement and an overall algorithm complexity of O(n^3).
Abstract: Selective harmonic elimination pulse-width modulation (SHE-PWM) techniques offer tight control of the harmonic spectrum of a given voltage waveform generated by a power electronic converter, along with a low number of switching transitions. Traditional optimization methods suffer from drawbacks such as prolonged and tedious computation and convergence to local optima; thus, the more harmonics there are to be eliminated, the larger the computational complexity and time. This paper presents a novel method for output voltage harmonic elimination and voltage control of PWM AC/AC voltage converters using a hybrid Real-Coded Genetic Algorithm-Pattern Search (RGA-PS) method. RGA is the primary optimizer, exploiting its global search capabilities; PS is then employed to fine-tune the best solution provided by RGA in each evolution. The proposed method enables linear control of the fundamental component of the output voltage and complete elimination of its harmonic content up to a specified order. Theoretical studies have been carried out to show the effectiveness and robustness of the proposed method of selective harmonic elimination. Theoretical results are validated through simulation studies using the PSIM software package.