Abstract: A vertex configuration in an orthogonal pseudo-polyhedron
(OPP) is the identity of a vertex, determined by the number of edges,
the dihedral angles, and the non-manifold properties meeting at that
vertex. There are up to sixteen vertex configurations for any OPP.
Understanding the relationships between these vertex configurations
gives insight into the structure of an OPP and helps in designing
better algorithms for many 3-dimensional geometric problems. In this
paper, the 16 vertex configurations for OPP are described first,
followed by a number of formulas that capture the relationships
between different vertex configurations in an OPP. These formulas
extend the usefulness of orthogonal polyhedra to pattern analysis in
3D digital images.
Abstract: This paper presents a distributed intrusion detection
system (IDS) based on the concept of a community of specialized
distributed agents, i.e., agents sharing the same purpose of
detecting distributed attacks. The semantics of intrusion events
occurring in a predetermined network is defined. Correlation rules
describe the process by which the proposed IDS combines captured
events that are distributed both spatially and temporally; the IDS
then tries to extract significant and broad patterns for a set of
well-known attacks. The primary goal of our work is to provide
intrusion detection and real-time prevention capability against
insider attacks in distributed and fully automated environments.
Abstract: The aim of the present study was to analyze and
distinguish playing patterns between winning and losing field hockey
teams in the Delhi 2012 tournament. The playing pattern focuses on D
penetration (right, center, left) and on linking each D penetration
to the end shot made from it. The data were recorded and analyzed
using the Sportscode Elite computer software; 12 matches from the
tournament were analyzed. Two groups of performance indicators were
used in the analysis, namely D penetration right, center, and left.
The types of shot chosen were hit, push, flick, drag, drag flick,
deflect sweep, deflect push, scoop, sweep, and reverse hit. In
distinguishing the pattern of play between winning and losing teams,
only 2 performance indicators showed highly significant differences,
from the right (Z=-2.87, p=.004, p
Abstract: A low-complexity, high-accuracy frequency offset
estimation for multi-band orthogonal frequency division multiplexing (MB-OFDM) based ultra-wideband systems is presented, accounting for the different carrier frequency offsets, channel frequency
responses, and preamble patterns in the different bands. Using a
half-cycle Constant Amplitude Zero Auto-Correlation (CAZAC) sequence as the preamble sequence, an estimator with a semi-cross
contrast scheme between two successive OFDM symbols is proposed. The CRLB and the complexity of the proposed algorithm are derived.
Compared to the reference estimators, the proposed method achieves
significantly lower complexity (about 50%) for all preamble patterns of the MB-OFDM systems, while its accuracy remains close to the CRLB.
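The estimator above relies on the constant-amplitude, zero-autocorrelation property of the CAZAC preamble. As a minimal illustration of that property only (not of the paper's half-cycle preamble or its semi-cross contrast scheme), a Zadoff-Chu sequence, one standard CAZAC construction, can be generated and checked in a few lines:

```python
import cmath

def zadoff_chu(N, u=1):
    """Zadoff-Chu sequence (a standard CAZAC construction) for even length N,
    with root index u coprime to N."""
    return [cmath.exp(-1j * cmath.pi * u * n * n / N) for n in range(N)]

def cyclic_autocorr(x, tau):
    """Cyclic autocorrelation of x at lag tau."""
    N = len(x)
    return sum(x[n] * x[(n + tau) % N].conjugate() for n in range(N))

N = 16
x = zadoff_chu(N)

# Constant amplitude: every sample lies on the unit circle.
assert all(abs(abs(s) - 1.0) < 1e-12 for s in x)

# Zero autocorrelation: the cyclic autocorrelation vanishes at every nonzero lag.
assert abs(cyclic_autocorr(x, 0) - N) < 1e-9
assert all(abs(cyclic_autocorr(x, t)) < 1e-9 for t in range(1, N))
```

It is exactly these two properties that make such sequences attractive as synchronization preambles: correlation peaks are sharp, so a frequency or timing offset shows up cleanly in the correlator output.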
Abstract: Genome profiling (GP), a genotype-based technology that exploits random PCR and temperature gradient gel electrophoresis, has been successful in the identification/classification of organisms. In this technology, spiddos (species identification dots) and the PaSS (pattern similarity score) are employed for measuring the closeness (or distance) between genomes. Based on this closeness (PaSS), phylogenetic trees of the organisms can be built. We noticed that the topology of such a tree is rather robust against the experimental fluctuation conveyed by spiddos. This fact was confirmed quantitatively in this study by computer simulation, establishing the limits of the reliability of this highly powerful methodology. As a result, we could demonstrate the effectiveness of the GP approach for the identification/classification of organisms.
Abstract: In this paper we show that adjusting the ART (active route timeout) in accordance with a static network scenario can substantially improve the performance of AODV by reducing control overhead. We explain the relationship of control overhead with network size and the request patterns of the users. Through simulation we show that making the ART proportionate to the network's static time reduces the amount of control overhead independently of network size and user request patterns.
Abstract: Iris-based biometric authentication is gaining importance
in recent times. Iris biometric processing, however, is a complex
and computationally very expensive process. In the overall processing
of iris biometrics in an iris-based biometric authentication system,
feature processing is an important task, in which we extract the
iris features that are ultimately used in matching. Since there
are a large number of iris features, and computational time increases
with the number of features, it is a challenge to develop an iris
processing system with as few features as possible without
compromising correctness.
In this paper, we address this issue and present an approach to feature
extraction and feature matching process. We apply Daubechies D4
wavelet with 4 levels to extract features from iris images. These
features are encoded with 2 bits by quantizing into 4 quantization
levels. With our proposed approach it is possible to represent an
iris template with only 304 bits, whereas existing approaches require
as many as 1024 bits. In addition, we assign different weights to
different iris regions when comparing two iris templates, which
significantly increases the accuracy. Further, we match iris templates based on
a weighted similarity measure. Experimental results on several iris
databases substantiate the efficacy of our approach.
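The 2-bit encoding and region-weighted matching described above can be sketched as follows. This is a minimal illustration with made-up quantization thresholds, region weights, and coefficient values; the Daubechies D4 decomposition itself and the paper's actual parameters are not reproduced:

```python
def quantize2bit(coeffs, thresholds=(-0.5, 0.0, 0.5)):
    """Map each wavelet coefficient to a 2-bit code (4 quantization levels).
    The thresholds here are illustrative, not the paper's values."""
    bits = []
    for c in coeffs:
        level = sum(c > t for t in thresholds)   # 0..3
        bits.extend(((level >> 1) & 1, level & 1))
    return bits

def weighted_similarity(a, b, region_weights, region_size):
    """Weighted fraction of matching bits; each region of the template
    contributes with its own weight."""
    assert len(a) == len(b)
    num = den = 0.0
    for i, (x, y) in enumerate(zip(a, b)):
        w = region_weights[i // region_size]
        num += w * (x == y)
        den += w
    return num / den

t1 = quantize2bit([-0.9, -0.2, 0.1, 0.7, 0.3, -0.6, 0.9, 0.0])
t2 = quantize2bit([-0.8, -0.1, 0.2, 0.6, 0.3, -0.7, 0.9, 0.1])
# Two regions of 8 bits each; the first weighted more heavily (illustrative).
s = weighted_similarity(t1, t2, region_weights=[2.0, 1.0], region_size=8)
assert 0.0 <= s <= 1.0
```

Because each coefficient contributes exactly 2 bits, 152 retained coefficients would give the 304-bit template mentioned above; the weighting then lets more reliable iris regions dominate the match score.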
Abstract: Although the field of parametric Pattern Recognition (PR) has been thoroughly studied for over five decades, the use of the Order Statistics (OS) of the distributions to achieve this has not been reported. The pioneering work on using OS for classification was presented in [1] for the Uniform distribution, where it was shown that optimal PR can be achieved in a counter-intuitive manner, diametrically opposed to the Bayesian paradigm, i.e., by comparing the testing sample to a few samples distant from the mean. This must be contrasted with the Bayesian paradigm in which, if we are allowed to compare the testing sample with only a single point in the feature space from each class, the optimal strategy would be to achieve this based on the (Mahalanobis) distance from the corresponding central points, for example, the means. In [2], we showed that the results could be extended for a few symmetric distributions within the exponential family. In this paper, we attempt to extend these results significantly by considering asymmetric distributions within the exponential family, for some of which even the closed form expressions of the cumulative distribution functions are not available. These distributions include the Rayleigh, Gamma and certain Beta distributions. As in [1] and [2], the new scheme, referred to as Classification by Moments of Order Statistics (CMOS), attains an accuracy very close to the optimal Bayes’ bound, as has been shown both theoretically and by rigorous experimental testing.
Abstract: The constraints imposed by non-thermal
leptogenesis on the survival of the neutrino mass models describing
the presently available neutrino mass patterns are studied
numerically. We consider the Majorana CP-violating phases coming
from the right-handed Majorana mass matrices to estimate the baryon
asymmetry of the universe for different neutrino mass models,
namely the quasi-degenerate, inverted hierarchical, and normal
hierarchical models, with tribimaximal mixings. Taking the diagonal
form of the Dirac neutrino mass matrix to be either the
charged-lepton or the up-quark mass matrix, the heavy right-handed
mass matrices are constructed from the light neutrino mass matrix.
Only the normal hierarchical model leads to predictions of the
baryon asymmetry of the universe consistent with observations in
the non-thermal leptogenesis scenario.
Abstract: The most common forensic activity is searching a hard
disk for strings of data. Nowadays, investigators and analysts
increasingly encounter large, even terabyte-sized data sets when
conducting digital investigations, so a sequential search can
take weeks to complete. There are two primary search
methods: index-based search and bitwise search. Index-based
searching is very fast once the initial indexing is done, but the
initial indexing takes a long time. In this paper, we discuss a
high-speed bitwise search model for large-scale digital forensic
investigations. We used a pattern-matching board, of the kind
generally used for network security, to search for strings and
complex regular expressions. Our results indicate that in many
cases the use of a pattern-matching board can substantially
increase the performance of digital forensic search tools.
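The trade-off between the two search methods can be illustrated in software; the hardware pattern-matching board itself is not modeled here, and `bitwise_search`, `build_index`, and the sample disk image are illustrative names only:

```python
import re

def bitwise_search(image: bytes, pattern: bytes):
    """Bitwise (linear-scan) search: every query re-scans the whole image."""
    return [m.start() for m in re.finditer(re.escape(pattern), image)]

def build_index(image: bytes, n: int = 4):
    """Index-based search: pay one expensive pass up front, after which
    fixed-size substring queries are answered from a dictionary."""
    index = {}
    for i in range(len(image) - n + 1):
        index.setdefault(image[i:i + n], []).append(i)
    return index

image = b"deadbeef...evidence...deadbeef"
assert bitwise_search(image, b"dead") == [0, 22]   # scans 30 bytes per query
idx = build_index(image)                           # one-time indexing cost
assert idx[b"dead"] == [0, 22]                     # subsequent lookups are O(1)
```

On terabyte-scale images the up-front indexing pass is what takes "a long time"; the appeal of a hardware pattern matcher is that it makes the scan-per-query approach fast enough to skip indexing entirely.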
Abstract: A fingerprint-based identification system is one of the
well-known biometric systems in the area of pattern recognition and
has long been studied for its important role in forensic science,
where it can help the government criminal justice community. In
this paper, we propose an identification framework for individuals
by means of fingerprints. Unlike most conventional fingerprint
identification frameworks, the extracted geometrical element
features (GEFs) go through a discretization process. The intention
of discretization in this study is to attain unique individual
features that reflect individual variability, in order to
discriminate one person from another. Previously, discretization
has been shown to be particularly effective for identification of
English handwriting, with an accuracy of 99.9%, and for
discrimination of twins' handwriting, with an accuracy of 98%. Due
to its high discriminative power, this method is adopted into this
framework as an independent base method to assess the accuracy of
fingerprint identification. Finally, the experimental results show
that the identification accuracy of the proposed system using
discretization is 100% for FVC2000, 93% for FVC2002, and 89.7% for
FVC2004, which is much better than the conventional, existing
fingerprint identification system (72% for FVC2000, 26% for FVC2002,
and 32.8% for FVC2004). The results indicate that the discretization
approach manages to boost classification effectively and is
therefore likely to be suitable for other biometric features besides
handwriting and fingerprints.
Abstract: The experimental results on combustion of rice husk
in a conical fluidized bed combustor (referred to as the conical FBC)
using silica sand as the bed material are presented in this paper. The
effects of excess combustion air and combustor loading as well as the
sand bed height on the combustion pattern in FBC were investigated.
Temperatures and gas concentrations (CO and NO) along the
combustor height, as well as in the flue gas downstream from the ash
collecting cyclone were measured. The results showed that the axial
temperature profiles in FBC were explicitly affected by the
combustor loading whereas the excess air and bed height were found
to have minor influences on the temperature pattern. Meanwhile, the
combustor loading and the excess air significantly affected the axial
CO and NO concentration profiles; however, these profiles were
almost independent of the bed height. The combustion and thermal
efficiencies for this FBC were quantified for different operating
conditions.
Abstract: With the popularity of multi-core and many-core architectures there is a great demand for software frameworks which can support parallel programming methodologies. In this paper we introduce an Eclipse toolkit, JConqurr, which is easy to use and provides robust support for flexible parallel programming. JConqurr is a multi-core and many-core programming toolkit for Java which is capable of supporting common parallel programming patterns, including task, data, divide-and-conquer, and pipeline parallelism. The toolkit uses an annotation and directive mechanism to convert sequential code into parallel code. In addition, we have proposed a novel mechanism to achieve parallelism using graphics processing units (GPUs). Experiments with common parallelizable algorithms have shown that our toolkit can be easily and efficiently used to convert sequential code to parallel code, and that significant performance gains can be achieved.
Abstract: The growing health hazardous impact of arsenic (As)
contamination in environment is the impetus of the present
investigation. Application of lactic acid bacteria (LAB) for the
removal of toxic and heavy metals from water has been reported.
This study was performed in order to isolate and characterize
As-resistant LAB from mud and sludge samples for use as efficient
As-uptaking probiotics. Isolation of As-resistant LAB colonies was
performed by spread plate technique using bromocresol purple
impregnated MRS (BP-MRS) agar media supplemented with As at 50
μg/ml. Isolated LAB were subjected to probiotic characterization:
acid and bile tolerance, lactic acid production, antibacterial
activity, and antibiotic tolerance assays. After As-resistance and
As-removal characterization, the LAB were identified using 16S rDNA
sequencing. A total of 103 isolates were identified as As-resistant
strains of LAB. The survival of 6 strains (As99-1, As100-2, As101-3,
As102-4, As105-7, and As112-9) was found after passing through the
sequential probiotic characterizations. The resistance pattern
showed pronounced hollow zones at As concentrations >2000 μg/ml for
the As99-1, As100-2, and As101-3 LAB strains, whereas hollow zones
appeared at ~1000 μg/ml for the remaining 3 strains. Among the 6
strains, the As uptake efficiency of As102-4
(0.006 μg/h/mg wet weight of cell) was higher (17 – 209%)
compared to the remaining LAB. 16S rDNA sequencing data of the 3
strains As99-1, As100-2, and As101-3 and of the 3 strains As102-4,
As105-7, and As112-9 clearly showed 97 to 99% (340 bp) homology to
Pediococcus dextrinicus and Pediococcus acidilactici, respectively.
Although no correlation was found between the metal resistance and
the removal efficiency of the LAB examined, the identified
high-As-removing LAB would probably be potential As-uptaking
probiotic agents. Since the present experiment was concerned only
with As removal from pure water, As removal and its mechanism under
the natural conditions of the intestinal milieu should be assessed
in future studies.
Abstract: In this paper a new method is suggested for risk
management using numerical patterns in data mining. These patterns
are designed using probability rules in decision trees and are
required to be valid, novel, useful, and understandable. Given a set
of objective functions, the system converges to a good pattern or to
better objectives. The patterns are analyzed through the produced
matrices and some results are pointed out. Using the suggested
method, the direction of the functional route in the system can be
controlled and the best planning for specific objectives can be done.
Abstract: People usually have a telephone voice, which means
they adjust their speech to fit particular situations and to blend in with
other interlocutors. The question is: Do we speak differently to
different people? This possibility has been suggested by social
psychologists within Accommodation Theory [1]. Converging toward
the speech of another person can be regarded as a polite speech
strategy while choosing a language not used by the other interlocutor
can be considered as the clearest example of speech divergence [2].
The present study sets out to investigate such processes in the course
of everyday telephone conversations. Using Joos's [3] model of
formality in spoken English, the researchers try to explore
convergence to or divergence from the addressee. The results
indicate that lexical choice, and subsequently patterns
of style, vary intriguingly in accordance with the person being
addressed.
Abstract: A novel method of learning complex fuzzy decision regions in the n-dimensional feature space is proposed. Through the fuzzy decision regions, a given pattern's class-membership value for every class is determined, instead of the conventional crisp class the pattern belongs to. The n-dimensional fuzzy decision region is approximated by a union of hyperellipsoids. By explicitly parameterizing these hyperellipsoids, the decision regions are determined by estimating the parameters of each hyperellipsoid. A Genetic Algorithm (GA) is applied to estimate the parameters of each region component. With the global optimization ability of the GA, the learned decision region can be arbitrarily complex.
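The union-of-hyperellipsoids representation can be sketched as follows. The axis-aligned parameterization, the exponential membership function, and the example numbers are illustrative assumptions, and the GA-based parameter estimation is omitted:

```python
import math

def ellipsoid_dist2(x, center, radii):
    """Normalized squared distance to an axis-aligned hyperellipsoid:
    a value <= 1 means x lies inside the ellipsoid."""
    return sum(((xi - ci) / ri) ** 2 for xi, ci, ri in zip(x, center, radii))

def membership(x, ellipsoids):
    """Fuzzy membership of x in a class whose decision region is a union of
    hyperellipsoids: take the best (max) membership over the components.
    Inside any component the membership is 1; outside, it decays with the
    ellipsoidal distance (the decay law here is an illustrative choice)."""
    return max(math.exp(-max(ellipsoid_dist2(x, c, r) - 1.0, 0.0))
               for c, r in ellipsoids)

# A class region built from two ellipsoid components in 2-D feature space.
region = [((0.0, 0.0), (1.0, 2.0)), ((5.0, 5.0), (2.0, 1.0))]
assert membership((0.0, 1.0), region) == 1.0      # inside the first component
assert membership((5.0, 5.5), region) == 1.0      # inside the second
assert membership((10.0, 10.0), region) < 0.1     # far from both
```

In this representation a GA chromosome would simply concatenate each component's center and radii (plus a rotation, in the general non-axis-aligned case), which is what makes the region parameters amenable to genetic search.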
Abstract: The indoor airflow with a mixed natural/forced convection
was numerically calculated using the laminar and turbulent
approach. The Boussinesq approximation was considered for a simplification
of the mathematical model and calculations. The results
obtained, such as mean velocity fields, were successfully compared
with experimental PIV flow visualizations. The effect of the distance
between the cooled wall and the heat exchanger on the temperature
and velocity distributions was calculated. In a room with a simple
shape, the computational code OpenFOAM demonstrated an ability to
numerically predict flow patterns. Furthermore, numerical techniques,
boundary type conditions and the computational grid quality were
examined. The choice of the k-omega turbulence model had a
significant effect on the computed temperature and velocity
distributions.
Abstract: A feature weighting and selection method is proposed
which uses the structure of a weightless neuron and exploits the
principles that govern the operation of Genetic Algorithms and
Evolution. Features are coded onto chromosomes in a novel way
which allows weighting information regarding the features to be
directly inferred from the gene values. The proposed method is
significant in that it addresses several problems concerned with
algorithms for feature selection and weighting as well as providing
significant advantages such as speed, simplicity and suitability for
real-time systems.
Abstract: The paper evaluates several hundred one-day-ahead
VaR forecasting models in the time period between the years 2004
and 2009 on data from six world stock indices - DJI, GSPC, IXIC,
FTSE, GDAXI and N225. The models describe the mean using ARMA
processes with up to two lags and the variance with one of the GARCH,
EGARCH or TARCH processes with up to two lags. The models are
estimated on the data from the in-sample period and their forecasting
accuracy is evaluated on the out-of-sample data, which are more
volatile. The main aim of the paper is to test whether a model
estimated on data with lower volatility can be used in periods with
higher volatility. The evaluation is based on the conditional coverage
test and is performed on each stock index separately. The primary
result of the paper is that the volatility is best modelled using a
GARCH process and that an ARMA process pattern cannot be found
in analyzed time series.
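As a sketch of the variance modelling described above, the GARCH(1,1) recursion and the resulting one-day-ahead VaR can be written as follows. The parameter values are illustrative rather than estimated, and the ARMA mean equation and the conditional coverage test are omitted:

```python
import math

def garch_one_step(returns, omega, alpha, beta):
    """GARCH(1,1) variance recursion,
        sigma2_t = omega + alpha * r_{t-1}^2 + beta * sigma2_{t-1},
    returning the one-step-ahead variance forecast."""
    sigma2 = omega / (1.0 - alpha - beta)   # start at the unconditional variance
    for r in returns:
        sigma2 = omega + alpha * r * r + beta * sigma2
    return sigma2

def var_forecast(returns, omega, alpha, beta, z=-1.645):
    """One-day-ahead Value-at-Risk at roughly the 95% level under normal
    innovations (zero-mean returns assumed, so the ARMA part is dropped)."""
    return z * math.sqrt(garch_one_step(returns, omega, alpha, beta))

# Illustrative (not estimated) parameters; stationarity needs alpha + beta < 1.
returns = [0.01, -0.02, 0.015, -0.005, 0.03]
v = var_forecast(returns, omega=1e-6, alpha=0.08, beta=0.9)
assert v < 0   # VaR is a loss quantile, so the forecast is negative
```

In practice omega, alpha, and beta are fitted by maximum likelihood on the in-sample period, and a forecast model "passes" only if its out-of-sample VaR violations are consistent with the nominal level under the conditional coverage test.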