Abstract: In this paper, an efficient local appearance feature
extraction method based on the multi-resolution Curvelet transform is
proposed in order to further enhance the performance of the well-known
Linear Discriminant Analysis (LDA) method when applied
to face recognition. Each face is described by a subset of band-filtered
images containing block-based Curvelet coefficients. These
coefficients characterize the face texture, and a set of simple statistical
measures allows us to form compact and meaningful feature vectors.
The proposed method is compared with related feature extraction
methods such as Principal Component Analysis (PCA), Linear
Discriminant Analysis (LDA), and Independent Component
Analysis (ICA). Two other multi-resolution transforms, the Wavelet
(DWT) and the Contourlet, were also compared against the Block-Based
Curvelet-LDA algorithm. Experimental results on the ORL, YALE and
FERET face databases show that the proposed method provides
a better representation of the class information and obtains much
higher recognition accuracies.
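The block-based statistical feature extraction described above can be sketched as follows. This is a minimal illustration with NumPy, not the authors' implementation: the Curvelet transform itself is assumed to have been applied already, so a random array stands in for one subband of coefficients, and the block size and the choice of statistics (mean, standard deviation, energy) are assumptions.

```python
import numpy as np

def block_stats(subband, block=8):
    """Compute mean, standard deviation, and energy over each
    block x block tile of a subband coefficient array, and
    concatenate them into one feature vector."""
    h, w = subband.shape
    feats = []
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            tile = subband[i:i + block, j:j + block]
            feats.extend([tile.mean(), tile.std(), np.sum(tile ** 2)])
    return np.array(feats)

# A random array standing in for one band of Curvelet coefficients.
rng = np.random.default_rng(0)
coeffs = rng.standard_normal((32, 32))
vec = block_stats(coeffs)   # 16 blocks of 8x8, 3 statistics each
print(vec.shape)            # (48,)
```

Vectors like this, computed per subband and concatenated, would then be fed to LDA for class-discriminant projection.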
Abstract: Pipe inspection is a difficult detection task. Most
applications rely mainly on the manual recognition of defective
areas, with detection carried out by an engineer. Automating this
task therefore becomes necessary in order to avoid the cost incurred
by such a manual process. An automated monitoring method to
obtain a complete picture of the sewer condition is proposed in this
work. The focus of the research is the automated identification and
classification of discontinuities in the internal surface of the pipe.
The methodology consists of several processing stages, including
segmentation of the image into potential defect regions and extraction
of geometrical characteristic features. Automatic recognition and
classification of pipe defects are carried out by means of an artificial
neural network (ANN) based on Radial Basis Functions (RBF).
Experiments in a realistic environment have been conducted and the
results are presented.
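An RBF network of the kind referenced above can be sketched in a few lines: a layer of Gaussian radial basis activations followed by linear output weights fitted by least squares. This is a generic illustration, not the authors' classifier; the toy data, the hand-picked centers, and the gamma value are all assumptions.

```python
import numpy as np

def rbf_features(X, centers, gamma=1.0):
    # Gaussian radial basis activations for every sample/center pair.
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-gamma * d2)

def train_rbf(X, y, centers, gamma=1.0):
    # Output weights fitted by least squares on the RBF activations.
    Phi = rbf_features(X, centers, gamma)
    W, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return W

def predict(X, centers, W, gamma=1.0):
    return rbf_features(X, centers, gamma) @ W

# Toy two-class problem standing in for "defect" vs "no defect" regions.
rng = np.random.default_rng(1)
c0, c1 = np.array([0.0, 0.0]), np.array([3.0, 3.0])
X = np.vstack([c0 + 0.3 * rng.standard_normal((20, 2)),
               c1 + 0.3 * rng.standard_normal((20, 2))])
y = np.array([0.0] * 20 + [1.0] * 20)
centers = np.vstack([c0, c1])   # centers chosen by hand for this sketch
W = train_rbf(X, y, centers)
pred = (predict(X, centers, W) > 0.5).astype(int)
print((pred == y).mean())       # training accuracy
```

In practice the centers would be chosen by clustering the training data rather than by hand.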
Abstract: The design of a pattern classifier includes an attempt
to select, among a set of possible features, a minimum subset of
weakly correlated features that better discriminate the pattern classes.
This is usually a difficult task in practice, normally requiring the
application of heuristic knowledge about the specific problem
domain. The selection and quality of the features representing each
pattern have a considerable bearing on the success of subsequent
pattern classification. Feature extraction is the process of deriving
new features from the original features in order to reduce the cost of
feature measurement, increase classifier efficiency, and allow higher
classification accuracy. Many current feature extraction techniques
involve linear transformations of the original pattern vectors to new
vectors of lower dimensionality. While this is useful for data
visualization and increasing classification efficiency, it does not
necessarily reduce the number of features that must be measured
since each new feature may be a linear combination of all of the
features in the original pattern vector. In this paper a new approach
to feature extraction is presented in which feature selection, feature
extraction, and classifier training are performed simultaneously using
a genetic algorithm. In this approach each feature value is first
normalized by a linear equation and then scaled by the associated weight
prior to training, testing, and classification. A k-nearest-neighbor (kNN)
classifier is used to evaluate each set of feature weights. The genetic
algorithm optimizes a vector of feature weights, which are used to scale
the individual features in the original pattern vectors in either a linear
or a nonlinear fashion. With this approach, the number of features used
in classification can be substantially reduced.
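The weight-evaluation loop described above can be sketched as follows: the fitness of each candidate weight vector is the leave-one-out accuracy of a kNN classifier on the weight-scaled features. This is a deliberately simplified evolutionary loop (truncation selection plus Gaussian mutation, no crossover), not the authors' genetic algorithm; the population size, mutation scale, and toy data are assumptions.

```python
import numpy as np

def knn_accuracy(X, y, w, k=3):
    # Leave-one-out accuracy of a kNN classifier on weight-scaled features.
    Xw = X * w
    d = np.linalg.norm(Xw[:, None, :] - Xw[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)       # exclude each sample from its own vote
    idx = np.argsort(d, axis=1)[:, :k]
    pred = np.array([np.bincount(v).argmax() for v in y[idx]])
    return (pred == y).mean()

def evolve_weights(X, y, pop=20, gens=30, seed=0):
    rng = np.random.default_rng(seed)
    P = rng.uniform(0, 1, size=(pop, X.shape[1]))
    for _ in range(gens):
        fit = np.array([knn_accuracy(X, y, w) for w in P])
        best = P[np.argsort(fit)[-pop // 2:]]                    # keep fitter half
        children = best + 0.1 * rng.standard_normal(best.shape)  # mutate
        P = np.vstack([best, np.clip(children, 0, None)])
    fit = np.array([knn_accuracy(X, y, w) for w in P])
    return P[fit.argmax()]

# Toy data: feature 0 is informative, feature 1 is pure noise.
rng = np.random.default_rng(2)
X = np.c_[np.r_[rng.normal(0, 1, 30), rng.normal(4, 1, 30)],
          rng.normal(0, 5, 60)]
y = np.array([0] * 30 + [1] * 30)
w = evolve_weights(X, y)
print(w)   # the noisy feature tends to receive a smaller relative weight
```

Driving a feature's weight toward zero effectively deselects it, which is how selection and extraction happen in one pass.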
Abstract: A new approach, based on the consideration that electroencephalogram (EEG) signals are chaotic signals, is presented for the automated diagnosis of electroencephalographic changes. This consideration was tested successfully using nonlinear dynamics tools such as the computation of Lyapunov exponents. This paper presents the use of statistics over the set of Lyapunov exponents in order to reduce the dimensionality of the extracted feature vectors. Since classification is more accurate when the pattern is simplified through representation by important features, feature extraction and selection play an important role in classifying systems such as neural networks. Multilayer perceptron neural network (MLPNN) architectures were formulated and used as the basis for the detection of electroencephalographic changes. Three types of EEG signals (EEG signals recorded from healthy volunteers with eyes open, from epilepsy patients in the epileptogenic zone during a seizure-free interval, and from epilepsy patients during epileptic seizures) were classified. The selected Lyapunov exponents of the EEG signals were used as inputs to the MLPNN trained with the Levenberg-Marquardt algorithm. The classification results confirmed that the proposed MLPNN has potential in detecting electroencephalographic changes.
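The dimensionality reduction step described above — summary statistics over a set of Lyapunov exponents — is simple to sketch. The particular statistics (max, min, mean, std) and the example exponent values are assumptions for illustration only:

```python
import numpy as np

def exponent_features(lyapunov_spectrum):
    """Reduce a set of Lyapunov exponents to a small statistical
    feature vector (max, min, mean, std) suitable as MLPNN input."""
    s = np.asarray(lyapunov_spectrum, dtype=float)
    return np.array([s.max(), s.min(), s.mean(), s.std()])

# Hypothetical spectra for two EEG segments (illustrative values only).
healthy = exponent_features([0.41, 0.02, -0.18, -0.55])
seizure = exponent_features([0.83, 0.10, -0.05, -0.30])
print(healthy.shape)  # (4,)
```

Whatever the length of the original exponent set, the classifier always sees a fixed-length vector, which is the point of the reduction.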
Abstract: The main features of NPP-2006/MIR-1200 design are
described. Estimation of individual doses for population under
normal operation and accident conditions is performed for
Leningradskaya NPP – 2 as an example. The radiation effect on
population and environment does not exceed the established
normative limit and is as low as reasonably achievable. NPP-
2006/MIR-1200 design meets all Russian and international
requirements for power units under construction.
Abstract: The SOM has several beneficial features that make
it a useful method for data mining. One of the most important
is its ability to preserve topology in the projection.
Several measures can be used to quantify the goodness
of the map in order to obtain the optimal projection, including the
average quantization error and various topological errors. Many
studies have examined how topology preservation should be
measured. One option is the topographic error, which
considers the ratio of data vectors for which the first and second
best matching units (BMUs) are not adjacent. In this work we present
a study of the behaviour of the topographic error in different kinds of
maps. We have found that this error penalizes rectangular maps, and we
have studied the reasons why this happens. Finally, we suggest a new
topological error that remedies this deficiency of the topographic error.
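The topographic error defined above is directly computable: it is the fraction of data vectors whose two best matching units are not neighbours on the map grid. A minimal NumPy sketch, assuming the codebook and its grid coordinates are given and taking "adjacent" to mean the 8-neighbourhood (Chebyshev distance 1); other neighbourhood definitions are possible:

```python
import numpy as np

def topographic_error(data, codebook, grid):
    """Fraction of data vectors whose first- and second-best
    matching units are not adjacent on the map grid.

    codebook: (n_units, dim) weight vectors
    grid:     (n_units, 2) integer map coordinates of each unit
    """
    d = np.linalg.norm(data[:, None, :] - codebook[None, :, :], axis=2)
    order = np.argsort(d, axis=1)
    bmu1, bmu2 = order[:, 0], order[:, 1]
    # Adjacent = Chebyshev distance 1 on the grid (8-neighbourhood).
    gdist = np.abs(grid[bmu1] - grid[bmu2]).max(axis=1)
    return (gdist > 1).mean()

# A tiny 2x2 map whose units coincide with the data clusters.
grid = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
codebook = grid.astype(float)
data = codebook + 0.05 * np.random.default_rng(3).standard_normal((4, 2))
print(topographic_error(data, codebook, grid))  # 0.0 for this well-ordered map
```

On a well-ordered map nearby codebook vectors sit on nearby grid nodes, so the error is near zero; folding of the map raises it.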
Abstract: The effects of dynamic subgrid scale (SGS) models are
investigated in variational multiscale (VMS) LES simulations of bluff
body flows. The spatial discretization is based on a mixed finite
element/finite volume formulation on unstructured grids. In the VMS
approach used in this work, the separation between the largest and the
smallest resolved scales is obtained through a variational projection
operator and a finite volume cell agglomeration. The dynamic version
of Smagorinsky and WALE SGS models are used to account for
the effects of the unresolved scales. In the VMS approach, these
effects are only modeled in the smallest resolved scales. The dynamic
VMS-LES approach is applied to the simulation of the flow around a
circular cylinder at Reynolds numbers 3900 and 20000 and to the flow
around a square cylinder at Reynolds numbers 22000 and 175000. It
is observed, as in previous studies, that the dynamic SGS procedure
has a smaller impact on the results within the VMS approach than in
classical LES, but improvements are demonstrated for important
features such as the recirculation region of the flow. The global
prediction is improved at a small extra computational cost.
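For reference, the Smagorinsky SGS model mentioned above computes an eddy viscosity from the resolved strain rate; this is the standard textbook form, not a formula taken from the paper itself:

```latex
\nu_t = (C_s \Delta)^2 \,\lvert \bar{S} \rvert, \qquad
\lvert \bar{S} \rvert = \sqrt{2\,\bar{S}_{ij}\bar{S}_{ij}}, \qquad
\bar{S}_{ij} = \tfrac{1}{2}\left(\partial_j \bar{u}_i + \partial_i \bar{u}_j\right)
```

In the dynamic version the coefficient $C_s$ is not fixed but computed locally from the resolved field via a test filter (the Germano identity); in the VMS setting the resulting viscosity acts only on the smallest resolved scales.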
Abstract: This paper makes an attempt to solve the problem of
searching for and retrieving similar MRI images via Internet services,
using morphological features derived from the original image. The
study aims to provide an additional tool alongside existing search and
retrieval methods. Until now, the main search mechanism has been
syntactic, based on keywords. The proposed technique aims to serve
the new requirements of libraries, one of which is the development of
computational tools for the control and preservation of the intellectual
property of digital objects, and especially of digital images. For this
purpose, this paper proposes the use of a serial number extracted by a
previously tested semantic-properties method. This method, centered
on the multiple layers of a set of arithmetic points, assures two
properties: the uniqueness of the final extracted number and the
semantic dependence of this number on the image used as the
method's input. The major advantage of this method is that it can
control the authentication of a published image, or its partial
modification, to a reliable degree. It also improves on the well-known
hash functions used by digital signature schemes, producing
alphanumeric strings both for authentication checking and for
measuring the degree of similarity between an unknown image and an
original image.
Abstract: Speckle noise affects all coherent imaging systems
including medical ultrasound. In medical images, noise suppression
is a particularly delicate and difficult task. A tradeoff between noise
reduction and the preservation of actual image features has to be made
in a way that enhances the diagnostically relevant image content.
Even though wavelets have been extensively used for denoising
speckle images, we have found that denoising using contourlets gives
much better performance in terms of SNR, PSNR, MSE, variance and
correlation coefficient. The objective of this paper is to determine the
number of levels of Laplacian pyramidal decomposition, the number
of directional decompositions to perform at each pyramidal level, and
the thresholding schemes that yield optimal despeckling of medical
ultrasound images in particular. In the proposed method, the
log-transformed original ultrasound image is subjected to the contourlet
transform to obtain contourlet coefficients. The transformed
image is then denoised by applying thresholding techniques to the
individual bandpass subbands using a Bayes shrinkage rule. We
quantify the achieved performance improvement.
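The Bayes shrinkage rule referenced above (BayesShrink) sets a per-subband threshold T = σ_n²/σ_x, where σ_x is the signal standard deviation estimated from the observed subband variance. A minimal sketch, with a sparse random array standing in for one bandpass subband and the noise level assumed known (in practice it is estimated from the finest subband via the median absolute deviation):

```python
import numpy as np

def bayes_shrink_threshold(subband, sigma_n):
    """BayesShrink rule: T = sigma_n^2 / sigma_x, where sigma_x is the
    signal standard deviation estimated from the observed subband."""
    var_y = np.mean(subband ** 2)
    sigma_x = np.sqrt(max(var_y - sigma_n ** 2, 0.0))
    return sigma_n ** 2 / sigma_x if sigma_x > 0 else np.abs(subband).max()

def soft_threshold(c, t):
    # Shrink coefficients toward zero by t; kill those below t.
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

# Sparse stand-in for one bandpass subband: a few large coefficients
# (edges) plus unit-variance noise everywhere.
rng = np.random.default_rng(4)
clean = np.zeros(1000)
clean[:50] = 20.0 * rng.standard_normal(50)
noisy = clean + rng.standard_normal(1000)
sigma_n = 1.0  # in practice: median(|finest subband|) / 0.6745
t = bayes_shrink_threshold(noisy, sigma_n)
denoised = soft_threshold(noisy, t)
print(np.mean((denoised - clean) ** 2) < np.mean((noisy - clean) ** 2))  # True
```

In the full pipeline this thresholding is applied to each contourlet bandpass subband of the log-transformed image before the inverse transform and exponentiation.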
Abstract: Frequent patterns are patterns, such as sets of features or items, that appear in data frequently. Finding such frequent patterns has become an important data mining task because it reveals associations, correlations, and many other interesting relationships hidden in a dataset. Most of the proposed frequent pattern mining algorithms have been implemented in imperative programming languages such as C, C++, or Java. The imperative paradigm is significantly inefficient when the itemset is large and the frequent patterns are long. We suggest a high-level declarative style of programming using a functional language. Our supposition is that the problem of frequent pattern discovery can be efficiently and concisely implemented via a functional paradigm, since pattern matching is a fundamental feature supported by most functional languages. Our frequent pattern mining implementation in the Haskell language confirms our hypothesis about the conciseness of the program. Performance studies on speed and memory usage support our intuition about the efficiency of functional languages.
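To make the mining task concrete, here is a minimal levelwise (Apriori-style) frequent itemset miner. It is written in Python for illustration, not the authors' Haskell implementation, and omits the usual candidate-pruning optimizations:

```python
from itertools import combinations

def frequent_itemsets(transactions, min_support):
    """Levelwise (Apriori-style) frequent itemset mining.
    Returns {itemset: support_count} for all itemsets whose support
    count is at least min_support."""
    transactions = [frozenset(t) for t in transactions]
    # Level 1: count single items.
    counts = {}
    for t in transactions:
        for item in t:
            s = frozenset([item])
            counts[s] = counts.get(s, 0) + 1
    current = {s: c for s, c in counts.items() if c >= min_support}
    result = dict(current)
    k = 2
    while current:
        # Candidates: k-combinations of items surviving the previous level.
        items = sorted({i for s in current for i in s})
        candidates = [frozenset(c) for c in combinations(items, k)]
        counts = {c: sum(1 for t in transactions if c <= t) for c in candidates}
        current = {s: c for s, c in counts.items() if c >= min_support}
        result.update(current)
        k += 1
    return result

db = [{"a", "b", "c"}, {"a", "b"}, {"a", "c"}, {"b", "c"}, {"a", "b", "c"}]
freq = frequent_itemsets(db, min_support=3)
print(freq[frozenset({"a", "b"})])  # 3
```

The generate-count-filter structure of each level is exactly the part that maps naturally onto functional pattern matching and list comprehension.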
Abstract: Theory of Constraints has been emerging as an
important tool for optimization of manufacturing/service systems.
Goldratt in his first book “ The Goal " gave the introduction on
Theory of Constraints and its applications in a factory scenario. A
large number of production managers around the globe read this book
but only a few could implement it in their plants because the book did
not explain the steps to implement TOC in the factory. To overcome
these limitations, Goldratt wrote this book to explain TOC, DBR and
the method to implement it. In this paper, an attempt has been made
to summarize the salient features of TOC and DBR listed in the book
and the correct approach to implement TOC in a factory setting. The
simulator available along with the book was actually used by the
authors and the claim of Goldratt regarding the use of DBR and
Buffer management to ease the work of production managers was
tested and was found to be correct.
Abstract: In this paper, we focus on the fusion of images from
different sources using multiresolution wavelet transforms. Based on
a review of popular image fusion techniques used in data analysis,
different pixel- and energy-based methods are evaluated
experimentally. A novel architecture with a hybrid algorithm is
proposed, which applies a pixel-based maximum-selection rule to the
low-frequency approximations and filter-mask-based fusion to the
high-frequency details of the wavelet decomposition. The key feature
of the hybrid architecture is the combination of the advantages of
pixel- and region-based fusion in a single image, which can help the
development of sophisticated algorithms enhancing the edges and
structural details. A Graphical
User Interface is developed for image fusion to make the research
outcomes available to the end user. To utilize GUI capabilities for
medical, industrial and commercial activities without a MATLAB
installation, a standalone executable application is also developed
using the MATLAB Compiler Runtime.
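The hybrid fusion rule described above can be sketched on a single decomposition level: pixel-wise maximum selection on the low-frequency approximations, and local-energy (filter-mask) selection on the high-frequency details. This NumPy sketch uses random arrays in place of actual wavelet subbands, and the 3x3 energy window is an assumption:

```python
import numpy as np

def fuse_approx(a1, a2):
    # Pixel-based maximum-selection rule on low-frequency approximations.
    return np.where(np.abs(a1) >= np.abs(a2), a1, a2)

def fuse_detail(d1, d2, win=3):
    # Filter-mask (local-energy) based selection on high-frequency details.
    pad = win // 2
    def local_energy(d):
        p = np.pad(d ** 2, pad, mode="edge")
        # Naive windowed sum; a real implementation would use convolution.
        return np.array([[p[i:i + win, j:j + win].sum()
                          for j in range(d.shape[1])]
                         for i in range(d.shape[0])])
    return np.where(local_energy(d1) >= local_energy(d2), d1, d2)

rng = np.random.default_rng(5)
a1, a2 = rng.standard_normal((2, 8, 8))  # stand-ins for approximations
d1, d2 = rng.standard_normal((2, 8, 8))  # stand-ins for detail subbands
fused_a = fuse_approx(a1, a2)
fused_d = fuse_detail(d1, d2)
print(fused_a.shape, fused_d.shape)
```

In the full pipeline the source images are wavelet-decomposed, each subband pair is fused with the matching rule, and the fused subbands are inverse-transformed into the final image.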
Abstract: An application framework provides a reusable design
and implementation for a family of software systems. Frameworks
are introduced to reduce the cost of a product line (i.e., a family of
products that share common features). Software testing is a
time-consuming and costly ongoing activity during the application
software development process. Generating reusable test cases for the
framework applications during the framework development stage,
and providing and using the test cases to test part of the framework
application whenever the framework is used reduces the application
development time and cost considerably. This paper introduces the
Framework Interface State Transition Tester (FIST2), a tool for
automated unit testing of Java framework applications. During the
framework development stage, given the formal descriptions of the
framework hooks, the specifications of the methods of the
framework's extensible classes, and the illegal-behavior descriptions
of the Framework Interface Classes (FICs), FIST2 generates
unit-level test cases for the classes. At the framework application
development stage, given the customized method specifications of
the implemented FICs, FIST2 automates the use, execution, and
evaluation of the already generated test cases to test the implemented
FICs. The paper illustrates the use of the FIST2 tool for testing
several applications that use the SalesPoint framework.
Abstract: This paper attempts to discuss the evolution of the
retrieval techniques focusing on development, challenges and trends
of the image retrieval. It highlights both the already addressed and
outstanding issues. The explosive growth of image data leads to the
need for research and development in image retrieval. Image
retrieval research has been moving from keywords, to low-level
features, and on to semantic features. The drive towards semantic
features is due to the problem that keywords can be very subjective
and time-consuming, while low-level features cannot always describe
the high-level concepts in users' minds.
Abstract: This paper presents an architecture of current filesystem
implementations as well as our new filesystem SpadFS and operating
system Spad with rewritten VFS layer targeted at high performance
I/O applications. The paper presents microbenchmarks and real-world
benchmarks of different filesystems on the same kernel as well as
benchmarks of the same filesystem on different kernels, enabling
the reader to draw conclusions about how much the performance of
various tasks is affected by the operating system and how much by the
physical layout of data on disk. The paper describes our novel
features, most notably continuous allocation of directories and
cross-file readahead, and shows their impact on performance.
Abstract: An image texture analysis and target recognition approach using an improved image texture feature coding method (TFCM) and a Support Vector Machine (SVM) for target detection is presented. With the proposed target detection framework, targets of interest can be detected accurately. A Cascade-Sliding-Window technique was also developed for automated target localization. Application to mammograms showed that over 88% of normal mammograms and 80% of abnormal mammograms can be correctly identified. The approach was also successfully applied to Synthetic Aperture Radar (SAR) and Ground Penetrating Radar (GPR) images for target detection.
Abstract: In this paper, we present an improved fast and robust
search algorithm for copy detection using histogram-based features for
short MPEG video clips from large video database. There are two
types of histogram features used to generate more robust features. The
first one is based on the adjacent pixel intensity difference quantization
(APIDQ) algorithm, which had been reliably applied to human face
recognition previously. An APIDQ histogram is utilized as the feature
vector of the frame image. Another one is ordinal histogram feature
which is robust to color distortion. Furthermore, by combining these
with a temporal division method, the spatial and temporal features of
the video sequence are integrated to realize fast and robust video
search for copy detection. Experimental results show that the proposed
algorithm can detect similar video clips more accurately and robustly
than conventional fast video search algorithms.
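The APIDQ idea above — quantizing adjacent-pixel intensity differences and using their histogram as the frame feature — can be sketched as follows. This is a simplified variant: the published APIDQ quantizes the 2-D (horizontal, vertical) difference vector jointly, whereas this sketch bins each direction's differences independently, and the bin count and clipping range are assumptions.

```python
import numpy as np

def apidq_histogram(img, bins=16, dmax=64):
    """Simplified APIDQ-style frame feature: quantize adjacent-pixel
    intensity differences and return their normalized histogram."""
    img = img.astype(float)
    dx = np.diff(img, axis=1).ravel()   # horizontal neighbour differences
    dy = np.diff(img, axis=0).ravel()   # vertical neighbour differences
    d = np.concatenate([dx, dy])
    # Map differences in [-dmax, dmax) onto integer codes 0..bins-1.
    q = np.clip((d + dmax) * bins // (2 * dmax), 0, bins - 1).astype(int)
    hist = np.bincount(q, minlength=bins).astype(float)
    return hist / hist.sum()            # normalized histogram = feature vector

rng = np.random.default_rng(6)
frame = rng.integers(0, 256, size=(64, 64))
h = apidq_histogram(frame)
print(h.shape)  # (16,)
```

Because the feature is built from local differences rather than absolute intensities, it is relatively stable under global brightness shifts; frame-to-frame comparison can then use, e.g., histogram intersection or L1 distance.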
Abstract: A novel algorithm is presented that constructs a seamless video mosaic of an entire panorama by automatically analyzing and managing feature points, including management of their quantity and quality, throughout the sequence. Since a video contains significant redundancy, not all consecutive video images are required to create a mosaic; only some key images need to be selected. Meanwhile, feature-based mosaicing methods rely heavily on correct feature-point correspondence, and if the key images have a large frame interval, the mosaic will often be interrupted by the scarcity of corresponding feature points. A unique characteristic of the method is its ability to handle all of the above problems in video mosaicing. Experiments have been performed under various conditions, and the results show that our method achieves fast and accurate video mosaic construction. Keywords: video mosaic, feature points management, homography estimation.
Abstract: In this paper, we propose an improved fast search
algorithm using combined histogram features and temporal division
method for short MPEG video clips from large video database. There
are two types of histogram features used to generate more robust
features. The first one is based on the adjacent pixel intensity
difference quantization (APIDQ) algorithm, which had been reliably
applied to human face recognition previously. An APIDQ histogram is
utilized as the feature vector of the frame image. Another one is
ordinal feature which is robust to color distortion. Combined with
active search [4], a temporal pruning algorithm, fast and robust video
search can be realized. The proposed search algorithm has been
evaluated on 6 hours of video, searching for 200 given MPEG video
clips, each 30 seconds long. Experimental results show that the
proposed algorithm can detect a similar video clip in merely 120 ms,
and an Equal Error Rate (EER) of 1% is achieved, which is more
accurate and robust than conventional fast video search algorithms.
Abstract: An experimental investigation was performed on pulp
liquid flow in straight ducts with a square cross section. Fully
developed steady flow was visualized and the fiber concentration was
obtained using a light-section method developed by the authors.
The obtained results reveal quantitatively, in a definite form, the
distribution of the fiber concentration. From the results and
measurements of pressure loss, it is found that the flow characteristics
of pulp liquid in ducts can be classified into five patterns. The
relationships among the distributions of mean and fluctuation of fiber
concentration, the pressure loss and the flow velocity are discussed,
and then the features for each pattern are extracted. The degree of
nonuniformity of the fiber concentration, which is indicated by the
standard deviation of its distribution, is decreased from 0.3 to 0.05
with an increase in the velocity of the tested pulp liquid from 0.4 to
0.8%.