Abstract: In this paper, we introduce a new method for elliptical object identification. The proposed method adopts a hybrid scheme consisting of the eigenvalues of covariance matrices, the Circular Hough Transform (CHT), and Bresenham's raster scan algorithm. In this approach we use the fact that the large and small eigenvalues of covariance matrices are associated with the major and minor axial lengths of the ellipse. The centre location of the ellipse is identified using the CHT. A sparse matrix technique is used to perform the CHT; since sparse matrices squeeze out zero elements and contain only a small number of non-zero elements, they offer savings in matrix storage space and computational time. A neighborhood suppression scheme is used to find the valid Hough peaks. The accurate position of the circumference pixels is identified using a raster scan algorithm that exploits the geometrical symmetry property. This method does not require the evaluation of tangents or curvature of edge contours, which are generally very sensitive to noise. The proposed method has the advantages of small storage, high speed, and accuracy in identifying the feature. The new method has been tested on both synthetic and real images, and several experiments have been conducted on various images with considerable background noise to demonstrate its efficacy and robustness. Experimental results on the accuracy of the proposed method, with comparisons against the Hough transform, its variants, and other tangent-based methods, are reported.
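As a minimal illustration of the eigenvalue/axis relationship used above (a sketch, not the authors' implementation; the ellipse parameters and uniform parametric sampling are assumptions): for edge pixels sampled uniformly in the parameter t, the covariance eigenvalues are a^2/2 and b^2/2, so each semi-axis is recoverable as sqrt(2*lambda).

```python
import numpy as np

# Assumed semi-major and semi-minor axes; the centre offset is arbitrary.
a, b = 5.0, 2.0
t = np.linspace(0, 2 * np.pi, 400, endpoint=False)
pts = np.stack([a * np.cos(t) + 10, b * np.sin(t) + 7], axis=1)

cov = np.cov(pts.T)                              # 2x2 covariance of edge pixels
lams = np.sort(np.linalg.eigvalsh(cov))[::-1]    # large, small eigenvalue
print(np.sqrt(2 * lams))                         # ~ [5.0, 2.0]: the semi-axes
```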
Abstract: Experiments with pumpkin-rowanberry marmalade candies were carried out at the Faculty of Food Technology of the Latvia University of Agriculture. The objective of this investigation was to evaluate the quality changes of pumpkin-rowanberry marmalade candies packed in different packaging materials during 15 weeks of storage, and to find the most suitable packaging material for prolonging the shelf life of low-sugar marmalade candies. Active packaging in combination with a modified atmosphere (MAP, 100% CO2) was examined and compared with traditional packaging in ambient air. The polymer Multibarrier 60 and paper bags were used. The influence of an iron-based oxygen absorber (Mitsubishi Gas Chemical Europe Ageless®, 500 cc sachets) on the marmalade candies' quality was tested during shelf life. Samples of 80±5 g were packaged in polymer pouches (110 mm x 110 mm), hermetically sealed by a MULTIVAC C300 vacuum chamber machine, and stored at room temperature (+21±0.5 °C). The physicochemical properties (moisture content, hardness, aw, pH, changes of the atmosphere composition (CO2 and O2) in the pack headspace, ascorbic acid, total carotenoids, and total phenols) and the microbial condition were analysed before packaging and in the 1st, 3rd, 5th, 8th, 11th and 15th weeks of storage.
Abstract: In this paper we present a new method for coin identification. The proposed method adopts a hybrid scheme using the eigenvalues of the covariance matrix, the Circular Hough Transform (CHT), and Bresenham's circle algorithm. The statistical and geometrical properties of the small and large eigenvalues of the covariance matrix of a set of edge pixels over a connected region of support are explored for the purpose of circular object detection. A sparse matrix technique is used to perform the CHT; since sparse matrices squeeze out zero elements and contain only a small number of non-zero elements, they offer savings in matrix storage space and computational time. A neighborhood suppression scheme is used to find the valid Hough peaks. The accurate position of the circumference pixels is identified using a raster scan algorithm that exploits the geometrical symmetry property. After finding circular objects, the proposed method uses the texture on the surface of the coins, characterized by textons: properties unique to coins that refer to the fundamental micro-structures in generic natural images. This method has been tested on several real-world images including coin and non-coin images, and its performance is also evaluated based on its noise-withstanding capability.
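A minimal sketch of a sparse-accumulator CHT for a single known radius (illustrative only; the angular discretization is an assumption, and the neighborhood suppression step is only indicated in the comments):

```python
import numpy as np
from scipy.sparse import dok_matrix

def cht_accumulate(edge_points, radius, shape):
    """Sparse CHT accumulator for one radius: each edge pixel votes for all
    centre candidates at distance `radius`; a DOK sparse matrix stores only
    the non-zero votes, saving storage and time."""
    acc = dok_matrix(shape, dtype=np.int32)
    thetas = np.linspace(0, 2 * np.pi, 90, endpoint=False)
    for (y, x) in edge_points:
        cy = np.round(y - radius * np.sin(thetas)).astype(int)
        cx = np.round(x - radius * np.cos(thetas)).astype(int)
        for yy, xx in zip(cy, cx):
            if 0 <= yy < shape[0] and 0 <= xx < shape[1]:
                acc[yy, xx] += 1
    return acc

# Demo: edge points on a circle of radius 20 centred at (50, 60).
angles = np.linspace(0, 2 * np.pi, 120, endpoint=False)
edges = [(int(round(50 + 20 * np.sin(a))), int(round(60 + 20 * np.cos(a))))
         for a in angles]
acc = cht_accumulate(edges, radius=20, shape=(100, 120))
peak = np.unravel_index(np.argmax(acc.toarray()), (100, 120))
print(peak)  # ~ (50, 60); neighborhood suppression would reject nearby lesser peaks
```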
Abstract: Sparse representation has long been studied and several
dictionary learning methods have been proposed. The dictionary
learning methods are widely used because they are adaptive. In this
paper, a new dictionary learning method for audio is proposed. Signals are first decomposed into Intrinsic Mode Functions (IMFs) of different degrees using the Empirical Mode Decomposition (EMD) technique. These IMFs then form a learned dictionary. To reduce the size of the dictionary, the K-means method is applied to it to generate a K-EMD dictionary. Compared to the K-SVD algorithm, the K-EMD dictionary decomposes audio signals into structured components; as a result, the sparsity of the representation is increased by 34.4% and the SNR of the recovered audio signals by 20.9%.
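A rough sketch of the K-EMD construction described above, under stated assumptions: EMD comes from the PyEMD package, atoms are fixed-length unit-norm frames cut from the IMFs (the frame length is an assumption), and K-means centers form the reduced dictionary. The authors' exact atom extraction may differ.

```python
import numpy as np
from PyEMD import EMD               # pip install EMD-signal (assumed dependency)
from sklearn.cluster import KMeans

def k_emd_dictionary(signals, frame_len=256, k=64):
    """Build a K-EMD style dictionary from a list of 1-D NumPy signals."""
    atoms, emd = [], EMD()
    for s in signals:
        for imf in emd.emd(s):                       # intrinsic mode functions
            for i in range(len(imf) // frame_len):
                frame = imf[i * frame_len:(i + 1) * frame_len]
                norm = np.linalg.norm(frame)
                if norm > 1e-8:
                    atoms.append(frame / norm)       # unit-norm atoms
    km = KMeans(n_clusters=k, n_init=10).fit(np.array(atoms))
    return km.cluster_centers_                       # rows are dictionary atoms
```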
Abstract: In this work, grinding or micro-cutting tools in the form of pellets were manufactured using bonded alumina abrasive grains. The bond used is a vitreous material containing quartz, feldspars, kaolinite, and a quantity of hematite. The pellets were used in the glass grinding process to replace the free-abrasive-grain lapping process. The elaborated pellets were studied to determine their effectiveness in the grinding process and to optimize the influence of the pellet elaboration parameters. The obtained results show the existence of an optimal combination of the pellet elaboration parameters for each glass grinding phase (coarse to fine grinding). The final roughness (rms) reached by the elaborated pellets on a BK7 glass surface was about 0.392 μm.
Abstract: The problem addressed herein is the efficient management of the intense Grid/Cluster computation involved when the preconditioned Bi-CGSTAB Krylov method is employed for the iterative solution of the large and sparse linear system arising from the discretization of the Modified Helmholtz-Dirichlet problem by the Hermite Collocation method. Taking advantage of the Collocation matrix's red-black ordered structure, we organize the whole computation efficiently and map it onto a pipeline architecture with master-slave communication. The implementation, realized through MPI programming tools on a SUN V240 cluster interconnected through 100 Mbps and 1 Gbps Ethernet networks, is evaluated by the speedup measurements included.
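For orientation, a serial SciPy sketch of the linear-algebra core: solving a generic sparse system with preconditioned Bi-CGSTAB. The matrix below is a sparse stand-in, not the Hermite Collocation matrix, and nothing here reproduces the paper's MPI pipeline.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import bicgstab, spilu, LinearOperator

n = 1000
A = diags([-1, 4, -1], [-1, 0, 1], shape=(n, n), format="csc")  # generic sparse system
b = np.ones(n)

ilu = spilu(A)                                  # incomplete-LU preconditioner
M = LinearOperator((n, n), ilu.solve)
x, info = bicgstab(A, b, M=M)                   # preconditioned Bi-CGSTAB
print("converged" if info == 0 else f"info = {info}")
```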
Abstract: A series of microarray experiments produces observations
of differential expression for thousands of genes across multiple
conditions.
Principal component analysis (PCA) has been widely used in multivariate data analysis to reduce the dimensionality of the data in order to simplify subsequent analysis and allow for summarization of the data in a parsimonious manner. PCA, which can be implemented via a singular value decomposition (SVD), is useful for the analysis of microarray data.
For an application of PCA using SVD, we use the DNA microarray data for the small round blue cell tumors (SRBCT) of childhood by Khan et al. (2001). To decide the number of components that account for a sufficient amount of information, we draw a scree plot. The biplot, a graphical display associated with PCA, reveals important features that exhibit the relationships between variables, and also the relationship of the variables with the observations.
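A minimal sketch of PCA via SVD on a samples-by-genes expression matrix; random data and the matrix dimensions stand in for the SRBCT set here.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(88, 2308))          # assumed: 88 samples x 2308 genes

Xc = X - X.mean(axis=0)                  # center each gene
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

scores = U * s                           # principal component scores (for a biplot)
explained = s**2 / np.sum(s**2)          # variance proportions (for a scree plot)
print(np.cumsum(explained)[:10])         # how many components are "sufficient"
```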
Abstract: Although face detection is not a recent activity in the field of image processing, it is still an open area for research. The greatest step in this field is the work reported by Viola, whose recent analogue is that of Huang et al. Both use similar features and a similar training process. The former detects only upright faces, but the latter can detect multi-view faces in still grayscale images using new features called 'sparse features'. Finding these features with the existing methods is very time-consuming and inefficient. Here, we propose a new approach to finding sparse features using a genetic algorithm system. This method requires less computational cost and obtains more effective features in the learning process for face detection, which yields higher accuracy.
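A generic GA for binary feature-mask selection, sketched below; the fitness function is a placeholder and the operators (truncation selection, one-point crossover, bit-flip mutation) are assumptions, not the paper's system.

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(mask, X, y):
    # Placeholder fitness (an assumption): correlation of the masked feature
    # mean with the label; a real system would score detector accuracy.
    if mask.sum() == 0:
        return -np.inf
    c = np.corrcoef(X[:, mask].mean(axis=1), y)[0, 1]
    return 0.0 if np.isnan(c) else abs(c)

def ga_select(X, y, pop=40, gens=50, p_mut=0.02):
    n = X.shape[1]
    population = rng.random((pop, n)) < 0.1            # sparse random masks
    for _ in range(gens):
        scores = np.array([fitness(m, X, y) for m in population])
        parents = population[np.argsort(scores)[::-1][:pop // 2]]  # truncation
        cuts = rng.integers(1, n, size=pop // 2)
        children = np.array([np.concatenate([parents[i][:c], parents[-i - 1][c:]])
                             for i, c in enumerate(cuts)])         # one-point crossover
        children ^= rng.random(children.shape) < p_mut             # bit-flip mutation
        population = np.vstack([parents, children])
    return population[np.argmax([fitness(m, X, y) for m in population])]

X = rng.normal(size=(200, 50))
y = (X[:, 3] + X[:, 7] > 0).astype(float)
print(np.flatnonzero(ga_select(X, y)))   # ideally includes informative features 3 and 7
```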
Abstract: The paper provides a discussion of the most relevant
aspects of yield curve modeling. Two classes of models are considered: stochastic models and parsimonious function-based models, through the approaches developed by Vasicek (1977) and Nelson and Siegel (1987). Yield curve estimates for Croatia are presented, their dynamics analyzed, and finally a comparative analysis of the models is conducted.
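For reference, the Nelson and Siegel (1987) curve can be evaluated as below; the parameter values are hypothetical, not the Croatian estimates.

```python
import numpy as np

def nelson_siegel(tau, beta0, beta1, beta2, lam):
    """Nelson-Siegel (1987) yield at maturity tau (years), time constant lam."""
    x = tau / lam
    factor = (1 - np.exp(-x)) / x
    return beta0 + beta1 * factor + beta2 * (factor - np.exp(-x))

maturities = np.array([0.25, 1, 2, 5, 10])
print(nelson_siegel(maturities, beta0=0.05, beta1=-0.02, beta2=0.01, lam=1.5))
```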
Abstract: In this paper we propose a method for vision systems
to consistently represent functional dependencies between different
visual routines along with relational short- and long-term knowledge
about the world. Here the visual routines are bound to visual properties
of objects stored in the memory of the system. Furthermore,
the functional dependencies between the visual routines are seen
as a graph also belonging to the object's structure. This graph is
parsed in the course of acquiring a visual property of an object to
automatically resolve the dependencies of the bound visual routines.
Using this representation, the system is able to dynamically rearrange
the processing order while keeping its functionality. Additionally, the
system is able to estimate the overall computational costs of a certain
action. We will also show that the system can efficiently use that
structure to incorporate already acquired knowledge and thus reduce
the computational demand.
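One way to read "parsing the graph to resolve the dependencies" is as a topological sort of the routine graph; a minimal sketch with hypothetical routine names follows.

```python
from graphlib import TopologicalSorter   # Python 3.9+

# Hypothetical dependency graph: each visual routine maps to the routines
# whose results it needs (names are illustrative, not the paper's).
dependencies = {
    "color_histogram": {"segment_object"},
    "shape_descriptor": {"segment_object", "edge_map"},
    "segment_object": {"edge_map"},
    "edge_map": set(),
}

# An execution order in which every routine runs after its dependencies;
# re-sorting after graph changes corresponds to dynamic rearrangement.
order = list(TopologicalSorter(dependencies).static_order())
print(order)   # e.g. ['edge_map', 'segment_object', 'color_histogram', 'shape_descriptor']
```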
Abstract: This paper presents an effective framework for Chinese syntactic parsing, which consists of two parts. The first is a parsing framework based on an improved bottom-up chart parsing algorithm, which integrates the beam search strategy of the N-best algorithm and the heuristic function of the A* algorithm for pruning, and obtains multiple parsing trees. The second is a novel evaluation model, which integrates contextual and partial lexical information into the traditional PCFG model and defines a new score function. Using this model, the tree with the highest score is selected as the best parsing tree. Finally, contrasting experimental results are given. Keywords: syntactic parsing, PCFG, pruning, evaluation model.
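A toy sketch of beam pruning inside bottom-up chart (CKY) parsing with a PCFG; the grammar and beam width are assumptions, and the paper's A* heuristic and contextual/lexical scoring are not reproduced.

```python
import math
from collections import defaultdict

# Toy PCFG in Chomsky normal form (probabilities are illustrative assumptions).
lexical = {("D", "the"): 1.0, ("N", "dog"): 0.5, ("N", "cat"): 0.5, ("V", "saw"): 1.0}
binary = [("NP", "D", "N", 1.0), ("VP", "V", "NP", 1.0), ("S", "NP", "VP", 1.0)]
BEAM = 5   # keep only the N best entries per chart cell

def cky_beam(words):
    n = len(words)
    chart = defaultdict(dict)   # (i, j) -> {symbol: (logprob, backpointer)}
    for i, w in enumerate(words):
        for (sym, word), p in lexical.items():
            if word == w:
                chart[i, i + 1][sym] = (math.log(p), w)
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j, cell = i + span, {}
            for k in range(i + 1, j):
                for lhs, b, c, p in binary:
                    if b in chart[i, k] and c in chart[k, j]:
                        lp = math.log(p) + chart[i, k][b][0] + chart[k, j][c][0]
                        if lhs not in cell or lp > cell[lhs][0]:
                            cell[lhs] = (lp, (b, c, k))
            # beam pruning: keep the BEAM highest-scoring symbols in this cell
            chart[i, j] = dict(sorted(cell.items(), key=lambda kv: -kv[1][0])[:BEAM])
    return chart

chart = cky_beam("the dog saw the cat".split())
print(chart[0, 5].get("S"))   # log-probability and backpointer of the best S parse
```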
Abstract: Text categorization is the problem of classifying text documents into a set of predefined classes. After a preprocessing step, the documents are typically represented as large sparse vectors. When training classifiers on large collections of documents, both the time and memory restrictions can be quite prohibitive. This justifies the application of feature selection methods to reduce the dimensionality of the document-representation vector. Four feature selection methods are evaluated: Random Selection, Information Gain (IG), Support Vector Machine (called SVM_FS), and Genetic Algorithm with SVM (GA_FS). We show that the best results were obtained with the SVM_FS and GA_FS methods for a relatively small dimension of the feature vector, compared with the IG method, which requires longer vectors for quite similar classification accuracies. We also present a novel method to better correlate the SVM kernel's parameters (polynomial or Gaussian kernel).
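A minimal sketch of the Information Gain criterion for one binary term, IG(t) = H(C) - H(C|t), estimated from counts on toy data; in feature selection, terms are ranked by this value and the top-k kept.

```python
import numpy as np

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def information_gain(term_present, labels):
    """IG of one binary term: class entropy minus class entropy given the term."""
    ig = entropy(np.bincount(labels) / len(labels))
    for v in (0, 1):
        mask = term_present == v
        if mask.any():
            ig -= mask.mean() * entropy(np.bincount(labels[mask]) / mask.sum())
    return ig

# Toy data: 6 documents, 2 classes, binary presence of one term.
labels = np.array([0, 0, 0, 1, 1, 1])
term = np.array([1, 1, 1, 0, 0, 1])
print(information_gain(term, labels))
```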
Abstract: We present the development of a system of programs designed for the compilation and execution of applications for handheld computers. In the introduction we describe the purpose of the project and its components. The next two sections present the first two components of the project (the scanner and parser generators). Then we describe the Object Pascal compiler and the virtual machines for Windows and Palm OS. In conclusion, we emphasize the ways in which the project can be extended.
Abstract: Although achieving a zero-defect software release is practically impossible, software industries should take maximum care to detect defects/bugs well ahead of time, allowing only a bare minimum to creep into the released version. This is a clear indicator of time playing an important role in bug detection. In addition, software quality is a major factor in the software engineering process, and early detection can be achieved only through static code analysis, as opposed to conventional testing. BugCatcher.Net is a static analysis tool that detects bugs in .NET® languages through MSIL (Microsoft Intermediate Language) inspection. The tool utilizes a parser based on Finite State Automata to carry out bug detection. After being detected, bugs need to be corrected immediately. BugCatcher.Net facilitates correction by proposing a corrective solution for reported warnings/bugs to end users, with minimum side effects. Moreover, the tool is also capable of analyzing the bug trend of a program under inspection.
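As a toy illustration of finite-state bug detection over a linear instruction trace (the opcodes and the pattern are hypothetical, not BugCatcher.Net's actual automata):

```python
# Flag a resource that is opened but never closed: a two-state automaton
# ("start" -> "opened" on Open, back to "start" on Close) scans the trace.
def find_unclosed(instructions):
    state, open_at, warnings = "start", None, []
    for idx, ins in enumerate(instructions):
        if state == "start" and ins.startswith("call Open"):
            state, open_at = "opened", idx
        elif state == "opened" and ins.startswith("call Close"):
            state = "start"
    if state == "opened":
        warnings.append(f"resource opened at instruction {open_at} is never closed")
    return warnings

trace = ["ldloc.0", "call Open", "ldstr 'x'", "ret"]
print(find_unclosed(trace))   # ['resource opened at instruction 1 is never closed']
```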
Abstract: In this paper, a wavelet-based neural network (WNN) classifier for recognizing EEG signals is implemented and tested on three sets of EEG signals (healthy subjects, patients with epilepsy, and patients with epileptic syndrome during a seizure). First, the Discrete Wavelet Transform (DWT) with Multi-Resolution Analysis (MRA) is applied to decompose the EEG signal at the resolution levels of its components (δ, θ, α, β and γ), and Parseval's theorem is employed to extract the percentage distribution of energy features of the EEG signal at the different resolution levels. Second, a neural network (NN) classifies these extracted features to identify the EEG type according to the percentage distribution of energy features. The performance of the proposed algorithm has been evaluated using a total of 300 EEG signals. The results show that the proposed classifier has the ability to recognize and classify EEG signals efficiently.
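A minimal sketch of the energy-feature extraction, assuming a db4 wavelet, 5 decomposition levels, and a synthetic signal; by Parseval's theorem (approximately, for orthogonal wavelets) the squared coefficients partition the signal energy across sub-bands.

```python
import numpy as np
import pywt

fs = 173.61                          # assumed sampling rate (Hz)
t = np.arange(0, 4, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)  # synthetic signal

coeffs = pywt.wavedec(eeg, "db4", level=5)   # [cA5, cD5, cD4, cD3, cD2, cD1]

energies = np.array([np.sum(c ** 2) for c in coeffs])
percent = 100 * energies / energies.sum()    # percentage distribution of energy
for name, p in zip(["A5", "D5", "D4", "D3", "D2", "D1"], percent):
    print(f"{name}: {p:.1f}%")               # features fed to the NN classifier
```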
Abstract: E-mail has become an important means of electronic communication, but the viability of its usage is marred by Unsolicited Bulk E-mail (UBE) messages. UBE comes in many types, such as pornographic, virus-infected, and 'cry-for-help' messages, as well as fake and fraudulent offers for jobs, winnings, and medicines. UBE poses technical and socio-economic challenges to the usage of e-mail. To meet this challenge and combat this menace, we need to understand UBE. Towards this end, the current paper presents a content-based textual analysis of nearly 3000 winnings-announcing UBE messages. Technically, this is an application of text parsing and tokenization for an unstructured textual document, and we approach it using Bag-Of-Words (BOW) and Vector Space Document Model techniques. We have attempted to identify the most frequently occurring lexis in the winnings-announcing UBE documents, and an analysis of the top 100 such lexis is presented. We exhibit the relationship between the occurrence of a word from the identified lexis-set in a given UBE and the probability that the given UBE is one announcing fake winnings. To the best of our knowledge and our survey of the related literature, this is the first formal attempt at identifying the most frequently occurring lexis in winnings-announcing UBE by textual analysis. Finally, this is a sincere attempt to raise alertness against, and mitigate the threat of, such luring but fake UBE.
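The frequency analysis reduces to tokenization plus counting; a minimal BOW sketch with a hypothetical two-message corpus and stop-word list:

```python
import re
from collections import Counter

# Hypothetical corpus standing in for the ~3000 winnings-announcing UBE messages.
messages = [
    "Congratulations! You have won the lottery prize of $1,000,000.",
    "Your email address won our award. Claim your prize now.",
]
stopwords = {"the", "of", "your", "you", "have", "our", "now"}

counts = Counter()
for msg in messages:
    tokens = re.findall(r"[a-z']+", msg.lower())          # tokenization
    counts.update(t for t in tokens if t not in stopwords)

print(counts.most_common(100))   # the most frequently occurring lexis
```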
Abstract: This paper presents a text clustering system developed based on a k-means type subspace clustering algorithm to cluster large, high-dimensional and sparse text data. In this algorithm, a new step is added to the k-means clustering process to automatically calculate the weights of keywords in each cluster, so that the important words of a cluster can be identified by their weight values. For understanding and interpretation of the clustering results, a few keywords that can best represent the semantic topic are extracted from each cluster. Two methods are used to extract the representative words. The candidate words are first selected according to their weights, calculated by our new algorithm. Then, the candidates are fed to WordNet to identify the set of noun words and consolidate the synonymy and hyponymy words. Experimental results have shown that the clustering algorithm is superior to other subspace clustering algorithms, such as PROCLUS and HARP, and to k-means type algorithms, e.g., Bisecting-KMeans. Furthermore, the word extraction method is effective in selecting the words that represent the topics of the clusters.
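A minimal sketch of the extraction step, assuming NLTK's WordNet interface and hypothetical per-cluster weights; nouns are kept and synonyms consolidated by shared noun synsets.

```python
from nltk.corpus import wordnet as wn   # requires: nltk.download("wordnet")

# Hypothetical per-cluster keyword weights as produced by the weighting step.
cluster_weights = {"car": 0.31, "automobile": 0.28, "engine": 0.17, "run": 0.05}

# 1. candidate selection by weight (the 0.1 cutoff is an assumption)
candidates = [w for w, wt in sorted(cluster_weights.items(), key=lambda kv: -kv[1])
              if wt > 0.1]

# 2. keep noun words; consolidate words sharing a WordNet noun synset
nouns = [w for w in candidates if any(s.pos() == "n" for s in wn.synsets(w))]
representative, seen = [], set()
for w in nouns:
    synsets = set(wn.synsets(w, pos="n"))
    if synsets & seen:
        continue                    # synonym of a kept word ("automobile" ~ "car")
    seen |= synsets
    representative.append(w)
print(representative)               # e.g. ['car', 'engine']
```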
Abstract: Many-core GPUs provide high computing ability and substantial bandwidth; however, optimizing irregular applications like SpMV on GPUs is a difficult but meaningful task. In this paper, we propose a novel method to improve the performance of SpMV on GPUs. A new storage format called HYB-R is proposed to exploit the GPU architecture more efficiently. The COO portion of the matrix is partitioned recursively into an ELL portion and a COO portion in the process of creating the HYB-R format, to ensure that there are as many non-zeros as possible in ELL format. How to partition the matrix is an important problem for the HYB-R kernel, so we also tune the parameters that partition the matrix for higher performance. Experimental results show that our method can achieve better performance than the fastest kernel (HYB) in NVIDIA's SpMV library, with up to 17% speedup.
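A sketch of one ELL/COO split in the spirit of this construction; the per-row width rule is a common HYB heuristic and an assumption here, and HYB-R would reapply the split recursively to the COO remainder.

```python
import numpy as np

def coo_to_hyb(rows, cols, vals, n_rows, k):
    """Split a COO matrix into an ELL part (up to k non-zeros per row,
    zero-padded for coalesced GPU access) and a COO part for the overflow."""
    per_row = [[] for _ in range(n_rows)]
    for r, c, v in zip(rows, cols, vals):
        per_row[r].append((c, v))
    ell_cols = np.zeros((n_rows, k), dtype=np.int64)
    ell_vals = np.zeros((n_rows, k))
    coo_rest = []
    for r, entries in enumerate(per_row):
        for j, (c, v) in enumerate(entries):
            if j < k:
                ell_cols[r, j], ell_vals[r, j] = c, v
            else:
                coo_rest.append((r, c, v))   # HYB-R would re-partition this part
    return ell_cols, ell_vals, coo_rest
```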
Abstract: The Continuously Adaptive Mean-Shift (CamShift) algorithm, incorporating scene depth information, is combined with the l1-minimization sparse-representation-based method to form a hybrid kernel and state-space-based tracking algorithm. We take advantage of the increased efficiency of the former and the robustness to occlusion of the latter. A simple interchange scheme transfers control between the algorithms based upon drift and occlusion likelihood, which is quantified by the projection of target candidates onto a depth map of the 2D scene obtained with a low-cost stereo vision webcam. The result is improved tracking, in terms of drift, over each algorithm individually in a challenging practical outdoor multiple-occlusion test case.
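A minimal sketch of such an interchange scheme, with an assumed threshold and a simple depth-based occlusion estimate; the trackers themselves are passed in as callables.

```python
import numpy as np

OCCLUSION_THRESHOLD = 0.5   # assumed value; would be tuned per scene

def occlusion_likelihood(box, depth_map, target_depth):
    """Fraction of candidate-box pixels nearer to the camera than the target,
    i.e. likely occluders: one way to project a candidate onto the depth map."""
    x, y, w, h = box
    patch = depth_map[y:y + h, x:x + w]
    return float((patch < target_depth).mean())

def track_frame(frame, depth_map, target_depth, box, camshift, l1_tracker):
    # Hand control to the robust (but slower) l1 tracker when occlusion is
    # likely; otherwise stay with the efficient CamShift default.
    if occlusion_likelihood(box, depth_map, target_depth) > OCCLUSION_THRESHOLD:
        return l1_tracker(frame, box)
    return camshift(frame, box)

depth = np.full((120, 160), 5.0)
depth[40:80, 60:100] = 1.0                      # a near obstacle inside the box
print(occlusion_likelihood((60, 40, 40, 40), depth, target_depth=3.0))   # 1.0
```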
Abstract: Databases have become ubiquitous. Almost all IT applications store information into and retrieve it from databases. Retrieving information from a database requires knowledge of technical languages such as the Structured Query Language (SQL). However, the majority of users who interact with databases do not have a technical background and are intimidated by the idea of using languages such as SQL. This has led to the development of a few Natural Language Database Interfaces (NLDBIs). An NLDBI allows the user to query the database in a natural language. This paper presents the architecture of a new NLDBI system, describes its implementation, and discusses the results obtained. In most typical NLDBI systems, the natural language statement is converted into an internal representation based on the syntactic and semantic knowledge of the natural language. This representation is then converted into queries using a representation converter; a natural language query is translated to an equivalent SQL query after processing through various stages. The work has been tested on primitive database queries with certain constraints.
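For flavor only, a toy pattern-based translation from a natural-language question to SQL; real NLDBI systems, including the one described here, rely on syntactic and semantic analysis rather than a single regular expression, and the pattern and schema below are hypothetical.

```python
import re

# One fixed question shape -> internal representation -> SQL (illustrative only).
PATTERN = re.compile(
    r"show (?:me )?the (\w+) of (?:all )?(\w+) whose (\w+) is (\w+)", re.I)

def to_sql(question):
    m = PATTERN.match(question)
    if not m:
        raise ValueError("query not understood")
    column, table, attr, value = m.groups()          # internal representation
    return f"SELECT {column} FROM {table} WHERE {attr} = '{value}';"

print(to_sql("show me the salary of employees whose department is sales"))
# SELECT salary FROM employees WHERE department = 'sales';
```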