Abstract: Knowledge and related notions have become increasingly
important, and today we speak of a knowledge-based society.
Many small and large companies have reacted to these new
challenges, but a deep gulf remains between the conception of
knowledge among professional researchers and its practice in company life.
The question of this research was: how can small and medium-sized
companies meet the demands of the new economy?
Questionnaires were used in this research, focusing on a specific
segment of the domestic knowledge-based economy.
The researchers wanted to know what the sources of success
are and how they relate to questions of knowledge
acquisition, knowledge transfer and knowledge utilization in small and
medium-sized companies. These companies know that they have to
change their behaviour and thinking, but they are not yet at a level
where they can compete with larger or multinational companies.
Abstract: This article discusses the problem of estimating the
orientation of inclined ground on which a human subject stands based
on information provided by the vestibular system consisting of the
otolith and semicircular canals. It is assumed that body segments are
not necessarily aligned and thus form an open kinematic chain.
The semicircular canals, analogous to a technical gyrometer, provide a
measure of the angular velocity, whereas the otolith, analogous to a
technical accelerometer, provides a measure of the translational
acceleration. Two solutions are proposed and discussed. The first is
based on a stand-alone Kalman filter that optimally fuses the two
measurements based on their dynamic characteristics and their noise
properties. In this case, no body dynamic model is needed. In the
second solution, a central extended disturbance observer that
incorporates a body dynamic model (internal model) is employed.
The merits of both solutions are discussed and demonstrated by
experimental and simulation results.
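The abstract does not specify the filter equations of the first solution. As an illustrative sketch only, the fusion idea can be reduced to one dimension: predict the tilt angle by integrating the angular-velocity measurement (the canal analogue), then correct with the gravity-derived angle (the otolith analogue). The noise parameters q and r and the sampling step are assumptions, not values from the paper.

```python
def kalman_tilt(gyro_rates, accel_angles, dt=0.01, q=0.01, r=0.1):
    """1-D Kalman filter sketch: predict with the rate measurement,
    correct with the accelerometer-derived angle."""
    theta, p = 0.0, 1.0          # state estimate and its variance
    estimates = []
    for omega, z in zip(gyro_rates, accel_angles):
        # Predict: integrate angular velocity; process noise q grows p.
        theta += omega * dt
        p += q
        # Update: the Kalman gain blends in the accelerometer angle z.
        k = p / (p + r)
        theta += k * (z - theta)
        p *= (1.0 - k)
        estimates.append(theta)
    return estimates

# Stationary body at a true tilt of 0.3 rad: zero rate, constant angle.
est = kalman_tilt([0.0] * 200, [0.3] * 200)
```

As in the paper's first solution, no body dynamic model appears here; the estimate converges to the accelerometer angle at a speed set by the noise ratio q/r.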
Abstract: Motion detection is a basic operation in the selection of significant segments of a video signal. For effective human-computer intelligent interaction, the computer needs to recognize motion and track the moving object. Here an efficient neural network system is proposed for motion detection against a static background. The method consists of four parts: frame separation, rough motion detection, network formation and training, and object tracking. The approach can be used to verify real-time detections, making it applicable in defence, biomedical and robotics applications. It can also provide detection information on the size, location and direction of motion of moving objects for assessment purposes. The time taken for video tracking by this neural network is only a few seconds.
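The rough motion detection stage is not detailed in the abstract. A common baseline against a static background is thresholded frame differencing, sketched here as an assumed illustration (the threshold value is not from the paper):

```python
def rough_motion_mask(prev, curr, thresh=20):
    """Absolute difference between consecutive grey-level frames,
    thresholded into a binary motion mask (threshold is illustrative)."""
    return [[1 if abs(curr[i][j] - prev[i][j]) > thresh else 0
             for j in range(len(curr[0]))]
            for i in range(len(curr))]

# A single pixel brightens between two 2x2 grey-level frames.
mask = rough_motion_mask([[10, 10], [10, 10]], [[10, 50], [10, 10]])
```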
Abstract: Most fingerprint recognition techniques are based on minutiae matching and have been well studied. However, this technology still suffers from problems associated with the handling of poor-quality impressions. One problem besetting fingerprint matching is distortion. Distortion changes both geometric position and orientation, and leads to difficulties in establishing a match among multiple impressions acquired from the same fingertip. Marking all the minutiae accurately while rejecting false minutiae is another issue still under research. Our work combines many methods to build a minutia extractor and a minutia matcher. The combination of multiple methods comes from a wide investigation of research papers. Some novel changes are also used in the work: segmentation using morphological operations, improved thinning, false-minutiae removal methods, minutia marking with special consideration of triple branch counting, minutia unification by decomposing a branch into three terminations, and matching in a unified x-y coordinate system after a two-step transformation.
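Minutia marking on a thinned ridge map is commonly done with the crossing-number (CN) method; the abstract does not give the paper's exact rule, so the following is an illustrative sketch in which CN = 1 marks a ridge ending and CN = 3 a bifurcation (the triple-branch case the abstract treats specially):

```python
def crossing_number(skel, i, j):
    # 8-neighbours of (i, j) in circular order.
    nb = [skel[i-1][j-1], skel[i-1][j], skel[i-1][j+1], skel[i][j+1],
          skel[i+1][j+1], skel[i+1][j], skel[i+1][j-1], skel[i][j-1]]
    return sum(abs(nb[k] - nb[(k + 1) % 8]) for k in range(8)) // 2

def find_minutiae(skel):
    """On a thinned (1-pixel-wide) binary ridge map: CN == 1 marks a
    ridge ending, CN == 3 a bifurcation (triple branch)."""
    endings, bifurcations = [], []
    for i in range(1, len(skel) - 1):
        for j in range(1, len(skel[0]) - 1):
            if skel[i][j] == 1:
                cn = crossing_number(skel, i, j)
                if cn == 1:
                    endings.append((i, j))
                elif cn == 3:
                    bifurcations.append((i, j))
    return endings, bifurcations

# A tiny T-shaped skeleton: two endings on the bar, one on the stem,
# and one bifurcation where the stem meets the bar.
skel = [[0, 0, 0, 0, 0, 0, 0],
        [0, 1, 1, 1, 1, 1, 0],
        [0, 0, 0, 1, 0, 0, 0],
        [0, 0, 0, 1, 0, 0, 0],
        [0, 0, 0, 0, 0, 0, 0]]
ends, bifs = find_minutiae(skel)
```

The unification step described in the abstract would then decompose each CN = 3 pixel into three terminations before matching.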
Abstract: In this paper, a method for matching image segments
using triangle-based (geometrical) regions is proposed. Triangular
regions are formed from triples of vertex points obtained from a
keypoint detector (SIFT). However, triangle regions are subject to
noise and distortion around the edges and vertices (especially acute
angles). Therefore, these triangles are expanded into parallelogram-shaped
regions. The extracted image segments inherit an important
triangle property: invariance to affine distortion. Given two
images, matching corresponding regions is conducted by computing
the relative affine matrix, rectifying one of the regions w.r.t. the other
one, then calculating the similarity between the reference and
rectified region. The experimental tests show the efficiency and
robustness of the proposed algorithm against geometrical distortion.
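The relative affine matrix between two matched regions is determined by the three vertex correspondences of their triangles. As a self-contained sketch (using Cramer's rule rather than any particular library), the six affine coefficients solve two 3x3 linear systems:

```python
def det3(m):
    return (m[0][0] * (m[1][1]*m[2][2] - m[1][2]*m[2][1])
          - m[0][1] * (m[1][0]*m[2][2] - m[1][2]*m[2][0])
          + m[0][2] * (m[1][0]*m[2][1] - m[1][1]*m[2][0]))

def solve3(M, r):
    # Cramer's rule for a 3x3 linear system M x = r.
    d = det3(M)
    return [det3([[r[i] if j == k else M[i][j] for j in range(3)]
                  for i in range(3)]) / d for k in range(3)]

def affine_from_triangles(src, dst):
    """Coefficients (a, b, c) and (d, e, f) of the affine map
    u = a*x + b*y + c, v = d*x + e*y + f taking the three src
    vertices onto the three dst vertices."""
    M = [[x, y, 1.0] for (x, y) in src]
    return (solve3(M, [u for (u, _) in dst]),
            solve3(M, [v for (_, v) in dst]))

# Unit right triangle mapped by scale (2, 3) plus translation (2, 3).
row_u, row_v = affine_from_triangles([(0, 0), (1, 0), (0, 1)],
                                     [(2, 3), (4, 3), (2, 6)])
```

One region can then be rectified with this map before computing the similarity score, as the abstract describes.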
Abstract: Segmentation, filtering out of measurement errors and
identification of breakpoints are integral parts of any analysis of
microarray data for the detection of copy number variation (CNV).
Existing algorithms designed for these tasks have had some successes
in the past, but they tend to be O(N²) in either computation time or
memory requirement, or both, and the rapid advance of microarray
resolution has practically rendered such algorithms useless. Here we
propose an algorithm, SAD, that is much faster, far less memory-hungry
(O(N) in both computation time and memory requirement)
and offers higher accuracy. The two key ingredients of SAD are the
fundamental assumption in statistics that measurement errors are
normally distributed and the mathematical relation that the product of
two Gaussians is another Gaussian (function). We have produced a
computer program for analyzing CNV based on SAD. In addition to
being fast and small it offers two important features: quantitative
statistics for predictions and, with only two user-decided parameters,
ease of use. Its speed shows little dependence on genomic profile.
Running on an average modern computer, it completes CNV analyses
for a 262 thousand-probe array in ~1 second and a 1.8 million-probe
array in 9 seconds.
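The second key ingredient, the product-of-Gaussians relation, reduces to a precision-weighted fusion of means and variances. How SAD applies it to segmentation is not spelled out in the abstract; the identity itself can be sketched as:

```python
def gaussian_product(mu1, var1, mu2, var2):
    """Mean and variance of the Gaussian proportional to the pointwise
    product N(mu1, var1) * N(mu2, var2): precisions (1/var) add, and
    the new mean is the precision-weighted average of the two means."""
    var = 1.0 / (1.0 / var1 + 1.0 / var2)
    mu = var * (mu1 / var1 + mu2 / var2)
    return mu, var

# Two equally uncertain measurements fuse to their midpoint,
# with half the variance of either one.
mu, var = gaussian_product(0.0, 1.0, 4.0, 1.0)
```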
Abstract: A conventional GA combined with a local search
algorithm, such as 2-OPT, forms a hybrid genetic algorithm (HGA)
for the traveling salesman problem (TSP). However, geometric
properties, which are problem-specific knowledge, can be used to
improve the search process of the HGA. Some tour segments (edges)
of a TSP are good, while others may be too long to appear in a short tour.
This knowledge can constrain the GA to work with good tour
segments while considering long tour segments less often.
Consequently, a new algorithm, called the intelligent-OPT hybrid
genetic algorithm (IOHGA), is proposed to improve the GA and the 2-OPT
algorithm in order to reduce the search time for the optimal solution.
Based on the geometric properties, all the tour segments are assigned
2-level priorities to distinguish between good and bad genes. A
simulation study was conducted to evaluate the performance of the
IOHGA. The experimental results indicate that, in general, the IOHGA
obtains near-optimal solutions in less time and with better accuracy
than the hybrid genetic algorithm with simulated annealing
(HGA(SA)).
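For reference, the plain 2-OPT local search that both the HGA and the IOHGA build on repeatedly reverses a tour segment whenever the 2-exchange shortens the tour. The sketch below omits the 2-level edge-priority scheme the abstract adds:

```python
import math

def tour_length(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]]
               for i in range(len(tour)))

def two_opt(tour, dist):
    """If replacing edges (a,b),(c,d) by (a,c),(b,d) shortens the tour,
    reverse the segment between them; stop at a 2-OPT local optimum."""
    tour, n = tour[:], len(tour)
    improved = True
    while improved:
        improved = False
        for i in range(n - 1):
            for j in range(i + 2, n - (1 if i == 0 else 0)):
                a, b = tour[i], tour[i + 1]
                c, d = tour[j], tour[(j + 1) % n]
                if dist[a][c] + dist[b][d] < dist[a][b] + dist[c][d] - 1e-12:
                    tour[i + 1:j + 1] = reversed(tour[i + 1:j + 1])
                    improved = True
    return tour

# Four corners of the unit square; the initial tour crosses itself.
pts = [(0, 0), (1, 0), (0, 1), (1, 1)]
D = [[math.dist(p, q) for q in pts] for p in pts]
best = two_opt([0, 1, 2, 3], D)
```

The IOHGA's priority idea would restrict which (i, j) exchanges are tried, favouring short ("good gene") edges.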
Abstract: This paper presents a comparative analysis of a new
unsupervised PCA-based technique for steel plates texture segmentation
towards defect detection. The proposed scheme, called Variance-Based
Component Analysis (VBCA), employs PCA for feature
extraction, applies a feature-reduction algorithm based on the variance of
eigenpictures, and classifies pixels as defective or normal. While
classic PCA uses a clusterer such as k-means for pixel clustering,
VBCA employs thresholding and some post-processing operations to
label pixels as defective or normal. The experimental results show
that the proposed VBCA algorithm is 12.46% more accurate and
78.85% faster than classic PCA.
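The abstract does not state VBCA's exact reduction rule. A common variance-based reduction over PCA eigenvalues, shown here purely as an assumed illustration, keeps the leading components covering a chosen share of total variance:

```python
def select_components(eigenvalues, keep=0.95):
    """Keep the smallest number of leading eigenvalues whose cumulative
    share of total variance reaches `keep` (threshold is illustrative)."""
    ev = sorted(eigenvalues, reverse=True)
    total, acc = sum(ev), 0.0
    for k, v in enumerate(ev, start=1):
        acc += v
        if acc / total >= keep:
            return k
    return len(ev)
```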
Abstract: The purpose of the present study is to calculate the Gutenberg-Richter parameters (a, b) and to analyze the mean annual rate of exceedance of earthquake magnitude (λm) for the southern segment of the Sagaing fault and its associated components. The study area lies within a radius of about 200 km centered at Yangon. The earthquake data file covers the period from 1975 to 31 August 2006. The bounded Gutenberg-Richter recurrence law is applied with M0 = 4.0 and Mmax = 7.5.
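For reference, the unbounded and bounded Gutenberg-Richter recurrence relations can be sketched as follows, using the M0 = 4.0 and Mmax = 7.5 bounds quoted above; the (a, b) values in the checks are illustrative, not the study's fitted parameters:

```python
import math

def gr_rate(a, b, m):
    """Unbounded Gutenberg-Richter law: the mean annual number of
    earthquakes with magnitude >= m satisfies log10(rate) = a - b*m."""
    return 10.0 ** (a - b * m)

def bounded_gr_rate(a, b, m, m0=4.0, mmax=7.5):
    """Bounded G-R recurrence law truncated at m0 and mmax
    (the bounds quoted in the abstract)."""
    beta = b * math.log(10.0)
    nu = gr_rate(a, b, m0)               # rate of events with m >= m0
    num = math.exp(-beta * (m - m0)) - math.exp(-beta * (mmax - m0))
    den = 1.0 - math.exp(-beta * (mmax - m0))
    return nu * num / den
```

The bounded form matches the unbounded rate at m = M0 and drops to zero at m = Mmax.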
Abstract: Segmentation in ultrasound images is challenging due to interference from speckle noise and the fuzziness of boundaries. In this paper, a segmentation scheme using fuzzy c-means (FCM) clustering that incorporates both intensity and texture information is proposed to extract breast lesions in ultrasound images. Firstly, the nonlinear structure tensor, which helps refine the edges detected from intensity, is used to extract speckle texture. Then a spatial FCM clustering is applied to the image feature space for segmentation. In experiments with simulated and clinical ultrasound images, spatial FCM clustering with both intensity and texture information gives more accurate results than conventional FCM or spatial FCM without texture information.
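The FCM core alternates a membership update and a centroid update. The sketch below works on scalar features for brevity; the paper's feature space combines intensity and texture and adds a spatial term, which this illustration omits:

```python
def fcm_1d(data, centers, m=2.0, iters=100):
    """Fuzzy c-means on scalar features: alternate the membership
    update (inverse-distance weighting with fuzzifier m) and the
    membership-weighted centroid update."""
    c = len(centers)
    for _ in range(iters):
        U = []
        for x in data:
            d = [abs(x - v) + 1e-12 for v in centers]   # avoid /0
            U.append([1.0 / sum((d[i] / d[j]) ** (2.0 / (m - 1.0))
                                for j in range(c)) for i in range(c)])
        centers = [sum((U[k][i] ** m) * data[k] for k in range(len(data)))
                   / sum(U[k][i] ** m for k in range(len(data)))
                   for i in range(c)]
    return centers, U

# Two well-separated intensity clusters.
data = [0.0, 0.1, 0.2, 5.0, 5.1, 5.2]
centers, U = fcm_1d(data, [0.0, 1.0])
```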
Abstract: Speckled images arise when coherent microwave,
optical, and acoustic imaging techniques are used to image an object, surface or scene. Examples of coherent imaging systems include synthetic aperture radar, laser imaging systems, imaging sonar
systems, and medical ultrasound systems. Speckle noise is a form of object- or target-induced noise that results when the surface of the object is Rayleigh rough compared to the wavelength of the illuminating radiation. Detection and estimation in images corrupted
by speckle noise is complicated by the nature of the noise and is not
as straightforward as detection and estimation in additive noise. In
this work, we derive stochastic models for speckle noise, with an emphasis on speckle as it arises in medical ultrasound images. The
motivation for this work is the problem of segmentation and tissue classification using ultrasound imaging. Modeling of speckle in this
context involves a partially developed speckle model in which an underlying Poisson point process modulates a Gram-Charlier series
of Laguerre weighted exponential functions, resulting in a doubly
stochastic filtered Poisson point process. The statistical distribution of partially developed speckle is derived in a closed canonical form.
It is observed that as the mean number of scatterers in a resolution cell is increased, the probability density function approaches an
exponential distribution. This is consistent with fully developed speckle noise as demonstrated by the Central Limit theorem.
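The limiting behaviour noted at the end, the intensity density approaching an exponential as the mean number of scatterers per resolution cell grows, can be checked with a random-phasor simulation (this is a generic illustration, not the paper's Gram-Charlier model). For an exponential intensity distribution the mean-to-standard-deviation ratio tends to 1:

```python
import math, random

def speckle_intensity(n_scatterers, n_samples, rng):
    """Each resolution cell sums n_scatterers unit phasors with random
    phases; the normalized intensity is |sum|^2 / n_scatterers."""
    out = []
    for _ in range(n_samples):
        re = im = 0.0
        for _ in range(n_scatterers):
            phi = rng.uniform(0.0, 2.0 * math.pi)
            re += math.cos(phi)
            im += math.sin(phi)
        out.append((re * re + im * im) / n_scatterers)
    return out

rng = random.Random(0)
I = speckle_intensity(50, 10000, rng)
mean_I = sum(I) / len(I)
var_I = sum((x - mean_I) ** 2 for x in I) / len(I)
snr = mean_I / math.sqrt(var_I)   # tends to 1 for fully developed speckle
```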
Abstract: Facial expression analysis plays a significant role in
human-computer interaction. Automatic analysis of human facial
expression is still a challenging problem with many applications. In
this paper, we propose a neuro-fuzzy-based automatic facial expression
recognition system to recognize human facial expressions such as
happiness, fear, sadness, anger, disgust and surprise. Initially the facial
image is segmented into three regions, from which the uniform Local
Binary Pattern (LBP) texture feature distributions are extracted and
represented as a histogram descriptor. The facial expressions are
recognized using a Multiple Adaptive Neuro-Fuzzy Inference System
(MANFIS). The proposed system is designed and tested with the JAFFE
face database and achieves 94.29% classification
accuracy.
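The uniform-LBP descriptor mentioned above rests on two small operations, sketched here generically (the paper's exact neighbourhood radius and binning are not specified in the abstract): computing the 8-neighbour LBP code of a pixel, and testing whether that code is "uniform" (at most two 0/1 transitions in the circular bit string):

```python
def lbp_code(img, i, j):
    """8-neighbour LBP code: threshold each neighbour against the
    centre pixel and read the results as one byte."""
    nbrs = [img[i-1][j-1], img[i-1][j], img[i-1][j+1], img[i][j+1],
            img[i+1][j+1], img[i+1][j], img[i+1][j-1], img[i][j-1]]
    c = img[i][j]
    return sum(1 << k for k, v in enumerate(nbrs) if v >= c)

def is_uniform(code):
    """Uniform patterns have at most two 0/1 transitions in the
    circular bit string; all others share a single histogram bin."""
    bits = [(code >> k) & 1 for k in range(8)]
    return sum(bits[k] != bits[(k + 1) % 8] for k in range(8)) <= 2

# A horizontal edge: bright row above, dark row below the centre.
patch = [[5, 5, 5], [1, 1, 1], [0, 0, 0]]
code = lbp_code(patch, 1, 1)
```

The histogram descriptor then counts each uniform code separately per region and pools all non-uniform codes into one bin.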
Abstract: System MEMORI automatically detects and recognizes
rotated and/or rescaled versions of the objects of a database within
digital color images with cluttered background. This task is accomplished
by means of a region grouping algorithm guided by heuristic
rules, whose parameters concern some geometrical properties and the
recognition score of the database objects. This paper focuses on the
strategies implemented in MEMORI for the estimation of the heuristic
rule parameters. This estimation, being automatic, makes the system
a self-configuring and highly user-friendly tool.
Abstract: This paper presents a new growing neural network for
cluster analysis and market segmentation, which optimizes the size
and structure of clusters by iteratively checking them for multivariate
normality. We combine the recently published SGNN approach [8]
with the basic principle underlying the Gaussian-means algorithm
[13] and the Mardia test for multivariate normality [18, 19]. The new
approach is distinguished from existing ones by its holistic design and
its great autonomy over the clustering process as a whole. Its
performance is demonstrated by means of synthetic 2D data and by
real lifestyle survey data usable for market segmentation.
Abstract: A higher-order spline-interpolated contour, obtained
by up-sampling homogeneously distributed coordinates, for
segmentation of the kidney region in different classes of ultrasound
kidney images has been developed and is presented in this paper. The
performance of the proposed method is measured and compared with
modified snake model contour, Markov random field contour and
expert outlined contour. The validation of the method is made in
correspondence with expert outlined contour using maximum coordinate
distance, Hausdorff distance and mean radial distance
metrics. The results obtained reveal that the proposed scheme provides
an optimum contour that agrees well with the expert-outlined contour.
Moreover, this technique helps to preserve the pixels of interest,
which specifically define the functional characteristics of the kidney. This
opens up various possibilities for implementing a computer-aided
diagnosis system exclusively for US kidney images.
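Of the validation metrics named above, the Hausdorff distance between the computed and expert-outlined contours (treated as point sets) can be sketched directly:

```python
import math

def hausdorff(A, B):
    """Symmetric Hausdorff distance between two contours given as
    point lists: the largest nearest-neighbour distance either way."""
    def directed(P, Q):
        return max(min(math.dist(p, q) for q in Q) for p in P)
    return max(directed(A, B), directed(B, A))
```

The mean radial distance used alongside it would instead average, rather than maximize, the per-point distances.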
Abstract: The main objective of this paper is to provide an efficient tool for delineating brain tumors in three-dimensional magnetic resonance images. To achieve this goal, we basically use a level-set approach to delineate three-dimensional brain tumors. We then introduce a compression scheme for 3D brain structures based on mesh simplification, adapted to the specific needs of telemedicine and to the restricted capacity of network communication. We present here the main stages of our system and preliminary results, which are very encouraging for clinical practice.
Abstract: Detection of player identity is a challenging task in sports video content analysis. In the case of soccer video, player number recognition is an effective and precise solution. Jersey numbers can be considered scene text, and difficulties in localization and recognition arise due to variations in orientation, size, illumination, motion, etc. This paper proposes a new method for player number localization and recognition. By observing hue, saturation and value for 50 different jersey examples, we noticed that most often a combination of low- and high-saturated pixels is used to separate the number and jersey regions. An image segmentation method based on this observation is introduced. Then a novel method for player number localization based on internal contours is proposed. False number candidates are filtered out using area and aspect ratio. Before OCR processing, the extracted numbers are enhanced using image smoothing and rotation normalization.
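The area and aspect-ratio filtering of false number candidates might look like the following sketch; all thresholds are illustrative assumptions, not values from the paper:

```python
def filter_number_candidates(boxes, min_area=50, max_area=5000,
                             min_ratio=0.3, max_ratio=1.2):
    """Keep only bounding boxes (x, y, w, h) whose area and
    width/height ratio look digit-like (thresholds illustrative)."""
    return [(x, y, w, h) for (x, y, w, h) in boxes
            if min_area <= w * h <= max_area
            and min_ratio <= w / h <= max_ratio]

kept = filter_number_candidates([(0, 0, 20, 40),    # digit-like
                                 (0, 0, 200, 40),   # too wide/large
                                 (0, 0, 5, 5)])     # too small
```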
Abstract: In this paper we focus on event extraction from Tamil
news articles. The system utilizes a scoring scheme for extracting and
grouping event-specific sentences. Using this scoring scheme, event-specific
clustering is performed over multiple documents. Events are
extracted from each document using a scoring scheme based on a
feature score and a condition score. Similarly, event-specific sentences
are clustered across multiple documents using this scoring scheme.
The proposed system builds an event template based on a user-specified
query. The templates are filled with event-specific details
such as person, location and timeline extracted from the formed clusters.
The proposed system applies these methodologies to Tamil news
articles that have been enconverted into UNL graphs using a Tamil-to-UNL
enconverter. The main intention of this work is to generate an
event-based template.
Abstract: DNA microarray technology is widely used by
geneticists to diagnose or treat diseases through gene expression.
This technology is based on the hybridization of a tissue's DNA
sequence onto a substrate and the further analysis of the image
formed by the thousands of genes in the DNA as green, red or yellow
spots. The process of DNA microarray image analysis involves
finding the location of the spots and quantifying their
expression levels. In this paper, a tool to perform DNA
microarray image analysis is presented, including a spot addressing
method based on image projections, spot segmentation
through contour-based segmentation, and the extraction of relevant
gene expression information.
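Projection-based spot addressing can be sketched as follows, as an assumed baseline rather than the paper's exact procedure: project the image onto each axis and take the centres of above-threshold runs as grid coordinates.

```python
def projection_profile(img, axis):
    """Sum intensities along rows (axis=0) or columns (axis=1)."""
    if axis == 0:
        return [sum(row) for row in img]
    return [sum(img[i][j] for i in range(len(img)))
            for j in range(len(img[0]))]

def peak_positions(profile, thresh):
    """Centres of the above-threshold runs of a projection profile:
    one grid coordinate per row/column of spots."""
    peaks, start = [], None
    for k, v in enumerate(profile + [0]):
        if v > thresh and start is None:
            start = k
        elif v <= thresh and start is not None:
            peaks.append((start + k - 1) // 2)
            start = None
    return peaks

# A 2x2 grid of 2x2-pixel spots on an 8x8 image.
img = [[0] * 8 for _ in range(8)]
for r in (1, 2, 5, 6):
    for c in (1, 2, 5, 6):
        img[r][c] = 1
rows = peak_positions(projection_profile(img, 0), thresh=0)
cols = peak_positions(projection_profile(img, 1), thresh=0)
```

Each (row, column) pair then addresses one spot, which the contour-based step would segment and quantify.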
Abstract: In this paper we present an offline system for the
recognition of handwritten numeric chains. Our work is divided
into two main parts. The first part is the realization of a recognition
system for isolated handwritten digits. In this case the study is
based mainly on evaluating the performance of neural networks
trained with the gradient back-propagation algorithm. The
parameters used to form the input vector of the neural network are
extracted from the binary images of the digits by several methods: the
distribution sequence, the Barr features and the centred moments of
the different projections and profiles. The second part is the
extension of our system to the reading of handwritten numeric
chains composed of a variable number of digits. The vertical
projection is used to segment the numeric chain into isolated digits, and
every digit (or segment) is presented separately to the input of
the system developed in the first part (the recognition system for
isolated handwritten digits). The result of the recognition of the
numeric chain is displayed at the output of the global system.
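The vertical-projection segmentation described above can be sketched directly: count the ink pixels per column and cut the chain at empty columns (a minimal illustration, ignoring touching digits):

```python
def segment_digits(img):
    """Cut a binary text-line image into per-digit column ranges at
    columns whose vertical projection (ink count) is zero."""
    cols = [sum(img[i][j] for i in range(len(img)))
            for j in range(len(img[0]))]
    segments, start = [], None
    for j, c in enumerate(cols + [0]):
        if c > 0 and start is None:
            start = j
        elif c == 0 and start is not None:
            segments.append((start, j - 1))
            start = None
    return segments

# Two "digits" separated by blank columns.
line = [[0, 1, 1, 0, 0, 1, 1, 1, 0],
        [0, 1, 1, 0, 0, 1, 0, 1, 0]]
spans = segment_digits(line)
```

Each column span would then be cropped and fed to the isolated-digit recognizer of the first part.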