Abstract: This paper proposes a new design of spatial FIR filter for automatic water level detection from a video signal of various river surroundings. The new approach applies "addition" of frames and a "horizontal" edge detector to distinguish the water region from the land region. The variance of each line of a filtered video frame is used as a feature value, and the water level is recognized as the boundary line between the land region and the water region. An edge-detection filter essentially demarcates two distinctly different regions; however, conventional filters do not adapt automatically to detect the water level under the various lighting conditions of river scenery. An optimized filter is therefore proposed so that the system becomes robust to changes in lighting conditions. The improved reliability of the proposed system with the optimized filter is confirmed by the accuracy of water level detection.
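The per-line variance feature at the heart of this approach can be sketched as follows; the synthetic frame, the filter taps and the threshold are invented for illustration and are not the authors' implementation:

```python
# Sketch of the per-line variance feature for water level detection.
# A horizontal-edge (vertical-gradient) FIR filter is applied to a
# synthetic grayscale frame; textured "land" rows yield high variance,
# smooth "water" rows low variance; the boundary row is the level.
import random

random.seed(0)
H, W = 20, 32
LEVEL = 12  # ground-truth water line in the synthetic frame

# Synthetic frame: noisy land above row LEVEL, flat water below.
frame = [[(100 + random.randint(-40, 40)) if y < LEVEL else 80
          for x in range(W)] for y in range(H)]

def horizontal_edge(img):
    """Simple [-1, 0, 1] vertical-gradient FIR filter (horizontal edges)."""
    h = len(img)
    return [[img[min(y + 1, h - 1)][x] - img[max(y - 1, 0)][x]
             for x in range(len(img[0]))] for y in range(h)]

def line_variance(row):
    m = sum(row) / len(row)
    return sum((v - m) ** 2 for v in row) / len(row)

edges = horizontal_edge(frame)
variances = [line_variance(row) for row in edges]

# Water level = first row from which the variance stays low.
THRESH = 10.0
detected = next(y for y in range(H) if all(v < THRESH for v in variances[y:]))
```

On this synthetic frame the detected row lands at the land/water boundary to within one line, which is the behavior the abstract's feature value relies on.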
Abstract: Despite the preponderant role played by cement among construction materials, it is today considered a material that damages the environment due to the large quantities of carbon dioxide emitted during its manufacture. Besides, global warming is now recognized worldwide as a new threat to humankind, against which advanced countries are investigating measures to reduce the current amount of emitted gases by half by 2050. Accordingly, efforts to reduce greenhouse gases are being exerted in all industrial fields. In particular, the cement industry strives to reduce the consumption of cement through the development of alkali-activated geopolymer mortars using industrial byproducts like bottom ash. This study intends to gather basic data on the flowability and strength development characteristics of alkali-activated geopolymer mortar by examining its FT-IR features with respect to the effects of the alkali-activator on strength, in order to develop bottom ash-based alkali-activated geopolymer mortar. The results show that a 35:65 mass ratio of sodium hydroxide to sodium silicate is appropriate and that a molarity of 9M for sodium hydroxide is advantageous. The ratio of the alkali-activators to bottom ash is seen to have little effect on the strength. Moreover, the FT-IR analysis reveals that a larger improvement of the strength shifts the peak from 1060 cm⁻¹ (T-O, T = Si or Al) toward a lower wavenumber.
Abstract: Thanks to advances in VR technology, many studies have used VR to develop training systems. VR's characteristics can simulate many kinds of situations to reach a training goal. However, a good training system considers not only realistic simulation but also the learner's motivation. Therefore, many studies have begun to introduce game features into VR training systems; such a system is typically called a serious game, which uses game features to engage the learner's motivation. However, VR and serious games have another important advantage: the simulation itself. This feature can create any kind of pressured environment, and since emergencies may arise in a real environment, increasing the trainees' pressure during training is important. Most previous studies investigated serious-game applications and learning performance; few investigated how to increase the learner's mental workload during training. In our study, we therefore introduce a real case study, create two types of training environments, and compare the learner's mental workload between VR training and a serious game.
Abstract: This paper presents a review of vision-aided systems and proposes an approach for visual rehabilitation using stereo vision technology. The proposed system utilizes stereo vision, an image processing methodology and a sonification procedure to support blind navigation. The developed system includes a wearable computer, stereo cameras as vision sensors and stereo earphones, all moulded into a helmet. The image of the scene in front of the visually handicapped user is captured by the vision sensors. The captured images are processed to enhance the important features of the scene for navigation assistance. The image processing is designed as a model of human vision, identifying the obstacles and their depth information. The processed image is mapped onto musical stereo sound for the blind user's understanding of the scene ahead. The
developed method has been tested in the indoor and outdoor
environments and the proposed image processing methodology is
found to be effective for object identification.
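The sonification step can be illustrated with a minimal depth-and-position-to-sound mapping; the frequency range, depth limits and panning law below are assumptions for the sketch, not the paper's actual procedure:

```python
# Sketch of a depth-to-sound mapping for stereo sonification.
# Nearer obstacles map to higher pitch; the obstacle's horizontal
# position maps to left/right channel gains (stereo panning).
import math

F_MIN, F_MAX = 200.0, 2000.0   # output pitch range in Hz (assumed)
D_MIN, D_MAX = 0.5, 10.0       # usable depth range in metres (assumed)

def sonify(depth_m, x_norm):
    """Map an obstacle at depth_m metres and horizontal position
    x_norm in [0, 1] (0 = far left) to (frequency, left_gain, right_gain)."""
    d = min(max(depth_m, D_MIN), D_MAX)
    # Linear mapping: nearest obstacle -> F_MAX, farthest -> F_MIN.
    freq = F_MAX - (F_MAX - F_MIN) * (d - D_MIN) / (D_MAX - D_MIN)
    # Constant-power pan between the two earphone channels.
    angle = x_norm * math.pi / 2
    return freq, math.cos(angle), math.sin(angle)
```

For example, an obstacle at 0.5 m dead center yields the maximum pitch with equal gain in both earphones, while a distant obstacle on the far left yields the minimum pitch entirely in the left channel.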
Abstract: The Ant Colony Optimization (ACO) is a metaheuristic inspired by the behavior of real ants in their search for the shortest paths to food sources. It has recently attracted a lot of attention and has been successfully applied to a number of different optimization problems. Due to the importance of the feature selection problem and the potential of ACO, this paper presents a novel method that utilizes the ACO algorithm to implement a feature subset search procedure. Initial results obtained using the classification of speech segments are very promising.
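A minimal ACO-style subset search might look like the following; the toy fitness function stands in for the wrapper classification accuracy the paper would compute on speech segments, and all sizes and rates are invented:

```python
# Minimal sketch of ACO-based feature subset search. Each ant builds a
# feature subset guided by per-feature pheromone; subsets are scored by
# an evaluation function (a toy stand-in for classifier accuracy), and
# pheromone is reinforced on the features of the best subset found.
import random

random.seed(1)
N_FEATURES = 8
RELEVANT = {0, 3, 5}          # toy ground truth: only these help

def evaluate(subset):
    """Toy fitness: reward relevant features, penalise subset size."""
    return len(subset & RELEVANT) - 0.1 * len(subset)

def ant_solution(pheromone):
    """Select each feature with probability proportional to pheromone."""
    total = sum(pheromone)
    return {i for i in range(N_FEATURES)
            if random.random() < pheromone[i] / total * 3}

pheromone = [1.0] * N_FEATURES
best, best_score = set(), float("-inf")
for _ in range(200):                      # iterations
    for _ in range(10):                   # ants per iteration
        s = ant_solution(pheromone)
        score = evaluate(s)
        if score > best_score:
            best, best_score = s, score
    pheromone = [0.9 * p for p in pheromone]          # evaporation
    for i in best:                                    # reinforcement
        pheromone[i] += 1.0
```

Evaporation plus reinforcement concentrates pheromone on the relevant features, so later ants sample them with increasing probability.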
Abstract: Elastic light single-scattering spectroscopy system
with a single optical fiber probe was employed to differentiate cancerous prostate tissue from non-cancerous prostate tissue ex-vivo just after radical prostatectomy. First, ELSSS spectra were acquired
from cancerous prostate tissue to define its spectral features. Then,
spectra were acquired from normal prostate tissue to define the difference in spectral features between the cancerous and normal prostate tissues. A total of 66 tissue samples from nine patients were evaluated with the ELSSS system. Comparison of the histopathology results and the ELSSS measurements revealed that the sign of the spectral slope is negative for cancerous prostate tissue and positive for non-cancerous tissue in the wavelength range from 450 to 750 nm. Based on the correlation between the histopathology results and the sign of the spectral slopes, the ELSSS system differentiates cancerous from non-cancerous prostate tissue with a sensitivity of 0.95 and a specificity of 0.94.
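The slope-sign criterion can be sketched directly: fit a least-squares line over the 450-750 nm window and classify by its sign. The two spectra below are synthetic and only illustrate the rule, not real ELSSS data:

```python
# Sketch of the spectral-slope criterion: fit an ordinary least-squares
# line to the spectrum over 450-750 nm and classify by the sign of the
# slope (negative -> cancerous, positive -> non-cancerous).

def spectral_slope(wavelengths_nm, intensities):
    """Ordinary least-squares slope over the 450-750 nm window."""
    pts = [(w, i) for w, i in zip(wavelengths_nm, intensities)
           if 450 <= w <= 750]
    n = len(pts)
    sw = sum(w for w, _ in pts)
    si = sum(i for _, i in pts)
    sww = sum(w * w for w, _ in pts)
    swi = sum(w * i for w, i in pts)
    return (n * swi - sw * si) / (n * sww - sw * sw)

def classify(wavelengths_nm, intensities):
    return ("cancerous" if spectral_slope(wavelengths_nm, intensities) < 0
            else "non-cancerous")

# Two synthetic spectra, one rising and one falling with wavelength:
wl = list(range(400, 801, 10))
rising = [0.2 + 0.001 * (w - 450) for w in wl]    # positive slope
falling = [0.8 - 0.001 * (w - 450) for w in wl]   # negative slope
```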
Abstract: In our recent study, we used ZnO nanoparticles assisted by UV light irradiation to investigate the photocatalytic degradation of Phenol Red (PR). The ZnO photocatalyst was characterized by X-ray diffraction (XRD), transmission electron microscopy (TEM), specific surface area analysis (BET) and UV-visible spectroscopy. The X-ray diffractometry results for the ZnO nanoparticles exhibit normal crystalline phase features. All observed peaks can be indexed to the pure hexagonal wurtzite crystal structure, with the space group P63mc, and no impurity peaks appear in the diffraction pattern. In addition, TEM measurements show that most of the nanoparticles are rod-like or spherical in shape and fairly monodispersed. A significant degradation of the PR was observed when the catalyst was added to the solution even without UV light exposure, and the photodegradation increases with the photocatalyst loading. The surface area of the ZnO nanomaterials from the BET measurement was 11.9 m²/g. Besides the photocatalyst loading, the effects of parameters such as the initial PR concentration and pH on the photodegradation efficiency were also studied.
Abstract: An evolutionary method whose selection and recombination
operations are based on generalization error-bounds of
support vector machine (SVM) can select a subset of potentially
informative genes for an SVM classifier very efficiently [7]. In this
paper, we will use the derivative of error-bound (first-order criteria)
to select and recombine gene features in the evolutionary process,
and compare the performance of the derivative of error-bound with
the error-bound itself (zero-order) in the evolutionary process. We
also investigate several error-bounds and their derivatives to compare
the performance, and find the best criteria for gene selection
and classification. We use 7 cancer-related human gene expression
datasets to evaluate the performance of the zero-order and first-order
criteria of error-bounds. Though both criteria have the same strategy
in theoretically, experimental results demonstrate the best criterion
for microarray gene expression data.
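The evolutionary selection-and-recombination loop can be sketched as follows; a toy surrogate replaces the actual SVM error-bound (training an SVM is out of scope for the illustration), and the gene sets, rates and population sizes are invented:

```python
# Sketch of evolutionary gene-subset selection in which the fitness is a
# generalization error-bound (lower is better). Selection keeps the
# lowest-bound subsets; recombination mixes their genes.
import random

random.seed(2)
N_GENES = 20
INFORMATIVE = {1, 4, 9, 16}   # toy ground truth

def error_bound(subset):
    """Toy surrogate for a zero-order SVM error bound: penalise missing
    informative genes heavily and included noise genes lightly."""
    missing = len(INFORMATIVE - subset)
    noise = len(subset - INFORMATIVE)
    return missing + 0.05 * noise

def recombine(a, b):
    """Uniform crossover on gene membership plus a small mutation."""
    child = {g for g in range(N_GENES)
             if g in (a if random.random() < 0.5 else b)}
    if random.random() < 0.3:
        child ^= {random.randrange(N_GENES)}   # flip one gene in or out
    return child

pop = [{g for g in range(N_GENES) if random.random() < 0.5}
       for _ in range(30)]
initial_best = min(error_bound(s) for s in pop)
for _ in range(60):
    pop.sort(key=error_bound)
    parents = pop[:10]                         # truncation selection
    pop = parents + [recombine(random.choice(parents),
                               random.choice(parents)) for _ in range(20)]
best = min(pop, key=error_bound)
```

Because the best subsets are carried over each generation, the bound of the best individual can only improve over the run.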
Abstract: Scene interpretation systems need to match (often ambiguous)
low-level input data to concepts from a high-level ontology.
In many domains, these decisions are uncertain and benefit greatly
from proper context. This paper demonstrates the use of decision
trees for estimating class probabilities for regions described by feature
vectors, and shows how context can be introduced in order to improve
the matching performance.
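The idea of combining decision-tree class probabilities with context can be sketched as a simple reweighting; the leaf counts and the context prior below are invented for the example:

```python
# Sketch of context-aware class probabilities: a (trained) decision-tree
# leaf supplies P(class | features); multiplying by a context prior and
# renormalising biases the decision toward classes plausible in the
# current scene context.

def leaf_probabilities(counts):
    """Class-frequency estimate at a decision-tree leaf."""
    total = sum(counts.values())
    return {c: n / total for c, n in counts.items()}

def apply_context(probs, context_prior):
    """Reweight tree probabilities by a context prior and renormalise."""
    weighted = {c: p * context_prior.get(c, 0.0) for c, p in probs.items()}
    z = sum(weighted.values())
    return {c: w / z for c, w in weighted.items()}

# A leaf where "roof" and "road" regions are hard to tell apart...
probs = leaf_probabilities({"roof": 40, "road": 45, "grass": 15})
# ...but the surrounding context says we are looking at a building block.
context = {"roof": 0.6, "road": 0.3, "grass": 0.1}
posterior = apply_context(probs, context)
```

Without context the tree alone would pick "road"; with the context prior the same region is matched to "roof".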
Abstract: The literature offers metrics for identifying the quality of reusable components, but a framework that uses these metrics to precisely predict the reusability of software components still needs to be worked out. If identified in the design phase, or even in the coding phase, these reusability metrics can help reduce rework by improving the quality of reuse of the software component and hence improve productivity through a probable increase in the reuse level. As the CK metric suite is the most widely used set of metrics for extracting the structural features of object-oriented (OO) software, this study uses a tuned CK metric suite, i.e. WMC, DIT, NOC, CBO and LCOM, to obtain a structural analysis of OO-based software components. An algorithm is proposed in which the tuned metric values of an OO software component are given as inputs to a K-Means clustering system, and a decision tree is formed with 10-fold cross-validation of the data to evaluate the component in terms of a linguistic reusability value. The developed reusability model has produced high-precision results as desired.
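The clustering step can be sketched as a tiny k-means over (WMC, DIT, NOC, CBO, LCOM) vectors; all metric values below are invented, and the decision-tree stage is omitted:

```python
# Sketch of clustering OO components by tuned CK metric values.
# Each component is a vector (WMC, DIT, NOC, CBO, LCOM); a tiny k-means
# groups components so that clusters can later be labelled with
# linguistic reusability values.
import random

random.seed(3)

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans(points, k, iters=50):
    centers = random.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            clusters[min(range(k), key=lambda i: dist2(p, centers[i]))].append(p)
        new_centers = []
        for i, cl in enumerate(clusters):
            if cl:
                new_centers.append(tuple(sum(p[d] for p in cl) / len(cl)
                                         for d in range(len(points[0]))))
            else:
                new_centers.append(centers[i])
        centers = new_centers
    return centers, clusters

# (WMC, DIT, NOC, CBO, LCOM) for six hypothetical components:
components = [(5, 1, 0, 2, 3), (6, 2, 1, 3, 4), (4, 1, 1, 2, 2),
              (40, 6, 9, 25, 60), (38, 5, 8, 22, 55), (42, 6, 10, 27, 62)]
centers, clusters = kmeans(components, 2)
```

The low-coupling and high-coupling components fall into separate clusters, which a later stage could map to linguistic labels such as "high" and "low" reusability.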
Abstract: This paper presents and evaluates a new classification
method that aims to improve classifier performance and speed up
their training process. The proposed approach, called labeled
classification, seeks to improve convergence of the BP (Back
propagation) algorithm through the addition of an extra feature
(labels) to all training examples. To classify a new example, tests are carried out for each label. The simplicity of implementation is the main advantage of this approach because no modifications are required in the training algorithms. Therefore, it can be used with other techniques of acceleration and stabilization. In this work, two
models of the labeled classification are proposed: the LMLP
(Labeled Multi Layered Perceptron) and the LNFC (Labeled Neuro
Fuzzy Classifier). These models are tested using Iris, wine, texture
and human thigh databases to evaluate their performances.
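The labeled-classification idea, an extra label feature at training time and a per-label test at classification time, can be sketched as follows; a 1-nearest-neighbour scorer stands in for the trained LMLP/LNFC, and the toy data are invented:

```python
# Sketch of the "labeled classification" inference scheme: every
# training example is extended with its label as an extra feature; to
# classify a new example, each candidate label is appended in turn and
# the model is queried, keeping the label that scores best.

TRAIN = [((5.1, 3.5), 0), ((4.9, 3.0), 0),   # class 0 examples
         ((6.7, 3.1), 1), ((6.3, 2.9), 1)]   # class 1 examples

# Step 1: append the label to every training vector (the extra feature).
AUGMENTED = [features + (label,) for features, label in TRAIN]

def score(vec):
    """Stand-in for the trained network: negative squared distance to
    the nearest augmented training vector (higher is better)."""
    return -min(sum((a - b) ** 2 for a, b in zip(vec, t)) for t in AUGMENTED)

def classify(features, labels=(0, 1)):
    # Step 2: try each candidate label and keep the best-scoring one.
    return max(labels, key=lambda lab: score(features + (lab,)))
```

Appending the wrong label pushes the augmented vector away from all matching training points, so the correct label wins the per-label test.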
Abstract: Texture information plays an increasingly important role in remotely sensed imagery classification and many pattern
recognition applications. However, the selection of relevant textural
features to improve this classification accuracy is not a straightforward
task. This work investigates the effectiveness of two Mutual
Information Feature Selector (MIFS) algorithms to select salient
textural features that contain highly discriminatory information for
multispectral imagery classification. The input candidate features are
extracted from a SPOT High Resolution Visible (HRV) image using the Wavelet Transform (WT) at levels l = 1, 2. The experimental results show that the textural features selected by the MIFS algorithms contribute more to improving classification accuracy than classical approaches such as Principal Components Analysis (PCA) and Linear Discriminant Analysis (LDA).
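The MIFS criterion, pick the feature maximising I(f;C) - beta * sum of I(f;s) over already-selected features s, can be sketched on discrete toy data:

```python
# Sketch of the MIFS criterion: greedily pick the feature that
# maximises I(f; C) - beta * sum_{s in selected} I(f; s).
# Mutual information is estimated from counts on discrete toy data in
# which f0 is informative, f1 duplicates f0, and f2 is noise.
from math import log2
from collections import Counter

def mutual_info(xs, ys):
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    return sum(c / n * log2((c / n) / (px[x] / n * py[y] / n))
               for (x, y), c in pxy.items())

def mifs(features, target, k, beta=0.5):
    selected = []
    remaining = list(features)
    while len(selected) < k and remaining:
        best = max(remaining,
                   key=lambda f: mutual_info(features[f], target)
                   - beta * sum(mutual_info(features[f], features[s])
                                for s in selected))
        selected.append(best)
        remaining.remove(best)
    return selected

target = [0, 0, 0, 0, 1, 1, 1, 1]
features = {
    "f0": [0, 0, 0, 0, 1, 1, 1, 1],   # perfectly informative
    "f1": [0, 0, 0, 0, 1, 1, 1, 1],   # redundant copy of f0
    "f2": [0, 1, 0, 1, 0, 1, 0, 1],   # independent noise
}
```

With beta = 0.5, the redundant copy f1 keeps only half its relevance score (0.5 instead of 1.0) once f0 is selected, which is exactly the redundancy penalty the selector applies.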
Abstract: This paper describes part of a project about Learning-by-Modeling (LbM). Studying complex systems is increasingly
important in teaching and learning many science domains. Many
features of complex systems make it difficult for students to develop
deep understanding. Previous research indicates that involvement
with modeling scientific phenomena and complex systems can play a
powerful role in science learning. Some researchers dispute this view, arguing that models and modeling do not contribute to understanding complexity concepts, since they increase the cognitive load on students. This study investigates the effect of
different modes of involvement in exploring scientific phenomena
using computer simulation tools, on students' mental models from the perspective of structure, behavior and function. Quantitative and qualitative methods are used to study 121 freshman students who engaged in participatory simulations about complex phenomena,
showing emergent, self-organized and decentralized patterns. Results
show that LbM plays a major role in students' concept formation
about complexity concepts.
Abstract: A typical definition of the Computer Aided Diagnosis
(CAD), found in literature, can be: A diagnosis made by a radiologist
using the output of a computerized scheme for automated image
analysis as a diagnostic aid. Often it is possible to find the expression
Computer Aided Detection (CAD or CADe): this definition emphasizes the intent of CAD to support rather than replace the human observer in the analysis of radiographic images. In this article
we will illustrate the application of CAD systems and the aim of
these definitions.
Commercially available CAD systems use computerized algorithms to identify suspicious regions of interest. This paper describes general CAD systems as expert systems consisting of the following components: segmentation/detection, feature extraction, and classification/decision making. As an example, this work shows the realization of a Computer-Aided Detection system able to assist the radiologist in identifying types of mammary tumor lesions. Furthermore, this prototype station uses a GRID configuration to work on a large distributed database of digitized mammographic images.
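The three-stage structure described above can be sketched as a skeleton pipeline; each stage below is a deliberately simplified stand-in (thresholding, two features, a fixed rule) rather than a real trained module:

```python
# Skeleton of the three-stage CAD pipeline:
# segmentation/detection -> feature extraction -> classification.

def segment(image, threshold=128):
    """Detect suspicious pixels: here, simple intensity thresholding."""
    return [[1 if px >= threshold else 0 for px in row] for row in image]

def extract_features(image, mask):
    """Per-region features: area and mean intensity of flagged pixels."""
    flagged = [px for row, mrow in zip(image, mask)
               for px, m in zip(row, mrow) if m]
    area = len(flagged)
    mean = sum(flagged) / area if area else 0.0
    return {"area": area, "mean_intensity": mean}

def classify(features):
    """Decision making: a fixed rule standing in for a trained classifier."""
    return ("suspicious" if features["area"] >= 4
            and features["mean_intensity"] > 150 else "normal")

def cad_pipeline(image):
    mask = segment(image)
    return classify(extract_features(image, mask))
```

In a real system each stage would be replaced by its trained counterpart, but the interfaces between the stages stay the same.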
Abstract: Opinion extraction about products from customer
reviews is becoming an interesting area of research. Customer
reviews about products are nowadays available from blogs and
review sites. Tools are also being developed to extract opinions from these reviews to help users as well as merchants track the most suitable choice of product. Therefore, efficient methods and techniques are needed to extract opinions from reviews and blogs. As product reviews mostly contain discussion about features, functions and services, efficient techniques are required to extract user comments about the desired features, functions and services. In this paper we propose a novel idea to find product features from user reviews in an efficient way. Our focus in this
paper is to get the features and opinion-oriented words about
products from text through auxiliary verbs (AV) {is, was, are, were,
has, have, had}. From the results of our experiments we found that
82% of features and 85% of opinion-oriented sentences include AVs.
Thus these AVs are good indicators of features and opinion
orientation in customer reviews.
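The auxiliary-verb heuristic can be sketched with plain string matching; a real system would add POS tagging and noun-phrase chunking, so the split below is only an illustration of the idea:

```python
# Sketch of the auxiliary-verb heuristic: in a review sentence
# containing one of the AVs {is, was, are, were, has, have, had},
# the words before the AV are taken as a candidate product feature
# and the words after it as the opinion-oriented part.

AUX_VERBS = {"is", "was", "are", "were", "has", "have", "had"}

def extract(sentence):
    """Return (feature_candidate, opinion_candidate) or None."""
    words = sentence.lower().rstrip(".!?").split()
    for i, w in enumerate(words):
        if w in AUX_VERBS and 0 < i < len(words) - 1:
            return " ".join(words[:i]), " ".join(words[i + 1:])
    return None
```

For "The battery life is really amazing." the split yields "the battery life" as the feature candidate and "really amazing" as the opinion part, matching the paper's observation that AVs separate the two.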
Abstract: The segmentation of mouth and lips is a fundamental
problem in facial image analysis. In this paper we propose a method for lip segmentation based on the rg-color histogram. Statistical analysis shows that the rg color space is optimal for this kind of pure color-based segmentation. Initially a rough adaptive threshold selects a histogram region that ensures that all pixels in that region are skin pixels. Based on those pixels we build a Gaussian model which represents the skin pixel distribution and is used to obtain a refined, optimal threshold. We do not incorporate shape or edge
information. In experiments we show the performance of our lip pixel
segmentation method compared to the ground truth of our dataset and
a conventional watershed algorithm.
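The rg-chromaticity and Gaussian skin-model steps can be sketched as follows; the per-channel model, the k threshold and the sample pixels are simplifying assumptions for the illustration, not the paper's fitted model:

```python
# Sketch of the rg-chromaticity step: RGB pixels are projected to
# (r, g) = (R, G) / (R + G + B), a Gaussian model is fitted to assumed
# skin pixels, and pixels far from the model are kept as lip candidates.
from math import sqrt

def to_rg(pixel):
    r, g, b = pixel
    s = r + g + b or 1        # guard against black pixels
    return r / s, g / s

def fit_gaussian(rg_points):
    """Per-channel mean and standard deviation of the skin model."""
    n = len(rg_points)
    means = [sum(p[i] for p in rg_points) / n for i in (0, 1)]
    stds = [sqrt(sum((p[i] - means[i]) ** 2 for p in rg_points) / n) or 1e-9
            for i in (0, 1)]
    return means, stds

def is_skin(pixel, means, stds, k=2.5):
    """Within k standard deviations of the skin model in both channels."""
    r, g = to_rg(pixel)
    return (abs(r - means[0]) <= k * stds[0]
            and abs(g - means[1]) <= k * stds[1])

# Pixels selected by the rough threshold (assumed to be skin):
skin_samples = [(200, 140, 120), (205, 138, 118), (198, 142, 125),
                (210, 145, 122)]
model = fit_gaussian([to_rg(p) for p in skin_samples])
```

A skin-like pixel falls inside the model while a redder lip-like pixel falls outside it, which is the refined threshold the method relies on.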
Abstract: In this work, the primary compressive strength
components of human femur trabecular bone are qualitatively
assessed using image processing and wavelet analysis. The Primary
Compressive (PC) component in planar radiographic femur trabecular
images (N=50) is delineated by semi-automatic image processing
procedure. Auto threshold binarization algorithm is employed to
recognize the presence of mineralization in the digitized images. The
qualitative parameters such as apparent mineralization and total area
associated with the PC region are derived for normal and abnormal
images. The two-dimensional discrete wavelet transform is utilized to obtain appropriate features that quantify texture changes in medical images. The normal and abnormal samples of the human femur are comprehensively analyzed using the Haar wavelet. Six statistical parameters, namely mean, median, mode, standard deviation, mean absolute deviation and median absolute deviation, are derived at level-4 decomposition for both the approximation and the horizontal wavelet coefficients. The correlation coefficients of the various wavelet-derived parameters with the normal and abnormal groups are estimated for both the approximation and horizontal coefficients. In almost all cases the abnormal samples show a higher degree of correlation than the normal ones. Further, the parameters derived from the approximation coefficients show more correlation than those derived from the horizontal coefficients. The mean and median computed at the output of the level-4 Haar wavelet channel were found to be useful predictors for discriminating between the normal and abnormal groups.
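One Haar analysis step and the derived statistics can be sketched on a small synthetic matrix; the paper uses a level-4 2-D decomposition, but a single level suffices to show how the approximation and horizontal-detail sub-bands and their descriptors are obtained:

```python
# Sketch of one 2-D Haar analysis level and the statistics taken from
# the resulting approximation (LL) and horizontal-detail (LH) sub-bands.
import statistics

def haar_step_rows(m):
    """Pairwise averages and differences along each row."""
    avg = [[(r[2 * i] + r[2 * i + 1]) / 2 for i in range(len(r) // 2)]
           for r in m]
    dif = [[(r[2 * i] - r[2 * i + 1]) / 2 for i in range(len(r) // 2)]
           for r in m]
    return avg, dif

def haar2d_step(m):
    """One 2-D Haar level: returns (LL, LH); other sub-bands omitted."""
    low, _high = haar_step_rows(m)
    cols = list(map(list, zip(*low)))           # transpose
    c_low, c_high = haar_step_rows(cols)
    ll = list(map(list, zip(*c_low)))           # approximation
    lh = list(map(list, zip(*c_high)))          # horizontal detail
    return ll, lh

def describe(band):
    """Statistical descriptors of a sub-band, as used in the study."""
    vals = [v for row in band for v in row]
    mu = statistics.mean(vals)
    med = statistics.median(vals)
    return {"mean": mu, "median": med,
            "stdev": statistics.pstdev(vals),
            "mad": statistics.mean(abs(v - mu) for v in vals),
            "median_ad": statistics.median(abs(v - med) for v in vals)}

# Synthetic 4x4 "image" with alternating bright/dark rows:
image = [[4, 4, 8, 8],
         [2, 2, 6, 6],
         [4, 4, 8, 8],
         [2, 2, 6, 6]]
ll, lh = haar2d_step(image)
```

The LL band keeps the coarse intensity pattern while the LH band isolates the horizontal row-to-row variation, and the descriptors are then computed per band.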
Abstract: Biometric measures of one kind or another have been
used to identify people since ancient times, with handwritten
signatures, facial features, and fingerprints being the traditional
methods. Of late, systems have been built that automate the task of recognition using these methods and newer ones, such as hand geometry, voiceprints and iris patterns. These systems have different strengths and weaknesses. This work is composed of two sections. In the first section, we present an analytical and comparative study of common biometric techniques; the performance of each is reviewed and then tabulated. The second section covers the actual implementation of the techniques under consideration, carried out using a state-of-the-art tool, MATLAB, which helps to portray the corresponding results and effects effectively.
Abstract: In this experimental investigation, shake table tests were conducted on two reduced-scale models representing a normal single-room building constructed with Compressed Stabilized Earth Blocks (CSEB) made from locally available soil. One model was constructed with earthquake-resisting features (EQRF), having a sill band, a lintel band and vertical bands to control the building vibration, and the other was without earthquake-resisting features. To examine the seismic capacity of the models, particularly when subjected to long-period, large-amplitude ground motion over many cycles of repeated loading, the test specimens were shaken repeatedly until failure. The test results from a high-end data acquisition system show that the model with EQRF behaves better than the one without. This modified masonry model, with new material combined with new bands, is used to improve the behavior of masonry buildings.
Abstract: As the web continues to grow exponentially, the idea of crawling the entire web on a regular basis becomes less and less feasible, so domain-specific search engines, which cover the information of a specific domain, were proposed. As more information
becomes available on the World Wide Web, it becomes more difficult
to provide effective search tools for information access. Today,
people access web information through two main kinds of search
interfaces: Browsers (clicking and following hyperlinks) and Query
Engines (queries in the form of a set of keywords showing the topic
of interest) [2]. Better support is needed for expressing one's
information need and returning high quality search results by web
search tools. There appears to be a need for systems that do reasoning
under uncertainty and are flexible enough to recover from the
contradictions, inconsistencies, and irregularities that such reasoning
involves. In a multi-view problem, the features of the domain can be
partitioned into disjoint subsets (views) that are sufficient to learn the
target concept. Semi-supervised, multi-view algorithms, which
reduce the amount of labeled data required for learning, rely on the
assumptions that the views are compatible and uncorrelated. This
paper describes the use of semi-structured machine learning approach
with Active Learning for domain-specific search engines. A domain-specific search engine is "an information access system that allows access to all the information on the web that is relevant to a particular domain." The proposed work shows that, with the help of this approach, relevant data can be extracted with a minimum of queries fired by the user. It requires a small number of labeled data and a pool of unlabeled data to which the learning algorithm is applied to extract the required data.
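The labeled-plus-pool learning loop can be sketched as pool-based uncertainty sampling; a centroid-distance scorer stands in for the page classifier, and the one-dimensional "page features" and oracle are invented for the illustration:

```python
# Sketch of pool-based active learning for a domain-specific page
# classifier: start from a few labeled pages, repeatedly query the
# label of the pool document the current model is least certain about,
# and fold the answer back into the labeled set.

def centroid(xs):
    return sum(xs) / len(xs)

def predict_score(x, pos, neg):
    """Signed margin: > 0 means closer to the relevant centroid."""
    return abs(x - centroid(neg)) - abs(x - centroid(pos))

def active_learn(pool, oracle, seed_pos, seed_neg, budget):
    pos, neg = list(seed_pos), list(seed_neg)
    unlabeled = [x for x in pool if x not in pos + neg]
    for _ in range(budget):
        if not unlabeled:
            break
        # Query the most uncertain document (smallest absolute margin).
        x = min(unlabeled, key=lambda x: abs(predict_score(x, pos, neg)))
        unlabeled.remove(x)
        (pos if oracle(x) else neg).append(x)
    return pos, neg

# Relevant pages cluster near 1.0, irrelevant near 0.0 (toy features).
pool = [0.05, 0.1, 0.15, 0.45, 0.55, 0.9, 0.95, 1.0]
oracle = lambda x: x > 0.5            # hypothetical human labeller
pos, neg = active_learn(pool, oracle, [1.0], [0.05], budget=3)
```

The queries land near the decision boundary (the ambiguous mid-range pages), so a handful of labels is enough for the model to separate the remaining clearly relevant and clearly irrelevant pages.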