Abstract: The purpose of this study is to examine the model and characteristics of community participation suited to the sustainable development of the water market at Bang Noi Floating Market, Bangkonti District, Samutsongkhram Province. A total of 342 survey questionnaires were administered to potential respondents, and the researchers interviewed the community leader. The Appreciation Influence Control (AIC) technique was used in an open forum with 20 villagers. The findings revealed that, overall, most people participated at a moderate level in the sustainable development of Bang Noi Floating Market, particularly with regard to gaining benefits from developing its atmosphere and scenery for tourism; for example, the landscape is attractive and provided with public utilities. Participation in preserving and developing Bang Noi Floating Market remains rooted in the traditional way of life. Personal background factors such as age, level of education, occupation, and monthly income affect people's participation. Most participants are original residents whose houses and shops are located in or near the market. These people share in its benefits and have the power to shape the water market strategy, playing the major role in building the information database. It was also found that the leader and the villagers play important roles in compiling a five-layer physical database, with layers for the village location, the village boundaries, roads, the river, and premises. The cultural information consists of two layers: points of interest and itineraries. This information emerges from presentation and practice by the leader and the villagers in the community. All phases are presented for joint review and verification of the database by both the leader and the villagers as part of the participation process.
Abstract: In historical and social science, the influence of natural disasters upon society is a matter of great interest. In recent years, archives of natural disasters have been compiled largely by hand, which is inefficient and wasteful. We therefore propose a computer system for creating a historical natural disaster archive. As the target of this analysis we consider newspaper articles, which are typical documents recording the temporal relations among the events of a natural disaster. To perform this analysis, we identify occurrences in newspaper articles by means of index entries, taking into account the kinds of events that are specific to natural disasters, and we show the temporal relations between natural disasters. We designed and implemented an automatic system for the "extraction of the occurrences of natural disasters" and the "temporal relation table for natural disasters."
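To make the two components concrete, here is a toy sketch of extracting disaster occurrences from newspaper text via index entries (keyword patterns) and tabulating their temporal relations by date; the entry list and the date format are illustrative assumptions, not the paper's actual implementation.

```python
# Toy occurrence extraction + temporal relation table; all patterns
# and article strings are illustrative placeholders.
import re
from datetime import date

INDEX_ENTRIES = ["earthquake", "flood", "typhoon", "eruption"]
PATTERN = re.compile(
    r"(\d{4})-(\d{2})-(\d{2}).*?(" + "|".join(INDEX_ENTRIES) + r")",
    re.IGNORECASE)

def extract_occurrences(articles):
    occurrences = []
    for text in articles:
        for y, m, d, entry in PATTERN.findall(text):
            occurrences.append((date(int(y), int(m), int(d)), entry.lower()))
    return sorted(occurrences)          # rows ordered by time

articles = ["1995-01-17 A major earthquake struck the city ...",
            "1995-07-03 Floods followed weeks of heavy rain ..."]
for when, what in extract_occurrences(articles):
    print(when, what)   # earlier rows stand in "before" relation to later ones
```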
Abstract: One of the most used assumptions in logic programming
and deductive databases is the so-called Closed World Assumption
(CWA), according to which the atoms that cannot be inferred
from the programs are considered to be false (i.e. a pessimistic
assumption). One of the most successful semantics of conventional
logic programs based on the CWA is the well-founded semantics.
However, the CWA is not applicable in all circumstances in which information is handled; in such cases, the conventionally defined well-founded semantics behaves inadequately.
The solution we adopt in this paper is to extend the well-founded
semantics in order for it to be based also on other assumptions. The
basis of (default) negative information in the well-founded semantics
is given by the so-called unfounded sets. We extend this concept
by considering optimistic, pessimistic, skeptical and paraconsistent
assumptions, used to complete missing information from a program.
Our semantics, called the extended well-founded semantics, also expresses imperfect information, considered to be missing/incomplete, uncertain, and/or inconsistent, by using bilattices as multivalued logics. We provide a method of computing the extended well-founded semantics, show that the Kripke-Kleene semantics is captured by considering a skeptical assumption, and show that our semantics can be computed in polynomial time.
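For orientation, the classical notion that the paper generalizes is the unfounded set of Van Gelder, Ross, and Schlipf; a standard statement (of the classical version, not the paper's assumption-parameterized extension) is:

```latex
% Classical unfounded sets, which the paper extends with optimistic,
% pessimistic, skeptical and paraconsistent assumptions.
A set $U \subseteq B_P$ is \emph{unfounded} with respect to an
interpretation $I$ iff for each atom $a \in U$ and each rule
$a \leftarrow L_1, \dots, L_n$ in $P$, at least one of the following holds:
\begin{enumerate}
  \item some body literal $L_i$ is false in $I$; or
  \item some positive body literal $L_i$ occurs in $U$.
\end{enumerate}
The well-founded semantics declares the atoms of the greatest unfounded
set false; this is the (default) negative information mentioned above.
```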
Abstract: Fast retrieval of data is a basic need of users in any database application. This paper introduces a buffer-based query optimization technique in which queries are assigned weights according to their number of executions in a query bank. These queries and their optimized execution plans are loaded into the buffer at the start of the database application. For every incoming query, the system searches for a match in the buffer and executes the cached plan without creating a new one.
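A minimal sketch of such a plan buffer follows, assuming a query bank given as a list of executed query strings and an `optimizer` callable that produces a plan; all names are illustrative, not the paper's implementation.

```python
# Buffer-based plan cache: frequent queries get preloaded plans.
from collections import Counter

class PlanBuffer:
    def __init__(self, query_bank, optimizer, capacity=100):
        # Weight each query by its number of executions in the bank,
        # then preload plans for the heaviest queries.
        weights = Counter(query_bank)
        top = [q for q, _ in weights.most_common(capacity)]
        self.plans = {q: optimizer(q) for q in top}
        self.optimizer = optimizer

    def execute(self, query, engine):
        plan = self.plans.get(query)
        if plan is None:                 # miss: plan once, then cache
            plan = self.plans[query] = self.optimizer(query)
        return engine(plan)              # hit: no new plan is created

# buf = PlanBuffer(history_of_queries, optimizer=plan_fn)
# buf.execute("SELECT * FROM t WHERE id = 1", engine=run_plan)
```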
Abstract: Cluster analysis is the name given to a diverse collection of techniques that can be used to classify objects (e.g., individuals, quadrats, species). While Kohonen's Self-Organizing Feature Map (SOFM) or Self-Organizing Map (SOM) networks have been successfully applied as a classification tool to various problem domains, including speech recognition, image data compression, image or character recognition, robot control, and medical diagnosis, their potential as a robust substitute for cluster analysis remains relatively unresearched. SOM networks combine competitive learning with dimensionality reduction by smoothing the clusters with respect to an a priori grid, and they provide a powerful tool for data visualization. In this paper, SOM is used to create a toroidal mapping of a two-dimensional lattice in order to perform cluster analysis on the results of a chemical analysis of wines produced in the same region in Italy but derived from three different cultivars, referred to as the "wine recognition data" held in the University of California, Irvine database. The results are encouraging, and it is believed that SOM would make an appealing and powerful decision-support tool for clustering tasks and for data visualization.
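As a hedged illustration, the following sketch trains a SOM on the UCI wine recognition data using the third-party MiniSom package and scikit-learn's copy of the dataset; MiniSom's standard rectangular lattice stands in for the toroidal lattice used in the paper.

```python
# SOM clustering sketch on the wine recognition data (13 attributes,
# three cultivars); grid size and training length are illustrative.
from minisom import MiniSom
from sklearn.datasets import load_wine
from sklearn.preprocessing import StandardScaler

X, y = load_wine(return_X_y=True)
X = StandardScaler().fit_transform(X)          # scale chemical attributes

som = MiniSom(8, 8, X.shape[1], sigma=1.5, learning_rate=0.5,
              random_seed=0)
som.train_random(X, 5000)

# Each wine maps to its best-matching unit; nearby units form clusters.
for xi, yi in zip(X[:5], y[:5]):
    print("cultivar", yi, "-> unit", som.winner(xi))
```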
Abstract: This paper proposes a new method for image searching and indexing in databases based on a color temperature histogram. The color temperature histogram improves the performance of content-based image retrieval by combining color temperature with a histogram. It represents a range of 46 colors, more than either the color histogram or the dominant color temperature method provides. Moreover, with our method, colors that share the same color temperature can be distinguished, which the dominant color temperature method cannot do. The results showed that the color temperature histogram retrieved the correct image more often than the dominant color temperature method or the color histogram method, and it also took less time, so the color temperature histogram can be used for indexing and searching images.
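One way to build such a signature is sketched below: each pixel's RGB is converted to CIE xy chromaticity, its correlated color temperature (CCT) is estimated with McCamy's approximation, and the CCTs are binned into a 46-bin histogram, matching the abstract's figure. The sRGB conversion matrix and bin range are standard assumptions, not the paper's exact pipeline.

```python
# Color temperature histogram sketch for content-based retrieval.
import numpy as np

M = np.array([[0.4124, 0.3576, 0.1805],        # linear sRGB -> XYZ
              [0.2126, 0.7152, 0.0722],
              [0.0193, 0.1192, 0.9505]])

def cct_histogram(rgb, bins=46, lo=1000.0, hi=20000.0):
    xyz = rgb.reshape(-1, 3) @ M.T
    s = xyz.sum(axis=1)
    s[s == 0] = 1.0
    x, y = xyz[:, 0] / s, xyz[:, 1] / s
    n = (x - 0.3320) / (0.1858 - y + 1e-9)     # McCamy's formula (epsilon
    cct = 449*n**3 + 3525*n**2 + 6823.3*n + 5520.33   # guards div-by-zero)
    hist, _ = np.histogram(np.clip(cct, lo, hi), bins=bins, range=(lo, hi))
    return hist / hist.sum()                   # normalized signature

# img = ...  # H x W x 3 array of linear RGB in [0, 1]
# compare cct_histogram(img1) and cct_histogram(img2) with, e.g., L1 distance
```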
Abstract: For complete support of Quality of Service in Grid computing, it is preferable that the environment itself predict the resource requirements of a job using dedicated methods. Exact and correct prediction leads to exact matching of required resources with available resources. After the execution of each job, the resources used are saved in an active database named "History". First, some attributes are extracted from the submitted job; then, according to a defined similarity algorithm, the most similar previously executed jobs are retrieved from "History", and the resource requirements are predicted using statistical measures such as linear regression or the average. The new idea in this research is the use of an active database and centralized history maintenance. Implementation and testing of the proposed architecture yields prediction accuracies of 96.68% for the CPU usage of jobs, 91.29% for memory usage, and 89.80% for bandwidth usage.
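A simplified sketch of this history-based prediction follows: the most similar previously executed jobs are found by attribute distance, and resource usage is predicted as their average (a linear-regression variant would fit a trend over the neighbors instead). The similarity measure and all names are illustrative assumptions.

```python
# Average-based resource prediction from a "History" of executed jobs.
import numpy as np

def predict_usage(job_attrs, history_attrs, history_usage, k=5):
    """history_attrs: (N, d) job attributes; history_usage: (N,) e.g. CPU."""
    dist = np.linalg.norm(history_attrs - job_attrs, axis=1)
    nearest = np.argsort(dist)[:k]          # most similar executed jobs
    return history_usage[nearest].mean()    # predicted requirement

# cpu_pred = predict_usage(new_job, H_attrs, H_cpu)
# mem_pred = predict_usage(new_job, H_attrs, H_mem)
```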
Abstract: In this work a new platform for mobile-health systems is presented. The system's target application is providing decision support to rescue corps or military medical personnel in combat areas. The software architecture relies on a distributed client-server system that manages a hierarchy of wireless ad-hoc networks in which several different types of client operate. Each client is characterized by different hardware and software requirements. The lower hierarchy levels rely on a network of
completely custom devices that store clinical information and patient
status and are designed to form an ad-hoc network operating in the
2.4 GHz ISM band and complying with the IEEE 802.15.4 standard
(ZigBee). Medical personnel may interact with these devices, called MICs (Medical Information Carriers), by means of a PDA (Personal Digital Assistant) or an MDA (Medical Digital Assistant), transmit the information stored in their local databases, and issue service requests to the upper hierarchy levels using the IEEE 802.11 a/b/g standard (WiFi). The server acts as a repository that stores both medical evacuation forms and associated events (e.g., a teleconsulting request). All the actors participating in the diagnostic or evacuation process may asynchronously access this repository and update its content or generate new events. The designed system aims to optimise and improve information spreading and flow among all the system components, with the goal of improving both diagnostic quality and the evacuation process.
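A stripped-down sketch of that server-side repository is shown below: it stores evacuation forms and associated events, and any actor may asynchronously read or update them. Threading stands in for the real distributed client-server transport; all names are illustrative assumptions.

```python
# Minimal event/form repository sketch (in-memory, thread-safe).
import threading
from itertools import count

class Repository:
    def __init__(self):
        self._lock = threading.Lock()
        self._forms = {}                 # form_id -> evacuation form
        self._events = []                # e.g. teleconsulting requests
        self._ids = count(1)

    def put_form(self, form):
        with self._lock:
            form_id = next(self._ids)
            self._forms[form_id] = form
            return form_id

    def add_event(self, form_id, kind, payload):
        with self._lock:
            self._events.append({"form": form_id, "kind": kind,
                                 "payload": payload})

    def events_for(self, form_id):
        with self._lock:
            return [e for e in self._events if e["form"] == form_id]

# repo = Repository()
# fid = repo.put_form({"patient": "...", "status": "stable"})
# repo.add_event(fid, "teleconsulting_request", {"to": "field hospital"})
```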
Abstract: In this paper we present an approach to 3D face recognition based on extracting the principal components of range images using modified PCA methods, namely 2DPCA and bidirectional 2DPCA, also known as (2D)²PCA. A preprocessing stage was applied to the images to smooth them using median and Gaussian filtering. In the normalization stage we locate the nose tip, place it at the center of each image, and then crop each image to a standard size of 100×100. In the face recognition stage we extract the principal components of each image using both 2DPCA and (2D)²PCA. Finally, we use the Euclidean distance to measure the minimum distance between a given test image and the training images in the database, and we compare the results of the two methods. The best result achieved in experiments on a public face database is a face recognition rate of 83.3 percent for random facial expressions.
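For readers unfamiliar with 2DPCA, the following minimal NumPy sketch shows the core of the method: the image scatter matrix is built directly from 2-D images (no vectorization), its top eigenvectors form the projection, and matching is by nearest neighbor. It assumes `train` is an (M, m, n) array with labels `y`; names are illustrative.

```python
# 2DPCA feature extraction and nearest-neighbour matching sketch.
import numpy as np

def fit_2dpca(train, d):
    """Return the top-d right projection matrix X (n x d)."""
    mean = train.mean(axis=0)
    # Image scatter matrix G = (1/M) * sum_i (A_i - mean)^T (A_i - mean)
    G = sum((A - mean).T @ (A - mean) for A in train) / len(train)
    vals, vecs = np.linalg.eigh(G)       # eigenvalues in ascending order
    return vecs[:, -d:]                  # keep the top-d eigenvectors

def features(images, X):
    return np.stack([A @ X for A in images])   # Y_i = A_i X

# Usage sketch ((2D)2PCA additionally projects rows with a second matrix):
# X = fit_2dpca(train, d=10)
# gallery, probe = features(train, X), features(test, X)
# pred = [y[np.argmin([np.linalg.norm(p - g) for g in gallery])]
#         for p in probe]
```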
Abstract: Many databases covering various fields of the life sciences are available online. To identify well-used databases, we conducted a survey measuring how frequently life science databases are cited in the scientific literature. The survey counts how many articles available in the PubMed Central archive cite a given life science database. This paper presents and discusses the results of the survey.
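One simple way to obtain such counts, sketched below under the assumption that a full-text search approximates citation, is the public NCBI E-utilities esearch endpoint for the PMC database; the query terms are illustrative, not the paper's actual method.

```python
# Count PubMed Central hits for a database name via NCBI esearch.
import urllib.request, urllib.parse, re

def pmc_hit_count(term):
    url = ("https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?"
           + urllib.parse.urlencode({"db": "pmc", "term": term, "retmax": 0}))
    with urllib.request.urlopen(url) as resp:
        xml = resp.read().decode()
    return int(re.search(r"<Count>(\d+)</Count>", xml).group(1))

for db in ["GenBank", "UniProt", "PDB"]:   # example database names
    print(db, pmc_hit_count(f'"{db}"'))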
Abstract: Facial features are frequently used to represent local properties of a human face image in computer vision applications. In this paper, we present a fast algorithm that can extract facial features online such that they give a satisfying representation of a face image. It includes one step for coarse detection of each facial feature by AdaBoost, and another that increases the accuracy of the detected points by applying Active Shape Models (ASM) in the regions of interest. The resulting facial features are evaluated by matching them with artificial face models in physiognomy applications. The distance between the extracted features and those of the face models from the database is measured by means of the Hausdorff distance. In the experiments, the proposed method shows efficient performance in facial feature extraction and in an online physiognomy system.
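The Hausdorff distance used for this matching is standard; a minimal pure-NumPy sketch for two 2-D point sets follows (variable names are illustrative).

```python
# Directed and symmetric Hausdorff distance between point sets.
import numpy as np

def directed_hausdorff(A, B):
    """max over a in A of the distance from a to its nearest b in B."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    return d.min(axis=1).max()

def hausdorff(A, B):
    return max(directed_hausdorff(A, B), directed_hausdorff(B, A))

# features = np.array([[x1, y1], ...]); model = np.array([[u1, v1], ...])
# score = hausdorff(features, model)   # smaller means a closer match
```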
Abstract: This paper proposes an approach using a genetic algorithm to compute region-based image similarity. An image is represented by a set of segmented regions reflecting its color and texture properties, and is associated with a family of image features corresponding to those regions. The resemblance of two images is then defined as the overall similarity between the two families of image features, quantified by a similarity measure that integrates the properties of all the regions in the images. A genetic algorithm is applied to decide the most plausible matching. The performance of the proposed method is illustrated using examples from a database of general-purpose images and is shown to produce good results.
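As a toy illustration of the GA step, the sketch below searches over region-to-region assignments that maximize the summed region similarity, assuming for simplicity a square precomputed similarity matrix S (equal region counts); population size, mutation rate, and the fitness definition are illustrative assumptions.

```python
# Genetic algorithm sketch for region matching between two images.
import numpy as np

rng = np.random.default_rng(0)

def fitness(perm, S):
    # Overall similarity of a matching: mean similarity of matched pairs.
    return S[np.arange(len(perm)), perm].mean()

def ga_match(S, pop=40, gens=200, mut=0.2):
    n = S.shape[0]
    population = [rng.permutation(n) for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda p: -fitness(p, S))
        survivors = population[: pop // 2]
        children = []
        for p in survivors:
            child = p.copy()
            if rng.random() < mut:           # mutate: swap two assignments
                i, j = rng.choice(n, 2, replace=False)
                child[i], child[j] = child[j], child[i]
            children.append(child)
        population = survivors + children
    return max(population, key=lambda p: fitness(p, S))

# S = np.array(...)  # region-to-region similarity values in [0, 1]
# best = ga_match(S); print(best, fitness(best, S))
```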
Abstract: In this article we explore the application of a formal proof system to verification problems in cryptography. Cryptographic properties concerning the correctness or security of cryptographic algorithms are of great interest. Besides some basic lemmata, we explore an implementation of a complex function used in cryptography; more precisely, we describe formal properties of this implementation that we prove by computer. We describe formalized probability distributions (σ-algebras, probability spaces, and conditional probabilities), given in the formal language of the proof system Isabelle/HOL, and we prove Bayes' formula by computer. We then describe an application of the presented formalized probability distributions to cryptography. Furthermore, this article shows that computer proofs of complex cryptographic functions are possible, by presenting an implementation of the Miller-Rabin primality test that admits formal verification. Our achievements are a step towards the computer verification of cryptographic primitives, and they describe a basis for computer verification in cryptography. Computer verification can be applied to further problems in cryptographic research if the corresponding basic mathematical knowledge is available in a database.
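For reference, a compact executable sketch of the Miller-Rabin test itself is given below; the paper's contribution is its Isabelle/HOL formalization, which this Python rendering only illustrates.

```python
# Miller-Rabin probabilistic primality test (classic algorithm).
import random

def miller_rabin(n, rounds=20):
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    # Write n - 1 = d * 2^s with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False        # a is a witness: n is composite
    return True                 # probably prime

# print(miller_rabin(2**61 - 1))  # True: a Mersenne prime
```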
Abstract: In this paper we address the problem of musical style
classification, which has a number of applications like indexing in
musical databases or automatic composition systems. Starting from
MIDI files of real-world improvisations, we extract the melody track
and cut it into overlapping segments of equal length. From these
fragments, some numerical features are extracted as descriptors of
style samples. We show that a standard Bayesian classifier can be
conveniently employed to build an effective musical style classifier,
once this set of features has been extracted from musical data.
Preliminary experimental results show the effectiveness of the developed classifier, which represents the first component of a musical audio retrieval system.
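A hedged sketch of the kind of standard Bayesian classifier described above is shown below: Gaussian naive Bayes over fixed-length numeric feature vectors extracted from melody segments. The feature extraction from MIDI is summarized by random placeholders, and scikit-learn is an assumed tool choice, not the paper's.

```python
# Bayesian style classifier over per-segment melody descriptors.
import numpy as np
from sklearn.naive_bayes import GaussianNB

# X: (n_segments, n_features) descriptors, e.g. pitch mean/range,
#    interval statistics, note-duration statistics per segment.
# y: style label of the improvisation each segment was cut from.
X = np.random.rand(200, 8)              # placeholder feature matrix
y = np.random.randint(0, 2, size=200)   # placeholder style labels

clf = GaussianNB().fit(X, y)
print(clf.predict(X[:5]))               # predicted style per segment
```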
Abstract: This paper presents an algebraic approach to optimizing queries in a domain-specific database management system for protein structure data. The approach introduces several protein-structure-specific algebraic operators to query the complex data stored in an object-oriented database system. The Protein Algebra provides an extensible set of high-level Genomic Data Types and Protein Data Types, along with a comprehensive collection of appropriate genomic and protein functions. The paper also presents a query translator that converts high-level query specifications in the algebra into low-level query specifications in Protein-QL, a query language designed to query protein structure data. The query transformation process uses a Protein Ontology that serves the purpose of a dictionary.
Abstract: This paper presents a novel iris recognition system using a 1D log-polar Gabor wavelet and Euler numbers. The 1D log-polar Gabor wavelet is used to extract textural features, and Euler numbers are used to extract topological features of the iris. The proposed decision strategy uses these features to authenticate an individual's identity while maintaining a low false rejection rate. The algorithm was tested on the CASIA iris image database and found to perform better than existing approaches, with an overall accuracy of 99.93%.
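To make the two feature types concrete, here is an illustrative sketch (not the paper's exact pipeline) using scikit-image: an ordinary 2-D Gabor response stands in for the 1D log-polar Gabor wavelet, and the Euler number is taken over a thresholded iris region.

```python
# Texture (Gabor) and topology (Euler number) features for an iris image.
import numpy as np
from skimage.filters import gabor
from skimage.measure import euler_number

def texture_and_topology(iris, frequency=0.2):
    # Texture: magnitude of a 2-D Gabor response (stand-in for the
    # paper's 1D log-polar Gabor wavelet).
    real, imag = gabor(iris, frequency=frequency)
    texture = float(np.hypot(real, imag).mean())
    # Topology: Euler number (#objects - #holes) of the binarized pattern.
    binary = iris > iris.mean()
    return texture, euler_number(binary, connectivity=2)

# iris = ...  # a normalized grayscale iris region as a 2-D float array
# print(texture_and_topology(iris))
```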
Abstract: A fusion classifier composed of two modules, one made by a hidden Markov model (HMM) and the other by a support vector machine (SVM), is proposed to recognize faces with pose variations in open-set recognition settings. The HMM module captures the evolution of facial features across a subject's face using the subject's facial images only, without reference to the faces of others. Because of this captured evolutionary process of facial features, the HMM module retains a certain robustness against pose variations, yielding low false rejection rates (FRR) when recognizing faces across poses. This comes, however, at the price of poor false acceptance rates (FAR) when other faces are presented, because the module is built upon within-class samples only. The SVM module in the proposed model follows a special design able to substantially diminish the FAR and further lower the FRR. The proposed fusion classifier has been evaluated on the CMU PIE database and proven effective for open-set face recognition with pose variations. Experiments have also shown that it outperforms a face classifier made by an HMM or an SVM alone.
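A high-level sketch of an HMM + SVM fusion decision of this kind is shown below, assuming the hmmlearn and scikit-learn packages; the placeholder data, thresholds, and the conjunctive fusion rule are illustrative assumptions, not the paper's design.

```python
# HMM (within-class likelihood) + SVM (subject vs. impostor) fusion.
import numpy as np
from hmmlearn.hmm import GaussianHMM
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Placeholder data: sequences of 6-D features scanned across a face.
subject_seqs = [rng.normal(0, 1, (20, 6)) for _ in range(5)]
X = rng.normal(0, 1, (100, 6))           # subject + impostor samples
y = rng.integers(0, 2, 100)              # 1 = subject, 0 = impostor

hmm = GaussianHMM(n_components=3, random_state=0)
hmm.fit(np.vstack(subject_seqs), lengths=[len(s) for s in subject_seqs])
svm = SVC(probability=True).fit(X, y)

def accept(seq, feat, ll_thresh=-200.0, p_thresh=0.5):
    ll = hmm.score(seq)                  # HMM log-likelihood (genuine?)
    p = svm.predict_proba([feat])[0, 1]  # SVM probability of subject
    return ll > ll_thresh and p > p_thresh   # fusion: both must agree

print(accept(subject_seqs[0], X[0]))
```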
Abstract: Statistical process control (SPC) is one of the most powerful tools developed to assist in the effective control of quality; it involves collecting, organizing, and interpreting data during production. This article aims to show how industries can use SPC to control and continuously improve product quality through the monitoring of production, detecting deviations in the parameters that represent the process and thereby reducing the amount of off-specification product and thus the costs of production. The study conducted a technological forecast in order to characterize the research being done on SPC. The survey was conducted in the Spacenet and WIPO databases and at the National Institute of Industrial Property (INPI). The United States is among the largest depositors, along with filings via the PCT, and the patent classification section appearing in greatest abundance was section F.
Abstract: Main Memory Database systems (MMDB) store their data in main physical memory and provide very high-speed access. Conventional database systems are optimized for the particular characteristics of disk storage mechanisms. Memory-resident systems, on the other hand, use different optimizations to structure and organize data, as well as to make it reliable.
This paper provides a brief overview of MMDBs and of one memory-resident system, FastDB, and compares the processing time of this system with that of a typical disk-resident database, based on the results of implementing a TPC benchmark environment on both.
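In the same spirit as the benchmark above, here is a minimal, illustrative comparison of in-memory versus disk-resident processing, using SQLite rather than FastDB purely because it ships with Python; absolute numbers will vary by machine and say nothing about FastDB itself.

```python
# Timing the same tiny workload on a memory-resident vs. disk database.
import os, sqlite3, time

def run(conn, rows=100_000):
    cur = conn.cursor()
    cur.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, v TEXT)")
    t0 = time.perf_counter()
    cur.executemany("INSERT INTO t VALUES (?, ?)",
                    ((i, f"row{i}") for i in range(rows)))
    conn.commit()
    cur.execute("SELECT COUNT(*) FROM t WHERE v LIKE 'row9%'")
    cur.fetchone()
    return time.perf_counter() - t0

if os.path.exists("bench.db"):
    os.remove("bench.db")                   # start from a clean file
print("memory:", run(sqlite3.connect(":memory:")))
print("disk:  ", run(sqlite3.connect("bench.db")))
```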
Abstract: Effective estimation of software development effort is an important aspect of successful project management. Using a large database of 4106 developed projects, this study statistically examines the factors that influence development effort. The factors found to be significant are project size, the average number of developers who worked on the project, the type of development, the development language, the development platform, and the use of rapid application development. Among these factors, project size is the most critical cost driver. Unsurprisingly, this study found that the use of CASE tools does not necessarily reduce development effort, which supports the claim that the effect of tool use is subtle. As many current estimation models are rarely or unsuccessfully used, this study proposes a parsimonious parametric model for the prediction of effort that is both simple and more accurate than previous models.
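A hedged sketch of a parsimonious parametric effort model of this general kind follows: a log-linear regression of effort on size plus a few cost drivers, fitted with ordinary least squares. The coefficients, variable names, and synthetic data below are illustrative placeholders, not the paper's fitted model.

```python
# Log-linear effort model: ln(effort) = b0 + b1*ln(size) + b2*team + b3*rad
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
size = rng.lognormal(5, 1, 300)             # project size (e.g. FP)
team = rng.integers(1, 10, 300)             # average developer count
rad = rng.integers(0, 2, 300)               # rapid application development?
# Synthetic "true" effort with noise, just to exercise the fit.
effort = np.exp(1.0 + 0.9 * np.log(size) + 0.1 * team
                - 0.2 * rad + rng.normal(0, 0.3, 300))

X = np.column_stack([np.log(size), team, rad])
model = LinearRegression().fit(X, np.log(effort))
print(model.intercept_, model.coef_)        # recovered b0 and b1..b3
```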