Abstract: In this presentation, we discuss the use of information technologies in special education for teaching individuals with learning disabilities. Application software developed for this purpose is used to demonstrate the applicability of a database-integrated information processing system to alleviate the burden on educators. The software allows the preparation of individualized education programs based on predefined objectives, goals, and behaviors.
Abstract: Content-based Image Retrieval (CBIR) aims at searching image databases for specific images that are similar to a given query image based on matching of features derived from the image content. This paper focuses on a low-dimensional color based indexing technique for achieving efficient and effective retrieval performance. In our approach, the color features are extracted using the mean shift algorithm, a robust clustering technique. Then the cluster (region) mode is used as representative of the image in 3-D color space. The feature descriptor consists of the representative color of a region and is indexed using a spatial indexing method based on the R*-tree, thus avoiding the high-dimensional indexing problems associated with the traditional color histogram. Alternatively, the images in the database are clustered based on region feature similarity using Euclidean distance. Only representative (centroid) features of these clusters are indexed using the R*-tree, thus improving the efficiency. For similarity retrieval, each representative color in the query image or region is used independently to find regions containing that color. The results of these methods are compared. A Java based query engine supporting query-by-example is built to retrieve images by color.
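Since the abstract names mean shift as the color-clustering step, a minimal pure-Python sketch of the idea may help; the flat kernel, the bandwidth value, and the function name are illustrative assumptions, not the paper's implementation:

```python
import math

def mean_shift(points, bandwidth=30.0, iters=20):
    """Shift a copy of each point toward the mode of the local density
    (flat kernel): repeatedly replace it by the mean of all data points
    within `bandwidth`, then merge modes that converged together."""
    modes = [list(p) for p in points]
    for _ in range(iters):
        for i, m in enumerate(modes):
            nbrs = [p for p in points if math.dist(p, m) <= bandwidth]
            modes[i] = [sum(c) / len(nbrs) for c in zip(*nbrs)]
    merged = []
    for m in modes:
        if all(math.dist(m, q) > bandwidth / 2 for q in merged):
            merged.append(m)
    return merged

# Two well-separated groups of RGB pixels collapse to two modes, each
# usable as a low-dimensional representative color for spatial indexing.
pixels = [(10, 10, 10), (12, 11, 9), (9, 13, 11),
          (200, 200, 200), (198, 202, 199), (201, 197, 203)]
modes = mean_shift(pixels)
```

Each surviving mode plays the role of the 3-D representative color; similarity retrieval then reduces to a low-dimensional nearest-neighbor query instead of a high-dimensional histogram comparison.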
Abstract: This paper presents a MATLAB-based system named Smart Access Network Testing, Analyzing and Database (SANTAD), built for in-service transmission surveillance and self-restoration against fiber faults in fiber-to-the-home (FTTH) access networks. The developed program will be installed with the optical line terminal (OLT) at the central office (CO) to monitor the status and detect any fiber fault that occurs in the FTTH network, downstream from the CO towards residential customer locations. SANTAD is interfaced with an optical time domain reflectometer (OTDR) to accumulate every network testing result and display it on a single computer screen for further analysis. The program identifies and presents the parameters of each optical fiber line, such as the line's status (working or non-working), the attenuation at each point, the failure location, and other details shown on the OTDR's screen. The failure status will be delivered to field engineers for prompt action, while the failed line is diverted to a protection line to ensure continuous traffic flow. This approach has bright prospects for improving the survivability and reliability as well as increasing the efficiency and monitoring capabilities of FTTH.
Abstract: The National Agricultural Biotechnology Information
Center (NABIC) plays a leading role in the biotechnology information
database for agricultural plants in Korea. Since 2002, we have
concentrated on functional genomics of major crops, building an
integrated biotechnology database for agro-biotech information that
focuses on bioinformatics of major agricultural resources such as rice,
Chinese cabbage, and microorganisms. In NABIC, the
integrated biotechnology database provides useful information
through a user-friendly web interface that allows analysis of genome
infrastructure, multiple plants, microbial resources, and living
modified organisms.
Abstract: With the development of the Internet and database application techniques, it has become common for authorized users to remotely query and access many databases over the Internet, raising the problem of how to protect the copyright of relational databases. This paper first briefly introduces the cloud model, including cloud generators and similarity clouds. Then, combining the properties of the cloud model with the idea of digital watermarking and the properties of relational databases, a method of protecting relational database copyright with a cloud watermark is proposed. The corresponding watermark algorithms, a cloud watermark embedding algorithm and a detection algorithm, are also presented. Experiments are run and the results analyzed to validate the correctness and feasibility of the watermarking scheme. Finally, the prospects of relational database watermarking and its research directions are discussed.
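For readers unfamiliar with the cloud model, a forward normal cloud generator (the standard construction with expectation Ex, entropy En, and hyper-entropy He) can be sketched as follows; the parameter values and function name are illustrative, and this is not the paper's embedding algorithm:

```python
import random

def normal_cloud_drops(ex, en, he, n, seed=None):
    """Forward normal cloud generator: for each drop, first draw a
    randomized entropy En' ~ N(En, He^2), then the drop x ~ N(Ex, En'^2).
    He > 0 makes the drops scatter more than a plain Gaussian would."""
    rng = random.Random(seed)
    drops = []
    for _ in range(n):
        en_prime = abs(rng.gauss(en, he))  # randomized spread
        drops.append(rng.gauss(ex, en_prime))
    return drops

drops = normal_cloud_drops(ex=5.0, en=1.0, he=0.1, n=5000, seed=1)
```

In a cloud-watermark scheme, drops generated from secret (Ex, En, He) parameters can be blended into numeric attributes; detection then tests whether values extracted from a suspect database form a similar cloud.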
Abstract: Today's business environment requires that companies have access to highly relevant information in a matter of seconds.
Modern Business Intelligence tools rely on data structured mostly in traditional dimensional database schemas, typically represented by
star schemas. Dimensional modeling is already recognized as a
leading industry standard in the field of data warehousing, although
several drawbacks and pitfalls have been reported. This paper focuses on
the analysis of another data warehouse modeling technique,
anchor modeling, and compares its characteristics with the standard dimensional modeling technique from a query performance perspective. The analysis reports
the performance of queries executed on database
schemas structured according to the principles of each database modeling
technique.
Abstract: Visualizing sound and noise often helps us to determine
appropriate control over source localization. Near-field acoustic
holography (NAH) is a powerful tool for this ill-posed problem.
In practice, however, due to the small finite aperture size, the discrete
Fourier transform (FFT) based NAH cannot predict the active
region of interest (AROI) near the edges of the plane. A few
approaches have been proposed in theory for solving the finite aperture problem,
but most of these methods are not well suited to
practical implementation, especially near the edges of the source. In
this paper, a zip-stuffing extrapolation approach with a
2D Kaiser window is suggested. It operates in complex wavenumber space
to localize the predicted sources. We numerically form a practical
environment with touch-impact databases to test the localization of
the sound source. It is observed that zip-stuffing aperture extrapolation
and 2D windowing with evanescent components provide better accuracy,
especially for small apertures and their derivatives.
Abstract: The identification and classification of spine deformity play an important role when considering surgical planning for adolescent patients with idiopathic scoliosis. The subject of this article is the Lenke classification of scoliotic spines using Cobb angle measurements. The purpose is two-fold: (1) design a rule-based diagram to assist clinicians in the classification process and (2) investigate a computer classifier which improves the classification time and accuracy. The rule-based diagram's efficiency was evaluated in a series of scoliotic classifications by 10 clinicians. The computer classifier was tested on a radiographic measurement database of 603 patients. Classification accuracy was 93% using the rule-based diagram and 99% for the computer classifier. Both the computer classifier and the rule-based diagram can efficiently assist clinicians in their Lenke classification of spine scoliosis.
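As a rough illustration of how a rule-based Lenke-style decision could be coded, the sketch below applies only the commonly cited 25° structural-curve threshold to three Cobb angles. Real Lenke typing also uses side-bending films and sagittal modifiers, so the thresholds, simplifications, and function name here are assumptions, not the article's actual diagram:

```python
def curve_type(pt, mt, tl):
    """Simplified Lenke-style curve typing from Cobb angles (degrees)
    of the proximal-thoracic (PT), main-thoracic (MT) and
    thoracolumbar/lumbar (TL) curves. A curve counts as 'structural'
    here when its Cobb angle is >= 25 degrees."""
    structural = {'PT': pt >= 25, 'MT': mt >= 25, 'TL': tl >= 25}
    # the major curve is the one with the largest Cobb angle
    major = max(('PT', pt), ('MT', mt), ('TL', tl), key=lambda c: c[1])[0]
    if major == 'MT':
        if structural['PT'] and structural['TL']:
            return 4  # triple major
        if structural['PT']:
            return 2  # double thoracic
        if structural['TL']:
            return 3  # double major
        return 1      # main thoracic
    if major == 'TL':
        return 6 if structural['MT'] else 5  # TL/L or TL/L-MT
    return 2  # a major PT curve is rare; grouped with double thoracic here
```

Encoding the decision diagram this way is what allows a computer classifier to process a whole radiographic measurement database in one pass.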
Abstract: The network traffic data provided for the design of
intrusion detection are typically large, contain ineffective information, and
enclose only limited and ambiguous information about users' activities.
We study these problems and propose a two-phase approach in our
intrusion detection design. In the first phase, we develop a
correlation-based feature selection algorithm to remove worthless
information from the original high-dimensional database. Next, we
design an intrusion detection method to solve the problems of
uncertainty caused by limited and ambiguous information. In the
experiments, we choose six UCI databases and the DARPA KDD99
intrusion detection data set as our evaluation tools. Empirical studies
indicate that our feature selection algorithm is capable of reducing the
size of the data set, and our intrusion detection method achieves better
performance than the participating intrusion detectors.
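The correlation-based feature selection idea can be illustrated with the standard CFS merit heuristic: a feature subset scores well when its features correlate strongly with the class but weakly with each other. This is a generic sketch with hypothetical example data, not the paper's algorithm:

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def cfs_merit(features, target):
    """CFS merit: k * r_cf / sqrt(k + k*(k-1) * r_ff), where r_cf is the
    mean feature-class correlation and r_ff the mean feature-feature
    correlation. Irrelevant or redundant features drag the score down."""
    k = len(features)
    r_cf = sum(abs(pearson(f, target)) for f in features) / k
    pairs = [(i, j) for i in range(k) for j in range(i + 1, k)]
    r_ff = (sum(abs(pearson(features[i], features[j])) for i, j in pairs)
            / len(pairs)) if pairs else 0.0
    return k * r_cf / math.sqrt(k + k * (k - 1) * r_ff)
```

Ranking candidate subsets by this merit lets a detector discard the "worthless information" before the uncertainty-handling phase sees the data.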
Abstract: This paper is mainly concerned with the application of
a novel technique of data interpretation for classifying measurements
of plasma columns in Tokamak reactors for nuclear fusion
applications. The proposed method exploits several concepts derived
from soft computing theory. In particular, Artificial Neural Networks
and Multi-Class Support Vector Machines have been exploited to
classify magnetic variables useful to determine shape and position of
the plasma with a reduced computational complexity. The proposed
technique is used to analyze simulated databases of plasma equilibria
based on ITER geometry configuration. As well as demonstrating the
successful recovery of scalar equilibrium parameters, we show that
the technique can yield practical advantages compared with earlier
methods.
Abstract: This paper presents a new approach for centralized
monitoring and self-protection against fiber faults in fiber-to-the-home
(FTTH) access networks using Smart Access Network Testing,
Analyzing and Database (SANTAD). SANTAD will be installed
with the optical line terminal (OLT) at the central office (CO) for in-service
transmission surveillance and fiber fault localization within FTTH
with a point-to-multipoint (P2MP) configuration, downstream from the CO
towards customer residential locations, based on the graphical user
interface (GUI) processing capabilities of MATLAB.
SANTAD is able to detect any fiber fault as well as identify the
failure location in the network system. SANTAD enables the status of
each optical network unit (ONU) connected line to be displayed on
one screen, with the capability to measure the attenuation and detect
failures simultaneously. The analysis results and information will be
delivered to field engineers for prompt action, while the
failed line is diverted to a protection line to ensure continuous traffic
flow. This approach has bright prospects for improving
the survivability and reliability as well as increasing the efficiency and
monitoring capabilities of FTTH.
Abstract: Data is available in abundance in any business
organization. It includes records for finance, maintenance,
inventory, progress reports, etc. As time progresses, the data keeps
accumulating, and the challenge is to extract information from
this data bank. Knowledge discovery from these large and complex
databases is a key problem of this era. Data mining and machine
learning techniques are needed that can scale to the size of the
problem and can be customized to the business application. To
derive accurate and relevant information for a particular
problem, business analysts need to develop multidimensional models
that give reliable information so that they can make the right
decision for that problem. If the multidimensional model does
not possess advanced features, accuracy cannot be expected.
The present work involves the development of a multidimensional
data model incorporating advanced features. The computation
criterion is based on data precision and includes a slowly
changing time dimension. The final results are displayed in graphical
form.
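A slowly changing time dimension is commonly handled with Type-2 versioning: instead of overwriting an attribute, the current dimension row is closed and a new dated version is appended. The row layout and function name below are illustrative assumptions, not the paper's model:

```python
from datetime import date

def scd2_update(dim_rows, key, new_attrs, today):
    """Type-2 slowly changing dimension update: expire the current row
    for `key` and append a new version, preserving full history."""
    for row in dim_rows:
        if row['key'] == key and row['end'] is None:
            if all(row.get(k) == v for k, v in new_attrs.items()):
                return dim_rows          # nothing changed, keep as-is
            row['end'] = today           # close the old version
    dim_rows.append({'key': key, **new_attrs,
                     'start': today, 'end': None})
    return dim_rows

# A customer moves region: the old row is dated, the new one is open-ended.
rows = [{'key': 42, 'region': 'North',
         'start': date(2020, 1, 1), 'end': None}]
scd2_update(rows, 42, {'region': 'South'}, date(2021, 6, 1))
```

Because both versions survive, queries over past periods still aggregate against the region that was valid at that time, which is what makes the model's precision-based computations reliable.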
Abstract: Road maps are used in numerous daily activities,
but it is a hassle to construct and update a road map whenever there
are changes. At Universiti Malaysia Sarawak, research on Automatic
Road Extraction (ARE) was carried out to solve the difficulties in
updating road maps. The research started with satellite images
(SI), in short, the ARE-SI project. A Hybrid Simple Colour Space
Segmentation & Edge Detection (Hybrid SCSS-EDGE) algorithm
was developed to extract roads automatically from satellite
images. In order to extract the road network accurately, the satellite
image must be analyzed prior to the extraction process: the
characteristics of its elements are analyzed and the
relationships among them determined. In this study, road
regions are extracted based on colour space elements and the edge details
of roads. In addition, an edge detection method is applied to further filter
out non-road regions. The extracted road regions are validated
using a segmentation method. These results are valuable for building
road maps and detecting changes in an existing road database.
The proposed Hybrid Simple Colour Space Segmentation and Edge
Detection (Hybrid SCSS-EDGE) algorithm can perform these tasks
fully automatically: the user only needs to input a high-resolution
satellite image and wait for the result. Moreover, the system
works on complex road networks and generates the extraction result in
seconds.
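The edge-detection filtering step could, for instance, use a Sobel gradient such as the sketch below; this is a generic illustration of edge detection on a grayscale grid, not the Hybrid SCSS-EDGE algorithm itself:

```python
def sobel_magnitude(img):
    """Gradient magnitude of a 2-D grayscale grid (list of lists);
    border pixels are left at zero for simplicity."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    kx = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal gradient
    ky = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # vertical gradient
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(kx[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(ky[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out
```

Regions that pass a colour-space threshold but show no road-like edge response can then be rejected as non-road.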
Abstract: Mel Frequency Cepstral Coefficient (MFCC) features
are widely used as acoustic features for speech recognition as well
as speaker recognition. In MFCC feature representation, the Mel frequency
scale is used to get a high resolution in low frequency region,
and a low resolution in high frequency region. This kind of processing
is good for obtaining stable phonetic information, but not suitable
for speaker features that are located in high frequency regions. The
speaker individual information, which is non-uniformly distributed
in the high frequencies, is equally important for speaker recognition.
Based on this fact, we propose an admissible wavelet packet based
filter structure for speaker identification. The multiresolution capabilities
of the wavelet packet transform are used to derive the new features.
The proposed scheme differs from previous wavelet-based work
mainly in the design of the filter structure: unlike others, the proposed
filter structure does not follow the Mel scale. Closed-set speaker
identification experiments performed on the TIMIT database show
improved identification performance compared to other commonly
used Mel-scale-based filter structures using wavelets.
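To illustrate what a wavelet packet decomposition does, the sketch below uses the simple Haar filter (not the paper's admissible filter structure): unlike the plain wavelet transform, both the low- and high-frequency bands are split at every level, so the high-frequency regions carrying speaker information keep fine resolution:

```python
import math

def haar_step(x):
    """One Haar analysis step: (approximation, detail) half-bands,
    with orthonormal 1/sqrt(2) scaling so energy is preserved."""
    a = [(x[i] + x[i + 1]) / math.sqrt(2) for i in range(0, len(x), 2)]
    d = [(x[i] - x[i + 1]) / math.sqrt(2) for i in range(0, len(x), 2)]
    return a, d

def wavelet_packet(x, depth):
    """Full wavelet packet tree: split BOTH half-bands at every level,
    yielding 2**depth equal-width subbands of the input."""
    nodes = [x]
    for _ in range(depth):
        nxt = []
        for node in nodes:
            a, d = haar_step(node)
            nxt.extend([a, d])
        nodes = nxt
    return nodes
```

Subband energies (or their logarithms) from such a tree are a common choice of feature vector; an admissible filter structure chooses which nodes to split instead of splitting all of them.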
Abstract: This paper presents a survey analysis of
network bandwidth management based on papers published in the
IEEE Xplore database over the three years from 2009 to 2011. Network
bandwidth management is a topic of current discussion for computer
engineering applications and systems. A detailed comparison of the
published papers is presented to look further into this critical
research area for IP-based networks.
Important information is presented, such as the network focus area,
modeling approaches in IP-based networks, and the filtering or scheduling used at
the network applications layer. Much research on
bandwidth management has been done in the broader network area,
but less has been done specifically for IP-based networks at the
applications layer. A few studies have contributed new
schemes or enhanced models, but the issue of bandwidth
management at the applications layer still arises. This survey
serves as basic research towards the implementation of a network
bandwidth management technique, a new framework model, and a
scheduling scheme or algorithm in an IP-based network, focusing
on a bandwidth control mechanism for prioritizing network
traffic at the applications layer.
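A bandwidth control mechanism that prioritizes traffic is often built from token buckets, one per traffic class; the sketch below is a generic illustration with assumed rates, not a scheme from any of the surveyed papers:

```python
class TokenBucket:
    """Token-bucket rate limiter: tokens accrue at `rate` units per
    second up to `burst`; a packet is forwarded only if enough tokens
    remain, otherwise it is dropped or queued by the caller."""
    def __init__(self, rate, burst):
        self.rate, self.burst = rate, burst
        self.tokens, self.last = burst, 0.0

    def allow(self, size, now):
        # refill proportionally to elapsed time, capped at the burst size
        self.tokens = min(self.burst,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= size:
            self.tokens -= size
            return True
        return False
```

Giving high-priority application traffic a bucket with a larger rate than best-effort traffic is one simple way to enforce prioritization at the applications layer.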
Abstract: A company's ability to draw on a range of external
sources to meet its needs for innovation has been termed 'open
innovation' (OI). Very few empirical analyses of
Small and Medium Enterprises (SMEs)
describe and seek to understand the characteristics and implications of this
new paradigm.
The study's objective is to identify and characterize different
modes of OI, (considering innovation process phases and the variety
and breadth of the collaboration), determinants, barriers and
motivations in SMEs. Therefore a survey was carried out among
Italian manufacturing firms and a database of 105 companies was
obtained. With regard to data elaboration, a factorial and cluster
analysis has been conducted and three different OI modes have
emerged: selective low open, unselective open upstream, and
mid-partners integrated open. The different behaviours of the three
clusters in terms of determinant factors, performance, firms'
technology intensity, barriers and motivations have been analyzed
and discussed.
Abstract: Using a texture database, a statistical estimation of
spring-back was conducted in this study. Both the
spring-back in bending deformation and the experimental
data related to crystal orientation show significant dispersion.
Therefore, a probabilistic statistical approach was established for the
proper quantification of these values. Correlation was examined
among the parameters F(x) of spring-back, F(x) of the buildup fraction
to three orientations after 92° bending, and F(x) of an as-received part
on the basis of the three-parameter Weibull distribution. The resulting
spring-back estimation using the texture database yielded excellent
estimates compared with experimental values.
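For reference, the three-parameter Weibull distribution used above has the CDF F(x) = 1 - exp(-((x - γ)/η)^β) for x ≥ γ, with shape β, scale η, and location γ. A small sketch follows; the parameter names and example values are illustrative, not the study's fitted values:

```python
import math

def weibull3_cdf(x, shape, scale, location):
    """Three-parameter Weibull CDF:
    F(x) = 1 - exp(-((x - location)/scale)**shape) for x >= location."""
    if x <= location:
        return 0.0
    return 1.0 - math.exp(-((x - location) / scale) ** shape)

def weibull3_quantile(p, shape, scale, location):
    """Inverse CDF, e.g. for a median spring-back estimate (p = 0.5)."""
    return location + scale * (-math.log(1.0 - p)) ** (1.0 / shape)
```

Fitting (β, η, γ) to the dispersed measurements lets scattered spring-back values be summarized by quantiles rather than a single deterministic number.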
Abstract: A new approach to the timestamp ordering problem in
serializable schedules is presented. Since the number of users using
databases is increasing rapidly, accuracy and the need for high
throughput are main topics in the database area. Strict 2PL does not
allow all possible serializable schedules and therefore does not yield high
throughput. The main advantages of the approach are its ability to
enforce recoverable transaction execution and the high
achievable performance of concurrent execution in central databases.
Compared with Strict 2PL, the general structure of the algorithm is
simple and deadlock-free, and it allows the execution of all possible serializable
schedules, which results in high throughput. Various examples
involving different orders of database operations are discussed.
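The basic timestamp-ordering rules such an approach builds on can be sketched as follows; this is the textbook protocol, not the paper's extended algorithm, and the class name is an assumption:

```python
class TimestampOrdering:
    """Basic timestamp-ordering rules: a read or write is rejected
    (forcing the transaction to restart) if it arrives 'too late'
    relative to the item's recorded read/write timestamps. Unlike
    Strict 2PL there are no locks, so deadlock cannot occur."""
    def __init__(self):
        self.read_ts = {}   # item -> largest timestamp that read it
        self.write_ts = {}  # item -> largest timestamp that wrote it

    def read(self, ts, item):
        if ts < self.write_ts.get(item, 0):
            return False  # item already overwritten by a younger txn
        self.read_ts[item] = max(self.read_ts.get(item, 0), ts)
        return True

    def write(self, ts, item):
        if ts < self.read_ts.get(item, 0) or ts < self.write_ts.get(item, 0):
            return False  # a younger txn already read or wrote it
        self.write_ts[item] = ts
        return True
```

Because conflicts are resolved by comparing fixed timestamps instead of waiting on locks, every schedule the protocol admits is serializable in timestamp order and no wait-for cycle can form.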
Abstract: The hospital and the health-care center of a
community, as settings for people's life-care and health-care,
must provide more and better services for patients and residents. After
establishing an Electronic Medical Record (EMR) system in the
hospital, which is a necessity, providing pervasive services is a further
step. Our objective in this paper is to use pervasive computing in a
healthcare case study, based on an EMR database that coordinates
application services over the network to form a service environment for
medical and health-care. Our method also categorizes hospital
spaces into three types: public spaces, private spaces, and isolated
spaces. Although there are many projects on pervasive
computing in healthcare, all of them concentrate on disease
recognition, designing smart clothes, or providing services only for
patients. The proposed method was implemented in a hospital, and the
obtained results show that it is suitable for our purpose.
Abstract: Current face recognition systems often
use either SVM or AdaBoost techniques for the face detection part and
PCA for the face recognition part. In this paper, we offer a novel method
not only for a powerful face detection system based on
six-segment filters (SSR) and AdaBoost learning algorithms, but also
for a face recognition system. A new exclusive face detection
algorithm has been developed and connected with the recognition
algorithm. As a result, we obtained high overall system
performance compared with current systems. The proposed algorithm
was tested on the CMU, FERET, UNIBE, and MIT face databases, and
significant performance was obtained.