Abstract: In recent years, research on knowledge sources, decision support systems, data mining, and the process of knowledge discovery in databases has grown in importance, and these areas are considered to influence one another. In this article, we merge an information source and a knowledge source to propose a knowledge-based system, within the scope of management, based on the storage and retrieval of knowledge, in order to manage information and improve decision making and resource use. We use data mining methods and the Apriori algorithm in the knowledge discovery process. One problem with the Apriori algorithm is that the user must specify a minimum support threshold for the association rules. Imagine a user who wants to apply the Apriori algorithm to a database with millions of transactions: the user cannot have knowledge of all the transactions in that database, and therefore cannot specify a suitable threshold. Our purpose in this article is to improve the Apriori algorithm. To achieve this goal, we use fuzzy logic to place the data in different clusters before applying the Apriori algorithm to the database, and we also try to suggest the most suitable threshold to the user automatically.
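As a minimal sketch of the underlying technique, the following Python code mines frequent itemsets with Apriori and, when no threshold is given, suggests one automatically. The auto-threshold heuristic here (mean relative frequency of single items) and the sample transactions are illustrative assumptions, not the fuzzy-clustering method proposed in the abstract.

```python
def apriori(transactions, min_support=None):
    """Minimal Apriori frequent-itemset miner.

    If min_support is None, a threshold is suggested automatically from the
    data -- here the mean relative frequency of single items, a simple
    illustrative heuristic standing in for the paper's fuzzy approach."""
    n = len(transactions)
    counts = {}
    for t in transactions:
        for item in t:
            key = frozenset([item])
            counts[key] = counts.get(key, 0) + 1
    if min_support is None:
        min_support = sum(counts.values()) / (len(counts) * n)
    # Frequent 1-itemsets
    frequent = {s: c / n for s, c in counts.items() if c / n >= min_support}
    result = dict(frequent)
    k = 2
    while frequent:
        # Candidate k-itemsets: unions of frequent (k-1)-itemsets
        candidates = {a | b for a in frequent for b in frequent if len(a | b) == k}
        frequent = {}
        for cand in candidates:
            c = sum(1 for t in transactions if cand <= set(t))
            if c / n >= min_support:
                frequent[cand] = c / n
        result.update(frequent)
        k += 1
    return result, min_support

txns = [{"milk", "bread"}, {"milk", "bread", "eggs"}, {"bread"}, {"milk", "eggs"}]
itemsets, threshold = apriori(txns)
```

With these four transactions the suggested threshold is 8/12 ≈ 0.67, so only the frequent single items survive; a fuzzy pre-clustering step, as the abstract proposes, would refine this suggestion per cluster.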
Abstract: Knowledge of the relationships between characters can help readers understand the overall story or plot of literary fiction. In this paper, we present a method for extracting the specific relationships between characters from Korean literary fiction. Generally, methods for extracting relationships between characters in text are statistical or computational methods based on the sentence distance between characters, without considering Korean linguistic features. Furthermore, it is difficult to extract relationships with direction from text, such as one-sided love, because such methods consider only the weight of a relationship, not its direction. Therefore, in order to identify specific relationships between characters, we propose a statistical method that considers linguistic features, such as syntactic patterns and speech verbs in Korean. The result of our method is represented as a weighted directed graph of the relationships between the characters. Furthermore, we expect that the proposed method could be applied to the analysis of relationships between characters in other content, such as movies or TV dramas.
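The weighted directed graph of character relationships described above can be sketched as follows. The character names, weights, and the idea of counting speech-verb mentions are illustrative assumptions, not data or identifiers from the paper.

```python
from collections import defaultdict

# Adjacency map: graph[speaker][target] = accumulated relationship weight.
graph = defaultdict(lambda: defaultdict(float))

def add_mention(speaker, target, weight=1.0):
    """Record that `speaker` refers to `target` (e.g. via a speech verb).
    Direction matters, so one-sided relationships stay asymmetric."""
    graph[speaker][target] += weight

add_mention("Cheolsu", "Younghee", 3.0)  # Cheolsu mentions Younghee often
add_mention("Younghee", "Cheolsu", 0.5)  # Younghee rarely mentions Cheolsu

# A strong asymmetry between the two edge weights is how a directed
# relationship such as one-sided love would show up in the graph.
one_sided = graph["Cheolsu"]["Younghee"] > 2 * graph["Younghee"]["Cheolsu"]
```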
Abstract: Large scale computing infrastructures have been widely
developed with the core objective of providing a suitable platform
for high-performance and high-throughput computing. These systems
are designed to support resource-intensive and complex applications,
which can be found in many scientific and industrial areas. Currently,
large scale data-intensive applications are hindered by the high
latencies that result from accessing widely distributed data.
Recent works have suggested that improving data locality is key to
moving towards exascale infrastructures efficiently, as solutions to this
problem aim to reduce the bandwidth consumed in data transfers, and
the overheads that arise from them. There are several techniques that
attempt to move computations closer to the data. In this survey we
analyse the different mechanisms that have been proposed to provide
data locality for large scale high-performance and high-throughput
systems. This survey intends to assist the scientific computing community
in understanding the various technical aspects and strategies that
have been reported in recent literature regarding data locality. As a
result, we present an overview of locality-oriented techniques, which
are grouped into four main categories: application development, task
scheduling, in-memory computing and storage platforms. Finally, the
authors include a discussion on future research lines and synergies
among these techniques.
Abstract: Recently, Automatic Speech Recognition (ASR) systems have been used to assist children in language acquisition, as they have the ability to detect human speech signals. Despite the benefits offered by ASR systems, there is a lack of such systems for Malay-speaking children. One of the contributing factors is the lack of a continuous speech database for the target users. Although cross-lingual adaptation is a common solution for developing ASR systems for under-resourced languages, it is not viable for children, as there are very limited speech databases to serve as a source model. In this research, we propose a two-stage adaptation for the development of an ASR system for Malay-speaking children using a very limited database. The two-stage adaptation comprises cross-lingual adaptation (first stage) and cross-age adaptation (second stage). In the first stage, a well-known speech database that is phonetically rich and balanced is adapted to a medium-sized database of Malay adults using supervised MLLR. The second-stage adaptation uses the acoustic model generated by the first adaptation, and the target database is a small-sized database of the target users. We measured the performance of the proposed technique using word error rate and compared it with a conventional benchmark adaptation. The two-stage adaptation proposed in this research achieves better recognition accuracy than the benchmark adaptation in recognizing children's speech.
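The word error rate metric used for the evaluation above can be sketched as the word-level Levenshtein distance divided by the reference length. The example sentences are made up for illustration; the databases and models from the abstract are not reproduced here.

```python
def word_error_rate(reference, hypothesis):
    """WER = (substitutions + deletions + insertions) / reference length,
    computed via edit distance over word tokens."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution/match
    return dp[len(ref)][len(hyp)] / len(ref)

# One substituted word out of four reference words -> WER 0.25
wer = word_error_rate("saya suka makan nasi", "saya suka makan roti")
```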
Abstract: Location selection presents a crucial decision problem in today's business world, where strategic decision-making processes have critical importance. Thus, location selection has strategic importance for companies in strengthening their competitive position, increasing corporate performance and efficiency, and lowering production and transportation costs. The right choice of location has a direct impact on a company's commercial success. In this study, a store location selection problem of Carglass Turkey, which operates in the vehicle glass sector, is handled. As this problem includes both tangible and intangible criteria, the Analytic Network Process (ANP) was adopted as the main methodology. The model consists of a control hierarchy and BOCR subnetworks, which include clusters of actors, alternatives, and criteria. In accordance with the management's choices, five different locations were selected. In addition to the literature review, close cooperation with the actor group was ensured and maintained while determining the criteria and throughout the whole process. The results obtained were presented to the management as a report, and their feasibility was confirmed accordingly.
Abstract: Parallel hybrid storage systems consist of a hierarchy of different storage devices that vary in data reading speed. As we ascend in the hierarchy, data reading becomes faster. Thus, migrating an application's important data that will be accessed in the near future to the uppermost level will reduce the application's I/O waiting time and hence its execution elapsed time. In this research, we implement a trace-driven two-level parallel hybrid storage system prototype that consists of HDDs and SSDs. The prototype uses data mining techniques to classify the application's data in order to determine its near-future data accesses, in parallel with serving its on-demand requests. The important data (i.e., the data that the application will access in the near future) are continuously migrated to the uppermost level of the hierarchy. Our simulation results show that our data migration approach, integrated with data mining techniques, reduces the application execution elapsed time by at least 22% across a variety of traces.
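A toy version of the migration idea above can be sketched as a two-tier store that promotes hot blocks to the fast level. The frequency-count heuristic here is a simple stand-in for the paper's data mining classifier, and the capacity, block names, and access trace are illustrative assumptions.

```python
from collections import Counter

class TwoTierStore:
    """Toy two-level hybrid store: a small fast tier (standing in for SSDs)
    above a large slow tier (HDDs). Blocks predicted to be accessed soon --
    here simply the most frequently accessed so far -- are migrated up."""

    def __init__(self, fast_capacity=2):
        self.fast_capacity = fast_capacity
        self.fast = set()        # blocks currently in the fast tier
        self.hits = Counter()    # per-block access counts (the "classifier")

    def access(self, block):
        """Serve a request, then migrate in the background (here: inline)."""
        self.hits[block] += 1
        served_from_fast = block in self.fast
        self.migrate()
        return served_from_fast

    def migrate(self):
        # Keep the top-k most-accessed blocks in the fast tier.
        self.fast = {b for b, _ in self.hits.most_common(self.fast_capacity)}

store = TwoTierStore(fast_capacity=2)
for block in ["a", "b", "a", "c", "a", "b"]:
    store.access(block)
# After the trace, the hottest blocks "a" and "b" occupy the fast tier.
```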
Abstract: This paper describes the Message Passing Interface
(MPI) implementation of the ADETRAN language and its evaluation
on SX-ACE supercomputers. The ADETRAN language includes the pdo
statement, which specifies data distribution and parallel computations,
and the pass statement, which specifies the redistribution of arrays. Two
methods for implementing the pass statement are discussed, and a
performance evaluation using the Splitting-Up CG method is presented.
The effectiveness of the parallelization is evaluated, and the advantage
of one-dimensional distribution is empirically confirmed by the
experimental results.
Abstract: Nowadays, networks are an essential need in almost every part of human daily activity. People can now seamlessly connect to others through the Internet. With advanced technology, our personal data can now be accessed more easily. Security is one of the main concerns in delivering a reliable network. This paper proposes a method that provides more options for security. This research aims to improve network security by focusing on the physical layer, the first layer of the OSI model, which consists of the basic networking hardware and transmission technologies of a network. Using an observational method, the research produces a schematic design for enhancing network security through a gray code converter.
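The gray code conversion at the heart of such a converter can be sketched in a few lines. This is the standard reflected binary Gray code, shown as a generic software sketch; the paper's schematic hardware design is not reproduced here.

```python
def to_gray(n):
    """Convert a binary number to reflected Gray code: adjacent values
    differ in exactly one bit, the property a Gray code converter exploits."""
    return n ^ (n >> 1)

def from_gray(g):
    """Invert the Gray code by XOR-ing in progressively shifted copies."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

# Example round trip: binary 5 (0b101) encodes to Gray 7 (0b111) and back.
encoded = to_gray(5)
decoded = from_gray(encoded)
```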
Abstract: Intrusion Detection Systems are an essential tool for
network security infrastructure. However, IDSs have a serious
problem: they generate a massive number of alerts, most of
which are false positives that can hide true alerts and confuse the
analyst trying to identify the right alerts to report true attacks.
The purpose of this paper is to present a formal model for an alert
correlation engine that reduces false positive alerts
based on vulnerability contextual information. To that end, we propose
a formal model based on the non-monotonic JClassicδє description
logic, augmented with a default (δ) and an exception (є) operator, that
allows dynamic inference according to contextual information.
Abstract: A torsional piezoelectric ultrasonic transducer design
is proposed to measure shear moduli in soft tissue with direct
access availability, using shear wave elastography technique. The
measurement of shear moduli of tissues is a challenging problem,
mainly derived from a) the difficulty of isolating a pure shear wave,
given the interference of multiple waves of different types (P, S,
even guided) emitted by the transducers and reflected in geometric
boundaries, and b) the highly attenuating nature of soft tissue
materials. An immediate application, overcoming these drawbacks,
is the measurement of changes in cervix stiffness to estimate the
gestational age at delivery. The design has been optimized using
a finite element model (FEM) and a semi-analytical estimator of
the probability of detection (POD) to determine a suitable geometry,
materials and generated waves. The technique is based on the time
of flight measurement between emitter and receiver, to infer shear
wave velocity. Current research is centered on prototype testing and
validation. The geometric optimization of the transducer was able
to annihilate the compressional wave emission, generating a quite
pure shear torsional wave. Currently, the mechanical and electromagnetic
coupling between emitter and receiver signals is the focus of
ongoing research. Conclusions: the design overcomes the main problems
described. The almost pure shear torsional wave, along with the short
time of flight, avoids the possibility of multiple wave interference.
The short propagation distance reduces the effect of attenuation and
allows the emission of very low energies, ensuring good biological
safety for human use.
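The time-of-flight measurement described above feeds the standard shear wave elastography relations. As a brief sketch, with d the emitter-receiver distance and t the measured time of flight (and assuming locally homogeneous, isotropic tissue of density ρ):

```latex
% Shear wave speed inferred from the time of flight
c_s = \frac{d}{t}
% Shear modulus recovered from the shear wave speed
\mu = \rho \, c_s^{2}
```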
Abstract: Most self-tuning fuzzy systems, which are
automatically constructed from learning data, are based on the
steepest descent method (SDM). However, this approach often
requires a long convergence time and gets stuck in a shallow
local minimum. One solution is to use fuzzy rule modules
with a small number of inputs such as DIRMs (Double-Input Rule
Modules) and SIRMs (Single-Input Rule Modules). In this paper,
we consider a (generalized) DIRMs model composed of double
and single-input rule modules. Further, in order to reduce the
redundant modules for the (generalized) DIRMs model, pruning and
generative learning algorithms for the model are suggested. To
show their effectiveness, numerical simulations for function
approximation, Box-Jenkins, and obstacle avoidance problems are
performed.
Abstract: This paper describes a method for AWGN (Additive White Gaussian Noise) variance estimation in noisy stochastic signals, referred to as Multiplicative-Noising Variance Estimation (MNVE). The aim was to develop an estimation algorithm with a minimal number of assumptions about the original signal structure. A MATLAB simulation and analysis of the results of the method applied to speech signals showed higher accuracy than the standard AR (autoregressive) modeling noise estimation technique. In addition, good performance was observed at very low signal-to-noise ratios, which in general represent the worst-case scenario for signal denoising methods. High execution time appears to be the only disadvantage of MNVE. After close examination of all the observed features of the proposed algorithm, it was concluded that the method is worth exploring and that, with some further adjustments and improvements, it can be remarkably powerful.
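For context, a simple generic baseline for AWGN standard deviation estimation can be sketched as below. This is a robust first-difference estimator, not the MNVE method (whose details the abstract does not give); the signal shape and noise level are illustrative assumptions.

```python
import math
import random
import statistics

def estimate_awgn_sigma(signal):
    """Estimate the AWGN standard deviation from first differences.

    For a slowly varying clean signal, successive differences are dominated
    by the noise and have variance 2*sigma^2; a robust spread estimate
    (median absolute deviation scaled for Gaussian data) recovers sigma
    without modeling the clean signal's structure."""
    diffs = [b - a for a, b in zip(signal, signal[1:])]
    med = statistics.median(diffs)
    mad = statistics.median(abs(d - med) for d in diffs)
    # 0.6745 converts MAD to sigma for Gaussian data; sqrt(2) undoes differencing
    return mad / 0.6745 / math.sqrt(2)

random.seed(0)
true_sigma = 0.5
# Slowly varying "clean" signal plus additive white Gaussian noise
noisy = [math.sin(0.01 * n) + random.gauss(0.0, true_sigma) for n in range(5000)]
sigma_hat = estimate_awgn_sigma(noisy)
```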
Abstract: The current trends in affect recognition research are
to consider continuous observations from spontaneous natural
interactions in people using multiple feature modalities, and to
represent affect in terms of continuous dimensions, incorporate
spatio-temporal correlation among affect dimensions, and provide
fast affect predictions. These research efforts have been propelled
by a growing effort to develop affect recognition systems that
can enable seamless real-time human-computer
interaction in a wide variety of applications. Motivated by these
desired attributes of an affect recognition system, in this work
a multi-dimensional affect prediction approach is proposed by
integrating multivariate Relevance Vector Machine (MVRVM) with
a recently developed Output-associative Relevance Vector Machine
(OARVM) approach. The resulting approach can provide fast
continuous affect predictions by jointly modeling the multiple affect
dimensions and their correlations. Experiments on the RECOLA
database show that the proposed approach performs competitively
with the OARVM while providing faster predictions during testing.
Abstract: Web applications are an integral part of modern life. They are mostly based upon the HyperText Markup Language (HTML). While HTML meets basic needs, it has some shortcomings. For example, applications can cease to work once the user goes offline, real-time updates may lag, and the user interface can freeze on computationally intensive tasks. The latest language specification, HTML5, attempts to rectify the situation with new tools and protocols. This paper studies the new Web Storage, Geolocation, Web Worker, Canvas, and Web Socket APIs, and presents applications that test their features and efficiency.
Abstract: Web service adaptation involves the creation of adapters that solve Web service incompatibilities known as mismatches. Since the importance of Web service adaptation is increasing with the frequent implementation and use of online Web services, this paper presents a literature review of Web services to investigate the main methods of adaptation, their theoretical underpinnings, and the metrics used to measure adapters' performance. Eighteen publications were reviewed independently by two researchers. We found that adaptation techniques are needed to solve different types of problems that may arise from incompatibilities in Web service interfaces, including protocols, messages, data, and semantics, which affect the interoperability of the services. Although adapters are non-invasive methods that can improve Web service interoperability, and there are current approaches for service adaptation, there is not yet one solution that fits all types of mismatches. Our results also show that only a few research projects incorporate theoretical frameworks and that metrics to measure adapters' performance are very limited. We conclude that further research on software adaptation should improve current adaptation methods in different layers of service interoperability, and that an adaptation framework incorporating a theoretical underpinning and measures of qualitative and quantitative performance needs to be created.
Abstract: Effective statistical feature extraction and classification are important in image-based automatic inspection and analysis. An automatic wood species recognition system is designed to perform wood inspection at customs checkpoints to avoid mislabeling of timber, which results in loss of income for the timber industry. The system focuses on analyzing the statistical properties of pores in wood images. This paper proposes a fuzzy-based feature extractor that mimics experts' knowledge of wood texture to extract the properties of pore distribution from the wood surface texture. The proposed feature extractor consists of two steps, namely pore extraction and fuzzy pore management. The total number of statistical features extracted from each wood image is 38. A backpropagation neural network is then used to classify the wood species based on the statistical features. A comprehensive set of experiments on a database of 5,200 macroscopic images from 52 tropical wood species was used to evaluate the performance of the proposed feature extractor. The advantage of the proposed feature extraction technique is that it mimics experts' interpretation of wood texture, which allows human involvement when analyzing the texture. Experimental results show the efficiency of the proposed method.