Abstract: In this study, the Scots pine (Pinus sylvestris L.) C
needles (i.e. the current-year-needles) were used as bioindicators in
determining the aerial distribution pattern of sulphur emissions
around industrial point sources at Kemi, Northern Finland. The
average sulphur concentration in the C needles was 897 mg/kg
(d.w.), with a standard deviation of 118 mg/kg (d.w.) and range 740 –
1350 mg/kg (d.w.). According to the results of this study, Scots pine needles appear to be ideal bioindicators for identifying atmospheric sulphur pollution derived from industrial plants, and can complement the information provided by plant mapping studies around such plants.
Abstract: Sequential mining methods efficiently discover all frequent sequential patterns in sequential data. These methods evaluate frequency using the support, an established criterion that satisfies the Apriori property. However, the discovered patterns do not always match the interests of analysts, because the patterns are common and offer the analysts no new knowledge. This paper proposes a new criterion, sequential interestingness, to discover sequential patterns that are more attractive to analysts. The paper shows that the criterion satisfies the Apriori property and how it relates to the support. It also proposes an efficient sequential mining method based on the proposed criterion. Lastly, it demonstrates the effectiveness of the proposed method by applying it to two kinds of sequential data.
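The abstract leaves the sequential interestingness measure to the paper itself, but the support criterion and the Apriori property it mentions are standard and can be sketched minimally (the toy database `db` and helper names below are illustrative, not from the paper):

```python
def is_subsequence(pattern, sequence):
    """Check whether `pattern` occurs as a (non-contiguous) subsequence."""
    it = iter(sequence)
    return all(item in it for item in pattern)  # each `in` consumes the iterator

def support(pattern, database):
    """Fraction of sequences in the database that contain the pattern."""
    return sum(is_subsequence(pattern, s) for s in database) / len(database)

db = [list("abcde"), list("acd"), list("abd"), list("bce")]
# Apriori property: extending a pattern can never increase its support,
# which is what lets mining algorithms prune the search space.
assert support(list("abd"), db) <= support(list("ab"), db)
```

Any alternative criterion must satisfy the same anti-monotonicity for Apriori-style pruning to remain sound, which is why the paper proves it for sequential interestingness.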
Abstract: Texture information plays an increasingly important role in remotely sensed imagery classification and many pattern recognition applications. However, selecting relevant textural features to improve classification accuracy is not a straightforward task. This work investigates the effectiveness of two Mutual Information Feature Selector (MIFS) algorithms in selecting salient textural features that contain highly discriminatory information for multispectral imagery classification. The input candidate features are extracted from a SPOT High Resolution Visible (HRV) image using the Wavelet Transform (WT) at levels l = 1, 2. The experimental results show that the textural features selected by the MIFS algorithms improve classification accuracy more than classical approaches such as Principal Components Analysis (PCA) and Linear Discriminant Analysis (LDA).
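The classical MIFS criterion (Battiti's greedy rule: pick the feature maximizing I(f;C) − β·Σ I(f;s) over already-selected features s) can be sketched in a few lines. Whether the paper's two MIFS variants use exactly this form or another β is not stated in the abstract, so treat this as an illustrative sketch over discrete features:

```python
import math
from collections import Counter

def mutual_info(x, y):
    """Discrete mutual information I(X;Y) in nats, from empirical counts."""
    n = len(x)
    px, py, pxy = Counter(x), Counter(y), Counter(zip(x, y))
    return sum((c / n) * math.log((c / n) / ((px[a] / n) * (py[b] / n)))
               for (a, b), c in pxy.items())

def mifs(features, labels, k, beta=0.5):
    """Greedy MIFS: at each step pick the candidate feature f maximizing
    I(f; C) - beta * sum of I(f; s) over already-selected features s."""
    remaining, selected = list(features), []
    for _ in range(min(k, len(remaining))):
        best = max(remaining,
                   key=lambda f: mutual_info(features[f], labels)
                   - beta * sum(mutual_info(features[f], features[s])
                                for s in selected))
        selected.append(best)
        remaining.remove(best)
    return selected
```

The β term is what distinguishes MIFS from plain ranking by relevance: a feature highly informative about the class but redundant with features already chosen is penalized.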
Abstract: This paper describes part of a project on Learning-by-Modeling (LbM). Studying complex systems is increasingly important in teaching and learning many science domains. Many features of complex systems make it difficult for students to develop deep understanding. Previous research indicates that involvement with modeling scientific phenomena and complex systems can play a powerful role in science learning. Some researchers dispute this view, arguing that models and modeling do not contribute to understanding complexity concepts, since they increase the cognitive load on students. This study investigates the effect of different modes of involvement in exploring scientific phenomena using computer simulation tools on students' mental models, from the perspective of structure, behavior and function. Quantitative and qualitative methods are used to report on 121 freshman students who engaged in participatory simulations of complex phenomena showing emergent, self-organized and decentralized patterns. Results show that LbM plays a major role in students' concept formation about complexity concepts.
Abstract: Nowadays, asynchronous learning enables anywhere, anytime learning via technology and e-media, giving learners greater convenience. This research concerns the design of blended and online learning for asynchronous delivery of a process management subject, in order to create a prototype for asynchronous learning of this subject that eases learning and increases learning capability. The learning pattern integrates in-class learning with online learning via the internet. The research focuses mainly on the online learning, which can be divided into five parts: virtual classroom, online content, collaboration, assessment and reference material. After the system design was finished, it was evaluated and tested by 5 experts in blended learning design and 10 students, whose satisfaction level was good. The result matched the assumption, so the system can be used in the process management subject for real usage.
Abstract: Web usage mining is an interesting application of data
mining which provides insight into customer behaviour on the Internet. An important technique to discover user access and navigation trails is based on sequential pattern mining. One of the
key challenges for web access patterns mining is tackling the problem
of mining richly structured patterns. This paper proposes a novel
model called Web Access Patterns Graph (WAP-Graph) to represent all of the access patterns from web mining graphically. WAP-Graph
also motivates the search for new structural relation patterns, i.e. Concurrent Access Patterns (CAP), to identify and predict more
complex web page requests. Corresponding CAP mining and modelling methods are proposed and shown to be effective in the
search for and representation of concurrency between access patterns
on the web. From experiments conducted on large-scale synthetic
sequence data as well as real web access data, it is demonstrated that
CAP mining provides a powerful method for structural knowledge discovery, which can be visualised through the CAP-Graph model.
Abstract: In this article we explore how computer-assisted exercises may allow for bridging the traditional gap between theory and practice in professional education. To educate officers able to master the complexity of the battlefield, the Norwegian Military Academy needs to develop a learning environment that allows for creating viable connections between the educational environment and the field of practice. In response to this challenge, we explore the conditions necessary to make computer-assisted training systems (CATS) a useful tool for creating structural similarities between an educational context and the field of military practice. Although CATS may facilitate work procedures close to real-life situations, this case does demonstrate how professional competence must also build on viable learning theories and environments. This paper explores the conditions that allow for using simulators to facilitate professional competence from within an educational setting. We develop a generic didactic model that ascribes learning to participation in iterative cycles of action and reflection. The development of this model is motivated by the need to develop an interdisciplinary professional education rooted in the pattern of military practice.
Abstract: During recent years, traditional learning approaches have undergone fundamental changes due to the emergence of new technologies such as multimedia, hypermedia and telecommunication. E-learning is a modern-world phenomenon that has come into existence in the information age and in a knowledge-based society. E-learning has developed significantly within a short period of time. It is thus of great significance to secure information, allow confident access and prevent unauthorized access. Making use of individuals' physiological or behavioral (biometric) properties is a reliable method of securing information. Among the biometrics, the fingerprint is the most widely accepted, and most countries use it as an efficient method of identification. This article provides a new method for fingerprint comparison using pattern recognition and image processing techniques. To verify fingerprints, the shortest-distance method is used together with a multilayer perceptron neural network operating on minutiae. This method is highly accurate in the extraction of minutiae, accelerates comparisons by eliminating false minutiae, and is more reliable than methods that merely use directional images.
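The abstract names a shortest-distance method over minutiae without algorithmic detail; below is a hypothetical minimal sketch of greedy shortest-distance pairing of two minutiae point sets (the function name, tolerance, and score definition are assumptions for illustration, not the paper's method):

```python
import math

def match_minutiae(set_a, set_b, tol=10.0):
    """Greedily pair each minutia in `set_a` with its nearest unused
    partner in `set_b`; return the fraction matched within `tol` pixels."""
    unused = list(set_b)
    matched = 0
    for (xa, ya) in set_a:
        if not unused:
            break
        nearest = min(unused, key=lambda p: math.hypot(p[0] - xa, p[1] - ya))
        if math.hypot(nearest[0] - xa, nearest[1] - ya) <= tol:
            unused.remove(nearest)  # each partner may be used only once
            matched += 1
    return matched / len(set_a)
```

A real verifier would first align the two sets (rotation/translation) and compare minutiae orientation as well as position, which this sketch omits.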
Abstract: Many natural language expressions are ambiguous and need to draw on other sources of information to be interpreted. Whether the word تعاون is to be interpreted as a noun or a verb depends on the presence of contextual cues. To interpret words, we need to be able to discriminate between different usages. This paper proposes a hybrid of rule-based and machine learning methods for tagging Arabic words. Because of the particularity of the Arabic word, which may be composed of a stem plus affixes and clitics, a small number of rules dominates the performance (affixes include inflectional markers for tense, gender and number; clitics include some prepositions, conjunctions and others). Tagging is closely related to the notion of word class used in syntax. The method is based firstly on rules (considering the post-position, the ending of a word, and patterns); anomalies are then corrected by adopting a memory-based learning (MBL) method. Memory-based learning is an efficient method for integrating various sources of information and handling exceptional data in natural language processing tasks. Secondly, the exceptional cases of the rules are checked, and more information is made available to the learner for treating those exceptional cases. To evaluate the proposed method, a number of experiments have been run to assess the contribution of the various sources of information to learning.
Abstract: Video-on-demand (VOD) is designed by using content delivery networks (CDN) to minimize the overall operational cost and to maximize scalability. Estimation of the viewing pattern (i.e., the relationship between the number of viewings and the ranking of VOD contents) plays an important role in minimizing the total operational cost and maximizing the performance of the VOD systems. In this paper, we have analyzed a large body of commercial VOD viewing data and found that the viewing rank distribution fits well with the parabolic fractal distribution. The weighted linear model fitting function is used to estimate the parameters (coefficients) of the parabolic fractal distribution. This paper presents an analytical basis for designing an optimal hierarchical VOD contents distribution system in terms of its cost and performance.
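The parabolic fractal law mentioned above is log-quadratic in rank, log N(r) = a + b·log r + c·(log r)². A minimal sketch of estimating its coefficients by weighted least squares follows; the paper's exact weighting scheme is not given in the abstract, so uniform weights are the default here:

```python
import math

def fit_parabolic_fractal(ranks, counts, weights=None):
    """Weighted least-squares fit of log N(r) = a + b*log r + c*(log r)^2.

    Returns [a, b, c]. Solves the 3x3 normal equations A^T W A c = A^T W y
    directly, where A has rows [1, x, x^2] with x = log(rank)."""
    xs = [math.log(r) for r in ranks]
    ys = [math.log(n) for n in counts]
    ws = weights or [1.0] * len(ranks)
    M = [[sum(w * x ** (i + j) for w, x in zip(ws, xs)) for j in range(3)]
         for i in range(3)]
    v = [sum(w * y * x ** i for w, x, y in zip(ws, xs, ys)) for i in range(3)]
    # Gauss-Jordan elimination on the 3x3 system.
    for i in range(3):
        piv = M[i][i]
        M[i] = [m / piv for m in M[i]]
        v[i] /= piv
        for k in range(3):
            if k != i:
                f = M[k][i]
                M[k] = [mk - f * mi for mk, mi in zip(M[k], M[i])]
                v[k] -= f * v[i]
    return v
```

When c = 0 the model degenerates to a pure power law (Zipf-like); a significantly negative c captures the flattening of the head of the viewing-rank curve that plain power laws miss.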
Abstract: Biometric measures of one kind or another have been used to identify people since ancient times, with handwritten signatures, facial features, and fingerprints being the traditional methods. Of late, systems have been built that automate the task of recognition, using these methods and newer ones, such as hand geometry, voiceprints and iris patterns. These systems have different strengths and weaknesses. This work is a two-section composition. In the first section, we present an analytical and comparative study of common biometric techniques; the performance of each has been reviewed and then tabulated. The second section covers the actual implementation of the techniques under consideration, carried out using the state-of-the-art tool MATLAB, which helps to effectively portray the corresponding results and effects.
Abstract: This paper focuses on Land Use and Land Cover Changes (LULCC) that occurred in the urban coastal regions of the Mediterranean basin in the last thirty years. LULCC were assessed diachronically (1975-2006) in two urban areas, Rome (Italy) and Athens (Greece), using CORINE land cover maps. In strictly coastal territories, a persistent growth of built-up areas at the expense of both agricultural and forest land uses was found. On the contrary, a different pattern was observed in the surrounding inland areas, where a high conversion rate of agricultural land uses to both urban and forest land uses was recorded. The impact of city growth on the complex pattern of coastal LULCC is finally discussed.
Abstract: Text processing systems allow their users to search for a pattern string in a given text. String matching is fundamental to database and text processing applications. Every text editor must contain a mechanism to search the current document for arbitrary strings. Spelling checkers scan an input text for words in the dictionary and reject any strings that do not match. We store our information in databases so that we can later retrieve it, and this retrieval can be done using various string matching algorithms. This paper describes a new string matching algorithm for various applications; the algorithm has been designed with the help of the Rabin-Karp matcher to improve the string matching process.
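The abstract builds on the Rabin-Karp matcher; here is a minimal textbook sketch of that rolling-hash search (the base and modulus are conventional illustrative choices, not parameters from the paper):

```python
def rabin_karp(text, pattern, base=256, mod=101):
    """Return all starting indices of `pattern` in `text`."""
    n, m = len(text), len(pattern)
    if m == 0 or m > n:
        return []
    h = pow(base, m - 1, mod)  # weight of the window's leading character
    p_hash = t_hash = 0
    for i in range(m):
        p_hash = (base * p_hash + ord(pattern[i])) % mod
        t_hash = (base * t_hash + ord(text[i])) % mod
    hits = []
    for s in range(n - m + 1):
        # Verify characters on a hash match to rule out spurious collisions.
        if p_hash == t_hash and text[s:s + m] == pattern:
            hits.append(s)
        if s < n - m:
            # Roll the hash: drop text[s], append text[s + m].
            t_hash = (base * (t_hash - ord(text[s]) * h) + ord(text[s + m])) % mod
    return hits
```

The rolling hash is what makes the average case linear: each window's hash is derived from the previous one in constant time, and full character comparison happens only on hash matches.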
Abstract: Due to the three-dimensional flow pattern interacting with bed material, the process of local scour around bridge piers is complex. Modeling the 3D flow field and scour hole evolution around a bridge pier is more feasible nowadays because computational cost and computation time have decreased significantly. In order to evaluate local flow and scouring around a bridge pier, a fully three-dimensional numerical model, the SSIIM program, was used. The model solves the 3D Navier-Stokes equations and a bed load conservation equation. It was applied to simulate local flow and scouring around a bridge pier in a large natural river with four piers. Computation for one day of flood conditions was carried out to predict the maximum local scour depth. The results show that the SSIIM program can be used efficiently to simulate scouring in natural rivers. They also show that, among the various turbulence models, the k-ω model gives the most reasonable results.
Abstract: The paper presents a study of the synthetic transmit aperture method applying Golay coded transmission to medical ultrasound imaging. Longer coded excitation makes it possible to increase the total energy of the transmitted signal without increasing the peak pressure. Signal-to-noise ratio and penetration depth are improved while high ultrasound image resolution is maintained. In this work a 128-element linear transducer array with 0.3 mm inter-element spacing, excited by a one-cycle burst and by 8- and 16-bit Golay coded sequences at a nominal frequency of 4 MHz, was used. A single-element transmit aperture was used to generate a spherical wave covering the full image region, and all the elements received the echo signals. A comparison of 2D ultrasound images of a wire phantom as well as of a tissue-mimicking phantom is presented to demonstrate the benefits of coded transmission. The results were obtained using the synthetic aperture algorithm with transmit and receive signal correction based on a single-element directivity function.
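The key property behind Golay coded excitation, used above with 8- and 16-bit sequences, is that the pair's aperiodic autocorrelation sidelobes cancel exactly, so pulse compression does not degrade axial resolution. A small sketch of the standard recursive pair construction and a check of this property (the paper's actual transmit sequences are not given, so this is generic):

```python
def golay_pair(n_bits):
    """Build a Golay complementary pair of length n_bits (a power of two)
    by the standard doubling recursion a' = a+b, b' = a+(-b)."""
    a, b = [1], [1]
    while len(a) < n_bits:
        a, b = a + b, a + [-x for x in b]
    return a, b

def acorr(seq, lag):
    """Aperiodic autocorrelation of a +/-1 sequence at a given lag."""
    return sum(seq[i] * seq[i + lag] for i in range(len(seq) - lag))

a, b = golay_pair(16)
# Complementarity: sidelobes cancel for every nonzero lag,
# while the main lobes add up to 2N.
assert all(acorr(a, k) + acorr(b, k) == 0 for k in range(1, 16))
assert acorr(a, 0) + acorr(b, 0) == 32
```

In practice the two codes are transmitted in successive firings and the two compressed echoes are summed, which is why the method costs one extra transmit event per line.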
Abstract: The National Biodiversity Database System (NBIDS) has been developed for collecting Thai biodiversity data. The goal of this project is to provide researchers and scientists with advanced tools for querying, analyzing, modeling, and visualizing patterns of species distribution. NBIDS records two types of datasets: biodiversity data and environmental data. Biodiversity data are species presence data and species status. The attributes of biodiversity data can be further classified into two groups: universal and project-specific attributes. Universal attributes are common to all records, e.g. X/Y coordinates, year, and collector name. Project-specific attributes are unique to one or a few projects, e.g. flowering stage. Environmental data include atmospheric data, hydrology data, soil data, and land cover data collected using GLOBE protocols. We have developed web-based tools for data entry. Google Earth KML and ArcGIS were used as tools for map visualization. webMathematica was used for simple data visualization and also for advanced data analysis and visualization, e.g. spatial interpolation and statistical analysis. NBIDS will be used by park rangers at Khao Nan National Park, and by researchers.
Abstract: Pattern matching based on regular tree grammars has been widely used in many areas of computer science. In this paper, we propose a pattern matcher within the framework of code generation, based on a generic and formalized approach. According to this approach, parsers for regular tree grammars are adapted to a general pattern matching solution, rather than adapting the pattern matching to their parsing behavior. Hence, we first formalize the construction of the pattern matches for input trees drawn from a regular tree grammar in the form of so-called match trees. Then, we adopt a recently developed generic parser and tightly couple its parsing behavior with this construction. In addition to its generality, the resulting pattern matcher is characterized by its soundness and efficient implementation. This is demonstrated by the proposed theory and by the derived algorithms for its implementation. A comparison with similar and well-known approaches, such as those based on tree automata and LR parsers, has shown that our pattern matcher can be applied to a broader class of grammars and achieves a better approximation of pattern matches in one pass. Furthermore, its use as a machine code selector incurs minimal overhead, due to the balanced distribution of the cost computations into static ones, during parser generation time, and dynamic ones, during parsing time.
Abstract: This paper proposes a novel architecture for developing decision support systems. Unlike conventional decision support systems, the proposed architecture endeavors to reveal the decision-making process such that humans' subjectivity can be incorporated into a computerized system while, at the same time, preserving the capability of the computerized system to process information objectively. A number of techniques used in developing the decision support system are elaborated to make the decision-making process transparent. These include procedures for high-dimensional data visualization, pattern classification, prediction, and evolutionary computational search. An artificial data set is first employed to compare the proposed approach with other methods. A simulated handwritten data set and a real data set on liver disease diagnosis are then employed to evaluate the efficacy of the proposed approach. The results are analyzed and discussed. The potential of the proposed architecture as a useful decision support system is demonstrated.
Abstract: Keystroke authentication is a new access control approach that identifies legitimate users via their typing behavior. In this paper, machine learning techniques are adapted for keystroke authentication. Seven learning methods are used to build models to differentiate user keystroke patterns. The selected classification methods are Decision Tree, Naive Bayesian, Instance Based Learning, Decision Table, One Rule, Random Tree and K-star. Among these methods, three are studied in more detail. The results show that machine learning is a feasible alternative for keystroke authentication. Compared to the conventional Nearest Neighbour method used in recent research, learning methods, especially Decision Tree, can be more accurate. In addition, the experimental results reveal that 3-grams are more accurate than 2-grams and 4-grams for feature extraction. Also, combinations of attributes tend to result in higher accuracy.
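The abstract compares 2-, 3- and 4-gram features without defining them; a common, illustrative reading is the latency spanned by each run of n consecutive keystrokes, sketched below (the event format and function name are assumptions, not the paper's definitions):

```python
def ngram_latencies(events, n):
    """Map each n-gram of keys to its mean spanning latency in ms.

    `events` is a list of (key, press_timestamp_ms) pairs in typing order.
    Repeated n-grams are averaged, yielding one feature per distinct n-gram."""
    feats = {}
    for i in range(len(events) - n + 1):
        keys = tuple(k for k, _ in events[i:i + n])
        span = events[i + n - 1][1] - events[i][1]  # first press to last press
        feats.setdefault(keys, []).append(span)
    return {k: sum(v) / len(v) for k, v in feats.items()}

session = [("p", 0), ("a", 120), ("s", 260), ("s", 430)]
trigrams = ngram_latencies(session, 3)  # features fed to the classifiers
```

A classifier then treats each user's distinct n-gram timings as attributes; larger n gives richer context per feature but fewer observations of each, which is one plausible reason 3-grams could outperform both 2- and 4-grams.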
Abstract: Sequential pattern mining is a challenging task in the data mining area with many applications. One of those applications is mining patterns from weblogs. In recent times, weblogs have become highly dynamic, and some entries may become obsolete over time. In addition, users may frequently change the threshold value during the data mining process until acquiring the required output or mining interesting rules. Some recently proposed algorithms for mining weblogs build the tree with two scans and always consume large amounts of time and space. In this paper, we build a Revised PLWAP with Non-frequent Items (RePLNI-tree) with a single scan for all items. While mining sequential patterns, the links related to the non-frequent items are not considered. Hence, it is not necessary to delete or maintain the information of nodes while revising the tree to mine updated transactions. The algorithm supports both incremental and interactive mining. It is not required to re-compute the patterns each time the weblog is updated or the minimum support is changed. The performance of the proposed tree is better even when the size of the incremental database is more than 50% of the existing one. For evaluation, we have used a benchmark weblog dataset and found the performance of the proposed tree encouraging compared to some recently proposed approaches.