Abstract: A numerical study of the turbulent flow and heat transfer characteristics in a rectangular channel with different types of baffles is carried out. The inclined baffles have a width of 19.8 cm, square diamond-type holes with a side length of 2.55 cm, and an inclination angle of 5°. The Reynolds number is varied between 23,000 and 57,000. The SST turbulence model is applied in the calculation, and the validity of the numerical results is examined against experimental data. The numerical results of the flow field show that the flow patterns around the different baffle types are entirely different, and that these significantly affect the local heat transfer characteristics. The heat transfer and friction factor characteristics are significantly affected by the perforation density of the baffle plate. It is found that baffle type II (the three-hole baffle) yields the best heat transfer enhancement.
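For intuition about the flow regime studied, the Reynolds number for channel flow is Re = ρ·u·D_h/μ. The sketch below uses hypothetical air properties and an assumed hydraulic diameter (neither is given in the abstract), chosen so the resulting values fall in the paper's 23,000–57,000 range:

```python
def reynolds(rho, u, d_h, mu):
    """Re = rho * u * D_h / mu for a channel of hydraulic diameter D_h."""
    return rho * u * d_h / mu

# hypothetical values: air at room temperature, assumed D_h = 0.1 m
rho, mu = 1.2, 1.8e-5        # density (kg/m^3), dynamic viscosity (Pa*s)
for u in (3.5, 8.6):         # bulk velocities (m/s) spanning the range
    print(round(reynolds(rho, u, 0.1, mu)))  # ~23333 and ~57333
```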
Abstract: This paper investigates vortex shedding processes
occurring at the end of a stack of parallel plates, due to an oscillating
flow induced by an acoustic standing wave within an acoustic
resonator. Here, Particle Image Velocimetry (PIV) is used to quantify
the vortex shedding processes within an acoustic cycle
phase-by-phase, in particular during the “ejection” of the fluid out of
the stack. Standard hot-wire anemometry measurement is also applied
to detect the velocity fluctuations near the end of the stack.
The combination of these two measurement techniques allows a detailed
analysis of the vortex shedding phenomena. The results obtained show
that, as the Reynolds number varies (by varying the plate thickness
and drive ratio), different flow patterns of vortex shedding are
observed by the PIV measurement. On the other hand, the
time-dependent hot-wire measurements provide detailed
frequency spectra of the velocity signal, which are used to calculate
characteristic Strouhal numbers. The impact of the plate thickness and
the Reynolds number on the vortex shedding pattern has been
discussed. Furthermore, a detailed map of the relationship between the
Strouhal number and Reynolds number has been obtained and
discussed.
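The characteristic Strouhal number mentioned above makes the shedding frequency dimensionless via St = f·d/U. A minimal sketch with hypothetical values (the actual frequencies, plate thicknesses, and velocity amplitudes are reported in the paper, not here):

```python
def strouhal(f_shed, plate_thickness, velocity):
    """St = f * d / U: shedding frequency made dimensionless by a
    characteristic length (here, plate thickness) and velocity amplitude."""
    return f_shed * plate_thickness / velocity

# hypothetical values: 200 Hz shedding peak, 1.1 mm plate, 1.0 m/s amplitude
print(round(strouhal(200.0, 1.1e-3, 1.0), 3))  # 0.22
```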
Abstract: Iris recognition is among the most accurate, fast, and least invasive biometric technologies compared with techniques using, for example, fingerprints, face, retina, hand geometry, voice, or signature patterns. The system developed in this study has the potential to play a key role in high-risk security areas and can provide organizations with a fast and secure means of granting access to such areas only to authorized personnel. The aim of this paper is to detect the iris region and localize the inner and outer iris boundaries. The system was implemented on the Windows platform using the Visual C# programming language, an easy and efficient tool for image processing that achieves good performance. In particular, the system includes two main parts: the first preprocesses the iris images using Canny edge detection and segments the iris region from the rest of the image; the second determines the location of the iris boundaries by applying the Hough transform. The proposed system was tested on 756 iris images from 60 eyes in the CASIA iris database.
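For intuition, the circular Hough transform used for boundary localization lets each edge pixel vote for all circle centers and radii that could pass through it; the accumulator maximum gives the boundary circle. A minimal NumPy sketch on a synthetic edge map (the paper's implementation is in Visual C#; all parameters below are illustrative):

```python
import numpy as np

def hough_circle(edge_points, radii, shape):
    """Each edge point (x, y) votes for every center lying at distance r
    from it; the accumulator maximum is the best-supported circle."""
    h, w = shape
    acc = np.zeros((len(radii), h, w), dtype=int)
    thetas = np.linspace(0, 2 * np.pi, 180, endpoint=False)
    for (x, y) in edge_points:
        for ri, r in enumerate(radii):
            cx = np.rint(x - r * np.cos(thetas)).astype(int)
            cy = np.rint(y - r * np.sin(thetas)).astype(int)
            ok = (cx >= 0) & (cx < w) & (cy >= 0) & (cy < h)
            np.add.at(acc[ri], (cy[ok], cx[ok]), 1)  # unbuffered accumulation
    ri, cy, cx = np.unravel_index(acc.argmax(), acc.shape)
    return cx, cy, radii[ri]

# synthetic edge map: circle centered at (30, 25) with radius 10
ts = np.linspace(0, 2 * np.pi, 120, endpoint=False)
pts = [(int(round(30 + 10 * np.cos(t))), int(round(25 + 10 * np.sin(t))))
       for t in ts]
cx, cy, r = hough_circle(pts, radii=[8, 9, 10, 11, 12], shape=(60, 60))
print(cx, cy, r)  # expected near (30, 25, 10)
```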
Abstract: Pattern recognition is the research area of Artificial
Intelligence that studies the operation and design of systems that
recognize patterns in the data. Important application areas are image
analysis, character recognition, fingerprint classification, speech
analysis, DNA sequence identification, man and machine
diagnostics, person identification and industrial inspection. The
interest in improving classification systems for data analysis is
independent of the application context. In fact, many studies
require recognizing and distinguishing groups of various objects,
which calls for valid instruments capable of performing this
task. The objective of this article
is to show several methodologies of Artificial Intelligence for data
classification applied to biomedical patterns. In particular, this work
deals with the realization of a Computer-Aided Detection system
(CADe) that is able to assist the radiologist in identifying types of
mammary tumor lesions. As an additional biomedical application of
the classification systems, we present a study conducted on blood
samples which shows how these methods may help to distinguish
between carriers of Thalassemia (or Mediterranean Anaemia) and
healthy subjects.
Abstract: This paper proposes a method that discovers sequential patterns corresponding to users' interests from sequential data. The method expresses the interests as constraint patterns, which can define relationships among the attributes of the items composing the data. The method recursively decomposes the constraint patterns into constraint subpatterns and evaluates these subpatterns in order to efficiently discover sequential patterns satisfying the constraint patterns. The paper also applies the method to sequential data composed of stock price indexes and verifies its effectiveness by comparing it with a method that does not use constraint patterns.
Abstract: If there exists a nonempty, proper subset S of the set of all (n+1)(n+2)/2 inertias such that S ⊆ i(A) is sufficient for any n×n zero-nonzero pattern A to be inertially arbitrary, then S is called a critical set of inertias for zero-nonzero patterns of order n. If no proper subset of S is a critical set, then S is called a minimal critical set of inertias. In [Kim, Olesky and Driessche, Critical sets of inertias for matrix patterns, Linear and Multilinear Algebra, 57 (3) (2009) 293-306], identifying all minimal critical sets of inertias for n×n zero-nonzero patterns with n ≥ 3 and the minimum cardinality of such a set are posed as two open questions. In this note, the minimum cardinality of all critical sets of inertias for 4×4 irreducible zero-nonzero patterns is identified.
Abstract: Nowadays, cardiac disease is one of the most common causes of death. Each year, almost one million angioplasty interventions and stent implantations are performed all over the world. Unfortunately, in 20-30% of cases neointimal proliferation leads to restenosis within the following 3-6 months. Three major factors are believed to contribute most to edge restenosis: (a) mechanical damage of the artery's wall caused by the stent implantation, (b) interaction between the stent and the blood constituents, and (c) endothelial growth stimulation by small (lower than 1.5 Pa) and oscillating wall shear stress. Assuming that this last factor is particularly important, a numerical model of restenosis based on the wall shear stress distribution in the stented artery was elaborated. Numerical simulations of the development of in-stent restenosis have been performed, and realistic geometric patterns of a progressing lumen reduction have been obtained.
Abstract: The mitigation of crop loss due to damaging freezes
requires accurate air temperature prediction models. Previous work
established that the Ward-style artificial neural network (ANN) is a
suitable tool for developing such models. The current research
focused on developing ANN models with reduced average prediction
error by increasing the number of distinct observations used in
training, adding additional input terms that describe the date of an
observation, increasing the duration of prior weather data included in
each observation, and reexamining the number of hidden nodes used
in the network. Models were created to predict air temperature at
hourly intervals from one to 12 hours ahead. Each ANN model,
consisting of a network architecture and set of associated parameters,
was evaluated by instantiating and training 30 networks and
calculating the mean absolute error (MAE) of the resulting networks
for some set of input patterns. The inclusion of seasonal input terms,
up to 24 hours of prior weather information, and a larger number of
processing nodes were some of the improvements that reduced
average prediction error compared to previous research across all
horizons. For example, the four-hour MAE of 1.40°C was 0.20°C, or
12.5%, less than the previous model. Prediction MAEs eight and 12
hours ahead improved by 0.17°C and 0.16°C, respectively,
improvements of 7.4% and 5.9% over the existing model at these
horizons. Networks instantiating the same model but with different
initial random weights often led to different prediction errors. These
results strongly suggest that ANN model developers should consider
instantiating and training multiple networks with different initial
weights to establish preferred model parameters.
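The practice of instantiating and training multiple networks with different initial weights can be sketched as follows. This is a minimal NumPy stand-in (a toy one-hidden-layer network on a synthetic series), not the Ward-style ANN or the weather data used in the study:

```python
import numpy as np

def train_net(X, y, hidden=8, epochs=2000, lr=0.05, seed=0):
    """One-hidden-layer tanh network trained by full-batch gradient descent.
    The seed controls the random initial weights."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0, 0.5, (X.shape[1], hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.5, (hidden, 1)); b2 = np.zeros(1)
    for _ in range(epochs):
        H = np.tanh(X @ W1 + b1)
        err = H @ W2 + b2 - y
        gW2 = H.T @ err / len(X); gb2 = err.mean(0)
        dH = (err @ W2.T) * (1 - H ** 2)
        gW1 = X.T @ dH / len(X); gb1 = dH.mean(0)
        W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2
    return lambda Z: np.tanh(Z @ W1 + b1) @ W2 + b2

# toy "temperature" series: fit y = sin(x) on [0, 3]
X = np.linspace(0, 3, 80).reshape(-1, 1)
y = np.sin(X)
maes = []
for seed in range(5):          # the study used 30 instantiations per model
    f = train_net(X, y, seed=seed)
    maes.append(float(np.abs(f(X) - y).mean()))
# different initial weights lead to different prediction errors
print(np.mean(maes), np.std(maes))
```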
Abstract: Sequential mining methods efficiently discover all frequent sequential patterns included in sequential data. These methods use the support, the conventional criterion satisfying the Apriori property, to evaluate frequency. However, the discovered patterns do not always correspond to the interests of analysts, because the patterns are common and the analysts cannot acquire new knowledge from them. This paper proposes a new criterion, namely the sequential interestingness, to discover sequential patterns that are more attractive to analysts. The paper shows that the criterion satisfies the Apriori property and how it is related to the support. The paper also proposes an efficient sequential mining method based on the proposed criterion. Lastly, the paper demonstrates the effectiveness of the proposed method by applying it to two kinds of sequential data.
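The support criterion and the Apriori (anti-monotone) property it satisfies can be sketched as follows; the tiny sequence database below is made up for illustration:

```python
def contains(seq, pattern):
    """True if pattern occurs in seq as a (not necessarily contiguous)
    subsequence; membership tests consume the iterator in order."""
    it = iter(seq)
    return all(item in it for item in pattern)

def support(db, pattern):
    """Fraction of sequences in db that contain the pattern."""
    return sum(contains(s, pattern) for s in db) / len(db)

db = [list("abcd"), list("acbd"), list("abd"), list("bcd")]
s_ab = support(db, list("ab"))
s_abd = support(db, list("abd"))
assert s_abd <= s_ab   # Apriori property: extending a pattern never raises support
print(s_ab, s_abd)     # 0.75 0.75
```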
Abstract: This paper describes part of a project about
Learning-by-Modeling (LbM). Studying complex systems is increasingly
important in teaching and learning many science domains. Many
features of complex systems make it difficult for students to develop
deep understanding. Previous research indicates that involvement
with modeling scientific phenomena and complex systems can play a
powerful role in science learning. Some researchers dispute this
view, arguing that models and modeling do not contribute to
understanding complexity concepts, since these increase the
cognitive load on students. This study investigates the effect of
different modes of involvement in exploring scientific phenomena
using computer simulation tools, on students' mental models from the
perspective of structure, behavior, and function. Quantitative and
qualitative methods are used to study 121 freshman students
who engaged in participatory simulations of complex phenomena,
showing emergent, self-organized and decentralized patterns. Results
show that LbM plays a major role in students' concept formation
about complexity concepts.
Abstract: Web usage mining is an interesting application of data
mining which provides insight into customer behaviour on the Internet. An important technique to discover user access and navigation trails is based on sequential pattern mining. One of the
key challenges for web access pattern mining is tackling the problem
of mining richly structured patterns. This paper proposes a novel
model called Web Access Patterns Graph (WAP-Graph) to represent all of the access patterns from web mining graphically. WAP-Graph
also motivates the search for new structural relation patterns, i.e. Concurrent Access Patterns (CAP), to identify and predict more
complex web page requests. Corresponding CAP mining and modelling methods are proposed and shown to be effective in the
search for and representation of concurrency between access patterns
on the web. From experiments conducted on large-scale synthetic
sequence data as well as real web access data, it is demonstrated that
CAP mining provides a powerful method for structural knowledge discovery, which can be visualised through the CAP-Graph model.
Abstract: Many natural language expressions are ambiguous and need to draw on other sources of information to be interpreted. Interpreting the word تعاون as a noun or a verb depends on the presence of contextual cues. To interpret words, we need to be able to discriminate between different usages. This paper proposes a hybrid of rule-based and machine learning methods for tagging Arabic words. Because an Arabic word may be composed of a stem plus affixes and clitics, a small number of rules dominates the performance (affixes include inflectional markers for tense, gender, and number; clitics include some prepositions, conjunctions, and others). Tagging is closely related to the notion of word class used in syntax. The method is based firstly on rules (which consider the post-position, the ending of a word, and patterns); the anomalies are then corrected by adopting a memory-based learning (MBL) method. Memory-based learning is an efficient method for integrating various sources of information and handling exceptional data in natural language processing tasks. Secondly, the exceptional cases of the rules are checked, and more information is made available to the learner for treating those exceptional cases. To evaluate the proposed method, a number of experiments have been run to assess the importance of the various sources of information in learning.
Abstract: Biometric measures of one kind or another have been
used to identify people since ancient times, with handwritten
signatures, facial features, and fingerprints being the traditional
methods. Of late, systems have been built that automate the task of recognition, using these methods and newer ones, such as hand geometry, voiceprints, and iris patterns. These systems have different strengths and weaknesses. This work is a two-section composition. In the first section, we present an analytical and comparative study of common biometric techniques; the performance of each has been evaluated and tabulated. The second section covers the actual implementation of the techniques under consideration, carried out using MATLAB, a state-of-the-art tool that effectively portrays the corresponding results and effects.
Abstract: Text processing systems allow their users to search for a string pattern in a given text. String matching is fundamental to database and text processing applications: every text editor must contain a mechanism to search the current document for arbitrary strings, and spelling checkers scan an input text for words in the dictionary and reject any strings that do not match. We store our information in databases so that we can later retrieve it, and this retrieval can be done using various string matching algorithms. This paper describes a new string matching algorithm for various applications, designed with the help of the Rabin-Karp matcher to improve the string matching process.
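The Rabin-Karp matcher mentioned above compares a rolling hash of each text window against the pattern's hash, verifying candidate hits character-by-character to rule out hash collisions. A minimal sketch (the base and modulus values are illustrative, not the paper's):

```python
def rabin_karp(text, pattern, base=256, mod=1_000_003):
    """Return the starting indices of pattern in text using a rolling hash."""
    n, m = len(text), len(pattern)
    if m == 0 or m > n:
        return []
    high = pow(base, m - 1, mod)            # weight of the leading character
    ph = th = 0
    for i in range(m):                       # hash pattern and first window
        ph = (ph * base + ord(pattern[i])) % mod
        th = (th * base + ord(text[i])) % mod
    hits = []
    for i in range(n - m + 1):
        if ph == th and text[i:i + m] == pattern:  # verify against collisions
            hits.append(i)
        if i < n - m:                        # roll the window one character
            th = ((th - ord(text[i]) * high) * base + ord(text[i + m])) % mod
    return hits

print(rabin_karp("abracadabra", "abra"))  # [0, 7]
```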
Abstract: National Biodiversity Database System (NBIDS) has
been developed for collecting Thai biodiversity data. The goal of this
project is to provide advanced tools for querying, analyzing,
modeling, and visualizing patterns of species distribution for
researchers and scientists. NBIDS records two types of datasets:
biodiversity data and environmental data. Biodiversity data comprise
species presence data and species status. The attributes of biodiversity
data can be further classified into two groups: universal and
project-specific attributes. Universal attributes are attributes that are common
to all of the records, e.g. X/Y coordinates, year, and collector name.
Project-specific attributes are attributes that are unique to one or a
few projects, e.g., flowering stage. Environmental data include
atmospheric data, hydrology data, soil data, and land cover data
collected using GLOBE protocols. We have developed web-based
tools for data entry. Google Earth KML and ArcGIS were used
as tools for map visualization. webMathematica was used for simple
data visualization and also for advanced data analysis and
visualization, e.g., spatial interpolation, and statistical analysis.
NBIDS will be used by park rangers at Khao Nan National Park
as well as by researchers.
Abstract: Keystroke authentication is a new access control system
to identify legitimate users via their typing behavior. In this paper,
machine learning techniques are adapted for keystroke authentication.
Seven learning methods are used to build models to differentiate user
keystroke patterns. The selected classification methods are Decision
Tree, Naive Bayesian, Instance Based Learning, Decision Table, One
Rule, Random Tree, and K-star. Among these methods, three are
studied in more detail. The results show that machine learning
is a feasible alternative for keystroke authentication. Compared with
the conventional Nearest Neighbour method used in recent research,
learning methods, especially Decision Tree, can be more accurate. In
addition, the experimental results reveal that 3-grams are more accurate
than 2-grams and 4-grams for feature extraction. Also, combinations
of attributes tend to result in higher accuracy.
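One simple way to read the n-gram comparison: an n-gram feature spans n consecutive keystrokes, for example the elapsed time across each sliding window. A toy sketch with hypothetical timings (the paper's exact feature definition may differ):

```python
def ngram_features(timestamps, n=3):
    """Elapsed time over each sliding window of n consecutive keystrokes."""
    return [round(timestamps[i + n - 1] - timestamps[i], 3)
            for i in range(len(timestamps) - n + 1)]

# hypothetical keydown times (seconds) for a four-key sequence
t = [0.00, 0.12, 0.31, 0.45]
print(ngram_features(t, n=2))  # [0.12, 0.19, 0.14]
print(ngram_features(t, n=3))  # [0.31, 0.33]
```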
Abstract: Sequential pattern mining is a challenging task in the data mining area, with wide applications. One of those applications is mining patterns from weblogs. In recent times, weblogs have been highly dynamic, and some of their entries may become obsolete over time. In addition, users may frequently change the threshold value during the data mining process until acquiring the required output or mining interesting rules. Some of the recently proposed algorithms for mining weblogs build the tree with two scans and always consume considerable time and space. In this paper, we build a Revised PLWAP with Non-frequent Items (RePLNI-tree) with a single scan for all items. While mining sequential patterns, the links related to the non-frequent items are not considered; hence, it is not required to delete or maintain the information of nodes while revising the tree for mining updated transactions. The algorithm supports both incremental and interactive mining, and it is not required to re-compute the patterns each time the weblog is updated or the minimum support is changed. The performance of the proposed tree is better even when the size of the incremental database is more than 50% of the existing one. For evaluation purposes, we used a benchmark weblog dataset and found that the performance of the proposed tree is encouraging compared with some recently proposed approaches.
Abstract: The fault detection and diagnosis of complicated production processes is one of the essential tasks needed to run a process safely with good final product quality. Unexpected events occurring in the process may have a serious impact on it. In this work, a triangular representation of process measurement data obtained on an on-line basis is evaluated using a simulated process. The effect of using linear and nonlinear reduced spaces is also tested, and their diagnosis performance is demonstrated using multivariate fault data. It is shown that the diagnosis method based on the nonlinear technique produces more reliable results and outperforms the linear method, and that the use of an appropriate reduced space yields better diagnosis performance. The presented diagnosis framework differs from existing ones in that it attempts to extract the fault pattern in the reduced space, not in the original process variable space. The use of a reduced model space helps to mitigate the sensitivity of the fault pattern to noise.
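As an illustration of diagnosing in a reduced space, here is a minimal PCA sketch (PCA is a standard linear reduction; the paper also evaluates a nonlinear one, and the data below are synthetic):

```python
import numpy as np

def pca_reduce(X, k=2):
    """Project mean-centered data onto its first k principal components."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T, Vt[:k]

rng = np.random.default_rng(0)
normal = rng.normal(0.0, 1.0, (200, 5))          # normal operating data
fault = normal + np.array([3.0, 0, 0, 0, 0])     # fault: shift in variable 1
Z_normal, comps = pca_reduce(normal, k=2)
Z_fault = (fault - normal.mean(axis=0)) @ comps.T  # same projection applied
# the fault cluster is displaced from normal operation in the reduced space
shift = float(np.linalg.norm(Z_fault.mean(axis=0) - Z_normal.mean(axis=0)))
print(round(shift, 2))
```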
Abstract: The Bandung city center can be deemed the city's economic, social, and cultural center. However, the city center suffers from deterioration, and retail activities tend to shift outward from it. Numerous idyllic residences changed into business premises in two villages situated in the northern part of the city during the 1990s, especially after a new highway and flyover opened. According to space syntax theory, the pattern of spatial integration in the urban grid is a prime determinant of movement patterns in the system. The syntactic analysis results show that the flyover has an insignificant influence on the street network in the city center. However, the flyover has been generating a major difference in the new commercial area, since that area has become relatively as strategic as the city center. Besides the street network, local government policy, rapid private motorization, and the particular conditions of each site also played important roles in encouraging the current commercial areas to flourish.
Abstract: Background: Contact lens (CL) wear can cause changes in blinking and corneal staining. Aims and Objectives: To determine the effects of CL materials (HEMA and SiHy) on spontaneous blink rate, blinking patterns, and corneal staining after 2 months of wear. Methods: Ninety subjects in 3 groups (control, HEMA, and SiHy) were assessed at baseline and at 2 months. Blink rate was recorded using a video camera. Blinking patterns were assessed with a digital camera and a slit lamp biomicroscope. Corneal staining was graded using the IER grading scale. Results: There were no significant differences in any parameter at baseline. At 2 months, CL wearers showed a significant increase in average blink rate (F(1.626, 47.141) = 7.250, p = 0.003; F(2, 58) = 6.240, p = 0.004) and corneal staining (χ²(2, n=30) = 31.921, p < 0.001; χ²(2, n=30) = 26.909, p < 0.001). Conclusion: Blinking characteristics and corneal staining were not influenced by the type of soft CL material.