Abstract: Intelligibility is an essential characteristic of a speech
signal; it determines how well the information carried by the signal
can be understood. Background noise in the environment can
deteriorate the intelligibility of recorded speech. In this paper, we
present a simple variance-subtracted, variable-level discrete wavelet
transform that improves speech intelligibility. The proposed
algorithm does not require an explicit estimate of the noise, i.e.,
prior knowledge of the noise; hence, it is easy to implement and
reduces the computational burden. The algorithm chooses a separate
decomposition level for each frame based on signal-dominant and
noise-dominant criteria. Its performance is evaluated with the
Short-Time Objective Intelligibility (STOI) measure, and the results
are compared with universal Discrete Wavelet Transform (DWT)
thresholding and Minimum Mean Square Error (MMSE) methods. The
experimental results reveal that the proposed scheme outperforms the
competing methods.
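The variance-subtracted, variable-level DWT itself is not specified in the abstract; as a minimal illustration of the universal-thresholding baseline it is compared against, the sketch below applies universal soft thresholding to the detail coefficients of a one-level Haar DWT. The Haar filters and the median-based noise estimate are standard textbook choices, not taken from the paper.

```python
import math

def haar_dwt(x):
    """One level of the Haar DWT: returns (approximation, detail) coefficients."""
    a = [(x[2 * i] + x[2 * i + 1]) / math.sqrt(2) for i in range(len(x) // 2)]
    d = [(x[2 * i] - x[2 * i + 1]) / math.sqrt(2) for i in range(len(x) // 2)]
    return a, d

def haar_idwt(a, d):
    """Inverse one-level Haar DWT (perfect reconstruction)."""
    x = []
    for ai, di in zip(a, d):
        x.append((ai + di) / math.sqrt(2))
        x.append((ai - di) / math.sqrt(2))
    return x

def universal_soft_threshold(d, n):
    """Universal threshold sigma * sqrt(2 ln n); sigma is estimated from
    the median absolute detail coefficient (a standard robust estimate)."""
    sigma = sorted(abs(v) for v in d)[len(d) // 2] / 0.6745
    t = sigma * math.sqrt(2 * math.log(n))
    return [math.copysign(max(abs(v) - t, 0.0), v) for v in d]

def denoise(x):
    """Threshold the detail band, keep the approximation, and reconstruct."""
    a, d = haar_dwt(x)
    return haar_idwt(a, universal_soft_threshold(d, len(x)))
```

A frame-wise scheme, as in the paper, would apply this per frame with a decomposition level chosen from the frame's noise/signal dominance rather than a fixed single level.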
Abstract: The purpose of the present research is to equate two
test forms as part of a study to evaluate the educational effectiveness
of the ARTé: Mecenas art history learning game. The researcher
applied Item Response Theory (IRT) procedures to calculate item,
test, and mean-sigma equating parameters. With the sample size
n=134, test parameters indicated “good” model fit but low Test
Information Functions and more acute than expected equating
parameters. Therefore, the researcher applied equipercentile equating
and linear equating to raw scores and compared the equated form
parameters and effect sizes from each method. Item scaling in IRT
enables the researcher to select a subset of well-discriminating items.
The mean-sigma step produces a mean-slope adjustment from the
anchor items, which was used to scale the score on the new form
(Form R) to the reference form (Form Q) scale. In equipercentile
equating, scores are adjusted to align the proportion of scores in each
quintile segment. Linear equating produces a mean-slope adjustment,
which was applied to all core items on the new form. The study
followed a quasi-experimental design with purposeful sampling of
students enrolled in a college-level art history course (n=134) and a
counterbalancing design to distribute both forms across the pre- and post-tests.
The Experimental Group (n=82) was asked to play ARTé:
Mecenas online and complete Level 4 of the game within a two-week
period; 37 participants completed Level 4. Over the same period, the
Control Group (n=52) did not play the game. The researcher
examined between group differences from post-test scores on test
Form Q and Form R by full-factorial Two-Way ANOVA. The raw
score analysis indicated a 1.29% direct effect of form, which was
statistically non-significant but may be practically significant. The
researcher repeated the between group differences analysis with all
three equating methods. For the IRT mean-sigma adjusted scores,
form had a direct effect of 8.39%. Mean-sigma equating with a small
sample may have resulted in inaccurate equating parameters.
Equipercentile equating aligned test means and standard deviations,
but resultant skewness and kurtosis worsened compared to raw score
parameters. Form had a 3.18% direct effect. Linear equating
produced the lowest Form effect, approaching 0%. Using linearly
equated scores, the researcher conducted an ANCOVA to examine
the effect size in terms of prior knowledge. The between group effect
size for the Control Group versus Experimental Group participants
who completed the game was 14.39% with a 4.77% effect size
attributed to pre-test score. Playing and completing the game
increased art history knowledge, and individuals with low prior
knowledge tended to gain more from pre- to post-test. Ultimately,
researchers should approach test equating based on their theoretical
stance on Classical Test Theory and IRT and the respective assumptions. Regardless of the approach or method, test equating
requires a representative sample of sufficient size. With small sample
sizes, the application of a range of equating approaches can expose
item and test features for review, inform interpretation, and identify
paths for improving instruments for future study.
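The linear-equating step described above admits a compact expression: a raw score x on the new form (Form R) is mapped onto the reference form (Form Q) scale by l(x) = mu_Q + (sd_Q / sd_R)(x - mu_R). The sketch below uses invented scores rather than the study's data; after equating, the new-form scores share the reference form's mean and standard deviation.

```python
import statistics

def linear_equate(new_scores, ref_scores):
    """Linear equating: align the first two moments of the new form with
    the reference form via l(x) = mu_ref + (sd_ref / sd_new) * (x - mu_new)."""
    mu_n, sd_n = statistics.mean(new_scores), statistics.pstdev(new_scores)
    mu_r, sd_r = statistics.mean(ref_scores), statistics.pstdev(ref_scores)
    slope = sd_r / sd_n
    intercept = mu_r - slope * mu_n
    return [slope * x + intercept for x in new_scores]
```

Mean-sigma equating in IRT applies the same slope/intercept idea to anchor-item difficulty parameters instead of raw scores, which is why a small anchor sample can make the transformation unstable.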
Abstract: Clinical education is one of the most important components of a nursing curriculum as it develops the students’ cognitive, psychomotor and affective skills. Clinical teaching ensures the integration of knowledge into practice. As the number of students in the field of nursing increases, coupled with the faculty shortage, clinical preceptors are the best choice to ensure student learning in clinical settings. The clinical preceptor role has been introduced in the undergraduate nursing programme. In Pakistan, this role emerged due to a faculty shortage. Initially, two clinical preceptors were hired. This study explored clinical preceptors’ views and experiences of precepting Bachelor of Science in Nursing (BScN) students in an undergraduate program. A case study design was used. As case studies explore a single unit of study, such as a person or a very small number of subjects, the two clinical preceptors were fundamental to the study and served as a single case. Qualitative data were obtained through an iterative process using in-depth interviews and written accounts from reflective journals kept by the clinical preceptors. The findings revealed that the clinical preceptors were dedicated to their roles and responsibilities. Another key finding was that the clinical preceptors’ prior knowledge and clinical experience were valuable assets for performing their role effectively. The clinical preceptors found their new role innovative and challenging, yet stressful at the same time. Findings also revealed that in the clinical agencies there were unclear expectations and role ambiguity. Furthermore, clinical preceptors had difficulty integrating theory into practice in the clinical area and difficulty giving feedback to the students. Although this study is localized to one university, generalizations can be drawn from the results. The key findings indicate that the role of a clinical preceptor is demanding and stressful.
Clinical preceptors need preparation before precepting students in clinical settings. Institutional support is also fundamental for their acceptance. This paper focuses on the views and experiences of clinical preceptors undertaking a newly established role and resonates with the literature. The following recommendations are drawn to strengthen the role of the clinical preceptors: a structured program for clinical preceptors is needed, along with mentorship. Clinical preceptors should be provided with formal training in teaching and learning, with emphasis on clinical teaching and giving feedback to students. Additionally, to improve the integration of theory into practice, clinical modules should be provided ahead of the clinical rotation. In spite of all the challenges, ten more clinical preceptors have been hired as the faculty shortage persists.
Abstract: Matching high-dimensional features between images is computationally expensive for exhaustive search approaches in computer vision. Although the dimensionality of the features can be reduced by exploiting prior knowledge of the homography, matching accuracy may degrade as a tradeoff. In this paper, we present a feature matching method based on the k-means algorithm that reduces the matching cost and matches the features between images without relying on a simplified geometric assumption. Experimental results show that the proposed method outperforms previous linear exhaustive search approaches in terms of the inlier ratio of matched pairs.
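The abstract gives no implementation details; the toy sketch below illustrates the general idea of k-means-accelerated matching: reference descriptors are clustered once, and each query is compared only against the members of its nearest cluster rather than against every descriptor. The deterministic initialization and 2-D "descriptors" are simplifications for illustration, not choices from the paper.

```python
def dist2(p, q):
    """Squared Euclidean distance between two descriptor vectors."""
    return sum((a - b) ** 2 for a, b in zip(p, q))

def kmeans(points, k, iters=10):
    """Plain Lloyd's k-means; the first k points serve as initial centers
    (a toy choice -- real implementations use random or k-means++ seeding)."""
    centers = list(points[:k])
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            clusters[min(range(k), key=lambda c: dist2(p, centers[c]))].append(p)
        centers = [tuple(sum(col) / len(cl) for col in zip(*cl)) if cl else centers[i]
                   for i, cl in enumerate(clusters)]
    return centers, clusters

def match(query, centers, clusters):
    """Search only the cluster whose center is nearest to the query,
    instead of an exhaustive linear scan of all reference descriptors."""
    c = min(range(len(centers)), key=lambda i: dist2(query, centers[i]))
    return min(clusters[c], key=lambda p: dist2(query, p))
```

With k clusters of roughly equal size, the per-query cost drops from O(N) distance evaluations to about O(k + N/k).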
Abstract: In this paper, we study a distributed control algorithm
for the problem of unknown-area coverage by a network of robots.
The coverage objective is to locate a set of targets in the area
while minimizing the robots’ energy consumption. The robots have no
prior knowledge of either the locations or the number of targets in
the area. One efficient approach to compensating for the robots’
lack of knowledge is to incorporate an auxiliary learning algorithm
into the control scheme. The learning algorithm allows the robots to
explore the unknown environment and eventually overcome their lack
of knowledge. The control algorithm itself is modeled using game
theory, where the networked robots use their collective information
to play a non-cooperative potential game. The algorithm is tested
via simulations to verify its performance and adaptability.
Abstract: Human actions are recognized directly from video sequences. The objective of this work is to recognize various human actions such as running, jumping, and walking. Human action recognition requires some prior knowledge about the actions, namely motion estimation and foreground/background estimation. A region of interest (ROI) is extracted to identify the human in each frame. Then, an optical flow technique is used to extract the motion vectors. Using the extracted features, similarity-measure-based classification is performed to recognize the action. Experiments on the Weizmann database show that the proposed method offers high accuracy.
Abstract: Prior literature in the field of adaptive and
personalized learning sequences in e-learning has proposed and
implemented various mechanisms to improve the learning process,
such as individualization and personalization, but these are complex
to implement owing to expensive algorithmic programming and the
need for extensive prior data. The main objective of personalizing
a learning sequence is to maximize learning by dynamically selecting
the closest teaching operation in order to achieve the learner's
competency. In this paper, a new technique is proposed and tested
that performs individualization and personalization using a modified
reversed roulette wheel selection algorithm (RWSA) that runs in
O(n). The technique is simpler to implement and algorithmically less
expensive than other evolutionary algorithms, since it collects
dynamic real-time performance metrics, such as examinations,
reviews, and study, to form a single numerical RWSA fitness value.
Results show that the implemented system is capable of recommending
new learning sequences that lessen study time based on the student's
prior knowledge and real performance metrics.
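The modified reversed roulette wheel selection algorithm is not spelled out in the abstract. One common way to "reverse" a roulette wheel, sketched below under that assumption, is to give each candidate a slice proportional to (max + min - fitness), so lower-fitness candidates (e.g., topics the learner performs worst on) are drawn more often; building the weights and spinning each take a single pass, so the whole selection is O(n).

```python
import random

def reversed_roulette_select(items, fitness, rng=None):
    """Reversed roulette wheel: lower fitness gets a larger slice.
    One pass builds the weights, one pass spins the wheel: O(n) overall."""
    rng = rng or random.random
    hi, lo = max(fitness), min(fitness)
    weights = [hi + lo - f for f in fitness]   # reverses the fitness ranking
    total = sum(weights)
    spin = rng() * total
    acc = 0.0
    for item, w in zip(items, weights):
        acc += w
        if spin <= acc:
            return item
    return items[-1]   # guard against floating-point rounding
```

The `rng` parameter is injected here only so the sketch is testable; in use it would default to `random.random`.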
Abstract: Inference plays an important role in the learning
process and it can lead to a rapid acquisition of a second language.
When learning a non-native language i.e., a critical language like
Arabic, the students depend on the teacher’s support most of the time
to learn new concepts. The students focus on memorizing the new
vocabulary and stress on learning all the grammatical rules. Hence,
the students became mechanical and cannot produce the language
easily. As a result, they are unable to predicate the meaning of words
in the context by relying heavily on the teacher, in that they cannot
link their prior knowledge or even identify the meaning of the words
without the support of the teacher. This study explores how the
teacher guides students learning during the inference process and
what are the processes of learning that can direct student’s inference.
Abstract: Supermarkets are the most electricity-intensive type of
commercial building. An unsuitable indoor environment in a
supermarket, caused by abnormal HVAC operation, leads to wasted
energy in the refrigeration systems. This study briefly describes
the relevant background and proposes easy-to-use analysis
terminology for investigating the impact of HVAC operations on
refrigeration power consumption, using field-test data obtained from
a building automation system (BAS). Building on this background and
prior knowledge, expected energy interactions between the HVAC and
refrigeration systems are characterized through Pearson's
correlation analysis (the R value), by considering correlations
between equipment power consumption and the dominant independent
variables (driving-force conditions). The R value can be
conveniently used to evaluate how strong the relations between
equipment operations and driving-force parameters are. The R values
calculated from field data are compared to the expected ranges of R
values computed by the energy-interaction methodology. The
comparison separates equipment operating conditions into faulty and
normal conditions. This analysis can readily assess the condition of
equipment operations or building sensors, because equipment may
operate abnormally due to routine operations or faulty commissioning
in the field. With the systematic and easy-to-use description of
interactions provided in the present article, the procedure can be
used as a tool to evaluate proper commissioning and routine
operation of HVAC and refrigeration systems, to detect simple faults
(e.g., in sensors, in the driving-force environment of the
refrigeration systems, and in equipment set-points), and to optimize
power consumption in supermarket buildings. Moreover, the analysis
will be used in future fault detection and diagnosis (FDD) research
for supermarkets.
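The R-value screening described above reduces to computing Pearson's r between an equipment power series and a driving-force variable and checking it against an expected range. A minimal sketch follows; the expected-range values in the test are invented for illustration, not taken from the study.

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient between two
    equal-length series, e.g., equipment power vs. a driving-force variable."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def is_faulty(r, expected_low, expected_high):
    """Flag the operating condition when the observed R falls outside the
    range expected from the energy-interaction methodology."""
    return not (expected_low <= r <= expected_high)
```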
Abstract: Constructivism, the latest teaching and learning theory in western countries, which is based on the premise that cognition (learning) is the result of "mental construction", lays emphasis on the learner's active learning. Guided by constructivism, this thesis discusses the teaching plan and its application in an extensive reading course. In the extensive reading classroom, emphasis should be laid on activating students' prior knowledge, grasping the skills of fast reading, and combining reading with writing to check extracurricular reading. With these three factors complementing each other, students' English reading ability can be improved effectively.
Abstract: The use of anatomical landmarks as a basis for image to patient registration is appealing because the registration may be performed retrospectively. We have previously proposed the use of two anatomical soft tissue landmarks of the head, the canthus (corner of the eye) and the tragus (a small, pointed, cartilaginous flap of the ear), as a registration basis for an automated CT image to patient registration system, and described their localization in patient space using close range photogrammetry. In this paper, the automatic localization of these landmarks in CT images, based on their curvature saliency and using a rule based system that incorporates prior knowledge of their characteristics, is described. Existing approaches to landmark localization in CT images are predominantly semi-automatic and primarily for localizing internal landmarks. To validate our approach, the positions of the landmarks localized automatically and manually in near isotropic CT images of 102 patients were compared. The average difference was 1.2 mm (std = 0.9 mm, max = 4.5 mm) for the medial canthus and 0.8 mm (std = 0.6 mm, max = 2.6 mm) for the tragus. The medial canthus and tragus can be automatically localized in CT images, with performance comparable to manual localization, based on the approach presented.
Abstract: Conventional controllers usually require prior knowledge of a mathematical model of the process. Inaccuracy in the mathematical model degrades the performance of the process, especially for non-linear and complex control problems. The process used here is a water-bath system, which is widely used and nonlinear to some extent. For a water-bath system, it is necessary to attain the desired temperature within a specified period of time, avoiding overshoot and absolute error, with good temperature-tracking capability; otherwise the process is disturbed.
To overcome these difficulties, intelligent controllers, Fuzzy Logic (FL) and the Adaptive Neuro-Fuzzy Inference System (ANFIS), are proposed in this paper. The fuzzy controller is designed to work with knowledge in the form of linguistic control rules. However, the translation of these linguistic rules into the framework of fuzzy set theory depends on the choice of certain parameters, for which no formal method is known. To design the ANFIS, a fuzzy inference system is combined with the learning capability of a neural network.
The analysis shows that ANFIS is best suited for adaptive temperature control of the above system. Compared to PID and FLC, ANFIS produces a stable control signal. It has much better temperature-tracking capability, with almost zero overshoot and minimal absolute error.
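Neither controller's rule base is given in the abstract. As an illustration of the linguistic-rule idea, the sketch below is a zero-order Sugeno-style fuzzy controller for the water-bath temperature error, with three assumed triangular membership functions and invented rule consequents (heater power 0, 0.5, 1); the membership ranges are hypothetical, not from the paper.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_heater_power(error):
    """One-input, three-rule Sugeno controller; error = setpoint - temperature.
    Rules (assumed): error negative -> power 0, zero -> 0.5, positive -> 1.
    Output is the firing-strength-weighted average of the rule consequents."""
    mu_neg = tri(error, -20.0, -10.0, 0.0)
    mu_zero = tri(error, -10.0, 0.0, 10.0)
    mu_pos = tri(error, 0.0, 10.0, 20.0)
    num = mu_neg * 0.0 + mu_zero * 0.5 + mu_pos * 1.0
    den = mu_neg + mu_zero + mu_pos
    if den == 0.0:                       # outside the fuzzified range: saturate
        return 1.0 if error > 0 else 0.0
    return num / den
```

An ANFIS layer would additionally learn the membership parameters and consequents from data instead of fixing them by hand, which is the adaptation the abstract credits for the improved tracking.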
Abstract: In this paper, the performance of a Puma 560 manipulator
is compared for an ANFIS controller trained by hybrid gradient
descent and least squares learning versus a radial basis function
(RBF) based neuro-fuzzy controller tuned by a hybrid Genetic
Algorithm and Generalized Pattern Search. ANFIS, which is based on a
Takagi-Sugeno type fuzzy controller, needs prior knowledge of the
rule base, while the RBF-based neuro-fuzzy controller requires no
rule-base knowledge. The hybrid Genetic Algorithm with Generalized
Pattern Search is used to tune the weights of the RBF-based
neuro-fuzzy controller. All the controllers are checked for
butterfly trajectory tracking, and the results, in the form of
Cartesian and joint-space errors, are compared. The ANFIS-based
controller shows better performance than the RBF-based neuro-fuzzy
controller, but the rule-base independence of the RBF-based
neuro-fuzzy controller gives it an edge over ANFIS.
Abstract: It is known that incorporating prior knowledge into support
vector regression (SVR) can help improve the approximation
performance. Most research is concerned with incorporating knowledge
in the form of numerical relationships. Little work, however, has
been done to incorporate prior knowledge of the structural
relationships among the variables (referred to as Structural Prior
Knowledge, SPK). This paper explores the incorporation of SPK into
SVR by constructing an appropriate admissible support vector kernel
(SV kernel) based on the properties of reproducing kernels (R.K.).
Three levels of SPK specification are studied, with the
corresponding sub-levels of prior knowledge that can be considered
by the method. These include Hierarchical SPK (HSPK); Interactional
SPK (ISPK), consisting of independence, global, and local
interaction; and Functional SPK (FSPK), composed of exterior-FSPK
and interior-FSPK. A convenient tool for describing SPK, namely the
Description Matrix of SPK, is introduced. Subsequently, a new SVR,
namely Motivated Support Vector Regression (MSVR), whose structure
is motivated in part by SPK, is proposed. Synthetic examples show
that it is possible to incorporate a wide variety of SPK and that
doing so helps improve the approximation performance in complex
cases. The benefits of MSVR are finally shown on a real-life
military application, air-to-ground battle simulation, which shows
great potential for MSVR in complex military applications.
Abstract: In this paper, algorithms for the automatic localisation
of two anatomical soft tissue landmarks of the head, the medial
canthus (inner corner of the eye) and the tragus (a small, pointed,
cartilaginous flap of the ear), in CT images are described. These
landmarks are to be used as a basis for an automated image-to-patient
registration system we are developing. The landmarks are localised
on a surface model extracted from CT images, based on surface
curvature and a rule-based system that incorporates prior knowledge
of the landmark characteristics. The approach was tested on a dataset
of near-isotropic CT images of 95 patients. The positions of the
automatically localised landmarks were compared to the positions of
the manually localised landmarks. The average difference was 1.5 mm
for the medial canthus and 0.8 mm for the tragus, with maximum
differences of 4.5 mm and 2.6 mm respectively. The medial canthus
and tragus can be automatically localised in CT images, with
performance comparable to manual localisation.
Abstract: The main aim of this study was to examine whether
people understand indicative conditionals on the basis of syntactic
factors or on the basis of subjective conditional probability. The
second aim was to investigate whether the conditional probability of
q given p depends on the antecedent and consequent sizes or derives
from inductive processes that establish a link of plausible
co-occurrence between events semantically or experientially associated.
These competing hypotheses have been tested through a 3 x 2 x 2 x 2
mixed design involving the manipulation of four variables: type of
instructions (“Consider the following statement to be true", “Read the
following statement" and condition with no conditional statement);
antecedent size (high/low); consequent size (high/low); statement
probability (high/low). The first variable was between-subjects, the
others were within-subjects. The inferences investigated were Modus
Ponens and Modus Tollens. Ninety undergraduates of the Second
University of Naples, without any prior knowledge of logic or
conditional reasoning, participated in this study.
Results suggest that people understand conditionals in a syntactic
way rather than in a probabilistic way, even though the perception of
the conditional probability of q given p is at least partially
involved in the comprehension of conditionals. They also showed
that, in the presence of a conditional syllogism, inferences are not
affected by the antecedent or consequent sizes. From a theoretical
point of view, these findings suggest that it would be inappropriate
to abandon the idea that conditionals are naturally understood in a
syntactic way in favor of the idea that they are understood in a
probabilistic way.
Abstract: Segmentation of Magnetic Resonance Imaging (MRI) images is one of the most challenging problems in medical imaging. This paper compares the performance of Seed-Based Region Growing (SBRG), the Adaptive Network-Based Fuzzy Inference System (ANFIS) and Fuzzy c-Means (FCM) in brain abnormality segmentation. Controlled experimental data are used, designed in such a way that the sizes of the abnormalities are known a priori. This is done by cutting out abnormalities of various sizes and pasting them onto normal brain tissue. The normal tissues, or the background, are divided into three categories. The segmentation is performed on fifty-seven datasets from each category. The known abnormality sizes, in number of pixels, are then compared with the segmentation results of the three proposed techniques. It was found that ANFIS returns the best segmentation performance for light abnormalities, whereas SBRG performs well for dark abnormality segmentation.
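The FCM formulation is left implicit in the abstract; for scalar intensities the standard updates are u_ij = 1 / Σ_l (d_ij / d_lj)^(2/(m-1)) for memberships and c_i = Σ_j u_ij^m x_j / Σ_j u_ij^m for centers. Below is a minimal 1-D sketch; the evenly spread initialization and m = 2 are conventional choices, not taken from the paper.

```python
def fcm(xs, k, m=2.0, iters=50):
    """Fuzzy c-means on scalar intensities (e.g., gray levels).
    u[i][j] is the membership of sample j in cluster i; memberships of
    each sample sum to 1 across clusters."""
    # Toy initialization: spread the initial centers over the data range.
    lo, hi = min(xs), max(xs)
    centers = [lo + (hi - lo) * i / max(k - 1, 1) for i in range(k)]
    u = [[0.0] * len(xs) for _ in range(k)]
    for _ in range(iters):
        for j, x in enumerate(xs):
            d = [abs(x - c) + 1e-12 for c in centers]   # avoid divide-by-zero
            for i in range(k):
                u[i][j] = 1.0 / sum((d[i] / dl) ** (2.0 / (m - 1.0)) for dl in d)
        centers = [
            sum((u[i][j] ** m) * xs[j] for j in range(len(xs))) /
            sum(u[i][j] ** m for j in range(len(xs)))
            for i in range(k)
        ]
    return centers, u
```

For image segmentation each pixel would then be assigned to the cluster with the highest membership, and the resulting region sizes compared against the known abnormality sizes in pixels.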
Abstract: Extracting in-play scenes in sport videos is essential for
quantitative analysis and effective video browsing of the sport
activities. Game analysis of badminton, as with other racket sports,
requires detecting the start and end of each rally period in an
automated manner. This paper describes an automatic serve-scene
detection method employing cubic higher-order local auto-correlation
(CHLAC) and multiple regression analysis (MRA). CHLAC can extract
features of the postures and motions of multiple persons without
segmenting or tracking each person, by virtue of its shift-invariance
and additivity, and it necessitates no prior knowledge. The specific
scenes, such as serves, are then detected by MRA from the CHLAC
features. To demonstrate the effectiveness of our method,
the experiment was conducted on video sequences of five badminton
matches captured by a single ceiling camera. The averaged precision
and recall rates for the serve scene detection were 95.1% and 96.3%,
respectively.
Abstract: The development of Internet technology in recent years has led to a more active role of users in creating Web content. This has significant effects both on individual learning and collaborative knowledge building. This paper will present an integrative framework model to describe and explain learning and knowledge building with shared digital artifacts on the basis of Luhmann's systems theory and Piaget's model of equilibration. In this model, knowledge progress is based on cognitive conflicts resulting from incongruities between an individual's prior knowledge and the information which is contained in a digital artifact. Empirical support for the model will be provided by 1) applying it descriptively to texts from Wikipedia, 2) examining knowledge-building processes using a social network analysis, and 3) presenting a survey of a series of experimental laboratory studies.
Abstract: In this paper, the statistical properties of filtered or convolved signals are considered by deriving the resulting density functions as well as the exact mean and variance expressions, given prior knowledge of the statistics of the individual signals in the filtering or convolution process. It is shown that the density function after linear convolution is a mixture density, where the number of density components is equal to the number of observations of the shortest signal. For circular convolution, the observed samples are characterized by a single density function, which is a sum of products.
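The full derivation of the resulting densities is in the paper; one first-moment consequence of linear convolution is directly checkable: for the full convolution y = x * h, the output samples sum to (Σ_k x[k])(Σ_j h[j]), the identity underlying the exact mean expressions. A short sketch verifying it:

```python
def convolve(x, h):
    """Full linear convolution: y[n] = sum_k x[k] * h[n - k],
    producing len(x) + len(h) - 1 output samples."""
    y = [0.0] * (len(x) + len(h) - 1)
    for i, xi in enumerate(x):
        for j, hj in enumerate(h):
            y[i + j] += xi * hj
    return y
```

Note that the number of input samples of x contributing to y[n] varies with n, which is why the output samples follow a mixture density with as many components as there are observations of the shortest signal.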