Abstract: Engagement is one of the most important factors in determining successful outcomes and deep learning in students. Existing approaches to detecting student engagement involve periodic human observations that are subject to inter-rater variability. Our solution uses real-time multimodal multisensor data, labeled by objective performance outcomes, to infer the engagement of students. The study involved four students with a combined diagnosis of cerebral palsy and a learning disability who took part in a 3-month trial over 59 sessions. Multimodal multisensor data were collected while they participated in a continuous performance test. Eye gaze, electroencephalogram, body pose, and interaction data were used to create a model of student engagement through objective labeling from the continuous performance test outcomes. To achieve this, a new type of continuous performance test, the Seek-X type, is introduced. Nine features were extracted, including high-level handpicked compound features. A series of machine learning approaches was evaluated using leave-one-out cross-validation. Overall, the random forest classifier achieved the best results: 93.3% classification accuracy for engagement and 42.9% for disengagement. We compared these results to outcomes from different models: AdaBoost, decision tree, k-nearest neighbor, naïve Bayes, neural network, and support vector machine. We showed that the multisensor approach achieved higher accuracy than features from any reduced set of sensors, and that high-level handpicked features improve classification accuracy in every sensor mode. Our approach is robust to both sensor fallout and occlusions. The single most important sensor feature for classifying engagement and distraction was shown to be eye gaze.
We have shown that we can accurately predict the engagement level of students with learning disabilities in real time, in a way that does not rely on human observation, is not subject to inter-rater variability, and does not depend on a single mode of sensor input. This will help teachers design interventions for a heterogeneous group of students whose individual needs they cannot possibly attend to one by one. Our approach can be used to identify those with the greatest learning challenges so that all students are supported to reach their full potential.
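The evaluation protocol described above (leave-one-out cross-validation of a random forest engagement classifier over nine features) can be sketched as follows. The data here are synthetic placeholders, not the study's dataset, and the feature semantics are only indicated in comments.

```python
# Sketch of the paper's evaluation setup: leave-one-out cross-validation
# of a random forest engagement classifier. Data and labels are random
# stand-ins for the study's gaze/EEG/pose/interaction features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(0)
# Nine features per sample, e.g. gaze, EEG band power, pose, interaction stats.
X = rng.normal(size=(40, 9))
y = rng.integers(0, 2, size=40)          # 1 = engaged, 0 = disengaged

clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=LeaveOneOut())
print(f"LOOCV accuracy: {scores.mean():.3f}")
```

With leave-one-out, each of the 40 samples is held out once, so the reported accuracy is the mean over 40 single-sample test folds.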
Abstract: Diabetes is a medical condition that can lead to various diseases such as stroke, heart disease, blindness, and obesity. In clinical practice, patients' reluctance toward blood glucose examination is rather alarming, as some individuals describe the pinprick and pinch as painful. For patients with high glucose levels, pricking the fingers multiple times a day with a conventional glucose meter for close monitoring can be tiresome, time-consuming, and painful. Given these concerns, researchers have explored several non-invasive techniques for measuring blood glucose, including ultrasonic sensors, multisensory systems, absorbance and transmittance, bio-impedance, voltage intensity, and thermography. This paper discusses the application of near-infrared (NIR) spectroscopy as a non-invasive method of measuring glucose levels, and the implementation of a linear system identification model for predicting the output data of the NIR measurement. In this study, the wavelengths considered are 1450 nm and 1950 nm, as both showed the most reliable information on the presence of glucose in blood. The linear AutoRegressive Moving Average with eXogenous input (ARMAX) model, in both un-regularized and regularized forms, was implemented to predict the output of the NIR measurement and investigate the practicality of a linear system for this task. However, the system achieved only 50.11% accuracy, far from a satisfactory result.
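The un-regularized versus regularized fitting discussed above can be illustrated on a simplified ARX-type model (a cut-down stand-in for the paper's ARMAX model), estimated by least squares with an optional ridge penalty. The signal, true coefficients, and penalty weight below are all assumed for illustration.

```python
# Minimal sketch of fitting a linear ARX-type model (a simplified stand-in
# for the ARMAX model in the paper) to a measurement signal by least
# squares, with a ridge penalty as the "regularized" variant. Data are synthetic.
import numpy as np

rng = np.random.default_rng(1)
u = rng.normal(size=200)                       # input signal (synthetic)
y = np.zeros(200)
for t in range(1, 200):                        # true system: y[t] = 0.8 y[t-1] + 0.5 u[t]
    y[t] = 0.8 * y[t - 1] + 0.5 * u[t] + 0.05 * rng.normal()

# Regressor matrix: past output and current input.
Phi = np.column_stack([y[:-1], u[1:]])
target = y[1:]

theta_ols = np.linalg.lstsq(Phi, target, rcond=None)[0]          # un-regularized
lam = 0.1                                                        # ridge weight (assumed)
theta_ridge = np.linalg.solve(Phi.T @ Phi + lam * np.eye(2), Phi.T @ target)
print("OLS estimate:", theta_ols)      # close to the true [0.8, 0.5]
print("Ridge estimate:", theta_ridge)
```

The ridge estimate shrinks slightly toward zero; with well-conditioned data the two estimates nearly coincide, while regularization mainly helps when the regressors are noisy or correlated.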
Abstract: Both Lidars and Radars are sensors for obstacle detection. While Lidars are very accurate on obstacle positions and less accurate on their velocities, Radars are more precise on obstacle velocities and less precise on their positions. Sensor fusion between Lidar and Radar aims at improving obstacle detection by exploiting the advantages of the two sensors. The present paper proposes a real-time Lidar/Radar data fusion algorithm for obstacle detection and tracking based on the global nearest neighbour standard filter (GNN). This algorithm is implemented and embedded in an automotive vehicle as a component generated by a real-time multisensor software framework. The benefits of data fusion compared with the use of a single sensor are illustrated through several tracking scenarios (on a highway and on a bend), using real-time kinematic sensors mounted on the ego and tracked vehicles as ground truth.
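The core of a GNN filter is the association step: each incoming detection is matched to the existing track that minimizes the total assignment cost. A minimal sketch of that step, with toy positions and plain squared Euclidean distance standing in for a gated statistical distance:

```python
# Sketch of the global-nearest-neighbour (GNN) association step: new
# measurements are jointly assigned to predicted tracks so that the total
# distance is minimal. Positions are illustrative toy values.
import numpy as np
from scipy.optimize import linear_sum_assignment

tracks = np.array([[0.0, 0.0], [10.0, 0.0], [5.0, 5.0]])        # predicted obstacle positions
measurements = np.array([[9.8, 0.3], [0.2, -0.1], [5.1, 4.7]])  # fused Lidar/Radar detections

# Cost matrix of squared Euclidean distances (stand-in for a gated statistical distance).
cost = ((tracks[:, None, :] - measurements[None, :, :]) ** 2).sum(axis=2)
track_idx, meas_idx = linear_sum_assignment(cost)
for t, m in zip(track_idx, meas_idx):
    print(f"track {t} <- measurement {m}")
```

Unlike greedy nearest-neighbour matching, the global assignment cannot pair two tracks with the same measurement, which is what makes it "global".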
Abstract: The current trend of organizations offering their workers open-plan and co-working offices is intended to stimulate teamwork and collaboration. However, this is not always the case, as these kinds of spaces bring other challenges that compromise workers' productivity and creativity. We present an approach for improving creativity and productivity in the workspace by redesigning an office chair to incorporate subtle technological elements that help users focus, relax, and be more productive and creative. This sheds light on how we can better design interactive furniture for such popular contexts, as we develop this new chair through a multidisciplinary approach spanning ergonomics, interior design, interaction design, hardware and software engineering, and psychology.
Abstract: The aim of this study was to evaluate the role of multisensory elements in enhancing and facilitating foreign language acquisition among adult students in a language classroom. The use of multisensory elements enables the creation of a student-centered classroom, where the focus is on the individual learner's language learning process, perceptions, and motivation. Multisensory language learning is a pedagogical approach in which the language learner uses all the senses more effectively than in a traditional in-class environment. Language learning is facilitated by multisensory stimuli, which increase the number of cognitive connections in the learner and take into consideration different types of learners. A living lab called the Multisensory Space creates a relaxed and receptive state in learners through various multisensory stimuli, and thus promotes their natural foreign language acquisition. Qualitative and quantitative data were collected in two questionnaire inquiries among Finnish students of a higher education institute at the end of their basic French courses, in December 2014 and 2016. The inquiries addressed the effects of multisensory elements on the students' motivation to study French as well as their learning outcomes. The results show that the French classes in the Multisensory Space provide the students with an encouraging and pleasant learning environment, which has a positive impact on their motivation to study the foreign language as well as on their language learning outcomes.
Abstract: New sensors and technologies – such as microphones,
touchscreens or infrared sensors – are currently making their
appearance in the automotive sector, introducing new kinds of
Human-Machine Interfaces (HMIs). The interactions with such tools
might be cognitively expensive, thus unsuitable for driving tasks.
It could for instance be dangerous to use touchscreens with a
visual feedback while driving, as it distracts the driver’s visual
attention away from the road. Furthermore, new technologies in
car cockpits modify the interactions of the users with the central
system. In particular, touchscreens are preferred to arrays of buttons for space and design purposes. However, the tactile feedback of buttons is no longer available to the driver, which makes such interfaces more difficult to manipulate while driving. Gestures
combined with an auditory feedback might therefore constitute an
interesting alternative to interact with the HMI. Indeed, gestures can
be performed without vision, which means that the driver’s visual
attention can remain fully dedicated to the driving task. Moreover, auditory feedback can inform the driver both about the task performed on the interface and about the gesture itself, which might compensate for the lack of tactile information. As audition is a relatively unused sense in automotive contexts, gesture sonification can help reduce the cognitive load through this multisensory exploitation. Our approach consists
in using a virtual object (VO) to sonify the consequences of the
gesture rather than the gesture itself. This approach is motivated
by an ecological point of view: Gestures do not make sound, but
their consequences do. In this experiment, the aim was to identify
efficient sound strategies, to transmit dynamic information of VOs to
users through sound. The swipe gesture was chosen for this purpose,
as it is commonly used in current and new interfaces. We chose
two VO parameters to sonify, the hand-VO distance and the VO
velocity. Two kinds of sound parameters can be chosen to sonify the
VO behavior: Spectral or temporal parameters. Pitch and brightness
were tested as spectral parameters, and amplitude modulation as a
temporal parameter. Performance results showed a positive effect of sound compared to a no-sound condition, revealing the usefulness of sound for accomplishing the task.
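The mappings under test can be pictured as simple functions from a VO parameter to a sound control value. The sketch below uses the hand-VO distance as the driver; all numeric ranges are assumed for illustration and are not taken from the experiment.

```python
# Illustrative sketch of the three sonification strategies: a normalized
# hand/virtual-object distance drives either a spectral parameter (pitch,
# brightness) or a temporal one (amplitude-modulation rate). All ranges
# are assumptions, not values from the experiment.
def sonify(distance_norm, strategy="pitch"):
    """Map a normalized hand-VO distance (0..1) to a sound control value."""
    if strategy == "pitch":          # spectral: closer -> higher pitch (Hz)
        return 220.0 + (1.0 - distance_norm) * 660.0
    if strategy == "brightness":     # spectral: low-pass cutoff (Hz)
        return 500.0 + (1.0 - distance_norm) * 4500.0
    if strategy == "am":             # temporal: modulation rate (Hz)
        return 2.0 + (1.0 - distance_norm) * 18.0
    raise ValueError(strategy)

print(sonify(0.0), sonify(1.0))      # extremes of the pitch mapping
```

The same shape of mapping applies when the VO velocity, rather than the distance, is the sonified parameter.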
Abstract: This paper discusses whether a person diagnosed with dyslexia will necessarily have difficulty reading musical notes. The author compares the characteristics of alphabet reading with those of musical notation reading, and concludes
that there should be no contra-indication for teaching standard music
reading to children with dyslexia if an appropriate process is offered.
This conclusion is based on a long-term case study and relies on two main characteristics of music reading: (1) the musical notation system is a systematic, logical, relative set of symbols written on a staff; and (2) learning to read music in connection with playing a musical instrument is a multi-sensory activity that combines sight, hearing, touch, and movement. The paper describes music reading teaching
procedures, using soprano recorders, and provides unique teaching
methods that have been found to be effective for students who were
diagnosed with dyslexia. It provides theoretical explanations in
addition to guidelines for music education practices.
Abstract: Two multisensor system architectures for navigation
and guidance of small Unmanned Aircraft (UA) are presented and
compared. The main objective of our research is to design a compact,
light and relatively inexpensive system capable of providing the
required navigation performance in all phases of flight of small UA,
with a special focus on precision approach and landing, where Vision
Based Navigation (VBN) techniques can be fully exploited in a
multisensor integrated architecture. Various existing techniques for
VBN are compared and the Appearance-Based Navigation (ABN)
approach is selected for implementation. Feature extraction and
optical flow techniques are employed to estimate flight parameters
such as roll angle, pitch angle, deviation from the runway centreline
and body rates. Additionally, we address the possible synergies of
VBN, Global Navigation Satellite System (GNSS) and MEMS-IMU
(Micro-Electromechanical System Inertial Measurement Unit)
sensors, and the use of Aircraft Dynamics Model (ADM) to provide
additional information suitable to compensate for the shortcomings of
VBN and MEMS-IMU sensors in high-dynamics attitude
determination tasks. An Extended Kalman Filter (EKF) is developed
to fuse the information provided by the different sensors and to
provide estimates of position, velocity and attitude of the UA
platform in real time. The key mathematical models describing the two architectures, i.e., the VBN-IMU-GNSS (VIG) system and the VIG-ADM (VIGA) system, are introduced. The first architecture uses VBN and GNSS to augment the MEMS-IMU. The second also includes the ADM to augment the attitude channel.
Simulation of these two modes is carried out and the performances of the two schemes are compared for a small UA integration scheme (i.e., the AEROSONDE UA platform), exploring a representative cross-section of this UA's operational flight envelope, including high-dynamics manoeuvres and CAT-I to CAT-III precision approach tasks.
Simulation of the first system architecture (i.e., VIG system) shows
that the integrated system can reach position, velocity and attitude
accuracies compatible with the Required Navigation Performance
(RNP) requirements. Simulation of the VIGA system also shows
promising results since the achieved attitude accuracy is higher using
the VBN-IMU-ADM than using the VBN-IMU only. A comparison of the VIG and VIGA systems also shows that the position and attitude accuracies of both proposed systems are compatible with the RNP specified for the various UA flight phases, including precision approach down to CAT-II.
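The predict/correct structure of the EKF used to fuse the sensor streams can be sketched with a toy linear example: a 1-D constant-velocity state observed through a position-only sensor. All matrices and measurements below are illustrative stand-ins for the full navigation state.

```python
# Toy sketch of the predict/correct cycle of a Kalman-type filter like the
# EKF used to fuse VBN, GNSS and IMU data. A 1-D constant-velocity model
# with a position sensor stands in for the full navigation state.
import numpy as np

F = np.array([[1.0, 0.1], [0.0, 1.0]])   # state transition (dt = 0.1 s)
H = np.array([[1.0, 0.0]])               # position-only measurement
Q = 0.01 * np.eye(2)                     # process noise covariance
R = np.array([[0.25]])                   # measurement noise covariance

x = np.array([0.0, 1.0])                 # initial [position, velocity]
P = np.eye(2)

for z in [0.12, 0.21, 0.33, 0.38, 0.52]:         # simulated position fixes
    x, P = F @ x, F @ P @ F.T + Q                # predict
    S = H @ P @ H.T + R                          # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)               # Kalman gain
    x = x + K @ (np.array([z]) - H @ x)          # correct
    P = (np.eye(2) - K @ H) @ P
print("estimated [pos, vel]:", x)
```

In the full EKF, F and H become Jacobians of the nonlinear motion and measurement models, evaluated at the current estimate, but the predict/correct cycle is identical.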
Abstract: This paper proposes a hierarchical hidden Markov model (HHMM) to model the detection of M vehicles in a wireless sensor network (WSN). The HHMM contains an extra level of hidden Markov model that models the temporal transitions of each state of the first HMM. By modeling the temporal transitions, only those hypotheses with nonzero transition probabilities need to be tested; this efficiently reduces the computational load, which is preferable in WSN applications. This paper integrates several techniques to optimize the detection performance. The output of the states of the first HMM is modeled as a Gaussian Mixture Model (GMM), where the number of states and the number of Gaussians are experimentally determined, while the other parameters are estimated using Expectation Maximization (EM). The HHMM models the sequence of local decisions, which are based on multiple hypothesis testing with a maximum likelihood approach. The states in the HHMM represent various combinations of vehicles of different types. Due to the statistical advantages of multisensor data fusion, we propose a heuristic based on fuzzy weighted majority voting to enhance cooperative classification of moving vehicles within a region monitored by a wireless sensor network. A fuzzy inference system weighs each local decision based on the signal-to-noise ratio of the acoustic signal for target detection and the signal-to-noise ratio of the radio signal for sensor communication. Both the spatial correlation among the observations of neighboring sensor nodes and the temporal correlation are efficiently utilized. Simulation results demonstrate the efficiency of this scheme.
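The weighted majority voting heuristic described above can be sketched as follows; the SNR-to-weight function here is a crude illustrative proxy, not the paper's fuzzy inference system.

```python
# Sketch of weighted majority voting: each node's local vehicle
# classification is weighted by a confidence derived from its acoustic
# and radio SNRs (the weighting function is an illustrative stand-in
# for the paper's fuzzy inference system).
from collections import defaultdict

# (local decision, acoustic SNR in dB, radio SNR in dB) per sensor node
local_decisions = [("truck", 18.0, 22.0), ("car", 6.0, 20.0),
                   ("truck", 15.0, 10.0), ("car", 5.0, 8.0)]

def weight(acoustic_snr, radio_snr):
    # crude proxy: confidence grows with both SNRs, capped at 1.0
    return min(1.0, (acoustic_snr + radio_snr) / 40.0)

votes = defaultdict(float)
for label, a_snr, r_snr in local_decisions:
    votes[label] += weight(a_snr, r_snr)
decision = max(votes, key=votes.get)
print("fused decision:", decision)   # truck
```

Nodes with poor acoustic or radio SNR thus contribute less to the cooperative classification, rather than being counted equally as in plain majority voting.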
Abstract: Interactive installations for public spaces are a
particular kind of interactive systems, the design of which has been
the subject of several research studies. Sensor-based applications are
becoming increasingly popular, but the human-computer interaction
community is still far from reaching sound, effective large-scale
interactive installations for public spaces. The 6DSpaces project is
described in this paper as a research approach based on studying the
role of multisensory interactivity and how it can be effectively used to draw people toward digital scientific content. The design of an
entire scientific exhibition is described and the result was evaluated
in the real-world context of a Science Centre. The conclusions bring insight into how human-computer interaction should be designed in order to maximize the overall experience.
Abstract: A new data fusion method called the joint probability density matrix (JPDM) is proposed, which can associate and fuse measurements from spatially distributed heterogeneous sensors to identify the real target in a surveillance region. Using a probabilistic grid representation, we numerically combine the uncertainty regions of all the measurements in a general framework. The NP-hard multisensor data fusion problem is thus converted into a peak-picking problem in the grid map. Unlike most existing data fusion methods, the JPDM method does not need association processing and will not lead to combinatorial explosion. Its convergence to the Cramér-Rao lower bound (CRLB) with a diminishing grid size has been proved. Simulation results are presented to illustrate the effectiveness of the proposed technique.
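The grid-based idea can be sketched in a few lines: render each sensor's measurement uncertainty as a probability grid, combine the grids cell-wise, and pick the peak. The Gaussian uncertainty regions and sensor positions below are toy values for illustration.

```python
# Sketch of the JPDM idea: each sensor's measurement uncertainty becomes
# a probability grid, the grids are multiplied cell-wise into a joint
# density, and the target estimate is the grid peak (peak picking).
import numpy as np

xs = np.linspace(0, 10, 101)
X, Y = np.meshgrid(xs, xs)

def gaussian_grid(cx, cy, sigma):
    """Unnormalized Gaussian uncertainty region centred at (cx, cy)."""
    return np.exp(-((X - cx) ** 2 + (Y - cy) ** 2) / (2 * sigma ** 2))

# Two heterogeneous sensors observing the same target with different accuracy.
joint = gaussian_grid(4.0, 6.1, 1.5) * gaussian_grid(4.3, 5.8, 0.8)
iy, ix = np.unravel_index(np.argmax(joint), joint.shape)  # peak picking
print(f"estimated target position: ({xs[ix]:.1f}, {xs[iy]:.1f})")
```

Note that the peak lands between the two measurements, pulled toward the more accurate sensor, and no explicit measurement-to-measurement association was ever computed.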
Abstract: This paper reports on receding horizon filtering for mobile robot systems with cross-correlated sensor noises and uncertainties. The effect of uncertain parameters on the performance of the tracking error model is also considered. A distributed
fusion receding horizon filter is proposed. The distributed fusion
filtering algorithm represents the optimal linear combination of the
local filters under the minimum mean square error criterion. The
derivation of the error cross-covariances between the local receding
horizon filters is the key contribution of this paper. Simulation results on tracking a mobile robot's motion demonstrate the high accuracy and computational efficiency of the distributed fusion receding horizon filter.
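The fusion step itself, the optimal linear combination of local estimates under the minimum mean-square error criterion, can be sketched as follows. For clarity this uses the cross-covariance-free form; the paper's contribution is precisely the derivation of the cross-covariances between the local receding horizon filters, which would enter the same formula as extra terms.

```python
# Sketch of optimal linear (MMSE) fusion of two local filter estimates,
# in the simplified form that ignores cross-covariances between the
# local filters (which the paper derives and accounts for).
import numpy as np

x1, P1 = np.array([2.0, 0.9]), np.diag([0.5, 0.2])   # local estimate 1 and covariance
x2, P2 = np.array([2.3, 1.1]), np.diag([0.3, 0.4])   # local estimate 2 and covariance

# Information (inverse-covariance) weighted combination.
P_fused = np.linalg.inv(np.linalg.inv(P1) + np.linalg.inv(P2))
x_fused = P_fused @ (np.linalg.inv(P1) @ x1 + np.linalg.inv(P2) @ x2)
print("fused estimate:", x_fused)
```

Each component of the fused estimate leans toward whichever local filter reports the smaller variance in that component, and the fused covariance is never larger than either local one.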
Abstract: This paper addresses the problem of multisensor data fusion under non-Gaussian channel noise. M-estimates are known to be a robust solution, at the cost of some accuracy. In order to improve the estimation accuracy while maintaining equivalent robustness, a two-stage robust fusion algorithm is proposed: preliminary rejection of outliers followed by an optimal linear fusion. The
numerical experiments show that the proposed algorithm is equivalent
to the M-estimates in the case of uncorrelated local estimates and
significantly outperforms the M-estimates when local estimates are
correlated.
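The two-stage structure can be sketched with a scalar example: a median-based outlier test (an illustrative rejection rule; the paper's may differ) discards gross outliers, and the survivors are combined by optimal linear fusion, i.e. inverse-variance weighting.

```python
# Sketch of the two-stage robust fusion: stage 1 rejects local estimates
# far from the sample median (illustrative rejection rule), stage 2
# applies optimal linear fusion (inverse-variance weighting) to the rest.
import numpy as np

estimates = np.array([10.1, 9.8, 10.3, 25.0, 9.9])    # one gross outlier (25.0)
variances = np.array([0.4, 0.5, 0.6, 0.5, 0.3])

med = np.median(estimates)
mad = np.median(np.abs(estimates - med))              # median absolute deviation
keep = np.abs(estimates - med) < 5 * mad              # stage 1: reject outliers

w = 1.0 / variances[keep]                             # stage 2: linear fusion
fused = np.sum(w * estimates[keep]) / np.sum(w)
print(f"fused estimate: {fused:.2f}")
```

Once the outlier is gone, the linear fusion is optimal for the remaining estimates, which is why the two-stage scheme can beat a pure M-estimate when local estimates are correlated.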
Abstract: In this paper we propose a framework for
multisensor intrusion detection called Fuzzy Agent-Based Intrusion
Detection System. A unique feature of this model is that the agent uses data from multiple sensors and fuzzy logic to process log files. This feature reduces the overhead in a distributed
intrusion detection system. We have developed an agent
communication architecture that provides a prototype
implementation. This paper also discusses the issues of combining intelligent agent technology with the intrusion detection domain.
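One way an agent might apply fuzzy logic to multi-sensor log data is sketched below: triangular membership functions turn raw event rates into fuzzy suspicion degrees that are combined across sensors. The thresholds and the single rule are purely illustrative; the paper's actual fuzzy rule base is not given in the abstract.

```python
# Illustrative sketch of fuzzy scoring of log events: triangular
# membership functions map per-sensor event rates to a fuzzy degree of
# suspicion, combined with a fuzzy OR (max). Thresholds are assumptions.
def tri(x, lo, peak, hi):
    """Triangular membership function on [lo, hi] peaking at `peak`."""
    if x <= lo or x >= hi:
        return 0.0
    return (x - lo) / (peak - lo) if x <= peak else (hi - x) / (hi - peak)

def suspicion(failed_logins_per_min, port_scans_per_min):
    m_login = tri(failed_logins_per_min, 2, 10, 30)   # auth-log sensor
    m_scan = tri(port_scans_per_min, 5, 20, 60)       # network sensor
    return max(m_login, m_scan)                       # fuzzy OR across sensors

print(f"suspicion: {suspicion(8, 3):.2f}")
```

Because the memberships are graded rather than binary, borderline activity yields a low but nonzero suspicion instead of a hard alert/no-alert decision, which is the usual motivation for fuzzy logic in intrusion detection.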