Abstract: Mobile learning is a new learning landscape that offers opportunities for collaborative, personal, informal, and student-centered learning environments. In implementing any learning system, such as a mobile learning environment, learners' expectations should be taken into consideration. However, there is a lack of studies on this aspect, particularly in the context of Kuwait higher education (HE) institutions. This study focused on how students perceive the use of mobile devices in learning. Although m-learning is considered an effective educational tool in developed countries, it is not yet fully utilized in Kuwait. The study reports on the results of a survey of 623 HE students in Kuwait, conducted to better understand students' perceptions and opinions about the effectiveness of using mobile learning systems. An analysis of the quantitative survey data is presented. The findings indicated that Kuwait HE students are very familiar with mobile devices and their applications. The results also reveal that students have positive perceptions of m-learning and believe that video-based social media applications enhance the teaching and learning process.
Abstract: Human movement in the real world provides
important information for developing human behaviour models and
simulations. However, it is difficult to assess ‘real’ human behaviour
since there is no established method available. As part of the AUNTSUE
(Accessibility and User Needs in Transport – Sustainable Urban
Environments) project, this research aimed to propose a method to
assess human movement and behaviour in crowded areas. The
method is based on the three major steps of video recording,
conceptual behaviour modelling and video analysis. The focus is on
individual human movement and behaviour in normal situations
(panic situations are not considered) and the interactions between
individuals in localized areas. Emphasis is placed on gaining
knowledge of characteristics of human movement and behaviour in
the real world that can be modelled in the virtual environment.
Abstract: We present a dedicated video-based monitoring
system for quantification of a patient’s attention to visual feedback
during robot assisted gait rehabilitation. Two different approaches for
eye gaze and head pose tracking are tested and compared. Several
metrics for assessment of a patient’s attention are also presented.
Experimental results with healthy volunteers demonstrate that
unobtrusive video-based gaze tracking during robot-assisted gait
rehabilitation is possible and is sufficiently robust for quantification
of a patient’s attention and assessment of compliance with the
rehabilitation therapy.
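The abstract mentions metrics for quantifying attention without detailing them. One plausible metric (an assumption, not necessarily one of the paper's) is the fraction of video frames in which the estimated gaze point lands inside the visual-feedback display region; the region bounds and gaze samples below are purely illustrative:

```python
# Sketch of one plausible attention metric: the fraction of frames in
# which the estimated gaze point falls inside the visual-feedback
# display region. Region bounds and gaze samples are illustrative.

def attention_ratio(gaze_points, region):
    """gaze_points: list of (x, y) gaze estimates, one per video frame.
    region: (x_min, y_min, x_max, y_max) of the feedback display."""
    x0, y0, x1, y1 = region
    on_target = sum(1 for x, y in gaze_points
                    if x0 <= x <= x1 and y0 <= y <= y1)
    return on_target / len(gaze_points) if gaze_points else 0.0

# Example: 3 of 4 gaze samples land on a display spanning (0,0)-(100,60).
samples = [(50, 30), (10, 10), (200, 30), (90, 55)]
print(attention_ratio(samples, (0, 0, 100, 60)))  # 0.75
```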
Abstract: This paper proposes a video-based framework for face recognition to identify which faces appear in a video sequence. Our basic idea is to treat recognition as a tracking task: a selection of person candidates is tracked over time according to the visual features of face images observed in video frames. Hence, we employ a state-space model to formulate video-based face recognition, dividing the problem into two parts: the likelihood and the transition measures. The likelihood measure recognizes whose face is currently being observed in a video frame, for which two-dimensional linear discriminant analysis is employed. The transition measure estimates the probability of changing from an incorrect recognition at the previous stage to the correct person at the current stage. Moreover, extra nodes associated with head nodes are incorporated into our proposed state-space model. Experimental results are provided to demonstrate the robustness and efficiency of our proposed approach.
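The likelihood/transition decomposition described above can be sketched as a discrete forward filter over candidate identities. Per-frame likelihood scores (which the paper derives from 2D-LDA on face images) are assumed given here, and the transition matrix and values are illustrative:

```python
# Hedged sketch of recognition-as-tracking: a discrete forward filter
# over candidate identities. Likelihood scores stand in for the
# paper's 2D-LDA measure; the transition matrix models switching from
# a possibly incorrect identity to the correct one between frames.

def forward_step(prior, transition, likelihood):
    """prior[i]: belief that the face is person i after the last frame.
    transition[i][j]: probability of moving from identity i to j.
    likelihood[j]: how well the current frame matches person j."""
    n = len(prior)
    predicted = [sum(prior[i] * transition[i][j] for i in range(n))
                 for j in range(n)]
    posterior = [predicted[j] * likelihood[j] for j in range(n)]
    z = sum(posterior)
    return [p / z for p in posterior]

# Two candidates; sticky transitions; frame evidence favors person 1.
belief = [0.5, 0.5]
T = [[0.9, 0.1], [0.1, 0.9]]
for lik in ([0.2, 0.8], [0.3, 0.7]):   # illustrative likelihood scores
    belief = forward_step(belief, T, lik)
print(max(range(len(belief)), key=belief.__getitem__))  # 1
```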
Abstract: During the past several years, face recognition in video
has received significant attention. Both the wide range of
commercial and law enforcement applications and the availability
of feasible technologies after several decades of research contribute
to this trend. Although current face recognition systems have reached a
certain level of maturity, their development is still limited by the
conditions encountered in many real applications. For example,
recognition from video sequences acquired in open
environments, with changes in illumination, pose, facial
occlusion, and/or low image resolution, remains a largely
unsolved problem. In other words, current algorithms still require
further development. This paper provides an up-to-date survey of video-based
face recognition research. To present a comprehensive survey, we
categorize existing video-based recognition approaches and present
detailed descriptions of representative methods within each category.
In addition, relevant topics such as real-time detection, real-time
tracking for video, and issues such as illumination, pose, 3D, and low
resolution are covered.
Abstract: Mixed-traffic (e.g., pedestrians, bicycles, and vehicles)
data at an intersection is one of the essential factors for intersection
design and traffic control. However, some data such as pedestrian
volume cannot be directly collected by common detectors (e.g.,
inductive loop, sonar, and microwave sensors). In this paper, a
video-based detection algorithm is proposed for mixed-traffic data collection
at intersections using surveillance cameras. The algorithm is derived
from Gaussian Mixture Model (GMM), and uses a mergence time
adjustment scheme to improve the traditional algorithm. Real-world
video data were selected to test the algorithm. The results show that
the proposed algorithm achieves faster processing and higher
accuracy than the traditional algorithm. This indicates that the
improved algorithm can be applied to detect mixed traffic at
signalized intersections, even when conflicts occur.
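The GMM-based background subtraction underlying this detector can be illustrated with a deliberately simplified per-pixel model: a single running Gaussian per pixel rather than the full mixture the paper uses, with pixels flagged as foreground when they deviate from the background mean by more than k standard deviations. All values below are illustrative:

```python
# Simplified per-pixel background model in the spirit of the GMM
# approach: one running Gaussian per pixel instead of a mixture.
# A pixel is foreground when it deviates from the background mean
# by more than k standard deviations.

def update_background(mean, var, frame, alpha=0.05, k=2.5):
    """mean, var, frame: equal-length lists of pixel intensities.
    Returns (new_mean, new_var, foreground_mask)."""
    new_mean, new_var, mask = [], [], []
    for m, v, x in zip(mean, var, frame):
        d = x - m
        fg = d * d > k * k * v          # deviation beyond k sigma
        mask.append(fg)
        if not fg:                       # adapt only to background pixels
            m = m + alpha * d
            v = v + alpha * (d * d - v)
        new_mean.append(m)
        new_var.append(v)
    return new_mean, new_var, mask

# Static background near intensity 100; one pixel jumps to 200 (a road user).
mean, var = [100.0] * 4, [25.0] * 4
frame = [101.0, 99.0, 200.0, 100.0]
mean, var, mask = update_background(mean, var, frame)
print(mask)  # [False, False, True, False]
```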
Abstract: Extracting in-play scenes in sport videos is essential for
quantitative analysis and effective video browsing of the sport
activities. Game analysis of badminton, as with other racket sports,
requires detecting the start and end of each rally period in an
automated manner. This paper describes an automatic serve scene
detection method employing cubic higher-order local auto-correlation
(CHLAC) and multiple regression analysis (MRA). CHLAC can
extract features of postures and motions of multiple persons without
segmenting and tracking each person, by virtue of its shift-invariance
and additivity, and requires no prior knowledge. Specific
scenes, such as serves, are then detected from the CHLAC features
by MRA. To demonstrate the effectiveness of our method,
experiments were conducted on video sequences of five badminton
matches captured by a single ceiling camera. The average precision
and recall rates for serve scene detection were 95.1% and 96.3%,
respectively.
Abstract: This paper presents a system for tracking the movement of laparoscopic instruments based on an orthogonal arrangement of webcams and video image processing. The movements are captured with two webcams placed orthogonally inside the physical trainer. In the images, the instruments are detected using color markers placed on the distal tip of each instrument. The 3D position of the instrument tip within the work space is obtained by a linear triangulation method. Preliminary results showed linearity and repeatability in the motion tracking, with a resolution of 0.616 mm in each axis; the system showed a 3D instrument positioning error of 1.009 ± 0.101 mm. This tool is a portable, low-cost alternative to traditional tracking devices and a reliable method for the objective evaluation of a surgeon’s surgical skills.
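The orthogonal-camera geometry can be sketched in an idealized form: assuming both cameras are calibrated so marker pixel coordinates already map to metric coordinates in the work space, the front camera observes (x, z) and the side camera (y, z), and the redundant z estimates are averaged. The paper's linear triangulation handles the general (uncalibrated-ray) case; the coordinates below are illustrative:

```python
# Idealized sketch of recovering a 3D tip position from two orthogonal
# cameras. Assumes calibrated cameras whose marker coordinates are
# already in metric work-space units.

def triangulate(front_xz, side_yz):
    """front camera observes (x, z); side camera observes (y, z).
    Returns (x, y, z), averaging the redundant z estimates."""
    x, z1 = front_xz
    y, z2 = side_yz
    return (x, y, (z1 + z2) / 2)

# Marker seen at x=12, z=30 from the front and y=-5, z=32 from the side.
print(triangulate((12.0, 30.0), (-5.0, 32.0)))  # (12.0, -5.0, 31.0)
```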
Abstract: Understanding road features such as lanes, the color
of lanes, and sidewalks in live video captured from a moving
vehicle is essential to build video-based navigation systems. In this
paper, we present a novel approach to understanding road features
using support vector machines (SVMs). Various feature vectors,
including the color components of road markings and the differences
between two regions (i.e., chosen areas of interest, AOIs), are fed
into the SVM, which decides the colors of lanes and sidewalks
robustly. Experimental results are
provided to show the robustness of the proposed idea.
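The idea of classifying marking colors from color-component feature vectors can be sketched with a perceptron substituted for the paper's SVM, so the example needs no external libraries; the normalized RGB samples and labels below are illustrative:

```python
# Minimal stand-in for the classifier: a perceptron trained on RGB
# color components to separate white from yellow lane markings. The
# paper uses an SVM; a perceptron is substituted here so the sketch
# is dependency-free. Color values are illustrative.

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    w, b = [0.0, 0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):        # y in {-1, +1}
            if y * (sum(wi * xi for wi, xi in zip(w, x)) + b) <= 0:
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

# Normalized RGB: white markings ~ (1,1,1), yellow ~ (1,1,0.2).
samples = [(1.0, 1.0, 1.0), (0.9, 0.9, 0.95),
           (1.0, 0.9, 0.2), (0.95, 0.85, 0.1)]
labels = [+1, +1, -1, -1]                        # +1 = white, -1 = yellow
w, b = train_perceptron(samples, labels)

def is_white(rgb):
    return sum(wi * xi for wi, xi in zip(w, rgb)) + b > 0

print(is_white((0.95, 0.95, 0.9)), is_white((1.0, 0.9, 0.15)))  # True False
```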
Abstract: Freeways were originally designed to provide high
mobility to road users. However, the increase in population and
vehicle numbers has led to increasing congestion around the world.
Daily recurrent congestion substantially reduces the freeway capacity
when it is most needed. Building new highways and expanding
existing ones is expensive and impractical in many
situations. Intelligent and vision-based techniques can, however, be
efficient tools in monitoring highways and increasing the capacity of
the existing infrastructures. The crucial step for highway monitoring
is vehicle detection. In this paper, we propose one such
technique. The approach is based on artificial neural networks
(ANNs) for vehicle detection and counting. The detection process
uses the freeway video images and starts by automatically extracting
the image background from the successive video frames. Once the
background is identified, subsequent frames are used to detect
moving objects through image subtraction. The result is segmented
using the Sobel operator for edge detection. The ANN is then used in
the detection and counting phase. Applying this technique to the
busiest freeway in Riyadh (King Fahd Road) achieved over
98% detection accuracy despite light intensity changes,
occlusions, and shadows.
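The pre-ANN stages of this pipeline can be sketched directly: frame differencing against a background image, then Sobel gradient magnitude on the difference to outline moving objects. Images are plain lists of lists of intensities, and the tiny frames are illustrative:

```python
# Sketch of the pre-ANN stages: frame differencing against a
# background image, then Sobel gradient magnitude on the difference
# to outline moving objects.

SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def subtract(frame, background):
    return [[abs(f - b) for f, b in zip(fr, br)]
            for fr, br in zip(frame, background)]

def sobel_magnitude(img):
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            gx = sum(SOBEL_X[a][b] * img[i - 1 + a][j - 1 + b]
                     for a in range(3) for b in range(3))
            gy = sum(SOBEL_Y[a][b] * img[i - 1 + a][j - 1 + b]
                     for a in range(3) for b in range(3))
            out[i][j] = (gx * gx + gy * gy) ** 0.5
    return out

background = [[10] * 5 for _ in range(5)]
frame = [row[:] for row in background]
frame[2][2] = 200                      # a moving object appears
edges = sobel_magnitude(subtract(frame, background))
print(max(max(row) for row in edges) > 0)  # True
```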