Abstract: The purpose of this paper is to analyze cooperative learning behavior patterns based on data from students' movements. The study first reviews cooperative learning theory and its research status, and briefly introduces the k-means clustering algorithm. It then uses the clustering algorithm and mathematical statistics to analyze the activity rhythms of individual students and groups in different functional areas, according to the movement data provided by 10 first-year graduate students. It also focuses on analyzing students' behavior in the learning area and explores patterns of cooperative learning behavior. The results show that the cooperative learning behavior analysis method based on movement data proposed in this paper is feasible: from the data analysis, the characteristics of students' behavior and their cooperative learning behavior patterns could be identified.
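The clustering step this analysis relies on can be sketched in a few lines of plain k-means; the routine below and the sample coordinates are purely illustrative (the study's actual movement data is not reproduced here), and a deterministic initialization is used for reproducibility:

```python
from math import dist

def kmeans(points, k, iters=100):
    """Plain k-means: assign each point to its nearest centroid, then move
    each centroid to the mean of its assigned points, until stable."""
    centroids = points[:k]  # simple deterministic initialization
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: dist(p, centroids[i]))
            clusters[nearest].append(p)
        updated = [tuple(sum(coord) / len(c) for coord in zip(*c)) if c
                   else centroids[i] for i, c in enumerate(clusters)]
        if updated == centroids:
            break
        centroids = updated
    return centroids, clusters

# Hypothetical (x, y) positions logged in two functional areas of a study space
positions = [(0, 0), (1, 0), (0, 1), (10, 10), (11, 10), (10, 11)]
centroids, clusters = kmeans(positions, k=2)
```

Each resulting cluster then corresponds to a functional area in which a student or group tends to dwell, and per-area activity statistics can be computed from the cluster memberships.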
Abstract: With wearable medical devices equipped with a continuous glucose monitor (CGM) and an insulin pump now in wide use, advanced control methods are still needed to obtain the full benefit of these devices. Unlike costly clinical trials, implementing effective insulin-glucose control strategies can contribute significantly to patients suffering from chronic diseases such as diabetes. This study examines the key role of a two-layer insulin-glucose regulator based on a model-predictive-control (MPC) scheme, so that the patient’s predicted glucose profile complies with the insulin level injected automatically through the insulin pump. This is achieved by an iterative optimization algorithm, the integrated perturbation analysis and sequential quadratic programming (IPA-SQP) solver, which handles uncertainties due to unexpected variations in glucose-insulin values and in the body’s characteristics. The feasibility of the discussed control approach is also studied by means of numerical simulations of two case scenarios using measured data. The obtained results verify the superior and reliable performance of the proposed control scheme with no negative impact on patient safety.
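As a loose illustration of the receding-horizon idea behind MPC (not the IPA-SQP solver itself), the toy controller below repeatedly optimizes a dose over a short prediction horizon using a deliberately simplified linear glucose model; the model gain, drift, target, and dose grid are all invented for the sketch:

```python
def predict(g0, doses, gain=2.0, drift=1.0):
    """Toy glucose model: each insulin unit lowers glucose by `gain` mg/dL
    per step, while `drift` models a constant meal/hepatic rise."""
    g, traj = g0, []
    for u in doses:
        g = g - gain * u + drift
        traj.append(g)
    return traj

def mpc_dose(g, target=100.0, horizon=5,
             candidates=(0.0, 0.5, 1.0, 1.5, 2.0), weight=0.1):
    """One receding-horizon step: score each constant candidate dose over the
    horizon (tracking error plus an insulin-use penalty) and return the best
    first move."""
    def cost(u):
        return (sum((gk - target) ** 2 for gk in predict(g, [u] * horizon))
                + weight * horizon * u ** 2)
    return min(candidates, key=cost)

# Closed loop: apply only the first move, re-measure, re-optimize.
glucose = 180.0
for _ in range(40):
    glucose = glucose - 2.0 * mpc_dose(glucose) + 1.0
```

The key MPC property shown is that only the first optimized move is applied before the problem is re-solved from the newly measured state, which is what lets the real scheme absorb unexpected glucose variations.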
Abstract: This paper presents an optimization method based
on genetic algorithm for the energy management inside buildings
developed in the frame of the project Smart Living Lab (SLL)
in Fribourg (Switzerland). This algorithm optimizes the interaction
between renewable energy production, storage systems and energy
consumers. In comparison with standard algorithms, the innovative
aspect of this project is that the smart regulation is extended to
three simultaneous criteria: energy self-consumption, the reduction
of greenhouse gas emissions, and operating costs. The
genetic algorithm approach was chosen due to the large quantity
of optimization variables and the non-linearity of the optimization
function. The optimization process also includes real-time data from
the building, as well as weather forecasts and user habits. This
information is used by a physical model of the building's energy
resources to predict future energy production and needs, to select the
best energy strategy, and to combine energy production and storage so
as to guarantee the demand for electrical and thermal energy. The
principle of operation of the algorithm, as well as a typical output
example, is presented.
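A genetic algorithm of the kind described can be sketched as follows; the 24-hour production, price, and CO2 profiles, the criterion weights, and the encoding (a bit marks whether a 1 kWh flexible load runs in that hour) are invented placeholders, not the project's actual model:

```python
import random

def genetic_algorithm(fitness, n_bits, pop_size=40, generations=60,
                      crossover_rate=0.9, mutation_rate=0.02, seed=1):
    """Generic binary GA: tournament selection, one-point crossover,
    bit-flip mutation, keeping the best individual ever seen."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(generations):
        nxt = []
        while len(nxt) < pop_size:
            p1 = max(rng.sample(pop, 3), key=fitness)  # tournament selection
            p2 = max(rng.sample(pop, 3), key=fitness)
            if rng.random() < crossover_rate:
                cut = rng.randrange(1, n_bits)         # one-point crossover
                child = p1[:cut] + p2[cut:]
            else:
                child = p1[:]
            child = [b ^ (rng.random() < mutation_rate) for b in child]
            nxt.append(child)
        pop = nxt
        best = max(pop + [best], key=fitness)
    return best

# Invented hourly profiles: PV production (kWh), grid price, grid CO2 factor.
production = [0]*5 + [1, 2, 4, 6, 7, 8, 8, 8, 7, 6, 4, 2, 1] + [0]*6
grid_price = [0.10]*7 + [0.25]*12 + [0.10]*5
CO2_PER_KWH = 0.3

def fitness(schedule):
    """Weighted three-criteria score: reward self-consumption, penalise
    grid CO2 emissions and operating cost."""
    self_consumed = sum(min(b, production[h]) for h, b in enumerate(schedule))
    from_grid = sum(max(b - production[h], 0) for h, b in enumerate(schedule))
    cost = sum(max(b - production[h], 0) * grid_price[h]
               for h, b in enumerate(schedule))
    return 2.0 * self_consumed - CO2_PER_KWH * from_grid - cost

best = genetic_algorithm(fitness, n_bits=24)
```

The GA suits this setting for the reason the abstract gives: the objective is non-linear in many binary decision variables, so gradient-free population search is a natural fit.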
Abstract: We describe here the theoretical and philosophical understanding gained from the long-term use and development of algorithmic computer-based tools applied to music composition. The findings of our research lead us to interrogate certain processes and systems of communication engaged in the discovery of specific cultural artworks: artistic creation in the sono-musical domain. Our hypothesis is that patterns of auditory learning cannot be understood only in terms of social transmission; they would benefit from being questioned in terms of how they rely on various ranges of acoustic stimuli and modes of consciousness, and of how the different types of memory engaged in the percept-action expressive systems of our cultural communities also rely on the shadowy conscious entities we have named “Reduced Descriptive Structures”.
Abstract: In the present work we developed an image processing
algorithm to measure water droplets characteristics during dropwise
condensation on pillared surfaces. The main problem in this process is
the similarity between shape and size of water droplets and the pillars.
The developed method divides droplets into four main groups based
on their size and applies the corresponding algorithm to segment each
group. These algorithms generate binary images of droplets based
on both their geometrical and intensity properties. Information
related to droplet evolution over time, including the mean radius and
the number of drops per unit area, is then extracted from the binary images.
The developed image processing algorithm is verified using manual
detection and applied to two different sets of images corresponding
to two kinds of pillared surfaces.
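A minimal version of the segmentation-and-measurement step might look as follows; the intensity threshold and the tiny synthetic image stand in for the paper's condensation images, and real droplet/pillar disambiguation would additionally need the size-group logic described above:

```python
from collections import deque
from math import pi, sqrt

def binarize(image, threshold):
    """Intensity-based segmentation: 1 where a pixel exceeds the threshold."""
    return [[1 if px > threshold else 0 for px in row] for row in image]

def droplet_stats(binary):
    """Label 4-connected components and return (count, mean equivalent
    radius), the equivalent radius being that of a circle of equal area."""
    h, w = len(binary), len(binary[0])
    seen = [[False] * w for _ in range(h)]
    radii = []
    for y in range(h):
        for x in range(w):
            if binary[y][x] and not seen[y][x]:
                area, queue = 0, deque([(y, x)])
                seen[y][x] = True
                while queue:  # flood fill one droplet
                    cy, cx = queue.popleft()
                    area += 1
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w and binary[ny][nx] \
                                and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                radii.append(sqrt(area / pi))
    if not radii:
        return 0, 0.0
    return len(radii), sum(radii) / len(radii)

frame = [[0] * 8 for _ in range(5)]
for y, x in [(1, 1), (1, 2), (2, 1), (2, 2)]:  # a 4-pixel droplet
    frame[y][x] = 255
frame[3][6] = 255                              # a 1-pixel droplet
count, mean_radius = droplet_stats(binarize(frame, 128))
```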
Abstract: Structural design and analysis is an important and time-consuming process, particularly at the conceptual design stage. Decisions made at this stage can have an enormous effect on the entire project, as it becomes ever costlier and more difficult to alter choices made early in the construction process. Hence, optimisation of the early stages of structural design can provide important efficiencies in terms of cost and time. This paper proposes a structural design optimisation (SDO) framework in which Genetic Algorithms (GAs) may be used to semi-automate the production and optimisation of early structural design alternatives. This framework has the potential to leverage conceptual structural design innovation in Architecture, Engineering and Construction (AEC) projects. Moreover, it improves collaboration between the architectural and structural stages: the framework makes this achievable by generating the structural model from data extracted from the architectural model. At the moment, the proposed SDO framework is undergoing validation through an online questionnaire distributed among structural engineers in the UK.
Abstract: Feature selection and attribute reduction are crucial
problems and widely used techniques in the fields of machine
learning, data mining, and pattern recognition, employed to overcome the
well-known phenomenon of the Curse of Dimensionality. This paper
presents a feature selection method that efficiently carries out attribute
reduction, thereby selecting the most informative features of a dataset.
It consists of two components: 1) a measure for feature subset
evaluation, and 2) a search strategy. For the evaluation measure,
we employ the fuzzy-rough dependency degree (FRDD)
of the lower approximation-based fuzzy-rough feature selection
(L-FRFS), due to its effectiveness in feature selection. As for the
search strategy, a modified binary shuffled frog leaping
algorithm (B-SFLA) is proposed. The proposed feature selection
method is obtained by hybridizing the B-SFLA with the FRDD. Nine
classifiers have been employed to compare the proposed approach
with several existing methods over twenty-two datasets, including
nine high dimensional and large ones, from the UCI repository.
The experimental results demonstrate that the B-SFLA approach
significantly outperforms other metaheuristic methods in terms of the
number of selected features and the classification accuracy.
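A drastically simplified sketch of the binary shuffled frog leaping idea is given below; the surrogate fitness function merely imitates a dependency-degree-style measure (reward covering a hypothetical set of informative features, lightly penalise subset size) and is not the FRDD itself:

```python
import random

def binary_sfla(fitness, n_bits, n_frogs=20, n_memeplexes=4,
                iterations=30, seed=7):
    """Simplified binary SFLA: frogs (bit vectors) are sorted by fitness and
    dealt into memeplexes; each memeplex's worst frog leaps toward its best
    frog by mixing their bits, or is replaced at random if the leap fails."""
    rng = random.Random(seed)
    frogs = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(n_frogs)]
    for _ in range(iterations):
        frogs.sort(key=fitness, reverse=True)
        memeplexes = [frogs[i::n_memeplexes] for i in range(n_memeplexes)]
        for mem in memeplexes:
            best_frog, worst_frog = mem[0], mem[-1]
            moved = [b if rng.random() < 0.5 else w
                     for b, w in zip(best_frog, worst_frog)]
            if fitness(moved) > fitness(worst_frog):
                mem[-1][:] = moved   # the worst frog leaps toward the best
            else:                    # censoring: restart the stuck frog
                mem[-1][:] = [rng.randint(0, 1) for _ in range(n_bits)]
        frogs = [f for mem in memeplexes for f in mem]  # shuffle back together
    return max(frogs, key=fitness)

informative = {0, 2, 5}  # hypothetical ground-truth informative features

def subset_quality(mask):
    """Stand-in for the fuzzy-rough dependency degree: reward covering the
    informative features, lightly penalise the subset size."""
    return sum(mask[i] for i in informative) - 0.1 * sum(mask)

best = binary_sfla(subset_quality, n_bits=10)
```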
Abstract: One of the major shortcomings of widely used
scientometric indicators is that different disciplines cannot be
compared with each other. The issue of cross-disciplinary
normalization has been long discussed, but even the classification
of publications into scientific domains poses problems. Structural
properties of citation networks offer new possibilities; however, the
large size and constant growth of these networks call for caution.
Here we present a new tool that relies on the structural properties
of citation networks to perform cross-field normalization of
scientometric indicators of individual publications. Due to the
large size of the networks, a systematic procedure for identifying
scientific domains based on a local community detection algorithm
is proposed. The algorithm is tested with different benchmark
and real-world networks. Then, by the use of this algorithm, the
mechanism of the scientometric indicator normalization process is
shown for a few indicators like the citation number, P-index and
a local version of the PageRank indicator. The fat-tailed
distribution of the article indicators enables us to perform the
indicator normalization process successfully.
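One elementary instance of such cross-field normalization (far simpler than the P-index or PageRank variants mentioned, and with invented field distributions) is to replace a paper's raw citation count by its percentile rank inside its own detected field:

```python
from bisect import bisect_left

def field_percentile(citations, field_counts):
    """Percentile rank of a paper's citation count within the citation
    distribution of its own field, making fields with different citation
    cultures comparable."""
    ranked = sorted(field_counts)
    return 100.0 * bisect_left(ranked, citations) / len(ranked)

# Invented field distributions: a highly cited field vs. a sparsely cited one.
biology = [0, 2, 5, 10, 50, 200]
mathematics = [0, 1, 1, 3, 8]
bio_score = field_percentile(10, biology)      # 10 citations in biology
math_score = field_percentile(3, mathematics)  # 3 citations in mathematics
```

Despite fewer raw citations, the mathematics paper ranks higher once field norms are taken into account, which is exactly the effect cross-field normalization aims to capture.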
Abstract: Interest in human motion recognition has grown extensively in recent years owing to its importance in a wide range of applications, such as human-computer interaction, intelligent surveillance, augmented reality, content-based video compression and retrieval, etc. However, it is still regarded as a challenging task, especially in realistic scenarios. It can be seen as a general machine learning problem which requires an effective human motion representation and an efficient learning method. In this work, we introduce a descriptor based on the Laban Movement Analysis (LMA) technique, a formal and universal language for human movement, to capture both quantitative and qualitative aspects of movement. We use a Discrete Hidden Markov Model (DHMM) for training and classifying motions. We improve the classification algorithm by proposing two DHMMs for each motion class, which process the motion sequence in two different directions, forward and backward. This modification helps avoid the misclassifications that can occur when recognizing similar motions. Two experiments are conducted. In the first, we evaluate our method on a public dataset, the Microsoft Research Cambridge-12 Kinect gesture dataset (MSRC-12), which is widely used for evaluating action/gesture recognition methods. In the second experiment, we build a dataset composed of 10 gestures (introduce yourself, wave, dance, move, turn left, turn right, stop, sit down, increase velocity, decrease velocity) performed by 20 persons. The evaluation of the system includes testing the efficiency of our LMA-based descriptor vector with the basic DHMM method and comparing the recognition results of the modified DHMM with the original one. Experimental results demonstrate that our method outperforms most existing methods on the MSRC-12 dataset and achieves a near-perfect classification rate on our own dataset.
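The per-class scoring behind such DHMM classification can be sketched with the standard scaled forward algorithm; the one-state toy models and gesture labels below are invented (a real system would train separate forward- and backward-direction DHMMs per class, e.g. by Baum-Welch), but the structure of the two-direction decision is the same:

```python
from math import log

def forward_loglik(obs, start, trans, emit):
    """Log-likelihood of a discrete observation sequence under an HMM,
    computed with the scaled forward algorithm."""
    n = len(start)
    alpha = [start[s] * emit[s][obs[0]] for s in range(n)]
    loglik = 0.0
    for t in range(1, len(obs)):
        total = sum(alpha)
        loglik += log(total)
        alpha = [a / total for a in alpha]  # rescale to avoid underflow
        alpha = [sum(alpha[i] * trans[i][j] for i in range(n)) * emit[j][obs[t]]
                 for j in range(n)]
    return loglik + log(sum(alpha))

def classify(seq, models):
    """Score the sequence with a forward-direction and a backward-direction
    model per class; the class with the highest combined score wins."""
    def score(label):
        fwd, bwd = models[label]
        return forward_loglik(seq, *fwd) + forward_loglik(seq[::-1], *bwd)
    return max(models, key=score)

# Toy one-state models over a 2-symbol alphabet; forward and backward models
# coincide here only because the state space is trivial.
wave_model = ([1.0], [[1.0]], [[0.9, 0.1]])
stop_model = ([1.0], [[1.0]], [[0.1, 0.9]])
models = {"wave": (wave_model, wave_model), "stop": (stop_model, stop_model)}
```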
Abstract: Classification of high-resolution polarimetric Synthetic Aperture Radar (PolSAR) images plays an important role in land cover and land use management. Recently, classification algorithms based on the Bag of Visual Words (BOVW) model have attracted significant interest among scholars and researchers in and out of the field of remote sensing. In this paper, the BOVW model with pixel-based low-level features has been implemented to classify a subset of the San Francisco bay PolSAR image, acquired by RADARSAT-2 in C-band. We use a segment-based decision-making strategy and compare the result with that of a traditional Support Vector Machine (SVM) classifier. An overall classification accuracy of 90.95% shows that the proposed algorithm is comparable with state-of-the-art methods. In addition to increasing classification accuracy, the proposed method decreases the undesirable speckle effect of SAR images.
Abstract: Over the past decade, there has been a steep rise in
the data-driven analysis of major areas of medicine, such as clinical
decision support systems, survival analysis, patient similarity
analysis, and image analytics. Most of the data in the field are well
structured and available in numerical or categorical formats which can
be used for experiments directly. At the opposite end of the spectrum,
however, lies a wide expanse of data that is intractable for direct
analysis owing to its unstructured nature: discharge summaries,
clinical notes, and procedural notes, which are written in human
narrative format and have neither a relational model nor any standard
grammatical structure. An important step
in the utilization of these texts for such studies is to transform
and process the data to retrieve structured information from the
haystack of irrelevant data using information retrieval and data mining
techniques. To address this problem, the authors present Q-Map in
this paper, which is a simple yet robust system that can sift through
massive datasets with unregulated formats to retrieve structured
information aggressively and efficiently. It is backed by an effective
mining technique based on a string-matching algorithm
indexed on curated knowledge sources, which is both fast
and configurable. The authors also briefly examine its comparative
performance with MetaMap, one of the most reputed tools for medical
concepts retrieval and present the advantages the former displays over
the latter.
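The retrieval core of such a system can be sketched as a first-token-indexed, longest-match dictionary scan; the tiny concept dictionary and its codes below are invented and stand in for the curated knowledge sources the authors actually index:

```python
import re
from collections import defaultdict

def build_index(terms):
    """Index curated concept terms by their first token for fast lookup,
    longest term first so multi-word matches take precedence."""
    index = defaultdict(list)
    for term, concept in terms.items():
        tokens = term.lower().split()
        index[tokens[0]].append((tokens, concept))
    for entries in index.values():
        entries.sort(key=lambda e: -len(e[0]))
    return index

def extract(text, index):
    """Scan a free-text note and return (matched span, concept) pairs."""
    tokens = re.findall(r"[a-z0-9]+", text.lower())
    hits, i = [], 0
    while i < len(tokens):
        for tok_seq, concept in index.get(tokens[i], []):
            if tokens[i:i + len(tok_seq)] == tok_seq:
                hits.append((" ".join(tok_seq), concept))
                i += len(tok_seq)
                break
        else:
            i += 1
    return hits

# Invented mini knowledge source with made-up concept codes.
index = build_index({"type 2 diabetes": "DM2",
                     "hypertension": "HTN",
                     "diabetes": "DM"})
note = "Patient has a history of Type 2 Diabetes and hypertension."
hits = extract(note, index)
```

Because the dictionary, not the text, drives the matching, the same scan works on notes with no grammatical structure at all, which is the property the abstract emphasizes.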
Abstract: The present work proposes the development of an adaptive control system to suppress Pilot Induced Oscillations (PIO) in Digital Fly-By-Wire (DFBW) aircraft. The proposed system consists of a Modified Model Reference Adaptive Control (M-MRAC) scheme integrated with the gain scheduling technique. PIO events are detected using the Real Time Oscillation Verifier (ROVER) algorithm, which then enables the system to switch between two reference models: one for the PIO condition, with low proneness to the phenomenon, and another for the normal condition, with high (or medium) proneness. The reference models are defined in closed loop using the Linear Quadratic Regulator (LQR) control methodology for Multiple-Input-Multiple-Output (MIMO) systems. The implemented algorithms are simulated in software with state-space models and commercial flight simulators as the controlled elements, together with pilot dynamics models. A sequence of pitch angles, named the Synthetic Task (Syntask), is used as the reference signal that must be tracked by the pilot models. Initial outcomes show that the proposed system can detect and suppress (or mitigate) PIO in real time before it reaches high amplitudes.
Abstract: This paper critically examines the use of machine learning procedures to curb unauthorized access to valuable areas of an organization. The use of passwords, PIN codes, and user identification has in recent times been only partially successful in curbing identity-related crimes, hence the need for a system that incorporates biometric characteristics such as DNA and recognition of variations in facial expressions. The facial model is built on the OpenCV library, which relies on certain physiological features; a Raspberry Pi 3 module runs OpenCV, which extracts the faces detected through a camera and stores them in the datasets directory. The model is trained for 50 epochs on the database, and recognition is performed by the Local Binary Pattern Histogram (LBPH) recognizer included in OpenCV. The neural network is trained by backpropagation, coded in Python and run for 200 epochs, to identify specific resemblance in the exclusive OR (XOR) output neurons. The research confirms that physiological parameters are more effective measures for curbing identity-related crimes.
Abstract: A Markov model defines a system of states, composed of the feasible transition paths between those states and the parameters of those transitions. The paths and parameters can be a representative way to address healthcare issues, such as identifying the most likely sequence of patient health states given a sequence of observations. Furthermore, estimating the length of stay (LoS) of hospitalized patients is one of the challenges that Markov models allow us to solve. However, finding the maximum probability of any path that reaches a given state at time t can have a high computational cost. A quantum approach allows us to take advantage of quantum computation, since the calculated probabilities can be held in several states at once, potentially outperforming classical computing through the superposition of states when handling large amounts of data. The aid of quantum physics-based architectures and machine learning techniques is therefore appropriate to address the complexity of healthcare.
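For reference, the classical (non-quantum) answer to "the most likely sequence of patient health states given the sequence of observations" is the Viterbi dynamic program, whose O(T·|S|²) cost is the baseline the quantum approach aims to beat; the two-state patient model below is invented:

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely hidden-state path; V[t][s] holds (best probability of any
    path ending in state s at time t, predecessor state)."""
    V = [{s: (start_p[s] * emit_p[s][obs[0]], None) for s in states}]
    for t in range(1, len(obs)):
        V.append({})
        for s in states:
            prob, prev = max((V[t-1][p][0] * trans_p[p][s] * emit_p[s][obs[t]], p)
                             for p in states)
            V[t][s] = (prob, prev)
    last = max(V[-1], key=lambda s: V[-1][s][0])  # best final state
    path = [last]
    for t in range(len(obs) - 1, 0, -1):          # backtrack predecessors
        path.append(V[t][path[-1]][1])
    return path[::-1]

# Invented two-state patient model; observations are daily test results.
states = ("stable", "critical")
start = {"stable": 0.8, "critical": 0.2}
trans = {"stable": {"stable": 0.7, "critical": 0.3},
         "critical": {"stable": 0.4, "critical": 0.6}}
emit = {"stable": {"normal": 0.9, "abnormal": 0.1},
        "critical": {"normal": 0.2, "abnormal": 0.8}}
path = viterbi(["normal", "abnormal", "abnormal"], states, start, trans, emit)
```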
Abstract: Both Lidars and Radars are sensors for obstacle
detection. While Lidars are very accurate on obstacles positions
and less accurate on their velocities, Radars are more precise on
obstacles velocities and less precise on their positions. Sensor
fusion between Lidar and Radar aims at improving obstacle
detection using advantages of the two sensors. The present
paper proposes a real-time Lidar/Radar data fusion algorithm
for obstacle detection and tracking based on the global nearest
neighbour standard filter (GNN). This algorithm is implemented
and embedded in an automotive vehicle as a component generated
by real-time multisensor software. The benefits of data fusion
compared with the use of a single sensor are illustrated through
several tracking scenarios (on a highway and on a bend) and
using real-time kinematic sensors mounted on the ego and tracked
vehicles as a ground truth.
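The association step of a GNN filter can be sketched by brute force for small track/detection counts; the 2-D positions and gate are invented, and a production system would use a polynomial-time assignment solver (e.g. the Hungarian algorithm) followed by a filter update fusing lidar positions with radar velocities:

```python
from itertools import permutations
from math import dist

def gnn_associate(tracks, detections, gate=2.0):
    """Global nearest neighbour: choose the one-to-one track/detection
    assignment with minimum total distance, gating out implausible pairs.
    Brute-force enumeration is acceptable only for a sketch."""
    best, best_cost = {}, float("inf")
    for perm in permutations(range(len(detections)), len(tracks)):
        pairs = {t: d for t, d in enumerate(perm)
                 if dist(tracks[t], detections[d]) <= gate}
        cost = sum(dist(tracks[t], detections[d]) for t, d in pairs.items())
        cost += gate * (len(tracks) - len(pairs))  # penalty: unmatched tracks
        if cost < best_cost:
            best, best_cost = pairs, cost
    return best

# Two predicted track positions and three lidar detections (one is clutter).
tracks = [(0.0, 0.0), (5.0, 5.0)]
detections = [(5.2, 5.1), (0.1, -0.2), (9.0, 9.0)]
assignment = gnn_associate(tracks, detections)
```

The "global" in GNN is the point: the assignment is chosen to minimise the total cost over all tracks jointly, rather than greedily per track, which avoids two tracks claiming the same detection.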
Abstract: This paper considers the problem of mining sequential
patterns embedded in a database while handling the time constraints
defined in the GSP algorithm (a level-wise algorithm).
We compare two previous approaches, GTC and PSP, which
take up the general principles of GSP. Furthermore, this paper
discusses the PG-hybrid algorithm, which combines PSP and GTC. The
results show that PSP and GTC are more efficient than GSP; moreover,
the GTC algorithm performs better than PSP. The PG-hybrid
algorithm uses the PSP algorithm for the first two passes over the
database and the GTC approach for the subsequent scans. Experiments
show that the hybrid approach is very efficient for short, frequent sequences.
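At the core of all these algorithms is a containment test under time constraints, which can be sketched as follows; the window-size constraint of GSP is omitted, and the gap values and transactions are invented:

```python
def supports(events, pattern, min_gap=0, max_gap=float("inf")):
    """GSP-style test that a data-sequence supports a sequential pattern:
    `events` is a time-ordered list of (timestamp, itemset); each pattern
    itemset must be included, in order, in distinct events whose successive
    time gaps lie within [min_gap, max_gap]."""
    def search(ev_start, pat_idx, last_time):
        if pat_idx == len(pattern):
            return True
        for i in range(ev_start, len(events)):
            t, items = events[i]
            gap_ok = last_time is None or min_gap <= t - last_time <= max_gap
            if gap_ok and pattern[pat_idx] <= items:  # itemset inclusion
                if search(i + 1, pat_idx + 1, t):
                    return True
        return False
    return search(0, 0, None)

# A customer's time-stamped transactions (invented).
events = [(1, {"a"}), (3, {"b", "c"}), (10, {"d"})]
```

Counting, over all data-sequences, how many support each candidate is what every pass of GSP, PSP, and GTC amounts to; the algorithms differ in how they organise candidates to make this test cheap.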
Abstract: Road traffic accidents are among the principal causes of
traffic congestion, causing human losses, damages to health and the
environment, economic losses, and material damage. Traditional studies
of road traffic accidents in urban zones demand a large investment of
time and money and, additionally, their results quickly become
outdated. Nowadays, however, in many countries crowdsourced GPS-based
traffic and navigation apps have emerged as an important low-cost
source of information for studying road traffic accidents and the
urban congestion they cause. In this article we identify the zones,
roads, and specific times in Mexico City (CDMX) where the largest
number of road traffic accidents were concentrated during 2016. We
built a database compiling information obtained from the social
network known as Waze. The methodology employed was Knowledge
Discovery in Databases (KDD) for the discovery of patterns in the
accident reports, using data mining techniques with the help of Weka.
The selected algorithms were Expectation Maximization (EM), to obtain
the ideal number of clusters for the data, and k-means as the
grouping method. Finally, the results were
visualized with the Geographic Information System QGIS.
Abstract: Any industrial company needs to determine the amount of variation that exists within its measurement process and guarantee the reliability of its data by studying the performance of its measurement system in terms of linearity, bias, repeatability, reproducibility, and stability. This issue is critical for automotive industry suppliers, who are required to be certified under the IATF 16949:2016 standard (which replaces ISO/TS 16949) of the International Automotive Task Force, defining the requirements of a quality management system for companies in the automotive industry. Measurement System Analysis (MSA) is one of its mandatory tools. Frequently, the measurement system in companies is not connected to the equipment and does not incorporate the methods proposed by the Automotive Industry Action Group (AIAG). To address these constraints, an R&D project is in progress whose objective is to develop a web- and cloud-based MSA tool. This MSA tool incorporates Industry 4.0 concepts, such as Internet of Things (IoT) protocols to assure the connection with the measuring equipment, cloud computing, artificial intelligence, statistical tools, and advanced mathematical algorithms. This paper presents the preliminary findings of the project. The web- and cloud-based MSA tool is innovative because it implements all statistical tests proposed in the MSA-4 reference manual from AIAG, as well as other emerging methods and techniques. As it is integrated with the measuring devices, it reduces the manual input of data and therefore the errors. The tool ensures traceability of all performed tests and can be used in quality laboratories and on production lines. Besides, it monitors MSAs over time, allowing both the analysis of deviations in the measurements performed and the management of measurement equipment and calibrations. To develop the MSA tool, a ten-step approach was implemented.
First, a benchmarking analysis of current competitors and commercial solutions linked to MSA was performed, with respect to the Industry 4.0 paradigm. Next, the size of the target market for the MSA tool was analysed. Afterwards, data flow and traceability requirements were analysed in order to implement an IoT data network that interconnects with the equipment, preferably wirelessly. The MSA web solution was designed under UI/UX principles, and an API in Python was developed to run the algorithms and the statistical analysis. Continuous validation of the tool by companies is being performed to assure real-time management of the ‘big data’. The main results of this R&D project are: the web- and cloud-based MSA tool; a Python API; new algorithms for the market; and a UI/UX style guide for the tool. The proposed MSA tool adds value to the state of the art, as it ensures an effective response to the new challenges of measurement systems, which are increasingly critical in production processes. Although the automotive industry has triggered the development of this innovative MSA tool, other industries would also benefit from it. Currently, companies from the molds and plastics, chemical, and food industries are already validating it.
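As an example of the statistics such a tool automates, the average-and-range Gauge R&R estimate of repeatability (EV) and reproducibility (AV) can be sketched as below; the K1/K2 constants are the tabulated values for 2 trials and 2 appraisers as commonly quoted (verify against the AIAG MSA-4 manual before relying on them), and the measurements are invented:

```python
from math import sqrt
from statistics import mean

def gauge_rr(data, k1=0.8862, k2=0.7071):
    """Average-and-range Gauge R&R for 2 trials and 2 appraisers.
    data[appraiser][part] = [trial measurements]; returns (EV, AV, GRR)."""
    n_parts = len(data[0])
    n_trials = len(data[0][0])
    # Repeatability (EV): mean within-part range across appraisers, times K1.
    r_bar = mean(max(trials) - min(trials)
                 for appraiser in data for trials in appraiser)
    ev = r_bar * k1
    # Reproducibility (AV): spread of appraiser averages, corrected for EV.
    averages = [mean(x for part in appraiser for x in part)
                for appraiser in data]
    x_diff = max(averages) - min(averages)
    av_sq = (x_diff * k2) ** 2 - ev ** 2 / (n_parts * n_trials)
    av = sqrt(av_sq) if av_sq > 0 else 0.0
    return ev, av, sqrt(ev ** 2 + av ** 2)

# Invented measurements: 2 appraisers x 3 parts x 2 trials.
data = [
    [[10.0, 10.2], [12.0, 12.1], [14.0, 14.2]],   # appraiser A
    [[10.1, 10.3], [12.2, 12.2], [14.1, 14.3]],   # appraiser B
]
ev, av, grr = gauge_rr(data)
```

Automating exactly this kind of calculation, fed directly from connected equipment instead of manual data entry, is the error-reduction argument the abstract makes.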
Abstract: Cardiopulmonary signal monitoring without the use of contact electrodes or any type of in-body sensor has several applications, such as sleep monitoring and continuous monitoring of vital signals in bedridden patients. This system also has applications in the vehicular environment, to monitor the driver in order to avoid possible accidents in case of cardiac failure. The bio-radar system proposed in this paper can thus measure vital signals accurately by using the Doppler effect principle, which relates the received signal properties to the change in distance between the radar antennas and the person’s chest wall. Since the bio-radar aims to monitor subjects in real time and over long periods, it is impossible to guarantee patient immobilization, and hence their random motion will interfere with the acquired signals. In this paper, a mathematical model of the bio-radar is presented, as well as its simulation in MATLAB. The algorithm used for breathing-rate extraction is explained, and a method for DC offset removal based on a motion detection system is proposed. Furthermore, experimental tests were conducted to prove that the unavoidable random motion can be used to estimate the DC offsets accurately and thus remove them successfully.
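The DC-removal-plus-arctangent step can be sketched as below; the wavelength, sampling choices, and the extreme-based DC estimate (exact only once motion has swept the whole IQ circle, which is precisely where random body motion helps) are simplifying assumptions for illustration, not the paper's actual method:

```python
from math import atan2, cos, pi, sin

WAVELENGTH = 0.0517  # metres (roughly a 5.8 GHz continuous-wave radar)

def remove_dc(i_ch, q_ch):
    """Estimate each channel's DC offset as the midpoint of its extremes;
    exact once the phase has swept the full IQ circle."""
    i_dc = (max(i_ch) + min(i_ch)) / 2
    q_dc = (max(q_ch) + min(q_ch)) / 2
    return [i - i_dc for i in i_ch], [q - q_dc for q in q_ch]

def displacement(i_ch, q_ch):
    """Arctangent demodulation with phase unwrapping:
    d(t) = wavelength * phi(t) / (4 * pi)."""
    phases = [atan2(q, i) for i, q in zip(i_ch, q_ch)]
    unwrapped, prev = [phases[0]], phases[0]
    for p in phases[1:]:
        step = p - prev
        if step > pi:          # undo 2*pi wrap-arounds
            step -= 2 * pi
        elif step < -pi:
            step += 2 * pi
        unwrapped.append(unwrapped[-1] + step)
        prev = p
    return [WAVELENGTH * phi / (4 * pi) for phi in unwrapped]

# Synthetic chest motion (3 cm swing at 0.25 Hz) with hidden DC offsets.
times = [k * 0.01 for k in range(400)]
true_motion = [0.03 * sin(2 * pi * 0.25 * t) for t in times]
phase = [4 * pi * x / WAVELENGTH for x in true_motion]
i_raw = [0.5 + cos(p) for p in phase]    # 0.5 and -0.3 are the hidden offsets
q_raw = [-0.3 + sin(p) for p in phase]
i_c, q_c = remove_dc(i_raw, q_raw)
recovered = displacement(i_c, q_c)
```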
Abstract: In traditional integrated berth allocation and quay crane assignment models, the time dimension is usually assumed to be hourly based. Nowadays, however, transshipment has become the main business of many container terminals, especially in Southeast Asia (e.g. Hong Kong and Singapore). In these terminals, vessel arrivals are usually very frequent, with small handling volumes and very short staying times. Therefore, the traditional hourly-based modeling approach may cause significant berth and quay crane idling and consequently cannot meet their practical needs. In this connection, a 15-minute-based modeling approach has been requested by industrial practitioners. Accordingly, a Three-level Genetic Algorithm (3LGA) with Quay Crane (QC) shifting heuristics is designed to fill this research gap. The objective function is to minimize the total service time. Preliminary numerical results show that the proposed 15-minute-based approach can reduce berth and QC idling significantly.