Abstract: Deep reinforcement learning (deep RL) algorithms can replace hand-engineered controllers by learning a direct mapping from sensory inputs to low-level actions, handling complex robot dynamics with minimal engineering. However, deep RL carries high risk when implemented directly in real-world scenarios, and it is highly sensitive to hyperparameters. Tuning hyperparameters on a pneumatic quadruped robot through trial-and-error learning is very expensive. This paper presents automated learning control for a pneumatic quadruped robot using sample-efficient deep Q-learning, requiring minimal tuning and very few trials to train the neural network. Long training hours may degrade the pneumatic cylinders through jerky actions originating from stochastic weights. We applied this method to the pneumatic quadruped robot, which resulted in a hopping gait. In our process, we eliminated the use of a simulator and still acquired a stable gait. As this approach evolves, the resultant gait becomes more robust to stochastic changes in the environment. We further show that our algorithm performs very well compared with a gait programmed using robot dynamics.
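The abstract does not give the paper's network or training details; as a minimal sketch, the temporal-difference update at the heart of Q-learning is shown below in tabular form (the leg states, valve actions, and hyperparameters alpha and gamma are illustrative placeholders, not the paper's values).

```python
# Minimal sketch of the Q-learning update underlying deep Q-learning.
# States, actions and hyperparameters are hypothetical placeholders.

def q_update(q, state, action, reward, next_state, alpha=0.1, gamma=0.99):
    """One temporal-difference update of a tabular Q function."""
    best_next = max(q[next_state].values())
    td_target = reward + gamma * best_next
    q[state][action] += alpha * (td_target - q[state][action])
    return q

# Toy example: two leg states, two valve actions for a hopping gait.
q = {"crouch": {"extend": 0.0, "hold": 0.0},
     "airborne": {"extend": 0.0, "hold": 0.0}}
q = q_update(q, "crouch", "extend", reward=1.0, next_state="airborne")
print(round(q["crouch"]["extend"], 3))  # -> 0.1
```

In deep Q-learning the table is replaced by a neural network trained to minimize the same temporal-difference error; sample efficiency then determines how few real-robot trials are needed.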
Abstract: A vibration isolation technology for precise position
control of a rotary system powered by two permanent magnet DC
(PMDC) motors is proposed, where this system is mounted on an
oscillatory frame. To achieve vibration isolation for this system,
active damping and disturbance rejection (ADDR) technology
is presented, which introduces cooperation between a main and
an auxiliary PMDC motor, controlled by discrete-time sliding mode
control (DTSMC) based schemes. The controller of the main
actuator tracks a desired position and the auxiliary actuator
simultaneously isolates the induced vibration, as its controller
follows a torque trend. To determine this torque trend, a
combination of two algorithms is introduced by the ADDR
technology. The first torque-trend producing algorithm rejects
the disturbance by counteracting the perturbation, estimated
using a model-based observer. The second torque trend applies
active variable damping to minimize the oscillation of the output
shaft. To validate the approach, the presented technology is implemented
on a rotary system with a pendulum attached, mounted on a
linear actuator simulating an oscillation-transmitting structure.
The obtained results illustrate the functionality of the
proposed technology.
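The abstract names DTSMC but not its control law; a common choice in such schemes is Gao's discrete-time reaching law, sketched below with illustrative gains and sampling time (not the paper's design).

```python
# Sketch of Gao's discrete-time reaching law often used in DTSMC schemes.
# Gains q_gain, eps and sampling time T are illustrative, not from the paper.

def reach_step(s, q_gain=5.0, eps=0.01, T=0.01):
    """One step of s[k+1] = (1 - q*T)*s[k] - eps*T*sign(s[k])."""
    sign = (s > 0) - (s < 0)
    return (1.0 - q_gain * T) * s - eps * T * sign

s = 1.0                      # initial sliding variable
for _ in range(200):
    s = reach_step(s)
print(abs(s) < 0.01)         # driven into a small quasi-sliding band
```

The geometric term pulls the sliding variable toward the surface while the sign term keeps it inside a quasi-sliding band of width on the order of eps/q_gain, which is how discrete-time sliding mode trades chattering against disturbance rejection.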
Abstract: Dynamic Voltage and Frequency Scaling (DVFS)
multicore platforms are promising execution platforms that enable
high computational performance, low energy consumption and
flexibility in scheduling the system processes. However, the
resulting interleaving and memory interference, together with per-core
frequency tuning, make real-time guarantees hard to deliver.
Besides, energy consumption represents a strong constraint on the
deployment of such systems in energy-limited settings. Identifying
the system configurations that would achieve a high performance and
consume less energy while guaranteeing the system schedulability is
a complex task in the design of modern embedded systems. This work
studies the trade-off among energy consumption, core utilization
and memory bottleneck and their impact on the schedulability of
DVFS multicore time-critical systems with a hierarchy of shared
memories. We build a model-based framework using Parametrized
Timed Automata of UPPAAL to analyze the mutual impact of
performance, energy consumption and schedulability of DVFS
multicore systems, and demonstrate the trade-off on an actual case
study.
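As an illustrative sketch (not the paper's UPPAAL model), the trade-off the abstract describes can be shown with a utilization-based schedulability test and a cubic dynamic-power model; the task set, frequency values and power constant below are assumptions for illustration.

```python
# Illustrative DVFS trade-off: lowering the core frequency saves dynamic
# energy but inflates utilization. Task set and P ~ f^3 model are assumed.

def utilization(tasks, f):
    """EDF utilization: each WCET (measured at f = 1.0) stretches by 1/f."""
    return sum((c / f) / period for c, period in tasks)

def dynamic_energy(tasks, f, k=1.0):
    """Dynamic energy ~ k * f^3 * busy time for one job of each task."""
    busy = sum(c / f for c, _ in tasks)
    return k * f ** 3 * busy

tasks = [(2.0, 10.0), (3.0, 15.0)]     # (WCET at full speed, period)
for f in (1.0, 0.5, 0.3):
    u = utilization(tasks, f)
    print(f, round(u, 3), u <= 1.0)    # schedulable under EDF iff U <= 1
```

Slowing the cores cuts dynamic energy per unit of work (f^3 power over 1/f longer busy time) but inflates utilization, and below some frequency the task set stops being schedulable; the paper's model-based framework explores exactly this kind of trade-off, with memory interference added.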
Abstract: Problem-based learning (PBL) is a student-centered pedagogy that originated in the medical field and has also been used extensively in other knowledge disciplines, with recognized advantages and limitations. PBL has been used in various undergraduate engineering programs with mixed outcomes. The current fourth industrial revolution (the digital era, or Industry 4.0) has made it essential for many science and engineering students to receive effective training in advanced courses such as industrial automation and robotics. This paper presents a case study at Assumption University of Thailand, where a PBL-like approach was used to teach some aspects of automation and robotics to selected groups of undergraduate engineering students. These students were given basic training in automation before participating in a subsequent training session in which they solved technical problems of increased complexity. The participating students’ evaluation of the training sessions in terms of learning effectiveness, skills enhancement, and incremental knowledge following the problem-solving session was captured through a follow-up survey consisting of 14 questions with a 5-point scoring system. In the most recent training event, 70% of the respondents overall indicated that their skill levels after the training were much greater than before, and 60.4% of the respondents from the same event indicated that their knowledge after the session was much greater than what they had prior to the training. The instructor-facilitator involved in the training events suggested that this method of learning is more suitable for senior/advanced students than for those at the freshman level, as the skills needed to participate effectively in such problem-solving sessions are acquired over a period of time, not instantly.
Abstract: Building systems are highly vulnerable to different kinds
of faults and human misbehavior. Abnormalities in building operation
directly affect energy efficiency and user comfort. The available
fault diagnosis tools and methodologies rely mainly on rules or pure
model-based approaches, assuming that a model- or rule-based test can
be applied to any situation without taking the actual testing context
into account. Contextual tests with validity domains could greatly
simplify the design of detection tests. The main objective of this
paper is to consider test validity when validating the test model,
taking into account non-modeled events such as occupancy, weather
conditions, door and window openings, and the integration of expert
knowledge about the state of the system. The concept of heterogeneous
tests is combined with test validity to generate fault diagnoses. A
combination of rule-, range- and model-based tests, known as
heterogeneous tests, is proposed to reduce the modeling complexity.
The calculation of logical diagnoses using artificial intelligence
provides a global explanation consistent with the test results. An
application example, an office setting at Grenoble Institute of
Technology, shows the efficiency of the proposed technique.
Abstract: The model-based development approach is gaining more support and acceptance. Its higher level of abstraction simplifies system description, allowing domain experts to do their best work without particular programming knowledge. The different levels of simulation support rapid prototyping, verification and validation of the product even before it physically exists. Nowadays the model-based approach is beneficial for modelling complex embedded systems as well as for generating code for many different hardware platforms. Moreover, it can be applied in safety-relevant industries such as automotive, bringing additional automation to the expensive device certification process, especially software qualification. Using it, some companies report cost savings and quality improvements, but others claim no major changes or even cost increases. This publication examines the level of maturity and autonomy of the model-based approach for code generation. It is based on a real-life automotive seat heater (ASH) module, developed using tools from The MathWorks, Inc. The model, created with Simulink, Stateflow and MATLAB, is used for automatic generation of C code with Embedded Coder. To prove the maturity of the process, Code Generation Advisor is used for automatic configuration. All additional configuration parameters are set to auto, when applicable, leaving the generation process to function autonomously. As a result of the investigation, the publication compares the quality of the generated embedded code with that of manually developed code. The measurements show that, generally, the automatically generated code is no worse than the manual one. A deeper analysis of the technical parameters enumerates the disadvantages, some of which are identified as topics for our future work.
Abstract: Accurate prediction of NOx emission is a continuing challenge in the field of diesel engine-out emission modeling. Performing experiments for every condition and scenario costs a significant amount of money and man-hours; therefore, a model-based development strategy has been implemented to address this issue. NOx formation is highly dependent on the burned-gas temperature and the O2 concentration inside the cylinder. Current empirical models are developed by calibrating parameters representing the engine operating conditions against the measured NOx, which limits the prediction of purely empirical models to the region where they have been calibrated. An alternative solution is presented in this paper, which focuses on using in-cylinder combustion parameters to form a predictive semi-empirical NOx model. The result of this work is a fast and predictive NOx model built from physical parameters and empirical correlations. The model is developed based on steady-state data collected over the entire operating region of the engine and on a predictive combustion model developed in Gamma Technologies (GT)-Power using the Direct Injection (DI)-Pulse combustion object. In this approach, the temperatures in both the burned and unburned zones are considered during the combustion period, i.e., from Intake Valve Closing (IVC) to Exhaust Valve Opening (EVO). The oxygen concentration consumed in the burned zone and the trapped fuel mass are also considered in developing the reported model. Several statistical methods are used to construct the model, including individual and ensemble machine learning methods. A detailed validation of the model on multiple diesel engines is reported in this work. A substantial number of cases are tested for different engine configurations over a large span of speed and load points.
Different sweeps of operating conditions, such as Exhaust Gas Recirculation (EGR), injection timing and Variable Valve Timing (VVT), are also considered for the validation. The model shows very good predictability and robustness at both sea-level and altitude conditions with different ambient conditions. Its advantages, such as high accuracy and robustness at different operating conditions, low computational time and the lower number of data points required for calibration, establish a platform where the model-based approach can be used in the engine calibration and development process. Moreover, this work aims to establish a framework for future model development for other targets such as soot, Combustion Noise Level (CNL), NO2/NOx ratio, etc.
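The abstract states that NOx depends on burned-zone temperature and O2 concentration; a hedged sketch of the general Zeldovich-style correlation form used in such semi-empirical models is given below (the constants A, b and the activation temperature Ta are illustrative, not the paper's calibration).

```python
# Zeldovich-style semi-empirical NOx correlation of the generic form
# NOx ~ A * [O2]^b * exp(-Ta / T_burn). Constants are illustrative only.
import math

def nox_index(t_burn, o2_frac, A=1.0e6, b=0.5, Ta=38000.0):
    """Relative NOx index from burned-zone temperature (K) and O2 fraction."""
    return A * o2_frac ** b * math.exp(-Ta / t_burn)

low = nox_index(t_burn=2200.0, o2_frac=0.10)
high = nox_index(t_burn=2500.0, o2_frac=0.10)
print(high > low)   # hotter burned zone -> more thermal NOx
```

The exponential temperature sensitivity is why EGR sweeps (which lower burned-zone temperature and O2) feature prominently in the validation: small changes in those two inputs dominate the predicted NOx.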
Abstract: The prediction of post-impact conditions and the behavior of bodies during impact have been the object of several collision models. The formulation from Hertz’s theory, dating from the 19th century, is generally used. These models consider the repulsive force to be proportional to the deformation of the bodies in contact, and may also consider it proportional to the rate of deformation. The objective of the present work is to analyze the behavior of bodies during impact using the Finite Element Method (FEM) with elastic and plastic material models. The main parameters to evaluate are the contact force, the time of contact and the deformation of the bodies. An advantage of the FEM approach is the possibility of applying plastic deformation to the model according to the material definition: the Johnson–Cook plasticity model is used, whose parameters are obtained through empirical tests on real materials. This model allows analyzing the permanent deformation caused by impact, a phenomenon observed in the real world depending on the forces applied to the body. These results are compared with each other and with the Hertz-theory-based model.
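The Hertz-type contact law the abstract refers to can be sketched directly; the stiffness and damping values below are illustrative, and the rate-dependent term follows the common Hunt–Crossley form rather than any specific model from the paper.

```python
# Hertz contact: repulsive force proportional to deformation^(3/2),
# optionally with a rate-dependent (Hunt-Crossley) damping term.
# Stiffness k and damping c are illustrative values.

def contact_force(delta, delta_dot=0.0, k=1.0e9, c=0.0):
    """F = k*delta^1.5 + c*delta^1.5*delta_dot; zero out of contact."""
    if delta <= 0.0:
        return 0.0           # bodies not in contact, no tensile force
    return k * delta ** 1.5 + c * delta ** 1.5 * delta_dot

print(contact_force(1e-4))            # elastic force at 0.1 mm indentation
print(contact_force(-1e-4) == 0.0)    # separated bodies carry no force
```

An FEM model with Johnson–Cook plasticity departs from this law precisely where it matters: the force–deformation curve becomes history-dependent and the unloading path no longer returns to zero deformation, which is the permanent deformation the paper analyzes.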
Abstract: The Earth system generates different phenomena that are observable at the surface of the Earth, such as mass deformations and displacements leading to plate tectonics, earthquakes, and volcanism. The dynamic processes associated with the interior, surface, and atmosphere of the Earth affect the three pillars of geodesy: the shape of the Earth, its gravity field, and its rotation. Geodesy establishes a characteristic structure in order to define, monitor, and predict the behavior of the whole Earth system. The traditional and new instruments, observables, and techniques in geodesy are related to the gravity field. Therefore, geodesy monitors the gravity field and its temporal variability in order to transform the geodetic observations made on the physical surface of the Earth onto the geometrical surface on which positions are mathematically defined. In this paper, the main components of gravity field modeling, free-air and Bouguer gravity anomalies, are calculated via recent global models (EGM2008, EIGEN6C4, and GECO) over a selected study area. The model-based gravity anomalies are compared with the corresponding terrestrial gravity data in terms of standard deviation (SD) and root mean square error (RMSE) to determine the best-fitting global model for the study area at a regional scale in Turkey. The lowest SD (13.63 mGal) and RMSE (15.71 mGal) were obtained by EGM2008 for the free-air gravity anomaly residuals. For the Bouguer gravity anomaly residuals, EIGEN6C4 provides the lowest SD (8.05 mGal) and RMSE (8.12 mGal). The results indicate that EIGEN6C4 can be a useful tool for modeling the gravity field of the Earth over the study area.
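The standard reductions behind the free-air and Bouguer anomalies compared in the abstract can be sketched as follows; the gradient 0.3086 mGal/m and the Bouguer plate term 0.1119 mGal/m (for the usual crustal density of 2670 kg/m^3) are the conventional textbook values, and the station values are hypothetical.

```python
# Textbook free-air and simple Bouguer anomaly reductions (mGal, metres).

def free_air_anomaly(g_obs, gamma0, h):
    """Observed gravity minus normal gravity, reduced to the geoid."""
    return g_obs - gamma0 + 0.3086 * h

def bouguer_anomaly(g_obs, gamma0, h):
    """Free-air anomaly minus the Bouguer plate term for topographic mass."""
    return free_air_anomaly(g_obs, gamma0, h) - 0.1119 * h

# Hypothetical station: gravity values in mGal, height in metres.
fa = free_air_anomaly(g_obs=979850.0, gamma0=980000.0, h=500.0)
bg = bouguer_anomaly(g_obs=979850.0, gamma0=980000.0, h=500.0)
print(round(fa, 1), round(bg, 2))   # -> 4.3 -51.65
```

The paper's residuals are then simply these terrestrial anomalies minus the corresponding values synthesized from each global model, with SD and RMSE summarizing the fit.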
Abstract: Quality measurement and reporting systems are used in healthcare internationally. In Australia, the Australian Council on Healthcare Standards records and reports hundreds of clinical indicators (CIs) nationally across the healthcare system. These CIs are measures of performance in the clinical setting, and are used as a screening tool to help assess whether a standard of care is being met. Existing analysis and reporting of these CIs incorporate Bayesian methods to address sampling variation; however, such assessments are retrospective in nature, reporting upon the previous six or twelve months of data. The use of Bayesian methods within statistical process control for monitoring systems is an important pursuit to support more timely decision-making. Our research has developed and assessed a new graphical monitoring tool, similar to a control chart, based on the beta-binomial posterior predictive (BBPP) distribution to facilitate the real-time assessment of health care organizational performance via CIs. The BBPP charts have been compared with the traditional Bernoulli CUSUM (BC) chart by simulation. The more traditional “central” and “highest posterior density” (HPD) interval approaches were each considered to define the limits, and the multiple charts were compared via in-control and out-of-control average run lengths (ARLs), assuming that the parameter representing the underlying CI rate (proportion of cases with an event of interest) required estimation. Preliminary results have identified that the BBPP chart with HPD-based control limits provides better out-of-control run length performance than the central interval-based and BC charts. Further, the BC chart’s performance may be improved by using Bayesian parameter estimation of the underlying CI rate.
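A minimal sketch of the beta-binomial posterior predictive (BBPP) distribution at the core of the proposed chart is given below; the Beta(1, 1) prior, the historical counts and the future sample size are illustrative values, not the paper's data, and the HPD limit search is omitted.

```python
# Beta-binomial posterior predictive: with a Beta(a, b) prior on the CI rate
# and x events in n historical cases, the number of events Y in the next m
# cases follows a beta-binomial with parameters (m, a + x, b + n - x).
import math

def log_beta(a, b):
    return math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)

def bbpp_pmf(y, m, a_post, b_post):
    """P(Y = y) for y future events in m cases under the Beta posterior."""
    log_p = (math.lgamma(m + 1) - math.lgamma(y + 1) - math.lgamma(m - y + 1)
             + log_beta(y + a_post, m - y + b_post) - log_beta(a_post, b_post))
    return math.exp(log_p)

a, b = 1.0, 1.0          # flat prior on the CI rate (illustrative)
x, n = 12, 200           # observed events / cases (hypothetical history)
m = 100                  # size of the next monitoring period
pmf = [bbpp_pmf(y, m, a + x, b + n - x) for y in range(m + 1)]
print(round(sum(pmf), 6))                        # a valid distribution
print(max(range(m + 1), key=lambda y: pmf[y]))   # most likely future count
```

Chart limits (central or HPD) would then be tail quantiles of this pmf; a future count falling outside them signals a shift in the underlying CI rate, which is what the run-length comparison with the Bernoulli CUSUM evaluates.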
Abstract: The aeronautics sector is currently experiencing unprecedented growth, largely due to innovative projects. In several cases, such innovative developments are being carried out by Small and Medium-sized Enterprises (SMEs). For instance, in Europe, a handful of SMEs are leading projects such as airships, large civil drones, or flying cars. These SMEs all have limited resources, must make strategic decisions, take considerable financial risks, and at the same time must take into account the constraints of safety, cost, time and performance like any commercial organization in this industry. Moreover, today, no international regulations fully exist for the development and certification of this kind of project. The absence of a precise and sufficiently detailed regulatory framework requires very close contact with regulatory bodies. But SMEs do not always have sufficient resources and internal knowledge to handle this complexity and to discuss these issues. This poses additional challenges for those SMEs that have system integration responsibilities and that must provide all the necessary means of compliance to demonstrate their ability to design, produce, and operate airships with the expected level of safety and reliability. The final objective of our research is thus to provide a methodological framework supporting SMEs in their development, taking into account recent innovations and the institutional rules of the sector. We aim to contribute to this problem by developing a specific Model-Based Systems Engineering (MBSE) approach. Airspace regulations, aeronautics standards and international norms on systems engineering are taken on board and formalized in a set of models. This paper presents the ongoing research project, which combines Systems Engineering and Project Management process modeling and takes metamodeling issues into account.
Abstract: Robotic rovers which are designed to work in
extra-terrestrial environments present a unique challenge in terms
of the reliability and availability of systems throughout the mission.
Should some fault occur, with the nearest human potentially millions
of kilometres away, detection and identification of the fault must
be performed solely by the robot and its subsystems. Faults in
the system sensors are relatively straightforward to detect, through
the residuals produced by comparison of the system output with
that of a simple model. However, faults in the input, that is, the
actuators of the system, are harder to detect. A step change in
the input signal, caused potentially by the loss of an actuator,
can propagate through the system, resulting in complex residuals
in multiple outputs. These residuals can be difficult to isolate or
distinguish from residuals caused by environmental disturbances.
While a more complex fault detection method or additional sensors
could be used to solve these issues, an alternative is presented here.
Using inverse simulation (InvSim), the inputs and outputs of the
mathematical model of the rover system are reversed. Thus, for a
desired trajectory, the corresponding actuator inputs are obtained.
A step fault near the input then manifests itself as a step change
in the residual between the system inputs and the input trajectory
obtained through inverse simulation. This approach avoids the need
for additional hardware on a mass- and power-critical system such
as the rover. The InvSim fault detection method is applied to a
simple four-wheeled rover in simulation. Additive system faults and
an external disturbance force are applied to the vehicle in turn,
such that the dynamic response and sensor output of the rover
are impacted. Basic model-based fault detection is then employed
to provide output residuals which may be analysed to provide
information on the fault/disturbance. InvSim-based fault detection
is then employed, similarly providing input residuals which provide
further information on the fault/disturbance. The input residuals are
shown to provide clearer information on the location and magnitude
of an input fault than the output residuals. Additionally, they can
allow faults to be more clearly discriminated from environmental
disturbances.
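The InvSim idea can be illustrated on a toy system (not the paper's rover model): invert a simple discrete integrator to recover the input from the measured output, so an actuator step fault appears directly in the input residual. The dynamics, time step and fault values below are all assumptions.

```python
# Toy InvSim illustration: a step actuator fault shows up as a step in the
# residual between the commanded input and the input recovered by inversion.

def forward(u_seq, dt=0.1):
    """x[k+1] = x[k] + dt*u[k]: simulate the output from the inputs."""
    x, xs = 0.0, []
    for u in u_seq:
        x += dt * u
        xs.append(x)
    return xs

def inverse(x_seq, dt=0.1):
    """Recover the input sequence from the output trajectory."""
    prev, us = 0.0, []
    for x in x_seq:
        us.append((x - prev) / dt)
        prev = x
    return us

u_cmd = [1.0] * 10                      # commanded input
u_act = [1.0] * 5 + [0.5] * 5           # actuator fault: step at k = 5
residual = [c - e for c, e in zip(u_cmd, inverse(forward(u_act)))]
print([round(r, 2) for r in residual])  # zeros, then 0.5 after the fault
```

In the forward direction the same fault would smear across several integrated outputs; inverting the model localizes it to a single clean step in the input residual, which is the advantage the abstract describes.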
Abstract: Real-time optimization has been considered an effective approach for improving the energy-efficient operation of heating, ventilation, and air-conditioning (HVAC) systems. In model-based real-time optimization, model mismatches cannot be avoided. When model mismatches are significant, the performance of the real-time optimization is impaired and the expected energy saving is reduced. In this paper, model mismatches in the real-time optimization of a chiller plant are considered. In such optimization, a simplified semi-physical or grey-box model of the chiller is typically used, which must be identified from available operation data. To overcome the mismatches associated with the chiller model, a hybrid Genetic Algorithm (HGA) method is used for online real-time training of the chiller model. The HGA combines the Genetic Algorithm (GA) method (for global search) with a traditional optimization method (faster and more efficient for local search) to avoid the conventional hit-and-trial process of GAs. The identification of model parameters is formulated as an optimization problem whose objective function is the least-square error between the model output and the actual output of the chiller plant. A case study is used to illustrate the implementation of the proposed method. It is shown that the proposed approach provides reliable decision making, enhances the robustness of the real-time optimization strategy and improves energy performance.
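A compact sketch of the hybrid idea follows: a small GA for global search, then a local refinement step, minimizing the least-square error of a toy linear "chiller" model. The model form, synthetic data, and the compass-search refinement standing in for the paper's traditional local optimizer are all illustrative assumptions.

```python
# Hybrid GA sketch: GA global search + local refinement on a least-square
# identification of a toy model y = a*load + b (not the paper's chiller model).
import random

random.seed(0)
loads = [0.2, 0.4, 0.6, 0.8, 1.0]
power = [0.9, 1.3, 1.7, 2.1, 2.5]          # synthetic data: a = 2, b = 0.5

def sse(p):
    """Least-square error of parameters p = (a, b) against the data."""
    a, b = p
    return sum((a * x + b - y) ** 2 for x, y in zip(loads, power))

# Global search: tiny GA (elitist selection, blend crossover, mutation).
pop = [(random.uniform(0, 5), random.uniform(0, 5)) for _ in range(30)]
for _ in range(60):
    pop.sort(key=sse)
    parents = pop[:10]
    children = []
    for _ in range(20):
        (a1, b1), (a2, b2) = random.sample(parents, 2)
        children.append(((a1 + a2) / 2 + random.gauss(0, 0.1),
                         (b1 + b2) / 2 + random.gauss(0, 0.1)))
    pop = parents + children
best = min(pop, key=sse)

# Local refinement: shrinking compass search around the GA result.
step = 0.1
while step > 1e-6:
    moved = False
    for d in [(step, 0), (-step, 0), (0, step), (0, -step)]:
        cand = (best[0] + d[0], best[1] + d[1])
        if sse(cand) < sse(best):
            best, moved = cand, True
    if not moved:
        step /= 2
print(round(best[0], 2), round(best[1], 2))  # close to a = 2, b = 0.5
```

The GA supplies a starting point near the global basin so the cheap local step can finish the job, avoiding the hit-and-trial restarts the abstract mentions.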
Abstract: This paper investigates efficient combustion modeling for cycle simulation in internal combustion engine (ICE) studies. The term “efficient model” means that the model must generate the desired simulation results while having a fast simulation time; in other words, efficiency is defined by the application of the model. The objective of this study is to develop math-based models for control applications, or control-oriented models for short. This study compares the different modeling approaches used to model ICEs, such as mean-value, zero-dimensional, quasi-dimensional, and multi-dimensional models, for control applications. Mean-value models have been widely used for model-based control, but with the development of advanced simulation tools (e.g., Maple/MapleSim), higher-order (more complex) models can now also be considered control-oriented models. This paper presents enhanced zero-dimensional, cycle-by-cycle modeling and simulation of a spark ignition engine with a two-zone combustion model. The simulation results are cross-validated against results from the GT-Power package and show good agreement in terms of trends and values.
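Zero-dimensional combustion models like the one the abstract describes commonly prescribe the mass fraction burned with a Wiebe function; the sketch below uses typical textbook parameters, not those of the paper's two-zone model.

```python
# Wiebe function: prescribed mass fraction burned vs crank angle, as used in
# zero-dimensional combustion models. Parameters are typical textbook values.
import math

def wiebe_mfb(theta, theta0=-10.0, dtheta=50.0, a=5.0, m=2.0):
    """Mass fraction burned at crank angle theta (degrees ATDC)."""
    if theta <= theta0:
        return 0.0           # before start of combustion
    frac = (theta - theta0) / dtheta
    return 1.0 - math.exp(-a * min(frac, 1.0) ** (m + 1.0))

print(round(wiebe_mfb(-10.0), 3))  # start of combustion: nothing burned
print(round(wiebe_mfb(40.0), 3))   # end of burn: ~1 - e^-5, about 0.993
```

In a two-zone model this burn profile governs the mass transferred per crank-angle step from the unburned to the burned zone, from which the zone temperatures and pressure are integrated cycle by cycle.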
Abstract: Model updating methods have received increasing
attention in damage detection of structures based on measured modal
parameters. A probability-based damage detection (PBDD) procedure
built on model updating is therefore presented in this paper, in
which a one-stage model-based damage identification technique based
on the dynamic features of a structure is investigated. The presented
framework uses a finite element updating method with a Monte Carlo
simulation that accounts for the uncertainty caused by measurement
noise. Enhanced ideal gas molecular movement (EIGMM) is used as the
main algorithm for model updating. Ideal gas molecular movement
(IGMM) is a multi-agent algorithm inspired by the movement of ideal
gas molecules, which disperse rapidly in all directions and cover the
available space, owing to the high speed of the molecules and to
collisions among them and with the surrounding barriers. To reach
optimal solutions, the IGMM algorithm randomly generates an initial
population of gas molecules and applies the governing equations for
molecular velocity and inter-molecular collisions. In this paper, an
enhanced version of IGMM, which removes unchanged variables after a
specified number of iterations, is developed. The proposed method is
applied to two numerical examples in the field of structural damage
detection. The results show that the proposed method performs well
and competitively in the PBDD of structures.
Abstract: In the present case study, we examined the development and testing methods of systems that contain safety-critical elements in different industrial fields. Accordingly, we examined the classical object-oriented development and testing environment, as both the medical technology and automobile industries approach the development of safety-critical elements that way. Subsequently, we examined model-based development. We introduce the quality parameters that define development and testing. Taking modern agile methodology (Scrum) into consideration, we examined whether, and to what extent, the methodologies we found fit into this environment.
Abstract: ISO/IEC/IEEE 15288:2015, Systems and Software Engineering - System Life Cycle Processes, is an international standard that provides generic top-level process descriptions to support systems engineering (SE). However, the processes defined in the standard need improvement to increase their integrity and consistency. The goal of this research is to explore a way forward by building an ontology model of the SE standard to manage SE knowledge. The ontology model gives a whole picture of the SE knowledge domain by building connections between SE concepts. Moreover, it creates a hierarchical classification of the concepts to fulfil different requirements for displaying and analysing SE knowledge.
Abstract: Autism spectrum disorder is a complex developmental disability defined by a certain set of behaviors. Persons with Autism Spectrum Disorders (ASD) frequently engage in stereotyped and repetitive motor movements. The objective of this article is to propose a method to automatically detect this unusual behavior. Our study provides a clinical tool that facilitates the diagnosis of ASD for doctors. We focus on the automatic identification, in real time, of five repetitive gestures among autistic children: body rocking, hand flapping, fingers flapping, hand on the face and hands behind the back. In this paper, we present a gesture recognition system for children with autism, which consists of three modules: model-based movement tracking, feature extraction, and gesture recognition using an artificial neural network (ANN). The first module uses the Microsoft Kinect sensor, the second chooses points of interest from the 3D skeleton to characterize the gestures, and the last proposes a neural connectionist model to perform the supervised classification of the data. The experimental results show that our system can achieve a recognition rate above 93.3%.
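The shape of such a pipeline can be sketched as follows; the joint names, feature, gesture centroids and the nearest-centroid rule standing in for the paper's ANN are all hypothetical illustrations, not the authors' design.

```python
# Illustrative pipeline sketch: a skeleton-derived feature, then a trivial
# nearest-centroid classifier standing in for the ANN. All values are toys.
import math

def hand_head_distance(skeleton):
    """Feature: Euclidean distance between the hand and head joints (metres)."""
    (hx, hy, hz), (gx, gy, gz) = skeleton["hand"], skeleton["head"]
    return math.sqrt((hx - gx) ** 2 + (hy - gy) ** 2 + (hz - gz) ** 2)

def classify(feature, centroids):
    """Assign the gesture whose centroid is closest to the feature value."""
    return min(centroids, key=lambda g: abs(centroids[g] - feature))

centroids = {"hand on the face": 0.05, "hand flapping": 0.45}  # toy values
frame = {"hand": (0.02, 1.50, 0.30), "head": (0.00, 1.52, 0.28)}
d = hand_head_distance(frame)
print(classify(d, centroids))  # -> hand on the face
```

In the actual system the Kinect supplies the 3D joint positions per frame, several such features are extracted over a time window, and a trained neural network replaces the centroid rule for the supervised classification.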
Abstract: 3D model-based vehicle matching provides a new way
for vehicle recognition, localization and tracking. Its key is to
construct an evaluation function, also called fitness function, to
measure the degree of vehicle matching. The existing fitness functions
often poorly perform when the clutter and occlusion exist in traffic
scenarios. In this paper, we present a practical and efficient fitness
function. Unlike the existing evaluation functions, the proposed
fitness function studies the vehicle matching problem from both
local and global perspectives, exploiting the pixel gradient
information as well as the silhouette information. In view of the
discrepancy between the 3D vehicle model and the real vehicle, a
weighting strategy is introduced to treat the fitting of the model's
wireframes differently. Additionally, a normalization of the model's
projection is performed to improve the accuracy of the matching.
Experimental results on real traffic videos reveal that the proposed
fitness function is efficient and robust to the cluttered background
and partial occlusion.
Abstract: The design of distribution logistics has a decisive impact on a company's logistics costs and performance; hence, such solutions make an essential contribution to corporate success. This article describes a decision support system for analyzing the potential of distribution logistics in terms of logistics costs and performance. In contrast to previous business process re-engineering (BPR) procedures, this method maps distribution logistics holistically under variable distribution structures. Combined with qualitative measures, the decision support system contributes to a more efficient design of distribution logistics.