Abstract: During the initial phase of cognitive development,
infants exhibit remarkable abilities to generate novel behaviors in
unfamiliar situations and to explore actively in order to learn, even
while lacking extrinsic rewards from the environment. These abilities
set them apart from even the most advanced autonomous robots.
This work seeks to contribute to understanding and replicating some of
these abilities. We propose the Bottom-up hiErarchical sequential
Learning algorithm with Constructivist pAradigm (BEL-CA) to
design agents capable of learning autonomously and continuously
through interactions. The algorithm makes no assumptions about
the semantics of input and output data, nor does it rely upon an
a priori model of the world given as a set of states and transitions.
In addition, we propose GAIT (Generating and Analyzing Interaction
Traces), a toolkit for analyzing the learning process at run time.
We use GAIT to report and explain the detailed learning process and
the structured behaviors that the agent has learned at each decision
step. We report an experiment in which
the agent learned to successfully interact with its environment and to
avoid unfavorable interactions using regularities discovered through interaction.
Abstract: The MIMO-OFDM communication system is a key
solution for the next generation of mobile communication due
to its high spectral efficiency, high data rate and robustness
against multi-path fading channels. However, a MIMO-OFDM system
requires perfect knowledge of the channel state information and
good synchronization between the transmitter and the receiver
to achieve the expected performance. Recently, we have proposed
two algorithms for channel estimation and timing synchronization
with good performances and very low implementation complexity
compared to those proposed in the literature. In order to validate and
evaluate the efficiency of these algorithms in real environments, this
paper presents in detail the implementation of a 2 × 2 MIMO-OFDM
system based on LabVIEW and USRP 2920. Implementation results
show a good agreement with the simulation results under different operating conditions.
Abstract: In an attempt to enrich the lives of billions of people by providing proper information, security and a way of communicating with others, the need for efficient and improved satellites is constantly growing. Thus, there is an increasing demand for better error detection and correction (EDAC) schemes capable of protecting the data onboard satellites. This paper is aimed at detecting and correcting such errors using a special algorithm called the Hamming Code, which uses the concept of parity and parity bits to prevent single-bit errors onboard a satellite in Low Earth Orbit. The paper focuses on the study of Low Earth Orbit satellites and the process of generating the Hamming Code matrix to be used for EDAC using computer programs. The most effective version generated was the Hamming (16, 11, 4) code, produced using MATLAB; the paper compares this particular scheme with other EDAC mechanisms, including other versions of Hamming Codes and Cyclic Redundancy Check (CRC), and discusses its limitations. This version of the Hamming Code guarantees single-bit error correction as well as double-bit error detection. Furthermore, it has proved to be fast, with a checking time of 5.669 nanoseconds, has a relatively higher code rate and lower bit overhead than the other versions, and can detect a greater percentage of errors per length of code than other EDAC schemes with similar capabilities. In conclusion, with the proper implementation of the system, it is quite possible to ensure a relatively uncorrupted satellite storage system.
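The abstract does not reproduce the (16, 11, 4) generator matrix itself, so as an illustration of the parity-bit mechanism it relies on, here is a minimal Python sketch of the smaller, classic Hamming(7,4) code (the function names are ours); the same syndrome idea scales to the (16, 11, 4) version:

```python
def hamming74_encode(data):
    """Encode 4 data bits as a 7-bit Hamming codeword.

    Bit positions are 1..7; parity bits sit at positions 1, 2 and 4,
    each covering the positions whose binary index contains that bit.
    """
    d1, d2, d3, d4 = data
    p1 = d1 ^ d2 ^ d4   # covers positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4   # covers positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4   # covers positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_correct(code):
    """Recompute the parity checks; the syndrome value is the 1-based
    position of a single-bit error (0 means no error detected)."""
    c = list(code)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3
    if syndrome:
        c[syndrome - 1] ^= 1  # flip the erroneous bit back
    return c
```

Flipping any single bit of a codeword changes the syndrome to that bit's 1-based position, which is exactly what lets the decoder correct it.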
Abstract: Nowadays, most universities run course enrollment systems based on students’ registration order. However, students’ preference level for certain courses is also an important factor to consider. In this research, the possibility of applying a preference-first system is discussed and analyzed in comparison with the order-first system. A bipartite graph is used to represent the relationship between students and the courses they intend to register for. With the graph set up, we apply the Ford-Fulkerson (F.F.) algorithm to maximize pairings between the two sets of nodes, in our case students and courses. Two models are proposed in this paper: one that considers students’ order first, and one that considers students’ preference first. By comparing and contrasting the two models, we highlight their usability, which potentially leads to better designs for school course registration systems.
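The matching step the abstract describes can be sketched as follows: a maximum student-course matching found by repeated augmenting-path search, which is Ford-Fulkerson specialized to a unit-capacity bipartite network. The data layout is our assumption, and the paper's order-first/preference-first weighting is not modeled here:

```python
def max_bipartite_matching(prefs):
    """prefs: dict mapping each student to the list of courses they
    would accept. Returns a dict course -> student for a maximum
    matching, built by searching for augmenting paths."""
    match = {}  # course -> student currently holding it

    def try_assign(student, seen):
        for course in prefs[student]:
            if course in seen:
                continue
            seen.add(course)
            # take the course if it is free, or if its current holder
            # can be pushed along an augmenting path to another course
            if course not in match or try_assign(match[course], seen):
                match[course] = student
                return True
        return False

    for student in prefs:
        try_assign(student, set())
    return match
```

Iterating students in registration order already favors earlier registrants; a preference-first variant would instead rely on the ordering of each student's course list.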
Abstract: Task assignment and scheduling is a challenging Operations Research problem when there is a limited number of resources and a comparatively higher number of tasks. The Cost Management team at Cummins needs to assign tasks based on deadlines and must prioritize some of the tasks as per business requirements. Moreover, there is a constraint on the resources: tasks must be assigned according to each individual’s skill level, which may vary across tasks. Another constraint is that the scheduled tasks should be evenly distributed in terms of the number of working hours, which adds further complexity to this problem. The proposed greedy approach to the assignment and scheduling problem first assigns tasks by management priority and then by the closest deadline. This is followed by an iterative selection, for each task, of an available resource with the least allocated total working hours, i.e., finding the local optimal choice for each task with the goal of determining the global optimum. The greedy task allocation is compared with a variant of the Hungarian Algorithm, and it is observed that the proposed approach gives an equal allocation of working hours among the resources. The proposed approach is also compared with manual task allocation, and it is noted that the visibility of the task timeline has increased from 2 months to 6 months. An interactive dashboard app was created for the greedy assignment and scheduling approach, and tasks with a horizon of more than 2 months that initially waited in a queue without a delivery date are now analyzed effectively by the business, with expected timelines for completion.
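The greedy rule described above (management priority first, then closest deadline, then the least-loaded qualified resource) can be sketched as follows; the data layout and field names are our assumptions, not the paper's:

```python
def greedy_schedule(tasks, resources):
    """tasks: list of (name, priority, deadline, hours, required_skill),
    where a lower priority number means more urgent.
    resources: dict mapping resource name -> set of skills.
    Each task goes to the qualified resource with the fewest allocated
    hours so far -- the local optimum described in the abstract."""
    load = {r: 0 for r in resources}
    plan = {}
    # order by management priority first, then by the closest deadline
    for name, prio, deadline, hours, skill in sorted(
            tasks, key=lambda t: (t[1], t[2])):
        qualified = [r for r, skills in resources.items() if skill in skills]
        if not qualified:
            continue  # no resource can take this task
        best = min(qualified, key=lambda r: load[r])
        load[best] += hours
        plan[name] = best
    return plan, load
```

The `sorted` key encodes "priority, then deadline", and the `min` over accumulated `load` is what evens out working hours across resources.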
Abstract: Multimodal image registration is a profoundly complex
task, which is why deep learning has been widely used to address it in
recent years. However, two main challenges remain: Firstly, the lack
of ground truth data calls for an unsupervised learning approach,
which leads to the second challenge of defining a feasible loss
function that can compare two images of different modalities to judge
their level of alignment. To avoid this issue altogether, we implement a
generative adversarial network consisting of two registration networks
G_AB, G_BA and two discriminator networks D_A, D_B connected by
spatial transformation layers. G_AB learns to generate a deformation
field which registers an image of modality B to an image of modality A.
To do so, it uses the feedback of the discriminator D_B, which learns
to judge the quality of alignment of the registered image B. G_BA and
D_A learn a mapping from modality A to modality B. Additionally, a
cycle-consistency loss is implemented. For this, both registration
networks are employed twice, resulting in images Â, B̂ which were
registered to B̃, Ã, which in turn were registered to the initial image
pair A, B. Thus the resulting and initial images
of the same modality can be easily compared. A dataset of liver
CT and MRI was used to evaluate the quality of our approach and
to compare it against learning and non-learning based registration
algorithms. Our approach achieves Dice scores of up to 0.80 ± 0.01
and is therefore comparable to and slightly more successful than
algorithms like SimpleElastix and VoxelMorph.
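The cycle-consistency term described in the abstract compares same-modality images after a round trip through both registration networks; as a sketch (the symbols follow the abstract, but the choice of L1 norm is an assumption, since the abstract does not name the norm):

```latex
% \tilde{B}, \tilde{A} are the first-pass registrations of B, A;
% \hat{A}, \hat{B} result from registering them back, so they can be
% compared directly with the originals of the same modality:
\mathcal{L}_{\mathrm{cyc}}
  = \bigl\| \hat{A} - A \bigr\|_{1} + \bigl\| \hat{B} - B \bigr\|_{1}
```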
Abstract: This paper presents an iteration method for the numerical solution of a one-dimensional problem of generalized thermoelasticity with one relaxation time under given initial and boundary conditions. A thermoelastic material with variable properties, functionally graded according to a power law, has been considered. Adomian’s decomposition technique has been applied to the governing equations. The numerical results have been calculated using the iteration method with a certain algorithm and are represented in figures, which affirm that Adomian’s decomposition method is a successful method for modeling thermoelastic problems. Moreover, the empirical parameter of the functionally graded material and the lattice design parameter have significant effects on the temperature increment, the strain, the stress, and the displacement.
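For reference, Adomian's decomposition method in its generic form represents the solution as a series whose terms are obtained recursively; the sketch below uses standard operator notation, not the paper's specific governing equations:

```latex
% Write the governing equation as  L u + R u + N u = g,  with L linear
% and invertible, R the linear remainder, and N the nonlinear part.
% The solution and the nonlinearity are expanded as series:
u = \sum_{n=0}^{\infty} u_n, \qquad N(u) = \sum_{n=0}^{\infty} A_n
% where the A_n are the Adomian polynomials. The recursion starts from
% the initial/boundary data (collected in \Phi, annihilated by L):
u_0 = \Phi + L^{-1} g, \qquad
u_{n+1} = -L^{-1}\left( R\, u_n + A_n \right)
```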
Abstract: Since vision systems are in strong demand for autonomous applications in industrial environments, image recognition has become an important research topic. Here, a deep learning algorithm is employed in a vision system to recognize industrial objects, and the system is integrated with a 7A6 Series manipulator for automatic object-gripping tasks. A PC and a Graphics Processing Unit (GPU) are chosen to construct the 3D vision recognition system. A depth camera (Intel RealSense SR300) is employed to extract images for object recognition and coordinate derivation. The YOLOv2 scheme is adopted in a convolutional neural network (CNN) structure for object classification and center point prediction. Additionally, an image processing strategy is used to find the object contour for calculating the object orientation angle. Then, the specified object location and orientation information are sent to the robotic controller. Finally, a six-axis manipulator can grasp the specific object in a random environment based on the user command and the extracted image information. The experimental results show that YOLOv2 has been successfully employed to detect the object location and category with confidence near 0.9 and 3D position error less than 0.4 mm. This is useful for future intelligent robotic applications in Industry 4.0 environments.
Abstract: Counterfeit goods and documents are a global problem that demands ever more sophisticated countermeasures. Existing techniques using watermarking or embedding symbols on objects are not suitable for all use cases. To address those special needs, we created a complete system for authenticating paper documents and physical objects with flat surfaces. Objects are marked using 2D graphic codes, named DotAuth, that are orientation-independent and resistant to camera noise. Based on the identifier stored in the 2D code, the system is able to perform basic authentication and to conduct more sophisticated analyses, e.g., relying on augmented reality and the physical properties of the object. In this paper, we present the complete architecture, algorithms and applications of the proposed system. A feature comparison of the proposed solution with other products is also presented, pointing to several advantages that increase usability and efficiency in protecting physical objects.
Abstract: Electric Vehicles (EVs) appear to be gaining increasing patronage as a feasible alternative to Internal Combustion Engine Vehicles (ICEVs) owing to their low emissions and high operating efficiency. EV energy storage systems are required to handle high energy and power density capacities constrained by limited space, operating temperature, weight and cost. The choice of strategies for energy storage evaluation, monitoring and control remains a challenging task. This paper presents a review of various energy storage technologies and recent research in battery evaluation techniques used in EV applications. It also underscores strategies for hybrid energy storage management and control schemes for the improvement of EV stability and reliability. The study reveals that despite the advances recorded in battery technologies, there is still no cell which possesses both the optimum power and energy densities, among other requirements, for EV application. However, combining two or more energy storage devices into a hybrid, allowing the advantageous attributes of each device to be utilized, is a promising solution. The review also reveals that State-of-Charge (SoC) is the most crucial quantity in battery estimation. The conventional method of SoC measurement is, however, questioned in the literature, and adaptive algorithms that incorporate models of all disturbances are being proposed. The review further suggests that a heuristic-based approach is commonly adopted in the development of strategies for hybrid energy storage system management. The alternative, optimization-based approach is found to be more accurate, but it is memory- and computation-intensive and as such is not recommended in most real-time applications.
Abstract: Acoustic Emission (AE) is one of the most effective non-destructive tests, as it can detect the defect process while it is occurring. AE techniques can be used to monitor a wide range of structures and materials, such as metals, non-metals and combinations of these, when load is applied. The current work investigates the effectiveness and accuracy of the time-of-arrival (TOA) method in AE tests involving reinforced composite concrete-mortar structures. A series of experimental tests was performed using the Hsu-Nielsen (H-N) source to study 2-D location accuracy with this method on concrete-mortar (400×400 mm) specimens. Four AE sensors (R3I – resonant frequency 30 kHz) were mounted on the mortar surface, and six source events were performed at each of the preselected locations on the upper surface of the mortar. Results show that the TOA method can be used effectively to locate signals on composite concrete/mortar specimens and has high accuracy.
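As an illustration of 2-D TOA source location on a 400×400 mm specimen, the following sketch brute-force searches a grid for the point whose predicted arrival-time pattern best matches the measured one; the wave speed, grid step, and corner-mounted sensor layout are our assumptions, not the paper's parameters:

```python
import math

def locate_toa(sensors, arrival_times, v, step=5.0, size=400.0):
    """Estimate a 2-D AE source position on a size x size mm plate.

    Grid search: pick the point whose predicted arrival times (distance
    over wave speed v) best match the measured ones. Because the
    emission time is unknown, both patterns are mean-centred so the
    common offset cancels (classic TOA source location)."""
    best, best_err = None, float("inf")
    n = len(sensors)
    for i in range(int(size / step) + 1):
        for j in range(int(size / step) + 1):
            x, y = i * step, j * step
            pred = [math.hypot(x - sx, y - sy) / v for sx, sy in sensors]
            pm, am = sum(pred) / n, sum(arrival_times) / n
            err = sum(((p - pm) - (t - am)) ** 2
                      for p, t in zip(pred, arrival_times))
            if err < best_err:
                best, best_err = (x, y), err
    return best
```

With four sensors, the mean-centred residual is zero only at the true source, so the grid point nearest it wins; real implementations refine this with least-squares instead of an exhaustive grid.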
Abstract: This research presents the first constant approximation
algorithm to the p-median network design problem with multiple
cable types. This problem has previously been addressed with a single
cable type, for which a bifactor approximation algorithm exists. To the
best of our knowledge, the algorithm proposed in this paper is the first
constant approximation algorithm for the p-median network design
problem with multiple cable types. The addressed problem is a
combination of two well-studied problems: the p-median problem and the
network design problem. The introduced algorithm is a constant-factor
random sampling approximation algorithm, conceived using random
sampling techniques from the literature. It is based on a
redistribution lemma from the literature and a Steiner tree problem as
a subproblem. This algorithm is simple, and it relies on the
notions of random sampling and probability. The proposed approach
gives an approximation solution with one constant ratio without
violating any of the constraints, in contrast to the one proposed in the
literature. This paper provides a (21 + 2)-approximation algorithm
for the p-median network design problem with multiple cable types
using random sampling techniques.
Abstract: The aim of this paper is to compare and discuss better classifier algorithm options for credit risk assessment by applying different Machine Learning techniques. Using records from a Brazilian financial institution, this study uses a database of 5,432 companies that are clients of the bank, where 2,600 clients are classified as non-defaulters, 1,551 are classified as defaulters and 1,281 are temporarily defaulters, meaning that the clients are overdue on their payments for up to 180 days. For each case, a total of 15 attributes was considered for a one-against-all assessment using four different techniques: Artificial Neural Networks Multilayer Perceptron (ANN-MLP), Artificial Neural Networks Radial Basis Functions (ANN-RBF), Logistic Regression (LR) and finally Support Vector Machines (SVM). Initially, the data were coded using thermometer coding (for numerical attributes) or dummy coding (for nominal attributes). The methods were then evaluated over different parameter settings, and the best result of each technique was compared in terms of accuracy, false positives, false negatives, true positives and true negatives. This comparison showed that the best method, in terms of accuracy, was ANN-RBF (79.20% for non-defaulter classification, 97.74% for defaulters and 75.37% for the temporarily defaulter classification). However, the best accuracy does not always represent the best technique. For instance, on the classification of temporarily defaulters, this technique was surpassed in terms of false positives by SVM, which had the lowest rate (0.07%) of false positive classifications. All these intrinsic details are discussed considering the results found, and an overview of what was presented is given in the conclusion of this study.
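The one-against-all setup used in the study can be sketched generically: train one binary classifier per class (that class relabelled 1, everything else 0) and predict the class whose model scores highest. The sketch below uses a plain gradient-descent logistic regression in place of the paper's ANN/SVM models; all names and hyperparameters are ours:

```python
import math

def train_logreg(X, y, lr=0.1, epochs=200):
    """Plain logistic regression via gradient descent.
    The bias is folded in as the last weight (inputs get a trailing 1)."""
    w = [0.0] * (len(X[0]) + 1)
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi + [1.0]))
            p = 1.0 / (1.0 + math.exp(-z))   # sigmoid
            g = p - yi                        # gradient of log-loss
            w = [wj - lr * g * xj for wj, xj in zip(w, xi + [1.0])]
    return w

def one_vs_all(X, labels):
    """One binary model per class: that class relabelled 1, rest 0."""
    models = {}
    for c in set(labels):
        y = [1.0 if l == c else 0.0 for l in labels]
        models[c] = train_logreg(X, y)
    return models

def predict(models, x):
    """Predict the class whose binary model gives the highest score."""
    def score(w):
        return sum(wj * xj for wj, xj in zip(w, x + [1.0]))
    return max(models, key=lambda c: score(models[c]))
```

The same wrapper works unchanged around any binary scorer, which is why the study can run the identical one-against-all protocol over MLP, RBF, LR and SVM models.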
Abstract: We propose to record Activities of Daily Living
(ADLs) of elderly people using a vision-based system so as to provide
better assistive and personalization technologies. Current ADL-related
research is based on data collected with help from non-elderly subjects
in laboratory environments and the activities performed are predetermined
for the sole purpose of data collection. To obtain more realistic
datasets for the application, we recorded ADLs in a real-world
environment involving real elderly subjects. Motivated by the need to
collect data for more effective
research related to elderly care, we chose to collect data in the room of
an elderly person. Specifically, we installed a Kinect, a vision-based
sensor, on the ceiling to capture the activities that the elderly
subject performs every morning. Based on the data, we identified
12 morning activities that the elderly person performs daily. To
recognize these activities, we created the HARELCARE framework to
investigate the effectiveness of existing Human Activity Recognition
(HAR) algorithms and to propose the use of a transfer learning
algorithm for HAR. We compared performance in terms of accuracy and
training progress. Although the collected dataset is
relatively small, the proposed algorithm has a good potential to be
applied to all daily routine activities for healthcare purposes such as
evidence-based diagnosis and treatment.
Abstract: In linear estimation, the traditional Kalman filter uses the Kalman filter gain to produce estimates and predictions of the n-dimensional state vector from the m-dimensional measurement vector. The computation of the Kalman filter gain requires the inversion of an m × m matrix in every iteration. In this paper, a variation of the Kalman filter that eliminates the Kalman filter gain is proposed. In the time-varying case, eliminating the Kalman filter gain requires the inversion of an n × n matrix and of an m × m matrix in every iteration. In the time-invariant case, it requires the inversion of an n × n matrix in every iteration. The proposed Kalman filter gain elimination algorithm may be faster than the conventional Kalman filter, depending on the model dimensions.
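For reference, the conventional gain-based filter that the paper takes as its baseline looks as follows in the scalar case (n = m = 1), where the m × m inversion reduces to a single division; the proposed gain-elimination variant is not reproduced here:

```python
def kalman_scalar(measurements, a=1.0, h=1.0, q=1e-4, r=0.25,
                  x0=0.0, p0=1.0):
    """One-dimensional, time-invariant Kalman filter with the
    conventional gain K. Model: x' = a*x + w (var q), z = h*x + v (var r).
    In this scalar case the (H P H^T + R) inversion is just a division.
    """
    x, p = x0, p0
    estimates = []
    for z in measurements:
        # predict
        x_pred = a * x
        p_pred = a * p * a + q
        # update: K = P H^T (H P H^T + R)^{-1}
        s = h * p_pred * h + r          # innovation variance (1x1)
        k = p_pred * h / s              # Kalman gain
        x = x_pred + k * (z - h * x_pred)
        p = (1.0 - k * h) * p_pred
        estimates.append(x)
    return estimates
```

Fed a constant noiseless measurement, the estimate converges toward it while the gain shrinks, which is the behavior any gain-elimination variant must reproduce.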
Abstract: The current scientific and engineering interest in preventing emergency conditions in the drive synchronous motors that ensure the ore-grinding technological process has been justified. An analysis of the known works devoted to the abnormal operation modes of synchronous motors, and the possibilities of protection against them, has shown that their application is inexpedient for preventing the impermissible conditions arising in the electrical drive synchronous motors ensuring the ore-grinding process. The main energy and technological factors affecting the technical condition of synchronous motors are evaluated. An algorithm for preventing the irregular operation modes of the electrical drive synchronous motor applied in the ore-grinding technological process has been developed and is proposed for further application. It provides smart solutions that ensure the safe operation of the drive synchronous motor through a comprehensive consideration of the energy and technological factors.
Abstract: Airborne Laser Scanning (ALS) is one of the main technologies for generating high-resolution digital terrain models (DTMs). DTMs are crucial to several applications, such as topographic mapping, flood zone delineation, geographic information systems (GIS), hydrological modelling, spatial analysis, etc. A laser scanning system generates an irregularly spaced three-dimensional cloud of points. Raw ALS data consist mainly of ground points (which represent the bare earth) and non-ground points (which represent buildings, trees, cars, etc.). Removing all the non-ground points from the raw data is referred to as filtering. Filtering heavily forested areas is considered a difficult and challenging task, as the canopy stops laser pulses from reaching the terrain surface. This research presents an approach for removing non-ground points from raw ALS data in densely forested areas. Smoothing splines are exploited to interpolate and fit the noisy ALS data. The presented filter utilizes a weight function to assign a weight to each data point. Furthermore, unlike most methods, the presented filtering algorithm is designed to be automatic. Three different forested areas in the United Kingdom are used to assess the performance of the algorithm. The results show that the DTMs generated from the filtered data are accurate (when compared against reference terrain data) and that the performance of the method is stable for all the heavily forested data samples. The average root mean square error (RMSE) value is 0.35 m.
Abstract: The proper number and appropriate locations of service centers can save cost, raise revenue and gain more satisfaction from customers. Service centers are costly to establish and difficult to relocate. In long-term planning periods, several factors may affect the service; one of the most critical is uncertain customer demand. The opened service centers need to be capable of serving customers and making a profit even though the demand changes in each period. In this work, the capacitated location-allocation problem with stochastic demand is considered. A mathematical model is formulated to determine suitable locations of service centers and their allocation so as to maximize total profit over multiple planning periods. Two heuristic methods, a local search and a genetic algorithm, are used to solve this problem. For the local search, five different probabilities of choosing each type of move are applied. For the genetic algorithm, three different replacement strategies are considered. The results of applying each method to numerical examples are compared. Both methods reach the same best-found solution in most examples, but the genetic algorithm provides better solutions in some cases.
Abstract: An important feature of the exploitation of associated gas as fuel for gas turbine engines is a declining supply. Thus, when exploiting this resource, the timing of the divestment of prime movers is very important, as the fuel supply diminishes with time. This paper explores the influence of engine degradation on the timing of divestments. Hypothetical but realistic gas turbine engines were modelled with Turbomatch, the Cranfield University gas turbine performance simulation tool. The results were deployed in three degradation scenarios within the TERA (Techno-economic and Environmental Risk Analysis) framework to develop economic models. An optimisation with Genetic Algorithms was carried out to maximize the economic benefit. The results show that degradation will have a significant impact: it will delay the divestment of power plants while they are running less efficiently. Over a 20-year investment period, decreases of $0.11bn, $0.26bn and $0.45bn (billion US dollars) were observed for the three degradation scenarios as against the clean case.
Abstract: This paper describes the effects of photovoltaic voltage changes on a multi-level inverter (MLI) due to solar irradiation variations, and methods to overcome these changes. The irradiation variation affects the generated voltage, which in turn varies the switching angles required to turn on the inverter power switches in order to obtain minimum harmonic content in the output voltage profile. A Genetic Algorithm (GA) is used to solve the harmonic elimination equations of eleven-level inverters with equal and non-equal dc sources. Then, an artificial neural network (ANN) algorithm is proposed to generate an appropriate set of switching angles for the MLI at any level of input dc source voltage, minimizing the total harmonic distortion (THD) to an acceptable limit. The MATLAB/Simulink platform is used as a simulation tool, and Fast Fourier Transform (FFT) analyses are carried out on the output voltage profile to verify the reliability and accuracy of the applied technique for controlling the MLI harmonic distortion. According to the simulation results, the obtained THD for equal dc sources is 9.38%, while for variable or unequal dc sources it varies between 10.26% and 12.93% as the input dc voltage varies between 4.47 V and 11.43 V, respectively. The proposed ANN algorithm provides satisfactory simulation results that match those obtained by alternative algorithms.