The COVID-19 Pandemic: Lessons Learned in Promoting Student Internationalisation

In higher education, a great degree of importance is placed on the internationalisation of the student experience. This is seen as a valuable contributor to building confidence, broadening knowledge, creating networks and connections, and enhancing employability for current students who will become the next generation of managers in technology and business. The COVID-19 pandemic has affected all areas of people’s lives. Restrictions on travel, coupled with the fears and concerns generated by the health risks, have dramatically reduced the opportunity for students to engage with this agenda. Institutions of higher education have been required to rethink fundamental aspects of their business model, from recruitment and enrolment, through learning approaches and assessment methods, to the pathway to employment. This paper presents a case study which focuses on student mobility and how the physical experience of being in another country, whether to study, to work, to volunteer, or to gain cultural and social enhancement, has of necessity been replaced by alternative approaches. It considers trans-national education as an alternative to physical study overseas, virtual mobility and internships as an alternative to international work experience, and collaborative online projects as an alternative to in-person encounters. The paper concludes that although these elements were adopted to address the immediate situation, the lessons learnt and the feedback gained suggest that they have contributed successfully in new and sometimes unexpected ways, and that they will persist beyond the present to become part of the "new normal" for the future. That being the case, senior leaders of institutions of higher education will be required to revisit their international plans and to rewrite their international strategies to take account of and build upon these changes.

Failure Analysis of a Fractured Control Pressure Tube from an Aircraft Engine

This paper studies a failure case of a fuel pressure supply tube from an aircraft engine; multiple fractures of such fuel pressure control tubes have been reported. The studied assembly was composed of the tube, a welded connecting pipe, where the fracture occurred, and a union nut. The fracture occurred in one of the most critical zones of the tube, in a region adjacent to the supporting body of the union nut at the connector. The tube material was X6CrNiTi18-10, an austenitic stainless steel. The chemical composition was determined using an X-ray fluorescence (XRF) spectrometer and combustion equipment. Furthermore, the material was characterized mechanically by a hardness test, and microstructurally using a stereo microscope and an optical microscope. The results confirmed that the material was within specifications. To determine the macrofractographic features, a visual examination and an observation of the tube fracture surface under a stereo microscope were carried out. The results revealed plastic macrodeformation of the tube, surface damage, and signs of a possible corrosion process. The fracture surface was also inspected by field-emission scanning electron microscopy (FE-SEM), equipped with an energy-dispersive X-ray microanalysis (EDX) system, to determine the microfractographic features and thereby identify the failure mechanism involved in the fracture. Fatigue striations, which are typical of progressive fracture by a fatigue mechanism, were observed. The origin of the fracture was traced to defects located on the outer wall of the tube, which led to a final overload fracture.

Holistic Approach to Assess the Potential of Using Traditional and Advanced Insulation Materials for Energy Retrofit of Office Buildings

Improving the energy performance of existing buildings can be challenging, particularly when facades cannot be modified and the only available option is internal insulation. In such cases, the choice of the most suitable material becomes increasingly complex: in addition to thermal transmittance and capital cost, the designer needs to account for the impact of the intervention on the internal spaces, and in particular the loss of usable space due to the additional layers of material installed. This paper explores this issue by analyzing a case study of an average office building that needs to be refurbished to meet the limits imposed by current energy-efficiency regulations. The building is modelled through dynamic performance simulation under three different climate conditions in order to evaluate its energy needs. The use of Vacuum Insulation Panels (VIPs) as an option for energy refurbishment is compared to traditional insulation materials (XPS, mineral wool). For each scenario, energy consumption is calculated and, in combination with the expected capital costs, used to perform a financial feasibility analysis. A holistic approach is proposed that accounts for the impact of the intervention on internal space by quantifying the value of the lost usable space and including it in the financial feasibility analysis. The proposed approach highlights how taking different drivers into account leads to the choice of different insulation materials, showing how accounting for the economic value of space can make VIPs an attractive solution for energy retrofitting under various climate conditions.
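To illustrate the trade-off the abstract describes, the following minimal sketch (not the paper's model) estimates the insulation thickness needed to reach a target U-value and prices the resulting loss of usable floor space; all material properties, costs, and building figures are assumed placeholder values.

```python
# Hypothetical inputs -- replace with project-specific data.
U_EXISTING = 1.8       # W/(m^2 K), existing uninsulated wall
U_TARGET = 0.30        # W/(m^2 K), assumed regulatory limit
PERIMETER = 40.0       # m, internal perimeter of the insulated facades
SPACE_VALUE = 3000.0   # EUR/m^2, assumed market value of usable office space

# Thermal conductivity [W/(m K)] and installed cost [EUR/m^2] -- assumed figures.
materials = {
    "VIP":          {"lam": 0.007, "cost_m2": 90.0},
    "XPS":          {"lam": 0.034, "cost_m2": 25.0},
    "Mineral wool": {"lam": 0.037, "cost_m2": 20.0},
}

# Extra thermal resistance the internal layer must provide.
r_add = 1.0 / U_TARGET - 1.0 / U_EXISTING  # [m^2 K / W]

for name, m in materials.items():
    d = m["lam"] * r_add              # required thickness [m]
    lost_area = PERIMETER * d         # usable floor area lost [m^2]
    lost_value = lost_area * SPACE_VALUE
    print(f"{name:12s} thickness={d*100:5.1f} cm  "
          f"lost space={lost_area:5.2f} m^2  value={lost_value:9.0f} EUR")
```

Because the VIP's conductivity is roughly five times lower than that of the traditional materials, its required thickness, and hence the value of the floor space it consumes, is correspondingly smaller; this is the effect that can offset its higher capital cost.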

Anomaly Detection in a Data Center with a Reconstruction Method Using a Multi-Autoencoders Model

Early detection of anomalies in data centers is important to reduce downtimes and the costs of periodic maintenance. However, there is little research on this topic, and even less on the fusion of sensor data for the detection of abnormal events. The goal of this paper is to propose a method for anomaly detection in data centers by combining sensor data (temperature, humidity, power) and deep learning models. The model described in the paper uses one autoencoder per sensor to reconstruct the inputs. The autoencoders contain Long Short-Term Memory (LSTM) layers and are trained using the normal samples of the relevant sensors, selected by correlation analysis. The difference signal between the input and its reconstruction is then used to classify the samples using feature extraction and a random forest classifier. The data measured by the sensors of a data center between January 2019 and May 2020 are used to train the model, while the data between June 2020 and May 2021 are used to assess it. The performance of the model is assessed a posteriori through the F1-score by comparing detected anomalies with the data center’s history. The proposed model outperforms the state-of-the-art reconstruction method, which uses a single autoencoder over multivariate sequences and detects anomalies with a threshold on the reconstruction error, achieving an F1-score of 83.60% compared to 24.16%.
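A minimal sketch of this pipeline is given below, with placeholder window lengths, layer sizes, and synthetic stand-in data; the paper's exact architecture, residual features, and training schedule are not reproduced here.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers
from sklearn.ensemble import RandomForestClassifier

WIN = 60  # samples per window (assumed)

def make_autoencoder(win=WIN):
    """LSTM autoencoder reconstructing one sensor's univariate windows."""
    inp = keras.Input(shape=(win, 1))
    z = layers.LSTM(32)(inp)                        # encoder
    x = layers.RepeatVector(win)(z)
    x = layers.LSTM(32, return_sequences=True)(x)   # decoder
    out = layers.TimeDistributed(layers.Dense(1))(x)
    model = keras.Model(inp, out)
    model.compile(optimizer="adam", loss="mse")
    return model

def residual_features(ae, windows):
    """Summary statistics of the input-minus-reconstruction difference signal."""
    err = (windows - ae.predict(windows, verbose=0)).reshape(len(windows), -1)
    return np.column_stack([err.mean(1), err.std(1),
                            np.abs(err).max(1), (err ** 2).mean(1)])

# Synthetic stand-ins for the sensor streams described in the abstract.
rng = np.random.default_rng(0)
sensors = ["temperature", "humidity", "power"]
normal = {s: rng.normal(size=(200, WIN, 1)) for s in sensors}    # normal data only
labelled = {s: rng.normal(size=(100, WIN, 1)) for s in sensors}  # for the classifier
y = rng.integers(0, 2, 100)                                      # 1 = anomaly

aes = {s: make_autoencoder() for s in sensors}   # one autoencoder per sensor
for s in sensors:
    aes[s].fit(normal[s], normal[s], epochs=5, batch_size=64, verbose=0)

X = np.hstack([residual_features(aes[s], labelled[s]) for s in sensors])
clf = RandomForestClassifier(n_estimators=200).fit(X, y)
```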

Affective Adaptation Design for Better Gaming Experiences

Affective adaptation is a creative way for game designers to add an extra layer of engagement to their productions. When players’ emotions are an explicit factor in mechanics design, endless possibilities for imaginative gameplay emerge. Whilst gaining popularity, existing affective game research mostly runs controlled experiments in restrictive settings and relies on one or more specialist devices for measuring the player’s emotional state. These conditions, albeit effective, are not necessarily realistic. Moreover, the simplified narratives and intrusive wearables may not be suitable for players. This exploratory study investigates delivering an immersive affective experience in the wild with minimal requirements, in an attempt to let the average developer reach the average player. A puzzle game with a rich narrative and creative mechanics was created. It employs both explicit and implicit adaptation and requires only a web camera. Participants played the game on their own machines in various settings. Whilst the game was rated as feasible, very engaging, and enjoyable, it remains questionable whether a fully immersive experience was delivered, owing to the limited sample size.

Flight School Perceptions of Electric Planes for Training

Flight school members are facing a major disruption in the technologies available for them to fly as electric planes enter the aviation industry. The year 2020 marked a new era in aviation with the first type certification of an electric plane. The Pipistrel Velis Electro is a two-seat electric aircraft (e-plane) designed for flight training. Electric flight training has the potential to substantially reduce the emissions, noise, and cost of pilot training. Though these are all attractive features, an understanding must be developed of the perceptions of the technology's essential actor: the pilot. This study asks student pilots, flight instructors, flight center managers, and other members of flight schools about their perceptions of e-planes. The questions were divided into three categories: safety and trust in the technology, expected costs in comparison to conventional planes, and interest in the technology, including the desire to fly electric planes. Participants were recruited from flight schools using a protocol approved by the Office of Research Ethics. None of these flight schools has an e-plane in its fleet, so these views are based on perceptions rather than direct experience. The results revealed perceptions that were strongly positive, with many qualitative comments indicating great excitement about the potential of the new electric aviation technology. Some concerns were raised regarding battery endurance limits. Overall, the flight school community is clearly in favor of introducing electric propulsion technology and reducing the environmental impacts of its industry.

Identification of Vessel Class with LSTM using Kinematic Features in Maritime Traffic Control

Preventing abuse and illegal activities in a given area of the sea is a very difficult and expensive task. Artificial intelligence offers the possibility to implement new methods that identify the vessel class from the kinematic features of the vessel itself. The task strictly depends on the quality of the data. This paper explores the application of a deep Long Short-Term Memory model using only an AIS data flow of relatively low quality. The proposed model reaches high accuracy in detecting nine vessel classes representing the most common vessel types in the Ionian-Adriatic Sea. The model was applied during the Adriatic-Ionian trial period of the international EU ANDROMEDA H2020 project to identify vessels whose behaviour deviates from that expected for their declared type.
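A minimal sketch of such a classifier is shown below; the sequence length, feature set, and layer sizes are assumptions for illustration, not the architecture reported by the authors.

```python
from tensorflow import keras
from tensorflow.keras import layers

N_CLASSES = 9    # vessel classes, as in the abstract
SEQ_LEN = 120    # AIS points per track segment (assumed)
N_FEATS = 4      # e.g. speed over ground, course over ground, delta-lat, delta-lon

model = keras.Sequential([
    keras.Input(shape=(SEQ_LEN, N_FEATS)),
    layers.Masking(mask_value=0.0),   # AIS tracks are irregular; zero-pad and mask
    layers.LSTM(64, return_sequences=True),
    layers.LSTM(64),
    layers.Dense(N_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# model.fit(X_train, y_train, validation_split=0.2, epochs=30)  # with real tracks
```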

Construction Noise Management: Hong Kong Reviews and International Best Practices

Hong Kong is known worldwide for high-density living and the ability to thrive under trying circumstances. The 7.5 million residents of this busy metropolis live primarily in high-rise buildings, which are built and demolished incessantly. Hong Kong residents are therefore affected continuously by numerous construction activities. In 2020, the Hong Kong Environmental Protection Department (EPD) commissioned a feasibility study on the management of construction noise, including noise associated with the renovation of domestic premises. A key component of the study focused on reviewing practices for the management and control of construction noise in metropolises in other parts of the world. To benefit from international best practices, this extensive review aimed to identify possible areas of improvement in Hong Kong. The study first referred to the United Nations “The World’s Cities in 2016” report and examined the top 100 cities therein. The 20 most suitable cities were then chosen for further review. Upon further screening, 12 cities with more relevant management practices were selected for further scrutiny. These 12 cities are: Asia – Tokyo, Seoul, Taipei, Guangzhou, Singapore; Europe – City of Westminster (London), Berlin; North America – Toronto, New York City, San Francisco; Oceania – Sydney, Melbourne. Subsequently, three cities, namely Sydney, the City of Westminster, and New York City, were selected for in-depth review. These three were chosen primarily because of the maturity, success, and effectiveness of their construction noise management and control measures, as well as their similarity to Hong Kong in certain key aspects. One of the more important findings of the review is the importance of an early focus on potential noise issues, with the objective of designing the noise away wherever practicable. The study examined the similar yet different early-focus mechanisms for construction noise in these three cities. This paper describes this landmark, extensive worldwide review of international best practices for managing and controlling construction noise at the source, along the noise transmission path, and at the receiver end. The methodology, approach, and key findings are presented succinctly. By sharing the findings with acoustics professionals worldwide, it is hoped that more advanced and mature construction noise management practices can be developed to attain urban sustainability.

Comparative Analysis of Classical and Parallel Inpainting Algorithms Based on Affine Combinations of Projections on Convex Sets

The paper is a comparative study of two classical variants of parallel projection methods for solving the convex feasibility problem and their equivalents that involve variable weights in the construction of the solutions. We used a graphical representation of these methods for inpainting a convex area of an image in order to investigate their effectiveness in image reconstruction applications. We also presented a numerical analysis of the convergence of these four algorithms in terms of the average number of steps and execution time, in a classical CPU implementation and, alternatively, in a parallel GPU implementation.
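The following toy sketch illustrates the underlying scheme: a simultaneous (parallel) projection iteration for a convex feasibility problem, with either fixed uniform weights or one simple distance-based variable-weight rule. The sets here are hyperplanes for simplicity; the paper's exact algorithms, weight rules, and image-inpainting setup are not reproduced.

```python
import numpy as np

def proj_hyperplane(x, a, b):
    """Orthogonal projection of x onto the hyperplane {z : a.z = b}."""
    return x - (a @ x - b) / (a @ a) * a

# Feasibility problem: find a point in the intersection of m hyperplanes.
rng = np.random.default_rng(0)
m, n = 5, 10
A = rng.normal(size=(m, n))
b = A @ rng.normal(size=n)   # consistent system, so the intersection is non-empty

def parallel_projection(x, steps=200, variable_weights=False):
    for _ in range(steps):
        projs = np.array([proj_hyperplane(x, A[i], b[i]) for i in range(m)])
        if variable_weights:
            # Weights proportional to the distance to each set -- one simple
            # variable-weight rule; the paper's rules may differ.
            d = np.linalg.norm(projs - x, axis=1)
            w = d / d.sum() if d.sum() > 1e-12 else np.full(m, 1.0 / m)
        else:
            w = np.full(m, 1.0 / m)            # classical fixed weights
        x = w @ projs                          # affine combination of projections
    return x

for vw in (False, True):
    x = parallel_projection(np.zeros(n), variable_weights=vw)
    print(f"variable_weights={vw}: residual {np.linalg.norm(A @ x - b):.2e}")
```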

Curvelet Transform Based Two Class Motor Imagery Classification

One of the important components of brain-computer interface (BCI) studies is the classification of motor imagery (MI) signals obtained by electroencephalography (EEG). The major goal is to provide non-muscular communication and control via assistive technologies to people with severe motor disorders so that they can communicate with the outside world. In this study, an EEG signal classification approach based on a multiscale, multiresolution transform method is presented. The proposed approach is used to decompose EEG signals containing motor imagery information (right- and left-hand movement imagery). The decomposition is performed using the curvelet transform, a multiscale and multiresolution analysis method, and the transform output is used as feature data. The obtained feature set is subjected to a feature selection process based on the t-test to retain the most effective features. SVM and k-NN algorithms are employed for classification.
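A minimal sketch of the selection-and-classification stage is shown below, with random stand-ins for the curvelet-coefficient features; the number of retained features and the classifier settings are assumptions, and in practice the selection should be nested inside cross-validation to avoid leakage.

```python
import numpy as np
from scipy.stats import ttest_ind
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

# Stand-ins for curvelet-coefficient feature vectors (one row per trial)
# and binary labels (0 = left hand, 1 = right hand).
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 300))
y = rng.integers(0, 2, 120)

def ttest_select(X, y, k=30):
    """Keep the k features whose class means differ most (two-sample t-test)."""
    t, _ = ttest_ind(X[y == 0], X[y == 1], axis=0)
    return np.argsort(-np.abs(t))[:k]

idx = ttest_select(X, y)
for clf in (SVC(kernel="rbf"), KNeighborsClassifier(n_neighbors=5)):
    scores = cross_val_score(clf, X[:, idx], y, cv=5)
    print(type(clf).__name__, f"{scores.mean():.3f}")
```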

Wave Atom Transform Based Two Class Motor Imagery Classification

Electroencephalography (EEG) investigations of brain-computer interfaces are based on the electrical signals resulting from neural activity in the brain. In this paper, a method is proposed for classifying motor imagery EEG signals. The suggested method classifies EEG signals into two classes using the wave atom transform; the transform coefficients are assessed to create the feature set. Classification is done with SVM and k-NN algorithms, with and without feature selection. For feature selection, t-test approaches are utilized. The approach is tested on BCI Competition III dataset IIIa.

Two Class Motor Imagery Classification via Wave Atom Sub-Bands

The goal of motor imagery brain-computer interface research is to create a link between the central nervous system and a computer or device. The most important signal for brain-computer interfaces is the electroencephalogram. The aim of this research is to explore a set of effective features from EEG signals, separated into frequency bands, using wave atom sub-bands to discriminate right- and left-hand motor imagery signals. Over the transform coefficients, feature vectors are constructed for each frequency range and each transform sub-band, and their classification performances are tested. The method is validated using EEG signals from BCI Competition III dataset IIIa and classifiers such as the support vector machine and k-nearest neighbors.
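The band-splitting step that precedes the transform could look like the sketch below, which filters the trials into the mu and beta ranges commonly used for motor imagery; the band edges and filter order are assumptions, and the wave atom decomposition itself (for which no widely standard Python package exists) is left as a placeholder comment.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 250  # sampling rate of BCI Competition III dataset IIIa [Hz]

# Common motor imagery bands (assumed band edges).
BANDS = {"mu": (8.0, 13.0), "beta": (13.0, 30.0)}

def bandpass(sig, lo, hi, fs=FS, order=4):
    sos = butter(order, [lo, hi], btype="bandpass", fs=fs, output="sos")
    return sosfiltfilt(sos, sig, axis=-1)

# Stand-in for real trials: (n_trials, n_channels, n_samples).
eeg = np.random.default_rng(0).normal(size=(10, 3, 1000))

band_signals = {name: bandpass(eeg, lo, hi) for name, (lo, hi) in BANDS.items()}
# Each band-limited copy would then be decomposed with the wave atom transform,
# and per-sub-band statistics (e.g. mean, energy) collected as feature vectors.
```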

Review and Evaluation of Trending Canonical Correlation Analyses-Based Brain-Computer Interface Methods

The fast development of technology that has advanced neuroscience and human-computer interaction has enabled solutions to various problems and issues of this new era. The Brain-Computer Interface (BCI) has opened the door to several new research areas and has been able to provide solutions to critical and vital issues, such as enabling a paralyzed patient to interact with the outside world, controlling a robot arm, playing VR games with the brain, and driving a wheelchair. This review presents the state-of-the-art methods and improvements of canonical correlation analysis (CCA), a method for BCIs based on steady-state visual evoked potentials (SSVEPs). These methods are used to extract EEG signal features, or, put differently, the features of interest that we are looking for in EEG analyses. Each method, from oldest to newest, is discussed, and their advantages and disadvantages are compared. This provides context and helps researchers understand the most advanced methods available in this field, their pros and cons, and their mathematical representations and usage. This work makes a vital contribution to the existing field of study. It differs from other similar recently published works by providing the following: (1) stating most of the main methods used in this field in a hierarchical way; (2) explaining the pros and cons of each method and their performance; (3) presenting, for each method, the remaining gaps that can improve understanding and open doors to new research or improvements.
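For orientation, the baseline CCA method for SSVEP frequency detection correlates the multichannel EEG with sine-cosine reference templates at each candidate stimulus frequency and picks the frequency with the largest first canonical correlation. The sketch below follows that standard scheme; the sampling rate, stimulus frequencies, and harmonic count are assumed placeholder values.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

FS = 250                                # sampling rate [Hz] (assumed)
STIM_FREQS = [8.0, 10.0, 12.0, 15.0]    # candidate stimulus frequencies (assumed)
N_HARM = 2                              # harmonics per reference template

def reference_signals(f, n_samples, fs=FS, n_harm=N_HARM):
    """Sine/cosine templates at f and its harmonics."""
    t = np.arange(n_samples) / fs
    refs = []
    for h in range(1, n_harm + 1):
        refs += [np.sin(2 * np.pi * h * f * t), np.cos(2 * np.pi * h * f * t)]
    return np.column_stack(refs)

def first_canonical_corr(X, Y):
    u, v = CCA(n_components=1).fit_transform(X, Y)
    return np.corrcoef(u[:, 0], v[:, 0])[0, 1]

def classify_ssvep(eeg):
    """eeg: (n_samples, n_channels). Returns the most likely stimulus frequency."""
    rhos = [first_canonical_corr(eeg, reference_signals(f, len(eeg)))
            for f in STIM_FREQS]
    return STIM_FREQS[int(np.argmax(rhos))]

eeg = np.random.default_rng(0).normal(size=(500, 8))  # stand-in for a real epoch
print(classify_ssvep(eeg))
```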

Graph Codes – 2D Projections of Multimedia Feature Graphs for Fast and Effective Retrieval

Multimedia indexing and retrieval is generally designed and implemented by employing feature graphs. These graphs typically contain a significant number of nodes and edges to reflect the level of detail in feature detection. A higher level of detail increases the effectiveness of the results but also leads to more complex graph structures. However, graph-traversal-based algorithms for similarity are quite inefficient and computationally intensive, especially for large data structures. To deliver fast and effective retrieval, an efficient similarity algorithm, particularly for large graphs, is mandatory. Hence, in this paper, we define a projection of graphs into a 2D space (Graph Codes), as well as the corresponding algorithms for indexing and retrieval. We show that calculations in this space can be performed more efficiently than graph traversals due to a simpler processing model and a high level of parallelisation. In consequence, we show that the effectiveness of retrieval also increases substantially, as Graph Codes facilitate more levels of detail in feature fusion. Thus, Graph Codes provide a significant increase in efficiency and effectiveness (especially for multimedia indexing and retrieval) and can be applied to images, videos, audio, and text information.
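To make the idea concrete, here is a highly simplified illustration (not the authors' actual Graph Code encoding) of projecting a small feature graph onto a matrix over a fixed term vocabulary and comparing two such matrices element-wise instead of by graph traversal; the vocabulary, encoding, and similarity metric are all invented for this sketch.

```python
import numpy as np

VOCAB = ["person", "dog", "ball", "holds", "plays_with"]  # invented term vocabulary
IDX = {t: i for i, t in enumerate(VOCAB)}

def graph_code(nodes, edges):
    """Matrix projection: diagonal cells mark node presence, off-diagonal
    cells encode the relationship type between two nodes."""
    m = np.zeros((len(VOCAB), len(VOCAB)), dtype=int)
    for n in nodes:
        m[IDX[n], IDX[n]] = 1
    for src, rel, dst in edges:
        m[IDX[src], IDX[dst]] = IDX[rel] + 2   # +2 avoids clashing with 0/1
    return m

def similarity(a, b):
    """Fraction of matching non-zero cells -- a purely element-wise comparison
    that needs no traversal and parallelises trivially."""
    mask = (a != 0) | (b != 0)
    return float((a[mask] == b[mask]).mean()) if mask.any() else 1.0

g1 = graph_code(["person", "dog"], [("person", "holds", "dog")])
g2 = graph_code(["person", "ball"], [("person", "plays_with", "ball")])
print(similarity(g1, g2))
```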

Gait Biometric for Person Re-Identification

Biometric identification recognises unique characteristics of a person, such as fingerprints, iris, ear shape, and voice, which require the subject's permission and physical contact. Gait biometrics instead identify a person by their unique gait, extracted as motion features. The main advantage of gait biometrics is the ability to identify a person's gait at a distance, without any physical contact. In this work, gait biometrics are used for person re-identification. A person walking naturally is compared with the same person walking with a bag, a coat, or a case, recorded using long-wave infrared, short-wave infrared, medium-wave infrared, and visible cameras. The videos were recorded in rural and urban environments. The pre-processing pipeline includes human detection using You Only Look Once (YOLO), background subtraction, silhouette extraction, and synthesis of the Gait Entropy Image by averaging the silhouettes. The motion features are extracted from the Gait Entropy Image. The extracted features are dimensionality-reduced by Principal Component Analysis and recognized using different classifiers. The comparative results show that Linear Discriminant Analysis outperforms the other classifiers, with 95.8% for the visible camera in the rural dataset and 94.8% for long-wave infrared in the urban dataset.
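A compact sketch of the core steps is given below: the Gait Entropy Image follows the usual per-pixel Shannon entropy of the averaged silhouettes, while the PCA dimensionality, data shapes, and synthetic stand-in data are assumptions for illustration.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def gait_entropy_image(silhouettes):
    """silhouettes: (n_frames, H, W) binary masks over one gait sequence.
    Averaging gives the per-pixel foreground probability p; the Shannon
    entropy of p highlights the dynamic (moving) body parts."""
    p = np.clip(silhouettes.mean(axis=0), 1e-6, 1 - 1e-6)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

rng = np.random.default_rng(0)
sil = (rng.random((30, 64, 44)) > 0.5).astype(float)  # stand-in silhouettes
geni = gait_entropy_image(sil)

# Stand-ins for flattened GEnIs of many sequences and their person labels.
X = rng.random((60, 64 * 44))
y = rng.integers(0, 5, 60)

feats = PCA(n_components=20).fit_transform(X)   # component count is assumed
clf = LinearDiscriminantAnalysis().fit(feats, y)
print(clf.score(feats, y))
```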

A Real-Time Monitoring System of the Supply Chain Conditions, Products and Means of Transport

Real-time monitoring of supply chain conditions and procedures is a critical element for the optimal coordination and safety of deliveries, as well as for the minimization of delivery time and cost. Real-time monitoring requires IoT data streams related to the conditions of the products and the means of transport (e.g., location, temperature/humidity conditions, kinematic state, ambient light conditions, etc.). These streams are generated by battery-powered IoT tracking devices, equipped with appropriate sensors, and are transmitted to a cloud-based back-end system. Proper handling and processing of the IoT data streams, using predictive and artificial intelligence algorithms, can provide significant and useful results, which can be exploited by supply chain stakeholders to enhance their financial benefits, as well as the efficiency, security, transparency, coordination, and sustainability of supply chain procedures. The technology, features, and characteristics of a complete, proprietary system, including the hardware, firmware, and software tools developed in the context of a co-funded R&D program, are addressed and presented in this paper.
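As a purely illustrative sketch of such a data stream (the paper's proprietary protocol and endpoints are not public), a tracking device's telemetry message might be serialized and pushed to a cloud back-end roughly as follows; every field name and the endpoint URL are hypothetical.

```python
import json
import time
import requests

# Hypothetical telemetry payload for one battery-powered tracking device.
reading = {
    "device_id": "tracker-042",
    "ts": int(time.time()),
    "location": {"lat": 37.98, "lon": 23.73},
    "temperature_c": 4.2,            # cold-chain product condition
    "humidity_pct": 61.0,
    "ambient_light_lux": 0.0,        # darkness suggests the container is closed
    "accel_g": [0.01, -0.02, 0.98],  # kinematic state (shock/tilt detection)
}

payload = json.dumps(reading)
# Hypothetical cloud endpoint; uncomment against a real back-end.
# requests.post("https://cloud.example.com/api/telemetry", data=payload,
#               headers={"Content-Type": "application/json"}, timeout=10)
print(payload)
```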

Performance Evaluation of a Millimeter-Wave Phased Array Antenna Using Circularly Polarized Elements

This paper is focused on the design of an mm-wave phased array. To date, linear polarization has been adopted in the reported designs of phased arrays. However, linear polarization faces several well-known challenges. As such, an advanced design for phased array antennas is required that offers circularly polarized (CP) radiation. A feasible solution for achieving CP phased array antennas is proposed using open circular loop antennas. To this end, a 3-element circular loop phased array antenna is designed to operate at 28 GHz. In addition, the ability of the array to steer the direction of the main lobe is investigated. The results show that the highest achievable field of view (FOV) is 100°, i.e., 50° to the left and 50° to the right of broadside. The results are achieved with a CP bandwidth of 15%. Furthermore, the results demonstrate that a high broadside gain of circa 11 dBi can be achieved for the steered beam. In addition, a radiation efficiency of 97% can be achieved with the proposed design.
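Beam steering of this kind can be reasoned about with the standard uniform-linear-array factor, sketched below for a 3-element array at 28 GHz; the half-wavelength spacing is an assumption, and the array factor ignores the element pattern and mutual coupling that a full design would include.

```python
import numpy as np

C = 3e8
F = 28e9               # operating frequency [Hz]
LAM = C / F            # wavelength, ~10.7 mm
N = 3                  # number of elements, as in the abstract
D = 0.5 * LAM          # element spacing (assumed half-wavelength)
K = 2 * np.pi / LAM

def array_factor_db(theta_deg, steer_deg):
    """Normalized |AF| in dB for a uniform linear array steered to steer_deg."""
    theta = np.radians(np.asarray(theta_deg))
    beta = -K * D * np.sin(np.radians(steer_deg))   # progressive phase shift
    n = np.arange(N)[:, None]
    af = np.exp(1j * n * (K * D * np.sin(theta) + beta)).sum(axis=0)
    return 20 * np.log10(np.abs(af) / N + 1e-12)

theta = np.linspace(-90, 90, 721)
for steer in (-50, 0, 50):   # edges and center of the reported 100-degree FOV
    peak = theta[np.argmax(array_factor_db(theta, steer))]
    print(f"steered to {steer:+3d} deg -> main lobe at {peak:+6.1f} deg")
```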

Smartphone-Based Human Activity Recognition by Machine Learning Methods

As smartphones continually improve, their software and hardware become more capable, so smartphone-based human activity recognition can be made more refined, complex, and detailed. In this context, we analyzed a set of experimental data obtained by observing and measuring 30 volunteers performing six activities of daily living (ADL). Due to the large feature space, in particular a 561-feature vector of time- and frequency-domain variables, cleaning these features and training a proper model becomes extremely challenging. After a series of feature selection steps and parameter adjustments, a well-performing SVM classifier was trained.
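One plausible form of such a pipeline, here with univariate feature selection and a grid search over SVM parameters, is sketched below using random stand-ins for the 561-feature vectors; the selection method, parameter grid, and data split are assumptions rather than the authors' exact choices.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

# Stand-ins for the 561-feature vectors and the six ADL labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 561))
y = rng.integers(0, 6, 300)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

pipe = Pipeline([
    ("select", SelectKBest(f_classif)),  # univariate filter over the 561 features
    ("svm", SVC()),
])
grid = GridSearchCV(pipe, {
    "select__k": [100, 200, 400],
    "svm__C": [1, 10, 100],
    "svm__gamma": ["scale", 0.01],
}, cv=5)
grid.fit(X_tr, y_tr)
print(grid.best_params_, grid.score(X_te, y_te))
```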

A Comprehensive Survey on Machine Learning Techniques and User Authentication Approaches for Credit Card Fraud Detection

With the increase in credit card usage, the volume of credit card misuse has also significantly increased, which may cause appreciable financial losses for both credit card holders and the financial organizations issuing credit cards. As a result, financial organizations are working hard on developing and deploying credit card fraud detection methods, in order to adapt to ever-evolving, increasingly sophisticated defrauding strategies and to identify illicit transactions as quickly as possible to protect themselves and their customers. Compounding the complex nature of such adverse strategies, credit card fraudulent activities are rare events compared to the number of legitimate transactions. Hence, the challenge of developing fraud detection methods that are accurate and efficient is substantially intensified and, as a consequence, credit card fraud detection has lately become a very active area of research. In this work, we provide a survey of current techniques most relevant to the problem of credit card fraud detection. We carry out our survey in two main parts. In the first part, we focus on studies utilizing classical machine learning models, which mostly employ traditional transactional features to make fraud predictions. These models typically rely on some static characteristics, such as what the user knows (knowledge-based methods) or what he/she has access to (object-based methods). In the second part of our survey, we review more advanced techniques of user authentication, which use behavioral biometrics to identify an individual based on his/her unique behavior while interacting with his/her electronic devices. These approaches rely on how people behave (instead of what they know or have), which cannot be easily forged. By providing an overview of current approaches and the results reported in the literature, this survey aims to drive the future research agenda for the community, in order to develop more accurate, reliable, and scalable models of credit card fraud detection.

Automatic Classification of the Stand-to-Sit Phase in the TUG Test Using Machine Learning

Over the past several years, researchers have shown great interest in assessing the mobility of elderly people to measure their functional status. Usually, such an assessment is done by conducting tests, such as the Timed Up and Go (TUG) test, that require the subject to walk a certain distance, turn around, and finally sit back down. Because such tests are administered only intermittently, this study aims to provide an at-home monitoring system to assess the patient’s status continuously. Thus, we propose a technique to automatically detect when a subject sits down while walking at home. In this study, we utilized a Doppler radar system to capture the motion of the subjects. More than 20 features were extracted from the radar signals, of which 11 were chosen based on their Intraclass Correlation Coefficient (ICC > 0.75). Subsequently, a sequential floating forward selection (SFFS) wrapper was applied to further narrow down the final feature vector. Finally, five features were fed to a Linear Discriminant Analysis classifier, and an accuracy of 93.75% was achieved, as well as a precision of 95% and a recall of 90%.
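The two-stage feature selection and the final classifier could be sketched as follows, using random stand-ins for the 11 ICC-screened radar features (mlxtend's floating-forward selector stands in for the SFFS wrapper; data shapes and scoring are assumptions).

```python
import numpy as np
from mlxtend.feature_selection import SequentialFeatureSelector as SFS
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# Stand-ins for the 11 features already screened by ICC > 0.75,
# with binary labels (1 = stand-to-sit phase, 0 = other motion).
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 11))
y = rng.integers(0, 2, 120)

lda = LinearDiscriminantAnalysis()
sffs = SFS(lda, k_features=5, forward=True, floating=True,  # floating=True -> SFFS
           scoring="accuracy", cv=5).fit(X, y)

X_sel = sffs.transform(X)
print("selected feature indices:", sffs.k_feature_idx_)
print("cv accuracy:", cross_val_score(lda, X_sel, y, cv=5).mean())
```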