Neural Network Supervisory Proportional-Integral-Derivative Control of the Pressurized Water Reactor Core Power Load Following Operation

This work presents a particle swarm optimization trained neural network (PSO-NN) supervisory proportional-integral-derivative (PID) control method to regulate the pressurized water reactor (PWR) core power for safe operation. The proposed control approach is implemented on the transfer function of the PWR core, which is computed from the state-space model. The PWR core state-space model is derived from the neutronics, thermal-hydraulics, and reactivity models using perturbation around the equilibrium value. The proposed control approach computes the control rod speed to maneuver the core power to track the reference in a closed-loop scheme. The particle swarm optimization (PSO) algorithm is used to train the neural network (NN) and to tune the PID controller simultaneously. The controller performance is examined using the integral absolute error, integral time absolute error, integral square error, and integral time square error functions, and the stability of the system is analyzed using a Bode diagram. The simulation results indicate that the controller controls and tracks the load power effectively and smoothly compared to the PSO-PID control technique. This study can benefit the design of supervisory controllers for control applications in nuclear engineering research.
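For reference, the four performance indices used above are conventionally defined in terms of the tracking error e(t) (the difference between the reference and the measured core power) over the simulation horizon T; the standard textbook formulation is given below and may differ in detail from the exact weighting used in this work:

```latex
\mathrm{IAE}  = \int_{0}^{T} \lvert e(t) \rvert \, dt, \qquad
\mathrm{ITAE} = \int_{0}^{T} t \, \lvert e(t) \rvert \, dt, \qquad
\mathrm{ISE}  = \int_{0}^{T} e^{2}(t) \, dt, \qquad
\mathrm{ITSE} = \int_{0}^{T} t \, e^{2}(t) \, dt .
```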

Educational Experiences in Engineering in the COVID-19 Era and Their Comparative Analysis: Spain, March-June 2020

In March 2020, an unexpected health crisis caused by COVID-19 was declared in Spain. All of a sudden, all degree programs, classes, evaluation tests, and projects had to be transformed into online activities. However, the chaotic situation generated by such a complex operation, executed without any well-established procedure, led to very different experiences and, ultimately, results. In this paper, we describe three experiences at two different universities in Madrid. On the one hand, the Technical University of Madrid, a public university with little experience in online education, was considered. On the other hand, Alfonso X el Sabio University, a private university with more than five years of experience in online teaching, was involved. All analyzed subjects were related to computer engineering. Professors and students answered a survey, and personal interviews were also carried out. In addition, the professors’ workload and the students’ academic results were compared. From the comparative analysis of all these experiences, we extract the most successful strategies, methodologies, and activities. The recommendations in this paper will be useful for courses in the coming months, while the health situation continues to affect educational organizations, and will also serve as input for the upcoming digitalization process of higher education.

Automatic Classification of the Stand-to-Sit Phase in the TUG Test Using Machine Learning

Over the past several years, researchers have shown great interest in assessing the mobility of elderly people to measure their functional status. Usually, such an assessment is done by conducting tests that require the subject to walk a certain distance, turn around, and finally sit back down. This study instead aims to provide an at-home monitoring system that assesses the patient’s status continuously. Thus, we propose a technique to automatically detect when a subject sits down while walking at home. In this study, we utilized a Doppler radar system to capture the motion of the subjects. More than 20 features were extracted from the radar signals, out of which 11 were chosen based on their intraclass correlation coefficient (ICC > 0.75). The sequential floating forward selection wrapper was then applied to further narrow down the final feature vector. Finally, five features were introduced to the Linear Discriminant Analysis classifier, and an accuracy of 93.75% was achieved, with a precision of 95% and a recall of 90%, respectively.
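As an illustration of the pipeline described above (a wrapper feature selector feeding a Linear Discriminant Analysis classifier), the following scikit-learn sketch is one plausible arrangement; the feature matrix and labels are random placeholders, and sklearn's SequentialFeatureSelector performs plain forward selection rather than the floating variant used in the study.

```python
# Hypothetical sketch: forward feature selection + LDA, mirroring the described pipeline.
# X stands in for the 11 ICC-screened radar features; y for the stand-to-sit labels.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 11))        # placeholder feature matrix (11 candidate features)
y = rng.integers(0, 2, size=200)      # placeholder labels: 1 = stand-to-sit, 0 = other

lda = LinearDiscriminantAnalysis()
# Keep 5 features, matching the final feature vector size reported in the abstract.
selector = SequentialFeatureSelector(lda, n_features_to_select=5, direction="forward", cv=5)
model = make_pipeline(selector, lda)

scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
print("cross-validated accuracy: %.3f +/- %.3f" % (scores.mean(), scores.std()))
```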

Sustainable Engineering: Synergy of BIM and Environmental Assessment Tools in the Hong Kong Construction Industry

The construction industry plays an important role in environmental and carbon emissions, as it consumes a huge amount of natural resources and energy. Sustainable engineering involves planning, design, procurement, construction, and delivery carried out so that the whole building and construction process can be managed effectively and sustainably to make efficient use of natural resources. It also involves implementing sustainable technology development and innovation, adopting advanced construction processes, and facilitating facilities management so that energy and waste control can be implemented more accurately and effectively. Research on the relationship between BIM and environmental assessment tools lacks a clear discussion. In this paper, we focus on the synergy of BIM technology and sustainable engineering in the AEC industry, outline the key factors that enhance the use of advanced innovation, technologies, and methods, and define the roles of stakeholders in achieving zero carbon emissions toward the Paris Agreement goal of limiting global warming to well below 2°C above pre-industrial levels. A case study of the adoption of Building Information Modeling (BIM) and environmental assessment tools in Hong Kong is discussed in this paper.

Adaptive Kalman Filter for Noise Estimation and Identification with Bayesian Approach

The Bayesian approach can be used for parameter identification and extraction in state-space models, and its ability to analyze sequences of data in dynamical systems has been demonstrated in the literature. In this paper, an adaptive Kalman filter with a Bayesian approach for identifying the variance of the measurement noise is developed. It is then applied to estimate the dynamical state and measurement data in a discrete linear dynamical system. At each time step, the algorithm estimates the measurement noise variance and the state of the system with a Kalman filter. An approximation is then designed at each step separately, and consequently the sufficient statistics of the state and noise variances are computed with a fixed-point iteration of the adaptive Kalman filter. Several simulations are carried out to show the influence of the measurement noise variance on the algorithm. First, the effect of the noise variance and its distribution on detection and identification performance is simulated in a Kalman filter without the Bayesian formulation. Then, the adaptive Kalman filter with the ability to track the noise variance in the measurement data is simulated. In these simulations, the noise distribution of the measurement data is estimated at each step, and the true variance of the data recovered by the algorithm is compared across different scenarios. Afterwards, a typical nonlinear state-space model with induced measurement noise is simulated with this approach. Finally, the performance and the important limitations of this algorithm in these simulations are explained.
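To make the recursion concrete, the numpy sketch below implements a scalar Kalman filter in which the measurement noise variance is re-estimated at each step through a short fixed-point iteration on the residual statistics; this is only an illustrative approximation of the Bayesian scheme described above, with arbitrarily chosen model parameters.

```python
# Illustrative sketch (not the authors' exact algorithm): scalar Kalman filter
# with a fixed-point re-estimation of the measurement noise variance R.
import numpy as np

rng = np.random.default_rng(1)
n, a, q, true_r = 200, 0.95, 0.05, 0.5   # steps, state transition, process var, true meas. var
x_true = np.zeros(n)
z = np.zeros(n)
for k in range(1, n):
    x_true[k] = a * x_true[k - 1] + rng.normal(scale=np.sqrt(q))
    z[k] = x_true[k] + rng.normal(scale=np.sqrt(true_r))

x_est, p_est, r_est = 0.0, 1.0, 1.0      # initial state, covariance, noise-variance guess
for k in range(1, n):
    # Prediction step
    x_pred = a * x_est
    p_pred = a * p_est * a + q
    # Fixed-point iteration: alternate between the state update and the R estimate
    for _ in range(3):
        s = p_pred + r_est               # innovation covariance
        kgain = p_pred / s               # Kalman gain
        x_est = x_pred + kgain * (z[k] - x_pred)
        p_est = (1.0 - kgain) * p_pred
        # Moment-matched, exponentially smoothed update of R from the residual
        r_est = 0.95 * r_est + 0.05 * ((z[k] - x_est) ** 2 + p_est)

print("estimated measurement noise variance: %.3f (true %.3f)" % (r_est, true_r))
```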

Design and Construction of an Impulse Current Generator for Lightning Strike Experiments

There has been a rising trend in using impulse current generators to investigate the lightning strike protection of materials, including aluminum and composites, in structures such as wind turbine blades and aircraft bodies. The focus of this research is to present an impulse current generator built in the High Voltage Lab at Mississippi State University. The generator is capable of producing components A and D of natural lightning discharges in accordance with the Society of Automotive Engineers (SAE) standard, which is widely used in the aerospace industry. The generator can supply lightning impulse energy up to 400 kJ and produce impulse currents with magnitudes greater than 200 kA. The electrical circuit and physical components of the improved impulse current generator are described, and several lightning strike waveforms with different amplitudes are presented for comparison with the standard waveform. The results of this study contribute to a fundamental understanding of the functionality of impulse current generators.
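As background, impulse current components of this kind are commonly approximated analytically by a double-exponential waveform; the expression below is the generic textbook form (the actual parameter values for SAE components A and D are specified in the standard, not here):

```latex
i(t) = I_{0}\left(e^{-\alpha t} - e^{-\beta t}\right), \qquad \beta > \alpha > 0,
```

where I_0 scales the peak amplitude and the constants alpha and beta set the decay and front times, respectively.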

Emotion Detection in Twitter Messages Using Combination of Long Short-Term Memory and Convolutional Deep Neural Networks

One of the most significant issues to receive attention in recent years is the recognition of sentiments and emotions in social media texts. Sentiment and emotion analysis is intended to recognize conceptual information such as the opinions, feelings, attitudes, and emotions of people towards products, services, organizations, people, topics, events, and features in written text, which indicates the breadth of the problem space. In the real world, businesses and organizations are always looking for tools to gather people's opinions, emotions, and attitudes about their products, services, or related events. This article uses the Twitter social network, one of the most popular social networks with about 420 million active users, to extract data. Through this social network, users can share their information and opinions about personal issues, policies, products, events, etc., and the availability of its data makes it well suited to classifying emotional states. In this study, supervised learning and deep neural network algorithms are used to classify the emotional states of Twitter users. The use of deep learning methods to increase the learning capacity of the model is an advantage given the large amount of available data. Tweets collected on various topics are classified into four classes using a combination of two bidirectional long short-term memory (BiLSTM) networks and a convolutional network. The results of this study, with an average accuracy of 93%, show that the proposed framework performs well and improves accuracy compared to previous work.
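The following PyTorch sketch shows one way a classifier combining stacked bidirectional LSTMs with a convolutional layer for four emotion classes could be assembled; the vocabulary size, layer widths, and pooling choices are illustrative assumptions rather than the authors' exact architecture.

```python
# Minimal sketch of a BiLSTM + CNN emotion classifier (illustrative layer sizes).
import torch
import torch.nn as nn

class BiLSTMCNN(nn.Module):
    def __init__(self, vocab_size=20000, embed_dim=128, hidden=64, n_classes=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        # Two stacked bidirectional LSTM layers, as described in the abstract.
        self.bilstm = nn.LSTM(embed_dim, hidden, num_layers=2,
                              bidirectional=True, batch_first=True)
        self.conv = nn.Conv1d(2 * hidden, 128, kernel_size=3, padding=1)
        self.pool = nn.AdaptiveMaxPool1d(1)
        self.fc = nn.Linear(128, n_classes)

    def forward(self, tokens):                    # tokens: (batch, seq_len) int64
        x, _ = self.bilstm(self.embed(tokens))    # (batch, seq_len, 2*hidden)
        x = self.conv(x.transpose(1, 2)).relu()   # (batch, 128, seq_len)
        x = self.pool(x).squeeze(-1)              # (batch, 128)
        return self.fc(x)                         # (batch, n_classes) logits

model = BiLSTMCNN()
logits = model(torch.randint(1, 20000, (8, 40)))  # dummy batch of 8 tweets, 40 tokens each
print(logits.shape)                               # torch.Size([8, 4])
```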

Cirrhosis Mortality Prediction as Classification Using Frequent Subgraph Mining

In this work, we use machine learning and data analysis techniques to predict the one-year mortality of cirrhotic patients. Data from 2,322 patients with liver cirrhosis were collected at a single medical center. Different machine learning models are applied to predict one-year mortality. A comprehensive feature space including demographic information, comorbidities, clinical procedures, and laboratory tests is analyzed. A temporal pattern mining technique called Frequent Subgraph Mining (FSM) is used. The Model for End-Stage Liver Disease (MELD) prediction of mortality is used as a comparator. All of our models statistically significantly outperform the MELD-score model and show an average 10% improvement in the area under the curve (AUC). The FSM technique by itself does not improve the model significantly, but FSM together with ensemble learning further improves the model performance. With the abundance of data available in healthcare through electronic health records (EHR), existing predictive models can be refined to identify and treat patients at risk of higher mortality. However, due to the sparsity of the temporal information needed by FSM, the FSM model does not yield significant improvements. Our work applies modern machine learning algorithms and data analysis methods to predicting the one-year mortality of cirrhotic patients and builds a model that predicts one-year mortality significantly more accurately than the MELD score. We have also tested the potential of FSM and provided a new perspective on the importance of clinical features.
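The snippet below illustrates, on synthetic data, the kind of comparison the abstract describes: an ensemble classifier's area under the ROC curve measured against a single-score baseline standing in for MELD; the data, features, and models are placeholders, not the study's cohort or pipeline.

```python
# Hypothetical sketch: comparing an ensemble model's AUC against a single-score baseline.
# Synthetic data stand in for EHR features; 'meld' plays the role of the MELD score.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2322, n_features=30, n_informative=5,
                           weights=[0.8], shuffle=False, random_state=0)
# Crude single-score baseline (placeholder): one informative feature plus noise.
meld = X[:, 0] + np.random.default_rng(0).normal(scale=2.0, size=len(y))

X_tr, X_te, y_tr, y_te, _, meld_te = train_test_split(X, y, meld, test_size=0.3, random_state=0)

ens = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
auc_ens = roc_auc_score(y_te, ens.predict_proba(X_te)[:, 1])
auc_meld = roc_auc_score(y_te, meld_te)

print("ensemble AUC: %.3f, baseline score AUC: %.3f" % (auc_ens, auc_meld))
```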

The Use of Knowledge Management Systems and ICT Service Desk Management to Minimize the Digital Divide Experienced in the Museum Sector

Since the introduction of ServiceNow, the UK Science Museum Group's (SMG) ICT service desk portal, there has not been an analysis of the tools available to SMG staff for just-in-time knowledge acquisition (knowledge management systems) and for reporting ICT incidents with a focus on one aspect of professional identity, namely gender. It is therefore important for SMG to investigate the apparent disparities so that solutions can be derived to minimize this digital divide, if one exists. This study is conducted in the milieu of the UK museums, galleries, arts, academic, charitable, and cultural heritage sector. It is acknowledged at SMG that there are challenges with keeping up with an ever-changing digital landscape. This entails the rapid upskilling of staff and the development of an infrastructure that supports just-in-time technological knowledge acquisition and the reporting of technology-related issues. The problem was addressed by analysing ServiceNow ICT incident reports and knowledge article reports over a six-month period from February to July. This study found a statistically significant relationship between gender and reporting an ICT incident. There is also a significant relationship between gender and the priority level of ICT incidents. Interestingly, there is no statistically significant relationship between gender and reading knowledge articles. Additionally, there is no statistically significant relationship between gender and reporting an ICT incident related to the knowledge article that was read by staff. The knowledge acquired from this study is useful to service desk management practice, as it will help to inform the creation of future knowledge articles and ICT incident reporting processes.
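The associations reported above are the kind that a chi-square test of independence on a gender-by-outcome contingency table would assess; the sketch below uses invented counts purely for illustration, not SMG data.

```python
# Hypothetical sketch: chi-square test of independence between gender and whether
# an ICT incident was reported. The counts are invented for illustration only.
from scipy.stats import chi2_contingency

#               reported, not reported
contingency = [[120, 380],   # female staff (placeholder counts)
               [200, 300]]   # male staff   (placeholder counts)

chi2, p_value, dof, expected = chi2_contingency(contingency)
print("chi2 = %.2f, p = %.4f (significant at 0.05: %s)" % (chi2, p_value, p_value < 0.05))
```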

Comparison of Deep Convolutional Neural Networks Models for Plant Disease Identification

Identification of plant diseases has been performed using machine learning and deep learning models on datasets containing images of healthy and diseased plant leaves. The current study evaluates several deep learning models based on convolutional neural network (CNN) architectures for the identification of plant diseases. For this purpose, the publicly available New Plant Diseases Dataset, an augmented version of the PlantVillage dataset available on the Kaggle platform and containing 87,900 images, has been used. The dataset contains images of 26 diseases of 14 different plants and images of 12 healthy plants. The CNN models selected for the study presented in this paper are AlexNet, ZFNet, VGGNet (four models), GoogLeNet, and ResNet (three models). The selected models are trained using PyTorch, an open-source machine learning library, on Google Colaboratory. A comparative study has been carried out to analyze the accuracy achieved by these models. The highest test accuracy and F1-score of 99.59% and 0.996, respectively, were achieved using GoogLeNet with a mini-batch momentum-based gradient descent learning algorithm.
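A minimal sketch of the training setup described above, using torchvision's GoogLeNet implementation and mini-batch SGD with momentum, might look as follows; the dataset path, hyperparameters, and epoch count are illustrative assumptions, and a recent torchvision release is assumed.

```python
# Sketch of the described training setup: torchvision GoogLeNet + mini-batch SGD with momentum.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
tfm = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
train_set = datasets.ImageFolder("new-plant-diseases-dataset/train", transform=tfm)  # hypothetical path
loader = DataLoader(train_set, batch_size=64, shuffle=True, num_workers=4)

model = models.googlenet(weights=None, aux_logits=False, init_weights=True,
                         num_classes=len(train_set.classes)).to(device)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)  # mini-batch SGD with momentum

model.train()
for epoch in range(10):
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: last batch loss {loss.item():.4f}")
```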

The Role of Synthetic Data in Aerial Object Detection

The purpose of this study is to explore the characteristics of developing a machine learning application using synthetic data. The study is structured around developing the application with the goal of deploying the computer vision model. The findings discuss the realities of attempting to develop a computer vision model for a practical purpose and detail the processes, tools, and techniques that were used to meet accuracy requirements. The research reveals that synthetic data represent another variable that can be adjusted to improve the performance of a computer vision model. Further, a suite of tools and tuning recommendations is provided.

A Commercial Building Plug Load Management System That Uses Internet of Things Technology to Automatically Identify Plugged-In Devices and Their Locations

Plug and process loads (PPLs) account for a large portion of U.S. commercial building energy use. There is significant potential to reduce whole-building consumption by targeting PPLs for energy savings measures or implementing some form of plug load management (PLM). Despite this potential, there has yet to be a widely adopted commercial PLM technology. This paper describes the Automatic Type and Location Identification System (ATLIS), a PLM system framework with automatic and dynamic load detection (ADLD). ADLD gives PLM systems the ability to automatically identify devices as they are plugged into the outlets of a building. The ATLIS framework takes advantage of smart, connected devices to identify device locations in a building, meter and control their power, and communicate this information to a central database. ATLIS includes five primary capabilities: location identification, communication, control, energy metering, and data storage. A laboratory proof of concept (PoC) demonstrated all but the energy metering capability, and these capabilities were validated using a series of system tests. The PoC was able to identify when a device was plugged into an outlet and the location of the device in the building. When a device was moved, the PoC’s dashboard and database were automatically updated with the new location. The PoC applied controls to devices from the system dashboard so that devices maintained correct schedules regardless of where they were plugged in within the building. ATLIS’s primary technology application is improved PLM, but other applications include asset management, energy audits, and interoperability for grid-interactive efficient buildings. An ATLIS-based system could also be used to direct power to critical devices, such as ventilators, during a brownout or blackout. Such a framework is an opportunity to make PLM more widespread and reduce the amount of energy consumed by PPLs in current and future commercial buildings.

An Overview of Technology Availability to Support Remote Decentralized Clinical Trials

Developing new medicines and health solutions and improving patient health currently rely on the successful execution of clinical trials, which generate relevant safety and efficacy data. Recruitment and retention of participants are among the most challenging aspects of protocol adherence and are critical to a trial's success. The main barriers include: i) lack of awareness of clinical trials; ii) long distance from the clinical site; iii) the burden on participants, including the duration and number of clinical visits; and iv) high dropout rates. Most of these aspects could be addressed with a new paradigm, namely Remote Decentralized Clinical Trials (RDCTs). Furthermore, the COVID-19 pandemic has highlighted additional advantages and challenges for RDCTs in practice, such as allowing participants to join trials from home without depending on site visits. Nevertheless, RDCTs should follow the process and quality assurance of conventional clinical trials, which involve several processes. For each part of the trial, the building blocks, existing software and technologies were assessed through a systematic search. The technology needed to perform RDCTs is widely available and validated but is still segmented and developed in silos, as different software solutions address different parts of the trial and at various levels. The current paper analyzes the availability of technology to perform RDCTs, identifies gaps, and provides an overview of the basic building blocks and functionalities that need to be covered to support the described processes.

Performance Evaluation of Minimum Quantity Lubrication on EN3 Mild Steel Turning

Lubrication, cooling, and chip removal are the desired functions of any cutting fluid. Conventional (flood) lubrication requires a high volume flow rate, and the associated cost is high. In addition, flood lubrication poses health risks to the machine operator. To avoid these consequences, dry machining and minimum quantity lubrication (MQL) are two alternatives. Dry machining is not a well-suited alternative, as it can generate greater heat and a poorer surface finish. Here, turning work is carried out on a lathe using EN3 mild steel. Variable cutting speeds and depths of cut are applied, and the corresponding temperatures and surface roughness values are recorded. The experimental results are analyzed with Minitab software; conclusions are drawn from regression analysis, main effects plots, and interaction plots using ANOVA. There is a 95.83% reduction in the use of cutting fluid. MQL gives a 9.88% reduction in tool temperature, which will improve tool life. MQL produced a 17.64% improvement in surface finish. MQL appears to be an economical and environmentally compatible lubrication technique for sustainable manufacturing.
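The authors perform the analysis in Minitab; a conceptually similar two-factor ANOVA can be sketched in Python with statsmodels as below, using invented placeholder measurements rather than the experimental data.

```python
# Illustrative sketch of the type of analysis described above: a two-factor ANOVA of
# tool temperature versus cutting speed and depth of cut. Values are placeholders.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

df = pd.DataFrame({
    "speed": [440, 440, 710, 710, 1000, 1000, 440, 710, 1000, 440, 710, 1000],  # rpm (placeholder)
    "doc":   [0.5, 1.0, 0.5, 1.0, 0.5,  1.0,  0.5, 1.0, 0.5,  1.0, 0.5, 1.0],   # mm  (placeholder)
    "temp":  [31,  34,  35,  39,  40,   45,   32,  38,  41,   35,  36,  44],    # °C  (placeholder)
})

model = ols("temp ~ C(speed) + C(doc) + C(speed):C(doc)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))   # main effects and interaction table
```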

Energy Management System with Temperature Rise Prevention on Hybrid Ships

Marine shipping has become one of the major worldwide contributors to pollution and greenhouse gas emissions. Hybrid ship technology based on multiple energy sources has attracted considerable research interest as a way to reduce ship emissions and cut fuel expenses. A mismatch between the power generated and the demand load during transient behavior on ships under severe climate conditions can lead to a blackout. Thus, an efficient energy management system (EMS) is mandatory for achieving higher system efficiency, and enhancing the lifetime of the onboard storage systems is another salient EMS objective. Regarding energy storage system conditions, both the battery state of charge (SOC) and temperature are important parameters for preventing any malfunction of the storage system that would eventually degrade the whole system. In this paper, a fuzzy logic control model that determines the current-sharing ratio between two battery packs is proposed. The overall aim is to control the charging/discharging current while including both the battery SOC and temperature in the energy management system. The full designs of the proposed controllers are described and simulated using MATLAB. The results demonstrate the success of the proposed controller in stabilizing the system voltage during both loading and unloading while keeping the energy storage system in a healthy condition.
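As a minimal illustration of the control idea (not the authors' MATLAB design), the sketch below fuzzifies battery SOC and temperature and applies a small rule base to decide what fraction of the demand current a given battery pack should carry; the membership functions and rules are assumptions chosen for clarity.

```python
# Minimal illustrative fuzzy-logic sketch: SOC and temperature are fuzzified, and a
# small rule base yields the fraction of the demand current assigned to this pack.
def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def current_share(soc, temp_c):
    # Fuzzify the two inputs (illustrative membership functions).
    soc_low, soc_high = tri(soc, 0, 20, 50), tri(soc, 50, 80, 100)
    temp_ok, temp_high = tri(temp_c, 0, 25, 45), tri(temp_c, 40, 60, 80)
    # Rule base: high SOC and moderate temperature -> large share,
    #            low SOC or high temperature       -> small share.
    w_large = min(soc_high, temp_ok)
    w_small = max(soc_low, temp_high)
    if w_large + w_small == 0:
        return 0.5
    # Weighted-average (Sugeno-style) defuzzification with singleton outputs.
    return (w_large * 0.9 + w_small * 0.1) / (w_large + w_small)

print(current_share(soc=75, temp_c=30))  # healthy pack takes most of the load
print(current_share(soc=30, temp_c=55))  # stressed pack is relieved
```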

Technological Applications in Automobile Manufacturing Sector: A Case Study Analysis

The research focuses on the applicable technologies in the automobile industry and their effects on the productivity and annual revenue of the industry. A study has been conducted on six major automobile manufacturing companies, represented in this research as M1, M2, M3, M4, M5, and M6. The results indicate that M1, which is a pioneer in technological applications, remains the market leader, followed by M5 and M2 in the second and third positions, respectively. M3, M6, and M4 are the followers and occupy the next positions. It has also been observed that M1 and M2 have entered into an agreement to share basic structural technologies, and they maintain long-term, trusted relationships with their suppliers through the Keiretsu system. With technological giants such as Apple, Microsoft, Uber, and Google entering the automobile industry in recent years, an upward trend is expected in the future market, with self-driving cars set to dominate the automobile sector. To keep up with this trend, it is essential for automobile manufacturers to understand the importance of developing technological capabilities and skills to remain competitive in the marketplace.

Basic Calibration and Normalization Techniques for Time Domain Reflectometry Measurements

The study of dielectric properties in a binary mixture of liquids is very useful for understanding the liquid structure, molecular interactions, dynamics, and kinematics of the mixture. Time-domain reflectometry (TDR) is a powerful tool for studying the cooperation and molecular dynamics of H-bonded systems. Here we discuss the basic calibration and normalization procedures for TDR measurements. Our aim is to explain the different types of errors that occur during TDR measurements and how to minimize them.
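As background for the calibration discussion, the raw quantity observed in a TDR measurement is the reflection from the sample-terminated line; for a load of impedance Z_L on a line of characteristic impedance Z_0, the reflection coefficient follows the standard transmission-line relation below (the full extraction of the complex permittivity from the recorded pulses involves further steps not shown here):

```latex
\rho = \frac{Z_{L} - Z_{0}}{Z_{L} + Z_{0}} .
```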

An E-Maintenance IoT Sensor Node Designed for Fleets of Diverse Heavy-Duty Vehicles

E-maintenance is a relatively recent concept, generally referring to maintenance management by monitoring assets over the Internet. One of the key links in the chain of an e-maintenance system is data acquisition and transmission. Specifically for a fleet of heavy-duty vehicles, where the main challenge is the diversity of the vehicles and of the vehicle-embedded self-diagnostic/reporting technologies, the design of the data acquisition and transmission unit is a demanding task. This is clear if one takes into account that a heavy-vehicle fleet may range from vehicles with only a limited number of analog sensors monitored by dashboard light indicators and gauges to vehicles with a plethora of sensors monitored by a vehicle computer producing digital reports. The present work proposes an adaptable Internet of Things (IoT) sensor node that is capable of addressing this challenge. The proposed sensor node architecture is based on the increasingly popular single-board computer with expansion boards approach. In the proposed solution, the expansion boards undertake the tasks of position identification, cellular connectivity, connectivity to the vehicle computer, and connectivity to analog and digital sensors by means of a specially designed expansion board. The latter offers a number of adaptability features to cope with the diverse sensor types employed in different vehicles. In standard mode, the IoT sensor node communicates with the data center through the cellular network, transmitting all digital/digitized sensor data, the IoT device identity, and its position. Moreover, the proposed IoT sensor node offers connectivity, through WiFi and an appropriate application, to smartphones or tablets, allowing the registration of additional vehicle- and driver-specific information; these data are also forwarded to the data center. All control and communication tasks of the IoT sensor node are performed by dedicated firmware.
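To illustrate the standard-mode transmission described above, the sketch below assembles the kind of record such a node might send to the data center: device identity, position, and a mix of digitized sensor readings serialized as JSON; all field names and values are invented for illustration.

```python
# Hypothetical sketch of the record an IoT sensor node of this kind could transmit:
# device identity, position, and digitized analog / vehicle-computer readings.
import json
import time

def build_record(device_id, lat, lon, readings):
    return {
        "device_id": device_id,          # IoT device identity
        "timestamp": int(time.time()),   # seconds since epoch
        "position": {"lat": lat, "lon": lon},
        "readings": readings,            # digitized analog sensors + vehicle-computer values
    }

record = build_record(
    device_id="node-017",
    lat=40.6401, lon=22.9444,
    readings={"coolant_temp_c": 88.5, "oil_pressure_bar": 3.2, "engine_rpm": 1450},
)
payload = json.dumps(record)
print(payload)  # in the field, this payload would go over the cellular link, e.g. via MQTT or HTTPS
```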

A 3D Numerical Environmental Modeling Approach for Assessing Transport of Spilled Oil in Porous Beach Conditions under a Meso-Scale Tank Design

Shorelines are vulnerable to significant environmental impacts from oil spills. Stranded oil can cause short- to long-term detrimental effects along beaches, including injuries to ecosystems and to socio-economic and cultural resources. In this study, a three-dimensional (3D) numerical modeling approach is developed to evaluate the fate and transport of spilled oil for hypothetical oiled shoreline cases under various combinations of beach geomorphology and environmental conditions. The developed model estimates the spatial and temporal distribution of spilled oil for the various test conditions using the finite volume method, considering the physical transport (dispersion and advection), sinks, and sorption processes. The model includes a user-friendly interface for data input on variables such as beach properties, environmental conditions, and the physical-chemical properties of the spilled oil. An experimental meso-scale tank design was used to test the developed model for dissolved petroleum hydrocarbons within shorelines. The simulated effects of different sediment substrates, oil types, and shoreline features on the transport of spilled oil are comparable to those obtained with a commercially available model. Results show that the properties of the substrates and the oil removal by shoreline effects have significant impacts on oil transport in the beach area. A sensitivity analysis of the 3D model using the one-step-at-a-time (OAT) method identified hydraulic conductivity as the most sensitive parameter. The 3D numerical model allows users to examine the behavior of oil on and within beaches, assess potential environmental impacts, and provide technical support for decisions related to shoreline clean-up operations.
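For orientation, the transport processes listed above (advection, dispersion, sorption, and sinks) are typically represented by an advection-dispersion equation of the general form below; the exact formulation and coefficients used in the authors' model may differ:

```latex
R \,\frac{\partial C}{\partial t} \;=\; \nabla \cdot \left( \mathbf{D}\, \nabla C \right) \;-\; \nabla \cdot \left( \mathbf{v}\, C \right) \;-\; \lambda C,
```

where C is the dissolved hydrocarbon concentration, v the pore-water velocity, D the hydrodynamic dispersion tensor, R a retardation factor accounting for sorption, and lambda a first-order loss coefficient representing sinks.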

Technical, Environmental, and Financial Assessment for the Optimal Sizing of a Run-of-River Small Hydropower Project: A Case Study in Colombia

Run-of-river (RoR) hydropower projects represent a viable, clean, and cost-effective alternative to dam-based plants and provide decentralized power production. However, the cost-effectiveness of RoR schemes depends on the proper selection of the site and design flow, which is a challenging task because it requires multivariate analysis. In this respect, this study presents the development of an investment decision support tool for assessing the optimal size of an RoR scheme considering technical, environmental, and cost constraints. The net present value (NPV) from a project perspective is used as the objective function for supporting the investment decision. The tool has been tested by applying it to an actual RoR project recently proposed in Colombia. The results show that the optimum point in financial terms does not match the flow that maximizes energy generation from the river's available flow. For the case study, the flow that maximizes energy corresponds to a value of 5.1 m³/s, whereas a flow of 2.1 m³/s maximizes the investors' NPV. Finally, a sensitivity analysis is performed to determine the NPV as a function of changes in the debt rate, electricity prices, and CapEx. Even in the worst-case scenario, the optimal size represents a positive business case, with an NPV of 2.2 million USD and an internal rate of return (IRR) 1.5 times higher than the discount rate.
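To make the financial criterion concrete, the sketch below computes the net present value and internal rate of return for a simple cash-flow stream; the figures are placeholders and do not correspond to the project's actual cash flows.

```python
# Illustrative NPV / IRR calculation for a candidate design flow (placeholder cash flows).
def npv(rate, cash_flows):
    """Net present value of cash_flows[t] received at the end of year t (t = 0 is the outlay)."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, lo=-0.9, hi=1.0, tol=1e-8):
    """Internal rate of return by bisection on the NPV sign change."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if npv(mid, cash_flows) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Hypothetical example: 6.0 MUSD CapEx followed by 20 years of 0.9 MUSD net revenue.
flows = [-6.0] + [0.9] * 20
print("NPV at 10%% discount rate: %.2f MUSD" % npv(0.10, flows))
print("IRR: %.1f%%" % (100 * irr(flows)))
```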