Spherical Harmonic Based Monostatic Anisotropic Point Scatterer Model for RADAR Applications

High-performance computing (HPC) based emulators can be used to model the scattering from multiple stationary and moving targets for RADAR applications. These emulators rely on the RADAR Cross Section (RCS) of the targets being available in complex scenarios. Representing the RCS using tables generated from EM simulations is often cumbersome and leads to large storage requirements. In this paper, we propose a spherical harmonic based anisotropic scatterer model to represent the RCS of complex targets. The problem of finding the locations and reflection profiles of all scatterers can be formulated as a linear least-squares problem with a special sparsity constraint. We solve this problem using a modified Orthogonal Matching Pursuit algorithm. The results show that the spherical harmonic based scatterer model can effectively represent the RCS data of complex targets.
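The abstract does not give implementation details, so the following is a minimal sketch of the core sparse-recovery step, assuming a generic dictionary matrix whose columns stand in for candidate scatterer atoms; the data, dictionary, and sparsity level are hypothetical placeholders, and the paper's modified OMP with its special sparsity constraint is not reproduced here.

```python
import numpy as np

def omp(A, y, k):
    """Plain Orthogonal Matching Pursuit: greedily pick k columns of A and
    re-solve the least-squares fit on the selected support each iteration."""
    residual, support = y.copy(), []
    for _ in range(k):
        # Column most correlated with the current residual.
        j = int(np.argmax(np.abs(A.T @ residual)))
        support.append(j)
        # Least-squares solution restricted to the chosen columns.
        coeffs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coeffs
    x = np.zeros(A.shape[1])
    x[support] = coeffs
    return x

# Hypothetical example: 100 measurements, 400 candidate atoms, 5 active ones.
rng = np.random.default_rng(0)
A = rng.normal(size=(100, 400))
x_true = np.zeros(400)
x_true[rng.choice(400, 5, replace=False)] = rng.normal(size=5)
y = A @ x_true
x_hat = omp(A, y, k=5)
print("recovered support:", np.flatnonzero(x_hat))
```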

Willingness and Attitude Towards Organ Donation of Nurses in Taiwan

Taking the medical staff of an emergency ward in a medical center in central Taiwan as the study population, questionnaire data were collected anonymously and voluntarily with structured questionnaires to explore the actual situation, willingness, and attitude regarding organ donation. Only 80 valid questionnaires were gathered. Of the 8 questions, the mean number of correct answers was 5.9 and the correct rate was 73.13%. According to the organ donation survey statistics, only 8.7% had signed the consent for organ donation, 21.3% were willing but had not yet signed the consent, 62.5% had not yet decided, and 7.5% were unwilling. The average total score (standard deviation) of attitude towards organ donation was 36.2. There was no significant difference between the demographic variables and the awareness and willingness of organ donation, but there was a significant correlation between marital status and the attitude toward organ donation.

Seismic Behavior and Loss Assessment of High-Rise Buildings with Light Gauge Steel-Concrete Hybrid Structure

The steel-concrete hybrid structure has been extensively employed in high-rise and super high-rise buildings. The light gauge steel-concrete hybrid structure, which combines a light gauge steel structure with a concrete hybrid structure, is a type of steel-concrete hybrid structure that possesses advantages of both. The seismic behavior and loss assessment of three high-rise buildings with three different concrete hybrid structures were investigated using finite element software. The three concrete hybrid structures are the reinforced concrete column-steel beam (RC-S) hybrid structure, the concrete-filled steel tube column-steel beam (CFST-S) hybrid structure, and the tubed concrete column-steel beam (TC-S) hybrid structure. Nonlinear time-history analyses of the three high-rise buildings under 80 earthquakes were carried out. The simulation results indicated that all three high-rise buildings exhibited good seismic performance. Under extremely rare earthquakes, the maximum inter-story drifts of the three high-rise buildings are significantly lower than 1/50. The inter-story drift and floor acceleration of the high-rise building with the CFST-S hybrid structure were larger than those of the high-rise building with the RC-S hybrid structure and smaller than those of the high-rise building with the TC-S hybrid structure. Then, based on the time-history analysis results, the post-earthquake repair cost ratio and repair time of the three high-rise buildings were predicted through the economic performance analysis method proposed in the FEMA P-58 report. Under frequent earthquakes, basic earthquakes, and rare earthquakes, the repair cost ratios and repair times of the three high-rise buildings were less than 5% and 15 days, respectively. Under extremely rare earthquakes, the repair cost ratio and repair time of the high-rise building with the TC-S hybrid structure were the largest among the three high-rise buildings. Owing to its advantages, the CFST-S hybrid structure could be extensively employed in high-rise buildings subjected to earthquake excitations.
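As a small illustration of the response check described above, the sketch below computes maximum inter-story drift ratios from story displacement histories and compares them with the 1/50 limit; the displacement data and story height are hypothetical placeholders, not results from the paper.

```python
import numpy as np

story_height = 3.0  # m, hypothetical uniform story height

# Hypothetical displacement histories: shape (n_steps, n_stories), in metres.
rng = np.random.default_rng(1)
disp = np.cumsum(0.002 * rng.normal(size=(1000, 10)), axis=1)

# Inter-story drift ratio = relative displacement of adjacent stories / story height.
rel = np.diff(np.hstack([np.zeros((disp.shape[0], 1)), disp]), axis=1)
max_drift = np.abs(rel / story_height).max(axis=0)   # per story, over all time steps

print("max drift ratios per story:", np.round(max_drift, 4))
print("all below the 1/50 limit:", bool((max_drift < 1 / 50).all()))
```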

Double Clustering as an Unsupervised Approach for Order Picking of Distributed Warehouses

Planning the order picking lists for warehouses to achieve desired operational performance is a significant challenge when the costs associated with logistics are relatively high, and it is especially important in the e-commerce era. Nowadays, many order planning techniques employ supervised machine learning algorithms. However, defining features for supervised machine learning algorithms is not a simple task. Against this background, we consider whether unsupervised algorithms can enhance the planning of order-picking lists. A double zone picking approach, based on using clustering algorithms twice, is developed. A simplified example is given to demonstrate the merit of our approach.
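The abstract does not spell out the algorithmic details, so the following is a minimal sketch of "clustering twice", assuming k-means at both stages: items are first grouped into storage zones from hypothetical item features, and orders are then clustered into picking batches by their zone profiles. All data, feature choices, and cluster counts are made up for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Stage 1: cluster items into zones using hypothetical item feature vectors
# (e.g. co-ordering frequencies with other items).
item_features = rng.random((50, 8))          # 50 items, 8 features each
zone_of_item = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(item_features)

# Stage 2: represent each order by how many of its lines fall in each zone,
# then cluster the orders into picking batches.
orders = rng.integers(0, 2, size=(30, 50))   # 30 orders x 50 items (0/1 order lines)
order_zone_profile = np.stack(
    [orders[:, zone_of_item == z].sum(axis=1) for z in range(4)], axis=1)
batch_of_order = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(order_zone_profile)

print("zone sizes :", np.bincount(zone_of_item))
print("batch sizes:", np.bincount(batch_of_order))
```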

Geometric Simplification Method of Building Energy Model Based on Building Performance Simulation

In the design stage of a new building, an energy model of the building is often required to analyze its energy efficiency performance. In practice, a certain degree of geometric simplification has to be made when building energy models are established, since the detailed geometric features of a real building are hard to describe exactly in most energy simulation engines, such as ESP-r, eQuest, or EnergyPlus. In fact, a detailed description is not necessary when extremely high accuracy is not demanded. Therefore, this paper analyzes the relationship between the error of the simulation results from building energy models and the geometric simplification of the models. Two parameters are selected as the indices characterizing the geometric features of a building in energy simulation: the southward projected area and the total side surface area of the building. Based on this parameterization, an arbitrary column-shaped (prismatic) building can be simplified to a typical cuboid building for energy modeling. The results of this study indicate that the geometric simplification causes no more than 7% prediction error in the annual cooling/heating load for buildings whose ratio of southward projection length to the total perimeter of the bottom is 0.25~0.35, which means this method is applicable for building performance simulation.
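As an illustration of the two geometric indices named above, the sketch below computes the southward projection length, the bottom perimeter, their ratio, and the two areas for a hypothetical prismatic footprint; the vertex coordinates and building height are made up for the example.

```python
import numpy as np

# Hypothetical footprint vertices (x = east, y = north), in metres, plus height.
footprint = np.array([[0, 0], [20, 0], [20, 8], [12, 8], [12, 14], [0, 14]], float)
height = 30.0

closed = np.vstack([footprint, footprint[:1]])            # close the polygon
edge_lengths = np.linalg.norm(np.diff(closed, axis=0), axis=1)
perimeter = edge_lengths.sum()

# Southward projection length: extent of the footprint along the east-west axis.
south_proj_len = footprint[:, 0].max() - footprint[:, 0].min()

print("southward projected area (m^2):", south_proj_len * height)
print("total side surface area (m^2) :", perimeter * height)
print("projection-length / perimeter  :", round(south_proj_len / perimeter, 3))
```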

An Approach for Coagulant Dosage Optimization Using Soft Jar Test: A Case Study of Bangkhen Water Treatment Plant

The most important process in a water treatment plant is coagulation, which uses alum and poly aluminum chloride (PACL). Therefore, determining the dosages of alum and PACL is the most important factor to control. This research applies an artificial neural network (ANN) trained with the Levenberg–Marquardt algorithm to create a mathematical model (Soft Jar Test) for predicting the chemical doses used for coagulation, i.e., alum and PACL, with input data consisting of turbidity, pH, alkalinity, conductivity, and oxygen consumption (OC) of the Bangkhen Water Treatment Plant (BKWTP), under the authority of the Metropolitan Waterworks Authority of Thailand. The data were collected from 1 January 2019 to 31 December 2019 in order to cover the changing seasons of Thailand. The input data of the ANN are divided into three groups: a training set, a test set, and a validation set. The coefficient of determination and the mean absolute error are 0.73 and 3.18 for the alum model and 0.59 and 3.21 for the PACL model, respectively.
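To make the modeling step concrete, the following sketch trains a small single-hidden-layer network for dosage prediction with a Levenberg–Marquardt solver (here SciPy's least_squares with method="lm", not the toolbox used in the paper); the five input features match those listed above, but the data are random placeholders rather than BKWTP measurements.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the five inputs (turbidity, pH, alkalinity,
# conductivity, oxygen consumption) and the dosage target.
X = rng.normal(size=(200, 5))
y = X @ np.array([0.5, -0.2, 0.3, 0.1, 0.4]) + 0.1 * rng.normal(size=200)

n_in, n_hidden = 5, 8

def unpack(p):
    i = 0
    W1 = p[i:i + n_in * n_hidden].reshape(n_in, n_hidden); i += n_in * n_hidden
    b1 = p[i:i + n_hidden]; i += n_hidden
    W2 = p[i:i + n_hidden]; i += n_hidden
    b2 = p[i]
    return W1, b1, W2, b2

def predict(p, X):
    W1, b1, W2, b2 = unpack(p)
    h = np.tanh(X @ W1 + b1)          # single hidden layer, tanh activation
    return h @ W2 + b2

def residuals(p):
    return predict(p, X) - y

p0 = 0.1 * rng.normal(size=n_in * n_hidden + 2 * n_hidden + 1)
fit = least_squares(residuals, p0, method="lm")   # Levenberg-Marquardt
print("training RMSE:", np.sqrt(np.mean(fit.fun ** 2)))
```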

Identifying Network Subgraph-Associated Essential Genes in Molecular Networks

Essential genes play an important role in the survival of an organism. It has been shown that cancer-associated essential genes are necessary for cancer cell proliferation, and these genes are potential therapeutic targets. It has also been demonstrated that mutations of cancer-associated essential genes give rise to resistance to immunotherapy in patients with tumors. In the present study, we focus on studying the biological effects of essential genes from a network perspective. We hypothesize that one can analyze a biological molecular network by decomposing it into both three-node and four-node digraphs (subgraphs). These network subgraphs encode the regulatory interaction information among the network's genetic elements. In this study, the frequency of occurrence of subgraph-associated essential genes in a molecular network was quantified using the odds ratio statistic. The biological effects of subgraph-associated essential genes are discussed. In summary, the subgraph approach provides a systematic method for analyzing molecular networks, and it can capture useful biological information for biomedical research.
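The following is a minimal sketch of the quantification idea described above, assuming a toy directed network and a hypothetical essential-gene set: genes appearing in connected three-node subgraphs are tabulated against essentiality and an odds ratio is computed. The paper's full classification of three- and four-node subgraph types is not reproduced.

```python
import itertools
import networkx as nx

# Hypothetical toy regulatory network and essential-gene set.
G = nx.DiGraph([("A", "B"), ("B", "C"), ("A", "C"), ("C", "D"),
                ("D", "A"), ("B", "D"), ("E", "F")])
essential = {"A", "C", "E"}

# Genes that appear in at least one connected three-node subgraph.
in_subgraph = set()
for trio in itertools.combinations(G.nodes, 3):
    if nx.is_weakly_connected(G.subgraph(trio)):
        in_subgraph.update(trio)

# 2x2 table: subgraph membership vs. essentiality, then the odds ratio.
a = len(in_subgraph & essential)                      # in subgraph, essential
b = len(in_subgraph - essential)                      # in subgraph, non-essential
c = len(essential - in_subgraph)                      # outside, essential
d = len(set(G.nodes) - in_subgraph - essential)       # outside, non-essential
odds_ratio = (a * d) / (b * c) if b * c else float("inf")
print("odds ratio:", odds_ratio)   # values > 1 suggest enrichment of essential genes
```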

Social Influences on Americans' Mask-Wearing Behavior during COVID-19

Based on a convenience sample of 2,092 participants from across all 50 states of the United States, a survey was conducted from late July to early September 2020 to explore Americans' mask-wearing behaviors during COVID-19 according to their political convictions, religious beliefs, and ethnic cultures. The purpose of the study is to provide evidential support for government policymaking so as to develop more effective public policies by taking into consideration the variance in these social factors. It was found that the respondents' party affiliation or preference, religious belief, and ethnicity, in addition to their health condition, gender, and level of concern about contracting COVID-19, all affected their mask-wearing habits both in March, the initial coronavirus outbreak stage, and in August, when mask-wearing had been made mandatory by state governments. The study concludes that pandemic awareness campaigns must be run among all citizens, especially among African Americans, Muslims, and Republicans, who have the lowest rates of wearing masks, in order to protect themselves and others. It is recommended that complementary cognitive bias awareness programs be implemented in non-Black and non-Muslim communities to eliminate social concerns that deter them from wearing masks.

Research on the Teaching Quality Evaluation of China’s Network Music Education APP

With the advent of the Internet era in recent years, social music education has gradually shifted from the original in-person teaching mode to a combined in-person and online teaching mode. Whether for school music education, professional music education, or social music education, teaching quality is the most important evaluation index. Regarding research on teaching quality evaluation, scholars at home and abroad have contributed many results based on multiple methods and evaluation subjects. However, to the best of our knowledge, no complete evaluation model has been established for the virtual teaching interaction mode of the emerging network music education Application (APP). This research first identifies the basic dimensions of teaching quality required by the three parties and constructs a quality evaluation index system; then, after expounding the connotation of each index, it determines the weight of each index using the fuzzy analytic hierarchy process, providing ideas and methods for a scientific, objective, and comprehensive evaluation of the teaching quality of network education APPs.
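As a small illustration of the weighting step, the sketch below derives index weights from a pairwise comparison matrix using the principal-eigenvector method of classical AHP; the comparison values are hypothetical, and the fuzzy extension used in the paper (fuzzy judgments and defuzzification) is not shown.

```python
import numpy as np

# Hypothetical pairwise comparison matrix for four teaching-quality indices
# (Saaty scale; entry [i, j] = importance of index i relative to index j).
A = np.array([
    [1.0, 3.0, 5.0, 2.0],
    [1/3, 1.0, 3.0, 1/2],
    [1/5, 1/3, 1.0, 1/4],
    [1/2, 2.0, 4.0, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)                 # principal eigenvalue
w = np.abs(eigvecs[:, k].real)
weights = w / w.sum()                       # normalised index weights

# Consistency check: CI = (lambda_max - n) / (n - 1), compared against RI (0.90 for n = 4).
n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)
cr = ci / 0.90
print("weights:", np.round(weights, 3), " consistency ratio:", round(cr, 3))
```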

Deep Learning Based 6D Pose Estimation for Bin-Picking Using 3D Point Clouds

Estimating the 6D pose of objects is a core step in robot bin-picking tasks. The problem is that, in real applications, various objects are usually randomly stacked with heavy occlusion. In this work, we propose a method to regress 6D poses by predicting three points for each object in the 3D point cloud through deep learning. To resolve the ambiguity of symmetric poses, we propose a labeling method that helps the network converge better. Based on the predicted pose, an iterative method is employed for pose optimization. In real-world experiments, our method outperforms the classical approach in both precision and recall.
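The point prediction itself requires a trained network, but the geometric step of turning three predicted points into a 6D pose can be sketched directly: given three corresponding points on the object model and their predicted positions in the scene, the least-squares rotation and translation follow from the Kabsch/SVD procedure below. The point values are hypothetical, and the paper's labeling scheme and iterative refinement are not shown.

```python
import numpy as np

def rigid_transform(model_pts, scene_pts):
    """Least-squares R, t such that scene ≈ R @ model + t (Kabsch via SVD)."""
    cm, cs = model_pts.mean(axis=0), scene_pts.mean(axis=0)
    H = (model_pts - cm).T @ (scene_pts - cs)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cs - R @ cm
    return R, t

# Hypothetical example: three predefined model points and their predicted
# scene positions under a known rotation about z plus a translation.
model = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0], [0.0, 0.1, 0.0]])
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
scene = model @ R_true.T + np.array([0.5, 0.2, 0.3])
R_est, t_est = rigid_transform(model, scene)
print("rotation recovered:", np.allclose(R_est, R_true), " t:", np.round(t_est, 3))
```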

AI Tutor: A Computer Science Domain Knowledge Graph-Based QA System on JADE platform

In this paper, we propose an AI Tutor that uses ontology and natural language processing techniques to generate a computer science domain knowledge graph and answer users' questions based on that knowledge graph. We define eight types of relations to extract relationships between entities from computer science domain text. The AI Tutor is separated into two agents, a learning agent and a Question-Answer (QA) agent, and is developed on the JADE multi-agent system platform. The learning agent is responsible for reading text, extracting information, and generating a corresponding knowledge graph from the defined patterns. The QA agent can understand users' questions and answer them based on the knowledge graph generated by the learning agent.
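The agents themselves run on JADE (Java), but the underlying knowledge-graph idea can be sketched language-agnostically. The hypothetical Python fragment below extracts one relation type with a simple pattern, stores triples in a directed graph, and answers a definition-style question from it; the pattern, relation name, and sentences are illustrative and are not the paper's eight relation types.

```python
import re
import networkx as nx

sentences = [
    "A stack is a linear data structure.",        # hypothetical domain text
    "A queue is a linear data structure.",
    "Binary search is a search algorithm.",
]

kg = nx.DiGraph()
# One illustrative "is_a" pattern; the paper defines eight relation types.
pattern = re.compile(r"^(?:A |An )?(.+?) is (?:a |an )?(.+?)\.$")
for s in sentences:
    m = pattern.match(s)
    if m:
        kg.add_edge(m.group(1).lower(), m.group(2).lower(), relation="is_a")

def answer(question):
    """Answer 'What is X?' by following the is_a edge in the knowledge graph."""
    m = re.match(r"What is (?:a |an )?(.+?)\?$", question, re.IGNORECASE)
    if m and m.group(1).lower() in kg:
        node = m.group(1).lower()
        targets = [v for _, v, d in kg.out_edges(node, data=True) if d["relation"] == "is_a"]
        if targets:
            return f"{node} is a {targets[0]}."
    return "I don't know yet."

print(answer("What is a queue?"))
```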

Laser Data Based Automatic Generation of Lane-Level Road Map for Intelligent Vehicles

With the development of intelligent vehicle systems, a high-precision road map is increasingly needed in many aspects. Automatic lane line extraction and modeling are the most essential steps for the generation of a precise lane-level road map. In this paper, an automatic lane-level road map generation system is proposed. To extract the road markings on the ground, the multi-region Otsu thresholding method is applied, which finds the intensity threshold of the laser data that maximizes the variance between background and road markings. The extracted road marking points are then projected onto a raster image and clustered using a two-stage clustering algorithm. Lane lines are subsequently recognized from these clusters by the shape features of their minimum bounding rectangles. To ensure the storage efficiency of the map, the lane lines are approximated by cubic polynomial curves using a Bayesian estimation approach. The proposed lane-level road map generation system has been tested under urban and expressway conditions in Hefei, China. The experimental results show that our method achieves excellent extraction and clustering performance, and the fitted lines reach a high position accuracy with an error of less than 10 cm.
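As an illustration of the final map-compression step, the sketch below fits a cubic polynomial to clustered lane-line points with a Bayesian linear regression (scikit-learn's BayesianRidge); the point data are synthetic, and the paper's exact estimation formulation may differ.

```python
import numpy as np
from sklearn.linear_model import BayesianRidge
from sklearn.preprocessing import PolynomialFeatures

# Synthetic lane-line points in a local (x: along-road, y: lateral) frame, metres.
rng = np.random.default_rng(0)
x = np.linspace(0, 50, 200)
y = 0.5 + 0.02 * x - 1e-4 * x**2 + 2e-6 * x**3 + rng.normal(scale=0.03, size=x.size)

# Cubic design matrix [x, x^2, x^3]; BayesianRidge fits the intercept itself.
X = PolynomialFeatures(degree=3, include_bias=False).fit_transform(x.reshape(-1, 1))
model = BayesianRidge().fit(X, y)

coeffs = np.concatenate([[model.intercept_], model.coef_])   # c0..c3
residual = y - model.predict(X)
print("cubic coefficients:", np.round(coeffs, 6))
print("RMS fit error (m):", round(float(np.sqrt(np.mean(residual**2))), 4))
```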

A Deep-Learning Based Prediction of Pancreatic Adenocarcinoma with Electronic Health Records from the State of Maine

Predicting the risk of Pancreatic Adenocarcinoma (PA) in advance can benefit the quality of care and potentially reduce population mortality and morbidity. The aim of this study was to develop and prospectively validate a risk prediction model to identify patients at risk of new incident PA as early as 3 months before the onset of PA in a statewide, general population in Maine. The PA prediction model was developed using Deep Neural Networks, a deep learning algorithm, with a 2-year electronic-health-record (EHR) cohort. Prospective results showed that our model identified 54.35% of all inpatient episodes of PA, and 91.20% of all PA that required subsequent chemoradiotherapy, with a lead time of up to 3 months and a true-alert rate of 67.62%. The risk assessment tool has attained an improved discriminative ability. It can be immediately deployed to the health system to provide automatic early warnings to adults at risk of PA. It also has the potential to identify personalized risk factors to facilitate customized PA interventions.

Deep Learning Application for Object Image Recognition and Robot Automatic Grasping

Since vision systems are in intense demand for autonomous applications in industrial environments, image recognition has become an important research topic. Here, a deep learning algorithm is employed in the vision system to recognize industrial objects and is integrated with a 7A6 Series Manipulator for automatic object gripping. A PC and a Graphics Processing Unit (GPU) are chosen to construct the 3D vision recognition system. A depth camera (Intel RealSense SR300) is employed to extract the images for object recognition and coordinate derivation. The YOLOv2 scheme, a convolutional neural network (CNN) structure, is adopted for object classification and center point prediction. Additionally, an image processing strategy is used to find the object contour for calculating the object orientation angle. Then, the specified object location and orientation information are sent to the robot controller. Finally, a six-axis manipulator can grasp the specified object in a random environment based on the user command and the extracted image information. The experimental results show that YOLOv2 successfully detects the object location and category with confidence near 0.9 and a 3D position error of less than 0.4 mm. This is useful for future intelligent robotic applications in Industry 4.0 environments.
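The detection step requires trained YOLOv2 weights, but the contour-based orientation step described above can be sketched with OpenCV alone: threshold the object region to a binary mask, take the largest contour, and read the orientation angle from its minimum-area rectangle. The mask below is synthetic; in a real pipeline it would come from the depth-camera crop around the YOLOv2 bounding box.

```python
import cv2
import numpy as np

# Synthetic binary mask standing in for the segmented object region.
mask = np.zeros((240, 320), dtype=np.uint8)
box = cv2.boxPoints(((160, 120), (120, 40), 30.0)).astype(np.int32)
cv2.fillPoly(mask, [box], 255)

# Largest external contour and its minimum-area (rotated) bounding rectangle.
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
largest = max(contours, key=cv2.contourArea)
(cx, cy), (w, h), angle = cv2.minAreaRect(largest)
print(f"grasp centre = ({cx:.1f}, {cy:.1f}) px, orientation = {angle:.1f} deg")
```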

DocPro: A Framework for Processing Semantic and Layout Information in Business Documents

With recent advances in deep neural networks, we observe new applications of NLP (natural language processing) and CV (computer vision) powered by deep neural networks for processing business documents. However, creating a real-world document processing system requires integrating several NLP and CV tasks, rather than treating them separately. There is a need for a unified approach to processing documents containing textual and graphical elements with rich formats, diverse layout arrangements, and distinct semantics. In this paper, a framework that fulfills this unified approach is presented. The framework includes a representation model definition for holding the information generated by various tasks and specifications defining the coordination between these tasks. The framework is a blueprint for building a system that can process documents with rich formats, styles, and multiple types of elements. The flexible and lightweight design of the framework can help build systems for diverse business scenarios, such as contract monitoring and reviewing.
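The paper's representation model is not detailed in the abstract, so the following is a purely hypothetical sketch of what a unified element record holding both layout and semantic information might look like; all class and field names are invented for illustration.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class Element:
    """One document element with both layout and semantic attributes."""
    kind: str                          # e.g. "paragraph", "table", "signature"
    page: int
    bbox: Tuple[float, float, float, float]   # (x0, y0, x1, y1) in page coordinates
    text: str = ""
    labels: Dict[str, str] = field(default_factory=dict)  # outputs of NLP/CV tasks

@dataclass
class Document:
    source: str
    elements: List[Element] = field(default_factory=list)

doc = Document(source="contract_001.pdf")
doc.elements.append(Element(kind="paragraph", page=1, bbox=(72, 90, 540, 160),
                            text="This Agreement is made on ...",
                            labels={"clause_type": "preamble"}))
print(len(doc.elements), doc.elements[0].labels)
```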

Using Project MIND - Math Is Not Difficult Strategies to Help Children with Autism Improve Mathematics Skills

This study aimed to provide a practical, systematic, and comprehensive intervention for children with Autism Spectrum Disorder (ASD). A pilot study with a quasi-experimental pre-post intervention with control group design was conducted to evaluate whether the mathematical intervention (Project MIND - Math Is Not Difficult) increases the math comprehension of children with ASD. Children with ASD in the primary grades (K-1, 2) participated in math interventions to enhance their math comprehension and cognitive ability. The Bracken basic concept scale was used to evaluate subjects' language skills, cognitive development, and school readiness. The study found that the systemic interventions of Project MIND significantly improved the mathematical and cognitive abilities of children with autism. The results of this study may lead to a major change toward effective and adequate health care services for children with ASD and their families. All statistical analyses were performed with IBM SPSS Statistics Version 25 for Windows. The significance level was set at a P-value of 0.05.

Time Series Simulation by Conditional Generative Adversarial Net

The Generative Adversarial Net (GAN) has proved to be a powerful machine learning tool in image data analysis and generation. In this paper, we propose to use the Conditional Generative Adversarial Net (CGAN) to learn and simulate time series data. The conditions include both categorical and continuous variables with different auxiliary information. Our simulation studies show that CGAN has the capability to learn different types of normal and heavy-tailed distributions, as well as the dependence structures of different time series. It also has the capability to generate conditional predictive distributions consistent with the training data distributions. We also provide an in-depth discussion of the rationale behind GANs and of neural networks as hierarchical splines, to establish a clear connection with existing statistical methods of distribution generation. In practice, CGAN has a wide range of applications in market risk and counterparty risk analysis: it can be applied to learn historical data and generate scenarios for the calculation of Value-at-Risk (VaR) and Expected Shortfall (ES), and it can also predict the movement of market risk factors. We present a real data analysis, including backtesting, to demonstrate that CGAN can outperform Historical Simulation (HS), a popular method in market risk analysis for calculating VaR. CGAN can also be applied in economic time series modeling and forecasting. In this regard, we include an example of hypothetical shock analysis for economic models and of the generation of potential CCAR scenarios by CGAN at the end of the paper.
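For readers who want a concrete starting point, the block below is a minimal PyTorch sketch of a CGAN training step for conditional series generation: noise plus a condition vector go into the generator, and the discriminator sees the series concatenated with the same condition. Dimensions, architectures, and data are placeholders, not the configuration used in the paper.

```python
import torch
import torch.nn as nn

# Illustrative sizes: a length-20 simulated series conditioned on a
# 4-dimensional auxiliary vector (e.g. regime label plus continuous factors).
noise_dim, cond_dim, series_len = 8, 4, 20

G = nn.Sequential(nn.Linear(noise_dim + cond_dim, 64), nn.ReLU(),
                  nn.Linear(64, series_len))
D = nn.Sequential(nn.Linear(series_len + cond_dim, 64), nn.LeakyReLU(0.2),
                  nn.Linear(64, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

def train_step(real_series, cond):
    batch = real_series.size(0)
    fake = G(torch.cat([torch.randn(batch, noise_dim), cond], dim=1))

    # Discriminator: real (conditioned) vs. generated (conditioned).
    opt_d.zero_grad()
    d_real = D(torch.cat([real_series, cond], dim=1))
    d_fake = D(torch.cat([fake.detach(), cond], dim=1))
    loss_d = bce(d_real, torch.ones(batch, 1)) + bce(d_fake, torch.zeros(batch, 1))
    loss_d.backward()
    opt_d.step()

    # Generator: fool the discriminator under the same condition.
    opt_g.zero_grad()
    loss_g = bce(D(torch.cat([fake, cond], dim=1)), torch.ones(batch, 1))
    loss_g.backward()
    opt_g.step()
    return loss_d.item(), loss_g.item()

# One illustrative step on random placeholder data.
print(train_step(torch.randn(32, series_len), torch.randn(32, cond_dim)))
```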

SVID: Structured Vulnerability Intelligence for Building Deliberated Vulnerable Environment

The diversity and complexity of modern IT systems make it almost impossible for internal teams to find all vulnerabilities in software before it is officially released. The emergence of threat intelligence and vulnerability reporting policies has greatly reduced the burden on software vendors and organizations in finding vulnerabilities. However, to prove the existence of a reported vulnerability, it is necessary but difficult for a security incident response team to build a Deliberated Vulnerable Environment (DVE) from a vulnerability report with limited and incomplete information. This paper presents a structured, standardized, machine-oriented vulnerability intelligence format that can be used to automate the orchestration of a DVE. It highlights the important role of software configuration and proof-of-vulnerability specifications in vulnerability intelligence, and proposes a triad model called DIR (Dependency Configuration, Installation Configuration, Runtime Configuration) to define software configuration. Finally, a prototype system is implemented to demonstrate that the orchestration of a DVE can be automated with this intelligence.
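The abstract defines the DIR triad but not its concrete syntax, so the following is a hypothetical illustration of how one structured vulnerability-intelligence record might encode the three configuration parts plus a proof-of-vulnerability specification; every field name and value is invented for illustration and does not reproduce the SVID format itself.

```python
# Hypothetical, illustrative SVID-style record; field names and values are invented.
svid_record = {
    "vulnerability_id": "CVE-XXXX-XXXX",           # placeholder identifier
    "software": {"name": "exampled", "version": "1.2.3"},
    "configuration": {                              # the DIR triad
        "dependency": {"os": "ubuntu:18.04", "packages": ["libfoo>=2.0"]},
        "installation": {"steps": ["./configure --enable-feature", "make install"]},
        "runtime": {"service": "exampled --listen 0.0.0.0:8080", "env": {"DEBUG": "1"}},
    },
    "proof_of_vulnerability": {
        "trigger": "HTTP POST /api with an oversized header",
        "expected_observation": "service crash / controlled memory corruption",
    },
}

# A machine-oriented record like this could drive automated DVE orchestration,
# e.g. by rendering the three configuration parts into a container build/run spec.
for part, spec in svid_record["configuration"].items():
    print(part, "->", spec)
```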

A Design of Anisotropic Wet Etching System to Reduce Hillocks on Etched Surface of Silicon Substrate

This research aims to design and build a wet etching system suitable for anisotropic wet etching, in order to reduce the etching time, to reduce hillocks on the etched surface (i.e., to reduce roughness), and to create a 45-degree wall angle (micro-mirror). The study began by designing the wet etching system, which has four main components: an ultrasonic cleaner, a condenser, a motor, and a substrate holder. An ultrasonic machine was then modified by adding a condenser to maintain a consistent solution concentration during the etching process and by installing a motor to improve the surface roughness. The results showed that the etch rate increased and the roughness was reduced.

Weak Instability in Direct Integration Methods for Structural Dynamics

Three structure-dependent integration methods have been developed for solving the equations of motion, which are second-order ordinary differential equations, in structural dynamics and earthquake engineering applications. Although they generally share the same numerical properties, such as explicit formulation, unconditional stability, and second-order accuracy, they perform differently in solving the free vibration response of linear elastic and nonlinear systems with high-frequency modes. The root cause of this difference in the free vibration responses is analytically explored herein. It is verified that a weak instability is responsible for the different performance of the integration methods. In general, a weak instability will result in an inaccurate solution or even numerical instability in the free vibration responses of high-frequency modes. Consequently, weak instability must be avoided in time integration methods.
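The specific structure-dependent methods are not named in the abstract, so the sketch below only illustrates the kind of numerical check the discussion implies: integrate the free vibration of an undamped high-frequency SDOF mode with a one-step method (here the standard Newmark average-acceleration method as a stand-in) and monitor whether the displacement amplitude grows over time, which is the symptom a weak instability would produce.

```python
import numpy as np

def newmark_free_vibration(omega, dt, n_steps, beta=0.25, gamma=0.5):
    """Free vibration of an undamped SDOF mode (m = 1, k = omega**2) integrated
    with the Newmark method; returns the displacement history."""
    k = omega**2
    u, v = 1.0, 0.0                 # initial displacement and velocity
    a = -k * u                      # initial acceleration from equilibrium
    k_eff = k + 1.0 / (beta * dt**2)
    history = [u]
    for _ in range(n_steps):
        p_eff = u / (beta * dt**2) + v / (beta * dt) + (1 / (2 * beta) - 1) * a
        u_new = p_eff / k_eff
        a_new = (u_new - u) / (beta * dt**2) - v / (beta * dt) - (1 / (2 * beta) - 1) * a
        v = v + dt * ((1 - gamma) * a + gamma * a_new)
        u, a = u_new, a_new
        history.append(u)
    return np.array(history)

# High-frequency mode with a large time step: a weakly unstable method would show
# the peak amplitude drifting upward, while a stable one keeps it bounded.
u_hist = newmark_free_vibration(omega=100.0, dt=0.05, n_steps=2000)
print("max |u| in first half :", np.abs(u_hist[:1000]).max())
print("max |u| in second half:", np.abs(u_hist[1000:]).max())
```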