Online Teaching Methods and Student Satisfaction during a Pandemic

With the outbreak of the global COVID-19 pandemic, online education has come to characterize today's higher education. For some higher education institutions (HEIs), the shift from classroom education to online solutions was swift and smooth, and students are continuously asked about their experience with online education. There is therefore a growing emphasis on student satisfaction with online education, a field that had emerged previously but has become a center of higher education and research interest today. The aim of the current paper is to give a brief overview of the tools used in the online education of marketing-related classes at the examined university and to investigate, by means of a questionnaire, student satisfaction with the applied teaching methodologies. Results show that students are most satisfied with their teachers' competences and preparedness, while they are least satisfied with online class quality, where further steps appear to be needed.

Two-Stage Approach for Solving the Multi-Objective Optimization Problem on Combinatorial Configurations

The multi-objective optimization problem on combinatorial configurations is formulated, and an approach to its solution is proposed. The problem is of interest as a combinatorial optimization problem with many criteria, which models many applied tasks. The approach consists of two stages: the first reduces the multi-objective problem to a single-criterion one based on existing multi-objective optimization methods; the second solves the resulting single-criterion combinatorial optimization problem directly by the horizontal combinatorial method. This approach provides the optimal solution to the multi-objective optimization problem on combinatorial configurations in a finite number of steps, taking additional restrictions into account.
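To make the two-stage idea concrete, the following minimal sketch (not the authors' implementation) reduces two criteria to a single one with a weighted sum and then searches a small combinatorial configuration, here the set of permutations, by plain exhaustive enumeration standing in for the horizontal combinatorial method; all data and weights are illustrative.

```python
# Minimal sketch (not the authors' algorithm): stage 1 scalarizes two criteria
# with a weighted sum; stage 2 is shown here as plain exhaustive search over a
# small combinatorial configuration (permutations), standing in for the
# horizontal combinatorial method described in the paper.
from itertools import permutations

costs = [4, 1, 3, 2]          # illustrative data
delays = [2, 5, 1, 4]
weights = (0.6, 0.4)          # stage 1: chosen trade-off between the criteria

def f1(perm):                 # first criterion: position-weighted cost
    return sum((i + 1) * costs[j] for i, j in enumerate(perm))

def f2(perm):                 # second criterion: position-weighted delay
    return sum((i + 1) * delays[j] for i, j in enumerate(perm))

def scalarized(perm):         # stage 1 reduction to a single criterion
    return weights[0] * f1(perm) + weights[1] * f2(perm)

# Stage 2: search the combinatorial configuration (here, all permutations).
best = min(permutations(range(len(costs))), key=scalarized)
print(best, f1(best), f2(best))
```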

Deep Learning Based 6D Pose Estimation for Bin-Picking Using 3D Point Clouds

Estimating the 6D pose of objects is a core step in robot bin-picking tasks. The difficulty is that, in real applications, various objects are usually randomly stacked with heavy occlusion. In this work, we propose a method to regress 6D poses by predicting three points for each object in the 3D point cloud through deep learning. To resolve the pose ambiguity of symmetric objects, we propose a labeling method that helps the network converge better. Based on the predicted pose, an iterative method is employed for pose optimization. In real-world experiments, our method outperforms the classical approach in both precision and recall.
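As an illustration of how a 6D pose can be recovered from three predicted points, the following sketch aligns three points in the object's canonical frame with their predicted counterparts using the standard Kabsch (SVD) procedure; the point coordinates are illustrative, and this is not the paper's network or its iterative optimization step.

```python
# Minimal sketch: recover a rigid 6D pose (R, t) from three predicted object
# points and their known counterparts in the object's canonical frame, using
# the Kabsch/SVD alignment. The point values below are illustrative only.
import numpy as np

def pose_from_points(model_pts, pred_pts):
    """model_pts, pred_pts: (3, 3) arrays of corresponding 3D points."""
    mc, pc = model_pts.mean(axis=0), pred_pts.mean(axis=0)
    H = (model_pts - mc).T @ (pred_pts - pc)      # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                      # avoid a reflection
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = pc - R @ mc
    return R, t

model = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0], [0.0, 0.1, 0.0]])
pred = np.array([[0.5, 0.2, 0.9], [0.5, 0.3, 0.9], [0.4, 0.2, 0.9]])
R, t = pose_from_points(model, pred)
print(np.round(R, 3), np.round(t, 3))
```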

Thresholding Approach for Automatic Detection of Pseudomonas aeruginosa Biofilms from Fluorescence in situ Hybridization Images

Pseudomonas aeruginosa is an opportunistic pathogen that forms surface-associated microbial communities (biofilms) on artificial implant devices and on human tissue. Biofilm infections are difficult to treat with antibiotics, in part because the bacteria in biofilms are physiologically heterogeneous. One measure of biological heterogeneity in a population of cells is the cellular concentration of ribosomes, which can be probed with fluorescently labeled nucleic acids. The fluorescent signal intensity following fluorescence in situ hybridization (FISH) analysis correlates with the cellular level of ribosomes. The goals here are to provide computationally and statistically robust approaches to automatically quantify cellular heterogeneity in biofilms from a large library of epifluorescence microscopy FISH images. In this work, the initial steps toward these goals were taken by developing an automated biofilm detection approach for use with FISH images. The approach allows rapid identification of biofilm regions in FISH images that are counterstained with fluorescent dyes. This methodology advances over other computational methods by allowing the subtraction of spurious signals and non-biological fluorescent substrata. The method provides a robust and user-friendly approach that enables users to semi-automatically detect biofilm boundaries and extract intensity values from fluorescence images for quantitative analysis of biofilm heterogeneity.
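As a simplified illustration of intensity-based biofilm detection (the full pipeline with spurious-signal subtraction and substratum removal is not reproduced here), the following sketch applies Otsu's thresholding, implemented directly in NumPy, to a synthetic grayscale image.

```python
# Minimal sketch (not the paper's full pipeline): Otsu thresholding with NumPy
# to separate bright "biofilm" pixels from background in a grayscale image.
import numpy as np

def otsu_threshold(image, nbins=256):
    """Return the intensity threshold that maximizes between-class variance."""
    hist, edges = np.histogram(image.ravel(), bins=nbins)
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(hist).astype(float)            # background pixel counts
    w1 = w0[-1] - w0                              # foreground pixel counts
    cum_mass = np.cumsum(hist * centers)
    m0 = cum_mass / np.maximum(w0, 1)             # background mean intensity
    m1 = (cum_mass[-1] - cum_mass) / np.maximum(w1, 1)  # foreground mean
    between = w0[:-1] * w1[:-1] * (m0[:-1] - m1[:-1]) ** 2
    return centers[np.argmax(between)]

# Illustrative synthetic "image": dim background plus a brighter biofilm patch.
rng = np.random.default_rng(0)
img = rng.normal(30, 5, (128, 128))
img[40:90, 40:90] += 60
mask = img > otsu_threshold(img)                  # biofilm region mask
print(mask.mean())                                # fraction of pixels flagged
```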

Blueprinting of Normalized Supply Chain Processes: Results in Implementing Normalized Software Systems

With technology evolving every day and global competition increasing, industries are always under pressure to be the best. They need to provide good-quality products at competitive prices, when and how the customer wants them. To achieve this level of service, products and their respective supply chain processes need to be flexible and evolvable; otherwise, changes will be extremely expensive, slow, and prone to combinatorial effects. These combinatorial effects impact the whole organizational structure from the management, financial, documentation, and logistics perspectives, and especially from the perspective of the Enterprise Resource Planning (ERP) information system. By applying the normalized systems concept/theory to segments of the supply chain, we believe these effects can be minimized, especially at the time of launching an organization-wide global software project. The purpose of this paper is to point out that if an organization wants to develop software from scratch or implement an existing ERP system for its business needs, and if its business processes are normalized and modular, then most probably this will yield a normalized and modular software system that can be easily modified when the business evolves. Another important goal of this paper is to increase awareness regarding the design of business processes in a software implementation project. If the blueprints created are normalized, then the software developers and configurators can map those modular blueprints into modular software. This paper only prepares the ground for further studies; the above concept will be supported by going through the steps of developing, configuring, and/or implementing a software system for an organization using two methods: the Software Development Life Cycle (SDLC) method and the Accelerated SAP (ASAP) implementation method. Both methods start with the customer requirements, then blueprint the business processes, and finally map those processes into a software system. Since those requirements and processes are the starting point of the implementation, normalizing them should result in normalized software.

Intraoperative ICG-NIR Fluorescence Angiography Visualization of Intestinal Perfusion in Primary Pull-Through for Hirschsprung Disease

Purpose: Assessment of anastomotic perfusion in Hirschsprung disease using Indocyanine Green (ICG) near-infrared (NIR) fluorescence angiography. Introduction: Anastomotic stricture and leak are well-known complications of Hirschsprung pull-through procedures. Complications are due to tension, infection, and/or poor perfusion. While a surgeon can visually determine and control the amount of tension and contamination, assessment of perfusion is subject to surgeon judgment. Intraoperative use of ICG-NIR enhances this decision-making process by illustrating perfusion intensity and adequacy in the pulled-through bowel segment. This technique, proven to reduce anastomotic stricture and leak in adults, has not, to our knowledge, been studied in children. ICG, an FDA-approved, nontoxic, non-immunogenic, intravascular (IV) dye, has been used in adults and children for over 60 years, with few side effects. ICG-NIR was used in this report to demonstrate the adequacy of perfusion during transanal pull-through for Hirschsprung disease. Method: Eight patients with Hirschsprung disease were evaluated with ICG-NIR technology. Levels of the affected area ranged from sigmoid to total colonic Hirschsprung disease. After leveling, but prior to anastomosis, ICG was administered at 1.25 mg (< 2 mg/kg) and perfusion was visualized using an NIR camera, before and during anastomosis. Video and photo imaging were performed, and perfusion of the bowel was compared to surrounding tissues. This showed the degree of perfusion and the demarcation between perfused and non-perfused bowel. The anastomosis was completed uneventfully and all patients did well. Results: There were no complications of stricture or leak. Five of eight patients (62.5%) had a modification of the plan based on ICG-NIR imaging. Conclusion: Technologies that enhance surgeons' ability to visualize bowel perfusion prior to anastomosis in Hirschsprung patients may help reduce post-operative complications. Further studies are needed to assess the potential benefits.

Advancement of Oscillating Water Column Wave Energy Technologies through Integrated Applications and Alternative Systems

Wave energy converter technologies continue to show good progress in worldwide research. One of the most researched technologies, the Oscillating Water Column (OWC), is arguably one of the most popular categories of converter technology due to its robustness, simplicity, and versatility. However, the versatility of the OWC is still largely untapped, with most deployments following similar trends with respect to applications and operating systems. As the competitiveness of the energy market continues to increase, so does the demand for wave energy technologies to be innovative. For existing wave energy technologies, this requires identifying areas to diversify for a lower cost of energy with respect to applications and synergies or integrated systems. This paper provides a review of all OWC systems integrated into alternative applications, past and present. The aspects of and variations in their design, deployment, and system operation are discussed. Particular focus is given to multi-OWCs (M-OWCs) and their great potential to increase energy capture on a larger scale, especially in synergy applications. It is made clear that these steps need to be taken to make wave energy a competitive and viable option in the renewable energy mix, as progress to date shows that stand-alone single-function devices are not economical. Findings reveal that development is trending toward these integrated applications in order to reduce the Levelised Cost of Energy (LCOE) and will ultimately continue in this direction.

Digital Transformation of Payment Systems Using Field Service Management

Like many other industries, the payment industry has been affected by digital transformation. Digital transformation is crucial in this sector because the payment industry is considered a leader in digital and emerging technologies, and because the digitalization of other industries, such as retail, health, and telecommunications, also depends on the growth of digitalized payment systems. One of the technological innovations in service management is Field Service Management (FSM). Although FSM is widely used in industries such as petrochemicals, health, and maintenance, it can also be adopted in the payment industry, transforming it into a more agile and efficient one. Accordingly, the present study pays close attention to the application of FSM in the payment industry. Given the importance of merchants' bargaining power in the payment industry, this study aims to use FSM in a digital transformation initiative with a targeted focus on providing real-time services to merchants. The research method consists of three parts. First, through a review of past research, applications of FSM in the payment industry are considered. In the next step, merchants' benefits from using FSM, such as emotional, functional, economic, and social benefits, are identified using in-depth interviews and content analysis. In the final step, the related business model for helping the payment industry transform into a more agile and efficient one is considered. The results revealed the 10 main pillars required to realize the digital transformation of payment systems using FSM.

Laser Data Based Automatic Generation of Lane-Level Road Map for Intelligent Vehicles

With the development of intelligent vehicle systems, a high-precision road map is increasingly needed in many respects. Automatic lane-line extraction and modeling are the most essential steps in the generation of a precise lane-level road map. In this paper, an automatic lane-level road map generation system is proposed. To extract the road markings on the ground, a multi-region Otsu thresholding method is applied, which computes the intensity threshold of the laser data that maximizes the variance between background and road markings. The extracted road-marking points are then projected onto a raster image and clustered using a two-stage clustering algorithm. Lane lines are subsequently recognized from these clusters by the shape features of their minimum bounding rectangles. To ensure the storage efficiency of the map, the lane lines are approximated by cubic polynomial curves using a Bayesian estimation approach. The proposed lane-level road map generation system has been tested under urban and expressway conditions in Hefei, China. The experimental results on these datasets show that our method achieves excellent extraction and clustering performance, and the fitted lines reach a high positional accuracy with an error of less than 10 cm.
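As a small illustration of the final fitting step, the sketch below approximates a cluster of extracted lane-line points by a cubic polynomial; an ordinary least-squares fit via numpy.polyfit stands in for the Bayesian estimation used in the paper, and the data are synthetic.

```python
# Minimal sketch of the final fitting step: approximate a cluster of extracted
# lane-line points by a cubic polynomial. An ordinary least-squares fit stands
# in here for the Bayesian estimation used in the paper; the data are synthetic.
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 50, 200)                       # along-road coordinate (m)
y_true = 0.0002 * x**3 - 0.01 * x**2 + 0.05 * x   # illustrative lane shape (m)
y = y_true + rng.normal(0, 0.05, x.size)          # noisy extracted points

coeffs = np.polyfit(x, y, deg=3)                  # cubic lane-line model
residuals = np.polyval(coeffs, x) - y_true
print(coeffs, float(np.abs(residuals).max()))     # max deviation in metres
```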

Soil/Phytophysiognomy Relationship in the Southeast of Chapada Diamantina, Bahia, Brazil

This study aims to characterize the physicochemical aspects of the soils of southeastern Chapada Diamantina, Bahia, in relation to the phytophysiognomies of this area: rupestrian field, small savanna (savanna fields), small dense savanna (savanna fields), savanna (Cerrado), dry thorny forest (Caatinga), dry thorny forest/savanna, scrub (Carrasco, an ecotone), forest island (seasonal semi-deciduous forest, Capão), and seasonal semi-deciduous forest. To achieve this objective, soil samples were collected in each plant formation and analyzed in the soil laboratory of ESALQ - USP in order to assess soil fertility through the determination of pH, organic matter, phosphorus, potassium, calcium, magnesium, potential acidity, sum of bases, cation exchange capacity, and base saturation. The composition of the soil particles, that is, the texture, was also determined, a step carried out in the terrestrial ecosystems laboratory of the Department of Ecology of USP and in the soil laboratory of ESALQ. Another important factor studied was the variation in the vegetation cover of the region as a function of soil moisture in the different physiographic environments. In addition, average soil moisture data were compared with precipitation data from three locations with very different phytophysiognomies. The soils found in this part of Bahia can be classified into five classes, with a predominance of oxisols. All of these classes show a great diversity of physical and chemical properties, as can be seen in photographs and in the particle size and fertility analyses. The deepest soils are located in the Central Pediplano of Chapada Diamantina, where the dirty field (campo sujo), the clean field (campo limpo), the Carrasco scrub, and the semi-deciduous seasonal forest (Capão) occur, while the shallowest soils were found in the rupestrian field, the dry thorny forest, and the savanna fields, the latter located on a hillside. As for the variation of soil water in the region, the data indicate large spatial variations in moisture in both the rainy and dry periods.

Step Method for Solving a Nonlinear Two-Delay Differential Equation in Parkinson's Disease

Parkinson's disease (PD) is a heterogeneous disorder with common age of onset, symptoms, and progression levels. In this paper, we solve the PD model analytically, as a nonlinear delay differential equation, using the method of steps. The step method transforms a system of delay differential equations (DDEs) into a sequence of systems of ordinary differential equations (ODEs). For some numerical examples the analytical solution is difficult to obtain, so we approximate it by applying the Picard and Taylor methods to the resulting ODEs.
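The following minimal sketch illustrates the method of steps on a single-delay linear DDE (not the paper's two-delay Parkinson's model): on each interval of length equal to the delay, the delayed term is known from the previous interval, so the DDE reduces to an ODE that is integrated numerically.

```python
# Minimal sketch of the method of steps for a single-delay linear DDE,
# x'(t) = -x(t - 1) with history x(t) = 1 for t <= 0 (illustrative equation,
# not the paper's two-delay model). On each interval [k, k+1] the delayed
# term is known from the previous interval, so the DDE reduces to an ODE.
import numpy as np
from scipy.integrate import solve_ivp

tau = 1.0
history = lambda t: np.array([1.0])          # x(t) for t <= 0
segments = [history]                         # solution, one callable per step

for k in range(3):                           # advance three delay intervals
    prev = segments[-1]
    rhs = lambda t, y, prev=prev: -prev(t - tau)
    y0 = segments[-1](k * tau) if k else history(0.0)
    sol = solve_ivp(rhs, (k * tau, (k + 1) * tau), y0, dense_output=True)
    segments.append(sol.sol)                 # dense interpolant of this step

print(segments[1](0.5), segments[2](1.5))    # x(0.5) and x(1.5)
```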

Analysis of Urban Slum: Case Study of Korail Slum, Dhaka

Bangladesh is one of the poorest countries in the world. There are several reasons for this, and uncontrolled population growth is one of the prime ones; others include low economic progress, imbalanced resource management, unemployment and underemployment, urban migration, and natural catastrophes. As a result, the number of urban poor is inevitably increasing in every urban area of Bangladesh, and Dhaka is the most affected city. In addition, the scarcity of urban land, housing, infrastructure, and amenities puts pressure on cities, and encroachment on open space and wetlands causes environmental degradation. The government has little or no control over these issues due to poor policy and management, political pressure, and a lack of resource management. Unfortunately, over-centralization and bureaucracy create unnecessary delays and interruptions in any government initiative. There is also no coordination between the government and private-sector developers to solve the problems of the urban poor. To understand the problems of this huge population, this paper analyzes one of the single largest slum areas in Dhaka, Korail Slum. The study focuses on socio-demographic analysis, morphological pattern, and the role of different actors responsible for the improvement of the area, and recommends some possible steps for determining the potential outcomes.

A Continuous Real-Time Analytic for Predicting Instability in Acute Care Rapid Response Team Activations

A reliable, real-time, and non-invasive system that can identify patients at risk of hemodynamic instability is needed to aid clinicians in their efforts to anticipate patient deterioration and initiate early interventions. The purpose of this pilot study was to explore the clinical capability of a real-time analytic from a single lead of an electrocardiograph to correctly distinguish between rapid response team (RRT) activations due to hemodynamic (H-RRT) and non-hemodynamic (NH-RRT) causes, as well as to predict H-RRT cases with actionable lead times. The study consisted of a single-center, retrospective cohort of 21 patients with RRT activations from step-down and telemetry units. Through electronic health record review, and blinded to the analytic's output, clinicians categorized each patient into H-RRT and NH-RRT cases. The analytic output and the categorization were compared, and the prediction lead time prior to the RRT call was calculated. The analytic correctly distinguished between H-RRT and NH-RRT cases with 100% accuracy, demonstrating 100% positive and negative predictive values and 100% sensitivity and specificity. In H-RRT cases, the analytic detected hemodynamic deterioration with a median lead time of 9.5 hours prior to the RRT call (range 14 minutes to 52 hours). The study demonstrates that an electrocardiogram (ECG)-based analytic has the potential to provide clinical decision and monitoring support, helping caregivers identify at-risk patients within a clinically relevant timeframe and allowing increased vigilance and early interventional support to reduce the chances of continued patient deterioration.
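For reference, the four reported measures follow directly from confusion-matrix counts; the sketch below shows the calculation with hypothetical counts, not the study's raw data.

```python
# Illustrative calculation of the reported metrics from confusion-matrix counts.
# The counts below are hypothetical, not the study's data (the paper reports
# 100% for all four measures on its 21-patient cohort).
tp, fp, fn, tn = 9, 0, 0, 12      # hypothetical H-RRT vs NH-RRT classifications

sensitivity = tp / (tp + fn)      # true positive rate
specificity = tn / (tn + fp)      # true negative rate
ppv = tp / (tp + fp)              # positive predictive value
npv = tn / (tn + fn)              # negative predictive value
print(sensitivity, specificity, ppv, npv)
```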

Process Optimization and Automation of Information Technology Services in a Heterogeneous Digital Environment

With customers' ever-increasing expectations of fast service provisioning for all their business needs, information technology (IT) organizations, as business partners, have to cope with this demanding environment and deliver their services in the most effective and efficient way. The purpose of this paper is to identify optimization and automation opportunities for the top requested IT services in a heterogeneous digital environment with a widely distributed customer base. In collaboration with systems, processes, and subject matter experts (SMEs), the processes in scope were approached by analyzing four years of related historical data, identifying and surveying stakeholders, modeling the as-is processes, and studying systems integration/automation capabilities. This effort resulted in identifying several pain areas, including standardization, unnecessary customer and IT involvement, manual steps, systems integration, and performance measurement. These pain areas were addressed by standardizing the top five requested IT services, eliminating/automating 43 steps, and utilizing a single platform for end-to-end process execution. In conclusion, the optimization of IT service request processes in a heterogeneous digital environment with a widely distributed customer base is challenging, yet achievable without compromising service quality and customers' added value. Further studies can focus on measuring the value of the eliminated/automated process steps to quantify the enhancement impact. Moreover, a similar approach can be utilized to optimize other IT service requests, with a focus on business criticality.

Finite Element Evaluation of the Effect of Regular Cavities on the Steel Plate Element of the Steel Plate Shear Wall

The steel plate shear wall is one of the most common and widely used energy dissipation systems in structures, and with the increase in the construction of steel structures it is used today as a damping system. In the present study, a steel plate shear wall with dimensions of 5×3 m and a thickness of 0.024 m, spanning two floors of total height above the base level, was modeled with the finite element method in Abaqus software. The loading is applied as a concentrated load at the upper point of the shear wall on the second floor, using a buckling (Buckle) step type. The mesh is applied in the length and width directions of the shear wall, with sizes of 0.02 and 0.033, respectively, and is of sweep type. Finally, it was found that, in the steel plate shear wall with cavities (CSPSW) compared to the SPSW model, S (Mises), Smax (In-Plane Principal), Smax (In-Plane Principal - ABS), and Smax (Min Principal) increased by 53%, 70%, 68%, and 43%, respectively. The presence of cavities thus increases the estimated stresses, but it also causes the critical stresses and deformations to be moved away from the inner surface of the shear wall and transferred to the desired sections (the regular cavities). This can be suggested as a solution in seismic design and structural improvement for directing possible damage during an earthquake or storm to a desired, pre-designed location in the structure.

Battery Energy Storage System Economic Benefits Assessment for Network Frequency Control

A methodology is considered here for evaluating the economic benefit of providing primary frequency control with a Battery Energy Storage System (BESS). In this methodology, two control types (basic and hysteresis) are implemented, and the corresponding minimum storage power that keeps the frequency drop within a given threshold under a given contingency is identified and compared using DIgSILENT PowerFactory software. Following this step, the corresponding energy storage capacity (in MWh) is calculated. As PowerFactory is dedicated to dynamic simulation for transient analysis, a first-order model of the IEEE 9-bus grid used for the PowerFactory analysis is characterized and implemented in MATLAB-Simulink. Primary frequency control is simulated with the two control types over one month of grid frequency deviation data on this Simulink model. This simulation yields the energy throughput of both the basic and hysteresis BESSs. It emerges that the 15-minute operating band of the battery capacity allocated to frequency control is sufficient under the considered disturbances. A sensitivity analysis on the width of the control deadband is then performed for the two control types. Varying the deadband width leads to an identical sizing, with the hysteresis control showing better frequency control at the cost of a higher delivered throughput compared to the basic control. An economic analysis comparing the cost of the sized BESS to the potential revenues is then performed.
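As a simplified illustration of how the two control types differ in delivered throughput, the sketch below applies droop-based primary frequency control with a basic deadband and with a hysteresis deadband to a synthetic frequency-deviation series; the droop gain, band widths, and data are illustrative assumptions, not the study's PowerFactory or Simulink models.

```python
# Minimal sketch (synthetic data, illustrative parameters): primary frequency
# control with a basic deadband and with a hysteresis deadband, comparing the
# energy throughput of the battery in each case.
import numpy as np

rng = np.random.default_rng(2)
dt = 1.0                                       # sample period (s)
df = rng.normal(0.0, 0.03, 86_400)             # one day of frequency deviation (Hz)
droop = 10.0                                   # illustrative droop gain (MW/Hz)
deadband, release = 0.02, 0.01                 # entry / release bands (Hz)

def throughput(df, hysteresis):
    """Energy cycled through the battery (MWh) for one control strategy."""
    power = np.zeros_like(df)
    active = False
    for i, d in enumerate(df):
        if hysteresis:
            # enter control beyond the deadband, stay active until back inside the release band
            active = abs(d) > deadband or (active and abs(d) > release)
            if active:
                power[i] = -droop * d
        else:
            if abs(d) > deadband:
                power[i] = -droop * (d - np.sign(d) * deadband)
    return np.abs(power).sum() * dt / 3600.0

print(throughput(df, hysteresis=False), throughput(df, hysteresis=True))
```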

Uncertainty Analysis of a Hardware-in-the-Loop Setup for Testing Products Related to Building Technology

Hardware-in-the-Loop (HIL) testing is used to test and validate a particular product, especially in building technology, where it is important to test products for their efficiency. The test rig in the HIL simulator may contribute some uncertainty to the measured efficiency. These uncertainties include physical uncertainties and scenario-based uncertainties. In this paper, a simple uncertainty analysis framework for an HIL setup is shown, considering only the physical uncertainties. The entire modeling of the HIL setup is done in Dymola. The uncertainty sources are identified based on available knowledge of the components and on expert knowledge. For the propagation of uncertainty, Monte Carlo simulation is used, since it is reliable and easy to use. This article shows how an HIL setup can be modeled and how uncertainty propagation can be performed on it. Such an approach is not common in building energy analysis.
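As a simplified illustration of Monte Carlo uncertainty propagation (not the Dymola model), the sketch below propagates assumed sensor uncertainties through a hypothetical efficiency metric, the ratio of thermal output to electrical input.

```python
# Minimal sketch of Monte Carlo uncertainty propagation: a hypothetical
# efficiency metric (thermal output over electrical input) evaluated under
# normally distributed measurement uncertainties. This is not the Dymola model;
# the quantities and standard deviations are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(3)
n = 100_000

flow = rng.normal(0.20, 0.005, n)       # water mass flow (kg/s), with sensor noise
dT = rng.normal(5.0, 0.10, n)           # temperature rise across the unit (K)
p_el = rng.normal(1.10, 0.02, n)        # electrical input power (kW)
cp = 4.186                              # specific heat of water (kJ/kg/K)

efficiency = (flow * cp * dT) / p_el    # coefficient of performance (-)
mean, std = efficiency.mean(), efficiency.std()
lo, hi = np.percentile(efficiency, [2.5, 97.5])
print(f"COP = {mean:.2f} +/- {std:.2f} (95% interval {lo:.2f}-{hi:.2f})")
```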

The Influence of Strengthening on the Fundamental Frequency and Stiffness of a Confined Masonry Wall with an Opening for a Door

This paper presents the observations from a series of shaking-table tests performed on a 1:1 scaled confined masonry wall model with an opening for a door – specimens CMDuS (confined masonry wall with an opening for a door before strengthening) and CMDS (confined masonry wall with an opening for a door after strengthening). Frequency and stiffness changes before and after GFRP (Glass Fiber Reinforced Plastic) wall strengthening are analyzed. Definition of the dynamic properties of the models was the first step of the experimental testing, which provided important information about the achieved stiffness (natural frequencies) of the model. The natural frequency was defined in the Y direction of the model by applying resonant frequency search tests. It is important to mention that both specimens, CMDuS and CMDS, were subjected to the same effects. The tests were carried out in the laboratory of the Institute of Earthquake Engineering and Engineering Seismology (IZIIS), Skopje. The specimens were examined separately on the shaking table, with uniaxial, in-plane excitation. After testing, the samples were strengthened with GFRP and re-tested. The initial frequency of the undamaged model CMDuS is 13.55 Hz, while at the end of the testing the frequency decreased to 6.38 Hz. This emphasizes the reduction of the initial stiffness of the model due to damage, especially in the masonry and the tie-beam to tie-column connection. After strengthening of the damaged wall, the natural frequency increases to 10.89 Hz. This highlights the beneficial effect of the strengthening. After completion of dynamic testing of CMDS, the natural frequency is reduced to 6.66 Hz.
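Since the mass of the model is essentially unchanged between tests, the relative stiffness can be estimated from the measured natural frequencies using k ∝ f²; the short sketch below applies this to the frequencies quoted above.

```python
# Relative stiffness estimated from the reported natural frequencies, using
# k ~ f^2 for an (approximately) unchanged mass. Values are those quoted above.
f_initial, f_damaged, f_strengthened, f_final = 13.55, 6.38, 10.89, 6.66  # Hz

for label, f in [("damaged", f_damaged),
                 ("strengthened", f_strengthened),
                 ("after final test", f_final)]:
    print(f"{label}: k/k0 = {(f / f_initial) ** 2:.2f}")
```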

Simulation and Analysis of Passive Parameters of Building in eQuest: A Case Study in Istanbul, Turkey

With the rapid development of urbanization and the improvement of living standards in the world, energy consumption and carbon emissions of the building sector are expected to increase in the near future; because of that, energy-saving issues have become more important among engineers. Moreover, the building sector is a major contributor to energy consumption and carbon emissions. The concept of the efficient building appeared in response to the need to reduce energy demand in this sector, with the main purpose of shifting from standard buildings to low-energy buildings. Although energy saving should happen in all steps of a building's life cycle (material production, construction, demolition), the main concept of the energy-efficient building is to save energy during the life expectancy of the building by using passive and active systems, without sacrificing comfort and quality. The main aim of this study is to investigate passive strategies (those that do not need energy consumption or that use renewable energy) to achieve energy-efficient buildings. Energy retrofit measures were explored with eQuest software, using a case study as a base model. The study investigates the influence of major factors such as the thermal transmittance (U-value) of the materials, windows, shading devices, thermal insulation, the ratio of exposed envelope, the window/wall ratio, and the lighting system on the energy consumption of the building. The base model was located in Istanbul, Turkey. The impact of eight passive parameters on energy consumption was assessed. After analyzing the base model in eQuest, a final scenario with good energy performance was suggested. The results showed that decreases in the U-values of the materials, the exposed envelope ratio, and the windows had a significant effect on energy consumption. Finally, annual savings of about 10.5% in electricity consumption and about 8.37% in gas consumption were achieved in the suggested model.

Supervisor Controller-Based Colored Petri Nets for Deadlock Control and Machine Failures in Automated Manufacturing Systems

This paper develops a robust deadlock control technique for shared and unreliable resources in automated manufacturing systems (AMSs) based on structural analysis and colored Petri nets, consisting of three steps. The first step uses strict minimal siphon control to create a live (deadlock-free) system that does not consider resource failure. The second step uses an approach based on colored Petri nets, in which all monitors designed in the first step are merged into a single monitor. The third step addresses the deadlock control problems caused by resource failures: for all resource failures in the Petri net model, a common recovery subnet based on colored Petri nets is proposed and added to the system obtained in the second step to make it reliable. The proposed approach is evaluated using an AMS from the literature. The results show that the proposed approach can be applied to an unreliable complex Petri net model, has a simpler structure and less computational complexity, and can obtain one common recovery subnet to model all resource failures.
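For context, a deadlock is a marking in which no transition is enabled; the minimal token-game sketch below (not the paper's supervisor or recovery subnet) shows such a marking arising from a classic circular wait between two processes sharing two resources.

```python
# Minimal token-game sketch (not the paper's supervisor): detect a deadlocked
# marking, i.e., one where no transition is enabled, in a tiny Petri net with
# two processes acquiring two shared resources in opposite order.
pre = {                      # transition -> places it consumes tokens from
    "t1": {"p1": 1, "rA": 1},
    "t2": {"p2": 1, "rB": 1},
    "t3": {"p3": 1, "rB": 1},
    "t4": {"p4": 1, "rA": 1},
}
post = {                     # transition -> places it produces tokens into
    "t1": {"p2": 1},
    "t2": {"p5": 1, "rA": 1, "rB": 1},
    "t3": {"p4": 1},
    "t4": {"p6": 1, "rA": 1, "rB": 1},
}

def enabled(marking):
    return [t for t, need in pre.items()
            if all(marking.get(p, 0) >= n for p, n in need.items())]

def fire(marking, t):
    m = dict(marking)
    for p, n in pre[t].items():
        m[p] -= n
    for p, n in post[t].items():
        m[p] = m.get(p, 0) + n
    return m

# Both processes grab their first resource: rA and rB are now empty, and
# neither t2 nor t4 can fire, so the net is deadlocked.
m = {"p1": 1, "p3": 1, "rA": 1, "rB": 1}
m = fire(m, "t1")
m = fire(m, "t3")
print(enabled(m))            # [] -> deadlocked marking
```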