Fatigue Life Prediction on Steel Beam Bridges under Variable Amplitude Loading

Steel bridges are normally subjected to random loads with varying traffic frequencies. As structures with dynamic behavior, they are prone to the fatigue failure process, in which a crack nucleates, grows, and leads to failure. After an existing flaw is located and sized, it is important to predict the crack propagation and the convenient time for repair. Fracture mechanics and fatigue concepts are therefore essential for a proper approach to the problem. To study fatigue crack growth, a computational code was developed using the root mean square (RMS) and cycle-by-cycle models. The influence of variable amplitude loading on the predicted structural life is examined. Different load histories and initial crack lengths were considered as input variables, and the dispersion of the predicted structural life resulting from different initial parameters was evaluated.
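The two growth models named above can be sketched with the standard Paris law, da/dN = C(ΔK)^m. The sketch below is illustrative only: the constants C, m, the geometry factor Y, and the load block are hypothetical placeholders, not values from the paper.

```python
import math

# Illustrative Paris-law constants (mm/cycle, MPa*sqrt(mm)); hypothetical, not from the paper
C, m = 1e-12, 3.0
Y = 1.12  # geometry factor, assumed constant for this sketch

def delta_K(stress_range, a):
    """Stress intensity factor range for a crack of length a (mm)."""
    return Y * stress_range * math.sqrt(math.pi * a)

def grow_cycle_by_cycle(a0, stress_history):
    """Integrate da/dN = C * (dK)^m one load cycle at a time."""
    a = a0
    for s in stress_history:
        a += C * delta_K(s, a) ** m
    return a

def grow_rms(a0, stress_history):
    """Replace the variable-amplitude history by its RMS-equivalent constant range."""
    s_rms = math.sqrt(sum(s * s for s in stress_history) / len(stress_history))
    a = a0
    for _ in stress_history:
        a += C * delta_K(s_rms, a) ** m
    return a

history = [80.0, 120.0, 100.0, 150.0, 90.0] * 1000  # hypothetical load block (MPa)
a_cbc = grow_cycle_by_cycle(1.0, history)
a_rms = grow_rms(1.0, history)
```

Because m > 2, the cycle-by-cycle integration grows the crack slightly faster than the RMS model for the same history, which is one source of the dispersion the abstract discusses.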

Computational Fluid Dynamics Study on Water Soot Blower Direction in Tangentially Fired Pulverized-Coal Boiler

In this study, Computational Fluid Dynamics (CFD) was used to simulate and predict the path of water from a water soot blower through the ambient flow field of a 300-megawatt tangentially fired pulverized-coal boiler that uses a water soot blower as a cleaning device. To predict the position of the water impact on the side opposite the water soot blower under identical conditions, the nozzle size and water flow rate were fixed in this investigation. The simulation results demonstrated a high degree of accuracy in predicting the direction of water flow onto the boiler's water wall tube, which was validated by comparison with experimental data. The results show that the maximum deviation of the water jet trajectory is 10.2%.

Index t-SNE: Tracking Dynamics of High-Dimensional Datasets with Coherent Embeddings

t-SNE is an embedding method that the data science community has widely adopted. It serves two main tasks: displaying results by coloring items according to their class or feature value, and forensics, giving a first overview of the dataset distribution. Two interesting characteristics of t-SNE are its structure preservation property and its answer to the crowding problem, in which all neighbors in high-dimensional space cannot be represented correctly in low-dimensional space. t-SNE preserves the local neighborhood, and similar items are nicely spaced by adjusting to the local density. These two characteristics produce a meaningful representation, where a cluster's area is proportional to its size in number of items, and relationships between clusters are materialized by closeness in the embedding. The algorithm is non-parametric: the transformation from the high- to the low-dimensional space is computed but not learned, so two initializations of the algorithm lead to two different embeddings. In a forensic approach, analysts would like to compare two or more datasets through their embeddings. A naive approach would be to embed all datasets together; however, this process is costly, as the complexity of t-SNE is quadratic, and it becomes infeasible for too many datasets. Another approach would be to learn a parametric model over an embedding built with a subset of the data. While highly scalable, such a model can map points to exactly the same position, making them indistinguishable, and it cannot adapt to new outliers or concept drift. This paper presents a methodology for reusing an embedding to create a new one in which cluster positions are preserved. The optimization process minimizes two costs, one relative to the embedding shape and the other relative to the match with the support embedding. The embedding-with-support process can be repeated more than once, using the newly obtained embedding as support.
The successive embeddings can be used to study the impact of one variable on the dataset distribution or to monitor changes over time. The method has the same per-embedding complexity as t-SNE, and memory requirements are only doubled. For a dataset of n elements sorted and split into k subsets, the total embedding complexity is reduced from O(n²) to O(n²/k), and the memory requirement from n² to 2(n/k)², which enables computation on recent laptops. The method showed promising results on a real-world dataset, allowing the birth, evolution, and death of clusters to be observed. The proposed approach facilitates identifying significant trends and changes, enabling the monitoring of high-dimensional datasets' dynamics.
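The complexity and memory savings quoted above follow from simple arithmetic; a back-of-the-envelope sketch (with an arbitrary example size n, assuming n divisible by k):

```python
def embedding_cost(n, k):
    """Pairwise-cost proxy: one joint embedding of n points vs. k successive
    embeddings of n/k points each (total n^2/k)."""
    joint = n ** 2
    successive = k * (n // k) ** 2
    return joint, successive

def memory_cost(n, k):
    """Pairwise-affinity memory: full n x n matrix vs. current subset plus the
    support embedding, i.e. 2 * (n/k)^2."""
    return n ** 2, 2 * (n // k) ** 2

n, k = 100_000, 10
joint, successive = embedding_cost(n, k)
mem_joint, mem_successive = memory_cost(n, k)
```

With these example values, the successive scheme needs one tenth of the pairwise work and a fiftieth of the memory of the joint embedding.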

Influence of Inhomogeneous Wind Fields on the Aerostatic Stability of a Cable-Stayed Pedestrian Bridge without Backstays: Experiments and Numerical Simulations

Sightseeing glass bridges located in steep valley areas are being built on a large scale owing to the development of tourism. Their aerostatic stability is strongly affected by the wind field characteristics created by strong wind and special terrain, such as the wind speed and wind attack angle. The case considered here is a cable-stayed pedestrian bridge without backstays, comprising a 60-m cantilever girder with a glass bridge deck, located in an abrupt valley and acting as a viewing platform. The bridge's nonlinear aerostatic stability was analyzed in this paper by segmental model tests and numerical simulation. Based on the aerostatic coefficients of the main girder measured in wind tunnel tests, the nonlinear influences of the structure and the aerostatic load, the inhomogeneous distribution of the torsion angle along the bridge axis, and the influence of the initial attack angle were analyzed using the incremental double iteration method. The results show that the aerostatic response varies with wind speed in a markedly nonlinear way, and the aerostatic instability mode has the character of a spatially coupled bending-twisting deformation. The vertical and torsional deformations of the main girder are larger than its lateral deformation as the wind speed approaches the critical wind speed. Flow at a negative attack angle reduces the bridge's critical stability wind speed, and the influence of a negative attack angle on the aerostatic stability is more significant than that of a positive one. The critical wind speeds of torsional divergence and lateral buckling are both larger than 200 m/s; that is, the bridge will not undergo aerostatic instability under the action of the various wind attack angles.

Simulation and Assessment of Carbon Dioxide Separation by Piperazine Blended Solutions Using E-NRTL and Peng-Robinson Models: A Study of Regeneration Heat Duty

High-pressure carbon dioxide (CO2) absorption from a specific off-gas in a conventional column has been evaluated, for environmental reasons, with the Aspen HYSYS simulator using a wide range of single absorbents and piperazine (PZ) blended solutions. The outlet CO2 concentration, CO2 loading, reboiler power supply, and regeneration heat duty were estimated to identify the most efficient solution in terms of CO2 removal and required heat duty. The property package, which is compatible with all solutions applied in this study, estimates the properties based on the electrolyte non-random two-liquid (E-NRTL) model for electrolyte thermodynamics and the Peng-Robinson equation of state for the vapor phase and liquid hydrocarbon phase properties. The simulation results indicate that PZ and the mixture of PZ and monoethanolamine (MEA) demand the highest regeneration heat duties among the studied single and blended amine solutions, respectively. The blended amine solutions with the lowest PZ concentrations (5 wt% and 10 wt%) were compared to reduce the process cost, among which the blended solution of 10 wt% PZ + 35 wt% MDEA (methyldiethanolamine) was found to be the most appropriate in terms of CO2 content in the outlet gas, rich-CO2 loading, and regeneration heat duty.

A Medical Vulnerability Scoring System Incorporating Health and Data Sensitivity Metrics

With the advent of complex software and increased connectivity, the security of life-critical medical devices is becoming an increasing concern, particularly given their direct impact on human safety. Security is essential, but it is impossible to develop completely secure and impenetrable systems at design time. Therefore, it is important to assess the potential impact on security and safety of exploiting a vulnerability in such critical medical systems. The Common Vulnerability Scoring System (CVSS) calculates the severity of exploitable vulnerabilities. However, for medical devices, it does not consider the unique challenges of impacts on human health and privacy. Thus, a medical device on which a human life depends (e.g., pacemakers, insulin pumps) can score very low, while a system on which a human life does not depend (e.g., hospital archiving systems) might score very high. In this paper, we present a Medical Vulnerability Scoring System (MVSS) that extends CVSS to address the health and privacy concerns of medical devices. We propose incorporating two new parameters, namely health impact and sensitivity impact. Sensitivity refers to the type of information that can be stolen from the device, and health represents the impact on the safety of the patient if the vulnerability is exploited (e.g., potential harm, life-threatening). We evaluate 15 different known vulnerabilities in medical devices and compare MVSS against two state-of-the-art medical-device-oriented vulnerability scoring systems and the foundational CVSS.
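The general idea of augmenting a CVSS-like base score with the two new parameters can be sketched as follows. This is purely a hypothetical illustration: the abstract does not give the MVSS equations, so the adjustment scales and the additive form below are assumptions of this sketch, not the paper's actual formula.

```python
# Hypothetical impact scales; the real MVSS metric values are defined in the paper.
HEALTH_IMPACT = {"none": 0.0, "harm": 1.5, "life_threatening": 3.0}
SENSITIVITY_IMPACT = {"none": 0.0, "personal": 0.5, "medical_record": 1.0}

def mvss_score(cvss_base, health, sensitivity):
    """Raise a CVSS-like base score (0-10) by the extra medical impacts, capped at 10.0.
    Additive combination is an assumption of this sketch."""
    score = cvss_base + HEALTH_IMPACT[health] + SENSITIVITY_IMPACT[sensitivity]
    return min(round(score, 1), 10.0)

# A life-critical device with a modest CVSS base score ends up scored much higher
# than a non-life-critical system with the same base score.
pacemaker = mvss_score(4.0, "life_threatening", "medical_record")
archive = mvss_score(4.0, "none", "medical_record")
```

The sketch only demonstrates the re-ranking effect the abstract motivates: identical CVSS base scores diverge once health and sensitivity impacts are taken into account.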

Advances on the Understanding of Sequence Convergence Seen from the Perspective of Mathematical Working Spaces

We analyze a first class on the convergence of real number sequences, hereafter named sequences, designed to foster the exploration and discovery of concepts through graphical representations before engaging students in proving. The main goal was to differentiate between sequences and continuous functions of a real variable and to better understand the concepts at an initial stage. We applied the analytic frame of Mathematical Working Spaces, to whose extension to sequences we expect to contribute since, as far as we know, it has only been developed for other objects. The frame is relevant for analyzing how mathematical work is built systematically by connecting the epistemological and cognitive perspectives and involving the semiotic, instrumental, and discursive dimensions.

Classification of Extreme Ground-Level Ozone Based on Generalized Extreme Value Model for Air Monitoring Station

High ground-level ozone (GLO) concentrations adversely affect human health, vegetation, and activities in the ecosystem. In Malaysia, most analyses of GLO concentration are carried out using the average value, which refers to the centre of the distribution, to make a prediction or estimation. However, analyses that focus on the higher, extreme values of GLO concentration are rarely explored. Hence, the objective of this study is to classify the tail behaviour of GLO using the generalized extreme value (GEV) distribution and to estimate the return level using the corresponding member (Gumbel, Weibull, or Frechet) of the GEV family. The results show that the Weibull distribution, also known as the short-tail distribution and considered to have less extreme behaviour, is the best-fitted distribution for four selected air monitoring stations in Peninsular Malaysia, namely Larkin, Pelabuhan Kelang, Shah Alam, and Tanjung Malim, while the Gumbel distribution, considered a medium-tail distribution, is the best-fitted distribution for the Nilai station. The return level of GLO concentration at the Shah Alam station is comparatively higher than at the other stations. Overall, return levels increase with increasing return periods, but the increment depends on the tail type of the GEV distribution. We conduct this study using the maximum likelihood estimation (MLE) method to estimate the parameters at the selected stations in Peninsular Malaysia. Next, the fit of the block maxima series to the GEV distribution is validated using the probability plot, quantile plot, and likelihood ratio test. The profile likelihood confidence interval is used to verify the type of GEV distribution. These results are important as a guide for early notification of future extreme ozone events.
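The tail classification and return-level estimation described above follow the standard GEV formulas. The sketch below uses the textbook return-level expression for block maxima; the location, scale, and shape parameters are hypothetical placeholders, not the paper's fitted values.

```python
import math

def classify_gev_tail(xi, tol=1e-6):
    """Classify the GEV tail by the shape parameter xi."""
    if abs(xi) < tol:
        return "Gumbel (medium tail)"
    return "Frechet (long tail)" if xi > 0 else "Weibull (short tail)"

def return_level(mu, sigma, xi, T):
    """T-period return level z_T with P(X > z_T) = 1/T for block maxima:
    z_T = mu - (sigma/xi) * (1 - y^(-xi)),  y = -log(1 - 1/T)."""
    y = -math.log(1.0 - 1.0 / T)
    if abs(xi) < 1e-6:                  # Gumbel limit as xi -> 0
        return mu - sigma * math.log(y)
    return mu - (sigma / xi) * (1.0 - y ** (-xi))

# Hypothetical Weibull-type fit (xi < 0) for illustration, ppm-scale ozone maxima
z10 = return_level(0.10, 0.02, -0.1, 10)    # 10-period return level
z100 = return_level(0.10, 0.02, -0.1, 100)  # 100-period return level
```

With a negative shape parameter the return levels still increase with the return period, but the increments shrink, matching the "less extreme" short-tail behaviour the abstract attributes to the Weibull case.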

Scientific Methods in Educational Management: The Metasystems Perspective

Although scientific methods have been the subject of a large number of papers, the term 'scientific methods in educational management' is still not well defined. In this paper, we adopt the metasystems perspective to define this term and to distinguish such methods from those used in the eras of the scientific management and knowledge management paradigms. In our opinion, scientific methods in educational management rely on global phenomena, events, and processes and their influence on the educational organization. Currently, scientific methods in educational management are integrated with the phenomena of globalization, cognitivisation, openness, etc. of educational systems and with global events like the COVID-19 pandemic. Concrete scientific methods are nested in a hierarchy of increasingly abstract models of educational management, which form the context of the global impact on education in general and learning outcomes in particular. However, scientific methods can be assigned to a specific mission, strategy, or tactics of educational management in a concrete organization, whether through global management, local development of the school organization, or/and development of the lifelong successful learner. By accepting this assignment, the scientific method becomes a personal goal of each individual within the educational organization, or an option to develop the educational organization to global standards. In our opinion, scientific methods in educational management need to confine their scope to the deep analysis of concrete tasks of the educational system (i.e., teaching, learning, assessment, development), which result in concrete strategies of organizational development. More important is seeking ways toward a dynamic equilibrium between the strategy and tactics of the planetary tasks in the field of global education, which result in a need for ecological methods of learning and communication.
In sum, the distinction between local and global scientific methods depends on the subjective conception of the task assignment, measurement, and appraisal. Finally, we conclude that scientific methods are not holistic in themselves, but rather the strategy and tactics implemented in the global context by an effective educational/academic manager.

Performance of BLDC Motor under Kalman Filter Sensorless Drive

The performance of a permanent magnet brushless direct current (BLDC) motor controlled by a Kalman filter based position-sensorless drive is studied in terms of its dependence on variations of the system's parameters. The effects of changes in the system's parameters on the dynamic behavior of the state variables are verified. The closed-loop control scheme is simulated with the Kalman filter in the feedback line. Two separate data sampling modes are distinguished in analyzing the feedback output from the BLDC motor: (1) equal angular separation and (2) equal time intervals. In case (1), the data are collected at equal intervals Δφ of the rotor's angular position φi, i.e., keeping Δφ = const. In case (2), the data collection time points ti are separated by equal sampling time intervals Δt = const. The effects of the parameter changes on the sensorless control flow are demonstrated, in particular the reduction of instability torque ripples, switching spikes, and torque load balancing. It is specifically shown that efficient suppression of commutation-induced instability torque ripples is achievable by selecting the sampling rate in the Kalman filter settings above a certain critical value. The computational cost of such suppression is shown to be higher for motors with lower winding inductance values.
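The core predict/update recursion of the Kalman filter used in the feedback line can be illustrated with a minimal scalar example. This is a generic one-dimensional sketch with hypothetical noise variances, not the paper's full BLDC state-space model, which involves the motor's electrical and mechanical states.

```python
def kalman_step(x, P, z, A=1.0, Q=1e-4, H=1.0, R=1e-2):
    """One scalar predict/update step: x is the state estimate, P its variance,
    z the new measurement; A is the state transition, H the observation gain,
    Q and R the process/measurement noise variances (all hypothetical here)."""
    # Predict
    x_pred = A * x
    P_pred = A * P * A + Q
    # Update
    K = P_pred * H / (H * P_pred * H + R)   # Kalman gain
    x_new = x_pred + K * (z - H * x_pred)
    P_new = (1.0 - K * H) * P_pred
    return x_new, P_new

# Hypothetical noisy feedback samples around a true value of 1.0
x, P = 0.0, 1.0
for z in [0.9, 1.1, 1.0, 0.95, 1.05]:
    x, P = kalman_step(x, P, z)
```

In a sensorless drive, the same recursion runs per sample, so the sampling mode (equal Δφ versus equal Δt) determines when the update step fires relative to the rotor motion.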

The Applicability of Distillation as an Alternative Nuclear Reprocessing Method

A customized two-stage model has been developed to simulate, analyse, and visualize the distillation of actinides as a useful alternative low-pressure separation method for nuclear recycling. Under optimal conditions of idealized thermodynamic equilibrium stages and total reflux of the distillate, the investigated chloride systems for the separation of such actinides are (A) UCl4-CsCl-PuCl3 and (B) ThCl4-NaCl-PuCl3. In the simulations, uranium tetrachloride in case A is successfully separated by distillation in a six-stage distillation column, and thorium tetrachloride in case B in an eight-stage distillation column. For this, a permissible mole fraction of 1E-06 has been assumed for the residual impurity level. With a further separation effort of eleven to seventeen required separation stages, the monochlorides and plutonium trichloride from both systems A and B are shown in the simulations to be separable as high-purity distillation products.
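For idealized equilibrium stages under total reflux, the classical Fenske equation gives the minimum number of theoretical stages for a binary split, which is a useful cross-check on stage counts of this kind. The sketch below is a generic illustration: the relative volatility value is an assumed placeholder, not a property of the chloride systems in the paper.

```python
import math

def fenske_min_stages(x_dist, x_bottom, alpha):
    """Minimum theoretical stages at total reflux (Fenske equation) for a binary
    split, given the light-key mole fractions in distillate (x_dist) and bottoms
    (x_bottom) and the relative volatility alpha."""
    ratio = (x_dist / (1.0 - x_dist)) * ((1.0 - x_bottom) / x_bottom)
    return math.log(ratio) / math.log(alpha)

# Hypothetical split to a residual impurity mole fraction of 1e-6 on both ends,
# with an assumed relative volatility of 10
n_min = fenske_min_stages(1.0 - 1e-6, 1e-6, 10.0)
```

The stage count scales with the logarithm of the purity ratio and inversely with log(alpha), which is why strongly differing chloride volatilities keep the required columns short even at a 1E-06 impurity target.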

De Broglie Wavelength Defined by the Rest Energy E0 and Its Velocity

In this paper, we take a different approach to the de Broglie wavelength by relating it to relativistic physics. The quantum energy of the photon radiated by a body with a de Broglie wavelength, as it moves with velocity v, can be defined within relativistic physics by the rest energy E₀. In this way, we can show the connection between the quantum of the body's radiation energy and the rest energy E₀, and thus combine what has so far been incompatible, namely relativistic and quantum physics. We therefore discuss the unification of relativistic and quantum physics by introducing the factor k, which is analogous to the Lorentz factor in Einstein's theory of relativity.

Experimental Study on the Variation of Young's Modulus of Hollow Clay Brick Obtained from Static and Dynamic Tests

In parallel with the appearance of new materials, brick masonry has held, and still holds, an essential share of the construction market, with new technical challenges in designing bricks to meet additional requirements. For structural applications, predicting the performance of clay brick masonry allows a significant cost reduction in terms of practical experimentation. The behavior of masonry walls depends on the behavior of their elementary components, such as bricks, joints, and coatings. It is therefore necessary to consider it at different scales (from the scale of the intrinsic material to the real scale of the wall) and then to develop appropriate models using numerical simulations. The work presented in this paper focuses on the mechanical characterization of the terracotta material at ambient temperature. The static Young's modulus obtained from the flexural test shows values different from those of the compression test, as well as from the dynamic Young's modulus obtained from the impulse excitation of vibration test. Moreover, the Young's modulus varies according to the direction in which samples are extracted: the values in the extrusion direction diverge from those in the orthogonal directions. Based on these results, hollow clay bricks can be considered a transversely isotropic bimodulus material.

Photovoltaic Array Cleaning System Design and Evaluation

Dust accumulation on the photovoltaic module's surface results in appreciable losses and negatively affects the generated power. Hence, this paper presents the design of a photovoltaic array cleaning system. The cleaning system uses one drive motor, two guide rails, and four sweepers during the cleaning process. The system was operated experimentally for one month to investigate its effect on the PV array energy output. The energy captured over a month by a PV array cleaned with the proposed system is compared with that captured by a soiled PV array. The results show a 15% increase in energy generation from the cleaned PV array. Building on these results, investigating the optimal scheduling of PV array cleaning could be an interesting research topic.

Select-Low and Select-High Methods for the Wheeled Robot Dynamic States Control

This paper investigates two methods of braking torque control for a wheeled robot. The two methods are applied when the adhesion coefficient under the left-side wheels differs from that under the right-side wheels. In the select-low (SL) method, the braking torque on both wheels is controlled by the signals originating from the wheels on the side with lower adhesion. In the select-high (SH) method, the torque is controlled by the signals originating from the wheels on the side with higher adhesion. The SL method ensures stable and safe robot behavior during the braking process; however, its efficiency is relatively low. The SH method is more efficient in terms of braking time and distance but in some situations may cause wheel locking. It is therefore important to monitor the velocity of all wheels and decide on the braking torque distribution accordingly. In the case of the SH method, the braking torque slope may require a significant decrease in order to avoid wheel locking.
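The selection logic of the two methods can be sketched as follows. The sketch assumes wheel slip is the control signal and uses a simple proportional torque release above a hypothetical slip limit; the paper's actual controller and thresholds are not reproduced here.

```python
def braking_torque_command(slip_left, slip_right, torque_cmd, slip_limit=0.2, mode="SL"):
    """Scale the braking torque using the reference wheel-slip signal chosen by mode.
    SL references the higher slip (the low-adhesion side, closer to locking);
    SH references the lower slip (the high-adhesion side). slip_limit is a
    hypothetical threshold for this sketch."""
    ref_slip = max(slip_left, slip_right) if mode == "SL" else min(slip_left, slip_right)
    if ref_slip > slip_limit:                  # reference wheel approaching lock
        return torque_cmd * slip_limit / ref_slip
    return torque_cmd

# Split-adhesion example: left side slipping heavily (0.4), right side lightly (0.1)
t_sl = braking_torque_command(0.4, 0.1, 100.0, mode="SL")  # torque released early
t_sh = braking_torque_command(0.4, 0.1, 100.0, mode="SH")  # full torque maintained
```

The example reproduces the trade-off stated in the abstract: SL backs off the torque as soon as the worse side approaches locking (stable but less efficient), while SH holds full torque and relies on the better side (efficient but risks locking the low-adhesion wheels).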

Shaking Force Balancing of Mechanisms: An Overview

The balancing of mechanisms is a well-known problem in mechanical engineering because variable dynamic loads cause vibrations, as well as noise, wear, and fatigue of machines. A mechanical system with unbalanced shaking force and shaking moment transmits substantial vibration to the frame. The objective of balancing is therefore to cancel or reduce the variable dynamic reactions transmitted to the frame, which consists in balancing the shaking force and shaking moment. This can be done fully or partially, by internal mass redistribution via added counterweights or by modification of the mechanism's architecture via added auxiliary structures. Balancing problems are of continuing interest to researchers: several laboratories around the world are very active in this area, and new results are published regularly. Despite its long history, mechanism balancing theory continues to be developed, and new approaches and solutions are constantly being reported. Various surveys have been published that disclose the particularities of balancing methods. The author believes this is an appropriate moment to present the state of the art of shaking force balancing studies, complemented by new research results. This paper presents an overview of methods devoted to the shaking force balancing of mechanisms, as well as the historical aspects of the origins and the evolution of the balancing theory of mechanisms.

Vulnerability Analysis for Risk Zones Boundary Definition to Support a Decision Making Process at CBRNE Operations

An effective emergency response to accidents involving chemical, biological, radiological, nuclear, or explosive (CBRNE) materials, which represent highly dynamic situations, requires immediate action with limited time, information, and resources. The aim of this study is to provide the foundation for dividing an unsafe area into risk zones according to the impact of hazardous parameters (heat radiation, thermal dose, overpressure, chemical concentrations). The decision on the boundary values for the three risk zones is based on a vulnerability analysis covering a variety of accident scenarios involving the release of a toxic or flammable substance which either evaporates, ignites, and/or explodes. Critical values are selected for the boundary definition of the Red, Orange, and Yellow risk zones upon examination of the harmful effects likely to cause injuries of varying severity to people and different levels of damage to structures. The obtained results provide the basis for creating a comprehensive real-time risk map for decision support at CBRNE operations.

Investigation of the Effects of Biodiesel Blend on Particulate-Phase Exhaust Emissions from a Light Duty Diesel Vehicle

This study presents an investigation of diesel vehicle particulate-phase emissions with neat ultralow sulphur diesel (B0, ULSD) and a 5% waste cooking oil-based biodiesel blend (B5) in Hong Kong. A Euro VI light duty diesel vehicle was tested under transient (New European Driving Cycle (NEDC)), steady-state, and idling conditions on a chassis dynamometer. Chemical analyses covering organic carbon (OC), elemental carbon (EC), 30 polycyclic aromatic hydrocarbons (PAHs), and 10 oxygenated PAHs (oxy-PAHs) were conducted. The OC fuel-based emission factors (EFs) ranged from 2.86 ± 0.33 to 7.19 ± 1.51 mg/kg for B0 and from 4.31 ± 0.64 to 15.36 ± 3.77 mg/kg for B5. The EFs of EC were low for both fuel blends (0.25 mg/kg or below). With B5, the EFs of total PAHs decreased compared to B0; specifically, B5 reduced total PAH emissions by 50.2%, 30.7%, and 15.2% under NEDC, steady-state, and idling conditions, respectively. When B5 was used, PAHs and oxy-PAHs with lower molecular weight (2 to 3 rings) were reduced, whereas PAHs/oxy-PAHs with medium or high molecular weight (4 to 7 rings) increased. Our study suggests the necessity of taking atmospheric and health factors into account when applying biodiesel as an alternative motor fuel.

The Reproducibility and Repeatability of Modified Likelihood Ratio for Forensics Handwriting Examination

The forensic use of handwriting depends on the analysis, comparison, and evaluation decisions made by forensic document examiners. When using biometric technology in forensic applications, it is necessary to compute a likelihood ratio (LR) quantifying the strength of evidence under two competing hypotheses, namely the prosecution and the defense hypotheses, for which a set of assumptions and methods for a given dataset is adopted. It is therefore important to know how repeatable and reproducible the estimated LR is. This paper evaluates the accuracy and reproducibility of examiners' decisions. Confidence intervals for the estimated LR are presented so as not to obtain an incorrect estimate that could lead to a wrong judgment in a court of law. The LR is fundamentally a Bayesian concept, and we used two LR estimators, namely logistic regression (LoR) and the kernel density estimator (KDE). The repeatability evaluation was carried out by retesting the initial experiment after an interval of six months to observe whether examiners would repeat their decisions for the estimated LR. The experimental results, based on a handwriting dataset, show that the LR has different confidence intervals in different regions, which implies that the LR cannot be estimated with the same certainty everywhere. Although LoR performed better than KDE when tested on the same dataset, the two LR estimators showed a consistent region in which the LR value can be estimated confidently. These two findings advance our understanding of the LR when used to compute the strength of evidence in forensic handwriting examination.
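The KDE-based LR estimation mentioned above can be sketched with two one-dimensional Gaussian kernel density estimators, one per hypothesis. The similarity scores and bandwidth below are hypothetical placeholders, not data from the paper's handwriting dataset.

```python
import math

def gaussian_kde(samples, bandwidth):
    """Return a 1-D Gaussian kernel density estimator over the given samples."""
    norm = 1.0 / (len(samples) * bandwidth * math.sqrt(2.0 * math.pi))
    def pdf(x):
        return norm * sum(math.exp(-0.5 * ((x - s) / bandwidth) ** 2) for s in samples)
    return pdf

# Hypothetical similarity scores for same-writer (prosecution) and
# different-writer (defense) comparisons
same_writer = [0.80, 0.85, 0.90, 0.75, 0.88]
diff_writer = [0.20, 0.30, 0.25, 0.35, 0.15]

p_same = gaussian_kde(same_writer, 0.1)
p_diff = gaussian_kde(diff_writer, 0.1)

def likelihood_ratio(score):
    """LR = P(evidence | prosecution) / P(evidence | defense)."""
    return p_same(score) / p_diff(score)

lr_high = likelihood_ratio(0.85)  # score typical of the same writer
lr_low = likelihood_ratio(0.25)   # score typical of different writers
```

Scores far from both training sets fall where both densities are thin, which is exactly the region where the abstract warns the LR cannot be estimated with the same certainty.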

Geometrical Based Unequal Droplet Splitting Using Microfluidic Y-Junction

Among the various droplet manipulations, controlled droplet splitting is of great significance owing to its ability to increase throughput and operational capability. Furthermore, unequal droplet splitting can provide greater flexibility and a wider range of dilution factors. In this study, we developed two-dimensional, time-dependent complex fluid dynamics simulations to model droplet formation in a flow-focusing device, followed by splitting in a Y-shaped junction with sub-channels of unequal widths. From the results of the numerical study, we correlated the diameters of the droplets in the sub-channels with the Weber number, thereby demarcating the droplet splitting and non-splitting regimes.
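The Weber number used for the correlation above is the standard ratio of inertial to surface-tension forces, We = ρv²L/σ. The sketch below computes it for a hypothetical droplet; the fluid properties and droplet size are illustrative assumptions, not values from this study.

```python
def weber_number(density, velocity, length, surface_tension):
    """We = rho * v^2 * L / sigma: inertial forces relative to surface tension.
    SI units: kg/m^3, m/s, m, N/m."""
    return density * velocity ** 2 * length / surface_tension

# Hypothetical case: a 50-micron water droplet moving at 0.5 m/s, with an assumed
# water/oil interfacial tension of 5 mN/m
We = weber_number(1000.0, 0.5, 50e-6, 5e-3)
```

Larger We means inertia dominates surface tension, favoring break-up at the junction; a threshold in We is therefore a natural way to demarcate the splitting and non-splitting regimes.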