Fatigue Life Prediction on Steel Beam Bridges under Variable Amplitude Loading

Steel bridges are normally subjected to random loads at varying traffic frequencies. They are structures with dynamic behavior that can undergo the fatigue failure process of crack nucleation, growth, and final fracture. Once an existing flaw has been located and sized, it is important to predict the crack propagation and the appropriate time for repair. Fracture mechanics and fatigue concepts are therefore essential for a sound approach to the problem. To study fatigue crack growth, a computational code was developed using the root mean square (RMS) and cycle-by-cycle models. The influence of variable-amplitude loading on the predicted structural life was observed. Different load histories and initial crack lengths were considered as input variables, and the dispersion of the expected structural life resulting from different initial parameters was evaluated.
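
For orientation, the two models can be sketched as Paris-law integrations of a variable-amplitude load history. The following is a minimal illustrative sketch, not the authors' code: the Paris constants C and m, the geometry factor Y, and the synthetic load history are all assumptions.

```python
# Hedged sketch of the RMS and cycle-by-cycle fatigue crack growth models.
# RMS model: collapse the load history into an equivalent constant-amplitude
# stress range and integrate the Paris law da/dN = C * (DeltaK)^m.
# Cycle-by-cycle model: apply each stress range of the history in turn.
# All material/geometry constants are illustrative placeholders.
import math
import numpy as np

def rms_life(stress_ranges, a0, a_crit, C=1e-11, m=3.0, Y=1.12):
    """Cycles to grow a crack from a0 to a_crit (m) using the RMS model (stress in MPa)."""
    ds_rms = math.sqrt(np.mean(np.square(stress_ranges)))   # RMS stress range
    a, cycles = a0, 0
    while a < a_crit:
        dK = Y * ds_rms * math.sqrt(math.pi * a)            # stress-intensity range
        a += C * dK**m                                      # Paris-law increment per cycle
        cycles += 1
    return cycles

def cycle_by_cycle_life(stress_ranges, a0, a_crit, C=1e-11, m=3.0, Y=1.12):
    """Cycle-by-cycle model: repeat the load-history block until the crack is critical."""
    a, cycles = a0, 0
    while a < a_crit:
        for ds in stress_ranges:                            # one pass through the history
            dK = Y * ds * math.sqrt(math.pi * a)
            a += C * dK**m
            cycles += 1
            if a >= a_crit:
                break
    return cycles

# Example: compare both models on a synthetic variable-amplitude history (MPa)
history = np.random.default_rng(0).uniform(40.0, 120.0, 1000)
print("RMS model:           ", rms_life(history, a0=2e-3, a_crit=2e-2), "cycles")
print("cycle-by-cycle model:", cycle_by_cycle_life(history, a0=2e-3, a_crit=2e-2), "cycles")
```

Comparing the two lives for different histories and initial crack lengths is precisely the kind of dispersion study the abstract describes.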

Learning Objects Content Presentation Adaptation Model Considering Students' Learning Styles

Learning styles (LSs) correspond to a person's individual preferences regarding the modes and forms in which he/she prefers to learn throughout the teaching/learning process. Presenting the content of learning objects (LOs) using knowledge about students' LSs offers them digital educational resources tailored to their individual learning preferences. In this context, the most relevant characteristics of the LSs were mapped to and associated with the most appropriate forms of LO content presentation. This was done to define the composition of an adaptive model of LO content presentation considering the LSs, called Adaptation of Content Presentation of Learning Objects Considering Learning Styles (ACPLOLS). LO prototypes were created with interfaces adapted to students' LSs. These prototypes were based on a model created to validate the adopted approaches through experiments with students. The results of subjective measures of students' emotional responses demonstrated that ACPLOLS achieved the desired results regarding the adequacy of the LO interfaces, in accordance with the Felder-Silverman LS Model.

Research on the Protection and Reuse Model of Historical Buildings in Chinese Airports

China constructed a large number of military and civilian airports before and during World War II, and after the wars began large-scale repair, reconstruction, or relocation of these airports. Different protection and reuse strategies have been adopted for airports' historical areas and their historical buildings, such as terminals, hangars, and towers. Based on an assessment of the value of airport historical buildings, this paper studies these different protection and reuse strategies. The protection and reuse models of historical buildings are classified along three dimensions: the airport historical area, the airport historical building complex, and its individual buildings. Specific examples are combined with this classification to discuss and summarize the technical characteristics, protection strategies, and successful experiences of the different modes of protecting and reusing airports' historical areas and historical buildings.

Discrete Breeding Swarm for Cost Minimization of Parallel Job Shop Scheduling Problem

The Parallel Job Shop Scheduling Problem (JSSP) is a multi-objective, multi-constraint NP-hard optimization problem. Traditional Artificial Intelligence (AI) techniques have been widely used; however, they can become trapped in local minima without reaching the optimal solution. We therefore propose a hybrid AI model in which Discrete Breeding Swarm (DBS) is added to traditional AI to avoid this trapping. The model is applied to cost minimization of the Car Sequencing and Operator Allocation (CSOA) problem. Practical experiments show that our model outperforms other techniques in cost minimization.
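
Since the abstract does not detail the DBS operators, the following is a speculative sketch of one breeding-swarm generation for a permutation-encoded schedule, assuming the common recipe of combining discrete swap-based PSO moves with genetic breeding; the operators, parameters, and toy cost function are all illustrative assumptions, not the paper's formulation.

```python
# Illustrative sketch of one Discrete Breeding Swarm (DBS) generation:
# the better half of the swarm drifts toward the global best via swap moves,
# while the worse half is replaced by offspring bred with order crossover
# and swap mutation from tournament-selected parents.
import random

def order_crossover(p1, p2):
    """OX crossover: copy a slice of p1, fill the rest in p2's order."""
    n = len(p1)
    i, j = sorted(random.sample(range(n), 2))
    child = [None] * n
    child[i:j] = p1[i:j]
    fill = [g for g in p2 if g not in child[i:j]]
    for k in range(n):
        if child[k] is None:
            child[k] = fill.pop(0)
    return child

def swap_toward(x, target, rate=0.5):
    """Discrete 'velocity' step: probabilistically swap genes of x toward target."""
    x = x[:]
    for k in range(len(x)):
        if x[k] != target[k] and random.random() < rate:
            x[x.index(target[k])], x[k] = x[k], target[k]
    return x

def dbs_generation(swarm, best, cost):
    swarm.sort(key=cost)
    half = len(swarm) // 2
    movers = [swap_toward(x, best) for x in swarm[:half]]        # PSO-like moves
    offspring = []
    for _ in range(len(swarm) - half):                           # GA breeding
        p1, p2 = (min(random.sample(swarm, 3), key=cost) for _ in range(2))
        child = order_crossover(p1, p2)
        a, b = random.sample(range(len(child)), 2)
        child[a], child[b] = child[b], child[a]                  # swap mutation
        offspring.append(child)
    new_swarm = movers + offspring
    return new_swarm, min(new_swarm + [best], key=cost)

# Toy usage: minimize total weighted position of jobs (stand-in for schedule cost)
cost = lambda perm: sum((k + 1) * job for k, job in enumerate(perm))
swarm = [random.sample(range(8), 8) for _ in range(20)]
best = min(swarm, key=cost)
for _ in range(50):
    swarm, best = dbs_generation(swarm, best, cost)
print(best, cost(best))
```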

Efficacy of Polyfluoroalkyl Substances Filtration with Low-Cost Organic Fiber Filter

The purpose of this study was to evaluate the efficacy of a low-cost filter regarding per- and polyfluoroalkyl substances (PFAS). PFAS are commonly used man-made chemicals found in a variety of household and industrial products, with deleterious effects on humans. The filter consists of a combination of low-cost materials that can be locally procured. Water testing for four PFAS contaminants showed, in each case, an initial concentration of 15 ppt and a final concentration of 3.9 ppt, against ATSDR (Agency for Toxic Substances and Disease Registry) regulation values of 7 ppt for Perfluorooctane sulfonic acid (PFOS), 10.5 ppt for Perfluorononanoic acid (PFNA), 11 ppt for Perfluorooctanoic acid (PFOA), and 70 ppt for Perfluorohexane sulfonic acid (PFHxS). The results indicated a 74% reduction in PFAS concentration in the filtered samples. Regression analysis indicated a validity of 0.9 for the sample data. Initial tests suggest the efficiency of the proposed filter could be far greater if tested at a larger scale. Further testing is highly recommended to validate the data for an innovative solution to a ubiquitous problem.
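
For reference, the reported 74% reduction follows directly from the measured concentrations (each contaminant fell from 15 ppt to 3.9 ppt):

```latex
\frac{C_0 - C_f}{C_0} = \frac{15 - 3.9}{15} = 0.74 \quad \text{(74\% reduction)}
```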

IntelligentLogger: A Heavy-Duty Vehicles Fleet Management System Based on IoT and Smart Prediction Techniques

Both the daily and the long-term management of a fleet of heavy-duty vehicles and construction machinery is an extremely complicated and hard-to-solve issue. This is mainly due to the diversity of the fleet's vehicles and machinery, which concerns not only the vehicle types but also their age/efficiency, as well as to the fleet volume, which is often of the order of hundreds or even thousands of vehicles/machines. In the present paper we present "IntelligentLogger", a holistic heavy-duty fleet management system covering a wide range of diverse fleet vehicles. It is based on hardware and software specifically designed for automated monitoring of vehicle health status and operational cost, for smart maintenance. IntelligentLogger is characterized by a high adaptability that permits it to be tailored to practically any heavy-duty vehicle/machine (of different technologies, modern or legacy, and of dissimilar uses). Contrary to conventional logistics systems, which are characterized by high operational costs and frequent errors, IntelligentLogger provides a cost-effective and reliable integrated solution for the e-management and e-maintenance of the fleet members. The IntelligentLogger system offers the following unique features that guarantee successful management of heavy-duty vehicle/machinery fleets: (a) recording and storage of operating data of motorized construction machinery, in a reliable way and in real time, using specifically designed Internet of Things (IoT) sensor nodes that communicate through the available network infrastructures, e.g., 3G/LTE; (b) use on any machine, regardless of its age, in a universal way; (c) flexibility and complete customization both in terms of data collection and integration with 3rd-party systems, and in terms of processing and drawing conclusions; (d) validation, error reporting and correction, as well as updating of the system's database; (e) artificial intelligence (AI) software for processing information in real time, identifying out-of-normal behavior and generating alerts; (f) a MicroStrategy-based enterprise BI for modeling information and producing reports, dashboards, and alerts focusing on optimal vehicle/machinery usage, as well as maintenance and scrapping policies; (g) a modular structure that allows low implementation costs for the basic, fully functional version, but offers scalability without requiring a complete system upgrade.

Slime Mould Optimization Algorithms for Optimal Distributed Generation Integration in Distribution Electrical Network

This paper proposes a method for determining the optimal point of integration of distributed generation (DG) in a distribution grid. Slime mould optimization is applied to determine the best node in the cases of one and two injection points. The problem is modeled as an optimization problem whose objective is to minimize Joule losses and whose main constraint is to regulate the voltage at each node. The proposed method was implemented in MATLAB and applied to the IEEE 33-node and 69-node networks. Comparing the results obtained with those of other algorithms showed that slime mould optimization algorithms (SMOA) achieve the best reduction of power losses and a good improvement of the voltage profile.
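
The formulation can be sketched as a penalized fitness function over candidate injection nodes and DG sizes. Everything below is an illustrative assumption (in particular the toy load flow); the paper's evaluation relies on a full load flow of the IEEE 33/69-node feeders in MATLAB, and an SMA would replace the simple population search shown here.

```python
# Hedged sketch of the optimization formulation: minimize Joule losses with
# per-node voltage limits handled as a quadratic penalty. The power-flow
# routine is a toy stand-in, not a real feeder model.
import random

V_MIN, V_MAX, PENALTY = 0.95, 1.05, 1e3

def toy_power_flow(dg_node, dg_mw, n_nodes=33):
    """Stand-in load flow: returns (losses in MW, node voltages in p.u.)."""
    base = [1.0 - 0.004 * k for k in range(n_nodes)]             # sagging profile
    volts = [v + dg_mw * 0.002 * max(0, 1 - abs(k - dg_node) / 10)
             for k, v in enumerate(base)]                        # local voltage support
    losses = 0.2 + 0.01 * abs(dg_node - 2 * n_nodes // 3) - 0.03 * dg_mw
    return max(losses, 0.01), volts

def fitness(dg_node, dg_mw):
    losses, volts = toy_power_flow(dg_node, dg_mw)
    violation = sum(max(0, V_MIN - v) ** 2 + max(0, v - V_MAX) ** 2 for v in volts)
    return losses + PENALTY * violation      # objective: losses + voltage penalty

# Simple population search over candidate nodes/sizes
pop = [(random.randrange(1, 33), random.uniform(0.5, 3.0)) for _ in range(30)]
best = min(pop, key=lambda s: fitness(*s))
print(f"best node {best[0]}, DG size {best[1]:.2f} MW, fitness {fitness(*best):.4f}")
```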

Computational Fluid Dynamics Study on Water Soot Blower Direction in Tangentially Fired Pulverized-Coal Boiler

In this study, Computational Fluid Dynamics (CFD) was utilized to simulate and predict the path of water from a water soot blower through the ambient flow field in a 300-megawatt tangentially fired pulverized-coal boiler that uses a water soot blower as a cleaning device. To predict the position of the impact of the water on the side opposite the water soot blower under identical conditions, the nozzle size and water flow rate were fixed in this investigation. The simulation findings demonstrated a high degree of accuracy in predicting the direction of water flow to the boiler's water wall tube, which was validated by comparison with experimental data. The results show that the maximum deviation of the water jet trajectory is 10.2%.

Effect of Birks Constant and Defocusing Parameter on Triple-to-Double Coincidence Ratio Parameter in Monte Carlo Simulation-GEANT4

This project concerns the detection efficiency of the portable Triple-to-Double Coincidence Ratio (TDCR) counter at the National Institute of Metrology of Ionizing Radiation (INMRI-ENEA), which allows direct activity measurement and radionuclide standardization for pure-beta-emitter or pure-electron-capture radionuclides. The dependency of the simulated detection efficiency of the TDCR, computed with the Monte Carlo simulation code Geant4, on the Birks factor (kB) and the defocusing parameter has been examined, especially for low-energy beta-emitter radionuclides such as 3H and 14C, for which this dependency is relevant. The results of this analysis can be used to select the best kB factor and defocusing parameter for computing the theoretical TDCR parameter value. The theoretical results were compared with the available ones, measured with the ENEA portable TDCR detector, for some pure-beta-emitter radionuclides. This analysis improved the knowledge of the characteristics of the ENEA TDCR detector, which can be used as a traveling instrument for in-situ measurements, with particular benefits in many applications in the field of nuclear medicine and in the nuclear energy industry.
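
For context, the Birks constant kB enters through Birks' semi-empirical law for the specific scintillation light yield (a standard relation, stated here for reference rather than taken from the paper):

```latex
\frac{dL}{dx} = \frac{S \,\dfrac{dE}{dx}}{1 + kB \,\dfrac{dE}{dx}}
```

where L is the light output, S the absolute scintillation efficiency, and dE/dx the stopping power. A larger kB means stronger ionization quenching, which is why the simulated efficiency for low-energy emitters such as 3H and 14C, whose electrons have high stopping power, is particularly sensitive to this parameter.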

Threshold Concepts in TESOL: A Thematic Analysis of Disciplinary Guiding Principles

The notion of Threshold Concepts has offered a fertile new perspective on the transformative effects of mastery of particular concepts on student understanding of subject matter and their developing identities as inductees into disciplinary discourse communities. Only by successfully traversing essential knowledge thresholds can neophytes achieve the more sophisticated understandings of subject matter possessed by mature members of a discipline. This paper uses thematic analysis of disciplinary guiding principles to identify nine candidate Threshold Concepts that appear to underpin effective TESOL practice. The relationship between these candidate TESOL Threshold Concepts, TESOL principles, and TESOL instructional techniques appears to be amenable to a schematic representation based on superordinate categories of TESOL practitioner concern and, as such, offers an alternative to the view of Threshold Concepts as a privileged subset of disciplinary core concepts. The paper concludes by exploring the potential of a Threshold Concepts framework to productively inform TESOL initial teacher education (ITE) and in-service education and training (INSET).

Research Design for Developing and Validating Ice-Hockey Team Diagnostics Scale

In the modern world, ice-hockey (and, in a broader sense, team sports) is becoming an increasingly popular field of entertainment. Although the main element is most likely perceived as the show itself, winning is an inevitable part of the successful operation of any sports team. In this paper, the author creates a research design for developing and validating an ice-hockey team-focused diagnostics scale, which enables researchers and practitioners to identify the problems associated with underperforming teams. The construction of the scale starts with personal interviews with experts in the field, carefully chosen from the Hungarian ice-hockey sector. Based on the interviews, the author is then in a position to create the categories and the relevant items for the scale. Once the scale is constructed, the next step is the validation process on a Hungarian sample. Data for validation are acquired through the licensed database of the Hungarian Ice-Hockey Federation, involving Hungarian ice-hockey coaches and players. The Ice-Hockey Team Diagnostics Scale is to be created to orient practitioners in understanding both effective and underperforming teamwork.

Index t-SNE: Tracking Dynamics of High-Dimensional Datasets with Coherent Embeddings

t-SNE is an embedding method that the data science community has widely used. It serves two main tasks: displaying results, by coloring items according to their class or feature value; and forensics, by giving a first overview of the dataset distribution. Two interesting characteristics of t-SNE are its structure preservation property and its answer to the crowding problem, where all neighbors in high-dimensional space cannot be represented correctly in low-dimensional space. t-SNE preserves the local neighborhood, and similar items are nicely spaced by adjusting to the local density. These two characteristics produce a meaningful representation, in which the area of a cluster is proportional to its size in number, and relationships between clusters are materialized by closeness on the embedding. The algorithm is non-parametric: the transformation from high- to low-dimensional space is described but not learned, so two initializations of the algorithm lead to two different embeddings. In a forensic approach, analysts would like to compare two or more datasets using their embeddings. A naive approach would be to embed all datasets together. However, this process is costly, as the complexity of t-SNE is quadratic, and would be infeasible for too many datasets. Another approach would be to learn a parametric model over an embedding built with a subset of data. While this approach is highly scalable, points could be mapped at exactly the same position, making them indistinguishable, and such a model would be unable to adapt to new outliers or to concept drift. This paper presents a methodology to reuse an embedding to create a new one, in which cluster positions are preserved. The optimization process minimizes two costs: one relative to the embedding shape and a second relative to the match with the support embedding. The embedding-with-support process can be repeated more than once with the newly obtained embedding, and the successive embeddings can be used to study the impact of one variable on the dataset distribution or to monitor changes over time. The method has the same complexity as t-SNE per embedding, and memory requirements are only doubled. For a dataset of n elements sorted and split into k subsets, the total embedding complexity is reduced from O(n²) to O(n²/k), and the memory requirement from n² to 2(n/k)², which enables computation on recent laptops. The method showed promising results on a real-world dataset, allowing one to observe the birth, evolution and death of clusters. The proposed approach facilitates identifying significant trends and changes, which enables monitoring of the dynamics of high-dimensional datasets.
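
A minimal sketch of the two-cost idea as I read it (not the authors' implementation): optimize a t-SNE-like embedding whose gradient combines the usual KL divergence between high- and low-dimensional affinities (embedding shape) with a quadratic attachment of each point to its position in a previously computed support embedding, so cluster locations stay coherent across runs. Gaussian bandwidths are fixed (no perplexity calibration) to keep the sketch short, and all names are mine.

```python
# Sketch: anchored t-SNE-style embedding. Cost = KL(P||Q) + lam * ||Y - Y_support||^2.
import numpy as np

def affinities_high(X, sigma=1.0):
    """Normalized Gaussian affinities in the high-dimensional space."""
    d2 = np.square(X[:, None, :] - X[None, :, :]).sum(-1)
    P = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(P, 0.0)
    return P / P.sum()

def kl_grad(Y, P):
    """Gradient of KL(P||Q) with the Student-t kernel used by t-SNE."""
    d2 = np.square(Y[:, None, :] - Y[None, :, :]).sum(-1)
    W = 1.0 / (1.0 + d2)                       # unnormalized Student-t weights
    np.fill_diagonal(W, 0.0)
    Q = W / W.sum()
    PQ = (P - Q) * W
    return 4.0 * ((np.diag(PQ.sum(1)) - PQ) @ Y)

def index_tsne(X, Y_support, lam=0.1, lr=1.0, iters=300):
    P = affinities_high(X)
    Y = Y_support.copy()                       # start from the support layout
    for _ in range(iters):
        g = kl_grad(Y, P) + 2 * lam * (Y - Y_support)   # shape cost + support match
        Y -= lr * g
    return Y

# Usage: re-embed slightly shifted data while anchoring to the old embedding
rng = np.random.default_rng(1)
X_old = rng.normal(size=(60, 5))
X_new = X_old + rng.normal(scale=0.05, size=X_old.shape)
Y_old = rng.normal(size=(60, 2))               # stand-in for a previous t-SNE result
Y_new = index_tsne(X_new, Y_old)
print(np.linalg.norm(Y_new - Y_old) / np.linalg.norm(Y_old))   # small, coherent drift
```

Because each subset of size n/k is embedded separately against its support, the per-run cost stays quadratic in n/k, which is the source of the O(n²/k) total complexity quoted above.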

Influence of Inhomogeneous Wind Fields on the Aerostatic Stability of a Cable-Stayed Pedestrian Bridge without Backstays: Experiments and Numerical Simulations

Sightseeing glass bridges located in steep valley areas are being built on a large scale owing to the development of tourism. Consequently, their aerostatic stability is strongly affected by wind field characteristics, such as wind speed and wind attack angle, created by strong winds and special terrain. The bridge studied here, a cable-stayed pedestrian bridge without backstays comprising a 60-m cantilever girder and a glass deck, is located in an abrupt valley and acts as a viewing platform. The bridge's nonlinear aerostatic stability was analyzed in this paper by a segmental model test and numerical simulation. Based on the aerostatic coefficients of the main girder measured in wind tunnel tests, the nonlinear influences of the structure and the aerostatic load, the inhomogeneous distribution of the torsion angle along the bridge axis, and the influence of the initial attack angle were analyzed using the incremental double iteration method. The results show that the aerostatic response varies with wind speed in a markedly nonlinear way, and that the aerostatic instability mode has the character of a spatially deformed, bending-twisting coupled mode. The vertical and torsional deformations of the main girder are larger than its lateral deformation as the wind speed approaches the critical wind speed. Flow at a negative attack angle reduces the bridge's critical stability wind speed, and the influence of a negative attack angle on the aerostatic stability is more significant than that of a positive one. The critical wind speeds of torsional divergence and lateral buckling are both larger than 200 m/s; that is, the bridge will not undergo aerostatic instability under the action of the various wind attack angles considered.
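
A schematic reading of the incremental double iteration method, offered as my own illustrative reconstruction rather than the authors' code: an outer loop increments the wind speed, and an inner loop re-evaluates the aerostatic loads from the coefficient curves at the deformed attack angle until the torsional response converges; failure of the inner iteration to converge flags the critical wind speed. The single-degree-of-freedom torsional model, the moment-coefficient curve, and all numbers below are toy assumptions (the paper uses a full 3D finite-element model with measured tri-component coefficients, and finds critical speeds above 200 m/s).

```python
# Toy sketch of an incremental double-iteration search for the aerostatic
# critical wind speed of a single torsional degree of freedom.
import math

def Cm(alpha):
    """Toy pitching-moment coefficient vs attack angle (rad); made-up curve."""
    return 0.05 + 1.2 * alpha + 15.0 * alpha ** 3

def torsion_response(U, alpha0, k_theta=4.0e6, rho=1.225, B=6.0, L=60.0,
                     tol=1e-6, max_iter=1000):
    """Inner iteration: update twist until the aerostatic moment balances stiffness."""
    theta = 0.0
    for _ in range(max_iter):
        M = 0.5 * rho * U**2 * B**2 * L * Cm(alpha0 + theta)   # aerostatic moment
        theta_new = M / k_theta                                 # linear structural response
        if abs(theta_new - theta) < tol:
            return theta_new                                    # converged twist (rad)
        theta = theta_new
    return None                                                 # divergence -> instability

def critical_speed(alpha0, dU=1.0, U_max=200.0):
    """Outer loop: increment wind speed until the inner iteration diverges."""
    U = dU
    while U <= U_max:
        if torsion_response(U, alpha0) is None:
            return U
        U += dU
    return math.inf            # stable over the whole range (cf. the >200 m/s result)

for deg in (-3, 0, 3):         # sweep of initial attack angles
    print(f"alpha0 = {deg:+d} deg -> critical speed {critical_speed(math.radians(deg))} m/s")
```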

Methane versus Carbon Dioxide: Mitigation Prospects

Atmospheric carbon dioxide (CO2) has dominated the discussion around the causes of climate change. This reflects the 100-year time horizon for all greenhouse gases that has become the norm. The 100-year time horizon is much too long, and yet almost all mitigation efforts, including those set in the near-term frame of within 30 years, are still geared toward it. In this paper, we show that for a 30-year time horizon, methane (CH4) is the greenhouse gas whose radiative forcing exceeds that of CO2. In our analysis, we use the radiative forcing of greenhouse gases in the atmosphere, because it directly affects the rise in temperature on Earth. We found that in 2019, the radiative forcing (RF) of methane was ~2.5 W/m2 and that of carbon dioxide was ~2.1 W/m2. Under a business-as-usual (BAU) scenario until 2050, these forcings would be ~2.8 W/m2 and ~3.1 W/m2, respectively. There is a substantial spread in the data for anthropogenic and natural methane emissions, as well as for leakages of natural gas (which is primarily CH4) from industrial production through to consumption. For this reason, we estimate the minimum and maximum effects of a reduction of these leakages, and assume an effective immediate reduction of 80%. Such action may serve to reduce the annual radiative forcing of all CH4 emissions by ~15% to ~30%. This translates into a reduction of RF by 2050 from ~2.8 W/m2 to ~2.5 W/m2 in the case of the minimum effect that can be expected, and to ~2.15 W/m2 in the case of the maximum effort to reduce methane leakages. Under BAU, we find that the RF of CO2 will increase from ~2.1 W/m2 now to ~3.1 W/m2 by 2050. We assume a linear reduction of 50% in anthropogenic emissions over the course of the next 30 years, which would reduce the radiative forcing of CO2 from ~3.1 W/m2 to ~2.9 W/m2. In the case of "net zero," the remaining 50% reduction of anthropogenic CO2 emissions would have to come either from emission sources or directly from the atmosphere. In that instance, the total reduction would be from ~3.1 W/m2 to ~2.7 W/m2, or ~0.4 W/m2. To achieve the same radiative forcing as in the scenario of maximum reduction of methane leakages, ~2.15 W/m2, an additional reduction of the radiative forcing of CO2 of approximately 2.7 − 2.15 = 0.55 W/m2 would be required. In total, one would need to remove ~660 GT of CO2 from the atmosphere in order to match the maximum reduction of current methane leakages, and ~270 GT of CO2 from emitting sources to reach "negative emissions". This amounts to over 900 GT of CO2.
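
The headline comparison reduces to differences between the projected 2050 forcings quoted above:

```latex
\underbrace{2.8 - 2.15}_{\text{max CH}_4\ \text{leak reduction}} = 0.65\ \mathrm{W/m^2},\qquad
\underbrace{3.1 - 2.7}_{\text{CO}_2\ \text{``net zero''}} = 0.4\ \mathrm{W/m^2},\qquad
\underbrace{2.7 - 2.15}_{\text{additional CO}_2\ \text{cut needed}} = 0.55\ \mathrm{W/m^2}.
```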

Cybersecurity for Digital Twins in the Built Environment: Research Landscape, Industry Attitudes and Future Direction

Technological advances in the construction sector are helping to make smart cities a reality by means of Cyber-Physical Systems (CPS). CPS integrate information and the physical world through the use of Information and Communication Technologies (ICT). An increasingly common goal in the built environment is to integrate Building Information Models (BIM) with the Internet of Things (IoT) and sensor technologies using CPS. Future advances could see the adoption of digital twins, creating new opportunities for CPS using monitoring, simulation and optimisation technologies. However, researchers often fail to fully consider the security implications. To date, it has not been widely possible to assimilate BIM data and cybersecurity concepts, and security has therefore been overlooked. This paper reviews the empirical literature concerning IoT applications in the built environment and discusses real-world applications of the IoT intended to enhance construction practices, improve people's lives and bolster cybersecurity. Specifically, this research addresses two research questions: (a) How suitable are the current IoT and CPS security stacks for addressing the cybersecurity threats facing digital twins in the context of smart buildings and districts? and (b) What are the current obstacles to tackling cybersecurity threats to built environment CPS? To answer these questions, this paper reviews the current state-of-the-art research concerning digital twins in the built environment, the IoT, BIM, urban cities and cybersecurity. The findings of this study confirmed the importance of using digital twins in both the IoT and BIM. Also, eight reference zones across Europe have gained special recognition for their contributions to the advancement of IoT science. This paper therefore evaluates the use of digital twins in CPS to arrive at recommendations for expanding BIM specifications to facilitate IoT compliance, bolster cybersecurity and integrate digital twin and city standards in the smart cities of the future.

The Mediating Role of Level of Education and Income on the Relationship between Political Ideology and Attitude towards Immigration

This study investigates the impact of conservative and liberal ideological structures on shaping attitudes of acceptance toward immigration, with the contribution of socio-economic status. According to motivated reasoning theory, political ideology has a recurrent impact on the formation of attitudes: conservatives tend to express more hostility toward immigrants, whereas liberals are proposed to be more tolerant toward them. Our findings suggest that political ideology structures individual attitudes when citizens' socio-economic vulnerability and level of education are low enough for them to consider immigrants a threat. Economic vulnerability is therefore proposed to weaken the resistance of ideological predispositions. Group competition theory and labor market competition theory propose threats and factors, such as level of education and economic condition, as fundamental factors that can strengthen or weaken the effects of political ideology on individuals' attitudes towards immigration; these mechanisms operate differently for liberals and conservatives.

Participatory Financial Inclusion Hypothesis: A Preliminary Empirical Validation Using Survey Design

In Nigeria, enormous efforts and resources have, over the years, been expended on promoting financial inclusion (FI); however, it is seemingly discouraging that many of its self-declared FI targets remain unachieved, especially amongst the Rural Dwellers and Actors in the Informal Sectors (RDAIS). Expectedly, many reasons have been earmarked for these failures: low literacy levels, huge informal/rural sectors, etc. This study posits that, in spite of these truly debilitating factors, these FI policy failures could have been avoided or mitigated if the principles of active and better-managed citizens' participation had been strictly followed in the (re)design and implementation of its FI policies. In other words, in a bid to mitigate the prevalent financial exclusion (FE) in Nigeria, this study hypothesizes the significant positive impact of involving the RDAIS in policy-wide decision making in the FI domain, backed by a preliminary empirical validation. The study also introduces the RDAIS-focused Participatory Financial Inclusion Policy (PFIP) as a major FI policy regeneration and improvement tool. The three categories of respondents that served as research subjects are FI experts in Nigeria (n = 72), RDAIS from the very rural/remote village of Unguwar Dogo in Northern Nigeria (n = 43) and RDAIS from another rural village, Sekere (n = 56), in the Southern region of Nigeria. Using a survey design (5-point Likert-scale questionnaires), random/stratified sampling, and descriptive/inferential statistics, the study generally recorded independent consensus amongst these three categories of respondents that the RDAIS's active participation in iterative FI policy initiation, (re)design, implementation and (re)evaluation could indeed yield improved FI outcomes. However, a few questionnaire items also recorded divergent opinions and various statistically (in)significant differences in the mean scores of the three categories. The PFIP (or any customized version of it) should then be carefully integrated into the National Financial Inclusion Strategy (NFIS) of Nigeria (and possibly into the NFIS of other developing countries) to truly and fully provide FI policy integration for these excluded RDAIS and arrest the prevalence of FE.

Catalytic Pyrolysis of Sewage Sludge for Upgrading Bio-Oil Quality Using Sludge-Based Activated Char as an Alternative to HZSM5

Due to concerns about the depletion of fossil fuel sources and the deteriorating environment, investigating the production of renewable energy will play a crucial role in alleviating dependency on mineral fuels. One particular area of interest is the generation of bio-oil through sewage sludge (SS) pyrolysis. SS is a potential candidate, in contrast to other types of biomass, due to its availability and low cost. However, the presence of high-molecular-weight hydrocarbons and oxygenated compounds in SS bio-oil hinders some of its fuel applications. In this context, catalytic pyrolysis is an attainable route to upgrade bio-oil quality. Among the different catalysts (e.g., zeolites) studied for SS pyrolysis, activated chars (AC) are eco-friendly alternatives. The beneficial features of AC derived from SS comprise a comparatively large surface area, porosity, enriched surface functional groups and the presence of a high amount of metal species that can improve catalytic activity. Hence, in this study a sludge-based AC catalyst was fabricated in a single-step pyrolysis reaction with NaOH as the activation agent and was compared with HZSM5 zeolite. The thermal decomposition and kinetics were investigated via thermogravimetric analysis (TGA) to guide and control the pyrolysis and catalytic pyrolysis and to inform the design of the pyrolysis setup. The results indicated that the pyrolysis and catalytic pyrolysis comprise four distinct stages and that the main decomposition reaction occurred in the range of 200-600 °C. The Coats-Redfern method was applied to the 2nd and 3rd devolatilization stages to estimate the reaction order and activation energy (E) from the mass-loss data. The average activation energy (Em) values for the reaction orders n = 1, 2 and 3 were in the ranges of 6.67-20.37 kJ/mol for SS, 1.51-6.87 kJ/mol for HZSM5, and 2.29-9.17 kJ/mol for AC. According to the results, both AC and HZSM5 were able to improve the reaction rate of SS pyrolysis by lowering the Em value. Moreover, to generate bio-oil and examine the effect of the catalysts on its quality, a fixed-bed pyrolysis system was designed and implemented. The composition of the produced bio-oil was analyzed via gas chromatography/mass spectrometry (GC/MS). The selected SS-to-catalyst ratios were 1:1, 2:1 and 4:1. The optimum ratio in terms of cracking the long-chain hydrocarbons and removing oxygen-containing compounds was 1:1 for both catalysts. The upgraded bio-oils with HZSM5 and AC were in the total range of C4-C17, with around 72% in the range of C4-C9. The bio-oil from non-catalytic pyrolysis of SS contained 49.27% oxygenated compounds, while with HZSM5 and AC this dropped to 7.3% and 13.02%, respectively. Meanwhile, the generation of value-added chemicals such as light aromatic compounds was significantly improved in the catalytic process. Furthermore, the fabricated AC catalyst was characterized by BET, SEM-EDX, FT-IR and TGA techniques. Overall, this research demonstrated that AC is an efficient catalyst for the pyrolysis of SS and can be used as a cost-competitive alternative to HZSM5.
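
For orientation, the Coats-Redfern method linearizes the integral rate equation as ln[g(α)/T²] = ln(AR/(βE)) − E/(RT), so a straight-line fit of ln[g(α)/T²] against 1/T over a devolatilization stage yields E from the slope. The sketch below applies this to synthetic data (the paper fits real TGA mass-loss curves); the conversion profile is an assumption.

```python
# Hedged sketch of a Coats-Redfern fit: slope of ln(g(alpha)/T^2) vs 1/T
# equals -E/R, giving the activation energy for an assumed reaction order n.
import numpy as np

R = 8.314  # J/(mol K)

def g_alpha(alpha, n):
    """Integral reaction model g(alpha) for reaction order n."""
    return -np.log(1 - alpha) if n == 1 else (1 - (1 - alpha)**(1 - n)) / (1 - n)

def coats_redfern_E(T, alpha, n):
    """Activation energy E (kJ/mol) from a Coats-Redfern linear fit."""
    y = np.log(g_alpha(alpha, n) / T**2)
    slope, _ = np.polyfit(1.0 / T, y, 1)
    return -slope * R / 1000.0

# Synthetic stage: conversion rising over 473-873 K (the 200-600 degC range)
T = np.linspace(473.0, 873.0, 60)
alpha = np.linspace(0.05, 0.85, 60)            # toy conversion profile
for n in (1, 2, 3):
    print(f"n = {n}: E = {coats_redfern_E(T, alpha, n):.2f} kJ/mol")
```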

Dimensionality Reduction in Modal Analysis for Structural Health Monitoring

Autonomous structural health monitoring (SHM) of structures and bridges has become a topic of paramount importance for maintenance purposes and safety reasons. This paper proposes a set of machine learning (ML) tools to perform automatic feature selection and detection of anomalies in a bridge from vibrational data, and compares different feature extraction schemes to increase the accuracy and reduce the amount of data collected. As a case study, the Z-24 bridge is considered because of its extensive database of accelerometric data in both standard and damaged conditions. The proposed framework starts from the first four fundamental frequencies extracted through operational modal analysis (OMA) and clustering, followed by time-domain filtering (tracking). The extracted fundamental frequencies are then fed to a dimensionality reduction block implemented through two different approaches: feature selection (an intelligent multiplexer) that tries to estimate the most reliable frequencies based on the evaluation of some statistical features (i.e., entropy, variance, kurtosis), and feature extraction (an auto-associative neural network, ANN) that combines the fundamental frequencies to extract new damage-sensitive features in a low-dimensional feature space. Finally, one-class classification (OCC) algorithms perform anomaly detection, trained with standard-condition points and tested with both normal and anomalous ones. In particular, principal component analysis (PCA), kernel principal component analysis (KPCA), and the auto-associative neural network (ANN) are presented and their performances compared. It is also shown that, by evaluating the correct features, anomalies can be detected with an accuracy and an F1 score greater than 95%.
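
A minimal sketch of one branch of such a pipeline: a PCA-based one-class classifier on tracked fundamental-frequency vectors, fitted on healthy data only, scoring each point by its reconstruction error and thresholding at a healthy-data percentile. All data below are synthetic stand-ins for the four tracked Z-24 frequencies; the environmental drift and damage shift are assumptions for illustration.

```python
# Sketch of PCA-based one-class anomaly detection on frequency vectors.
import numpy as np

rng = np.random.default_rng(0)
temp = rng.normal(size=(500, 1))                                 # environmental factor
healthy = rng.normal([3.9, 5.0, 9.8, 10.3], 0.03, size=(500, 4)) \
          + temp * np.array([0.04, 0.05, 0.08, 0.06])            # common temperature drift
damaged = healthy[:100] - np.array([0.02, 0.20, 0.03, 0.25])     # mode-dependent drop

mu = healthy.mean(0)
_, _, Vt = np.linalg.svd(healthy - mu, full_matrices=False)
W = Vt[:1]                                    # keep the dominant (environmental) component

def score(X):
    """Reconstruction error after projecting onto the retained subspace."""
    Z = (X - mu) @ W.T
    return np.linalg.norm((X - mu) - Z @ W, axis=1)

thr = np.percentile(score(healthy), 99)       # trained on standard-condition points only
pred = np.concatenate([score(healthy) > thr, score(damaged) > thr])
true = np.r_[np.zeros(len(healthy), bool), np.ones(len(damaged), bool)]
tp = (pred & true).sum(); fp = (pred & ~true).sum(); fn = (~pred & true).sum()
print(f"F1 = {2 * tp / (2 * tp + fp + fn):.3f}")
```

Projecting out the dominant component mirrors the usual SHM practice of removing environmental (e.g., temperature) variation so that the residual is sensitive mainly to damage.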

Rolling Element Bearing Diagnosis by Improved Envelope Spectrum: Optimal Frequency Band Selection

Rolling Element Bearing (REB) vibration diagnosis is worthy of special interest due to the variety of REBs and the widespread use of these elements in industrial applications. The presence of a localized fault in a REB gives rise to a vibrational response characterized by the modulation of a carrier signal. The frequency content of the carrier signal (spectral frequency, f) is mainly related to the resonance frequencies of the REB. This carrier signal is modulated by another signal, governed by the periodicity of the fault impacts (cyclic frequency, α). In this sense, the REB fault vibration response gives rise to a second-order cyclostationary signal. Second-order cyclostationary signals can be represented in a bi-spectral map, where the spectral coherence (SCoh) is plotted against f and α. The Improved Envelope Spectrum (IES) is a useful approach for REB fault diagnosis. The IES can be obtained by integrating the SCoh over a predefined bandwidth on the f axis. Approaches to selecting the f-bandwidth have recently been proposed based on the definition of a metric that evaluates the magnitude of the IES at the fault characteristic frequencies. This metric is represented in a 1/3-binary tree as a function of the frequency bandwidth and centre, and based on this binary tree the optimal frequency band is selected. However, advantages have been observed when the metric is changed, which tends to dictate a different optimal f-bandwidth and so improve the IES representation. This paper evaluates the behaviour of the IES under a different metric optimization. The metric is based on the sample correlation coefficient, detecting high peaks at the selected frequencies while penalizing high peaks in the neighbourhood of the selected frequencies. Prior results indicate an improvement in the signal-to-noise ratio (SNR) in around 86% of the samples analysed, which belong to the IMS database.
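
A sketch of the IES step, given a precomputed spectral-coherence map SCoh(f, α): the IES is the average coherence magnitude over a chosen spectral band, and a band-scoring metric rewards energy at the fault characteristic frequency harmonics while penalizing peaks in their neighbourhood. The coherence map below is synthetic (in practice it would come from a cyclic spectral estimator such as Fast-SC), and the simple peak-versus-neighbourhood score stands in for the paper's correlation-based metric; the exhaustive grid search replaces the 1/3-binary tree.

```python
# Sketch: IES from a spectral-coherence map plus band selection by a metric.
import numpy as np

def ies(scoh, f_axis, f_lo, f_hi):
    """Improved Envelope Spectrum: average |SCoh| over a band on the f axis."""
    band = (f_axis >= f_lo) & (f_axis <= f_hi)
    return np.abs(scoh[band]).mean(axis=0)

def band_metric(ies_vals, alpha_axis, fault_freq, harmonics=3, guard=2.0):
    """Score: harmonic peak height minus the highest nearby side peak."""
    score = 0.0
    for h in range(1, harmonics + 1):
        dist = np.abs(alpha_axis - h * fault_freq)
        peak = ies_vals[dist < 0.5].max()                        # peak at the harmonic
        side = ies_vals[(dist >= 0.5) & (dist < guard)].max()    # nearby side peaks
        score += peak - side
    return score

# Synthetic coherence map: fault harmonics at 37 Hz amplified in the 2-4 kHz band
f_axis = np.linspace(0, 10_000, 400)
alpha_axis = np.linspace(0.5, 200, 800)
scoh = 0.05 * np.random.default_rng(2).random((400, 800))
resonant = (f_axis > 2000) & (f_axis < 4000)
for h in (1, 2, 3):
    scoh[np.ix_(resonant, np.abs(alpha_axis - 37.0 * h) < 0.3)] += 0.5

# Grid search over candidate bands (the paper uses a 1/3-binary tree instead)
bands = [(lo, lo + w) for w in (1000, 2000, 4000) for lo in range(0, 10_000 - w, 500)]
best = max(bands, key=lambda b: band_metric(ies(scoh, f_axis, *b), alpha_axis, 37.0))
print("selected band:", best)
```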