Aircraft Selection Using Multiple Criteria Decision Making Analysis Method with Different Data Normalization Techniques

This paper presents an original application of multiple criteria decision making analysis theory to the evaluation of the aircraft selection problem. The selection of an optimal, efficient and reliable fleet, network and operations planning policy is one of the most important factors in the aircraft selection problem. Given that decision making in aircraft selection involves the consideration of a number of conflicting criteria and possible solutions, such a selection can be treated as a multiple criteria decision making analysis problem. This study presents a new integrated approach to decision making that considers the methods of multiple criteria utility theory and maximal regret minimization theory, as well as aircraft technical, economical, and environmental aspects. The multiple criteria decision making analysis method uses different normalization techniques so that the qualitative and quantitative data of the decision problem can be aggregated across criteria. Selecting a suitable normalization technique for the model is therefore itself a challenge in providing data aggregation for the aircraft selection problem. To compare the impact of different normalization techniques on the decision problem, the vector, linear (sum), linear (max), and linear (max-min) data normalization techniques were applied to the aircraft selection problem. The proposed approach enhances the decision making process by enabling the decision maker to: (i) use higher level knowledge regarding the selection of criteria weights and the proposed technique, and (ii) estimate the ranking of an alternative under different data normalization techniques and integrated criteria weights after a posteriori analysis of the final rankings of alternatives. A set of commercial passenger aircraft was considered in order to illustrate the proposed approach. The results obtained with the proposed approach were compared using Spearman's rho tests. An analysis of the stability of the final ranks with respect to changes in criteria weights was also performed to assess the sensitivity of the alternative rankings obtained by the application of different data normalization techniques and the proposed approach.
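
As a minimal illustration of the four normalization techniques compared here, the sketch below normalizes a small, made-up decision matrix of benefit-type criteria and aggregates it with illustrative weights; the numbers, weights, and the simple weighted-sum aggregation are assumptions for demonstration only, not the data or the full MCDMA model of the paper.

```python
import numpy as np

def normalize(X, method="vector"):
    """Normalize a decision matrix X (rows = alternatives, columns = benefit criteria)."""
    X = np.asarray(X, dtype=float)
    if method == "vector":            # x_ij / sqrt(sum_i x_ij^2)
        return X / np.sqrt((X ** 2).sum(axis=0))
    if method == "linear_sum":        # x_ij / sum_i x_ij
        return X / X.sum(axis=0)
    if method == "linear_max":        # x_ij / max_i x_ij
        return X / X.max(axis=0)
    if method == "linear_max_min":    # (x_ij - min_i x_ij) / (max_i x_ij - min_i x_ij)
        return (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))
    raise ValueError(f"unknown method: {method}")

# Illustrative decision matrix: 3 aircraft x 3 criteria (values are made up)
X = [[820, 162, 24.2],
     [785, 180, 25.1],
     [830, 150, 23.8]]
weights = np.array([0.4, 0.35, 0.25])           # illustrative criteria weights

for m in ("vector", "linear_sum", "linear_max", "linear_max_min"):
    scores = normalize(X, m) @ weights          # simple weighted-sum aggregation
    print(m, scores.argsort()[::-1])            # alternative indices, best first
```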

Hematologic Inflammatory Markers and Inflammation-Related Hepatokines in Pediatric Obesity

Obesity in children draws particular attention because of the many chronic diseases it may lead to, which can threaten the individual's future life. Most of these diseases, including obesity itself, are related to inflammation. For this reason, inflammation-related parameters gain importance. Within this context, complete blood cell counts and the ratios or indices derived from these counts have recently gained ground as inflammatory markers. So far, mostly adipokines have been investigated within the field of obesity. Metabolic inflammation is closely associated with cellular dysfunction. In this study, hematologic inflammatory markers and cytokines produced predominantly by the liver (fibroblast growth factor-21 (FGF-21) and fetuin A) were investigated in pediatric obesity. Two groups were constituted from 76 obese children based on World Health Organization criteria. Group 1 was composed of children whose age- and sex-adjusted body mass index (BMI) percentiles were between 95 and 99. Group 2 was composed of children above the 99th percentile. The first and the latter groups were defined as obese (OB) and morbidly obese (MO), respectively. Anthropometric measurements of the children were performed. Informed consent forms and the approval of the institutional ethics committee were obtained. Blood cell counts and ratios were determined by an automated hematology analyzer, and the related ratios and indices were calculated. Statistical evaluation of the data was performed with the SPSS program. There was no statistically significant difference in terms of neutrophil-to-lymphocyte ratio, monocyte-to-high density lipoprotein cholesterol ratio and platelet-to-lymphocyte ratio between the groups. Mean platelet volume and platelet distribution width values were decreased (p < 0.05), while total platelet count, red cell distribution width (RDW) and systemic immune inflammation index values were increased (p < 0.01) in the MO group. Both hepatokines were increased in the same group; however, the increases were not statistically significant. In this group, a strong correlation was also found between FGF-21 and RDW when controlled for age, hematocrit, iron and ferritin (r = 0.425; p < 0.01). In conclusion, the association between RDW, a hematologic inflammatory marker, and FGF-21, an inflammation-related hepatokine, found in the MO group is an important finding discriminating between OB and MO children. This association is even stronger when controlled for age and iron-related parameters.
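
For readers unfamiliar with the markers discussed, the snippet below shows the commonly used definitions of the reported ratios and indices; the counts are illustrative values, not study data.

```python
# Commonly used definitions of the hematologic ratios/indices named above
# (illustrative counts, not study data; cell counts in 10^3/uL, HDL-C in mg/dL).
neutrophils, lymphocytes, monocytes, platelets, hdl_c = 4.1, 2.6, 0.5, 310.0, 48.0

nlr = neutrophils / lymphocytes               # neutrophil-to-lymphocyte ratio
plr = platelets / lymphocytes                 # platelet-to-lymphocyte ratio
mhr = monocytes / hdl_c                       # monocyte-to-HDL cholesterol ratio
sii = platelets * neutrophils / lymphocytes   # systemic immune inflammation index

print(f"NLR={nlr:.2f}, PLR={plr:.1f}, MHR={mhr:.3f}, SII={sii:.0f}")
```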

Scholar Index for Research Performance Evaluation Using Multiple Criteria Decision Making Analysis

This paper aims to present an objective quantitative methodology for evaluating an individual's scholarly research output using multiple criteria decision analysis. A multiple criteria decision making analysis (MCDMA) methodological process is adopted to build a multiple criteria evaluation model. The scholar index, a cumulative research citation index, summarizes a researcher's productivity and the scholarly impact of his or her publications in a single number (s is the number of publications with at least s citations); it is included in citation databases to cover the multidimensional complexity of scholarly research performance and to enable objective evaluations. The scholar index, one of the publication activity indexes, is analyzed by considering it to be the most appropriate scientometric indicator, one that smooths over many drawbacks of assessing scholarly output by merely counting the number of publications (quantity) and citations (quality). Hence, this study includes a set of scholar-index-based indicators to be used for evaluating scholarly researchers. The Google Scholar open science database was used to assess and discuss the scholarly productivity and impact of researchers. Based on the experiment of computing the scholar index and its derivative indexes for a set of researchers on an open research database platform, quantitative methods of assessing scholarly research output were successfully applied to rank researchers. The proposed methodology covers the selection of the data on which the scholarly research performance evaluation was based, the analysis of the data, the ranking of the researchers, and the presentation of the multiple criteria analysis results.
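
A minimal sketch of how the scholar index, as defined above (s publications with at least s citations each), can be computed from a list of per-publication citation counts is shown below; the citation counts are made up for illustration.

```python
def scholar_index(citations):
    """Return s, the largest number such that at least s publications
    have at least s citations each (the definition quoted above)."""
    counts = sorted(citations, reverse=True)
    s = 0
    for i, c in enumerate(counts, start=1):
        if c >= i:
            s = i
        else:
            break
    return s

# Illustrative citation counts for one researcher's publications (not real data)
print(scholar_index([25, 19, 12, 9, 7, 7, 4, 2, 1, 0]))  # -> 6
```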

Assessing and Evaluating the Course Outcomes of Control Systems Course Mapping Complex Engineering Problem Solving Issues and Associated Knowledge Profiles with the Program Outcomes

In the current context, engineering educators need to think about how undergraduate engineering students can develop the concepts and complex engineering problem-solving skills through various complex engineering activities in their courses. However, most educators face challenges in assessing and evaluating these skills. In this study, detailed assessment and evaluation methods for the undergraduate Electrical and Electronic Engineering (EEE) program are stated using the Outcome-Based Education (OBE) approach. For this purpose, a final-year course titled Control Systems has been selected. The assessment and evaluation approach, course contents, course objectives, course outcomes (COs), and their mapping to the program outcomes (POs) with complex engineering problems and activities via the knowledge profiles, performance indicators, rubrics of assessment, CO and PO attainment data, and other statistics are reported for the cohort of BSc in EEE students who registered for the Control Systems course in the Spring 2021 semester at the EEE Department of Southeast University (SEU). It is found that the target benchmark was achieved by the students of that course. Several recommendations for the continuous quality improvement (CQI) process are also provided.
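
As an illustration of how CO and PO attainment data of the kind reported here can be computed, the sketch below applies a simple threshold-based attainment rule and an illustrative CO-to-PO mapping; the thresholds, benchmark, scores, and mapping are assumptions for demonstration, not the SEU rubric.

```python
import numpy as np

# scores[s, c] = percentage scored by student s on assessments mapped to CO c
scores = np.array([[72, 55, 81],
                   [64, 48, 77],
                   [85, 69, 90],
                   [58, 41, 66]], dtype=float)
co_threshold = 60.0        # a student "attains" a CO if he/she scores >= 60% (assumed)
target_benchmark = 0.50    # a CO is attained if >= 50% of the cohort attains it (assumed)

attainment = (scores >= co_threshold).mean(axis=0)      # fraction of students per CO
co_attained = attainment >= target_benchmark

# Illustrative CO-to-PO mapping (1 = CO contributes to that PO)
co_po_map = np.array([[1, 0, 1],    # CO1 -> PO1, PO3
                      [0, 1, 0],    # CO2 -> PO2
                      [1, 1, 0]])   # CO3 -> PO1, PO2
po_attainment = (attainment[:, None] * co_po_map).sum(axis=0) / co_po_map.sum(axis=0)
print("CO attainment:", attainment, "attained vs benchmark:", co_attained)
print("PO attainment:", po_attainment)
```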

Automated 3D Segmentation System for Detecting Tumor and Its Heterogeneity in Patients with High Grade Ovarian Epithelial Cancer

High grade ovarian epithelial cancer (OEC) is the most fatal gynecological cancer, and the poor prognosis of this entity is closely related to considerable intratumoral genetic heterogeneity. By examining imaging data, it is possible to assess the heterogeneity of tumorous tissue. This study presents a methodology for aligning, segmenting and finally visualizing information from various magnetic resonance imaging series, in order to construct 3D models of heterogeneity maps from the same tumor in OEC patients. The proposed system may be used as an adjunct digital tool by health professionals for personalized medicine, as it allows for an easy visual assessment of the heterogeneity of the examined tumor.
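
A minimal sketch of how two MR series might be co-registered and a simple intensity-based mask derived with SimpleITK is given below; the file names, registration settings, and the Otsu thresholding step are illustrative assumptions, not the authors' pipeline.

```python
import SimpleITK as sitk

# Sketch only: rigidly align one MR series to another and derive a crude mask.
fixed = sitk.ReadImage("series_T2.nii.gz", sitk.sitkFloat32)      # hypothetical files
moving = sitk.ReadImage("series_DWI.nii.gz", sitk.sitkFloat32)

initial = sitk.CenteredTransformInitializer(
    fixed, moving, sitk.Euler3DTransform(),
    sitk.CenteredTransformInitializerFilter.GEOMETRY)

reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
reg.SetOptimizerAsRegularStepGradientDescent(learningRate=1.0, minStep=1e-4,
                                             numberOfIterations=200)
reg.SetInterpolator(sitk.sitkLinear)
reg.SetInitialTransform(initial, inPlace=False)
transform = reg.Execute(fixed, moving)

# Resample the moving series onto the fixed grid, then build a simple
# intensity-based candidate mask (a stand-in for the actual segmentation step).
aligned = sitk.Resample(moving, fixed, transform, sitk.sitkLinear, 0.0,
                        moving.GetPixelID())
mask = sitk.OtsuThreshold(aligned, 0, 1)
sitk.WriteImage(mask, "tumor_candidate_mask.nii.gz")
```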

Learning Objects Content Presentation Adaptation Model Considering Students' Learning Styles

Learning styles (LSs) correspond to the individual preferences of a person regarding the modes and forms in which he/she prefers to learn throughout the teaching/learning process. The content presentation of learning objects (LOs) using knowledge about students' LSs offers them digital educational resources tailored to their individual learning preferences. In this context, the most relevant characteristics of the LSs and the most appropriate forms of LO content presentation were mapped and associated. This was done in order to define the composition of an adaptive model of LO content presentation considering the LSs, called Adaptation of Content Presentation of Learning Objects Considering Learning Styles (ACPLOLS). LO prototypes were created with interfaces adapted to students' LSs. These prototypes were based on a model created to validate the adopted approaches, which were evaluated through experiments with the students. The results of subjective measures of students' emotional responses demonstrated that ACPLOLS achieved the desired results with regard to the adequacy of the LO interfaces, in accordance with the Felder-Silverman LSs Model.

An Integrated Approach to Child Care Earthquake Preparedness through “Telemachus” Project

Many children under the age of five spend their daytime hours away from home, in a kindergarten. Caring for children is a serious responsibility, and their safety in case of an earthquake is the first priority. Being aware of earthquakes helps to prioritize needs and take the appropriate actions to limit their effects. Earthquakes, which can occur anywhere at any time, require emergency planning. Earthquake planning is a cooperative effort, and childcare providers have unique roles and responsibilities. Greece has high seismicity, and the Ionian Islands Region has the highest seismic activity in the country. The Earthquake Planning and Protection Organization (EPPO) is a national organization in Greece whose mission is seismic risk reduction through the design of an earthquake management program of mitigation and preparedness. Among other actions, EPPO has analyzed the needs and requirements of kindergartens on earthquake protection issues and has designed specific activities to familiarize day care center staff with earthquake preparedness. This research presents the results of a survey that measures the level of earthquake preparedness of kindergartens all over the country, including the Ionian Islands. A closed-form questionnaire of 20 main questions was developed for the survey in order to capture participants' views of earthquake preparedness actions at the individual, family and day care environment levels. In total, 2,668 questionnaires were gathered from March 2014 to May 2019 and analyzed by EPPO's Department of Education. Moreover, this paper presents EPPO's educational activities targeted at the Ionian Islands Region that were implemented in the framework of the "Telemachus" Project. Providing a safe environment for children to learn and staff to work in is the foremost goal of any state, community and kindergarten. This project is funded under the Priority Axis "Environmental Protection and Sustainable Development" of the Operational Plan "Ionian Islands 2014-2020". It is increasingly accepted that emergency preparedness should be thought of as an ongoing process rather than a one-time activity. Creating an earthquake-safe daycare environment that facilitates learning is a challenging task. Training, drills and updates of the emergency plan should take place throughout the year at kindergartens to identify any gaps and to ensure that the emergency procedures are effective. EPPO will continue to work closely with regional and local authorities to actively address the needs of children and kindergartens before, during and after earthquakes.

Research Design for Developing and Validating Ice-Hockey Team Diagnostics Scale

In the modern world, ice-hockey (and, in a broader sense, team sports) is becoming an increasingly popular field of entertainment. Although the main element is most likely perceived as the show itself, winning is an inevitable part of the successful operation of any sports team. In this paper, the author creates a research design that allows the development and validation of an ice-hockey team-focused diagnostics scale, which enables researchers and practitioners to identify the problems associated with underperforming teams. The construction of the scale starts with personal interviews with experts of the field, carefully chosen from the Hungarian ice-hockey sector. Based on the interviews, the author is in a position to create the categories and the relevant items for the scale. Once the scale is constructed, the next step is the validation process on a Hungarian sample. Data for validation are acquired through the licensed database of the Hungarian Ice-Hockey Federation, involving Hungarian ice-hockey coaches and players. The Ice-Hockey Team Diagnostics Scale is intended to orient practitioners in understanding both effective and underperforming teamwork.

Methane versus Carbon Dioxide: Mitigation Prospects

Atmospheric carbon dioxide (CO2) has dominated the discussion around the causes of climate change. This reflects the 100-year time horizon for all greenhouse gases that has become the norm. The 100-year time horizon is much too long, and yet almost all mitigation efforts, including those set in the near-term frame of 30 years, are still geared toward it. In this paper, we show that for a 30-year time horizon, methane (CH4) is the greenhouse gas whose radiative forcing exceeds that of CO2. In our analysis, we use the radiative forcing of greenhouse gases in the atmosphere, because it directly affects the rise in temperature on Earth. We found that in 2019, the radiative forcing (RF) of methane was ~2.5 W/m2 and that of carbon dioxide was ~2.1 W/m2. Under a business-as-usual (BAU) scenario until 2050, these forcings would be ~2.8 W/m2 and ~3.1 W/m2, respectively. There is a substantial spread in the data for anthropogenic and natural methane (CH4) emissions, as well as for leakages of natural gas (which is primarily CH4) from industrial production through consumption. For this reason, we estimate the minimum and maximum effects of a reduction of these leakages, and assume an effective immediate reduction by 80%. Such action may serve to reduce the annual radiative forcing of all CH4 emissions by ~15% to ~30%. This translates into a reduction of RF by 2050 from ~2.8 W/m2 to ~2.5 W/m2 in the case of the minimum effect that can be expected, and to ~2.15 W/m2 in the case of the maximum effort to reduce methane leakages. Under the BAU scenario, we find that the RF of CO2 will increase from ~2.1 W/m2 now to ~3.1 W/m2 by 2050. We assume a linear reduction of 50% in anthropogenic emissions over the course of the next 30 years, which would reduce the radiative forcing of CO2 from ~3.1 W/m2 to ~2.9 W/m2. In the case of "net zero," the remaining 50% of anthropogenic CO2 emissions would have to be eliminated either at the sources of emissions or removed directly from the atmosphere. In this instance, the total reduction would be from ~3.1 W/m2 to ~2.7 W/m2, or ~0.4 W/m2. To achieve the same radiative forcing as in the scenario of maximum reduction of methane leakages, ~2.15 W/m2, an additional reduction of the radiative forcing of CO2 of approximately 2.7 - 2.15 = 0.55 W/m2 would be required. In total, one would need to remove ~660 GT of CO2 from the atmosphere in order to match the maximum reduction of current methane leakages, and ~270 GT of CO2 from emitting sources, to reach "negative emissions". This amounts to over 900 GT of CO2.
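
The radiative-forcing bookkeeping quoted above can be tallied directly from the stated numbers, as in the short sketch below; it only reproduces the arithmetic of the abstract and adds no new data.

```python
# Tally of the radiative-forcing (RF) numbers quoted above, all in W/m^2.
rf_ch4_bau_2050, rf_co2_bau_2050 = 2.8, 3.1    # business-as-usual RF in 2050
rf_ch4_min_fix, rf_ch4_max_fix = 2.5, 2.15     # 80% leakage cut: min / max effect
rf_co2_half_anthro = 2.9                       # 50% linear cut of anthropogenic CO2
rf_co2_net_zero = 2.7                          # "net zero" (remaining 50% removed)

extra_needed = rf_co2_net_zero - rf_ch4_max_fix
print(f"CH4 BAU gap closed by max leakage fix: {rf_ch4_bau_2050 - rf_ch4_max_fix:.2f} W/m^2")
print(f"CO2 reduction from BAU to net zero:    {rf_co2_bau_2050 - rf_co2_net_zero:.2f} W/m^2")
print(f"additional CO2 RF cut to match best CH4 case: {extra_needed:.2f} W/m^2")  # ~0.55
```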

Simulation and Assessment of Carbon Dioxide Separation by Piperazine Blended Solutions Using E-NRTL and Peng-Robinson Models: A Study of Regeneration Heat Duty

High pressure carbon dioxide (CO2) absorption from a specific off-gas in a conventional column has been evaluated, to address environmental concerns, with the Aspen HYSYS simulator using a wide range of single absorbents and piperazine (PZ) blended solutions. The outlet CO2 concentration, CO2 loading, reboiler power supply and regeneration heat duty were estimated in order to choose the most efficient solution in terms of CO2 removal and required heat duty. The property package, which is compatible with all solutions applied in this simulation study, estimates the properties based on the electrolyte non-random two-liquid (E-NRTL) model for electrolyte thermodynamics and the Peng-Robinson equation of state for the vapor phase and liquid hydrocarbon phase properties. The simulation results indicate that PZ and the mixture of PZ and monoethanolamine (MEA) demand the highest regeneration heat duties among the studied single and blended amine solutions, respectively. The blended amine solutions with the lowest PZ concentrations (5 wt% and 10 wt%) were considered and compared in order to reduce the cost of the process; among these, the blended solution of 10 wt% PZ + 35 wt% MDEA (methyldiethanolamine) was found to be the most appropriate in terms of CO2 content in the outlet gas, rich-CO2 loading and regeneration heat duty.
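
For reference, the Peng-Robinson equation of state used here for the vapor and liquid hydrocarbon phases has the standard form

\[
P = \frac{RT}{V_m - b} - \frac{a\,\alpha(T)}{V_m^2 + 2bV_m - b^2}, \qquad
a = 0.45724\,\frac{R^2 T_c^2}{P_c}, \qquad
b = 0.07780\,\frac{R T_c}{P_c},
\]
\[
\alpha(T) = \left[1 + \kappa\left(1 - \sqrt{T/T_c}\right)\right]^2, \qquad
\kappa = 0.37464 + 1.54226\,\omega - 0.26992\,\omega^2,
\]

where \(T_c\), \(P_c\) and \(\omega\) are the critical temperature, critical pressure and acentric factor of the component.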

An Approach for Coagulant Dosage Optimization Using Soft Jar Test: A Case Study of Bangkhen Water Treatment Plant

The most important process in a water treatment plant is coagulation, which uses alum and poly aluminum chloride (PACL); therefore, determining the dosages of alum and PACL to be prescribed is critical. This research applies an artificial neural network (ANN) trained with the Levenberg-Marquardt algorithm to create a mathematical model (Soft Jar Test) for predicting the chemical doses used for coagulation, namely alum and PACL, with input data consisting of turbidity, pH, alkalinity, conductivity, and oxygen consumption (OC) of the Bangkhen Water Treatment Plant (BKWTP), under the authority of the Metropolitan Waterworks Authority of Thailand. The data were collected from 1 January 2019 to 31 December 2019 in order to cover the changing seasons of Thailand. The input data of the ANN are divided into three groups: a training set, a test set, and a validation set. The coefficient of determination and the mean absolute error are 0.73 and 3.18 for the alum model, and 0.59 and 3.21 for the PACL model, respectively.
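
A minimal sketch of a one-hidden-layer network fitted with the Levenberg-Marquardt algorithm is shown below; the randomly generated inputs and target stand in for the real BKWTP records (turbidity, pH, alkalinity, conductivity, OC), and the network size and R2/MAE computation are illustrative, not the paper's Soft Jar Test model.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
X = rng.random((200, 5))                 # stand-ins for the 5 scaled water-quality inputs
y = X @ np.array([3.0, 1.0, 0.5, 2.0, 1.5]) + 0.1 * rng.standard_normal(200)  # "dose"

n_in, n_hidden = X.shape[1], 6

def unpack(p):
    w1 = p[:n_in * n_hidden].reshape(n_in, n_hidden)
    b1 = p[n_in * n_hidden:n_in * n_hidden + n_hidden]
    w2 = p[n_in * n_hidden + n_hidden:-1]
    b2 = p[-1]
    return w1, b1, w2, b2

def residuals(p):
    w1, b1, w2, b2 = unpack(p)
    hidden = np.tanh(X @ w1 + b1)        # hidden layer with tanh activation
    return hidden @ w2 + b2 - y          # per-sample prediction error

p0 = 0.1 * rng.standard_normal(n_in * n_hidden + 2 * n_hidden + 1)
fit = least_squares(residuals, p0, method="lm")   # Levenberg-Marquardt optimization

pred = residuals(fit.x) + y
r2 = 1 - np.sum((pred - y) ** 2) / np.sum((y - y.mean()) ** 2)
mae = np.abs(pred - y).mean()
print(f"R^2 = {r2:.2f}, MAE = {mae:.2f}")
```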

Battery Grading Algorithm in 2nd-Life Repurposing Li-ion Battery System

This article presents a methodology that improves the reliability and cyclability of a 2nd-life Li-ion battery system repurposed as an energy storage system (ESS). Most of the 2nd-life retired battery systems on the market have a module/pack-level state of health (SOH) indicator, which is utilized to guide the appropriate depth of discharge (DOD) in ESS applications. Due to the lack of cell-level SOH indication, the different degradation behaviors among the cells cannot be identified upon reaching retired status; as a result, considering end-of-life (EOL) loss and pack-level DOD, the repurposed ESS has to be oversized by more than 1.5 times to meet the application requirements of reliability and cyclability. The proposed battery grading algorithm, using a non-invasive methodology, is able to detect outlier cells based on historical voltage data and to estimate cell-level historical maximum temperatures using a semi-analytic methodology. In this way, the individual battery cells in the 2nd-life battery system can be graded in terms of SOH on the basis of the historical voltage fluctuation and the estimated historical maximum temperature variation. These grades have corresponding DOD grades in the application of the repurposed ESS to enhance the system reliability and cyclability. In all, the introduced battery grading algorithm is non-invasive, compatible with all kinds of retired Li-ion battery systems that lack cell-level SOH indication, and can potentially be embedded into battery management software for preventive maintenance and real-time cyclability optimization.
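
As an illustration of the non-invasive, voltage-based screening step, the sketch below flags outlier cells with a robust z-score computed from each cell's deviation from the pack-average voltage history; the synthetic data, the chosen feature, and the 3.5 cutoff are assumptions, not the paper's exact grading criteria.

```python
import numpy as np

# voltages[t, c] = voltage of cell c at time t (synthetic pack of 96 cells)
rng = np.random.default_rng(1)
voltages = 3.7 + 0.01 * rng.standard_normal((5000, 96))
voltages[:, 42] += 0.04 * np.linspace(0, 1, 5000)        # inject one drifting cell

# Per-cell feature: spread of the deviation from the pack-average voltage history
deviation = voltages - voltages.mean(axis=1, keepdims=True)
feature = deviation.std(axis=0)

median = np.median(feature)
mad = np.median(np.abs(feature - median))                # median absolute deviation
robust_z = 0.6745 * (feature - median) / mad             # modified z-score

outliers = np.where(robust_z > 3.5)[0]                   # assumed screening cutoff
print("suspected outlier cells:", outliers)              # expected to flag cell 42
```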

Thin Bed Reservoir Delineation Using Spectral Decomposition and Instantaneous Seismic Attributes, Pohokura Field, Taranaki Basin, New Zealand

Thick bed hydrocarbon reservoirs are of primary interest because of their more prolific production. When the amount of petroleum in the thick beds starts decreasing, thin bed reservoirs become the alternative targets to maintain reserves. The conventional interpretation of seismic data cannot delineate a thin bed whose thickness is less than the vertical seismic resolution. Therefore, spectral decomposition and instantaneous seismic attributes were used to delineate the thin bed in this study. Short Window Discrete Fourier Transform (SWDFT) spectral decomposition and the instantaneous frequency attribute were used to reveal the thin bed reservoir, while Continuous Wavelet Transform (CWT) spectral decomposition and the envelope (instantaneous amplitude) attribute were used to indicate the hydrocarbon bearing zone. The study area is located in the Pohokura Field, Taranaki Basin, New Zealand. The thin bed target is the uppermost part of the Mangahewa Formation, the most productive formation for gas-condensate production in the Pohokura Field. According to the time-frequency analysis, SWDFT spectral decomposition can reveal the thin bed using a 72 Hz SWDFT isofrequency section and map, and this is confirmed by the instantaneous frequency attribute. The envelope attribute, showing a high anomaly, indicates the hydrocarbon accumulation area at the thin bed target. Moreover, the CWT spectral decomposition shows a low-frequency shadow zone, and the abnormal seismic attenuation at the higher isofrequencies below the thin bed confirms that the thin bed can be a prospective hydrocarbon zone.
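
A minimal sketch of a short-window spectral decomposition on a single synthetic trace is shown below; it extracts the amplitude at the bin nearest 72 Hz, the isofrequency used in this study, and the trace, frequencies, and window settings are illustrative assumptions (a full isofrequency section would repeat this for every trace).

```python
import numpy as np
from scipy.signal import stft

dt = 0.002                              # 2 ms sample interval (assumed)
t = np.arange(0, 2.0, dt)
trace = np.sin(2 * np.pi * 30 * t)      # background reflectivity response
thin_bed = (t > 1.0) & (t < 1.05)
trace[thin_bed] += np.sin(2 * np.pi * 72 * t[thin_bed])   # thin-bed tuning energy

f, tau, Zxx = stft(trace, fs=1.0 / dt, nperseg=64, noverlap=56)
iso_72hz = np.abs(Zxx[np.argmin(np.abs(f - 72.0)), :])     # amplitude at ~72 Hz

print(f"time of peak ~72 Hz energy: {tau[iso_72hz.argmax()]:.2f} s")
```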

Graves’ Disease and Its Related Single Nucleotide Polymorphisms and Genes

Graves’ Disease (GD), an autoimmune condition characterized by overactivity of the thyroid, affects about 1 in 200 people worldwide. GD is not caused by one specific single nucleotide polymorphism (SNP) or gene mutation, but rather is determined by multiple factors, each differing from the others. Malfunctions of genes in the Human Leukocyte Antigen (HLA) family tend to play a major role in autoimmune diseases, but other genes, such as LOC101929163, have functions that still remain ambiguous. Currently, few studies have been conducted on GD, and their results are inconclusive. This study serves not only to introduce background knowledge about GD, but also to organize and pinpoint the major SNPs and genes that are potentially related to the occurrence of GD in humans. The potential SNPs related to the causes of GD, collected from multiple sources in genome-wide association studies (GWAS) Central, are included in this study. This study has located the genes that are related to those SNPs and closely examines a selected sample. Using the data from this study, scientists will then be able to focus on the most expressed genes in GD patients and develop a treatment for GD.

Dimensionality Reduction in Modal Analysis for Structural Health Monitoring

Autonomous structural health monitoring (SHM) of many structures and bridges has become a topic of paramount importance for maintenance purposes and safety reasons. This paper proposes a set of machine learning (ML) tools to perform automatic feature selection and detection of anomalies in a bridge from vibrational data, and compares different feature extraction schemes to increase the accuracy and reduce the amount of data collected. As a case study, the Z-24 bridge is considered because of its extensive database of accelerometric data in both standard and damaged conditions. The proposed framework starts from the first four fundamental frequencies extracted through operational modal analysis (OMA) and clustering, followed by time-domain filtering (tracking). The extracted fundamental frequencies are then fed to a dimensionality reduction block implemented through two different approaches: feature selection (intelligent multiplexer), which tries to estimate the most reliable frequencies based on the evaluation of some statistical features (i.e., entropy, variance, kurtosis), and feature extraction (auto-associative neural network (ANN)), which combines the fundamental frequencies to extract new damage-sensitive features in a low-dimensional feature space. Finally, one-class classification (OCC) algorithms perform anomaly detection, trained with standard-condition points and tested with both normal and anomalous ones. In particular, principal component analysis (PCA), kernel principal component analysis (KPCA), and the auto-associative neural network (ANN) are presented and their performances are compared. It is also shown that, by evaluating the correct features, anomalies can be detected with an accuracy and an F1 score greater than 95%.
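
A minimal sketch of the PCA-based one-class step is given below; the synthetic four-frequency data stand in for the tracked Z-24 OMA features, and the reconstruction-error threshold is an illustrative decision rule rather than the exact OCC configuration of the paper.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
base = np.array([3.9, 5.0, 9.8, 10.3])                 # nominal frequencies (Hz), assumed
env = rng.normal(size=(600, 1))                        # common environmental variation
freqs = base + 0.03 * env * np.array([1.0, 0.8, 1.2, 0.9]) \
        + 0.005 * rng.standard_normal((600, 4))
train, test_normal = freqs[:400], freqs[400:500]
damaged = freqs[500:] - np.array([0.15, 0.10, 0.0, 0.05])   # stiffness-loss pattern

pca = PCA(n_components=1).fit(train)                   # keeps the environmental mode
def residual(x):
    return np.linalg.norm(x - pca.inverse_transform(pca.transform(x)), axis=1)

threshold = np.percentile(residual(train), 99)         # one-class decision rule
test = np.vstack([test_normal, damaged])
labels = np.r_[np.zeros(len(test_normal), dtype=int), np.ones(len(damaged), dtype=int)]
pred = (residual(test) > threshold).astype(int)
print("F1 score:", round(f1_score(labels, pred), 3))
```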

Numerical and Experimental Analyses of a Semi-Active Pendulum Tuned Mass Damper

Modern structures such as floor systems, pedestrian bridges and high-rise buildings have become lighter in mass and more flexible, with negligible damping, and are thus prone to vibration. In this paper, a semi-actively controlled pendulum tuned mass damper (PTMD) is presented that uses air springs as both the restoring (resilient) and energy dissipating (damping) elements; the tuned mass damper (TMD) uses no passive dampers. The proposed PTMD can readily be fine-tuned and re-tuned, via software, without changing any hardware. Almost all existing semi-active systems have the three elements that passive TMDs have, i.e., inertia, resilient, and dissipative elements, with some adjustability built into one or two of these elements. The proposed semi-active air-suspended TMD, on the other hand, is made up of only inertia and resilience elements. A notable feature of this TMD is the absence of a physical damping element in its make-up. The required viscous damping is introduced into the TMD using a semi-active control scheme residing in a micro-controller, which actuates a high-speed proportional valve regulating the flow of air in and out of the air springs. In addition to introducing damping into the TMD, the semi-active control scheme adjusts the stiffness of the TMD. The focus of this work has been the synthesis and analysis of the control algorithms and strategies to vary the tuning accuracy, introduce damping into the air-suspended PTMD, and enable the PTMD to self-tune. The accelerations of the main structure and the PTMD, as well as the pressure in the air springs, are used as the feedback signals in the control strategies. Numerical simulation and experimental evaluation of the proposed tuned damping system are presented in this paper.

Classification of Extreme Ground-Level Ozone Based on Generalized Extreme Value Model for Air Monitoring Station

High ground-level ozone (GLO) concentrations adversely affect human health and vegetation, as well as activities in the ecosystem. In Malaysia, most analyses of GLO concentration are carried out using the average value of GLO concentration, which refers to the centre of the distribution, to make a prediction or estimation. However, analysis focusing on the higher or extreme values of GLO concentration is rarely explored. Hence, the objective of this study is to classify the tail behaviour of GLO using the generalized extreme value (GEV) distribution and to estimate the return level using the corresponding model (Gumbel, Weibull, or Frechet) of the GEV distribution. The results show that the Weibull distribution, which is also known as a short-tailed distribution and is considered as having less extreme behaviour, is the best-fitted distribution for four selected air monitoring stations in Peninsular Malaysia, namely Larkin, Pelabuhan Kelang, Shah Alam, and Tanjung Malim, while the Gumbel distribution, which is considered a medium-tailed distribution, is the best-fitted distribution for the Nilai station. The return level of GLO concentration at the Shah Alam station is comparatively higher than at the other stations. Overall, return levels increase with increasing return periods, but the increment depends on the type of tail of the GEV distribution. We conduct this study by using the maximum likelihood estimation (MLE) method to estimate the parameters at the selected stations in Peninsular Malaysia. Next, the validation of the fitted block maxima series against the GEV distribution is performed using the probability plot, quantile plot and likelihood ratio test. The profile likelihood confidence interval is tested to verify the type of GEV distribution. These results are important as a guide for early notification of future extreme ozone events.
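
A minimal sketch of the GEV workflow, maximum likelihood fitting of block maxima and computation of return levels with scipy, is shown below; the synthetic ozone block maxima and return periods are illustrative, not the monitoring-station data.

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(0)
block_maxima = genextreme.rvs(c=0.2, loc=95, scale=12, size=60, random_state=rng)  # ppb

c_hat, loc_hat, scale_hat = genextreme.fit(block_maxima)   # MLE of shape/location/scale
# scipy's shape c relates to the usual GEV xi by xi = -c:
# c > 0 -> Weibull (short tail), c = 0 -> Gumbel, c < 0 -> Frechet.
tail = "Weibull" if c_hat > 0 else ("Frechet" if c_hat < 0 else "Gumbel")

for T in (10, 25, 50, 100):                                # return periods (in blocks)
    level = genextreme.ppf(1 - 1 / T, c_hat, loc_hat, scale_hat)
    print(f"{T}-block return level: {level:.1f} ppb ({tail} tail)")
```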

Research on User Experience and Brand Attitudes of Chatbots

With the advancement of artificial intelligence technology, most companies are aware of the profound potential of artificial intelligence in commercial marketing. Man-machine dialogue has become the latest trend in marketing customer service. However, chatbots are often considered to lack intelligence or to offer unfriendly conversation, which instead reduces their communication effect. Companies and users therefore attach great importance to ensuring that chatbots represent the brand image and provide a good user experience. In this study, a customer service chatbot was used as the research sample. The research variables are based on the theory of artificial intelligence emotions, integrating the technology acceptance model and innovation diffusion theory with the three aspects of pleasure, arousal, and dominance of the human-machine PAD (Pleasure, Arousal and Dominance) dimension. The results show that most of the participants have a higher acceptance of innovative technologies and report high pleasure and arousal in the user experience. Participants still hold traditional gender (female) service stereotypes about customer service chatbots. Users who have high trust in chatbots more readily enhance their brand acceptance and accept brand messages, extend their trust in the chatbot to trust in the brand, and develop a positive attitude towards the brand.

An Investigation into the Role of School Social Workers and Psychologists with Children Experiencing Special Educational Needs in Libya

This study explores the function of schools’ psychosocial services within Libyan mainstream schools in relation to children’s special educational needs (SEN). The aim is to examine the role of school social workers and psychologists in the assessment procedure for children with SEN. Semi-structured interviews were used in this study with 21 professionals working in the schools’ psychosocial services, of whom 13 were school social workers (SSWs) and eight were school psychologists (SPs). The results of the interviews with SSWs and SPs provided insights into how SEN children are identified, assessed, and dealt with by school professionals. It appears from the results that what constitutes a problem has not changed significantly, and the link between learning difficulties and behavioural difficulties is also evident from this study. Children with behavioural difficulties are more likely to be referred to school psychosocial services than children with learning difficulties. Yet, it is not clear from the interviews with SSWs and SPs whether children are excluded merely because of their behavioural problems; rather, they would surely be expelled from the school if they failed academically. Furthermore, the interviews with SSWs and SPs revealed a rather unusual source held accountable for children’s SEN: school-related difficulties were a major factor, with almost all participants attributing children’s learning and behavioural problems to teachers’ deficiencies, followed by the schools’ lack of resources.

Investigation of Tbilisi City Atmospheric Air Pollution with PM in Usual and Emergency Situations Using the Observational and Numerical Modeling Data

Pollution of Tbilisi’s atmospheric air with PM2.5 and PM10 in usual and pandemic situations is investigated using the data of five stationary observation points. The values of the statistical characteristic parameters of PM in the atmosphere of Tbilisi are analyzed and trend graphs are constructed. By analyzing pollution levels in the quarantine and usual periods, the contribution of vehicle traffic to the city’s pollution is estimated. Experimental measurements of PM2.5 and PM10 in the atmosphere have been carried out in different districts of the city, and maps of the distribution of their concentrations were constructed. It is shown that the maximum pollution values are recorded in the city center and along major motorways. It is shown that the average monthly concentrations vary in the range of 0.6-1.6 of the Maximum Permissible Concentration (MPC). Average daily concentration values vary at 2-4 day intervals. The distribution of PM10 generated as a result of traffic is numerically modeled, and the modeling results are compared with the observation data.