Two-Stage Approach for Solving the Multi-Objective Optimization Problem on Combinatorial Configurations

The statement of the multi-objective optimization problem on combinatorial configurations is formulated, and an approach to its solution is proposed. The problem is of interest as a combinatorial optimization problem with many criteria and serves as a model for numerous applied tasks. The proposed approach consists of two stages: first, the multi-objective problem is reduced to a single-criterion one using existing multi-objective optimization methods; second, the resulting single-criterion combinatorial optimization problem is solved by the horizontal combinatorial method. This approach yields the optimal solution of the multi-objective optimization problem on combinatorial configurations, taking additional restrictions into account, in a finite number of steps.
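
A minimal sketch of the two-stage idea, assuming a weighted-sum scalarization in stage one and permutations as the combinatorial configuration; plain exhaustive search stands in here for the paper's horizontal combinatorial method, and all names and data are hypothetical:

```python
from itertools import permutations

def two_stage_optimize(objectives, weights, elements, feasible):
    """Stage 1: collapse several criteria into one via a weighted sum.
    Stage 2: solve the resulting single-criterion problem over all
    permutations (exhaustive search stands in for the paper's method)."""
    def scalarized(x):
        return sum(w * f(x) for w, f in zip(weights, objectives))

    best, best_value = None, float("inf")
    for x in permutations(elements):
        if not feasible(x):          # additional restrictions
            continue
        value = scalarized(x)
        if value < best_value:
            best, best_value = x, value
    return best, best_value

# Hypothetical example: two criteria over permutations of 4 items.
f1 = lambda x: sum(i * v for i, v in enumerate(x))        # weighted completion
f2 = lambda x: max(abs(a - b) for a, b in zip(x, x[1:]))  # largest jump
sol, val = two_stage_optimize([f1, f2], [0.7, 0.3],
                              [1, 2, 3, 4], feasible=lambda x: x[0] != 4)
print(sol, val)
```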

The Viscosity of Xanthan Gum Grout with Different pH and Ionic Strength

Xanthan gum (XG), an eco-friendly biopolymer, has recently been investigated extensively for ground improvement applications. The rheological behavior of this additive strongly depends on electrochemical conditions such as pH and ionic strength, as well as on its content in aqueous solution. Therefore, the effects of these factors are studied in this paper for XG contents of 0.25, 0.5, 1, and 2% of water. Moreover, adjusting pH to values of 3, 5, 7, and 9, in addition to increasing ionic strength to 0.1 and 0.2 on the molar scale, covers a practical range of electrochemical conditions. The viscosity of the grouts shows a clear upward trend with increasing ionic strength and XG content. pH also affects the polymerization as much as the other parameters. As a result, XG behavior is strongly influenced by electrochemical settings.

Parametric Approach for Reserve Liability Estimate in Mortgage Insurance

The Chain Ladder (CL), Expected Loss Ratio (ELR), and Bornhuetter-Ferguson (BF) methods, in addition to more complex transition-rate modeling, are commonly used actuarial reserving methods in general insurance. There is limited published research on their relative performance in the context of Mortgage Insurance (MI). In our experience, these traditional techniques pose unique challenges and do not provide stable claim estimates for medium- to longer-term liabilities. The relative strengths and weaknesses of the various approaches revolve around: stability of the recent loss development pattern, sufficiency and reliability of loss development data, and agreement or disagreement between reported losses to date and the ultimate loss estimate. The CL method produces volatile reserve estimates, especially for accident periods with little development experience. The ELR method breaks down when ultimate loss ratios are not stable and predictable. While the BF method provides a good trade-off between the loss development approach (CL) and ELR, it generates claim development and ultimate reserves that are disconnected from the ever-to-date (ETD) development experience for some accident years with more development experience. Further, BF rests on a subjective a priori assumption. The fundamental shortcoming of these methods is their inability to model exogenous factors, such as the economy, which impact various cohorts at the same chronological time but at staggered points along their lifetime development. This paper proposes an alternative approach that parametrizes the loss development curve and uses logistic regression to generate the ultimate loss estimate for each homogeneous group (accident year or delinquency period). The methodology was tested on an actual MI claim development dataset in which various cohorts followed a sigmoidal trend, but levels varied substantially depending on the economic and operational conditions during a development period spanning many years. The proposed approach makes it possible to incorporate such exogenous factors indirectly and produces more stable loss forecasts for reserving purposes than the traditional CL and BF methods.
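
A minimal sketch of the parametric idea, assuming a three-parameter logistic curve fitted to a cohort's cumulative ETD losses; the paper's logistic-regression fit is stood in for here by a nonlinear least-squares fit with scipy, and the data are hypothetical:

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, ultimate, k, t0):
    """Sigmoidal loss development: cumulative losses approach the
    ultimate level as development age t grows."""
    return ultimate / (1.0 + np.exp(-k * (t - t0)))

# Hypothetical cohort: ever-to-date cumulative losses by development quarter.
age = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float)
etd = np.array([12, 30, 68, 120, 160, 182, 191, 196], dtype=float)

(ultimate, k, t0), _ = curve_fit(logistic, age, etd, p0=[etd[-1], 1.0, 4.0])
reserve = ultimate - etd[-1]   # reserve = fitted ultimate minus ETD losses
print(f"ultimate={ultimate:.1f}, reserve={reserve:.1f}")
```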

Verification and Proposal of Information Processing Model Using EEG-Based Brain Activity Monitoring

Human beings perform a task by perceiving information from outside, recognizing it, and responding to it. There have been various attempts to analyze and understand the internal processes behind the reaction to a given stimulus by conducting psychological experiments and analyzing them from multiple perspectives. Among these, we focus on the Model Human Processor (MHP). However, it was built on psychological experiments, so its relation to brain activity has remained unclear. To verify the validity of the MHP and to propose our own model from the viewpoint of neuroscience, EEG (electroencephalography) measurements were performed during the experiments in this study. More specifically, experiments were first conducted using Latin alphabet characters as visual stimuli. In addition to response time, event-related potentials (ERPs) such as the N100 and P300 were measured using EEG. By comparing the cycle times predicted by the MHP with the latencies of the ERPs, it was found that the N100, related to the perception of stimuli, appeared at the end of the perceptual processor. Furthermore, an additional experiment revealed that the P300, related to decision making, appeared during the response decision process, not at its end. Second, experiments using Japanese Hiragana characters, i.e., Japan's own phonetic symbols, confirmed these findings. Finally, Japanese Kanji characters were used as more complicated visual stimuli. A Kanji character usually has several readings and several meanings. Despite this difference, a reading-related task and a meaning-related task exhibited similar results, meaning that they involve similar information processing in the brain. Based on these results, a model was proposed that reflects response time and ERP latency. It consists of three processors: the perception processor, from the input of a stimulus to the appearance of the N100; the cognitive processor, from the N100 to the P300; and the decision-action processor, from the P300 to the response. Using our model, application systems that reflect brain activity can be built.
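
A minimal sketch of how the proposed three-processor model partitions total response time using the ERP latencies; the latency values below are hypothetical, not the study's measurements:

```python
def stage_durations(n100_ms, p300_ms, response_ms):
    """Split total response time into the three processors of the
    proposed model: perception (stimulus -> N100), cognition
    (N100 -> P300), and decision-action (P300 -> response)."""
    return {
        "perception": n100_ms,
        "cognition": p300_ms - n100_ms,
        "decision_action": response_ms - p300_ms,
    }

# Hypothetical single-trial latencies in milliseconds.
print(stage_durations(n100_ms=105, p300_ms=320, response_ms=480))
```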

Entrepreneur Universal Education System: Future Evolution

The success of education depends on evolution and adaptation. While the traditional system has worked before, one type of education that has evolved with the digital age is virtual education, which has improved efficiency in today's learning environments. Virtual learning has indeed proved its ability to overcome the drawbacks of the physical environment, such as time, facilities, and location, but despite these accomplishments, the educational system overall is not yet adequately productive. Earning a degree is no longer enough to obtain a career job; on its own, it lacks skills and creativity. There are always two sides of a coin: a college degree and a specialized certificate each have their own merits, but having both can put you on a successful IT career path. For the many job-seeking individuals across the world to have a clear, meaningful goal for work and education and to contribute positively to the community, a productive correlation and cooperation among employers and universities, along with individual technical skills, is a must for generations to come. The proposed research, "Entrepreneur Universal Education System," is an evolution designed to meet the needs of both employers and students, while making it easier than ever to gain vital, real-world experience in the chosen fields. The new vision is to empower education to serve organizations' needs, and thereby improve the world, as its primary goal; to adopt the universal skills of effective thinking, effective action, and effective relationships; and to prepare students through real-world accomplishment, encouraging them to serve their organizations and communities faster and more efficiently.

Exploring the Need to Study the Efficacy of VR Training Compared to Traditional Cybersecurity Training

Effective cybersecurity training is of the utmost importance, given the plethora of attacks that continue to increase in complexity and ubiquity. Yet VR cybersecurity training remains a starkly understudied discipline, and studies that evaluate its effectiveness against traditional methods are still needed. An engaging and interactive platform can support retention of the training material, so an effective form of cybersecurity training is required to support a culture of cybersecurity awareness. Measurements of effectiveness vary across existing studies, with surveys and observations being the two most commonly used forms of evaluation. Further research is therefore needed to evaluate the effectiveness of VR cybersecurity training relative to traditional training, and in particular to establish whether VR cybersecurity training is more effective than traditional methods. This paper proposes a methodology for comparing the two cybersecurity training methods and their effectiveness. The proposed framework includes developing both VR and traditional cybersecurity training and delivering them to at least 100 users. A quiz, along with a survey, will be administered and statistically analyzed to determine whether there is a difference in knowledge retention and user satisfaction. The aim of this paper is to draw attention to the need to study VR cybersecurity training and its effectiveness compared with traditional training methods, and thereby to contribute to the cybersecurity training field an effective way to train users for security awareness. If VR training is found to be more effective, this could set a new direction for cybersecurity training practices.
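
One way the proposed statistical comparison could be run; the abstract only says the quiz results will be "statistically analyzed", so a Welch t-test on two independent groups is an assumption here, and the scores below are simulated placeholders:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical quiz scores (0-100) for 50 users per training method.
vr_scores = rng.normal(78, 10, 50)
traditional_scores = rng.normal(72, 10, 50)

# Welch's t-test: does knowledge retention differ between the methods?
t_stat, p_value = stats.ttest_ind(vr_scores, traditional_scores,
                                  equal_var=False)
print(f"t={t_stat:.2f}, p={p_value:.4f}",
      "difference" if p_value < 0.05 else "no significant difference")
```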

An Optimized Method for 3D Magnetic Navigation of Nanoparticles inside Human Arteries

In the present work, a numerical method is presented for estimating the gradient magnetic fields that optimally drive magnetic particles to a desired area inside the human body. The proposed method combines Computational Fluid Dynamics (CFD), the Discrete Element Method (DEM), and the Covariance Matrix Adaptation (CMA) evolution strategy for the magnetic navigation of nanoparticles. It is based on an iterative procedure intended to minimize the deviation of the nanoparticles from a desired path: the gradient magnetic field is constantly adjusted so that the particles follow the desired trajectory as closely as possible. The results show that the particle diameter is a crucial parameter for efficient navigation; increasing the diameter decreases the deviation from the desired path. Moreover, the method can navigate nanoparticles into the desired areas with an efficiency of approximately 99%.
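
A minimal sketch of the iterative correction loop, assuming a black-box deviation function (the coupled CFD-DEM simulation in the paper, a toy quadratic stand-in here) and the generic `cma` package in place of the paper's CMA implementation; all names and numbers are hypothetical:

```python
import numpy as np
import cma  # pip install cma; generic CMA-ES optimizer

def simulate_deviation(gradients):
    """Hypothetical stand-in for the coupled CFD-DEM simulation: returns
    the particles' total squared deviation from the desired path for a
    given sequence of gradient magnetic field values (T/m)."""
    target = np.linspace(0.5, 2.0, gradients.size)
    return float(np.sum((gradients - target) ** 2))

# Adjust the gradient field values along the path to shrink the deviation.
es = cma.CMAEvolutionStrategy(x0=8 * [1.0], sigma0=0.5)
while not es.stop():
    candidates = es.ask()
    es.tell(candidates, [simulate_deviation(np.asarray(g)) for g in candidates])
print("optimized gradients:", np.round(es.result.xbest, 3))
```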

The Socio-Economic Impact of the English Leather Glove Industry from the 17th Century to Its Recent Decline

Gloves are significant physical objects, being one of the oldest forms of dress. Glove culture is part of every facet of life; its extraordinary history encompasses practicality and symbolism, reflecting a wide range of social practices. The survival of not only gloves but also associated articles makes it possible to analyse real lives; however, this area has so far been largely neglected, and limited information is available to students, researchers, or those involved with the design and making of gloves. Several museums and independent collectors in England hold collections of gloves (some from as early as the 16th century), machinery, tools, designs and patterns, marketing materials, and significant archives which demonstrate the rich heritage of English glove design and manufacturing, being of national significance and worthy of international interest. Through a research glove network, which now exists thanks to research grant funding, the holders of glove collections can make connections and explore links between these resources to promote a stronger understanding of the significance, breadth, and heritage of the English glove industry. The network takes an interdisciplinary approach, bringing together interested parties from academia, museums, and manufacturing with expert knowledge of the production, collections, conservation, and display of English leather gloves. Academics from diverse arts and humanities disciplines benefit from opportunities to share research and discuss ideas with network members from non-academic contexts, including museums and heritage organisations, industry, and contemporary designers. The fragmented collections, when considered in their entirety, provide an overview of English glove making since the earliest times and of those who wore gloves. This paper draws on those connections to explore the following areas: the current content and status of the individual museum collections; potential links between them; the sharing of social and cultural histories and their relationship to the history of fashion design, manufacturing, and materials; approaches to maintenance and conservation; access to the collections; and strategies for a future understanding of their national significance. The facilitation of knowledge exchange and the exploration of the collections through the network inform organisations' future strategies for the maintenance, access, and conservation of their collections. By involving industry in the network, it is possible to ensure a contemporary perspective on glove making in addition to the input from heritage partners. The slow fashion movement and awareness of artisan craft, and how these can be preserved and adopted for glove and accessory design, are also addressed. Artisan leather glove making was a skilled and significant industry in England that has now declined to the point where little production remains using the specialist skills that have hardly changed since the earliest times. This heritage must be identified and preserved for future generations, or the rich cultural history of gloves may be lost.

Characterization of a Hypoeutectic Al Alloy Obtained by Selective Laser Melting

In this investigation, a hypoeutectic AlSi11Cu alloy was printed. The alloy was obtained in powder form with an average particle size of 40 µm. Bars 20 mm in diameter and 100 mm in length were printed with the building direction parallel to the bars' longitudinal axis. Microstructural characterization revealed an Al matrix surrounded by a Si network forming a coral-like pattern. The microstructure was heterogeneous, with a mixture of columnar and equiaxed grains. Likewise, the texture indicated that the columnar grains were preferentially oriented along the building direction, while the equiaxed grains followed a texture dominated by the cube component. The as-printed material showed higher strength than the same alloy produced by conventional processes such as casting. In addition, strength and ductility in the printed material differed depending on the measurement direction. The highest values were obtained in the radial direction (565 MPa maximum strength and 4.8% elongation to failure), and the lowest in the transverse direction (508 MPa maximum strength and 3.2% elongation to failure), corroborating the material's anisotropy.

Optimization of Mechanical Properties of Alginate Hydrogel for 3D Bio-Printing Self-Standing Scaffold Architecture for Tissue Engineering Applications

In this study, the mechanical properties of alginate hydrogel for self-standing 3D scaffold architectures with proper shape fidelity are investigated. An in-lab-built extrusion-based 3D bio-printer is used to fabricate 3D alginate scaffold constructs; the pressure, needle speed, and stage speed are varied using a computer-controlled system. The experimental results indicate that the alginate solution concentration, the calcium chloride (CaCl2) cross-linking concentration, and the cross-linking ratio determine the gelation state of the alginate hydrogel. Gelling conditions such as cross-linking reaction time and temperature also have a significant effect on the mechanical properties of the hydrogel. Various tests, including material gelation, material spreading, a printability test for filament collapse, and a swelling test, were conducted to evaluate the fabricated 3D scaffold constructs. The results indicate that scaffolds fabricated from 3.5 wt% alginate solution prepared in DI water and 1 wt% CaCl2 solution with a cross-linking ratio of 7:3 show good printability and sustain good shape fidelity for more than 20 days, compared with alginate hydrogel prepared in phosphate-buffered saline (PBS). The fabricated self-standing 3D scaffold constructs, measuring 30 mm × 30 mm and consisting of 4 layers (n = 4), show good pore geometry and a clear grid structure after printing. In addition, the percentage change of the swelling degree exhibits high swelling capability over time. The swelling test shows that the geometry of the 3D alginate scaffold construct and of its macro-pores barely changes, indicating that shape fidelity is held throughout the incubation period. This study demonstrates that the mechanical and physical properties of alginate hydrogel can be tuned for an extrusion-based 3D bio-printing system to fabricate self-standing 3D soft scaffold structures. Such a 3D bioengineered scaffold mimics the natural microenvironment of the tissue's extracellular matrix and could be seeded with biological cells to generate the desired 3D live tissue model for in vitro and in vivo tissue engineering applications.

Dual-Actuated Vibration Isolation Technology for a Rotary System’s Position Control on a Vibrating Frame: Disturbance Rejection and Active Damping

A vibration isolation technology is proposed for the precise position control of a rotary system powered by two permanent magnet DC (PMDC) motors and mounted on an oscillatory frame. To achieve vibration isolation, an active damping and disturbance rejection (ADDR) technology is presented in which a main and an auxiliary PMDC cooperate, each controlled by a discrete-time sliding mode control (DTSMC) based scheme. The controller of the main actuator tracks a desired position while the auxiliary actuator simultaneously isolates the induced vibration, its controller following a torque trend. To determine this torque trend, the ADDR technology combines two algorithms. The first rejects the disturbance by counteracting the perturbation, estimated using a model-based observer. The second applies active variable damping to minimize the oscillation of the output shaft. The presented technology is implemented on a rotary system with an attached pendulum, mounted on a linear actuator that simulates an oscillation-transmitting structure, and the obtained results illustrate its functionality.
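
A minimal sketch of how the auxiliary actuator's torque trend could combine the two algorithms: cancel an observer-estimated disturbance and add variable damping. The observer output and gains below are hypothetical placeholders; the paper's DTSMC design is not reproduced here:

```python
def auxiliary_torque(d_hat, omega, d_gain):
    """ADDR torque trend for the auxiliary PMDC motor:
    - reject the disturbance by counteracting the observer estimate d_hat
    - apply active variable damping proportional to shaft speed omega."""
    return -d_hat - d_gain * omega

# Hypothetical discrete-time step (observer and plant models omitted).
d_hat, omega, d_gain = 0.12, 0.8, 0.35   # N*m, rad/s, N*m*s/rad
u_aux = auxiliary_torque(d_hat, omega, d_gain)
print(f"auxiliary torque command: {u_aux:.3f} N*m")
```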

Effect of Social Media on Knowledge Work

This paper examines the impact of social media on knowledge work. It discloses and highlights which specific aspects, areas, and tasks of knowledge work can be improved by the use of social media. Moreover, the study includes a survey of higher education students' viewpoints on the use of social media as a means to enhance knowledge work and knowledge sharing. The analysis is based both on empirical data and on a discussion of sources dealing with knowledge work and how it can be enhanced using social media. The results show that social media can improve knowledge work as well as knowledge building and maintenance tasks in which communication, information sharing, and collaboration play a vital role. Additionally, by using social media, personal, collaborative, and supplementary work activities can be enhanced. Based on the results of the study, we suggest how knowledge work can be enhanced using the contemporary information and communications technologies (ICTs) of the 21st century and recommend future directions for improving knowledge work.

Development of a Feedback Control System for a Lab-Scale Biomass Combustion System Using Programmable Logic Controller

The application of combustion technologies for the thermal conversion of biomass and solid wastes to energy has long been a major solution for the effective handling of wastes. Lab-scale biomass combustion systems have been found to be economically viable and socially acceptable, but major concerns are the environmental impacts of the process and the deviation of the temperature distribution within the combustion chamber. Both high and low combustion chamber temperatures may affect the overall combustion efficiency and gaseous emissions. Therefore, there is a need for a control system that measures the deviations of the chamber temperature from set target values, sends these deviations (which act as disturbances in the system) back as a feedback signal, and adjusts the operating conditions to correct the errors. In this study, the major components of the feedback control system were determined, assembled, and tested. In addition, control algorithms were developed to actuate the operating conditions (e.g., air velocity, fuel feeding rate) using ladder logic functions embedded in a Programmable Logic Controller (PLC). The developed control algorithm, with the chamber temperature as the feedback signal, was integrated into the lab-scale swirling fluidized bed combustor (SFBC) to investigate the temperature distribution at different heights of the combustion chamber under various operating conditions. The air blower rates and fuel feeding rates obtained from automatic control operations were correlated with manual inputs; no observable difference was found, indicating that the written PLC program functions were adequate for the experimental study of the lab-scale SFBC. The experimental results were analyzed to study the effect of air velocity in the range of 222-273 ft/min and fuel feeding rates of 60-90 rpm on the chamber temperature. The developed temperature-based feedback control system was shown to be adequate for controlling the airflow and fuel feeding rate in the overall biomass combustion process, as it helps to minimize the steady-state error.
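
A minimal Python restatement of the feedback logic the ladder-logic program implements on the PLC, assuming a simple proportional correction of air velocity and feed rate from the chamber-temperature error; the gains and set point below are hypothetical, only the actuator ranges come from the abstract:

```python
def control_step(t_measured, t_set, air, feed,
                 kp_air=0.8, kp_feed=0.05,
                 air_range=(222.0, 273.0), feed_range=(60.0, 90.0)):
    """One feedback iteration: the chamber-temperature error adjusts the
    air velocity (ft/min) and fuel feeding rate (rpm), clamped to the
    operating ranges studied."""
    error = t_set - t_measured
    air = min(max(air + kp_air * error, air_range[0]), air_range[1])
    feed = min(max(feed + kp_feed * error, feed_range[0]), feed_range[1])
    return air, feed

# Hypothetical example: chamber runs 15 degC below the set point.
print(control_step(t_measured=785.0, t_set=800.0, air=240.0, feed=75.0))
```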

Irrigation Water Quality Evaluation Based on Multivariate Statistical Analysis: A Case Study of Jiaokou Irrigation District

Groundwater is the main source of water supply in the Guanzhong Basin, China. To investigate the suitability of groundwater for agricultural purposes in the Jiaokou Irrigation District, located in the east of the Guanzhong Basin, 141 groundwater samples were collected and analyzed for major ions (K+, Na+, Mg2+, Ca2+, SO42-, Cl-, HCO3-, and CO32-), pH, and total dissolved solids (TDS). Sodium percentage (Na%), residual sodium carbonate (RSC), magnesium hazard (MH), and potential salinity (PS) were applied for the irrigation water quality assessment. In addition, multivariate statistical techniques were used to identify the underlying hydrogeochemical processes. The results show that the TDS content depends mainly on Cl-, Na+, Mg2+, and SO42-, and that the HCO3- content is generally high except in the eastern sand area. These patterns reflect complex hydrogeochemical processes, such as the dissolution of carbonate minerals (dolomite and calcite), gypsum, halite, and silicate minerals; cation exchange; and evaporation and concentration. The average evaluation levels of Na%, RSC, MH, and PS for irrigation water quality are doubtful, good, unsuitable, and injurious to unsatisfactory, respectively. Therefore, decision makers need to consider the indicators comprehensively in order to evaluate the irrigation water quality reasonably.
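
The four indices follow standard definitions with all ion concentrations in meq/L; a minimal sketch (the sample values are hypothetical, not from the 141-sample dataset):

```python
def irrigation_indices(na, k, ca, mg, cl, so4, hco3, co3):
    """Standard irrigation-water indices; all concentrations in meq/L."""
    na_pct = 100.0 * (na + k) / (ca + mg + na + k)   # sodium percentage
    rsc = (co3 + hco3) - (ca + mg)                   # residual sodium carbonate
    mh = 100.0 * mg / (ca + mg)                      # magnesium hazard
    ps = cl + so4 / 2.0                              # potential salinity
    return na_pct, rsc, mh, ps

# Hypothetical groundwater sample (meq/L).
print(irrigation_indices(na=8.2, k=0.2, ca=3.1, mg=4.0,
                         cl=6.5, so4=5.8, hco3=6.0, co3=0.1))
```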

Greenhouse Gasses’ Effect on Atmospheric Temperature Increase and the Observable Effects on Ecosystems

Radiative forcing by greenhouse gases (GHG) increases the temperature of the Earth's surface, more on land and less in the oceans, owing to their thermal capacities. Given this inertia, the surface temperature increase is delayed in time. The air temperature, however, is not delayed, since the thermal capacity of air is much lower. In this study, an estimate of the atmospheric temperature increase is made through the analysis and synthesis of multidisciplinary science and data. This estimate is then used to shed light on current observations of ice and snow loss, desertification and forest fires, and increased extreme air disturbances. The inquiry is motivated by the author's skepticism that current changes can be explained by a ~1 °C rise in the global average surface temperature over the last 50-60 years; the only other plausible cause to explore is a rise in atmospheric temperature. The study analyzes the air temperature rise from three different scientific disciplines: thermodynamics, climate science experiments, and climatic historical studies. The results from these diverse disciplines are nearly the same, within ±1.6%. The direct radiative forcing of GHGs with a high level of scientific understanding is about 4.7 W/m2, averaged over the Earth's entire surface, in 2018 relative to pre-industrial times in the mid-1700s. The additional radiative forcing of fast feedbacks from various forms of water contributes approximately another ~15 W/m2. In 2018, these radiative forcings heated the atmosphere by approximately 5.1 °C, which will create a thermal-equilibrium average ground surface temperature increase of 4.6 °C to 4.8 °C by the end of this century. After 2018, the temperature will continue to rise even without any additional increase in the concentration of GHGs, primarily carbon dioxide and methane. These 2018 radiative forcing estimates were applied to estimate the effects on major Earth ecosystems. The additional forcing of nearly 20 W/m2 increases ice melting by an additional rate of over 90 cm/year, raises green leaf temperature by nearly 5 °C, and increases the work energy of air by approximately 40 Joules/mole. This explains the observed high rates of ice melting at all altitudes and latitudes, the spread of deserts, the increase in forest fires, and the increased energy of tornadoes, typhoons, hurricanes, and extreme weather far more plausibly than the 1.5 °C increase in average global surface temperature over the same interval. Planned mitigation and adaptation measures might prove much more effective when directed toward the reduction of existing GHGs in the atmosphere.
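
For orientation, the arithmetic implied by the abstract's own figures can be written compactly; the linear sensitivity parameter λ below is simply back-computed from those figures and is an illustrative assumption, not a value stated by the author:

```latex
\Delta F \approx 4.7 + 15 \approx 20\ \mathrm{W/m^2},\qquad
\Delta T_{\text{air}} \approx \lambda\,\Delta F \approx 5.1\ {}^{\circ}\mathrm{C}
\;\Rightarrow\; \lambda \approx \frac{5.1}{19.7} \approx 0.26\ \mathrm{K\,m^2/W}.
```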

Computational Fluid Dynamics Analysis and Optimization of the Coanda Unmanned Aerial Vehicle Platform

It is known that Coanda aerosurfaces can drastically augment the lift forces of an Unmanned Aerial Vehicle (UAV) platform. However, Coanda saucer UAVs, which commonly use a dish-like, radially extending structure, have shown no significant increase in thrust/lift force and have therefore never been commercially successful: the additional thrust/lift generated by the Coanda surface diminishes because the airstreams emerging from the rotor compartment expand radially, causing a serious loss of momentum and therefore a net loss of total thrust/lift. To overcome this weakness, we examine a Coanda surface of straight, cylindrical design and optimize its geometry for the highest thrust/lift using the computational fluid dynamics software ANSYS Fluent®. The results of this study reveal that a Coanda UAV configured with four sides of straight, cylindrical Coanda surface achieves an overall 45% increase in lift compared with conventional Coanda saucer UAV configurations. This work is part of an ongoing research project in which a Coanda prototype is being assembled; additionally, a custom thrust stand has been constructed for thrust/lift measurement.

Enhancing Temporal Extrapolation of Wind Speed Using a Hybrid Technique: A Case Study in West Coast of Denmark

The demand for renewable energy is increasing significantly, and major investments are flowing into the wind power generation industry as a leading source of clean energy. The wind energy sector depends entirely on, and is driven by, the prediction of wind speed, which is by nature highly stochastic and random. This study employs deep multi-fidelity Gaussian process regression (GPR) to predict wind speeds over medium-term time horizons. Data from the RUNE experiment on the west coast of Denmark, provided by the Technical University of Denmark, represent the wind speed across the study area from December 2015 to March 2016. The study investigates the effect of pre-processing the data by denoising the signal using the empirical wavelet transform (EWT) and of including the vector components of wind speed to increase the number of input data layers for data fusion in the deep multi-fidelity GPR. The outcomes were compared using the root mean square error (RMSE), and the results showed a significant increase in prediction accuracy: using the vector components of the wind speed as additional predictors yields more accurate predictions than strategies that ignore them, reflecting the importance of including all sub-data and pre-processing the signals in wind speed forecasting models.
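
A simplified, single-fidelity stand-in for the paper's deep multi-fidelity GPR, showing only the idea of feeding the u/v vector components as extra input layers; scikit-learn's GPR with an RBF kernel replaces the deep multi-fidelity model, and the data are simulated, not the RUNE measurements:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)
# Hypothetical hourly series: time index plus u/v wind components.
t = np.arange(500.0)
u, v = rng.normal(5, 1, 500), rng.normal(2, 1, 500)
speed = np.hypot(u, v) + 0.3 * rng.normal(size=500)   # noisy wind speed (m/s)

X = np.column_stack([t, u, v])          # vector components as extra layers
gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gpr.fit(X[:400], speed[:400])

pred = gpr.predict(X[400:])
rmse = float(np.sqrt(np.mean((pred - speed[400:]) ** 2)))
print(f"RMSE: {rmse:.3f} m/s")
```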

Embedded Semantic Segmentation Network Optimized for Matrix Multiplication Accelerator

Autonomous driving systems require high reliability to provide people with a safe and comfortable driving experience. However, despite the development of numerous vehicle sensors, it is difficult to always provide high perception performance in driving environments that vary with time and season. Image segmentation using deep learning, which has recently evolved rapidly, stably provides high recognition performance in various road environments. However, since the system controls a vehicle in real time, a highly complex deep learning network cannot be used due to time and memory constraints. Moreover, efficient networks are typically optimized for GPU environments, which degrades their performance on embedded processors equipped with simple hardware accelerators. In this paper, a semantic segmentation network, the matrix multiplication accelerator network (MMANet), optimized for the matrix multiplication accelerator (MMA) on Texas Instruments digital signal processors (TI DSP), is proposed to improve the recognition performance of autonomous driving systems. The proposed method is designed to maximize the number of layers that can be executed in a limited time, so as to provide reliable driving environment information in real time. First, the number of channels in the activation map is fixed to fit the structure of the MMA, and the resulting loss of information is compensated by increasing the number of parallel branches. Second, an efficient convolution type is selected depending on the size of the activation: since the MMA structure is fixed, normal convolution may be more efficient than depthwise separable convolution depending on the memory access overhead, so the convolution type is chosen according to the output stride to increase network depth. In addition, memory access time is minimized by processing operations only in the L3 cache. Lastly, reliable contexts are extracted using an extended atrous spatial pyramid pooling (ASPP). The proposed method obtains stable features from an extended path by increasing the kernel size and accessing consecutive data, and it uses two ASPPs to obtain high-quality contexts from the restored shape without global average pooling paths, since the layer uses the MMA as a simple adder. To verify the proposed method, experiments were conducted using perfsim, a timing simulator, and the Cityscapes validation set. The proposed network processes a 640 × 480 image in 6.67 ms, so six cameras can be used to identify the vehicle's surroundings at 20 frames per second (FPS). In addition, it achieves 73.1% mean intersection over union (mIoU), the highest recognition rate among embedded networks, on the Cityscapes validation set.
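
A sketch of two of the described ideas in PyTorch: a fixed channel count compensated by parallel branches, and a convolution type chosen by output stride. The layer sizes and stride threshold are hypothetical illustrations, not the published MMANet:

```python
import torch
import torch.nn as nn

class MMABlock(nn.Module):
    """Channel count is fixed to suit the MMA; lost capacity is recovered
    with parallel branches. Normal convolution is used at large output
    strides where depthwise separable layers lose their advantage."""
    def __init__(self, channels=64, branches=3, output_stride=16):
        super().__init__()
        def conv():
            if output_stride >= 16:   # memory access overhead favors normal conv
                return nn.Conv2d(channels, channels, 3, padding=1, bias=False)
            return nn.Sequential(     # depthwise separable alternative
                nn.Conv2d(channels, channels, 3, padding=1,
                          groups=channels, bias=False),
                nn.Conv2d(channels, channels, 1, bias=False))
        self.branches = nn.ModuleList([conv() for _ in range(branches)])
        self.fuse = nn.Conv2d(channels, channels, 1, bias=False)

    def forward(self, x):
        out = sum(branch(x) for branch in self.branches)  # MMA as a simple adder
        return self.fuse(out)

x = torch.randn(1, 64, 60, 80)
print(MMABlock()(x).shape)   # torch.Size([1, 64, 60, 80])
```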

Multilayer Thermal Screens for Greenhouse Insulation

Greenhouse cultivation is an energy-intensive process due to the high demand for cooling or heating under external climatic conditions, which can be extreme in the summer or winter seasons. The thermal radiation rate inside a greenhouse depends mainly on the type of covering material and the greenhouse construction. Using additional thermal screens under the greenhouse covering, combined with a dehumidification system, improves the insulation and can be cost-effective. Greenhouse covering material usually contains protective ultraviolet (UV) radiation additives to prevent film wear, insect damage, and crop diseases. This paper investigates the overall heat transfer coefficient, or U-value, for greenhouse polyethylene covering containing UV additives and for glass covering, with and without thermal screen supplements. The hot-box method was employed to evaluate the overall heat transfer coefficient experimentally as a function of the type and number of thermal screens. The results show that the overall heat transfer coefficient decreases with an increasing number of thermal screens as a hyperbolic function, and that it depends strongly on the ability of the material to reflect thermal radiation. Combining a greenhouse covering, i.e., polyethylene film or glass, with highly reflective thermal screens, i.e., containing about 98% aluminum stripes or aluminum foil, reduces the U-value by 61%-89% in the first case and by 70%-92% in the second, depending on the number of thermal screens. Thermal screens made from low-reflectivity materials may reduce the U-value by 30%-57%. The heat transfer coefficient is an indicator of the thermal insulation properties of the materials and allows farmers to make decisions on the use of appropriate thermal screens depending on the external and internal climate conditions in a greenhouse.
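
As an illustrative sketch only: the abstract reports a hyperbolic decrease but does not give its functional form, so one plausible parametrization is

```latex
U(n) \approx \frac{U_0}{1 + k\,n}, \qquad n = \text{number of thermal screens},
```

where U_0 would be the U-value of the covering alone and k a screen-dependent constant; neither parameter is given in the abstract.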

A Machine Learning Based Framework for Education Levelling in Multicultural Countries: UAE as a Case Study

In Abu Dhabi, many different education curriculums operate, and the private schools and quality assurance sector supervises numerous private schools serving many nationalities. Since these curriculums exist to meet expats' needs, each has different requirements for registration and success and a different age group for starting education. In fact, each curriculum has a different number of years, assessment techniques, reassessment rules, and exam boards. Currently, students who transfer between curriculums are often not placed in the right year group, because the start and end dates of each academic year and the date-of-birth cut-offs for each year group differ between curriculums. As a result, students end up either younger or older than their year group, which creates gaps in their learning and performance. In addition, there is no way to store student data throughout the academic journey so that schools can track the student learning process. In this paper, we propose a computational framework applicable in multicultural countries, such as the UAE, in which multiple education systems are implemented. The ultimate goal is to use cloud and fog computing technology, integrated with Artificial Intelligence and Machine Learning techniques, to enable a smooth transition when assigning students to year groups and to provide levelling and differentiation information for students who relocate from one education curriculum to another, while also providing the ability to store and access student data from anywhere throughout their academic journey.
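
One minimal way the machine-learning placement step could look; the abstract does not specify a model, so the classifier, the features (age, source curriculum, prior assessment score), and all records below are hypothetical illustrations only:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Hypothetical training records: [age_years, curriculum_id, prior_score].
X = np.array([[6.2, 0, 71], [7.1, 1, 64], [8.0, 0, 80],
              [7.4, 2, 58], [9.3, 1, 88], [8.6, 2, 77]])
y = np.array([1, 2, 3, 2, 4, 3])   # year group assigned by experts

model = DecisionTreeClassifier(max_depth=3).fit(X, y)
# A transferring student: 7.8 years old, from curriculum 2, prior score 70.
print("suggested year group:", model.predict([[7.8, 2, 70]])[0])
```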