Model-Driven and Data-Driven Approaches for Crop Yield Prediction: Analysis and Comparison

Crop yield prediction is a paramount issue in agriculture. The main idea of this paper is to find an efficient way to predict corn yield from meteorological records. The prediction models used in this paper can be classified into model-driven and data-driven approaches, according to their modeling methodology. Model-driven approaches are based on mechanistic crop modeling: they describe crop growth in interaction with the environment as a dynamical system. However, calibrating such a dynamical system is difficult, because it turns out to be a multidimensional non-convex optimization problem. An original contribution of this paper is a statistical methodology, Multi-Scenarios Parameters Estimation (MSPE), for parametrizing potentially complex mechanistic models from a new type of dataset (climatic data and final yield in many situations). It is tested with CORNFLO, a crop model for maize growth. The data-driven approach to yield prediction, on the other hand, is free of the complex biophysical process but imposes strict requirements on the dataset. A second contribution of the paper is the comparison of these model-driven methods with classical data-driven methods. For this purpose, we consider two classes of regression methods: methods derived from linear regression (Ridge and Lasso regression, Principal Components Regression, and Partial Least Squares Regression) and machine learning methods (Random Forest, k-Nearest Neighbors, Artificial Neural Networks, and SVM regression). The dataset consists of 720 county-scale corn yield records provided by the United States Department of Agriculture (USDA) and the associated climatic data. A 5-fold cross-validation process and two accuracy metrics, the root mean square error of prediction (RMSEP) and the mean absolute error of prediction (MAEP), were used to evaluate the prediction capacity. The results show that among the data-driven approaches, Random Forest is the most robust and generally achieves the best prediction error (MAEP 4.27%). It also outperforms our model-driven approach (MAEP 6.11%). However, the method for calibrating the mechanistic model from easily accessible datasets offers several side perspectives: the mechanistic model can potentially help to highlight the stresses suffered by the crop or to identify biological parameters of interest for breeding purposes. For this reason, an interesting perspective is to combine these two types of approaches.
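A minimal sketch of the kind of data-driven comparison described above, assuming a hypothetical table of per-county climatic predictors and final yields; the placeholder data, feature set, and preprocessing do not reproduce the paper's.

```python
# Sketch: 5-fold cross-validation of two of the regression methods listed above,
# reporting RMSEP and MAEP. Data below are random placeholders.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold
from sklearn.metrics import mean_squared_error, mean_absolute_error

def evaluate(model, X, y, n_splits=5):
    """5-fold cross-validation returning mean RMSEP and MAEP."""
    rmse, mae = [], []
    for train_idx, test_idx in KFold(n_splits=n_splits, shuffle=True, random_state=0).split(X):
        model.fit(X[train_idx], y[train_idx])
        pred = model.predict(X[test_idx])
        rmse.append(np.sqrt(mean_squared_error(y[test_idx], pred)))
        mae.append(mean_absolute_error(y[test_idx], pred))
    return np.mean(rmse), np.mean(mae)

# X: hypothetical climatic predictors (e.g. monthly temperature and rainfall); y: final corn yield
X, y = np.random.rand(720, 12), np.random.rand(720) * 10 + 5   # placeholder data
for name, model in [("Random Forest", RandomForestRegressor(n_estimators=200, random_state=0)),
                    ("Ridge", Ridge(alpha=1.0))]:
    rmsep, maep = evaluate(model, X, y)
    print(f"{name}: RMSEP={rmsep:.2f}, MAEP={maep:.2f}")
```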

Performance Analysis of a Hybrid DF-AF Hybrid RF/FSO System under Gamma Gamma Atmospheric Turbulence Channel Using MPPM Modulation

The performance of a hybrid amplify-and-forward/decode-and-forward (AF-DF) hybrid radio frequency/free space optical (RF/FSO) communication system that adopts M-ary pulse position modulation (MPPM) is analyzed. Both exact and approximate symbol-error rates (SERs) are derived. The random variations of the received optical irradiance produced by atmospheric turbulence are modeled by the gamma-gamma (GG) statistical distribution. A closed-form expression for the probability density function (PDF) of the overall system is derived. Thanks to the use of the hybrid AF-DF hybrid RF/FSO configuration and MPPM, the effects of atmospheric turbulence are mitigated; hence the capacity to combat atmospheric turbulence and the transmitted signal quality are improved.
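For reference, a sketch of the standard gamma-gamma irradiance PDF commonly used to model this turbulence channel; the alpha and beta values below are illustrative only and are not taken from the paper.

```python
# Gamma-gamma PDF of the normalized irradiance I:
#   f(I) = 2(ab)^((a+b)/2) / (Gamma(a)Gamma(b)) * I^((a+b)/2 - 1) * K_{a-b}(2 sqrt(abI))
import numpy as np
from scipy.special import gamma, kv

def gamma_gamma_pdf(I, alpha, beta):
    coef = 2.0 * (alpha * beta) ** ((alpha + beta) / 2.0) / (gamma(alpha) * gamma(beta))
    return coef * I ** ((alpha + beta) / 2.0 - 1.0) * kv(alpha - beta, 2.0 * np.sqrt(alpha * beta * I))

I = np.linspace(0.01, 3.0, 300)
pdf = gamma_gamma_pdf(I, alpha=4.2, beta=1.4)     # example of moderate-to-strong turbulence
print(np.sum(pdf) * (I[1] - I[0]))                # crude integral; should be close to 1
```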

Projections of Climate Change in the Rain Regime of the Ibicui River Basin

The global concern about climate change has been increasing, since the emission of gases from human activities contributes to the greenhouse effect in the atmosphere, indicating significant impacts on the planet in the coming years. The study of the precipitation regime is fundamental for research in several areas, among them hydrology, agriculture, and the electric sector. Using the climatic projections of the models belonging to CMIP5, the main objective of this paper is to present an analysis of the impacts of climate change on rainfall in the Uruguay River basin. The results show that, for the future climate, there is a tendency, relative to the present climate, toward a larger number of dry events, mainly in the winter months, shifting the pluviometric regime toward wet summers and drier winters. Given this projected scenario, adequate management of the existing water sources in the river basin is important, since reduced rainfall in the coming years may compromise the dynamics of the ecosystems in the region. Facing climate change is a fundamental issue for regions and cities all around the world. Society must improve its resilience to the impacts of this phenomenon, and spreading the knowledge among decision makers and citizens is essential. These research results can therefore support decision-making in the planning and management of mitigation and/or adaptation measures in southern Brazil.

Optimal Portfolio Selection in a DC Pension with Multiple Contributors and the Impact of Stochastic Additional Voluntary Contribution on the Optimal Investment Strategy

In this paper, we study optimal portfolio selection in a defined contribution (DC) pension scheme with multiple contributors under the constant elasticity of variance (CEV) model and the impact of stochastic additional voluntary contributions on the investment strategy. We assume that the voluntary contributions are stochastic and consider investments in a risk-free asset and a risky asset to increase the expected returns of the contributing members. We derive a stochastic differential equation that comprises the members' monthly contributions and the invested fund, and formulate an optimization problem with the help of the Hamilton-Jacobi-Bellman equation. Furthermore, we find an explicit solution for the optimal investment strategy with stochastic voluntary contributions using the power transformation and change of variables method, and obtain the corresponding optimal fund size. We discuss the impact of the voluntary contribution on the optimal investment strategy with numerical simulations and observe that the voluntary contribution reduces the optimal investment in the risky asset.
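As a hedged sketch of the setting, the block below gives the standard CEV dynamics for the risky asset, a generic fund equation with a contribution rate, and the value function whose HJB equation is solved; the notation (S_t, X_t, \pi_t, c_t, \mu, r, k, \beta, U) is assumed here and may differ from the paper's.

```latex
\begin{align}
  dS_t &= S_t\left(\mu\,dt + k S_t^{\beta}\, dW_t\right)
    && \text{(CEV risky asset)}\\
  dX_t &= \left[r X_t + \pi_t(\mu - r) + c_t\right]dt + \pi_t k S_t^{\beta}\, dW_t
    && \text{(fund with contribution rate } c_t\text{)}\\
  V(t,s,x) &= \sup_{\pi} \mathbb{E}\left[\,U(X_T)\mid S_t = s,\; X_t = x\,\right]
    && \text{(value function for the HJB equation)}
\end{align}
```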

Proximity-Inset Fed Triple Band Antenna for Global Positioning System with High Gain

A triple-band circularly polarized antenna covering 1.17, 1.22, and 1.57 GHz is presented. To extend the design to triple-band operation, one more ring is added while maintaining the mechanism that independently controls each ring. The inset part of the feeding scheme is used to excite the band at 1.22 GHz, while the proximate part of the feeding scheme excites not only the band at 1.57 GHz but also the band at 1.17 GHz. This is achieved by upward vertical coupling to one ring radiating at 1.57 GHz and downward vertical coupling to another ring radiating at 1.17 GHz. The inset part of the feeding scheme, in contrast, operates by horizontal coupling. Furthermore, to increase the gain at all three bands, three air layers are added, bringing the total height of the antenna to 7.8 mm; the total thickness of the three air layers is 3 mm. The gains of the three bands are all greater than 5 dBiC after adding the air layers.

Risk Based Maintenance Planning for Loading Equipment in Underground Hard Rock Mine: Case Study

The mining industry is known for its appetite to spend sizeable capital on mine equipment. However, in the current scenario, the industry is challenged by daunting factors: non-uniform geological conditions, uneven ore grade, uncontrollable and volatile mineral commodity prices, and the ever-increasing quest to optimize capital and operational costs. Equipment reliability and maintenance planning therefore play a significant role in augmenting equipment availability for the operation and, in turn, boosting mine productivity. This paper presents Risk Based Maintenance (RBM) planning conducted on mine loading equipment, namely Load Haul Dumpers (LHDs), at the Sindesar Khurd Mines, an underground zinc and lead mine situated in Dariba, Rajasthan, India, operated by Hindustan Zinc Limited, a subsidiary of Vedanta Resources Ltd. The mining equipment at the site is maintained by the Original Equipment Manufacturers (OEMs), namely Sandvik and Atlas Copco, who carry out the maintenance and inspection operations. Based on downtime data extracted for the equipment fleet over a period of 6 months, from 1st January 2017 to 30th June 2017, three downtime issues, related to the Engine, Hydraulics, and Transmission, were found to contribute significantly and to be common across the entire loading equipment fleet, as substantiated by Pareto Analysis. Further scrutiny of these factors through Bubble Matrix Analysis revealed the major influence of selective factors, namely overheating, No Load Taken (NTL) issues, gear-changing issues, and hose puncture and leakage issues. Utilizing the equipment-wise analysis of all downtime factors, the spares consumed, and the alarm logs extracted from the machines, technical design changes in the equipment and a pre-shift critical alarms checklist were proposed for equipment maintenance. The given analysis helps OEMs and mine management to focus on the critical issues hampering the reliability of mine equipment and to design the necessary maintenance strategies to mitigate them.
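A minimal sketch of the Pareto analysis step used to rank downtime causes; the downtime hours below are hypothetical placeholders, not data from the mine.

```python
# Rank downtime causes and accumulate their share of total downtime (Pareto analysis).
downtime_hours = {
    "Engine": 310, "Hydraulics": 270, "Transmission": 220,
    "Electrical": 90, "Brakes": 60, "Tyres": 50,
}

total = sum(downtime_hours.values())
cumulative = 0.0
print(f"{'Cause':<14}{'Hours':>8}{'Share %':>10}{'Cum. %':>10}")
for cause, hours in sorted(downtime_hours.items(), key=lambda kv: kv[1], reverse=True):
    share = 100.0 * hours / total
    cumulative += share
    print(f"{cause:<14}{hours:>8}{share:>10.1f}{cumulative:>10.1f}")
# Causes whose cumulative share stays below ~80% form the "vital few" to target first.
```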

System Security Impact on the Dynamic Characteristics of Measurement Sensors in Smart Grids

Smart grid is a term used to describe the next-generation power grid. New challenges, such as the integration of renewable and decentralized energy sources, the requirement for continuous grid estimation and optimization, and the use of two-way flows of energy, have been brought to the power grid. In order to achieve efficient, reliable, sustainable, and secure delivery of electric power, more and more information and communication technologies are used for the monitoring and control of power grids. Consequently, the need for cybersecurity has increased dramatically and has converged into several standards, which are presented here. These standards for the smart grid must be designed to satisfy both performance and reliability requirements. An in-depth investigation of the effect of retrospectively embedded security on the dynamic behavior of existing grids is required. Therefore, a retrofitting plan for existing meters is offered, and its performance in a test low-voltage microgrid is investigated. As a result, the integration of security measures into the measurement architectures of smart grids at the design phase is strongly recommended.

Synthesis and Characterization of Nickel and Sulphur Sensitized Zinc Oxide Structures

The use of nanostructured semiconducting materials to catalyze the degradation of environmental pollutants still receives much attention to date. One of the desired characteristics for pollutant degradation under ultraviolet-visible light is extended charge-carrier separation, which allows electron transfer between the catalyst and the pollutants. In this work, vertically aligned zinc oxide n-type semiconductor structures were fabricated on silicon (100) substrates using the chemical bath deposition method. The as-synthesized structures were treated with nickel and sulphur. X-ray diffraction, scanning electron microscopy, and energy-dispersive X-ray spectroscopy were used to characterize the phase purity, structural dimensions, and elemental composition of the obtained structures, respectively. Photoluminescence emission measurements showed a decrease in both the near-band-edge emission and the defect band emission upon addition of nickel and sulphur at different concentrations. This was attributed to increased charge-carrier separation due to the presence of Ni-S material on the ZnO surface, which is linked to improved charge transfer during photocatalytic reactions.

Comparison of Electrical Parameters of Oil-Immersed and Dry-Type Transformer Using Finite Element Method

The choice between oil-immersed and dry-type transformers is often controlled by cost, location, and application. This paper compares the electrical performance of liquid-filled and dry-type transformers, which will assist customers in choosing the right and efficient type for particular applications. An accurate assessment of the time-average flux density, electric field intensity, and voltage distribution in an oil-insulated and a dry-type transformer has been computed and investigated. Detailed transformer modeling and analysis have been carried out to determine the electrical parameter distributions. The models of the oil-immersed and dry-type transformers are developed and solved using the finite element method (FEM) to compare the electrical parameters. The effects of non-uniform and non-coherent voltage gradient, flux density, and electric field distribution on the power losses and insulation properties of the transformers are studied in detail. The results show that, for the same voltage and kilovolt-ampere (kVA) rating, oil-immersed transformers have better insulation properties and lower hysteresis losses than the dry-type.

Lead in the Soil-Plant System Following Aged Contamination from Ceramic Wastes

Lead contamination of agricultural land mainly vegetated with perennial ryegrass (Lolium perenne) has been investigated. The metal derived from the past discharge of sludge from a ceramic industry that had used lead paints. The results showed very high lead concentrations in many soil samples. In order to assess the lead soil contamination, a sequential extraction with H2O, KNO3, and EDTA was performed, and the chemical forms of lead in the soil were evaluated. More than 70% of the lead was in a potentially bioavailable form. Analysis of Lolium perenne showed elevated lead concentrations. A Freundlich-like model was used to describe the transferability of the metal from the soil to the plant.
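For orientation, the standard Freundlich-type form often used for soil-to-plant transfer is sketched below; the symbols (C_plant, C_soil, a, n) are assumed here and may not match the paper's parametrization.

```latex
% Freundlich-type soil-to-plant transfer relation, with a and n empirical
% constants fitted to the measured concentrations.
\begin{equation}
  C_{\mathrm{plant}} = a\, C_{\mathrm{soil}}^{\,1/n},
  \qquad
  \log C_{\mathrm{plant}} = \log a + \tfrac{1}{n}\,\log C_{\mathrm{soil}}
\end{equation}
```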

Improvement of Water Distillation Plant by Using Statistical Process Control System

Water supply and sanitation in Saudi Arabia are characterized by both challenges and accomplishments. One of the fundamental challenges is water scarcity. In order to overcome water scarcity, significant investments have been undertaken in seawater desalination, water distribution, sewerage, and wastewater treatment. The purpose of Statistical Process Control (SPC) is to determine whether the performance of a process is maintaining an acceptable quality level (AQL). SPC is an analytical decision-making method. A fundamental tool of SPC is the control chart, which tracks the variability in the measured values of the product quality characteristics. By using the appropriate chart, management can determine whether changes need to be made in order to keep the process in control. The two most important quality factors of the distilled water taken into consideration were pH (potential of hydrogen) and TDS (total dissolved solids). Quality checks were carried out at three stages: (1) water at the source, (2) water after chemical treatment, and (3) water sent for packing. The upper specification limit, central limit, and lower specification limit are taken as per Saudi water standards. The capability of the process to achieve the specifications set for the quality characteristics of the Berain water factory is the focus of the proposed SPC system.
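A minimal sketch of a 3-sigma individuals control chart for one quality characteristic such as pH; the readings below are placeholders, not the factory's data, and the paper's actual limits follow Saudi water standards.

```python
# Individuals control chart: estimate sigma from the average moving range
# (d2 = 1.128 for subgroups of size 2) and flag out-of-control points.
import numpy as np

ph_readings = np.array([7.1, 7.0, 7.2, 6.9, 7.3, 7.1, 7.0, 7.2, 7.4, 6.8])

center = ph_readings.mean()
moving_range = np.abs(np.diff(ph_readings)).mean()
sigma_est = moving_range / 1.128
ucl, lcl = center + 3 * sigma_est, center - 3 * sigma_est

print(f"CL={center:.2f}, UCL={ucl:.2f}, LCL={lcl:.2f}")
out_of_control = [(i, x) for i, x in enumerate(ph_readings) if not (lcl <= x <= ucl)]
print("Out-of-control points:", out_of_control or "none")
```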

Identification of Disease Causing DNA Motifs in Human DNA Using Clustering Approach

Studying DNA (deoxyribonucleic acid) sequences is useful in biological processes and is applied in fields such as diagnostic and forensic research. DNA carries the hereditary information in humans and almost all other organisms and is passed on to their offspring. Early detection of a defective DNA sequence may lead to many developments in the field of bioinformatics. Nowadays, various tedious techniques are used to identify defective DNA. The proposed work analyzes a given sequence to identify cancer-causing DNA motifs. Initially, the human DNA sequence is separated into k-mers using the k-mer separation rule. The separated k-mers are clustered using a Self-Organizing Map (SOM). Using the Levenshtein distance measure, the cancer-associated DNA motif is identified from the k-mer clusters. Experimental results of this work indicate the presence or absence of a cancer-causing DNA motif: if the cancer-associated DNA motif is found, the DNA is declared a cancer-causing sequence; otherwise, the input human DNA is declared a normal sequence. Finally, the elapsed time for finding the cancer-causing DNA motif using cluster formation is calculated and compared with the normal process of finding the motif; locating the cancer-associated motif is easier with cluster formation than without it. The proposed work will be an initial aid for research on finding genetic diseases.
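A sketch of two of the basic steps described above: splitting a sequence into k-mers and screening them against a motif with Levenshtein distance. The sequence, motif, and distance threshold are illustrative, not from the paper, and the SOM clustering step is omitted here.

```python
def kmers(sequence, k):
    """Return all overlapping k-mers of the sequence."""
    return [sequence[i:i + k] for i in range(len(sequence) - k + 1)]

def levenshtein(a, b):
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1, curr[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = curr
    return prev[-1]

sequence = "ATGCGTACGTTAGCATGCGTAC"          # placeholder human DNA fragment
motif = "GCGTAC"                            # placeholder disease-associated motif
hits = [(i, km) for i, km in enumerate(kmers(sequence, len(motif)))
        if levenshtein(km, motif) <= 1]
print("Motif found" if hits else "Normal sequence", hits)
```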

Comparing Test Equating by Item Response Theory and Raw Score Methods with Small Sample Sizes on a Study of the ARTé: Mecenas Learning Game

The purpose of the present research is to equate two test forms as part of a study to evaluate the educational effectiveness of the ARTé: Mecenas art history learning game. The researcher applied Item Response Theory (IRT) procedures to calculate item, test, and mean-sigma equating parameters. With a sample size of n=134, test parameters indicated "good" model fit but low Test Information Functions and more acute than expected equating parameters. Therefore, the researcher applied equipercentile equating and linear equating to raw scores and compared the equated form parameters and effect sizes from each method. Item scaling in IRT enables the researcher to select a subset of well-discriminating items. The mean-sigma step produces a mean-slope adjustment from the anchor items, which was used to scale the scores on the new form (Form R) to the reference form (Form Q) scale. In equipercentile equating, scores are adjusted to align the proportion of scores in each quintile segment. Linear equating produces a mean-slope adjustment, which was applied to all core items on the new form. The study followed a quasi-experimental design with purposeful sampling of students enrolled in a college-level art history course (n=134) and a counterbalancing design to distribute both forms on the pre- and post-tests. The Experimental Group (n=82) was asked to play ARTé: Mecenas online and complete Level 4 of the game within a two-week period; 37 participants completed Level 4. Over the same period, the Control Group (n=52) did not play the game. The researcher examined between-group differences from post-test scores on test Form Q and Form R by full-factorial two-way ANOVA. The raw score analysis indicated a 1.29% direct effect of form, which was statistically non-significant but may be practically significant. The researcher repeated the between-group differences analysis with all three equating methods. For the IRT mean-sigma adjusted scores, form had a direct effect of 8.39%. Mean-sigma equating with a small sample may have resulted in inaccurate equating parameters. Equipercentile equating aligned test means and standard deviations, but the resultant skewness and kurtosis worsened compared to the raw score parameters. Form had a 3.18% direct effect. Linear equating produced the lowest form effect, approaching 0%. Using linearly equated scores, the researcher conducted an ANCOVA to examine the effect size in terms of prior knowledge. The between-group effect size for the Control Group versus the Experimental Group participants who completed the game was 14.39%, with a 4.77% effect size attributed to pre-test score. Playing and completing the game increased art history knowledge, and individuals with low prior knowledge tended to gain more from pre- to post-test. Ultimately, researchers should approach test equating based on their theoretical stance on Classical Test Theory and IRT and the respective assumptions. Regardless of the approach or method, test equating requires a representative sample of sufficient size. With small sample sizes, the application of a range of equating approaches can expose item and test features for review, inform interpretation, and identify paths for improving instruments for future study.
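To illustrate two of the adjustments discussed above, the sketch below shows linear equating of raw scores and the mean-sigma slope/intercept computed from anchor-item difficulty parameters; the score arrays, moments, and b-parameters are placeholders, not study data.

```python
import numpy as np

def linear_equate(x, mean_x, sd_x, mean_y, sd_y):
    """Map a Form X score onto the Form Y scale by matching mean and SD."""
    return mean_y + (sd_y / sd_x) * (x - mean_x)

def mean_sigma(b_new, b_ref):
    """Mean-sigma scaling constants from anchor-item difficulty (b) parameters."""
    A = np.std(b_ref, ddof=1) / np.std(b_new, ddof=1)
    B = np.mean(b_ref) - A * np.mean(b_new)
    return A, B   # transformed difficulties: A * b_new + B

form_r = np.array([12, 15, 18, 20, 22, 25])        # placeholder Form R raw scores
form_q_mean, form_q_sd = 19.0, 4.5                  # placeholder Form Q moments
print(linear_equate(form_r, form_r.mean(), form_r.std(ddof=1), form_q_mean, form_q_sd))

A, B = mean_sigma(b_new=np.array([-0.6, 0.1, 0.9]), b_ref=np.array([-0.4, 0.2, 1.1]))
print(f"mean-sigma: A={A:.2f}, B={B:.2f}")
```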

Knowledge Reactor: A Contextual Computing Work in Progress for Eldercare

The worldwide population of people over 60 years of age is growing rapidly. This explosion is placing increasingly onerous demands on individual families, multiple industries, and entire countries. Current, human-intensive approaches to eldercare are not sustainable, but IoT and AI technologies can help. The Knowledge Reactor (KR) is a contextual data fusion engine built to address this and other similar problems. It fuses and centralizes IoT and System of Record/Engagement data into a reactive knowledge graph. Cognitive applications and services are constructed with its multiagent architecture. The KR can scale up and scale down, because it exploits container-based, horizontally scalable services for graph store (JanusGraph) and pub-sub (Kafka) technologies. While the KR can be applied to many domains that require IoT and AI technologies, this paper describes how the KR specifically supports the challenging domain of cognitive eldercare. Rule- and machine-learning-based analytics infer activities of daily living from IoT sensor readings. KR scalability, adaptability, flexibility, and usability are demonstrated.
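As an illustration of the kind of rule-based inference of activities of daily living mentioned above, the sketch below flags one activity from timed sensor events; the sensor names, rule, window, and events are hypothetical and do not reflect the KR's actual analytics or data model.

```python
from datetime import datetime, timedelta

events = [  # (timestamp, sensor_id) placeholders
    (datetime(2018, 5, 1, 7, 2), "kitchen_motion"),
    (datetime(2018, 5, 1, 7, 4), "fridge_door"),
    (datetime(2018, 5, 1, 7, 9), "stove_power"),
]

def infer_meal_preparation(events, window=timedelta(minutes=15)):
    """Flag 'meal preparation' if kitchen motion, fridge, and stove fire within one window."""
    needed = {"kitchen_motion", "fridge_door", "stove_power"}
    for t0, _ in events:
        seen = {s for t, s in events if t0 <= t <= t0 + window}
        if needed <= seen:
            return ("meal_preparation", t0)
    return None

print(infer_meal_preparation(events))
```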

Design of Collaborative Web System: Based on Case Study of PBL Support Systems

This paper describes the design and implementation of a web system for continuable and viable collaboration. The study proposes improvements to the system based on the results of a particular practice. As contemporary higher-education information environments transform, this study highlights the significance of university identity and college identity, which are formed continuously through the independent activities of students. Based on these discussions, the present study proposes a practical media environment design which facilitates the process of organizational identity formation based on a continuous and cyclical model. Even when users change, the communication system continues to operate and support cooperation; activities are archived and give rise to new activities. Based on the results, this study elaborates a plan for re-designing the system from the viewpoint of second-order cybernetics. Systems theory is the theoretical foundation for our study.

Design Development of Floating Performance Structure for Coastal Areas in the Maltese Islands

Background: Islands in the Mediterranean region offer opportunities for various industries to take advantage of the facilitation and use of versatile floating structures in coastal areas. In the context of dense land use, marine structures can contribute to ensuring both terrestrial and marine resource sustainability. Objective: The aim of this paper is to present and critically discuss an array of issues that characterize the design process of a floating structure for coastal areas and to present the challenges and opportunities of providing such multifunctional and versatile structures around the Maltese coastline. Research Design: A three-tier research design commenced with a systematic literature review. Semi-structured interviews were then conducted with stakeholders including a naval architect, a marine engineer, and civil designers. This second stage was followed by a focus group with stakeholders in the design and construction of marine lightweight structures. The three-tier research design ensured triangulation of issues. All phases of the study were governed by research ethics. Findings: Findings were grouped into three main themes: excellence, impact, and implementation. These included design considerations, applications, and potential impacts on local industry. The literature on the design and construction of marine structures in the Maltese Islands presented multiple gaps in the application of marine structures for local industries. Weather conditions, depth of the sea bed, and wave action imposed limitations on the design capabilities of the structure. Conclusion: Water structures offer great potential, and the conclusions demonstrate the applicability of such designs for Maltese waters. There is still no such provision within Maltese coastal areas for multi-purpose use. The introduction of such facilities presents a range of benefits for visiting tourists and locals, thereby offering a wide range of services to the tourism and marine industries. Costs of construction and adverse weather conditions were among the main limitations that shaped the design capacities of the water structures.

Experimental Simulation Set-Up for Validating Out-Of-The-Loop Mitigation when Monitoring High Levels of Automation in Air Traffic Control

An increasing degree of automation in air traffic will also change the role of the air traffic controller (ATCO). ATCOs will fulfill significantly more monitoring tasks compared to today. However, this rather passive role may lead to Out-Of-The-Loop (OOTL) effects comprising vigilance decrement and reduced situation awareness. The project MINIMA (Mitigating Negative Impacts of Monitoring high levels of Automation) has conceived a system to control and mitigate such OOTL phenomena. In order to demonstrate the MINIMA concept, an experimental simulation set-up has been designed. This set-up consists of two parts: 1) a Task Environment (TE) comprising a Terminal Maneuvering Area (TMA) simulator and 2) a Vigilance and Attention Controller (VAC) based on neurophysiological data recording, such as electroencephalography (EEG), and eye-tracking devices. The current vigilance level and the attention focus of the controller are measured during the ATCO's active work in front of the human-machine interface (HMI). The derived vigilance level and attention focus trigger adaptive automation functionalities in the TE to avoid OOTL effects. This paper describes the full-scale experimental set-up and the component development work towards it. Hence, it encompasses a pre-test whose results influenced the development of the VAC as well as the functionalities of the final TE and the two VAC sub-components.
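For illustration only, the sketch below derives a simple vigilance index from EEG band powers and uses a threshold to trigger an adaptive-automation action; this is a generic engagement-index example with placeholder data and thresholds, not the algorithm actually implemented in MINIMA's VAC.

```python
import numpy as np
from scipy.signal import welch

fs = 256                                   # sampling rate in Hz (assumed)
eeg = np.random.randn(fs * 10)             # placeholder for 10 s of one EEG channel

def band_power(freqs, psd, lo, hi):
    mask = (freqs >= lo) & (freqs < hi)
    return psd[mask].sum() * (freqs[1] - freqs[0])   # rectangle-rule integral of the PSD

freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)
theta = band_power(freqs, psd, 4, 8)
alpha = band_power(freqs, psd, 8, 13)
beta = band_power(freqs, psd, 13, 30)

vigilance_index = beta / (alpha + theta)   # higher values ~ more engaged
if vigilance_index < 0.5:                  # threshold is illustrative only
    print("Low vigilance: trigger adaptive automation in the Task Environment")
else:
    print(f"Vigilance index {vigilance_index:.2f}: no intervention")
```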

Commercialization of Technologies, Productivity and Problems of Technological Audit in the Russian Economy

The problems of technological development for the Russian Federation take on special significance in the context of the modernization of the production base. The complexity of the position of the Russian economy is that it cannot be fully attributed to the developing economies. Russia is a strong industrial power that has gone through processes of destructive de-industrialization amid changes in its economic and political structure. The need to find ways for re-industrialization is not a unique task; the economies of industrially developed countries face it as well. Under the influence of production outsourcing over the past 20 years, the industrial potential of the world's leading economies has regressed against the backdrop of the ascent of China, a new industrial giant. Therefore, methods, tools, and techniques utilized for the industrial renaissance in the EU may be used to achieve a technological leap in the Russian Federation, especially since the temporal gap of 5-7 years makes it possible to analyze best practices and use those technological transfer tools that have shown the greatest efficiency. In this article, methods of technological transfer are analyzed, the role of technological audit is justified, and the factors that influence the successful commercialization of technologies are analyzed.

Traffic Congestion Problem and Possible Solution in Kabul City

Traffic congestion is a worldwide issue, especially in developing countries. This is also the case in Afghanistan, especially in Kabul, the capital city, whose rapid population growth makes it the fifth fastest growing city in the world. Traffic congestion affects not only the mobility of people and goods but also air quality, which leads to numerous deaths (3,000 people) every year. There are many factors that contribute to traffic congestion. The insufficiency and inefficiency of the public transportation system, along with the increase in private vehicles, can be considered among the most important contributing factors. This paper addresses traffic congestion and attempts to suggest possible solutions that can help improve the current public transportation system in Kabul. To this end, the methodology used in this paper includes fieldwork conducted in Kabul city and a literature review. The outcome suggests that improving the public transportation system is likely to contribute to the reduction of traffic congestion and the improvement of air quality, thereby reducing the number of deaths related to air quality.

Evaluation of NH3-Slip from Diesel Vehicles Equipped with Selective Catalytic Reduction Systems by Neural Networks Approach

Selective catalytic reduction (SCR) systems for the reduction of nitrogen oxides by ammonia have been the technology chosen by most diesel vehicle (i.e., bus and truck) manufacturers in Brazil, as well as in Europe. In this exhaust aftertreatment system, ammonia (NH3) provides maximum NOx-removal efficiency when a significant amount of NH3 is stored on the catalyst surface; furthermore, under some conditions, over-stoichiometric ammonia availability is needed, which increases the NH3 slip even more. In other words, practice shows that slightly less than 100% NOx conversion is usually targeted, so that the aqueous urea solution hydrolyzes to NH3 via the formation of other species at relatively low temperatures. This paper presents a model based on neural networks, integrated with a road vehicle simulator, that allows NH3-slip emission factors to be estimated for different driving conditions and patterns. The proposed model predicts high NH3 slip, which is not yet regulated in Brazil, and more effort needs to be made to elucidate the contribution of vehicle-emitted NH3 to the urban atmosphere.
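A minimal sketch of a neural-network regression mapping driving-condition features to NH3-slip emission factors; the feature set, synthetic data, and network size are placeholders and do not reproduce the paper's model or its vehicle simulator.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Hypothetical features: mean speed (km/h), engine load (%), exhaust temperature (deg C)
X = np.column_stack([rng.uniform(10, 90, 500),
                     rng.uniform(20, 100, 500),
                     rng.uniform(180, 450, 500)])
# Synthetic emission factor (g/km) used only to make the example runnable
y = 0.002 * X[:, 1] + 0.0005 * (X[:, 2] - 250) + rng.normal(0, 0.01, 500)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0)
model.fit(X_train, y_train)
print("R^2 on held-out data:", round(model.score(X_test, y_test), 3))
```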