The Use of Lane-Centering to Assure the Visible Light Communication Connectivity for a Platoon of Autonomous Vehicles

The emerging Visible Light Communication (VLC) technology has been intensively investigated, evaluated, and recently deployed in convoy-based applications for Intelligent Transportation Systems (ITS). The technology's limitations have been identified, and various solutions have been proposed to address the critical alignment and mobility constraints. In this paper, we propose combining VLC with a Lane-Centering (LC) technique to assure VLC connectivity by keeping the autonomous vehicle aligned with the lane center using vision-based lane detection in a convoy-based formation. Such a combination can maintain the optical communication link with a lateral error of less than 30 cm. As long as the road lanes are detectable, the evaluated system showed stable behavior independently of the inter-vehicle distances and without exchanging any information with the remote vehicles. The proposed system is evaluated using a VLC prototype and empirical results of the LC application running over 60 km on the Madrid M40 highway.
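As a rough illustration of the vision-based lane-centering step described in this abstract, the following Python/OpenCV sketch estimates the lateral offset of the vehicle from the lane center using edge detection and a Hough transform. The thresholds, the region of interest, and the pixel-to-metre scale are hypothetical placeholders, not the values used in the paper; a lane-keeping controller would then steer to drive this offset toward zero so that the lateral error stays within the 30 cm budget needed to keep the optical link.

```python
import cv2
import numpy as np

def lateral_offset(frame, metres_per_pixel=0.003):
    """Estimate the lateral offset (metres) of the camera from the lane center.

    Minimal sketch: blur -> Canny edges -> lower-half mask -> Hough lines ->
    left/right lane x-positions at the image bottom -> offset of image center.
    All thresholds and the pixel scale are illustrative assumptions.
    """
    h, w = frame.shape[:2]
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(cv2.GaussianBlur(gray, (5, 5), 0), 50, 150)

    mask = np.zeros_like(edges)
    mask[h // 2:, :] = 255                      # keep the road region only
    edges = cv2.bitwise_and(edges, mask)

    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=40,
                            minLineLength=40, maxLineGap=100)
    if lines is None:
        return None                             # lanes not detectable this frame

    left_x, right_x = [], []
    for x1, y1, x2, y2 in lines[:, 0]:
        if x1 == x2:
            continue
        slope = (y2 - y1) / (x2 - x1)
        x_bottom = x1 + (h - 1 - y1) / slope    # extrapolate line to image bottom
        if slope < -0.3:
            left_x.append(x_bottom)
        elif slope > 0.3:
            right_x.append(x_bottom)

    if not left_x or not right_x:
        return None
    lane_center = (np.median(left_x) + np.median(right_x)) / 2.0
    return (w / 2.0 - lane_center) * metres_per_pixel
```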

Hybrid Anomaly Detection Using Decision Tree and Support Vector Machine

Intrusion detection systems (IDS) are key components of network security. These systems analyze network events to detect intrusions. An IDS is designed by training on normal or attack traffic data, and machine learning methods are among the most effective ways to design IDSs. In the method presented in this article, the pruning algorithm of the C5.0 decision tree is used to reduce the features of the traffic data, and the IDS is trained with the least squares support vector machine (LS-SVM) algorithm. The remaining features are then ranked according to the predictor importance criterion, and the least important features are eliminated in that order. The features that remain at this stage and yield the highest accuracy in the LS-SVM are selected as the final feature set. Compared with other similar studies that have examined feature selection for the least squares support vector machine model, the selected features give better accuracy, true positive rate, and false positive rate. The results are evaluated on the UNSW-NB15 dataset.
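The feature-elimination loop described above can be sketched in Python as follows. Since C5.0 and LS-SVM are not available in scikit-learn, the sketch substitutes a CART decision tree for C5.0 (to obtain predictor importances) and a standard SVM for the LS-SVM classifier; the data loading is a placeholder for the preprocessed UNSW-NB15 records.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def select_features(X, y, cv=5):
    """Rank features with a decision tree, then drop the least important ones
    one at a time, keeping the subset with the best SVM cross-validated accuracy.

    CART and SVC stand in for the paper's C5.0 and LS-SVM.
    """
    tree = DecisionTreeClassifier(random_state=0).fit(X, y)
    remaining = list(np.argsort(tree.feature_importances_))  # least important first

    best_score, best_subset = -np.inf, list(remaining)
    while len(remaining) > 1:
        score = cross_val_score(SVC(kernel="rbf"), X[:, remaining], y, cv=cv).mean()
        if score > best_score:
            best_score, best_subset = score, list(remaining)
        remaining = remaining[1:]          # eliminate the least important feature left
    return best_subset, best_score

# Usage (X, y would be the encoded UNSW-NB15 features and labels):
# selected, acc = select_features(X_train, y_train)
```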

Optimum Design of Tall Tube-Type Building: An Approach to Structural Height Premium

In recent decades, tubular systems have served as efficient structural systems for tall buildings. However, increasing the height of a building increases the structural material required to resist the imposed lateral loads. Following this approach, new structural systems are emerging to provide strength and stiffness with the minimum premium for height. In this research, selected tube-type structural systems such as framed tubes, braced tubes, diagrids, and hexagrid systems were applied as a single tube and as tubular structures combined with a braced core and outrigger trusses to sets of 48-, 72-, and 96-story models, respectively, in order to improve the integrated structural systems. This paper investigates the structural material consumed by the model structures, focusing on the premium for height. The compared analytical results indicate that, as the height of the building increases, combining the structural systems allows the framed tube, hexagrid, and braced tube systems to pay a smaller premium in material tonnage, while in the diagrid system the combination reduces steel consumption only insignificantly.

Classifying and Predicting Efficiencies Using Interval DEA Grid Setting

The classification and prediction of efficiencies in Data Envelopment Analysis (DEA) is an important issue, especially in large-scale problems or when new units frequently enter the set under assessment. In this paper, we contribute to the subject by proposing a grid structure based on interval segmentations of the ranges of values of the inputs and outputs. Such intervals, combined, define hyper-rectangles that partition the space of the problem. This structure, exploited through Interval DEA models and a dominance relation, acts as a DEA pre-processor, enabling the classification and prediction of efficiency scores of new units without applying any further DEA models.
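A minimal sketch of the grid and dominance pre-processing idea is given below: each unit is mapped to a hyper-rectangle by discretizing its input and output values into intervals, and a new unit can be pre-classified without solving a DEA model if some already-evaluated unit dominates it (no larger inputs, no smaller outputs). The interval counts and the unit data are illustrative, not those of the paper.

```python
import numpy as np

def cell_index(inputs, outputs, in_edges, out_edges):
    """Map a unit to its hyper-rectangle: one interval index per input/output."""
    return tuple(np.digitize(inputs, in_edges)) + tuple(np.digitize(outputs, out_edges))

def dominates(u, v):
    """Unit u dominates unit v if it uses no more of every input and
    produces no less of every output."""
    return np.all(u["in"] <= v["in"]) and np.all(u["out"] >= v["out"])

def pre_classify(new_unit, efficient_units):
    """If a known efficient unit dominates the new unit, the new unit cannot be
    efficient, so no DEA model needs to be solved for it."""
    for u in efficient_units:
        if dominates(u, new_unit) and not dominates(new_unit, u):
            return "inefficient"
    return "needs DEA evaluation"

# Illustrative use: 2 inputs, 1 output, 4 intervals per dimension.
in_edges = np.linspace(0, 1, 5)[1:-1]
out_edges = np.linspace(0, 1, 5)[1:-1]
efficient = [{"in": np.array([0.2, 0.3]), "out": np.array([0.9])}]
new = {"in": np.array([0.4, 0.5]), "out": np.array([0.6])}
print(cell_index(new["in"], new["out"], in_edges, out_edges), pre_classify(new, efficient))
```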

Association of Maternal Diet Quality Indices and Dietary Patterns during Lactation with the Growth of Exclusively Breastfed Infants

Maternal dietary intake during lactation might affect the growth rate of an exclusively breastfed infant. The present study was conducted to evaluate the effect of maternal dietary patterns and diet quality during lactation on the growth of exclusively breastfed infants. Methods: 484 healthy lactating mothers and their infants were enrolled in this study, which was conducted in Iran and included only exclusively breastfed infants. Dietary intake of the lactating mothers was assessed using a validated and reliable semi-quantitative food frequency questionnaire. Diet quality indices, including the alternative Healthy Eating Index (HEI) and dietary energy density (DED), and adherence scores for the Mediterranean, Nordic, and Dietary Approaches to Stop Hypertension (DASH) eating patterns were constructed. Anthropometric features of the infants (weight, height, and head circumference) were recorded at birth and at two and four months. Results: Weight, length, weight-for-height, and head circumference of infants at two and four months of age were mostly within the normal range among those whose mothers adhered more closely to the HEI during lactation (normal weight: 61%; normal height: 59%). The prevalence of stunting at four months of age among infants whose mothers adhered more closely to the HEI was 31% lower than among those with the least adherence. Mothers in the top tertile of the HEI score had the lowest frequency of underweight infants (18% vs. 33%; P=0.03). The odds ratio of being overweight or obese at four months of age was lowest among infants whose mothers adhered more closely to the HEI (OR: 0.67 vs. 0.91; P-trend=0.03). However, there was no significant association between maternal adherence to the Mediterranean, DASH, or Nordic eating patterns and infant growth (weight, height, or head circumference). Infant weight, length, weight-for-height, and head circumference at two and four months did not differ significantly across tertiles of maternal DED. Conclusions: Higher diet quality and closer adherence of the lactating mother to the HEI (as an indicator of diet quality) may be associated with better growth indices of the breastfed infant, whereas the DED of the lactating mother does not appear to affect growth. Adherence to the Mediterranean, DASH, or Nordic dietary patterns had no differential effect on the infants' growth indices. Breastfeeding thus appears to be a complete mode of feeding that is not much affected by the mother's dietary pattern, although better diet quality might be associated with better growth.

Electricity Price Forecasting: A Comparative Analysis with Shallow-ANN and DNN

Electricity prices have complex features such as high volatility, nonlinearity, and high frequency that make forecasting quite difficult. At the same time, electricity prices are volatile but not random, so it is possible to identify patterns in the historical data. Intelligent decision-making requires accurate price forecasting for market traders, retailers, and generation companies. So far, many shallow ANN (artificial neural network) models have been published in the literature and have shown adequate forecasting results. In recent years, neural networks with many hidden layers, referred to as DNNs (deep neural networks), have been widely used in the machine learning community. The goal of this study is to investigate the electricity price forecasting performance of shallow-ANN and DNN models for the Turkish day-ahead electricity market. The forecasting accuracy of the models is evaluated with publicly available data from that market. Both the shallow-ANN and the DNN approach can give successful results in forecasting problems. Historical load, price, and weather temperature data are used as the input variables for the models. The data set includes power consumption measurements gathered between January 2016 and December 2017 with one-hour resolution. Forecasting studies have accordingly been carried out comparatively with shallow-ANN and DNN models for the Turkish electricity market in this period. The main contribution of this study is the investigation of different shallow-ANN and DNN models in the field of electricity price forecasting. All models are compared with respect to their MAE (Mean Absolute Error) and MSE (Mean Squared Error) results. The DNN models give better forecasting performance than the shallow ANNs. The best five MAE results for the DNN models are 0.346, 0.372, 0.392, 0.402, and 0.409.
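To make the shallow-versus-deep comparison concrete, the sketch below builds one shallow and one deep fully connected network in Keras and scores both with MAE and MSE. The layer sizes, training settings, and feature layout (lagged price, load, and temperature values) are illustrative assumptions, and random data stands in for the 2016-2017 Turkish day-ahead market series; this is not the configuration evaluated in the study.

```python
import numpy as np
from tensorflow import keras
from sklearn.metrics import mean_absolute_error, mean_squared_error

def build_model(n_features, hidden_layers):
    """Fully connected regressor: one hidden layer = shallow ANN, several = DNN."""
    model = keras.Sequential([keras.Input(shape=(n_features,))])
    for units in hidden_layers:
        model.add(keras.layers.Dense(units, activation="relu"))
    model.add(keras.layers.Dense(1))               # next-hour price
    model.compile(optimizer="adam", loss="mse")
    return model

# Each row: lagged prices, loads and temperatures; target: the next hour's price.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(5000, 72)), rng.normal(size=5000)
X_tr, X_te, y_tr, y_te = X[:4000], X[4000:], y[:4000], y[4000:]

for name, layers in [("shallow-ANN", [64]), ("DNN", [128, 64, 32, 16])]:
    model = build_model(X.shape[1], layers)
    model.fit(X_tr, y_tr, epochs=20, batch_size=32, verbose=0)
    pred = model.predict(X_te, verbose=0).ravel()
    print(name, "MAE:", mean_absolute_error(y_te, pred),
          "MSE:", mean_squared_error(y_te, pred))
```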

Deep Learning for Renewable Power Forecasting: An Approach Using LSTM Neural Networks

Load forecasting has become crucial in recent years and has become a popular forecasting topic. Many different power forecasting models have been tried for this purpose. Electricity load forecasting is necessary for energy policy and for healthy, reliable grid systems. Effective forecasting of renewable energy output helps decision makers minimize the costs of electric utilities and power plants, so forecasting tools are required to predict how much renewable energy can be utilized. The purpose of this study is to explore the effectiveness of LSTM-based neural networks for estimating renewable energy loads. We present models for predicting renewable energy loads based on deep neural networks, in particular the Long Short-Term Memory (LSTM) algorithm. Deep learning allows models with multiple layers to learn representations of data, and LSTM networks are able to store information over long periods of time. Deep learning models have recently been used to forecast renewable energy sources, such as wind and solar power. Historical load and weather information are the most important input variables for power forecasting models. The dataset contains power consumption measurements gathered between January 2016 and December 2017 with one-hour resolution. The models use publicly available data from the Turkish Renewable Energy Resources Support Mechanism. Forecasting studies were carried out with these data using a deep neural network approach, including the LSTM technique, for the Turkish electricity market. 432 different models were created by varying the number of layers, the cell counts, and the dropout rate. The adaptive moment estimation (ADAM) algorithm is used for training as a gradient-based optimizer instead of SGD (stochastic gradient descent); ADAM performed better than SGD in terms of faster convergence and lower error rates. Model performance is compared according to MAE (Mean Absolute Error) and MSE (Mean Squared Error). The best five MAE results out of the 432 tested models are 0.66, 0.74, 0.85, and 1.09. The forecasting performance of the proposed LSTM models is successful compared with results reported in the literature.
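A compact version of one such LSTM configuration is sketched below. The window length, layer sizes, dropout rate, and training epochs are placeholders for the 432 combinations searched in the study, and random data stands in for the hourly Renewable Energy Resources Support Mechanism series.

```python
import numpy as np
from tensorflow import keras

def make_windows(series, window=24):
    """Turn an hourly series into (samples, window, 1) inputs and next-hour targets."""
    X = np.stack([series[i:i + window] for i in range(len(series) - window)])
    return X[..., np.newaxis], series[window:]

# Random data standing in for two years of hourly generation measurements.
series = np.random.default_rng(1).normal(size=17520).astype("float32")
X, y = make_windows(series)

model = keras.Sequential([
    keras.Input(shape=(X.shape[1], 1)),
    keras.layers.LSTM(64, return_sequences=True),
    keras.layers.Dropout(0.2),                     # one of the searched dropout settings
    keras.layers.LSTM(32),
    keras.layers.Dense(1),
])
model.compile(optimizer=keras.optimizers.Adam(), loss="mse", metrics=["mae"])
model.fit(X, y, epochs=10, batch_size=64, validation_split=0.1, verbose=0)

mse, mae = model.evaluate(X, y, verbose=0)
print("MAE:", mae, "MSE:", mse)
```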

Incorporation of Safety into Design by Safety Cube

Safety is often treated as a requirement or a performance indicator during the design process, and this does not always result in optimally safe products or systems. This paper suggests integrating best safety practices with the design process to enrich the exploration experience for designers and to add extra value for customers. For this purpose, commonly practiced safety standards and design methods have been reviewed and their common blocks merged to form the Safety Cube. The Safety Cube combines common blocks for design, hazard identification, risk assessment, and risk reduction through an integral approach. An example application presents the use of the Safety Cube for the design of machinery.

Lighting Consumption Analysis in Retail Industry: Comparative Study

This article presents a comparative study of the electrical energy consumption for lighting in various types of large commercial buildings built in Romania after 2007, with 3, 4, or 5 versus 8, 9, or 10 operational years. Some buildings have building management systems (BMS) installed to monitor the lighting performance from the opening day to the present, while others chose to install only local meters. First, for each analyzed building, the total required power and the lighting energy consumption were calculated from the number of lamps, the unit power, and the average daily running hours. All lighting objects and installations were classified according to the destination/location of the lighting (exterior parking or access, interior or covered parking, building interior, and building perimeter). Second, mechanical counters were installed on all lighting objects and installations, and digital meters were additionally installed on those linked to the BMS for better monitoring. Some efficient solutions are proposed to improve the power consumption, for example operating only one third of the covered and exterior parking lighting in the buildings where this is feasible. This lighting share can be applied on each level, especially during night shifts. Another example is the use of dimmers to reduce the light level depending on the work performed in the respective area, with which an energy saving of about 30% can be achieved. Using an appropriate BMS to monitor the energy consumption according to the average daily operating hours, and replacing low-performance luminaires with LED or other economical units, might significantly increase the energy performance and reduce the energy consumption of the buildings.
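The per-building calculation described above (installed power and lighting energy derived from lamp count, unit power, and average daily running hours), together with the two proposed saving measures, can be illustrated with a short, hypothetical computation; all figures below are placeholders, not measurements from the studied buildings.

```python
# Hypothetical lighting inventory for one building, grouped by destination/location:
# (lamp count, unit power in W, average running hours per day)
inventory = {
    "exterior parking/access": (120, 100, 12),
    "covered parking":         (300,  36, 24),
    "building interior":       (900,  28, 14),
    "building perimeter":      ( 60,  80, 11),
}

total_kwh = 0.0
for zone, (n_lamps, unit_w, hours) in inventory.items():
    kwh_per_day = n_lamps * unit_w * hours / 1000.0   # installed power x running hours
    total_kwh += kwh_per_day
    print(f"{zone:25s} {kwh_per_day:8.1f} kWh/day")

# Measure 1: run only 1/3 of the covered parking lighting (per level, night shifts).
n, p, h = inventory["covered parking"]
saving_parking = n * p * h / 1000.0 * (2 / 3)

# Measure 2: dim the interior lighting, assumed here to save about 30%.
n, p, h = inventory["building interior"]
saving_dimming = n * p * h / 1000.0 * 0.30

print(f"baseline: {total_kwh:.1f} kWh/day, "
      f"with both measures: {total_kwh - saving_parking - saving_dimming:.1f} kWh/day")
```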

Tools for Analysis and Optimization of Standalone Green Microgrids

Green microgrids, which use mostly renewable energy (RE) for generation, are complex systems with inherently nonlinear dynamics. Among the variety of available optimization tools, only a few adequately consider this complexity. This paper evaluates the applicability of two broadly similar optimization tools tailored for standalone RE microgrids and also assesses a machine learning tool for performance prediction that can enhance the reliability of any chosen optimization tool. It shows that one of these microgrid optimization tools has certain advantages over the other and presents a detailed routine for preparing the input data needed to simulate RE microgrid behavior. The paper also shows how neural-network-based predictive modeling can be used to validate and forecast solar power generation from weather time-series data, which improves the overall quality of standalone RE microgrid analysis.
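The neural-network-based predictive modeling step mentioned above might look like the following sketch: a small regressor trained on weather time-series features to predict solar power output, whose forecasts can then be cross-checked against the generation profile assumed by the optimization tool. The feature set, model size, and synthetic data are illustrative assumptions, not the tool assessed in the paper.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

# Hypothetical hourly weather features: [irradiance, ambient temperature, cloud cover, hour].
rng = np.random.default_rng(0)
X = rng.uniform(size=(8760, 4))
# Synthetic "measured" PV output, mostly irradiance-driven with noise (placeholder only).
y = 5.0 * X[:, 0] * (1 - 0.5 * X[:, 2]) + rng.normal(scale=0.2, size=8760)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
model.fit(X_tr, y_tr)

# A validated model can then forecast PV generation for the optimizer's input profiles.
print("MAE on held-out hours:", mean_absolute_error(y_te, model.predict(X_te)))
```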

An Analysis of the Representation of the Translator and the Translation Process in Brazilian Social Networking Groups

In the digital era, in which we face an avalanche of information, it is not new that the Internet has brought new modes of communication and access to knowledge. Characterized by a multiplicity of discourses, opinions, beliefs, and cultures, the web is a space with political-ideological dimensions where people (who often do not know each other) interact and create representations, deconstruct stereotypes, and redefine identities. Today, the translator needs to be able to deal with digital spaces ranging from specific software to social media, which inevitably impact his or her professional life. One of the most impactful ways of being seen in cyberspace is participation in social networking groups. In addition to their ability to disseminate information among participants, social networking groups allow significant personal and social exposure. Such exposure is due to the visibility each participant achieves not only on their personal profile page, but also in each comment or post they make in the groups. The objective of this paper is to study the representations of translators and of the translation process on the Internet, more specifically in publications in two influential Brazilian Facebook groups: "Translators/Interpreters" and "Translators, Interpreters and Curious". These groups reflect the changes the network has brought to the profession, including the way translators are seen and see themselves. The analyzed posts allowed a reading of what common sense seems to think about the translator, as opposed to what translators seem to think about themselves as a professional class. The results of the analysis lead to the conclusion that these two positions are antagonistic and sometimes represent a conflict of interests: on the one hand, society in general considers the translator's work easy and therefore not deserving of good remuneration; on the other hand, translators know how complex the translation process is and how much it takes to become a good professional. The results also reveal that social networking sites such as Facebook provide more visibility, but a more active role is required from translators to achieve greater appreciation of the profession and more recognition of the translator's role, especially in the face of the increasing development of automatic translation programs.

The Role of Blended Modality in Enhancing Active Learning Strategies in Higher Education: A Case Study of a Hybrid Course of Oral Production and Listening of French

Learning oral skills in an Arabic-speaking environment is challenging. A blended course (material, activities, and individual/group work tasks, etc.) was implemented in a level B1 module for undergraduate students of French as a foreign language in order to increase their opportunities to practice listening and speaking skills. This research investigates the influence of this modality on enhancing active learning and examines the effectiveness of the strategies provided. Moreover, it aims at discovering how the blended modality allows the teacher to flip the traditional classroom and create a learner-centered framework. Which approaches were integrated to motivate students and urge them to search, analyze, criticize, create, and accomplish projects? What was the students' perception? This paper is based on the qualitative findings of a questionnaire and a focus group interview with learners. Despite the doubled time and effort required of both teacher and students, the results revealed that the new information and communication technologies (NTIC) allowed a shift toward a learning paradigm in which learners were the "chiefs" of the process. Tasks and collaborative projects required higher intellectual capacities from them. Learners appreciated this experience and developed new lifelong learning competencies on many levels: social, affective, ethical, and cognitive. In conclusion, they defined themselves as motivated young researchers, motivators, and critical thinkers.

A Condition-Based Maintenance Policy for Multi-Unit Systems Subject to Deterioration

In this paper, we propose a condition-based maintenance policy for multi-unit systems, considering the existence of economic dependency among units. We consider a system composed of N identical units, where each unit deteriorates independently. The deterioration process of each unit is modeled as a three-state continuous-time homogeneous Markov chain with two working states and a failure state. The average production rate of a unit varies across the working states, and the demand rate of the system is constant. Units are inspected at equidistant time epochs, and the decision to perform maintenance is determined by the number of units in the failure state. If the total number of failed units exceeds a critical level, maintenance is initiated: failed units are replaced correctively and deteriorated units are maintained preventively. Our objective is to determine the optimal number of failed units at which to initiate maintenance so as to minimize the long-run expected average cost per unit time. The problem is formulated and solved in the semi-Markov decision process (SMDP) framework. A numerical example demonstrates the proposed policy, and a comparison with a purely corrective maintenance policy is presented.
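To illustrate the structure of the threshold policy (not the paper's SMDP solution itself), the following Monte-Carlo sketch simulates N identical three-state units inspected at equidistant epochs and triggers maintenance when the number of failed units reaches a critical level; the transition matrix and cost figures are hypothetical placeholders.

```python
import numpy as np

# States: 0 = as good as new, 1 = deteriorated, 2 = failed.
# One-step transition matrix over an inspection interval (illustrative values only).
P = np.array([[0.80, 0.15, 0.05],
              [0.00, 0.70, 0.30],
              [0.00, 0.00, 1.00]])
COSTS = {"inspection": 1.0, "preventive": 10.0, "corrective": 50.0, "setup": 20.0}

def average_cost(n_units, threshold, horizon=20_000, seed=0):
    """Estimate the long-run average cost per inspection epoch of a threshold policy."""
    rng = np.random.default_rng(seed)
    state = np.zeros(n_units, dtype=int)
    total = 0.0
    for _ in range(horizon):
        # Each unit deteriorates independently over the inspection interval.
        state = np.array([rng.choice(3, p=P[s]) for s in state])
        total += COSTS["inspection"]
        if np.count_nonzero(state == 2) >= threshold:
            total += COSTS["setup"]
            total += COSTS["corrective"] * np.count_nonzero(state == 2)
            total += COSTS["preventive"] * np.count_nonzero(state == 1)
            state[:] = 0        # maintained units are restored to as good as new
    return total / horizon

# Search for the critical number of failed units that minimizes the average cost.
best = min(range(1, 6), key=lambda k: average_cost(5, k))
print("best threshold for a 5-unit system:", best)
```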

Critical Buckling Load of Carbon Nanotube with Non-Local Timoshenko Beam Using the Differential Transform Method

In this paper, the Differential Transform Method (DTM) is employed to predict and analyze the non-local critical buckling loads of carbon nanotubes with various end conditions, using the non-local Timoshenko beam theory reduced to a single differential equation. The governing differential equation for buckling of the nanobeams is derived via a non-local theory, and the non-local critical buckling loads are obtained with the DTM. The DTM is introduced briefly; it can easily be applied to linear or nonlinear problems and it reduces the size of the computational work. The influence of the boundary conditions, the chirality of the carbon nanotube, and the aspect ratio on the non-local critical buckling loads is studied and discussed. The effects of the nonlocal parameter, the ratio L/d, the chirality of the single-walled carbon nanotube, and the boundary conditions on the buckling of the CNT are investigated.
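For reference, the DTM rests on the standard one-dimensional differential transform pair (a textbook definition, not a result specific to this paper):

```latex
W(k) = \frac{1}{k!}\left[\frac{d^{k} w(x)}{dx^{k}}\right]_{x = x_{0}},
\qquad
w(x) = \sum_{k=0}^{\infty} W(k)\,(x - x_{0})^{k}
```

With this pair, the governing buckling equation is converted into an algebraic recurrence in the coefficients W(k); the recurrence is truncated at a finite order, combined with the transformed boundary conditions, and the critical buckling load is typically obtained as the smallest load for which the resulting characteristic determinant vanishes.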

Software Improvements of the Accuracy in the Air-Electronic Measurement Systems for Geometrical Dimensions

Due to the constant development of measurement systems and the drive toward computerization, improvements addressing the main disadvantages of air gauges are unavoidable. With the appearance of air-electronic measuring devices, some of these disadvantages have been resolved. The electrical output signal allows such devices to be included in modern systems for processing measurement information and for process management. Producers' efforts are aimed at reducing the influence of the supply pressure and of measurement-system setup errors. The increased accuracy requirements and preventive error measures stem from the main applications of air-electronic systems: measuring geometric dimensions in the automotive industry, where they are applied as modules in measuring systems to determine the geometric parameters, form, orientation, and location of elements.

A World Map of Seabed Sediment Based on 50 Years of Knowledge

Production of a global sedimentological seabed map was initiated in 1995 to provide the necessary tool for searches for aircraft and boats lost at sea, to give sedimentary information for nautical charts, and to provide input data for acoustic propagation modelling. This approach had already been initiated a century earlier, when the French hydrographic service and the University of Nancy produced maps of the distribution of marine sediments along the French coasts and then sediment maps of the continental shelves of Europe and North America. The current map of ocean sediments presented here was initiated from UNESCO's general map of the deep ocean floor. That map was adapted using a single sediment classification to present all types of sediments, from beaches to the deep seabed and from glacial deposits to tropical sediments. In order to allow good visualization and to suit the different applications, only the granularity of the sediments is represented. Published seabed maps are reviewed; if they are of interest, the nature of the seabed is extracted from them, the sediment classification is transcribed, and the resulting map is integrated into the world map. Data also come from interpretations of Multibeam Echo Sounder (MES) imagery from large deep-ocean hydrographic surveys, which allow very high-quality mapping of areas that were previously represented as homogeneous. The third and principal source of data is the integration of regional maps produced specifically for this project. These regional maps are compiled using all the bathymetric and sedimentary data of a region; this step makes it possible to produce a regional synthesis map, with generalizations applied where data are over-precise. Eighty-six regional maps of the Atlantic Ocean, the Mediterranean Sea, and the Indian Ocean have been produced and integrated into the world sedimentary map. This work is ongoing and yields a new digital version every two years, incorporating new maps. This article describes the choices made in terms of sediment classification, the scale of the source data, and the zonation of the variability of the quality. The map is the final step in a system comprising the Shom Sedimentary Database, enriched with more than one million point and surface data items, and four series of coastal seabed maps at 1:10,000, 1:50,000, 1:200,000, and 1:1,000,000. This step-by-step approach makes it possible to take into account the progress made in seabed characterization during the last decades. Thus, the arrival of new seafloor classification systems has improved the recent seabed maps, and compiling these new maps with those previously published allows a gradual enrichment of the world sedimentary map. However, much work remains to improve some regions, which are still based on data acquired more than half a century ago.

Introduction of an Approach of Complex Virtual Devices to Achieve Device Interoperability in Smart Building Systems

One of the major challenges for sustainable smart building systems is to support device interoperability, i.e. connecting sensor and actuator devices from different vendors and presenting their functionality to external applications. Furthermore, smart building systems are expected to connect to devices that are not yet available, i.e. devices that reach the market at some later time. It is of vital importance that a sustainable smart building platform provides an appropriate external interface that can be leveraged by external applications and smart services. An external platform interface must be stable, independent of specific devices, and able to support flexible and scalable usage scenarios. A typical approach applied in smart home systems is based on a generic device interface used within the smart building platform; device functions, even of rather complex devices, are mapped to that generic base-type interface by means of specific device drivers. Our new approach, presented in this work, extends that idea by using the smart building system's rule engine to create complex virtual devices that can represent the most diverse properties of real devices. We examined and evaluated both approaches in a practical case study using a smart building system that we have developed. We show that the solution we present allows the highest degree of flexibility without affecting the stability and scalability of the external application interface. In contrast to other systems, our approach supports complex virtual device configuration at the application layer (e.g. by administrative users) instead of device configuration at the platform layer (e.g. by platform operators). Based on our work, we show that our approach supports almost arbitrarily flexible use case scenarios without affecting the stability of the external application interface. However, the cost of this approach is additional configuration overhead and additional resource consumption at the IoT platform level, which must be considered by platform operators. We conclude that the concept of complex virtual devices presented in this work can be applied to significantly improve the usability and device interoperability of sustainable smart building systems.
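The difference between the generic-device-driver approach and the rule-based complex virtual device approach can be sketched as follows; the class names and the rule representation (plain Python callables standing in for the platform's rule engine) are hypothetical and are not the API of the system developed in this work.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict

@dataclass
class Device:
    """Generic base-type interface: a device driver maps each real device,
    however complex, to a flat set of named properties (the classic approach)."""
    name: str
    properties: Dict[str, float] = field(default_factory=dict)

    def get(self, prop):
        return self.properties[prop]

@dataclass
class VirtualDevice(Device):
    """Complex virtual device: its properties are derived by rules from the
    properties of other devices, configured at the application layer."""
    rules: Dict[str, Callable[[], float]] = field(default_factory=dict)

    def get(self, prop):
        return self.rules[prop]() if prop in self.rules else super().get(prop)

# Two real devices from different vendors, exposed through the generic interface.
temp_north = Device("temp_north", {"temperature": 21.5})
temp_south = Device("temp_south", {"temperature": 24.1})

# An administrator composes them into one virtual "room climate" device without
# touching the driver layer; external applications see just another device.
room = VirtualDevice("room_climate", rules={
    "temperature": lambda: (temp_north.get("temperature") + temp_south.get("temperature")) / 2,
    "overheating": lambda: float(temp_south.get("temperature") > 26.0),
})
print(room.get("temperature"), room.get("overheating"))
```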

Time and Cost Efficiency Analysis of Quick Die Change System on Metal Stamping Industry

Manufacturing cost and setup time are key topics for improvement in the metal stamping industry, because material and component prices keep rising while customers require component prices to be reduced year by year. Single Minute Exchange of Die (SMED) is one of many methods for reducing waste in the stamping industry, and the Japanese Quick Die Change (QDC) die system is one SMED approach that can reduce both setup time and manufacturing cost. However, this system is rarely used in stamping industries. This paper analyzes how far the QDC die system can reduce setup time and manufacturing cost. The research was conducted by direct observation and by simulating and comparing the QDC die system with a conventional die system. We found that the QDC die system can save up to 35% of the manufacturing cost and reduce setup times by 70%. The simulation showed that the QDC die system is effective for cost reduction, but it must be applied across several parallel production processes.

A Comprehensive Evaluation of Supervised Machine Learning for the Phase Identification Problem

Power distribution circuits undergo frequent network topology changes that are often left undocumented. As a result, the documentation of a circuit's connectivity becomes inaccurate over time. The lack of reliable circuit connectivity information is one of the biggest obstacles to modeling, monitoring, and controlling modern distribution systems. To enhance the reliability and efficiency of electric power distribution systems, the circuit connectivity information must be updated periodically. This paper focuses on one critical component of a distribution circuit's topology: the secondary-transformer-to-phase association. This component describes the set of phase lines that feed power to a given secondary transformer (and therefore to a given group of power consumers). Recovering this documentation is called Phase Identification and is typically performed with physical measurements. These measurement campaigns can last on the order of several months, but with supervised learning the required measurement effort can be reduced significantly. This paper compares several such methods applied to Phase Identification on a large range of real distribution circuits, describes a method of training data selection, describes preprocessing steps unique to the Phase Identification problem, and ultimately describes a method that obtains high accuracy (>96% in most cases, >92% in the worst case) using only 5% of the measurements typically used for Phase Identification.
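A minimal supervised-learning baseline for the phase identification task might look like the sketch below: voltage-derived features per secondary transformer, a small labeled subset (the roughly 5% of measurements mentioned above), and a standard classifier predicting the feeding phase. The feature construction and the classifier are illustrative assumptions, not the specific method evaluated in the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Hypothetical data: one row per secondary transformer, features derived from its
# voltage time series (e.g. correlations with the three phase reference signals).
n_transformers, n_features = 2000, 24
X = rng.normal(size=(n_transformers, n_features))
phase = rng.integers(0, 3, size=n_transformers)          # ground-truth phase A/B/C
X[np.arange(n_transformers), phase] += 2.0               # make the toy problem learnable

# Training data selection: only ~5% of transformers receive a physical phase measurement.
labeled = rng.choice(n_transformers, size=n_transformers // 20, replace=False)
unlabeled = np.setdiff1d(np.arange(n_transformers), labeled)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X[labeled], phase[labeled])

pred = clf.predict(X[unlabeled])
print("accuracy on unmeasured transformers:", accuracy_score(phase[unlabeled], pred))
```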

Plasma Arc Burner for Pulverized Coal Combustion

The development of a new, highly efficient plasma arc combustion system for pulverized coal is presented. As is well known, coal is one of the main energy carriers by means of which electric and heat energy is produced in thermal power stations. The quality of the extracted coal is decreasing rapidly; therefore, difficulties associated with its ignition and complete combustion arise, and thermo-chemical preparation of the pulverized coal becomes necessary. Usually, other organic fuels (mazut fuel oil or natural gas) are added to low-quality coal for this purpose, with the fraction of additional organic fuel varying within the 35-40% range. This dramatically decreases the economic efficiency of such systems, while emissions of noxious substances into the environment increase. Because of all this, plasma combustion systems for pulverized coal are being intensively developed worldwide. These systems are equipped with non-transferred plasma arc torches. They allow practically complete combustion of pulverized coal (without organic additives) in boilers and increase energy and financial efficiency, while emissions of noxious substances into the environment decrease dramatically. However, non-transferred plasma torches have numerous drawbacks, e.g. complicated construction, low service life (especially at high power), instability of the plasma arc, and, most importantly, up to 30% energy loss due to anode cooling. For these reasons, new plasma technologies free from these shortcomings are being intensively developed. In our proposed system, the pulverized coal-air mixture passes through the plasma arc region, which burns between two carbon electrodes directly in the pulverized coal muffle burner. Consumption of the carbon electrodes is low and no cooling system is needed, but the main advantage of this method is that the radiation of the plasma arc acts directly on the coal-air mixture, which accelerates the thermo-chemical preparation of the coal for combustion. To ensure the stability of the plasma arc under such difficult conditions, we developed a power source that maintains a fixed current while fluctuations in the arc resistance are automatically compensated by the voltage, and that allows regulation of the plasma arc length over a wide range. Our combustion system, in which the plasma arc acts directly on the pulverized coal-air mixture, is simple. It should allow a significant improvement of pulverized coal combustion (especially of low-quality coal) and of its economic efficiency. Preliminary experiments demonstrated the successful functioning of the system.