Introduction of an Approach of Complex Virtual Devices to Achieve Device Interoperability in Smart Building Systems

One of the major challenges for sustainable smart building systems is to support device interoperability, i.e., connecting sensor and actuator devices from different vendors and presenting their functionality to external applications. Furthermore, smart building systems are expected to connect with devices that are not yet available, i.e., devices that only reach the market later. It is of vital importance that a sustainable smart building platform provides an appropriate external interface that can be leveraged by external applications and smart services. Such an external platform interface must be stable and independent of specific devices and should support flexible and scalable usage scenarios. A typical approach applied in smart home systems is based on a generic device interface used within the smart building platform. Device functions, even of rather complex devices, are mapped to that generic base type interface by means of specific device drivers. Our new approach, presented in this work, extends that approach by using the smart building system’s rule engine to create complex virtual devices that can represent the most diverse properties of real devices. We examined and evaluated both approaches in a practical case study using a smart building system that we have developed. We show that the presented solution allows the highest degree of flexibility without affecting the stability and scalability of the external application interface. In contrast to other systems, our approach supports complex virtual device configuration at the application layer (e.g., by administrative users) instead of device configuration at the platform layer (e.g., by platform operators). Based on our work, we can show that our approach supports almost arbitrarily flexible use case scenarios without affecting the stability of the external application interface. However, the cost of this approach is additional configuration overhead and additional resource consumption at the IoT platform level that must be considered by platform operators. We conclude that the concept of complex virtual devices presented in this work can be applied to significantly improve the usability and device interoperability of sustainable smart building systems.
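
As an illustration of the idea (not the authors' implementation), the following minimal Python sketch shows a hypothetical generic base device interface and a complex virtual device whose state is derived from two vendor-specific devices by a rule supplied at the application layer; all class, device, and property names are assumptions made for this example.

```python
class GenericDevice:
    """Generic base type: every device exposes named properties."""
    def __init__(self, device_id):
        self.device_id = device_id
        self.properties = {}

    def get(self, name):
        return self.properties.get(name)

    def set(self, name, value):
        self.properties[name] = value


class VirtualDevice(GenericDevice):
    """Complex virtual device composed from real devices via a rule function."""
    def __init__(self, device_id, sources, rule):
        super().__init__(device_id)
        self.sources = sources      # underlying real devices
        self.rule = rule            # rule supplied at the application layer

    def refresh(self):
        # A rule engine would evaluate this whenever a source device changes.
        self.properties.update(self.rule(self.sources))


# Example: a "room climate" virtual device combining two vendor-specific sensors.
temp = GenericDevice("temp-1")
temp.set("temperature", 21.5)
hum = GenericDevice("hum-1")
hum.set("humidity", 48)

room = VirtualDevice(
    "room-climate-1", [temp, hum],
    rule=lambda devs: {"comfortable": devs[0].get("temperature") < 24
                       and devs[1].get("humidity") < 60},
)
room.refresh()
print(room.get("comfortable"))  # True
```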

Economized Sensor Data Processing with Vehicle Platooning

We present vehicular platooning as a special case of a crowd-sensing framework in which sensory information is shared among a crowd for its collective benefit. After offering an abstract policy that governs processes involving a vehicular platoon, we review several common scenarios and components surrounding vehicular platooning. We then present a simulated prototype that illustrates the gains in road usage efficiency and vehicle travel time derived from platooning. We argue that one of the paramount benefits of platooning, overlooked elsewhere, is the substantial computational saving (i.e., economizing benefit) in the acquisition and processing of sensory data among vehicles sharing the road. The most capable vehicle can share the data gathered from its sensors with nearby vehicles grouped into a platoon.

Requirement Engineering and Software Product Line Scoping Paradigm

Requirements Engineering (RE) is the part of the software development lifecycle in which the requirements of the software to be built are defined. Software product line development is a relatively new topic area within software engineering. It also plays an important role in decision making and is ultimately helpful in a growing business environment for productive software development. Decisions are central to engineering processes and hold them together. It is argued that better decisions lead to better engineering, and achieving better decisions requires that they are understood in detail. To address these issues, companies are moving towards Software Product Line Engineering (SPLE), which helps provide large varieties of products with minimum development effort and cost. This paper proposes a new framework for software product lines and compares it with other models. The results can help in understanding the needs of SPL testing by identifying points that still require additional investigation. In future work, we will apply this model in a controlled environment with industrial SPL projects, opening a new horizon for SPL process management and testing strategies.

Lightweight and Seamless Distributed Scheme for the Smart Home

Security of the smart home in terms of behavior activity pattern recognition is a completely distinct issue compared to the security issues of other scenarios. Sensor devices (of low and high capacity) interact and negotiate with each other by detecting the daily behavior activities of individuals in order to execute common tasks. Once a device (e.g., surveillance camera, smart phone, or light detection sensor) is compromised, an adversary can gain access to that device and can damage the daily behavior activity by altering data and commands. In this scenario, a group of common instruction processes may become involved and generate a deadlock. Therefore, an effective and suitable security solution is required for the smart home architecture. This paper proposes a lightweight, seamless distributed scheme that fortifies computationally constrained wireless devices for secure communication. The proposed scheme is based on a lightweight session-key process that upholds a cryptographic link for the trajectory by recognizing individuals’ behavior activity patterns. Every device and service provider unit (low capacity sensors (LCS) and high capacity sensors (HCS)) uses an authentication token and originates a secure trajectory connection in the network. Experimental analysis reveals that the proposed scheme strengthens devices against device seizure attacks by recognizing daily behavior activities, minimizes the memory utilization of LCS, and prevents the network from deadlock. Additionally, the results of a comparison with other schemes indicate that the scheme remains efficient in terms of computation and communication.
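
The paper's exact protocol details are not reproduced here; the following Python sketch only illustrates the general lightweight pattern implied above: deriving a per-session key from a shared authentication token and authenticating messages with an HMAC tag. The function names and message format are illustrative assumptions.

```python
import hmac, hashlib, secrets

def derive_session_key(auth_token: bytes, nonce: bytes) -> bytes:
    # Session key bound to a fresh nonce, so a captured key is not reusable.
    return hmac.new(auth_token, b"session" + nonce, hashlib.sha256).digest()

def tag(session_key: bytes, message: bytes) -> bytes:
    # Lightweight message authentication suitable for constrained devices.
    return hmac.new(session_key, message, hashlib.sha256).digest()

# LCS and HCS both hold the authentication token issued by the service provider.
token = secrets.token_bytes(16)
nonce = secrets.token_bytes(8)

k_lcs = derive_session_key(token, nonce)   # computed on the low-capacity sensor
k_hcs = derive_session_key(token, nonce)   # computed on the high-capacity sensor

msg = b"motion:livingroom:1"
assert hmac.compare_digest(tag(k_lcs, msg), tag(k_hcs, msg))  # link verified
```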

Steady State Power Flow Calculations with STATCOM under Load Increase Scenario and Line Contingencies

Flexible AC transmission system (FACTS) controllers play an important role in controlling line power flow and in improving voltage profiles of the power system network. They can be used to increase the reliability and efficiency of transmission and distribution systems. The modeling of these FACTS controllers in power flow calculations has become a challenging research problem. This paper presents a simple and systematic approach for steady-state power flow calculations of a power system with a STATCOM (Static Synchronous Compensator). It shows how a STATCOM can be systematically incorporated in conventional power flow calculations. The main contribution of this paper is to investigate this approach under two special conditions, i.e., a load increase pattern incorporating load change (active, reactive, and both active and reactive) at all load buses simultaneously, and line contingencies under such load change. Such an investigation proves to be relevant for determining a strategy for the optimal placement of the STATCOM to enhance voltage stability. The performance has been evaluated on several standard IEEE test systems; the results for the standard IEEE 30-bus test system are presented here.
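
For reference, the block below recalls the standard textbook bus power-balance equations used in conventional power flow, together with a common way of modeling a STATCOM as a shunt reactive source that regulates its bus voltage; the paper's exact formulation may differ.

```latex
% Standard bus power-balance equations used in conventional power flow:
P_i = V_i \sum_{j=1}^{n} V_j \left( G_{ij}\cos\theta_{ij} + B_{ij}\sin\theta_{ij} \right),
\qquad
Q_i = V_i \sum_{j=1}^{n} V_j \left( G_{ij}\sin\theta_{ij} - B_{ij}\cos\theta_{ij} \right)

% A common STATCOM model: a shunt source of reactive power Q_{st} at bus k that
% holds the bus voltage at its set-point, so the reactive mismatch at bus k reads
\Delta Q_k = Q_{Gk} + Q_{st} - Q_{Lk} - Q_k(V,\theta), \qquad V_k = V_k^{\mathrm{sp}}
```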

Extending BDI Multiagent Systems with Agent Norms

Open Multiagent Systems (MASs) are societies in which heterogeneous and independently designed entities (agents) work towards similar or different ends. Software agents are autonomous, and the diversity of interests among different members living in the same society is a fact. In order to deal with this autonomy, such open systems use mechanisms of social control (norms) to ensure a desirable social order. This paper considers the following types of norms: (i) obligation, where agents must accomplish a specific outcome; (ii) permission, where agents may act in a particular way; and (iii) prohibition, where agents must not act in a specific way. Norms encourage fulfillment through rewards and discourage violation by pointing out the associated punishments. Since a software agent's priority is the satisfaction of its own desires and goals, each agent must evaluate the effects associated with the fulfillment of one or more norms before choosing which one should be fulfilled; the same applies when agents decide to violate a norm. This paper also introduces a framework for the development of MASs that provides support mechanisms for the agent's decision-making using norm-based reasoning. The applicability of this approach is demonstrated and validated with a traffic intersection scenario.
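
The following Python sketch illustrates, with hypothetical names and weights, the kind of norm-based reasoning described above: an agent scores each norm by its reward, its punishment, and its conflict with the agent's own goals before deciding whether to fulfill or violate it. It is a simplified illustration, not the framework proposed in the paper.

```python
from dataclasses import dataclass

@dataclass
class Norm:
    kind: str             # "obligation", "permission", or "prohibition"
    action: str
    reward: float         # benefit if fulfilled
    punishment: float     # cost if violated
    goal_conflict: float  # how much fulfilling it hinders the agent's own goals

def decide(norms):
    decisions = {}
    for n in norms:
        # Fulfill when the expected gain (reward plus avoided punishment)
        # outweighs the conflict with the agent's own desires and goals.
        net = n.reward + n.punishment - n.goal_conflict
        decisions[n.action] = "fulfill" if net >= 0 else "violate"
    return decisions

norms = [
    Norm("obligation", "stop_at_red_light", reward=2.0, punishment=5.0, goal_conflict=1.0),
    Norm("prohibition", "block_intersection", reward=0.5, punishment=3.0, goal_conflict=4.0),
]
print(decide(norms))
```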

When Psychology Meets Ecology: Cognitive Flexibility for Quarry Rehabilitation

Ecological projects are often faced with reluctance from the local communities hosting the project, especially when the project involves deviation from preset ideas or classical practices. This paper aims at assessing the contribution of environmental psychology, through cognitive flexibility exercises, to improving local communities' acceptance of more ecological rehabilitation scenarios. The study is based on a quarry site located in the Bekaa, Lebanon. Four groups were considered with different levels of involvement, as follows: Group 1, Training (T), 50 hours of on-site training over 8 months; Group 2, Awareness (A), a 2-hour awareness-raising session; Group 3, Flexibility (F), 2 hours of flexibility exercises; and Group 4, the Control (C). The results show that individuals in Group 3 (F), who followed the flexibility sessions, accept the ecological rehabilitation option over the more classical one at a comparable rate. This is also the case for the people in Group 1 (T), who followed the more time-demanding on-site training. Another experiment was conducted on a second quarry site, combining flexibility with awareness-raising. This research confirms that it is possible to reduce resistance to change through a time-limited intervention using cognitive flexibility. This methodological approach could be transferable to other environmental problems involving local communities and changes in preset perceptions.

Mapping of Solar Radiation Anomalies Based on Climate Change

The use of alternative energy sources to meet energy demand reduces environmental damage. To diversify the energy matrix and to minimize global warming, solar energy is gaining ground as an important source of renewable energy, and its potential depends on the climatic conditions of the region. Brazil presents great solar potential for the generation of electric energy, so knowledge of solar radiation and its characteristics is fundamental for the study of energy use. For these reasons, this article aims to verify the climatic variability corresponding to variations in solar radiation anomalies under climate change scenarios. The data used in this research are part of the Coupled Model Intercomparison Project, Phase 5 (CMIP5), which contributed to the preparation of the fifth IPCC report (AR5). The solar radiation data were extracted from the Australian Community Climate and Earth System Simulator (ACCESS) model using the RCP 4.5 and RCP 8.5 scenarios, which represent an intermediate and a pessimistic framework respectively, the latter being the most worrisome in all cases. In order to use solar radiation as a source of energy in a given location and/or region, it is important first to determine its availability, which justifies the importance of this study. The results for the 75-year period (2026-2100), based on the pessimistic scenario, indicate a drop in solar radiation of approximately 12% in the eastern region of Rio Grande do Sul. The factors that drive this pessimistic scenario should be observed closely by the responsible authorities, since they can affect the possibility of producing electricity from solar radiation.
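
As a simple illustration of how such an anomaly map can be computed (using placeholder arrays rather than the ACCESS model output), the following Python sketch takes gridded radiation fields for a baseline period and a scenario period and returns the percentage change per grid cell.

```python
import numpy as np

def radiation_anomaly_percent(baseline, scenario):
    # Arrays of shape (time, lat, lon); anomaly is the percentage change of the
    # scenario climatological mean relative to the baseline mean, per grid cell.
    base_mean = baseline.mean(axis=0)
    scen_mean = scenario.mean(axis=0)
    return 100.0 * (scen_mean - base_mean) / base_mean

# Toy example with random data standing in for model output (W/m^2, monthly).
rng = np.random.default_rng(0)
baseline = rng.uniform(150, 250, size=(360, 10, 10))
scenario = baseline * 0.9                 # a ~10% drop everywhere
print(radiation_anomaly_percent(baseline, scenario).round(1))
```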

Model-Driven and Data-Driven Approaches for Crop Yield Prediction: Analysis and Comparison

Crop yield prediction is a paramount issue in agriculture. The main idea of this paper is to find an efficient way to predict corn yield based on meteorological records. The prediction models used in this paper can be classified into model-driven and data-driven approaches, according to their modeling methodologies. The model-driven approaches are based on crop mechanistic modeling: they describe crop growth in interaction with the environment as dynamical systems. However, the calibration of such a dynamical system is difficult, because it turns out to be a multidimensional non-convex optimization problem. An original contribution of this paper is to propose a statistical methodology, Multi-Scenarios Parameters Estimation (MSPE), for the parametrization of potentially complex mechanistic models from a new type of dataset (climatic data and final yield in many situations). It is tested with CORNFLO, a crop model for maize growth. The data-driven approach to yield prediction, on the other hand, is free of the complex biophysical process but has strict requirements on the dataset. A second contribution of the paper is the comparison of this model-driven method with classical data-driven methods. For this purpose, we consider two classes of regression methods: methods derived from linear regression (Ridge and Lasso regression, Principal Components Regression, and Partial Least Squares Regression) and machine learning methods (Random Forest, k-Nearest Neighbors, Artificial Neural Networks, and SVM regression). The dataset consists of 720 records of corn yield at county scale provided by the United States Department of Agriculture (USDA) and the associated climatic data. A 5-fold cross-validation process and two accuracy metrics, root mean square error of prediction (RMSEP) and mean absolute error of prediction (MAEP), were used to evaluate the prediction capacity. The results show that among the data-driven approaches, Random Forest is the most robust and generally achieves the best prediction error (MAEP 4.27%). It also outperforms our model-driven approach (MAEP 6.11%). However, the method for calibrating the mechanistic model from easily accessible datasets offers several side perspectives: the mechanistic model can potentially help to identify the stresses suffered by the crop or the biological parameters of interest for breeding purposes. For this reason, an interesting perspective is to combine the two types of approaches.
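
The following is a minimal sketch of the data-driven evaluation protocol described above, using scikit-learn with placeholder data in place of the USDA records: 5-fold cross-validation of a Random Forest regressor, scored with RMSEP and MAEP.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold, cross_val_predict

# Placeholder features and target standing in for the 720 county-scale records.
rng = np.random.default_rng(1)
X = rng.normal(size=(720, 12))                               # monthly climate covariates
y = 10.0 + 2.0 * X[:, 0] + rng.normal(scale=1.0, size=720)   # stand-in for corn yield

model = RandomForestRegressor(n_estimators=300, random_state=0)
y_hat = cross_val_predict(model, X, y, cv=KFold(n_splits=5, shuffle=True, random_state=0))

rmsep = np.sqrt(np.mean((y - y_hat) ** 2))   # root mean square error of prediction
maep = np.mean(np.abs(y - y_hat))            # mean absolute error of prediction
print(f"RMSEP={rmsep:.3f}  MAEP={maep:.3f}")
```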

A Systematic Approach for Analyzing Multiple Cyber-Physical Attacks on the Smart Grid

In this paper, we evaluate the resilience of the smart grid system in the presence of multiple cyber-physical attacks on its distinct functional components. We discuss attack-defense scenarios and their effect on smart grid resilience. Through contingency simulations in the Network and PowerWorld Simulator, we analyze multiple cyber-physical attacks that propagate from the cyber domain to power systems and discuss how such attacks destabilize the underlying power grid. The analysis of such simulations helps system administrators develop more resilient systems and improves the response of the system in the presence of cyber-physical attacks.

Dynamic Reroute Modeling for Emergency Evacuation: Case Study of Brunswick City, Germany

Human behavior during evacuations is quite complex. One of the critical behaviors affecting evacuation efficiency is route choice, so the corresponding simulation models need to capture it properly. In this paper, the current dynamic route modeling during evacuation in Simulation of Urban Mobility (SUMO), i.e. the rerouting functions, is examined with a real case study, and the consistency of the simulation results with reality is checked as well. Four influence factors are considered: (1) the time to receive information, (2) the probability of canceling a trip, (3) the probability of using navigation equipment, and (4) the rerouting and information-updating period. They are used to analyze possible traffic impacts during the evacuation and to examine the rerouting functions in SUMO. Furthermore, some behavioral characteristics of the case study are analyzed using the corresponding detector data and applied in the simulation. The experimental results show that the dynamic route modeling in SUMO can deal with the proposed scenarios properly. Some issues and functional needs related to route choice are discussed and further improvements are suggested.
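
For readers unfamiliar with SUMO's rerouting device, the following TraCI sketch shows how two of the factors above (the share of vehicles with navigation equipment and the rerouting period) map onto simulation options. The option names follow recent SUMO releases and the configuration file name is a placeholder; both should be checked against the actual setup.

```python
import traci

traci.start([
    "sumo", "-c", "evacuation.sumocfg",        # placeholder evacuation scenario
    "--device.rerouting.probability", "0.5",   # half the vehicles can reroute
    "--device.rerouting.period", "300",        # re-evaluate routes every 300 s
])

# Run until all vehicles have left the network.
while traci.simulation.getMinExpectedNumber() > 0:
    traci.simulationStep()

traci.close()
```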

Risk Based Maintenance Planning for Loading Equipment in Underground Hard Rock Mine: Case Study

The mining industry is known for spending sizeable capital on mine equipment. However, in the current scenario, the industry is challenged by daunting factors such as non-uniform geological conditions, uneven ore grade, uncontrollable and volatile mineral commodity prices, and the ever-increasing quest to optimize capital and operational costs. Thus, equipment reliability and maintenance planning play a significant role in augmenting equipment availability for the operation and, in turn, boosting mine productivity. This paper presents the Risk Based Maintenance (RBM) planning conducted on mine loading equipment, namely Load Haul Dumpers (LHDs), at Sindesar Khurd Mines, an underground zinc and lead mine situated in Dariba, Rajasthan, India, operated by Hindustan Zinc Limited, a subsidiary of Vedanta Resources Ltd. The mining equipment at the location is maintained by the Original Equipment Manufacturers (OEMs), namely Sandvik and Atlas Copco, who carry out the maintenance and inspection operations for the equipment. Downtime data extracted for the equipment fleet over the six-month period from 1 January 2017 to 30 June 2017 revealed that three downtime issues, namely Engine, Hydraulics, and Transmission, contributed significantly and were common across the loading equipment fleet, as substantiated by Pareto analysis. Further scrutiny of these factors through Bubble Matrix Analysis revealed the major influence of selected factors, namely Overheating, No Load Taken (NTL) issues, Gear Changing issues, and Hose Puncture and Leakage issues. Using the equipment-wise analysis of all downtime factors, the spares consumed, and the alarm logs extracted from the machines, technical design changes in the equipment and a pre-shift critical alarms checklist were proposed for equipment maintenance. This analysis helps OEMs and mine management focus on the critical issues hampering the reliability of mine equipment and design the maintenance strategies necessary to mitigate them.
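
The Pareto step can be reproduced with a few lines of Python; the downtime figures below are placeholders, not the mine's actual data.

```python
# Rank downtime causes by total hours and report cumulative percentages to see
# which few causes dominate overall downtime (placeholder values).
downtime_hours = {
    "Engine": 420, "Hydraulics": 310, "Transmission": 260,
    "Electrical": 90, "Brakes": 60, "Other": 40,
}

total = sum(downtime_hours.values())
cumulative = 0
for cause, hours in sorted(downtime_hours.items(), key=lambda kv: kv[1], reverse=True):
    cumulative += hours
    print(f"{cause:12s} {hours:5d} h  {100 * cumulative / total:5.1f}% cumulative")
```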

Performance Study of ZigBee-Based Wireless Sensor Networks

The IEEE 802.15.4 standard is designed for low-rate wireless personal area networks (LR-WPANs) with a focus on enabling wireless sensor networks. It aims to provide low data rate, low power consumption, and low cost wireless networking for device-level communication. The objective of this study is to investigate the performance of IEEE 802.15.4 based networks using a simulation tool. In this project, the Network Simulator 2 (NS2) was used to obtain several performance measures of wireless sensor networks. Three scenarios were considered: a multi-hop network with a single coordinator, a star topology, and ad hoc on-demand distance vector (AODV) routing. Results such as packet delivery ratio, hop delay, and number of collisions are obtained for these scenarios.
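
The reported metrics are straightforward to compute from trace data; the sketch below shows the packet delivery ratio and mean end-to-end delay from placeholder send/receive events such as those parsed from an NS2 trace file.

```python
# Placeholder event maps standing in for parsed trace lines: packet id -> time (s).
sent = {1: 0.00, 2: 0.10, 3: 0.20, 4: 0.30}
received = {1: 0.05, 2: 0.17, 4: 0.41}

pdr = len(received) / len(sent)                       # packet delivery ratio
delays = [received[p] - sent[p] for p in received]    # per-packet end-to-end delay
print(f"PDR = {pdr:.2f}, mean delay = {sum(delays) / len(delays):.3f} s")
```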

Structural Analysis and Strengthening of the National Youth Foundation Building in Igoumenitsa, Greece

The current paper presents a structural assessment and proposals for retrofit of the National Youth Foundation Building, an existing reinforced concrete (RC) building in the city of Igoumenitsa, Greece. The building is scheduled to be renovated in order to create a Municipal Cultural Center. The bearing capacity and structural integrity have been investigated in relation to the provisions and requirements of the Greek Retrofitting Code (KAN.EPE.) and European Standards (Eurocodes). The capacity of the existing concrete structure that makes up the two central buildings in the complex (buildings II and IV) has been evaluated both in its present form and after including several proposed architectural interventions. The structural system consists of spatial frames of columns and beams that have been simulated using beam elements. Some RC elements of the buildings have been strengthened in the past by means of concrete jacketing and have had cracks sealed with epoxy injections. Static-nonlinear analysis (Pushover) has been used to assess the seismic performance of the two structures with regard to performance level B1 from KAN.EPE. Retrofitting scenarios are proposed for the two buildings, including type Λ steel bracings and placement of concrete shear walls in the transverse direction in order to achieve the design-specification deformation in each applicable situation, improve the seismic performance, and reduce the number of interventions required.

Energy Saving, Heritage Conserving Renovation Methods in Case of Historical Building Stock

The majority of the building stock of Budapest's inner districts was built around the turn of the 19th and 20th centuries. Although the structural stability of the buildings is not in question, as the load-bearing structures are in sufficient state, the secondary structures are aged, resulting in an unsatisfactory energy performance. The renovation of these historical buildings requires special methodology and technology: their ornamented facades and custom-made fenestration cannot be insulated or replaced with conventional solutions without damaging the heritage values. The present paper aims to introduce and systematize the possible technological solutions for heritage-respecting energy retrofit of a historical residential building stock. Through a case study, the possible energy saving potential is also calculated for multiple renovation scenarios.

Nearly Zero-Energy Regulation and Buildings Built with Prefabricated Technology: The Case of Hungary

There is an urgent need nowadays to reduce energy demand and the current level of greenhouse gas emissions, to use renewable energy sources, and to increase energy efficiency. At the same time, the European Union (EU) countries are largely dependent on energy imports and are vulnerable to disruptions in energy supply, which may, in turn, threaten the functioning of their current economic structure. Residential buildings represent a significant part of the energy consumption of the building stock. Only a small part of the building stock is replaced every year, so it is essential to increase the energy efficiency of existing buildings. The present paper focuses on buildings built with industrialized technology and their opportunities within the boundaries of the nearly zero-energy regulation. The paper presents the emergence of the panel construction method and the past and present of the 'panel' problem in Hungary, with a short outlook on Europe. The study also shows the possibilities for meeting the nearly zero-energy and cost-optimized requirements for residential buildings by analyzing renovation scenarios for an existing residential typology.

A Discrete Event Simulation Model to Manage Bed Usage for Non-Elective Admissions in a Geriatric Medicine Speciality

Over the past decade, non-elective admissions in the UK have increased significantly. Given limited resources (i.e., beds), service managers are obliged to manage their resources effectively, since non-elective admissions are mostly admitted to inpatient specialities via A&E departments. Geriatric medicine is one of the specialities with long lengths of stay for non-elective admissions. This study aims to develop a discrete event simulation model to understand how possible increases in non-elective demand over the next 12 months affect the bed occupancy rate, and to determine the required number of beds in a geriatric medicine speciality in a UK hospital. In our validated simulation model, we use observed frequency distributions for the non-elective admissions and the length of stay, derived from a large dataset covering the period April 2009 to January 2013. An experimental analysis consisting of 16 experiments is carried out to better understand the possible effects of case studies and scenarios involving increases in demand and in the number of beds. As a result, the speciality does not achieve the target level in the base model, although the bed occupancy rate decreases from 125.94% to 96.41% when the number of beds is increased by 30%. In addition, the number of beds required to meet demand is larger than the numbers considered in the scenario analysis. This paper sheds light on bed management for service managers in geriatric medicine specialities.
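
A minimal SimPy sketch of this kind of bed-usage model is shown below; the arrival rate, length-of-stay distribution, and bed count are illustrative values, not the fitted distributions used in the study.

```python
import random
import simpy

RANDOM_SEED, N_BEDS, SIM_DAYS = 42, 60, 365
MEAN_INTERARRIVAL_DAYS, MEAN_LOS_DAYS = 0.15, 9.0

def patient(env, beds, busy_time):
    with beds.request() as req:
        yield req                                    # wait for a free bed
        los = random.expovariate(1.0 / MEAN_LOS_DAYS)
        busy_time[0] += los
        yield env.timeout(los)                       # occupy the bed for the LOS

def arrivals(env, beds, busy_time):
    while True:
        yield env.timeout(random.expovariate(1.0 / MEAN_INTERARRIVAL_DAYS))
        env.process(patient(env, beds, busy_time))

random.seed(RANDOM_SEED)
env = simpy.Environment()
beds = simpy.Resource(env, capacity=N_BEDS)
busy_time = [0.0]
env.process(arrivals(env, beds, busy_time))
env.run(until=SIM_DAYS)
print(f"Approx. bed occupancy: {100 * busy_time[0] / (N_BEDS * SIM_DAYS):.1f}%")
```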

Pre- and Post-Analyses of Disruptive Quay Crane Scheduling Problem

Quay crane operations have been well studied in the past. A number of scheduling algorithms exist for quay crane operations, but they do not consider nuisance factors that might disrupt those operations. For example, bad grapples may make a crane unable to load or unload containers, or a sudden strong breeze may stop operations temporarily. Although these disruptive conditions occur randomly, they influence the efficiency of quay crane operations. Such disruption is neither considered in the operational procedures nor evaluated in advance for its impact. This study applies simulation and optimization approaches to develop pre-analysis and post-analysis structures for the Quay Crane Scheduling Problem in order to deal with disruptive scenarios in quay crane operation. Numerical experiments are used to demonstrate the validity of the developed approaches.

Analytical Comparison of Conventional Algorithms with Vedic Algorithm for Digital Multiplier

In today’s scenario, the complexity of digital signal processing (DSP) applications and of various microcontroller architectures has been increasing to such an extent that the traditional approaches to multiplier design in most processors are becoming outdated for being comparatively slow. Modern processing applications require suitable pipelined approaches, and therefore algorithms that are friendlier to pipelined architectures. Traditional algorithms such as Wallace Tree, Radix-4 Booth, Radix-8 Booth, and Dadda architectures have proven to be comparatively slow for pipelined architectures. These architectures therefore need to be optimized, or combined with one another, to enhance their performance and make them suitable for pipelined hardware. Recently, the Vedic algorithm has mathematically proven to be efficient, being less complex and requiring fewer steps to establish its output, and has assumed renewed importance. This paper describes and shows how the Vedic algorithm can be better suited to pipelined architectures and can also be combined with traditional architectures and algorithms to enhance its capability even further. We also establish that for complex applications on DSP and other microcontroller architectures, the Vedic approach to multiplication proves to be the most efficient available option.
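
Vedic multipliers are typically built on the vertically-and-crosswise (Urdhva Tiryakbhyam) scheme; assuming that is the variant meant here, the following Python sketch shows why the column-wise partial products pipeline well: each output column depends only on a fixed set of crosswise digit products.

```python
def urdhva_multiply(a_digits, b_digits, base=10):
    # Equal-length operands given least-significant digit first.
    n = len(a_digits)
    cols = [0] * (2 * n - 1)
    for i in range(n):
        for j in range(n):
            cols[i + j] += a_digits[i] * b_digits[j]   # crosswise products per column
    result, carry = [], 0
    for c in cols:                                     # resolve carries column by column
        c += carry
        result.append(c % base)
        carry = c // base
    while carry:
        result.append(carry % base)
        carry //= base
    return result                                      # digits, least significant first

# 123 * 456 = 56088 (digits given least significant first)
print(urdhva_multiply([3, 2, 1], [6, 5, 4]))           # [8, 8, 0, 6, 5]
```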

Comparison of Different Methods to Produce Fuzzy Tolerance Relations for Rainfall Data Classification in the Region of Central Greece

The aim of this paper is the comparison of three different methods for producing fuzzy tolerance relations for rainfall data classification. More specifically, the three methods are the correlation coefficient, cosine amplitude, and max-min methods. The data were obtained from seven rainfall stations in the region of central Greece and refer to a 20-year time series of average monthly rainfall height. The three methods were used to express these data as fuzzy relations. Each resulting fuzzy tolerance relation is then transformed into an equivalence relation by max-min composition. From the equivalence relation, the rainfall stations were categorized and classified according to the degree of confidence. The classification shows the similarities among the rainfall stations. Stations with high similarity can be used interchangeably in water resource management scenarios, or to augment data from one station with data from another. Due to the complexity of the calculations, it is important to find out which of the methods is computationally simpler and needs fewer compositions in order to give reliable results.
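
As an illustration (with a toy data matrix in place of the seven stations' rainfall series), the following Python sketch computes one of the three relations, the cosine amplitude relation, and applies repeated max-min composition until an equivalence relation is reached.

```python
import numpy as np

def cosine_amplitude(X):
    # r_ij = |sum_k x_ik * x_jk| / sqrt(sum_k x_ik^2 * sum_k x_jk^2)
    num = np.abs(X @ X.T)
    norms = np.sqrt(np.sum(X ** 2, axis=1))
    return num / np.outer(norms, norms)

def max_min_compose(R, S):
    # (R o S)_ij = max_k min(R_ik, S_kj)
    return np.max(np.minimum(R[:, :, None], S[None, :, :]), axis=1)

def transitive_closure(R):
    while True:
        R2 = max_min_compose(R, R)
        if np.allclose(R2, R):
            return R          # equivalence relation reached
        R = R2

# Toy stations x months matrix standing in for the 20-year rainfall series.
X = np.array([[110., 95., 80.], [105., 90., 85.], [60., 70., 75.]])
R = cosine_amplitude(X)       # fuzzy tolerance relation
E = transitive_closure(R)     # fuzzy equivalence relation
print(np.round(E, 3))
```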