Development of Energy Benchmarks Using Mandatory Energy and Emissions Reporting Data: Ontario Post-Secondary Residences

Governments are playing an increasingly active role in reducing carbon emissions, and a key strategy has been the introduction of mandatory energy disclosure policies. These policies have generated a significant amount of publicly available data, giving researchers a unique opportunity to develop location-specific energy and carbon emission benchmarks, which can in turn be used to develop building archetypes and to inform urban energy models. This study presents the development of such a benchmark using the public reporting data. The data from Ontario's Ministry of Energy for Post-Secondary Educational Institutions are used to develop a series of building archetype dynamic building loads and energy benchmarks to fill a gap in the currently available building database. This paper presents the development of a benchmark for college and university residences within ASHRAE climate zone 6 areas in Ontario using the mandatory disclosure energy and greenhouse gas emissions data. The methodology presented includes data cleaning, statistical analysis, and benchmark development, and lessons learned from this investigation are presented and discussed to inform the development of future energy benchmarks from this larger data set. The key findings from this initial benchmarking study are: (1) careful data screening and outlier identification are essential to developing a valid dataset; (2) the key features for modelling the data are building age, size, and occupancy schedules, which can be used to estimate energy consumption; and (3) policy changes affecting primary energy generation significantly affected greenhouse gas emissions, and consideration of these factors was critical to evaluating the validity of the reported data.
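As a rough illustration of the data screening and benchmarking steps described above, the sketch below flags outliers in energy use intensity with an interquartile-range rule before taking the median as a benchmark; the file name, column names, and screening rule are illustrative assumptions rather than the authors' exact procedure.

```python
import pandas as pd

# Minimal sketch of the outlier screening step, assuming a hypothetical
# CSV export of the Ontario disclosure data with these column names.
df = pd.read_csv("ontario_postsecondary_residences.csv")

# Energy use intensity (EUI) per building: annual energy over floor area.
df["eui_kwh_m2"] = df["annual_energy_kwh"] / df["floor_area_m2"]

# Flag outliers with the interquartile-range rule before benchmarking.
q1, q3 = df["eui_kwh_m2"].quantile([0.25, 0.75])
iqr = q3 - q1
mask = df["eui_kwh_m2"].between(q1 - 1.5 * iqr, q3 + 1.5 * iqr)
clean = df[mask]

# The benchmark is then the median EUI of the screened data set.
print("Benchmark EUI (kWh/m2):", clean["eui_kwh_m2"].median())
```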

Risk in the South African Sectional Title Industry: An Assurance Perspective

The sectional title industry has been part of the property landscape in South Africa for almost half a century and plays a significant role in addressing the housing problem in the country. Stakeholders such as owners of and investors in sectional title property are in most cases not directly involved in its management, and rely on the audited annual financial statements of bodies corporate for decision-making purposes. Although the industry appears to be highly regulated, the legislation regarding the accounting and auditing of sectional title schemes is vague and ambiguous. Furthermore, there are no industry-specific auditing and accounting standards to guide accounting and auditing practitioners in performing their work, and industry financial benchmarks are not readily available. In addition, financial pressure on sectional title schemes is often very high because some owners exert unrealistic pressure to keep monthly levies as low as possible. All these factors have an impact on the business risk as well as the audit risk of bodies corporate. Very little academic research has been undertaken on the sectional title industry in South Africa from an accounting and auditing perspective. The aim of this paper is threefold. Firstly, to discuss the findings of a literature review on uncertainties, ambiguities and confusing aspects in current legislation regarding the audit of sectional title property that may cause or increase audit and business risk. Secondly, empirical findings on risk-related aspects from interviews with three groups of body corporate role-players will be discussed; the role-players were body corporate trustee chairpersons, body corporate managing agents, and accounting and auditing practitioners of bodies corporate, and specific reference will be made to business risk and audit risk. Thirdly, practical recommendations will be made on possibilities for closing the audit expectation gap, and further research opportunities in this regard will be discussed.

Understanding Walkability in the Libyan Urban Space: Policies, Perceptions and Smart Design for Sustainable Tripoli

Walkability in civic and public spaces in Libyan cities is challenging due to the lack of accessibility design, the informal merging of pedestrians into car traffic, and the general absence of adequate urban and space planning. The lack of accessible and pedestrian-friendly public spaces in Libyan cities has emerged as a major concern for the government if it is to develop smart and sustainable spaces for the 21st century. A walkable urban space has become a driver for urban development and the redistribution of land use to ensure pedestrian and walkable routes between places of living and workplaces. The characteristics of urban open space in the city centre play a main role in attracting people to walk when attending to their daily needs, recreation and daily sports. There is a significant gap in the understanding of the perceptions, feasibility and capabilities of Libyan urban space to accommodate, enhance or support the smart design of a walkable, pedestrian-friendly environment that is safe and accessible to everyone. The paper aims to undertake observations of walkability and walkable space in the city of Tripoli as a benchmark for Libyan cities; to assess the validity and consistency of the seven principal aspects of smart design, safety and accessibility and the 51 factors that affect walkability in open urban space in Tripoli, through analysis by 10 local urban space experts (town planners, architects, transport engineers and urban designers); and to explore user groups' perceptions of accessibility in walkable spaces in Libyan cities through questionnaires. The study sampled 200 respondents in 2015-16. The results of this study are useful for urban planning, to classify the walkable urban space elements that affect the level of walkability in Libyan cities and to create sustainable and liveable urban spaces.

Effects of Introducing Similarity Measures into Artificial Bee Colony Approach for Optimization of Vehicle Routing Problem

The Vehicle Routing Problem (VRP) is a complex combinatorial optimization problem, and it is quite difficult to find an optimal solution consisting of a set of routes for vehicles whose total cost is minimum. Evolutionary and swarm intelligence (SI) algorithms play a vital role in solving such optimization problems. While SI algorithms perform their search, the diversity between the solutions they exploit is very important, both to avoid premature convergence and to maintain an appropriate balance between exploration and exploitation. It is therefore important to check how diverse the solutions are. In this paper, we measure the similarity between the solutions that the Artificial Bee Colony (ABC) algorithm exploits while optimizing the VRP. Similar solutions are discarded at the end of each iteration and only unique solutions are passed on to the next iteration. The bees of discarded solutions become scouts and start searching for new solutions. This process is continued, and the results show that the solution is optimized in fewer iterations, at the overhead of computing similarity in every iteration. Problem instances from the Solomon benchmark dataset have been used to evaluate the presented methodology.
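One way such a similarity measure between two VRP solutions could be computed is sketched below, as the Jaccard overlap of the arcs the solutions share; the abstract does not specify the exact measure used, so this is an illustrative assumption only.

```python
# Minimal sketch of a similarity measure between two VRP solutions, taken as
# the Jaccard overlap of the arcs (customer-to-customer edges) they use.

def arcs(solution):
    """Collect directed arcs from a solution given as a list of routes,
    where each route is a list of customer ids starting/ending at depot 0."""
    edges = set()
    for route in solution:
        stops = [0] + route + [0]
        edges.update(zip(stops, stops[1:]))
    return edges

def similarity(sol_a, sol_b):
    a, b = arcs(sol_a), arcs(sol_b)
    return len(a & b) / len(a | b)

# Solutions above a similarity threshold would be discarded at the end of an
# iteration and their bees turned into scouts, as described in the abstract.
sol1 = [[1, 2, 3], [4, 5]]
sol2 = [[1, 2, 3], [5, 4]]
print(similarity(sol1, sol2))
```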

Importance of Standards in Engineering and Technology Education

During the past several decades, the economy of each nation has been significantly affected by globalization and technology. Government regulations and private sector standards affect a majority of world trade. Countries have been working together to establish international standards in almost every field, and as a result, workers in all sectors need to have an understanding of standards. Engineering and technology students must not only possess an understanding of engineering standards and applicable government codes, but also learn to apply them in designing, developing, testing and servicing products, processes and systems. The Accreditation Board for Engineering and Technology (ABET) criteria for engineering and technology education require students to learn and apply standards in their class projects. This paper is a follow-up to a 2006-2009 NSF initiative awarded to IEEE to help develop tutorials and case study modules for students and to encourage standards education on college campuses. It presents the findings of a faculty/institution survey conducted through various U.S.-based listservs representing the major engineering and technology disciplines. The intent of the survey was to gauge the status of the use of standards and regulations in engineering and technology coursework and to identify benchmark practices. In light of the survey findings, recommendations are made to standards development organizations, industry, and academia to help enhance the use of standards in engineering and technology curricula.

Image Retrieval Based on Multi-Feature Fusion for Heterogeneous Image Databases

Selecting an appropriate image representation is the most important factor in implementing an effective Content-Based Image Retrieval (CBIR) system. This paper presents a multi-feature fusion approach for efficient CBIR, based on the distance distributions of the features and relative feature weights computed at query time. It is a simple yet effective approach that is insensitive to the features' dimensions, ranges, internal feature normalization and distance measures, and it can easily be adopted with any feature combination to improve retrieval quality. The proposed approach is empirically evaluated using two benchmark datasets for image classification (a subset of the Corel dataset and the Oliva and Torralba dataset) and compared with existing approaches. The proposed approach shows significantly improved performance compared with the independently evaluated baselines of the previously proposed feature fusion approaches.
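A minimal sketch of how per-feature distances might be normalized and fused at query time is given below; the min-max scaling and the fixed weights are assumptions, since the abstract does not give the exact normalization or weighting scheme of the paper.

```python
import numpy as np

# Minimal sketch of a distance-distribution-based fusion of several features.

def fuse_distances(per_feature_distances, weights):
    """per_feature_distances: list of 1-D arrays, one array of query-to-database
    distances per feature; weights: relative feature weights at query time."""
    fused = np.zeros_like(per_feature_distances[0], dtype=float)
    for d, w in zip(per_feature_distances, weights):
        # Scale each feature's distances to [0, 1] so that the dimension, range
        # and distance measure of the individual features no longer matter.
        scaled = (d - d.min()) / (d.max() - d.min() + 1e-12)
        fused += w * scaled
    return fused

color_d = np.array([0.2, 1.4, 0.7])     # e.g. color-histogram distances
texture_d = np.array([12.0, 3.0, 8.0])  # e.g. texture-feature distances
ranking = np.argsort(fuse_distances([color_d, texture_d], [0.6, 0.4]))
print(ranking)  # database indices ordered by fused distance
```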

Effects of Corruption and Logistics Performance Inefficiencies on Container Throughput: The Latin America Case

Trade liberalization measures, such as import tariff cuts, are not a sufficient trigger for trade growth. Given that price margins are narrow, traders and cargo operators tend to opt out of markets where the process of goods clearance is slow and costly. Excess paperwork and slow customs dispatch lead not only to institutional breakdowns and corruption but also to increasing transaction costs and trade constraints. The objective of this paper is therefore twofold: first, to evaluate the relationship between institutional and infrastructural performance indexes and trade growth in container throughput; and, second, to investigate the causes of differences in container demurrage and detention fees in Latin American countries (using other emerging countries as a benchmark). The analysis is focused on manufactured goods, typically transported in containers. Institutional and infrastructure bottlenecks, and therefore a country's logistics efficiency, measured by the World Bank's Logistics Performance Index (LPI), are compared with other indexes, such as the Doing Business index (World Bank) and the Corruption Perception Index (Transparency International). The main results, based on the comparison between Latin American countries and other emerging countries, indicate that growth in container trade is directly related to LPI performance. It has also been found that the main hypothesis is valid, as aspects that more specifically identify trade facilitation and corruption are significant drivers of logistics performance. The examination of port efficiency (demurrage and detention fees) has demonstrated that higher levels of efficiency are not necessarily related to lower charges; however, reductions in fees have been more significant among non-Latin American emerging countries.

The Study of Tourists’ Behavior in Water Usage in Hotel Business: Case Study of Phuket Province, Thailand

Tourism is very important to the economy of many countries due to its large contribution to employment and income generation. However, the rapidly growing tourism sector is also one of the major water users, and can therefore have a significant and detrimental impact on the environment. Guest behavior in water usage can be used to manage water in hotels for sustainable water resources management. This research presents a study of hotel guest water usage behavior at two hotels, namely Hotel A (located in Kathu district) and Hotel B (located in Muang district) in Phuket Province, Thailand, as case studies. Primary and secondary data were collected from the hotel managers through interviews and questionnaires. The water flow rate was measured in situ for each water supply device in the standard room type at each hotel, including hand washing faucets, bathroom faucets, showers and toilet flushes. Among the respondents (n = 204 for Hotel A and n = 244 for Hotel B), the majority were aged between 21 and 30 years (53% for Hotel A and 65% for Hotel B), and the majority were foreign guests (78% in Hotel A and 92% in Hotel B) from the United States, France and Austria travelling for tourism (63% in Hotel A and 55% in Hotel B). The data showed that water consumption ranged from 188 litres to 507 litres and from 383 litres to 415 litres per overnight guest in Hotel A and Hotel B (n = 244), respectively. These figures exceed the water efficiency benchmark set for tropical regions by the International Tourism Partnership (ITP). It is recommended that guest water saving initiatives be implemented at hotels. Moreover, the results showed that guests were highly satisfied with the hotels: the front office service received the highest average scores of 4.35 in Hotel A and 4.20 in Hotel B, while luxury decoration and room cleanliness received the second-highest satisfaction scores from guests in Hotels A and B, respectively. These findings can be useful for improving customer service satisfaction and guiding better hotel management.

Static and Dynamic Analysis of a Hyperboloidal Helix Having Thin-Walled Open and Closed Sections

The static and dynamic analyses of a hyperboloidal helix having closed and open square box sections are investigated via a mixed finite element formulation based on Timoshenko beam theory. The Frenet triad is taken as the local coordinate system for the helix geometry. The helix domain is discretized with two-noded curved elements and linear shape functions are used. Each node of the curved element has 12 degrees of freedom, namely three translations, three rotations, two shear forces, one axial force, two bending moments and one torque. The finite element matrices are derived using exact nodal values of curvature and arc length, which are interpolated linearly along the element axial length. The torsional moments of inertia for the closed and open square box sections are obtained by a finite element solution of the St. Venant torsion formulation. With the proposed method, the torsional rigidity of simply and multiply connected cross-sections can also be calculated in the same manner. The influence of the closed and open square box cross-sections on the static and dynamic behaviour of the hyperboloidal helix is investigated. Benchmark problems are presented for the literature.

A Coupled Extended-Finite-Discrete Element Method: On the Different Contact Schemes between Continua and Discontinua

Recently, advanced geotechnical engineering problems related to soil movement, particle loss, and the modeling of local failure (i.e. discontinua), as well as the modeling of in-contact structures (i.e. continua), have been of great interest among researchers. The aim of this research is to meet the requirements of modeling these two different domains simultaneously. To this end, a coupled numerical method is introduced based on the Discrete Element Method (DEM) and the eXtended Finite Element Method (X-FEM). In the coupled procedure, DEM is employed to capture the interactions and relative movements of soil particles as discontinua, while X-FEM is utilized to model in-contact structures as continua, which may contain different types of discontinuities. For verification purposes, the new coupled approach is applied to benchmark problems involving different contacts between and within continua and discontinua. The results are validated by comparison with existing analytical and numerical solutions. This study shows that the extended-finite-discrete element method can be used to robustly analyze not only contact problems, but also other types of discontinuities in continua, such as (i) crack formation and propagation, (ii) voids and bimaterial interfaces, and (iii) combinations of the previous cases. In essence, the proposed method can be widely used in advanced soil-structure interaction problems to investigate the micro and macro behaviour of the surrounding soil and the response of an embedded structure that contains discontinuities.

Model Updating-Based Approach for Damage Prognosis in Frames via Modal Residual Force

This paper presents an effective model updating strategy for damage localization and quantification in frames by formulating damage detection as an optimization problem. A generalized version of the Modal Residual Force (MRF) is employed to define a new damage-sensitive cost function. The Grey Wolf Optimization (GWO) algorithm is then utilized to solve the suggested inverse problem, and the global extrema are reported as the damage detection results. The applicability of the presented method is investigated by studying different damage patterns on the IASC-ASCE benchmark problem, as well as a planar shear frame structure. The obtained results emphasize the good performance of the method not only in noise-free cases, but also when the input data are contaminated with different levels of noise.
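For readers unfamiliar with GWO, the sketch below shows the standard alpha/beta/delta update minimizing a cost over element stiffness-reduction factors; the MRF-based cost itself requires the structural model and measured modes, so a placeholder cost stands in for it here and the code is not the paper's implementation.

```python
import numpy as np

# Minimal sketch of Grey Wolf Optimization minimizing a damage-sensitive cost
# function over element stiffness-reduction factors.

def cost(x):
    return np.sum(x**2)  # placeholder, stands in for the MRF-based cost

def gwo(dim, n_wolves=20, iters=200, lb=0.0, ub=1.0):
    wolves = np.random.uniform(lb, ub, (n_wolves, dim))
    for t in range(iters):
        fitness = np.array([cost(w) for w in wolves])
        alpha, beta, delta = wolves[np.argsort(fitness)[:3]]
        a = 2 - 2 * t / iters  # linearly decreasing coefficient
        for i in range(n_wolves):
            new = np.zeros(dim)
            for leader in (alpha, beta, delta):
                r1, r2 = np.random.rand(dim), np.random.rand(dim)
                A, C = 2 * a * r1 - a, 2 * r2
                new += leader - A * np.abs(C * leader - wolves[i])
            wolves[i] = np.clip(new / 3.0, lb, ub)
    best = min(wolves, key=cost)
    return best, cost(best)

print(gwo(dim=8))  # estimated damage factors and the corresponding cost
```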

Identification of Key Parameters for Benchmarking of Combined Cycle Power Plants Retrofit

Benchmarking a process with respect to energy consumption, without carrying out a full retrofit study, can save both engineering time and money. To achieve this goal, the first step is to develop a conceptual-mathematical model that can easily be applied to a group of similar processes. In this research, we aimed to identify a set of key parameters for a model intended for benchmarking combined cycle power plants. For this purpose, three similar combined cycle power plants were studied. The results showed that ambient temperature, pressure and relative humidity, the number of HRSG evaporator pressure levels, and relative power in part-load operation are the main key parameters. The relationships between these parameters and the power produced by the gas and steam turbines, the gas turbine and plant efficiencies, and the temperature and mass flow rate of the stack flue gas were also investigated.

Parametrization of Piezoelectric Vibration Energy Harvesters for Low Power Embedded Systems

Matching an embedded electronic application with a cantilever vibration energy harvester remains a difficult endeavour due to the large number of factors influencing the output power. In the presented work, complementary balanced energy harvester parametrization is used as a methodology to simplify harvester integration in electronic applications. This is achieved by a dual approach consisting of an adaptation of the general parametrization methodology in conjunction with a straightforward harvester benchmarking strategy. For this purpose, the design and implementation of a suitable, user-friendly cantilever energy harvester benchmarking platform is discussed. Its effectiveness is demonstrated by applying the methodology to a commercially available Mide V21BL vibration energy harvester, with excitation amplitude and frequency as variables.

Satellite Imagery Classification Based on Deep Convolution Network

Satellite imagery classification is a challenging problem with many practical applications. In this paper, we design a deep convolutional neural network (DCNN) to classify satellite imagery. The contributions of this paper are twofold. First, to cope with the large-scale variance in satellite images, we introduce the inception module, which has multiple filters with different sizes at the same level, as the building block of our DCNN model. Second, we propose a genetic-algorithm-based method to efficiently search for the best hyper-parameters of the DCNN in a large search space. The proposed method is evaluated on a benchmark database. The results show that the proposed hyper-parameter search method guides the search towards better regions of the parameter space. Based on the hyper-parameters found, we built our DCNN models and evaluated their performance on satellite imagery classification; the results show that the classification accuracy of the proposed models outperforms the state-of-the-art method.
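A minimal sketch of a genetic algorithm over DCNN hyper-parameters is shown below; the search space, operators and the placeholder fitness are assumptions, and the hypothetical train_and_evaluate step (training the inception-style model and returning validation accuracy) is what a real fitness function would call.

```python
import random

# Minimal sketch of a genetic algorithm over DCNN hyper-parameters.
SEARCH_SPACE = {
    "learning_rate": [1e-2, 1e-3, 1e-4],
    "n_filters": [32, 64, 128],
    "dropout": [0.3, 0.4, 0.5],
}

def random_individual():
    return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

def crossover(a, b):
    return {k: random.choice([a[k], b[k]]) for k in SEARCH_SPACE}

def mutate(ind, rate=0.2):
    return {k: (random.choice(v) if random.random() < rate else ind[k])
            for k, v in SEARCH_SPACE.items()}

def evolve(fitness, pop_size=10, generations=5):
    pop = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(pop, key=fitness, reverse=True)
        parents = ranked[: pop_size // 2]            # truncation selection
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

def fitness(ind):
    # Placeholder: stands in for a hypothetical train_and_evaluate(ind) that
    # would return the validation accuracy of the DCNN built with ind.
    return -ind["learning_rate"] + ind["n_filters"] / 1000

print(evolve(fitness))
```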

Impact of Stack Caches: Locality Awareness and Cost Effectiveness

Treating data based on its location in memory has received much attention in recent years because the different access properties offer important opportunities for cache utilization. Stack data and non-stack data may interfere with each other's locality in the data cache, and one of the important properties of stack data is its high spatial and temporal locality. In this work, we simulate a non-unified cache design that splits the data cache into stack and non-stack caches in order to keep stack data and non-stack data in separate caches. We observe that the overall hit rate of the non-unified cache design is sensitive to the size of the non-stack cache. We then investigate the appropriate size and associativity for the stack cache to achieve a high hit ratio, especially when over 99% of accesses are directed to the stack cache. The results show that, on average, a stack cache hit rate of more than 99% is achieved with 2 KB of capacity and 1-way associativity. Further, we analyze the improvement in hit rate when a small, fixed-size stack cache is added at level 1 to a unified cache architecture. The results show that the overall hit rate of the unified cache design with an added 1 KB stack cache improves by approximately 3.9% on average for the Rijndael benchmark. The stack cache is simulated using the SimpleScalar toolset.
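A toy version of such a split-cache simulation is sketched below to illustrate the idea of routing stack and non-stack accesses to separate caches; the block size, capacities and the stack-address test are illustrative assumptions, whereas the paper's experiments use the SimpleScalar toolset.

```python
# Minimal sketch of a split (stack vs. non-stack) cache simulation over an
# address trace.

BLOCK = 32  # bytes per cache line

class DirectMappedCache:
    def __init__(self, size_bytes):
        self.n_sets = size_bytes // BLOCK
        self.tags = [None] * self.n_sets
        self.hits = self.accesses = 0

    def access(self, addr):
        block = addr // BLOCK
        idx, tag = block % self.n_sets, block // self.n_sets
        self.accesses += 1
        if self.tags[idx] == tag:
            self.hits += 1
        else:
            self.tags[idx] = tag

stack_cache = DirectMappedCache(2 * 1024)   # 2 KB, 1-way stack cache
data_cache = DirectMappedCache(32 * 1024)   # non-stack data cache

def run(trace, stack_base=0x7FFF0000):
    for addr in trace:
        # Route accesses above the (assumed) stack base to the stack cache.
        cache = stack_cache if addr >= stack_base else data_cache
        cache.access(addr)
    return stack_cache.hits / max(stack_cache.accesses, 1)

trace = [0x7FFF0000 + 4 * i for i in range(64)] + [0x1000, 0x2000] * 8
print("stack cache hit rate:", run(trace))
```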

Adaptive Fuzzy Control of a Nonlinear Tank Process

Liquid level control of a conical tank system is known to be a great challenge in many industries, such as food processing, hydrometallurgical industries and wastewater treatment plants, due to its highly nonlinear characteristics. In this research, an adaptive fuzzy PID control scheme is applied to the problem of liquid level control in a nonlinear tank process. A conical tank process is first modeled and simulated. A PID controller is then applied to the plant model as a suitable benchmark for comparison, and the dynamic responses of the control system to different step inputs are investigated. It is found that the conventional PID controller is not able to fulfill the controller design criteria, such as the desired time constant, due to the highly nonlinear characteristics of the plant model. Consequently, a nonlinear control strategy based on gain-scheduling adaptive control incorporating a fuzzy logic observer is proposed to accurately control the nonlinear tank system. The simulation results clearly demonstrate the superiority of the proposed adaptive fuzzy control method over the conventional PID controller.
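The core fuzzy gain-scheduling idea can be sketched as follows: the operating level is fuzzified with triangular membership functions and the PID gains become the membership-weighted average of per-region gain sets. The regions and gain values below are illustrative assumptions, not the paper's tuning.

```python
import numpy as np

# Minimal sketch of fuzzy gain scheduling of PID gains over tank level.

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    return max(min((x - a) / (b - a), (c - x) / (c - b)), 0.0)

REGIONS = {                       # level regions (m) and their (Kp, Ki, Kd)
    "low":  ((-0.3, 0.1, 0.5), (2.0, 0.30, 0.05)),
    "mid":  (( 0.1, 0.5, 0.9), (4.0, 0.60, 0.10)),
    "high": (( 0.5, 0.9, 1.3), (7.0, 1.00, 0.20)),
}

def scheduled_gains(level):
    weights = np.array([tri(level, *abc) for abc, _ in REGIONS.values()])
    gains = np.array([g for _, g in REGIONS.values()])
    return tuple(weights @ gains / weights.sum())

kp, ki, kd = scheduled_gains(0.35)
print(kp, ki, kd)  # gains blended between the "low" and "mid" rules
```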

Discriminant Analysis as a Function of Predictive Learning to Select Evolutionary Algorithms in Intelligent Transportation System

In this paper, we present the use of discriminant analysis to select evolutionary algorithms that better solve instances of the vehicle routing problem with time windows. We use instance indicators as independent variables to obtain the classification criteria, and the best algorithm among the generic genetic algorithm (GA), random search (RS), steady-state genetic algorithm (SSGA), and sexual genetic algorithm (SXGA) as the dependent variable for the classification. The discriminant classifier was trained with classic instances of the vehicle routing problem with time windows obtained from the Solomon benchmark. The discriminant analysis achieved a classification accuracy of 66.7%.
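The selection idea can be illustrated with a short sketch: instance indicators are the independent variables and the best-performing algorithm on each Solomon instance is the class label. The indicator values and labels below are synthetic placeholders, not the paper's data, and linear discriminant analysis is assumed as the discriminant model.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# Synthetic placeholders: 4 indicators per VRPTW instance, 56 instances.
X = np.random.rand(56, 4)
y = np.random.choice(["GA", "RS", "SSGA", "SXGA"], size=56)

lda = LinearDiscriminantAnalysis()
scores = cross_val_score(lda, X, y, cv=5)
print("classification accuracy:", scores.mean())

# For a new instance, the fitted model predicts which algorithm to run.
lda.fit(X, y)
print(lda.predict(np.random.rand(1, 4)))
```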

A Two-Stage Adaptation towards Automatic Speech Recognition System for Malay-Speaking Children

Recently, Automatic Speech Recognition (ASR) systems have been used to assist children in language acquisition, as they are able to detect the human speech signal. Despite the benefits offered by ASR systems, there is a lack of ASR systems for Malay-speaking children. One of the contributing factors is the lack of a continuous speech database for the target users. Though cross-lingual adaptation is a common solution for developing ASR systems for under-resourced languages, it is not viable for children, as there are very limited children's speech databases to serve as a source model. In this research, we propose a two-stage adaptation for the development of an ASR system for Malay-speaking children using a very limited database. The two-stage adaptation comprises cross-lingual adaptation (first stage) and cross-age adaptation (second stage). In the first stage, a well-known speech database that is phonetically rich and balanced is adapted to a medium-sized Malay adult database using supervised MLLR. The second-stage adaptation uses the acoustic model generated from the first adaptation, and the target database is a small database of the target users. We measured the performance of the proposed technique using the word error rate and compared it with the conventional benchmark adaptation. The two-stage adaptation proposed in this research achieves better recognition accuracy than the benchmark adaptation in recognizing children's speech.
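The word error rate used to compare the two adaptations is the standard Levenshtein-distance metric over words; the sketch below is that standard definition, not code from the paper, and the example sentences are made up.

```python
# Minimal sketch of word error rate (WER): edit distance over words,
# normalized by the length of the reference transcription.

def wer(reference, hypothesis):
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j]: edit distance between the first i reference words
    # and the first j hypothesis words.
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            dp[i][j] = min(sub, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
    return dp[len(ref)][len(hyp)] / len(ref)

print(wer("saya suka makan nasi", "saya suka makan roti"))  # 0.25
```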

The Proposal of a Shared Mobility City Index to Support Investment Decision Making for Carsharing

One of the biggest challenges in entering a market with carsharing or any other shared mobility (SM) service is sound investment decision-making. To support this process, the authors propose that a city index evaluating different criteria is necessary. The goal of such an index is to benchmark cities along a set of external measures to address the two main challenges: financial viability and the understanding of a city's specific requirements. The authors have consulted several shared mobility projects and industry experts to create such a Shared Mobility City Index (SMCI). The current proposal of the SMCI consists of 11 individual index measures: general data (demographics, geography, climate and city culture), shared mobility landscape (current SM providers, public transit options, commuting patterns and driving culture) and political vision and goals (vision of the Mayor, sustainability plan, bylaws/tenders supporting SM). To evaluate the suitability of the index, 16 cities on the East Coast of North America were selected and secondary research was conducted. The main sources of this study were census data, organisational records, independent press releases and informational websites. Only non-academic sources were used because the relevant data for the chosen cities is not published in academia. Applying the index measures to the selected cities resulted in three major findings. Firstly, population density (number of inhabitants divided by city area) is not an indicator of the number of SM services offered: the city with the lowest density has five bike and carsharing options. Secondly, there is a direct correlation between commuting patterns and the number of shared mobility services offered: New York, Toronto and Washington DC have the highest public transit ridership and the most shared mobility providers. Lastly, all but one of the surveyed cities support shared mobility with their sustainability plans. The current version of the shared mobility index is proving to be a practical tool for evaluating cities and understanding functional, political, social and environmental considerations. More cities will have to be evaluated to refine the criteria further. However, the current version of the index can already be used to assess cities on their suitability for shared mobility services and will assist investors in deciding which city is a financially viable market.

A Cuckoo Search with Differential Evolution for Clustering Microarray Gene Expression Data

DNA microarray technology is a collection of microscopic DNA spots attached to a solid surface. Scientists use DNA microarrays to measure the expression levels of large numbers of genes simultaneously or to genotype multiple regions of a genome. Elucidating the patterns hidden in gene expression data offers a tremendous opportunity for an enhanced understanding of functional genomics. However, the large number of genes and the complexity of biological networks greatly increase the challenges of comprehending and interpreting the resulting mass of data, which often consists of millions of measurements. This is handled by clustering, which reveals the natural structures and identifies interesting patterns in the underlying data. In this paper, gene-based clustering of gene expression data is proposed using a Cuckoo Search with Differential Evolution (CS-DE) hybrid. The experimental results are analyzed on gene expression benchmark datasets and show that CS-DE outperforms CS on these datasets. To validate the clustering results, the work is evaluated with one internal and one external cluster validation index.
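A compact sketch of one possible hybrid Cuckoo Search / Differential Evolution step for centroid-based clustering is given below; the nest layout (k centroids per nest), the DE-style move replacing the plain Levy flight, the parameters and the sum-of-squared-error fitness are all assumptions, and the paper's exact hybridization scheme is not reproduced here.

```python
import numpy as np

# Minimal sketch of a CS-DE style clustering of expression profiles.

def sse(centroids, data):
    d = np.linalg.norm(data[:, None, :] - centroids[None, :, :], axis=2)
    return d.min(axis=1).sum()          # within-cluster sum of squared error

def cs_de_cluster(data, k=3, n_nests=15, iters=100, F=0.5, pa=0.25):
    nests = data[np.random.choice(len(data), (n_nests, k))]  # init from samples
    fit = np.array([sse(n, data) for n in nests])
    for _ in range(iters):
        best = nests[fit.argmin()]
        for i in range(n_nests):
            # DE-style rand-to-best mutation stands in for the Levy-flight move.
            a, b = nests[np.random.choice(n_nests, 2, replace=False)]
            trial = nests[i] + F * (best - nests[i]) + F * (a - b)
            t_fit = sse(trial, data)
            if t_fit < fit[i]:
                nests[i], fit[i] = trial, t_fit
        # Abandon a fraction pa of the worst nests (cuckoo discovery step).
        worst = fit.argsort()[-int(pa * n_nests):]
        nests[worst] = data[np.random.choice(len(data), (len(worst), k))]
        fit[worst] = [sse(n, data) for n in nests[worst]]
    return nests[fit.argmin()]

data = np.random.rand(200, 10)          # placeholder for expression profiles
print(cs_de_cluster(data).shape)        # (3, 10) cluster centroids
```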