Mathematical Study for Traffic Flow and Traffic Density in Kigali Roads

This work presents a mathematical study of traffic flow and traffic density on Kigali city roads, using data collected from the National Police of Rwanda in 2012. Several mathematical models were used to analyze and compare traffic variables. The study was carried out on Kigali roads, specifically on the section between the roundabouts at the Kigali Business Center (KBC) and Prince House. We used mathematical tools to analyze the collected data and to understand the relationships between traffic variables. The Poisson distribution was applied to model the number of accidents occurring on this section of road, from KBC to Prince House. The results show that accidents occurred at a very high rate in 2012, largely because this section has a very narrow single lane on each side, which leads to heavy vehicle congestion and, consequently, frequent accidents. Using the speed and density data collected on this section of road, we found that an increase in density results in a decrease in vehicle speed; at the point where the density equals the jam density, the speed becomes zero. The approach is promising in capturing sudden changes in flow patterns and can be utilized in a range of intelligent management strategies, especially for the detection and control of nonrecurrent congestion effects.
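
For illustration of the two tools referred to above, the sketch below evaluates a Poisson probability of an accident count and a linear speed-density relation of the Greenshields type, which is consistent with the zero-speed-at-jam-density behavior described; the exact model used in the study is not stated in the abstract, and all numerical values here are hypothetical.

```python
import math

def poisson_pmf(k, lam):
    """Probability of observing exactly k accidents in a period with mean rate lam."""
    return math.exp(-lam) * lam**k / math.factorial(k)

def greenshields_speed(density, free_flow_speed, jam_density):
    """Linear speed-density relation: speed falls to zero at the jam density."""
    return free_flow_speed * (1.0 - density / jam_density)

# Hypothetical illustrative values, not the study's data.
print(poisson_pmf(3, lam=2.5))           # chance of exactly 3 accidents in a period
print(greenshields_speed(80, 50, 160))   # speed (km/h) at half the jam density
print(greenshields_speed(160, 50, 160))  # 0.0 at the jam density
```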

A Comparison of Tsunami Impact to Sydney Harbour, Australia at Different Tidal Stages

Sydney Harbour is an iconic location with a dense population and low-lying development. On the east coast of Australia, facing the Pacific Ocean, it is exposed to several tsunamigenic trenches. This paper presents a component of the most detailed assessment to date of the potential for earthquake-generated tsunami impact on Sydney Harbour. Models in this study use dynamic tides to account for tide-tsunami interaction. Sydney Harbour's tidal range is 1.5 m, and the spring tides from January 2015 used in the modelling for this study are close to the full tidal range. The tsunami wave trains modelled include hypothetical tsunami generated by earthquakes of magnitude 7.5, 8.0, 8.5, and 9.0 Mw at the Puysegur and New Hebrides trenches, as well as representations of the historical 1960 Chilean and 2011 Tohoku events. All wave trains are modelled so that the peak wave coincides with both a low tide and a high tide. A single wave train, representing a 9.0 Mw earthquake at the Puysegur trench, is modelled with peak waves coinciding with every hour across a 12-hour tidal phase. Using the hydrodynamic model ANUGA, results are compared according to the impact parameters of inundation area, depth variation and current speed. Results show that both maximum inundation area and depth variation are tide dependent. Maximum inundation area increases when the peak wave coincides with a higher tide; however, hazardous inundation is only observed for the larger waves modelled: NH90high and P90high. The maximum and minimum depths are deeper on higher tides and shallower on lower tides. The difference between maximum and minimum depths varies across different tidal phases, although the differences are slight. Maximum current speeds are shown to be a significant hazard for Sydney Harbour; however, they do not show consistent patterns according to tide-tsunami phasing. The maximum current speed hazard is greater in specific locations such as Spit Bridge, a narrow channel with extensive marine infrastructure. The results presented for Sydney Harbour are novel, and the conclusions are consistent with previous modelling efforts in the greater area. It is shown that tide must be a consideration for both tsunami modelling and emergency management planning. Modelling with peak tsunami waves coinciding with a high tide would be a conservative approach; however, it must be considered that maximum current speeds may be higher on other tides.

Design and Development of Real-Time Optimal Energy Management System for Hybrid Electric Vehicles

This paper describes a strategy to develop an energy management system (EMS) for a charge-sustaining power-split hybrid electric vehicle. This kind of hybrid electric vehicle (HEV) benefits from the advantages of both parallel and series architectures. However, managing the power flow between the battery and the engine optimally becomes relatively more complicated. The strategy applied in this paper is based on a nonlinear model predictive control approach. First, an appropriate control-oriented model, accurate enough yet simple, was derived. To make the controller usable in real time, the problem was solved off-line for a wide range of reference signals and initial conditions, and the computed manipulated variables were stored in look-up tables. The look-up tables occupy little memory, and the computational load decreases dramatically because, to find the required manipulated variables, the controller only needs a simple interpolation between tables.
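
To illustrate the run-time side of this scheme, the sketch below interpolates a manipulated variable from a pre-computed table; the grid variables (battery state of charge and power demand), the table values, and the interpolation routine are assumptions chosen for illustration, not the paper's actual tabulation.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Hypothetical off-line NMPC results: engine power command tabulated over
# battery state of charge (SOC) and driver power demand.
soc_grid = np.linspace(0.3, 0.8, 6)            # [-]
demand_grid = np.linspace(0.0, 60.0, 7)        # [kW]
engine_power_table = np.outer(1.0 - soc_grid, demand_grid)  # placeholder values

lookup = RegularGridInterpolator((soc_grid, demand_grid), engine_power_table)

# At run time, one interpolation replaces solving the optimization on-line.
current_state = [0.55, 25.0]                   # [SOC, power demand in kW]
print(lookup([current_state])[0])              # interpolated engine power command
```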

Reliability Analysis of Computer Centre at Yobe State University Using LRU Algorithm

In this paper, we focus on the reliability and performance analysis of the Computer Centre (CC) at Yobe State University, Damaturu, Nigeria. The CC consists of three servers: one database mail server, one redundant server, and one shared with the client computers in the CC (called the local server). Considering the different possible operating states of the CC, an analysis was carried out to evaluate popular reliability measures such as availability, reliability, mean time to failure (MTTF), and the profit generated by operating the system. The system fails completely if the router fails, if the redundant server fails before the mail server is repaired, or if the switch fails. The system can also partially fail when the local server fails. Failed devices are restored according to the Least Recently Used (LRU) technique. The system can also fail entirely due to a cooling failure of the server, an electricity failure, or a natural calamity such as an earthquake, fire, or tsunami. All failure rates are assumed to be constant and to follow an exponential time distribution, while repair follows two types of distribution: general and the Gumbel-Hougaard family copula distribution.
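
The copula-based repair model is beyond a short example, but the constant-failure-rate assumption itself is easy to illustrate: for an exponentially distributed lifetime, MTTF = 1/λ, and the steady-state availability of a repairable unit is MTTF/(MTTF + MTTR). The sketch below applies this to a few units assumed to be in series; all rates are hypothetical, not the paper's data.

```python
# Hypothetical constant failure rates (per hour); not the paper's data.
failure_rates = {"router": 0.001, "switch": 0.002, "mail_server": 0.005}
repair_rate = 0.5  # per hour, taken identical for every unit in this sketch

system_availability = 1.0
for name, lam in failure_rates.items():
    mttf = 1.0 / lam                      # mean time to failure
    mttr = 1.0 / repair_rate              # mean time to repair
    availability = mttf / (mttf + mttr)   # steady-state availability of one unit
    system_availability *= availability   # units assumed to be in series
    print(f"{name}: MTTF = {mttf:.0f} h, availability = {availability:.4f}")

print(f"series-system availability = {system_availability:.4f}")
```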

Unbalanced Distribution Optimal Power Flow to Minimize Losses with Distributed Photovoltaic Plants

Electric power systems are expected to operate with minimum losses and with voltages meeting international standards. This is generally made possible by control actions provided by automatic voltage regulators, capacitors and transformers with on-load tap changers (OLTC). With the development of photovoltaic (PV) system technology, their integration into distribution networks has increased in recent years to the extent of replacing the above-mentioned techniques. The conventional analysis and simulation tools used for electrical networks are no longer able to take into account the control actions necessary for studying the impact of distributed PV generation. This paper presents an unbalanced optimal power flow (OPF) model that minimizes losses by combining the active power generation and reactive power control of single-phase and three-phase PV systems. Reactive power can be generated or absorbed using the available capacity and the adjustable power factor of the inverter. The unbalanced OPF is formulated with current balance equations and solved by the primal-dual interior point method. Several simulation cases have been carried out, varying the size and location of the PV systems, and the results show a detailed view of the impact of distributed PV generation on distribution systems.
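
As an illustrative formulation only (the abstract does not give the exact one), a loss-minimizing unbalanced OPF with per-phase current balance and PV reactive power bounded by the inverter's remaining apparent-power capacity can be sketched as:

```latex
\begin{aligned}
\min_{\,V,\;Q^{\mathrm{PV}}} \quad & P_{\mathrm{loss}} = \sum_{k \in \mathcal{B}} R_k\,|I_k|^{2} \\
\text{s.t.} \quad & I^{\mathrm{gen}}_{i,\phi} - I^{\mathrm{load}}_{i,\phi} = \sum_{j} Y_{ij}\,V_{j,\phi}
   && \text{(current balance for each bus $i$ and phase $\phi$)} \\
 & \bigl|Q^{\mathrm{PV}}_{i}\bigr| \le \sqrt{\bigl(S^{\mathrm{inv}}_{i}\bigr)^{2} - \bigl(P^{\mathrm{PV}}_{i}\bigr)^{2}}
   && \text{(inverter capacity / power factor limit)} \\
 & V^{\min} \le |V_{i,\phi}| \le V^{\max}
   && \text{(voltage limits)}
\end{aligned}
```

where $\mathcal{B}$ is the set of branches; phase coupling in $Y_{ij}$ is omitted here for brevity.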

Bee Colony Optimization Applied to the Bin Packing Problem

We treat the two-dimensional bin packing problem, which involves packing a given set of rectangles into a minimum number of larger identical rectangles called bins. This combinatorial problem is NP-hard. We propose a pretreatment for the oriented version of the problem that allows the lost areas in the bins to be exploited and the problem size to be reduced. A heuristic method based on the first-fit strategy, adapted to this problem, is presented. We then present a resolution approach based on bee colony optimization. Computational results compare the number of bins used with and without the pretreatment.
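
For reference, a minimal first-fit sketch in the oriented (no-rotation) setting is shown below, using a simple shelf layout; it only illustrates the generic first-fit idea and is not the adapted heuristic or the pretreatment proposed in the paper.

```python
# Simplified shelf-based first-fit for oriented rectangles: each rectangle is
# placed in the first shelf of the first bin that can hold it; otherwise a new
# shelf, and if necessary a new bin, is opened.
def shelf_first_fit(rects, bin_w, bin_h):
    """rects: list of (w, h) with w <= bin_w and h <= bin_h. Returns the bins."""
    bins = []  # each bin is a list of shelves [used_width, shelf_height, y_offset]
    for w, h in sorted(rects, key=lambda r: r[1], reverse=True):  # tallest first
        placed = False
        for shelves in bins:
            for shelf in shelves:
                if shelf[0] + w <= bin_w and h <= shelf[1]:
                    shelf[0] += w            # place on an existing shelf
                    placed = True
                    break
            if placed:
                break
            used_h = shelves[-1][2] + shelves[-1][1]
            if h <= bin_h - used_h:          # open a new shelf in this bin
                shelves.append([w, h, used_h])
                placed = True
                break
        if not placed:                       # open a new bin
            bins.append([[w, h, 0]])
    return bins

print(len(shelf_first_fit([(4, 3), (5, 2), (2, 2), (6, 5)], bin_w=6, bin_h=6)))
```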

A Wall Law for Two-Phase Turbulent Boundary Layers

The presence of bubbles in the boundary layer introduces corrections to the log law, which must be taken into account. In this work, a logarithmic wall law is presented for bubbly two-phase flows. The wall law is based on the postulate of an additional turbulent viscosity associated with bubble wakes in the boundary layer. It contains an empirical constant accounting both for the interaction with shear-induced turbulence and for the non-linearity of the bubble contribution; this constant was deduced from experimental data. The wall friction predicted with the wall law was compared to experimental data for a turbulent boundary layer developing on a vertical flat plate in the presence of millimetric bubbles. Very good agreement between the experimental data and the predicted wall friction was found, especially at low void fraction, where bubble-induced turbulence plays a significant role.
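
For context, the classical single-phase logarithmic wall law that such corrections modify is recalled below; the additive correction ΔB(α) shown for the bubbly case is only schematic and is not the paper's exact expression.

```latex
u^{+} = \frac{1}{\kappa}\,\ln y^{+} + B,
\qquad u^{+} = \frac{u}{u_{\tau}}, \quad y^{+} = \frac{y\,u_{\tau}}{\nu},
\quad \kappa \approx 0.41,\; B \approx 5.0,
\qquad\text{and, schematically,}\quad
u^{+} = \frac{1}{\kappa}\,\ln y^{+} + B + \Delta B(\alpha),
```

where $\alpha$ is the local void fraction and $\Delta B(\alpha)$ collects the empirically fitted bubble-induced contribution.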

High Cycle Fatigue Analysis of a Lower Hopper Knuckle Connection of a Large Bulk Carrier under Dynamic Loading

The fatigue of ship structural details is of major concern in the maritime industry, as it can generate fracture issues that may compromise structural integrity. In the present study, a fatigue analysis of the lower hopper knuckle connection of a bulk carrier was conducted using the Finite Element Method by means of the ABAQUS/CAE software. The fatigue life was calculated using Miner's rule, with the long-term distribution of stress range described by a two-parameter Weibull distribution. The cumulative damage ratio was estimated from the fatigue damage resulting from the stress range occurring at each loading condition. For this purpose, a cargo hold model was first generated, which extends over the length of two holds (the mid-hold and half of each of the adjacent holds) and transversely over the full breadth of the hull girder. Following that, a submodel of the area of interest was extracted in order to calculate the hot spot stress of the connection and to estimate the fatigue life of the structural detail. Two hot spot locations were identified: one at the top layer of the inner bottom plate and one at the top layer of the hopper plate. The IACS Common Structural Rules (CSR) require that specific dynamic load cases be assessed for each loading condition. The dynamic load case that causes the highest stress range at each loading condition should then be used in the fatigue analysis for the calculation of the cumulative fatigue damage ratio. Each load case has a different effect on the ship hull response. When assessing the fatigue strength of the lower hopper knuckle connection, the main concern was the determination of the maximum, i.e. the critical, value of the stress range acting in the direction normal to the weld toe line; this is the transverse direction, perpendicular to the ship's centerline axis. The load cases were explored both theoretically and numerically in order to establish the one that causes the highest damage at the location examined. The most severe one was identified to be the load case induced by the beam sea condition with the encountered wave coming from starboard. The cargo hold model was assumed to be simply supported at its ends. A coarse mesh was generated in order to represent the overall stiffness of the structure; the elements employed were quadrilateral shell elements, each with four integration points. A linear elastic analysis was performed, since linear elastic material behavior can be presumed given that only localized yielding is allowed by most design codes. At the submodel level, the displacements from the cargo hold model analysis were applied to the nodes of the submodel's outer region, acting as boundary conditions and applied loading for the submodel. A very fine mesh zone was generated in order to calculate the hot spot stress at the hot spot locations. The fatigue life of the detail was found to be 16.4 years, which is lower than the design fatigue life of the structure (25 years), making this location vulnerable to fatigue fracture issues. Moreover, the loading conditions inducing the most damage at this location were found to be the various ballasting conditions.
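
For reference, Miner's rule combined with a one-slope S-N curve and a two-parameter Weibull long-term stress-range distribution leads to the commonly used closed-form damage expression below; this is the standard textbook form, and the notation used in the paper may differ.

```latex
D = \sum_{i}\frac{n_i}{N_i}
  = \frac{N_L}{\bar a}\, q^{\,m}\, \Gamma\!\left(1 + \frac{m}{h}\right),
\qquad N = \bar a\,(\Delta\sigma)^{-m},
```

where $n_i$ is the number of cycles in stress-range block $i$, $N_i$ the corresponding number of cycles to failure from the S-N curve, $N_L$ the total number of cycles in the design life, $q$ and $h$ the Weibull scale and shape parameters of the long-term stress-range distribution, $\bar a$ and $m$ the S-N curve parameters, and $\Gamma$ the gamma function; fatigue failure is assumed when $D$ reaches 1.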

On-Chip Aging Sensor Circuit Based on Phase Locked Loop Circuit

In sub-micrometer technology, the aging phenomenon starts to have a significant impact on the reliability of integrated circuits by causing performance degradation. For that reason, it is important to be able to evaluate aging effects accurately. This paper presents an accurate aging measurement approach based on a phase-locked loop (PLL) and a voltage-controlled oscillator (VCO) circuit. The architecture rejects the circuit's self-aging effect by exploiting the characteristics of the PLL, which generates its frequency without being affected by aging phenomena. The aging monitor is implemented in a low-power 32 nm CMOS technology and occupies a very small area. Aging simulation results show that the proposed aging measurement circuit improves accuracy by about 2.8% at high temperature and 19.6% at high voltage.

PM10 Chemical Characteristics in a Background Site at the Universidad Libre Bogotá

An important aspect of air pollution is that PM10 concentrations tend to maintain a constant trend, with the exception of some places where they frequently surpass the allowed ranges established by Colombian legislation. The community that surrounds the Universidad Libre Bogotá includes a considerable number of students and workers, all of whom are possibly being exposed to PM10 for long periods of time while on campus. Thus, the chemical characterization of the PM10 found in the ambient air at the Universidad Libre Bogotá was identified as the problem to address. A Hi-Vol sampler and EPA Test Method 5 were used to determine whether the air quality is adequate for the human respiratory system, and quartz fiber filters were used during sampling. Samples were taken three days a week during a dry period throughout November and December 2015. The gravimetric method was used to determine PM10 concentrations. The chemical characterization includes non-conventional carcinogenic pollutants: atomic absorption spectrophotometry (AAS) was used for the determination of metals, and VOCs were analyzed using Fourier transform infrared spectroscopy (FTIR). In this way, PM10 concentrations ranging from 13 µg/m³ to 66 µg/m³ were obtained; these values were below the national standard limits. It is concluded that the PM10 concentrations over a 24-hour exposure period are lower than the values established by Colombian law (Resolution 610 of 2010); however, when compared with the limits set by the World Health Organization (WHO), these concentrations could possibly exceed permissible levels.
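
As a small illustration of the gravimetric determination, the concentration is the particle mass gained by the filter divided by the volume of air drawn through it; all numbers below are hypothetical, not the study's measurements.

```python
# Gravimetric PM10 concentration: filter mass gain divided by sampled air volume.
# Values below are hypothetical placeholders.
filter_mass_before_g = 3.1050      # clean quartz fiber filter [g]
filter_mass_after_g = 3.1820       # filter after 24 h of sampling [g]
flow_rate_m3_per_min = 1.13        # assumed Hi-Vol flow rate [m^3/min]
sampling_time_min = 24 * 60

collected_mass_ug = (filter_mass_after_g - filter_mass_before_g) * 1e6
sampled_volume_m3 = flow_rate_m3_per_min * sampling_time_min
pm10_ug_per_m3 = collected_mass_ug / sampled_volume_m3
print(f"PM10 = {pm10_ug_per_m3:.1f} µg/m³")
```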

Food Security Model and the Role of Community Empowerment: The Case of a Marginalized Village in Mexico, Tatoxcac, Puebla

Community empowerment has proved to be a key element in the solution of the food security problem. A conceptual analysis found that agricultural production, economic development and governance are the traditional bases of food security models. Although the literature points to social inclusion as an important factor for food security, no model has considered it as its basis. The aim of this research is to identify the different dimensions that make up an integral model for food security, with emphasis on community empowerment. A diagnosis was made in the study community (Tatoxcac, Zacapoaxtla, Puebla) to identify the aspects that affect the level of food insecurity. The Latin American and Caribbean Food Security Scale (ELCSA) was applied to a statistical sample of 200 families, finding that households composed of adults and children experience moderate food insecurity (the ELCSA scale has three levels: low, moderate and high); this result is produced mainly by income capacity and the diversity of the household diet. On this basis, a model was developed to promote food security through five dimensions: 1. regional context of the community; 2. structure and system of local food; 3. health and nutrition; 4. access to information and technology; and 5. self-awareness and empowerment. The specific actions on each axis of the model allow the systemic approach needed to address food security in the community through the empowerment of society. It is concluded that the self-awareness of local communities is an area of extreme importance that must be taken into account in participatory schemes to improve food security. In the long term, the model requires the integrated participation of different actors, such as government, companies and universities, to solve something as vital as food security.

Electromagnetic Assessment of Submarine Power Cable Degradation Using Finite Element Method and Sensitivity Analysis

Submarine power cables used for electric energy distribution and transmission in offshore wind farms are subject to numerous threats. Some of the risks are associated with transport, installation and operation in a harsh marine environment. This paper describes the feasibility of an electromagnetic low-frequency sensing technique for submarine power cable failure prediction. The impact of the shape of structural damage and of material variability on the induced electric field is evaluated. The analysis is performed by modeling the cable using the finite element method, and sensitivity analysis is then used to identify the main damage characteristics affecting the electric field variation. Lastly, we discuss the results obtained.
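
The abstract does not state which sensitivity analysis method is used; as one simple possibility, a one-at-a-time sketch around a nominal damage configuration, with a placeholder response standing in for the finite element solver, could look like this.

```python
# One-at-a-time sensitivity sketch: perturb each damage parameter around its
# nominal value and record the relative change of the induced field.
def field_model(params):
    # Placeholder response, NOT the paper's finite element model.
    return (1.0 + 2.0 * params["crack_depth"]
            + 0.5 * params["crack_width"]
            - 0.8 * params["sheath_conductivity"])

# Hypothetical nominal damage parameters (arbitrary units).
nominal = {"crack_depth": 0.2, "crack_width": 0.1, "sheath_conductivity": 1.0}
base = field_model(nominal)

for name in nominal:
    perturbed = dict(nominal)
    perturbed[name] *= 1.01                                  # +1 % perturbation
    sensitivity = (field_model(perturbed) - base) / (0.01 * base)
    print(f"{name}: normalized sensitivity ≈ {sensitivity:.2f}")
```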

Transport of Analytes under Mixed Electroosmotic and Pressure Driven Flow of Power Law Fluid

In this study, we analyze the transport of analytes in a two-dimensional, steady, incompressible flow of a power-law fluid through a rectangular nanochannel. A mathematical model based on the Cauchy momentum, Nernst-Planck and Poisson equations is considered to study the combined effect of mixed electroosmotic (EO) and pressure-driven (PD) flow. The coupled governing equations are solved numerically by the finite volume method. We study extensively the effect of key parameters, e.g. the flow behavior index, the electrolyte concentration, the surface potential, the imposed pressure gradient and the imposed electric field strength, on the net average flow across the channel. In addition, to study the effect of the mixed EO and PD flow on the analyte distribution across the channel, we consider a nonlinear model based on the general convection-diffusion-electromigration equation. We also present the retention factor for various values of the electrolyte concentration and flow behavior index.
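
The constitutive and transport relations named above have standard forms, recalled here for orientation (the non-dimensionalization and boundary conditions used in the paper may differ):

```latex
\mu_{\mathrm{eff}} = m\,\dot{\gamma}^{\,n-1}, \qquad
\mathbf{N}_i = -D_i \nabla c_i - \frac{z_i F D_i}{RT}\,c_i \nabla \phi + c_i\,\mathbf{u}, \qquad
\nabla^{2}\phi = -\frac{\rho_e}{\varepsilon}, \quad \rho_e = F\sum_i z_i c_i,
```

where $n$ is the flow behavior index ($n<1$ shear-thinning, $n>1$ shear-thickening) and $m$ the consistency index of the power-law fluid.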

Absence of Developmental Change in Epenthetic Vowel Duration in Japanese Speakers’ English

This study examines developmental change in the production of epenthetic vowels by Japanese learners of English in relation to the acquisition of L2 English speech rhythm. Seventy-two Japanese learners of English in the J-AESOP corpus were divided into lower- and higher-level learners according to their proficiency scores and the frequency of vowel epenthesis. Three learners were excluded because no vowel epenthesis was observed in their utterances. The analysis of their read English speech data showed no statistical difference between lower- and higher-level learners, implying the absence of any developmental change in the duration of epenthetic vowels. This result, together with the findings of previous studies, is discussed in relation to the transfer of L1 phonology and the manifestation of L2 English rhythm.

Comparative Study of Conventional and Satellite Based Agriculture Information System

The purpose of this study is to compare the conventional crop monitoring system with the satellite-based crop monitoring system in Pakistan. The study was conducted for SUPARCO (Space and Upper Atmosphere Research Commission) and focused on the wheat crop, as it is the main cash crop of Pakistan and of the province of Punjab. It answers the following question: which system is better in terms of cost, time and manpower? The manpower calculated for the Punjab CRS (Crop Reporting Service) is 1,418 personnel, and for SUPARCO, 26 personnel. The total cost calculated for SUPARCO is almost 13.35 million, while for the CRS it is 47.705 million. The man-hours calculated for the CRS are 1,543,200 hrs (136 days), and for SUPARCO, 8,320 hrs (40 days); this means that SUPARCO workers finish their work 96 days earlier than CRS workers. The results show that the satellite-based crop monitoring system is more efficient in terms of manpower, cost and time than the conventional system, and it also generates earlier crop forecasts and estimations. The research instruments used included interviews, physical visits, group discussions, questionnaires, and the study of reports and workflows. A total of 93 employees were selected using Yamane's formula, and data were collected with the help of questionnaires and interviews. Comparative graphing was used in the analysis of the data to formulate the results of the research. The research findings also demonstrate that although conventional methods still have a strong presence in crop monitoring in Pakistan, it is time to bring change through technology so that agriculture can develop along modern lines.
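
For reference, Yamane's formula determines the sample size as n = N / (1 + N·e²). The sketch below uses a hypothetical population size and a 5% margin of error, since the abstract does not report the values actually used.

```python
import math

def yamane_sample_size(population, margin_of_error):
    """Yamane's formula: n = N / (1 + N * e^2), rounded up."""
    return math.ceil(population / (1 + population * margin_of_error**2))

# Hypothetical population size and 5 % margin of error.
print(yamane_sample_size(120, 0.05))
```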

Closing the Loop between Building Sustainability and Stakeholder Engagement: Case Study of an Australian University

Rapid population growth and urbanization are creating pressure throughout the world. This has a dramatic effect on key services such as water, food, transportation, energy and infrastructure. The built environment sector is growing concurrently to meet the needs of urbanization. Given such large-scale development of buildings, they need to be monitored and managed efficiently. Along with appropriate management, climate adaptation is also crucial, because buildings are one of the major sources of greenhouse gas emissions in their operation phase. To be adaptive, buildings need to follow a triple bottom line approach to sustainability, i.e. being socially, environmentally and economically sustainable. Hence, in order to deliver these sustainability outcomes, there is a growing understanding of, and drive towards, switching to green buildings or renovating existing ones to green standards wherever possible. Academic institutions in particular have been following this trend globally. This is highly significant, as universities usually have high occupancy rates and manage a large building portfolio. Also, as universities accommodate the future generation of architects, policy makers and others, they have the potential to set themselves up as a best-practice model of research and innovation for the rest of the industry to follow. Their climate adaptation, sustainable growth and performance management therefore become highly crucial in order to provide the best services to users. With the objective of evaluating appropriate management mechanisms within academic institutions, a feasibility study was carried out in a recent 5-Star Green Star rated university building (housing the School of Construction) in Victoria, the south-eastern state of Australia. The key aim was to understand the behavioral and social aspects of the building users and management, and the impact of their relationship on overall building sustainability. A survey was used to understand the building occupants' responses and reactions to their work environment and its management. A report was generated based on the survey results, complemented with utility and performance data, and was then used to evaluate the management structure of the university. Following the report, interviews were scheduled with the facility and asset managers in order to understand the approach they use to manage the different buildings on their university campuses (old, new, refurbished), the respective building, and the parameters incorporated in maintaining the Green Star performance. The results are aimed at closing the communication and feedback loop within the respective institutions and assisting the facility managers in delivering appropriate stakeholder engagement. For the wider design community, analysis of the data highlights the applicability and significance of prioritizing key stakeholders, integrating desired engagement policies within an institution's management structures and frameworks, and their effect on building performance.

A Numerical Study on Electrophoresis of a Soft Particle with Charged Core Coated with Polyelectrolyte Layer

The migration of a core-shell soft particle under the influence of an external electric field in an electrolyte solution is studied numerically. The soft particle is coated with a positively charged polyelectrolyte layer (PEL), and the rigid core carries a uniform surface charge density. The Darcy-Brinkman extended Navier-Stokes equations are solved for the motion of the ionized fluid, the non-linear Nernst-Planck equations for the ion transport, and the Poisson equation for the electric potential. A pressure-correction-based iterative algorithm is adopted for the numerical computations. The effects of convection on double layer polarization (DLP) and of diffusion-dominated counter-ion penetration are investigated for a wide range of Debye layer thickness, PEL fixed surface charge density, and PEL permeability. Our results show that when the Debye layer is of the order of the particle size, the DLP effect is significant and produces a reduction in electrophoretic mobility. However, the double layer polarization effect is negligible for a thin Debye layer or low-permeability cases. The point of zero mobility and the existence of mobility reversal, depending on the electrolyte concentration, are also presented.
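
For orientation, the Darcy-Brinkman extension referred to above adds a drag term to the Stokes momentum balance inside the polyelectrolyte layer; one common form is recalled below (conventions vary, and the paper's exact equations may differ):

```latex
\mu \nabla^{2}\mathbf{u} - \nabla p - \rho_e \nabla\phi - \frac{\mu}{\ell^{2}}\,\mathbf{u} = \mathbf{0}
\quad \text{inside the PEL},
\qquad
\mu \nabla^{2}\mathbf{u} - \nabla p - \rho_e \nabla\phi = \mathbf{0}
\quad \text{outside},
```

where $\ell$ is the Brinkman screening length measuring the PEL permeability and, inside the layer, $\rho_e$ includes the PEL fixed charge in addition to the mobile ion charge.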

Quantification of E-Waste: A Case Study in Federal University of Espírito Santo, Brazil

The segregation of waste electrical and electronic equipment (WEEE) at the generating source, its qualitative and quantitative characterization, and the identification of its origin, besides being integral parts of classification reports, are crucial steps for the success of its integrated management. The aim of this paper was to quantify WEEE generation at the Federal University of Espírito Santo (UFES), Brazil, as well as to identify the sources, temporary storage sites, main transportation routes and destinations, the most generated types of WEEE, and their recycling potential. Quantification of the WEEE generated at the University between 2010 and 2015 was performed using data provided by UFES's asset management sector. Information on the flow of EEE and WEEE on the campuses was obtained through questionnaires applied to University workers. A total of 6,028 units of data processing WEEE disposed of by the University between 2010 and 2015 were recorded. Among this waste, the most generated items were CRT screens, desktops, keyboards and printers. Furthermore, it was observed that this WEEE is temporarily stored in inappropriate places on the University campuses. In general, these units are donated to NGOs in the city or sold through auctions (2010 and 2013). As for the recycling potential, the primary processing and subsequent sale of printed circuit boards (PCBs) from the computers could yield up to US$ 27,839.23. The results highlight the importance of a WEEE management policy at the University.

Anisotropic Shear Strength of Sand Containing Plastic Fine Materials

Anisotropy is one of the major aspects that affect soil behavior, and extensive efforts have been made to investigate its effect on the mechanical properties of soil. However, very little attention has been given to the combined effect of anisotropy and fines content. Therefore, in this paper, the anisotropic strength of sand containing fines contents (F) of 5%, 10%, 15%, and 20% was investigated using hollow cylinder tests under principal stress directions of α = 0° and α = 90°. For a given principal stress direction (α), it was found that increasing the fines content resulted in a decreasing deviator stress (q). Moreover, the results revealed that all fines contents showed anisotropic strength, with a clear difference between the strength at α = 0° and the strength at α = 90°. This anisotropy was greatest at F = 5% and decreased with increasing fines content, particularly at F = 10%. Mixtures with low fines content showed low contractive behavior and tended to dilate more. Moreover, all sand-clay mixtures exhibited less dilation and more compression at α = 90° than at α = 0°.

The Formation of Mutual Understanding in Conversation: An Embodied Approach

Mutual understanding in conversation is very important for human relations. This study investigates the mental function involved in the formation of mutual understanding between two people in conversation, using an embodied approach. Forty people participated in this study and were divided into pairs randomly. Four conversation situations between the two (making/listening to fun or pleasant talk, making/listening to regrettable talk) were set for four minutes each, and the finger plethysmogram (200 Hz) of each participant was measured. As a result, the attractors of the participants who reported "I did not understand my partner" show a collapsed shape, which means that the fluctuation of their rhythm is too small to match their partner's rhythm, and their cross-correlation is low. The autonomic balance of both persons tends to resonate during conversation, and both LLEs tend to resonate, too. Throughout human history, in order to survive as weak mammals, human beings may have needed to stay with others; that is, they have developed resonating characteristics, which is called self-organization. However, this resonant feature sometimes collapses, depending on the lifestyle the person has formed after birth. It is difficult for people who do not have a lifestyle of mutual gaze to make their biological signal waves resonate with others'. These people show tendencies toward anxiety, fatigue, and confusion. Mutual understanding is thought to be formed as a result of cooperation between the self-organizing features of the persons who are talking and the lifestyle indicated by mutual gaze. Such an entanglement phenomenon is called a nonlinear relation. This research found that the formation of mutual understanding is expressed by the rhythms of biological signals showing a nonlinear relationship.