Considering Aerosol Processes in Nuclear Transport Package Containment Safety Cases

Packages designed for the transport of radioactive material must satisfy rigorous safety regulations specified by the International Atomic Energy Agency (IAEA). Higher Activity Waste (HAW) transport packages have to maintain containment of their contents during normal and accident conditions of transport (NCT and ACT). To ensure the containment criteria are satisfied, these packages are required to be leak-tight in all transport conditions so that allowable activity release rates are met. Package design safety reports are the safety cases that provide the claims, evidence and arguments to demonstrate that packages meet the regulations; once a report is approved by the competent authority (in the UK this is the Office for Nuclear Regulation), a licence to transport radioactive material is issued for the package(s). The standard approach to demonstrating containment in the RWM transport safety case is set out in BS EN ISO 12807. This document explains a method for measuring a leak rate from the package by way of a small interspace test volume situated between two O-ring seals on the underside of the package lid. The interspace volume is pressurised and a pressure drop is measured. A small interspace test volume makes the method more sensitive, enabling the measurement of smaller leak rates. By ascertaining the activity of the contents, identifying a releasable fraction of material and treating that fraction as a gas, allowable leak rates for NCT and ACT are calculated. Adherence to the basic safety principles of ISO 12807 is deliberately pessimistic and is current practice in the demonstration of transport safety, accepted by the UK regulator. It is UK government policy that the management of HAW will be through geological disposal. It is proposed that intermediate level waste be transported to the geological disposal facility (GDF) in large cuboid packages. This poses a challenge for containment demonstration because such packages will have long seals and therefore large interspace test volumes. There is also uncertainty in the releasable fraction of material within the package ullage space, because the waste may be in many different forms, which makes it difficult to define the fraction of material released by the waste package. Additionally, because of the large interspace test volume, measuring the calculated leak rates may not be achievable. For this reason a justification for a lower releasable fraction of material is sought. This paper considers the use of aerosol processes to reduce the releasable fraction for both NCT and ACT. It reviews the basic coagulation and removal processes and applies the dynamic aerosol balance equation. The proposed solution includes only the most well understood physical processes, namely Brownian coagulation and gravitational settling. Other processes have been eliminated either on the basis that they would serve to reduce the release to the environment further (pessimistically, in keeping with the essence of nuclear transport safety cases) or that they are not credible in the conditions of transport considered.
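For orientation, the two retained processes enter the aerosol general dynamic equation in its standard form; a minimal statement, restricted to Brownian coagulation and first-order gravitational settling in an enclosure of height h (the notation is generic, not that of the safety case itself), is

    \frac{\partial n(v,t)}{\partial t} = \frac{1}{2}\int_0^{v} K(v-u,u)\, n(v-u,t)\, n(u,t)\, du \;-\; n(v,t)\int_0^{\infty} K(v,u)\, n(u,t)\, du \;-\; \frac{u_s(v)}{h}\, n(v,t)

where n(v,t) is the number concentration of particles of volume v, K is the Brownian coagulation kernel and u_s(v) is the gravitational settling velocity. Coagulation shifts the size distribution towards larger particles that settle faster, while settling removes particles from the ullage space, so both mechanisms act only to reduce the airborne, and hence releasable, fraction.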

Analytical Solution of the Boundary Value Problem of Delaminated Doubly-Curved Composite Shells

Delamination is one of the major failure modes in laminated composite structures. Delamination tips are mostly captured by spatial numerical models in order to predict crack growth. This paper presents mechanical models of delaminated composite shells based on shallow shell theories. The mechanical fields are based on a third-order displacement field in terms of the through-thickness coordinate of the laminated shell. The undelaminated and delaminated parts are captured by separate models, and the continuity and boundary conditions are formulated in a general way, providing a large boundary value problem. The system of differential equations is solved by the state space method for an elliptic delaminated shell having simply supported edges. The comparison of the proposed model with a numerical model indicates that the primary indicator of the model is the deflection and the secondary one is the widthwise distribution of the energy release rate. The model is promising and suitable for accurately determining the J-integral distribution along the delamination front. Based on the proposed model it is also possible to develop finite elements that are able to replace the computationally expensive spatial models of delaminated structures.
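For readers unfamiliar with the technique, the state space method referred to here recasts the governing equations as a first-order system; a schematic sketch (the state vector z and system matrix T are generic placeholders rather than the actual matrices of the shell model) is

    \frac{d\mathbf{z}(x)}{dx} = \mathbf{T}\,\mathbf{z}(x), \qquad \mathbf{z}(x) = e^{\mathbf{T}x}\,\mathbf{z}(0)

where z collects the displacement parameters and their derivatives in one in-surface direction, and the free constants in z(0) are fixed by the continuity conditions between the delaminated and undelaminated regions and by the simply supported boundary conditions.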

Pressure-Detecting Method for Estimating Levitation Gap Height of Swirl Gripper

The swirl gripper is an electrically activated noncontact handling device that uses swirling airflow to generate a lifting force. This force can be used to pick up a workpiece placed underneath the swirl gripper without any contact. It is applicable, for example, in the semiconductor wafer production line, where contact must be avoided during the handling and moving of a workpiece to minimize damage. When a workpiece levitates underneath a swirl gripper, the gap height between them is crucial for safe handling. Therefore, in this paper, we propose a method to estimate the levitation gap height by detecting pressure at two points. The method is based on a theoretical model of the swirl gripper and has been experimentally verified. Furthermore, the force between the gripper and the workpiece can also be estimated using the detected pressure. As a result, the nonlinear relationship between the force and gap height can be linearized by adjusting the rotating speed of the fan in the swirl gripper according to the estimated force and gap height. The linearized relationship is expected to enhance the handling stability of the workpiece.

Machinability Analysis in Drilling Flax Fiber-Reinforced Polylactic Acid Bio-Composite Laminates

Interest in natural fiber-reinforced composites (NFRC) is growing progressively in both academic research and industrial applications thanks to their many advantages, such as low cost, biodegradability, eco-friendly nature and relatively good mechanical properties. However, their widespread use is still considered challenging because of their non-homogeneous structure and the limited knowledge of their machinability characteristics and of the parameter settings needed to avoid defects associated with the machining process. The present work investigates the effect of cutting tool geometry and material on the drilling-induced delamination, thrust force and hole quality produced when drilling a fully biodegradable flax/poly(lactic acid) composite laminate. Three drills with different geometries and materials were used at different drilling conditions to evaluate the machinability of the fabricated composites. The experimental results indicated that the choice of cutting tool, in terms of material and geometry, has a noticeable influence on the cutting thrust force and consequently on drilling-induced damage. The lowest thrust force and best hole quality were observed using the high-speed steel (HSS) drill, whereas the carbide drill (with a point angle of 130°) resulted in the highest thrust force. The carbide drill presented higher wear resistance and greater stability in the variation of thrust force with the number of holes drilled, while the HSS drill showed the lower thrust force during the drilling process. Finally, within the selected cutting range, the delamination damage increased noticeably with feed rate and moderately with spindle speed.
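Drilling-induced delamination of this kind is commonly quantified by the delamination factor; a typical definition (whether this exact metric was used in the present work is an assumption) is

    F_d = \frac{D_{max}}{D_0}

where D_max is the maximum diameter of the delaminated zone around the hole and D_0 is the nominal hole diameter, so that F_d = 1 corresponds to a damage-free hole and larger values indicate more severe delamination.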

Measuring the Effect of Ventilation on Cooking in Indoor Air Quality by Low-Cost Air Sensors

Concern about indoor air quality (IAQ) has been increasing due to its risk to human health. Smoking, sweeping, and stove and stovetop use are the activities that contribute most to indoor air pollution. Outdoor air pollution also affects IAQ. The most important factors affecting IAQ from cooking activities are the materials, fuels, foods, and ventilation. Low-cost, mobile air quality monitoring (LCMAQM) sensors are an accessible technology for assessing IAQ because of their lower cost compared to conventional instruments. The IAQ was assessed, using LCMAQM sensors, during cooking activities in a University of Minnesota graduate-housing unit, evaluating different ventilation systems. The gases measured were carbon monoxide (CO) and carbon dioxide (CO2). The particle metrics measured were particulate matter smaller than 2.5 µm (PM2.5) and lung deposited surface area (LDSA). The measurements were conducted during April 2019 in the Como Student Community Cooperative (CSCC), a graduate housing complex at the University of Minnesota, using an electric stove for cooking. The amount and type of food and oil used for cooking were the same for each measurement. There were six measurements: two experiments measured air quality without any ventilation, two used an extractor as mechanical ventilation, and two used the extractor and open windows as combined mechanical and natural ventilation. The results show that natural ventilation is the most efficient system for controlling particles and CO2. Natural ventilation reduced concentrations by 79% for LDSA and 55% for PM2.5 compared to no ventilation; likewise, the CO2 concentration was reduced by 35%. A well-mixed vessel model was implemented to assess particle formation and decay rates. Removal rates by the extractor were significantly higher for LDSA, which is dominated by smaller particles, than for PM2.5, but in both cases much lower than for natural ventilation. There was significant day-to-day variation in particle concentrations under nominally identical conditions, which may be related to the fat content of the food. Further research is needed to assess the impact of the fat in food on particle generation.
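For context, a generic single-zone (well-mixed) balance of the kind mentioned above can be written as follows (the notation is illustrative; the paper's exact parameterisation may differ):

    V\,\frac{dC}{dt} = S(t) - (Q + k\,V)\,C(t)

where C is the pollutant concentration, V the room volume, S the cooking source rate, Q the ventilation flow rate and k a first-order deposition or decay rate. After the source stops, the concentration decays approximately as C(t) \approx C_0\, e^{-(Q/V + k)\,t}, which is how formation and decay rates can be extracted from the measured time series.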

Optimizing Network Latency with Fast Path Assignment for Incoming Flows

Various flows in the network need to go through different types of middleboxes. Improper placement of network middleboxes and path assignment for flows can greatly increase network latency and decrease network performance. Minimizing the total end-to-end latency of all the flows requires assigning paths for the incoming flows. In this paper, the flow path assignment problem with regard to the placement of various kinds of middleboxes is studied. The flow path assignment problem is first formulated as a linear programming problem, which is very time-consuming to solve. A naive greedy algorithm is also studied, which is very fast but incurs much more latency than the linear programming solution. Finally, the paper presents a heuristic algorithm named FPA, which takes bottleneck link information and estimated bandwidth occupancy into consideration, and achieves near-optimal latency in much less time. Evaluation results validate the effectiveness of the proposed algorithm.
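To make the baseline concrete, the sketch below illustrates a greedy assignment of the kind described: each flow is routed, in arrival order, along the currently cheapest path from its source through a node hosting the required middlebox to its destination. The graph model, cost function and flows are illustrative assumptions; this is not the paper's FPA heuristic, which additionally accounts for bottleneck links and estimated bandwidth occupancy.

    import heapq

    def dijkstra(adj, src):
        """Shortest latency from src to every node; adj[u] is a list of (v, latency)."""
        dist = {src: 0.0}
        pq = [(0.0, src)]
        while pq:
            d, u = heapq.heappop(pq)
            if d > dist.get(u, float("inf")):
                continue
            for v, lat in adj[u]:
                nd = d + lat
                if nd < dist.get(v, float("inf")):
                    dist[v] = nd
                    heapq.heappush(pq, (nd, v))
        return dist

    def greedy_assign(adj, middlebox_nodes, flows):
        """flows: list of (src, dst, mb_type); returns (chosen middlebox, latency) per flow."""
        assignment = []
        for src, dst, mb_type in flows:
            from_src = dijkstra(adj, src)
            best = None
            for m in middlebox_nodes[mb_type]:
                from_m = dijkstra(adj, m)
                total = from_src.get(m, float("inf")) + from_m.get(dst, float("inf"))
                if best is None or total < best[1]:
                    best = (m, total)
            assignment.append(best)
        return assignment

    # Hypothetical topology: four nodes, one firewall instance placed at node 2.
    adj = {0: [(1, 1.0), (2, 2.5)], 1: [(0, 1.0), (2, 1.0), (3, 1.0)],
           2: [(0, 2.5), (1, 1.0), (3, 1.5)], 3: [(1, 1.0), (2, 1.5)]}
    print(greedy_assign(adj, {"fw": [2]}, [(0, 3, "fw")]))

Because the greedy choice ignores how one flow's assignment loads links needed by later flows, it can push traffic onto future bottlenecks, which is the gap the FPA heuristic aims to close.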

Design of a Telemetry, Tracking, and Command Radio-Frequency Receiver for Small Satellites Based on Commercial Off-The-Shelf Components

For several years now the aerospace industry has been developing more and more small satellites for Low-Earth Orbit (LEO) missions. Such satellites have low manufacturing and launch costs since their size and weight are smaller than those of other types of satellites. However, because of size limitations, small satellites need integrated electronic equipment based on digital logic. Moreover, LEO missions require telecommunication modules with high throughput to transmit a large amount of data to Earth in a short time. In order to meet such requirements, in this paper we propose a Telemetry, Tracking & Command module optimized through the use of Commercial Off-The-Shelf components. The proposed approach exploits the greater flexibility offered by these components to reduce costs and optimize performance. The method has been applied in detail to the design of the front-end receiver, which has a low noise figure (1.5 dB) and low DC power consumption (less than 2 W). Such performance is particularly attractive since it allows the stringent energy budget constraints that are typical of LEO small platforms to be fulfilled.
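For readers estimating front-end budgets, the reported 1.5 dB figure can be related to the individual stages through the standard cascade (Friis) noise formula; the stage breakdown itself is not given here, so the expression below is purely illustrative:

    F_{tot} = F_1 + \frac{F_2 - 1}{G_1} + \frac{F_3 - 1}{G_1 G_2} + \dots

where F_i and G_i are the noise factor and available gain of stage i (linear, not dB). It shows why a low-noise, reasonably high-gain first stage lets the rest of the COTS chain contribute little to the overall noise figure.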

Impact of Changes of the Conceptual Framework for Financial Reporting on the Indicators of the Financial Statement

The International Accounting Standards Board (IASB) has updated the conceptual framework for financial reporting. The main reason behind the update is to resolve accounting tasks caused by market development and business transactions of new economic content. Investors also call for higher transparency of information and responsibility for results in order to make more accurate risk assessments and forecasts. All of this makes it necessary to further develop the conceptual framework for financial reporting so that users get useful information. Market development and certain shortcomings of the conceptual framework revealed in practice require its reconsideration and new solutions. Some issues and concepts, such as disclosure and supply of information, its qualitative characteristics, assessment, and measurement uncertainty, had to be supplemented and perfected. The criteria for recognition of certain elements of reporting (assets and liabilities) had to be updated too, and all of this is set out in the updated edition of the conceptual framework for financial reporting, a comprehensive collection of concepts underlying the preparation of the financial statement. The main objective of the conceptual framework revision is to improve financial reporting and to develop a clear package of concepts. This will support the IASB in setting a common “Approach & Reflection” for similar transactions on the basis of mutually accepted concepts. As a result, companies will be able to develop coherent accounting policies for transactions or events that arise from particular deals to which no standard applies or where a standard allows a choice of accounting policy.

Influence of Concrete Cracking in the Tensile Strength of Cast-in Headed Anchors

Headed reinforcement bars are increasingly used for anchorage in concrete structures. Applications include connections in composite steel-concrete structures, such as beam-column joints, several strengthening situations, as well as more traditional uses in cast-in-place and precast structural systems. This paper investigates the reduction in the ultimate tensile capacity of embedded cast-in headed anchors due to concrete cracking. A series of nine laboratory tests was carried out to evaluate the influence of cracking on the concrete breakout strength in tension. The experimental results show that cracking affects both the resistance and the load-slip response of the headed bar anchors. The strengths measured in these tests are compared to theoretical resistances calculated following the recommendations of fib Bulletin no. 58 (2011), ETAG 001 (2010) and ACI 318 (2014). The influences of parameters such as the effective embedment depth (hef), bar diameter (ds), and concrete compressive strength (fc) are analysed and discussed. The theoretical recommendations are shown to be over-conservative for both embedment depths and were, in general, inaccurate in comparison to the experimental trends. ACI 318 (2014) was the design code with the best performance regarding the predictions of the ultimate load, with an average of 1.42 for the ratio between the experimental and estimated strengths, a standard deviation of 0.36, and a coefficient of variation equal to 0.25.
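For reference, the code predictions compared here all derive from the concrete capacity design approach; in ACI 318 (2014) the basic concrete breakout strength of a single cast-in anchor in tension takes the form (SI units; the modification factors for cracking, edge distance and eccentricity that enter the full prediction are omitted)

    N_b = k_c\, \lambda_a\, \sqrt{f'_c}\; h_{ef}^{1.5}

with k_c = 10 for cast-in anchors, f'_c in MPa and h_ef in mm, giving N_b in N. Cracked and uncracked conditions are distinguished through a separate modification factor (ψ_c,N), which is precisely the aspect the measured cracked-concrete capacities allow the authors to assess.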

PSRR Enhanced LDO Regulator Using Noise Sensing Circuit

In this paper, we present an LDO (low-dropout) regulator whose PSRR is enhanced by applying a constant-current-source generation technique through a BGR (bandgap reference) to form a noise sensing circuit. The current source derived from the BGR has a constant current value even if the applied voltage varies. The noise sensing circuit, which is composed of the BGR-based current source, operates between the error amplifier and the pass transistor gate of the LDO regulator. As a result, the LDO regulator has a PSRR of -68.2 dB at 1 kHz, -45.85 dB at 1 MHz and -45 dB at 10 MHz. The other performance metrics of the proposed LDO were maintained at the same level as those of the conventional LDO regulator.
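For context, the quoted figures follow the usual output-referred definition (the sign convention is inferred from the reported negative values):

    \mathrm{PSRR} = 20\,\log_{10}\!\left(\frac{\Delta V_{out}}{\Delta V_{in}}\right)\ \mathrm{dB}

so -68.2 dB at 1 kHz means that supply ripple reaching the regulator output is attenuated by a factor of roughly 2500 at that frequency.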

Review of the Road Crash Data Availability in Iraq

Iraq is a middle-income country where road crashes are considered one of the leading causes of death. To control this road risk, the Iraqi Ministry of Planning, General Statistical Organization, started to organise a system for collecting traffic accident data with details of causes and severity. These data are published as an annual report. This paper presents a review of the crash data available in Iraq. The available data represent accident rates at an aggregated level, classified according to accident type, road users' details, crash severity, type of vehicle, causes and number of casualties. The review is structured according to the types of models used in road safety studies and research, and according to the road safety data required in road construction tasks. The available data are also compared with the road safety dataset published in the United Kingdom as an example of a developed country. It is concluded that the data in Iraq are suitable for descriptive and exploratory models, aggregated-level comparison analysis, and evaluating and monitoring the progress of overall traffic safety performance. However, important traffic safety studies require disaggregated data and details related to the factors affecting the likelihood of traffic crashes. Some studies require spatial geographic details, such as the location of the accidents, which are essential in ranking roads according to their level of safety and in naming the most dangerous roads in Iraq, which in turn requires a tactical plan to control this issue. Global road safety agencies interested in solving this problem in low- and middle-income countries have designed road safety assessment methodologies that are based on road attribute data only. Therefore, this research recommends the use of one of these methodologies.

Evaluation of Model-Based Code Generation for Embedded Systems–Mature Approach for Development in Evolution

The model-based development approach is gaining more support and acceptance. Its higher abstraction level simplifies the description of systems, allowing domain experts to do their best without particular knowledge of programming. The different levels of simulation support rapid prototyping and allow the product to be verified and validated even before it exists physically. Nowadays the model-based approach is beneficial for modelling complex embedded systems as well as for generating code for many different hardware platforms. Moreover, it can be applied in safety-relevant industries like automotive, which brings extra automation to the expensive device certification process, especially software qualification. Some companies using it report cost savings and quality improvements, but others claim no major changes or even cost increases. This publication examines the level of maturity and autonomy of the model-based approach for code generation. It is based on a real-life automotive seat heater (ASH) module, developed using The MathWorks, Inc. tools. The model, created with Simulink, Stateflow and Matlab, is used for automatic generation of C code with Embedded Coder. To prove the maturity of the process, the Code Generation Advisor is used for automatic configuration. All additional configuration parameters are set to auto, when applicable, leaving the generation process to function autonomously. As a result of the investigation, the publication compares the quality of the generated embedded code with that of manually developed code. The measurements show that, in general, the code generated by the automatic approach is not worse than the manual one. A deeper analysis of the technical parameters enumerates the disadvantages, some of which are identified as topics for our future work.

Modelling and Simulating CO2 Electro-Reduction to Formic Acid Using Microfluidic Electrolytic Cells: The Influence of Bi-Sn Catalyst and 1-Ethyl-3-Methyl Imidazolium Tetra-Fluoroborate Electrolyte on Cell Performance

A modified steady-state numerical model is developed for the electrochemical reduction of CO2 to formic acid. The numerical model achieves a CD (current density) of ~60 mA/cm2, FE (faradaic efficiency) of ~98% and conversion of ~80% for CO2 electro-reduction to formic acid in a microfluidic cell. The model integrates charge and species transport, mass conservation, and momentum with electrochemistry. Specifically, the influences of a Bi-Sn based nanoparticle catalyst (on the cathode surface) at different mole fractions and of the 1-ethyl-3-methyl imidazolium tetra-fluoroborate ([EMIM][BF4]) electrolyte on CD, FE and CO2 conversion to formic acid are studied. The reaction is carried out at a constant electrolyte concentration (85% v/v [EMIM][BF4]). Based on the mass transfer characteristics analysis (concentration contours), the 0.5:0.5 Bi-Sn mole ratio catalyst displays the highest CO2 mole consumption in the cathode gas channel. After validation with experimental data (polarisation curves) from the literature, extensive simulations reveal the performance measures: CD, FE and CO2 conversion. Increasing the negative cathode potential increases the current densities for both formic acid and H2 formation. However, H2 formation is minimal as a result of insufficient hydrogen ions in the ionic liquid electrolyte. Moreover, the limited hydrogen ions have a negative effect on the formic acid CD. As the CO2 flow rate increases, CD, FE and CO2 conversion increase.
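For reference, the reported FE follows the usual definition for a two-electron reduction product; in generic notation (not necessarily that of the model),

    \mathrm{FE}_{HCOOH} = \frac{z\,F\,\dot{n}_{HCOOH}}{I_{tot}}, \qquad z = 2

where F is Faraday's constant, \dot{n}_{HCOOH} is the molar production rate of formic acid and I_tot is the total cell current (current density times active electrode area). The balance of the current drives the competing hydrogen evolution reaction, which the scarcity of hydrogen ions in [EMIM][BF4] keeps small.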

Predictive Analysis for Big Data: Extension of Classification and Regression Trees Algorithm

Since its inception, predictive analysis has revolutionized the IT industry through its robustness and decision-making facilities. It involves the application of a set of data processing techniques and algorithms in order to create predictive models. Its principle is based on finding relationships between explanatory variables and the predicted variables. Past occurrences are exploited to predict and derive the unknown outcome. With the advent of big data, many studies have suggested the use of predictive analytics to process and analyze big data. Nevertheless, they have been curbed by the limits of classical methods of predictive analysis in the case of large amounts of data. In fact, because of their volume, their nature (semi-structured or unstructured) and their variety, it is impossible to analyze big data efficiently via classical methods of predictive analysis. The authors attribute this weakness to the fact that predictive analysis algorithms do not allow the parallelization and distribution of computation. In this paper, we propose to extend the predictive analysis algorithm Classification And Regression Trees (CART) in order to adapt it for big data analysis. The major changes to this algorithm are presented and then a version of the extended algorithm is defined in order to make it applicable to huge quantities of data.
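To make the computational bottleneck concrete, the sketch below shows the classical CART split search using the Gini impurity; the exhaustive loop over features and thresholds is the part whose parallelization and distribution a big-data extension targets. The toy data and the distribution mechanism are illustrative assumptions, not the paper's implementation.

    from collections import Counter

    def gini(labels):
        """Gini impurity of a set of class labels: 1 - sum_k p_k^2."""
        n = len(labels)
        return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

    def best_split(rows, labels, feature_indices):
        """Exhaustive CART split search: for every feature and threshold,
        minimise the weighted Gini impurity of the two child nodes.
        This double loop is embarrassingly parallel and is the kind of
        work a distributed extension spreads across data partitions."""
        n = len(rows)
        best = None  # (weighted_impurity, feature, threshold)
        for f in feature_indices:
            for threshold in sorted({r[f] for r in rows}):
                left = [y for r, y in zip(rows, labels) if r[f] <= threshold]
                right = [y for r, y in zip(rows, labels) if r[f] > threshold]
                if not left or not right:
                    continue
                score = (len(left) * gini(left) + len(right) * gini(right)) / n
                if best is None or score < best[0]:
                    best = (score, f, threshold)
        return best

    # Hypothetical toy data: two numeric features, binary class.
    rows = [(2.0, 1.0), (3.0, 2.5), (6.0, 1.5), (7.5, 3.0)]
    labels = ["a", "a", "b", "b"]
    print(best_split(rows, labels, [0, 1]))  # splits cleanly on feature 0

Growing the tree then recurses on the two child partitions, which is where a distributed implementation must also coordinate data movement between nodes.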

Transfer of Information Heritage between Algerian Veterinarians and Breeders: Assessment of Information and Communication Technology Using Mobile Phone

Our research shows that the use of the mobile phone consolidates the relationships among veterinarians, as well as those between breeders and veterinarians. It also asserts that the tool in question is a means of economic development. The results of our survey reveal a positive return for the veterinary community, showing that the mobile phone has become an effective means of sustainable development through the rapid and timely transfer of information heritage via social networks, including many Internet applications. Our results show that almost all veterinarians use the mobile phone for interprofessional communication. We therefore believe that the use of the mobile phone by livestock operators has greatly improved working conditions, and that the use of this tool contributes to better management of the holding because it limits travel and saves time. These results show that we are witnessing a growth in the use of mobile telephony technologies whose impact extends to sustainable development. By allowing access to information, especially technical information, the mobile phone, and Information and Communication Technology (ICT) in general, gives livestock sector players not only security, by limiting losses, but also an efficiency that allows them better production and productivity.

Sustainability of Healthcare Insurance in India: A Review of Health Insurance Scheme Launched by States in India

This paper presents an overview of the accessibility, design, and functioning of health insurance plans launched by state governments in India. In recent years, the governments of several states in India have come forward to provide health insurance coverage for the low-income group and the rural population in order to reduce out-of-pocket expenditure (OPE) on healthcare. Different health insurance schemes have different structures and offerings, which vary across demographic factors. This study presents a comparative analysis of the various health insurance schemes by analyzing their offerings and the way the schemes are financed. The comparative analysis explains the lessons to be learned from these schemes and extends the existing knowledge of health insurance in India. This would help in recognizing tensions between various drivers and in identifying issues pertaining to the sustainability of health insurance schemes in India.

Influence of Power Flow Controller on Energy Transaction Charges in Restructured Power System

The demand for power supply increases day by day in developing countries like India; hence the demand for reactive power support in the form of ancillary services has also increased. Multi-line and multi-type Flexible Alternating Current Transmission System (FACTS) controllers play a vital role in regulating power flow through transmission lines. The unified power flow controller and the interline power flow controller can be utilized to control reactive power flow through a transmission line. In a restructured power system, the demand for such controllers is growing due to their inherent capability. Transmission pricing using reactive power cost allocation through a modified matrix methodology is proposed. FACTS technologies have quite costly assemblies, so it is very useful to apportion the expenses throughout the restructured electricity industry. Therefore, in this work, after embedding the FACTS devices into the load flow, the impact on the costs allocated to users in proportion to their utilization of the transmission framework is analyzed. From the obtained results, it is clear that the total cost recovery for reactive power flow through the different transmission lines is enhanced for the 5-bus test system. A fair pricing policy for reactive power can be achieved by the proposed method, incorporating FACTS controllers in the cost recovery of the transmission network.

Estimation of Tensile Strength for Granitic Rocks by Using Discrete Element Approach

Tensile strength, an important rock parameter for engineering applications, is difficult to measure directly through physical experiment (i.e. the uniaxial tensile test). Therefore, indirect experimental methods such as the Brazilian test have been taken into consideration, and several relations have been proposed in order to obtain the tensile strength of rocks indirectly. In this research, the Particle Flow Code in Three Dimensions (PFC3D) software was used to calculate the tensile strength of granitic rocks numerically. First, uniaxial compression tests were simulated and the tensile strength was determined for Inada granite (from a quarry in Kasama, Ibaraki, Japan). Then, by simulating the Brazilian test condition for Inada granite, the tensile strength was indirectly calculated again. Results show that the numerically calculated tensile strength agrees well with the experimental results obtained from uniaxial tensile tests on Inada granite samples.
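For reference, the indirect tensile strength inferred from a Brazilian test, in both the laboratory and the PFC3D simulation, is conventionally computed as

    \sigma_t = \frac{2P}{\pi D t}

where P is the peak load at failure and D and t are the diameter and thickness of the disc specimen. Comparing this indirect value with the strength obtained from direct tension is exactly the check reported above for Inada granite.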

Clinical Utility of Salivary Cytokines for Children with Attention Deficit Hyperactivity Disorder

The goal of this study was to examine the potential of salivary cytokines for the screening of attention deficit hyperactivity disorder (ADHD) in children. We carried out a case-control study including 19 children with ADHD and 17 healthy children (controls). A multiplex bead array immunoassay was used to conduct a multi-analysis of 27 different salivary cytokines. Six salivary cytokines (interleukin (IL)-1β, IL-8, IL-12p70, granulocyte colony-stimulating factor (G-CSF), interferon gamma (IFN-γ), and vascular endothelial growth factor (VEGF)) were significantly associated with the presence of ADHD (p < 0.05). An informative salivary cytokine panel was developed using VEGF by logistic regression analysis (odds ratio: 0.251). Receiver operating characteristic analysis revealed that a panel using VEGF showed “good” capability for discriminating between ADHD patients and controls (area under the curve: 0.778). ADHD has been hypothesized to be associated with reduced cerebral blood flow in the frontal cortex, possibly due to reduced VEGF levels. Our study highlights the possibility of utilizing differential salivary cytokine levels for point-of-care testing (POCT) of biomarkers in children with ADHD.
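As an illustration of the statistical pipeline described above (a single-marker logistic regression panel followed by ROC analysis), the sketch below uses synthetic placeholder values rather than study data; scikit-learn is assumed only for convenience.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    # Hypothetical VEGF concentrations (arbitrary units), not the study's measurements.
    vegf_controls = rng.normal(loc=120.0, scale=30.0, size=17)
    vegf_adhd = rng.normal(loc=90.0, scale=30.0, size=19)

    X = np.concatenate([vegf_controls, vegf_adhd]).reshape(-1, 1)
    y = np.concatenate([np.zeros(17), np.ones(19)])  # 1 = ADHD

    model = LogisticRegression().fit(X, y)
    odds_ratio = float(np.exp(model.coef_[0, 0]))     # odds ratio per unit VEGF
    auc = roc_auc_score(y, model.predict_proba(X)[:, 1])
    print(f"odds ratio per unit VEGF: {odds_ratio:.3f}, AUC: {auc:.3f}")

An odds ratio below 1, as reported (0.251), indicates that lower VEGF is associated with higher odds of ADHD, consistent with the reduced cerebral blood flow hypothesis mentioned above.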

How Children Synchronize with Their Teacher: Evidence from a Real-World Elementary School Classroom

This paper reports on how synchrony occurs between children and their teacher, and what prevents or facilitates synchrony. The aim of the experiment conducted in this study was to precisely analyze their movements and synchrony and to reveal the process of synchrony in a real-world classroom. Specifically, the experiment was conducted for around 20 minutes during an English as a foreign language (EFL) lesson. The participants were 11 fourth-grade school children and their classroom teacher in a public elementary school in Japan. Previous researchers assert that synchrony causes a state of flow in a class. To check the level of flow, the Short Flow State Scale (SFSS) was adopted. The experimental procedure had four steps: 1) the teacher read aloud the first half of an English storybook to the children, with both the teacher and the children at their own desks; 2) the children completed an SFSS check; 3) the teacher read aloud the remaining half of the storybook to the children, after having the children remove their desks; 4) the children completed an SFSS check again. The movements of all participants were recorded with a video camera. From the movement analysis, it was found that the children synchronized better with the teacher in Step 3 than in Step 1, and that the teacher's movement became freer and more noticeable without a desk. This implies that the desk acted as a barrier between the children and the teacher, and that removal of this barrier resulted in the children's reactions becoming synchronized with those of the teacher. The SFSS results showed that the children experienced more flow without a barrier than with one. Apparently, synchrony is what caused flow or social emotions in the classroom. The main conclusion is that synchrony leads to cognitive outcomes such as children's academic performance in EFL learning.
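For readers interested in how such synchrony can be quantified, one common approach is cross-correlation of motion-energy time series extracted from the video of each participant; the sketch below is illustrative and is not necessarily the analysis used in this study.

    import numpy as np

    def synchrony(teacher, child, max_lag=30):
        """Peak normalised cross-correlation between two motion-energy
        series within +/- max_lag frames (higher = more synchronous)."""
        t = (teacher - teacher.mean()) / teacher.std()
        c = (child - child.mean()) / child.std()
        best = -1.0
        for lag in range(-max_lag, max_lag + 1):
            if lag >= 0:
                a, b = t[lag:], c[:len(c) - lag]
            else:
                a, b = t[:len(t) + lag], c[-lag:]
            if len(a) > 1:
                best = max(best, float(np.corrcoef(a, b)[0, 1]))
        return best

    # Hypothetical motion-energy traces, one value per video frame.
    rng = np.random.default_rng(1)
    teacher = rng.random(600)
    child = np.roll(teacher, 10) + 0.3 * rng.random(600)  # child trails by ~10 frames
    print(synchrony(teacher, child))

Comparing such a score between Step 1 (with desks) and Step 3 (without desks) for each child is one way the observed increase in synchrony could be expressed numerically.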