Abstract: Uncertainty quantification of nuclear safety margins is an important concept for preventing future radioactive accidents. Nuclear fuel performance codes may rely on tolerance levels determined by traditional deterministic models, which produce acceptable results for burnup cycles below 62 GWd/MTU. The behavior of nuclear fuel can be simulated by applying a series of material properties under irradiation together with physics models to calculate the safety limits. In this study, theoretical predictions of nuclear fuel failure under transient conditions are investigated for extended irradiation cycles up to 75 GWd/MTU, considering the behavior of fuel rods in light-water reactors under reactivity accident conditions. The fuel pellet can melt due to the rapid increase of reactivity during a transient. Large power excursions in the reactor are the subject of interest, leading to a treatment known as the Fuchs-Hansen model. The point kinetics neutron equations exhibit the characteristics of non-linear differential equations. In this investigation, multivariate logistic regression is employed for probabilistic forecasting of fuel failure. The agreement between computational simulation and experimental results was acceptable. The experiments carried out use pre-irradiated fuel rods subjected to a rapid energy pulse, which reproduces the behavior expected during a nuclear accident. The propagation of uncertainty utilizes Wilks' formulation. The variables chosen as essential to failure prediction were the fuel burnup, the applied peak power, the pulse width, the oxide layer thickness, and the cladding type.
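The Wilks formulation mentioned above fixes the number of code runs needed for a non-parametric tolerance limit. As a minimal illustration (not taken from the paper), the sketch below computes the smallest sample size for a first-order, one-sided Wilks limit: the maximum of n independent runs bounds the true `coverage` quantile with probability 1 - coverage**n, so we take the smallest n with 1 - coverage**n >= confidence.

```python
# Minimal illustration (not from the paper): sample size for a first-order,
# one-sided Wilks tolerance limit. The maximum of n independent code runs
# bounds the true `coverage` quantile with probability 1 - coverage**n.
def wilks_sample_size(coverage: float = 0.95, confidence: float = 0.95) -> int:
    """Smallest n with 1 - coverage**n >= confidence."""
    n = 1
    while 1.0 - coverage ** n < confidence:
        n += 1
    return n

print(wilks_sample_size())            # the classical 95%/95% criterion: 59 runs
print(wilks_sample_size(0.95, 0.99))  # a tighter 95%/99% criterion: 90 runs
```

This reproduces the well-known result that 59 runs suffice for a 95%/95% one-sided limit.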
Abstract: The spoke type rotor can be used to obtain magnetic
flux concentration in permanent magnet machines. This allows the
air gap magnetic flux density to exceed the remanent flux density
of the permanent magnets but gives problems with leakage fluxes
in the magnetic circuit. The end leakage flux of one spoke type
permanent magnet rotor design is studied through measurements and
finite element simulations. The measurements are performed in the
end regions of a 12 kW prototype generator for a vertical axis
wind turbine. The simulations are made using three dimensional
finite elements to calculate the magnetic field distribution in the
end regions of the machine. Two dimensional finite element
simulations are also performed, and the impact of the two dimensional
approximation is studied. It is found that the magnetic leakage flux
in the end regions of the machine is equal to about 20% of the flux
in the permanent magnets. The overestimation of the performance by
the two dimensional approximation is quantified and a curve-fitted
expression for its behavior is suggested.
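The curve-fitted expression mentioned above could, for example, be obtained by least squares. The sketch below is purely hypothetical (both the data values and the functional form e(r) ≈ a/r + b, with r some geometric ratio of the end region, are assumptions, not the paper's fit); it reduces the fit to a closed-form linear regression on x = 1/r.

```python
# Hypothetical sketch (values and functional form are assumptions, not the
# paper's data or fit): least-squares fit of the 2D overestimation as
# e(r) ≈ a/r + b, via closed-form linear regression on x = 1/r.
def fit_line(xs, ys):
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)  # slope
    b = (sy - a * sx) / n                          # intercept
    return a, b

r = [0.5, 1.0, 1.5, 2.0, 2.5]        # made-up geometric ratios
e = [0.35, 0.22, 0.15, 0.11, 0.09]   # made-up 2D overestimation values
a, b = fit_line([1.0 / ri for ri in r], e)
print(round(a, 3), round(b, 3))
```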
Abstract: Robotic rovers which are designed to work in
extra-terrestrial environments present a unique challenge in terms
of the reliability and availability of systems throughout the mission.
Should some fault occur, with the nearest human potentially millions
of kilometres away, detection and identification of the fault must
be performed solely by the robot and its subsystems. Faults in
the system sensors are relatively straightforward to detect, through
the residuals produced by comparison of the system output with
that of a simple model. However, faults in the input, that is, the
actuators of the system, are harder to detect. A step change in
the input signal, caused potentially by the loss of an actuator,
can propagate through the system, resulting in complex residuals
in multiple outputs. These residuals can be difficult to isolate or
distinguish from residuals caused by environmental disturbances.
While a more complex fault detection method or additional sensors
could be used to solve these issues, an alternative is presented here.
Using inverse simulation (InvSim), the inputs and outputs of the
mathematical model of the rover system are reversed. Thus, for a
desired trajectory, the corresponding actuator inputs are obtained.
A step fault near the input then manifests itself as a step change
in the residual between the system inputs and the input trajectory
obtained through inverse simulation. This approach avoids the need
for additional hardware on a mass- and power-critical system such
as the rover. The InvSim fault detection method is applied to a
simple four-wheeled rover in simulation. Additive system faults and
an external disturbance force are applied to the vehicle in turn,
such that the dynamic response and sensor output of the rover
are impacted. Basic model-based fault detection is then employed
to provide output residuals which may be analysed to provide
information on the fault/disturbance. InvSim-based fault detection
is then employed, similarly providing input residuals which provide
further information on the fault/disturbance. The input residuals are
shown to provide clearer information on the location and magnitude
of an input fault than the output residuals. Additionally, they can
allow faults to be more clearly discriminated from environmental
disturbances.
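The core InvSim idea can be sketched on a much simpler system than the rover. In the assumed first-order model below (an illustration, not the paper's rover dynamics), running the model in reverse on the measured output recovers the input that produced it, so an actuator step fault appears directly as a step in the input residual.

```python
# Minimal sketch of the InvSim idea on an assumed first-order system (not the
# rover model from the paper): y[k+1] = a*y[k] + b*u[k]. Inverting the model
# on the measured output recovers the applied input, so an actuator step
# fault shows up as a step in the input residual u_cmd - u_inv.
a, b = 0.9, 0.5
u_cmd = [1.0] * 20                        # commanded input trajectory
fault = [0.0] * 10 + [-0.4] * 10          # actuator step fault from k = 10
u_applied = [u + f for u, f in zip(u_cmd, fault)]

y = [0.0]                                 # forward simulation of the "real" system
for u in u_applied:
    y.append(a * y[-1] + b * u)

# inverse simulation: the input that explains the measured output
u_inv = [(y[k + 1] - a * y[k]) / b for k in range(len(u_applied))]
residual = [uc - ui for uc, ui in zip(u_cmd, u_inv)]
print(residual[0], round(residual[-1], 6))  # ~0 before the fault, ~0.4 after
```

The residual is near zero before the fault and jumps to the fault magnitude after it, which is the clearer localisation the abstract describes.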
Abstract: A grid of computing nodes has emerged as a
representative means of connecting distributed computers or
resources scattered all over the world for the purposes of computing
and distributed storage. Since fault tolerance becomes complex due
to the variable availability of resources in a decentralized grid
environment, it can be combined with replication in data grids. The
objective of our work is to present fault tolerance in data grids
through a data replication-driven model based on clustering. The
performance of the protocol is evaluated with the OMNeT++ simulator.
The computational results show the efficiency of our protocol in
terms of recovery time and the number of processes in rollbacks.
Abstract: Feeder protection is important on the transmission and distribution side because, if any fault occurs in a feeder or transformer, manpower is needed to identify the problem and this takes considerable time. In the existing system, directional overcurrent elements further secured by a load encroachment function can be used to provide the necessary security and sensitivity for faults at remote points in a circuit; this has been validated only in renewable plant collector circuit protection applications over a wide range of operating conditions. In this method, directional overcurrent feeder protection is developed by monitoring the feeder section through the internet. In this web-based monitoring, faults and power theft are identified using a toroidal sensor, whose information is received by SCADA (Supervisory Control and Data Acquisition) and controlled by an ARM microcontroller. The web-based monitoring is also used for feeder management, directional current detection, demand side management, and overload faults. The monitoring system is capable of covering the distribution feeder over a large area, depending on the cost. It also reduces power theft, time, and manpower. The simulation is done in MATLAB.
Abstract: Implementation of LARG (Lean, Agile, Resilient, Green) practices in supply chain management is a complex task, mainly because ecological, economic, and operational goals are usually in conflict. To implement these LARG practices successfully, companies need relevant decision-making tools that allow process performance control and visibility of improvement strategies. To contribute to this issue, this work addresses the following research question: how can performance be mastered and problems anticipated when implementing LARG practices in a supply chain? To answer this question, a risk management approach (RMA) is adopted. The proposed RMA basically aims to assess the ability of a supply chain, guided by “Lean, Green and Achievement” performance goals, to face “agility and resilience risk” factors. To prove its relevance, a logistics academic case study based on simulation is used to illustrate all of its stages. It shows, in particular, how to build the “LARG risk map”, which is the main output of this approach.
Abstract: This paper presents the modeling and control of a highly nonlinear system consisting of two non-interacting spherical tanks using iterative learning control (ILC). The objective of the paper is to control the liquid levels in the nonlinear tanks. First, a proportional-integral-derivative (PID) controller is applied to the plant model as a suitable benchmark for comparison. Then, the dynamic responses of the control system to different step inputs are investigated. It is found that conventional PID control is not able to fulfill design criteria such as the desired time constant. Consequently, an iterative learning controller is proposed to accurately control the coupled nonlinear tank system. The simulation results clearly demonstrate the superiority of the presented ILC approach over the conventional PID controller in coping with the nonlinearities present in the dynamic system.
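The mechanism behind ILC is that the same task is repeated and the input is corrected from the previous trial's tracking error. The sketch below is a hedged illustration on an assumed discrete first-order plant (not the paper's coupled-tank model) using the simplest P-type update.

```python
# Hedged sketch (not the paper's coupled-tank model): P-type iterative
# learning control on an assumed discrete first-order plant
#   x[k+1] = 0.8*x[k] + 0.2*u[k],
# with the trial-to-trial update u_{j+1}[k] = u_j[k] + L*e_j[k+1].
N, L = 5, 2.0                # horizon and learning gain (assumed values)
ref = [1.0] * (N + 1)        # desired level: unit step

def run_plant(u):
    x = [0.0]
    for k in range(N):
        x.append(0.8 * x[-1] + 0.2 * u[k])
    return x

u = [0.0] * N
errors = []
for trial in range(40):      # repeat the same task, learning between trials
    x = run_plant(u)
    e = [r - xi for r, xi in zip(ref, x)]
    errors.append(max(abs(ei) for ei in e[1:]))
    u = [u[k] + L * e[k + 1] for k in range(N)]   # P-type ILC update

print(errors[0], round(errors[-1], 6))  # tracking error shrinks over trials
```

The learning gain here satisfies the usual contraction condition |1 - L·b| < 1 for the assumed input gain b = 0.2, so the peak tracking error decays over trials.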
Abstract: Head injury in childhood is a common cause of death or permanent disability from injury. However, despite its frequency and significance, there is little understanding of how a child’s head responds during injurious loading. Whilst infant Post Mortem Human Subject (PMHS) experimentation is a logical approach to understanding injury biomechanics, it is the authors’ opinion that a lack of subject availability is hindering potential progress. Computer modelling adds great value when considering adult populations; however, its potential remains largely untapped for infant surrogates. The complexities of child growth and development, which result in age-dependent changes in anatomy, geometry and physical response characteristics, present new challenges for computational simulation. Further geometric challenges are presented by the intricate infant cranial bones, which are separated by sutures and fontanelles and demonstrate a visible fibre orientation. This study presents an FE model of a newborn infant’s head, developed from high-resolution computed tomography scans and informed by published tissue material properties. To mimic the fibre orientation of immature cranial bone, anisotropic properties were applied to the FE cranial bone model, with elastic moduli representing the bone response both parallel and perpendicular to the fibre orientation. Biofidelity of the computational model was confirmed by global validation against published PMHS data, replicating experimental impact tests with a series of computational simulations in terms of head kinematic responses. Numerical results confirm that the FE head model’s mechanical response is in favourable agreement with the PMHS drop test results.
Abstract: The teaching of physics in Brazilian public schools strongly emphasizes the theoretical aspects of this science, showing its philosophical and mathematical basis but neglecting its experimental character. Perhaps the lack of science laboratories explains this practice. In this work, we present a method of teaching physics using the computer. As alternatives to real experiments, we have trials through simulators, many of which are free software available on the internet. In order to develop a study on the use of simulators in teaching, and knowing the impossibility of simulating all topics in a given subject, we combined these programs with phenomenological and/or experimental texts in order to mitigate this limitation. This study proposes the use of simulators and debate based on phenomenological/experimental texts on the electrostatics theme in groups of the 3rd year of EJA (Adult and Youth Education) in order to verify the advantages of this methodology. Some benefits of hybridizing the traditional method with these tools were: greater motivation of the students in learning, development of experimental notions, proactive socialization in learning, greater ease in understanding some concepts, and the creation of collaborative activities that can reduce the timidity of some of the students.
Abstract: Subjective time perception implies a connection to cognitive functions, attention, memory and awareness, but little is known about its connections with homeostatic states of the body coordinated by the circadian clock. In this paper, we present results from an experimental study of subjective time perception in volunteers performing physical activity on a treadmill in various phases of their circadian rhythms. Subjects were exposed to several time illusions simulated by programmed timing systems. This study brings better understanding for further improvement of work quality in isolated areas.
Abstract: True Boiling Point (TBP) distillation is one of the most common experimental techniques for the determination of petroleum properties. The TBP curve provides information about the behavior of petroleum in terms of its cuts. The experiment takes several days to perform, so faster techniques are used to determine the properties with software that calculates the distillation curve when only limited information about the crude oil is known. In order to evaluate the accuracy of distillation curve prediction, eight points of the TBP curve and the specific gravity curve (348 K and 523 K) were inserted into HYSYS Oil Manager, and the extended curve was evaluated up to 748 K. The methods were able to predict the curve with errors of 0.6%-9.2% (Software X ASTM) and 0.2%-5.1% (Software X Spaltrohr).
Abstract: In this paper, the design of integrated sleep scheduling for relay nodes and user equipments under a Donor eNB (DeNB) in Time Division Duplex (TDD) mode in LTE-A is presented. The idea of virtual time is proposed to deal with the discontinuous pattern of the available radio resources in TDD, and based on the estimation of the traffic load, three power saving schemes following a top-down strategy are presented. The associated mechanisms in each scheme, including the calculation of the virtual subframe capacity, the algorithm for integrated sleep scheduling, and the mapping mechanisms for the backhaul link and the access link, are presented in the paper. A simulation study shows the advantage of the proposed schemes in energy saving over the standard DRX scheme.
Abstract: Trackside induced airflow velocities, also known as
slipstream velocities, are an important criterion for the design of
high-speed trains. The maximum permitted values are given by the
Technical Specifications for Interoperability (TSI) and have to be
checked in the approval process. For train manufacturers it is of great
interest to know in advance how new train geometries would perform
in TSI tests. The Reynolds number in moving model experiments is
lower than at full scale. In particular, the limited model length
leads to a thinner boundary layer at the rear end. The hypothesis is
that the boundary layer rolls up to characteristic flow structures in the
train wake, in which the maximum flow velocities can be observed.
The idea is to enlarge the boundary layer using roughness elements
at the train model head so that the ratio between the boundary
layer thickness and the car width at the rear end is comparable to a
full-scale train. This may lead to similar flow structures in the wake
and better prediction accuracy for TSI tests. In this case, the design
of the roughness elements is limited by the moving model rig. Small
rectangular roughness shapes are used to get a sufficient effect on the
boundary layer, while the elements are robust enough to withstand
the high accelerating and decelerating forces during the test runs. For
this investigation, High-Speed Particle Image Velocimetry (HS-PIV)
measurements on an ICE3 train model have been realized in the
moving model rig of the DLR in Göttingen, the so called tunnel
simulation facility Göttingen (TSG). The flow velocities within the
boundary layer are analysed in a plane parallel to the ground. The
height of the plane corresponds to a test position in the EN standard
(TSI). Three different shapes of roughness elements are tested. The
boundary layer thickness and displacement thickness as well as the
momentum thickness and the form factor are calculated along the
train model. Conditional sampling is used to analyse the size and
dynamics of the flow structures at the time of maximum velocity
in the train wake behind the train. As expected, larger roughness
elements increase the boundary layer thickness and lead to larger
flow velocities in the boundary layer and in the wake flow structures.
The boundary layer thickness, displacement thickness and momentum
thickness are increased by using larger roughness especially when
applied in the height close to the measuring plane. The roughness
elements also cause high fluctuations in the form factors of the
boundary layer. Behind the roughness elements, the form factors
rapidly approach constant values. This indicates that
the boundary layer, while growing slowly along the second half of
the train model, has reached a state of equilibrium.
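The integral parameters named above have standard definitions that can be evaluated numerically. The sketch below uses an assumed 1/7-power-law velocity profile as a stand-in (not the measured ICE3 data) and recovers the textbook values for displacement thickness, momentum thickness and form factor.

```python
# Illustrative sketch with an assumed 1/7-power-law profile (not the measured
# ICE3 data): integral boundary layer parameters from u(y)/U = (y/delta)**(1/7):
#   delta* = ∫ (1 - u/U) dy,  theta = ∫ (u/U)(1 - u/U) dy,  H = delta*/theta.
delta = 1.0
n = 20000
ys = [delta * i / n for i in range(n + 1)]
uU = [(y / delta) ** (1.0 / 7.0) for y in ys]

def trapz(vals, ys):
    """Trapezoidal integration of sampled values over the grid ys."""
    return sum((vals[i] + vals[i + 1]) * (ys[i + 1] - ys[i]) / 2.0
               for i in range(len(ys) - 1))

delta_star = trapz([1.0 - u for u in uU], ys)   # displacement thickness
theta = trapz([u * (1.0 - u) for u in uU], ys)  # momentum thickness
H = delta_star / theta                          # form factor
print(round(delta_star, 4), round(theta, 4), round(H, 3))  # ≈ 0.125 0.0972 1.286
```

For this profile the analytic values are δ*/δ = 1/8, θ/δ = 7/72 and H = 9/7 ≈ 1.286, a typical equilibrium turbulent form factor consistent with the constant values described above.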
Abstract: This work contributes a statistical model and simulation
framework yielding the best estimate possible for the potential
herbicide reduction when using the MoDiCoVi algorithm all the
while requiring an efficacy comparable to that of conventional spraying. In
June 2013, a maize field located in Denmark was seeded. The field
was divided into parcels, which were assigned to one of two main
groups: 1) Control, consisting of subgroups of no spray and full dose
spray; 2) the MoDiCoVi algorithm, subdivided into five different leaf
cover thresholds for spray activation. In addition, approximately 25%
of the parcels were seeded with additional weeds perpendicular to
the maize rows. In total, 299 parcels were randomly assigned to
the 28 different treatment combinations. In the statistical analysis,
bootstrapping was used for balancing the number of replicates. The
achieved potential herbicide savings were found to be 70% to 95%,
depending on the initial weed coverage. However, additional field
trials covering more seasons and locations are needed to verify
the generalisation of these results. There is a potential for further
herbicide savings as the time interval between the first and second
spraying session was not long enough for the weeds to turn yellow;
instead, they only stagnated in growth.
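Bootstrapping for balancing replicates, as used in the statistical analysis above, can be sketched in a few lines. All numbers below are made up for illustration (not the trial data): both treatment groups are resampled with replacement to a common replicate count before their mean savings are compared.

```python
import random

# Hedged sketch with made-up numbers (not the trial data): bootstrapping to
# balance unequal replicate counts between two treatment groups before
# comparing mean herbicide savings.
random.seed(1)
group_a = [0.72, 0.70, 0.75, 0.68]                     # few replicates
group_b = [0.88, 0.91, 0.93, 0.90, 0.95, 0.89, 0.92]   # more replicates

def bootstrap_means(sample, n_resamples=2000, size=None):
    """Means of resamples drawn with replacement, at a common resample size."""
    size = size if size is not None else len(sample)
    return [sum(random.choices(sample, k=size)) / size for _ in range(n_resamples)]

means_a = bootstrap_means(group_a, size=7)   # both groups resampled to 7
means_b = bootstrap_means(group_b, size=7)   # replicates per draw
diff = sum(means_b) / len(means_b) - sum(means_a) / len(means_a)
print(round(diff, 3))  # estimated difference in mean savings
```

The spread of `means_b - means_a` across resamples would also give a confidence interval for the treatment effect.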
Abstract: Surrogate models have received increasing attention for use in detecting damage to structures based on vibration modal parameters. However, uncertainties in the measured vibration data may lead to false or unreliable output from such models. In this study, an efficient approach based on Monte Carlo simulation is proposed to take the effect of uncertainties into account when developing a surrogate model. The probability of damage existence (PDE) is calculated based on the probability density functions of the undamaged and damaged states. The kriging technique allows one to genuinely quantify the surrogate error, and it is therefore chosen as the metamodeling technique. An enhanced version of the ideal gas molecular movement (EIGMM) algorithm is used as the main algorithm for model updating. The developed approach is applied to detect simulated damage in numerical models of a 72-bar space truss and a 120-bar dome truss. The simulation results show that the proposed method performs well in probability-based damage detection of structures with less computational effort compared to a direct finite element model.
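The probability of damage existence can be illustrated with a small numeric sketch. Everything below is assumed for illustration (the Gaussian densities, means and standard deviations are not the paper's surrogate output; in the paper's setting the densities would be estimated from Monte Carlo samples of the surrogate): the PDE for a measured modal feature x is the damaged-state density's share of the total.

```python
import math

# Hedged illustration (all numbers assumed): probability of damage existence
# for a measured feature x, given Gaussian densities of the undamaged and
# damaged states: PDE(x) = p_damaged(x) / (p_damaged(x) + p_undamaged(x)).
def gauss_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

MU_U, SD_U = 10.0, 0.3   # undamaged natural frequency statistics (assumed)
MU_D, SD_D = 9.2, 0.4    # damaged-state statistics (assumed)

def pde(x):
    p_d = gauss_pdf(x, MU_D, SD_D)
    p_u = gauss_pdf(x, MU_U, SD_U)
    return p_d / (p_d + p_u)

print(round(pde(9.0), 3), round(pde(10.1), 3))  # high PDE near the damaged mean
```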
Abstract: This paper describes the development of a model of an impaired human arm performing a reaching motion, which will be used to predict hand path trajectories for people with reduced arm joint mobility. Assuming that the arm was in contact with a surface during the entire movement, the contact conditions at the initial and final task locations were determined and used to generate the entire trajectory. The model was validated by comparing it to experimental data, which simulated an arm joint impairment by physically constraining the joint motion with a brace. Future research will include using the model in the development of physical training protocols that avoid early recruitment of “healthy” Degrees-Of-Freedom (DOF) for reaching motions, thus facilitating an Active Range-Of-Motion Recovery (AROM) for a particular impaired joint.
Abstract: In response to a changing world and the fast growth of the Internet, more and more enterprises are replacing web-based services with cloud-based ones. Multi-tenancy technology is becoming more important, especially with Software as a Service (SaaS). This in turn leads to a greater focus on the application of Identity and Access Management (IAM). Conventional Near-Field Communication (NFC) based verification relies on a computer browser and a card reader to access an NFC tag. This type of verification does not support mobile device login or user-based access management functions. This study designs an NFC-based third-party cloud identity and access management scheme (NFC-IAM) that addresses this shortcoming. Data from simulation tests analyzed with Key Performance Indicators (KPIs) suggest that the NFC-IAM not only takes less time for identity verification but also cuts the time for two-factor authentication by 80% and improves verification accuracy to 99.9% or better. In functional performance analyses, the NFC-IAM performed better in scalability and portability. The NFC-IAM App (Application Software) and back-end system to be developed and deployed on mobile devices will support IAM features and also offer users a more user-friendly experience and stronger security protection. In the future, our NFC-IAM can be employed in different environments, including identification for mobile payment systems and permission management for remote equipment monitoring, among other applications.
Abstract: In this study, a validated 3D finite volume model of the human eye is developed to study the fluid flow and heat transfer in the human eye under steady state conditions. For this purpose, the discretized bio-heat transfer equation coupled with the Boussinesq equation is analyzed under different anatomical, environmental, and physiological conditions. It is demonstrated that fluid circulation forms as a result of thermal gradients in various regions of the eye. It is also shown that the posterior region of the human eye is less affected by ambient conditions than the anterior segment, which is sensitive both to the ambient conditions and to how the gravitational field is oriented relative to the geometry of the eye, making the circulation and the thermal field complicated in transient states. The effect of variations in material and boundary conditions leads to the conclusion that the thermal fields of a healthy and a non-healthy eye can be distinguished via computer simulations.
Abstract: One of the main characteristics of Heavy Water Moderated Reactors is their high production of plutonium. This article demonstrates the possibility of reducing plutonium and other actinides in a Heavy Water Research Reactor. Among the many ways of reducing plutonium production in a heavy water reactor, this research focuses on changing the fuel from natural Uranium to mixed Thorium-Uranium fuel. The main fissile nucleus in Thorium-Uranium fuels is U-233, which is produced after neutron absorption by Th-232, so Thorium-Uranium fuels have some known advantages over Uranium fuels. Accordingly, four Thorium-Uranium fuels with different composition ratios were chosen in our simulations: a) 10% UO2-90% ThO2 (enrichment 20%); b) 15% UO2-85% ThO2 (enrichment 10%); c) 30% UO2-70% ThO2 (enrichment 5%); d) 35% UO2-65% ThO2 (enrichment 3.7%). Natural Uranium Oxide (UO2) is considered the reference fuel; in other words, all of the calculated data are compared with the corresponding data for Uranium fuel. Neutronic parameters were calculated and used as the comparison parameters. All calculations were performed with a Monte Carlo (MCNPX2.6) steady state reaction rate calculation linked to a deterministic depletion calculation (CINDER90). The computational data obtained showed that Thorium-Uranium fuels with four different fissile composition ratios can satisfy the safety and operating requirements of a Heavy Water Research Reactor. Furthermore, Thorium-Uranium fuels have very good proliferation resistance and consume less fissile material than Uranium fuels over the same reactor operation time. Using mixed Thorium-Uranium fuels reduced the long-lived α-emitting, highly radiotoxic wastes and the radiotoxicity level of the spent fuel.
Abstract: This paper addresses the shortcomings of architectural computation tools in representing human behavior in built environments, prior to construction and occupancy of those environments. Evaluating whether a design fits the needs of its future users is currently done solely post construction, or is based on the knowledge and intuition of the designer. This issue is of high importance when designing complex buildings such as hospitals, where the quality of treatment as well as patient and staff satisfaction are of major concern. Existing computational pre-occupancy human behavior evaluation methods are geared mainly to test ergonomic issues, such as wheelchair accessibility, emergency egress, etc. As such, they rely on Agent Based Modeling (ABM) techniques, which emphasize the individual user. Yet we know that most human activities are social, and involve a number of actors working together, which ABM methods cannot handle. Therefore, we present an event-based model that manages the interaction between multiple Actors, Spaces, and Activities, to describe dynamically how people use spaces. This approach requires expanding the computational representation of Actors beyond their physical description, to include psychological, social, cultural, and other parameters. The model presented in this paper includes cognitive abilities and rules that describe the response of actors to their physical and social surroundings, based on the actors’ internal status. The model has been applied in a simulation of hospital wards, and showed adaptability to a wide variety of situated behaviors and interactions.