Hybrid Temporal Correlation Based on Gaussian Mixture Model Framework for View Synthesis

As 3D video has been explored as a hot research topic over the last few decades, free-viewpoint TV (FTV) is undoubtedly a promising field owing to its better visual experience and incomparable interactivity. View synthesis is a crucial technology for FTV: it enables images to be rendered at an unlimited number of virtual viewpoints using information from a limited number of reference views. In this paper, a novel hybrid synthesis framework is proposed and the blending priority is explored. In contrast to the commonly used View Synthesis Reference Software (VSRS), the presented synthesis process takes the temporal correlation of image sequences into account. This temporal correlation is exploited to produce fine synthesis results even near foreground boundaries. As for the blending priority, the proposed scheme selects one of the two reference views as the main reference view based on the distance between the reference views and the virtual view; the other view is chosen as the auxiliary viewpoint, which merely assists in filling hole pixels with the help of background information. Significant improvement of the proposed approach over the state-of-the-art pixel-based virtual view synthesis method is presented: the experimental results show that subjective gains can be observed, objective PSNR average gains range from 0.5 to 1.3 dB, and SSIM average gains range from 0.01 to 0.05.
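As a rough illustration of the distance-driven blending priority described above, the following Python sketch (not the authors' implementation; the inputs are hypothetical warped images with holes marked as NaN) combines a main and an auxiliary warped reference view:

```python
import numpy as np

def blend_views(warped_a, warped_b, dist_a, dist_b):
    """Distance-driven blending sketch: the reference view closer to the virtual
    viewpoint is the main view; the other only fills its holes (marked as NaN)."""
    if dist_a <= dist_b:
        main, aux = warped_a, warped_b
    else:
        main, aux = warped_b, warped_a
    synthesized = main.copy()
    holes = np.isnan(synthesized)          # disocclusions left after warping
    synthesized[holes] = aux[holes]        # auxiliary view fills holes only
    return synthesized

# Toy usage with 2x2 "images"; NaN marks a hole in the main view.
a = np.array([[0.2, np.nan], [0.4, 0.5]])
b = np.array([[0.3, 0.9], [0.4, 0.6]])
print(blend_views(a, b, dist_a=1.0, dist_b=3.0))
```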

Implementing Delivery Drones in Logistics Business Process: Case of Pharmaceutical Industry

In this paper, we will present research on the feasibility of implementing unmanned aerial vehicles, also known as 'drones', in logistics. The research is based on available information about current initiatives and experiments in the application of delivery drones for commercial use. An overview of current pilot projects and literature, as well as an overview of detected challenges, will be compiled and presented. Based on these findings, we will present a conceptual model of a business process that implements delivery drones in business-to-business logistics operations. The business scenario is based on a pharmaceutical supply chain. Simulation modeling will be used to create models for running experiments and collecting performance data. A comparative study of the presented conceptual model will be given. The work will outline the main advantages and disadvantages of implementing unmanned aerial vehicles in delivery services as a supplementary distribution channel along the supply chain.

Nuclear Fuel Safety Threshold Determined by Logistic Regression Plus Uncertainty

Analysis of the uncertainty quantification related to nuclear safety margins applied to nuclear reactors is an important concept for preventing future radioactive accidents. The nuclear fuel performance code may involve a tolerance level determined by traditional deterministic models that produce acceptable results at burn cycles under 62 GWd/MTU. The behavior of nuclear fuel can be simulated by applying a series of material properties under irradiation and physics models to calculate the safety limits. In this study, theoretical predictions of nuclear fuel failure under transient conditions are investigated for extended radiation cycles at 75 GWd/MTU, considering the behavior of fuel rods in light-water reactors under reactivity accident conditions. The fuel pellet can melt due to the rapid increase of reactivity during a transient. Large power excursions in the reactor are the subject of interest, leading to a treatment known as the Fuchs-Hansen model. The point kinetics neutron equations exhibit the characteristics of non-linear differential equations. In this investigation, multivariate logistic regression is employed for a probabilistic forecast of fuel failure. A comparison of the computational simulation with experimental results showed acceptable agreement. The experiments carried out use pre-irradiated fuel rods subjected to a rapid energy pulse, which reproduces the behavior expected during a nuclear accident. The propagation of uncertainty utilizes Wilks' formulation. The variables chosen as essential for failure prediction were the fuel burnup, the applied peak power, the pulse width, the oxidation layer thickness, and the cladding type.
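A minimal sketch of the multivariate logistic regression step, using synthetic placeholder data and the five predictor variables listed above (the actual data, fuel performance code and physics models of the study are not reproduced here):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Hypothetical feature columns: burnup [GWd/MTU], applied peak power, pulse
# width [ms], oxide layer thickness [um], cladding type (encoded 0/1).
X = np.column_stack([
    rng.uniform(20, 75, 200),    # fuel burnup
    rng.uniform(50, 200, 200),   # applied peak power
    rng.uniform(5, 80, 200),     # pulse width
    rng.uniform(5, 100, 200),    # oxidation layer thickness
    rng.integers(0, 2, 200),     # cladding type
])
# Synthetic labels for illustration only: failure more likely at high burnup/power.
y = (0.03 * X[:, 0] + 0.02 * X[:, 1] + 0.02 * X[:, 3] - 5
     + rng.normal(0, 1, 200)) > 0

clf = make_pipeline(StandardScaler(), LogisticRegression())
clf.fit(X, y.astype(int))
print(clf.predict_proba(X[:5])[:, 1])   # probabilistic forecast of fuel failure
```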

A Biomimetic Structural Form: Developing a Paradigm to Attain Vital Sustainability in Tall Architecture

This paper argues for sustainability as a necessity in the evolution of tall architecture. It provides a different mode for dealing with sustainability in tall architecture, taking into consideration the speciality of its typology. To this end, the article develops a Biomimetic Structural Form as a paradigm to attain Vital Sustainability. A Biomimetic Structural Form, which is derived from the amalgamation of biomimicry as an approach to sustainability, defining nature as a source of knowledge and inspiration in solving human problems, and a Structural Form as a catalyst for evolving tall architecture, is a dynamic paradigm emerging from a conceptualizing and morphological process. A Biomimetic Structural Form is a flow system whose different forces and functions tend to be "better", more "fit", to "survive", and to be efficient. Through geometry and function, the two aspects of knowledge extracted from nature, the attributes of the Biomimetic Structural Form are formulated. Vital Sustainability is the survival level of sustainability in natural systems, through which a system enhances the performance of its internal working and its interaction with the external environment. A Biomimetic Structural Form, in this context, is a medium for evolving tall architecture to emulate natural models in their ways of coexistence with the environment. As an integral part of this article, the sustainable super-tall building 3Ts is discussed as a case study of applying the Biomimetic Structural Form.

Comparative Quantitative Study on Learning Outcomes of Major Study Groups of an Information and Communication Technology Bachelor Educational Program

Higher education system reforms, especially the 2014 reform of the Finnish system of Universities of Applied Sciences, are discussed. The new steering model is based on major legislative changes, output-oriented funding and open information. The governmental steering reform, especially the financial model and the resulting institutional-level responses, such as curriculum reforms, are discussed, focusing especially on engineering programs. The paper is motivated by the management need to establish objective steering-related performance indicators and to apply them consistently across all educational programs. The close relationship to the governmental steering and funding model implies that internally derived indicators can be directly applied. Metropolia University of Applied Sciences (MUAS), as the case institution, is briefly introduced, focusing on engineering education in Information and Communications Technology (ICT) and its related programs. The reform forced consolidation of previously separate smaller programs into fewer units of student application. Under the new curriculum, ICT students have a common first year before they apply for a Major. A framework of parallel and longitudinal comparisons is introduced and used across Majors on two campuses. The new externally introduced performance criteria are applied internally to the ICT Majors using data from before and after the program merger. A comparative performance of the Majors after completion of the joint first year is established, focusing on previously omitted Majors for completeness of analysis. Some new research questions resulting from the transfer of Majors between campuses and quota setting are discussed. The practical orientation identifies best practices to share, or targets needing most attention for improvement. This level of analysis is directly applicable at the student group and teaching team level, where corrective actions are possible when identified. The analysis is quantitative, and the nature of the corrective actions is not discussed. Causal relationships and factor analysis are omitted, because the campuses, their staff and various pedagogical implementation details still contain too many undetermined factors for our limited data. Such qualitative analysis is left for further research. Further study must, however, be guided by the relevance of the observations.

A Comparison of Inverse Simulation-Based Fault Detection in a Simple Robotic Rover with a Traditional Model-Based Method

Robotic rovers which are designed to work in extra-terrestrial environments present a unique challenge in terms of the reliability and availability of systems throughout the mission. Should some fault occur, with the nearest human potentially millions of kilometres away, detection and identification of the fault must be performed solely by the robot and its subsystems. Faults in the system sensors are relatively straightforward to detect, through the residuals produced by comparison of the system output with that of a simple model. However, faults in the input, that is, the actuators of the system, are harder to detect. A step change in the input signal, caused potentially by the loss of an actuator, can propagate through the system, resulting in complex residuals in multiple outputs. These residuals can be difficult to isolate or distinguish from residuals caused by environmental disturbances. While a more complex fault detection method or additional sensors could be used to solve these issues, an alternative is presented here. Using inverse simulation (InvSim), the inputs and outputs of the mathematical model of the rover system are reversed. Thus, for a desired trajectory, the corresponding actuator inputs are obtained. A step fault near the input then manifests itself as a step change in the residual between the system inputs and the input trajectory obtained through inverse simulation. This approach avoids the need for additional hardware on a mass- and power-critical system such as the rover. The InvSim fault detection method is applied to a simple four-wheeled rover in simulation. Additive system faults and an external disturbance force are applied to the vehicle in turn, such that the dynamic response and sensor output of the rover are impacted. Basic model-based fault detection is then employed to provide output residuals which may be analysed to provide information on the fault/disturbance. InvSim-based fault detection is then employed, similarly providing input residuals which provide further information on the fault/disturbance. The input residuals are shown to provide clearer information on the location and magnitude of an input fault than the output residuals. Additionally, they allow faults to be more clearly discriminated from environmental disturbances.
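The following Python sketch illustrates the idea of an input residual on a deliberately simplified, hypothetical single-axis model (a pure integrator), not the rover model used in the paper: the inverse model reconstructs the actuator input from the measured output, and an actuator step fault appears as a step in the residual between the commanded and reconstructed inputs.

```python
import numpy as np

def input_residuals(u_commanded, y_measured, inverse_model):
    """InvSim-style fault indicator: reconstruct the inputs that must have
    produced the measured outputs and compare them with the commanded inputs."""
    u_reconstructed = inverse_model(y_measured)
    return u_commanded - u_reconstructed      # a step here flags an actuator fault

# Toy single-integrator "axis": y[k+1] = y[k] + dt*u[k], so u[k] = (y[k+1]-y[k])/dt.
dt = 0.1
u_commanded = np.ones(100)
u_applied = u_commanded.copy()
u_applied[50:] += 0.5                          # additive step fault at the actuator
y = np.concatenate(([0.0], np.cumsum(dt * u_applied)))
inverse = lambda y: np.diff(y) / dt            # inverse of the toy model
r = input_residuals(u_commanded, y, inverse)
print(r[45:55])                                # residual steps to -0.5 after k = 50
```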

Hierarchical Checkpoint Protocol in Data Grids

A grid of computing nodes has emerged as a representative means of connecting distributed computers or resources scattered all over the world for the purposes of computing and distributed storage. Since fault tolerance becomes complex due to the dynamic availability of resources in a decentralized grid environment, checkpointing can be used in connection with replication in data grids. The objective of our work is to present fault tolerance in data grids with a data replication-driven model based on clustering. The performance of the protocol is evaluated with the Omnet++ simulator. The computational results show the efficiency of our protocol in terms of recovery time and the number of processes involved in rollbacks.

Influence of Valve Lift Timing on Producer Gas Combustion and Its Modeling Using Two-Stage Wiebe Function

Producer gas is a biomass-derived gaseous fuel which is extensively used in internal combustion engines for power generation applications. Unlike conventional hydrocarbon fuels (gasoline and natural gas), the combustion properties of producer gas are quite different. Therefore, setting the optimal spark timing is required for efficient engine operation. Owing to the fluctuating tendency of the producer gas composition during the gasification process, the heat release patterns obtained (dictating the power output and emissions) are quite different from those of conventional fuels. It was found that valve lift timing is yet another factor which influences the burn rate of producer gas, and thus the heat release rate of the engine. Therefore, the present study was motivated to estimate the influence of valve lift timing on the burn rate of producer gas analytically (Wiebe model), through curve fitting against experimentally obtained mass fraction burned curves of several producer gas compositions. Furthermore, Wiebe models are widely used in zero-dimensional codes for engine parametric studies and are quite popular. This study also addresses the influence of the hydrogen and methane concentrations of producer gas on combustion trends, which are known to cause dynamics in engine combustion.
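For reference, a common single-stage Wiebe form is x_b(θ) = 1 − exp[−a((θ − θ0)/Δθ)^(m+1)], and a two-stage model is typically a weighted sum of two such curves. The sketch below uses made-up shape parameters, not the fitted values of this study:

```python
import numpy as np

def wiebe(theta, theta0, dtheta, a=5.0, m=2.0):
    """Single-stage Wiebe mass-fraction-burned curve; a and m are shape parameters."""
    x = np.clip((theta - theta0) / dtheta, 0.0, None)
    return 1.0 - np.exp(-a * x ** (m + 1))

def two_stage_wiebe(theta, alpha, stage1, stage2):
    """Weighted sum of two Wiebe curves: alpha weights the first combustion stage."""
    return alpha * wiebe(theta, *stage1) + (1.0 - alpha) * wiebe(theta, *stage2)

theta = np.linspace(-20.0, 80.0, 200)          # crank angle [deg ATDC]
mfb = two_stage_wiebe(theta, 0.6, (-5.0, 40.0, 5.0, 2.0), (10.0, 60.0, 5.0, 1.5))
print(mfb[-1])                                 # approaches 1 as combustion completes
```

In practice, the stage parameters would be obtained by least-squares fitting (for example with scipy.optimize.curve_fit) against the experimental mass fraction burned curves.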

Gas Condensing Unit with Inner Heat Exchanger

Gas condensing units with inner tube heat exchangers represent third-generation technology and differ from second-generation heat and mass transfer units, which are filled with a passive filling material layer. The former improves heat and mass transfer by increasing the cooled contact surface between the gas and the condensate drops and film formed in the inner tube heat exchanger. This paper presents a selection of significant factors which influence the heat and mass transfer. The experimental planning is based on the research and analysis of three main independent variables: the velocity of the water and of the gas, as well as the density of spraying. In the empirical mathematical models, the coefficient of heat transfer is used as the dependent parameter, which depends on two independent variables: water and gas velocity. The empirical model is verified using the experimental data of two independent gas condensing units in Lithuania and Russia. The experimental data are processed using a heat transfer criterion, the Kirpichev number. The results allow drawing a graphical nomogram for the calculation of the heat and mass transfer conditions in the innovative and energy-efficient gas cooling unit.

Biotechonomy System Dynamics Modelling: Sustainability of Pellet Production

The paper presents a biotechonomy development analysis by use of system dynamics modelling. The research is connected with investigations of biomass application for the production of bioproducts with higher added value. The most popular bioresource is wood, and therefore the main question today concerns future development and the eco-design of products. The paper emphasizes and evaluates the energy sector, which is open to the use of wood logs, wood chips, wood pellets and so on. The main aim of this research study was to build a framework to analyse the development perspectives of wood pellet production. To reach this goal, a system dynamics model of energy wood supply, processing, and consumption is built. Production capacity, energy consumption, changes in energy and technology efficiency, the required labour force, and the prices of wood, energy and labour are taken into account. Validation and verification tests with the available data and information have been carried out and indicate that the model supports the dynamic hypothesis. It is found that the more that is invested into pellet production, the higher the specific profit per production unit compared to wood logs and wood chips. As a result, wood chip production decreases dramatically and is replaced by wood pellets. The limiting factor for pellet industry growth is the availability of wood sources, which is governed by the felling limit set by the government based on sustainable forestry principles.
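To make the stock-and-flow logic concrete, here is a deliberately minimal system dynamics sketch in Python with entirely hypothetical parameters (the paper's actual model structure, variables and data are not reproduced): pellet production capacity is a stock that grows with reinvested profit and is capped by the felling limit.

```python
import numpy as np

dt, years = 0.25, 30
felling_limit = 12.0      # wood available for energy products per year (assumed)
capacity = 1.0            # wood processed into pellets per year (stock)
profit_per_unit = 15.0    # specific profit per production unit (assumed constant)
reinvest_rate = 0.02      # fraction of profit converted into new capacity

history = []
for t in np.arange(0.0, years, dt):
    wood_to_pellets = min(capacity, felling_limit)          # resource constraint
    investment = reinvest_rate * profit_per_unit * wood_to_pellets
    capacity += dt * investment                             # Euler stock integration
    history.append((t, wood_to_pellets))

print(history[-1])        # growth saturates once the felling limit is reached
```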

Iterative Learning Control of Two Coupled Nonlinear Spherical Tanks

This paper presents the modeling and control of a highly nonlinear system consisting of two non-interacting spherical tanks using iterative learning control (ILC). The objective of the paper is to control the liquid levels in the nonlinear tanks. First, a proportional-integral-derivative (PID) controller is applied to the plant model as a suitable benchmark for comparison. Then, the dynamic responses of the control system corresponding to different step inputs are investigated. It is found that the conventional PID control is not able to fulfill design criteria such as the desired time constant. Consequently, an iterative learning controller is proposed to accurately control the coupled nonlinear tank system. The simulation results clearly demonstrate the superiority of the presented ILC approach over the conventional PID controller in coping with the nonlinearities present in the dynamic system.
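A minimal P-type ILC sketch on a toy first-order plant (a hypothetical stand-in, not the paper's coupled spherical-tank dynamics): over repeated identical trials, the input is updated with the previous trial's tracking error, u_{k+1}(t) = u_k(t) + L * e_k(t).

```python
import numpy as np

def plant(u, y0=0.0, a=0.2, b=0.8):
    """Toy first-order discrete level response standing in for a tank loop."""
    y, y_prev = np.empty_like(u), y0
    for k, uk in enumerate(u):
        y_prev = a * y_prev + b * uk
        y[k] = y_prev
    return y

ref = np.full(100, 0.5)        # desired level over one trial
u = np.zeros_like(ref)         # input signal, refined trial by trial
L_gain = 1.0                   # learning gain

for trial in range(20):
    e = ref - plant(u)         # tracking error of the current trial
    u = u + L_gain * e         # ILC update applied on the next trial

print("max tracking error after learning:", np.abs(ref - plant(u)).max())
```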

Development of a Paediatric Head Model for the Computational Analysis of Head Impact Interactions

Head injury in childhood is a common cause of death or permanent disability from injury. However, despite its frequency and significance, there is little understanding of how a child's head responds during injurious loading. Whilst infant Post Mortem Human Subject (PMHS) experimentation is a logical approach to understanding injury biomechanics, it is the authors' opinion that a lack of subject availability is hindering potential progress. Computer modelling adds great value when considering adult populations; however, its potential remains largely untapped for infant surrogates. The complexities of child growth and development, which result in age-dependent changes in anatomy, geometry and physical response characteristics, present new challenges for computational simulation. Further geometric challenges are presented by the intricate infant cranial bones, which are separated by sutures and fontanelles and demonstrate a visible fibre orientation. This study presents an FE model of a newborn infant's head, developed from high-resolution computed tomography scans and informed by published tissue material properties. To mimic the fibre orientation of immature cranial bone, anisotropic properties were applied to the FE cranial bone model, with elastic moduli representing the bone response both parallel and perpendicular to the fibre orientation. The biofidelity of the computational model was confirmed by global validation against published PMHS data, replicating experimental impact tests with a series of computational simulations and comparing head kinematic responses. The numerical results confirm that the FE head model's mechanical response is in favourable agreement with the PMHS drop test results.

Analysis of Thermoelectric Coolers as Energy Harvesters for Low Power Embedded Applications

The growing popularity of solid-state thermoelectric devices in cooling applications has sparked an increasing diversity of thermoelectric coolers (TECs) on the market, commonly known as "Peltier modules". They can also be used as generators, converting a temperature difference into electric power, and opportunities are plentiful to make use of these devices as thermoelectric generators (TEGs) to supply energy to low-power, autonomous embedded electronic applications. Their adoption as energy harvesters in this new domain of usage is obstructed by the complex thermoelectric models commonly associated with TEGs. Low-cost TECs for the consumer market lack the parameters required to use these models because they are not intended for this mode of operation, which urges an alternative method to obtain electric power estimations under specific operating conditions. The design of the test setup implemented in this paper is specifically targeted at benchmarking commercial, off-the-shelf TECs for use as energy harvesters in domestic environments: applications with limited temperature differences and space available. The usefulness is demonstrated by testing and comparing single- and multi-stage TECs of different sizes. The effect of a boost converter stage on the thermoelectric end-to-end efficiency is also discussed.
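For orientation, a first-order equivalent-circuit estimate (a sketch with hypothetical module parameters, not the benchmarking method of the paper) treats the TEC-as-TEG as a Seebeck voltage source behind an internal resistance:

```python
def teg_power(delta_T, seebeck=0.05, r_internal=2.5, r_load=2.5):
    """Simple TEG power estimate: V_oc = S * dT and
    P = V_oc^2 * R_load / (R_int + R_load)^2, maximised when R_load = R_int.
    All parameter values here are assumptions, not datasheet figures."""
    v_oc = seebeck * delta_T
    return v_oc ** 2 * r_load / (r_internal + r_load) ** 2

for dT in (5, 10, 20):                     # limited temperature differences [K]
    print(dT, "K ->", 1e3 * teg_power(dT), "mW")
```

This kind of estimate requires exactly the parameters (Seebeck coefficient, internal resistance) that consumer TEC datasheets tend to omit, which is why the paper resorts to direct benchmarking instead.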

Performance of Derna Steam Power Plant at Varying Super-Heater Operating Conditions Based on Exergy

In the current study, an energy and exergy analysis of a 65 MW steam power plant was carried out. The study investigated the effect of variations of the overall conductance of the super-heater on the performance of an existing steam power plant located in Derna, Libya. The performance of the power plant was estimated by a mathematical model which considers the off-design operating conditions of each component. A fully interactive computer program based on the mass, energy and exergy balance equations has been developed. The maximum exergy destruction was found in the steam generation unit. A 50% reduction in the design value of the overall conductance of the super-heater was found to decrease the net electrical power that would be generated by at least 13 MW, as well as the overall plant exergy efficiency by at least 6.4%, while at the same time increasing the total exergy destruction by at least 14 MW. The results showed that the super-heater design and operating conditions play an important role in the thermodynamic performance and fuel utilization of the power plant. Moreover, these considerations are very useful when deciding whether to replace or renovate the super-heater of the power plant.

A Linear Regression Model for Estimating Anxiety Index Using Wide Area Frontal Lobe Brain Blood Volume

Major depressive disorder (MDD) is one of the most common mental illnesses today. It is believed to be caused by a combination of several factors, including stress. Stress can be quantitatively evaluated using the State-Trait Anxiety Inventory (STAI), one of the best indices for evaluating anxiety. Although STAI scores are widely used in applications ranging from clinical diagnosis to basic research, the scores are calculated from a self-reported questionnaire. An objective evaluation is required because the subject may intentionally change his/her answers if multiple tests are carried out. In this article, we present a modified index called the "multi-channel Laterality Index at Rest (mc-LIR)", obtained by recording brain activity from a wider area of the frontal lobe using multi-channel functional near-infrared spectroscopy (fNIRS). The presented index is measured at multiple positions near Fpz, defined by the international 10-20 positioning system. Using 24 subjects, the dependencies of the mc-LIR and of its correlation coefficients with the STAI scores on the number of measuring points are reported. Furthermore, a simple linear regression was performed to estimate the STAI scores from the mc-LIR. The cross-validation error is also reported. The experimental results show that using multiple positions near Fpz improves the correlation coefficients and the estimation compared to using only two positions.
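A minimal sketch of the regression and cross-validation step, using synthetic placeholder data in place of the actual mc-LIR and STAI measurements of the 24 subjects:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(1)
# Hypothetical data: one mc-LIR value and one STAI score per subject (n = 24).
mc_lir = rng.uniform(-1.0, 1.0, 24).reshape(-1, 1)
stai = 45 + 10 * mc_lir.ravel() + rng.normal(0, 5, 24)   # synthetic, illustrative only

model = LinearRegression()
loo_scores = cross_val_score(model, mc_lir, stai, cv=LeaveOneOut(),
                             scoring="neg_mean_absolute_error")
print("leave-one-out cross-validation MAE:", -loo_scores.mean())

model.fit(mc_lir, stai)                                   # final regression fit
print("slope:", model.coef_[0], "intercept:", model.intercept_)
```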

High-Speed Particle Image Velocimetry of the Flow around a Moving Train Model with Boundary Layer Control Elements

Trackside induced airflow velocities, also known as slipstream velocities, are an important criterion for the design of high-speed trains. The maximum permitted values are given by the Technical Specifications for Interoperability (TSI) and have to be checked in the approval process. For train manufacturers it is of great interest to know in advance how new train geometries would perform in TSI tests. The Reynolds number in moving model experiments is lower compared to full scale. Especially the limited model length leads to a thinner boundary layer at the rear end. The hypothesis is that the boundary layer rolls up into characteristic flow structures in the train wake, in which the maximum flow velocities can be observed. The idea is to enlarge the boundary layer using roughness elements at the train model head, so that the ratio between the boundary layer thickness and the car width at the rear end is comparable to that of a full-scale train. This may lead to similar flow structures in the wake and better prediction accuracy for TSI tests. In this case, the design of the roughness elements is limited by the moving model rig. Small rectangular roughness shapes are used to achieve a sufficient effect on the boundary layer, while the elements are robust enough to withstand the high accelerating and decelerating forces during the test runs. For this investigation, High-Speed Particle Image Velocimetry (HS-PIV) measurements on an ICE3 train model have been carried out in the moving model rig of the DLR in Göttingen, the so-called tunnel simulation facility Göttingen (TSG). The flow velocities within the boundary layer are analysed in a plane parallel to the ground. The height of the plane corresponds to a test position in the EN standard (TSI). Three different shapes of roughness elements are tested. The boundary layer thickness and displacement thickness, as well as the momentum thickness and the form factor, are calculated along the train model. Conditional sampling is used to analyse the size and dynamics of the flow structures at the time of maximum velocity in the wake behind the train. As expected, larger roughness elements increase the boundary layer thickness and lead to larger flow velocities in the boundary layer and in the wake flow structures. The boundary layer thickness, displacement thickness and momentum thickness are increased by larger roughness elements, especially when these are applied at a height close to the measuring plane. The roughness elements also cause high fluctuations in the form factor of the boundary layer. Behind the roughness elements, the form factors rapidly approach constant values. This indicates that the boundary layer, while growing slowly along the second half of the train model, has reached a state of equilibrium.
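The integral boundary layer quantities mentioned above can be computed from a measured velocity profile u(y) by numerical integration; the sketch below uses a generic 1/7-power-law profile as a stand-in for the PIV data:

```python
import numpy as np

def trapezoid(f, x):
    """Simple trapezoidal integration kept explicit for clarity."""
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x)))

def boundary_layer_parameters(y, u, u_inf):
    """Displacement thickness, momentum thickness and form factor H = delta*/theta
    from a wall-normal velocity profile u(y)."""
    r = u / u_inf
    delta_star = trapezoid(1.0 - r, y)       # displacement thickness
    theta = trapezoid(r * (1.0 - r), y)      # momentum thickness
    return delta_star, theta, delta_star / theta

y = np.linspace(0.0, 0.05, 200)              # wall-normal coordinate [m]
u = 60.0 * (y / y[-1]) ** (1.0 / 7.0)        # 1/7-power-law test profile, u_inf = 60 m/s
print(boundary_layer_parameters(y, u, 60.0))
```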

Dicotyledon Weed Quantification Algorithm for Selective Herbicide Application in Maize Crops: Statistical Evaluation of the Potential Herbicide Savings

This work contributes a statistical model and simulation framework yielding the best estimate possible of the potential herbicide reduction when using the MoDiCoVi algorithm, while requiring an efficacy comparable to conventional spraying. In June 2013, a maize field located in Denmark was seeded. The field was divided into parcels, which were assigned to one of two main groups: 1) control, consisting of subgroups of no spray and full-dose spray; 2) the MoDiCoVi algorithm, subdivided into five different leaf cover thresholds for spray activation. In addition, approximately 25% of the parcels were seeded with additional weeds perpendicular to the maize rows. In total, 299 parcels were randomly assigned to the 28 different treatment combinations. In the statistical analysis, bootstrapping was used for balancing the number of replicates. The achieved potential herbicide savings were found to be 70% to 95%, depending on the initial weed coverage. However, additional field trials covering more seasons and locations are needed to verify the generalisation of these results. There is potential for further herbicide savings, as the time interval between the first and second spraying session was not long enough for the weeds to turn yellow; instead, they only stagnated in growth.
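To illustrate the bootstrapping step in isolation (with made-up parcel-level values rather than the trial data), the following sketch resamples savings with replacement and reports a percentile confidence interval:

```python
import numpy as np

def bootstrap_mean_ci(values, n_boot=10000, ci=0.95, seed=0):
    """Resample with replacement and return the mean plus a percentile CI;
    a simple stand-in for the replicate-balancing bootstrap of the paper."""
    rng = np.random.default_rng(seed)
    values = np.asarray(values, dtype=float)
    means = rng.choice(values, size=(n_boot, values.size), replace=True).mean(axis=1)
    lo, hi = np.percentile(means, [(1 - ci) / 2 * 100, (1 + ci) / 2 * 100])
    return values.mean(), (lo, hi)

# Hypothetical per-parcel herbicide savings for one treatment (fractions):
savings = [0.72, 0.81, 0.95, 0.88, 0.70, 0.93]
print(bootstrap_mean_ci(savings))
```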

Sensitivity Analysis during the Optimization Process Using Genetic Algorithms

Genetic algorithms (GA) are applied to the solution of high-dimensional optimization problems. Additionally, sensitivity analysis (SA) is usually carried out to determine the effect on optimal solutions of changes in the parameter values of the objective function. These two analyses (i.e., optimization and sensitivity analysis) are computationally intensive when applied to high-dimensional functions. The approach presented in this paper consists of performing the SA during the GA execution, by statistically analyzing the data obtained from running the GA. The advantage is that, in this case, the SA does not involve making additional evaluations of the objective function and, consequently, the proposed approach requires less computational effort than conducting optimization and SA in two consecutive steps.
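One possible realisation of this idea is sketched below, purely as an illustration: every objective-function evaluation made by a toy GA is logged, and standardized regression coefficients computed from that log serve as rough sensitivity estimates, with no extra function calls. The actual statistical procedure of the paper may differ.

```python
import numpy as np

def objective(x):                            # hypothetical objective to minimise
    return 3.0 * x[:, 0] + 0.5 * x[:, 1] + 0.01 * x[:, 2]

rng = np.random.default_rng(0)
pop = rng.uniform(-1.0, 1.0, (40, 3))
log_x, log_f = [], []
for gen in range(30):                        # very small GA: select + mutate
    f = objective(pop)
    log_x.append(pop.copy())                 # reuse evaluations already paid for
    log_f.append(f)
    parents = pop[np.argsort(f)[:20]]        # truncation selection
    pop = np.vstack([parents, parents + rng.normal(0.0, 0.1, parents.shape)])

X, y = np.vstack(log_x), np.concatenate(log_f)
Xs = (X - X.mean(0)) / X.std(0)
ys = (y - y.mean()) / y.std()
beta, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
print("sensitivity estimates (standardized coefficients):", beta)
```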

Probability-Based Damage Detection of Structures Using Kriging Surrogates and Enhanced Ideal Gas Molecular Movement Algorithm

Surrogate models have received increasing attention for use in detecting damage of structures based on vibration modal parameters. However, uncertainties existing in the measured vibration data may lead to false or unreliable output from such a model. In this study, an efficient approach based on Monte Carlo simulation is proposed to take into account the effect of these uncertainties in developing a surrogate model. The probability of damage existence (PDE) is calculated based on the probability density functions of the undamaged and damaged states. The kriging technique allows one to genuinely quantify the surrogate error; therefore, it is chosen as the metamodeling technique. An enhanced version of the ideal gas molecular movement (EIGMM) algorithm is used as the main algorithm for model updating. The developed approach is applied to detect simulated damage in numerical models of a 72-bar space truss and a 120-bar dome truss. The simulation results show that the proposed method performs well in probability-based damage detection of structures, with less computational effort compared to the direct finite element model.
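A much-simplified sketch of the general idea follows: a Gaussian process stands in for the kriging surrogate, Monte Carlo sampling represents measurement uncertainty, and a PDE is computed for a single hypothetical damage parameter. The EIGMM updating and the truss models of the paper are not reproduced.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)
# Toy surrogate: map a stiffness-reduction ratio to a single modal feature [Hz].
damage = np.linspace(0.0, 0.5, 20).reshape(-1, 1)
frequency = 10.0 * np.sqrt(1.0 - damage.ravel())          # made-up relationship
gp = GaussianProcessRegressor(normalize_y=True).fit(damage, frequency)

measured = 9.4                                             # "measured" frequency
samples = measured + rng.normal(0.0, 0.05, 5000)           # measurement uncertainty
# Invert the surrogate by nearest prediction on a fine grid (no FE calls needed).
grid = np.linspace(0.0, 0.5, 501).reshape(-1, 1)
pred = gp.predict(grid)
est_damage = grid[np.abs(pred[None, :] - samples[:, None]).argmin(axis=1), 0]
pde = (est_damage > 0.05).mean()                           # probability of damage existence
print("PDE:", pde)
```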

Realization of a Temperature Based Automatic Controlled Domestic Electric Boiling System

This paper presents an analog-circuit-based temperature control system, which is mainly composed of a threshold control signal circuit, a synchronization signal circuit and a trigger pulse circuit. Firstly, the temperature feedback function is realized by the temperature sensor TS503F3950E. Secondly, the main control circuit forms the cycle-controlled pulse signal to control the thyristor switching module. Finally, two reverse-paralleled thyristors regulate the output power through their switching state. As a consequence, this is a modernized and energy-saving domestic electric heating system.