Logistics Support as a Key Success Factor in Gastronomy

Gastronomy is one of the oldest forms of commercial activity and is currently one of the most popular and still dynamically developing branches of business. Socio-economic changes, its widespread presence, and new techniques and culinary styles give it almost unlimited possibilities for development. Importantly, regardless of the form of business adopted, foodservice is strongly tied to logistics processes, and the areas of foodservice closely linked to logistics are of strategic importance. Any inefficiency in logistics processes reduces a catering company's chances of success and of achieving competitive advantage. The aim of the paper is to identify the areas of logistics support occurring in the catering business and affecting the scope of the logistics processes implemented. This aim is pursued through a multi-method approach based on direct observation, text analysis of current documents, and in-depth, targeted free-form interviews.

Spatial Indeterminacy: Destabilization of Dichotomies in Modern and Contemporary Architecture

Since the advent of modern architecture, notions of free plan and transparency have proliferated well into current trends. The movement's notion of a spatially homogeneous, open and limitless 'free plan' contrasts with the spatially heterogeneous 'series of rooms' defined by load-bearing walls, which in turn triggered new notions of transparency created by vast expanses of glazed walls. Similarly, transparency was also dichotomized as something physical or optical, as well as something conceptual, akin to spatial organization. Rather than merely accepting the duality and possible incompatibility of these dichotomies, this paper asks how space can be both literally and phenomenally transparent, and how it can exhibit both homogeneous and heterogeneous qualities. The paper explores this potential destabilization or blurring of spatial phenomena by dissecting the transparent layers and volumes of a series of selected case studies to investigate how different architects have devised strategies of spatial ambiguity and interpenetration. Projects by Peter Eisenman, Sou Fujimoto, and SANAA are discussed and analyzed to show how the superimposition of geometries and spaces achieves different conditions of layering, transparency, and interstitiality. Their buildings are explored to reveal various innovative kinds of spatial interpenetration produced through the articulate relations of the elements of architecture, which challenge conventional perceptions of interior and exterior, whereby visual homogeneity blurs with spatial heterogeneity. The results show how spatial conceptions such as interpenetration and transparency have the ability to subvert not only inside-outside dialectics but also to produce multiple degrees of interiority within complex and indeterminate spatial dimensions in constant flux, as well as to present alternative forms of social interaction.

Structural-Geotechnical Effects of the Foundation of a Medium-Height Structure

The interaction effects between the existing soil and the substructure of a five-story building with one underground level were evaluated, validating the structural-geotechnical concepts through the impedance-factor method implemented in a finite element program. The continuous wall-type foundation had a constant thickness and followed inclined and orthogonal directions, while the ground had homogeneous, medium-type characteristics. The soil considered was type C according to the Ecuadorian Construction Standard (NEC), and the corresponding foundation had a depth of 4.00 m and a basement wall thickness of 40 cm. The project is part of a mid-rise building in the city of Azogues (Ecuador). The hypotheses raised responded to the objectives: the model implemented with springs deviated from the embedded-base model and yielded conservative results.
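
As a rough illustration of the impedance-factor idea, the sketch below computes spring constants from the classical static stiffnesses of a rigid circular foundation on a homogeneous elastic half-space (Lysmer-type formulas); the soil parameters and equivalent radius are illustrative assumptions, not the values of the Azogues project.

```python
# Illustrative soil parameters (assumed, not the project's values)
G = 60e6       # shear modulus of the soil [Pa]
nu = 0.3       # Poisson's ratio [-]
R = 2.0        # equivalent radius of the foundation [m]

# Classical static stiffnesses of a rigid circular foundation
# on a homogeneous elastic half-space, used as soil springs
K_v = 4 * G * R / (1 - nu)            # vertical [N/m]
K_h = 8 * G * R / (2 - nu)            # horizontal [N/m]
K_r = 8 * G * R**3 / (3 * (1 - nu))   # rocking [N*m/rad]

print(f"K_v = {K_v:.3e} N/m, K_h = {K_h:.3e} N/m, K_r = {K_r:.3e} N*m/rad")
```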

Influence of Inhomogeneous Wind Fields on the Aerostatic Stability of a Cable-Stayed Pedestrian Bridge without Backstays: Experiments and Numerical Simulations

Sightseeing glass bridges located in steep valley areas are being built on a large scale owing to the development of tourism. Consequently, their aerostatic stability is seriously affected by the wind field characteristics created by strong wind and special terrain, such as wind speed and wind attack angle. The structure studied here, a cable-stayed pedestrian bridge without backstays comprising a 60 m cantilever girder with a glass deck, is located in an abrupt valley and acts as a viewing platform. Its nonlinear aerostatic stability is analyzed in this paper through segmental model tests and numerical simulation. Based on the aerostatic coefficients of the main girder measured in wind tunnel tests, the nonlinear influences of the structure and the aerostatic load, the inhomogeneous distribution of the torsion angle along the bridge axis, and the influence of the initial attack angle were analyzed using the incremental double iteration method. The results show that the aerostatic response varies with wind speed in a markedly nonlinear way, and that the instability mode is characterized by spatial deformation with coupled bending and twisting. As the wind speed approaches the critical wind speed, the vertical and torsional deformations of the main girder become larger than its lateral deformation. Flow at a negative attack angle reduces the bridge's critical stability wind speed, and the influence of a negative attack angle on the aerostatic stability is more significant than that of a positive one. The critical wind speeds of torsional divergence and lateral buckling are both above 200 m/s; that is, the bridge will not undergo aerostatic instability under any of the considered wind attack angles.
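
A minimal sketch of the incremental double iteration idea: the outer loop increments the wind speed, and the inner loop iterates the effective attack angle (initial angle plus torsional deformation) until the aerostatic moment and the elastic restoring moment balance. The moment-coefficient curve, torsional stiffness, and deck width below are assumed placeholders for the wind tunnel data and structural model used in the paper (lift and drag are handled analogously).

```python
import numpy as np

# Toy aerostatic moment coefficient C_M(alpha) (assumed shape, a stand-in
# for the wind-tunnel data used in the paper)
C_M = lambda a: 0.02 + 1.0 * a

rho, B = 1.225, 3.0          # air density [kg/m^3], deck width [m]
k_theta = 5.0e5              # linearized torsional stiffness [N*m/rad per m]
alpha0 = np.deg2rad(-3.0)    # initial wind attack angle

# Outer loop: increment wind speed; inner loop: iterate the effective
# attack angle to aerostatic equilibrium.
for U in np.arange(10.0, 80.0, 5.0):
    theta = 0.0
    for _ in range(200):
        alpha_eff = alpha0 + theta
        M = 0.5 * rho * U**2 * B**2 * C_M(alpha_eff)  # aerostatic moment
        theta_new = M / k_theta                        # elastic response
        if abs(theta_new - theta) < 1e-8:
            theta = theta_new
            break
        theta = theta_new
    else:
        print(f"U = {U:.0f} m/s: no convergence -> torsional divergence")
        break
    print(f"U = {U:.0f} m/s: twist = {np.rad2deg(theta):.3f} deg")
```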

Predicting the Lack of GDP Growth: A Logit Model for 40 Advanced and Developing Countries

This paper identifies leading triggers of deficient episodes in terms of GDP growth, based on a sample of countries at different stages of development over 1994-2017. Using logit models, we build early warning systems (EWS), and our results show important differences between developing countries (DCs) and advanced economies (AEs). For AEs, the main predictors of the probability of entering a deficient GDP growth episode are the deterioration of external imbalances and the vulnerability of the fiscal position, while DCs face different challenges that need to be considered. The key indicators for them are, first, a low ability to service their debts and, second, whether or not they belong to a common currency area. We also build homogeneous pools of countries within AEs and DCs. For AEs, the evolution of the proportion of countries in the riskiest pool is marked first by three distinct peaks just after the high-tech bubble burst, the global financial crisis, and the European sovereign debt crisis, and second by a very low minimum level in 2006 and 2007. In contrast, the situation of DCs is characterized first by the relative stability of this proportion and then by an upward trend from 2006, which can be explained by a more unfavorable socio-political environment leading to shortcomings in fiscal consolidation.
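
A hedged sketch of how a logit-based EWS of this kind can be set up; the panel below is synthetic, and the predictors merely echo those named in the abstract (external imbalance, fiscal position, debt-service capacity).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic country-year panel (a stand-in for the 1994-2017 sample)
n = 1000
X = np.column_stack([
    rng.normal(0, 2, n),    # current account balance, % of GDP
    rng.normal(60, 25, n),  # public debt, % of GDP
    rng.normal(0, 1, n),    # debt-service capacity proxy
])
logit = -1.0 - 0.3 * X[:, 0] + 0.02 * X[:, 1] + 0.5 * X[:, 2]
y = rng.random(n) < 1 / (1 + np.exp(-logit))  # 1 = deficient-growth episode

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
ews = LogisticRegression().fit(X_tr, y_tr)
p = ews.predict_proba(X_te)[:, 1]             # early-warning probabilities
print("AUC:", round(roc_auc_score(y_te, p), 3))
```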

Parametric Approach for Reserve Liability Estimate in Mortgage Insurance

The Chain Ladder (CL), Expected Loss Ratio (ELR), and Bornhuetter-Ferguson (BF) methods, in addition to more complex transition-rate modeling, are commonly used actuarial reserving methods in general insurance. There is limited published research about their relative performance in the context of Mortgage Insurance (MI). In our experience, these traditional techniques pose unique challenges and do not provide stable claim estimates for medium- to longer-term liabilities. The relative strengths and weaknesses among the various approaches revolve around stability in the recent loss development pattern, sufficiency and reliability of loss development data, and agreement or disagreement between reported losses to date and the ultimate loss estimate. The CL method produces volatile reserve estimates, especially for accident periods with little development experience. The ELR method breaks down especially when ultimate loss ratios are not stable and predictable. While the BF method provides a good tradeoff between the loss development approach (CL) and ELR, it generates claim development and ultimate reserves that are disconnected from the ever-to-date (ETD) development experience for some accident years with more development experience. Further, BF rests on a subjective a priori assumption. The fundamental shortcoming of these methods is their inability to model exogenous factors, such as the economy, which impact various cohorts at the same chronological time but at staggered points along their lifetime development. This paper proposes an alternative approach: parametrizing the loss development curve and using logistic regression to generate the ultimate loss estimate for each homogeneous group (accident year or delinquency period). The methodology was tested on an actual MI claim development dataset in which various cohorts followed a sigmoidal trend, but levels varied substantially depending on the economic and operational conditions during a development period spanning many years. The proposed approach makes it possible to incorporate such exogenous factors indirectly and produces more stable loss forecasts for reserving purposes than the traditional CL and BF methods.
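
A minimal sketch of the parametric idea, read here as fitting a logistic (sigmoid) development curve to one cohort's cumulative ever-to-date losses and reading off the implied ultimate and reserve; the data and starting values are invented for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

# Sigmoidal (logistic) loss development curve: cumulative losses for one
# homogeneous cohort approach an ultimate level 'ult' as development age t grows.
def logistic_ldc(t, ult, k, t0):
    return ult / (1 + np.exp(-k * (t - t0)))

# Illustrative ETD cumulative losses for one accident year (assumed data)
t = np.arange(1, 13)  # development quarters
etd = np.array([5, 12, 28, 55, 90, 130, 162, 184, 196, 203, 207, 209.0])

params, _ = curve_fit(logistic_ldc, t, etd, p0=[etd[-1] * 1.1, 0.5, 6.0])
ult, k, t0 = params
print(f"ultimate = {ult:.1f}, reserve = {ult - etd[-1]:.1f}")
```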

Design Charts for Strip Footing on Untreated and Cement Treated Sand Mat over Underlying Natural Soft Clay

Shallow foundations on unimproved soft natural soils can undergo large consolidation and secondary settlements. For low- and medium-rise building projects under such soil conditions, pile foundations may not be cost-effective. In such cases an alternative to piles may be shallow strip footings placed on a double-layered improved soil system, in which the upper layer is untreated or cement-treated compacted sand and the underlying layer is natural soft clay; this system reduces settlement to an allowable limit. The current research examines the settlement of a rigid plane-strain strip footing of 2.5 m width placed on the surface of a soil profile consisting of an untreated or cement-treated sand layer overlying a bed of homogeneous soft clay, with sand layer thicknesses of 0.3 to 0.9 times the footing width. The response of the clay layer is assumed undrained for plastic loading stages and drained during consolidation stages; the response of the sand layer is drained during all loading stages. FEM analysis was performed using PLAXIS 2D Version 8.0. A natural clay deposit of 15 m thickness and 18 m width was modeled using the Hardening Soil Model, the Soft Soil Model, and the Soft Soil Creep Model, while the upper improvement layer was modeled using only the Hardening Soil Model. The groundwater level is at the top of the clay deposit, making the system fully saturated. A parametric study was conducted to determine the effect of the thickness, density, and cementation of the sand mat, and of the density and shear strength of the soft clay layer, on the settlement of the strip foundation under uniformly distributed vertical loads of varying magnitude. A set of charts has been established for designing shallow strip footings on a sand mat over a thick, soft clay deposit, giving the sand mat thickness required for particular subsoil parameters to prevent punching shear failure and settlement beyond the allowable level. Design guidelines in the form of non-dimensional charts have been developed for footing pressures typical of medium-rise residential or commercial buildings with strip footings on the soft inorganic normally consolidated (NC) soils of Bangladesh, with void ratios from 1.0 to 1.45.
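
For orientation, the classical one-dimensional primary consolidation formula underlying settlement estimates of this kind can be sketched as follows; all soil values are illustrative assumptions, not the paper's PLAXIS inputs.

```python
import math

# One-dimensional primary consolidation settlement of the clay layer:
# s = Cc/(1+e0) * H * log10((sigma'_0 + d_sigma)/sigma'_0)
Cc = 0.45        # compression index [-] (assumed)
e0 = 1.2         # initial void ratio [-] (within the 1.0-1.45 range studied)
H = 15.0         # clay layer thickness [m]
sigma0 = 60e3    # initial effective vertical stress at mid-depth [Pa]
d_sigma = 25e3   # stress increment from the strip footing at mid-depth [Pa]

s = Cc / (1 + e0) * H * math.log10((sigma0 + d_sigma) / sigma0)
print(f"primary consolidation settlement ~ {s*1000:.0f} mm")
```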

Flow Duration Curves and Recession Curves Connection through a Mathematical Link

This study helps Public Water Bureaus give reliable answers to water concession requests. Rapidly increasing water requests can be accommodated provided that further uses of a river course are not totally compromised and environmental features are protected. Strictly speaking, a water concession can be considered a continuous drawing from the source and causes a reduction of the mean annual streamflow. Deciding whether a water concession is appropriate therefore seems easily solved by comparing the generic demand to the mean annual streamflow available. The immediate shortcoming of such a comparison is that streamflow data are available only for a few catchments and, most often, are limited to specific sites. Moreover, comparing the generic water demand to the mean daily discharge is far from satisfactory, since the mean daily streamflow can exceed the requested withdrawal even though actual flows fall below it for long periods of the year; such a comparison is therefore of little significance for preserving the quality and quantity of the river. To overcome this limit, this study completes the information provided by flow duration curves by introducing a link between Flow Duration Curves (FDCs) and recession curves, thereby recovering the chronological sequence of flows with a particular focus on low flow data. The analysis is carried out on 25 catchments located in north-eastern Italy for which daily data are available. The results identify groups of catchments that are hydrologically homogeneous, with the lower part of their FDCs (the streamflow interval between the flows exceeded 300 and 335 days per year, Q(300) and Q(335)) smoothly reproduced by a common recession curve. In conclusion, the results help provide more reliable answers to water requests, especially for catchments showing a similar hydrological response, and can be used for a focused regionalization approach for low flow data. A mathematical link between flow duration curves and recession curves is provided, thus endowing flow duration curves with information on the temporal sequence of flows. In this way, by introducing assumptions on recession curves, a chronological sequence can be attributed to the low-flow portion of FDCs, which by nature lack this information.
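
A small sketch of the link being exploited: compute the lower tail of an FDC from daily flows, then connect Q(300) and Q(335) through an assumed linear-reservoir (exponential) recession, which supplies the missing chronology. The flow series and recession constant are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic daily streamflow series (a stand-in for the 25 gauged catchments)
n_years = 10
q = np.exp(rng.normal(0.0, 1.0, 365 * n_years))  # [m^3/s]

# Flow duration curve: flow exceeded on d days per year (d = 1..365)
q_sorted = np.sort(q)[::-1]
def fdc(d):
    return q_sorted[int(d * n_years) - 1]

print("Q(300) =", round(fdc(300), 3), " Q(335) =", round(fdc(335), 3))

# Exponential (linear-reservoir) recession Q(t) = Q0 * exp(-t/k): with an
# assumed storage constant k, the time to recede from Q(300) to Q(335)
k = 25.0  # recession constant [days] (illustrative assumption)
t_span = k * np.log(fdc(300) / fdc(335))
print(f"recession from Q(300) to Q(335) takes ~ {t_span:.1f} days")
```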

The Explanation for Dark Matter and Dark Energy

The following assumptions of the Big Bang theory are challenged and found to be false: the cosmological principle, the assumption that all matter formed at the same time, and the assumption regarding the cause of the cosmic microwave background radiation. The evolution of the universe is described based on the conclusion that the universe is finite with a space boundary. This conclusion is reached by ruling out the possibility of an infinite universe or of a universe which is finite with no boundary. In a finite universe, the centre of the universe can be located with reference to our home galaxy (the Milky Way) using the speed relative to the Cosmic Microwave Background (CMB) rest frame and Hubble's law. This places our home galaxy at a distance of approximately 26 million light years from the centre of the universe. Because we are making observations from a point relatively close to the centre of the universe, the universe appears to be isotropic and homogeneous, but this is not the case. The CMB is coming from a source located within the event horizon of the universe; there is sufficient mass in the universe to create an event horizon at the Schwarzschild radius. Galaxies form over time due to the energy released by the expansion of space. Conservation of energy must consider total energy, which is mass (+ve) plus energy (+ve) plus spacetime curvature (-ve), so that the total energy of the universe is always zero. The predominant locus of galaxy formation moves over time from the centre of the universe towards the boundary, so that today the majority of new galaxy formation is taking place beyond our horizon of observation at 14 billion light years.
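
For transparency, the arithmetic behind the two numerical claims can be sketched as follows, under the assumption that the quoted speed relative to the CMB rest frame (a published Local Group value of roughly 552 km/s) is read through Hubble's law, and that the universe's mass is of order 10^53 kg; both inputs are assumptions of this sketch, not figures stated in the abstract.

```python
# Illustrative arithmetic for the two numerical claims
G  = 6.674e-11      # gravitational constant [m^3 kg^-1 s^-2]
c  = 2.998e8        # speed of light [m/s]
Mpc = 3.086e22      # metres per megaparsec
ly  = 9.461e15      # metres per light year

v  = 552e3          # speed relative to the CMB rest frame [m/s] (assumed)
H0 = 67.4e3 / Mpc   # Hubble constant [1/s]
d  = v / H0         # Hubble's law: distance with recession speed v
print(f"distance from centre ~ {d / ly / 1e6:.0f} million light years")

M  = 1e53           # mass of the universe [kg] (order of magnitude, assumed)
Rs = 2 * G * M / c**2   # Schwarzschild radius
print(f"Schwarzschild radius ~ {Rs / ly / 1e9:.1f} billion light years")
```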

Physiological Effects on Scientist Astronaut Candidates: Hypobaric Training Assessment

This paper aims to expand our understanding of the effects of hypoxia training on our bodies, to better model its dynamics, and to leverage some of its implications and effects on human health. Hypoxia training is a recommended practice for military and civilian pilots that allows them to recognize their early hypoxia signs and symptoms; here it was undertaken by Scientist Astronaut Candidates (SACs) who underwent hypobaric hypoxia (HH) exposure as part of a training activity for prospective suborbital flight applications. This observational-analytical study describes the physiologic responses and symptoms experienced by a SAC group before, during, and after HH exposure and proposes a model for assessing predicted versus observed physiological responses. A group of individuals with diverse Science, Technology, Engineering, and Mathematics (STEM) backgrounds completed a hypobaric training session at an altitude of up to 22,000 ft (FL220), or 6,705 m, during which heart rate (HR), breathing rate (BR), and core temperature (Tc) were monitored with a chest strap sensor before and after HH exposure. A pulse oximeter registered oxygen saturation (SpO2) levels and the number and duration of desaturations during the HH chamber flight. Hypoxia symptoms described by the SACs during the HH training session were also registered. These data allowed the generation of a preliminary predictive model of the oxygen desaturation and O2 pressure curve for each subject, consisting of a sixth-order polynomial fit during exposure and a fifth- or fourth-order polynomial fit during recovery. Data analysis showed no significant differences in HR and BR between pre- and post-HH exposure for most of the SACs, while Tc showed slight but consistent decreases. All subjects registered SpO2 greater than 94% for the majority of their individual HH exposures, but all of them presented at least one clinically significant desaturation (SpO2 < 85% for more than 5 seconds), and half of the individuals showed SpO2 below 87% for at least 30% of their HH exposure time. Real-time collection of HH symptoms identified temperature somatosensory perceptions (SP) in 65% of individuals and task-focus issues in 52.5% of individuals as the most common HH indications. 95% of the subjects experienced HH onset symptoms below FL180, and all participants achieved full recovery from HH symptoms within 1 minute of donning their O2 masks. The current HH study performed on this group suggests a rapid and fully reversible physiologic response after HH exposure, as expected from previous studies. Our data showed consistent results between predicted and observed SpO2 curves during HH, suggesting a mathematical function that may be used to model HH performance deficiencies. Real-time registration of HH symptoms provided evidence of SP and task-focus issues as the earliest and most common indicators. Finally, an assessment of HH signs and symptoms in a heterogeneous group of non-pilot individuals showed results similar to previous studies in homogeneous populations of pilots.
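
A minimal sketch of the stated curve-fitting approach: fit polynomials of the orders named above to an SpO2 trace. The trace here is synthetic, standing in for a subject's recorded data.

```python
import numpy as np

# Illustrative SpO2 trace for one subject (assumed values): desaturation
# during exposure, then recovery after donning the O2 mask.
t_exp = np.linspace(0, 10, 60)  # minutes at altitude
spo2_exp = 98 - 1.2 * t_exp + 0.4 * np.random.default_rng(2).normal(size=60)
t_rec = np.linspace(0, 1, 12)   # minutes on oxygen
spo2_rec = 86 + 12 * (1 - np.exp(-4 * t_rec))

# Polynomial models of the abstract's stated orders: 6th during exposure,
# 5th (or 4th) during recovery.
p_exp = np.polynomial.Polynomial.fit(t_exp, spo2_exp, deg=6)
p_rec = np.polynomial.Polynomial.fit(t_rec, spo2_rec, deg=5)

print("predicted SpO2 after 8 min of exposure:", round(p_exp(8.0), 1))
print("predicted SpO2 after 30 s of recovery:", round(p_rec(0.5), 1))
```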

Using Lagrange Equations to Study the Relative Motion of a Mechanism

The relative motion of a robotic arm formed by homogeneous bars of different lengths and masses, hinged to one another, is investigated. The first bar of the mechanism is articulated on a platform, which in the first case is considered fixed on the surface of the Earth, while in the second case the platform rotates with respect to the Earth. For both cases the equations of motion are determined using the Lagrangian formalism, applied in its traditional form, valid with respect to an inertial reference frame conventionally considered fixed. In the second case, however, a generalized form of the formalism, valid with respect to a non-inertial reference frame, is also applied. The numerical calculations were performed using a MATLAB program.
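
A sketch of the traditional (inertial-frame) Lagrangian derivation for a two-bar arm on a fixed platform, using symbolic computation in place of the paper's MATLAB program; the generalized coordinates q1, q2 are the bar angles from the downward vertical, and all symbols are generic.

```python
import sympy as sp
from sympy.physics.mechanics import dynamicsymbols

t = sp.Symbol('t')
m1, m2, l1, l2, g = sp.symbols('m1 m2 l1 l2 g', positive=True)
q1, q2 = dynamicsymbols('q1 q2')

# Centres of mass of the two homogeneous bars, hinged in series
x1, y1 = l1/2*sp.sin(q1), -l1/2*sp.cos(q1)
x2 = l1*sp.sin(q1) + l2/2*sp.sin(q2)
y2 = -l1*sp.cos(q1) - l2/2*sp.cos(q2)

# Kinetic energy: translation of each COM + rotation about it (I = m*l^2/12)
v1sq = sp.diff(x1, t)**2 + sp.diff(y1, t)**2
v2sq = sp.diff(x2, t)**2 + sp.diff(y2, t)**2
T = (m1*v1sq + m2*v2sq)/2 \
    + (m1*l1**2/12)*sp.diff(q1, t)**2/2 \
    + (m2*l2**2/12)*sp.diff(q2, t)**2/2
V = m1*g*y1 + m2*g*y2
L = sp.simplify(T - V)

# Euler-Lagrange equations: d/dt(dL/dq') - dL/dq = 0
for q in (q1, q2):
    eq = sp.diff(sp.diff(L, sp.diff(q, t)), t) - sp.diff(L, q)
    sp.pprint(sp.simplify(eq))
```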

Mapping the Digital Landscape: An Analysis of Party Differences between Conventional and Digital Policy Positions

Although digitization is a buzzword in almost every election campaign, political parties leave voters largely in the dark about their specific positions on digital issues. In the run-up to the 2019 elections in Switzerland, the 'Digitization Monitor' project (DMP) was launched in order to change this situation. Within the framework of the DMP, all 4,736 candidates were surveyed about their digital policy positions and values. The DMP is designed as a digital policy supplement to the existing 'smartvote' voting advice application. This enabled a direct comparison of the digital policy attitudes captured by the DMP with the topics of the 'smartvote' questionnaire, which are comprehensive in content but mainly related to conventional policy areas. This paper's main research goal is to analyze and visualize possible differences between conventional and digital policy areas in terms of response patterns between and within political parties. The analysis is based on dimensionality reduction methods (multidimensional scaling and principal component analysis) for the visualization of inter-party differences, and on the standard deviation as a measure of variation for the evaluation of intra-party unity. The results reveal that digital issues show a lower degree of inter-party polarization than conventional policy areas; the parties thus have more common ground on digitization than on conventional policy. In contrast, the study reveals a mixed picture regarding intra-party unity: parties that are homogeneous in conventional areas show a lower degree of unity on digitization issues, whereas parties with heterogeneous positions in conventional areas are more united in digital areas. All things considered, the findings are encouraging, as less polarized conditions apply to the debate on digital development than to conventional politics. For the future, it would be desirable for similar projects to the DMP to emerge in other countries to broaden the basis for conclusions.
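
A hedged sketch of the analysis pipeline on synthetic data: PCA and MDS project a candidate-by-question response matrix to two dimensions for inter-party comparison, and the within-party standard deviation measures intra-party unity. The parties and responses below are invented, not the smartvote/DMP data.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import MDS

rng = np.random.default_rng(3)
# Synthetic candidate-by-question response matrix: rows = candidates,
# columns = policy questions, responses coded from 0 (disagree) to 1 (agree)
n_candidates, n_questions = 200, 40
party = rng.integers(0, 5, n_candidates)  # 5 hypothetical parties
X = np.clip(party[:, None] / 4
            + rng.normal(0, 0.2, (n_candidates, n_questions)), 0, 1)

# Inter-party differences: project candidates to 2D with PCA and MDS
xy_pca = PCA(n_components=2).fit_transform(X)
xy_mds = MDS(n_components=2, random_state=0).fit_transform(X)

# Intra-party unity: mean standard deviation of responses within each party
for p in range(5):
    unity = X[party == p].std(axis=0).mean()
    print(f"party {p}: mean within-party SD = {unity:.3f}")
```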

Numerical Simulation of Different Configurations for a Combined Gasification/Carbonization Reactor

Gasification and carbonization are two of the most common routes for biomass utilization. Both processes consume part of the waste in order to proceed: through incomplete combustion in gasification and as heating fuel in carbonization. The focus of this paper is to minimize the part of the waste used to heat the biomass for gasification and carbonization. This is achieved by combining the gasifier and the carbonization reactor in a single unit, so that the heat in the product biogas is used to heat the wastes in the carbonization reactor. Three different designs are proposed for the combined gasification/carbonization (CGC) reactor: a parallel combination of two gasifiers and a syngas-heated carbonizer; a carbonizer with a combustion chamber; and one gasifier with a carbonizer and combustion chamber. They are tested numerically using ANSYS Fluent Computational Fluid Dynamics software to assess the homogeneity of the temperature distribution inside the carbonization part of the CGC reactor. 2D simulations are performed for the three cases after establishing mesh-size- and time-step-independent solutions. The carbonization part is common among the three cases; the difference among them is how this carbonization reactor is heated. The simulation results showed that the first design provides only a partially homogeneous temperature distribution, not one across the whole reactor. This means that the amount of carbonized biomass produced is reduced, as the biomass can only fill a specified height of the reactor. To keep the carbonized product output high, a series combination is proposed. This series configuration resulted in a uniform temperature distribution across the whole reactor, as it has only one heat source and no temperature variation on any surface of the carbonization section. The simulations thus indicate that either the parallel combination of gasifier and carbonization reactor can be used, with a reduced carbonized amount, or a series configuration can be used to keep the production rate high.
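
One common way to quantify the temperature homogeneity the simulations are checked for is the area-weighted uniformity index used in CFD post-processing; the sketch below applies it to invented cell data, not the paper's Fluent fields.

```python
import numpy as np

# Area-weighted uniformity index:
# gamma = 1 - sum(A_i*|T_i - T_mean|) / (2*T_mean*sum(A_i))
# T and A below are illustrative cell values (assumed).
rng = np.random.default_rng(4)
T = rng.normal(750.0, 40.0, 500)   # cell temperatures [K]
A = rng.uniform(0.5, 1.5, 500)     # cell areas [m^2]

T_mean = np.sum(A * T) / np.sum(A)  # area-weighted mean temperature
gamma = 1 - np.sum(A * np.abs(T - T_mean)) / (2 * T_mean * np.sum(A))
print(f"T_mean = {T_mean:.0f} K, uniformity index = {gamma:.4f}")
```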

Pure and Mixed Nash Equilibria Domain of a Discrete Game Model with Dichotomous Strategy Space

We present a discrete game-theoretical model with homogeneous individuals who make simultaneous decisions. In this model the strategy space of all individuals is a discrete, dichotomous set consisting of two strategies. We fully characterize the coherent, split, and mixed strategies that form Nash equilibria, and we determine the corresponding Nash domains for all individuals. We find all strategic thresholds at which individuals can change their minds when small perturbations in the parameters of the model occur.
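
A minimal sketch of the equilibrium characterization for homogeneous players with a dichotomous strategy set: since payoffs depend only on a player's own choice and on how many of the others chose strategy 1, pure Nash profiles can be enumerated directly and classified as coherent (all alike) or split. The payoff function is an assumed example, not the paper's model.

```python
from itertools import product

# Assumed example payoff for a homogeneous player: own strategy in {0, 1},
# plus the number of the *other* players who chose strategy 1.
def payoff(own, n_others_playing_1):
    return own * (2.0 - 0.5 * n_others_playing_1) + (1 - own) * 1.0

n = 4  # number of players
for profile in product((0, 1), repeat=n):
    # Nash: no player gains by unilaterally switching strategy
    is_nash = all(
        payoff(s, sum(profile) - s) >= payoff(1 - s, sum(profile) - s)
        for s in profile
    )
    if is_nash:
        kind = "coherent" if len(set(profile)) == 1 else "split"
        print(profile, "->", kind, "Nash equilibrium")
```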

Validity of the Conception of the Universe's Structure as Nested Vortexes

This paper introduces the Nested Vortexes conception of the structure of the universe and interprets physical phenomena according to this conception. The paper first reviews recent physics theories, at both the microscopic and the macroscopic scale, to collect evidence that space is not empty. These theories, however, describe the properties of the space medium without determining its structure, and determining the structure of the space medium is essential to understanding the mechanisms behind its properties. Without it, many phenomena, such as electric and magnetic fields, gravity, and wave-particle duality, remain uninterpreted. This paper therefore introduces a conception of the structure of the universe. It assumes that the universe is a medium of ultra-tiny, homogeneous, still undiscovered particles. As in any medium subject to certain movements, possibly because of a great asymmetric explosion, vortexes have occurred. A vortex condenses the ultra-tiny particles at its centre, forming a bigger particle; the bigger particles, in turn, can be trapped in a bigger vortex and condense at its centre, forming a much bigger particle, and so on. This conception describes galaxies, stars, and protons as particles at different levels. The existence of the particles' vortexes implies that the postulate of the constancy of the speed of light does not hold. The conception shows that vortex dynamics agree with the motion of the particles of the universe at every level. An experiment was carried out to detect the orbiting effect of the aggregated vortexes of the aligned atoms of a permanent magnet. Based on the described particle structure, the gravity of a particle and the attraction between particles, as well as charge, electric and magnetic fields, and quantum-mechanical characteristics, are interpreted, and the aforementioned physics phenomena are thereby addressed.

Investigations on the Seismic Performance of Hot-Finished Hollow Steel Sections

In seismic applications, hollow steel sections show, beyond their undeniable esthetic appeal, promising structural advantages since, unlike their open-section counterparts, they are not susceptible to weak-axis or lateral-torsional buckling. In particular, hot-finished hollow steel sections have homogeneous material properties and favorable ductility but have been underutilized in cyclic bending. The main reason is that the parameters affecting their hysteretic behavior are not yet well understood and, consequently, are not well exploited in existing codes of practice. Therefore, experimental investigations have been conducted on a wide range of hot-finished rectangular hollow section beams with the aim of providing basic knowledge for evaluating their seismic performance. The section geometry (width-to-thickness and depth-to-thickness ratios) and the type of loading (monotonic and cyclic) were chosen as the key parameters to investigate the effect of cyclic loading on rotational capacity and to highlight the differences between monotonic and cyclic load conditions. The test results provide information on the parameters that affect the cyclic performance of hot-finished hollow steel beams and can be used to assess the design provisions stipulated in current seismic codes of practice.

Characterizing the Geometry of Envy Human Behaviour Using a Game Theory Model with Two Types of Homogeneous Players

An envy behavioral game-theoretical model with two types of homogeneous players is considered in this paper. The strategy space of each type of player is a discrete set with only two alternatives, and the preferences of each type of player are given by a discrete utility function. All envy strategies that form Nash equilibria, and the corresponding envy Nash domains for each type of player, are characterized. We use geometry to construct two-dimensional envy tilings, where the horizontal axis reflects the preference of players of type one and the vertical axis reflects the preference of players of type two. The influence of the envy behavior parameters on the Cartesian position of the equilibria is studied, and in each envy tiling we determine the envy Nash equilibria. We observe that there are 1024 combinatorial classes of envy tilings generated from envy chromosomes: 256 of them are structurally stable, while 768 exhibit bifurcation. Finally, some conditions for the disparate envy Nash equilibria are stated.

Parametric Knowledge in Linguistic Structure

The linguistic and conceptual systems exhibit a tight relationship, considering that words are access sites to conceptual structure. However, linguistic and conceptual structures seem to combine into a sort of homogeneous system, which makes the distinction between them fuzzy. This article explores the possibility of positing a type of schematic linguistic content that is unique to the linguistic system. This linguistic content comes in the form of lexical concepts and linguistic parameters. These notions shed some light on the parametric linguistic knowledge that might be encoded in and externalized via language. This, in turn, could be the feature of language that differentiates it from the closely related conceptual system.

Comparison of Two-Phase Critical Flow Models for Estimation of Leak Flow Rate through Cracks

The estimation of leak flow rates through narrow cracks in structures is important for nuclear reactor safety, since a leak can then be detected before a loss-of-coolant accident occurs. The two-phase critical leak flow rates are calculated using a system analysis code, and two representative non-homogeneous critical flow models, the Henry-Fauske model and the Ransom-Trapp model, are compared. The pressure decrease and vapor generation in the crack, and the resulting leak flow rates, are found to be larger for the Henry-Fauske model. It is shown that the leak flow rates are not affected by the structural temperature but are strongly affected by the roughness of the crack surface.

Modified Hybrid Genetic Algorithm-Based Artificial Neural Network Application to Wall Shear Stress Prediction

The prediction of wall shear stress in a rectangular channel with a non-homogeneous roughness distribution was studied. Estimating shear stress is an important subject in hydraulic engineering, since it directly affects the flow structure. In this study, the Genetic Algorithm Artificial (GAA) neural network is introduced as a hybrid of the Artificial Neural Network (ANN) and a modified Genetic Algorithm (GA). This GAA method was employed to predict the wall shear stress. Various input combinations and transfer functions were considered to find the most appropriate GAA model. The results show that the proposed GAA method can predict the wall shear stress of open channels with high accuracy, with a Root Mean Square Error (RMSE) of 0.064 on the test dataset. Thus, GAA provides an accurate and practical simple-to-use equation.
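
A minimal sketch of a GA/ANN hybrid of the kind described, under illustrative assumptions: a simple genetic algorithm (selection, uniform crossover, mutation) evolves the weights of a one-hidden-layer tanh network on synthetic data. It is not the paper's modified GA, and the data and settings are invented.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic dataset standing in for flow/geometry inputs and shear stress
X = rng.uniform(0, 1, (200, 3))
y = 0.5 * X[:, 0] + np.sin(3 * X[:, 1]) * X[:, 2]

H = 6                        # hidden neurons
n_w = 3 * H + H + H + 1      # weights + biases of the 3-H-1 network

def predict(w, X):
    W1 = w[:3*H].reshape(3, H); b1 = w[3*H:4*H]
    W2 = w[4*H:5*H];            b2 = w[5*H]
    hidden = np.tanh(X @ W1 + b1)   # tanh transfer function
    return hidden @ W2 + b2

def rmse(w):
    return np.sqrt(np.mean((predict(w, X) - y) ** 2))

pop = rng.normal(0, 1, (60, n_w))   # initial population of weight vectors
for gen in range(300):
    fitness = np.array([rmse(ind) for ind in pop])
    best = pop[np.argsort(fitness)[:20]]          # selection: keep best 20
    parents = best[rng.integers(0, 20, (40, 2))]
    mask = rng.random((40, n_w)) < 0.5            # uniform crossover
    children = np.where(mask, parents[:, 0], parents[:, 1])
    children += rng.normal(0, 0.1, children.shape) \
        * (rng.random(children.shape) < 0.1)      # sparse mutation
    pop = np.vstack([best, children])             # elitism + offspring

print("best RMSE:", round(min(rmse(ind) for ind in pop), 4))
```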