Relationship between Iron-Related Parameters and Soluble Tumor Necrosis Factor-Like Weak Inducer of Apoptosis in Obese Children

Iron is physiologically essential; however, it also participates in the catalysis of free radical formation, and its deficiency is associated with amplified health risks. This trace element is also linked to apoptosis, a physiological process related to cell death: both iron deficiency and iron overload are closely associated with apoptosis. Soluble tumor necrosis factor-like weak inducer of apoptosis (sTWEAK) can trigger apoptosis and plays a dual role in the physiological versus pathological inflammatory responses of tissues. The aim of this study was to investigate the status of these parameters, as well as the associations among them, in children with obesity, a low-grade inflammatory state. The study was performed on groups of children with normal body mass index (N-BMI) and with obesity, with 43 children in each group. Based upon the age- and sex-adjusted BMI percentile tables prepared by the World Health Organization, children whose values were between the 15th and 85th percentiles were included in the N-BMI group, and children whose BMI percentile values were between the 95th and 99th percentiles comprised the obese (OB) group. Institutional ethics committee approval and informed consent forms were obtained prior to the study. Anthropometric measurements (weight, height, waist circumference, hip circumference, head circumference, neck circumference) and blood pressure values (systolic and diastolic) were recorded. Routine biochemical analyses, including serum iron, total iron binding capacity (TIBC), transferrin saturation percent (Tf Sat %), and ferritin, were performed. sTWEAK levels were determined by enzyme-linked immunosorbent assay. Study data were evaluated using appropriate statistical tests in SPSS. Serum iron levels were 91 ± 34 μg/dL and 75 ± 31 μg/dL in N-BMI and OB children, respectively. The corresponding values for TIBC, Tf Sat %, and ferritin were 265 μg/dL vs. 299 μg/dL, 37.2 ± 19.1% vs. 26.7 ± 14.6%, and 41 ± 25 ng/mL vs. 44 ± 26 ng/mL. In the N-BMI and OB groups, sTWEAK concentrations were 351 ng/L and 325 ng/L, respectively (p > 0.05). Correlation analysis revealed significant associations between sTWEAK levels and iron-related parameters (p < 0.05), except ferritin. In conclusion, iron contributes to apoptosis; children with iron deficiency have a decreased apoptosis rate compared with healthy children, and sTWEAK is an inducer of apoptosis. OB children had lower levels of both iron and sTWEAK. Low levels of sTWEAK are associated with several types of cancer and poor survival. Although an iron deficiency state was not observed in this study, the correlations detected between decreased sTWEAK and decreased iron and Tf Sat % values were valuable findings that point to decreased apoptosis. This may induce a proinflammatory state, potentially leading to malignancies later in the lives of OB children.
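The correlation analysis reported above can be illustrated with a minimal sketch. The paired values below are hypothetical, not the study's data; the function simply computes the Pearson coefficient that such an analysis would report (a statistical package such as SPSS would also supply the p-value).

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical paired measurements (serum iron in ug/dL, sTWEAK in ng/L)
# for a handful of subjects -- illustrative values only, not study data.
iron = [60, 75, 82, 95, 110, 130]
stweak = [290, 310, 330, 345, 370, 400]
r = pearson_r(iron, stweak)  # positive r: the two variables rise together
```

A positive coefficient here is what "decreased sTWEAK correlating with decreased iron" looks like numerically.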

The Event of the World in Martin Heidegger’s Early Hermeneutical Phenomenology

The paper focuses on Heidegger’s early research of 1919-1920 in order to bring out his hermeneutical phenomenology of the life-world, arguing that the concept of world (Welt) is the main philosophical trigger for the phenomenology of factical life. Accordingly, the argument of the paper is twofold. First, the phenomenological hermeneutics of facticity is preceded both chronologically and philosophically by an original phenomenological investigation of the life-world, in which the world is construed as the context of the givenness of life. Second, the phenomenology of the life-world anticipates the question of being (Seinsfrage), but it also outlasts it: once the latter is shattered, the question of the world as event remains at the very core of Heidegger’s last meditations on the dominion of technology and the post-metaphysical abode of human beings on earth.

Assessments of Internal Erosion in a Landfill Due to Changes in Groundwater Level

Soil erosion has consequences for landfills that are more serious than those at conventional construction sites. A difference in potential head between the two sides of a landfill, and the resulting movement of water through the pores of the soil body, can trigger soil erosion and structural instability. Such a condition was encountered in a landfill project in southern Norway. To assess the risk of internal erosion due to changes in the groundwater level (caused by seasonal flooding of the river), a series of numerical simulations was conducted with the Geo-Seep software. The output of this study provides an overall picture of the landfill's stability, the possibilities of erosion, and the measures necessary for the landfill operator to prevent or reduce the risk.
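A common screening step behind such internal erosion assessments can be sketched as follows. This is not the Geo-Seep workflow; it is a hand check, under assumed values, comparing the average seepage gradient against Terzaghi's critical hydraulic gradient. All soil parameters and geometry below are illustrative.

```python
def critical_hydraulic_gradient(Gs, e):
    """Terzaghi's critical gradient for upward seepage:
    i_c = (Gs - 1) / (1 + e), with Gs the specific gravity of
    solids and e the void ratio."""
    return (Gs - 1.0) / (1.0 + e)

def erosion_risk(head_difference, seepage_path_length, Gs, e, fs=1.5):
    """Compare the average gradient i = dh / L against the critical
    gradient reduced by a factor of safety fs.  Returns the acting
    gradient, the allowable gradient, and a risk flag."""
    i = head_difference / seepage_path_length
    i_allow = critical_hydraulic_gradient(Gs, e) / fs
    return i, i_allow, i > i_allow

# Hypothetical landfill cross-section -- illustrative values only.
i, i_allow, at_risk = erosion_risk(
    head_difference=2.5,       # m, head rise during river flooding
    seepage_path_length=12.0,  # m, shortest seepage path
    Gs=2.65, e=0.6)            # assumed soil properties
```

A transient numerical model such as the one in the study refines this by resolving local exit gradients over time rather than a single average.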

Spatial Indeterminacy: Destabilization of Dichotomies in Modern and Contemporary Architecture

Since the advent of modern architecture, notions of the free plan and transparency have proliferated well into current trends. The movement’s notion of a spatially homogeneous, open, and limitless ‘free plan’ contrasts with the spatially heterogeneous ‘series of rooms’ defined by load-bearing walls, and it in turn triggered new notions of transparency created by vast expanses of glazed walls. Transparency was likewise dichotomized as something physical or optical on the one hand, and something conceptual, akin to spatial organization, on the other. Rather than merely accepting the duality and possible incompatibility of these dichotomies, this paper asks how space can be both literally and phenomenally transparent, and how it can exhibit both homogeneous and heterogeneous qualities. It explores this potential destabilization, or blurring, of spatial phenomena by dissecting the transparent layers and volumes of a series of selected case studies to investigate how different architects have devised strategies of spatial ambiguity and interpenetration. Projects by Peter Eisenman, Sou Fujimoto, and SANAA are discussed and analyzed to show how the superimposition of geometries and spaces achieves different conditions of layering, transparency, and interstitiality. Their buildings are explored to reveal various innovative kinds of spatial interpenetration produced through the articulate relations of the elements of architecture, which challenge conventional perceptions of interior and exterior whereby visual homogeneity blurs with spatial heterogeneity. The results show how spatial conceptions such as interpenetration and transparency can subvert not only inside-outside dialectics, but can also produce multiple degrees of interiority within complex and indeterminate spatial dimensions in constant flux, as well as present alternative forms of social interaction.

Seismic Behavior and Loss Assessment of High-Rise Buildings with Light Gauge Steel-Concrete Hybrid Structure

The steel-concrete hybrid structure has been extensively employed in high-rise and super high-rise buildings. The light gauge steel-concrete hybrid structure, which combines a light gauge steel structure with a concrete hybrid structure, is a type of steel-concrete hybrid structure that possesses the advantages of both. The seismic behavior and loss assessment of three high-rise buildings with three different concrete hybrid structures were investigated with finite element software. The three concrete hybrid structures are the reinforced concrete column-steel beam (RC-S) hybrid structure, the concrete-filled steel tube column-steel beam (CFST-S) hybrid structure, and the tubed concrete column-steel beam (TC-S) hybrid structure. Nonlinear time-history analyses of the three high-rise buildings under 80 earthquakes were carried out. The simulations indicated that the seismic performance of all three buildings was good: under extremely rare earthquakes, their maximum inter-story drifts remained significantly lower than 1/50. The inter-story drift and floor acceleration of the building with the CFST-S hybrid structure were larger than those of the building with the RC-S hybrid structure, and smaller than those of the building with the TC-S hybrid structure. Then, based on the time-history analysis results, the post-earthquake repair cost ratio and repair time of the three buildings were predicted with the economic performance analysis method proposed in the FEMA P-58 report. Under frequent, basic, and rare earthquakes, the repair cost ratios and repair times of the three buildings were less than 5% and 15 days, respectively. Under extremely rare earthquakes, the repair cost ratio and repair time of the building with the TC-S hybrid structure were the highest among the three. Given these advantages of the CFST-S hybrid structure, it could be extensively employed in high-rise buildings subjected to earthquake excitations.
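The FEMA P-58 loss computation mentioned above can be sketched at the component level. The fragility medians, dispersions, and repair costs below are hypothetical placeholders, not values from the study; the sketch only shows how lognormal fragility curves convert a demand (here, inter-story drift) into an expected repair cost.

```python
import math
from statistics import NormalDist

def frag(edp, theta, beta):
    """Lognormal fragility curve: probability of reaching or exceeding
    a damage state at demand level edp, with median theta and
    logarithmic standard deviation beta."""
    return NormalDist().cdf(math.log(edp / theta) / beta)

def expected_repair_cost(edp, fragilities, costs):
    """Expected component repair cost at one demand level.
    fragilities: [(theta, beta), ...] for DS1..DSn, increasing severity
    costs:       repair cost for each damage state
    """
    p = [frag(edp, t, b) for t, b in fragilities]  # P(DS >= i)
    p.append(0.0)
    # P(exactly DS i) = P(DS >= i) - P(DS >= i+1)
    return sum((p[i] - p[i + 1]) * costs[i] for i in range(len(costs)))

# Hypothetical drift-sensitive partition component -- illustrative only.
# Damage-state medians in inter-story drift ratio, beta = 0.5.
fragilities = [(0.004, 0.5), (0.010, 0.5), (0.021, 0.5)]
costs = [2_000.0, 8_000.0, 25_000.0]  # repair cost per damage state
loss_frequent = expected_repair_cost(0.003, fragilities, costs)
loss_rare = expected_repair_cost(0.015, fragilities, costs)
```

A full P-58 assessment repeats this over all components and Monte Carlo realizations, then aggregates into the building-level repair cost ratio and repair time reported in the abstract.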

Fighter Aircraft Selection Using Technique for Order Preference by Similarity to Ideal Solution with Multiple Criteria Decision Making Analysis

This paper presents a multiple criteria decision making analysis technique for selecting a fighter aircraft for a national air force. The selection of military aircraft is a process involving contradictory goals and objectives. For a modern air force that needs to choose fighter aircraft to upgrade its existing fleet, a multiple criteria decision making analysis combined with scenario planning for defense acquisition is put forward. The selection of fighter aircraft for the air defense force is a strategic decision making process, since the purchase or lease of fighter jets, together with maintenance and operating costs, represents the largest cost item for an air force. Multiple criteria decision making analysis methods are effectively applied to facilitate decision making among the available options. The selection criteria were determined from the literature on the fighter aircraft selection problem. The selection of fighter aircraft to be purchased for the air defense forces is handled with a multiple criteria decision making analysis technique that also provides a suitable methodological approach for the defense procurement and fleet upgrade planning process. The aim of this study is to develop an approach for evaluating the fighter aircraft alternatives Su-35, F-35, and TF-X (MMU) based on the technique for order preference by similarity to ideal solution (TOPSIS).
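The TOPSIS procedure named in the abstract follows a fixed sequence: normalize the decision matrix, weight it, locate the positive and negative ideal solutions, and rank alternatives by their relative closeness to the ideal. A minimal sketch, with purely illustrative scores and weights rather than the study's actual evaluation data:

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives with TOPSIS.

    matrix : (m, n) decision matrix, rows = alternatives, cols = criteria
    weights: (n,) criterion weights summing to 1
    benefit: (n,) booleans, True where larger values are better
    Returns closeness coefficients in [0, 1]; higher is better.
    """
    M = np.asarray(matrix, dtype=float)
    R = M / np.linalg.norm(M, axis=0)          # 1. vector normalization
    V = R * np.asarray(weights, dtype=float)   # 2. weighted matrix
    benefit = np.asarray(benefit, dtype=bool)
    pis = np.where(benefit, V.max(axis=0), V.min(axis=0))  # 3. ideals
    nis = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - pis, axis=1)    # 4. distances to ideals
    d_neg = np.linalg.norm(V - nis, axis=1)
    return d_neg / (d_pos + d_neg)             # 5. relative closeness

# Hypothetical scores for three alternatives over four criteria
# (performance, range, unit cost, maintenance cost) -- illustrative only.
scores = [[8, 7, 85, 6],
          [9, 6, 110, 7],
          [7, 8, 70, 5]]
weights = [0.35, 0.25, 0.25, 0.15]
benefit = [True, True, False, False]  # cost criteria are minimized
cc = topsis(scores, weights, benefit)
ranking = np.argsort(cc)[::-1]  # best alternative first
```

The alternative with the highest closeness coefficient is the TOPSIS recommendation; in the paper, the rows would be the Su-35, F-35, and TF-X (MMU) scored on the criteria drawn from the literature.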

Predicting the Lack of GDP Growth: A Logit Model for 40 Advanced and Developing Countries

This paper identifies the leading triggers of deficient episodes of GDP growth, based on a sample of countries at different stages of development over 1994-2017. Using logit models, we build early warning systems (EWS), and our results show important differences between developing countries (DCs) and advanced economies (AEs). For AEs, the main predictors of the probability of entering a deficient GDP growth episode are the deterioration of external imbalances and the vulnerability of the fiscal position, while DCs face different challenges: the key indicators for them are, first, a low ability to service their debts and, second, whether or not they belong to a common currency area. We also build homogeneous pools of countries within the AEs and DCs. For AEs, the evolution of the proportion of countries in the riskiest pool is marked, first, by three distinct peaks just after the burst of the high-tech bubble, the global financial crisis, and the European sovereign debt crisis, and, second, by a very low minimum level in 2006 and 2007. In contrast, the situation of DCs is characterized first by the relative stability of this proportion and then by an upward trend from 2006, which can be explained by a more unfavorable socio-political environment leading to shortcomings in fiscal consolidation.
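The core of such a logit-based EWS is the transformation of macro indicators into a probability that is then compared against an alert threshold. The coefficients, indicator choices, and threshold below are hypothetical, chosen only to illustrate the mechanics, not estimates from the paper:

```python
import math

def logit_probability(intercept, coefs, indicators):
    """Fitted logit model: p = 1 / (1 + exp(-(b0 + sum(b_i * x_i)))),
    the predicted probability of entering a deficient GDP growth
    episode given the current indicator values."""
    z = intercept + sum(b * x for b, x in zip(coefs, indicators))
    return 1.0 / (1.0 + math.exp(-z))

def warning_signal(prob, threshold=0.3):
    """Issue an early warning when the predicted probability exceeds
    a chosen alert threshold (set by trading off missed crises
    against false alarms)."""
    return prob >= threshold

# Hypothetical coefficients for an advanced economy, using current
# account balance (% GDP) and fiscal balance (% GDP) -- illustrative.
b0, coefs = -1.5, [-0.25, -0.30]
p_stable = logit_probability(b0, coefs, [2.0, 1.0])     # surpluses
p_stressed = logit_probability(b0, coefs, [-4.0, -5.0])  # twin deficits
```

Negative coefficients encode the paper's finding for AEs: as external and fiscal balances deteriorate (become more negative), the predicted probability of a deficient episode rises.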

Analyzing the Prospects and Challenges in Implementing the Legal Framework for Competition Regulation in Nigeria

Competition law promotes market competition by regulating anti-competitive conduct by undertakings. There is a need for a third party to regulate the market for efficiency and supervision, since, if the market is left unchecked, it may be skewed against consumers and the economy. Competition law is geared towards the protection of consumers from economic exploitation. It is the duty of every rational government to optimally manage its economic system by employing the best regulatory practices over the market to ensure it functions effectively and efficiently. The Nigerian government has done this by enacting the Federal Competition and Consumer Protection Act, 2018 (FCCPA), a comprehensive legal framework with the objective of governing competition issues in Nigeria. Prior to its enactment, the competition law regime in Nigeria was grossly inadequate, despite Nigeria being the biggest economy in Africa, and this latest legislation is a bold step in the right direction. This study uses the doctrinal methodology to analyze the FCCPA, 2018 in order to discover the extent to which the Act will guard against anti-competitive practices and promote competitive markets for the benefit of the Nigerian economy and consumers. The study finds that although the FCCPA, 2018 provides for the regulation of competition in Nigeria, the challenges to the implementation of the Act and to the development of anti-trust jurisprudence in Nigeria need to be tackled effectively. The study concludes that incisive implementation of competition law in Nigeria will help create a conducive environment for economic growth and development, and protect consumers from obnoxious competition practices.

Educational Path for Pedagogical Skills: A Football School Experience

The current pedagogical culture recognizes an educational scope within sports practice: it is widely accepted that, alongside the acquisition and development of motor skills, it is possible to exercise abilities that concern the way of facing and managing the difficulties of everyday life. Sport is a peculiar educational environment: children have the opportunity to discover the possibilities of their body, to relate to their peers, and to learn how to manage rules and the relationship with authorities, such as coaches. The educational aspects of sport concern both non-formal and formal educational environments. Coaches play a critical role in the agonistic sphere: just as children develop their competencies, coaches have to work on their own skills to properly set up the educational scene. Facing these educational tasks - not new per se, but new because they are brought back to awareness - a few questions arise: does the coach have adequate preparation? Is the training of the coach in this specific area appropriate? This contribution aims to explore the issue in depth by focusing on the reality of the Football School. Starting from a possible sense of pedagogical inadequacy detected during a series of meetings with several football clubs in Piedmont (Italy), some important educational needs within the professional training of sports coaches have been highlighted. It is indeed necessary for the coach to know the processes underlying the educational relationship, in order to better understand the centrality of assessment during the educational intervention and to be able to manage the asymmetry of the coach-athlete relationship. In order to respond to these pedagogical needs, a formative plan has been designed that allows both an in-depth study of educational issues and a correct self-evaluation, led by the coach, of the control levels of certain pedagogical skills.
This plan is based on particular practices, the Educational Practices of Pre-test (EPP), a specific version of community practices designed for extracurricular activities. These practices, realized through the use of texts meant as pre-tests, promoted reflection within the group of coaches: they set up real and plausible sports experiences - football in particular - triggering reflection on the object, spaces, and methods of the relationship. The characteristic aspect of pre-tests is that the reflection cannot be anticipated, as it is necessarily connected to personal experience and sensitivity, requiring strong interest and involvement from the participants: the situations must be considered by the coaches as possible settings in which they could find themselves on the field.

The Effect of Acrylic Gel Grouting on Groundwater in Porous Media

When digging excavations, groundwater-bearing layers are often encountered. In order to allow excavation in dry conditions, soil grouting is carried out to form a water-impermeable layer. As the grout is injected into groundwater areas, the effects of the materials used on the environment must be known. Developing an eco-friendly, economical, and low-viscosity acrylic gel with a sealing effect on groundwater is therefore a significant task, and this is where the present study begins. Basic investigations with a rheometer and a reverse column experiment were performed with different mixing ratios of an acrylic gel. A dynamic rheology study was conducted to determine the time up to which the gel can still be processed and the time at which the maximum gel strength is reached. To examine the effect of acrylic gel grouting on groundwater in terms of pH value, turbidity, electrical conductivity, and total organic carbon, an acrylic gel was injected into a saturated sand-filled column. The structure was rinsed with a constant flow and the eluate was subsequently examined. The results show small changes in pH value and turbidity, but a dependency between electrical conductivity and total organic carbon: the curves of the two parameters react at the same time, which means that the electrical conductivity in the eluate can be monitored continuously until its maximum is reached, and only then must total organic carbon (TOC) samples be taken.

Modelling for Roof Failure Analysis in an Underground Cave

Roof collapse remains one of the most frequent problems in mines around the world. Many factors may cause a roof to collapse, such as the stress changes induced by the mining process, lack of vigilance and carelessness, or the complexity of the geological structure and irregular operations. This work results from the analysis of an accident in the “Mary” coal exploitation located in northern Spain, in which the roof of a crossroad of galleries excavated to exploit the “Morena” layer, 700 m deep, collapsed. The paper presents the work done by the forensic team to determine the causes of the incident, together with its conclusions and recommendations. Initially, the available documentation (geology, geotechnics, mining, etc.) and the accident area were reviewed. After that, laboratory and on-site tests were carried out to characterize the behaviour of the rock materials and the support used (metal frames and shotcrete). With this information, different failure hypotheses were simulated to find the one that best fits reality; for this purpose, the three-dimensional finite difference software FLAC 3D was employed. The results of the study confirmed that the detachment originated from a slide along the layer wall, due to the large roof span present at the accident location, and was probably triggered by an insufficient protection pillar. The results allowed some corrective measures to be established to avoid future risks: for example, the dimensions of the protection zones that must remain unexploited and their interaction with the crossing areas between galleries, or the use of supports more adequate for these conditions, in which the significant deformations may discourage rigid supports such as shotcrete. Finally, a seismic control grid was proposed as a predictive system.
Its efficiency was tested over the investigation period using three monitoring units, which detected new (although smaller) incidents in other similar areas of the mine. These new incidents show that the use of explosives produces vibrations that constitute an additional risk factor to be analysed in the near future.

Optimization of the Dental Direct Digital Imaging by Applying the Self-Recognition Technology

This paper introduces a technology intended to solve some of the deficiencies of direct digital radiology. Digital radiology is the latest progression in dental imaging and has become an essential part of dentistry. Direct digital radiology comprises two main parts: an intraoral X-ray machine and a sensor (digital image receptor). Dentists and dental nurses experience difficulties during image acquisition with the direct digital X-ray machine. For instance, they sometimes need to readjust the sensor in the patient's mouth and take the X-ray image again because of its low quality. Another problem is that the sensor may move in the patient's mouth, producing an image that is unusable for the dentist. This makes the process time-consuming for dentists and dental nurses. Moreover, taking several X-ray images creates problems for the patient, such as harm to their health and pain in the mouth due to the pressure of the sensor on the jaw. The author proposes a technology, called “Self-Recognition Direct Digital Radiology” (SDDR), to solve the above-mentioned issues. This technology is based on the principle that the intraoral X-ray machine is capable of locating the sensor in the patient's mouth automatically. In addition to solving the aforementioned problems, SDDR technology has a smaller environmental impact than the previous approach.

Inflammatory Markers in the Blood and Chronic Periodontitis

Background: Plasma levels of inflammatory markers reflect the infectious burden of existing periodontitis, as well as inflammation anywhere else in the body. Materials and Methods: The study consists of the clinical measurement of inflammatory markers in 23 patients diagnosed with chronic periodontitis and the recording of periodontal parameters of patient periodontal status (hemorrhage index and probing values) before and 7-10 days after non-surgical periodontal treatment. Results: The level of fibrinogen drops according to the categorization of disease progression, active and passive, with the largest share (18%-30%) in the 10-20 mg/dL range. The fluctuations in fibrinogen level by patient age (under versus over 40 years) were 13%-26% in the 0-10 mg/dL range, 26%-22% in the 10-20 mg/dL range, and 9%-4% in the 20-40 mg/dL range. Conclusions: Non-surgical periodontal treatment significantly reduces the level of inflammatory markers in the blood. Oral health significantly reduces the potential reservoir of periodontal bacteria, which can promote thromboembolism through interaction with thrombocytes.

Physiological Effects during Aerobatic Flights on Science Astronaut Candidates

Spaceflight is considered the last frontier in terms of science, technology, and engineering, but it is also the next frontier in terms of human physiology and performance. After more than 200,000 years of evolution under Earth's gravity and atmospheric conditions, spaceflight poses environmental stresses for which human physiology is not adapted. Hypoxia, accelerations, and radiation are among such stressors. Our research involves suborbital flights and aims to develop effective countermeasures to assure a sustainable human presence in space. The physiologic baseline of spaceflight participants is subject to great variability driven by age, gender, fitness, and metabolic reserve. The objective of the present study is to characterize different physiologic variables in a population of STEM practitioners during aerobatic flight. Cardiovascular and pulmonary responses were determined in Science Astronaut Candidates (SACs) during unusual attitude aerobatic flight indoctrination. Physiologic data recordings from 20 subjects participating in high-G flight training were analyzed. These recordings were registered by a wearable sensor vest that monitored electrocardiographic tracings (ECGs) and signs of dysrhythmias or other electrical disturbances throughout the flight. The same cardiovascular parameters were also collected approximately 10 min pre-flight, during each high-G/unusual attitude maneuver, and 10 min after the flights. The ratio (pre-flight/in-flight/post-flight) of the cardiovascular responses was calculated to compare inter-individual differences. The resulting tracings of the subjects' cardiovascular responses were compared against the G-loads (Gs) during the aerobatic flights to analyze cardiovascular variability and fluid/pressure shifts due to the high Gs.
In-flight ECG revealed cardiac variability patterns associated with rapid G onset, in terms of reduced heart rate (HR) and some scattered dysrhythmic patterns (15% of premature ventricular contraction type); some were considered triggered physiological responses to high-G/unusual attitude training and some were considered instrument artifacts. Variation events were observed in subjects during the +Gz and -Gz maneuvers, possibly due to sudden shifts in preload and afterload. Our data reveal that aerobatic flight influenced the subjects' breathing rate, due in part to the various levels of energy expenditure associated with increased muscle work during the aerobatic maneuvers. Noteworthy was the high heterogeneity of physiological responses among a relatively small group of SACs exposed to similar aerobatic flights with similar G exposures. The cardiovascular responses clearly demonstrated that the SACs were subjected to significant flight stress. Routine ECG monitoring during high-G/unusual attitude flight training is recommended to capture the pathology underlying dangerous dysrhythmias, for the sake of suborbital flight safety. More research is being conducted to further facilitate the development of robust medical screening, medical risk assessment approaches, and suborbital flight training in the context of the evolving commercial human suborbital spaceflight industry. A more mature and integrative medical assessment method is required to understand the physiological state and response variability among highly diverse populations of prospective suborbital flight participants.

University Curriculum Policy Processes in Chile: A Case Study

Located within the context of accelerating globalization in the 21st-century knowledge society, this paper, part of a larger investigation, focuses on one selected university in Chile at which radical curriculum policy changes, diverging from the traditional undergraduate curriculum in Chile, have been taking place. Using a ‘policy trajectory’ framework and guided by an interpretivist approach to research, interview transcripts and institutional documents were analyzed in relation to the meso (university administration) and micro (academics) levels, to reveal the major themes emerging from the data. Within the case study, participants from the university administration and academic levels were selected through both snowball and purposive sampling; they therefore had different levels of seniority, with some having participated actively in the curriculum reform processes. A further ‘bigger picture’ analysis, guided by critical theory, was then undertaken, involving interrogation of underlying ideologies and of how political and economic interests influence the cultural production of policy. The case-study university was selected because it represents a traditional, long-established university setting in the country undergoing curriculum changes based on international trends such as the competency model and the liberal arts, and because it is representative of a particular socioeconomic sector of the country. Access to the university was gained through email contact. Qualitative research methods were used, namely interviews and the analysis of institutional documents. In all, 18 people were interviewed, the number being defined by when the saturation criterion was met. Semi-structured interview schedules were based on the four research questions about influences, policy texts, policy enactment, and longer-term outcomes.
Triangulation of information was used for the analysis. While there was no intention to generalize the specific findings of the case study, the results of the research were used as a focus for engagement with broader themes often evident in global higher education policy developments. The research results were organized around major themes in three of the four contexts of the ‘policy trajectory’. Regarding the context of influences and the context of policy text production, themes relate to the hegemony exercised by first-world countries’ universities in the higher education field and its associated neoliberal ideology, with accountability and the discourse of continuous improvement, the local responses to those pressures, and the value of interdisciplinarity. Finally, regarding the context of policy practices and effects (enactment), themes emerged around the impacts of the curriculum changes on university staff and students, and around resistance amongst academics. The research concluded with a few recommendations that potentially provide ‘food for thought’ beyond the localized settings of this study, as well as possibilities for further research.

Validity of Universe Structure Conception as Nested Vortexes

This paper introduces the Nested Vortexes conception of the structure of the universe and interprets physical phenomena according to this conception. The paper first reviews recent physics theories, at both microscopic and macroscopic scales, to collect evidence that space is not empty. These theories, however, describe the properties of the space medium without determining its structure, and determining the structure of the space medium is essential to understand the mechanisms that lead to its properties. Without it, many phenomena, such as electric and magnetic fields, gravity, and wave-particle duality, remain uninterpreted. Thus, this paper introduces a conception of the structure of the universe. It assumes that the universe is a medium of ultra-tiny homogeneous particles that are still undiscovered. As in any medium subjected to certain movements, possibly because of a great asymmetric explosion, vortexes have occurred. A vortex condenses the ultra-tiny particles in its center, forming a bigger particle; the bigger particles, in turn, can be trapped in a bigger vortex and condense in its center, forming a much bigger particle, and so on. This conception describes galaxies, stars, and protons as particles at different levels. According to this conception, the existence of particle vortexes implies that the postulate of the constancy of the speed of light does not hold. The conception shows that vortex motion dynamics agree with the motion of universe particles at any level. An experiment was carried out to detect the orbiting effect of the aggregated vortexes of the aligned atoms of a permanent magnet. Based on the described particle structure, the gravity of a particle and the attraction between particles, as well as charge, electric and magnetic fields, and quantum mechanical characteristics, are interpreted.

A Comparison of Energy Calculations for a Single-Family Detached Home with Two Energy Simulation Methods

For newly built houses and for energy renovations, an energy calculation needs to be conducted to verify that the house's energy consumption is in line with the norms set to reach the 2020 and 2050 energy targets. The main purpose of this study is to confirm whether the easy-to-use energy calculation software or hand calculations used by small companies and individuals give reasonable results compared to the advanced energy simulation programs used by researchers and larger companies. There are different methods for calculating energy consumption. In this paper, two energy calculation programs are used, and the relation between energy consumption and solar radiation is compared. A hand calculation is also performed to check whether hand calculations are still reasonable. The two computer programs used are TMF Energi (the easy-to-use variant, typical of small companies and individuals) and IDA ICE - Indoor Climate and Energy (the advanced simulation program, typical of researchers and larger companies). The calculations are done for a standard house from the Swedish house supplier Fiskarhedenvillan. The method is based on using the same conditions and inputs in the different calculation tools so that the results can be compared and verified. The house was oriented in different directions to see how orientation affects the calculated energy consumption in each method. The simulation results are close to each other, and the hand calculation differs from the computer programs by only 5%. Even though solar factors differ with the orientation of the house, the energy calculation results from the different computer programs, and even from the hand calculation method, are in line with each other.
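The abstract does not reproduce the hand calculation itself. As a purely illustrative sketch of the kind of steady-state degree-hour balance such hand methods use, the following estimates annual heating demand; the UA value, ventilation flow, and degree-hours are invented example numbers, not data for the Fiskarhedenvillan house:

```python
# Illustrative hand-style energy calculation: annual heat demand from
# transmission and ventilation losses, using a degree-hour method.
# All parameter values are hypothetical examples, not data from the study.

def annual_heating_demand_kwh(ua_w_per_k, vent_flow_m3_s, degree_hours_kkh):
    """Q = (UA + rho*cp*q_vent) * degree-hours.

    ua_w_per_k:      envelope transmission loss coefficient UA [W/K]
    vent_flow_m3_s:  ventilation airflow [m3/s]
    degree_hours_kkh: annual heating degree-hours [kKh = 1000 K*h]
    """
    RHO_CP_AIR = 1200.0  # volumetric heat capacity of air [J/(m3*K)] ~ [Ws/(m3*K)]
    total_loss_w_per_k = ua_w_per_k + RHO_CP_AIR * vent_flow_m3_s
    # (W/K) * (1000 K*h) = 1000 Wh = kWh, so the product is already in kWh
    return total_loss_w_per_k * degree_hours_kkh

# Example: UA = 150 W/K, 0.05 m3/s ventilation, 100 kKh heating season
demand = annual_heating_demand_kwh(150.0, 0.05, 100.0)
print(round(demand))  # annual heating demand in kWh
```

A detailed simulation such as IDA ICE adds solar gains, internal gains, and dynamic effects on top of a balance like this, which is why orientation changes its results more than a hand method's.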

A Neuroscience-Based Learning Technique: Framework and Application to STEM

Existing learning techniques, such as problem-based learning, project-based learning, and case study learning, focus mainly on technical details but give no specific guidelines on the learner's experience or on emotional aspects of learning, such as arousal, salience, and valence, even though emotional states are important factors affecting engagement and retention. Some approaches that involve emotion in educational settings, such as social and emotional learning, lack neuroscientific rigor and make no use of specific neurobiological mechanisms. Neurobiological approaches, on the other hand, lack educational applicability, while educational approaches mainly focus on cognitive aspects and disregard conditioning-based learning. The authors first explain why it is hard to learn thoughtfully; they then use neurobiological mapping to track the main limbic system functions, such as the reward circuit, and their relations with perception, memories, motivations, sympathetic and parasympathetic reactions, and sensations, as well as with the brain cortex. The authors conclude by explaining the major finding: the mechanisms of nonconscious learning and the triggers that support long-term memory potentiation. Afterward, an educational framework for practical application and guidelines for instructors are established. An implementation example in engineering education is given, namely the study of tuned mass dampers for attenuating earthquake-induced oscillations in skyscrapers. This work presents an original learning technique based on nonconscious learning mechanisms to build long-term memories, complementing existing cognitive learning methods.

CybeRisk Management in Banks: An Italian Case Study

The financial sector is exposed to the risk of cyber-attacks like any other industrial sector. The topic of CybeRisk (cyber risk) has become particularly relevant given that Information Technology (IT) attacks have increased drastically in recent years and cannot be stopped by single organizations, requiring a response at the international and national levels. IT risk is never a matter purely for the IT manager, although the IT manager clearly plays a key role. A bank's risk management function requires a thorough understanding of the evolving risks as well as of the tools and practical techniques available to address them. In response to European and national legislation on CybeRisk in the financial system, banks are called upon to strengthen their operational model for CybeRisk management. This will require significant change, with closer collaboration with the structures that deal with information security, in order to develop an ad hoc system for the evaluation and control of this type of risk. The aim of this work is to propose a framework for the management and control of CybeRisk that bridges the gap in the literature regarding the understanding and consideration of CybeRisk as an integral part of business management. The IT function has strong relevance in the management of CybeRisk, which is perceived mainly as operational risk, but risk management shows a positive tendency toward CybeRisk assessment methods that are increasingly complete, quantitative, and able to better describe the possible impacts on the business. The paper provides answers to the research questions: Is it possible to define a CybeRisk governance structure able to support the comparison between risk and security? How can the relationships between IT assets be integrated into a CybeRisk assessment framework to guarantee a system of protection and risk control?
From a methodological point of view, this research uses a case study approach. The choice of “Monte dei Paschi di Siena” was determined by the specific features of one of Italy’s biggest lenders. An intensive research strategy was chosen: an in-depth study of a real setting. The case study methodology is an empirical approach for exploring a complex, current phenomenon that develops over time. The use of cases also has the advantage of allowing a deeper examination of the "how" and "why" of contemporary events over which the scholar has little control. The research is based on quantitative data and on qualitative information obtained through semi-structured, open-ended interviews and questionnaires administered to directors; members of the audit committee; risk, IT, and compliance managers; and those responsible for the internal audit and anti-money-laundering functions. The added value of the paper lies in the development of a framework based on a mapping of IT assets from which their relationships can be identified, enabling more effective management and control of cyber risk.
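The asset-mapping idea behind the proposed framework can be pictured, in a minimal and purely illustrative sketch, as a directed dependency graph that is traversed to find which services a compromised asset exposes. The asset names, the graph structure, and the traversal are invented for illustration and are not taken from the case study:

```python
# Minimal sketch of an IT-asset dependency map for cyber-risk control.
# Asset names and relationships are invented for illustration only.

from collections import defaultdict

dependencies = {                       # asset -> assets it depends on
    "online_banking": ["core_banking", "auth_service"],
    "core_banking": ["db_cluster"],
    "auth_service": ["db_cluster"],
    "db_cluster": [],
}

def exposed_assets(compromised):
    """Return every asset whose service chain includes the compromised asset."""
    # Invert the map: for each asset, who depends on it
    reverse = defaultdict(list)
    for asset, deps in dependencies.items():
        for dep in deps:
            reverse[dep].append(asset)
    # Walk the reversed edges to collect all transitively affected assets
    exposed, stack = set(), [compromised]
    while stack:
        node = stack.pop()
        for dependant in reverse[node]:
            if dependant not in exposed:
                exposed.add(dependant)
                stack.append(dependant)
    return exposed

print(sorted(exposed_assets("db_cluster")))
# -> ['auth_service', 'core_banking', 'online_banking']
```

Mapping of this kind makes the impact of a single compromised asset explicit, which is the prerequisite for the quantitative CybeRisk assessment the paper argues for.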

Developing Improvements to Multi-Hazard Risk Assessments

This paper outlines approaches to multi-hazard risk assessment. There is currently confusion in assessing multi-hazard impacts, so this study aims to determine which of the available options are the most useful. The paper uses an international literature search, an analysis of current multi-hazard assessments, and a case study to illustrate the effectiveness of the chosen method. Findings from this study will help those wanting to assess multi-hazards to follow a straightforward approach. The paper is significant in that it helps to interpret the various approaches and concludes with a preferred method. Many people in the world live in hazardous environments and are susceptible to disasters. Unfortunately, when a disaster strikes it is often compounded by additional cascading hazards, so that people confront more than one hazard simultaneously. Hazards include natural hazards (earthquakes, floods, etc.) and cascading human-made hazards (for example, natural hazards triggering technological disasters (Natech), such as fire, explosion, or toxic release). Multi-hazards have a more destructive impact on urban areas than any single hazard alone. In addition, climate change is creating links between different disasters, for example by causing landslide dams and debris flows, leading to more destructive incidents. Much of the prevailing literature deals with only one hazard at a time; however, sophisticated multi-hazard assessments have recently started to appear. Given that multi-hazards occur, it is essential to take multi-hazard risk assessment into consideration. This paper reviews the multi-hazard assessment methods in the articles published to date and categorizes the strengths and weaknesses of using these methods in risk assessment. Napier City is selected as a case study to demonstrate the necessity of multi-hazard risk assessment.
To assess multi-hazard risk assessments, the current methods were first described; next, their drawbacks were outlined; finally, the improvements made to current multi-hazard risk assessments to date were summarised. Generally, the main problem in multi-hazard risk assessment is making valid assumptions about the risk arising from the interactions of different hazards. Risk assessment studies have begun to address multi-hazard situations, but drawbacks such as uncertainty and lack of data show the need for more precise risk assessment. It should be noted that ignoring, or only partially considering, multi-hazards in risk assessment will lead to risks being overestimated or overlooked in resilience and recovery management.
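The interaction problem described above can be illustrated with a toy calculation comparing a single-hazard estimate with one that accounts for a cascade; all probabilities are invented for illustration, and real assessments use site-specific hazard data:

```python
# Toy illustration of why hazard interactions matter in multi-hazard
# risk assessment. All probabilities are invented for illustration.

p_quake = 0.02                  # annual probability of a damaging earthquake
p_landslide_baseline = 0.01     # landslide probability in a quake-free year
p_landslide_given_quake = 0.30  # cascade: a quake can trigger landslides

# Single-hazard view: landslide risk assessed in isolation
p_single = p_landslide_baseline

# Multi-hazard view: total landslide probability including the cascade,
# by conditioning on whether a quake occurs (law of total probability)
p_multi = (p_quake * p_landslide_given_quake
           + (1 - p_quake) * p_landslide_baseline)

print(p_single)           # 0.01
print(round(p_multi, 4))  # 0.0158 -> the cascade raises the estimate ~60%
```

Even with these small numbers, ignoring the interaction understates the landslide hazard by more than half, which is the kind of error the reviewed assessment methods try to avoid.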