Stating Best Commercialization Method: An Unanswered Question from Scholars and Practitioners

A commercialization method is a means of making inventions available on the market for final consumption. It is described as an important tool for keeping business enterprises sustainable and for improving national economic growth, and numerous scholarly publications accordingly present or test different commercialization methods. Young entrepreneurs, technologists and scientists, however, want to know the best method for commercializing their innovations, which raises the question: What is the best commercialization method? To answer it, a systematic literature review was conducted and practitioners were interviewed. The literature review revealed that many methods exist but that new methods are needed to improve commercialization, especially in times of economic crisis and political uncertainty. Similarly, the empirical results showed that several methods are available, but that the best method is the one that reduces costs, reduces the risks associated with uncertainty, and improves customer participation and acceptance. It was therefore concluded that a new commercialization method is essential for today's high technologies, and such a method is presented.

Japan’s Challenges in Managing Resources and Implementing Strategies toward Sustainability

Japan’s strategy is based on improving resource use and productivity by identifying the environmental challenges that must be overcome to progress in many areas, for example in understanding competitive challenges in industry and emerging innovation. The present study examines the characteristics of sustainable practices that rely on longer-lasting materials and compliance with environmental policies. Recycling and preserving the environment have received major emphasis since the 1990s. Furthermore, the paper analyses how national policy interest increases resource productivity; the principle is universal, but the resulting actions differ according to each country’s particular situation. In addition, the study reviews some of the strategies developed by the Environmental Agency of Japan in recent years, including the ‘Strategy for an Environmental Nation in the 21st Century’ from 2001, the ‘Clean Asia Initiative’ from 2008, and the ‘New Growth Strategy’ from 2010. The paper also highlights the emphasis on increasing efficiency, an important component of sustainability. We conclude by arguing that reducing production and consumption benefits the environment, resulting in a productive and progressive Japan in the near and long term.

Bone Mineral Density and Quality, Body Composition of Women in the Postmenopausal Period

Bone mineral density is considered the gold standard in the diagnosis of osteoporosis; however, X-ray densitometry is not an accurate indicator of osteoporotic fracture risk under all circumstances. The search for new methods that capture not only mineral density but also bone tissue quality is therefore a logical step towards diagnostic optimization. One such method is the evaluation of trabecular bone quality. The aim of this study was to examine the quality and mineral density of the spine and femoral neck bone tissue and the body composition of women as a function of the duration of the postmenopausal period, and to determine the correlation of body fat with indicators of bone mineral density and quality. The study examined 179 women in the premenopausal and postmenopausal periods. The patients were divided into the following groups: women in the premenopausal period and women at various stages of the postmenopausal period (early, middle and late postmenopause). A general examination and measurement of the above parameters were conducted with a General Electric X-ray densitometer. The results show that bone quality and mineral density probably deteriorate as the postmenopausal period advances. The ratio of total fat to lean mass is not likely to change with age. In the middle and late postmenopausal periods, the bone mineral density of the spine and femoral neck increases along with total fat mass.
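As a minimal sketch of the correlation analysis described above, the code below computes the Pearson correlation between total fat mass and spine bone mineral density; the variable names and sample values are hypothetical placeholders, not data from the study.

```python
import numpy as np

# Hypothetical measurements (kg of total fat mass, g/cm^2 of spine BMD);
# the study's actual data are not reproduced here.
fat_mass_kg = np.array([22.1, 28.4, 31.0, 25.7, 35.2, 29.8])
spine_bmd = np.array([0.91, 1.02, 1.05, 0.97, 1.10, 1.01])

# Pearson correlation coefficient between body fat and bone mineral density.
r = np.corrcoef(fat_mass_kg, spine_bmd)[0, 1]
print(f"Pearson r between fat mass and spine BMD: {r:.2f}")
```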

Lean Models Classification: Towards a Holistic View

The purpose of this paper is to present a classification of Lean models that aims to capture all the concepts related to the approach and thus facilitate its implementation. The classification allows the most relevant models to be identified according to several dimensions. From this perspective, we review and analyse the literature on Lean models and propose classification dimensions for the current proposals, covering among others the axes of the Lean approach, the maturity of the models and their application domains. The classification led us to conclude that researchers essentially treat the Lean approach as a toolbox and design their models to solve problems tied to a specific environment. Since the Lean approach is no longer intended only for the automotive sector, where it was invented, but for all fields (IT, hospitals, ...), we consider that it requires a generic model capable of being implemented in all areas.

Effect of Low Plastic Clay Quantity on Behavioral Characteristics of Loose Sand

After the 1964 Niigata earthquake in Japan, liquefaction and its related hazards moved to the centre of attention. Most research to date has been carried out on clean sands and silty sands in order to study the effects of fine particles, confining pressure, density and so on. Because of the misconception that the cohesiveness of clay prevents liquefaction in sand, studies on clayey sands have not been pursued as seriously. However, several liquefaction events have occurred in clayey sands in recent years, demonstrating the need for further studies in this field, and the studies carried out so far have focused on highly plastic clays. In this paper, the effect of low-plasticity clays on the behavioral characteristics of sands is discussed. Triaxial tests were carried out on clean sands and on clayey sands with different percentages of added clay, and specimens were compacted to various densities to study the effect of clay content at each density. The findings show that the amount of clay greatly affects the behavior of the sand and leads to substantial changes in peak bearing capacity and steady-state values.
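As a minimal sketch of how triaxial results of this kind are typically reduced, the code below computes the deviator stress and its peak value from hypothetical axial and confining stress readings; it is illustrative only and does not reproduce the authors' test procedure or data.

```python
import numpy as np

# Hypothetical triaxial readings for one specimen (kPa); not the paper's data.
axial_stress = np.array([100, 160, 230, 300, 340, 330, 320])   # sigma_1 at each step
confining_stress = 100.0                                        # sigma_3, held constant

# Deviator stress q = sigma_1 - sigma_3 at each load step.
deviator = axial_stress - confining_stress

# Peak value and the (assumed) steady-state value at the end of the test.
print("peak deviator stress:", deviator.max(), "kPa")
print("steady-state deviator stress:", deviator[-1], "kPa")
```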

Low-Cost Space-Based Geoengineering: An Assessment Based on Self-Replicating Manufacturing of In-Situ Resources on the Moon

Geoengineering approaches to climate change mitigation are unpopular and regarded with suspicion, and space-based approaches in particular are regarded as unworkable and enormously costly. Here, a space-based approach is presented that is modest in cost, fully controllable and reversible, and that acts as a natural spur to the longer-term development of solar power satellites as a clean source of energy. The low-cost approach exploits self-replication technology, which, it is proposed, may be enabled by 3D printing. Self-replication of 3D printing platforms would enable mass production of simple spacecraft units. Key elements being developed are 3D-printable electric motors and 3D-printable vacuum-tube-based electronics. The power of such technologies will open up enormous possibilities at low cost, including space-based geoengineering.
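The leverage of self-replication comes from the exponential growth of the manufacturing base. The sketch below models this with a simple doubling law; the seed count, replication time and target fleet size are assumed figures for illustration, not values from the paper.

```python
# Simple exponential-growth model of self-replicating 3D printing platforms.
# All parameters are illustrative assumptions, not figures from the study.
seed_printers = 1            # platforms initially delivered to the lunar surface
replication_time_days = 30   # assumed time for one platform to copy itself
target_units = 1_000_000     # assumed number of units needed for the swarm

printers, days = seed_printers, 0
while printers < target_units:
    printers *= 2                  # each platform builds one copy per cycle
    days += replication_time_days

print(f"{printers} platforms after {days} days ({days / 365:.1f} years)")
```

Even with conservative assumptions, the doubling behaviour is what keeps the approach low-cost: only the seed hardware has to be launched.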

Differential Analysis: Crew Resource Management and Profiles on the Balanced Inventory of Desirable Responding

A concern when administering questionnaires is whether the participant is providing accurate information. The results may be invalid because the person is trying to present themselves in an unrealistically positive manner, referred to as ‘faking good’, or in an unrealistically negative manner, known as ‘faking bad’. The Balanced Inventory of Desirable Responding (BIDR) was used to assess commercial pilots’ responses on its two subscales, impression management (IM) and self-deceptive enhancement (SDE), each of which yields a high or low score. The BIDR thus produces four valid profiles: IM low and SDE low, IM high and SDE low, IM low and SDE high, and IM high and SDE high. These profiles were used to compare the respondents’ answers to crew resource management (CRM) items developed from the USA Federal Aviation Administration’s (FAA) guidelines for CRM composition and training. Of particular interest were the results on the IM subscale. Comparisons between those scoring high (lying or faking) and those scoring low on IM suggest significant differences in their views of the various dimensions of CRM. One of the more disconcerting conclusions is that the high IM scores suggest that those pilots were trying to impress rather than honestly answer the questions about their CRM training and practice.
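The four BIDR profiles follow directly from dichotomizing the two subscale scores. The sketch below shows that mapping; the cut-off value is a placeholder, since the actual scoring thresholds used in the study are not stated in the abstract.

```python
def bidr_profile(im_score: float, sde_score: float, cutoff: float = 7.0) -> str:
    """Classify a respondent into one of the four BIDR profiles.

    The cutoff separating 'high' from 'low' is an assumed placeholder,
    not the threshold used by the authors.
    """
    im = "high" if im_score >= cutoff else "low"
    sde = "high" if sde_score >= cutoff else "low"
    return f"IM {im} / SDE {sde}"

# Example usage with hypothetical subscale scores.
print(bidr_profile(im_score=9.0, sde_score=3.0))   # -> IM high / SDE low
```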

Condition Monitoring for Twin-Fluid Nozzles with Internal Mixing

Liquid sprays of water are frequently used in air pollution control for gas cooling and gas cleaning. Twin-fluid nozzles with internal mixing are often used for these purposes because of the small drop sizes they produce. In these nozzles, the liquid is dispersed by compressed air or another pressurized gas. In high-efficiency scrubbers for particle separation, several nozzles are operated in parallel because of the size of the cross-section, and the scrubbing water has to be re-circulated. Chemical reactions in the liquid circuit can cause solid material to precipitate; when such precipitates detach from their place of formation, they can partly or totally block the liquid flow to a nozzle. The resulting unbalanced supply of water and gas to the nozzles reduces the separation efficiency, so the nozzles have to be cleaned once a certain fraction of them is blocked. The aim of this study was to provide a tool for continuously monitoring the status of the nozzles of a scrubber based on the available operating data (water flow, air flow, water pressure and air pressure). The difference between the air pressure and the water pressure is not well suited for this purpose, because the difference is small and would therefore require very precise calibration of the pressure measurements. Instead, an equation was derived for the reference air flow of a nozzle at the actual water flow and operating pressure. This reference flow can be compared with the actual air flow to assess the status of the nozzles.
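A minimal monitoring sketch along these lines is shown below. The reference-flow relation here is a hypothetical placeholder (the abstract does not give the derived equation), and the alarm threshold is likewise assumed.

```python
def reference_air_flow(water_flow: float, air_pressure: float) -> float:
    """Hypothetical stand-in for the derived reference-air-flow equation.

    The real equation from the paper is not reproduced here; this placeholder
    only shows where it would be evaluated.
    """
    return 0.8 * air_pressure - 0.05 * water_flow  # assumed coefficients

def nozzle_blocked(water_flow, air_pressure, measured_air_flow, tolerance=0.15):
    """Flag a nozzle whose measured air flow deviates too far from the reference."""
    expected = reference_air_flow(water_flow, air_pressure)
    return abs(measured_air_flow - expected) / expected > tolerance

# Example with made-up operating data.
print(nozzle_blocked(water_flow=12.0, air_pressure=3.0, measured_air_flow=1.2))
```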

Risk Management Approach for Lean, Agile, Resilient and Green Supply Chain

Implementing LARG (Lean, Agile, Resilient, Green) practices in supply chain management is a complex task, mainly because ecological, economic and operational goals are usually in conflict. To implement LARG practices successfully, companies need relevant decision-making tools that provide control over process performance and visibility of improvement strategies. To contribute to this issue, this work addresses the following research question: how can performance be mastered and problems anticipated when implementing LARG practices in a supply chain? To answer this question, a risk management approach (RMA) is adopted. The proposed RMA aims to assess the ability of a supply chain, guided by “Lean, Green and Achievement” performance goals, to face “agility and resilience risk” factors. To demonstrate its relevance, an academic logistics case study based on simulation is used to illustrate all of its stages; in particular, it shows how to build the “LARG risk map”, which is the main output of the approach.
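A risk map of this kind typically positions each risk factor by likelihood and impact. The sketch below does this for a few hypothetical agility and resilience risks; the factor names, ratings and zone thresholds are illustrative assumptions, not results from the case study.

```python
# Hypothetical agility/resilience risk factors rated on 1-5 scales.
risks = {
    "supplier disruption":    {"likelihood": 4, "impact": 5},
    "demand volatility":      {"likelihood": 3, "impact": 3},
    "transport capacity gap": {"likelihood": 2, "impact": 4},
}

# Place each factor on a simple likelihood x impact risk map.
for name, r in risks.items():
    score = r["likelihood"] * r["impact"]
    zone = "critical" if score >= 15 else "moderate" if score >= 8 else "acceptable"
    print(f"{name:<24} likelihood={r['likelihood']} impact={r['impact']} -> {zone}")
```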

The Impact of Physics Taught with Simulators and Texts in Brazilian High School: A Study in the Adult and Youth Education

The teaching of physics in Brazilian public schools strongly emphasizes the theoretical aspects of the science, presenting its philosophical and mathematical basis but neglecting its experimental character. The lack of science laboratories may explain this practice. In this work, we present a method of teaching physics using the computer: as an alternative to real experiments, trials are run through simulators, many of which are free software available on the internet. Since not every topic in a given subject can be simulated, we combined these programs with phenomenological and/or experimental texts to mitigate this limitation. The study proposes the use of simulators, together with debate based on phenomenological/experimental texts, on the theme of electrostatics in 3rd-year groups of EJA (Adult and Youth Education), in order to verify the advantages of this methodology. Benefits of hybridizing the traditional method with these tools included greater student motivation to learn, the development of experimental notions, proactive socialization towards learning, greater ease in understanding some concepts, and the creation of collaborative activities that can reduce the shyness of some students.

A Comparison of Tsunami Impact to Sydney Harbour, Australia at Different Tidal Stages

Sydney Harbour is an iconic location with a dense population and low-lying development. On the east coast of Australia, facing the Pacific Ocean, it is exposed to several tsunamigenic trenches. This paper presents a component of the most detailed assessment to date of the potential for earthquake-generated tsunami impact on Sydney Harbour. The models in this study use dynamic tides to account for tide-tsunami interaction. Sydney Harbour’s tidal range is 1.5 m, and the January 2015 spring tides used in the modelling are close to the full tidal range. The tsunami wave trains modelled include hypothetical tsunami generated by earthquakes of moment magnitude (Mw) 7.5, 8.0, 8.5 and 9.0 at the Puysegur and New Hebrides trenches, as well as representations of the historical 1960 Chilean and 2011 Tohoku events. All wave trains are modelled with the peak wave coinciding with both a low tide and a high tide. A single wave train, representing an Mw 9.0 earthquake at the Puysegur trench, is modelled with peak waves coinciding with every hour across a 12-hour tidal phase. Using the hydrodynamic model ANUGA, results are compared according to the impact parameters of inundation area, depth variation and current speed. The results show that both maximum inundation area and depth variation are tide dependent. Maximum inundation area increases when the peak wave coincides with a higher tide; however, hazardous inundation is only observed for the larger waves modelled (NH90high and P90high). The maximum and minimum depths are deeper on higher tides and shallower on lower tides, and the difference between maximum and minimum depths varies slightly across different tidal phases. Maximum current speeds are shown to be a significant hazard for Sydney Harbour, but they do not show consistent patterns with tide-tsunami phasing. The maximum current speed hazard is greater in specific locations such as Spit Bridge, a narrow channel with extensive marine infrastructure. The results presented for Sydney Harbour are novel, and the conclusions are consistent with previous modelling efforts in the broader region. It is shown that tide must be a consideration in both tsunami modelling and emergency management planning. Modelling peak tsunami waves coinciding with a high tide would be a conservative approach; however, it must be kept in mind that maximum current speeds may be higher on other tides.
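As a sketch of how such impact parameters can be compared after the hydrodynamic runs, the code below computes inundated area from gridded maximum-depth output for two tidal scenarios; the grid values, cell size and wet-depth threshold are assumptions for illustration, not outputs of the ANUGA simulations.

```python
import numpy as np

CELL_AREA_M2 = 10.0 * 10.0   # assumed grid resolution of 10 m
WET_THRESHOLD_M = 0.1        # assumed minimum depth counted as inundation

def inundation_area(max_depth_grid: np.ndarray) -> float:
    """Total area (m^2) of cells whose maximum depth exceeds the wet threshold."""
    return np.count_nonzero(max_depth_grid > WET_THRESHOLD_M) * CELL_AREA_M2

# Hypothetical maximum-depth grids (m) for the same wave train on two tides.
high_tide = np.array([[0.0, 0.3, 1.2], [0.2, 0.8, 1.5], [0.0, 0.1, 0.4]])
low_tide  = np.array([[0.0, 0.0, 0.6], [0.0, 0.3, 0.9], [0.0, 0.0, 0.1]])

print("high-tide inundation area:", inundation_area(high_tide), "m^2")
print("low-tide inundation area: ", inundation_area(low_tide), "m^2")
```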

Microbial Fuel Cells and Their Applications in Electricity Generating and Wastewater Treatment

This experimental study investigates microbial fuel cells (MFCs) for electricity generation and wastewater treatment. Finding new, clean and sustainable ways of supplying energy has become very important, and many researchers around the world are therefore studying new and sustainable energy sources, such as solar cells, wind turbines, geothermal energy and fuel cells. Fuel cells come in several types, one of which is the microbial fuel cell. In this research, an MFC was built in order to study how it can be used for electricity generation and wastewater treatment. The MFC used here is a reactor with two tanks containing a catalyst solution, and the reaction that takes place in it is a redox reaction. It is a two-chamber MFC: the anode chamber is anaerobic (an ABR reactor) and the other is the cathode chamber. The anode chamber contains stabilized sludge, which is the source of the microorganisms that carry out the redox reaction; the main microorganisms are Propionibacterium and Clostridium. The anode electrodes are graphite plates. The cathode chamber contains graphite plate electrodes and catalysts such as O2, KMnO4 and C6N6FeK4. The membrane separating the chambers is Nafion 117; the reason for choosing this membrane is explained in the full paper. The main goal of this research is to generate electricity and treat wastewater. It was found that using electron-acceptor compounds such as O2, MnO4 and C6N6FeK4 speeds up electron acceptance, so that more current is obtained in less time, and that the best compounds for this purpose are those containing iron in their chemical formula. It is also important to control the amount of nutrients entering the bacteria chamber, since adding excess nutrients can in some cases reverse the effect. Using the ABR, the chemical oxygen demand decreases day by day until it reaches a stable value.
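For context, the electrical output of such a cell is usually characterized from the voltage measured across an external resistance. The sketch below computes current, power and power density from hypothetical readings; the resistance, voltage and electrode area are assumed values, not measurements from this study.

```python
# Hypothetical MFC measurements; not data from the reported experiments.
cell_voltage_v = 0.45            # voltage across the external resistor
external_resistance_ohm = 1000.0
anode_area_m2 = 0.0025           # projected electrode area (assumed)

current_a = cell_voltage_v / external_resistance_ohm   # I = V / R
power_w = cell_voltage_v * current_a                    # P = V * I
power_density_w_per_m2 = power_w / anode_area_m2        # normalized to anode area

print(f"current: {current_a * 1000:.2f} mA")
print(f"power density: {power_density_w_per_m2:.3f} W/m^2")
```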

Effects of High-Protein, Low-Energy Diet on Body Composition in Overweight and Obese Adults: A Clinical Trial

Background: In addition to reducing body weight, low-calorie diets can reduce lean body mass. It is hypothesized that a high-protein, low-calorie diet can reduce body weight while maintaining lean body mass. The current study therefore aimed to evaluate the effects of a high-protein diet with calorie restriction on body composition in overweight and obese individuals. Methods: 36 obese and overweight subjects were randomly divided into two groups. The first group received a normal-protein, low-energy diet (RDA), and the second group received a high-protein, low-energy diet (2×RDA). Anthropometric indices including height, weight, body mass index, body fat mass, fat-free mass and body fat percentage were evaluated before and after the study. Results: A significant reduction in anthropometric indices was observed in both groups (high-protein and normal-protein low-energy diets). In addition, a greater reduction in fat-free mass was observed in the normal-protein, low-energy diet group than in the high-protein, low-energy diet group. For the other anthropometric indices, no significant differences were observed between the two groups. Conclusion: Independently of the type of diet, a low-calorie diet can improve anthropometric indices, but during weight loss a high-protein diet can help maintain fat-free mass.

Numerical Simulations of Acoustic Imaging in Hydrodynamic Tunnel with Model Adaptation and Boundary Layer Noise Reduction

Noise requirements for naval and research vessels increasingly demand quieter ships, both to satisfy current regulations and to reduce the effects on marine life. Hence, new methods dedicated to the characterization of propeller noise, which is the main source of noise in the far field, are needed. Studying cavitating propellers in a closed test section is attractive for analyzing hydrodynamic performance, but it introduces significant difficulties for hydroacoustic study, especially due to reverberation and boundary layer noise in the tunnel. The aim of this paper is to present a numerical methodology for the identification of hydroacoustic sources on marine propellers using hydrophone arrays in a large hydrodynamic tunnel. The main difficulties are linked to the reverberation of the tunnel and to the boundary layer noise, which strongly reduce the signal-to-noise ratio. It is proposed to estimate the reflection coefficients using an inverse method and reference transfer functions measured in the tunnel, which reduces the uncertainties of the propagation model used in the inverse problem. To reduce the boundary layer noise, a cleaning algorithm is presented that takes advantage of the low-rank and sparse structure of the cross-spectral matrices of the acoustic signal and the boundary layer noise. This approach makes it possible to recover the acoustic signal even well below the boundary layer noise. The improvement brought by this method is visible on acoustic maps produced by beamforming and DAMAS algorithms.
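To illustrate the idea of separating a low-rank component from a sparse one, the sketch below applies a simple alternating truncated-SVD and soft-thresholding scheme to a synthetic cross-spectral matrix. It is only a generic illustration of low-rank plus sparse decomposition; the paper's actual cleaning algorithm may differ, and the rank, threshold and test matrix are assumptions.

```python
import numpy as np

def lowrank_sparse_split(csm: np.ndarray, rank: int = 1, threshold: float = 0.05,
                         n_iter: int = 20):
    """Split a (real) cross-spectral matrix into low-rank + sparse parts.

    Alternating truncated SVD and soft-thresholding; an illustrative scheme,
    not the algorithm from the paper.
    """
    low_rank = np.zeros_like(csm)
    sparse = np.zeros_like(csm)
    for _ in range(n_iter):
        # Low-rank update: truncated SVD of the residual.
        u, s, vh = np.linalg.svd(csm - sparse, full_matrices=False)
        s[rank:] = 0.0
        low_rank = (u * s) @ vh
        # Sparse update: soft-threshold the remaining residual.
        residual = csm - low_rank
        sparse = np.sign(residual) * np.maximum(np.abs(residual) - threshold, 0.0)
    return low_rank, sparse

# Synthetic example: a rank-1 'acoustic' part plus a sparse diagonal 'noise' term.
rng = np.random.default_rng(0)
a = rng.standard_normal((8, 1))
csm = a @ a.T + np.diag(rng.uniform(0.5, 1.0, 8))
acoustic, noise = lowrank_sparse_split(csm, rank=1)
```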

Problem Solving in Chilean Higher Education: Figurations Prior in Interpretations of Cartesian Graphs

A Cartesian graph, as a mathematical object, becomes a tool for configuring change, and it is best understood through everyday problem-solving associated with its representation. Despite this, the current educational framework favors general graphs without consideration of the argumentation behind them: students are required to find the mathematical function without associating it with the development of graphical language. This research describes students’ use of configurations produced prior to Cartesian graphs in an everyday problem involving a phenomenon of variation in time and distance. The theoretical framework describes the conditions for studying functions and their modeling. This is a qualitative, descriptive study of six undergraduate cases carried out during the first term of 2016 at the University of Los Lagos. The research problem concerned the graphic modeling of a real person’s movement, and two levels of analysis were identified: the first aims to identify local and global interpretations of the graph, while the second describes the degree of iconicity and referentiality of an image. According to the results, students produced figures prior to the Cartesian graph, highlighting their need to represent the context and the movement that causes the change in the phenomenon. From these, they constructed Cartesian graphs representing changes in position and thus achieved an overall view of the graph. The local view, however, only indicates specific events in the problem situation, using graphic and verbal expressions to represent movement; it does not make it possible to identify what happens on the graph when the characteristics of the movement change along possible paths in the person’s walking speed.

Air Classification of Dust from Steel Converter Secondary De-dusting for Zinc Enrichment

The off-gas from the basic oxygen furnace (BOF), where pig iron is converted into steel, is treated in the primary ventilation system. This system is in full operation only during oxygen blowing, when the BOF converter vessel is in a vertical position. When pig iron and scrap are charged into the BOF and when slag or steel is tapped, the vessel is tilted, and the emissions generated during charging and tapping cannot be captured by the primary off-gas system. To capture these emissions, a secondary ventilation system is usually installed, in which the emissions are captured by a canopy hood mounted just above the converter mouth in the tilted position. The aim of this study was to investigate how the content of Zn and other components depends on the particle size of BOF secondary ventilation dust. Because of the high temperature of the BOF process, Zn can be expected to be enriched in the fine dust fractions. If Zn is enriched in the fine fractions, classification could be applied to split the dust into two size fractions with different Zn contents. To this end, air classification experiments were performed with dust from the secondary ventilation system of a BOF. The results show that Zn and Pb are highly enriched in the finest dust fraction; for Cd, Cu and Sb the enrichment is smaller. In contrast, the non-volatile metals Al, Fe, Mn and Ti are depleted in the fine fractions. Thus, air classification could be considered for the treatment of dust from secondary BOF off-gas cleaning.
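Enrichment of this kind is commonly quantified as the ratio of a component's concentration in a size fraction to its concentration in the feed dust. The sketch below computes such enrichment factors; the concentration values are hypothetical placeholders, not measurements from these experiments.

```python
# Hypothetical mass concentrations (wt%) in the feed dust and in the fine fraction.
feed = {"Zn": 2.0, "Pb": 0.30, "Fe": 55.0}
fine_fraction = {"Zn": 6.5, "Pb": 0.85, "Fe": 40.0}

# Enrichment factor > 1 means the element is enriched in the fine fraction,
# < 1 means it is depleted (as reported for the non-volatile metals).
for element in feed:
    ef = fine_fraction[element] / feed[element]
    print(f"{element}: enrichment factor = {ef:.2f}")
```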

Effectiveness of Lean Manufacturing Technologies on Improving Business Performance: A Study of Indian Manufacturing Industries

Indian manufacturing firms, operating in a rapidly changing and highly competitive market, have over the last few decades embraced organization-wide transformation to achieve cultural and operational excellence. In recent years, numerous approaches have been proposed to improve business and manufacturing performance. Lean practices in particular, Total Productive Management (TPM) and Total Quality Management (TQM), have received considerable attention, as they are being adopted and adapted to raise the performance of Indian manufacturing firms to world-class levels. Many companies exploit the complementary nature of TPM and TQM to achieve synergy. Specifically, this research investigates whether joint TPM-TQM implementation contributes to higher business performance than individual implementation. Analysis of data from 160 manufacturing firms shows that synergistic implementation of both TPM and TQM practices over a reasonable period of time delivered better business performance than an individual implementation strategy.
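A comparison of this kind is often made with a two-sample test on a performance index between firms that implemented TPM and TQM jointly and those that implemented only one. The sketch below illustrates that step; the scores are hypothetical and the choice of test is an assumption, since the abstract does not state the statistical method used.

```python
import numpy as np
from scipy import stats

# Hypothetical business-performance scores (e.g., on a 1-5 scale) for two groups.
joint_tpm_tqm = np.array([4.2, 4.5, 3.9, 4.4, 4.1, 4.6, 4.3])
individual    = np.array([3.6, 3.8, 3.5, 4.0, 3.7, 3.4, 3.9])

# Independent two-sample t-test comparing mean performance between the groups.
t_stat, p_value = stats.ttest_ind(joint_tpm_tqm, individual, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```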

Development of Energy Benchmarks Using Mandatory Energy and Emissions Reporting Data: Ontario Post-Secondary Residences

Governments are playing an increasingly active role in reducing carbon emissions, and a key strategy has been the introduction of mandatory energy disclosure policies. These policies have resulted in a significant amount of publicly available data, giving researchers a unique opportunity to develop location-specific energy and carbon emission benchmarks from this data set, which can then be used to develop building archetypes and to inform urban energy models. This study presents the development of such a benchmark using the public reporting data. Data from Ontario’s Ministry of Energy for Post-Secondary Educational Institutions are used to develop a series of building-archetype dynamic building loads and energy benchmarks to fill a gap in the currently available building database. This paper presents the development of a benchmark for college and university residences within ASHRAE climate zone 6 areas of Ontario using the mandatory disclosure energy and greenhouse gas emissions data. The methodology includes data cleaning, statistical analysis and benchmark development, and the lessons learned from this investigation are presented and discussed to inform the development of future energy benchmarks from this larger data set. The key findings from this initial benchmarking study are: (1) careful data screening and outlier identification are essential to develop a valid dataset; (2) the key features used to model the data are building age, size and occupancy schedules, and these can be used to estimate energy consumption; and (3) policy changes affecting primary energy generation significantly affected greenhouse gas emissions, and consideration of these factors was critical in evaluating the validity of the reported data.
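A typical first step in building such a benchmark is to normalize reported consumption by floor area and screen for outliers. The sketch below does this with an interquartile-range rule; the sample values and the 1.5 × IQR cut-off are assumptions for illustration, not the study's actual screening criteria.

```python
import numpy as np

# Hypothetical reported data: annual energy use (kWh) and floor area (m^2).
energy_kwh = np.array([1.2e6, 0.9e6, 2.4e6, 1.1e6, 7.5e6, 1.3e6])
floor_area_m2 = np.array([5000, 4200, 9800, 4700, 5100, 5600])

# Energy use intensity (EUI) normalizes consumption by floor area.
eui = energy_kwh / floor_area_m2

# Flag outliers with a 1.5 * IQR rule before computing the benchmark.
q1, q3 = np.percentile(eui, [25, 75])
iqr = q3 - q1
valid = (eui >= q1 - 1.5 * iqr) & (eui <= q3 + 1.5 * iqr)

print("EUI (kWh/m^2):", np.round(eui, 1))
print("benchmark (median of screened EUI):", round(float(np.median(eui[valid])), 1))
```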

Risk in the South African Sectional Title Industry: An Assurance Perspective

The sectional title industry has been part of the property landscape in South Africa for almost half a century and plays a significant role in addressing the housing problem in the country. Stakeholders such as owners and investors in sectional title property are in most cases not directly involved in its management and rely on the audited annual financial statements of bodies corporate for decision-making purposes. Although the industry appears to be highly regulated, the legislation governing the accounting and auditing of sectional title schemes is vague and ambiguous. Furthermore, there are no industry-specific auditing and accounting standards to guide practitioners in performing their work, and industry financial benchmarks are not readily available. In addition, financial pressure on sectional title schemes is often very high because some owners exert unrealistic pressure to keep monthly levies as low as possible. All these factors affect both the business risk and the audit risk of bodies corporate. Very little academic research has been undertaken on the sectional title industry in South Africa from an accounting and auditing perspective. The aim of this paper is threefold: Firstly, to discuss the findings of a literature review on uncertainties, ambiguities and confusing aspects of current legislation regarding the audit of sectional title property that may cause or increase audit and business risk. Secondly, to discuss empirical findings on risk-related aspects drawn from interviews with three groups of body corporate role-players: body corporate trustee chairpersons, body corporate managing agents, and accounting and auditing practitioners of bodies corporate, with specific reference to business risk and audit risk. Thirdly, to make practical recommendations on possibilities for closing the audit expectation gap and to discuss further research opportunities in this regard.

Lubrication Performance of Multi-Level Gear Oil in a Gasoline Engine

A vehicle’s gasoline engine converts gasoline into power so that the car can move, and lubricants are important both for engines and for gearboxes. Manufacturers produce numerous engine oils and gear oils to SAE International standards, and some products claim not only to improve the lubrication of both the engine and the gearbox but also to raise the power of the vehicle, as can easily be seen in the manufacturers’ advertising. To observe the lubrication performance, a multi-level (heavy-duty) gear oil was used as the engine oil in a gasoline-engined vehicle. The oil was checked approximately every 10,000 kilometres. The engine was then disassembled in detail, cleaned, and its parts were measured, and the wear of the engine components was checked and recorded. Based on the experimental results, some gear oils appear suitable for use as engine oil in particular vehicles. Vehicle owners should change the oil periodically, about every 6,000 miles (10,000 kilometres); used-car owners may change the engine oil at even longer intervals.