An Evaluation of Carbon Dioxide Emissions Trading among Enterprises: The Tokyo Cap and Trade Program

This study proposes three methods for evaluating the Tokyo Cap and Trade Program when emissions trading is simulated among enterprises, focusing on carbon dioxide (CO2), the only emitted greenhouse gas whose emissions continue to increase. The first method identifies the reduction rate that maximizes the total cost benefit, the second examines emissions trading among enterprises through market trading, and the third verifies long-term emissions trading over the term of the plan (2010-2019), partly checking the validity of emissions trading using Geographic Information Systems (GIS). The findings of this study can be summarized in three points. 1. Since the total cost benefit is greatest at a 44% reduction rate, the rate can be set higher than that of the Tokyo Cap and Trade Program to obtain a larger total cost benefit. 2. At a 44% reduction rate, among the 320 enterprises, 8 purchasing enterprises and 245 selling enterprises gain profits from emissions trading, while 67 enterprises reduce emissions voluntarily without trading. To promote emissions trading further, it is therefore necessary to increase the number of purchasing enterprises, and with it the traded volume, rather than only the number of selling enterprises. 3. Compared with short-term emissions trading, few enterprises benefit in each year under the long-term emissions trading of the Tokyo Cap and Trade Program; at most 81 enterprises gain profits from emissions trading in FY 2019. Setting the reduction rate higher is therefore necessary to increase the number of enterprises that participate in emissions trading and benefit from the restraint of CO2 emissions.
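
As an illustration of the first evaluation method, the sketch below scans reduction rates for the one that maximizes total cost benefit. This is not the authors' model: the enterprise emissions, the convex abatement-cost assumption, and the benefit per tonne are all hypothetical placeholders.

import numpy as np

rng = np.random.default_rng(0)
n = 320                                  # number of enterprises, as in the study
emissions = rng.uniform(1e3, 5e4, n)     # baseline CO2 [t-CO2/yr], hypothetical
cost_coef = rng.uniform(20.0, 120.0, n)  # abatement cost coefficient [$/t-CO2], hypothetical

def total_cost_benefit(rate, benefit_per_tonne=30.0):
    """Benefit of avoided CO2 minus abatement cost at a uniform reduction rate.

    Marginal abatement cost is assumed to rise linearly with reduction depth,
    so cutting a fraction r of e tonnes costs c * e * r**2 / 2.
    """
    benefit = benefit_per_tonne * (rate * emissions).sum()
    cost = (cost_coef * emissions * rate**2 / 2.0).sum()
    return benefit - cost

rates = np.linspace(0.05, 0.95, 91)
best = max(rates, key=total_cost_benefit)
print(f"reduction rate with the greatest total cost benefit: {best:.0%}")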

A Field Study Investigating the Effect of Strategic Management on the Institutionalization Levels of Enterprises

The aim of this study is to determine the effect of strategic management implementations on institutionalization levels. To this end, a field study was conducted, using a survey, of 31 stone quarry enterprises in the cement-producing sector in Konya. The institutionalization levels of the enterprises were evaluated along three dimensions: professionalization; management approach; and participation in decisions and delegation of authority. According to the survey results, there is a strong and statistically significant positive relationship between the strategic management implementations and the institutionalization levels of the enterprises. Additionally, a regression analysis carried out to establish the relationship between strategic management and institutionalization levels shows that the enterprises' strategic management implementations can be used as a variable explaining their institutionalization levels, and that strategic management implementations increase those levels.
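
A minimal sketch of this kind of correlation-plus-regression analysis, using synthetic placeholder scores rather than the study's survey data (the variable names and Likert-style scales are assumptions):

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 31                                            # surveyed enterprises, as in the study
strategic = rng.uniform(1.0, 5.0, n)              # hypothetical Likert-scale means
institutional = 0.8 * strategic + rng.normal(0.0, 0.4, n)  # synthetic outcome

r, p = stats.pearsonr(strategic, institutional)   # strength of the relationship
res = stats.linregress(strategic, institutional)  # simple linear regression
print(f"Pearson r = {r:.2f} (p = {p:.4f})")
print(f"institutionalization ~ {res.intercept:.2f} + {res.slope:.2f} * strategic_management")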

Prioritizing Service Quality Dimensions: A Neural Network Approach

One of the determinants of a firm's prosperity is its customers' perceived service quality and satisfaction. While service quality is wide in scope and consists of various dimensions, these dimensions may differ in their relative importance in shaping customers' overall satisfaction with service quality. Identifying the relative rank of the different dimensions of service quality is important because it can help managers find out which service dimensions have a greater effect on customers' overall satisfaction. Such insight leads in turn to more effective resource allocation and, ultimately, to higher levels of customer satisfaction. Despite its criticality, this issue has not received enough attention so far. Therefore, using a sample of 240 bank customers in Iran, an artificial neural network is developed to address this gap in the literature. As customers' evaluation of service quality is a subjective process, artificial neural networks, as a brain metaphor, may be well suited to model such a complicated process. The first contribution of this study is a neural network able to predict customers' overall satisfaction with service quality with a promising level of accuracy. The second is a prioritization of the service quality dimensions by their effect on overall satisfaction, obtained through sensitivity analysis of the trained network.
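
A minimal sketch of the approach, with synthetic ratings in place of the study's 240-customer sample: a small network is trained to predict overall satisfaction, then the dimensions are ranked by perturbation-based sensitivity analysis. The SERVQUAL dimension names are an assumption, as the abstract does not list the dimensions used.

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(42)
dims = ["tangibles", "reliability", "responsiveness", "assurance", "empathy"]
X = rng.uniform(1.0, 7.0, size=(240, len(dims)))        # hypothetical 7-point ratings
true_w = np.array([0.1, 0.35, 0.25, 0.2, 0.1])          # hidden weights for the toy data
y = X @ true_w + rng.normal(0.0, 0.3, 240)              # synthetic overall satisfaction

net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0).fit(X, y)

# Sensitivity: how much the prediction moves when one input is perturbed.
base = net.predict(X)
sensitivity = {}
for j, name in enumerate(dims):
    Xp = X.copy()
    Xp[:, j] += X[:, j].std()                           # one-standard-deviation bump
    sensitivity[name] = np.abs(net.predict(Xp) - base).mean()

for name, s in sorted(sensitivity.items(), key=lambda kv: -kv[1]):
    print(f"{name:15s} sensitivity = {s:.3f}")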

Software for Calculating the Optimum Conditions for Cotton Bobbin Drying in a Hot-Air Bobbin Dryer

In this study, a software tool has been developed to predict the optimum conditions for drying cotton-based yarn bobbins in a hot-air dryer. First, a suitable drying model was specified from the experimental drying behavior observed for different values of the drying parameters: drying temperature, drying pressure, and volumetric flow rate of the drying air. After a suitable drying model was obtained, additional curve fits were performed to derive equations for drying time and energy consumption that account for the effects of the drying parameters. The software, written in the Visual Basic programming language, then uses these equations to predict the optimum drying conditions with respect to drying time and energy consumption.
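
A minimal sketch of the model-fitting step. The abstract does not name the drying model that was selected, so the widely used Page model MR = exp(-k * t^n) is assumed here, and the moisture-ratio measurements are illustrative placeholders.

import numpy as np
from scipy.optimize import curve_fit

def page_model(t, k, n):
    """Moisture ratio as a function of drying time t [min] (Page model)."""
    return np.exp(-k * t**n)

t = np.array([0.0, 10, 20, 40, 60, 90, 120])               # drying time [min]
mr = np.array([1.0, 0.72, 0.55, 0.33, 0.21, 0.11, 0.06])   # hypothetical moisture ratio

(k, n), _ = curve_fit(page_model, t, mr, p0=(0.01, 1.0))
print(f"fitted Page model: MR = exp(-{k:.4f} * t^{n:.3f})")

# Predicted time to reach a target moisture ratio, solved from the fitted model.
target = 0.05
t_dry = (-np.log(target) / k) ** (1.0 / n)
print(f"predicted drying time to MR = {target}: {t_dry:.1f} min")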

Issues in Spectral Source Separation Techniques for Plant-wide Oscillation Detection and Diagnosis

In the last few years, three multivariate spectral analysis techniques, namely Principal Component Analysis (PCA), Independent Component Analysis (ICA), and Non-negative Matrix Factorization (NMF), have emerged as effective tools for oscillation detection and isolation. While the first method is used to determine the number of oscillatory sources, the latter two are used to identify source signatures by formulating detection as a source identification problem in the spectral domain. In this paper, we present a critical drawback of the underlying linear (mixing) model which strongly limits the ability of the associated source separation methods to determine the number of sources and/or identify the physical source signatures. It is shown that the assumed mixing model is only valid if each unit of the process gives equal weighting (all-pass filter) to all oscillatory components in its inputs. This is in contrast to the fact that each unit, in general, acts as a filter with non-uniform frequency response. Thus, the model can only facilitate correct identification of a source with a single frequency component, which is again unrealistic. To overcome this deficiency, an iterative post-processing algorithm that correctly identifies the physical source(s) is developed. A further issue with the existing methods is that they lack a procedure to pre-screen the non-oscillatory and noisy measurements which obscure the identification of oscillatory sources. In this regard, a pre-screening procedure based on the notion of a sparseness index is prescribed to eliminate the noisy and non-oscillatory measurements from the data set used for analysis.
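
A minimal sketch of the pre-screening idea on synthetic signals: measurements whose power spectra are not sparse (no dominant oscillatory peaks) are discarded before source separation. Hoyer's sparseness measure and the 0.6 threshold are assumptions, since the abstract does not give the paper's exact definition.

import numpy as np

def sparseness_index(x):
    """Hoyer sparseness of a spectrum: 1 for a single peak, 0 for a flat one."""
    n = x.size
    return (np.sqrt(n) - np.abs(x).sum() / np.linalg.norm(x)) / (np.sqrt(n) - 1)

rng = np.random.default_rng(0)
t = np.arange(2048)
oscillatory = np.sin(2 * np.pi * 128 / 2048 * t) + 0.1 * rng.standard_normal(2048)
noisy = rng.standard_normal(2048)                       # no oscillatory content

for name, sig in [("oscillatory", oscillatory), ("noisy", noisy)]:
    spectrum = np.abs(np.fft.rfft(sig - sig.mean())) ** 2
    s = sparseness_index(spectrum)
    verdict = "keep" if s > 0.6 else "discard"          # illustrative threshold
    print(f"{name:12s} sparseness = {s:.2f} -> {verdict}")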

Experimental Design and Performance Analysis in Plasma Arc Surface Hardening

In this paper, an experimental design based on the Taguchi method is employed to optimize the processing parameters of the plasma arc surface hardening process. The processing parameters evaluated are arc current, scanning velocity, and the carbon content of the steel. In addition, other significant effects, such as interactions between the processing parameters, are also investigated. An orthogonal array, the signal-to-noise (S/N) ratio, and analysis of variance (ANOVA) are employed to investigate the effects of these processing parameters. Through this study, not only are the hardened depth increased and the surface roughness improved, but the parameters that significantly affect the hardening performance are also identified. Experimental results are provided to verify the effectiveness of this approach.
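
A minimal sketch of the analysis flow, assuming a standard L9(3^3) orthogonal array over the three factors named above and the larger-the-better S/N ratio for hardened depth; the factor levels and measured depths are hypothetical placeholders.

import numpy as np

# L9 orthogonal array: each row is one run; columns are the levels (0-2) of
# (arc current, scanning velocity, carbon content).
L9 = np.array([[0,0,0],[0,1,1],[0,2,2],
               [1,0,1],[1,1,2],[1,2,0],
               [2,0,2],[2,1,0],[2,2,1]])
depth = np.array([[0.42,0.44],[0.55,0.57],[0.61,0.60],   # two replicates per run [mm],
                  [0.50,0.52],[0.66,0.64],[0.48,0.47],   # hypothetical values
                  [0.70,0.72],[0.58,0.56],[0.63,0.65]])

# Larger-the-better S/N ratio: -10 * log10( mean(1 / y^2) )
sn = -10.0 * np.log10((1.0 / depth**2).mean(axis=1))

for col, name in enumerate(["arc current", "scanning velocity", "carbon content"]):
    means = [sn[L9[:, col] == lvl].mean() for lvl in range(3)]
    print(f"{name:17s} mean S/N by level: {np.round(means, 2)}, "
          f"effect range = {max(means) - min(means):.2f} dB")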

Creating a Space for Teaching Problem Solving Skills to Engineering Students through English Language Teaching

The complexity of teaching English in higher education institutions by non-native speakers within a second/foreign language setting has generated continuous discussion and research about teaching approaches and practices, professional identities, and challenges. In addition, there is a growing awareness that teaching English within discipline-specific contexts adds to this complexity. This awareness has led to reassessments, discussions, and suggestions concerning course design, content, and teaching approaches and techniques. To meet the expectations of teaching at a university specializing in a particular discipline such as engineering, English language educators are required not only to teach students to communicate effectively in English but also to teach soft skills such as problem solving. This paper is part of a research project investigating how English language educators negotiate the complexities of teaching problem solving skills through English language teaching at a technical university. It reports how one English language educator identified himself and approached his teaching in this institutional context.

A New Algorithm for Determining the Leading Coefficient in the Parabolic Equation

This paper investigates the inverse problem of determining the unknown time-dependent leading coefficient in the parabolic equation, using the usual conditions of the direct problem and one additional condition. An algorithm is developed for numerically solving the inverse problem using the technique of space decomposition in a reproducing kernel space. The leading coefficient can then be obtained by solving a lower triangular linear system. Numerical experiments are presented to show the efficiency of the proposed method.
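
A representative formulation of such a problem, given in LaTeX below, is an assumption for illustration (the abstract does not print the equation): the time-dependent coefficient a(t) is recovered from the direct-problem data together with one overdetermination condition.

% Typical inverse problem for a time-dependent leading coefficient (assumed form)
\begin{align*}
  u_t &= a(t)\,u_{xx} + f(x,t), && 0 < x < 1,\ 0 < t \le T,\\
  u(x,0) &= \varphi(x), && 0 \le x \le 1,\\
  u(0,t) &= g_0(t), \quad u(1,t) = g_1(t), && 0 < t \le T,\\
  u(x^{*},t) &= E(t), && \text{additional condition at a fixed point } x^{*}.
\end{align*}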

Applying a Hierarchical K-Means-like Clustering Algorithm to the Arabic Language

In this study, a clustering technique has been implemented: K-Means with a hierarchical initial set (HKM). The goal is to show that clustering document sets improves the precision of information retrieval systems, as Bellot & El-Beze demonstrated for French. A comparison is made between a traditional information retrieval system and a clustered one, and the effect of increasing the number of clusters on precision is also studied. The indexing technique is Term Frequency * Inverse Document Frequency (TF*IDF). Applying HKM with 3 clusters to 242 Arabic abstract documents from the Saudi Arabian National Computer Conference yields significantly better results than a traditional information retrieval system without clustering. Additionally, it was found that increasing the number of clusters beyond this does not improve precision further.
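
A minimal sketch of the pipeline, approximating the hierarchical initial set by seeding K-Means with centroids from an agglomerative pass; the short English documents below stand in for the 242 Arabic abstracts.

import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import AgglomerativeClustering, KMeans

docs = ["image retrieval with neural networks",
        "arabic text indexing and retrieval",
        "query expansion improves retrieval precision",
        "network routing protocols",
        "wireless network performance analysis",
        "database query optimization"]

X = TfidfVectorizer().fit_transform(docs).toarray()     # TF*IDF term weights

k = 3
hier = AgglomerativeClustering(n_clusters=k).fit(X)     # hierarchical pass
seeds = np.vstack([X[hier.labels_ == c].mean(axis=0) for c in range(k)])

km = KMeans(n_clusters=k, init=seeds, n_init=1).fit(X)  # K-Means from those seeds
for c in range(k):
    members = [docs[i] for i in np.where(km.labels_ == c)[0]]
    print(f"cluster {c}: {members}")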