A New Measurable Definition of Knowledge in New Growth Theory

New Growth Theory helps us make sense of the ongoing shift from a resource-based economy to a knowledge-based economy. It underscores the point that the economic processes which create and diffuse new knowledge are critical to shaping the growth of nations, communities and individual firms. In many contributions to New (Endogenous) Growth Theory, though not in all, central reference is made to 'a stock of knowledge', a 'stock of ideas', etc., with this variable featuring centre-stage in the analysis. Yet it is immediately apparent that this is far from being a crystal-clear concept. The difficulty and uncertainty of capturing the value associated with knowledge is a real problem. The intent of this paper is to introduce new thinking and theorizing about knowledge and its measurability in New Growth Theory. Moreover, the study aims to synthesize various strands of the literature with a practical bearing on the concept of knowledge. Through the institutional framework found within New Growth Theory, the knowledge concept can be measured indirectly. Institutions matter because they shape the environment for the production and employment of new knowledge.

Capacity Building for Hazmat Transport Emergency Preparedness: 'Hotspot Impact Zone' Mapping from Flammable and Toxic Releases

Hazardous material transportation by road carries an inherent risk of accidents causing loss of life, grievous injuries, property loss and environmental damage. The most common type of hazmat road accident is the release of hazardous substances (78%), followed by fires (28%), explosions (14%) and vapour/gas clouds (6%). The paper first discusses the probable 'impact zones' likely to be caused by one flammable chemical (LPG) and one toxic chemical (ethylene oxide) transported through a sizable segment of a State Highway connecting three notified industrial zones in Surat district in Western India, housing 26 MAH industrial units. Three 'hotspots' were identified along the highway segment depending on the traffic of the particular chemicals and the population distribution within 500 meters on either side. The thermal radiation and explosion overpressure have been calculated for LPG/ethylene oxide BLEVE scenarios, along with a toxic release scenario for ethylene oxide. In addition, dispersion calculations for the ethylene oxide toxic release have been made for each 'hotspot' location and the impact zones have been mapped for the LOC concentrations. Subsequently, the maximum initial isolation and protective zones were calculated based on the ERPG-3 and ERPG-2 values of ethylene oxide respectively, estimated for the worst-case scenario under the worst weather conditions. The data analysis will help the local administration in capacity building with respect to rescue/evacuation and medical preparedness, and will provide quantitative inputs to augment the District Offsite Emergency Plan document.
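
The abstract does not specify which radiation model underlies the BLEVE calculations; as a minimal sketch only, the classical point-source model estimates the incident heat flux at a distance d from the fire. All parameter values below are illustrative assumptions, not figures from the study.

```python
import math

# Point-source thermal radiation model (a textbook approximation assumed
# here for illustration; the paper's actual model is not specified).
eta = 0.3        # radiative fraction of combustion energy, assumed
tau = 0.8        # atmospheric transmissivity, assumed
m_dot = 50.0     # mass burning rate in kg/s, assumed
dH_c = 46.0e6    # heat of combustion of LPG in J/kg, approximate

def heat_flux(d_m):
    """Incident thermal radiation (W/m^2) at distance d_m metres."""
    return tau * eta * m_dot * dH_c / (4.0 * math.pi * d_m ** 2)

for d in (50, 100, 200, 500):
    print(f"{d:4d} m: {heat_flux(d) / 1000.0:8.1f} kW/m^2")
```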

Prediction of Dissolved Oxygen in Rivers Using a Wang-Mendel Method – Case Study of Au Sable River

The amount of dissolved oxygen in a river has a strong direct effect on aquatic macroinvertebrates, and thereby an indirect influence on the regional ecosystem. In this paper we attempt to predict dissolved oxygen in rivers by employing a simple fuzzy logic modeling approach, the Wang-Mendel method, which uses only previous records to estimate upcoming values. For this purpose, daily and hourly records from eight stations in the Au Sable watershed in Michigan, United States are employed, covering periods of 12 years and 50 days respectively. Calculations indicate that for long-period prediction it is better to increase the input intervals, whereas for filling missing data it is advisable to decrease them. Increasing the partitioning of the input and output domains has little influence on accuracy but makes the model very time-consuming, and increasing the number of inputs behaves similarly. A large amount of training data does not substantially improve accuracy, so an optimal training length should be selected.
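
As a sketch of the Wang-Mendel idea (fuzzy partitioning of the input and output domains, one rule generated per data pair, conflict resolution by maximum rule degree, and weighted-average defuzzification), the toy implementation below makes its own simplifying choices; the triangular membership function, partition count and synthetic data are assumptions, not the paper's setup.

```python
import numpy as np

def tri(x, centers):
    """Triangular memberships of scalar x over evenly spaced fuzzy regions."""
    w = centers[1] - centers[0]
    return np.clip(1.0 - np.abs(x - centers) / w, 0.0, None)

def wang_mendel_fit(X, y, n_parts=7):
    """Generate one rule per (x, y) pair; keep only the highest-degree
    rule for each antecedent (conflict resolution)."""
    cx = [np.linspace(X[:, j].min(), X[:, j].max(), n_parts)
          for j in range(X.shape[1])]
    cy = np.linspace(y.min(), y.max(), n_parts)
    rules = {}
    for xi, yi in zip(X, y):
        regions = tuple(int(np.argmax(tri(v, c))) for v, c in zip(xi, cx))
        degree = np.prod([tri(v, c).max() for v, c in zip(xi, cx)]) \
                 * tri(yi, cy).max()
        out = int(np.argmax(tri(yi, cy)))
        if regions not in rules or degree > rules[regions][1]:
            rules[regions] = (out, degree)
    return rules, cx, cy

def wang_mendel_predict(x, rules, cx, cy):
    """Weighted average of matching rule consequents (defuzzification)."""
    num = den = 0.0
    for regions, (out, _) in rules.items():
        w = np.prod([tri(v, c)[r] for v, c, r in zip(x, cx, regions)])
        num += w * cy[out]
        den += w
    return num / den if den > 0 else float(np.mean(cy))

# toy usage: predict the next daily DO value from the two previous records
rng = np.random.default_rng(0)
do = 8 + np.sin(np.arange(400) / 20) + rng.normal(0, 0.1, 400)
X = np.column_stack([do[:-2], do[1:-1]])
rules, cx, cy = wang_mendel_fit(X, do[2:])
print(wang_mendel_predict(X[-1], rules, cx, cy), do[-1])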

A DEA Model for Performance Evaluation in the Presence of Time Lag Effect

Data Envelopment Analysis (DEA) is a methodology that computes efficiency values for decision making units (DMUs) in a given period by comparing their outputs with their inputs. In many cases, there is some time lag between the consumption of inputs and the production of outputs; for a long-term research project, this production lead time is hard to avoid. The time lag effect should therefore be considered when evaluating the performance of organizations. This paper suggests a model to calculate efficiency values for the performance evaluation problem with time lag. In the experimental part, the proposed methods are compared with the CCR model and an existing time lag model using a data set from the 21st Century Frontier R&D Program, a long-term national R&D program of Korea.
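
For reference, here is a minimal sketch of the standard input-oriented CCR model (multiplier form) that the proposed method is compared against; the time lag model itself is not reproduced, and the toy data are assumptions.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """Input-oriented CCR efficiency (multiplier form) of DMU o.
    X: (n_dmus, n_inputs), Y: (n_dmus, n_outputs)."""
    n, m = X.shape
    s = Y.shape[1]
    c = np.concatenate([-Y[o], np.zeros(m)])             # maximize u.y_o
    A_ub = np.hstack([Y, -X])                            # u.y_j - v.x_j <= 0
    b_ub = np.zeros(n)
    A_eq = np.concatenate([np.zeros(s), X[o]])[None, :]  # v.x_o = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (s + m))
    return -res.fun

# toy example: 4 projects with (budget, staff) inputs and one output
X = np.array([[10., 5.], [8., 4.], [12., 7.], [9., 6.]])
Y = np.array([[20.], [18.], [22.], [15.]])
print([round(ccr_efficiency(X, Y, o), 3) for o in range(len(X))])
```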

Ventilation Efficiency in the Subway Environment for the Indoor Air Quality

Clean air in subway stations is important to passengers. Platform Screen Doors (PSDs) can improve indoor air quality in the subway station; however, the air quality in the subway tunnel is degraded. The subway tunnel has high CO2 concentrations and particulate matter (PM) levels. The Indoor Air Quality (IAQ) level in the subway environment degrades as the frequency of train operation and the number of trains increase. The ventilation systems of the subway tunnel need improvement to provide better air quality. Numerical analyses can be effective tools for analyzing the performance of subway twin-track tunnel ventilation systems. An existing twin-track tunnel in the metropolitan Seoul subway system is chosen for the numerical simulations. The ANSYS CFX software is used for unsteady computations of the airflow inside the twin-track tunnel while a train moves. The airflow inside the tunnel is simulated for a single train and for two trains running in the tunnel at the same time. The piston effect inside the tunnel is analyzed for the case where all shafts function as natural ventilation shafts. The air supplied through the shafts mixes with the polluted air in the tunnel, and the polluted air is exhausted by the mechanical ventilation shafts. The supplied and discharged air volumes are balanced when only one train runs in the twin-track tunnel. The amount of polluted air in the tunnel is high when two trains run simultaneously in opposite directions and all shafts function as natural shafts, as happens when there is no electrical power supply to the shafts. The polluted air remaining inside the tunnel enters the station platform when the doors are opened.

Complex-Valued Neural Network in Signal Processing: A Study on the Effectiveness of Complex Valued Generalized Mean Neuron Model

A complex-valued neural network is a neural network whose inputs, weights, thresholds and/or activation functions are complex-valued. Complex-valued neural networks have been widening the scope of applications not only in electronics and informatics, but also in social systems. One of the most important applications of the complex-valued neural network is in signal processing. Among neuron models, the generalized mean neuron (GMN) model is often discussed and studied; it includes an aggregation function based on the concept of the generalized mean of all the inputs to the neuron. This paper aims to present exhaustive results of using the generalized mean neuron model in a complex-valued neural network that uses the back-propagation algorithm (called 'Complex-BP') for learning. Our experimental results demonstrate the effectiveness of the generalized mean neuron model in the complex plane for signal processing over a real-valued neural network. We also report observations on the effects of the learning rate, the range of the randomly selected initial weights, the error function used, and the number of iterations required for error convergence in the generalized mean neural network model. Some inherent properties of this complex back-propagation algorithm are also studied and discussed.
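
A minimal sketch of a complex-valued generalized mean neuron forward pass may help fix ideas; the aggregation follows the generalized-mean concept, while the split sigmoid activation and all parameter values are assumptions rather than the paper's exact formulation.

```python
import numpy as np

def gmn_forward(x, w, r=2.0):
    """Generalized mean neuron with complex inputs and weights: aggregate
    as the r-th generalized mean of the weighted inputs, then apply a
    split sigmoid to the real and imaginary parts separately."""
    agg = (np.sum((w * x) ** r) / x.size) ** (1.0 / r)
    sig = lambda t: 1.0 / (1.0 + np.exp(-t))
    return sig(agg.real) + 1j * sig(agg.imag)

rng = np.random.default_rng(0)
x = rng.normal(size=3) + 1j * rng.normal(size=3)   # complex input signal
w = rng.normal(size=3) + 1j * rng.normal(size=3)   # complex weights
print(gmn_forward(x, w))        # r=2 gives a quadratic-mean aggregation
```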

Estimation of Time-Varying Linear Regression with Unknown Time-Volatility via Continuous Generalization of the Akaike Information Criterion

The problem of estimating time-varying regression is inevitably concerned with the necessity to choose the appropriate level of model volatility, ranging from the full stationarity of instant regression models to their absolute independence of each other. In the stationary case the number of regression coefficients to be estimated equals that of the regressors, whereas the absence of any smoothness assumptions augments the dimension of the unknown vector by a factor of the time-series length. The Akaike Information Criterion is a commonly adopted means of adjusting a model to a given data set within a succession of nested parametric model classes, but its crucial restriction is that the classes are rigidly defined by the growing integer-valued dimension of the unknown vector. To make the Kullback information maximization principle underlying the classical AIC applicable to the problem of time-varying regression estimation, we extend it to a wider class of data models in which the dimension of the parameter is fixed, but the freedom of its values is softly constrained by a family of continuously nested a priori probability distributions.
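
For reference, the classical criterion being generalized selects, among nested classes indexed by the integer dimension k of the parameter vector, the model minimizing

```latex
\mathrm{AIC} = -2\ln\hat{L} + 2k,
```

where \hat{L} is the maximized likelihood within the class; the paper replaces the discrete index k by a continuously parametrized family of priors on a parameter of fixed dimension.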

Screening Wheat Parents of Mapping Population for Heat and Drought Tolerance, Detection of Wheat Genetic Variation

To evaluate the genetic variation of wheat (Triticum aestivum) affected by heat and drought stress, eight Australian wheat genotypes that are parents of Doubled Haploid (DH) mapping populations were studied at the vegetative stage. The water stress experiment was conducted at 65% field capacity in a growth room, and the heat stress experiment was conducted in the research field under irrigation over summer. Results show that water stress decreased dry shoot weight and RWC but increased osmolarity and mean Fv/Fm values in all varieties except Krichauff. Krichauff and Kukri had the maximum RWC under drought stress. The Trident variety showed the maximum WUE, osmolarity (610 mM/kg), dry matter, quantum yield and Fv/Fm (0.815) under water stress conditions. The recovery of quantum yield was apparent between 4 and 7 days after stress in all varieties; nevertheless, increasing water stress after that led to a strong decrease in quantum yield. There was genetic variation in leaf pigment content among varieties under heat stress. Heat stress significantly decreased the total chlorophyll content, measured by SPAD. Krichauff had the maximum anthocyanin content (2.978 A/g FW), chlorophyll a+b (2.001 mg/g FW) and chlorophyll a (1.502 mg/g FW). The maximum chlorophyll b (0.515 mg/g FW) and carotenoid (0.234 mg/g FW) contents belonged to Kukri. The quantum yield of all varieties decreased significantly when the air temperature increased from 28 °C to 36 °C over 6 days. However, the recovery of quantum yield was apparent after the 8th day in all varieties, and the maximum decrease and recovery in quantum yield were observed in Krichauff. The drought- and heat-tolerant and moderately tolerant wheat genotypes included Trident, Krichauff, Kukri and RAC875, while Molineux, Berkut and Excalibur were clustered into the most sensitive and moderately sensitive groups. Finally, the results show that there was significant genetic variation among the eight varieties studied under heat and water stress.

MDA of Hexagonal Honeycomb Plates Used for Space Applications

The purpose of this paper is to perform a multidisciplinary design and analysis (MDA) of honeycomb panels used in satellite structural design. All the analysis is based on clamped-free boundary conditions. In the present work, detailed finite element models for honeycomb panels are developed and analysed, and experimental tests were carried out on a honeycomb specimen in order to compare the modal analysis obtained by the finite element method with the existing equivalent approaches. The results show good agreement between the finite element analysis, the equivalent models and the test results: the difference is less than 4% for the first two frequencies and less than 10% for the third frequency. The equivalent model presented in this analysis therefore yields good accuracy. Moreover, the investigations carried out in this research concern the modal analysis of the honeycomb plate under several aspects, including variation of the structural geometry, by studying the influence of the dimensional parameters on the modal frequencies, and variation of the core and skin materials of the honeycomb. The various results obtained in this paper are promising and show that the geometric parameters and the type of material affect the modal frequencies of the honeycomb plate.
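
As background for the equivalent-plate approach (standard thin-plate theory, not the paper's detailed sandwich model), the natural frequencies of a homogeneous rectangular plate of side a, thickness h, density ρ and bending stiffness D scale as

```latex
f_{ij} = \frac{\lambda_{ij}^{2}}{2\pi a^{2}}\sqrt{\frac{D}{\rho h}},
\qquad D = \frac{E h^{3}}{12\,(1-\nu^{2})},
```

where \lambda_{ij} is a dimensionless mode coefficient fixed by the boundary conditions (clamped-free here); in the honeycomb equivalent model, D and \rho h are replaced by the effective bending stiffness and mass per unit area of the sandwich.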

A Novel Machining Signal Filtering Technique: Z-notch Filter

A filter is used to remove undesirable frequency information from a dynamic signal. This paper shows that the Z-notch filtering technique can be applied to remove noise nuisance from a machining signal. In machining, the noise components were identified from the sound produced by the operation of the machine components themselves, such as the hydraulic system, the motor and the machine environment. By correlating the noise components with the measured machining signal, the components of interest in the measured machining signal, which are less affected by noise, can be extracted. Thus, the filtered signal is more reliable to analyse in terms of noise content than the unfiltered signal. Significantly, the I-kaz method, which comprises a three-dimensional graphical representation and the I-kaz coefficient Z∞, could differentiate between the filtered and the unfiltered signal: a larger scattering space and a higher value of Z∞ indicate that the signal is heavily contaminated by noise. This method can be utilised as a proactive tool for evaluating the noise content in a signal. The evaluation of noise content, as well as its elimination, is very important, especially for machining fault diagnosis. The Z-notch filtering technique was reliable in extracting the noise components from the measured machining signal with high efficiency. Even though the measured signal was exposed to strong noise disruption, the signal generated by the interaction between the cutting tool and the workpiece could still be acquired. Therefore, noise interference that could alter the original signal features and consequently degrade the useful sensory information can be eliminated.
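
The Z-notch filter itself is specific to this work; purely as a generic illustration of notch filtering a machining-like signal, the sketch below removes one assumed noise frequency using SciPy's standard IIR notch design. The sampling rate, frequencies and amplitudes are all assumptions.

```python
import numpy as np
from scipy.signal import iirnotch, filtfilt

fs = 10_000.0            # sampling rate in Hz, assumed
f_noise = 50.0           # noise frequency to remove in Hz, assumed
t = np.arange(0, 1.0, 1 / fs)

# toy "machining" signal: a tool-workpiece component plus mains-like noise
signal = np.sin(2 * np.pi * 380 * t) + 0.8 * np.sin(2 * np.pi * f_noise * t)

b, a = iirnotch(w0=f_noise, Q=30.0, fs=fs)   # narrow notch at f_noise
filtered = filtfilt(b, a, signal)            # zero-phase filtering
```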

Customer Knowledge and Service Development, the Web 2.0 Role in Co-production

The paper is concerned with the relationships between SSME and ICTs and focuses on the role of Web 2.0 tools in the service development process. The research presented aims to explore how collaborative technologies can support and improve service processes, highlighting customer centrality and value co-production. The core idea of the paper is the centrality of user participation, with collaborative technologies as enabling factors; Wikipedia is analyzed as an example. The result of this analysis is the identification and description of a pattern characterising specific services in which users, by means of web tools, collaborate as value co-producers during the service process. The pattern of collaborative co-production concerning several categories of services, including knowledge-based services, is then discussed.

MC and IC – What Is the Relationship?

MC (Management Control) and IC (Internal Control): what is the relationship? This paper is an empirical study of the definitions of MC and IC. In the wider interpretations of the terms Internal Control and Management Control, attention is focused not only on the financial aspects but also on the soft aspects of the business, such as culture, behaviour, standards and values. The narrower interpretations of Management Control focus mainly on the hard, financial aspects of business operation. The definitions of Management Control and Internal Control are often used interchangeably, and the results of this empirical study reveal that Management Control is part of Internal Control; there is no causal link between the two concepts. Based on the interpretation of the respondents, the term Management Control has moved from a broad term to a more limited one concerned with the soft aspects of influencing behaviour, performance measurement, incentives and culture. This paper is an exploratory study based on qualitative research and on a qualitative matrix-method analysis of the thematic definitions of the terms Management Control and Internal Control.

Unsteady Flow between Two Concentric Rotating Spheres along with Uniform Transpiration

In this study, a numerical solution for the unsteady flow between two concentric rotating spheres with suction and blowing at their boundaries is presented. The spheres rotate about a common axis with constant angular velocities. The Navier-Stokes equations are solved by the finite difference method with an implicit scheme. The resulting flow patterns are presented for various values of the flow parameters, including the rotational Reynolds number Re and the blowing/suction Reynolds number Rew. Viscous torques at the inner and outer spheres are also calculated. It is seen that increasing the amount of suction and blowing decreases the size of the eddies generated in the annulus.
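
The full Navier-Stokes solver is not reproduced here; as a minimal sketch of the implicit finite-difference idea on a 1-D diffusion model problem (grid size, viscosity, time step and initial profile are all assumed), each time step solves one linear system:

```python
import numpy as np

# Backward-Euler step for u_t = nu * u_rr on a radial grid between the
# spheres; the paper treats the full Navier-Stokes equations, not this
# model problem.
N, nu, dt = 51, 1e-3, 0.1
r = np.linspace(1.0, 2.0, N)          # radial grid, assumed radii
dr = r[1] - r[0]
u = np.sin(np.pi * (r - 1.0))         # assumed initial velocity profile

alpha = nu * dt / dr**2
A = np.diag((1 + 2 * alpha) * np.ones(N)) \
  + np.diag(-alpha * np.ones(N - 1), 1) \
  + np.diag(-alpha * np.ones(N - 1), -1)
A[0, :] = 0.0
A[-1, :] = 0.0
A[0, 0] = A[-1, -1] = 1.0             # Dirichlet (no-slip style) boundaries

for _ in range(100):                  # implicit time marching
    rhs = u.copy()
    rhs[0] = rhs[-1] = 0.0            # boundary values
    u = np.linalg.solve(A, rhs)
```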

A Theoretical Framework for Rural Tourism Motivation Factors

Rural tourism has many economic, environmental, and socio-cultural benefits. However, compared to urban tourism, the development of rural tourism faces several challenges in addition to its inherent disadvantages. The aim of this study is to design a model of the factors affecting the motivations of rural tourists, in an attempt to improve the understanding of rural tourism motivation for the development of this form of tourism. The proposed model is based on a sound theoretical framework. It was designed following a literature review of tourism motivation theoretical frameworks and of rural tourism motivation factors. The tourism motivation theoretical framework that best fitted all the rural tourism motivation factors was then chosen as the basis for the proposed model. This study found that the push-pull tourism motivation framework and the theory of inner- and outer-directed values are the most adequate theoretical frameworks for modeling rural tourism motivation.

Approximate Range-Sum Queries over Data Cubes Using Cosine Transform

In this research, we propose to use the discrete cosine transform to approximate the cumulative distributions of data cube cells' values. The cosine transform is known to have a good energy compaction property and can thus approximate data distribution functions with a small number of coefficients. The derived estimator is accurate and easy to update. We perform experiments to compare its performance with a well-known technique, the (Haar) wavelet. The experimental results show that the cosine transform performs much better than the wavelet in estimation accuracy, speed, space efficiency, and ease of update.
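
A minimal sketch of the underlying idea (keep a few DCT coefficients of the cumulative distribution and answer range-sum queries from the reconstruction); the 1-D toy cube, coefficient budget and query range are assumptions:

```python
import numpy as np
from scipy.fft import dct, idct

rng = np.random.default_rng(0)
cells = rng.poisson(5.0, size=256).astype(float)  # toy 1-D data cube
cum = np.cumsum(cells)                            # cumulative distribution

coef = dct(cum, norm='ortho')
coef[16:] = 0.0                  # keep 16 coefficients (energy compaction)
approx = idct(coef, norm='ortho')

lo, hi = 40, 99                  # range-sum query over cells [lo, hi]
est = approx[hi] - (approx[lo - 1] if lo > 0 else 0.0)
true = cum[hi] - (cum[lo - 1] if lo > 0 else 0.0)
print(f"estimate {est:.1f} vs true {true:.1f}")
```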

Navigation Patterns Mining Approach based on Expectation Maximization Algorithm

Web usage mining algorithms have been widely utilized for modeling user web navigation behavior. In this study we propose a model for mining users' navigation patterns. The model builds a user model based on the expectation-maximization (EM) algorithm, which is used in statistics for finding maximum likelihood estimates of parameters in probabilistic models that depend on unobserved latent variables. The experimental results show that as the number of clusters decreases, the log-likelihood converges toward lower values, and that the probability of the largest cluster decreases while the number of clusters increases in each treatment.
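
scikit-learn's GaussianMixture, which is fitted by EM, gives a compact way to reproduce the shape of this experiment; the synthetic session features below are assumptions, not the paper's data.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# toy session features, e.g. normalized page-visit counts per user session
X = np.vstack([rng.normal(0, 1, (100, 4)), rng.normal(3, 1, (80, 4))])

for k in (2, 3, 5):
    gm = GaussianMixture(n_components=k, random_state=0).fit(X)  # EM fit
    ll = gm.score(X) * len(X)         # total log-likelihood of the data
    largest = gm.weights_.max()       # probability of the largest cluster
    print(f"k={k}: log-likelihood={ll:.1f}, largest-cluster weight={largest:.2f}")
```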

A New Decision Making Approach based on Possibilistic Influence Diagrams

This paper proposes a new decision making approach based on quantitative possibilistic influence diagrams, which are an extension of standard influence diagrams in the possibilistic framework. We treat in particular the case where several expert opinions relative to value nodes are available. An initial expert assigns confidence degrees to the other experts and fixes a similarity threshold that the provided possibility distributions should respect. To illustrate our approach, an evaluation algorithm for these multi-source possibilistic influence diagrams is also proposed.
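
The paper's similarity measure is not specified in the abstract; the sketch below assumes one simple choice (one minus the normalized Manhattan distance between possibility distributions) purely to illustrate the threshold-based filtering of expert opinions. All distributions and the threshold are assumptions.

```python
import numpy as np

def similarity(pi1, pi2):
    """A simple similarity between two possibility distributions:
    1 minus the normalized Manhattan distance (an assumption; the
    paper's exact measure is not given in the abstract)."""
    pi1, pi2 = np.asarray(pi1, float), np.asarray(pi2, float)
    return 1.0 - np.abs(pi1 - pi2).mean()

# initial expert's possibility distribution over a value node's states
pi_ref = [1.0, 0.7, 0.2]
experts = {"e1": [1.0, 0.6, 0.3], "e2": [0.1, 1.0, 0.9]}
threshold = 0.8                         # fixed by the initial expert

accepted = {name: pi for name, pi in experts.items()
            if similarity(pi_ref, pi) >= threshold}
print(accepted)                         # only sufficiently similar opinions
```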

Water Consumption in Spanish Households

Water has always been a very precious resource. However, many of us do not fully understand or appreciate water's value until there is a shortage. We analyze water consumption in Spanish households in order to understand consumption behavior according to the number of inhabitants of the house. In this research, a survey of users was carried out, asking about the water consumption of their households. The aim of this paper is to obtain a reference consumption value for Spanish households that helps consumers check their bills and recognize whether their consumption is excessive; some tips to decrease consumption are also included.

Classifying Students for E-Learning in Information Technology Course Using ANN

The objective of this research is to select the most accurate classification model, using a neural network technique, for filtering potential students who enroll in the Information Technology course via e-learning at Suan Sunandha Rajabhat University. It is designed to help students select appropriate courses by themselves. The results showed that the most accurate model was the one selected by 100-fold cross-validation, achieving 73.58% accuracy.
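
A minimal sketch of the experiment's shape with scikit-learn (a synthetic stand-in for the student records, a small MLP, and 10 folds instead of 100 for brevity; all of these are assumptions):

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(42)
# toy stand-in for student records: features and enroll/not-enroll labels
X = rng.normal(size=(200, 6))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)

clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
scores = cross_val_score(clf, X, y, cv=10)     # k-fold cross-validation
print(f"mean accuracy: {scores.mean():.2%}")
```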

Electric Load Forecasting Using Genetic Based Algorithm, Optimal Filter Estimator and Least Error Squares Technique: Comparative Study

This paper presents a performance comparison of three estimation techniques used for peak load forecasting in power systems: genetic algorithms (GA), least error squares (LS), and least absolute value filtering (LAVF). The problem is formulated as an estimation problem, and different forecasting models are considered. Actual recorded data are used to perform the study, and the performance of the three optimal estimation techniques is examined. The advantages of each algorithm are reported and discussed.
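
As an illustration of the least error squares technique on an assumed linear-plus-seasonal peak-load model (the GA and LAV estimators are not sketched, and the synthetic series below is not the paper's data):

```python
import numpy as np

# toy weekly peak-load series: trend + annual seasonality + noise, assumed
t = np.arange(104, dtype=float)
load = 500 + 2.0 * t + 40 * np.sin(2 * np.pi * t / 52) \
       + np.random.default_rng(3).normal(0, 10, t.size)

# design matrix for a linear-plus-seasonal forecasting model
A = np.column_stack([np.ones_like(t), t,
                     np.sin(2 * np.pi * t / 52),
                     np.cos(2 * np.pi * t / 52)])
theta, *_ = np.linalg.lstsq(A, load, rcond=None)   # LS parameter estimate
forecast = A @ theta                               # fitted peak loads
print(theta)
```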