Abstract: New Growth Theory helps us make sense of the
ongoing shift from a resource-based economy to a knowledge-based
economy. It underscores the point that the economic processes which
create and diffuse new knowledge are critical to shaping the growth
of nations, communities and individual firms. In all too many
contributions to New (Endogenous) Growth Theory – though not in
all – central reference is made to 'a stock of knowledge', a 'stock of
ideas', etc., this variable featuring centre-stage in the analysis. Yet it
is immediately apparent that this is far from being a crystal clear
concept. The difficulty and uncertainty of capturing the value associated with knowledge is a real problem. The intent of this paper is to introduce new thinking and theorizing about knowledge and its measurability in New Growth Theory. Moreover, the study aims to synthesize various strains of the literature with a practical bearing on the concept of knowledge. Through the institutional framework found within NGT, the knowledge concept can be measured indirectly. Institutions matter because they shape the environment for the production and employment of new knowledge.
Abstract: Hazardous material transportation by road carries an inherent risk of accidents causing loss of life, grievous injuries, property losses and environmental damage. The most common type of hazmat road accident is the release (78%) of hazardous substances, followed by fires (28%), explosions (14%) and vapour/gas clouds (6%).
The paper first discusses the probable 'Impact Zones' likely to be caused by one flammable chemical (LPG) and one toxic chemical (ethylene oxide) being transported through a sizable segment of a State Highway connecting three notified industrial zones in Surat district in Western India, housing 26 MAH industrial units. Three 'hotspots' were identified along the highway segment depending on the particular chemical traffic and the population distribution within 500 meters on either side. The thermal radiation and explosion
overpressure have been calculated for LPG / Ethylene Oxide BLEVE
scenarios along with toxic release scenario for ethylene oxide.
Besides, the dispersion calculations for ethylene oxide toxic release
have been made for each 'hotspot' location and the impact zones
have been mapped for the LOC concentrations. Subsequently, the
maximum Initial Isolation and the protective zones were calculated
based on ERPG-3 and ERPG-2 values of ethylene oxide respectively
which are estimated taking the worst case scenario under worst
weather conditions. The data analysis will help the local administration in capacity building with respect to rescue/evacuation and medical preparedness, and will provide quantitative inputs to augment the District Offsite Emergency Plan document.
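The explosion-overpressure screening mentioned above typically starts from a TNT-equivalence step. The sketch below shows that step only, with typical textbook values (the 3% yield factor, the LPG heat of combustion and the release mass are illustrative assumptions, not the paper's data); converting the scaled distance to an overpressure requires a published blast correlation not reproduced here.

```python
# Illustrative TNT-equivalence screening step for a vapour cloud explosion.
# Constants are typical textbook values, not the paper's inputs.

def tnt_equivalent_mass(mass_kg, heat_of_combustion_mj, yield_factor=0.03):
    """TNT-equivalent mass W = eta * m * Hc / H_TNT (H_TNT ~ 4.68 MJ/kg)."""
    H_TNT = 4.68  # MJ/kg, blast energy of TNT
    return yield_factor * mass_kg * heat_of_combustion_mj / H_TNT

def scaled_distance(r_m, w_tnt_kg):
    """Hopkinson-Cranz scaled distance Z = r / W**(1/3), in m/kg^(1/3)."""
    return r_m / w_tnt_kg ** (1.0 / 3.0)

# Assumed example: 15 t LPG release (Hc ~ 46 MJ/kg), receptor at 500 m.
W = tnt_equivalent_mass(15000.0, 46.0)
Z = scaled_distance(500.0, W)
print(f"W_TNT = {W:.0f} kg, Z = {Z:.1f} m/kg^(1/3)")
```

The scaled distance Z would then be looked up in a blast chart (e.g. Kingery-Bulmash curves) to read off the side-on overpressure at the receptor.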
Abstract: This paper presents ageing experiments controlled by the evolution of junction parameters. The deterioration of the device is related to high-injection effects which modify the transport mechanisms in the space-charge region of the junction. Physical phenomena linked to the degradation of junction parameters that affect device reliability are reported and discussed. We have used a method based on numerical analysis of the experimental current-voltage characteristic of the junction in order to extract the electrical parameters. The simultaneous follow-up of the evolution of the series resistance and of the transition voltage allows us to introduce a new parameter for reliability evaluation.
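The abstract does not specify the numerical extraction procedure, so the sketch below uses a classical textbook technique for recovering the series resistance and ideality factor from a forward I-V characteristic, the dV/d(ln I) = n·Vt + Rs·I linearisation, on synthetic data; the device parameters are assumptions for illustration.

```python
# Sketch of series-resistance / ideality-factor extraction from an I-V
# curve via the classical dV/d(ln I) = n*Vt + Rs*I linearisation. This is
# a generic method, not necessarily the authors' exact procedure.
import math

VT = 0.02585                              # thermal voltage at ~300 K (V)
N_TRUE, RS_TRUE, IS = 1.8, 2.0, 1e-12     # assumed "device" parameters

# Synthesise forward-bias data from V(I) = n*Vt*ln(I/Is) + Rs*I
currents = [10 ** (-6 + 0.05 * k) for k in range(60)]   # 1 uA .. ~1 mA
voltages = [N_TRUE * VT * math.log(i / IS) + RS_TRUE * i for i in currents]

# Central-difference derivative dV/d(ln I) at interior points
x, y = [], []
for k in range(1, len(currents) - 1):
    dv = voltages[k + 1] - voltages[k - 1]
    dlni = math.log(currents[k + 1]) - math.log(currents[k - 1])
    x.append(currents[k])   # I
    y.append(dv / dlni)     # dV/d(ln I)

# Least-squares line y = a + b*x  ->  intercept a = n*Vt, slope b = Rs
m = len(x)
sx, sy = sum(x), sum(y)
sxx = sum(v * v for v in x)
sxy = sum(u * v for u, v in zip(x, y))
b = (m * sxy - sx * sy) / (m * sxx - sx * sx)
a = (sy - b * sx) / m
print(f"extracted n = {a / VT:.3f}, Rs = {b:.3f} ohm")
```

Tracking the fitted Rs over successive ageing steps is one way to follow the degradation trend the abstract describes.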
Abstract: The amount of dissolved oxygen in a river has a great direct effect on aquatic macroinvertebrates, and this in turn indirectly influences the regional ecosystem. In this paper we attempt to predict dissolved oxygen in rivers by employing a simple fuzzy logic model, the Wang-Mendel method. This model uses only previous records to estimate upcoming values. For this purpose, daily and hourly records of eight stations in the Au Sable watershed in Michigan, United States are employed over periods of 12 years and 50 days, respectively. Calculations indicate that for long-period prediction it is better to increase the input intervals, but for filling in missing data it is advisable to decrease the interval. Increasing the partitioning of the input and output features has little influence on accuracy but makes the model very time-consuming. Increasing the number of input data behaves similarly to increasing the number of partitions. A large amount of training data does not essentially improve accuracy, so an optimum training length should be selected.
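The Wang-Mendel procedure named above can be sketched compactly. The version below is an assumed single-input simplification (the paper presumably feeds several lagged records): triangular partitions of the variable ranges, one rule generated per data pair with conflict resolution by rule degree, and centroid-style defuzzification.

```python
# Minimal single-input Wang-Mendel sketch (assumed simplification of the
# paper's model): triangular partitions, rule generation from data pairs,
# highest-degree conflict resolution, centroid-style defuzzification.
def tri(v, center, width):
    """Triangular membership function."""
    return max(0.0, 1.0 - abs(v - center) / width)

def wang_mendel_fit(pairs, lo, hi, k=7):
    width = (hi - lo) / (k - 1)
    centers = [lo + i * width for i in range(k)]
    rules = {}                    # antecedent region -> (degree, consequent)
    for x, y in pairs:
        ix = max(range(k), key=lambda i: tri(x, centers[i], width))
        iy = max(range(k), key=lambda i: tri(y, centers[i], width))
        deg = tri(x, centers[ix], width) * tri(y, centers[iy], width)
        if ix not in rules or deg > rules[ix][0]:
            rules[ix] = (deg, iy)  # keep the highest-degree conflicting rule
    return centers, width, rules

def wang_mendel_predict(x, centers, width, rules):
    num = den = 0.0
    for ix, (_, iy) in rules.items():
        mu = tri(x, centers[ix], width)
        num += mu * centers[iy]   # weight each rule's consequent center
        den += mu
    return num / den if den else None

# Toy demonstration: learn the identity map y = x on [0, 1]
train = [(i / 50, i / 50) for i in range(51)]
model = wang_mendel_fit(train, 0.0, 1.0, k=7)
pred = wang_mendel_predict(0.37, *model)
```

The abstract's trade-off between the number of partitions `k` and run time corresponds directly to the size of the rule table generated here.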
Abstract: Data Envelopment Analysis (DEA) is a methodology
that computes efficiency values for decision making units (DMU) in a
given period by comparing the outputs with the inputs. In many cases, there is a time lag between the consumption of inputs and the production of outputs. For a long-term research project, it is hard to avoid this production lead-time phenomenon. The time lag effect should be considered in evaluating the performance of organizations.
This paper suggests a model to calculate efficiency values for the
performance evaluation problem with time lag. In the experimental
part, the proposed methods are compared with the CCR model and an existing time lag model using the data set of the 21st Century Frontier R&D Program, a long-term national R&D program of Korea.
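In the general multi-input, multi-output case, the CCR score requires solving a linear program per DMU. The sketch below shows only the degenerate single-input/single-output case, where the CCR score reduces to each DMU's output/input ratio relative to the best ratio; the data are hypothetical.

```python
# DEA efficiency sketch for the special single-input/single-output case,
# where the CCR score reduces to output/input divided by the best such
# ratio. The general case needs a linear program per DMU (not shown).
def ccr_single(inputs, outputs):
    ratios = [o / i for i, o in zip(inputs, outputs)]
    best = max(ratios)
    return [r / best for r in ratios]

# Hypothetical DMUs: input = project budget, output = project results
budgets = [10.0, 20.0, 30.0]
results = [ 5.0, 20.0, 15.0]
scores = ccr_single(budgets, results)
```

A time-lag variant in the spirit of the paper would pair the input consumed in period t with the output realised in period t + lag before computing the ratios.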
Abstract: This paper describes a text mining technique for automatically extracting association rules from collections of textual documents. The technique is called Extracting Association Rules from Text (EART). It depends on keyword features to discover association rules amongst the keywords labeling the documents. In this work, the EART system ignores the order in which the words occur and focuses instead on the words and their statistical distributions in documents. The main contributions of the technique are that it integrates XML technology with an Information Retrieval scheme (TFIDF) (for keyword/feature selection, which automatically selects the most discriminative keywords for use in association rule generation) and uses a data mining technique for association rule discovery. It consists of three phases: the Text Preprocessing phase (transformation, filtration, stemming and indexing of the documents), the Association Rule Mining (ARM) phase (applying our designed algorithm for Generating Association Rules based on a Weighting scheme, GARW) and the Visualization phase (visualization of results). Experiments were applied to web-page news documents related to the outbreak of the bird flu disease. The extracted association rules contain important features and describe the informative news included in the document collection. The performance of the EART system was compared with that of another system that uses the Apriori algorithm, in terms of execution time and evaluation of the extracted association rules.
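The TFIDF keyword-selection step can be sketched in a few lines. This is the generic weighting scheme only; EART's XML integration, preprocessing (stemming, filtration) and GARW weighting are not reproduced, and the three tiny documents are invented for illustration.

```python
# Minimal TF-IDF keyword scoring sketch (generic scheme; EART's exact
# weighting, preprocessing and XML handling are not reproduced here).
import math
from collections import Counter

docs = [
    "bird flu outbreak spreads in asia",
    "new vaccine for bird flu tested",
    "stock markets rally on trade news",
]

tokenised = [d.split() for d in docs]
n_docs = len(tokenised)
# Document frequency: in how many documents each word appears
df = Counter(w for doc in tokenised for w in set(doc))

def tfidf(doc_tokens):
    tf = Counter(doc_tokens)
    return {w: (tf[w] / len(doc_tokens)) * math.log(n_docs / df[w])
            for w in tf}

scores = tfidf(tokenised[0])
# Words unique to this document ("outbreak", "spreads") outrank words
# shared across documents ("bird", "flu").
```

Only the keywords surviving such a discriminative-score cut-off would then be fed to the association-rule generation step.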
Abstract: This paper presents a Particle Swarm Optimization
(PSO) method for determining the optimal parameters of a first-order
controller for TCP/AQM system. The model TCP/AQM is described
by a second-order system with time delay. First, an analytical approach, based on the D-decomposition method and Kharitonov's lemma, is used to determine the stabilizing regions of a first-order controller. Second, the optimal parameters of the controller are
obtained by the PSO algorithm. Finally, the proposed method is
implemented in the Network Simulator NS-2 and compared with the
PI controller.
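The PSO search itself can be sketched independently of the control problem. The sketch below is a standard global-best PSO minimising a toy two-parameter cost; in the paper's setting the objective would instead be a control-performance cost evaluated over the stabilizing region, which is not reproduced here.

```python
# Minimal global-best PSO sketch on a toy objective. The paper's objective
# (a control cost for the TCP/AQM loop) is replaced by a simple quadratic.
import random

def pso(objective, dim, n_particles=30, iters=200, bounds=(-5.0, 5.0),
        w=0.7, c1=1.5, c2=1.5, seed=1):
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # personal bests
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy cost with optimum at (1, 2) - a stand-in for an ISE/IAE control cost.
best, best_val = pso(lambda p: (p[0] - 1) ** 2 + (p[1] - 2) ** 2, dim=2)
```

In the controller-tuning context, each particle position would encode the first-order controller's gains, and infeasible (non-stabilizing) positions would be penalised or clipped to the region found analytically.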
Abstract: Clean air in a subway station is important to passengers. Platform Screen Doors (PSDs) can improve indoor air quality in the subway station; however, the air quality in the subway tunnel is degraded. The subway tunnel has a high CO2 concentration and a high indoor particulate matter (PM) value. The Indoor Air Quality (IAQ) level in the subway environment degrades as the frequency of train operation and the number of trains increase. The ventilation systems of the subway tunnel need improvements to provide better air quality. Numerical analyses can be effective tools to analyze the performance of subway twin-track tunnel ventilation systems. An existing twin-track tunnel in the metropolitan Seoul subway system is chosen for the numerical simulations. The ANSYS CFX software is used for unsteady computations of the airflow inside the twin-track tunnel when the train moves. The airflow inside the tunnel is simulated when one train runs and when two trains run at the same time in the tunnel. The piston effect inside the tunnel is analyzed when all shafts function as natural ventilation shafts. The air supplied through the shafts is mixed with the polluted air in the tunnel, and the polluted air is exhausted by the mechanical ventilation shafts. The supplied and discharged air volumes are balanced when only one train runs in the twin-track tunnel. The pollutant level in the tunnel is high when two trains run simultaneously in opposite directions and all shafts function as natural shafts, as is the case when there is no electrical power supply to the shafts. The remaining polluted air inside the tunnel enters the station platform when the doors are opened.
Abstract: A complex valued neural network is a neural network
which consists of complex valued input and/or weights and/or thresholds
and/or activation functions. Complex-valued neural networks
have been widening the scope of applications not only in electronics
and informatics, but also in social systems. One of the most important
applications of the complex valued neural network is in signal
processing. In neural networks, the generalized mean neuron (GMN) model is often discussed and studied. The GMN includes a new aggregation function based on the concept of the generalized mean of all the inputs to the neuron. This paper aims to present exhaustive results of using the generalized mean neuron model in a complex-valued neural network that uses the back-propagation algorithm (called Complex-BP) for learning. Our experimental results demonstrate the effectiveness of a generalized mean neuron model in the complex plane for signal processing over a real-valued neural network. We report various observations, such as the effects of the learning rate, the ranges of the randomly selected initial weights, the error functions used, and the number of iterations required for error convergence on a generalized mean neural network model. Some inherent properties of this complex back-propagation algorithm are also studied and discussed.
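The generalized-mean aggregation at the heart of the GMN can be stated in a few lines. The sketch below shows the real-valued form for clarity (the paper's complex-valued case and the learning of the exponent alongside the weights are not reproduced).

```python
# Generalized mean aggregation sketch: M_p(x) = (sum(w_i * x_i**p))**(1/p).
# Real-valued form for clarity; the GMN learns p together with the weights,
# and the paper extends this to the complex plane (neither shown here).
def generalized_mean(xs, ws, p):
    s = sum(w * x ** p for x, w in zip(xs, ws))
    return s ** (1.0 / p)

xs = [1.0, 4.0, 4.0]
ws = [1 / 3, 1 / 3, 1 / 3]
arith = generalized_mean(xs, ws, 1.0)    # p = 1: arithmetic mean, 3.0
harm  = generalized_mean(xs, ws, -1.0)   # p = -1: harmonic mean, 2.0
```

Sweeping `p` interpolates between min-like and max-like aggregation, which is what gives the GMN its extra flexibility over a plain weighted sum.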
Abstract: We present a new method for the fully automatic 3D
reconstruction of the coronary artery centerlines, using two X-ray
angiogram projection images from a single rotating monoplane
acquisition system. During the first stage, the input images are
smoothed using curve evolution techniques. Next, a simple yet
efficient multiscale method, based on the information of the Hessian
matrix, for the enhancement of the vascular structure is introduced.
Hysteresis thresholding using different image quantiles is used to
threshold the arteries. This stage is followed by a thinning procedure
to extract the centerlines. The resulting skeleton image is then pruned
using morphological and pattern recognition techniques to remove
non-vessel-like structures. Finally, edge-based stereo correspondence is solved using a parallel evolutionary optimization method based on symbiosis. The detected 2D centerlines, combined with disparity map information, allow the reconstruction of the 3D vessel centerlines. The proposed method has been evaluated on patient data sets.
Abstract: It is necessary to evaluate the condition of bridges and to strengthen bridges or parts of them. The reasons reinforcement may become necessary can be summarized as follows. First, a change in the use of a bridge could produce internal forces in a structural member that exceed the existing cross-sectional capacity. Second, bridges may also need reinforcement because of damage due to external factors, which reduces the cross-sectional resistance to external loads. Another factor that could be listed here is misdesign of some details affecting the safety of the bridge or parts of it. This article identifies the design demands of the Qing Shan bridge, an important urban bridge located on the Hegang-Nenjiang Road (Provincial Highway 303) in the Wudalianchi area of Heilongjiang Province, China. The investigation program included observing and evaluating the damage in the T-section concrete beams and the prestressed concrete box-girder sections, in addition to evaluating the overall state of the bridge, including the piers, abutments, bridge decks, wings, bearings, capping beams, joints, etc. The test results show that the general structural condition of the bridge is good. In T-beam span No. 10, a crack was observed extending upward along the ribbed T-beam and continuing into the T-beam flange, with widths varying between 0.1 mm and 0.4 mm, the maximum being about 0.4 mm. The flexural bending strength of the bridge needs to be improved, especially at the T-beam sections.
Abstract: In this paper, to optimize the "Characteristic Straight Line Method", which is used in soil displacement analysis, a "best estimate" of the geodetic leveling observations has been achieved by taking into account the concept of 'height systems'. This concept, and consequently the concept of "height", has been discussed in detail. In landslide dynamic analysis, the soil is considered as a mosaic of rigid blocks. The soil displacement has been monitored and analyzed by using the "Characteristic Straight Line Method". Its characteristic components have been defined and constructed from a "best estimate" of the topometric observations. In the measurement of elevation differences, we have used the most modern leveling equipment available. Observational procedures have also been designed to provide the most effective method of acquiring data. In addition, systematic errors which cannot be sufficiently controlled by instrumentation or observational techniques are minimized by applying appropriate corrections to the observed data: the level collimation correction minimizes the error caused by non-horizontality of the leveling instrument's line of sight for unequal sight lengths; the refraction correction is modeled to minimize the refraction error caused by temperature (density) variation of air strata; the rod temperature correction accounts for variation in the length of the leveling rod's Invar/LO-VAR® strip resulting from temperature changes; the rod scale correction ensures a uniform scale conforming to the international length standard; and the concept of 'height systems' is introduced, in which all types of height (orthometric, dynamic, normal, gravity correction, and equipotential surface) have been investigated. The "Characteristic Straight Line Method" is slightly more convenient than the "Characteristic Circle Method". It permits the evaluation of a displacement of very small magnitude, even when the displacement is an infinitesimal quantity.
The inclination of the landslide is given by the inverse of the distance from reference point O to the "Characteristic Straight Line". Its direction is given by the bearing of the normal directed from point O to the Characteristic Straight Line (Fig. 6). A "best estimate" of the topometric observations was used to measure the elevation of carefully selected points, before and after the deformation. Gross errors have been eliminated by statistical analyses and by comparing the heights within local neighborhoods. The results of a test using an area where very interesting land-surface deformation occurs are reported. Monitoring with different options and a qualitative comparison of results based on a sufficient number of check points are presented.
Abstract: The problem of estimating time-varying regression is
inevitably concerned with the necessity to choose the appropriate
level of model volatility - ranging from the full stationarity of instant
regression models to their absolute independence of each other. In the
stationary case the number of regression coefficients to be estimated
equals that of regressors, whereas the absence of any smoothness
assumptions augments the dimension of the unknown vector by the
factor of the time-series length. The Akaike Information Criterion
is a commonly adopted means of adjusting a model to the given
data set within a succession of nested parametric model classes,
but its crucial restriction is that the classes are rigidly defined by
the growing integer-valued dimension of the unknown vector. To
make the Kullback information maximization principle underlying the
classical AIC applicable to the problem of time-varying regression
estimation, we extend it to a wider class of data models in which
the dimension of the parameter is fixed, but the freedom of its values
is softly constrained by a family of continuously nested a priori
probability distributions.
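The classical AIC mechanism the paper generalises can be illustrated on a toy regression: fit nested polynomial classes of growing integer dimension and pick the one minimising the Gaussian AIC, n·ln(RSS/n) + 2k. The data and the naive normal-equations solver below are illustrative only; the paper replaces the integer dimension with a continuum of nested priors, which is not shown.

```python
# AIC model-order selection sketch on a toy regression (classical AIC with
# integer-valued model dimension, the setting the paper generalises).
import math

def fit_poly(xs, ys, degree):
    """Naive least-squares polynomial fit via the normal equations."""
    n, k = len(xs), degree + 1
    A = [[x ** j for j in range(k)] for x in xs]
    # Augmented normal equations A^T A c = A^T y, solved by Gauss-Jordan
    M = [[sum(A[r][i] * A[r][j] for r in range(n)) for j in range(k)]
         + [sum(A[r][i] * ys[r] for r in range(n))] for i in range(k)]
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(k):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * b for a, b in zip(M[r], M[col])]
    coeffs = [M[i][k] / M[i][i] for i in range(k)]
    rss = sum((ys[r] - sum(c * xs[r] ** j for j, c in enumerate(coeffs))) ** 2
              for r in range(n))
    return coeffs, rss

def aic(rss, n, k):
    """Gaussian AIC up to an additive constant: n*ln(RSS/n) + 2k."""
    return n * math.log(rss / n) + 2 * k

# Noisy straight line: AIC should prefer degree 1 over degrees 0, 2, 3
xs = [i / 10 for i in range(30)]
ys = [2.0 + 3.0 * x + 0.05 * (-1) ** i for i, x in enumerate(xs)]
best_deg = min(range(4),
               key=lambda d: aic(fit_poly(xs, ys, d)[1], len(xs), d + 1))
```

The paper's point is precisely that this discrete ladder of model classes is too rigid for time-varying regression, motivating the softly constrained continuum of priors.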
Abstract: To evaluate the genetic variation of wheat (Triticum aestivum) affected by heat and drought stress in eight Australian wheat genotypes that are parents of Doubled Haploid (DH) mapping populations at the vegetative stage, a water stress experiment was conducted at 65% field capacity in a growth room. A heat stress experiment was conducted in the research field under irrigation over summer. Results show that water stress decreased dry shoot weight
and RWC but increased osmolarity and means of Fv/Fm values in all
varieties except for Krichauff. Krichauff and Kukri had the
maximum RWC under drought stress. The Trident variety showed the maximum WUE, osmolarity (610 mM/kg), dry matter, quantum yield and Fv/Fm (0.815) under water stress conditions. However, the recovery of quantum yield was apparent between 4 and 7 days after stress in all varieties. Nevertheless, further increase in water stress after that led to a strong decrease in quantum yield. There was genetic variation in leaf pigment content among varieties under heat stress.
Heat stress significantly decreased the total chlorophyll content, as measured by SPAD. Krichauff had the maximum values of anthocyanin
content (2.978 A/g FW), chlorophyll a+b (2.001 mg/g FW) and
chlorophyll a (1.502 mg/g FW). Maximum value of chlorophyll b
(0.515 mg/g FW) and Carotenoids (0.234 mg/g FW) content
belonged to Kukri. The quantum yield of all varieties decreased significantly when the air temperature increased from 28 °C to 36 °C over the 6 days. However, the recovery of quantum yield was apparent after the 8th day in all varieties. The maximum decrease and recovery in quantum yield was observed in Krichauff. The drought- and heat-tolerant and moderately tolerant wheat genotypes included Trident, Krichauff, Kukri and RAC875. Molineux, Berkut and Excalibur were clustered into the most sensitive and moderately sensitive genotypes. Finally, the results show that there was significant genetic variation among the eight varieties studied under heat and water stress.
Abstract: The purpose of this paper is to perform a multidisciplinary design and analysis (MDA) of honeycomb panels used in satellite structural design. All the analysis is based on clamped-free boundary conditions. In the present work, detailed finite element models for honeycomb panels are developed and analysed. Experimental tests were carried out on a honeycomb specimen, the goal of which was to compare the modal analysis made by the finite element method as well as the existing equivalent approaches. The obtained results show good agreement between the finite element analysis, the equivalent models and the test results; the difference in the first two frequencies is less than 4%, and less than 10% for the third frequency. The results of the equivalent model presented in this analysis are obtained with good accuracy. Moreover, the investigations carried out in this research relate to the honeycomb plate modal analysis under several aspects, including the structural geometric variation, by studying the influence of the dimension parameters on the modal frequency and the variation of the core and skin materials of the honeycomb. The various results obtained in this paper are promising and show that the geometric parameters and the type of material have an effect on the value of the honeycomb plate modal frequency.
Abstract: Automatic segmentation of skin lesions is the first step
towards development of a computer-aided diagnosis of melanoma.
Although numerous segmentation methods have been developed,
few studies have focused on determining the most discriminative
and effective color space for melanoma application. This paper
proposes a novel automatic segmentation algorithm using color space
analysis and clustering-based histogram thresholding, which is able to
determine the optimal color channel for segmentation of skin lesions.
To demonstrate the validity of the algorithm, it is tested on a set of 30
high resolution dermoscopy images and a comprehensive evaluation
of the results is provided, where borders manually drawn by four dermatologists are compared to automated borders detected by the
proposed algorithm. The evaluation is carried out by applying three
previously used metrics of accuracy, sensitivity, and specificity and
a new metric of similarity. Through ROC analysis and ranking the
metrics, it is shown that the best results are obtained with the X and XoYoR color channels, which result in an accuracy of approximately 97%. The proposed method is also compared with two state-of-the-art skin lesion segmentation methods, which demonstrates the effectiveness and superiority of the proposed segmentation method.
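The clustering-based histogram thresholding step can be illustrated in the spirit of Otsu's method, which picks the threshold maximising the between-class variance of the intensity histogram. This is a generic stand-in, not the paper's exact algorithm or colour channels, and the bimodal "lesion vs. skin" intensities below are synthetic.

```python
# Histogram thresholding sketch in the spirit of Otsu's method (a generic
# stand-in for the paper's clustering-based thresholding): choose the
# threshold that maximises the between-class variance of the histogram.
def otsu_threshold(values, bins=256, lo=0.0, hi=1.0):
    hist = [0] * bins
    for v in values:
        hist[min(bins - 1, int((v - lo) / (hi - lo) * bins))] += 1
    total = len(values)
    total_sum = sum(i * h for i, h in enumerate(hist))
    w0 = s0 = 0
    best_t, best_var = 0, -1.0
    for t in range(bins):
        w0 += hist[t]                 # class-0 pixel count
        if w0 == 0:
            continue
        w1 = total - w0               # class-1 pixel count
        if w1 == 0:
            break
        s0 += t * hist[t]
        m0 = s0 / w0                  # class means (in bin units)
        m1 = (total_sum - s0) / w1
        var_between = w0 * w1 * (m0 - m1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return lo + (best_t + 0.5) * (hi - lo) / bins

# Synthetic bimodal "lesion vs skin" intensities in a single colour channel
dark = [0.2 + 0.001 * i for i in range(100)]    # lesion cluster, 0.2-0.3
light = [0.7 + 0.001 * i for i in range(100)]   # skin cluster, 0.7-0.8
t = otsu_threshold(dark + light)
```

Running such a threshold search per colour channel and ranking the resulting borders against the metrics is one plausible reading of the channel-selection step the abstract describes.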
Abstract: A filter is used to remove undesirable frequency information from a dynamic signal. This paper shows that the Z-notch filtering technique can be applied to remove noise nuisance from a machining signal. In machining, the noise components were identified from the sound produced by the operation of the machine components themselves, such as the hydraulic system, the motor and the machine environment. By correlating the noise components with the measured machining signal, the components of interest in the measured machining signal, which were less interfered with by the noise, can be extracted. Thus, the filtered signal is more reliable for analysis in terms of noise content than the unfiltered signal. Significantly, the I-kaz method, which comprises a three-dimensional graphical representation and the I-kaz coefficient Z∞, could differentiate between the filtered and the unfiltered signal. A larger scattering space and a higher value of Z∞ demonstrated that the signal was highly interrupted by noise. This method can be utilised as a proactive tool for evaluating the noise content in a signal. The evaluation of noise content is very important, as is its elimination, especially for machining-operation fault diagnosis purposes. The Z-notch filtering technique was reliable in extracting the noise component from the measured machining signal with high efficiency. Even though the measured signal was exposed to high noise disruption, the signal generated from the interaction between the cutting tool and the workpiece could still be acquired. Therefore, the interruption of noise that could change the original signal features and consequently deteriorate the useful sensory information can be eliminated.
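The idea of removing an identified narrow-band noise component can be sketched with a standard second-order digital notch filter: zeros on the unit circle at the noise frequency, poles just inside to control bandwidth. This is a generic notch, not the paper's Z-notch design, and the 50 Hz "machine hum" and sampling rate are assumptions for illustration.

```python
# Generic second-order IIR notch filter sketch (not the paper's Z-notch
# design): zeros on the unit circle at the notch frequency, poles at
# radius r just inside it; b0 normalises the DC gain to 1.
import math

def notch_coeffs(f_notch, fs, r=0.95):
    w0 = 2 * math.pi * f_notch / fs
    b0 = (1 - 2 * r * math.cos(w0) + r * r) / (2 - 2 * math.cos(w0))
    b = [b0, -2 * b0 * math.cos(w0), b0]       # numerator (zeros)
    a = [1.0, -2 * r * math.cos(w0), r * r]    # denominator (poles)
    return b, a

def iir_filter(b, a, x):
    """Direct-form difference equation for a biquad."""
    y = []
    for n in range(len(x)):
        acc = sum(b[k] * x[n - k] for k in range(3) if n - k >= 0)
        acc -= sum(a[k] * y[n - k] for k in range(1, 3) if n - k >= 0)
        y.append(acc)
    return y

fs = 1000.0                                    # assumed sampling rate (Hz)
b, a = notch_coeffs(50.0, fs)                  # assumed 50 Hz machine hum
t = [n / fs for n in range(2000)]
hum = [math.sin(2 * math.pi * 50 * tt) for tt in t]    # noise component
sig = [math.sin(2 * math.pi * 5 * tt) for tt in t]     # cutting signal
out = iir_filter(b, a, [h + s for h, s in zip(hum, sig)])
```

After the transient decays, `out` retains the 5 Hz component at nearly full amplitude while the 50 Hz hum is suppressed, which is the behaviour the abstract attributes to the Z-notch stage before the I-kaz analysis.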
Abstract: The paper is concerned with relationships between
SSME and ICTs and focuses on the role of Web 2.0 tools in
the service development process. The research presented aims at
exploring how collaborative technologies can support and improve
service processes, highlighting customer centrality and value co-production.
The core idea of the paper is the centrality of user
participation and the collaborative technologies as enabling factors;
Wikipedia is analyzed as an example. The result of such analysis is
the identification and description of a pattern characterising specific services in which users collaborate, by means of web tools, as value co-producers during the service process. The pattern of collaborative
co-production concerning several categories of services including
knowledge based services is then discussed.
Abstract: Moisture is an important consideration in many
aspects ranging from irrigation, soil chemistry, golf course, corrosion
and erosion, road conditions, weather predictions, livestock feed
moisture levels, water seepage etc. Vegetation and crops always
depend more on the moisture available at the root level than on
precipitation occurrence. In this paper, the design of an instrument is discussed which indicates the variation in the moisture content of soil. This is done by measuring the amount of water in the soil through the variation in the capacitance of the soil, with the help of a capacitive sensor. The greatest advantage of the soil moisture sensor is reduced water consumption. The sensor can also be used to set lower and upper thresholds to maintain optimum soil moisture saturation and minimize wilting, contributing to deeper plant root growth, reduced soil run-off/leaching, and less favorable conditions for insects and fungal diseases. The capacitance method is preferred because it provides the absolute amount of water content and can measure water content at any depth.
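The capacitance-to-moisture conversion can be sketched in two steps: estimate the soil's relative permittivity from the ratio of the measured capacitance to the probe's capacitance in air, then map permittivity to volumetric water content with Topp's empirical polynomial (Topp et al., 1980). The probe geometry and the air-capacitance value below are assumptions, not the paper's design.

```python
# Illustrative capacitance -> volumetric water content conversion.
# Probe constants are assumed; Topp's polynomial is the standard
# empirical permittivity-moisture relation for mineral soils.
def relative_permittivity(c_soil_pf, c_air_pf):
    """For a simple parallel-plate probe, C = eps_r * C_air."""
    return c_soil_pf / c_air_pf

def topp_vwc(eps_r):
    """Volumetric water content (m^3/m^3) from Topp's equation."""
    return (-5.3e-2 + 2.92e-2 * eps_r
            - 5.5e-4 * eps_r ** 2 + 4.3e-6 * eps_r ** 3)

c_air = 10.0                                 # pF, probe in air (assumed)
eps = relative_permittivity(200.0, c_air)    # 200 pF reading in moist soil
theta = topp_vwc(eps)                        # ~0.35 m^3/m^3 at eps_r = 20
```

The lower and upper irrigation thresholds mentioned above would then be set directly on `theta` rather than on the raw capacitance reading, making the set-points soil-independent to first order.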
Abstract: MC (Management Control) and IC (Internal Control): what is the relationship? In this empirical study of the definitions of MC and IC, based on the wider considerations of the Internal Control and Management Control terms, attention is focused not only on the financial aspects but also on the soft aspects of the business, such as culture, behaviour, standards and values. The limited considerations of Management Control are focused mainly on the hard, financial aspects of business operation. The definitions of Management Control and Internal Control are often used interchangeably, and the results of this empirical study reveal that Management Control is part of Internal Control; there is no causal link between the two concepts. Based on the interpretation of the respondents, the term Management Control has moved from a broad term to a more limited term covering the soft aspects of influencing behaviour, performance measurement, incentives and culture. This paper is an exploratory study based on qualitative research and on a qualitative matrix-method analysis of the thematic definitions of the terms Management Control and Internal Control.