Abstract: Starting with an analysis of the financial and
operational indicators that can be found in the specialised literature,
this study aims to contribute to improvements in the performance
measurement systems used when the unit of analysis is the
manufacturing plant. To this end, a search was conducted in the
highest-impact journals in Production and Operations Management
and Management Accounting, with the aim of determining the financial
and operational indicators used to evaluate performance when
Advanced Production Practices have been implemented, more
specifically when the practices implemented are Total Quality
Management, JIT/Lean Manufacturing and Total Productive
Maintenance. This has enabled us to obtain a classification of the two
types of indicators based on how much each is used. For the financial
indicators we have also prepared a proposal that can be adapted to
manufacturing plants' accounting features. In the near future we will
propose a model that links practices implementation with financial
and operational indicators, and the latter two with each other. We aim
to test this model empirically with the data obtained in the High
Performance Manufacturing Project.
Abstract: On July 1, 2007, the Taiwan Stock Exchange (TWSE)
added a new "Financial reference database" to its Market
Observation Post System (MOPS) for investors to use as an
investment reference. This database serves as a warning based on
the publicly disclosed financial information of listed companies
and originally comprised eight indicators. In this paper, the
indicators provided by this database are applied to a financial
crisis early-warning model to verify whether they forecast
corporate financial crises with a high accuracy rate, consistent
with the positive results reported by domestic and foreign
scholars. A logistic regression model is used to build the
early-warning model: the first model excludes the back-testing
conditions, while the second model includes them. Companies that
experienced a financial crisis were taken as research samples, and
sample data from the T-1 and T-2 periods before the crisis
occurred were used for the empirical analysis. The results show
that, among the indicators provided by this database, the debt
ratio and net value per share are the best forecasting variables.
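As a hedged sketch of the logistic-regression early-warning approach described above, the example below fits such a model on synthetic data, using the abstract's two best forecasting variables (debt ratio and net value per share) as hypothetical features; all data and coefficients here are invented for illustration, not the paper's sample.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200
# Hypothetical features: debt ratio and net value per share
debt_ratio = rng.uniform(0.1, 0.9, n)
net_per_share = rng.uniform(-5.0, 30.0, n)

# Synthetic label: crises made more likely by high debt and low net value
logit = 6.0 * (debt_ratio - 0.5) - 0.2 * net_per_share
crisis = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

X = np.column_stack([debt_ratio, net_per_share])
model = LogisticRegression().fit(X, crisis)
print("in-sample accuracy:", model.score(X, crisis))
```

On real T-1 and T-2 panel data the same `fit`/`predict_proba` calls would apply; the fitted coefficients then indicate each indicator's direction of effect.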
Abstract: In this paper, we first introduce the stable distribution, the stable process and their characteristics. The α-stable distribution family has received great interest in the last decade due to its success in modeling data that are too impulsive to be accommodated by the Gaussian distribution. In the second part, we present major applications of the α-stable distribution in telecommunications, in computer science (such as network delays and signal processing) and in financial markets. At the end, we focus on using the stable distribution to estimate measures of risk in stock markets and show simulated data obtained with statistical software.
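A brief sketch of simulating heavy-tailed α-stable data of the kind discussed above; SciPy's `levy_stable` implements this family, and the parameter values here are arbitrary illustrations.

```python
import numpy as np
from scipy.stats import levy_stable

# alpha < 2 gives heavy tails; alpha = 2 recovers the Gaussian case.
# beta = 0 gives a symmetric distribution. Values here are arbitrary.
alpha, beta = 1.7, 0.0
samples = levy_stable.rvs(alpha, beta, size=5000, random_state=42)

# Heavy tails show up as extreme observations well beyond a Gaussian range
print("min:", samples.min(), "max:", samples.max())
```

Tail-sensitive risk measures (e.g. empirical quantiles for Value-at-Risk) can then be computed directly on such samples.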
Abstract: Data mining has been integrated into application systems to enhance the quality of the decision-making process. This study focuses on the integration of data mining technology and Knowledge Management Systems (KMS), due to the ability of data mining technology to create useful knowledge from large volumes of data, while KMS vitally supports the creation and use of knowledge. The integration of data mining technology and KMS is widely used in business for enhancing and sustaining organizational performance. However, there is a lack of studies applying data mining technology and KMS in the education sector, particularly to students' academic performance, even though this performance reflects that of the Institution of Higher Learning (IHL). Realizing its importance, this study seeks to integrate data mining technology and KMS to promote effective management of knowledge within IHLs. Several concepts from the literature are adapted to propose a new integrative data mining technology and KMS framework for an IHL.
Abstract: Database management systems that integrate user preferences promise better solutions for personalization, greater flexibility and higher quality of query responses. This paper presents tentative work that studies and investigates approaches to expressing user preferences in queries. We sketch an extension of the capabilities of the SQLf language, which uses fuzzy set theory to define user preferences. Two essential points are considered: the first concerns the expression of user preferences in SQLf through a set of so-called commensurable fuzzy predicates; the second concerns the bipolar way in which these user preferences are expressed, as mandatory and/or optional preferences.
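As an informal illustration of the fuzzy predicates that SQLf-style queries evaluate, the sketch below applies a hypothetical fuzzy predicate "young" to a toy relation; the trapezoidal membership function and all data are invented for illustration and are not the paper's actual predicates.

```python
def trapezoid(x, a, b, c, d):
    """Trapezoidal membership: 0 below a, rises to 1 on [b, c], falls to 0 at d."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (d - x) / (d - c)

# Hypothetical fuzzy predicate "young" over an age attribute
young = lambda age: trapezoid(age, 0, 0, 25, 40)

employees = [("Ana", 22), ("Bob", 35), ("Eve", 55)]
# A fuzzy selection returns rows with satisfaction degrees, not a crisp subset
result = [(name, young(age)) for name, age in employees if young(age) > 0]
print(result)
```

The satisfaction degree attached to each row is what allows ranking answers by how well they match the preference, rather than filtering them in or out crisply.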
Abstract: It has become crucial over the years for nations to
improve their credit scoring methods and techniques in light of the
increasing volatility of the global economy. Statistical methods or
tools have been the favoured means for this; however, artificial
intelligence or soft computing based techniques are becoming
increasingly preferred due to their proficient and precise nature and
relative simplicity. This work presents a comparison between Support
Vector Machines and Artificial Neural Networks, two popular soft
computing models, when applied to credit scoring. Among the
different criteria that can be used for comparison, accuracy,
computational complexity and processing time were selected to
evaluate both models. Furthermore, the German credit scoring
dataset, a real-world dataset, is used to train and test
both developed models. Experimental results obtained from our study
suggest that although both soft computing models could be used with
a high degree of accuracy, Artificial Neural Networks deliver better
results than Support Vector Machines.
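A minimal sketch of the comparison above using standard tools; since the German credit dataset is not bundled with common libraries, a synthetic stand-in of similar size is generated here, and the model hyperparameters are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for the German credit data (1000 rows, binary label)
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Both models benefit from standardized inputs
scaler = StandardScaler().fit(X_tr)
X_tr, X_te = scaler.transform(X_tr), scaler.transform(X_te)

svm = SVC(kernel="rbf").fit(X_tr, y_tr)
ann = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000,
                    random_state=0).fit(X_tr, y_tr)

print("SVM accuracy:", svm.score(X_te, y_te))
print("ANN accuracy:", ann.score(X_te, y_te))
```

Wrapping the `fit` calls in timers would add the processing-time criterion the abstract mentions.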
Abstract: For several high-speed networks, providing resilience against failures is an essential requirement. A main feature in designing next-generation optical networks is protecting and restoring high-capacity WDM networks from failures. Quick detection, identification and restoration make networks more robust and reliable even though failures cannot be avoided. Hence, it is necessary to develop fast, efficient and dependable fault localization or detection mechanisms. In this paper we propose a new fault localization algorithm for WDM networks which can identify the location of a failure on a failed lightpath. Our algorithm detects the failed connection and then attempts to reroute the data stream through an alternate path. In addition, we develop an algorithm to analyze the information in the alarms generated by the components of an optical network in the presence of a fault. It uses alarm correlation to reduce the list of suspected components shown to the network operators. Our simulation results show that the proposed algorithms achieve lower blocking probability and delay while obtaining higher throughput.
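The rerouting step above can be sketched generically as a search for a path that avoids the failed link; this is a plain BFS illustration on a toy topology, not the paper's specific WDM algorithm.

```python
from collections import deque

def alternate_path(adj, src, dst, failed_link):
    """BFS for a path from src to dst that avoids the failed link (both directions)."""
    bad = {failed_link, failed_link[::-1]}
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == dst:
            return path
        for nxt in adj[node]:
            if (node, nxt) not in bad and nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no alternate route: the connection is blocked

# Toy 5-node optical topology given as adjacency lists
adj = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"],
       "D": ["B", "C", "E"], "E": ["D"]}
print(alternate_path(adj, "A", "E", ("A", "B")))  # → ['A', 'C', 'D', 'E']
```

Returning `None` when no detour exists corresponds to a blocked connection, which is what the blocking-probability metric in the simulations counts.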
Abstract: Environmental micro-organisms include a large number of taxa, and some species that are generally considered non-pathogenic can represent a risk under certain conditions, especially for elderly people and immunocompromised individuals. Chemotaxonomic identification techniques are powerful tools for environmental micro-organisms, and cellular fatty acid methyl ester (FAME) content provides an effective fingerprinting identification technique. A system based on an unsupervised artificial neural network (ANN) was set up using the fatty acid profiles of standard bacterial strains, obtained by gas chromatography, as learning data. We analysed 45 certified strains belonging to the Acinetobacter, Aeromonas, Alcaligenes, Aquaspirillum, Arthrobacter, Bacillus, Brevundimonas, Enterobacter, Flavobacterium, Micrococcus, Pseudomonas, Serratia, Shewanella and Vibrio genera. A set of 79 bacteria isolated from a drinking water line (AMGA, the major water supply system in Genoa) was used as an identification example and compared to the standard MIDI method. The resulting ANN output map was found to be a very powerful tool for identifying these fresh isolates.
Abstract: In the last 15 years, a number of methods have been proposed for forecasting based on fuzzy time series. Most of the fuzzy time series methods are presented for forecasting enrollments at the University of Alabama. However, the forecasting accuracy rates of the existing methods are not good enough. In this paper, we compare our proposed new method of fuzzy time series forecasting with the existing methods. Our method is based on frequency-density-based partitioning of the historical enrollment data. The proposed method belongs to the kth-order and time-variant methods. The proposed method achieves a better forecasting accuracy rate for enrollments than the existing methods.
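One simple reading of frequency-density-based partitioning is to split the universe of discourse so that denser regions of the data receive narrower intervals, which a quantile split approximates; the sketch below is that interpretation on illustrative enrollment-like values, not the paper's exact partitioning rule.

```python
import numpy as np

def frequency_density_partition(data, n_intervals):
    """Split the universe of discourse so each interval holds a similar
    number of observations (denser data -> narrower intervals)."""
    qs = np.linspace(0, 1, n_intervals + 1)
    edges = np.quantile(data, qs)
    return list(zip(edges[:-1], edges[1:]))

# Illustrative enrollment-like series
enrollments = np.array([13055, 13563, 13867, 14696, 15460, 15311,
                        15603, 15861, 16807, 16919, 16388, 15433])
for lo, hi in frequency_density_partition(enrollments, 4):
    print(f"[{lo:.0f}, {hi:.0f}]")
```

Each interval then defines one fuzzy set over which the historical series is fuzzified before the forecasting rules are derived.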
Abstract: Pipeline infrastructures normally represent a high investment cost, and pipelines must be kept free from risks that could cause environmental hazards and potential threats to personnel safety. Pipeline integrity monitoring and management therefore become crucial to providing unimpeded transportation and avoiding unnecessary production deferment. Proper cleaning and inspection is thus the key to safe and reliable pipeline operation; it plays an important role in pipeline integrity management programs and has become a standard industry procedure. In view of this, understanding the motion (dynamic behavior), prediction and control of PIG speed is important in executing pigging operations, as it offers significant benefits such as estimating PIG arrival time at the receiving station, planning suitable pigging operations, and improving the efficiency of pigging tasks. The objective of this paper is to review recent developments in speed control systems for pipeline PIGs. The review serves as a quick industrial reference on recent developments in pipeline PIG speed control systems, may encourage others to add to or update the list in the future, leading to a knowledge base, and should attract the active interest of others in sharing their viewpoints.
Abstract: In face recognition, feature extraction techniques
attempt to find an appropriate representation of the data. However,
when the feature dimension is larger than the sample size,
performance degrades. Hence, we propose a method called
Normalization Discriminant Independent Component Analysis
(NDICA). The input data are regularized to obtain the most
reliable features and are then processed using Independent
Component Analysis (ICA). The proposed method is evaluated on
three face databases, Olivetti Research Ltd (ORL), Face Recognition
Technology (FERET) and Face Recognition Grand Challenge
(FRGC). NDICA showed its effectiveness compared with other
unsupervised and supervised techniques.
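The regularize-then-ICA pipeline sketched in the abstract can be loosely illustrated with standard tools; this uses plain standardization as the normalization step and random data as a stand-in for face images, so it is not the authors' NDICA implementation.

```python
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Stand-in for flattened face images: 40 samples, 1024-pixel vectors
X = rng.normal(size=(40, 1024))

# Normalization step (a simple form of regularizing the input), then ICA
X_norm = StandardScaler().fit_transform(X)
ica = FastICA(n_components=10, random_state=0, max_iter=500)
features = ica.fit_transform(X_norm)
print(features.shape)  # 40 samples reduced to 10 independent components
```

The extracted components would then feed a classifier; here the point is only that ICA yields far fewer features than the original pixel dimension, addressing the small-sample-size issue the abstract raises.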
Abstract: Several combinations of the preprocessing algorithms,
feature selection techniques and classifiers can be applied to the data
classification tasks. This study introduces a new, accurate classifier;
the proposed classifier consists of four components: signal-to-
noise ratio as a feature selection technique, a support vector machine,
a Bayesian neural network, and AdaBoost as an ensemble algorithm.
To verify the effectiveness of the proposed classifier, seven well-
known classifiers are applied to four datasets. The experiments show
that using the suggested classifier enhances the classification rates for
all datasets.
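A generic stand-in for the feature-selection-plus-boosting pattern above: ANOVA F-scores replace the signal-to-noise ranking, and default decision stumps replace the SVM and Bayesian neural network base learners, so this only illustrates the pipeline shape, not the paper's exact classifier.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Synthetic dataset: 30 features, only 5 informative
X, y = make_classification(n_samples=500, n_features=30,
                           n_informative=5, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

# Rank features, keep the top 10, then boost an ensemble of weak learners
clf = make_pipeline(SelectKBest(f_classif, k=10),
                    AdaBoostClassifier(n_estimators=100, random_state=1))
clf.fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))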
Abstract: The classification of the protein structure is commonly
not performed for the whole protein but for structural domains, i.e.,
compact functional units preserved during evolution. Hence, a first
step to a protein structure classification is the separation of the
protein into its domains. We approach the problem of protein domain
identification by proposing a novel graph theoretical algorithm. We
represent the protein structure as an undirected, unweighted and
unlabeled graph whose nodes correspond to the secondary structure
elements of the protein. This graph is called the protein graph. The
domains are then identified as partitions of the graph corresponding
to vertex sets obtained by the maximization of an objective function,
which mutually maximizes the cycle distributions found in the
partitions of the graph. Our algorithm does not utilize any other kind
of information besides the cycle distribution to find the partitions. If
a partition is found, the algorithm is iteratively applied to each of
the resulting subgraphs. As stop criterion, we calculate numerically
a significance level which indicates the stability of the predicted
partition against a random rewiring of the protein graph. Hence,
our algorithm automatically terminates its iterative application. We
present results for one- and two-domain proteins and compare our
results with the domains manually assigned in the SCOP database;
differences are discussed.
Abstract: Volume rendering is widely used in medical CT image
visualization. Applying 3D image visualization to diagnostic
applications can require accurate volume rendering with high
resolution. Interpolation is important in medical image processing
applications such as image compression or volume resampling.
However, it can distort the original image data through edge
blurring or blocking effects when image enhancement procedures
are applied. In this paper, we propose an adaptive tension control
method exploiting gradient information to achieve high resolution
medical image enhancement in volume visualization, where restored
images are as similar to the original images as possible. The
experimental results show that the proposed method can improve
image quality associated with the adaptive tension control efficacy.
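One common way to realize tension-controlled interpolation is a cardinal spline whose tension parameter is driven by the local gradient, so that sharp edges are interpolated more tightly and overshoot less; the gradient-to-tension rule below is a hypothetical illustration, not the paper's method.

```python
import numpy as np

def cardinal_spline(p0, p1, p2, p3, t, tension):
    """Cubic Hermite segment between p1 and p2; higher tension tightens
    the curve toward linear interpolation (less overshoot at edges)."""
    m1 = (1 - tension) * (p2 - p0) / 2
    m2 = (1 - tension) * (p3 - p1) / 2
    h00 = 2 * t**3 - 3 * t**2 + 1
    h10 = t**3 - 2 * t**2 + t
    h01 = -2 * t**3 + 3 * t**2
    h11 = t**3 - t**2
    return h00 * p1 + h10 * m1 + h01 * p2 + h11 * m2

samples = np.array([10.0, 12.0, 80.0, 120.0])  # sharp edge between 12 and 80
grad = abs(samples[2] - samples[1])
tension = min(1.0, grad / 100.0)  # hypothetical gradient-driven tension rule
mid_adaptive = cardinal_spline(*samples, t=0.5, tension=tension)
mid_plain = cardinal_spline(*samples, t=0.5, tension=0.0)
print(mid_plain, mid_adaptive)  # adaptive result stays closer to the edge midpoint
```

With zero tension this reduces to a Catmull-Rom spline; as tension rises toward 1 the tangents shrink and the interpolant approaches linear interpolation, which is the behavior exploited at strong edges.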
Abstract: The experiment was conducted to study the effect of
rearing systems on fatty acid composition and cholesterol content of
Thai indigenous chicken meat. Three hundred and sixty chicks were
allocated to 2 different rearing systems: conventional, housing in an
indoor pen (5 birds/m2); free-range, housing in an indoor pen (5
birds/m2) with access to a grass paddock (1 bird/m2) from 8 wk of age
until slaughter. All birds were provided with the same diet during the
experimental period. At 16 wk of age, 24 birds per group were
slaughtered to evaluate the fatty acid composition and cholesterol
content of breast and thigh meat. The results showed that the
proportion of SFA, MUFA and PUFA in breast and thigh meat were
not different among groups (P>0.05). However, the proportion of n-3
fatty acids was higher and the ratio of n-6 to n-3 fatty acids was lower
in the free-range system than in the conventional system (P<0.05),
whereas cholesterol content did not differ among groups (P>0.05).
The data indicate that the free-range system could increase the
proportion of n-3 fatty acids but has no effect on cholesterol
content in Thai indigenous chicken meat.
Abstract: The present work was conducted for the synthesis of
nano-sized zerovalent iron (nZVI) and the removal of hexavalent
chromium (Cr(VI)), a highly toxic pollutant, using these
nanoparticles. Batch experiments were performed to investigate the
effects of Cr(VI) concentration, nZVI concentration, solution pH
and contact time on the removal efficiency of Cr(VI). nZVI was
synthesized by the reduction of ferric chloride using sodium
borohydride. SEM and XRD examinations were applied to determine
the particle size and characterize the produced nanoparticles. The
results showed that the removal efficiency decreased with
increasing Cr(VI) concentration and solution pH, and increased with
adsorbent dosage and contact time.
The Langmuir and Freundlich isotherm models were applied to the
adsorption equilibrium data, and the Langmuir isotherm model
fitted the data well. Nanoscale ZVI presented an outstanding ability to
remove Cr(VI) due to its high surface area, small particle size and high
inherent activity.
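Fitting the Langmuir isotherm q = q_max·K·C/(1 + K·C) to equilibrium data can be sketched with a standard nonlinear least-squares fit; the equilibrium concentrations and uptakes below are invented for illustration, not the paper's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(C, qmax, K):
    """Langmuir isotherm: q = qmax * K * C / (1 + K * C)."""
    return qmax * K * C / (1 + K * C)

# Hypothetical equilibrium data (Ce in mg/L, qe in mg/g) with small noise
Ce = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0])
qe = langmuir(Ce, 80.0, 0.4) + np.array([0.5, -0.3, 0.2, -0.4, 0.3, -0.2])

(qmax_fit, K_fit), _ = curve_fit(langmuir, Ce, qe, p0=(50.0, 0.1))
print(f"qmax = {qmax_fit:.1f} mg/g, K = {K_fit:.3f} L/mg")
```

Comparing the resulting residuals (or R²) against a Freundlich fit of the same data is how the better-fitting model is selected.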
Abstract: Landslide susceptibility map delineates the potential
zones for landslide occurrence. Previous works have applied
multivariate methods and neural networks for mapping landslide
susceptibility. This study proposed a new approach to integrate
decision tree model and spatial cluster statistic for assessing landslide
susceptibility spatially. A total of 2057 landslide cells were digitized
for developing the landslide decision tree model. The relationships of
landslides and instability factors were explicitly represented by using
tree graphs in the model. The local Getis-Ord statistics were used to
cluster cells with high landslide probability. The analytic result from
the local Getis-Ord statistics was classified to create a map of landslide
susceptibility zones. The map was validated using new landslide data
with 482 cells. Results of validation show an accuracy rate of 86.1% in
predicting new landslide occurrence. This indicates that the proposed
approach is useful for improving landslide susceptibility mapping.
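The local Getis-Ord Gi* statistic used above scores each cell by how strongly high values cluster in its neighborhood; the sketch below implements the standard Gi* z-score on a toy one-dimensional "map", with all data invented for illustration.

```python
import numpy as np

def getis_ord_gstar(x, W):
    """Local Getis-Ord Gi* z-scores. x: values (n,); W: binary spatial
    weights (n, n) whose neighborhoods include the cell itself."""
    n = len(x)
    xbar, s = x.mean(), x.std()
    wi = W.sum(axis=1)                       # sum of weights per cell
    num = W @ x - xbar * wi
    den = s * np.sqrt((n * (W ** 2).sum(axis=1) - wi ** 2) / (n - 1))
    return num / den

# Toy 1-D "map": a cluster of high landslide probabilities in the middle
x = np.array([0.1, 0.1, 0.1, 0.9, 0.9, 0.9, 0.1, 0.1, 0.1])
n = len(x)
W = np.zeros((n, n))
for i in range(n):                           # neighborhood: self plus adjacent cells
    for j in range(max(0, i - 1), min(n, i + 2)):
        W[i, j] = 1.0

z = getis_ord_gstar(x, W)
print(np.round(z, 2))  # the z-score peaks at the center of the high cluster
```

Classifying these z-scores into ranges (e.g. above 1.96 for significant hot spots) yields the susceptibility-zone map described in the abstract.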
Abstract: In recent years, computers have increased their computing capacity, and the networks interconnecting these machines have improved to the point of reaching today's high data transfer rates. Programs that try to take advantage of these new technologies cannot be written using traditional programming techniques, since most algorithms were designed to be executed on a single processor in a non-concurrent form, instead of being executed concurrently on a set of processors working and communicating through a network. This paper presents the ongoing development of a new system for the reconfiguration of computer clusters that takes these new technologies into account.
Abstract: This paper presents dynamic voltage collapse prediction on an actual power system using support vector machines.
Dynamic voltage collapse prediction is first determined based on the PTSI calculated from information in the dynamic simulation output. Simulations were carried out on a practical 87-bus test system, considering load increase as the contingency. The data collected from the time-domain simulation are then used as input to the SVM, in which support vector regression is used as a predictor to determine the
dynamic voltage collapse indices of the power system. To reduce training time and improve the accuracy of the SVM, the kernel function type and kernel parameters are considered. To verify the
effectiveness of the proposed SVM method, its performance is compared with a multilayer perceptron neural network (MLPNN). Studies show that the SVM gives faster and more accurate results for dynamic voltage collapse prediction than the MLPNN.
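The regression step above can be sketched with a standard support vector regressor; the features below are random stand-ins for time-domain simulation quantities and the target is a hypothetical collapse-index function, so only the modeling pattern (RBF kernel, tunable C/gamma/epsilon) mirrors the abstract.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR

rng = np.random.default_rng(0)
# Stand-in for simulation-derived features (e.g. bus voltages, loading levels)
X = rng.uniform(0.0, 1.0, size=(300, 4))
# Hypothetical collapse index: grows nonlinearly with the first two features
y = np.tanh(2 * X[:, 0] + X[:, 1] ** 2) + 0.01 * rng.normal(size=300)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
# Kernel type and parameters (C, gamma, epsilon) drive accuracy and training time
svr = SVR(kernel="rbf", C=10.0, gamma=0.5, epsilon=0.01).fit(X_tr, y_tr)
print("R^2 on held-out data:", svr.score(X_te, y_te))
```

Sweeping the kernel parameters with a grid search would reproduce the tuning step the abstract describes for balancing training time against accuracy.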
Abstract: The purpose of this paper was to study motivation
factors affecting job performance effectiveness. This paper drew
upon data collected from the Internal Audit Staff of the Internal
Audit Line of the Head Office of Krung Thai Public Company Limited.
Statistics used included frequency, percentage, mean and standard
deviation, t-test, and one-way ANOVA test. The findings revealed that
the majority of the respondents were female, 46 years of age or
over, married and living together, held a bachelor's degree, and had an
average monthly income over 70,001 Baht. The majority of
respondents had over 15 years of work experience. They generally
had high working motivation as well as high job performance
effectiveness.
The hypothesis testing disclosed that employees with different
working status had different levels of job performance effectiveness at
the 0.01 level of significance. Working motivation factors affected
job performance in the same direction at a high level. Individual
working motivation factors included work completion, recognition,
work progression, work characteristics, opportunity,
responsibility, management policy, supervision, relationship with
their superior, relationship with co-workers, working position,
working stability, safety, privacy, working conditions, and payment.
All of these factors related to job performance effectiveness in the
same direction at a medium level.