Abstract: Although services play a crucial role in the economy,
service productivity has not gained as much attention as productivity
management in manufacturing. This paper presents key findings from the
literature and practice. Based on an initial definition of complex services,
seven productivity concepts are briefly presented and assessed against
relevant criteria specific to complex services. Following these findings, a
complex service productivity model is proposed. The novel model comprises
all specific dimensions of service provision from both the
provider's and the customer's perspectives. A clear assignment of the
identified value drivers and the relationships between them is presented.
In order to verify the conceptual service productivity model a case
study from a project engineering department of a chemical plant
development and construction company is presented.
Abstract: In this paper, biannual time series data on unemployment rates (from the Labour Force Survey) are expanded to quarterly rates and linked to quarterly unemployment rates (from the Quarterly Labour Force Survey). The resultant linked series and the consumer price index (CPI) series are examined using Johansen's cointegration approach and vector error correction modeling. The study finds that both series are integrated of order one and are cointegrated. A statistically significant cointegrating relationship is found to exist between the time series of unemployment rates and the CPI. Given this significant relationship, the study models it using Vector Error Correction Models (VECM), one with a restriction on the deterministic term and the other with no restriction.
A formal statistical confirmation of the existence of a unique linear and lagged relationship between inflation and unemployment for the period between September 2000 and June 2011 is presented. For the given period, the CPI was found to be an unbiased predictor of the unemployment rate. This relationship can be explored further for the development of appropriate forecasting models incorporating other study variables.
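The Johansen/VECM machinery itself is not reproduced here, but the error-correction idea it rests on can be sketched on synthetic data. Everything below is hypothetical: the series stand in for CPI and unemployment, and a single-equation Engle-Granger-style regression replaces the full system estimation.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
x = np.cumsum(rng.normal(size=n))            # I(1) random walk (stand-in for CPI)
y = 0.5 * x + rng.normal(scale=0.3, size=n)  # cointegrated with x (stand-in for unemployment)

# Step 1: estimate the long-run relation y = beta * x + z by OLS
beta = np.polyfit(x, y, 1)[0]
z = y - beta * x                             # equilibrium error

# Step 2: error-correction regression  dy_t = a * z_{t-1} + b * dx_t + e_t
dy, dx, zlag = np.diff(y), np.diff(x), z[:-1]
A = np.column_stack([zlag, dx])
a, b = np.linalg.lstsq(A, dy, rcond=None)[0]
# a < 0: deviations from the long-run equilibrium are corrected over time
```

A negative, significant error-correction coefficient `a` is what a cointegrating relationship implies; the paper's two-series VECM estimates the analogous adjustment coefficients jointly for both equations.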
Abstract: Proper management of residues originating from
industrial activities is considered one of the serious challenges
faced by industrial societies due to their potential hazards to the
environment. Common disposal methods for industrial solid wastes
(ISWs) encompass various combinations of individual management
options, i.e. recycling, incineration, composting, and sanitary
landfilling. The procedure used to evaluate and nominate the
best practical methods should be based on environmental, technical,
economic, and social assessments. In this paper, an environmental-technical
assessment model is developed using the analytic network
process (ANP) to facilitate decision making for ISWs
generated in Gilan province, Iran. Using the results of surveys
performed on industrial units located in Gilan, the various groups of
solid wastes in the research area were characterized, and four
different ISW management scenarios were studied. The evaluation
process was conducted using the above-mentioned model in the
Super Decisions software (version 2.0.8) environment. The results
indicate that the best ISW management scenario for Gilan province
consists of recycling the metal industries' residues, composting the
putrescible portion of the ISWs, combusting paper, wood, fabric and
polymeric wastes with energy recovery in the incineration
plant, and finally landfilling the rest of the waste stream together
with materials rejected from the recycling and compost production plants
and ashes from the incineration unit.
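The core ANP computation behind such an evaluation can be illustrated in a few lines. The weights below are invented, not the paper's survey-derived values: a column-stochastic weighted supermatrix is raised to a high power until its columns converge to the limit priorities.

```python
import numpy as np

# Hypothetical weighted supermatrix (column-stochastic) over three
# management options: recycling, composting, landfilling (invented numbers).
W = np.array([[0.0, 0.6, 0.3],
              [0.5, 0.0, 0.7],
              [0.5, 0.4, 0.0]])

# ANP limit supermatrix: raise W to successive powers until columns converge.
L = np.linalg.matrix_power(W, 50)
priorities = L[:, 0]    # every column converges to the same priority vector
```

In a full ANP model, as built in Super Decisions, the supermatrix also carries criteria clusters and interdependencies; the limit-matrix step shown here is the same.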
Abstract: The zero-inflated strict arcsine model is a newly developed
model that has been found appropriate for modeling overdispersed
count data. In this study, we extend the zero-inflated strict arcsine model
to a zero-inflated strict arcsine regression model by taking into
consideration the extra variability caused by excess zeros and
covariates in count data. The maximum likelihood estimation method is
used to estimate the parameters of this zero-inflated strict arcsine
regression model.
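The strict arcsine pmf is not given in the abstract, so the sketch below illustrates the same zero-inflated likelihood structure and its maximum likelihood estimation using a zero-inflated Poisson as a stand-in distribution, on invented data:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

rng = np.random.default_rng(1)
n, pi_true, lam_true = 2000, 0.3, 2.0
zeros = rng.random(n) < pi_true                 # structural (excess) zeros
y = np.where(zeros, 0, rng.poisson(lam_true, n))

def nll(theta):
    # logit/log parameterization keeps pi in (0, 1) and lam > 0
    pi = 1 / (1 + np.exp(-theta[0]))
    lam = np.exp(theta[1])
    logp_count = -lam + y * np.log(lam) - gammaln(y + 1)   # Poisson log-pmf
    ll = np.where(y == 0,
                  np.log(pi + (1 - pi) * np.exp(-lam)),    # zero: mixture
                  np.log(1 - pi) + logp_count)             # positive count
    return -ll.sum()

res = minimize(nll, x0=[0.0, 0.0], method="Nelder-Mead")
pi_hat = 1 / (1 + np.exp(-res.x[0]))
lam_hat = np.exp(res.x[1])
```

For the regression extension, `lam` would become `exp(X @ beta)` per observation; the mixture structure of the likelihood is unchanged.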
Abstract: This study proposes a conceptual model and
empirically tests the relationships between the service quality
dimensions delivered by librarians (i.e. tangibles, responsiveness,
assurance, reliability and empathy) and a dependent variable
(customer satisfaction) regarding library services. The SERVQUAL
instrument was administered to 100 respondents, comprising staff and
students at a public higher learning institution in the Federal
Territory of Labuan, Malaysia.
They were public university library users. Results revealed that all
service quality dimensions tested were significant and influenced
customer satisfaction of visitors to a public university library.
Assurance is the most important factor that influences customer
satisfaction with the services rendered by the librarian. It is
imperative for the library management to take note that the top five
service attributes that gained the greatest attention from the library
visitors' perspective include employees' willingness to help customers,
availability of customer representatives online to respond to
queries, library staff actively and promptly providing services, clear
signs in the building, and friendly and courteous library staff.
This study provides valuable results concerning the determinants of
the service quality and customer satisfaction of public university
library services from the users' perspective.
Abstract: The so-called all-pass filter circuits are commonly
used in the fields of signal processing, control and measurement.
When connected to capacitive loads, these circuits tend to lose their
stability; therefore, an elaborate analysis of their dynamic behavior is
necessary. Compensation methods intended to increase the
stability of such circuits are discussed in this paper, with the so-called
lead-lag compensation technique treated in detail. For
the dynamic modeling, a two-port network model of the all-pass filter
is derived. The results of the model analysis show that
effective lead-lag compensation can be achieved solely by
optimizing the circuit parameters; therefore, no
additional electrical components are needed to fulfill the stability
requirement.
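The paper's two-port model is not reproduced in the abstract, but the kind of pole-location check that underlies a lead-lag design can be sketched numerically. The loop transfer function and every corner frequency below are invented for illustration, not taken from the circuit studied.

```python
import numpy as np

# Illustrative loop: integrator-like open loop with a parasitic pole from a
# capacitive load, stabilized by a lead-lag network (all values hypothetical).
# Open loop L(s) = K / (s * (1 + s/wp)); lead-lag C(s) = (1 + s/wz) / (1 + s/wpl)
K, wp = 1e5, 1e4
wz, wpl = 5e3, 5e5   # zero below crossover, pole well above: phase lead at crossover

# Characteristic polynomial of 1 + L(s)*C(s) = 0, coefficients highest power first
num = np.array([K / wz, K])                      # K * (1 + s/wz)
den = np.polymul([1 / wp, 1, 0], [1 / wpl, 1])   # s * (1 + s/wp) * (1 + s/wpl)
char = np.polyadd(den, num)
poles = np.roots(char)
stable = np.all(poles.real < 0)                  # all closed-loop poles in LHP
```

Varying `wz` and `wpl` while re-checking `stable` mirrors the parameter-optimization route to compensation described in the abstract.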
Abstract: This paper presents a computational methodology
based on matrix operations for a computer-based solution to the
problem of performance analysis of software reliability models
(SRMs). A set of seven comparison criteria has been formulated to
rank various non-homogeneous Poisson process software reliability
models proposed during the past 30 years for estimating software
reliability measures such as the number of remaining faults, the software
failure rate, and software reliability. Selection of the optimal SRM for
use in a particular case has been an area of interest for researchers in
the field of software reliability. Tools and techniques for software
reliability model selection found in the literature cannot be used with
a high level of confidence as they rely on a limited number of model
selection criteria. A real data set from a medium-size software project
taken from published papers is used to demonstrate the matrix method.
The result of this study is a ranking of SRMs based on the
permanent value of the criteria matrix formed for each model from
the comparison criteria. The software reliability model with the
highest permanent value is ranked first, and so on.
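The permanent-based ranking step can be sketched directly. The criteria values below are invented; the permanent is computed with Ryser's inclusion-exclusion formula, which is exponential in the matrix size but fine for small criteria matrices.

```python
from itertools import combinations

def permanent(M):
    """Matrix permanent via Ryser's inclusion-exclusion formula."""
    n = len(M)
    total = 0.0
    for r in range(1, n + 1):
        sign = (-1) ** (n - r)
        for cols in combinations(range(n), r):
            prod = 1.0
            for row in M:
                prod *= sum(row[c] for c in cols)
            total += sign * prod
    return total

# Hypothetical normalized criteria matrices for two SRMs: the model whose
# criteria matrix has the larger permanent is ranked first.
srm_a = [[0.9, 0.2, 0.4], [0.1, 0.8, 0.3], [0.5, 0.3, 0.7]]
srm_b = [[0.4, 0.3, 0.2], [0.2, 0.5, 0.4], [0.3, 0.1, 0.6]]
ranking = sorted(["A", "B"],
                 key=lambda m: -permanent(srm_a if m == "A" else srm_b))
```

With seven criteria per model, the matrices stay small enough that this exact computation is immediate.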
Abstract: A state-of-the-art Speaker Identification (SI) system requires a robust feature extraction unit followed by a speaker modeling scheme for generalized representation of these features. Over the years, Mel-Frequency Cepstral Coefficients (MFCC), modeled on the human auditory system, have been used as a standard acoustic feature set for SI applications. However, due to the structure of its filter bank, MFCC captures vocal tract characteristics more effectively in the lower frequency regions. This paper proposes a new set of features using a complementary filter bank structure which improves the distinguishability of speaker-specific cues present in the higher frequency zone. Unlike high-level features that are difficult to extract, the proposed feature set involves little computational burden during the extraction process. When combined with MFCC via a parallel implementation of speaker models, the proposed feature set outperforms baseline MFCC significantly. This proposition is validated by experiments conducted on two different kinds of public databases, namely YOHO (microphone speech) and POLYCOST (telephone speech), with Gaussian Mixture Models (GMM) as the classifier for various model orders.
Abstract: Corporate credit rating prediction using statistical and
artificial intelligence (AI) techniques has been one of the attractive
research topics in the literature. In recent years, multiclass
classification models such as the artificial neural network (ANN) or
the multiclass support vector machine (MSVM) have become very
appealing machine learning approaches due to their good
performance. However, most of them have focused only on classifying
samples into nominal categories; thus the unique characteristic of
credit ratings, ordinality, has seldom been considered in their
approaches. This study proposes new types of ANN and MSVM
classifiers, named OMANN and OMSVM respectively.
OMANN and OMSVM are designed to extend binary ANN or SVM
classifiers by applying the ordinal pairwise partitioning (OPP) strategy.
These models can handle ordinal multiple classes efficiently and
effectively. To validate the usefulness of these two models, we applied
them to the real-world bond rating case. We compared the results of
our models to those of conventional approaches. The experimental
results showed that our proposed models improve classification
accuracy in comparison to typical multiclass classification techniques,
with reduced computational resources.
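The OPP idea of decomposing ordered classes into a chain of binary subproblems can be sketched in a few lines. Everything below is a toy stand-in: a one-dimensional invented "rating" feature and a plain gradient-descent logistic learner replace the paper's ANN/SVM base classifiers.

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical 1-D feature with four ordinal rating classes 0..3
x = rng.normal(size=(600, 1))
y = np.digitize(x[:, 0], [-0.8, 0.0, 0.8])       # ordinal labels from thresholds
K = 4

def fit_logistic(X, t, lr=0.5, steps=300):
    """Plain gradient-descent logistic regression (bias + weight)."""
    Xb = np.column_stack([np.ones(len(X)), X])
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1 / (1 + np.exp(-Xb @ w))
        w -= lr * Xb.T @ (p - t) / len(t)
    return w

# OPP decomposition: K-1 ordered binary subproblems "is the rating above k?"
models = [fit_logistic(x, (y > k).astype(float)) for k in range(K - 1)]

def predict(X):
    Xb = np.column_stack([np.ones(len(X)), X])
    pgt = np.array([1 / (1 + np.exp(-Xb @ w)) for w in models])  # P(y > k)
    probs = np.empty((K, len(X)))
    probs[0] = 1 - pgt[0]
    for k in range(1, K - 1):
        probs[k] = pgt[k - 1] - pgt[k]           # recombine adjacent subproblems
    probs[K - 1] = pgt[K - 2]
    return probs.argmax(axis=0)

acc = (predict(x) == y).mean()
```

Each subproblem respects the class order, which is the property that nominal one-vs-one or one-vs-rest decompositions discard.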
Abstract: The response surface methodology (RSM) is a
collection of mathematical and statistical techniques useful in the
modeling and analysis of problems in which a dependent variable
is influenced by several independent variables, the aim being to
determine the conditions under which these variables should operate
to optimize a production process. The RSM estimates a first-order
regression model and sets the search direction using the
method of maximum/minimum slope up/down (MMS U/D).
However, this method selects the step size intuitively, which can
affect the efficiency of the RSM. This paper assesses how the step
size affects the efficiency of this methodology. The numerical
examples are carried out through Monte Carlo experiments,
evaluating three response variables: the efficiency of the gain
function, the distance to the optimum, and the number of iterations.
The simulation experiments showed that the efficiency of the gain
function and the distance to the optimum were not affected by the
step size, while the number of iterations was affected by both the
step size and the type of test function used.
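The steepest-ascent step whose size the abstract investigates can be sketched as follows. The quadratic test surface, the 2^2 factorial design, and the step size are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

def response(x1, x2):
    """Hypothetical noisy process with true optimum at (3, 5)."""
    return 50 - (x1 - 3) ** 2 - (x2 - 5) ** 2 + rng.normal(scale=0.1)

# 2^2 factorial design (coded units) around the current operating point (0, 0)
design = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1]])
yobs = np.array([response(x1, x2) for x1, x2 in design])

# First-order model y = b0 + b1*x1 + b2*x2
X = np.column_stack([np.ones(4), design])
b0, b1, b2 = np.linalg.lstsq(X, yobs, rcond=None)[0]

# Path of steepest ascent; the step size is the tuning choice under study
step = 0.5
direction = np.array([b1, b2]) / np.hypot(b1, b2)
path = [i * step * direction for i in range(1, 6)]
```

A larger `step` reaches the vicinity of the optimum in fewer experiments but overshoots more easily, which is exactly the efficiency trade-off the Monte Carlo experiments quantify.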
Abstract: This paper presents an effective framework for Chinese syntactic parsing, which includes two parts. The first is a parsing framework based on an improved bottom-up chart parsing algorithm, which integrates the beam search strategy of the N-best algorithm and the heuristic function of the A* algorithm for pruning, and then obtains multiple parsing trees. The second is a novel evaluation model, which integrates contextual and partial lexical information into the traditional PCFG model and defines a new score function. Using this model, the tree with the highest score is selected as the best parsing tree. Finally, the contrasting experiment results are given. Keywords: syntactic parsing, PCFG, pruning, evaluation model.
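The chart-parsing-plus-scoring pipeline can be illustrated at toy scale. The grammar below is an invented English CNF fragment, and this exhaustive probabilistic CKY omits the paper's beam/N-best pruning and contextual score function.

```python
from collections import defaultdict

# Tiny PCFG in Chomsky normal form (toy grammar, not the paper's model)
rules = {                     # (A, B, C): prob  for A -> B C
    ("S", "NP", "VP"): 1.0,
    ("VP", "V", "NP"): 1.0,
    ("NP", "Det", "N"): 1.0,
}
lex = {                       # (A, word): prob  for A -> word
    ("Det", "the"): 1.0, ("N", "dog"): 0.5, ("N", "cat"): 0.5,
    ("V", "saw"): 1.0,
}

def cky(words):
    """Return the probability of the best S parse over the whole sentence."""
    n = len(words)
    best = defaultdict(float)             # (i, j, A) -> best inside probability
    for i, w in enumerate(words):
        for (A, ww), p in lex.items():
            if ww == w:
                best[(i, i + 1, A)] = p
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):     # split point
                for (A, B, C), p in rules.items():
                    score = p * best[(i, k, B)] * best[(k, j, C)]
                    if score > best[(i, j, A)]:
                        best[(i, j, A)] = score
    return best[(0, n, "S")]

prob = cky("the dog saw the cat".split())
```

The paper's framework replaces this plain inside probability with a score that also conditions on context and partial lexical information, and prunes chart entries instead of enumerating all of them.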
Abstract: Time-varying network-induced delays in networked
control systems (NCS) are known for degrading a control system's
quality of performance (QoP) and causing stability problems. In the
literature, a control method that models communication delays as a
probability distribution has proved to be a better approach. This
paper focuses on modeling network-induced delays as probability
distributions.
CAN and MIL-STD-1553B are extensively used to carry periodic
control and monitoring data in networked control systems.
In the literature, only methods to estimate the worst-case delays for
these networks are available. In this paper, probabilistic network
delay models for CAN and MIL-STD-1553B networks are given.
A systematic method to estimate values of the model parameters from
network parameters is given, and a method to predict the network delay
in the next cycle from the present network delay is presented. The effect
of active network redundancy, and of redundancy at the node level, on
network delay and system response time is also analyzed.
Abstract: Considering a reservoir with periodic states and
different cost functions with penalty, its release rules can be
modeled as a periodic Markov decision process (PMDP). First,
we prove that the policy-iteration algorithm also works for the
PMDP. Then, with the policy-iteration algorithm, we obtain the
optimal policies for a special aperiodic reservoir model with
two cost functions under a large penalty, and give a discussion
of the case when the penalty is small.
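The policy-iteration scheme can be sketched on a toy reservoir MDP. The states, transition probabilities, costs, penalty, and discount factor below are all invented, and the paper's periodic formulation is replaced by a plain discounted MDP for brevity.

```python
import numpy as np

# Toy reservoir MDP (hypothetical numbers): states = storage levels
# {low, mid, high}; actions = {hold, release}. P[a][s][s'] are transition
# probabilities, R[a][s] are one-step costs (releasing from low is penalized).
P = np.array([[[0.7, 0.3, 0.0],    # a=0 (hold)
               [0.2, 0.6, 0.2],
               [0.0, 0.3, 0.7]],
              [[0.9, 0.1, 0.0],    # a=1 (release)
               [0.5, 0.5, 0.0],
               [0.1, 0.6, 0.3]]])
R = np.array([[0.0, 0.0, 2.0],     # holding when high costs (spill risk)
              [5.0, -1.0, -2.0]])  # large penalty for releasing when low
gamma = 0.9

def policy_iteration(P, R, gamma):
    nA, nS, _ = P.shape
    policy = np.zeros(nS, dtype=int)
    while True:
        # policy evaluation: solve (I - gamma * P_pi) v = r_pi exactly
        Ppi = P[policy, np.arange(nS)]
        rpi = R[policy, np.arange(nS)]
        v = np.linalg.solve(np.eye(nS) - gamma * Ppi, rpi)
        # policy improvement: greedy with respect to the cost-to-go
        q = R + gamma * P @ v
        new_policy = q.argmin(axis=0)
        if np.array_equal(new_policy, policy):
            return policy, v
        policy = new_policy

policy, v = policy_iteration(P, R, gamma)
```

With the large release penalty in the low state, the computed policy holds water when low and releases otherwise, matching the qualitative behavior one expects from the penalized cost functions.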
Abstract: The present work compares the performance of three
turbulence modeling approaches (based on the two-equation k-ε
model) in predicting erosive wear in multi-size dense slurry flow
through a rotating channel. All three turbulence models include a
rotation modification to the production term in the turbulent kinetic
energy equation. The two-phase flow field, obtained numerically
using a Galerkin finite element methodology, relates the local flow
velocity and concentration to the wear rate via a suitable wear model.
The wear models for both sliding wear and impact wear mechanisms
account for the particle size dependence. Results of predicted wear
rates using the three turbulence models are compared for a large
number of cases spanning such operating parameters as rotation rate,
solids concentration, flow rate, particle size distribution and so forth.
The root-mean-square error between the FE-generated data and the
correlation between maximum wear rate and the operating
parameters is found to be less than 2.5% for all three models.
Abstract: The challenge for software development houses in
Bangladesh is to find a path that uses a minimal process rather than
gigantic practice and process areas of the CMMI or ISO type. Small and
medium-size organizations in Bangladesh want to ensure minimum
basic Software Process Improvement (SPI) in day-to-day operational
activities, in the expectation that these basic practices will help them
realize their companies' improvement goals. This paper focuses on the
key issues in basic software practices for small and medium-size software
organizations that cannot afford CMMI, ISO, ITIL, etc. compliance
certifications. This research also suggests a basic software process
practices model for Bangladesh and shows the mapping of our suggestions
to international best practice. In this competitive IT world, small and
medium-size software companies require collaboration and strengthening
to transform their current perspective into the inseparable global IT
scenario. This research performed investigations and analysis of several
projects' life cycles, current good practices, effective approaches, and the
realities and pain areas of practitioners. We carried out reasoning, root
cause analysis, comparative analysis of various approaches, methods and
practices, and justifications of CMMI against real life. We avoided
reinventing the wheel; our focus is a minimal practice that will ensure
dignified satisfaction between organizations and software customers.
Abstract: The aim of this paper is to provide empirical
evidence about the effects that the management of continuous
training has on employability (or employment stability) in the
Spanish labour market. For this purpose, a binary logit model with
an interaction effect is used. The dependent variable includes two
situations of active workers: continuous and discontinuous
employability. To distinguish between them, an Employability
Stability Index (ESI) was calculated taking into account two factors:
time worked and job security. Various aspects of the continuous training
and personal worker data are used as independent variables. The
data, obtained from a survey of a sample of 918 employed workers,
reveal a relationship between the likelihood of continuous
employability and the continuous training received. The empirical results
support the positive and significant relationship between various
aspects of the training provided by firms and the employability
likelihood of the workers, as postulated from a theoretical point of
view.
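The role of the interaction effect in such a binary logit can be illustrated numerically. All coefficient values and variable names below are invented for illustration, not estimates from the paper's survey.

```python
import math

# Hypothetical logit with an interaction term:
# logit(p) = b0 + b1*training + b2*experience + b3*(training * experience)
b0, b1, b2, b3 = -1.2, 0.8, 0.05, 0.03

def p_stable(training, experience):
    """Predicted probability of continuous (stable) employability."""
    z = b0 + b1 * training + b2 * experience + b3 * training * experience
    return 1 / (1 + math.exp(-z))

# With b3 > 0, the marginal effect of training grows with experience:
effect_low = p_stable(1, 2) - p_stable(0, 2)     # effect at 2 years' experience
effect_high = p_stable(1, 15) - p_stable(0, 15)  # effect at 15 years' experience
```

This is why a logit with an interaction term, rather than main effects alone, is needed when the training effect is hypothesized to vary across worker profiles.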
Abstract: In this paper, a new probability density function (pdf)
is proposed to model the statistics of wavelet coefficients, and a
simple Kalman filter is derived from the new pdf using Bayesian
estimation theory. Specifically, we decompose the speckled image
into wavelet subbands, apply the Kalman filter to the high
subbands, and reconstruct a despeckled image from the modified
detail coefficients. Experimental results demonstrate that our method
compares favorably to several other despeckling methods on test
synthetic aperture radar (SAR) images.
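The decompose-shrink-reconstruct pipeline can be sketched in one dimension. Everything below is a stand-in: a piecewise-constant signal with multiplicative speckle replaces the 2-D SAR subbands, and simple soft thresholding replaces the paper's pdf-based Kalman-type shrinkage.

```python
import numpy as np

rng = np.random.default_rng(4)
# 1-D stand-in for a SAR image row: piecewise-constant reflectivity with
# multiplicative speckle of mean 1 (Gamma-distributed, hypothetical parameters)
clean = np.repeat([2.0, 5.0, 3.0, 6.0], 64)
noisy = clean * rng.gamma(shape=10, scale=0.1, size=clean.size)

# One-level Haar decomposition
a = (noisy[0::2] + noisy[1::2]) / np.sqrt(2)   # approximation subband
d = (noisy[0::2] - noisy[1::2]) / np.sqrt(2)   # detail (high) subband

# Shrink detail coefficients (universal soft threshold as a stand-in for the
# Bayesian shrinkage the paper derives from its new pdf)
t = np.median(np.abs(d)) / 0.6745 * np.sqrt(2 * np.log(d.size))
d = np.sign(d) * np.maximum(np.abs(d) - t, 0)

# Reconstruct from the modified coefficients
out = np.empty_like(noisy)
out[0::2] = (a + d) / np.sqrt(2)
out[1::2] = (a - d) / np.sqrt(2)

mse_noisy = ((noisy - clean) ** 2).mean()
mse_out = ((out - clean) ** 2).mean()
```

The paper applies the same structure to 2-D wavelet subbands, with the shrinkage rule derived from the proposed coefficient pdf rather than a fixed threshold.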
Abstract: In this work, we study the impact of dynamically changing link slowdowns on the stability properties of packet-switched networks under the Adversarial Queueing Theory framework. In particular, we consider the Adversarial, Quasi-Static Slowdown Queueing Theory model, where each link slowdown may take on values in the two-valued set of integers {1, D} with D > 1 that remain fixed for a long time, under a (w, p)-adversary. In this framework, we present an innovative systematic construction for the estimation of adversarial injection rate lower bounds which, if exceeded, cause instability in networks that use the LIS (Longest-in-System) protocol for contention resolution. In addition, we show that a network using the LIS protocol for contention resolution may have its instability bound drop at injection rates p > 0 when the network size and the high slowdown D take large values. This is the best known instability lower bound for LIS networks.
Abstract: The paper presents the potential of fuzzy logic (FL-I)
and neural network (ANN-I) techniques for predicting the
compressive strength of SCC mixtures. Six input parameters, i.e. the
contents of cement, sand, coarse aggregate and fly ash, the
superplasticizer percentage, and the water-to-binder ratio, and one
output parameter, i.e. the 28-day compressive strength, are used for
the ANN-I and FL-I modeling. The fuzzy logic model showed better
performance than the neural network model.
Abstract: Physical urban form is recognized to be the medium for
human transactions. It directly influences the travel demand of people
in a specific urban area and the amount of energy used for
transportation. Distorted, sprawling form often creates sustainability
problems in urban areas. EU strategic planning documents declare
that compact urban form and mixed land use patterns must
be given the main focus to achieve better sustainability in urban
areas, but the methods to measure and compare these characteristics
are still not clear.
This paper presents simple methods to measure the spatial
characteristics of urban form by analyzing the location and
distribution of objects in an urban environment. The extended CA
(cellular automata) model is used to simulate urban development
scenarios.
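A minimal cellular automaton of the kind the abstract extends can be sketched as follows. The grid size, neighbourhood rule, and development probability are invented; the paper's extended CA adds spatial drivers (accessibility, land use constraints) not modeled here.

```python
import numpy as np

rng = np.random.default_rng(5)
# Toy urban-growth CA: a non-urban cell may develop if enough of its
# 8 neighbours are already urban.
grid = np.zeros((50, 50), dtype=int)
grid[24:26, 24:26] = 1          # urban seed in the centre

def step(grid, threshold=2, p_dev=0.5):
    # count urban cells in the 8-neighbourhood via shifted copies of the grid
    nbrs = sum(np.roll(np.roll(grid, i, 0), j, 1)
               for i in (-1, 0, 1) for j in (-1, 0, 1) if (i, j) != (0, 0))
    develop = (grid == 0) & (nbrs >= threshold) & (rng.random(grid.shape) < p_dev)
    return grid | develop.astype(int)

for _ in range(10):
    grid = step(grid)
```

Varying the transition rule (and adding driver layers) is what turns such a toy into a model of compact versus sprawling development scenarios.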