Abstract: This study discusses the stumbling blocks stifling the
adoption of GPS technology in the public sector of Pakistan. This
study has been carried out in order to describe the value of GPS
technology and its adoption at various public sector organisations in
Pakistan. The sample size was 200; personnel working in the public sector and aged above 29 years were surveyed. The data collected were quantitatively analysed in SPSS; regression analysis, correlation and cross-tabulation were used to determine the strength of the relationships between key variables. The findings indicate that the main hurdles to GPS adoption in the public sector of Pakistan are a lack of awareness of GPS among the masses in general and the stakeholders in particular, a lack of government initiative in promoting new technologies, the unavailability of GPS infrastructure in Pakistan, and restrictions on map availability for security reasons.
Abstract: Calibration estimation is a method of adjusting the
original design weights to improve the survey estimates by using
auxiliary information such as the known population total (or mean)
of the auxiliary variables. A calibration estimator uses calibrated
weights that are determined to minimize a given distance measure to
the original design weights while satisfying a set of constraints
related to the auxiliary information. In this paper, we propose a new
multivariate calibration estimator for the population mean in the
stratified sampling design, which incorporates information available
for more than one auxiliary variable. The problem of determining the
optimum calibrated weights is formulated as a Mathematical
Programming Problem (MPP) that is solved using the Lagrange
multiplier technique.
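For the single-distance (chi-square) case, the calibrated weights have a closed-form Lagrange solution; the sketch below illustrates that special case in Python with made-up design weights and population totals, not the paper's full stratified MPP formulation.

```python
import numpy as np

# Illustrative sketch: classic chi-square calibration. Minimizing
# sum((w_i - d_i)^2 / d_i) subject to X'w = t has the closed-form
# Lagrange solution w = d * (1 + X @ lam). Data are made up.
rng = np.random.default_rng(0)
n = 100
d = np.full(n, 5.0)                  # original design weights
X = rng.uniform(1, 10, size=(n, 2))  # two auxiliary variables per unit
t = np.array([2600.0, 2800.0])       # assumed known population totals

# Lagrange multipliers solve (X' D X) lam = t - X'd, with D = diag(d)
A = X.T @ (d[:, None] * X)
lam = np.linalg.solve(A, t - X.T @ d)
w = d * (1.0 + X @ lam)              # calibrated weights

# The calibration constraints are satisfied exactly
print(np.allclose(X.T @ w, t))       # True
```

Note how the adjustment to each design weight is proportional to that unit's auxiliary values, so units carrying more auxiliary information are adjusted more.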
Abstract: In this paper a new fast simplification method is presented. The method realizes Karnaugh maps with a large number of variables. To accelerate the operation of the proposed method, a new approach for fast detection of groups of ones is presented. This approach is implemented in the frequency domain: the search operation relies on performing cross correlation in the frequency domain rather than in the time domain. It is proved mathematically and practically that the number of computation steps required by the presented method is less than that needed by conventional cross correlation. Simulation results using MATLAB confirm the theoretical computations. Furthermore, a powerful solution for the realization of complex functions is given. The simplified functions are implemented using a new design for neural networks. Neural networks are used because they are fault tolerant and, as a result, can recognize signals even with noise or distortion. This is very useful for logic functions used in data and computer communications. Moreover, the implemented functions are realized with a minimum amount of components. This is done by using modular neural nets (MNNs) that divide the input space into several homogeneous regions. This approach is applied to implement the XOR function, 16 logic functions on the one-bit level, and a 2-bit digital multiplier. Compared to previous non-modular designs, a clear reduction in the order of computations and hardware requirements is achieved.
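The frequency-domain search this abstract relies on rests on the cross-correlation theorem: correlating via FFTs gives the same result as the direct time-domain computation at lower asymptotic cost (O(N log N) versus O(N^2)). A minimal sketch with synthetic signals:

```python
import numpy as np

# Sketch of the speed idea: circular cross-correlation via FFTs
# matches the direct time-domain computation. Signals are synthetic.
rng = np.random.default_rng(1)
signal = rng.standard_normal(256)
template = rng.standard_normal(256)

# Direct circular cross-correlation in the time domain: O(N^2)
direct = np.array([np.dot(signal, np.roll(template, -k))
                   for k in range(len(signal))])

# Same result in the frequency domain: corr = IFFT(conj(FFT(s)) * FFT(t))
freq = np.fft.ifft(np.conj(np.fft.fft(signal)) * np.fft.fft(template)).real

print(np.allclose(direct, freq))  # True
```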
Abstract: Graph coloring is an important problem in computer
science and many algorithms are known for obtaining reasonably
good solutions in polynomial time. One method of comparing
different algorithms is to test them on a set of standard graphs where
the optimal solution is already known. This investigation analyzes a
set of 50 well known graph coloring instances according to a set of
complexity measures. These instances come from a variety of sources, some representing actual applications of graph coloring (register allocation) and others (Mycielski and Leighton graphs) that are theoretically designed to be difficult to solve. The sizes of the graphs ranged from a low of 11 variables to a high of 864 variables. The method used to solve the coloring problem was the square of the adjacency (i.e., correlation) matrix. The results show that the most difficult graphs to solve were the Leighton and the
queen graphs. Complexity measures such as density, mobility,
deviation from uniform color class size and number of block
diagonal zeros are calculated for each graph. The results showed that
the most difficult problems have low mobility (in the range of .2-.5)
and relatively little deviation from uniform color class size.
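As a rough illustration of the adjacency-squared quantity the method above works with (the graph here is a toy 5-cycle, not one of the 50 benchmark instances): the (i, j) entry of A² counts the length-2 walks, i.e. common neighbours, between vertices i and j.

```python
import numpy as np

# Toy 5-cycle: edges (0,1), (1,2), (2,3), (3,4), (4,0)
A = np.array([[0, 1, 0, 0, 1],
              [1, 0, 1, 0, 0],
              [0, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [1, 0, 0, 1, 0]])
A2 = A @ A

# Diagonal of A^2 is the vertex degree (2 for every cycle vertex);
# off-diagonal entries count common neighbours of a vertex pair.
print(A2.diagonal().tolist())  # [2, 2, 2, 2, 2]
print(A2[0, 2])                # 1 (vertices 0 and 2 share neighbour 1)
```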
Abstract: In this study, the effects of machining parameters on
specific energy during surface grinding of 6061Al-SiC35P
composites are investigated. Vol% of SiC, feed and depth of cut were
chosen as process variables. The power needed to calculate the specific energy is measured using the two-wattmeter method. Experiments are conducted using a standard RSM design, the central composite design (CCD). A second-order response surface model was
developed for specific energy. The results identify the significant
influence factors to minimize the specific energy. The confirmation
results demonstrate the practicability and effectiveness of the
proposed approach.
Abstract: In this paper the real money demand function is analyzed within a multivariate time-series framework. A cointegration approach (the Johansen procedure) is used, assuming interdependence between the money demand determinants, which are nonstationary variables. This helps us understand the behavior of money demand in Croatia, revealing the significant influence between the endogenous variables in the vector autoregression (VAR) system, i.e. the vector error correction model (VECM). Exogeneity of the explanatory variables is tested. The long-run money demand function is estimated, indicating a slow speed of adjustment in removing the disequilibrium. The empirical results provide evidence that real industrial production and the exchange rate explain most of the variation in money demand in the long run, while the interest rate is significant only in the short run.
Abstract: This paper explores the extent of the gap in poverty rates between immigrant and native households in Spanish regions and assesses to what extent regional differences in individual and contextual characteristics can explain the divergences in this gap. Using multilevel techniques and the European Union Survey on Income and Living Conditions, we estimate that an immigrant household experiences a 76 per cent increase in the odds of being poor compared with a native one when we control for individual variables. In relation to regional differences in the risk of poverty, regional-level variables have a greater effect on the reduction of these differences than individual variables.
Abstract: In this paper, the criteria of Ψ-eventual stability have been established for generalized impulsive differential systems of multiple dependent variables. The sufficient conditions have been obtained using piecewise continuous Lyapunov function. An example is given to support our theoretical result.
Abstract: Among all of the engine control variables, the automotive engine air-ratio plays an important role in reducing emissions and fuel consumption while maintaining satisfactory engine power. In
order to effectively control the air-ratio, this paper presents a model
predictive fuzzy control algorithm based on online least-squares
support vector machines prediction model and fuzzy logic optimizer.
The proposed control algorithm was also implemented on a real car for
testing and the results are highly satisfactory. Experimental results
show that the proposed control algorithm can regulate the engine
air-ratio to the stoichiometric value, 1.0, under external disturbance
with less than 5% tolerance.
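A least-squares SVM prediction model of the kind named above reduces training to a single linear solve. The sketch below fits a toy smooth target with an RBF kernel; the data, kernel width and regularisation value are illustrative assumptions, not the engine model itself.

```python
import numpy as np

# Minimal least-squares SVM regression sketch (RBF kernel).
def rbf(X1, X2, sigma=1.0):
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

rng = np.random.default_rng(2)
X = rng.uniform(-3, 3, size=(60, 1))
y = np.sin(X[:, 0])          # toy smooth target standing in for air-ratio

gamma = 100.0                # regularisation parameter (assumed value)
K = rbf(X, X)
n = len(y)

# LS-SVM dual system: [[0, 1'], [1, K + I/gamma]] [b; alpha] = [0; y]
M = np.zeros((n + 1, n + 1))
M[0, 1:] = 1.0
M[1:, 0] = 1.0
M[1:, 1:] = K + np.eye(n) / gamma
sol = np.linalg.solve(M, np.concatenate([[0.0], y]))
b, alpha = sol[0], sol[1:]

pred = K @ alpha + b         # in-sample predictions
print(float(np.abs(pred - y).max()) < 0.1)
```

Unlike a standard SVM, every training point receives a (generally nonzero) coefficient alpha_i, which is what makes the training a linear system rather than a quadratic program.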
Abstract: The main objective of this work is to provide fault detection and isolation based on Markov parameters for residual generation and a neural network for fault classification. The diagnostic approach is accomplished in two steps. In step 1, the system is identified using a series of input/output variables through an identification algorithm. In step 2, the fault is diagnosed by comparing the Markov parameters of the faulty and non-faulty systems; an artificial neural network, trained using predetermined faulty conditions, serves to classify the unknown fault. In step 1, the identification is done by first formulating a Hankel matrix out of the input/output variables and then decomposing the matrix via the singular value decomposition technique. For identifying the system online, a sliding window approach is adopted wherein a window slides over a subset of 'n' input/output variables. The faults are introduced at arbitrary instances and the identification is carried out online. Fault residues are extracted by comparing the first five Markov parameters of the faulty and non-faulty systems. The proposed diagnostic approach is illustrated on benchmark problems with encouraging results.
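The Hankel-plus-SVD identification step described above can be sketched as follows; the data are synthetic and the window sizes are arbitrary choices, not the paper's settings.

```python
import numpy as np

# Build a Hankel matrix from a 1-D sequence: H[i, j] = v[i + j].
def hankel(v, rows):
    cols = len(v) - rows + 1
    return np.array([v[i:i + cols] for i in range(rows)])

rng = np.random.default_rng(3)
u = rng.standard_normal(200)
# Synthetic 2nd-order system output standing in for measured data
y = np.zeros(200)
for k in range(2, 200):
    y[k] = 1.5 * y[k - 1] - 0.7 * y[k - 2] + 0.5 * u[k - 1]

# Stack input and output Hankel blocks over one sliding window of data,
# then take the SVD; dominant singular values indicate the system order.
H = np.vstack([hankel(u[:100], 10), hankel(y[:100], 10)])
s = np.linalg.svd(H, compute_uv=False)
print(H.shape)  # (20, 91)
```

In an online setting the window is advanced over the newest samples and the decomposition is recomputed, which is what lets faults introduced at arbitrary instants show up as changes in the identified parameters.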
Abstract: The main aim of this study is to identify the most
influential variables that cause defects on the items produced by a
casting company located in Turkey. To this end, one of the items
produced by the company with high defective percentage rates is
selected. Two approaches, regression analysis and decision trees, are used to model the relationship between process parameters and defect types. Although the logistic regression models failed, the decision tree model gives meaningful results. Based on these results, it can be
claimed that the decision tree approach is a promising technique for
determining the most important process variables.
Abstract: We consider a Principal-Agent model with the
Principal being a seller who does not know perfectly how much the
buyer (the Agent) is willing to pay for the good. The buyer's preferences are hence his private information. The model corresponds
to the nonlinear pricing problem of Maskin and Riley. We assume
there are three types of Agents. The model is solved using
“informational rents” as variables. In the last section we present the
main characteristics of the optimal contracts in asymmetric
information and some possible extensions of the model.
Abstract: The objective of this work which is based on the
approach of simultaneous engineering is to contribute to the
development of a CIM tool for the synthesis of functional design
dimensions expressed by average values and tolerance intervals. In this paper, the dispersions method known as the Δl method, which has proved reliable in the simulation of manufacturing dimensions, is used to develop a methodology for the automation of the simulation.
This methodology is constructed around three procedures. The first
procedure executes the verification of the functional requirements by
automatically extracting the functional dimension chains in the
mechanical sub-assembly. Then a second procedure performs an
optimization of the dispersions on the basis of unknown variables.
The third procedure uses the optimized values of the dispersions to
compute the optimized average values and tolerances of the
functional dimensions in the chains. A statistical and cost based
approach is integrated in the methodology in order to take account of
the capabilities of the manufacturing processes and to distribute
optimal values among the individual components of the chains.
Abstract: In this paper, we present a new method for solving quadratic programming problems that are not strictly convex. The constraints of the problem are linear equalities and inequalities, with bounded variables. The suggested method combines active-set strategies and support methods. The algorithm of the method and numerical experiments are presented, comparing our approach with the active set method on randomly generated problems.
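At each iteration, active-set methods of this kind solve an equality-constrained subproblem on the current working set via its KKT system. The sketch below solves one such subproblem directly for a deliberately non-strictly-convex example (singular Hessian); it is a reference computation, not the combined active-set/support method itself.

```python
import numpy as np

# Subproblem: minimize 0.5 x'Qx + c'x subject to Ax = b.
# Q is positive semidefinite but singular, so the QP is not strictly convex.
Q = np.array([[2.0, 0.0], [0.0, 0.0]])
c = np.array([-2.0, -1.0])
A = np.array([[1.0, 1.0]])       # equality constraint x1 + x2 = 1
b = np.array([1.0])

# KKT conditions: [[Q, A'], [A, 0]] [x; lam] = [-c; b]
K = np.block([[Q, A.T], [A, np.zeros((1, 1))]])
sol = np.linalg.solve(K, np.concatenate([-c, b]))
x = sol[:2]
print(np.round(x, 6).tolist())   # [0.5, 0.5]
```

Even though Q alone is singular, the KKT matrix is nonsingular here because the constraint removes the flat direction, which is exactly the situation non-strictly-convex QP methods must handle.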
Abstract: The main purpose of this paper is to investigate the long-run equilibrium and short-run dynamics of international housing prices when macroeconomic variables change. We apply Pedroni's panel cointegration, using unbalanced panel data for 33 countries over the period from 1980Q1 to 2013Q1, to examine the relationships among house prices and macroeconomic variables. Our empirical results of panel data cointegration tests support the existence of a cointegration relationship among these macroeconomic variables and house prices. Besides, the empirical results of panel DOLS further show that a 1% increase in economic activity, long-term interest rates, and construction costs causes house prices to change by 2.16%, -0.04%, and 0.22%, respectively, in the long run. Furthermore, increasing economic activity and construction costs have stronger impacts on house prices in lower income countries than in higher income countries. The results lead to the conclusion that a policy of house price growth can be regarded as economic growth for lower income countries. Finally, in the America region the coefficient of economic activity is the highest, which shows that increasing economic activity causes a faster rise in house prices there than in other regions. There are some special cases whereby the coefficients of interest rates are significantly positive in the America and Asia regions.
Abstract: The purpose of this study was to examine to what extent classroom management efficacy, marital status, gender, and
teaching experience predict burnout among primary school teachers.
Participants of this study were 523 (345 female, 178 male) teachers
who completed inventories. The results of multiple regression
analysis indicated that three dimensions of teacher burnout
(Emotional Exhaustion, Depersonalization, Personal
Accomplishment) were affected differently by the four predictor
variables. Findings indicated that for the emotional exhaustion,
classroom management efficacy, marital status and teaching
experience; for depersonalization dimension, classroom management
efficacy and marital status and finally for the personal
accomplishment dimension, classroom management efficacy, gender,
and teaching experience were significant predictors.
Abstract: This research aimed to find out the determining factors for ISO 14001 EMS implementation among SMEs in Malaysia from the resource-based view. A cross-sectional survey approach was used. A research model was proposed which comprises ISO 14001 EMS implementation as the criterion variable, while physical capital resources (i.e. environmental
performance tracking and organizational infrastructures), human
capital resources (i.e. top management commitment and support,
training and education, employee empowerment and teamwork) and
organizational capital resources (i.e. recognition and reward,
organizational culture and organizational communication) as the
explanatory variables. The research findings show that only
environmental performance tracking, top management commitment
and support and organizational culture are found to be positively and
significantly associated with ISO 14001 EMS implementation. It is
expected that this research will shed new light and provide a base for future studies on the role played by a firm's internal resources.
Abstract: Co-integration models the long-term equilibrium relationship of two or more related financial variables. Even if co-integration is found, in the short run there may be deviations from the long-run equilibrium relationship. The aim of this work is to forecast these deviations using neural networks and to create a trading strategy based on them. A case study is used: co-integration residuals from Australian Bank Bill futures are forecast and traded using various exogenous input variables combined with neural networks. The choice of the optimal exogenous input variables for each neural network, undertaken in previous work [1], is validated by comparing the forecasts and corresponding profitability of each, using a trading strategy.
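The co-integration residual being forecast above is the deviation from an estimated long-run relationship. A minimal sketch with synthetic series (not Bank Bill futures data, and omitting the neural-network forecasting stage):

```python
import numpy as np

# Two synthetic series sharing a stochastic trend, hence co-integrated.
rng = np.random.default_rng(4)
common = np.cumsum(rng.standard_normal(500))     # shared random-walk trend
p1 = common + 0.3 * rng.standard_normal(500)
p2 = 0.5 * common + 0.3 * rng.standard_normal(500)

# OLS estimate of the long-run relationship: p1 ~ beta * p2 + alpha
X = np.column_stack([p2, np.ones(500)])
beta, alpha = np.linalg.lstsq(X, p1, rcond=None)[0]

# The co-integration residual: stationary deviation from the long run
residual = p1 - (beta * p2 + alpha)
print(abs(residual.mean()) < 1e-9)  # OLS residuals have zero mean
```

In a trading context, the sign and size of the forecast residual indicate how far prices have drifted from equilibrium and hence the direction of the mean-reverting trade.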
Abstract: In a non-super-competitive environment the concepts of the closed system and management control remain the dominant guiding concepts of management. The merits of the closed loop have been the source of much of the management literature and culture for many decades. It is a useful exercise to investigate the dynamics of the control loop phenomenon and draw some lessons for refining the practice of management. This paper examines the
multitude of lessons abstracted from the behavior of the input/output/feedback control loop model, which is the core of control theory.
There are numerous lessons that can be learned from the insights this
model would provide and how it parallels the management dynamics
of the organization. It is assumed that an organization is basically a
living system that interacts with the internal and external variables. A
viable control loop is one that reacts to variation in the environment and provides or exerts a corrective action. In managing
organizations this is reflected in organizational structure and
management control practices. This paper will report findings that
were a result of examining several abstract scenarios that are
exhibited in the design, operation, and dynamics of the control loop
and how they are projected on the functioning of the organization.
Valuable lessons are drawn in trying to find parallels and new
paradigms, and how the control theory science is reflected in the
design of the organizational structure and management practices. The
paper is structured in a logical and perceptive format. Further
research is needed to extend these findings.
Abstract: This study examines the causal link between energy use and economic growth for five South Asian countries over the period 1971-2006. Panel cointegration, ECM and FMOLS are applied for short- and long-run estimates. In the short run, unidirectional causality from per capita GDP to per capita energy consumption is found, but not vice versa. In the long run, a one percent increase in per capita energy consumption tends to decrease per capita GDP by 0.13 percent, i.e. energy use discourages economic growth. This short- and long-run relationship indicates an energy shortage crisis in South Asia due to increased energy use coupled with insufficient energy supply. Besides this, the long-run estimated coefficient of the error term suggests that short-term adjustments to equilibrium are driven by adjustment back to the long-run equilibrium. Moreover, per capita energy consumption is responsive to adjustment back to equilibrium, and this takes approximately 59 years. This indicates a long-run feedback between both variables.