Abstract: The majority of Business Software Systems (BSS) Development and Enhancement Projects (D&EP) fail to meet their effectiveness criteria, which leads to considerable financial losses. One of the fundamental reasons for such projects' exceptionally low success rate is improperly derived estimates of their costs and time. In the case of BSS D&EP these attributes are determined by the work effort, yet reliable and objective effort estimation still appears to be a great challenge to software engineering. This paper therefore aims to present the most important synthetic conclusions from the author's own studies concerning the main factors of effective BSS D&EP work effort estimation. Thanks to rational investment decisions made on the basis of reliable and objective criteria, it is possible to reduce the losses caused not only by abandoned projects but also by large-scale time and cost overruns in BSS D&EP execution.
Abstract: The steady incompressible flow has been solved in cylindrical coordinates in both the vapour region and the wick structure of a heat pipe. The governing equations in the vapour region are the continuity, Navier-Stokes and energy equations, which have been solved using the SIMPLE algorithm. To study the effect of parameter variation on heat pipe operation, a benchmark case has been chosen and the effect of changing one parameter has been analyzed while the others are held fixed.
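For reference, a minimal statement of the vapour-region governing equations for steady incompressible axisymmetric flow in cylindrical coordinates (r, z) is sketched below; the precise source terms and the wick-region closure used in the paper are not reproduced here.

```latex
% Continuity
\frac{1}{r}\frac{\partial (r v_r)}{\partial r} + \frac{\partial v_z}{\partial z} = 0
% Momentum (r and z), with
% \nabla^2 = \frac{1}{r}\frac{\partial}{\partial r}\!\left(r\frac{\partial}{\partial r}\right) + \frac{\partial^2}{\partial z^2}
\rho\left(v_r \frac{\partial v_r}{\partial r} + v_z \frac{\partial v_r}{\partial z}\right)
  = -\frac{\partial p}{\partial r} + \mu\left(\nabla^2 v_r - \frac{v_r}{r^2}\right)
\rho\left(v_r \frac{\partial v_z}{\partial r} + v_z \frac{\partial v_z}{\partial z}\right)
  = -\frac{\partial p}{\partial z} + \mu \nabla^2 v_z
% Energy
\rho c_p \left(v_r \frac{\partial T}{\partial r} + v_z \frac{\partial T}{\partial z}\right) = k \nabla^2 T
```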
Abstract: A fuzzy classifier using multiple ellipsoids to approximate decision regions for classification is designed in this paper. The Gustafson-Kessel algorithm (GKA), with an adaptive distance norm based on covariance matrices of prototype data points, is adopted to learn the ellipsoids. GKA is able to adapt the distance norm to the underlying distribution of the prototype data points, except that the sizes of the ellipsoids need to be determined a priori. To overcome GKA's inability to determine an appropriate ellipsoid size, the genetic algorithm (GA) is applied to learn it. With GA combined with GKA, it is shown in this paper that the proposed method outperforms the benchmark algorithms as well as other algorithms in the field.
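As an illustration of the adaptive norm at the core of GKA (a generic sketch, not the paper's full classifier), the squared Gustafson-Kessel distance of a point to a cluster uses a norm-inducing matrix built from the cluster's fuzzy covariance; the fixed cluster volume below is exactly the quantity the paper tunes with a GA:

```python
import numpy as np

def gk_distance_sq(x, center, fuzzy_cov, volume=1.0):
    """Squared Gustafson-Kessel distance of point x to a cluster prototype.
    The norm-inducing matrix A = (volume * det(F))^(1/n) * F^{-1} adapts the
    metric to the cluster's shape; `volume` must be fixed a priori in plain
    GKA, which is the limitation the paper addresses with a GA."""
    n = len(center)
    A = (volume * np.linalg.det(fuzzy_cov)) ** (1.0 / n) * np.linalg.inv(fuzzy_cov)
    d = np.asarray(x, float) - np.asarray(center, float)
    return float(d @ A @ d)
```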
Abstract: Ants are fascinating creatures that demonstrate the ability to find food and bring it back to their nest. Their ability as a colony to find paths to food sources has inspired the development of algorithms known as Ant Colony Systems (ACS). The principle of cooperation forms the backbone of such algorithms, which are commonly used to find solutions to problems such as the Traveling Salesman Problem (TSP). Ants communicate with each other through chemical substances called pheromones. Modeling individual ants' ability to manipulate this substance can help an ACS find the best solution. This paper introduces a Dynamic Ant Colony System with three-level updates (DACS3) that enhances an existing ACS. Experiments were conducted to observe single-ant behavior in a colony of Malaysian house red ants, and this behavior was incorporated into the DACS3 algorithm. We benchmark the performance of DACS3 against DACS on TSP instances ranging from 14 to 100 cities. The results show that the DACS3 algorithm achieves shorter distances in most cases and also runs considerably faster than DACS.
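For context, the two standard pheromone updates of a plain ACS, which DACS3 extends with a third update level (the paper's specific third level is not reproduced here), might be sketched as follows:

```python
import numpy as np

def acs_local_update(tau, i, j, rho=0.1, tau0=1e-4):
    """Standard ACS local update, applied each time an ant crosses edge
    (i, j): evaporate a little pheromone toward the initial level tau0."""
    tau[i, j] = (1.0 - rho) * tau[i, j] + rho * tau0
    tau[j, i] = tau[i, j]  # symmetric TSP

def acs_global_update(tau, best_tour, best_length, alpha=0.1):
    """Standard ACS global update: only the best-so-far tour deposits
    pheromone, proportional to its quality 1/L."""
    deposit = 1.0 / best_length
    for i, j in zip(best_tour, best_tour[1:] + best_tour[:1]):
        tau[i, j] = (1.0 - alpha) * tau[i, j] + alpha * deposit
        tau[j, i] = tau[i, j]
```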
Abstract: The Flexible Job Shop Problem (FJSP) is an extension of the classical Job Shop Problem (JSP). The FJSP extends the routing flexibility of the JSP, i.e., the assignment of a machine to each operation, which makes it more difficult than the JSP. In this study, a Cooperative Coevolutionary Genetic Algorithm (CCGA) is presented to solve the FJSP. Makespan (the time needed to complete all jobs) is used as the performance measure for the CCGA. To test the performance and efficiency of our CCGA, benchmark problems are solved. Computational results show that the proposed CCGA is comparable with other approaches.
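As a small illustration of the performance measure (a sketch under an assumed schedule representation, not the paper's encoding), the makespan of a schedule is simply the latest completion time over all machines:

```python
def makespan(schedule):
    """Makespan = completion time of the last operation on any machine.
    `schedule` maps machine id -> list of (start_time, duration) pairs
    (an assumed representation, for illustration only)."""
    return max(start + duration
               for operations in schedule.values()
               for start, duration in operations)

# Hypothetical two-machine schedule: makespan is 9.
print(makespan({0: [(0, 3), (4, 5)], 1: [(0, 2), (2, 4)]}))
```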
Abstract: Partitioning is a critical area of VLSI CAD. In order to build complex digital logic circuits, it is often essential to sub-divide a multi-million transistor design into manageable pieces. This paper looks at various partitioning techniques in VLSI CAD, targeted at various applications. We propose an evolutionary time-series model and a statistical glitch prediction system for partitioning a circuit, the latter using a neural network with global feature selection based on a clustering method. For the evolutionary time-series model, we made use of genetic, memetic and neuro-memetic techniques. Our work focused on the K-means and EM clustering methods. A comparative study of all techniques for the circuit partitioning problem in VLSI design is provided. The performance of all approaches is compared using the MCNC standard cell placement benchmark netlists. Analysis of the experimental results shows that the neuro-memetic model outperforms the other models in recognizing sub-circuits with the minimum number of interconnections between them.
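As a minimal illustration of the clustering side of this flow (hypothetical features and sizes, not the paper's actual feature extraction), cells of a netlist can be grouped with scikit-learn's KMeans:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
cell_features = rng.random((200, 2))   # hypothetical per-cell feature vectors

# Partition the 200 cells into 4 clusters; each cluster becomes a candidate
# sub-circuit whose external nets approximate the cut size.
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(cell_features)
print(np.bincount(labels))             # sizes of the resulting partitions
```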
Abstract: The purpose of this study is to identify the critical success factors (CSFs) for the effective implementation of Six Sigma in non-formal service sectors.
Based on a survey of the literature, the CSFs for Six Sigma have been identified and assessed for their importance in the non-formal service sector using the Delphi technique. The selected CSFs were put before a panel of experts, who clustered them and prepared a cognitive map to establish their relationships.
All the critical success factors examined and obtained from the review of the literature have been assessed for their importance with respect to their contribution to Six Sigma effectiveness in the non-formal service sector.
The study is limited to the non-formal service sectors involved in the organization of religious festivals only. However, a similar exercise can be conducted for a broader sample of other non-formal service sectors such as temple/ashram management, religious tours management, etc.
The research suggests an approach to identify the CSFs of Six Sigma for the non-formal service sector. Not all the CSFs of the formal service sector are applicable to non-formal services, hence the opinion of experts was sought to add or delete CSFs. In the first round of Delphi, the panel of experts suggested two new CSFs, "competitive benchmarking (F19)" and "residents' involvement (F28)", which were added for assessment in the next round of Delphi. One of the CSFs, "full-time Six Sigma personnel (F15)", has been omitted from the proposed clusters of CSFs for non-formal organizations, as it is practically impossible to deploy full-time trained Six Sigma recruits.
Abstract: In this paper, the differential quadrature method is applied to simulate natural convection in an inclined cubic cavity using the velocity-vorticity formulation. The numerical capability of the present algorithm is demonstrated by application to natural convection in an inclined cubic cavity. The velocity Poisson equations, the vorticity transport equations and the energy equation are all solved as a coupled system for the seven field variables consisting of three velocities, three vorticities and the temperature. The coupled equations are solved simultaneously by imposing the vorticity definition at the boundary, without requiring the explicit specification of vorticity boundary conditions. Test results obtained for an inclined cubic cavity with different angles of inclination for Rayleigh numbers equal to 10^3, 10^4, 10^5 and 10^6 indicate that the present coupled solution algorithm predicts the benchmark results for the temperature and flow fields. The present formulation is thus shown to be capable of solving the coupled Navier-Stokes equations effectively and accurately.
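In a standard non-dimensional Boussinesq form (the paper's exact scaling is an assumption here), the velocity-vorticity system referred to above comprises the velocity Poisson equations, the vorticity transport equations and the energy equation:

```latex
\nabla^2 \mathbf{v} = -\nabla \times \boldsymbol{\omega}
\frac{\partial \boldsymbol{\omega}}{\partial t} + (\mathbf{v}\cdot\nabla)\boldsymbol{\omega}
  = (\boldsymbol{\omega}\cdot\nabla)\mathbf{v} + Pr\,\nabla^2 \boldsymbol{\omega}
  + Ra\,Pr\,\nabla \times (T\,\hat{\mathbf{e}}_g)
\frac{\partial T}{\partial t} + (\mathbf{v}\cdot\nabla) T = \nabla^2 T
```

where $\hat{\mathbf{e}}_g$ is the unit vector along gravity, tilted by the cavity inclination angle.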
Abstract: Leo Breiman's Random Forests (RF) is a recent development in tree-based classifiers and has quickly proven to be one of the most important algorithms in the machine learning literature. It has shown robust and improved classification results on standard data sets. Ensemble learning algorithms such as AdaBoost and Bagging have been in active research and have shown improved classification results on several benchmark data sets, mainly with decision trees as their base classifiers. In this paper we experiment with applying these meta-learning techniques to random forests. We study the behavior of ensembles of random forests on standard data sets from the UCI repository, compare the original random forest algorithm with its ensemble counterparts, and discuss the results.
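A minimal scikit-learn sketch of the experiment's structure (illustrative data set and hyper-parameters, not the paper's settings) could look like this:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import (AdaBoostClassifier, BaggingClassifier,
                              RandomForestClassifier)
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)   # a UCI-origin data set

rf = RandomForestClassifier(n_estimators=50, random_state=0)
bagged_rf = BaggingClassifier(rf, n_estimators=10, random_state=0)
boosted_rf = AdaBoostClassifier(rf, n_estimators=10, random_state=0)

# Compare plain RF against its bagged and boosted ensemble counterparts.
for name, clf in [("RF", rf), ("Bagging+RF", bagged_rf), ("AdaBoost+RF", boosted_rf)]:
    print(name, cross_val_score(clf, X, y, cv=5).mean())
```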
Abstract: Natural gas flow contains undesirable solid particles, liquid condensate and/or oil droplets, and requires reliable removal equipment to perform filtration. Recent natural gas processing applications demand compact and reliable process equipment. Since conventional means are complicated in design, poor in efficiency and lacking in robustness, a supersonic nozzle has been introduced as an alternative means to meet such demands.
A 3-D convergent-divergent nozzle is simulated using a commercial code for nozzle pressure ratios (NPR) varying from 1.2 to 2. Six different nozzle shapes are numerically examined to locate the position of the shock wave, as this spot could be considered a benchmark for particle separation. Rectangular, triangular, circular, elliptical, pentagonal and hexagonal nozzles, all with the same cross-sectional area, are simulated using the Fluent code.
The simple one-dimensional inviscid theory does not describe the actual features of the fluid flow precisely, as it ignores the impact of the nozzle configuration on the flow properties. The CFD simulation results, however, show that the nozzle geometry influences the flow structures, including the location of the shock wave.
The CFD analysis predicts shock appearance when p01/pa > 1.2 for almost all geometries, located at the lower area ratio (Ae/At). Simulation results show that at relatively small NPR the shock wave in the elliptical nozzle is the farthest from the throat; as NPR increases, the hexagonal nozzle becomes the farthest. The numerical results are compared with available experimental data and show good agreement in terms of shock location and flow structure.
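The one-dimensional inviscid theory mentioned above reduces, for a calorically perfect gas with specific heat ratio $\gamma$, to the isentropic area-Mach relation, which predicts the Mach number (and hence the shock position) from the local-to-throat area ratio alone, independent of cross-sectional shape:

```latex
\frac{A}{A^{*}} = \frac{1}{M}\left[\frac{2}{\gamma+1}
  \left(1+\frac{\gamma-1}{2}M^{2}\right)\right]^{\frac{\gamma+1}{2(\gamma-1)}}
```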
Abstract: Adaptive genetic algorithms extend standard GAs with dynamic procedures for applying evolutionary operators such as crossover, mutation and selection. In this paper, we propose a new adaptive genetic algorithm that uses statistical information about the population as a guideline for tuning its crossover, selection and mutation operators. This algorithm, called the Statistical Genetic Algorithm, is compared with a traditional GA on some benchmark problems.
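One generic way to drive operator rates from population statistics (an illustration of the idea, not the paper's exact Statistical Genetic Algorithm) is to raise the mutation rate as the fitness spread of the population collapses:

```python
import numpy as np

def adaptive_mutation_rate(fitness_values, base_rate=0.01, max_rate=0.2):
    """Map the population's relative fitness spread to a mutation rate:
    a converging population (low spread) is mutated more aggressively to
    restore diversity. The rates and the mapping are illustrative."""
    f = np.asarray(fitness_values, dtype=float)
    spread = f.std() / (abs(f.mean()) + 1e-12)   # relative fitness spread
    return float(np.clip(max_rate * (1.0 - spread), base_rate, max_rate))
```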
Abstract: European Union candidate status provides a strong motivation for decision-making in the candidate countries in shaping regional development policy, where a transfer of power from the center to the periphery is envisioned. The process of Europeanization anticipates that the candidate countries will configure their regional institutional templates in line with the requirements of European Union policies, and it introduces new instruments of the enlargement incentive framework to be employed in regional development schemes. It is observed that the contribution of local actors to decision-making in the design of allocation architectures enhances the efficiency of the funds and increases the positive effects of the projects funded under the regional development objectives. This study aims at exploring the performance of three regional development grant schemes in Turkey, established and allocated under the pre-accession process, with special emphasis on the roles of national and local actors in decision-making for regional development. Efficiency analyses have been conducted using the DEA methodology, which has proven to be a superior method for comparative efficiency and benchmarking measurements. The findings of this study, in parallel with similar international studies, show that the participation of local actors in funding decisions contributes both to the quality and to the efficiency of the projects funded under the EU schemes.
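For reference, the standard input-oriented CCR envelopment model of DEA, which scores a decision-making unit $o$ against the others (the paper's exact orientation and input/output choices are not stated in the abstract), reads:

```latex
\min_{\theta,\,\lambda}\ \theta
\quad\text{s.t.}\quad
\sum_{j=1}^{n} \lambda_j x_{ij} \le \theta\, x_{io} \quad (i = 1,\dots,m),
\qquad
\sum_{j=1}^{n} \lambda_j y_{rj} \ge y_{ro} \quad (r = 1,\dots,s),
\qquad \lambda_j \ge 0,
```

where $x_{ij}$ and $y_{rj}$ are the inputs and outputs of unit $j$, and $\theta \le 1$ with $\theta = 1$ indicating efficiency.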
Abstract: A neurofuzzy approach for a given set of input-output training data is proposed in two phases. First, the data set is partitioned automatically into a set of clusters, and a fuzzy if-then rule is extracted from each cluster to form a fuzzy rule base. Second, a fuzzy neural network is constructed accordingly and its parameters are tuned to increase the precision of the fuzzy rule base. This network is able to learn and optimize the rule base of a Sugeno-like fuzzy inference system using a hybrid learning algorithm that combines gradient descent and the least-mean-squares algorithm. The proposed neurofuzzy system has the advantages of determining the number of rules automatically, reducing the number of rules, decreasing computational time, learning faster and consuming less memory. The authors also investigate how neurofuzzy techniques can be applied in the area of control theory to design a fuzzy controller for linear and nonlinear dynamic systems modelled from a set of input/output data. Simulation analysis is carried out on a wide range of processes, including on-line identification of nonlinear components in a control system and a benchmark problem involving the prediction of a chaotic time series. Furthermore, well-known examples of linear and nonlinear systems are simulated in the Matlab/Simulink environment. The combination is also illustrated by modeling the relationship between automobile trips and demographic factors.
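For reference, a first-order Sugeno-type rule base of the kind tuned here takes the form (two inputs shown for brevity):

```latex
R_i:\ \text{IF } x \text{ is } A_i \text{ AND } y \text{ is } B_i
      \text{ THEN } f_i = p_i x + q_i y + r_i,
\qquad
\hat{f}(x, y) = \frac{\sum_i w_i f_i}{\sum_i w_i},
\quad w_i = \mu_{A_i}(x)\,\mu_{B_i}(y),
```

where the membership parameters of $A_i, B_i$ are updated by gradient descent and the consequent parameters $p_i, q_i, r_i$ by least squares, which is the split that the hybrid learning algorithm exploits.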
Abstract: Unlike general-purpose processors, digital signal
processors (DSP processors) are strongly application-dependent. To
meet the needs for diverse applications, a wide variety of DSP
processors based on different architectures ranging from the
traditional to VLIW have been introduced to the market over the
years. The functionality, performance, and cost of these processors
vary over a wide range. In order to select a processor that meets the
design criteria for an application, processor performance is usually
the major concern for digital signal processing (DSP) application
developers. Performance data are also essential for the designers of
DSP processors to improve their design. Consequently, several DSP
performance benchmarks have been proposed over the past decade or
so. However, none of these benchmarks seems to include recent DSP applications.
In this paper, we use a new benchmark that we recently developed
to compare the performance of popular DSP processors from Texas
Instruments and StarCore. The new benchmark is based on the
Selectable Mode Vocoder (SMV), a speech-coding program from the
recent third generation (3G) wireless voice applications. All
benchmark kernels are compiled by the compilers of the respective
DSP processors and run on their simulators. Weighted arithmetic
mean of clock cycles and arithmetic mean of code size are used to
compare the performance of five DSP processors.
In addition, we studied how the performance of a processor is affected by code structure, processor architecture features and compiler optimization. The extensive experimental data gathered,
analyzed, and presented in this paper should be helpful for DSP
processor and compiler designers to meet their specific design goals.
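A minimal sketch of the cycle-count metric follows (the kernel cycle counts and weights below are hypothetical, not the paper's measurements):

```python
import numpy as np

def weighted_mean_cycles(cycle_counts, weights):
    """Weighted arithmetic mean of per-kernel clock-cycle counts; weights
    could reflect, e.g., how often each SMV kernel is invoked."""
    c = np.asarray(cycle_counts, dtype=float)
    w = np.asarray(weights, dtype=float)
    return float((w * c).sum() / w.sum())

# Hypothetical example: three kernels with their cycle counts and weights.
print(weighted_mean_cycles([1200, 3400, 560], [0.5, 0.3, 0.2]))
```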
Abstract: In this paper, a clustering algorithm named K-Harmonic Means (KHM) is employed in the training of Radial Basis Function Networks (RBFNs). KHM organizes the data into clusters and determines the centres of the basis functions. The popular clustering algorithms K-means (KM) and Fuzzy c-means (FCM) are highly dependent on the initial identification of elements that represent the clusters well; KHM avoids this problem, which leads to an improvement in classification performance compared to the other clustering algorithms. A comparison of classification accuracy was performed between KM, FCM and KHM on the benchmark data sets Iris Plant, Diabetes and Breast Cancer. RBFN training with the KHM algorithm shows better accuracy on these classification problems.
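The insensitivity to initialization comes from KHM's objective, which replaces K-means' minimum distance with a harmonic mean over all centres, so every centre influences every point. A minimal sketch (the exponent p and implementation details are the usual defaults, assumed here):

```python
import numpy as np

def khm_objective(X, centers, p=3.5):
    """K-Harmonic Means objective: for each point, the harmonic average of
    its distances (raised to power p, typically p > 2) to all K centres,
    summed over the data set."""
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)  # N x K
    d = np.maximum(d, 1e-12)            # guard against division by zero
    K = centers.shape[0]
    return float((K / (1.0 / d**p).sum(axis=1)).sum())
```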
Abstract: To satisfy the need for outfield tests of star sensors, a method is put forward to construct a reference attitude benchmark. First, its basic principle is introduced. Then, all the separate conversion matrices are deduced, namely: the conversion matrix responsible for the transformation from the Earth-centered inertial frame i to the Earth-centered Earth-fixed frame w according to the time of an atomic clock, the conversion matrix from frame w to the geographic frame t, and the matrix from frame t to the platform frame p; the attitude matrix of the benchmark platform relative to frame i can then be obtained as the product of these three matrices. Next, the attitude matrix of the star sensor relative to frame i is obtained once the mounting matrix from frame p to the star sensor frame s has been calibrated, and the reference attitude angles for star sensor outfield tests can be calculated from the transformation from frame i to frame s. Finally, a computer program is written to solve for the reference attitudes, and error curves are drawn for the three-axis attitude angles, whose maximum absolute error is just 0.25″. The analysis of each loop and the final simulation results show that this method of acquiring the absolute reference attitude by precise timing is feasible for star sensor outfield tests.
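In direction-cosine-matrix notation (writing $C_a^b$ for the rotation from frame a to frame b; this notational convention is our assumption), the chain described above is:

```latex
C_i^p = C_t^p\, C_w^t\, C_i^w,
\qquad
C_i^s = C_p^s\, C_i^p,
```

where $C_i^w$ follows from the atomic-clock time, $C_p^s$ is the calibrated mounting matrix, and the reference attitude angles are extracted from $C_i^s$.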
Abstract: A one-step conservative level set method, combined with a global mass correction method, is developed in this study to simulate incompressible two-phase flows. The present framework does not need to solve the conservative level set scheme in two separate steps, and the global mass is exactly conserved, making the present method more efficient than the two-step conservative level set scheme. Dispersion-relation-preserving schemes are utilized for the advection terms. The pressure Poisson equation solver is ported to GPU computation using the pCDR library developed by the National Center for High-Performance Computing, Taiwan, and SMP parallelization is used to accelerate the remaining calculations. Three benchmark problems were solved for performance evaluation, and good agreement with the reference solutions is demonstrated for all the investigated problems.
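For context, the classical two-step conservative level set scheme that the one-step method improves upon advects a smeared indicator function $\phi \in [0,1]$ and then re-sharpens it (our reconstruction of the standard scheme, not the paper's one-step variant):

```latex
\frac{\partial \phi}{\partial t} + \nabla\cdot(\phi\,\mathbf{u}) = 0,
\qquad
\frac{\partial \phi}{\partial \tau}
  + \nabla\cdot\big(\phi(1-\phi)\,\hat{\mathbf{n}}\big)
  = \nabla\cdot(\varepsilon\,\nabla\phi),
```

where $\hat{\mathbf{n}} = \nabla\phi/|\nabla\phi|$ and $\tau$ is a pseudo-time for the re-initialization step.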
Abstract: The literature reveals that many investors rely on technical trading rules when making investment decisions. If stock markets are efficient, one cannot achieve superior results by using these trading rules. However, if market inefficiencies are present, profitable opportunities may arise. The aim of this study is to investigate the effectiveness of technical trading rules in 34 emerging stock markets. The performance of the rules is evaluated by utilizing White's Reality Check and Hansen's Superior Predictive Ability test, along with an adjustment for transaction costs. These tests evaluate whether the best model performs better than a buy-and-hold benchmark, and they address data snooping problems, which is essential to obtain unbiased outcomes. Based on our results we conclude that technical trading rules are not able to outperform a naïve buy-and-hold benchmark on a consistent basis. However, we do find significant trading rule profits in 4 of the 34 investigated markets. We also present evidence that technical analysis is more profitable in crisis situations, although this result is relatively weak.
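As one example of the kind of rule under study (a common moving-average crossover; whether it belongs to the paper's exact rule universe is an assumption), the trading signal can be computed as follows:

```python
import numpy as np

def ma_crossover_signal(prices, short=5, long=20):
    """Moving-average crossover rule: hold the asset (signal 1) while the
    short moving average is above the long one, otherwise stay out (0)."""
    prices = np.asarray(prices, dtype=float)
    sma = lambda w: np.convolve(prices, np.ones(w) / w, mode="valid")
    n = len(prices) - long + 1            # number of fully defined signals
    return (sma(short)[-n:] > sma(long)).astype(int)
```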
Abstract: This paper investigates the control of a bouncing ball using Model Predictive Control. The bouncing ball is a benchmark problem for various rhythmic tasks such as juggling, walking, hopping and running. Humans develop intentions, which may be viewed as a reference trajectory, and try to track them. The human brain optimizes the control effort needed to track its reference; this forms the central theme of the control of the bouncing ball in our investigation.
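A minimal model of the plant (an illustrative hybrid-dynamics sketch, not the paper's exact setup) combines ballistic flight with an instantaneous restitution impact:

```python
def simulate_bounces(h0, v0, e=0.8, g=9.81, dt=1e-3, t_end=5.0):
    """Ballistic flight integrated with forward Euler; on ground contact the
    velocity is reversed and scaled by the coefficient of restitution e
    (v_plus = -e * v_minus). Returns the sampled height trajectory."""
    h, v, heights = h0, v0, []
    for _ in range(int(t_end / dt)):
        v -= g * dt
        h += v * dt
        if h <= 0.0:          # impact event
            h, v = 0.0, -e * v
        heights.append(h)
    return heights
```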
Abstract: In many data mining applications, it is a priori known
that the target function should satisfy certain constraints imposed
by, for example, economic theory or a human decision-maker. In this
paper we consider partially monotone prediction problems, where the
target variable depends monotonically on some of the input variables
but not on all. We propose a novel method to construct prediction models in which monotone dependencies with respect to some of the input variables are preserved by construction. Our
method belongs to the class of mixture models. The basic idea is to
convolute monotone neural networks with weight (kernel) functions
to make predictions. By using simulation and real case studies,
we demonstrate the application of our method. To obtain a sound assessment of the performance of our approach, we use standard
neural networks with weight decay and partially monotone linear
models as benchmark methods for comparison. The results show that
our approach outperforms partially monotone linear models in terms
of accuracy. Furthermore, the incorporation of partial monotonicity
constraints not only leads to models that are in accordance with the
decision maker's expertise, but also reduces considerably the model
variance in comparison to standard neural networks with weight
decay.
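A minimal sketch of the monotonicity-by-construction idea (a fully monotone version for brevity; the paper's partial variant would constrain only the weights tied to the monotone inputs, and the convolution with kernel functions is omitted):

```python
import numpy as np

def monotone_mlp(x, W1_raw, b1, w2_raw, b2):
    """One-hidden-layer network that is non-decreasing in every input:
    exponentiating the raw weights forces them positive, and the sigmoid
    activation is non-decreasing, so the composition is monotone."""
    W1, w2 = np.exp(W1_raw), np.exp(w2_raw)      # positive weights
    h = 1.0 / (1.0 + np.exp(-(x @ W1 + b1)))     # sigmoid hidden layer
    return h @ w2 + b2
```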