Abstract: With technology evolving every day and global competition increasing, industries are under constant pressure to be the best. They need to provide good-quality products at competitive prices, when and how the customer wants them. To achieve this level of service, products and their respective supply chain processes need to be flexible and evolvable; otherwise, changes become extremely expensive, slow, and prone to combinatorial effects. These combinatorial effects impact the whole organizational structure from a management, financial, documentation, and logistics perspective, and especially from the perspective of the Enterprise Resource Planning (ERP) information system. By applying Normalized Systems theory to segments of the supply chain, we believe these effects can be minimized, especially at the time of launching an organization-wide software project. The purpose of this paper is to point out that if an organization wants to develop software from scratch, or to implement an existing ERP package for its business needs, and if its business processes are normalized and modular, then this will most probably yield a normalized, modular software system that can be easily modified as the business evolves. Another important goal of this paper is to raise awareness of the design of business processes in a software implementation project: if the blueprints created are normalized, then software developers and configurators can map those modular blueprints into modular software. This paper only prepares the ground for further studies; the concept will be supported by walking through the steps of developing, configuring and/or implementing a software system for an organization using two methods: the Software Development Lifecycle (SDLC) method and the Accelerated SAP (ASAP) implementation method. Both methods start from customer requirements, then blueprint the business processes, and finally map those processes into a software system. Since those requirements and processes are the starting point of the implementation, normalizing them will result in normalized software.
Abstract: This work describes a framework for teaching global software engineering (GSE) in undergraduate university programs. The framework proposes a teaching method that incorporates suitable software requirements elicitation techniques and validated communication tools, both critical in global software development scenarios. The framework allows teachers to simulate small software development companies, formed by Latin American students, which build information systems. Students from three Latin American universities played the role of engineers, iteratively developing a requirements specification in a global software project. The framework involves the use of a special-purpose Wiki for asynchronous communication between the participants, which also serves as a practice for improving the quality of the software requirements formulated by the students. The motivation students gain from participating in these practices together with peers from other countries is a significant factor that contributes positively to the learning process. The framework promotes communication, negotiation, and other complementary competencies that are useful for working in GSE scenarios.
Abstract: With the rapid development of information technology, project management has recently gained more and more attention. Based on CDIO, this paper proposes several teaching reforms for the software project management curriculum. We first shift from a teacher-centered to a student-centered classroom, and adopt project-driven learning, scenario animations, varied teaching rhythms, case studies, and teamwork practice to improve students' enthusiasm for learning. Results show that these attempts have been well received and very effective, and that students prefer learning with the reformed curriculum.
Abstract: The software maintenance phase starts once a software project has been developed and delivered; after that, any modification to it constitutes maintenance. Software maintenance involves modifications to keep a software product usable in a changed or changing environment, to correct discovered faults, and to improve performance or maintainability. Software maintenance, and the management of software maintenance, are recognized as two of the most important and most expensive processes in the life of a software product. This research bases the prediction of maintenance effort on risk and working-time evaluations, using them as data sets for neural networks. The aim of this paper is to support project maintenance managers: they can pass the issues planned for the next software service patch to experts for risk and working-time evaluation, and afterwards feed all the data to neural networks to obtain a maintenance prediction. This process leads to a more accurate prediction of the working hours needed for the software service patch, which in turn leads to better budget planning for software maintenance projects.
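A minimal sketch of the kind of pipeline described above, assuming hypothetical per-issue features (an expert risk score and an expert time estimate) and using scikit-learn's MLPRegressor; the paper's actual network architecture, features, and data are not specified here.

```python
# Minimal sketch: predicting maintenance working hours from expert
# evaluations, assuming hypothetical features (risk score, expert time
# estimate). Not the paper's actual model or data.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Each row: [expert risk score (1-10), expert time estimate (hours)]
X = np.array([[2, 4], [7, 16], [5, 8], [9, 40], [3, 6], [8, 24]])
y = np.array([5, 20, 9, 55, 6, 30])  # actual hours spent (illustrative)

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(8,),
                                   max_iter=5000, random_state=0))
model.fit(X, y)
print(model.predict([[6, 12]]))  # predicted hours for a new issue
```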
Abstract: Several methods exist for monitoring software projects, the objective of which is to ensure that software projects are developed and delivered successfully. Performance measurement is a method closely associated with monitoring, and it can be examined through two important attributes, efficiency and effectiveness, both of which are important factors for the success of a software project. Consequently, successful steering is achieved by monitoring and controlling a software project via performance measurement criteria and metrics. Hence, this paper aims to identify the performance measurement criteria and metrics for monitoring the performance of a software project using the Goal Question Metric (GQM) approach. The GQM approach is used to ensure that the identified metrics are reliable and useful. The identified metrics serve as useful guidelines for project managers in monitoring the performance of their software projects.
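As an illustration of the structure the GQM approach imposes (a goal refined into questions, each answered by metrics), here is a minimal sketch with hypothetical goal, question, and metric content; the paper's actual criteria and metrics are not reproduced.

```python
# Minimal sketch of the Goal-Question-Metric (GQM) hierarchy with
# hypothetical content; the paper's actual goals and metrics differ.
from dataclasses import dataclass, field

@dataclass
class Metric:
    name: str      # e.g. "schedule variance"
    formula: str   # how the metric is computed

@dataclass
class Question:
    text: str
    metrics: list = field(default_factory=list)

@dataclass
class Goal:
    purpose: str
    questions: list = field(default_factory=list)

goal = Goal(
    purpose="Monitor the efficiency of a software project",
    questions=[Question(
        text="Is the project on schedule?",
        metrics=[Metric("schedule variance",
                        "earned value - planned value")])])

# Walk the hierarchy top-down: goal -> question -> metric.
for q in goal.questions:
    for m in q.metrics:
        print(goal.purpose, "->", q.text, "->", m.name)
```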
Abstract: Measuring the reusability of Object-Oriented (OO) program code is important to ensure successful and timely adaptation and integration of reused code in new software projects, and it has become even more relevant with the availability of huge numbers of open-source projects. Reuse saves cost, increases development speed, and improves software reliability. Measuring reusability is not a straightforward process, owing to the variety of metrics and qualities linked to software reuse and the lack of comprehensive empirical studies supporting the proposed metrics or models. In this paper, a conceptual model is proposed to measure the reusability of OO program code. A comprehensive set of metrics is used to compute the most significant factors of reusability, and an empirical investigation is conducted to measure the reusability of the classes of randomly selected open-source Java projects. Additionally, the impact of using inner and anonymous classes on the reusability of their enclosing classes is assessed. The results are thoroughly analyzed to identify the factors behind the lack of reusability in open-source OO program code and the impact of nesting on it.
Abstract: Measurement is the process by which numbers or symbols are assigned to attributes of entities in the real world in such a way as to describe them according to clearly defined rules. Software metrics are instruments for measuring aspects of a software product, and they are used throughout a software project to assist in estimation, quality control, productivity assessment, and project control. Object-oriented software metrics focus on measurements applied to classes and related characteristics. These measurements give the software engineer insight into the behavior of the software and into how changes can be made to reduce complexity and improve the software's continuing capability. Object-oriented software metrics can be classified into two types: static and dynamic. Static metrics are concerned with measuring the software through static analysis, while dynamic metrics are concerned with measuring the software at run time. Most earlier work focused on static metrics, and although some work has been done on the dynamic nature of software measurement, this area demands further research. In this paper, we present a set of dynamic metrics specifically for polymorphism in object-oriented systems.
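As an illustration of what a dynamic polymorphism metric can capture, here is a minimal sketch that counts, at run time, how many distinct concrete classes actually receive each polymorphic call; this is one plausible metric definition, not necessarily the set proposed in the paper.

```python
# Minimal sketch of a dynamic polymorphism metric, assuming a simple
# definition (count of distinct concrete receiver classes observed per
# method at run time); the paper's actual metric definitions may differ.
import functools
from collections import defaultdict

receiver_types = defaultdict(set)  # method name -> observed classes

def track_polymorphism(method):
    @functools.wraps(method)
    def wrapper(self, *args, **kwargs):
        receiver_types[method.__name__].add(type(self).__name__)
        return method(self, *args, **kwargs)
    return wrapper

class Shape:
    @track_polymorphism
    def area(self):
        raise NotImplementedError

class Circle(Shape):
    @track_polymorphism
    def area(self):
        return 3.14159 * 2 ** 2

class Square(Shape):
    @track_polymorphism
    def area(self):
        return 2 ** 2

for shape in [Circle(), Square(), Circle()]:
    shape.area()

# Dynamic dispatch breadth per method: how many concrete classes
# actually received the call during execution.
print({m: len(cls) for m, cls in receiver_types.items()})  # {'area': 2}
```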
Abstract: Software project effort estimation is frequently seen as complex and expensive for individual software engineers. Software production is in a crisis: it suffers from excessive costs and is often out of control. It has been suggested that software production is out of control because we do not measure, and you cannot control what you cannot measure. Over the last decade, a number of studies on cost estimation have been conducted. Metric-set selection plays a vital role in software cost estimation studies, yet its importance has been ignored, especially in neural-network-based studies. In this study, we explore the reasons for those disappointing results and implement different neural network models using augmented new metrics. The results obtained are compared with previous studies that used traditional metrics. To enable these comparisons, two types of data are used: the first part is taken from the Constructive Cost Model (COCOMO'81), which is commonly used in previous studies, and the second part is collected according to the new metrics at a leading international company in Turkey. The accuracy of the selected metrics and of the data samples is verified using statistical techniques. The model presented here is based on the Multi-Layer Perceptron (MLP). Another difficulty associated with cost estimation studies is that data collection requires time and care; to make more thorough use of the samples collected, the k-fold cross-validation method is also applied. It is concluded that, as long as an accurate and quantifiable set of metrics is defined and measured correctly, neural networks can be applied to software cost estimation studies with success.
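A minimal sketch of MLP-based effort estimation evaluated with k-fold cross-validation, using scikit-learn and synthetic placeholder data; the study's actual metric set, network architecture, and company data are not reproduced here.

```python
# Minimal sketch: MLP effort estimation evaluated with k-fold
# cross-validation. Synthetic placeholder data; not the study's metrics.
import numpy as np
from sklearn.model_selection import KFold
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.uniform(10, 100, size=(30, 3))            # e.g. size + cost drivers
y = 2.5 * X[:, 0] ** 0.9 + rng.normal(0, 5, 30)   # synthetic effort

errors = []
for train_idx, test_idx in KFold(n_splits=5, shuffle=True,
                                 random_state=0).split(X):
    model = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(10,),
                                       max_iter=5000, random_state=0))
    model.fit(X[train_idx], y[train_idx])
    pred = model.predict(X[test_idx])
    # mean relative error on the held-out fold
    errors.append(np.mean(np.abs(pred - y[test_idx]) / y[test_idx]))

print("relative error per fold:", [round(e, 2) for e in errors])
```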
Abstract: Intuitively, we expect distributed software development to increase the risks associated with achieving cost, schedule, and quality goals. To compound this problem, agile software development (ASD) insists that one of the main ingredients of its success is the cohesive communication attributed to collocation of the development team. The following study identified the degree of communication richness needed to achieve comparable software quality (reduced pre-release defects) between distributed and collocated teams. This paper explores the relevance of communication richness in various development phases and its impact on quality. Through examination of a large distributed agile development project, this investigation seeks to understand the levels of communication required within each ASD phase to produce quality results comparable to those achieved by collocated teams. Obviously, a multitude of factors affects the outcome of software projects; however, within distributed agile software development teams, the mode of communication is one of the critical components required to achieve team cohesiveness and effectiveness. Accordingly, this study constructs a distributed agile communication model (DAC-M) for potential application to similar distributed agile development efforts, based on measuring the suitable level of communication. The results of the study show that less rich communication methods, in the appropriate phases, may be sufficient to achieve equivalent quality in distributed ASD efforts.
Abstract: Various models have been derived by studying large numbers of completed software projects from various organizations and application domains to explore how project size maps to project effort. However, the prediction accuracy of these models still needs improvement. Since a neuro-fuzzy system is able to approximate non-linear functions with greater precision, it is used here as a soft computing approach to generate a model by formulating the size-effort relationship through training. In this paper, the neuro-fuzzy technique is used for software effort estimation modeling on NASA software project data, and the performance of the developed models is compared with the Halstead, Walston-Felix, Bailey-Basili and Doty models from the literature.
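For reference, the four benchmark models as they are commonly stated in the effort estimation literature, with effort E in person-months and size in thousands of lines of code (KLOC); the constants below are the commonly cited values and should be checked against the original sources:

```latex
\begin{align*}
\text{Halstead:} \quad & E = 0.7\,(\mathrm{KLOC})^{1.50} \\
\text{Walston--Felix:} \quad & E = 5.2\,(\mathrm{KLOC})^{0.91} \\
\text{Bailey--Basili:} \quad & E = 5.5 + 0.73\,(\mathrm{KLOC})^{1.16} \\
\text{Doty } (\mathrm{KLOC} > 9)\text{:} \quad & E = 5.288\,(\mathrm{KLOC})^{1.047}
\end{align*}
```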
Abstract: A dynamic software risk assessment model is presented. Analogies between dynamic financial analysis and software risk assessment models are established, and based on these analogies it is suggested that a dynamic risk model is the way forward for the risk assessment of software projects. It is shown how software risk changes during the different phases of a software project, hence requiring a dynamic risk assessment model to capture these variations. Furthermore, the evolution of dynamic financial analysis models is discussed and mapped to the evolution of software risk assessment models.
Abstract: Standard software development methodologies are often not applied to software projects in many developing countries. The approach generally practiced is close to what eXtreme Programming (XP) promotes: keep coding and testing as the requirements evolve. XP is an agile software development methodology with an inherent capability for improving the efficiency of Business Software Development (BSD). XP can facilitate the Business-to-Development (B2D) relationship owing to its customer-oriented advocacy. From a practitioner's point of view, we applied XP to BSD, and the results show that customer involvement has a positive impact on productivity but can also frustrate the success of the project. In an effort to promote software engineering practice in developing countries of Africa, we present the experiment performed, the lessons learned, the problems encountered, and the solutions adopted in applying the XP methodology to BSD.
Abstract: The fault-proneness of a software module is the probability that the module contains faults. Different techniques have been proposed to predict the fault-proneness of modules, including statistical methods, machine learning techniques, neural network techniques, and clustering techniques. The aim of the proposed study is to explore whether metrics available early in the lifecycle (i.e. requirement metrics), metrics available late in the lifecycle (i.e. code metrics), and the combination of the two can be used to identify fault-prone modules using a Genetic Algorithm technique. The approach has been tested on real defect datasets from NASA software projects written in the C programming language. The results show that the fusion of requirement and code metrics yields the best prediction model for detecting faults, compared with the commonly used code-based model.
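A minimal sketch of genetic-algorithm-based selection of a metric subset for fault prediction, with one-point crossover, bit-flip mutation, and cross-validated accuracy as fitness; the encoding, operators, classifier, and data below are illustrative assumptions, not the paper's actual design.

```python
# Minimal sketch: GA-based selection of a metric subset for fault
# prediction. Synthetic placeholder data; the paper's GA may differ.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 8))            # 8 candidate metrics per module
y = (X[:, 0] + X[:, 3] > 0).astype(int)  # faulty iff metrics 0 and 3 high

def fitness(mask):
    """Cross-validated accuracy of a classifier on the selected metrics."""
    if not mask.any():
        return 0.0
    clf = DecisionTreeClassifier(max_depth=3, random_state=0)
    return cross_val_score(clf, X[:, mask], y, cv=5).mean()

pop = rng.integers(0, 2, size=(20, 8)).astype(bool)  # bit-string encoding
for _ in range(30):                                  # generations
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-10:]]          # truncation selection
    children = []
    for _ in range(10):
        a, b = parents[rng.integers(10, size=2)]
        cut = rng.integers(1, 8)
        child = np.concatenate([a[:cut], b[cut:]])   # one-point crossover
        flip = rng.random(8) < 0.05                  # bit-flip mutation
        children.append(child ^ flip)
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("selected metrics:", np.flatnonzero(best))
```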
Abstract: Despite the various methods that exist for software risk management, software projects still have a high failure rate. As the complexity and size of projects increase, managing software development becomes more difficult, and the need for deeper analysis and risk assessment becomes vital. In this paper, a classification of software risks is specified, and the relations between these risks are presented using a risk tree structure. The risks are analyzed and assessed using probabilistic calculations. This analysis supports both qualitative and quantitative assessment of the risk of failure and can aid the software risk management process. The classification and risk tree structure can also be applied in software tools.
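A minimal sketch of the kind of probabilistic calculation a risk tree enables, assuming independent leaf risks combined through OR/AND gates; the paper's actual classification, tree structure, and formulas are not reproduced here.

```python
# Minimal sketch of probabilistic evaluation over a risk tree, assuming
# independent leaf risks combined through OR/AND gates.
from math import prod

def risk(node):
    """Return the probability that a risk-tree node materializes."""
    if "prob" in node:                      # leaf risk
        return node["prob"]
    child_probs = [risk(c) for c in node["children"]]
    if node["gate"] == "OR":                # any child risk suffices
        return 1 - prod(1 - p for p in child_probs)
    return prod(child_probs)                # AND: all children required

# Hypothetical example: project failure if either a schedule risk or a
# technical risk materializes; the technical risk needs both causes.
tree = {"gate": "OR", "children": [
    {"prob": 0.10},                         # schedule slip
    {"gate": "AND", "children": [
        {"prob": 0.30},                     # unstable requirements
        {"prob": 0.20}]}]}                  # inexperienced team

print(round(risk(tree), 4))  # 1 - (1-0.10)*(1-0.06) = 0.154
```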
Abstract: Software effort estimation is the process of predicting the most realistic effort required to develop or maintain software, based on incomplete, uncertain and/or noisy input. Effort estimates may be used as input to project plans, iteration plans, and budgets. Various models, such as the Halstead, Walston-Felix, Bailey-Basili, Doty, and GA-based models, have already been used to estimate software effort for projects. In this study, statistical models, a Fuzzy-GA system, and Neuro-Fuzzy (NF) inference systems are evaluated for estimating software effort. The performance of the developed models was tested on NASA software project datasets, and the results were compared with the Halstead, Walston-Felix, Bailey-Basili, Doty, and Genetic Algorithm based models from the literature. The results show that the NF model has the lowest MMRE and RMSE values, outperforming the Fuzzy-GA based hybrid inference system and the other existing models used for effort prediction.
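MMRE and RMSE are standard accuracy measures; a small sketch of how they are computed, with illustrative values rather than the paper's results:

```python
# Standard definitions of MMRE and RMSE used to compare the models;
# the numbers below are illustrative, not the paper's results.
import math

def mmre(actual, predicted):
    """Mean Magnitude of Relative Error: mean(|a - p| / a)."""
    return sum(abs(a - p) / a
               for a, p in zip(actual, predicted)) / len(actual)

def rmse(actual, predicted):
    """Root Mean Squared Error: sqrt(mean((a - p)^2))."""
    return math.sqrt(sum((a - p) ** 2
                         for a, p in zip(actual, predicted)) / len(actual))

actual = [115.8, 96.0, 79.0]       # effort in person-months (illustrative)
predicted = [110.2, 101.5, 75.4]

print(round(mmre(actual, predicted), 3), round(rmse(actual, predicted), 3))
```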
Abstract: Free and open source software is gaining popularity at an unprecedented rate. Despite some concerns about quality, organizations have been using such software for various purposes. One of the biggest concerns about free and open source software is post-release software defects and their fixing. Many believe that there is no appropriate support available to fix the bugs; on the contrary, others believe that, owing to the active involvement of internet users, online forums become a major channel for communicating the identification and fixing of defects in open source software. The research model of this empirical investigation establishes and studies the relationship between open source software defects and online public forums. The results provide evidence about the realities behind the myths concerning defects in open source software. We used a dataset consisting of 616 open source software projects covering a broad range of categories to study the research model of this investigation. The results show that online forums play a significant role in identifying and fixing defects in open source software.
Abstract: Estimation accuracy is among the greatest challenges for software developers. This study aims to build and evaluate a neuro-fuzzy model for estimating software project development time. Forty-one modules developed from ten programs were used as the dataset. The proposed approach is compared with fuzzy logic and neural network models, and the results show that the MMRE (Mean Magnitude of Relative Error) obtained with the neuro-fuzzy model is substantially lower than that obtained with fuzzy logic or a neural network.
Abstract: Risk response planning is important for software project risk management (SPRM). In CMMI, risk management sits at the third capability maturity level, which provides a framework for software project risk identification, assessment, planning, and control. However, CMMI-based SPRM currently lacks quantitative supporting tools, especially for implementing software project risk planning. In this paper, an economic optimization model is presented for selecting risk reduction actions during the risk response planning phase of a software project. Furthermore, an example taken from the Chinese software industry is used to illustrate the application of the method. The research provides project risk managers with a risk decision method that can be used in the implementation of CMMI-based SPRM.
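One simple form such an economic optimization can take is a budget-constrained (knapsack-style) selection of risk reduction actions that maximizes the total reduction in risk exposure; the sketch below uses brute-force enumeration and hypothetical actions, and the paper's actual model may differ.

```python
# Minimal sketch of selecting risk-reduction actions under a budget so
# as to maximize total reduction in risk exposure. Hypothetical data.
from itertools import combinations

# (action, cost, expected reduction in risk exposure), illustrative
actions = [("extra code reviews", 40, 90),
           ("prototype risky module", 70, 150),
           ("hire domain expert", 100, 180),
           ("add regression tests", 30, 60)]
budget = 140

best_subset, best_reduction = (), 0
for r in range(1, len(actions) + 1):
    for subset in combinations(actions, r):
        cost = sum(a[1] for a in subset)
        reduction = sum(a[2] for a in subset)
        if cost <= budget and reduction > best_reduction:
            best_subset, best_reduction = subset, reduction

print([a[0] for a in best_subset], best_reduction)
```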
Abstract: In this paper, a subtractive clustering based fuzzy inference system is used for early detection of faults in function-oriented software systems. The approach has been tested on real defect datasets from the NASA software projects PC1 and CM1. Both the code-based model and the joined model (a combination of requirement and code metrics) of the datasets are used for training and testing the proposed approach. The performance of the models is recorded in terms of Accuracy, MAE, and RMSE values, and the proposed approach performs better in the case of the joined model. The results show that clustering and fuzzy logic together provide a simple yet powerful means of modeling early fault detection in function-oriented software systems.
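A minimal sketch of the core of subtractive clustering as commonly described (Chiu's method): each data point's potential is the sum of Gaussian contributions from all other points, the highest-potential point becomes a cluster center, and potentials near it are then suppressed; parameters and data below are illustrative assumptions.

```python
# Minimal sketch of subtractive clustering's center selection; the
# paper's actual parameters and fuzzy rule generation are not shown.
import numpy as np

def subtractive_clustering(X, ra=1.0, rb=1.5, n_centers=2):
    alpha, beta = 4 / ra**2, 4 / rb**2
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise dist^2
    potential = np.exp(-alpha * d2).sum(axis=1)
    centers = []
    for _ in range(n_centers):
        c = int(np.argmax(potential))     # highest-potential point
        centers.append(X[c])
        # suppress potential near the newly chosen center
        potential -= potential[c] * np.exp(-beta * d2[:, c])
    return np.array(centers)

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (20, 2)),   # two synthetic groups
               rng.normal(3, 0.3, (20, 2))])
print(subtractive_clustering(X))  # one center per group
```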
Abstract: This paper presents a computational methodology based on matrix operations for a computer-based solution to the problem of performance analysis of software reliability models (SRMs). A set of seven comparison criteria has been formulated to rank the various non-homogeneous Poisson process software reliability models proposed over the past 30 years for estimating software reliability measures such as the number of remaining faults, the software failure rate, and software reliability. Selecting the optimal SRM for a particular case has long been an area of interest for researchers in the field, and the tools and techniques for SRM selection found in the literature cannot be used with a high level of confidence because they rely on a limited number of selection criteria. A real data set from a middle-sized software project, taken from published papers, is used to demonstrate the matrix method. The result of this study is a ranking of SRMs based on the permanent of the criteria matrix formed for each model from the comparison criteria: the software reliability model with the highest permanent is ranked first, and so on.
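Since the ranking rests on the permanent of each model's criteria matrix, here is a minimal sketch of computing a permanent via Ryser's formula, with an illustrative (hypothetical) criteria matrix:

```python
# Minimal sketch: matrix permanent via Ryser's formula,
# perm(A) = (-1)^n * sum over nonempty column subsets S of
#           (-1)^|S| * prod_i sum_{j in S} a_ij.
from itertools import combinations
from math import prod

def permanent(A):
    n = len(A)
    total = 0
    for size in range(1, n + 1):
        for cols in combinations(range(n), size):
            total += (-1) ** size * prod(sum(row[j] for j in cols)
                                         for row in A)
    return (-1) ** n * total

# Hypothetical 3x3 criteria matrix for one SRM; not the paper's data.
A = [[0.9, 0.2, 0.4],
     [0.3, 0.8, 0.5],
     [0.6, 0.1, 0.7]]
print(permanent(A))  # higher permanent -> higher rank in the method
```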