Abstract: Defect prevention is one of the most vital yet habitually
neglected facets of software quality assurance in any project. If
applied at all stages of software development, it can reduce the
time, cost, and resources required to engineer a high-quality
product. A key challenge for the IT industry is to engineer a
software product with a minimum of post-deployment defects.
This work is an analysis based on data obtained for five selected
projects from leading software companies of varying software
production competence. The main aim of this paper is to provide
information on the methods and practices that support defect
detection and prevention and thereby lead to successful software
development. The defect prevention technique uncovers 99% of defects.
Inspection is found to be an essential technique for producing
high-quality software, through enhanced methodologies of aided
and unaided inspection schedules. On average, 13%-15% of
inspection effort and 25%-30% of testing effort out of the whole
project effort is required to eliminate 99%-99.75% of defects.
A comparison of the end results for the five selected projects
across the companies is also presented, throwing light on how
a particular company can position itself with an appropriate
complementary ratio of inspection to testing.
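The combined effect of sequential defect filters such as inspection and testing can be illustrated with simple arithmetic: each stage removes a fraction of the defects that survive the previous stages. The following sketch uses assumed per-stage efficiencies of 90%, which are illustrative values and not figures from the paper.

```python
def combined_removal(*efficiencies):
    """Overall defect-removal efficiency of filters applied in sequence:
    each stage removes its fraction of the defects that survived the
    earlier stages."""
    remaining = 1.0
    for e in efficiencies:
        remaining *= (1.0 - e)     # defects surviving this stage
    return 1.0 - remaining

# Two stages, each 90% effective, remove 99% of defects overall.
overall = combined_removal(0.90, 0.90)
```

This is how a modest inspection phase plus a testing phase can plausibly reach the 99%-99.75% elimination range the study reports.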
Abstract: The creation and maintenance of knowledge management
systems has been recognized as an important research area.
However, the lack of accurate results from knowledge management
systems limits organizations in applying their knowledge
management processes. This leads to a failure to get the right
information to the right people at the right time, and consequently
to deficiencies in decision-making processes. An intranet offers a
powerful tool for communication and collaboration, for presenting data
and information, and for creating and sharing knowledge,
all in one easily accessible place. This paper proposes a prototype
describing how a knowledge management system, with the support
of intranet capabilities, could greatly increase the accuracy of
capturing, storing, and retrieving knowledge-based processes, thereby
increasing the efficiency of the system. The system requires a
critical mass of usage by its users for the intranet to function as a
knowledge management system. The prototype would lead to the
design of an application that enforces the creation and maintenance
of an effective knowledge management system through an intranet. The
aim of this paper is to introduce an effective system that
captures, stores, and distributes knowledge in a form that
avoids the failures found in most existing systems. The
methodology used in the system would require all employees in
the organization to contribute fully in order to bring the system to
a successful conclusion. The system is still at an initial stage, and
the authors are in the process of practically implementing the ideas
described in the system to produce satisfactory results.
Abstract: The Romanian government has been making
significant attempts to make its services and information available on
the Internet. According to the UN e-government survey conducted in
2008, Romania ranks among mid-range countries in e-government
utilization (41%). Romania's national portal
www.e-guvernare.ro aims at progressively making all services and
information accessible through the portal. However, the success of
these efforts depends, to a great extent, on how well the targeted
users of such services, citizens in general, make use of them. For
this reason, the purpose of the presented study was to identify which
factors could affect citizens' adoption of e-government services.
The study is an extension of the Technology Acceptance Model. The
proposed model was validated using data collected from 481 citizens.
The results provided substantial support for all proposed hypotheses
and showed the significance of the extended constructs.
Abstract: Most Question Answering (QA) systems are
composed of three main modules: question processing,
document processing, and answer processing. The question
processing module plays an important role in QA systems: if
it does not work properly, it causes problems for the
other modules. Answer processing is, moreover, an
emerging topic in Question Answering, where systems
are often required to rank and validate candidate answers.
These techniques, which aim at finding short and precise answers,
are often based on semantic classification.
This paper discusses a new model for question
answering that improves two main modules: question
processing and answer processing.
Two important components form the basis
of question processing. The first is question
classification, which specifies the types of the question and the answer.
The second is reformulation, which converts the user's
question into a form the QA system can understand within a
specific domain. The answer processing module consists of
candidate answer filtering and candidate answer ordering
components, and also includes a validation section for interacting
with the user. This module makes it easier to find the exact
answer. In this paper we describe the question and answer
processing modules by modeling, implementing, and
evaluating the system. The system was implemented in two versions.
Results show that Version 1 gave correct answers to 70%
of questions (30 correct answers to 50 asked questions) and
Version 2 gave correct answers to 94% of questions (47
correct answers to 50 asked questions).
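The two question-processing components described above, classification and reformulation, can be sketched with simple rules. This is a minimal illustration, not the authors' implementation; the rule set and stop-word list are assumptions for the example.

```python
import re

# Keyword rules mapping interrogatives to an expected answer type.
RULES = [
    (r"^(who|whose|whom)\b", "PERSON"),
    (r"^where\b", "LOCATION"),
    (r"^when\b|\bwhat (year|date)\b", "DATE"),
    (r"^(how many|how much)\b", "NUMBER"),
    (r"^(what|which)\b", "DEFINITION"),
]

def classify(question):
    """Question classification: decide the expected answer type."""
    q = question.lower().strip()
    for pattern, answer_type in RULES:
        if re.search(pattern, q):
            return answer_type
    return "OTHER"

STOP = {"who", "what", "where", "when", "which", "how", "is", "are",
        "was", "were", "the", "a", "an", "of", "did", "do", "does",
        "many", "much"}

def reformulate(question):
    """Reformulation: strip interrogative scaffolding to a keyword query."""
    words = re.findall(r"[a-z0-9]+", question.lower())
    return [w for w in words if w not in STOP]

classify("Who wrote Hamlet?")              # "PERSON"
reformulate("When did World War II end?")  # ["world", "war", "ii", "end"]
```

A real system would add candidate-answer filtering and ordering on top of the document-processing results, constrained by the predicted answer type.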
Abstract: In this paper we suggest a new system for e-government.
In this method a government can design a precise and
complete system to administer people and organizations by using five
major documents. These documents contain the important
information about each member of society and help all organizations
carry out their information-processing tasks through them. This information
would be accessible only via a national code, and a secure program would
support it. The suggested system can give good awareness to
society and help it be managed correctly.
Abstract: This paper presents an application of an Artificial Neural Network (ANN) to forecast the actual cost of a project based on the earned value management system (EVMS). For this purpose, some projects were randomly selected from a standard data set, and the necessary progress data, such as actual cost, actual percent complete, baseline cost, and percent complete, were produced for five periods of each project. Then an ANN with five inputs, five outputs, and one hidden layer was trained to produce forecasted actual costs. The comparison between real and forecasted data shows good performance in terms of the Mean Absolute Percentage Error (MAPE) criterion. This approach could be applied to better forecast project cost, decreasing the risk of project cost overrun, and is therefore beneficial for planning preventive actions.
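The described setup, a five-input, five-output network with one hidden layer evaluated by MAPE, can be sketched as follows. The synthetic data (baseline costs with a ~10% overrun plus noise) and the network size are assumptions for illustration, not the paper's data set.

```python
import numpy as np

def mape(actual, forecast):
    """Mean Absolute Percentage Error, in percent."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return 100.0 * np.mean(np.abs((actual - forecast) / actual))

rng = np.random.default_rng(0)
# Synthetic progress data: 5 baseline costs per project -> 5 actual costs.
X = rng.uniform(100, 200, size=(40, 5))
Y = X * 1.1 + rng.normal(0, 2, size=(40, 5))   # ~10% overrun plus noise

# Normalise inputs and targets for stable training.
Xn = (X - X.mean(0)) / X.std(0)
Yn = (Y - Y.mean(0)) / Y.std(0)

# One hidden layer (tanh), trained by full-batch gradient descent on MSE.
W1 = rng.normal(0, 0.1, (5, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.1, (8, 5)); b2 = np.zeros(5)
lr = 0.05
for _ in range(2000):
    H = np.tanh(Xn @ W1 + b1)                  # hidden activations
    P = H @ W2 + b2                            # network output
    G = 2 * (P - Yn) / len(Xn)                 # dMSE/dP
    GH = (G @ W2.T) * (1 - H**2)               # backprop through tanh
    W2 -= lr * H.T @ G;  b2 -= lr * G.sum(0)
    W1 -= lr * Xn.T @ GH; b1 -= lr * GH.sum(0)

# De-normalise the forecast and score it with MAPE.
pred = (np.tanh(Xn @ W1 + b1) @ W2 + b2) * Y.std(0) + Y.mean(0)
score = mape(Y, pred)
```

A low MAPE on held-out periods would indicate the forecast tracks actual cost well; the sketch scores on the training data only.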
Abstract: Energy consumption is one of the indices for
determining the level of development of a nation. Therefore,
availability of an energy supply to all sectors of life in any country is
crucial for its development. There exists a shortage of all kinds of
energy, particularly electricity, which is badly needed for economic
development. Electricity from the sun, which is quite abundant in
most developing countries, is used in rural areas to meet the basic
electricity needs of rural communities. Today's electricity supply in
Myanmar is generated by fuel generators and hydroelectric power
plants. However, far-flung areas that are away from the national grid
cannot enjoy the electricity generated by these sources. Since
Myanmar is a land of plentiful sunshine, especially in the central and
southern regions of the country, solar energy
could hopefully become the final solution to its energy supply
problem. The direct conversion of solar energy into electricity using
photovoltaic systems has been intensively deployed not only
in developed countries but also in developing countries. This paper is
mainly intended to present the solar energy potential and its applications in
Myanmar. It also aims to bring the benefits of solar energy
to people in remote areas that are not yet connected to the national
grid because of the high price of fossil fuel.
Abstract: We analyze the problem of decision making under
ignorance with regrets. Recently, Yager developed a new method
for decision making in which, instead of regrets, he uses another
type of transformation called negrets. Basically, the negret is
considered the dual of the regret. We study this problem in detail
and suggest the use of geometric aggregation operators in this
method. To do so, we develop a different method for
constructing the negret matrix in which all the values are positive. The
main result obtained is that the model is now able to deal with
negative numbers because of the transformation applied in the negret
matrix. We further extend these results to another model, also
developed by Yager, that mixes valuations and negrets. Unfortunately, in
this case we are not able to deal with negative numbers because the
valuations can be either positive or negative.
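The contrast between a regret matrix and an all-positive negret matrix can be sketched numerically. This is an illustrative construction only: it assumes the negret, as the dual of the regret, measures distance above the worst payoff in each state, and the payoff matrix and shift-by-one trick are made up for the example, not taken from Yager's formulation.

```python
import math

payoff = [            # rows: alternatives, columns: states of nature
    [60, 30, 10],
    [40, 40, 40],
    [20, 50, 70],
]

# Regret: shortfall from the best payoff in each state (column maximum).
col_max = [max(col) for col in zip(*payoff)]
regret = [[cmax - a for a, cmax in zip(row, col_max)] for row in payoff]

# Negret (assumed dual): surplus over the worst payoff in each state
# (column minimum), so every entry is non-negative by construction.
col_min = [min(col) for col in zip(*payoff)]
negret = [[a - cmin for a, cmin in zip(row, col_min)] for row in payoff]

def geometric_mean(xs):
    """Geometric aggregation; the +1 shift keeps zero entries aggregatable."""
    shifted = [x + 1 for x in xs]
    return math.prod(shifted) ** (1 / len(shifted)) - 1

scores = [geometric_mean(row) for row in negret]
best = scores.index(max(scores))   # alternative with the largest negret score
```

Because negret entries are non-negative, geometric operators apply even when the original payoffs contain negative numbers, which is the point of the transformed matrix.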
Abstract: As the web continues to grow exponentially, the idea
of crawling the entire web on a regular basis becomes less and less
feasible, so domain-specific search engines were proposed to cover
information on specific domains. As more information
becomes available on the World Wide Web, it becomes more difficult
to provide effective search tools for information access. Today,
people access web information through two main kinds of search
interfaces: browsers (clicking and following hyperlinks) and query
engines (queries in the form of a set of keywords indicating the topic
of interest) [2]. Better support is needed for expressing one's
information need and returning high-quality search results by web
search tools. There appears to be a need for systems that reason
under uncertainty and are flexible enough to recover from the
contradictions, inconsistencies, and irregularities that such reasoning
involves. In a multi-view problem, the features of the domain can be
partitioned into disjoint subsets (views) that are each sufficient to learn the
target concept. Semi-supervised, multi-view algorithms, which
reduce the amount of labeled data required for learning, rely on the
assumptions that the views are compatible and uncorrelated. This
paper describes the use of a semi-supervised machine learning approach
with active learning for domain-specific search engines. A
domain-specific search engine is "an information access system that
allows access to all the information on the web that is relevant to a
particular domain". The proposed work shows that with the help of
this approach, relevant data can be extracted with the minimum of
queries fired by the user. It requires a small number of labeled data and
a pool of unlabeled data to which the learning algorithm is applied to
extract the required data.
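The semi-supervised, multi-view idea can be illustrated with a minimal co-training sketch: two redundant views of each example, a handful of labels, and a loop in which each view's classifier pseudo-labels confident examples for the other. The nearest-centroid learners, the synthetic two-class data, and all parameter values here are assumptions for the example, not the paper's method.

```python
import math
import random

def centroid_fit(X, y):
    """Nearest-centroid classifier for a single feature view."""
    cents = {}
    for label in set(y):
        pts = [x for x, l in zip(X, y) if l == label]
        cents[label] = [sum(c) / len(pts) for c in zip(*pts)]
    return cents

def predict(cents, x):
    """Return (label, confidence); confidence is the margin between
    the two nearest centroids."""
    d = sorted((math.dist(x, c), l) for l, c in cents.items())
    return d[0][1], d[1][0] - d[0][0]

random.seed(0)

def sample(label, n):
    """Two redundant 1-D views of one class (the multi-view assumption)."""
    mu = 2.0 if label else -2.0
    return [([mu + random.gauss(0, 1)], [mu + random.gauss(0, 1)], label)
            for _ in range(n)]

pool0, pool1 = sample(0, 50), sample(1, 50)
labeled = pool0[:3] + pool1[:3]                  # only 6 labeled examples
unlabeled = [(v1, v2) for v1, v2, _ in pool0[3:] + pool1[3:]]
data = pool0 + pool1

# Co-training loop: promote the most confident pseudo-labels each round.
for _ in range(10):
    if not unlabeled:
        break
    c1 = centroid_fit([v1 for v1, _, _ in labeled], [l for _, _, l in labeled])
    c2 = centroid_fit([v2 for _, v2, _ in labeled], [l for _, _, l in labeled])
    scored = []
    for i, (v1, v2) in enumerate(unlabeled):
        l1, conf1 = predict(c1, v1)
        l2, conf2 = predict(c2, v2)
        scored.append((max(conf1, conf2), i, l1 if conf1 >= conf2 else l2))
    scored.sort(reverse=True)
    promoted = scored[:4]
    for _, i, lab in promoted:
        v1, v2 = unlabeled[i]
        labeled.append((v1, v2, lab))
    for i in sorted((i for _, i, _ in promoted), reverse=True):
        del unlabeled[i]

c1 = centroid_fit([v1 for v1, _, _ in labeled], [l for _, _, l in labeled])
accuracy = sum(predict(c1, v1)[0] == l for v1, _, l in data) / len(data)
```

An active-learning variant would instead ask the user to label the *least* confident examples, which is how the paper's approach keeps user queries to a minimum.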
Abstract: Given the fast growth of computers, the Internet, and virtual learning in our country (Iran), and the need for computer-based learning systems and multimedia tools as an essential part of such education, designing and implementing such systems would help in teaching different fields such as science. This paper describes the basic principles of multimedia. Finally, through a description of teaching science to young students, the method of this system is explained.
Abstract: In most popular implementations of parallel GAs,
the whole population is divided into a set of subpopulations; each
subpopulation executes a GA independently, and some individuals are
migrated at fixed intervals over a ring topology. In these studies,
the migrations usually occur 'synchronously' among subpopulations.
As a result, CPUs are not used efficiently and communication
does not occur efficiently either. A few studies have tried asynchronous
migration, but it is hard to implement and setting proper parameter
values is difficult.
The aim of our research is to develop a migration method that is
easy to implement, whose parameter values are easy to set, and which
reduces communication traffic. In this paper, we propose a traffic
reduction method for the asynchronous parallel distributed GA based on
migration of elites only. This is a server-client model. Every client
executes a GA on a subpopulation and sends its elite's information to the
server. The server manages the elite information of each client, and
migrations occur according to the evolution of the subpopulation in
a client. This facilitates the reduction in communication traffic.
To evaluate our proposed model, we apply it to many function optimization
problems. We confirm that our proposed method performs
as well as current methods, that its communication traffic is lower, and
that setting the parameters is much easier.
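The server-client, elites-only scheme can be sketched sequentially: each client evolves its subpopulation, reports only its elite to a server store, and adopts the best foreign elite when it beats the local one. The sphere benchmark, population sizes, and mutation settings are assumptions for the example; a real deployment would run the clients concurrently over a network.

```python
import random

def fitness(x):
    """Sphere function: a standard benchmark to minimise."""
    return sum(v * v for v in x)

def evolve(pop, sigma=0.3):
    """One GA generation: elitism + binary tournament + Gaussian mutation."""
    best = min(pop, key=fitness)
    new = [list(best)]                         # keep the elite
    while len(new) < len(pop):
        a, b = random.sample(pop, 2)
        parent = a if fitness(a) < fitness(b) else b
        new.append([v + random.gauss(0, sigma) for v in parent])
    return new

random.seed(1)
N_CLIENTS, DIM, POP = 4, 5, 20
clients = [[[random.uniform(-5, 5) for _ in range(DIM)] for _ in range(POP)]
           for _ in range(N_CLIENTS)]
server = {}                                    # elite store: client id -> elite

for _ in range(60):
    for cid in range(N_CLIENTS):               # clients would run concurrently
        clients[cid] = evolve(clients[cid])
        elite = min(clients[cid], key=fitness)
        server[cid] = list(elite)              # the only message: one elite
        # Migration: adopt the best foreign elite if it beats the local one.
        foreign = [e for k, e in server.items() if k != cid]
        if foreign:
            best_foreign = min(foreign, key=fitness)
            if fitness(best_foreign) < fitness(elite):
                clients[cid][-1] = list(best_foreign)

best = min((min(pop, key=fitness) for pop in clients), key=fitness)
```

Because each client transmits a single individual per round instead of a migration batch, the communication volume stays small regardless of subpopulation size.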
Abstract: Estimating the time and cost of work completion in a
project, and following them up during execution, contribute to the
success or failure of a project and are very important for the project
management team. Delivering on time and within the budgeted cost
requires good management and control of the project. To deal with the
complex task of controlling and modifying the baseline project
schedule during execution, earned value management systems have
been set up and are widely used to measure and communicate the real
physical progress of a project. However, earned value management
often fails to predict the total duration of the project. In this paper,
data mining techniques are used to predict the total project duration
in terms of the Time Estimate At Completion, EAC(t). For this purpose,
we use a project with 90 activities, updated day by day. Then the
regular indexes from the literature are used and the Earned Duration
Method is applied to calculate the time estimate at completion; these
are set as input data for prediction, and the major parameters among
them are specified using the Clem software. By using data mining, the
effective parameters on EAC(t) and the relationships between them can
be extracted, which is very useful for managing a project with minimum
delay risk. As we state, this could be a simple, safe, and applicable
method for predicting the completion time of a project during execution.
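One common earned-duration-style estimate that such a pipeline would feed on can be sketched directly. The formulation below (SPI = EV/PV, ED = AT × SPI, EAC(t) = PD/SPI) is one standard variant from the earned value literature, used here as an assumed example; the paper's exact index set may differ.

```python
def eac_t(pd, at, ev, pv):
    """Time Estimate At Completion, EAC(t), via an earned-duration index.
    pd: planned duration, at: actual time elapsed,
    ev: earned value, pv: planned value to date."""
    spi = ev / pv          # schedule performance index (cost-based)
    ed = at * spi          # earned duration: schedule-adjusted elapsed time
    return pd / spi, ed

# A 100-day plan, 40 days in, has earned only 30 of the 40 planned units:
est, ed = eac_t(pd=100.0, at=40.0, ev=30.0, pv=40.0)
# SPI = 0.75, so the project is forecast to take about 133.3 days.
```

In the paper's setting, such indexes computed from the day-by-day updates become the input features from which the data mining step selects the parameters most predictive of EAC(t).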
Abstract: Computers are being integrated into various aspects
of everyday human life in different shapes and capacities. This fact
has intensified the need for software development
technologies that are: 1) portable, 2) adaptable, and 3)
simple to develop. This problem is also known as the Pervasive
Computing Problem (PCP), which can be addressed in different
ways, each with its own pros and cons; Context Oriented
Programming (COP) is one of the methods for addressing the PCP.
In this paper a design for a COP framework, a context-aware
framework, is presented that eliminates the weak points of a
previous design based on interpreted languages, while bringing the
power of compiled languages to the implementation of these frameworks.
The key point of this improvement is combining COP with
Dependency Injection (DI) techniques. Both the old and new frameworks
are analyzed to show their advantages and disadvantages. Finally, a
simulation of both designs is presented, indicating that the practical
results agree with the theoretical analysis, while the new design runs
almost 8 times faster.
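The COP-plus-DI combination can be sketched as a container that binds a context name to a concrete implementation and injects it into the application. The class names, the `Screen` role, and the container API below are hypothetical, chosen only to illustrate the pattern, not the paper's framework.

```python
class Screen:
    """Role interface for one context-dependent concern."""
    def render(self, text):
        raise NotImplementedError

class DesktopScreen(Screen):
    def render(self, text):
        return f"[desktop] {text}"

class MobileScreen(Screen):
    def render(self, text):
        return f"[mobile] {text}"

class Container:
    """Minimal DI container: maps a context name to a concrete binding."""
    def __init__(self):
        self._bindings = {}
    def bind(self, context, cls):
        self._bindings[context] = cls
    def resolve(self, context):
        return self._bindings[context]()       # instantiate on demand

class App:
    def __init__(self, screen):                # dependency injected, not created
        self.screen = screen
    def show(self, text):
        return self.screen.render(text)

container = Container()
container.bind("desktop", DesktopScreen)
container.bind("mobile", MobileScreen)

def build_app(context):
    """Context-oriented behaviour selection via the DI container."""
    return App(container.resolve(context))

build_app("mobile").show("hello")   # behaviour chosen by context, not by App
```

Because `App` never names a concrete `Screen`, switching context touches only the container bindings, which is the decoupling that lets a compiled-language COP framework avoid the interpreter-based design's overhead.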