Abstract: One of the main difficulties in developing multi-robot systems (MRS) lies in the simulation and testing tools available. Indeed, if the differences between simulations and real robots are too significant, the transition from simulation to robot cannot be made without another long development phase, and the simulation cannot be validated. Moreover, testing different algorithmic solutions or robot modifications requires strong knowledge of current tools and significant development time. The availability of tools for MRS, particularly for flying drones, is therefore crucial to enable the industrial emergence of these systems. This research presents the most commonly used tools for MRS simulation and their main shortcomings, and proposes complementary tools to improve designers' productivity in developing multi-vehicle solutions, with a focus on a fast learning curve and a rapid transition from simulation to real-world use. The proposed contributions build on existing open-source tools, such as the Gazebo simulator combined with ROS (Robot Operating System) and the open-source multi-platform autopilot ArduPilot, to bring them to a broad audience.
Abstract: This paper is aimed at creating an Automatic Java X-Machine testing tool for software development. The nature of software development is changing; thus, the type of software testing tools required is also changing. Software is growing increasingly complex and, in part due to the commercial impetus for faster software releases with new features and value, increasingly in danger of containing faults. These faults can incur huge costs for software development organisations and users; Cambridge Judge Business School’s research estimated the cost of software bugs to the global economy at $312 billion. Beyond the cost, faster software development methodologies and increasing expectations on developers to become testers are driving demand for faster, automated, and effective tools to prevent potential faults as early as possible in the software development lifecycle. Using X-Machine theory, this paper explores a new tool to address software complexity, changing expectations on developers, and faster development pressures and methodologies, with a view to reducing the huge cost of fixing software bugs.
Abstract: Growing dependency of mankind on software
technology increases the need for thorough testing of the software
applications and automated testing techniques that support testing
activities. We have outlined our testing strategy for performing
various types of automated testing of Java applications using AspectJ, which has become the de facto standard for Aspect-Oriented Programming (AOP). Likewise, JUnit, a unit testing framework, is the most popular Java testing tool. In this paper, we evaluate our proposed AOP approach for automated testing against JUnit on various parameters. First we outline the similarities between the two approaches, and then we present a detailed comparison of the two testing techniques on factors such as lines of testing code, learning curve, and testing of private members. We establish that our AOP testing approach using AspectJ offers several advantages and is thus more effective than JUnit.
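As an illustration of the "testing of private members" factor mentioned above (a sketch, not code from the paper): an AspectJ aspect can advise private methods directly at weave time, whereas JUnit-style test code must open them via reflection. The class `Account` and method `fee` below are hypothetical; the snippet shows only the plain-Java reflection workaround that the JUnit side typically needs:

```java
import java.lang.reflect.Method;

public class PrivateMemberTest {
    // Hypothetical class under test with a private method.
    static class Account {
        private int fee(int amount) { return amount / 100; }
    }

    // JUnit-style code must open private members via reflection;
    // an AspectJ aspect could advise Account.fee(..) directly instead.
    static int invokeFee(int amount) throws Exception {
        Method fee = Account.class.getDeclaredMethod("fee", int.class);
        fee.setAccessible(true); // bypass the private access modifier
        return (int) fee.invoke(new Account(), amount);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(invokeFee(500)); // prints 5
    }
}
```

The reflection boilerplate above is exactly the extra "lines of testing code" the AOP approach claims to avoid, since a woven aspect runs inside the class's own access scope.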
Abstract: Benchmarking of tools for dynamic analysis of vulnerabilities in web applications is performed periodically, because these tools update their knowledge bases and search algorithms from time to time in order to improve their accuracy. Unfortunately, the vast majority of these evaluations are made by software enthusiasts who publish their results on blogs or non-academic websites, and always with the same evaluation methodology. Similarly, most academics who have carried out this type of analysis from a scientific standpoint follow the same methodology as the empirical authors. This paper is motivated by the interest in answering questions that many users of such tools have been asking over the years, such as whether a tool truly tests and evaluates every vulnerability it claims to, or whether it really delivers a complete report of all the vulnerabilities tested and exploited. These questions have also motivated previous work, but without real answers. The aim of this paper is to present results that truly answer, at least for the tools tested, those open questions. All the results have been obtained by changing the common benchmarking model used in those previous works.
Abstract: Software security testing is an important means of ensuring software security and trustworthiness. This paper first discusses the definition and classification of software security testing and broadly surveys software security testing methods and tools. It then analyzes and summarizes the advantages and disadvantages of the various methods and their scope of application, and presents a taxonomy of security testing tools. Finally, the paper points out future focuses and development directions for software security testing technology.
Abstract: Testable software has two inherent properties – observability and controllability. Observability facilitates observation of the internal behavior of software to the required degree of detail. Controllability allows creation of difficult-to-achieve states prior to the execution of various tests. In this paper, we describe COTT, a Controllability and Observability Testing Tool, to create testable object-oriented software. COTT provides a framework that helps the user instrument object-oriented software to build the required controllability and observability. During testing, the tool facilitates creation of the difficult-to-achieve states required for testing difficult-to-test conditions and observation of internal details of execution at the unit, integration, and system levels. The execution observations are logged in a test log file, which is used for post-analysis and to generate test coverage reports.
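The two properties can be illustrated with a minimal sketch (not the COTT framework itself; the class and hook names below are hypothetical). A controllability hook forces a hard-to-reach state directly, and an observability hook exposes an internal execution log for post-test analysis:

```java
import java.util.ArrayList;
import java.util.List;

public class ObservableBuffer {
    private final List<String> log = new ArrayList<>(); // observability: execution log
    private final int capacity = 1024;
    private int size = 0;

    public void add(String item) {
        if (size >= capacity) {           // difficult-to-test overflow branch
            log.add("overflow rejected: " + item);
            return;
        }
        size++;
        log.add("added: " + item);
    }

    // Controllability hook: force the hard-to-reach "full" state directly,
    // instead of adding 1024 items in every test.
    void forceFull() { size = capacity; }

    // Observability hook: expose internal behavior for post-test analysis.
    List<String> executionLog() { return log; }

    public static void main(String[] args) {
        ObservableBuffer b = new ObservableBuffer();
        b.forceFull();                        // create the difficult-to-achieve state
        b.add("x");                           // exercise the overflow branch
        System.out.println(b.executionLog()); // prints [overflow rejected: x]
    }
}
```

The package-private hooks keep the test instrumentation out of the public API while still letting a test driver reach states and observations that black-box testing cannot.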
Abstract: An empirical study of web applications that use
software frameworks is presented here. The analysis is based on two
approaches. In the first, developers using such frameworks are
required, based on their experience, to assign weights to parameters
such as database connection. In the second approach, a performance
testing tool, OpenSTA, is used to compute start time and other such
measures. From such an analysis, it is concluded that open source
software is superior to proprietary software. The motivation behind
this research is to examine ways in which a quantitative assessment
can be made of software in general and frameworks in particular.
Concepts such as metrics and architectural styles are discussed along
with previously published research.