Abstract: In the current technological market, ensuring the quality of new products has become a complex challenge. Companies have therefore been investing in solutions that aim to reduce the execution time of software testing and improve cost efficiency. However, companies with complex and specialized testing environments usually face barriers related to costly testing processes, especially in distributed settings. The Sidia Institute of Technology works on research and development for the Android platform for mobile devices in Latin America. Because we work within a global software development (GSD) scope, we have faced barriers caused by failures detected late, which have delayed the homologation release process of Android projects. We therefore adopted an Internal Review process as an alternative to reduce these failures. This paper presents the experience of a homologation team adopting an Internal Review process in order to improve performance by increasing test efficiency. Using this approach, we achieved a substantial improvement in the quality, reliability, and timeliness of our deliveries. Through quantitative analysis, we identified a 6% increase in homologation efficiency after adopting the process. In addition, we performed a qualitative analysis of data collected through an online questionnaire. In particular, the results show that the association between failure reduction and adoption of the review process improves quality, which has a positive effect on project milestones. We hope this report can help other companies and the scientific community improve their processes and thereby increase their competitive advantages.
Abstract: In general, state-of-the-art Data Acquisition Systems
(DAQ) in high energy physics experiments must satisfy high
requirements in terms of reliability, efficiency and data rate capability.
This paper presents the development and deployment of a debugging
tool named DAQ Debugger for the intelligent, FPGA-based Data
Acquisition System (iFDAQ) of the COMPASS experiment at CERN.
Utilizing a hardware event builder, the iFDAQ is designed to be
able to read out data at the experiment's maximum average rate of
1.5 GB/s. In complex software such as the iFDAQ, comprising
thousands of lines of code, debugging is absolutely essential to
reveal all software issues. Unfortunately, conventional debugging
of the iFDAQ is not possible during real data taking.
The DAQ Debugger is a tool for identifying a problem, isolating
the source of the problem, and then either correcting the problem
or determining a way to work around it. It provides a layer for
easy integration into any process and has no impact on process
performance. Based on the handling of system signals, the
DAQ Debugger represents an alternative to conventional debuggers
provided by most integrated development environments. Whenever a
problem occurs, it generates reports containing all the information
needed for deeper investigation and analysis. The DAQ Debugger was
fully incorporated into all processes of the iFDAQ during the 2016
run. It helped to reveal remaining software issues and significantly
improved the stability of the system in comparison with the previous
run. In the paper, we present the DAQ Debugger from several
perspectives and discuss it in detail.
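To illustrate the signal-handling idea the abstract describes, here is a minimal Python sketch (not the actual iFDAQ code, which is not shown here): a handler is registered for a system signal and, when the signal arrives, it records a report containing the signal name and the current call stack instead of letting the process die silently.

```python
import signal
import traceback

REPORT = []  # in a real tool this would be a report file on disk

def debug_report(signum, frame):
    """Signal handler: capture the signal name and the current call
    stack so the failure can be analyzed after the fact."""
    stack = "".join(traceback.format_stack(frame))
    REPORT.append(f"signal={signal.Signals(signum).name}\n{stack}")

# Register the handler for one signal; a production tool would cover
# crash signals (SIGSEGV, SIGABRT, ...) as well. POSIX-only example.
signal.signal(signal.SIGUSR1, debug_report)

def worker():
    # Simulate a problem detected at runtime by raising the signal
    # in-process; the handler fires without killing the process.
    signal.raise_signal(signal.SIGUSR1)

worker()
print(REPORT[0].splitlines()[0])  # → signal=SIGUSR1
```

The key property mirrored here is that the monitored process keeps running: the handler only records a report, so normal operation (and performance) is unaffected between incidents.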
Abstract: Traditional software engineering allows engineers to propose to their clients multiple specialized software distributions assembled from a shared set of software assets. Managing these assets, however, requires a trade-off between client satisfaction and the software engineering process. Clients find it increasingly difficult to locate a distribution or components matching their needs across all the distributed repositories.
This paper proposes a software engineering approach for a user-driven software product line in which engineers define a Feature Model but users drive the actual software distribution on demand. This approach makes the user the final actor, acting as a release manager in the software engineering process, increasing user satisfaction with the product and simplifying the user's task of finding the required components. In addition, it provides a way for engineers to manage and assemble large software families.
As a proof of concept, a user-driven software product line is implemented for Eclipse, an integrated development environment. An Eclipse feature model is defined and exposed to users on a cloud-based build platform from which clients can download individualized Eclipse distributions.
Abstract: To achieve competitive advantage nowadays, most
industrial companies consider that success rests on great product
development, that is, on managing the product throughout its entire
lifetime, from design and manufacture to operation and disposal.
Achieving this goal requires tight collaboration
between partners from a wide variety of domains, resulting in various
product data types and formats, as well as different software tools. So
far, the lack of a meaningful unified representation for product data
semantics has slowed down efficient product development. This
paper proposes an ontology-based approach to enable such semantic
interoperability. A generic and extensible product ontology is
described, gathering the main concepts pertaining to the mechanical
field and the relations that hold among them. The ontology is not
exhaustive; nevertheless, it shows that such a unified representation
is possible and easily exploitable. This is illustrated through a
case study with an example product and some semantic requests that
the ontology answers quite easily. The study demonstrates the
effectiveness of ontologies as a support for product data exchange and information
sharing, especially in product development environments where
collaboration is not just a choice but a mandatory prerequisite.
Abstract: To offer a large variety of products while maintaining
low costs, high speed, and high quality in a mass customization
product development environment, platform-based product
development offers significant benefits in many industrial fields.
This paper proposes a product configuration strategy by similarity
measure, incorporating the knowledge engineering principles such as
product information model, ontology engineering, and formal concept
analysis.
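As a rough illustration of configuration by similarity measure, the sketch below scores candidate platform configurations against a requested feature set using the Jaccard similarity and picks the best match. The catalog and feature names are invented for the example and are not taken from the paper.

```python
def jaccard(a, b):
    """Jaccard similarity between two feature sets: |A∩B| / |A∪B|."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 1.0

# Hypothetical catalog of platform-based configurations (feature sets).
catalog = {
    "basic":    {"frame", "motor_1kw", "manual_control"},
    "standard": {"frame", "motor_2kw", "plc_control", "sensor_pack"},
    "premium":  {"frame", "motor_2kw", "plc_control", "sensor_pack",
                 "remote_io"},
}

def configure(required):
    """Return the catalog configuration most similar to the request."""
    return max(catalog, key=lambda name: jaccard(catalog[name], required))

print(configure({"frame", "motor_2kw", "plc_control"}))  # → standard
```

In the paper the similarity measure operates over a product information model and formal concept analysis rather than flat feature sets; this sketch only shows the matching step in its simplest form.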
Abstract: The way music is interpreted by the human brain is a very interesting topic, but also an intricate one. Although this domain has been studied for over a century, many gray areas remain in the understanding of music. Recent advances have enabled us to perform accurate measurements of the time taken by the human brain to interpret and assimilate a sound. Cognitive computing provides tools and development environments that facilitate the simulation of human cognition. ACT-R is a cognitive architecture that offers an environment for implementing human cognitive tasks. This project combines our understanding of music interpretation by a human listener with the ACT-R cognitive architecture to build SINGER, a computerized simulation for listening to and recalling songs. The results are similar to human experimental data. Simulation results also show that short melodies are easier to remember than long melodies, which require more trials to be recalled correctly.
Abstract: Low power consumption is a major constraint for battery-powered systems such as notebook computers or PDAs. In the past, specialists usually designed both specialized optimized hardware and code to address this concern. That approach worked for quite a long time; in the current era, however, there is another significant constraint: time to market. To satisfy the power constraint while still launching products within shorter production periods, object-oriented programming (OOP) has stepped into this field. Although it is well known that OOP carries considerably more overhead than assembly and procedural languages, the development trend still heads toward this new world, which contradicts the goal of low power consumption. Most prior power-related software research reported that OOP consumes substantial resources; however, as industry had to accept it for business reasons, no paper has yet addressed how to choose the best OOP practices within this power-limited boundary. This article is a pioneering attempt to specify and propose an optimized strategy for writing OOP software in energy-constrained environments, based on quantitative real-world results. The language chosen for the study is C# on .NET Framework 2.0, one of the trendy OOP development environments. The recommendations obtained from this research form a roadmap that can help developers write code that balances time to market against battery life.
Abstract: This work is based on the possibility of using Lego Mindstorms robotics kits to reduce costs. Lego Mindstorms consists of a wide variety of hardware components necessary to simulate, program, and test robotics systems in practice. The algorithm, which maps the surrounding space using the ultrasonic sensor, was programmed in the development environment supplied with the kit. Matlab was used to render the values after they were measured by the ultrasonic sensor. The algorithm created for this paper uses theoretical knowledge from the area of signal processing. The data processed by the algorithm are collected by an ultrasonic sensor that scans the 2D space in front of it. The ultrasonic sensor is placed on a moving arm of the robot, which provides the horizontal movement of the sensor; the vertical movement of the sensor is provided by the wheel drive. The robot follows a map in order to obtain the correct positioning of the measured data. Based on these findings, Lego Mindstorms can be considered a low-cost yet capable kit for real-time modelling.
Abstract: Distributed computing systems are usually considered the most suitable model for practical implementations of many parallel algorithms. In this paper an enhanced distributed system is presented to improve the time complexity of Binary Indexed Trees (BIT). The proposed system uses multiple uniform processors with identical architectures and a specially designed distributed memory system. The analysis of this system has shown that it reduces the time complexity of the read query to O(Log(Log(N))) and the update query to constant complexity, while the naive solution has a time complexity of O(Log(N)) for both queries. The system was implemented and simulated using the VHDL and Verilog hardware description languages, with Xilinx ISE 10.1 as the development environment and ModelSim 6.1c as the simulation tool. The simulation has shown that the overhead resulting from the wiring and communication between the system fragments is negligible, which makes it possible in practice to reach the maximum speed-up offered by the proposed model.
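For reference, the naive sequential baseline mentioned above, a Binary Indexed (Fenwick) Tree with O(Log(N)) update and read queries, can be sketched as follows; the distributed design itself is not reproduced here.

```python
class BIT:
    """Binary Indexed (Fenwick) Tree: prefix sums with O(log N)
    update and O(log N) prefix query -- the sequential baseline
    whose complexity the distributed design improves on."""

    def __init__(self, n):
        self.n = n
        self.tree = [0] * (n + 1)  # 1-based internal array

    def update(self, i, delta):
        while i <= self.n:          # climb to every node covering i
            self.tree[i] += delta
            i += i & -i             # add the lowest set bit

    def query(self, i):
        s = 0
        while i > 0:                # sum the disjoint covering ranges
            s += self.tree[i]
            i -= i & -i             # drop the lowest set bit
        return s

bit = BIT(8)
for i, v in enumerate([3, 1, 4, 1, 5, 9, 2, 6], start=1):
    bit.update(i, v)
print(bit.query(4))  # → 9  (3 + 1 + 4 + 1)
```

Both loops iterate once per set bit of the index, hence the O(log N) bound for each query that the proposed distributed system reduces to O(log log N) reads and constant-time updates.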
Abstract: This paper proposes a novel solution for optimizing
the size and communication overhead of a distributed multiagent
system without compromising the performance. The proposed approach
addresses the challenges of scalability especially when the
multiagent system is large. A modified spectral clustering technique
is used to partition a large network into logically related clusters.
Agents are assigned to monitor dedicated clusters rather than monitor
each device or node. The proposed scalable multiagent system is
implemented using JADE (Java Agent Development Environment)
for a large power system. The performance of the proposed
topology-independent decentralized multiagent system and the scalable multiagent
system is compared by comprehensively simulating different
fault scenarios. The time taken for reconfiguration, the overall computational
complexity, and the communication overhead incurred are
computed. The results of these simulations show that the proposed
scalable multiagent system uses fewer agents efficiently, makes faster
decisions to reconfigure when a fault occurs, and incurs significantly
less communication overhead.
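The agent-per-cluster idea can be illustrated with a toy example. The sketch below assumes the clusters are already given (the paper derives them with a modified spectral clustering technique, which is not reproduced here) and contrasts the agent count of the per-node baseline with the clustered design; the bus names are invented for the example.

```python
# Hypothetical power network of 9 buses, already partitioned into
# logically related clusters.
clusters = {
    "agent_c1": ["bus1", "bus2", "bus3"],
    "agent_c2": ["bus4", "bus5", "bus6"],
    "agent_c3": ["bus7", "bus8", "bus9"],
}

# Baseline: one agent per node.  Scalable design: one agent per cluster.
agents_per_node = sum(len(nodes) for nodes in clusters.values())
agents_per_cluster = len(clusters)

def responsible_agent(faulty_bus):
    """Locate the cluster agent that monitors a faulty bus."""
    for agent, nodes in clusters.items():
        if faulty_bus in nodes:
            return agent

print(agents_per_node, agents_per_cluster)  # → 9 3
print(responsible_agent("bus5"))            # → agent_c2
```

Fewer agents directly means fewer inter-agent messages during reconfiguration, which is the source of the communication-overhead reduction the abstract reports.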
Abstract: When programming in languages such as C, Java, etc.,
it is difficult to reconstruct the programmer's ideas from the
program code alone. This occurs mainly because much of the
programmer's thinking behind the implementation is not recorded in
the code during implementation. For example, physical aspects of
computation such as spatial structures, activities, and the meaning
of variables are not required as instructions to the computer and
are often omitted. This makes future reconstruction of the original
ideas difficult. AIDA, a multimedia programming language based on
the cyberFilm model, can solve these problems by allowing programmers
to describe the ideas behind their programs using advanced
annotation methods as a natural extension of
programming. In this paper, a development environment that
implements the AIDA language is presented with a focus on the
annotation methods. In particular, an actual scientific numerical
computation code is created and the effects of the annotation methods
are analyzed.
Abstract: Both software applications and their development environments are becoming more and more distributed. This trend impacts not only the way software computes, but also how it looks. This article proposes a Human Computer Interface (HCI) template derived from three representative applications we have developed: a Multi-Agent System based application, a 3D Internet computer game with distributed game-world logic, and a programming language environment used to construct distributed neural networks and their visualizations. HCI concepts common to these applications are described in abstract terms in the template. These include off-line presentation of global entities, entities inside a hierarchical namespace, communication and languages, reconfiguration of entity references in a graph, impersonation and access rights, etc. We believe the metaphor that underlies an HCI concept, as well as the relationships among a set of HCI concepts, is crucial to the design of software systems, and vice versa.
Abstract: A hybrid knowledge model is suggested as an underlying
framework for product development management. It can support such
hybrid features as ontologies and rules. Effective collaboration in
a product development environment depends on sharing and reasoning
over product information as well as engineering knowledge. Many studies
have considered product information and engineering knowledge.
However, most previous research has focused either on building the
ontology of product information or rule-based systems of engineering
knowledge. This paper shows that an F-logic based knowledge model
can support these desirable features in a hybrid way.
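As a loose illustration of the hybrid idea (ontology plus rules), the following sketch combines a tiny is-a taxonomy with a rule-style derivation over instance facts. All concept and attribute names are invented for the example, and F-logic itself is not reproduced; this only shows why the two kinds of knowledge complement each other.

```python
# Tiny ontology: an is-a taxonomy plus instance facts.
is_a = {"Bolt": "Fastener", "Fastener": "Part", "Shaft": "Part"}
facts = {"b1": {"type": "Bolt", "diameter_mm": 8}}

def instance_of(inst, concept):
    """Ontology reasoning: follow the is-a chain upward."""
    c = facts[inst]["type"]
    while c is not None:
        if c == concept:
            return True
        c = is_a.get(c)
    return False

def torque_class(inst):
    """Rule-style engineering knowledge: IF x is a Fastener with
    diameter d THEN its tightening-torque class is derived from d."""
    if instance_of(inst, "Fastener"):
        return "M%d-class" % facts[inst]["diameter_mm"]
    return None

print(instance_of("b1", "Part"))  # → True
print(torque_class("b1"))         # → M8-class
```

The rule fires for any instance that the taxonomy classifies as a Fastener, which is the kind of interplay between ontological classification and rules that a hybrid knowledge model supports in one framework.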
Abstract: Neural networks offer an alternative approach for both
identification and control of nonlinear processes in process
engineering. The lack of software tools for the design of controllers
based on neural network models is particularly pronounced in this
field. SIMULINK is a widely used graphical code
development environment that allows system-level developers to
perform rapid prototyping and testing. Such a graphics-based
programming environment involves block-based code development
and offers a more intuitive approach to modeling and control tasks in
a great variety of engineering disciplines. In this paper a
SIMULINK based Neural Tool has been developed for analysis and
design of multivariable neural based control systems. This tool has
been applied to the control of a high purity distillation column
including nonlinear hydrodynamic effects. The proposed control
scheme offers an optimal response to both the theoretical and
practical challenges posed by process control tasks, in particular
when both the quality improvement of distillation products and the
operational efficiency in economic terms are considered.