Automatic Translation of Ada-ECATNet Using Rewriting Logic

One major difficulty facing developers of concurrent and distributed software is the analysis of concurrency-based faults such as deadlocks. Petri nets are used extensively to verify the correctness of concurrent programs. ECATNets are a category of algebraic Petri nets based on a sound combination of abstract algebraic types and high-level Petri nets. ECATNets have sound and complete semantics thanks to their integration into rewriting logic and its programming language Maude. Rewriting logic is regarded as one of the most powerful logics for the description, verification and programming of concurrent systems. We previously proposed a method for translating Ada-95 tasking programs into the ECATNet formalism (Ada-ECATNet) and showed that ECATNets provide a more compact translation of Ada programs than other approaches based on simple Petri nets or Coloured Petri nets. We also showed how the ECATNet formalism offers Ada many validation and verification tools, such as simulation, model checking, reachability analysis and static analysis. In this paper, we describe the implementation of our translation of Ada programs into ECATNets.

2D-Modeling with Lego Mindstorms

The whole work is based on the possibility of using Lego Mindstorms robotics kits to reduce costs. Lego Mindstorms comprises a wide variety of hardware components needed to simulate, program and test robotic systems in practice. The algorithm that maps the surrounding space using the ultrasonic sensor was programmed in the development environment supplied with the kit, and the Matlab software was used to render the values after they were measured by the ultrasonic sensor. The algorithm created for this paper draws on theoretical knowledge from the area of signal processing. The data processed by the algorithm are collected by an ultrasonic sensor that scans the 2D space in front of it. The ultrasonic sensor is placed on a moving arm of the robot, which provides the horizontal movement of the sensor, while the vertical movement of the sensor is provided by the wheel drive. The robot follows a map in order to obtain the correct positioning of the measured data. Based on the findings, the Lego Mindstorms kit can be considered a low-cost and capable platform for real-time modelling.
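
As a minimal illustration of the mapping step described above, the sketch below converts a sweep of (arm angle, ultrasonic distance) readings into 2D Cartesian points; the class and method names and the sample readings are purely illustrative and not taken from the paper.

    // Minimal sketch: converting an ultrasonic sweep into 2D points.
    // Each reading is assumed to be an (angleDegrees, distanceCm) pair taken
    // while the arm rotates; names and values are illustrative only.
    public class UltrasonicMap {
        public static double[][] toCartesian(double[][] polarReadings) {
            double[][] points = new double[polarReadings.length][2];
            for (int i = 0; i < polarReadings.length; i++) {
                double angleRad = Math.toRadians(polarReadings[i][0]);
                double distance = polarReadings[i][1];
                points[i][0] = distance * Math.cos(angleRad); // x coordinate
                points[i][1] = distance * Math.sin(angleRad); // y coordinate
            }
            return points;
        }

        public static void main(String[] args) {
            double[][] sweep = { {0, 50}, {45, 70}, {90, 35} }; // sample readings
            for (double[] p : toCartesian(sweep)) {
                System.out.printf("x=%.1f cm, y=%.1f cm%n", p[0], p[1]);
            }
        }
    }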

Measuring the Comprehensibility of a UML-B Model and a B Model

Software maintenance, which involves making enhancements, modifications and corrections to existing software systems, consumes more than half of developer time. Specification comprehensibility plays an important role in software maintenance as it permits the system properties to be understood more easily and quickly. The use of a formal notation such as B increases a specification's precision and consistency. However, the notation is regarded as difficult to comprehend. A semi-formal notation such as the Unified Modelling Language (UML) is perceived as more accessible, but it lacks formality. Perhaps combining both notations could produce a specification that is not only accurate and consistent but also accessible to users. This paper presents an experiment conducted on a model that integrates the use of both UML and B notations, namely UML-B, versus a B model alone. The objective of the experiment was to evaluate the comprehensibility of a UML-B model compared to a traditional B model. The measurement used in the experiment focused on the efficiency in performing the comprehension tasks. The experiment employed a cross-over design and was conducted on forty-one subjects, including undergraduate and master's students. The results show that the notation used in the UML-B model is more comprehensible than that of the B model.

Testing Object-Oriented Framework Applications Using FIST2 Tool: A Case Study

An application framework provides a reusable design and implementation for a family of software systems. Frameworks are introduced to reduce the cost of a product line (i.e., a family of products that share common features). Software testing is a time-consuming and costly ongoing activity during the application software development process. Generating reusable test cases for framework applications during the framework development stage, then providing and using those test cases to test parts of the framework application whenever the framework is used, reduces application development time and cost considerably. This paper introduces the Framework Interface State Transition Tester (FIST2), a tool for automated unit testing of Java framework applications. During the framework development stage, given the formal descriptions of the framework hooks, the specifications of the methods of the framework's extensible classes, and the illegal behavior description of the Framework Interface Classes (FICs), FIST2 generates unit-level test cases for the classes. At the framework application development stage, given the customized method specifications of the implemented FICs, FIST2 automates the use, execution, and evaluation of the already generated test cases to test the implemented FICs. The paper illustrates the use of the FIST2 tool for testing several applications that use the SalesPoint framework.

Implementation of a New Neural Network Function Block to Programmable Logic Controllers Library Function

Programmable logic controllers (PLCs) are the main controllers in today's industries; they are used for many applications in industrial control systems, especially in large companies and plants such as refineries, power plants, petrochemical companies, steel companies, and food production companies. PLCs provide a library of functions in their programming software that can be used in PLC programs as basic program elements. The aim of this project is to introduce and implement a new neural network function block in the PLC function library. This block can be applied to some control applications or nonlinear function calculations after it has been trained for these applications. The implemented neural network is a perceptron neural network with three layers, three input nodes and one output node. The block can be used in manual or automatic mode. In this paper, the structure of the implemented function block, the parameters and the training method of the network are presented, taking into account the special method of PLC programming and its complexities. Finally, the application of the new block is compared with a classic simulated block and the results are presented.
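
A minimal sketch of the forward pass of such a three-layer perceptron with three inputs and one output is given below; the hidden-layer size, the sigmoid activation and the weight values are illustrative assumptions, and the paper's actual block parameters and training method are not reproduced here.

    // Sketch of a three-layer perceptron forward pass (3 inputs, 1 output).
    // Hidden size, activation and weights are assumptions for illustration.
    public class PerceptronBlock {
        static final int INPUTS = 3, HIDDEN = 4;         // hidden size assumed
        double[][] wHidden = new double[HIDDEN][INPUTS]; // input-to-hidden weights
        double[] wOutput = new double[HIDDEN];           // hidden-to-output weights

        static double sigmoid(double x) { return 1.0 / (1.0 + Math.exp(-x)); }

        double forward(double[] in) {
            double[] hidden = new double[HIDDEN];
            for (int h = 0; h < HIDDEN; h++) {
                double sum = 0;
                for (int i = 0; i < INPUTS; i++) sum += wHidden[h][i] * in[i];
                hidden[h] = sigmoid(sum);                // hidden-layer activation
            }
            double out = 0;
            for (int h = 0; h < HIDDEN; h++) out += wOutput[h] * hidden[h];
            return sigmoid(out);                         // single output node
        }

        public static void main(String[] args) {
            PerceptronBlock block = new PerceptronBlock();
            for (double[] row : block.wHidden) java.util.Arrays.fill(row, 0.5); // dummy weights
            java.util.Arrays.fill(block.wOutput, 0.5);
            System.out.println(block.forward(new double[] {1.0, 0.0, 1.0}));
        }
    }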

Optimal Allocation of DG Units for Power Loss Reduction and Voltage Profile Improvement of Distribution Networks using PSO Algorithm

This paper proposes a Particle Swarm Optimization (PSO) based technique for the optimal allocation of Distributed Generation (DG) units in power systems. The aim is to decide the optimal number, type, size and location of DG units for voltage profile improvement and power loss reduction in a distribution network. Two types of DG are considered, and a distribution load flow is used to calculate the exact losses. The load flow algorithm is combined with PSO and iterated until acceptable results are obtained. The suggested method is programmed in the MATLAB software. Test results indicate that the PSO method obtains better results than a simple heuristic search method on the 30-bus and 33-bus radial distribution systems. It achieves the maximum loss reduction for each of the two types of optimally placed multiple DG units. Moreover, voltage profile improvement is achieved.
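
Since the abstract does not reproduce the algorithm itself, the following is a generic sketch of the PSO update loop applied to a one-dimensional DG-sizing decision; the fitness function is a placeholder where the paper would call the distribution load flow, and the parameter values are conventional choices rather than the paper's.

    import java.util.Random;

    // Generic PSO loop sketch for DG sizing. The fitness function is a
    // placeholder; in the paper it would be the loss computed by the
    // distribution load flow. Parameters are conventional, not the paper's.
    public class PsoSketch {
        static double fitness(double dgSizeMw) {
            // Placeholder convex loss curve with a minimum at 2.5 MW (illustrative).
            return Math.pow(dgSizeMw - 2.5, 2) + 0.1;
        }

        public static void main(String[] args) {
            Random rnd = new Random(0);
            int particles = 20, iterations = 100;
            double w = 0.7, c1 = 1.5, c2 = 1.5;          // inertia and acceleration
            double[] x = new double[particles], v = new double[particles];
            double[] pBest = new double[particles], pBestFit = new double[particles];
            double gBest = 0, gBestFit = Double.MAX_VALUE;

            for (int i = 0; i < particles; i++) {        // random initial positions
                x[i] = rnd.nextDouble() * 5.0;
                pBest[i] = x[i];
                pBestFit[i] = fitness(x[i]);
                if (pBestFit[i] < gBestFit) { gBestFit = pBestFit[i]; gBest = x[i]; }
            }
            for (int it = 0; it < iterations; it++) {
                for (int i = 0; i < particles; i++) {
                    v[i] = w * v[i] + c1 * rnd.nextDouble() * (pBest[i] - x[i])
                                    + c2 * rnd.nextDouble() * (gBest - x[i]);
                    x[i] += v[i];
                    double f = fitness(x[i]);
                    if (f < pBestFit[i]) { pBestFit[i] = f; pBest[i] = x[i]; }
                    if (f < gBestFit)    { gBestFit = f;    gBest = x[i]; }
                }
            }
            System.out.printf("Best DG size ~ %.2f MW, loss ~ %.3f%n", gBest, gBestFit);
        }
    }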

Complexity of Component-based Development of Embedded Systems

The paper discusses complexity of component-based development (CBD) of embedded systems. Although CBD has its merits, it must be augmented with methods to control the complexities that arise due to resource constraints, timeliness, and run-time deployment of components in embedded system development. Software component specification, system-level testing, and run-time reliability measurement are some ways to control the complexity.

Secure Power Systems Against Malicious Cyber-Physical Data Attacks: Protection and Identification

The security of power systems against malicious cyber-physical data attacks has become an important issue. An adversary attempts to manipulate the information structure of the power system and inject malicious data to deviate the state variables while evading the existing detection techniques based on the residual test. The solutions proposed in the literature are capable of immunizing the power system against false data injection, but they may be too costly and physically impractical in an expansive distribution network. To this end, we define an algebraic condition for a trustworthy power system to evade malicious data injection. The proposed protection scheme secures the power system by deterministically reconfiguring the information structure and the corresponding residual test. More importantly, it does not require any physical effort at either the microgrid or network level. An identification scheme for finding the meters being attacked is proposed as well. Finally, the well-known IEEE 30-bus system is adopted to demonstrate the effectiveness of the proposed schemes.
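
For background, the residual test referred to above is usually formulated in the DC state-estimation model shown below; this is standard material on false data injection, not the paper's specific algebraic condition or reconfiguration scheme (z denotes the measurements, x the state, H the measurement Jacobian, W the weighting matrix, and \tau the detection threshold).

    z = Hx + e, \qquad \hat{x} = (H^{\top} W H)^{-1} H^{\top} W z, \qquad r = z - H\hat{x}, \qquad \text{alarm if } \lVert r \rVert > \tau

An injected attack vector a is undetectable by this test whenever a = Hc for some c, since z + a then yields the estimate \hat{x} + c and leaves the residual r unchanged; protection schemes therefore aim to make such a construction infeasible for the adversary.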

Applying Complex Network Theory to Software Structure Analysis

Complex networks have been intensively studied across many fields, especially in Internet technology, biological engineering, and nonlinear science. Software is built up of many interacting components at various levels of granularity, such as functions, classes, and packages, and therefore represents another important class of complex networks that can be studied using complex network theory. Over the last decade, many papers on interdisciplinary research between software engineering and complex networks have been published. This research provides a different dimension to our understanding of software and is also very useful for the design and development of software systems. This paper explores how complex network theory can be used to analyze software structure and briefly reviews the main advances in the corresponding areas.
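
As a toy illustration of this network view of software, the sketch below treats classes as nodes and "depends on" relations as directed edges and computes the out-degree distribution; the hard-coded dependency list is hypothetical, and in practice the edges would be extracted from source code or bytecode.

    import java.util.*;

    // Toy view of software structure as a complex network: classes are nodes,
    // dependencies are directed edges, and we print the out-degree distribution.
    public class SoftwareNetwork {
        public static void main(String[] args) {
            Map<String, List<String>> deps = new HashMap<>();
            deps.put("OrderService", Arrays.asList("Order", "PaymentGateway", "Logger"));
            deps.put("PaymentGateway", Arrays.asList("Logger"));
            deps.put("Order", Collections.emptyList());
            deps.put("Logger", Collections.emptyList());

            // Count how many nodes have each out-degree k, then print P(k).
            Map<Integer, Integer> histogram = new TreeMap<>();
            for (Map.Entry<String, List<String>> e : deps.entrySet()) {
                histogram.merge(e.getValue().size(), 1, Integer::sum);
            }
            int n = deps.size();
            for (Map.Entry<Integer, Integer> e : histogram.entrySet()) {
                System.out.printf("P(k=%d) = %.2f%n", e.getKey(), (double) e.getValue() / n);
            }
        }
    }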

Distributed Detection and Optimal Traffic-blocking of Network Worms

Despite the recent surge of research on the control of worm propagation, there is currently no effective defense system against such cyber attacks. We first design a distributed detection architecture called Detection via Distributed Blackholes (DDBH). Our novel detection mechanism can be implemented via virtual honeypots or honeynets. Simulation results show that a worm can be detected with virtual honeypots on only 3% of the nodes. Moreover, the worm is detected when fewer than 1.5% of the nodes are infected. We then develop two control strategies: (1) optimal dynamic traffic-blocking, for which we determine the condition that guarantees a minimum number of removed nodes when the worm is contained, and (2) predictive dynamic traffic-blocking, a realistic deployment of the optimal strategy on scale-free graphs. Predictive dynamic traffic-blocking, coupled with DDBH, ensures that more than 40% of the network is unaffected by the propagation at the time the worm is contained.

Publishing Curriculum Vitae using Weblog: An Investigation on its Usefulness, Ease of Use, and Behavioral Intention to Use

In this cyber age, the job market has been rapidly transformed and digitalized. Submitting a paper-based curriculum vitae (CV) nowadays does not grant a job seeker a high employability rate. This paper calls attention to the creation of the mobile Curriculum Vitae, or m-CV (http://mcurriculumvitae.blogspot.com), a sample of an individual CV developed using a weblog, which can enhance a job hunter's, and especially a fresh graduate's, marketability. This study is designed to identify the perceptions held by Malaysian university students regarding m-CV, grounded in a modified Technology Acceptance Model (TAM). It measures the strength and direction of the relationships among three major variables: Perceived Ease of Use (PEOU), Perceived Usefulness (PU) and Behavioral Intention (BI) to use. The findings show that university students generally accepted adopting m-CV, since they perceived m-CV to be more useful than easy to use. Additionally, this study has confirmed TAM to be a useful theoretical model for helping to understand and explain the behavioral intention to use a Web 2.0 application, namely weblog publishing of a CV. The results underline another significant positive value of using weblogs to create a personal CV. Further research on m-CV is highlighted in this paper.

Vehicle Tracking and Disabling Using WIMAX

In the present-day scenario, the Global Positioning System (GPS) has been an effective tool to track a vehicle. However, its drawback is that it can only track a vehicle's position. Our present work provides a better platform to track and disable a vehicle using wireless technology. In our system we embed a microcomputer that monitors a series of automotive systems such as the engine, fuel and braking systems. An external USB modem is connected to the microcomputer to provide 24x7 internet access. The microcomputer is synchronized with the owner's multimedia mobile phone by means of a software tool, "REMOTE DESKTOP". A unique username and password are provided to the software tool, so that only the owner can access the microcomputer through the internet on the owner's mobile. The key fact is that our design is placed such that it is known only to the owner.

Design and Implementation of Client Server Network Management System for Ethernet LAN

Network management systems play an important role in information systems, and management is essential in any field. There are many management areas, such as configuration management, fault management, performance management, security management and accounting management. Among them, configuration, fault and security management are the most important because they are essential and useful in any field. Configuration management monitors and maintains the whole system or LAN, fault management detects and troubleshoots the system, and security management controls the whole system. This paper intends to extend network management functionality covering configuration management, fault management and security management. In the configuration management system, this work can detect and read the configuration of USB ports and devices, and detect both hardware and software ports. In the security management system, it provides security features for user account settings, user management and a proxy server. The entire security history, such as the user account and proxy server history, is kept in a standard Java serializable file, so the user can view the security and proxy server history at any time. Using this system, the user can ping clients on the network and view the resulting messages in the fault management system; the system can also check the network card and show the NIC card settings. The system uses RMI (Remote Method Invocation) and JNI (Java Native Interface) technology and is implemented as a client/server network management system using Java 2 Standard Edition (J2SE). It can support more than 10 clients. The paper also shows the data and message structure of the client/server and how it works over the TCP/IP protocol.
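
As an illustration of the RMI-based client/server split, the sketch below shows the kind of remote interface such a management server could export; the interface and method names are assumptions for illustration, not the paper's actual API.

    import java.rmi.Remote;
    import java.rmi.RemoteException;

    // Hypothetical RMI remote interface for a client/server network management
    // system; method names are illustrative only.
    public interface ManagementService extends Remote {
        String getDeviceConfiguration(String clientHost) throws RemoteException;      // configuration management
        boolean pingClient(String clientHost) throws RemoteException;                 // fault management
        void addUserAccount(String user, String passwordHash) throws RemoteException; // security management
    }

A client would typically obtain a stub via java.rmi.registry.LocateRegistry.getRegistry(host) followed by a lookup of the bound service name, and then invoke these methods over TCP/IP.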

Efficient Sensors Selection Algorithm in Cyber Physical System

Cyber-physical systems (CPS) for target tracking, military surveillance, human health monitoring and vehicle detection all require maximizing utility while saving energy. Sensor selection is one of the most important parts of a CPS. The sensor selection problem (SSP) concentrates on balancing the trade-off between the number of sensors used and the utility obtained. In this paper, we propose a performance-constrained sliding window (PCSW) based algorithm for the SSP in CPS. We present the results of extensive simulations carried out to test and validate the PCSW algorithm when tracking a target. The experiments show that the PCSW-based algorithm improves performance, including the selection time and the number of communications required for selection.

A Model for Estimation of Efforts in Development of Software Systems

Software effort estimation is the process of predicting the most realistic amount of effort required to develop or maintain software based on incomplete, uncertain and/or noisy input. Effort estimates may be used as input to project plans, iteration plans and budgets. Various models, such as the Halstead, Walston-Felix, Bailey-Basili, Doty and GA-based models, have already been used to estimate the software effort for projects. In this study, statistical models, a Fuzzy-GA model and a Neuro-Fuzzy (NF) inference system are used to estimate the software effort for projects. The performance of the developed models was tested on NASA software project datasets, and the results are compared with the Halstead, Walston-Felix, Bailey-Basili, Doty and Genetic Algorithm based models mentioned in the literature. The results show that the NF model has the lowest MMRE and RMSE values and thus performs best compared with the Fuzzy-GA based hybrid inference system and the other existing models used for effort prediction.
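
For reference, the baseline models and error measures named above are commonly cited in the forms sketched below (effort E in person-months, size in KLOC); the coefficients are the textbook values and may differ from the exact variants used in the paper, and the Fuzzy-GA and Neuro-Fuzzy models are not sketched here.

    // Commonly cited baseline effort models and the MMRE/RMSE measures.
    // Coefficients are the textbook forms, not necessarily those of the paper.
    public class EffortModels {
        static double halstead(double kloc)     { return 0.7 * Math.pow(kloc, 1.50); }
        static double walstonFelix(double kloc) { return 5.2 * Math.pow(kloc, 0.91); }
        static double baileyBasili(double kloc) { return 5.5 + 0.73 * Math.pow(kloc, 1.16); }
        static double doty(double kloc)         { return 5.288 * Math.pow(kloc, 1.047); } // for KLOC > 9

        // Mean Magnitude of Relative Error and Root Mean Squared Error.
        static double mmre(double[] actual, double[] predicted) {
            double sum = 0;
            for (int i = 0; i < actual.length; i++) sum += Math.abs(actual[i] - predicted[i]) / actual[i];
            return sum / actual.length;
        }
        static double rmse(double[] actual, double[] predicted) {
            double sum = 0;
            for (int i = 0; i < actual.length; i++) sum += Math.pow(actual[i] - predicted[i], 2);
            return Math.sqrt(sum / actual.length);
        }

        public static void main(String[] args) {
            double[] actual = {24.0, 30.0};                      // hypothetical actual efforts
            double[] pred   = {walstonFelix(10.0), doty(12.0)};  // hypothetical predictions
            System.out.printf("MMRE = %.2f, RMSE = %.2f%n", mmre(actual, pred), rmse(actual, pred));
        }
    }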

CAD Based Predictive Models of the Undeformed Chip Geometry in Drilling

Twist drills are geometrically complex tools, and thus various researchers have adopted different mathematical and experimental approaches for their simulation. The present paper acknowledges the increasing use of modern CAD systems and, using the API (Application Programming Interface) of a CAD system, carries out drilling simulations. The developed DRILL3D software routine creates parametrically controlled tool geometries and, using different cutting conditions, generates solid models for all the relevant data involved (drilling tool, cut workpiece, undeformed chip). The derived data constitute a platform for further direct simulations regarding the determination of cutting forces, tool wear, drilling optimization, etc.

Enhancements in Blended e-Learning Management System

A learning management system (commonly abbreviated as LMS) is a software application for the administration, documentation, tracking, and reporting of training programs, classroom and online events, e-learning programs, and training content (Ellis 2009). Hall (2003) defines an LMS as "software that automates the administration of training events. All Learning Management Systems manage the log-in of registered users, manage course catalogs, record data from learners, and provide reports to management". Evidence of the worldwide spread of e-learning in recent years is easy to obtain. In April 2003, no fewer than 66,000 fully online courses and 1,200 complete online programs were listed on the TeleCampus portal from TeleEducation (Paulsen 2003). According to the report "The US Market in the Self-paced eLearning Products and Services: 2010-2015 Forecast and Analysis", the number of students taking classes exclusively online will be nearly equal (1% less) to the number taking classes exclusively on physical campuses, and the number of students taking online courses will increase from 1.37 million in 2010 to 3.86 million in 2015 in the USA. In another report, by The Sloan Consortium, three-quarters of institutions report that the economic downturn has increased demand for online courses and programs.

A Complexity Measure for Java Bean based Software Components

Traditional software product and process metrics are neither suitable nor sufficient for measuring the complexity of software components, which is ultimately necessary for quality and productivity improvement within organizations adopting CBSE. Researchers have proposed a wide range of complexity metrics for software systems. However, these metrics are not sufficient for components and component-based systems and are restricted to module-oriented and object-oriented systems. This study proposes to determine the complexity of JavaBean software components as a reflection of their quality, so that a component can be adapted accordingly to make it more reusable. The proposed metric involves only the design issues of the component and does not consider packaging and deployment complexity. In this way, the complexity of software components can be kept within certain limits, which in turn helps to enhance quality and productivity.
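
Purely as an illustration of measuring a JavaBean at the design level, the sketch below counts a bean's properties, methods and event sets through the standard java.beans introspection API; this simple count is not the metric proposed in the paper, which the abstract does not specify.

    import java.beans.BeanInfo;
    import java.beans.IntrospectionException;
    import java.beans.Introspector;

    // Illustrative only: counts a bean's properties, methods and event sets as
    // a crude interface-size indicator. NOT the paper's proposed metric.
    public class BeanComplexity {
        public static int interfaceSize(Class<?> beanClass) throws IntrospectionException {
            BeanInfo info = Introspector.getBeanInfo(beanClass, Object.class);
            return info.getPropertyDescriptors().length
                 + info.getMethodDescriptors().length
                 + info.getEventSetDescriptors().length;
        }

        public static class SampleBean {                 // tiny example bean
            private String name;
            public String getName() { return name; }
            public void setName(String name) { this.name = name; }
        }

        public static void main(String[] args) throws IntrospectionException {
            System.out.println("SampleBean interface size: " + interfaceSize(SampleBean.class));
        }
    }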

Application of Neural Network for Contingency Ranking Based on Combination of Severity Indices

In this paper, an improved technique for contingency ranking using an artificial neural network (ANN) is presented. The proposed approach applies multi-layer perceptrons trained by backpropagation to contingency analysis. Severity indices for dynamic stability assessment are presented; these indices are based on the concept of coherency and three dot products of the system variables. It is well known that some indices work better than others for a particular power system. This paper, along with test results using several different systems, demonstrates that combining indices with an ANN provides better ranking than a single index. The presented results are obtained using power system simulation (PSS/E) and MATLAB 6.5 software.

Infrastructure Means for Adaptive Camouflage

The paper deals with the perspectives and possibilities of "smart solutions" for critical infrastructure protection. That is, common computer-aided technologies are used from the perspective of new, better protection of selected infrastructure objects. The paper focuses on a co-product of the Czech defence research project ADAPTIV, which is carried out by the University of Defence, Faculty of Economics and Management, at the Department of Civil Protection. The project creates a system and technology for the adaptive cybernetic camouflage of armed forces objects, armaments, vehicles and troops and of mobilization infrastructure. This adaptive camouflage system and technology will be useful for protecting army tactical activities and also for generating decoys. The fourth chapter of the paper concerns the possibilities of using the introduced technology for the protection of selected civil (economically important) critical infrastructure objects. The aim of this section is to introduce the scientific capabilities and potential of the University of Defence research results and solutions for practice.