Robust Conversion of Chaos into an Arbitrary Periodic Motion

One of the most attractive and important fields of chaos theory is the control of chaos. In this paper, we present a simple framework for controlling chaotic motion using the feedback linearization method. Using this approach, we derive a strategy that can easily be applied to other chaotic systems. This work presents two novel results: the desired periodic orbit need not be a solution of the original dynamics, and the response is robust against parameter variations. The presented simulations illustrate both capabilities. In addition, a comparison between conventional state feedback and the proposed method demonstrates that the introduced technique is more efficient.
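
As a rough illustration of the idea (a sketch under assumed parameters, not the paper's exact controller or plant), the following Python snippet applies feedback linearization to a Duffing-type oscillator so that it tracks a periodic reference which is not a solution of the free dynamics; the gains k1, k2 and the reference are illustrative choices.

    # Feedback-linearization tracking on a Duffing oscillator (illustrative sketch):
    # x'' = -d*x' - a*x - b*x**3 + g*cos(w*t) + u
    import numpy as np
    from scipy.integrate import solve_ivp

    d, a, b, g, w = 0.3, -1.0, 1.0, 0.5, 1.2        # assumed chaotic Duffing parameters
    k1, k2 = 25.0, 10.0                              # error dynamics: e'' + k2*e' + k1*e = 0

    def yd(t):                                       # desired orbit (not a solution of the plant)
        return np.sin(2.0 * t), 2.0 * np.cos(2.0 * t), -4.0 * np.sin(2.0 * t)

    def plant(t, s):
        x, v = s
        r, dr, ddr = yd(t)
        f = -d * v - a * x - b * x**3 + g * np.cos(w * t)    # known nonlinearity
        u = -f + ddr - k2 * (v - dr) - k1 * (x - r)          # cancel f, impose linear error dynamics
        return [v, f + u]

    sol = solve_ivp(plant, (0.0, 40.0), [1.0, 0.0], t_eval=np.linspace(0.0, 40.0, 4000))
    print("final tracking error:", abs(sol.y[0, -1] - yd(sol.t[-1])[0]))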

Efficient Program Slicing Algorithms for Measuring Functional Cohesion and Parallelism

Program slicing is the task of finding all statements in a program that directly or indirectly influence the value of a variable occurrence. The set of statements that can affect the value of a variable at some point in a program is called a program slice. In several software engineering applications, such as program debugging and measuring program cohesion and parallelism, several slices are computed at different program points. In this paper, algorithms are introduced to compute all backward and forward static slices of a computer program by traversing the program representation graph once. The program representation graph used in this paper is the Program Dependence Graph (PDG). We have conducted an experimental comparison study using 25 software modules to show the effectiveness of the introduced algorithm for computing all backward static slices over single-point slicing approaches when computing the parallelism and functional cohesion of program modules. The effectiveness of the algorithm is measured in terms of execution time and the number of traversed PDG edges. The results of the comparison study indicate that the introduced algorithm considerably reduces the slicing time and the effort required to measure module parallelism and functional cohesion.
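
For context, a single backward slice over a PDG can be sketched as reverse reachability along dependence edges, as below; this is a baseline single-point computation, not the single-traversal all-slices algorithm introduced in the paper, and the toy graph is illustrative.

    # Backward static slice at a criterion node: all PDG nodes that reach it
    # along control/data dependence edges (reverse traversal).
    from collections import deque

    def backward_slice(pdg, criterion):
        """pdg maps each node to the nodes it depends on (reverse dependence edges)."""
        sliced, work = {criterion}, deque([criterion])
        while work:
            node = work.popleft()
            for dep in pdg.get(node, ()):
                if dep not in sliced:
                    sliced.add(dep)
                    work.append(dep)
        return sliced

    # toy PDG: statement 4 depends on 2 and 3, statement 3 on 1, statement 2 on 1
    pdg = {4: [2, 3], 3: [1], 2: [1]}
    print(backward_slice(pdg, 4))     # {1, 2, 3, 4}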

People Counting in Transport Vehicles

Counting people from a video stream in a noisy environment is a challenging task. This project aims at developing a counting system for transport vehicles, integrated into a video surveillance product. This article presents a method for the detection and tracking of multiple faces in a video using a model of first- and second-order local moments. An iterative process is used to estimate the position and shape of multiple faces in the images and to track them. The trajectories are then processed to count people entering and leaving the vehicle.
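
As an illustrative sketch (not the authors' exact model), the first-order moments of a detected face region give its position and the second-order central moments give its shape, e.g. as a covariance ellipse:

    # Position and shape of one detected region from its local moments.
    import numpy as np

    def region_moments(mask):
        """mask: boolean image marking one detected face region."""
        ys, xs = np.nonzero(mask)
        cx, cy = xs.mean(), ys.mean()                      # first-order moments: position
        cov = np.cov(np.vstack([xs - cx, ys - cy]))        # second-order moments: shape
        return (cx, cy), cov

    mask = np.zeros((40, 40), dtype=bool)
    mask[10:30, 15:25] = True                              # toy blob standing in for a face
    centre, cov = region_moments(mask)
    print(centre, np.linalg.eigvalsh(cov))                 # ellipse axes from the eigenvalues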

Vision Based Robotic Interception in Industrial Manipulation Tasks

In this paper, a solution is presented for a robotic manipulation problem in industrial settings. The problem is sensing objects on a conveyor belt, identifying the target, and planning and tracking an interception trajectory between the end effector and the target. Such a problem can be formulated as a combination of object recognition, tracking and interception. For this purpose, we integrated a vision system into the manipulation system and employed tracking algorithms. The control approach is implemented on a real industrial manipulation setting, which consists of a conveyor belt, objects moving on it, a robotic manipulator, and a visual sensor above the conveyor. The trajectory for robotic interception at a rendezvous point on the conveyor belt is calculated analytically. Test results show that tracking the target along this trajectory results in interception and grabbing of the target object.
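
A minimal Python sketch of such an analytic rendezvous computation is given below, under the simplifying assumptions that the target moves at the constant belt velocity and the end effector travels in a straight line at its maximum speed; the numbers are illustrative.

    # Earliest interception time t solving |d + v*t| = v_r * t.
    import numpy as np

    def rendezvous(target_pos, belt_vel, effector_pos, effector_speed):
        d = np.asarray(target_pos, float) - np.asarray(effector_pos, float)
        v = np.asarray(belt_vel, float)
        a = v @ v - effector_speed**2
        b = 2.0 * (d @ v)
        c = d @ d
        roots = np.roots([a, b, c]) if abs(a) > 1e-12 else np.array([-c / b])
        times = [t.real for t in roots if abs(t.imag) < 1e-9 and t.real > 0]
        if not times:
            return None                                   # target cannot be intercepted
        t = min(times)
        return t, np.asarray(target_pos, float) + v * t   # interception time and rendezvous point

    print(rendezvous([1.0, 0.5], [0.2, 0.0], [0.0, 0.0], 0.6))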

Effects of a Recreational Workout Program on Task-Analyzed Exercise Performance of Adults with Severe Cognitive Impairments

The purpose of this study was to investigate the effectiveness of a recreational workout program for adults with disabilities over two semesters. This investigation was an action study conducted in a naturalistic setting. Participants included equal numbers of adults with severe cognitive impairments (n = 35) and adults without disabilities (n = 35). Over the two semesters, the adults with severe cognitive impairments were taught 6 self-initiated workout activities by the adults without disabilities. The numbers of task-analyzed steps of each activity performed correctly by each participant in the first and last weeks of each semester were used for data analysis. Results of the paired t-tests indicate that, across the two semesters, significant differences between the first and last weeks were found on 4 out of the 6 task-analyzed workout activities at the p < .05 level of significance. The recreational workout program developed in this study was effective.
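
For illustration only, the style of analysis described above can be reproduced with a paired t-test on made-up step counts (the numbers below are not the study's data):

    # Paired t-test on first-week vs last-week counts of correctly performed steps.
    from scipy.stats import ttest_rel

    first_week = [4, 5, 3, 6, 5, 4, 7, 5]      # illustrative counts, one per participant
    last_week  = [6, 7, 5, 8, 6, 6, 9, 7]
    t, p = ttest_rel(last_week, first_week)
    print(f"t = {t:.2f}, p = {p:.4f}, significant at .05: {p < 0.05}")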

A New Approach to Signal Processing for DC-Electromagnetic Flowmeters

Electromagnetic flowmeters with DC excitation are used for a wide range of fluid measurement tasks, but are rarely found in dosing applications with short measurement cycles because of the limited achievable accuracy. This paper identifies a number of factors that influence the accuracy of this sensor type when used for short-term measurements. Based on these results, a new signal-processing algorithm is described that overcomes the identified problems to some extent. In principle, this new method allows a higher accuracy for electromagnetic flowmeters with DC excitation than traditional methods.

A Selective Markovianity Approach for Image Segmentation

A new selective Markovianity approach is introduced in this paper. This approach reduces the response time of the classic Markov random field approach. First, one region is determined by a clustering technique. This region is then excluded from the study. The remaining pixels form the study zone, and only they are subjected to the Markovian segmentation task. With the selective Markovianity approach, the segmentation process is faster than the classic one.
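
A compact Python sketch of the idea is given below; the clustering technique (k-means on gray levels), the choice of the excluded region (the largest cluster) and the ICM/Potts formulation are illustrative assumptions, not necessarily the authors' exact choices.

    # First freeze one "easy" region found by plain clustering, then run a few
    # ICM sweeps of a Potts-prior MRF only on the remaining (study-zone) pixels.
    import numpy as np
    from sklearn.cluster import KMeans

    def selective_mrf(img, k=3, beta=1.0, sweeps=5):
        km = KMeans(n_clusters=k, n_init=10).fit(img.reshape(-1, 1))
        labels = km.labels_.reshape(img.shape).copy()
        means = km.cluster_centers_.ravel()
        frozen = np.bincount(km.labels_).argmax()          # excluded region: largest cluster
        study = labels != frozen                           # study zone: everything else
        for _ in range(sweeps):                            # ICM restricted to the study zone
            for y, x in zip(*np.nonzero(study)):
                nb = labels[max(y - 1, 0):y + 2, max(x - 1, 0):x + 2]   # 3x3 window
                costs = [(img[y, x] - means[c]) ** 2 - beta * np.sum(nb == c) for c in range(k)]
                labels[y, x] = int(np.argmin(costs))
        return labels

    img = np.random.rand(32, 32)                           # toy gray-level image
    print(np.unique(selective_mrf(img)))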

Proffering a Brand New Methodology for Resource Discovery in Grids Based on Economic Criteria Using Learning Automata

Resource discovery is one of the chief services of a grid. A new approach to discovering resources in a grid through learning automata is propounded in this article. The objective of the proposed resource-discovery service is to select resources based on the user's application and on economic criteria, that is, to choose a resource that can accomplish the user's tasks in the most economical manner. This novel service is presented in two phases. First, we propose an application-based categorization by means of a neural network: the user submits his or her application as the input vector of the network, and the output vector describes the suitability of each resource for the presented job. Second, the most economical of the resources put forward in the previous stage that can fulfil the task in question is picked out. The resource choice is carried out by means of the presented algorithm based on learning automata.
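
As a generic illustration of the learning-automata component (not the paper's exact scheme), a linear reward-inaction (L_RI) automaton can be used to converge on the cheapest resource; the per-resource costs below are assumptions.

    # Linear reward-inaction automaton choosing among candidate resources.
    import random

    class LriAutomaton:
        def __init__(self, n_resources, alpha=0.1):
            self.p = [1.0 / n_resources] * n_resources
            self.alpha = alpha

        def choose(self):
            return random.choices(range(len(self.p)), weights=self.p)[0]

        def reward(self, i):                      # called only on favourable responses
            self.p = [(1 - self.alpha) * q for q in self.p]
            self.p[i] += self.alpha               # probabilities still sum to one

    costs = [0.9, 0.4, 0.7]                       # assumed normalized execution costs
    automaton = LriAutomaton(len(costs))
    for _ in range(2000):
        i = automaton.choose()
        if random.random() > costs[i]:            # cheaper resources are rewarded more often
            automaton.reward(i)
    print([round(q, 2) for q in automaton.p])     # typically converges toward the cheapest resource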

Pig Husbandry and Solid Manures in a Commercial Pig Farm in Beijing, China

Porcine production in China represents approximately 50% of worldwide pig production. Information about pig husbandry characteristics in China and about manure properties from sows to fatteners in intensive pig farms is not broadly available for scientific studies, as collecting it is a time-consuming, expensive task and access is highly limited. This study provides a report on solid pig manures (28% dry matter) in a commercial pig farm located in the peri-urban area of Beijing, as well as a general overview of current pig husbandry techniques including pig breeds, feeds, diseases, housing, and pig manure and wastewater disposal. The main results are intended to serve as a literature source for young scientists to understand the main composition of pig manures and to identify the husbandry techniques applied in an intensive pig farm in Beijing.

Towards Clustering of Web-based Document Structures

Methods for organizing web data into groups in order to analyze web-based hypertext data and facilitate data availability are very important given the number of documents available online. Consequently, the task of clustering web-based document structures has many applications, e.g., improving information retrieval on the web, better understanding user navigation behavior, improving the servicing of web users' requests, and increasing web information accessibility. In this paper we investigate a new approach for clustering web-based hypertexts on the basis of their graph structures. The hypertexts are represented as so-called generalized trees, which are more general than the usual directed rooted trees, e.g., DOM trees. As an important preprocessing step we measure the structural similarity between the generalized trees on the basis of a similarity measure d. Then, we apply agglomerative clustering to the obtained similarity matrix in order to create clusters of hypertext graph patterns representing navigation structures. In the present paper we run our approach on a data set of hypertext structures and obtain good results for Web Structure Mining. Furthermore, we outline the application of our approach in Web Usage Mining as future work.
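
A minimal sketch of the clustering step is shown below; the toy similarity matrix stands in for the values of the structural similarity measure d, and the linkage method is an illustrative choice.

    # Agglomerative clustering on a pairwise structural similarity matrix.
    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster
    from scipy.spatial.distance import squareform

    sim = np.array([[1.0, 0.9, 0.2, 0.1],            # toy values of d(., .) for four trees
                    [0.9, 1.0, 0.3, 0.2],
                    [0.2, 0.3, 1.0, 0.8],
                    [0.1, 0.2, 0.8, 1.0]])
    dist = squareform(1.0 - sim, checks=False)        # condensed distance = 1 - similarity
    tree = linkage(dist, method="average")            # agglomerative (hierarchical) clustering
    print(fcluster(tree, t=2, criterion="maxclust"))  # e.g. [1 1 2 2]: two structural clusters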

Post Mining: Discovering Valid Rules from Different-Sized Data Sources

A big organization may have multiple branches spread across different locations. Processing the data from these branches becomes a huge task when innumerable transactions take place. Moreover, branches may be reluctant to forward their data for centralized processing but are ready to pass on their association rules. Local mining may also generate a large number of rules. Further, it is not practically possible for all local data sources to be of the same size. A model is proposed for discovering valid rules from different-sized data sources, where the valid rules are the highly weighted rules. These rules can be obtained from the high-frequency rules generated by each of the data sources. A data source selection procedure is included in order to synthesize rules efficiently. Support equalization is another proposed method, which eliminates low-frequency rules at the local sites themselves, thus reducing the number of rules significantly.
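
As a rough illustration (an assumed weighting scheme, not necessarily the paper's exact formula), rules reported by differently sized sources can be synthesized by weighting each source by its transaction count:

    # Size-weighted synthesis of locally mined association rules.
    def synthesize(source_rules, source_sizes, min_support=0.3):
        total = float(sum(source_sizes))
        weights = [n / total for n in source_sizes]
        synthesized = {}
        for w, rules in zip(weights, source_rules):
            for rule, sup in rules.items():
                synthesized[rule] = synthesized.get(rule, 0.0) + w * sup
        return {r: s for r, s in synthesized.items() if s >= min_support}

    branch_a = {"bread->milk": 0.6, "tea->sugar": 0.2}   # illustrative local rules and supports
    branch_b = {"bread->milk": 0.5, "tea->sugar": 0.7}
    print(synthesize([branch_a, branch_b], source_sizes=[8000, 2000]))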

Multi-agent On-line Monitor for the Safety of Critical Systems

Operational safety of critical systems, such as nuclear power plants, industrial chemical processes and means of transportation, is a major concern for system engineers and operators. One means of assuring it is on-line safety monitors that deliver three safety tasks: fault detection and diagnosis, alarm annunciation, and fault control. While current monitors deliver these tasks, both benefits and limitations of their approaches have been highlighted. Drawing on those benefits, this paper develops a distributed monitor based on semi-independent agents, i.e. a multi-agent system, and on monitoring knowledge derived from a safety assessment model of the monitored system. Agents are deployed hierarchically and provided with knowledge portions and collaboration protocols to reason and integrate over the operational conditions of the components of the monitored system. The monitor aims to address limitations arising from the large-scale, complicated behaviour and distributed nature of monitored systems and to deliver the aforementioned three monitoring tasks effectively.

Robot Task-Level Programming Language and Simulation

This paper presents the development of a software application for off-line robot task programming and simulation. The application is designed to assist in robot task planning and to direct manipulator motion through sensor-based programmed motion. The concept of the designed programming application is to use the power of a knowledge base for task accumulation. In support of the programming means, an interactive graphical simulation of manipulator kinematics was also developed and integrated into the application as a complementary factor to the robot programming media. The simulation provides the designer with useful, inexpensive, off-line tools for designing and testing robotic work cells and automated assembly lines for various industrial applications.

Artificial Intelligence Techniques Applied to Biomedical Patterns

Pattern recognition is the research area of Artificial Intelligence that studies the operation and design of systems that recognize patterns in data. Important application areas are image analysis, character recognition, fingerprint classification, speech analysis, DNA sequence identification, man and machine diagnostics, person identification and industrial inspection. The interest in improving classification systems for data analysis is independent of the application context. In fact, many studies require recognizing and distinguishing groups of various objects, which calls for valid instruments capable of performing this task. The objective of this article is to present several Artificial Intelligence methodologies for data classification applied to biomedical patterns. In particular, this work deals with the realization of a Computer-Aided Detection system (CADe) that is able to assist the radiologist in identifying types of mammary tumor lesions. As an additional biomedical application of the classification systems, we present a study conducted on blood samples which shows how these methods may help to distinguish between carriers of Thalassemia (or Mediterranean Anaemia) and healthy subjects.

An Improved Resource Discovery Approach Using P2P Model for Condor: A Grid Middleware

Resource discovery in grids is critical for efficient resource allocation and management. The heterogeneous nature and dynamic availability of resources make resource discovery a challenging task. As the number of nodes increases from tens to thousands, scalability becomes essential. Peer-to-peer (P2P) techniques, on the other hand, provide effective implementations of scalable services and applications. In this paper we propose a model for resource discovery in the Condor middleware using the four-axis framework defined in the P2P approach. The proposed model enhances Condor to incorporate the functionality of a P2P system, thus aiming to make Condor more scalable, flexible, reliable and robust.

Choosing the Right Mutation Rate to Better Evolve Combinational Logic Circuits

Evolvable hardware (EHW) is a developing field that applies evolutionary algorithms (EAs) to automatically design circuits, antennas, robot controllers, etc. A lot of research has been done in this area, and several different EAs have been introduced to tackle numerous problems, such as scalability, evolvability, etc. However, every time a specific EA is chosen for solving a particular task, all its components, such as population size, initialization, selection mechanism, mutation rate, and genetic operators, should be selected in order to achieve the best results. Over the last three decades, the selection of the right parameters for the EA's components for solving different test problems has been investigated. In this paper, the behaviour of the mutation rate for designing logic circuits, which has not been analyzed before, is studied in depth. The mutation rate in an EHW system modifies the number of inputs of each logic gate, the functionality (for example from AND to NOR) and the connectivity between logic gates. The behaviour of the mutation has been analyzed based on the number of generations, genotype redundancy and number of logic gates of the evolved circuits. The experimental results characterize the behaviour of the mutation rate during evolution for the design and optimization of simple logic circuits, and they suggest the best mutation rate to be used for designing combinational logic circuits. The research presented is particularly important for those who would like to implement a dynamic mutation rate inside an evolutionary algorithm for evolving digital circuits. Research on the mutation rate over the last 40 years is also summarized.
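
The kind of mutation described above can be sketched as follows; the gate-array encoding, the function set and the rate are illustrative assumptions rather than the authors' exact representation.

    # Mutation of a feed-forward gate-array genotype: the rate decides how many
    # genes (gate function or input connection) are randomly changed.
    import random

    FUNCS = ["AND", "OR", "NAND", "NOR", "XOR"]

    def random_gate(index, n_primary_inputs):
        pool = n_primary_inputs + index                 # feed-forward: earlier gates or inputs
        return [random.choice(FUNCS), random.randrange(pool), random.randrange(pool)]

    def mutate(genotype, rate, n_primary_inputs):
        child = [gate[:] for gate in genotype]
        for i, gate in enumerate(child):
            for gene in range(3):                       # gene 0: function, genes 1-2: connections
                if random.random() < rate:
                    if gene == 0:
                        gate[0] = random.choice(FUNCS)  # change functionality, e.g. AND -> NOR
                    else:
                        gate[gene] = random.randrange(n_primary_inputs + i)  # rewire connectivity
        return child

    parent = [random_gate(i, 2) for i in range(6)]      # 6 gates over 2 primary inputs
    print(mutate(parent, rate=0.1, n_primary_inputs=2))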

Dynamic Visualization on Student's Performance, Retention and Transfer of Procedural Learning

This study examined the effects of two dynamic visualizations on 60 Malaysian primary school students' performance (time on task), retention and transfer. The independent variables in this study were the two dynamic visualizations: the video and the animated instructions. The dependent variables were the gain scores for performance, retention and transfer. The results showed that the students in the animation group significantly outperformed the students in the video group in retention. There were no significant differences in gain scores for performance and transfer between the animation and video groups, although the scores were slightly higher in the animation group than in the video group. The conclusion of this study is that the animation visualization is superior to the video for retention of a procedural task.

A Novel Low Power, High Speed 14 Transistor CMOS Full Adder Cell with 50% Improvement in Threshold Loss Problem

Full adders are important components in applications such as digital signal processor (DSP) architectures and microprocessors. In addition to its main task, which is adding two numbers, the adder participates in many other useful operations such as subtraction, multiplication, division, address calculation, etc. In most of these systems the adder lies in the critical path that determines the overall speed of the system, so enhancing the performance of the 1-bit full adder cell (the building block of the adder) is a significant goal. Demands for low-power VLSI have been pushing the development of aggressive design methodologies to reduce power consumption drastically. To meet the growing demand, we propose a new low-power adder cell that, by sacrificing MOS transistor count, reduces the serious threshold loss problem, considerably increases the speed and decreases the power when compared to the static energy recovery full (SERF) adder. Accordingly, a new improved 14T CMOS 1-bit full adder cell is presented in this paper. Results show a 50% improvement in the threshold loss problem, a 45% improvement in speed and considerable power savings over the SERF adder and other types of adders with comparable performance.

A New Model for Discovering XML Association Rules from XML Documents

The inherent flexibility of XML in both structure and semantics makes mining XML data a complex task, with more challenges compared to traditional association rule mining in relational databases. In this paper, we propose a new model for the effective extraction of generalized association rules from an XML document collection. We use frequent subtree mining techniques directly in the discovery process and do not ignore the tree structure of the data in the final rules. The frequent subtrees, found with respect to a user-provided support threshold, are split into complementary subtrees to form the rules. We explain our model in multiple steps, from data preparation to rule generation.
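
A heavily simplified sketch of the rule-generation step is shown below; each frequent subtree is represented merely as a frozenset of labelled edges, the subtree-mining step is assumed to have already produced the supports, and the XML fragments are illustrative.

    # Generate rules X => Y from frequent (sub)patterns and their supports,
    # where confidence(X => Y) = support(X union Y) / support(X).
    from itertools import combinations

    def rules_from_subtrees(supports, min_conf=0.6):
        rules = []
        for tree, sup in supports.items():
            for k in range(1, len(tree)):
                for antecedent in map(frozenset, combinations(tree, k)):
                    if antecedent in supports:                  # antecedent must itself be frequent
                        conf = sup / supports[antecedent]
                        if conf >= min_conf:
                            rules.append((antecedent, tree - antecedent, conf))
        return rules

    supports = {frozenset({("book", "title")}): 0.8,
                frozenset({("book", "title"), ("book", "author")}): 0.5}
    for x, y, c in rules_from_subtrees(supports):
        print(set(x), "=>", set(y), round(c, 2))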

Dynamic Load Balancing in PVM Using Intelligent Application

This paper deals with dynamic load balancing using PVM. In a distributed environment, load balancing and heterogeneity are critical issues that must be addressed in order to achieve optimal results and efficiency. Various techniques are used to distribute the load dynamically among different nodes and to deal with heterogeneity. These techniques follow different approaches in which process migration is the basic concept, with various optimized flavours. However, process migration is not an easy job; it imposes a considerable burden and processing effort in order to track each process on the nodes. We propose a dynamic load-balancing technique in which the application itself intelligently balances the load among different nodes, resulting in efficient use of the system without the overheads of process migration. It also provides a simple solution to the problem of load balancing in a heterogeneous environment.
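
As a generic illustration of application-level load balancing (not the paper's PVM implementation), the master can benchmark each node and hand out work chunks in proportion to the measured capacity, avoiding process migration entirely:

    # Distribute work items in proportion to each node's relative capacity.
    def partition_work(n_items, capacities):
        total = float(sum(capacities))
        shares = [int(n_items * c / total) for c in capacities]
        shares[-1] += n_items - sum(shares)            # give any rounding remainder to the last node
        chunks, start = [], 0
        for share in shares:
            chunks.append(range(start, start + share))
            start += share
        return chunks

    # e.g. 1000 items over three heterogeneous nodes benchmarked at relative speeds 1.0, 2.5, 0.5
    for node, chunk in enumerate(partition_work(1000, [1.0, 2.5, 0.5])):
        print(f"node {node}: items {chunk.start}..{chunk.stop - 1} ({len(chunk)} items)")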