The Problem of Using the Calculation of the Critical Path to Solve Instances of the Job Shop Scheduling Problem

A procedure commonly used in the Job Shop Scheduling Problem (JSSP) to evaluate the neighborhood functions employed by non-deterministic algorithms is the calculation of the critical path in a digraph. This paper presents an experimental study of the computational cost incurred when the critical path is calculated in the solutions of large JSSP instances. The results indicate that using the critical path to generate neighborhoods in the meta-heuristics applied to the JSSP carries a high computational cost, even though calculating the critical path in any digraph has polynomial complexity.
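
To make the computation referred to above concrete, the sketch below finds the critical (longest) path of a weighted digraph in one topological pass. It is a minimal illustration assuming hypothetical operation ids, precedence arcs and processing times, not the instances studied in the paper.

```python
from collections import defaultdict, deque

def critical_path(nodes, edges, duration):
    """Longest (critical) path in a DAG of operations.

    nodes    : iterable of operation ids
    edges    : list of (u, v) precedence arcs
    duration : dict mapping operation id -> processing time
    """
    succ = defaultdict(list)
    indeg = {n: 0 for n in nodes}
    for u, v in edges:
        succ[u].append(v)
        indeg[v] += 1

    # Topological sweep (Kahn's algorithm) while relaxing longest-path labels.
    queue = deque(n for n in nodes if indeg[n] == 0)
    dist = {n: duration[n] for n in nodes}   # longest path ending at n
    pred = {n: None for n in nodes}
    while queue:
        u = queue.popleft()
        for v in succ[u]:
            if dist[u] + duration[v] > dist[v]:
                dist[v] = dist[u] + duration[v]
                pred[v] = u
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)

    # Backtrack from the operation with the largest completion time (makespan).
    end = max(dist, key=dist.get)
    makespan = dist[end]
    path = []
    while end is not None:
        path.append(end)
        end = pred[end]
    return list(reversed(path)), makespan

# Hypothetical toy instance.
ops = ["o1", "o2", "o3", "o4"]
arcs = [("o1", "o2"), ("o1", "o3"), ("o2", "o4"), ("o3", "o4")]
times = {"o1": 3, "o2": 5, "o3": 2, "o4": 4}
print(critical_path(ops, arcs, times))   # (['o1', 'o2', 'o4'], 12)
```

Although a single evaluation runs in O(V + E), repeating it for every candidate move of a neighborhood in a large instance multiplies that cost over thousands of moves, which is consistent with the elevated cost the abstract reports.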

Multi-Agent Model for Automation of Business Process Management System Based on Service Oriented Architecture

Business process automation is an important task in the development of enterprise business software. The requirements for processing acceleration and the level of automation inherently differ from one organization to another. We present a methodology and a system for automating a business process management system (BPMS) architecture through multi-agent collaboration based on a Service Oriented Architecture (SOA). At the design layer, processes are modeled in a semantic markup language for web service applications. At the core of our system is the identification of certain types of human tasks for further automation across multiple platform environments. An improved abnormality-processing model for automating the BPMS architecture through multi-agent collaboration based on SOA is introduced. To validate the efficiency of process automation, an application for an educational knowledge base instance is also described.

Visual Object Tracking in 3D with Color Based Particle Filter

This paper addresses the problem of determining the current 3D location of a moving object and robustly tracking it from a sequence of camera images. The approach presented here uses a particle filter and does not perform any explicit triangulation. Only the color of the object to be tracked is required, not any precise motion model. The observation model we have developed avoids color filtering of the entire image. Together with the Monte Carlo techniques inside the particle filter, this provides real-time performance. Experiments with two real cameras are presented and the lessons learned are discussed. The approach scales easily to more than two cameras and to new sensor cues.
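
A minimal sketch of this kind of color-based particle filter is given below, assuming hypothetical camera projection functions, HSV images and noise levels. The weighting step only samples the pixels that particles project to, so the whole image is never color-filtered; the 3D estimate is obtained without explicit triangulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def color_likelihood(pixel_hsv, target_hsv, sigma=0.1):
    """Similarity between an observed pixel color and the target color (HSV)."""
    d = np.linalg.norm(np.asarray(pixel_hsv, dtype=float) - np.asarray(target_hsv, dtype=float))
    return np.exp(-0.5 * (d / sigma) ** 2)

def step(particles, weights, cameras, images, target_hsv, noise=0.02):
    """One predict-weight-resample cycle for N particles in 3D (shape N x 3).

    cameras : list of functions projecting a 3D point to (u, v) pixel coordinates
    images  : list of HSV images, one per camera
    """
    # Predict: random walk, since no precise motion model is assumed.
    particles = particles + rng.normal(0.0, noise, particles.shape)

    # Weight: look up only the pixels that particles project to.
    for project, img in zip(cameras, images):
        for i, p in enumerate(particles):
            u, v = project(p)
            u, v = int(round(u)), int(round(v))
            if 0 <= v < img.shape[0] and 0 <= u < img.shape[1]:
                weights[i] *= color_likelihood(img[v, u], target_hsv)
            else:
                weights[i] *= 1e-6
    weights = weights / weights.sum()

    # Resample and return the mean of the surviving particles as the 3D estimate.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    estimate = particles[idx].mean(axis=0)
    return particles[idx], np.full(len(particles), 1.0 / len(particles)), estimate

# Hypothetical demo: one synthetic camera and one colored blob.
H, W = 120, 160
target = np.array([0.1, 0.9, 0.9])                        # HSV of the tracked object
img = np.zeros((H, W, 3)); img[40:60, 70:90] = target     # object blob in the image
project = lambda p: (80 + 100 * p[0], 50 + 100 * p[1])    # toy projection model
parts = rng.normal(0.0, 0.1, (300, 3))
w = np.full(300, 1.0 / 300)
parts, w, est = step(parts, w, [project], [img], target)
print(est)
```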

Controllable Electrical Power Plug Adapters Made As A ZigBee Wireless Sensor Network

With Internet communication, new home electronics can be monitored and controlled remotely. In many cases, however, these devices work as standalone units, and older appliances are not covered at all. We therefore developed a total remote system that includes not only new electronics but also old ones. The node of this system is an electrical power plug adapter that embeds a relay switch and several sensors, and these nodes communicate with each other. The system server runs on the Internet, and users access the system from web browsers. To reduce the setup cost of the system, communication between adapters uses a ZigBee wireless network instead of wired LAN cables [3]. From the RSSI (received signal strength indicator) measured between nodes, the system can roughly estimate in which room each adapter is mounted and where in the room it is located, which also reduces the cost of mapping nodes. Energy saving and house monitoring are the expected applications of this system.
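
The rough room estimation from RSSI can be illustrated with a small sketch. The log-distance path-loss parameters, anchor layout and readings below are assumed values, not the ones calibrated for the deployed adapters.

```python
def rssi_to_distance(rssi_dbm, rssi_at_1m=-45.0, path_loss_exp=2.5):
    """Rough distance estimate (metres) from RSSI using a log-distance path-loss
    model. The 1 m reference RSSI and the path-loss exponent are assumed values
    that would be calibrated for the actual rooms and ZigBee hardware."""
    return 10 ** ((rssi_at_1m - rssi_dbm) / (10 * path_loss_exp))

def guess_room(rssi_to_anchors, anchor_rooms):
    """Assign an adapter to the room of the anchor node it hears most strongly.

    rssi_to_anchors : dict anchor_id -> averaged RSSI (dBm) measured by the adapter
    anchor_rooms    : dict anchor_id -> room label
    """
    best_anchor = max(rssi_to_anchors, key=rssi_to_anchors.get)
    return anchor_rooms[best_anchor], rssi_to_distance(rssi_to_anchors[best_anchor])

# Hypothetical measurements from one plug adapter.
readings = {"anchor_kitchen": -58, "anchor_living": -71, "anchor_bedroom": -84}
rooms = {"anchor_kitchen": "kitchen", "anchor_living": "living room",
         "anchor_bedroom": "bedroom"}
print(guess_room(readings, rooms))   # ('kitchen', roughly 3.3 m)
```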

Application of Artificial Neural Network for the Prediction of Pressure Distribution of a Plunging Airfoil

A series of experimental tests was conducted on a section of a 660 kW wind turbine blade to measure the pressure distribution of this model oscillating in plunging motion. In order to minimize the amount of data required to predict the aerodynamic loads of the airfoil, a General Regression Neural Network (GRNN) was trained using the measured experimental data. Once the network proved to be sufficiently accurate, it was used to predict the flow behavior of the airfoil for the desired conditions. Results showed that, using only a small subset of the acquired data, the trained neural network was able to predict results with minimal error when compared with the corresponding measured values. Therefore, by employing this trained network, the aerodynamic coefficients of the plunging airfoil are predicted accurately at different oscillation frequencies, amplitudes, and angles of attack, reducing the cost of tests while achieving acceptable accuracy.
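
A GRNN is essentially Gaussian-kernel regression over the stored training samples. The sketch below is a minimal version with hypothetical plunging-test inputs and pressure targets, not the measured data set of the paper; the smoothing parameter would in practice be tuned on held-out measurements.

```python
import numpy as np

class GRNN:
    """Minimal General Regression Neural Network (Nadaraya-Watson form)."""

    def __init__(self, sigma=0.5):
        self.sigma = sigma           # smoothing parameter (assumed value)

    def fit(self, X, y):
        self.X = np.asarray(X, dtype=float)
        self.y = np.asarray(y, dtype=float)
        return self

    def predict(self, Xq):
        Xq = np.atleast_2d(np.asarray(Xq, dtype=float))
        preds = []
        for q in Xq:
            d2 = np.sum((self.X - q) ** 2, axis=1)
            w = np.exp(-d2 / (2.0 * self.sigma ** 2))
            preds.append(np.dot(w, self.y) / (w.sum() + 1e-12))
        return np.array(preds)

# Hypothetical training rows: (reduced frequency, amplitude, mean angle of
# attack, chord position) -> pressure coefficient.
X_train = np.array([[0.05, 0.10, 4.0, 0.25],
                    [0.10, 0.10, 4.0, 0.25],
                    [0.05, 0.15, 8.0, 0.50]])
y_train = np.array([-1.20, -1.35, -0.80])
model = GRNN(sigma=0.5).fit(X_train, y_train)
print(model.predict([[0.07, 0.12, 5.0, 0.25]]))
```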

Robust Regression and its Application in Financial Data Analysis

This research aims to describe the application of robust regression and its advantages over the least squares regression method in analyzing financial data. To do this, the relationship between earnings per share, book value of equity per share and share price (the price model), and the relationship between earnings per share, the annual change in earnings per share and stock return (the return model), are examined using both robust and least squares regressions, and the outcomes are compared. The comparison shows that robust regression can provide a better and more realistic analysis by eliminating or reducing the contribution of outliers and influential data. Therefore, robust regression is recommended for obtaining more precise results in financial data analysis.
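
The comparison can be reproduced in miniature with standard tools. The sketch below fits an ordinary least squares model and a Huber robust model (via statsmodels) to synthetic price-model data with injected outliers; the data and coefficients are an illustrative stand-in for the financial data used in the study.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)

# Hypothetical price-model data: earnings per share (EPS) and book value per
# share (BVPS) explaining share price, with a few contaminated observations.
n = 60
eps = rng.normal(2.0, 0.5, n)
bvps = rng.normal(10.0, 2.0, n)
price = 5.0 + 4.0 * eps + 1.5 * bvps + rng.normal(0.0, 1.0, n)
price[:4] += 40.0                     # inject outliers into a few cases

X = sm.add_constant(np.column_stack([eps, bvps]))

ols = sm.OLS(price, X).fit()                              # least squares
rlm = sm.RLM(price, X, M=sm.robust.norms.HuberT()).fit()  # robust (Huber)

print("OLS coefficients   :", ols.params)   # affected by the outliers
print("Robust coefficients:", rlm.params)   # closer to the underlying (5, 4, 1.5)
```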

Adaptive Algorithm to Predict the QoS of Web Processes and Workflows

Workflow Management Systems (WfMS) allow organizations to streamline and automate business processes and reengineer their structure. One important requirement for this type of system is the management and computation of the Quality of Service (QoS) of processes and workflows. Currently, a range of Web process and workflow languages exist. Each language can be characterized by the set of patterns it supports. Developing and implementing a suitable and generic algorithm to compute the QoS of processes that have been designed using different languages is a difficult task, because some patterns are specific to particular process languages and new patterns may be introduced in future versions of a language. In this paper, we describe an adaptive algorithm implemented to cope with these two problems. The algorithm is called adaptive because it can be dynamically changed as the patterns of a process language change.
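
One common way to make such a computation adaptive is to keep the QoS reduction rule of each workflow pattern in a registry that can be extended at run time. The sketch below, with simplified time, cost and reliability metrics and made-up task values, illustrates that idea only; it is not the paper's actual algorithm.

```python
from math import prod

# Each workflow pattern contributes its own QoS reduction rule; new patterns
# (for a new language or a new language version) can be registered without
# touching the core computation.
PATTERN_RULES = {}

def rule(name):
    def register(fn):
        PATTERN_RULES[name] = fn
        return fn
    return register

@rule("sequence")
def sequence(tasks):
    return {"time": sum(t["time"] for t in tasks),
            "cost": sum(t["cost"] for t in tasks),
            "reliability": prod(t["reliability"] for t in tasks)}

@rule("parallel")            # AND-split / AND-join
def parallel(tasks):
    return {"time": max(t["time"] for t in tasks),
            "cost": sum(t["cost"] for t in tasks),
            "reliability": prod(t["reliability"] for t in tasks)}

@rule("choice")              # XOR-split with branch probabilities
def choice(tasks, probs):
    return {k: sum(p * t[k] for p, t in zip(probs, tasks))
            for k in ("time", "cost", "reliability")}

def qos(pattern, tasks, **kw):
    return PATTERN_RULES[pattern](tasks, **kw)

# Hypothetical task metrics.
a = {"time": 2.0, "cost": 1.0, "reliability": 0.99}
b = {"time": 3.0, "cost": 2.0, "reliability": 0.95}
print(qos("sequence", [a, b]))
print(qos("choice", [a, b], probs=[0.7, 0.3]))
```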

Malay Folk Literature in Early Childhood Education

Malay Folk Literature in early childhood education serves as an important agent in child development, involving emotional, thinking and language aspects. To date, not much research has been carried out in Malaysia, particularly on the teaching and learning aspects, nor has there been an effort to publish “big books.” Hence this article discusses the stance taken by university undergraduate students, teachers and parents in evaluating Malay Folk Literature for use as big books in early childhood education. The data collated and analyzed were taken from 646 respondents comprising 347 undergraduates and 299 teachers. Results of the study indicated that Malay Folk Literature can be absorbed into teaching and learning for early childhood with a mean of 4.25, while it can be used in big books with a mean of 4.14. The highest mean value for placing a Malay Folk Literature genre in big books for early childhood education was obtained by exemplary stories for undergraduates, with a mean of 4.47, and by animal fables for teachers, with a mean of 4.38. The lowest mean value of 3.57 was given to lipurlara stories. The most popular Malay Folk Literature found suitable for young children is Sang Kancil and the Crocodile, followed by Bawang Putih Bawang Merah. Pak Padir, Legends of Mahsuri, Origin of Malacca, and Origin of Rainbow are also among the popular stories. Overall, the undergraduates showed a more positive attitude toward all the items than the teachers did. The t-test analysis revealed no significant difference between the undergraduate students and the teachers on all the items for the teaching and learning of Malay Folk Literature.

Transmitter Macrodiversity in Multihopping - SFN-Based Algorithm for Improved Node Reachability and Robust Routing

A novel idea presented in this paper is to combine multihop routing with single-frequency networks (SFNs) for a broadcasting scenario. An SFN is a set of multiple nodes that transmit the same data simultaneously, resulting in transmitter macrodiversity. Two of the most important performance factors of multihop networks, node reachability and routing robustness, are analyzed. Simulation results show that our proposed SFN-D routing algorithm improves node reachability by 37 percentage points compared to non-SFN multihop routing. It shows a diversity gain of 3.7 dB, meaning that 3.7 dB lower transmission powers are required for the same reachability. Even better results are possible for larger networks. If an important node becomes inactive, the algorithm can find new routes that a non-SFN scheme would not be able to find. Thus, two of the major problems in multihopping are addressed: achieving robust routing and improving node reachability or reducing transmission power.
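
The macrodiversity effect can be sketched with a toy link-budget model: a receiver is considered reachable if the powers collected from all SFN transmitters, summed in the linear domain, exceed a threshold that the best single transmitter alone may miss. All positions, powers and path-loss parameters below are assumed values, and the model is far simpler than the simulations reported in the paper.

```python
import numpy as np

def received_power_dbm(p_tx_dbm, distance_m, path_loss_exp=3.0, ref_loss_db=40.0):
    """Log-distance path-loss model with a 1 m reference (assumed parameters)."""
    d = np.maximum(distance_m, 1.0)
    return p_tx_dbm - ref_loss_db - 10 * path_loss_exp * np.log10(d)

def reachable(node, sfn_transmitters, p_tx_dbm=10.0, threshold_dbm=-90.0):
    """Compare SFN (power-sum) reachability with the best single transmitter."""
    d = np.linalg.norm(np.asarray(sfn_transmitters, dtype=float)
                       - np.asarray(node, dtype=float), axis=1)
    p_rx_dbm = received_power_dbm(p_tx_dbm, d)
    combined_dbm = 10 * np.log10(np.sum(10 ** (p_rx_dbm / 10.0)))   # macrodiversity
    best_single_dbm = p_rx_dbm.max()
    return combined_dbm >= threshold_dbm, best_single_dbm >= threshold_dbm

node = (120.0, 80.0)                                    # hypothetical receiver (m)
sfn = [(0.0, 0.0), (200.0, 0.0), (100.0, 200.0)]        # hypothetical SFN transmitters
print(reachable(node, sfn))   # here the SFN reaches the node, a single transmitter does not
```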

Clustering Multivariate Empiric Characteristic Functions for Multi-Class SVM Classification

A dissimilarity measure between the empiric characteristic functions of the subsamples associated with the different classes in a multivariate data set is proposed. This measure can be efficiently computed and depends on all the cases of each class. It may be used to find groups of similar classes, which could be joined for further analysis, or it can be employed to perform an agglomerative hierarchical cluster analysis of the set of classes. The final tree can serve to build a family of binary classification models, offering an alternative approach to the multi-class SVM problem. We have tested this dendrogram-based SVM approach against the one-against-one SVM approach on four publicly available data sets, three of them being microarray data. Both performances were found to be equivalent, but the first solution requires a smaller number of binary SVM models.
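
A possible reading of the construction is sketched below: an empirical characteristic function per class, a dissimilarity between every pair of classes, and an agglomerative clustering of the resulting distance matrix. The frequency grid, the exact form of the dissimilarity and the data are illustrative assumptions, not the paper's definitions.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage
from scipy.spatial.distance import squareform

rng = np.random.default_rng(2)

def ecf(X, T):
    """Empirical characteristic function of sample X (n x d) at frequencies T (m x d)."""
    return np.exp(1j * X @ T.T).mean(axis=0)

def ecf_dissimilarity(Xa, Xb, T):
    """Mean squared modulus of the ECF difference over the grid T (illustrative
    choice, not necessarily the measure proposed in the paper). It uses every
    case of both classes through the ECFs."""
    return np.mean(np.abs(ecf(Xa, T) - ecf(Xb, T)) ** 2)

# Hypothetical 3-dimensional data with four classes.
classes = {c: rng.normal(loc=c, scale=1.0, size=(50, 3)) for c in range(4)}
T = rng.normal(size=(64, 3))                          # random frequency grid

labels = sorted(classes)
D = np.zeros((len(labels), len(labels)))
for i, a in enumerate(labels):
    for j, b in enumerate(labels):
        if i < j:
            D[i, j] = D[j, i] = ecf_dissimilarity(classes[a], classes[b], T)

tree = linkage(squareform(D), method="average")       # agglomerative clustering
print(tree)   # each row merges two clusters; such a dendrogram can guide the binary SVMs
```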

Dynamic TDMA Slot Reservation Protocol for QoS Provisioning in Cognitive Radio Ad Hoc Networks

In this paper, we propose a dynamic TDMA slot reservation (DTSR) protocol for cognitive radio ad hoc networks. Quality of Service (QoS) guarantee plays a critically important role in such networks. We consider the problem of providing QoS guarantees to users while maintaining the most efficient use of scarce bandwidth resources. Based on one-hop neighbor information and the bandwidth requirement, our proposed protocol dynamically changes the frame length and the transmission schedule. A dynamic frame length expansion and shrinking scheme that controls the excessive increase of unassigned slots is proposed. This method efficiently utilizes the channel bandwidth by assigning unused slots to new neighboring nodes and by increasing the frame length when the number of slots in the frame is insufficient to support the neighboring nodes. It also shrinks the frame length when half of the slots in the frame of a node are empty. An efficient slot reservation protocol not only guarantees successful data transmissions without collisions but also enhances channel spatial reuse to maximize system throughput. Our proposed scheme, which provides both QoS guarantees and efficient resource utilization, can be employed to optimize channel spatial reuse and maximize system throughput. Extensive simulation results show that the proposed mechanism achieves desirable performance in multichannel multi-rate cognitive radio ad hoc networks.
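
The frame expansion and shrinking policy can be illustrated in isolation. The sketch below doubles the frame when no slot is free and halves it when at least half of the slots are empty, leaving out slot timing, channel selection and all other protocol details.

```python
class DynamicTDMAFrame:
    """Illustrative frame-length control in the spirit of the abstract: assign
    unused slots to new neighbours, double the frame when no slot is free and
    halve it when at least half of the slots are empty."""

    def __init__(self, min_len=4):
        self.min_len = min_len
        self.slots = [None] * min_len          # slot index -> node id or None

    def request_slot(self, node_id):
        if None not in self.slots:             # insufficient slots: expand
            self.slots += [None] * len(self.slots)
        idx = self.slots.index(None)
        self.slots[idx] = node_id
        return idx

    def release_slot(self, node_id):
        self.slots = [s if s != node_id else None for s in self.slots]
        self._maybe_shrink()

    def _maybe_shrink(self):
        while (len(self.slots) > self.min_len
               and self.slots.count(None) * 2 >= len(self.slots)):
            half = len(self.slots) // 2
            keep = [s for s in self.slots if s is not None]
            if len(keep) > half:
                break
            self.slots = keep + [None] * (half - len(keep))

frame = DynamicTDMAFrame()
for n in ["a", "b", "c", "d", "e"]:
    frame.request_slot(n)                      # the fifth request doubles the frame
print(len(frame.slots), frame.slots)
frame.release_slot("e"); frame.release_slot("d"); frame.release_slot("c")
print(len(frame.slots), frame.slots)           # the frame shrinks back
```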

Trust and Reliability for Public Sector Data

The public sector holds large amounts of data on various areas such as social affairs, economy, or tourism. Various initiatives such as Open Government Data or the EU Directive on public sector information aim to make these data available for public and private service providers. Requirements for the provision of public sector data are defined by legal and organizational frameworks. Surprisingly, the defined requirements hardly cover security aspects such as integrity or authenticity. In this paper we discuss the importance of these missing requirements and present a concept to assure the integrity and authenticity of provided data based on electronic signatures. We show that our concept is well suited to the provisioning of unaltered data, and that it can be extended to data that needs to be anonymized before provisioning by incorporating redactable signatures. Our proposed concept enhances the trust and reliability of provided public sector data.
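
Signing and verifying a published data set with an electronic signature can be sketched with the Python cryptography package (Ed25519 keys). The record below is hypothetical, and the redactable signatures needed for anonymized data are beyond this minimal example.

```python
# pip install cryptography
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# The data-providing authority signs the published dataset once.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

dataset = b'{"region": "AT-1", "unemployment_rate": 5.2}'   # hypothetical record
signature = private_key.sign(dataset)

# Any consumer of the open data can check integrity and authenticity
# against the authority's published public key.
try:
    public_key.verify(signature, dataset)
    print("data is unaltered and authentic")
except InvalidSignature:
    print("data was modified or does not originate from the authority")
```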

Comparative Survey of Object Serialization Techniques and the Programming Supports

This paper compares six approaches to object serialization from qualitative and quantitative aspects: object serialization in Java, IDL, XStream, Protocol Buffers, Apache Avro, and MessagePack. With each approach, a common example is serialized to a file and the size of the file is measured. The qualitative comparison examines whether a schema definition is required, whether a schema compiler is required, whether serialization is ASCII-based or binary, and which programming languages are supported. It is clear that there is no single best solution; each solution works well in the context for which it was developed.
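
The quantitative part of the methodology, serializing one common example to a file and measuring its size, can be mirrored with formats from the Python standard library. The json and pickle formats below merely stand in for the six approaches actually compared, and the record is a made-up example.

```python
import json, os, pickle, tempfile

# A common example record, serialized with two stand-in formats.
record = {"id": 1234,
          "name": "object serialization survey",
          "tags": ["binary", "schema", "ascii"],
          "scores": [0.91, 0.87, 0.78]}

def file_size(serialized_bytes):
    """Write the serialized bytes to a temporary file and return its size."""
    with tempfile.NamedTemporaryFile(delete=False) as f:
        path = f.name
        f.write(serialized_bytes)
    size = os.path.getsize(path)
    os.remove(path)
    return size

print("json   :", file_size(json.dumps(record).encode("utf-8")), "bytes (text, no schema)")
print("pickle :", file_size(pickle.dumps(record)), "bytes (binary, language-specific)")
```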

Generating Speq Rules based on Automatic Proof of Logical Equivalence

In the Equivalent Transformation (ET) computation model, a program is constructed by the successive accumulation of ET rules. A meta-computation method by which a correct ET rule is generated has been proposed. Although the method covers a broad range of ET rule generation, not all important ET rules are necessarily generated. More ET rules can be generated by supplementing generation methods that are specialized for important ET rules. A Specialization-by-Equation (Speq) rule is one of those important rules. A Speq rule describes a procedure in which two variables included in an atom conjunction are equalized due to predicate constraints. In this paper, we propose an algorithm that systematically and recursively generates Speq rules and discuss its effectiveness in the synthesis of ET programs. A Speq rule is generated based on a proof of a logical formula consisting of a given atom set and a disequality. The proof is carried out by utilizing some ET rules, and the rules ultimately obtained are used in generating Speq rules.

Functional Lipids and Bioactive Compounds from Oil Rich Indigenous Seeds

The Indian subcontinent has a plethora of traditional medicine systems that provide promising solutions to lifestyle disorders in an 'all natural way'. Spices and oilseeds hold prominence in Indian cuisine; hence the focus of the current study was to evaluate the bioactive molecules from Linum usitatissimum (LU), Lepidium sativum (LS), Nigella sativa (NS) and Guizotia abyssinica (GA) seeds. The seeds were characterized for functional lipids such as omega-3 fatty acids, antioxidant capacity, phenolic compounds, dietary fiber and anti-nutritional factors. Analysis of the seeds revealed LU and LS to be rich sources of α-linolenic acid (41.85 ± 0.33% and 26.71 ± 0.63%, respectively), an omega-3 fatty acid (determined by GC-MS). In the antioxidant study, NS seeds demonstrated the highest antioxidant ability (61.68 ± 0.21 TEAC/100 g DW) due to the presence of phenolics and terpenes, as assayed by mass spectral analysis. When screened for the anti-nutritional factor cyanogenic glycoside, LS seeds showed a content as high as 1674 ± 54 mg HCN/kg. GA is a probable good source of a stable vegetable oil (SFA:PUFA 1:2.3). The seeds showed a diversified bioactive profile, and further studies to use the different biomolecules in tandem for the development of a possible 'nutraceutical cocktail' have therefore been initiated.

ANN Models for Microstrip Line Synthesis and Analysis

Microstrip lines, widely used for good reason, are broadband in frequency and provide circuits that are compact and light in weight. They are generally economical to produce, since they are readily adaptable to hybrid and monolithic integrated circuit (IC) fabrication technologies at RF and microwave frequencies. Although the existing EM simulation models used for the synthesis and analysis of microstrip lines are reasonably accurate, they are computationally intensive and time consuming. Neural networks have recently gained attention as fast and flexible vehicles for microwave modeling, simulation and optimization. After learning and abstracting from microwave data through a process called training, neural network models are used during microwave design to provide instant answers to the task learned. This paper presents simple and accurate ANN models for the synthesis and analysis of microstrip lines to more accurately compute the characteristic parameters and the physical dimensions, respectively, for the required design specifications.
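
As an illustration of the analysis direction (dimensions to characteristic impedance), the sketch below trains a small MLP on data generated from the closed-form Hammerstad equations. The network architecture, parameter ranges and training setup are assumptions, not the models of the paper.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def z0_hammerstad(w_h, eps_r):
    """Closed-form microstrip characteristic impedance (Hammerstad formulas),
    used here only to generate training data for the illustrative ANN."""
    if w_h <= 1.0:
        eps_eff = (eps_r + 1) / 2 + (eps_r - 1) / 2 * ((1 + 12 / w_h) ** -0.5
                                                       + 0.04 * (1 - w_h) ** 2)
        return 60 / np.sqrt(eps_eff) * np.log(8 / w_h + w_h / 4)
    eps_eff = (eps_r + 1) / 2 + (eps_r - 1) / 2 * (1 + 12 / w_h) ** -0.5
    return 120 * np.pi / (np.sqrt(eps_eff) * (w_h + 1.393 + 0.667 * np.log(w_h + 1.444)))

rng = np.random.default_rng(3)
w_h = rng.uniform(0.1, 10.0, 2000)          # width-to-height ratio
eps_r = rng.uniform(2.2, 10.8, 2000)        # substrate permittivity
X = np.column_stack([w_h, eps_r])
y = np.array([z0_hammerstad(a, b) for a, b in X])

# "Analysis" network: physical dimensions (w/h, eps_r) -> characteristic impedance.
ann = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(32, 32), solver="lbfgs",
                                 max_iter=3000, random_state=0))
ann.fit(X, y)
print(ann.predict([[2.0, 4.4]]), z0_hammerstad(2.0, 4.4))   # ANN prediction vs closed form
```

A synthesis network would simply be trained on the inverse mapping, from the required impedance and substrate permittivity to the line dimensions.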

On a Way of Constructing Numerical Methods at the Junction of Multistep and Hybrid Methods

Taking into account that many problems of the natural sciences and engineering reduce to solving an initial-value problem for ordinary differential equations, scientists since Newton have investigated the approximate solution of ordinary differential equations. There are papers by different authors devoted to the solution of the initial value problem for ODEs. Euler's well-known method, later developed by the famous scientists Adams, Runge and Kutta, is the most popular among these methods. Recently, scientists have begun to construct methods preserving some properties of the Adams and Runge-Kutta methods, called hybrid methods. The construction of such methods has been investigated since the middle of the 20th century. Here we investigate one generalization of multistep and hybrid methods and, on its basis, construct specific methods of accuracy order p = 5 and p = 6 for k = 1 (k is the order of the difference method).
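
For readers unfamiliar with multistep methods, the classical two-step Adams-Bashforth scheme below shows the basic idea of reusing previously computed right-hand sides. The order 5 and 6 hybrid methods constructed in the paper additionally use off-step (hybrid) points and are not reproduced here.

```python
import numpy as np

def adams_bashforth2(f, t0, y0, h, n_steps):
    """Classical two-step Adams-Bashforth method (order 2) for y' = f(t, y)."""
    t = t0 + h * np.arange(n_steps + 1)
    y = np.empty(n_steps + 1)
    y[0] = y0
    y[1] = y[0] + h * f(t[0], y[0])                 # one starter step (Euler)
    for n in range(1, n_steps):
        # Combine the current and the previous right-hand side evaluations.
        y[n + 1] = y[n] + h * (1.5 * f(t[n], y[n]) - 0.5 * f(t[n - 1], y[n - 1]))
    return t, y

# Test problem y' = -y, y(0) = 1, with exact solution exp(-t).
t, y = adams_bashforth2(lambda t, y: -y, 0.0, 1.0, h=0.1, n_steps=20)
print(abs(y[-1] - np.exp(-t[-1])))                  # small global error
```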

Effect of Passive Modified Atmosphere in Different Packaging Materials on Fresh-Cut Mixed Fruit Salad Quality during Storage

Experiments were carried out at the Latvia State Institute of Fruit-Growing in 2011. Fresh-cut, minimally processed apple and pear mixed salads were packed under passive modified atmosphere (MAP) in PP containers, which were hermetically sealed with breathable conventional BOPP Propafresh™ P2GAF and Amcor Agrifresh films. Biodegradable NatureFlex™ NVS INNOVIA Films and VC999 BioPack PLA films coated with a barrier of pure silicon oxide (SiOx) were used to compare the quality of the fresh-cut produce with that packed in conventional packaging films. Samples were cold stored at +4.0±0.5 °C for up to 10 days. The quality of the salads was evaluated by physicochemical properties – weight losses, moisture, firmness, the effect of packaging modes on colour, dynamics of the headspace atmosphere concentration (CO2 and O2), and titratable acidity values – as well as by microbiological contamination (yeasts, moulds and total bacteria count), analyzed before packaging and after 2, 4, 6, 8, and 10 days of storage.

Alertness States Classification By SOM and LVQ Neural Networks

Several studies have been carried out, using various techniques including neural networks, to discriminate vigilance states in humans from electroencephalographic (EEG) signals, but we are still far from satisfactorily usable results. The work presented in this paper aims at improving this status with regard to two aspects. Firstly, we introduce an original procedure based on the association of two neural networks, a self-organizing map (SOM) and a learning vector quantization (LVQ), that makes it possible to automatically detect artefacted states and to separate the different levels of vigilance, which is a major breakthrough in the field. Secondly, and more importantly, our study has been oriented toward real-world situations, and the resulting model can easily be implemented as a wearable device: it requires only restricted computational and memory resources, and data access is very limited in time. Furthermore, ongoing work indicates that this should shortly result in the design of a non-invasive electronic wearable device.
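
The LVQ stage can be sketched compactly. The minimal LVQ1 below works on made-up two-dimensional features standing in for EEG descriptors, and it omits the SOM used for artefact detection as well as all signal processing.

```python
import numpy as np

def lvq1(X, labels, n_proto_per_class=2, lr=0.05, epochs=50, seed=0):
    """Minimal LVQ1: the winning prototype moves toward samples of its own
    class and away from samples of other classes."""
    rng = np.random.default_rng(seed)
    classes = np.unique(labels)
    protos, proto_lab = [], []
    for c in classes:                              # initialise prototypes from the data
        idx = rng.choice(np.flatnonzero(labels == c), n_proto_per_class, replace=False)
        protos.append(X[idx])
        proto_lab += [c] * n_proto_per_class
    protos, proto_lab = np.vstack(protos), np.array(proto_lab)

    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            j = np.argmin(np.sum((protos - X[i]) ** 2, axis=1))   # winner
            sign = 1.0 if proto_lab[j] == labels[i] else -1.0
            protos[j] += sign * lr * (X[i] - protos[j])
    return protos, proto_lab

def predict(protos, proto_lab, X):
    d = ((X[:, None, :] - protos[None, :, :]) ** 2).sum(axis=2)
    return proto_lab[np.argmin(d, axis=1)]

# Hypothetical 2-D features (e.g. spectral band powers) for two vigilance states.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(3, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)
protos, plab = lvq1(X, y)
print((predict(protos, plab, X) == y).mean())      # training accuracy
```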

Analytical and Experimental Methods of Design for Supersonic Two-Stage Ejectors

In this paper, supersonic ejectors are studied experimentally and analytically. An ejector is a device that uses the energy of one fluid to move another fluid; it works like a vacuum pump without the use of a piston, rotor or any other moving component. An ejector contains an active nozzle, a passive nozzle, a mixing chamber and a diffuser. Since viscous effects are significant and the flow in the mixing chamber is turbulent and three-dimensional, numerical methods require long computation times and high cost to analyze the flow in ejectors. Therefore, this paper presents a simple analytical method based on the precise governing equations of fluid mechanics. Using the derived analytical relations, a computer code has been prepared to analyze the flow in the different components of the ejector. An experiment has been performed in supersonic regime 1.5
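
One of the standard building blocks of such a one-dimensional analytical treatment is the isentropic area-Mach relation, which links a nozzle section area to the local Mach number. The sketch below inverts it numerically; it is only an illustration of the kind of relation such a code evaluates, not the paper's model, and the 1.176 area ratio is simply the value that yields roughly Mach 1.5 for air (gamma = 1.4).

```python
def area_ratio(M, gamma=1.4):
    """Isentropic area-Mach relation A/A* for a perfect gas."""
    t = (2.0 / (gamma + 1.0)) * (1.0 + (gamma - 1.0) / 2.0 * M ** 2)
    return t ** ((gamma + 1.0) / (2.0 * (gamma - 1.0))) / M

def mach_from_area(a_ratio, gamma=1.4, supersonic=True):
    """Invert the relation by bisection on the chosen (sub- or supersonic) branch."""
    lo, hi = (1.0 + 1e-9, 50.0) if supersonic else (1e-6, 1.0 - 1e-9)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        too_large = area_ratio(mid, gamma) > a_ratio
        if too_large == supersonic:   # move toward the smaller Mach number
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Exit Mach number of a nozzle with an exit-to-throat area ratio of 1.176.
print(mach_from_area(1.176))   # approximately 1.5
```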