A Distributed Group Mutual Exclusion Algorithm for Soft Real Time Systems

The group mutual exclusion (GME) problem is an interesting generalization of the mutual exclusion problem. Several solutions to the GME problem have been proposed for message-passing distributed systems. However, none of these solutions is suitable for real-time distributed systems. In this paper, we propose a token-based distributed algorithm for the GME problem in soft real-time distributed systems. The algorithm uses the concepts of a priority queue, a dynamic request set, and process state. It uses a first-come, first-served approach in selecting the next session type among requests of the same priority level and satisfies the concurrent occupancy property. The algorithm allows all n processors to be inside their critical sections (CS) provided they request the same session. A performance analysis and a correctness proof of the algorithm are also included in the paper.
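
As an illustrative sketch only (not the authors' distributed protocol, which involves token passing and a dynamic request set), the session-selection rule described above can be modeled as a priority queue with first-come, first-served tie-breaking; all names here are hypothetical:

```python
import heapq
import itertools

class TokenHolder:
    """Toy model of next-session selection: highest priority first,
    first-come first-served (FCFS) among requests of equal priority."""
    def __init__(self):
        self._arrivals = itertools.count()  # arrival stamps for FCFS tie-breaking
        self._requests = []                 # heap of (-priority, arrival, session, pid)

    def request(self, pid, session, priority):
        heapq.heappush(self._requests, (-priority, next(self._arrivals), session, pid))

    def open_next_session(self):
        """Grant the oldest highest-priority request, then admit every other
        pending request for the same session type concurrently."""
        if not self._requests:
            return None, []
        _, _, session, pid = heapq.heappop(self._requests)
        admitted = [pid] + [r[3] for r in self._requests if r[2] == session]
        remaining = [r for r in self._requests if r[2] != session]
        heapq.heapify(remaining)
        self._requests = remaining
        return session, admitted
```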

Software Industrialization in Systems Integration

Today's economy is in permanent change, causing mergers, acquisitions, and cooperations between enterprises. As a consequence, process adaptations and realignments result in systems integration and software development projects. Processes and procedures for executing such projects still rely on the craftsmanship of highly skilled workers. A generally accepted, industrialized production, characterized by high efficiency and quality, seems inevitable. In spite of this, current concepts of software industrialization are aimed at traditional software engineering and do not consider the characteristics of systems integration. The present work points out these particularities and discusses the applicability of existing industrial concepts in the systems integration domain. Consequently, it defines further areas of research necessary to bring the field of systems integration closer to an industrialized production, allowing higher efficiency, quality, and return on investment.

Evaluation of Electronic Payment Systems Using Fuzzy Multi-Criteria Decision Making Approach

Global competitiveness has recently become the biggest concern of both manufacturing and service companies. Electronic commerce, as a key technology, enables firms to reach all potential consumers from all over the world. In this study, we present commonly used electronic payment systems and then evaluate these systems with respect to different criteria. The payment systems included in this research are the credit card, the virtual credit card, electronic money, mobile payment, the credit transfer, and debit instruments. We carry out a systematic comparison of these systems with respect to three main criteria: technical, economic, and social. We conduct a fuzzy multi-criteria decision making procedure to deal with the multi-attribute nature of the problem. The subjectivity and imprecision of the evaluation process are modeled using triangular fuzzy numbers.
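
As a minimal sketch of the underlying arithmetic, assuming triangular fuzzy numbers (l, m, u), made-up criterion weights and ratings, and centroid defuzzification (the study's actual data and aggregation scheme may differ):

```python
# Weighted aggregation of triangular fuzzy ratings for one payment system.
# All numbers are illustrative placeholders, not the study's data.
def tfn_add(a, b):
    return (a[0] + b[0], a[1] + b[1], a[2] + b[2])

def tfn_scale(a, k):
    return (k * a[0], k * a[1], k * a[2])

def defuzzify(a):
    return sum(a) / 3.0  # centroid of a triangular fuzzy number (l, m, u)

ratings = {"technical": (5, 7, 9), "economic": (3, 5, 7), "social": (7, 9, 10)}
weights = {"technical": 0.5, "economic": 0.3, "social": 0.2}  # assumed crisp weights

total = (0.0, 0.0, 0.0)
for criterion, tfn in ratings.items():
    total = tfn_add(total, tfn_scale(tfn, weights[criterion]))

print(total, defuzzify(total))  # fuzzy aggregate score and its crisp value
```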

Testing Visual Abilities of Machines - Visual Intelligence Tests

Intelligence tests are series of tasks designed to measure the capacity to make abstractions, to learn, and to deal with novel situations. The visual abilities of the shape understanding system (SUS) are tested using visual intelligence tests. In this paper, progressive matrices tests are formulated as tasks given to SUS. These tests require good visual problem-solving abilities of the human subject. SUS solves these tests by performing complex visual reasoning that transforms the visual forms (tests) into string forms. The experiment showed that the proposed method, which is part of the SUS visual understanding abilities, can solve tests that are very difficult for human subjects.

Binary Decision Diagrams: An Improved Variable Ordering using Graph Representation of Boolean Functions

This paper presents an improved variable ordering method to obtain the minimum number of nodes in Reduced Ordered Binary Decision Diagrams (ROBDDs). The proposed method uses the graph topology to find the best variable ordering. To this end, the input Boolean function is converted into a unidirectional graph. Three levels of graph parameters are used to increase the probability of finding a good variable ordering. The initial level uses the total number of nodes (NN) in all the paths, the total number of paths (NP), and the maximum number of nodes among all paths (MNNAP). The second and third levels use two extra parameters: the shortest path between two variables (SP) and the sum of the shortest paths from one variable to all the other variables (SSP). At each level, a permutation of the graph parameters is performed for each variable order, and the number of nodes is recorded. Experimental results are promising; the proposed method is found to be more effective in finding a good variable ordering for the majority of benchmark circuits.
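
To make the first-level parameters concrete, a small sketch on a toy graph; how the Boolean function is converted into the graph is paper-specific and only assumed here:

```python
# NN, NP and MNNAP computed over all root-to-sink paths of a toy
# variable graph given as an adjacency map (hypothetical example).
graph = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}

def all_paths(g, node, path=()):
    path = path + (node,)
    if not g[node]:
        yield path
    for nxt in g[node]:
        yield from all_paths(g, nxt, path)

paths = list(all_paths(graph, "a"))
NN = sum(len(p) for p in paths)      # total number of nodes in all the paths
NP = len(paths)                      # total number of paths
MNNAP = max(len(p) for p in paths)   # maximum number of nodes among all paths
print(NN, NP, MNNAP)                 # candidate variable orders are ranked by these
```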

Auto Regressive Tree Modeling for Parametric Optimization in Fuzzy Logic Control System

The advantage of solving complex nonlinear problems with fuzzy logic methodologies is that experience or expert knowledge, described as a fuzzy rule base, can be embedded directly into the system. This paper focuses on the current limitations in the appropriate, automated design of fuzzy controllers. Structure discovery and parameter adjustment of the branched T-S fuzzy model are addressed by a hybrid technique based on type-constrained sparse tree algorithms. Simulation results for different system models are evaluated, and the identification error is observed to be minimal.

Hardware Implementation of Stack-Based Replacement Algorithms

Block replacement algorithms that increase the hit ratio have been used extensively in cache memory management. Among basic replacement schemes, LRU and FIFO have been shown to be effective replacement algorithms in terms of hit rates. In this paper, we introduce a flexible stack-based circuit which can be employed in the hardware implementation of both the LRU and FIFO policies. We propose a simple and efficient architecture such that stack-based replacement algorithms can be implemented without the drawbacks of traditional architectures. The stack is modular, and hence a set of stack rows can be cascaded depending on the number of blocks in each cache set. Our circuit can be implemented in conjunction with the cache controller and static/dynamic memories to form a cache system. Experimental results show that the proposed circuit provides an average improvement of 26% in storage bits, and its maximum operating frequency is increased by a factor of two.
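
As a behavioral reference for the circuit (a software model, not the hardware description itself): on a hit, LRU moves the block to the top of the stack while FIFO leaves the stack unchanged; on a miss, both push the new block on top and evict from the bottom:

```python
# Software model of the stack discipline the proposed circuit implements.
def access(stack, block, capacity, policy="LRU"):
    if block in stack:
        if policy == "LRU":
            stack.remove(block)
            stack.insert(0, block)   # most recently used rises to the top
        return True                  # hit (FIFO leaves the stack untouched)
    stack.insert(0, block)           # miss: push the new block on top
    if len(stack) > capacity:
        stack.pop()                  # evict the bottom (oldest / least recent)
    return False

stack = []
for ref in [1, 2, 3, 1, 4, 1]:
    hit = access(stack, ref, capacity=3)
print(stack)                         # final stack contents, top to bottom
```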

Power-Efficient AND-EXOR-INV Based Realization of Achilles' Heel Logic Functions

This paper deals with a power-conscious AND-EXOR-Inverter type logic implementation for a complex class of Boolean functions, namely Achilles' heel functions. Different variants of this function class, viz. positive, negative, and pure Horn, have been considered for analysis and simulation purposes. The proposed realization is compared with the decomposed implementation produced by an existing standard AND-EXOR logic minimizer; both result in Boolean networks with good testability attributes. It may be noted that an AND-OR-EXOR type logic network does not exist for the positive phase of this unique class of logic functions. Experimental results report significant savings in all the power consumption components for designs based on standard cells pertaining to a 130 nm UMC CMOS process. The simulations have been extended to validate the savings across all three library corners (typical, best, and worst case specifications).
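
For reference, the Achilles' heel function over 2n variables is commonly given as the conjunction below; its minimal sum-of-products (AND-OR) form requires 2^n product terms, which is what makes AND-EXOR style realizations attractive for this class. A quick check of that blow-up on a small instance:

```python
# Achilles' heel function f = AND over i of (x_i OR y_i). For n pairs, the
# minimal sum-of-products needs 2**n product terms (pick x_i or y_i per clause).
from itertools import product

def achilles_heel(xs, ys):
    return all(x or y for x, y in zip(xs, ys))

n = 3
minterms = sum(achilles_heel(bits[:n], bits[n:])
               for bits in product((0, 1), repeat=2 * n))
print(minterms)   # 27 satisfying assignments; minimal AND-OR cover: 2**3 = 8 terms
```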

Database Placement on Large-Scale Systems

Large-scale systems such as Grids offer infrastructures for both data distribution and parallel processing. The use of Grid infrastructures is a recent development that is already impacting the distributed database management system (DBMS) industry. In DBMSs, distributed query processing has emerged as a fundamental technique for ensuring high performance in distributed databases. Database placement is particularly important in large-scale systems because it reduces communication costs and improves resource usage. In this paper, we propose a dynamic database placement policy that depends on query patterns and Grid site capabilities. We evaluate the performance of the proposed database placement policy using simulations. The obtained results show that dynamic database placement can significantly improve the performance of distributed query processing.

Comparative Study of Some Adaptive Fuzzy Algorithms for Manipulator Control

Manipulator control is a highly complex problem of controlling a system which is multi-input, multi-output, nonlinear, and time-variant. In this paper, several adaptive fuzzy control algorithms and a new hybrid fuzzy control algorithm are comparatively evaluated for manipulator control through simulations. The adaptive fuzzy controllers consist of self-organizing, self-tuning, and coarse/fine adaptive fuzzy schemes. These controllers are tested through simulations for different trajectories and for varying manipulator parameters. Performance indices such as the RMS error, steady-state error, and maximum error are used for comparison. It is observed that the self-organizing fuzzy controller gives the best performance. The proposed hybrid fuzzy plus integral error controller also performs remarkably well, given its simple structure.
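
As a small illustration of the comparison indices named above, a sketch assuming sampled desired and actual joint trajectories (the data below is synthetic, not from the paper's simulations):

```python
import numpy as np

t = np.linspace(0, 5, 500)
desired = np.sin(t)                                   # desired joint trajectory
actual = np.sin(t) + 0.01 * np.random.randn(t.size)   # simulated tracking output

error = desired - actual
rms_error = np.sqrt(np.mean(error ** 2))              # RMS error over the trajectory
max_error = np.max(np.abs(error))                     # maximum tracking error
steady_state_error = np.mean(np.abs(error[-50:]))     # mean error over the final samples
print(rms_error, max_error, steady_state_error)
```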

Tongue Diagnosis System Based on PCA and SVM

In this study, we propose a tongue diagnosis method which detects the tongue in a face image, divides the tongue area into six areas, and finally generates the tongue coating ratio of each area. To detect the tongue area in the face image, we use an active shape model (ASM). The detected tongue area is divided into the six areas widely used in traditional Korean medicine, and the distribution of tongue coating over the six areas is examined by a Support Vector Machine (SVM). For the SVM, we use a 3-dimensional vector calculated by Principal Component Analysis (PCA) from a 12-dimensional vector consisting of RGB, HSI, Lab, and Luv components. As a result, we detected the tongue area stably using the ASM and found that PCA and SVM helped raise the tongue coating detection rate.
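
A minimal sketch of the classification stage under the stated dimensions, with random arrays standing in for labeled tongue pixels (the ASM detection and six-area division are not reproduced):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

X = np.random.rand(1000, 12)        # per-pixel 12-D vectors (RGB, HSI, Lab, Luv)
y = np.random.randint(0, 2, 1000)   # 1 = tongue coating, 0 = no coating (dummy labels)

clf = make_pipeline(PCA(n_components=3), SVC(kernel="rbf"))  # 12-D -> 3-D -> SVM
clf.fit(X, y)
print(clf.predict(X[:5]))           # coating decision for the first five pixels
```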

Embedding a Large Amount of Information Using a Highly Secure Neural-Based Steganography Algorithm

In this paper, we construct and implement a new steganography algorithm based on a learning system to hide a large amount of information in a color BMP image. We use adaptive image filtering and adaptive non-uniform image segmentation with bit replacement on the appropriate pixels. These pixels are selected randomly rather than sequentially, using a new concept defined by main cases with sub-cases for each byte in a pixel. Following the design steps, we derive 16 main cases with their sub-cases, which cover all aspects of embedding the input information into a color bitmap image. Four layers of security are proposed to make it difficult to break the encryption of the input information and to confuse steganalysis. A learning system based on a neural network is introduced at the fourth layer of security; this layer is used to increase the difficulty of statistical attacks. Our results against statistical and visual attacks are discussed before and after using the learning system, and we compare them with a previous steganography algorithm. We show that our algorithm can efficiently embed a large amount of information, up to 75% of the image size (replacing at most 18 bits per pixel), with high output quality.
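
A minimal sketch of the bit-replacement primitive alone; the case selection, adaptive segmentation, and the four security layers are not reproduced here:

```python
def embed_bits(color_byte, bits, k):
    """Replace the k least significant bits of one color byte with `bits`."""
    mask = ~((1 << k) - 1) & 0xFF         # keep the upper 8 - k bits
    return (color_byte & mask) | (bits & ((1 << k) - 1))

print(bin(embed_bits(0b10110110, 0b101, 3)))  # -> 0b10110101
```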

2D Gabor Functions and FCMI Algorithm for Flaws Detection in Ultrasonic Images

In this paper, we present a new approach to detecting flaws in T.O.F.D (Time Of Flight Diffraction) ultrasonic images based on texture features. Texture is one of the most important features used in recognizing patterns in an image. The paper describes texture features based on 2D Gabor functions, i.e., Gaussian-shaped band-pass filters with dyadic treatment of the radial spatial frequency range and multiple orientations, which represent an appropriate choice for tasks requiring simultaneous measurement in both the space and frequency domains. The most relevant features are used as input to a fuzzy c-means clustering classifier. There are only two classes: 'defect' or 'no defect'. The proposed approach is tested on T.O.F.D images acquired in the laboratory and in the industrial field.
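
A sketch of the feature pipeline under assumed parameters: a small Gabor filter bank over multiple orientations and dyadic frequencies, followed by a basic two-class fuzzy c-means loop (the paper's feature selection and tuning are not reproduced):

```python
import numpy as np
from skimage.filters import gabor

image = np.random.rand(64, 64)  # placeholder for a T.O.F.D image

# Gabor magnitude responses: dyadic radial frequencies, four orientations.
features = []
for frequency in (0.1, 0.2, 0.4):
    for theta in np.arange(0, np.pi, np.pi / 4):
        real, imag = gabor(image, frequency=frequency, theta=theta)
        features.append(np.sqrt(real ** 2 + imag ** 2).ravel())
X = np.stack(features, axis=1)  # one feature row per pixel

# Basic fuzzy c-means (fuzzifier m = 2) with two classes: defect / no defect.
c, m, rng = 2, 2.0, np.random.default_rng(0)
U = rng.dirichlet(np.ones(c), size=X.shape[0])        # membership matrix
for _ in range(20):
    W = U ** m
    centers = (W.T @ X) / W.sum(axis=0)[:, None]      # weighted cluster centers
    d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-9
    inv = d ** (-2.0 / (m - 1.0))
    U = inv / inv.sum(axis=1, keepdims=True)          # standard FCM membership update
labels = U.argmax(axis=1).reshape(image.shape)        # pixel-wise class map
```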

Biometric Technology in Securing the Internet Using Large Neural Network Technology

The article examines methods for protecting citizens' personal data on the Internet using biometric identity authentication technology. The potential danger of these methods, due to the threat of losing base biometric templates, is highlighted. To eliminate the threat of compromised biometric templates, it is proposed to use large and extra-large neural networks, which, on the one hand, authenticate a person by his or her biometrics with high reliability, and, on the other hand, make the person's biometrics unavailable for observation and understanding. The article also describes in detail the transformation of personal biometric data into an access code. Requirements are formulated for the biometrics-to-code converter regarding its behavior on the images of the 'Insider', a 'Stranger', and all 'Strangers'. The effect of neural network dimensionality on the quality with which the converter conceals the biometrics in the access code is analyzed.

A New Divide and Conquer Software Process Model

A software system goes through a number of stages during its life, and a software process model gives a standard format for planning, organizing, and running a project. The article presents a new software development process model named the “Divide and Conquer Process Model”, based on the idea of first dividing things to make them simple and then gathering them to get the whole work done. The article begins with the background of different software process models and the problems in these models. This is followed by the new divide and conquer process model, an explanation of its different stages, and, at the end, a demonstration of its edge over other models.

Extraction of Temporal Relation by the Creation of Historical Natural Disaster Archive

In historical science and social science, the influence of natural disasters upon society is a matter of great interest. In recent years, some natural disaster archives have been created by many hands; however, this is inefficient and wasteful. Therefore, we propose a computer system to create a historical natural disaster archive. As the target of this analysis, we consider newspaper articles, which are typical examples that prescribe the temporal relations of events during natural disasters. To carry out this analysis, we identify the occurrences in newspaper articles through index entries, considering the events which are specific to natural disasters, and show the temporal relations between natural disasters. We designed and implemented an automatic system for the “extraction of the occurrences of natural disasters” and a “temporal relation table for natural disasters”.

Multi-view Description of Real-Time Systems' Architecture

Real-time embedded systems should benefit from component-based software engineering to handle complexity and deal with dependability. In these systems, applications should not only be logically correct but also behave within time windows. However, among current component-based software engineering approaches, few component models handle time properties in a manner that allows efficient analysis and checking at the architectural level. In this paper, we present a meta-model for component-based software description that integrates timing issues. To achieve a complete functional model of software components, our meta-model focuses on four functional aspects: interface, static behavior, dynamic behavior, and interaction protocol. With each aspect we have explicitly associated a time model. Such a time model can be used to check a component's design against certain properties and to compute the timing properties of component assemblies.

Extended Well-Founded Semantics in Bilattices

One of the most used assumptions in logic programming and deductive databases is the so-called Closed World Assumption (CWA), according to which the atoms that cannot be inferred from a program are considered to be false (i.e., a pessimistic assumption). One of the most successful semantics of conventional logic programs based on the CWA is the well-founded semantics. However, the CWA is not applicable in all circumstances in which information is handled. That is, the well-founded semantics, if conventionally defined, would behave inadequately in various cases. The solution we adopt in this paper is to extend the well-founded semantics so that it can also be based on other assumptions. The basis of (default) negative information in the well-founded semantics is given by the so-called unfounded sets. We extend this concept by considering optimistic, pessimistic, skeptical, and paraconsistent assumptions, used to complete missing information in a program. Our semantics, called the extended well-founded semantics, also expresses imperfect information, considered to be missing/incomplete, uncertain, and/or inconsistent, by using bilattices as multivalued logics. We provide a method of computing the extended well-founded semantics and show that the Kripke-Kleene semantics is captured by considering a skeptical assumption. We also show that the complexity of computing our semantics is polynomial time.
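
As a concrete instance of the multivalued setting, a sketch of Belnap's four-valued bilattice (the smallest non-trivial bilattice), on which such semantics can be interpreted; the paper's construction is more general:

```python
# Belnap's bilattice FOUR: a value is a pair (evidence-for, evidence-against),
# yielding NONE (missing), TRUE, FALSE and BOTH (inconsistent) information.
NONE, TRUE, FALSE, BOTH = (0, 0), (1, 0), (0, 1), (1, 1)

def and_t(a, b):   # meet in the truth order (logical 'and')
    return (min(a[0], b[0]), max(a[1], b[1]))

def or_t(a, b):    # join in the truth order (logical 'or')
    return (max(a[0], b[0]), min(a[1], b[1]))

def join_k(a, b):  # join in the knowledge order: pool evidence from two sources
    return (max(a[0], b[0]), max(a[1], b[1]))

print(and_t(TRUE, NONE) == NONE)    # 'and' with missing information stays missing
print(join_k(TRUE, FALSE) == BOTH)  # conflicting evidence yields inconsistency
```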

Feature-Based Segmentation of Color Textured Images using GLCM and Markov Random Field Model

In this paper, we propose a new image segmentation approach for colour textured images. The proposed method consists of two stages. In the first stage, textural features using the gray level co-occurrence matrix (GLCM) are computed for the regions of interest (ROI) considered for each class; the ROIs act as ground truth for the classes. The Ohta model (I1, I2, I3) is the colour model used for segmentation. The statistical mean feature of the I2 component at a certain inter-pixel distance (IPD) was found to be the optimal textural feature for further segmentation. In the second stage, the feature matrix obtained is assumed to be a degraded version of the image labels, and a Markov Random Field (MRF) model is used to model the unknown image labels. The labels are estimated through the maximum a posteriori (MAP) estimation criterion using the ICM algorithm. The performance of the proposed approach is compared with that of existing schemes, JSEG and another scheme which uses GLCM and MRF in the RGB colour space. The proposed method is found to outperform the existing ones in terms of segmentation accuracy with an acceptable rate of convergence. The results are validated on synthetic and real textured images.
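
A minimal sketch of the first-stage feature, assuming the standard Ohta transform I2 = (R - B)/2 and the common GLCM mean definition; the random image stands in for a textured ROI:

```python
import numpy as np
from skimage.feature import graycomatrix

rgb = np.random.randint(0, 256, (32, 32, 3))
I2 = (rgb[..., 0].astype(float) - rgb[..., 2]) / 2   # Ohta I2 component
I2q = np.round(I2 - I2.min()).astype(np.uint8)       # shift/quantize to gray levels

glcm = graycomatrix(I2q, distances=[2], angles=[0],  # inter-pixel distance = 2
                    levels=int(I2q.max()) + 1, normed=True)
p = glcm[:, :, 0, 0]
i = np.arange(p.shape[0])
glcm_mean = float((i[:, None] * p).sum())            # mean feature of the GLCM
print(glcm_mean)
```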

Weight-Based Query Optimization System Using Buffer

Fast retrieval of data is a user need in any database application. This paper introduces a buffer-based query optimization technique in which queries are assigned weights according to their number of executions in a query bank. These queries and their optimized execution plans are loaded into the buffer at the start of the database application. For every query, the system searches for a match in the buffer and executes the stored plan without creating a new one.
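
A minimal sketch of the described scheme, with placeholder strings standing in for real plan objects and a hypothetical query bank:

```python
from collections import Counter

query_bank = ["SELECT * FROM t1", "SELECT * FROM t2", "SELECT * FROM t1"]
weights = Counter(query_bank)             # weight = number of executions

BUFFER_SIZE = 2                           # preload the top-weighted queries
buffer = {q: f"<optimized plan for {q}>"  # placeholder for the real optimizer
          for q, _ in weights.most_common(BUFFER_SIZE)}

def execute(query):
    if query in buffer:                   # buffer hit: reuse the stored plan
        return buffer[query]
    return f"<new plan for {query}>"      # buffer miss: optimize from scratch

print(execute("SELECT * FROM t1"))
```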