An Improved Resource Discovery Approach Using P2P Model for Condor: A Grid Middleware

Resource discovery in Grids is critical for efficient resource allocation and management. The heterogeneous nature and dynamic availability of resources make resource discovery a challenging task. As the number of nodes grows from tens to thousands, scalability becomes essential. Peer-to-Peer (P2P) techniques, on the other hand, provide an effective means of implementing scalable services and applications. In this paper we propose a model for resource discovery in the Condor middleware using the four-axis framework defined for P2P approaches. The proposed model enhances Condor to incorporate the functionality of a P2P system, thus aiming to make Condor more scalable, flexible, reliable and robust.
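
As a rough illustration of what P2P-style resource discovery involves (a generic sketch, not Condor's ClassAd matchmaking; all names and attributes below are hypothetical), the following Python fragment floods an attribute-match query through a small peer overlay with a time-to-live:

    # Minimal sketch of decentralized resource discovery over a peer overlay.
    # All names and structures here are illustrative, not Condor APIs.
    from dataclasses import dataclass, field

    @dataclass
    class Peer:
        name: str
        resources: dict                         # advertised attributes, e.g. {"cpus": 8}
        neighbors: list = field(default_factory=list)

    def discover(start: Peer, requirements: dict, ttl: int = 3, seen=None):
        """Flood a resource query through the overlay, collecting matching peers."""
        seen = seen if seen is not None else set()
        if start.name in seen or ttl < 0:
            return []
        seen.add(start.name)
        matches = []
        if all(start.resources.get(k, 0) >= v for k, v in requirements.items()):
            matches.append(start.name)
        for nb in start.neighbors:              # forward with decremented TTL
            matches += discover(nb, requirements, ttl - 1, seen)
        return matches

    # Tiny overlay: a <-> b <-> c
    a = Peer("a", {"cpus": 4, "mem_gb": 8})
    b = Peer("b", {"cpus": 8, "mem_gb": 32})
    c = Peer("c", {"cpus": 16, "mem_gb": 64})
    a.neighbors, b.neighbors, c.neighbors = [b], [a, c], [b]
    print(discover(a, {"cpus": 8}))             # ['b', 'c']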

Target Detection using Adaptive Progressive Thresholding Based Shifted Phase-Encoded Fringe-Adjusted Joint Transform Correlator

A new target detection technique is presented in this paper for the identification of small boats in coastal surveillance. The proposed technique employs an adaptive progressive thresholding (APT) scheme that first processes the given input scene to separate any objects present in the scene from the background. This preprocessing step yields an image containing only the foreground objects, such as boats, trees and other cluttered regions, and hence significantly reduces the search region for the correlation step. The processed image is then fed to the shifted phase-encoded fringe-adjusted joint transform correlator (SPFJTC), which produces a single, delta-like correlation peak for each potential target present in the input scene. A post-processing step then uses the peak-to-clutter ratio (PCR) to determine whether the boat in the input scene is authorized or unauthorized. Simulation results show that the proposed technique can successfully determine the presence of an authorized boat and identify any intruding boat present in the given input scene.
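
For illustration, the following Python sketch shows the two generic ideas named in the abstract: an iteratively refined (progressive) global threshold and a peak-to-clutter ratio. It is a reconstruction under simple assumptions, not the authors' APT/SPFJTC implementation, and the simulated scene and correlation plane are invented:

    import numpy as np

    def progressive_threshold(img, eps=0.5):
        """Iteratively refine a global threshold until it stabilizes."""
        t = img.mean()
        while True:
            fg, bg = img[img > t], img[img <= t]
            t_new = 0.5 * (fg.mean() + bg.mean())
            if abs(t_new - t) < eps:
                return t_new
            t = t_new

    def peak_to_clutter_ratio(corr):
        """Ratio of the correlation peak to the mean of the remaining plane."""
        peak = corr.max()
        clutter = (corr.sum() - peak) / (corr.size - 1)
        return peak / clutter

    rng = np.random.default_rng(0)
    scene = rng.normal(100, 10, (64, 64))
    scene[20:30, 20:30] += 80                  # bright "boat" region
    t = progressive_threshold(scene)
    mask = scene > t                           # foreground objects only
    print(t, mask.sum())

    corr = np.zeros((32, 32)); corr[16, 16] = 1.0; corr += 0.01  # toy correlation plane
    print(peak_to_clutter_ratio(corr))         # high PCR -> likely target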

Wavelet Based Qualitative Assessment of Femur Bone Strength Using Radiographic Imaging

In this work, the primary compressive strength components of human femur trabecular bone are qualitatively assessed using image processing and wavelet analysis. The Primary Compressive (PC) component in planar radiographic femur trabecular images (N=50) is delineated by a semi-automatic image processing procedure. An automatic threshold binarization algorithm is employed to recognize the presence of mineralization in the digitized images. Qualitative parameters such as apparent mineralization and total area associated with the PC region are derived for normal and abnormal images. The two-dimensional discrete wavelet transform is utilized to obtain appropriate features that quantify texture changes in medical images. The normal and abnormal samples of the human femur are comprehensively analyzed using the Haar wavelet. Six statistical parameters, namely mean, median, mode, standard deviation, mean absolute deviation and median absolute deviation, are derived at level-4 decomposition for both the approximation and horizontal wavelet coefficients. The correlation coefficients of the various wavelet-derived parameters with the normal and abnormal groups are estimated for both the approximation and horizontal coefficients. In almost all cases, the abnormal samples show a higher degree of correlation than the normal ones. Further, the parameters derived from the approximation coefficients show higher correlation than those derived from the horizontal coefficients. The mean and median computed at the output of the level-4 Haar wavelet channel were found to be useful predictors for delineating the normal and abnormal groups.
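
The decomposition and statistics described here are straightforward to reproduce with PyWavelets; the following sketch (using random data in place of a radiograph) extracts the level-4 Haar approximation and horizontal coefficients and computes the six statistical parameters:

    import numpy as np
    import pywt

    def stats(c):
        """Mean, median, mode (histogram bin), std, mean/median absolute deviation."""
        c = c.ravel()
        hist, edges = np.histogram(c, bins=50)
        mode = 0.5 * (edges[hist.argmax()] + edges[hist.argmax() + 1])
        return {
            "mean": c.mean(), "median": np.median(c), "mode": mode,
            "std": c.std(),
            "mad_mean": np.abs(c - c.mean()).mean(),
            "mad_median": np.median(np.abs(c - np.median(c))),
        }

    img = np.random.default_rng(1).random((256, 256))  # stand-in for a radiograph ROI
    coeffs = pywt.wavedec2(img, "haar", level=4)
    cA4 = coeffs[0]                   # level-4 approximation coefficients
    cH4 = coeffs[1][0]                # level-4 horizontal detail coefficients
    print(stats(cA4)["mean"], stats(cH4)["median"])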

Action Recognition in Video Sequences using a Mealy Machine

In this paper, the use of sequential machines is proposed for recognizing actions performed by objects detected by a general tracking algorithm. The system must deal with the uncertainty inherent in medium-level vision data; for this purpose, the input data are fuzzified. Moreover, this transformation allows the data to be managed independently of the selected tracking application and enables characteristics of the analyzed scenario to be added. The representation of actions by means of an automaton and the generation of the input symbols for the finite automaton, depending on the object and action being compared, are described. The output of the comparison process between an object and an action is a numerical value representing the membership of the object to the action, computed according to how similar the object and the action are. The work concludes with the application of the proposed technique to identify the behavior of vehicles in road traffic scenes.
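
A minimal sketch of the idea, assuming invented states, input symbols and membership values, is the following Mealy-style machine whose transitions consume fuzzified symbols and emit a membership value for the recognized action:

    # States, symbols and memberships below are illustrative, not the paper's.
    TRANSITIONS = {
        # (state, input_symbol) -> (next_state, output_membership)
        ("idle",    "approaching"): ("closing", 0.6),
        ("closing", "approaching"): ("closing", 0.8),
        ("closing", "stopped"):     ("queued",  1.0),  # action "queue up" recognized
        ("closing", "receding"):    ("idle",    0.0),
    }

    def run(symbols, state="idle"):
        """Feed a symbol sequence; return the final membership of the action."""
        membership = 0.0
        for s in symbols:
            state, membership = TRANSITIONS.get((state, s), ("idle", 0.0))
        return membership

    # A vehicle approaches twice, then stops: full membership of "queue up".
    print(run(["approaching", "approaching", "stopped"]))  # 1.0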

Electronic Government in the GCC Countries

The study investigated the practices of organisations in Gulf Cooperation Council (GCC) countries with regard to G2C e-government maturity. It reveals that e-government G2C initiatives in the surveyed countries in particular, and arguably around the world in general, are progressing slowly because of the lack of a trusted and secure medium for authenticating the identities of online users. The authors conclude that national ID schemes will play a major role in helping governments reap the benefits of e-government if the three advanced technologies of smart cards, biometrics and public key infrastructure (PKI) are utilised to provide a reliable and trusted authentication medium for e-government services.

Face Authentication for Access Control based on SVM using Class Characteristics

Face authentication for access control is a face membership authentication task that admits an incoming person if face recognition identifies him or her as one of the enrolled members, and rejects the person otherwise. Face membership authentication is a two-class classification problem to which the SVM (Support Vector Machine) has been successfully applied, showing better performance than conventional threshold-based classification. However, most previous SVMs have been trained using image feature vectors extracted from the face images of each class member (enrolled class/unenrolled class), so they are not robust to variations in illumination, pose and facial expression, and are strongly affected by changes in the member configuration of the enrolled class. In this paper, we propose an effective face membership authentication method based on an SVM using class-discriminating features, which distinctively represent an incoming face image's associability with each class. These class-discriminating features are only weakly related to the image features and are therefore less affected by variations in illumination, pose and facial expression. Experiments show that the proposed face membership authentication method performs better than the threshold-rule-based or conventional SVM-based authentication methods and is relatively less affected by changes in member size and membership.
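
As a minimal sketch of the two-class SVM accept/reject decision (with synthetic vectors standing in for the paper's class-discriminating features), consider:

    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    enrolled   = rng.normal(+1.0, 0.5, (100, 8))  # features of enrolled faces
    unenrolled = rng.normal(-1.0, 0.5, (100, 8))  # features of unenrolled faces
    X = np.vstack([enrolled, unenrolled])
    y = np.array([1] * 100 + [0] * 100)           # 1 = accept, 0 = reject

    clf = SVC(kernel="rbf").fit(X, y)

    probe = rng.normal(1.0, 0.5, (1, 8))          # an incoming face's feature vector
    print("accept" if clf.predict(probe)[0] == 1 else "reject")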

Efficient Boosting-Based Active Learning for Specific Object Detection Problems

In this work, we present a novel active learning approach for learning a visual object detection system. Our system is composed of an active learning mechanism acting as a wrapper around a sub-algorithm that implements an online boosting-based object detector. At its core is a combination of a bootstrap procedure and a semi-automatic learning process based on online boosting. The idea is to exploit the availability of the classifier during learning to automatically label training samples and incrementally improve the classifier, which reduces the labeling effort while obtaining better performance. In addition, we propose a verification process that further improves the classifier: re-updating on previously seen data during learning stabilizes the detector. The main contribution of this empirical study is a demonstration that an active learning system based on an online boosting approach trained in this manner can achieve results comparable to, or even better than, a framework trained in the conventional manner using much more labeling effort. Empirical experiments on challenging data sets for specific object detection problems show the effectiveness of our approach.
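
A toy sketch of the semi-automatic labeling loop, with a logistic regression standing in for the online boosted detector and synthetic data, might look like this:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X_seed = rng.normal(0, 1, (40, 5))
    y_seed = (X_seed.sum(1) > 0).astype(int)
    clf = LogisticRegression().fit(X_seed, y_seed)    # bootstrap on a small seed set

    X_pool = rng.normal(0, 1, (500, 5))               # unlabeled stream
    X_train, y_train = X_seed, y_seed
    for x in X_pool:
        p = clf.predict_proba(x[None])[0]
        if p.max() > 0.9:                             # confident: self-label, re-train
            X_train = np.vstack([X_train, x[None]])
            y_train = np.append(y_train, p.argmax())
            clf.fit(X_train, y_train)
        # otherwise a human oracle would be asked for the label (omitted here)
    print(len(y_train) - len(y_seed), "samples self-labeled")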

Discovering Complex Regularities: from Tree to Semi-Lattice Classifications

Data mining uses a variety of techniques, each of which is useful for particular tasks. It is important to have a deep understanding of each technique and to be able to perform sophisticated analysis. In this article we describe a tool built to simulate a variation of the Kohonen network to perform unsupervised clustering and support the entire data mining process up to results visualization. A graphical representation helps the user find a strategy to optimize classification by adding, moving or deleting a neuron in order to change the number of classes. The tool is able to automatically suggest a strategy to optimize the number of classes, and also supports both tree classifications and semi-lattice organizations of the classes, giving users the possibility of passing from one class to the ones with which it has some aspects in common. Examples of using tree and semi-lattice classifications are given to illustrate advantages and problems. The tool is applied to classify macroeconomic data reporting the most developed countries' imports and exports. It is possible to classify the countries based on their economic behaviour and to use the tool to characterize the commercial behaviour of a country in a selected class from the analysis of the positive and negative features that contribute to class formation. Possible interrelationships between the classes and their meaning are also discussed.
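
The Kohonen-style update underlying such a tool can be sketched as follows (a minimal 1-D self-organizing map on invented data; the tool's actual network and interface are more elaborate). Each input pulls its best-matching neuron, and more weakly its neighbors, toward itself; the neuron count controls the number of classes:

    import numpy as np

    rng = np.random.default_rng(0)
    data = rng.random((200, 4))        # e.g. import/export indicators per country
    neurons = rng.random((6, 4))       # 6 neurons -> up to 6 classes

    for epoch in range(50):
        lr = 0.5 * (1 - epoch / 50)                          # decaying learning rate
        for x in data:
            bmu = np.linalg.norm(neurons - x, axis=1).argmin()  # best matching unit
            neurons[bmu] += lr * (x - neurons[bmu])
            for nb in (bmu - 1, bmu + 1):                    # 1-D neighborhood
                if 0 <= nb < len(neurons):
                    neurons[nb] += 0.3 * lr * (x - neurons[nb])

    classes = np.linalg.norm(data[:, None] - neurons[None], axis=2).argmin(1)
    print(np.bincount(classes, minlength=6))                 # class sizes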

A Similarity Measure for Clustering and its Applications

This paper introduces a measure of similarity between two clusterings of the same dataset produced by two different algorithms, or even by the same algorithm (K-means, for instance, with different initializations usually produces different results in clustering the same dataset). We then apply the measure to calculate the similarity between pairs of clusterings, with special interest directed at comparing the similarity between various machine clusterings and human clusterings of datasets. The similarity measure can thus be used to identify the best (in the sense of most similar to human) clustering algorithm for a specific problem at hand. Experimental results pertaining to the text categorization of a Portuguese corpus (wherein a translation-into-English approach is used) are presented, as well as results on the well-known benchmark IRIS dataset. The significance and other potential applications of the proposed measure are discussed.
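
The paper's specific measure is not reproduced here, but as a baseline illustration of comparing two clusterings of the same dataset, the classic pair-counting (Rand) similarity counts the fraction of item pairs on which the two clusterings agree (together in both, or separated in both):

    from itertools import combinations

    def rand_similarity(labels_a, labels_b):
        agree = total = 0
        for i, j in combinations(range(len(labels_a)), 2):
            same_a = labels_a[i] == labels_a[j]
            same_b = labels_b[i] == labels_b[j]
            agree += same_a == same_b
            total += 1
        return agree / total

    kmeans_run1 = [0, 0, 1, 1, 2, 2]
    kmeans_run2 = [1, 1, 0, 0, 0, 2]   # same data, different initialization
    print(rand_similarity(kmeans_run1, kmeans_run2))   # 0.8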

Clustered Signatures for Modeling and Recognizing 3D Rigid Objects

This paper describes a probabilistic method for three-dimensional object recognition using a shared pool of surface signatures. The technique uses flatness, orientation, and convexity signatures that encode the surface of a free-form object into three discriminative vectors, and then creates a shared pool of data by clustering the signatures using a distance function. The method applies Bayes' rule in the recognition process and is extensible to large collections of three-dimensional objects.
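
A hypothetical sketch of the recognition step, assuming invented likelihoods of each object producing each signature cluster, shows how Bayes' rule turns observed clusters into a posterior over objects:

    import numpy as np

    # P(cluster | object): rows = objects, columns = shared signature clusters
    # (values invented for illustration)
    likelihood = np.array([[0.7, 0.2, 0.1],   # object A
                           [0.1, 0.6, 0.3],   # object B
                           [0.2, 0.2, 0.6]])  # object C
    prior = np.array([1/3, 1/3, 1/3])

    observed = [0, 0, 2]                      # clusters hit by the probe's signatures
    post = prior.copy()
    for c in observed:
        post *= likelihood[:, c]              # accumulate evidence
        post /= post.sum()                    # renormalize (Bayes' rule)
    print(post)                               # highest mass on object A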

Performance Analysis of Quantum Cascade Lasers

Improving the performance of the quantum cascade laser (QCL) through block-diagram as well as mathematical models is the main scope of this paper. To enhance the performance of the device, the mathematical model parameters are chosen carefully so that optimum behavior is achieved; these parameters play the central role in specifying the optical characteristics of the considered laser source. Moreover, a large radiated power is important, and increasing the radiated power is governed by the electron hopping process that can be predicted from the behavior of quantum laser devices. It was found that there is good agreement between the values calculated from our mathematical model, those obtained with VisSim, and the experimental results, which demonstrates the soundness of both the mathematical and block-diagram models.

Towards a Biometric Card in Romania: Person Identification by Face, Fingerprint and Voice Recognition

In this paper, three different approaches to person verification and identification, by means of fingerprint, face and voice recognition, are studied. Face recognition uses parts-based representation methods and a manifold learning approach; the assessment criterion is recognition accuracy. The techniques under investigation are: a) Local Non-negative Matrix Factorization (LNMF); b) Independent Component Analysis (ICA); c) NMF with sparseness constraints (NMFsc); d) Locality Preserving Projections (Laplacianfaces). Fingerprint detection is approached by classical minutiae (small graphical patterns) matching through image segmentation, using a structural approach and a neural network as the decision block. For voice/speaker recognition, mel-cepstral and delta-delta mel-cepstral analysis are used as the main methods in order to construct a supervised speaker-dependent voice recognition system. The final decision (e.g. "accept/reject" for a verification task) is taken by applying a majority voting technique to the three biometrics. The preliminary results, obtained for medium-sized databases of fingerprints, faces and voice recordings, indicate the feasibility of our study, with an overall recognition precision of about 92% permitting the use of our system in a future complex biometric card.
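
The decision-fusion stage is simple to state; a minimal sketch of two-out-of-three majority voting over the per-modality verdicts is:

    def majority_vote(face_ok: bool, fingerprint_ok: bool, voice_ok: bool) -> str:
        """Accept when at least two of the three biometric modalities agree."""
        votes = sum([face_ok, fingerprint_ok, voice_ok])
        return "accept" if votes >= 2 else "reject"

    print(majority_vote(True, True, False))   # accept: 2 of 3 modalities agree
    print(majority_vote(True, False, False))  # reject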

Using Genetic Algorithm to Improve Information Retrieval Systems

This study investigates the use of genetic algorithms in information retrieval. The method is shown to be applicable to three well-known document collections, where the genetic modification presents more relevant documents to users. In this paper we present a new fitness function for approximate information retrieval that is faster and more flexible than the cosine similarity fitness function.
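
The paper's fitness function itself is not reproduced here; the following generic GA sketch evolves query term weights against a placeholder fitness (relevant documents retrieved in the top 10) purely to illustrate the approach:

    import numpy as np

    rng = np.random.default_rng(0)
    docs = rng.random((50, 10))               # 50 documents, 10 index terms
    relevant = set(range(5))                  # known relevant docs (toy ground truth)

    def fitness(weights):
        scores = docs @ weights               # score docs by weighted terms
        top10 = set(np.argsort(scores)[-10:])
        return len(top10 & relevant)          # relevant docs retrieved in top 10

    pop = rng.random((20, 10))                # initial population of weight vectors
    for gen in range(30):
        fit = np.array([fitness(w) for w in pop])
        parents = pop[np.argsort(fit)[-10:]]  # selection: keep the fittest half
        children = (parents[rng.integers(0, 10, 10)]
                    + parents[rng.integers(0, 10, 10)]) / 2      # crossover
        children += rng.normal(0, 0.05, children.shape)          # mutation
        pop = np.vstack([parents, children])
    print(max(fitness(w) for w in pop))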

Multimedia E-Books for Digital Mechanism and Gear Library

This paper presents a digital engineering library, the Digital Mechanism and Gear Library (DMG-Lib), providing a multimedia collection of e-books, pictures, videos and animations in the domain of mechanisms and machines. The distinguishing characteristic of DMG-Lib is the enrichment and cross-linking of the different sources. DMG-Lib e-books present not only pages as pixel images but also selected figures augmented with interactive animations; the presentation of animations in e-books increases the clarity of the information. To present the multimedia e-books and make them available in the DMG-Lib internet portal, a special e-book reader called StreamBook was developed for the optimal presentation of digitized books, enabling users to read the e-books as well as to work efficiently and individually with the enriched information. The objective is to support different user tasks ranging from information retrieval to the development and design of mechanisms.

A New Framework for Evaluation and Prioritization of Suppliers using a Hierarchical Fuzzy TOPSIS

This paper suggests an algorithm for the evaluation and selection of suppliers. At the beginning, all the materials and services used by the organization were identified and categorized according to their nature by the ABC method. Afterwards, in order to reduce risk factors and maximize the organization's profit, purchase strategies were determined. Then, appropriate criteria were identified for the primary evaluation of suppliers applying to the organization. The output of this stage was a list of suppliers qualified by the organization to participate in its tenders. Subsequently, for a particular material, appropriate ordering criteria were determined, taking into account the material's specifications as well as the organization's needs. Finally, for the purpose of validation and verification, the proposed model was applied to Mobarakeh Steel Company (MSC), where the qualified suppliers of this company were ranked by means of a hierarchical fuzzy TOPSIS method. The obtained results show that the proposed algorithm is effective, efficient and easy to apply.
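
For illustration, a crisp (non-fuzzy) TOPSIS ranking over an invented decision matrix can be sketched as follows; the paper's hierarchical fuzzy variant adds fuzzy scores and a criteria hierarchy on top of this core computation:

    import numpy as np

    X = np.array([[7., 9., 9.],     # supplier 1: scores on 3 benefit criteria
                  [8., 7., 8.],     # supplier 2
                  [9., 6., 7.]])    # supplier 3
    w = np.array([0.5, 0.3, 0.2])   # criteria weights (invented)

    R = X / np.linalg.norm(X, axis=0)          # vector normalization
    V = R * w                                  # weighted normalized matrix
    ideal, anti = V.max(0), V.min(0)           # ideal / anti-ideal (benefit criteria)
    d_pos = np.linalg.norm(V - ideal, axis=1)  # distance to ideal solution
    d_neg = np.linalg.norm(V - anti, axis=1)   # distance to anti-ideal solution
    closeness = d_neg / (d_pos + d_neg)
    print(np.argsort(-closeness) + 1)          # supplier ranking, best first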

Design of Digital Differentiator to Optimize Relative Error

It is observed that the weighted least-squares (WLS) technique, including its modifications, results in an equiripple error curve: the error as a percentage of the ideal value is highly non-uniformly distributed over the range of frequencies for which the differentiator is designed. This paper proposes a modification to the technique so that the optimization procedure results in a lower maximum relative error with respect to the ideal values. Simulation results for first-order as well as higher-order differentiators are given to illustrate the excellent performance of the proposed method.
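
The relative-error idea can be illustrated with a least-squares design in which each frequency is weighted by 1/omega, so that the error relative to the ideal differentiator response omega is what gets minimized (the grid, order and band edge below are illustrative, not the paper's):

    import numpy as np

    M = 8                                        # half-order (antisymmetric, type III)
    omega = np.linspace(0.05, 0.9 * np.pi, 400)  # design grid (avoid omega = 0)
    A = np.sin(np.outer(omega, np.arange(1, M + 1)))  # amplitude basis: sin(k*omega)
    target = omega                               # ideal differentiator response
    weight = 1.0 / omega                         # relative-error weighting

    # Weighted least squares: minimize || W (A b - target) ||^2
    b, *_ = np.linalg.lstsq(A * weight[:, None], target * weight, rcond=None)

    rel_err = np.abs(A @ b - target) / target
    print("max relative error: %.2e" % rel_err.max())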

Adaptive Kernel Filtering Used in Video Processing

In this paper we present a noise reduction filter for video processing. It is based on the recently proposed two-dimensional steering kernel, extended to three dimensions and further augmented to suit the spatio-temporal domain of video processing. Two alternative filters are proposed: the time-symmetric kernel and the time-asymmetric kernel. The first reduces the noise within single sequences, but to handle the problems at scene shifts the asymmetric kernel is introduced. The performance of both is tested on simulated data and on a real video sequence, together with the existing steering kernel. The proposed kernels improve the root mean squared error (RMSE) compared to the original steering kernel method on video material.
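
As a heavily simplified stand-in (a fixed, non-adaptive space-time Gaussian kernel rather than the data-adaptive steering kernel), the following sketch shows the basic weighted-average structure such filters share:

    import numpy as np

    def st_filter(video, r=2, sigma_s=1.5, sigma_t=1.0):
        """Weighted average over a space-time neighborhood of radius r."""
        T, H, W = video.shape
        out = np.empty_like(video, dtype=float)
        dt, dy, dx = np.mgrid[-r:r+1, -r:r+1, -r:r+1]
        w = np.exp(-(dy**2 + dx**2) / (2*sigma_s**2) - dt**2 / (2*sigma_t**2))
        pad = np.pad(video.astype(float), r, mode="edge")
        for t in range(T):
            for y in range(H):
                for x in range(W):
                    patch = pad[t:t+2*r+1, y:y+2*r+1, x:x+2*r+1]
                    out[t, y, x] = (w * patch).sum() / w.sum()
        return out

    noisy = np.random.default_rng(0).normal(0.5, 0.1, (5, 16, 16))
    print(st_filter(noisy).std() < noisy.std())   # True: noise variance reduced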

Parallel Joint Channel Coding and Cryptography

A method of parallel joint channel coding and cryptography is analyzed and simulated in this paper. The method is an extension of Soft Input Decryption with feedback, which is used to improve the channel decoding of secured messages. Parallel joint channel coding and cryptography results in an improved coding gain for channel decoding of more than 2 dB. These results stem from the combination of the receiver components and their interoperability.

Increased Capacity of Information Hiding in the LSB Method for Text and Image

Steganography, derived from Greek, literally means "covered writing". It encompasses a vast array of secret communication methods that conceal the message's very existence, including invisible inks, microdots, character arrangement, digital signatures, covert channels, and spread spectrum communications. This paper proposes a new, improved version of the Least Significant Bit (LSB) method. The proposed approach is simpler to implement than the Pixel Value Differencing (PVD) method, yet achieves high embedding capacity and imperceptibility. The proposed method can also be applied to 24-bit color images, achieving an embedding capacity much higher than that of PVD.
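
The classic LSB baseline that this family of methods improves upon fits in a few lines; the following sketch hides a text message in the least significant bits of pixel bytes and reads it back:

    import numpy as np

    def embed(pixels, message: bytes):
        """Overwrite the LSB of successive pixel bytes with the message bits."""
        bits = np.unpackbits(np.frombuffer(message, dtype=np.uint8))
        flat = pixels.ravel().copy()
        flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits
        return flat.reshape(pixels.shape)

    def extract(pixels, n_bytes: int):
        """Read the first 8 * n_bytes LSBs back into bytes."""
        bits = pixels.ravel()[:8 * n_bytes] & 1
        return np.packbits(bits).tobytes()

    cover = np.random.default_rng(0).integers(0, 256, (64, 64), dtype=np.uint8)
    stego = embed(cover, b"secret")
    print(extract(stego, 6))                              # b'secret'
    print(int(np.abs(stego.astype(int) - cover).max()))   # distortion <= 1 per pixel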

Novel and Different Definitions for Fuzzy Union and Intersection Operations

This paper presents three new methodologies for the basic fuzzy set operations, aimed at finding new ways of computing union (maximum) and intersection (minimum) membership values by taking into account all the membership values in a fuzzy set. The new methodologies are conceptually simple and easy to apply, and are illustrated with a variety of problems such as the Cartesian product of two fuzzy sets, the max-min composition of two fuzzy sets in different product spaces, and an inverted pendulum application used to determine the impact of the new methodologies. The results clearly indicate a difference based on the nature of the fuzzy sets under consideration and hence will be highly useful in applications where different values have a significant impact on the behavior of the system.
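
For reference, the standard operations the paper proposes alternatives to are the pointwise maximum/minimum and the max-min composition, sketched below on small invented sets and relations:

    import numpy as np

    A = np.array([0.2, 0.7, 1.0])          # membership values of fuzzy set A
    B = np.array([0.5, 0.4, 0.9])          # membership values of fuzzy set B
    union        = np.maximum(A, B)        # classical fuzzy union
    intersection = np.minimum(A, B)        # classical fuzzy intersection

    R = np.array([[0.8, 0.3], [0.2, 0.9]]) # fuzzy relation X -> Y
    S = np.array([[0.6, 0.5], [0.7, 0.1]]) # fuzzy relation Y -> Z
    # max-min composition: (R o S)[i][k] = max_j min(R[i][j], S[j][k])
    RoS = np.max(np.minimum(R[:, :, None], S[None, :, :]), axis=1)
    print(union, intersection, RoS, sep="\n")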