Levenberg-Marquardt Algorithm for Karachi Stock Exchange Share Rates Forecasting

Financial forecasting is an example of a signal processing problem. A number of methods are available for training the network; we have used the Levenberg-Marquardt algorithm with error back-propagation for weight adjustment. Pre-processing of the data has reduced much of the large-scale variation to a smaller scale, reducing the variation of the training data.
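To make the weight-adjustment step concrete, here is a minimal NumPy sketch of a single Levenberg-Marquardt update, shown on a toy linear fit; the residuals, Jacobian, damping value mu and toy model are illustrative assumptions, not the network or data used in the paper.

```python
# Hypothetical sketch: one Levenberg-Marquardt update, illustrated on a toy fit.
import numpy as np

def lm_step(w, e, J, mu=1e-2):
    """One LM weight update: dw = -(J^T J + mu I)^-1 J^T e."""
    H = J.T @ J + mu * np.eye(J.shape[1])   # damped Gauss-Newton Hessian
    dw = -np.linalg.solve(H, J.T @ e)       # solve instead of an explicit inverse
    return w + dw

# toy fit of y = a*x + b to illustrate the update rule
x = np.linspace(0, 1, 20)
y = 2.0 * x + 1.0
w = np.zeros(2)                              # [a, b]
for _ in range(10):
    e = (w[0] * x + w[1]) - y                # residuals
    J = np.stack([x, np.ones_like(x)], 1)    # Jacobian of residuals w.r.t. [a, b]
    w = lm_step(w, e, J)
print(w)                                     # approaches [2.0, 1.0]
```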

A Multi-Layer Consistency Protocol for Replica Management in Large Scale Systems

Large-scale systems such as computational Grids are distributed computing infrastructures that can provide globally available network resources. The evolution of information processing systems in Data Grids is characterized by a strong decentralization of data across several sites, with the objective of ensuring the availability and reliability of the data and thereby providing fault tolerance and scalability, which is only possible through the use of replication techniques. Unfortunately, the use of these techniques has a high cost, because consistency must be maintained between the distributed replicas. Nevertheless, accepting to live with certain imperfections can improve the performance of the system by improving concurrency. In this paper, we propose a multi-layer protocol combining the pessimistic and optimistic approaches, conceived for data consistency maintenance in large-scale systems. Our approach is based on a hierarchical representation model with three layers and has a twofold objective: first, it reduces response times compared with a completely pessimistic approach, and second, it improves the quality of service compared with an optimistic approach.

The Application of Non-quantitative Modelling in the Analysis of a Network Warfare Environment

Network warfare is an emerging concept that focuses on the network and computer-based forms through which information is attacked and defended. Various computer and network security concepts thus play a role in network warfare. Due to the intricacy of the various interacting components, a model to better understand the complexity in a network warfare environment would be beneficial. Non-quantitative modelling is a useful method to better characterize the field due to the rich ideas that can be generated based on the use of secular associations, chronological origins, linked concepts, categorizations and context specifications. This paper proposes the use of non-quantitative methods, through a morphological analysis, to better explore and define the influential conditions in a network warfare environment.
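As a rough illustration of what a morphological analysis involves, the sketch below enumerates combinations of a few invented network-warfare parameters and filters them with a cross-consistency check; the parameter names and the inconsistent pairs are purely hypothetical examples, not the model proposed in the paper.

```python
# Illustrative morphological analysis: enumerate parameter-value combinations
# and discard those judged inconsistent. All names and pairs are invented.
from itertools import product

parameters = {
    "attack_vector": ["network", "physical", "social"],
    "target":        ["availability", "confidentiality", "integrity"],
    "defence":       ["firewall", "encryption", "awareness training"],
}

# cross-consistency assessment: value pairs assumed unable to co-occur
inconsistent = {("physical", "firewall"), ("social", "encryption")}

def consistent(config):
    values = set(config)
    return not any(set(pair) <= values for pair in inconsistent)

configurations = [c for c in product(*parameters.values()) if consistent(c)]
print(len(configurations), "consistent configurations")
```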

A Symbol by Symbol Clustering Based Blind Equalizer

A new blind symbol-by-symbol equalizer is proposed. The operation of the proposed equalizer is based on the geometric properties of the two-dimensional data constellation. An unsupervised clustering technique is used to locate the clusters formed by the received data. The symmetric properties of the cluster locations are subsequently utilized in order to label the clusters. Following this step, the received data are compared to the clusters and decisions are made on a symbol-by-symbol basis, by assigning to each data point the label of the nearest cluster. The operation of the equalizer is investigated in both linear and nonlinear channels. The performance of the proposed equalizer is compared to that of a CMA-based blind equalizer.
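A minimal sketch of the symbol-by-symbol decision idea, assuming a noisy QPSK constellation and k-means as the unsupervised clustering step; the labelling-by-symmetry rule shown (signs of the cluster centres) is a simplified stand-in for the procedure described in the paper.

```python
# Cluster the received 2-D constellation, then decide each symbol by the
# nearest cluster centre. Channel and noise model are illustrative.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
qpsk = np.array([[1, 1], [1, -1], [-1, 1], [-1, -1]], dtype=float)
symbols = qpsk[rng.integers(0, 4, 2000)]
received = symbols + 0.2 * rng.standard_normal((2000, 2))   # noisy channel output

km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(received)
# label each centre by the sign pattern of its coordinates (QPSK symmetry)
centre_labels = [tuple(np.sign(c).astype(int)) for c in km.cluster_centers_]

decisions = [centre_labels[i] for i in km.predict(received)]
print(decisions[:5])
```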

QoS Management in the Future Internet

Talk of technological convergence has been around for almost twenty years; today the Internet has made it possible. And this is not only a technical evolution: the way it has changed our lives is reflected in the variety of applications, services and technologies used in day-to-day life. Such benefits have imposed even more requirements on heterogeneous and unreliable IP networks. This paper outlines the QoS management system developed in the NetQoS [1] project. It describes the overall architecture of a management system for heterogeneous networks and proposes automated multi-layer QoS management. The paper focuses on the structure of the most crucial modules of the system, which enable autonomous, multi-layer provisioning and dynamic adaptation.

A Modular On-line Profit Sharing Approach in Multiagent Domains

How to coordinate the behaviors of agents through learning is a challenging problem within multi-agent domains. Because of its complexity, recent work has focused on how coordinated strategies can be learned. Here we are interested in using reinforcement learning techniques to learn the coordinated actions of a group of agents, without requiring explicit communication among them. However, traditional reinforcement learning methods are based on the assumption that the environment can be modeled as a Markov Decision Process, an assumption that usually cannot be satisfied when multiple agents coexist in the same environment. Moreover, to effectively coordinate each agent's behavior so as to achieve the goal, it is necessary to augment the state of each agent with information about the other existing agents. However, as the number of agents in a multiagent environment increases, the state space of each agent grows exponentially, which causes a combinatorial explosion problem. Profit sharing is one of the reinforcement learning methods that allow agents to learn effective behaviors from their experiences even within non-Markovian environments. In this paper, to remedy the drawback of the original profit sharing approach, which needs much memory to store each state-action pair during the learning process, we first present an on-line rational profit sharing algorithm. We then integrate the advantages of a modular learning architecture with the on-line rational profit sharing algorithm, and propose a new modular reinforcement learning model. The effectiveness of the technique is demonstrated using the pursuit problem.
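For readers unfamiliar with profit sharing, the following sketch shows the basic tabular credit-assignment rule: every state-action pair on an episode trace is reinforced with geometrically decaying credit when a reward arrives. The decay factor and the episode format are illustrative assumptions; the on-line rational variant and the modular architecture proposed in the paper are not shown.

```python
# Tabular profit sharing: reinforce the trace of an episode with decaying credit.
from collections import defaultdict

def profit_share(weights, episode, reward, decay=0.5):
    """episode: list of (state, action) pairs, most recent last."""
    credit = reward
    for state, action in reversed(episode):
        weights[(state, action)] += credit
        credit *= decay                     # older pairs receive less credit
    return weights

W = defaultdict(float)
episode = [("s0", "right"), ("s1", "right"), ("s2", "up")]
profit_share(W, episode, reward=1.0)
print(dict(W))  # {('s2','up'): 1.0, ('s1','right'): 0.5, ('s0','right'): 0.25}
```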

A New Implementation of PCA for Fast Face Detection

Principal Component Analysis (PCA) has many important applications, especially in pattern detection such as face detection and recognition. For real-time applications, the response time is therefore required to be as small as possible. In this paper, a new implementation of PCA for fast face detection is presented. The new implementation is based on cross-correlation in the frequency domain between the input image and the eigenvectors (weights). Simulation results show that the proposed implementation of PCA is faster than the conventional one.
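A small sketch of the frequency-domain idea: correlating one eigenvector (reshaped as an image) with the input image via FFT-based convolution rather than sliding dot products. The image sizes and random data are placeholders, and SciPy's fftconvolve stands in for whatever FFT routine the paper uses.

```python
# FFT-based cross-correlation of an "eigenface" template with an input image.
import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(0)
image = rng.standard_normal((128, 128))          # placeholder input image
eigenface = rng.standard_normal((16, 16))        # one PCA eigenvector as 16x16

# cross-correlation = convolution with the flipped template
scores = fftconvolve(image, eigenface[::-1, ::-1], mode="valid")
best = np.unravel_index(np.argmax(scores), scores.shape)
print("best matching window at", best)
```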

Automatic Removal of Ocular Artifacts using JADE Algorithm and Neural Network

The ElectroEncephaloGram (EEG) is useful for clinical diagnosis and biomedical research. EEG signals often contain strong ElectroOculoGram (EOG) artifacts produced by eye movements and eye blinks, especially in EEG recorded from frontal channels. These artifacts obscure the underlying brain activity, making its visual or automated inspection difficult. The goal of ocular artifact removal is to remove ocular artifacts from the recorded EEG, leaving the underlying background signals due to brain activity. In recent times, Independent Component Analysis (ICA) algorithms have demonstrated superior potential in obtaining the least dependent source components. In this paper, the independent components are obtained by using the JADE algorithm (the best separating algorithm) and are classified into either artifact components or neural components. A neural network is used for the classification of the obtained independent components. The neural network requires input features that accurately represent the true character of the input signals, so that it can classify the signals based on the key characteristics that differentiate between various signals. In this work, Auto Regressive (AR) coefficients are used as the input features for classification. Two neural network approaches are used to learn classification rules from the EEG data: first, a Polynomial Neural Network (PNN) trained by the GMDH (Group Method of Data Handling) algorithm, and second, a feed-forward neural network classifier trained by a standard back-propagation algorithm. The results show that JADE-FNN performs better than JADE-PNN.
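The sketch below shows one plausible way to extract AR coefficients from an independent component, using a Yule-Walker estimate; the model order and the estimator are illustrative choices, and the paper's exact feature pipeline may differ.

```python
# Yule-Walker estimate of AR coefficients for one (synthetic) component.
import numpy as np
from scipy.linalg import solve_toeplitz

def ar_coefficients(x, order=6):
    x = x - x.mean()
    # biased autocorrelation up to the requested lag
    r = np.array([np.dot(x[:len(x) - k], x[k:]) / len(x) for k in range(order + 1)])
    # Yule-Walker equations: R a = r[1:], with R symmetric Toeplitz
    return solve_toeplitz(r[:-1], r[1:])

component = np.sin(np.linspace(0, 20 * np.pi, 1000)) + 0.1 * np.random.randn(1000)
print(ar_coefficients(component))   # feature vector fed to the neural classifier
```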

Super Resolution Blind Reconstruction of Low Resolution Images using Wavelets based Fusion

Crucial information barely visible to the human eye is often embedded in a series of low resolution images taken of the same scene. Super resolution reconstruction is the process of combining several low resolution images into a single higher resolution image. The ideal algorithm should be fast, and should add sharpness and details, both at edges and in regions, without adding artifacts. In this paper we propose a super resolution blind reconstruction technique for linearly degraded images. The proposed algorithm is divided into three parts: image registration, wavelet-based fusion and image restoration. Three low resolution images are considered, which may be sub-pixel shifted, rotated, blurred or noisy. The sub-pixel shifted images are registered using an affine transformation model, a wavelet-based fusion is performed, and the noise is removed using soft thresholding. The proposed technique reduces blocking artifacts, smooths the edges, and is also able to restore high frequency details in an image. It is efficient and computationally fast, with a clear prospect of real time implementation.
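A compact sketch of the fusion and denoising stage only, assuming the three images are already registered: the largest-magnitude wavelet detail coefficients are kept across the inputs and then soft-thresholded. The wavelet, decomposition level and threshold are illustrative, and the registration and restoration stages are omitted.

```python
# Wavelet-domain fusion of registered images followed by soft thresholding.
import numpy as np
import pywt

def fuse_and_denoise(images, wavelet="db2", level=2, thresh=0.05):
    decomps = [pywt.wavedec2(img, wavelet, level=level) for img in images]
    fused = [np.mean([d[0] for d in decomps], axis=0)]          # average approximations
    for lvl in range(1, level + 1):
        bands = []
        for b in range(3):                                       # (H, V, D) details
            stack = np.stack([d[lvl][b] for d in decomps])
            pick = np.take_along_axis(stack, np.abs(stack).argmax(0)[None], 0)[0]
            bands.append(pywt.threshold(pick, thresh, mode="soft"))
        fused.append(tuple(bands))
    return pywt.waverec2(fused, wavelet)

imgs = [np.random.rand(64, 64) for _ in range(3)]   # placeholder registered inputs
print(fuse_and_denoise(imgs).shape)
```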

Design Techniques and Implementation of Low Power High-Throughput Discrete Wavelet Transform Filters for the JPEG 2000 Standard

In this paper, the implementation of low power, high throughput convolutional filters for the one dimensional Discrete Wavelet Transform and its inverse is presented. The analysis filters have already been used for the implementation of a high performance DWT encoder [15] with minimum memory requirements for the JPEG 2000 standard. This paper presents the design techniques and the implementation of the convolutional filters included in the JPEG 2000 standard for the forward and inverse DWT, targeting low-power operation, high performance and reduced memory accesses. Moreover, the filters are able to perform progressive computations so as to minimize the buffering between the decomposition and reconstruction phases. The experimental results illustrate the filters' low power, high throughput characteristics as well as their memory efficient operation.
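As a behavioural reference for one analysis stage of a convolutional DWT, the sketch below filters a signal with the JPEG 2000 reversible 5/3 analysis pair and downsamples by two; none of the hardware aspects (lifting, buffering, progressive computation) are modelled.

```python
# One analysis stage of a convolutional 1-D DWT with the LeGall 5/3 filters.
import numpy as np

lo = np.array([-1, 2, 6, 2, -1], dtype=float) / 8.0    # 5/3 analysis low-pass
hi = np.array([-1, 2, -1], dtype=float) / 2.0          # 5/3 analysis high-pass

def dwt_stage(x):
    approx = np.convolve(x, lo, mode="same")[::2]      # filter then downsample by 2
    detail = np.convolve(x, hi, mode="same")[::2]
    return approx, detail

x = np.sin(np.linspace(0, 4 * np.pi, 64))
a, d = dwt_stage(x)
print(a.shape, d.shape)   # (32,) (32,)
```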

Hardware Implementation of Stack-Based Replacement Algorithms

Block replacement algorithms to increase the hit ratio have been extensively used in cache memory management. Among the basic replacement schemes, LRU and FIFO have been shown to be effective replacement algorithms in terms of hit rates. In this paper, we introduce a flexible stack-based circuit which can be employed in hardware implementations of both the LRU and FIFO policies. We propose a simple and efficient architecture such that stack-based replacement algorithms can be implemented without the drawbacks of the traditional architectures. The stack is modular and hence a set of stack rows can be cascaded depending on the number of blocks in each cache set. Our circuit can be implemented in conjunction with the cache controller and static/dynamic memories to form a cache system. Experimental results show that the proposed circuit provides an average improvement of 26% in storage bits, and its maximum operating frequency is increased by a factor of two.
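A software behavioural model of the stack update that such a circuit would implement is sketched below: on a hit the referenced block is promoted to the top (LRU) or left in place (FIFO), and on a miss the bottom entry is the victim. This is only a functional reference, not a description of the proposed hardware.

```python
# Behavioural model of a stack-based replacement policy (LRU or FIFO).
class ReplacementStack:
    def __init__(self, ways, lru=True):
        self.stack = []          # index 0 = most recently used / most recently filled
        self.ways = ways
        self.lru = lru

    def access(self, block):
        if block in self.stack:
            if self.lru:                       # LRU promotes on hit; FIFO keeps order
                self.stack.remove(block)
                self.stack.insert(0, block)
            return "hit"
        victim = self.stack.pop() if len(self.stack) == self.ways else None
        self.stack.insert(0, block)
        return f"miss (evicted {victim})"

cache_set = ReplacementStack(ways=4, lru=True)
for b in ["A", "B", "C", "A", "D", "E", "B"]:
    print(b, cache_set.access(b))
```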

New Features for Specific JPEG Steganalysis

We present in this paper a new approach for specific JPEG steganalysis and propose studying the statistics of the compressed DCT coefficients. Traditionally, steganographic algorithms try to preserve the statistics of the DCT and of the spatial domain, but they cannot preserve both and also control the alteration of the compressed data. We have noticed a deviation of the entropy of the compressed data after a first embedding. This deviation is greater when the image is a cover medium than when the image is a stego image. To observe this deviation, we introduce new statistical features and combine them with the Multiple Embedding Method. This approach is motivated by the Avalanche Criterion of the JPEG lossless compression step. This criterion makes possible the design of detectors whose detection rates are independent of the payload. Finally, we designed a Fisher discriminant based classifier for the well known steganographic algorithms Outguess, F5 and Hide and Seek. The experimental results we obtained show the efficiency of our classifier for these algorithms. Moreover, it is also designed to work with low embedding rates (< 10^-5) and, according to the avalanche criterion of the RLE and Huffman compression steps, its efficiency is independent of the quantity of hidden information.
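To illustrate the entropy-deviation feature in isolation, the sketch below computes the Shannon entropy of a stand-in coefficient stream before and after a simulated further embedding; the random coefficients and the LSB-flip "embedding" are illustrative placeholders for real JPEG data and a real stego algorithm.

```python
# Entropy deviation of a coefficient stream after a simulated re-embedding.
import numpy as np

def entropy(values):
    _, counts = np.unique(values, return_counts=True)
    p = counts / counts.sum()
    return -(p * np.log2(p)).sum()

rng = np.random.default_rng(0)
coeffs = rng.integers(-8, 9, 10000)                   # stand-in DCT coefficients
flip = rng.random(coeffs.size) < 0.05                 # simulated further embedding
re_embedded = np.where(flip, coeffs ^ 1, coeffs)      # LSB flips as a toy stego step

print("entropy deviation:", abs(entropy(re_embedded) - entropy(coeffs)))
```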

Robust H∞ Fuzzy Control Design for Nonlinear Two-Time Scale Systems with Markovian Jumps Based on an LMI Approach

This paper examines the problem of designing a robust H∞ state-feedback controller for a class of nonlinear two-time scale systems with Markovian jumps described by a Takagi-Sugeno (TS) fuzzy model. Based on a linear matrix inequality (LMI) approach, LMI-based sufficient conditions for the uncertain Markovian jump nonlinear two-time scale systems to have an H∞ performance are derived. The proposed approach does not involve the separation of states into slow and fast ones, and it can be applied not only to standard but also to nonstandard nonlinear two-time scale systems. A numerical example is provided to illustrate the design developed in this paper.
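As a taste of the LMI machinery, the sketch below checks feasibility of a basic Lyapunov LMI (find P > 0 with AᵀP + PA < 0) with CVXPY; this toy condition and system matrix merely stand in for the much richer H∞ fuzzy conditions derived in the paper.

```python
# Toy LMI feasibility check with CVXPY: a Lyapunov inequality for a stable A.
import cvxpy as cp
import numpy as np

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # toy stable system matrix
n = A.shape[0]

P = cp.Variable((n, n), symmetric=True)
eps = 1e-6
constraints = [P >> eps * np.eye(n),
               A.T @ P + P @ A << -eps * np.eye(n)]
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve(solver=cp.SCS)
print(prob.status)
print(P.value)
```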

An Agent Oriented Approach to Operational Profile Management

Software reliability, defined as the probability of a software system or application functioning without failure or error over a defined period of time, has been an important area of research for over three decades. Several research efforts aimed at developing models to improve reliability are currently underway. One of the most popular approaches to software reliability adopted by these research efforts involves the use of operational profiles to predict how software applications will be used. Operational profiles are a quantification of usage patterns for a software application. The research presented in this paper investigates an innovative multi-agent framework for the automatic creation and management of operational profiles for generic distributed systems after their release into the market. The architecture of the proposed Operational Profile MAS (Multi-Agent System) is presented, along with detailed descriptions of the various models arrived at following the analysis and design phases of the proposed system. The operational profile in this paper is extended to comprise seven different profiles. Further, the criticality of operations is defined using a new composite metric, in order to organize the testing process as well as to decrease the time and cost involved in it. A prototype implementation of the proposed MAS is included as proof of concept, and the framework is considered a step towards making distributed systems intelligent and self-managing.
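To make the notion of an operational profile concrete, the sketch below estimates occurrence probabilities of operations from a small invented usage log; the seven extended profiles and the criticality metric of the paper are not modelled here.

```python
# A basic operational profile: occurrence probabilities estimated from a usage log.
from collections import Counter

log = ["search", "book", "search", "cancel", "search", "book", "search"]  # invented
counts = Counter(log)
profile = {op: n / len(log) for op, n in counts.items()}
print(profile)   # e.g. {'search': 0.571..., 'book': 0.285..., 'cancel': 0.142...}
```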

Qualitative Parametric Comparison of Load Balancing Algorithms in Parallel and Distributed Computing Environment

Decreases in hardware costs and advances in computer networking technologies have led to increased interest in the use of large-scale parallel and distributed computing systems. One of the biggest issues in such systems is the development of effective techniques/algorithms for distributing the processes/load of a parallel program over multiple hosts to achieve goals such as minimizing execution time, minimizing communication delays, maximizing resource utilization and maximizing throughput. Substantive research using queuing analysis, and assuming job arrivals that follow a Poisson pattern, has shown that in a multi-host system the probability of one host being idle while another host has multiple jobs queued up can be very high. Such imbalances in system load suggest that performance can be improved either by transferring jobs from the currently heavily loaded hosts to the lightly loaded ones or by distributing the load evenly/fairly among the hosts. The algorithms that achieve this, known as load balancing algorithms, fall into two basic categories: static and dynamic. Whereas static load balancing (SLB) algorithms take decisions regarding the assignment of tasks to processors at compile time, based on average estimated values of process execution times and communication delays, dynamic load balancing (DLB) algorithms are adaptive to changing situations and take decisions at run time. The objective of this paper is to identify qualitative parameters for the comparison of these algorithms. In the future, this work can be extended to develop an experimental environment in which to study these load balancing algorithms quantitatively, based on the comparative parameters.
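The toy sketch below contrasts the two categories on invented job costs: a static round-robin assignment fixed in advance versus a dynamic least-loaded choice made per job.

```python
# Static (round-robin) versus dynamic (least-loaded) assignment on toy jobs.
from itertools import cycle

jobs = [5, 1, 8, 2, 9, 3, 7, 4]          # invented job execution costs
hosts = 3

# static: round-robin assignment decided before any job runs
static_load = [0.0] * hosts
rr = cycle(range(hosts))
for cost in jobs:
    static_load[next(rr)] += cost

# dynamic: each job goes to the currently least-loaded host
dynamic_load = [0.0] * hosts
for cost in jobs:
    dynamic_load[dynamic_load.index(min(dynamic_load))] += cost

print("static :", static_load)    # more imbalance
print("dynamic:", dynamic_load)   # loads kept closer together
```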

Shift Invariant Support Vector Machines Face Recognition System

In this paper, we present a new method for incorporating global shift invariance in support vector machines. Unlike other approaches, which incorporate a feature extraction stage, we first scale the image and then classify it by using the modified support vector machines classifier. Shift invariance is achieved by replacing the dot products between patterns used by the SVM classifier with the maximum cross-correlation value between them. Unlike the normal approach, in which the patterns are treated as vectors, in our approach the patterns are treated as matrices (or images). Cross-correlation is computed by using computationally efficient techniques such as the fast Fourier transform. The method has been tested on the ORL face database. The tests indicate that this method can improve the recognition rate of an SVM classifier.
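A minimal sketch of the modified kernel idea: the dot product between two image patterns is replaced by the maximum of their 2-D cross-correlation, and the resulting Gram matrix is passed to an SVM with a precomputed kernel. The tiny random "face" images are placeholders, and such a kernel is not guaranteed to be positive semi-definite, so this is only an illustration of the mechanism, not the paper's classifier.

```python
# Max-cross-correlation "kernel" fed to an SVM as a precomputed Gram matrix.
import numpy as np
from scipy.signal import fftconvolve
from sklearn.svm import SVC

def max_xcorr(a, b):
    return fftconvolve(a, b[::-1, ::-1], mode="full").max()

rng = np.random.default_rng(0)
faces = rng.standard_normal((20, 16, 16))      # tiny stand-in "face" images
labels = np.repeat([0, 1], 10)

gram = np.array([[max_xcorr(x, y) for y in faces] for x in faces])
clf = SVC(kernel="precomputed").fit(gram, labels)
print(clf.predict(gram[:3]))                   # rows: test samples vs. training set
```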

Web Usability: A Fuzzy Approach to the Navigation Structure Enhancement in a Website System, the Case of the Iranian Civil Aviation Organization Website

With the proliferation of the World Wide Web, the development of web-based technologies and the growth in web content, the structure of a website becomes more complex and web navigation becomes a critical issue for both web designers and users. In this paper we identify content and web pages as two important and influential factors in website navigation, and we frame the enhancement of website navigation as making useful changes in the link structure of the website based on these factors. We then suggest a new method for proposing the changes, using a fuzzy approach to optimize the website architecture. Applying the proposed method to a real case, the Iranian Civil Aviation Organization (CAO) website, we discuss the results of the novel approach in the final section.

Kalman's Shrinkage for Wavelet-Based Despeckling of SAR Images

In this paper, a new probability density function (pdf) is proposed to model the statistics of wavelet coefficients, and a simple Kalman filter is derived from the new pdf using Bayesian estimation theory. Specifically, we decompose the speckled image into wavelet subbands, apply the Kalman filter to the high-frequency subbands, and reconstruct a despeckled image from the modified detail coefficients. Experimental results demonstrate that our method compares favorably with several other despeckling methods on test synthetic aperture radar (SAR) images.
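A sketch of the overall wavelet despeckling pipeline, with a plain soft shrinkage standing in for the Kalman-derived shrinkage rule of the paper; the wavelet, level, threshold and synthetic speckle are illustrative.

```python
# Decompose, shrink the detail subbands, reconstruct (generic wavelet despeckling).
import numpy as np
import pywt

def despeckle(img, wavelet="db4", level=3, thresh=0.1):
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    shrunk = [coeffs[0]] + [
        tuple(pywt.threshold(band, thresh, mode="soft") for band in detail)
        for detail in coeffs[1:]
    ]
    return pywt.waverec2(shrunk, wavelet)

speckled = np.random.rand(128, 128) * np.random.gamma(1.0, 1.0, (128, 128))
print(despeckle(speckled).shape)
```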

Evaluating some Feature Selection Methods for an Improved SVM Classifier

Text categorization is the problem of classifying text documents into a set of predefined classes. After a preprocessing step, the documents are typically represented as large sparse vectors. When training classifiers on large collections of documents, both the time and memory restrictions can be quite prohibitive. This justifies the application of feature selection methods to reduce the dimensionality of the document representation vector. Four feature selection methods are evaluated: Random Selection, Information Gain (IG), Support Vector Machine based selection (called SVM_FS) and a Genetic Algorithm with SVM (GA_FS). We show that the best results were obtained with the SVM_FS and GA_FS methods for a relatively small dimension of the feature vector, compared with the IG method, which involves longer vectors, for quite similar classification accuracies. We also present a novel method to better correlate the SVM kernel's parameters (polynomial or Gaussian kernel).
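A small sketch of the Information Gain baseline using scikit-learn's mutual information ranking followed by an SVM; the data, the number of selected features and the kernel are placeholders, and the SVM_FS and GA_FS variants are not shown.

```python
# Mutual-information feature ranking (an IG-style baseline) followed by an SVM.
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.random((200, 500))                       # stand-in document vectors
y = (X[:, :10].sum(axis=1) > 5).astype(int)      # labels depend on 10 features only

model = make_pipeline(SelectKBest(mutual_info_classif, k=50),
                      SVC(kernel="poly", degree=2))
model.fit(X, y)
print("training accuracy:", model.score(X, y))
```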

Customer Knowledge and Service Development: The Web 2.0 Role in Co-production

The paper is concerned with the relationships between SSME and ICTs and focuses on the role of Web 2.0 tools in the service development process. The research presented aims at exploring how collaborative technologies can support and improve service processes, highlighting customer centrality and value co-production. The core idea of the paper is the centrality of user participation, with collaborative technologies as enabling factors; Wikipedia is analyzed as an example. The result of the analysis is the identification and description of a pattern characterising specific services in which users collaborate, by means of web tools, as value co-producers during the service process. The pattern of collaborative co-production, concerning several categories of services including knowledge-based services, is then discussed.