Design of Digital Differentiator to Optimize Relative Error

It is observed that the weighted least-squares (WLS) technique, including its modifications, results in an equiripple absolute error curve. The resulting error, expressed as a percentage of the ideal value, is therefore highly non-uniformly distributed over the range of frequencies for which the differentiator is designed. This paper proposes a modification of the technique so that the optimization procedure yields a lower maximum relative error with respect to the ideal response. Simulation results for first-order as well as higher-order differentiators are given to illustrate the excellent performance of the proposed method.
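As a rough illustration of the underlying idea (not the paper's exact procedure), the sketch below performs a weighted least-squares design of a linear-phase (Type-III) FIR differentiator in which the weight is chosen inversely proportional to the square of the ideal response, so that the relative rather than the absolute error is emphasized over the design band. The filter length, band edge and grid size are illustrative assumptions.

```python
import numpy as np

def wls_differentiator(num_taps=21, band_edge=0.9 * np.pi, grid_size=512,
                       relative=True):
    """Weighted least-squares design of a Type-III (antisymmetric, odd-length)
    FIR differentiator.  With relative=True the weight 1/omega**2 is used so the
    optimization targets the relative (percentage) error instead of the absolute
    error.  Illustrative sketch only."""
    M = (num_taps - 1) // 2                           # half-length, h[M] = 0
    omega = np.linspace(1e-3, band_edge, grid_size)   # design grid, omega = 0 excluded
    k = np.arange(1, M + 1)
    S = np.sin(np.outer(omega, k))                    # A(w) = sum_k c_k sin(k w)
    w = 1.0 / omega**2 if relative else np.ones_like(omega)
    W = np.diag(w)
    c = np.linalg.solve(S.T @ W @ S, S.T @ W @ omega)   # weighted normal equations
    h = np.zeros(num_taps)                            # antisymmetric impulse response
    h[M - k] = c / 2.0
    h[M + k] = -c / 2.0
    return h, omega, S @ c

h, omega, A = wls_differentiator()
rel_err = np.abs(A - omega) / omega * 100             # percentage error vs. ideal j*omega
print("max relative error: %.3f %%" % rel_err.max())
```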

Regulatory Effects of Carbon Sources on Tabtoxin Production (A β-lactam Phytotoxin of Pseudomonas syringae pv. tabaci)

The effects of diverse carbon substrates on tabtoxin production by an isolated pathogenic strain of Pseudomonas syringae pv. tabaci, the causal agent of tobacco wildfire, were investigated and are discussed in relation to bacterial growth. The isolate was grown in batch culture on Woolley's medium (28°C, 200 rpm, for 5 days). Growth was measured by optical density (OD) at 620 nm, and tabtoxin production was quantified by an Escherichia coli (K-12) bioassay. Both growth and tabtoxin production were influenced by the substrates used (sugars, amino acids, organic acids), each as a sole carbon source or as a supplement to the same amino acids. The largest quantities of tabtoxin were obtained in the presence of certain amino acids used as the sole carbon source and/or as a supplement.

Real Power Generation Scheduling to Improve Steady State Stability Limit in the Java-Bali 500kV Interconnection Power System

This paper discusses an active power generation scheduling method for increasing the steady-state stability limit of a power system. Several generation scheduling methods, namely Lagrange optimization, PLN (Indonesian electricity company) operation practice, and the proposed Z-Thevenin-based method, are studied and compared with respect to steady-state stability. The method proposed in this paper is built upon the Thevenin equivalent impedance between each load and each generator, and the steady-state stability index is obtained with the REI-DIMO method. The study is carried out on the Java-Bali 500 kV interconnection system. The simulation results show that the proposed method yields the highest steady-state stability limit compared to the other scheduling approaches, namely Lagrange and PLN operation. Thus, the proposed method can be used to improve the steady-state stability limit of the system, especially under peak load conditions.
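The Z-Thevenin-based scheduling and the REI-DIMO index themselves are not reproduced here; as a loosely related illustration of how a Thevenin equivalent can indicate proximity to a stability limit, the sketch below computes the classical impedance-matching indicator for one load bus (the bus approaches its limit when the Thevenin impedance magnitude approaches the load impedance magnitude). The toy two-bus network and all values are assumptions, not Java-Bali data.

```python
import numpy as np

def thevenin_margin(y_bus: np.ndarray, bus: int, v_bus: complex, s_load: complex) -> float:
    """Impedance-matching indicator: |Z_thevenin| / |Z_load| -> 1.0 at the limit."""
    z_bus = np.linalg.inv(y_bus)                 # bus impedance matrix
    z_th = z_bus[bus, bus]                       # Thevenin impedance seen from the bus
    z_load = abs(v_bus) ** 2 / np.conj(s_load)   # equivalent load impedance (per unit)
    return abs(z_th) / abs(z_load)

# toy 2-bus system (per unit): generator with internal reactance at bus 0,
# one line to the load bus 1 -- purely illustrative
y_bus = np.array([[1 / 0.1j + 1 / 0.5j, -1 / 0.1j],
                  [-1 / 0.1j,            1 / 0.1j]], dtype=complex)
print(thevenin_margin(y_bus, bus=1, v_bus=0.98, s_load=0.8 + 0.3j))
```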

Software Maintenance Severity Prediction for Object Oriented Systems

Since the majority of faults are found in only a few modules of a system, there is a need to identify the modules that are affected more severely than others, so that proper maintenance can be carried out in time, especially for critical applications. Neural networks have already been applied in software engineering to build reliability growth models and to predict gross change or reusability metrics. Neural networks are sophisticated non-linear modeling techniques able to model complex functions; they are used when the exact relationship between inputs and outputs is not known, and a key feature is that they learn this relationship through training. In the present work, several neural-network-based techniques are explored and comparatively analyzed for predicting the level of maintenance needed, by predicting the severity level of the faults present in NASA's public-domain defect dataset. The algorithms are compared on the basis of Mean Absolute Error, Root Mean Square Error, and Accuracy. It is concluded that the Generalized Regression Neural Network is the best algorithm for classifying software components into different levels of severity of fault impact, and that it can be used to develop a model for identifying modules that are heavily affected by faults.
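For readers unfamiliar with the Generalized Regression Neural Network, the minimal sketch below shows its core computation (Nadaraya-Watson kernel regression): each prediction is a Gaussian-weighted average of the training targets. The toy module metrics, severity labels and bandwidth are illustrative assumptions, not values from the NASA defect dataset experiments.

```python
import numpy as np

def grnn_predict(x_train, y_train, x_query, sigma=1.0):
    """Minimal GRNN: Gaussian-kernel-weighted average of training targets."""
    preds = []
    for q in np.atleast_2d(x_query):
        d2 = np.sum((x_train - q) ** 2, axis=1)           # squared distances to patterns
        w = np.exp(-d2 / (2.0 * sigma ** 2))               # pattern-layer activations
        preds.append(np.dot(w, y_train) / (w.sum() + 1e-12))
    return np.array(preds)

# toy usage: predict a fault-severity level from two module metrics
x_train = np.array([[10, 1.0], [50, 3.2], [80, 5.5], [20, 1.4]], float)
y_train = np.array([1, 2, 4, 1], float)                    # known severity levels
print(grnn_predict(x_train, y_train, [[60, 4.0]], sigma=20.0))   # ~[2.6]
```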

Multi-Agent Coordination Model in Inter-Organizational Workflow: Application to E-government

Inter-organizational Workflow (IOW) is commonly used to support the collaboration between heterogeneous and distributed business processes of different autonomous organizations in order to achieve a common goal. E-government is considered an application field of IOW. The coordination of the different organizations is the fundamental problem in IOW and remains a major cause of failure in e-government projects. In this paper, we introduce a new coordination model for IOW that improves the collaboration between government administrations and respects the IOW requirements of e-government. For this purpose, we adopt a multi-agent approach, which deals more easily with the characteristics of inter-organizational digital government: distribution, heterogeneity, and autonomy. Our model also integrates different technologies to address semantic and technological interoperability. Moreover, it preserves the existing systems of government administrations by offering a distributed coordination based on communication between interfaces. This is especially relevant in developing countries, where administrations are not necessarily equipped with workflow systems. The use of our coordination techniques allows an easier and cheaper migration to an e-government solution. To illustrate the applicability of the proposed model, we present a case study of identity card creation in Tunisia.

A Multi-Layer Consistency Protocol for Replica Management in Large-Scale Systems

Large-scale systems such as computational Grids are distributed computing infrastructures that can provide globally available network resources. The evolution of information processing systems in Data Grids is characterized by a strong decentralization of data across several sites, with the objective of ensuring the availability and reliability of the data so as to provide fault tolerance and scalability, which is only possible through the use of replication techniques. Unfortunately, these techniques have a high cost, because consistency must be maintained between the distributed replicas. Nevertheless, agreeing to live with certain imperfections can improve the performance of the system by improving concurrency. In this paper, we propose a multi-layer protocol combining the pessimistic and optimistic approaches, designed for data consistency maintenance in large-scale systems. Our approach is based on a hierarchical representation model with three layers and serves a dual purpose: it reduces response times compared to a completely pessimistic approach, and it improves the quality of service compared to an optimistic approach.

Efficient Spectral Analysis of Quasi Stationary Time Series

The Power Spectral Density (PSD) of quasi-stationary processes can be efficiently estimated using the short-time Fourier transform (STFT). In this paper, an algorithm is proposed that computes the PSD of a quasi-stationary process efficiently using an offline autoregressive model order estimation algorithm, a recursive parameter estimation technique, and a modified sliding-window discrete Fourier transform algorithm. The main difference between this algorithm and the STFT is that the sliding window (SW) and the window for spectral estimation (WSA) are defined separately. The WSA is updated, and its PSD computed, only when a change in statistics is detected in the SW. The computational complexity of the proposed algorithm is found to be lower than that of the standard STFT technique.
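The kind of recursive spectral update the sliding window relies on can be illustrated with the classical sliding-window DFT recurrence, sketched below: when the window slides by one sample, every DFT bin is updated in O(N) operations instead of recomputing a full transform. This is a generic sketch of the recurrence, not the paper's modified algorithm or its change-detection logic.

```python
import numpy as np

def sliding_dft_update(X, x_oldest, x_newest, N):
    """One step of the sliding-window DFT: X_new[k] = (X[k] - x_oldest + x_newest) * e^{j2 pi k / N}."""
    k = np.arange(N)
    return (X - x_oldest + x_newest) * np.exp(2j * np.pi * k / N)

# verify against a direct FFT on a toy signal
N = 8
x = np.random.randn(N + 1)
X = np.fft.fft(x[:N])                        # DFT of the first window
X = sliding_dft_update(X, x[0], x[N], N)     # slide the window by one sample
print(np.allclose(X, np.fft.fft(x[1:N + 1])))   # True
```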

Evaluating Content Based Image Retrieval Techniques with the One Million Images CLIC Test Bed

Pattern recognition and image recognition methods are commonly developed and tested using testbeds, which contain known responses to a query set. Until now, testbeds available for image analysis and content-based image retrieval (CBIR) have been scarce and small-scale. Here we present the one million images CEA-List Image Collection (CLIC) testbed that we have produced, and report on our use of this testbed to evaluate image analysis merging techniques. This testbed will soon be made publicly available through the EU MUSCLE Network of Excellence.

Effect of Catalyst Preparation on the Performance of CaO-ZnO Catalysts for Transesterification

In this research, CaO-ZnO catalysts (with Ca:Zn atomic ratios of 1:5, 1:3, 1:1, and 3:1) prepared by incipient-wetness impregnation (IWI) and co-precipitation (CP) methods were used as catalysts in the transesterification of palm oil with methanol for biodiesel production. The catalysts were characterized by several techniques, including the BET method, CO2-TPD, and the Hammett indicator method. The effects of precursor concentration and calcination temperature on the catalytic performance were studied under reaction conditions of a 15:1 methanol-to-oil molar ratio, 6 wt% catalyst, a reaction temperature of 60°C, and a reaction time of 8 h. The catalyst with a Ca:Zn atomic ratio of 1:3 gave the highest FAME value, owing to the basic properties and surface area of the prepared catalyst.

Use of Caffeine and Human Pharmaceutical Compounds to Identify Sewage Contamination

Fecal coliform bacteria are widely used as indicators of sewage contamination in surface water. However, these microbial techniques have some disadvantages, including the long analysis time (18-48 h) and the inability to discriminate between human and animal sources of fecal material. Therefore, it is necessary to seek a more specific indicator of human sanitary waste. In this study, the feasibility of applying caffeine and human pharmaceutical compounds to identify human-source contamination was investigated, and the correlation between caffeine and fecal coliform was also explored. Surface water samples were collected from upstream, middle-stream and downstream points along the Rochor Canal, as well as from 8 locations in Marina Bay. The results indicate that caffeine is a suitable chemical tracer in Singapore because it is easily detected (in the range of 0.30-2.0 ng/mL) compared with the other chemicals monitored. The relatively low concentrations of human pharmaceutical compounds (< 0.07 ng/mL) in the Rochor Canal and Marina Bay water samples make them hard to detect and less suitable as chemical tracers; however, their presence can help validate sewage contamination. In addition, a high correlation was found between caffeine concentration and fecal coliform density in the Rochor Canal water samples, demonstrating that caffeine is strongly related to human-source contamination.

Ensembling Classifiers – An Application to Image Data Classification from a Cherenkov Telescope Experiment

Ensemble learning algorithms such as AdaBoost and Bagging have been actively researched and have shown improvements in classification results for several benchmark data sets, mainly with decision trees as their base classifiers. In this paper we apply these meta-learning techniques to classifiers such as random forests, neural networks and support vector machines. The data sets are from MAGIC, a Cherenkov telescope experiment. The task is to separate gamma signals from the overwhelming hadron and muon signals, which represents a rare-class classification problem. We compare the individual classifiers with their ensemble counterparts and discuss the results. The experiments were carried out with WEKA, a widely used machine learning toolkit.
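The paper's experiments were run in WEKA; as a rough scikit-learn analogue of one of them, the sketch below compares a single SVM with a bagged ensemble of SVMs on an imbalanced synthetic data set. The synthetic data and all parameter values are illustrative assumptions, not the MAGIC telescope data or the paper's settings.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# imbalanced two-class data as a stand-in for the rare "gamma" class problem
X, y = make_classification(n_samples=1000, n_features=10, weights=[0.9, 0.1],
                           random_state=0)
svm = SVC(kernel="rbf", gamma="scale")
bagged_svm = BaggingClassifier(svm, n_estimators=25, random_state=0)

for name, clf in [("single SVM", svm), ("bagged SVM", bagged_svm)]:
    print(name, cross_val_score(clf, X, y, cv=5).mean())
```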

An Investigation of Effective Parameters on the Damage of Dual-Phase Steels by Acoustic Emission Using Energy Ratio

Dual-phase steels (DPSs) have a microstructure consisting of a hard second phase, martensite, embedded in a soft ferrite matrix. In recent years there has been growing interest in dual-phase steels because of their widespread application, particularly in the automotive sector. The composite microstructure of DPSs exhibits interesting mechanical properties such as continuous yielding, a low yield-stress-to-tensile-strength ratio (YS/UTS), and relatively high formability, which offer advantages over conventional high-strength low-alloy steels (HSLAS). This research deals with the characterization of damage in DPSs. By reviewing the failure mechanisms associated with the volume fraction of the martensite second phase (Vm), a new method is introduced for identifying the failure mechanisms in the various phases of these steels; the acoustic emission (AE) technique is used to detect damage progression. The failure mechanisms consist of ferrite-martensite interface decohesion and/or fracture of the martensite phase. For this purpose, dual-phase steels with different volume fractions of martensite were produced by various heat treatments of a low-carbon steel (0.1% C), and AE monitoring was performed during tensile tests of these DPSs. From the AE measurements, an energy ratio curve was elaborated, defined as the ratio of the strain energy to the acoustic energy, which allows important events, corresponding to sudden drops in the curve, to be detected. The AE events associated with the various failure mechanisms are classified for ferrite and for DPSs with different values of Vm and different martensite morphologies. It is found that the AE energy increases with increasing Vm, because martensite fracture contributes more to the failure of samples with higher Vm. The final results show a good relationship between the AE signals and the failure mechanisms.
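The energy-ratio idea described above can be expressed in a few lines: accumulate the mechanical strain energy (area under the stress-strain curve) and the acoustic emission energy, form their ratio, and flag sudden drops as candidate damage events. The array names, the drop threshold and this particular drop criterion are illustrative assumptions, not the paper's exact processing.

```python
import numpy as np

def energy_ratio_events(strain, stress, ae_energy, drop_threshold=0.2):
    """Energy ratio (strain energy / cumulative AE energy) and indices of sudden drops."""
    strain_energy = np.concatenate(([0.0],
        np.cumsum(np.diff(strain) * (stress[1:] + stress[:-1]) / 2.0)))  # trapezoidal area
    cum_ae = np.cumsum(ae_energy)                                        # cumulative AE energy
    ratio = strain_energy / np.maximum(cum_ae, 1e-12)
    rel_drop = np.diff(ratio) / np.maximum(ratio[:-1], 1e-12)            # relative change per step
    events = np.where(rel_drop < -drop_threshold)[0] + 1                 # sudden drops
    return ratio, events
```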

Achieving Fair Share Objectives via Goal-Oriented Parallel Computer Job Scheduling Policies

Fair share is one of the scheduling objectives supported on many production systems. However, fair share has been shown to cause performance problems for some users, especially users with difficult jobs. This work focuses on extending goal-oriented parallel computer job scheduling policies to cover the fair share objective. Goal-oriented parallel computer job scheduling policies have been shown to achieve good scheduling performance when conflicting objectives are required; they do so by using anytime combinatorial search techniques to find a good compromise schedule within a time limit. The experimental results show that the proposed goal-oriented parallel computer job scheduling policy, namely Tradeoff(Tw:avgX), achieves good scheduling performance and also provides good fair share performance.

Feature Extraction of Dorsal Hand Vein Pattern Using a Fast Modified PCA Algorithm Based On Cholesky Decomposition and Lanczos Technique

The dorsal hand vein pattern is an emerging biometric that has lately been attracting the attention of researchers. Research is being carried out on existing techniques in the hope of improving them or finding more efficient ones. In this work, Principal Component Analysis (PCA), a successful method originally applied to face biometrics, is modified using Cholesky decomposition and the Lanczos algorithm to extract dorsal hand vein features. This modified technique reduces the number of computations and hence the processing time. The eigenveins were successfully computed and projected onto the vein space. The system was tested on a database of 200 images, using a threshold value of 0.9 to obtain the False Acceptance Rate (FAR) and False Rejection Rate (FRR). The modified algorithm is desirable when developing biometric security systems since it significantly decreases the matching time.
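As a loose illustration of the eigenvein idea with a Lanczos-type eigensolver, the sketch below extracts a small number of principal components from the Gram matrix of the training images using scipy's eigsh (an implicitly restarted Lanczos method) and projects a probe image onto the resulting vein space. It is not the paper's Cholesky/Lanczos formulation; the image size, component count and random stand-in data are assumptions.

```python
import numpy as np
from scipy.sparse.linalg import eigsh   # Lanczos-type symmetric eigensolver

def eigenveins(images, n_components=10):
    """Top principal directions ("eigenveins") of a set of vein images."""
    X = images.reshape(len(images), -1).astype(float)
    X -= X.mean(axis=0)                           # center the vein images
    gram = X @ X.T                                # small (n_images x n_images) Gram matrix
    vals, vecs = eigsh(gram, k=n_components, which="LM")    # top eigenpairs via Lanczos
    return X.T @ vecs / np.sqrt(np.maximum(vals, 1e-12))    # unit-norm eigenveins in pixel space

# projecting a probe image onto the vein space
imgs = np.random.rand(200, 64, 64)                # stand-in for the 200-image database
U = eigenveins(imgs)
probe = np.random.rand(64, 64).ravel() - imgs.reshape(200, -1).mean(axis=0)
features = U.T @ probe                            # feature vector used for matching
```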

Power Optimization Techniques in FPGA Devices: A Combination of System- and Low-Levels

This paper presents preliminary results regarding system-level power awareness for FPGA implementations in wireless sensor networks. The re-configurability of field programmable gate arrays (FPGAs) allows for significant flexibility in their application to embedded systems. However, high power consumption in FPGAs becomes a significant factor in design considerations. We present several ideas, and their experimental verification, on how to optimize power consumption at a high level of the design process while maintaining the same energy per operation (low-level methods can be applied in addition). This paper demonstrates that it is possible to estimate feasible power consumption savings even at a high level of the design process. It is envisaged that our results can also be applied to other embedded system applications, not limited to FPGA-based ones.

Flexible, Adaptable and Scalable Business Rules Management System for Data Validation

The policies governing the business of any organization are well reflected in its business rules. Business rules are implemented by data validation techniques coded during the software development process, so any change in business policy results in a change to the validation code that enforces it. Implementing changes to business rules without changing the code is the objective of this paper. The proposed approach enables users to create rule sets at run time, once the software has been developed. The rule sets newly defined by end users are associated with the data variables for which validation is required. The approach allows users to define business rules using all the comparison operators and Boolean operators. Multithreading is used to validate the data entered by the end user against the applied business rules: the evaluation of the data is performed by a newly created thread using an enhanced form of the RPN (Reverse Polish Notation) algorithm.
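The core of such a rule evaluation is a simple stack machine over a rule expressed in Reverse Polish Notation, sketched below. The rule syntax, token names and the sample record are illustrative assumptions; the paper's "enhanced" RPN algorithm and its multithreading are not reproduced here.

```python
import operator

OPS = {">": operator.gt, ">=": operator.ge, "<": operator.lt, "<=": operator.le,
       "==": operator.eq, "!=": operator.ne,
       "AND": lambda a, b: a and b, "OR": lambda a, b: a or b}

def eval_rpn(tokens, record):
    """Evaluate an RPN token list such as ['age', 18, '>=', 'salary', 1000, '>', 'AND']."""
    stack = []
    for tok in tokens:
        if tok in OPS:
            b, a = stack.pop(), stack.pop()
            stack.append(OPS[tok](a, b))          # apply comparison / Boolean operator
        elif isinstance(tok, str) and tok in record:
            stack.append(record[tok])             # data variable looked up at run time
        else:
            stack.append(tok)                     # literal constant
    return stack.pop()

rule = ["age", 18, ">=", "salary", 1000, ">", "AND"]
print(eval_rpn(rule, {"age": 25, "salary": 1500}))    # True
```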

Linear Phase High Pass FIR Filter Design using Improved Particle Swarm Optimization

This paper presents an optimal design of a linear-phase digital high-pass finite impulse response (FIR) filter using Improved Particle Swarm Optimization (IPSO). In the design process, the filter length, the pass band and stop band frequencies, and the feasible pass band and stop band ripple sizes are specified. FIR filter design is a multi-modal optimization problem, and an iterative method is introduced to find its optimal solution. Evolutionary algorithms such as the real-coded genetic algorithm (RGA), particle swarm optimization (PSO), and improved particle swarm optimization (IPSO) are used in this work for the design of the linear-phase high-pass FIR filter. IPSO is an improved PSO that introduces new definitions of the velocity vector and of the swarm update, and hence improves the solution quality. A comparison of simulation results reveals the optimization efficacy of the algorithm over the prevailing optimization techniques for this multi-modal, non-differentiable, highly non-linear, and constrained FIR filter design problem.
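As a compact illustration of the general approach (standard PSO, not the paper's IPSO variant), the sketch below evolves the independent coefficients of a length-21 linear-phase (Type-I, symmetric) high-pass FIR filter by minimizing the maximum ripple over the pass and stop bands. Band edges, swarm size and PSO coefficients are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 21, 10                                      # filter length, half order
w = np.linspace(0, np.pi, 256)
ideal = (w >= 0.55 * np.pi).astype(float)          # high-pass: stop band below, pass band above
in_band = (w <= 0.45 * np.pi) | (w >= 0.55 * np.pi)    # exclude the transition band
C = np.hstack([np.ones((w.size, 1)), 2 * np.cos(np.outer(w, np.arange(1, M + 1)))])

def cost(a):                                       # maximum ripple over both bands
    return np.max(np.abs(C @ a - ideal)[in_band])

particles = rng.uniform(-0.5, 0.5, (30, M + 1))
vel = np.zeros_like(particles)
pbest, pbest_cost = particles.copy(), np.array([cost(p) for p in particles])
gbest = pbest[pbest_cost.argmin()].copy()

for _ in range(300):
    r1, r2 = rng.random(particles.shape), rng.random(particles.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - particles) + 1.5 * r2 * (gbest - particles)
    particles += vel
    costs = np.array([cost(p) for p in particles])
    improved = costs < pbest_cost
    pbest[improved], pbest_cost[improved] = particles[improved], costs[improved]
    gbest = pbest[pbest_cost.argmin()].copy()

a = gbest                                          # a[0] = h[M], a[k] = h[M-k]
h = np.concatenate([a[:0:-1], a])                  # symmetric (linear-phase) impulse response
print("length:", h.size, "max ripple:", cost(a))
```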

An Improvement of PDLZW Implementation with a Modified WSC Updating Technique on FPGA

In this paper, an improvement of the PDLZW implementation with a new dictionary updating technique is proposed. A single dictionary is partitioned into hierarchical variable word-width dictionaries, which allows the dictionaries to be searched in parallel. Moreover, a barrel shifter is adopted for loading a new input string into the shift register in order to achieve a higher speed. The original PDLZW uses a simple FIFO update strategy, which is not efficient. Therefore, a new window-based updating technique is implemented to take into account how often each particular address in the window is referenced. A freezing policy is applied to the most frequently referenced address, which is not updated until all the other addresses in the window have reached the same priority. This guarantees that frequently referenced addresses are not overwritten prematurely. This updating policy improves the compression efficiency of the proposed algorithm while keeping the architecture of low complexity and easy to implement.
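One possible software reading of the window-based update with freezing is sketched below: within the update window, the address with the most references is frozen and only the least-referenced address is overwritten, and the freeze is lifted once every address in the window has the same reference count. This is an illustrative interpretation with assumed names, not the paper's exact hardware policy.

```python
class WindowDictionary:
    """Toy model of a dictionary update window with a freezing policy."""

    def __init__(self, window_addresses):
        self.refs = {addr: 0 for addr in window_addresses}      # reference counters
        self.entries = {addr: None for addr in window_addresses}

    def reference(self, addr):
        self.refs[addr] += 1                                     # address was matched

    def insert(self, new_string):
        frozen = None
        if len(set(self.refs.values())) > 1:                     # freeze only while counts differ
            frozen = max(self.refs, key=self.refs.get)           # most-referenced address
        candidates = [a for a in self.refs if a != frozen]
        victim = min(candidates, key=self.refs.get)              # least-referenced address
        self.entries[victim] = new_string                        # overwrite the victim entry
        self.refs[victim] = 0
        return victim
```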

Noise Analysis of Single-Ended Input Differential Amplifier using Stochastic Differential Equation

In this paper, we analyze the effect of noise in a single-ended input differential amplifier working at high frequencies. Both extrinsic and intrinsic noise are analyzed using a time-domain method employing techniques from stochastic calculus. Stochastic differential equations are used to obtain the autocorrelation function of the output noise voltage and other solution statistics such as the mean and variance. The analysis leads to important design implications and suggests changes in the device parameters for improved noise characteristics of the differential amplifier.
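To give a flavor of the approach, the toy sketch below models the noise at a single-pole output node as an Ornstein-Uhlenbeck SDE, dV = -(V/tau) dt + sigma dW, simulates it with Euler-Maruyama, and compares the simulated stationary variance with the analytic value sigma^2 * tau / 2. The single-pole model and all parameter values are simplifying assumptions, not the differential-amplifier equations of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
tau, sigma, dt, steps = 1e-9, 1e3, 1e-12, 200_000   # time constant, noise intensity, step, count
v = np.zeros(steps)
for n in range(1, steps):
    # Euler-Maruyama step for dV = -(V/tau) dt + sigma dW
    v[n] = v[n - 1] - (v[n - 1] / tau) * dt + sigma * np.sqrt(dt) * rng.standard_normal()

print("simulated variance:", v[steps // 2:].var())   # discard the transient
print("analytic  variance:", sigma**2 * tau / 2)
```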

Efficient Large Numbers Karatsuba-Ofman Multiplier Designs for Embedded Systems

Long-number multiplication (n ≥ 128 bits) is a primitive in most cryptosystems. It can be performed more efficiently using the Karatsuba-Ofman technique. This algorithm is easy to parallelize on workstation networks and on distributed memory, and it is known as the practical method of choice. Multiplying long numbers with the Karatsuba-Ofman algorithm is fast but highly recursive. In this paper, we propose different designs for implementing the Karatsuba-Ofman multiplier. A mixture of sequential and combinational system design techniques involving pipelining is applied to our proposed designs, and the multiplication of large numbers can be adapted flexibly to time, area, and power criteria. In computationally and area constrained embedded systems such as smart cards and mobile phones, multiplication of finite field elements can thus be achieved more efficiently. The proposed designs are compared to other existing techniques, and mathematical models (Area(n), Delay(n)) of our proposed designs are elaborated and evaluated on different FPGA devices.
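For reference, the recursion the hardware designs are built around is sketched in software below: an n-bit product is obtained from three half-size products instead of four. The bit-width threshold is an illustrative assumption, and the hardware pipelining and area/delay trade-offs are of course not modeled.

```python
def karatsuba(x: int, y: int, threshold_bits: int = 32) -> int:
    """Karatsuba-Ofman multiplication of non-negative integers."""
    if x.bit_length() <= threshold_bits or y.bit_length() <= threshold_bits:
        return x * y                               # small operands: schoolbook multiply
    m = max(x.bit_length(), y.bit_length()) // 2   # split point (in bits)
    xh, xl = x >> m, x & ((1 << m) - 1)
    yh, yl = y >> m, y & ((1 << m) - 1)
    p_hi = karatsuba(xh, yh, threshold_bits)                       # high x high
    p_lo = karatsuba(xl, yl, threshold_bits)                       # low  x low
    p_mid = karatsuba(xh + xl, yh + yl, threshold_bits) - p_hi - p_lo
    return (p_hi << (2 * m)) + (p_mid << m) + p_lo

a, b = 2**255 - 19, 2**200 + 12345
assert karatsuba(a, b) == a * b                    # matches Python's built-in product
```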