Deterministic Random Number Generators for Online Applications

Cryptography, image watermarking and e-banking are filled with apparent oxymora and paradoxes. Random sequences are used as keys to encrypt information, to embed a watermark, and to extract that watermark again during detection. The same kind of keys is also heavily used in 24x7x365 banking operations, so a deterministic random sequence is very useful for online applications: to obtain the same random sequence, we only need to supply the same seed to the generator. Many researchers have used Deterministic Random Number Generators (DRNGs) for cryptographic applications and Pseudo-Noise (PN) random sequences for watermarking. Even though PN sequences have some weaknesses against attacks, the research community has mostly used them in digital watermarking. DRNGs, on the other hand, have not been widely used in online watermarking because of their computational complexity and lack of robustness. We therefore propose a new design that generates a DRNG from a Pi-series, making it useful for online cryptographic, digital watermarking and banking applications.
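
As a minimal illustration of the seed determinism this design relies on (a generic linear congruential sketch, not the authors' Pi-series construction; the constants and seed below are purely illustrative), the same seed always reproduces the same key stream:

    # Illustrative seeded generator; NOT the Pi-series DRNG of the paper.
    def drng(seed, n, a=48271, c=0, m=2**31 - 1):
        """Yield n pseudo-random integers from a simple linear congruential rule."""
        state = seed
        for _ in range(n):
            state = (a * state + c) % m
            yield state

    # Supplying the same seed reproduces the same sequence, so the receiver can
    # regenerate the key used for encryption or watermark embedding.
    assert list(drng(12345, 5)) == list(drng(12345, 5))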

Characterization of Adhesive Layers in Sandwich Composites by Nondestructive Technique

A new nondestructive technique, namely an inverse technique based on vibration tests, is developed to characterize the nonlinear mechanical properties of adhesive layers in sandwich composites. The adhesive layer is described as a viscoelastic isotropic material whose storage and loss moduli are both frequency dependent over a wide frequency range. An optimization based on the planning of experiments and the response surface technique is applied to minimize the error functional, which considerably decreases the computational expense. The developed identification technique has been tested on aluminum panels and successfully applied to characterize the viscoelastic material properties of the 3M damping polymer ISD-112 used as a core material in sandwich panels.
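
For reference, the standard frequency-dependent description of such a linear viscoelastic layer (conventional notation, not necessarily the paper's symbols) is

    E^{*}(\omega) = E'(\omega) + i\,E''(\omega) = E'(\omega)\bigl(1 + i\,\eta(\omega)\bigr),
    \qquad \eta(\omega) = \frac{E''(\omega)}{E'(\omega)},

where E' is the storage modulus, E'' the loss modulus and η the loss factor identified by the inverse technique.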

A Survey of Job Scheduling and Resource Management in Grid Computing

Grid computing is a form of distributed computing that involves coordinating and sharing computational power, data storage and network resources across dynamic and geographically dispersed organizations. Scheduling onto the Grid is NP-complete, so there is no single best scheduling algorithm for all grid computing systems. The alternative is to select an appropriate scheduling algorithm for a given grid environment based on the characteristics of the tasks, machines and network connectivity. Job and resource scheduling is one of the key research areas in grid computing. The goal of scheduling is to achieve the highest possible system throughput and to match the application's needs with the available computing resources. The motivation of this survey is to help newcomers to the field of grid computing understand the concepts of scheduling easily and contribute to the development of more efficient scheduling algorithms, and to enable interested researchers to carry out further work in this thrust area of research.

Primer Design with Specific PCR Product using Particle Swarm Optimization

Before performing polymerase chain reactions (PCR), a feasible primer set is required. Many primer design methods have been proposed for designing a feasible primer set. However, the majority of these methods require a relatively long time to obtain an optimal solution, since large quantities of template DNA need to be analyzed, and the designed primer sets usually do not provide a specific PCR product. In recent years, evolutionary computation has been applied to PCR primer design and has yielded promising results. In this paper, a particle swarm optimization (PSO) algorithm is proposed to solve primer design problems associated with providing a specific product for PCR experiments. A test set of the gene CYP1A1, associated with a heightened lung cancer risk, was analyzed, and the accuracy and running time were compared with those of the genetic algorithm (GA) and memetic algorithm (MA). The comparison indicates that the proposed PSO method finds optimal or near-optimal primer sets and effective PCR products in a relatively short time.
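
For context, the canonical PSO update that such an approach builds on (generic textbook form; the inertia weight w and acceleration coefficients c_1, c_2 are standard symbols, not values reported in the paper) is

    v_i^{t+1} = w\,v_i^{t} + c_1 r_1 \bigl(p_i - x_i^{t}\bigr) + c_2 r_2 \bigl(g - x_i^{t}\bigr),
    \qquad x_i^{t+1} = x_i^{t} + v_i^{t+1},

where x_i encodes a candidate primer set, p_i is the particle's personal best, g the global best, and r_1, r_2 are uniform random numbers in [0, 1].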

Visual Hull with Imprecise Input

Imprecision is a long-standing problem in CAD design and high-accuracy image-based reconstruction applications. The visual hull, which is the closed silhouette-equivalent shape of the objects of interest, is an important concept in image-based reconstruction. We extend the domain-theoretic framework, a robust geometric model that captures imprecision, to analyze the imprecision in the output shape when the input vertices are given imprecisely. Under this framework, we present an efficient algorithm that generates the 2D partial visual hull, which represents the exact information of the visual hull under only basic imprecision assumptions. We also show how the visual-hull-from-polyhedra problem can be solved efficiently in the context of imprecise input.

Development of Coronal Field and Solar Wind Components for MHD Interplanetary Simulations

The connection between solar activity and adverse phenomena in the Earth's environment that can affect space- and ground-based technologies has spurred interest in Space Weather (SW) research. A great effort has been put into the development of suitable models that can provide advance forecasts of SW events. With the progress in computational technology, it is becoming possible to develop operational, large-scale, physics-based models which incorporate the most important physical processes and domains of the Sun-Earth system. In order to enhance our SW prediction capabilities we are developing advanced numerical tools. With operational requirements in mind, our goal is to develop a modular simulation framework for the propagation of disturbances from the Sun through interplanetary space to the Earth. Here, we report on and discuss the development of the coronal field and solar wind components for a large-scale MHD code. The model for these components is based on a potential field source surface model and the empirical Wang-Sheeley-Arge solar wind relation.
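
As background, the potential field source surface component solves (standard formulation, quoted for context)

    \nabla^{2}\Phi = 0, \qquad \mathbf{B} = -\nabla\Phi, \qquad R_{\odot} \le r \le R_{ss},

with the observed photospheric magnetogram as the inner boundary and a purely radial field imposed at the source surface r = R_{ss} (commonly placed near 2.5 solar radii); the Wang-Sheeley-Arge relation then maps the resulting flux-tube expansion factor to an empirical solar wind speed.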

Visualization and Indexing of Spectral Databases

On-line (near infrared) spectroscopy is widely used to support the operation of complex process systems. Information extracted from a spectral database can be used to estimate unmeasured product properties and to monitor the operation of the process. These techniques are based on looking for similar spectra with nearest-neighbor algorithms and distance-based searching methods. Searching for nearest neighbors in the high-dimensional spectral space is computationally expensive: the complexity grows with the number of points in the discrete spectrum and with the number of samples in the database. To reduce the calculation time, some kind of indexing can be used. The main idea presented in this paper is to combine indexing and visualization techniques to reduce the computational requirements of estimation algorithms by providing a two-dimensional index that can also be used to visualize the structure of the spectral database. This 2D visualization of the spectral database not only supports the application of distance- and similarity-based techniques but also enables advanced clustering and prediction algorithms based on the Delaunay tessellation of the mapped spectral space. This means that prediction does not have to operate in the high-dimensional space but can be based on the mapped space as well. The results illustrate that the proposed method is able to segment (cluster) spectral databases and to detect outliers that are not suitable for instance-based learning algorithms.
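
A minimal sketch of the indexing idea (the PCA projection stands in for the paper's own 2D mapping, and all array shapes are illustrative assumptions):

    import numpy as np
    from scipy.spatial import Delaunay, cKDTree
    from sklearn.decomposition import PCA

    # Dummy spectral database: 500 samples x 1024 wavelengths (illustrative only).
    spectra = np.random.rand(500, 1024)

    # 2D mapping of the spectra; PCA is an assumption standing in for the
    # projection method used in the paper.
    mapper = PCA(n_components=2).fit(spectra)
    mapped = mapper.transform(spectra)

    tree = cKDTree(mapped)                 # 2D index for fast similarity search
    tri = Delaunay(mapped)                 # tessellation of the mapped spectral space

    query = mapper.transform(spectra[:1])  # project a query spectrum into the index
    dist, idx = tree.query(query, k=5)     # five most similar database spectra
    simplex = tri.find_simplex(query)      # enclosing triangle for Delaunay-based prediction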

Formal Analysis of a Public-Key Algorithm

In this article, a formal specification and verification of the Rabin public-key scheme in a formal proof system is presented. The idea is to combine the two views of cryptographic verification: the computational approach, which relies on the vocabulary of probability theory and complexity theory, and the formal approach, which is based on ideas and techniques from logic and programming languages. A major objective of this article is the presentation of the first computer-proved implementation of the Rabin public-key scheme in Isabelle/HOL. Moreover, we explicate a (computer-proven) formalization of correctness as well as a computer verification of security properties using a straightforward computation model in Isabelle/HOL. The analysis uses a given database to prove formal properties of our implemented functions with computer support. The main task in designing a practical formalization of correctness and efficient computer proofs of security properties is to cope with the complexity of cryptographic proving. We reduce this complexity by exploring a lightweight formalization that enables both appropriate formal definitions and efficient formal proofs. Consequently, we obtain reliable proofs with a minimal error rate that augment the used database, which provides a formal basis for further computer proof constructions in this area.
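
For orientation, the textbook form of the Rabin scheme that is formalized (shown here as an executable Python sketch without padding or redundancy; the toy primes are illustrative only, real keys use large primes with p ≡ q ≡ 3 (mod 4)):

    p, q = 7, 11                  # toy primes with p ≡ q ≡ 3 (mod 4)
    n = p * q                     # public key

    def encrypt(m, n):
        return (m * m) % n        # c = m^2 mod n

    def decrypt(c, p, q):
        # Square roots modulo p and q (possible because p, q ≡ 3 mod 4),
        # combined via the CRT; decryption yields four candidate plaintexts.
        mp = pow(c, (p + 1) // 4, p)
        mq = pow(c, (q + 1) // 4, q)
        roots = set()
        for a in (mp, p - mp):
            for b in (mq, q - mq):
                roots.add((a * q * pow(q, -1, p) + b * p * pow(p, -1, q)) % (p * q))
        return roots

    assert 20 in decrypt(encrypt(20, n), p, q)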

Effect of Valve Pressure Drop in Exergy Analysis of C2+ Recovery Plants Refrigeration Cycles

This paper provides an exergy analysis of the multistage refrigeration cycle used in a C2+ recovery plant. The behavior of an industrial refrigeration cycle with propane as the refrigerant has been investigated by the exergy method. A computational model based on exergy analysis is presented for investigating the effects of the valves on the exergy losses, the second-law efficiency, and the coefficient of performance (COP) of a vapor compression refrigeration cycle. The equations of exergy destruction and exergetic efficiency for the main cycle components, such as evaporators, condensers, compressors, and expansion valves, are developed, and the relations for the total exergy destruction in the cycle and the cycle exergetic efficiency are obtained. An ethane recovery unit with its refrigeration cycle has been simulated to support the exergy analysis. Using a typical actual work input value, the exergetic efficiency of the refrigeration cycle is determined to be 39.90%, indicating a great potential for improvement. The simulation results reveal that the heat exchanger and expansion sections have the lowest exergetic efficiencies among the compartments of the refrigeration cycle. Refrigeration calculations have been carried out through the analysis of T–S and P–H diagrams, from which the coefficient of performance (COP) was obtained as 1.85. The novelty of this article includes the effect and sensitivity analysis of molar flow, pressure drops and temperature on the exergy efficiency and coefficient of performance of the cycle.
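
For reference, the quantities reported above follow the standard definitions (textbook notation rather than the paper's nomenclature):

    \mathrm{COP} = \frac{\dot{Q}_{\mathrm{evap}}}{\dot{W}_{\mathrm{comp}}}, \qquad
    \dot{X}_{\mathrm{dest}} = T_{0}\,\dot{S}_{\mathrm{gen}}, \qquad
    \eta_{II} = \frac{\mathrm{COP}}{\mathrm{COP}_{\mathrm{rev}}}, \quad
    \mathrm{COP}_{\mathrm{rev}} = \frac{T_{L}}{T_{H} - T_{L}},

where T_0 is the dead-state temperature and T_L, T_H are the refrigerated-space and heat-rejection temperatures.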

Speech Encryption and Decryption Using Linear Feedback Shift Register (LFSR)

This paper considers the problem of cryptanalysis of stream ciphers. Attempts are needed to improve the existing attacks on stream ciphers and to distinguish the portions of ciphertext obtained from the encryption of plaintext in which some parts of the text are random and the rest are non-random. This paper presents a tutorial introduction to symmetric cryptography. The basic information-theoretic and computational properties of classic and modern cryptographic systems are presented, followed by an examination of the application of cryptography to the security of VoIP systems in computer networks using the LFSR algorithm. The implementation is developed in Java 2. The LFSR algorithm is appropriate for the encryption and decryption of online streaming data, e.g. VoIP (voice chatting over IP). This paper implements the encryption module that converts speech signals to ciphertext and the decryption module that converts the ciphertext back to speech signals.
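
A minimal sketch of LFSR-based stream encryption (the 16-bit register length, tap positions and seed are illustrative assumptions, not the design used in the paper):

    def lfsr_keystream(seed, taps=(16, 14, 13, 11), nbits=16):
        """Generate key bits from a Fibonacci LFSR with the given taps."""
        state = seed & ((1 << nbits) - 1)
        while True:
            bit = 0
            for t in taps:
                bit ^= (state >> (nbits - t)) & 1     # XOR of the tapped stages
            state = ((state << 1) | bit) & ((1 << nbits) - 1)
            yield bit

    def crypt(data, seed):
        """XOR each byte of speech data with 8 keystream bits; the same call decrypts."""
        ks = lfsr_keystream(seed)
        out = bytearray()
        for byte in data:
            k = 0
            for _ in range(8):
                k = (k << 1) | next(ks)
            out.append(byte ^ k)
        return bytes(out)

    samples = bytes([10, 200, 33, 97])                 # dummy speech samples
    assert crypt(crypt(samples, seed=0xACE1), seed=0xACE1) == samples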

Recognition of Isolated Speech Signals using Simplified Statistical Parameters

We present a novel scheme to recognize isolated speech signals using certain statistical parameters derived from those signals. The determination of the statistical estimates is based on extracted signal information rather than on the original signal itself, in order to reduce the computational complexity. After the speech signal is extracted from the ambient noise, subtle details of these estimates are first exploited to segregate the polysyllabic words from the monosyllabic ones. Precise recognition of each distinct word is then carried out by analyzing the histogram obtained from this information.
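
A sketch of the kind of statistical estimates and histogram referred to above (frame length, noise threshold and bin count are illustrative assumptions, not the authors' parameters):

    import numpy as np

    def frame_statistics(signal, frame_len=256, noise_floor=0.005):
        """Per-frame energy statistics of a speech signal after dropping
        low-energy (ambient-noise) frames; all thresholds are illustrative."""
        n = len(signal) // frame_len
        frames = signal[:n * frame_len].reshape(n, frame_len)
        energy = (frames ** 2).mean(axis=1)
        voiced = energy[energy > noise_floor]          # crude noise removal
        hist, edges = np.histogram(voiced, bins=16)    # histogram used for recognition
        return voiced.mean(), voiced.std(), hist

    signal = np.random.randn(8000) * 0.1               # dummy one-second recording
    mean_e, std_e, hist = frame_statistics(signal)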

Mutation Rate for Evolvable Hardware

Evolvable hardware (EHW) refers to a self-reconfiguring hardware design in which the configuration is under the control of an evolutionary algorithm (EA). A lot of research has been done in this area and several different EAs have been introduced. Every time a specific EA is chosen for solving a particular problem, all its components, such as population size, initialization, selection mechanism, mutation rate, and genetic operators, should be selected in order to achieve the best results. In the last three decades a lot of research has been carried out to identify the best parameters of the EA components for different "test problems". However, different researchers propose different solutions. In this paper the behaviour of the mutation rate in a (1+λ) evolution strategy (ES) for designing logic circuits, which has not been analyzed before, is studied in depth. The mutation rate in an EHW system modifies the values of the logic cell inputs, the cell type (for example from AND to NOR) and the circuit output. The behaviour of the mutation has been analyzed based on the number of generations, the genotype redundancy and the number of logic gates used in the evolved circuits. The experimental results characterize the mutation rate to be used during evolution for the design and optimization of logic circuits. Research on the best mutation rate over the last 40 years is also summarized.
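
A compact sketch of the (1+λ) ES loop whose mutation rate is being studied (the genotype encoding and fitness function below are placeholders, not the circuit representation used in the paper):

    import random

    GATES = ["AND", "OR", "NAND", "NOR", "XOR"]

    def mutate(genotype, rate):
        """Flip each gene (input index, gate type or output index) with probability `rate`."""
        child = list(genotype)
        for i in range(len(child)):
            if random.random() < rate:
                child[i] = random.choice(GATES) if isinstance(child[i], str) \
                           else random.randrange(8)
        return child

    def one_plus_lambda(fitness, parent, rate=0.05, lam=4, generations=1000):
        best = fitness(parent)
        for _ in range(generations):
            for child in (mutate(parent, rate) for _ in range(lam)):
                f = fitness(child)
                if f >= best:          # ties favour the offspring to keep evolution moving
                    parent, best = child, f
        return parent, best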

Design Techniques and Implementation of Low Power High-Throughput Discrete Wavelet Transform Filters for JPEG 2000 Standard

In this paper, the implementation of low-power, high-throughput convolutional filters for the one-dimensional Discrete Wavelet Transform (DWT) and its inverse is presented. The analysis filters have already been used for the implementation of a high-performance DWT encoder [15] with minimum memory requirements for the JPEG 2000 standard. This paper presents the design techniques and the implementation of the convolutional filters included in the JPEG 2000 standard for the forward and inverse DWT, targeting low-power operation, high performance and reduced memory accesses. Moreover, the filters are able to perform progressive computations so as to minimize the buffering between the decomposition and reconstruction phases. The experimental results illustrate the filters' low-power, high-throughput characteristics as well as their memory-efficient operation.
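
As a reminder of the underlying operation, one level of the 1D convolutional DWT filters and downsamples the input; the Haar pair below is only a stand-in for the JPEG 2000 9/7 and 5/3 filter banks implemented in the paper:

    import numpy as np

    low = np.array([1.0, 1.0]) / np.sqrt(2)    # low-pass analysis filter (Haar stand-in)
    high = np.array([1.0, -1.0]) / np.sqrt(2)  # high-pass analysis filter (Haar stand-in)

    def dwt_1d(signal):
        """Single decomposition level: convolve, then keep every second sample."""
        approx = np.convolve(signal, low)[1::2]    # approximation coefficients
        detail = np.convolve(signal, high)[1::2]   # detail coefficients
        return approx, detail

    cA, cD = dwt_1d(np.arange(8, dtype=float))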

Iterative Process to Improve Simple Adaptive Subdivision Surfaces Method with Butterfly Scheme

Subdivision surfaces are applied to entire meshes in order to produce smooth refined surfaces from a coarse mesh. Several schemes have been introduced in this area, each providing a set of rules that converge to smooth surfaces. However, computing and rendering all the vertices is inconvenient in terms of memory consumption and runtime during the subdivision process, and leads to a heavy computational load, especially at higher levels of subdivision. Adaptive subdivision is a method that subdivides only certain areas of the mesh while the rest is kept at a lower polygon count. Although adaptive subdivision occurs only in the selected areas, the smoothness of the produced surfaces can be preserved just as with regular subdivision. Nevertheless, the adaptive subdivision process is burdened by two tasks: calculations are needed to define the areas that require subdivision, and the cracks created by the subdivision depth difference between selected and unselected areas must be removed. Unfortunately, at higher levels of subdivision, adaptive subdivision still suffers from high memory consumption. This research introduces an iterative process for adaptive subdivision that improves the previous adaptive method and reduces the memory consumed on triangular meshes. The iterative process achieves better memory usage and appearance, producing fewer polygons while preserving smooth surfaces.
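
For reference, the butterfly scheme named in the title inserts a new point on each edge of a regular region using the standard stencil (quoted for context, with zero tension)

    q = \tfrac{1}{2}(v_1 + v_2) + \tfrac{1}{8}(v_3 + v_4) - \tfrac{1}{16}(v_5 + v_6 + v_7 + v_8),

where v_1, v_2 are the edge endpoints, v_3, v_4 the opposite vertices of the two adjacent triangles, and v_5 ... v_8 the outer vertices of the butterfly-shaped neighbourhood.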

A Computational Comparison between Revetec Engine and Conventional Internal Combustion Engines on the Indicated Torque

This paper investigates the effect of replacing the crankshaft with a cam on the indicated torque during the compression and power strokes in internal combustion engines. A cycloidal cam profile was used in the Revetec engine to calculate the torque and compare it with that of a conventional engine, using a computational method. First, the cylinder pressure was calculated using the Ferguson equation, and then the torque was calculated from the cylinder pressure values at every crank angle. The results show that by using a cycloidal cam profile in the Revetec engine the torque can be increased by 14% compared with conventional engines, which means an increase in engine efficiency.
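
For context, the cycloidal rise used for such a cam profile and a simplified virtual-work estimate of the resulting torque (generic relations, not the paper's full model) are

    s(\theta) = h\left[\frac{\theta}{\beta} - \frac{1}{2\pi}\sin\!\left(\frac{2\pi\theta}{\beta}\right)\right],
    \qquad T(\theta) \approx p(\theta)\,A_{p}\,\frac{\mathrm{d}s}{\mathrm{d}\theta},

where h is the total lift, β the cam angle over which the rise occurs, p(θ) the cylinder pressure and A_p the piston area.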

Rotor Flow Analysis using an Implicit Harmonic Balance Method

This paper is an extension of a previous work in which a diagonally implicit harmonic balance method was developed and applied to simulate the oscillatory motions of a pitching airfoil and wing. A more detailed study of the accuracy, convergence, and efficiency of the method is carried out in the current paper by varying the number of harmonics in the solution approximation. As the main advantage of the method is its usage in the design optimization of unsteady problems, it is applied to the more practical case of rotor flow analysis in forward flight and compared with flight test data and time-accurate computation results.
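
For readers unfamiliar with the approach, the harmonic balance approximation truncates the periodic solution to a finite Fourier series (generic form; N_H denotes the number of harmonics varied in the study)

    q(t) \approx \hat{q}_{0} + \sum_{n=1}^{N_H}\bigl[\hat{q}_{a,n}\cos(n\omega t) + \hat{q}_{b,n}\sin(n\omega t)\bigr],

so that the unsteady problem is replaced by 2N_H + 1 coupled steady-like sub-problems at the prescribed fundamental frequency ω of the pitching or rotor motion.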

A Study of Computational Organizational Narrative Generation for Decision Support

Narratives are invaluable assets of human lives. Due to their distinct features, narratives are useful for supporting human reasoning processes. However, many useful narratives nowadays remain as residuals in organizations or in human minds. Researchers have contributed effort to investigating and improving narrative generation processes. This paper contemplates the essential components of narratives and explores a computational approach to acquiring and extracting knowledge in order to generate narratives. The methodology and its significant benefit for decision support are presented.

The Effect of Increment in Simulation Samples on a Combined Selection Procedure

Statistical selection procedures are used to select the best simulated system from a finite set of alternatives. In this paper, we present a procedure that can be used to select the best system when the number of alternatives is large. The proposed procedure is a combination of Ranking and Selection and Ordinal Optimization procedures. In order to improve the performance of Ordinal Optimization, the Optimal Computing Budget Allocation (OCBA) technique is used to determine the best simulation lengths for all simulated systems and to reduce the total computation time. We also discuss the effect of increasing the number of simulation samples in the combined procedure. The results of a numerical illustration clearly show the effect of this increment on the proposed combined selection procedure.
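
For reference, the asymptotic OCBA allocation rule used in such procedures (standard form from the literature, not re-derived in this paper) assigns simulation replications N_i according to

    \frac{N_i}{N_j} = \left(\frac{\sigma_i/\delta_{b,i}}{\sigma_j/\delta_{b,j}}\right)^{2}, \quad i \ne j \ne b,
    \qquad N_b = \sigma_b \sqrt{\sum_{i \ne b} \frac{N_i^{2}}{\sigma_i^{2}}},

where b is the observed best design, σ_i the sample standard deviation of design i, and δ_{b,i} the difference between the sample means of designs b and i.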

Two-dimensional Differential Transform Method for Solving Linear and Non-linear Goursat Problem

A method for solving linear and non-linear Goursat problems is given using the two-dimensional differential transform method. The approximate solution of the problem is calculated in the form of a series with easily computable terms, and exact solutions can be recovered from the known closed forms of the series. The method can easily be applied to many linear and non-linear problems and is capable of reducing the amount of computational work. Several examples are given to demonstrate the reliability and performance of the presented method.
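
For completeness, the two-dimensional differential transform of a function w(x, y) and its inverse (the standard definitions on which the method rests) are

    W(k,h) = \frac{1}{k!\,h!}\left[\frac{\partial^{\,k+h} w(x,y)}{\partial x^{k}\,\partial y^{h}}\right]_{(0,0)},
    \qquad
    w(x,y) = \sum_{k=0}^{\infty}\sum_{h=0}^{\infty} W(k,h)\,x^{k} y^{h},

so the Goursat problem is converted into a recurrence for the coefficients W(k, h), from which the series solution is assembled.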

Implementation of Adder-Subtracter Design with Verilog HDL

With the increasing density of chips, designers are trying to place as many computational and storage facilities as possible on a single chip. Along with the complexity of computational and storage circuits, designing, testing and debugging become more and more complex and expensive, so hardware designs are built using a high-speed hardware description language, which is more efficient and cost effective. This paper focuses on the implementation of a 32-bit ALU design based on the Verilog hardware description language. The adder and subtracter operate correctly on both unsigned and positive numbers. In an ALU, addition takes most of the time if it uses a ripple-carry adder. The general strategy for designing fast adders is to reduce the time required to form carry signals; adders that use this principle are called carry look-ahead adders. The carry look-ahead adder is designed as a combination of 4-bit adders. The syntax of Verilog HDL is similar to the C programming language. This paper proposes a unified approach to ALU design in which both simulation and formal verification can co-exist.
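
For clarity, the carry look-ahead principle mentioned above replaces the rippling carry with generate and propagate signals (standard Boolean relations, independent of the particular Verilog coding)

    G_i = A_i B_i, \qquad P_i = A_i \oplus B_i, \qquad C_{i+1} = G_i + P_i C_i,

which expands for a 4-bit block to

    C_1 = G_0 + P_0 C_0
    C_2 = G_1 + P_1 G_0 + P_1 P_0 C_0
    C_3 = G_2 + P_2 G_1 + P_2 P_1 G_0 + P_2 P_1 P_0 C_0
    C_4 = G_3 + P_3 G_2 + P_3 P_2 G_1 + P_3 P_2 P_1 G_0 + P_3 P_2 P_1 P_0 C_0,

so every carry of the block is available after a fixed number of gate delays instead of rippling bit by bit.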