Abstract: The most reliable and accurate description of the actual behavior of a software system is its source code. However, not all questions about the system can be answered directly by resorting to this repository of information. The reverse engineering methodology aims at extracting abstract, goal-oriented “views” of the system, able to summarize relevant properties of the computation performed by the program. Concentrating on reverse engineering, we modeled the C++ files by designing a translator.
Abstract: A parallel computational fluid dynamics code has been
developed for the study of the aerodynamic heating problem in
hypersonic flows. The code employs the 3D Navier-Stokes equations as
the basic governing equations to simulate laminar hypersonic flow.
The cell-centered finite volume method based on a structured grid is
applied for
spatial discretization. The AUSMPW+ scheme is used for the inviscid
fluxes, and the MUSCL approach is used for higher order spatial
accuracy. The implicit LU-SGS scheme is applied for time integration
to accelerate the convergence of computations in steady flows. A
parallel programming method based on MPI is employed to shorten
the computing time. The validity of the code is demonstrated by
comparing the numerical calculation results with the experimental data
of a hypersonic flow field around a blunt body.
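The cell-centered finite-volume update with MUSCL reconstruction described above can be illustrated in heavily reduced form for 1-D linear advection, using a simple upwind flux in place of AUSMPW+ and explicit time stepping in place of LU-SGS; the function names and parameters are illustrative, not from the paper's code:

```python
import numpy as np

def minmod(a, b):
    # Minmod slope limiter used by the MUSCL reconstruction
    return np.where(a * b > 0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def muscl_step(u, c):
    # One explicit update of 1-D linear advection (positive wave speed)
    # on a periodic, cell-centered finite-volume grid; c is the CFL
    # number, and the flux already includes the dt/dx factor.
    slope = minmod(u - np.roll(u, 1), np.roll(u, -1) - u)
    u_face = u + 0.5 * slope          # reconstructed upwind state at face i+1/2
    flux = c * u_face
    return u - (flux - np.roll(flux, 1))

u = np.zeros(100)
u[40:60] = 1.0                        # boxcar-like initial profile
mass0 = u.sum()
for _ in range(50):
    u = muscl_step(u, 0.5)
```

Because the update is written in flux form, the total "mass" in the periodic domain is conserved to round-off, which is the defining property of a finite-volume discretization.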
Abstract: This paper investigates the performance of a speech
recognizer in an interactive voice response system for speech signals
coded using a vector quantization technique, namely the Multi
Switched Split Vector Quantization technique. The process of
recognizing the coded output can be used in voice banking applications.
The recognition technique used for the recognition of the coded speech
signals is the Hidden Markov Model technique. The spectral distortion
performance, computational complexity, and memory requirements of
Multi Switched Split Vector Quantization Technique and the
performance of the speech recognizer at various bit rates have been
computed. From the results it is found that the speech recognizer
performs best at 24 bits/frame, with the recognition rate varying
from 100% to 93.33% across the various bit rates.
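The split-vector-quantization step underlying such a coder can be sketched as follows; this is a generic illustration of splitting a vector across independent codebooks, not the paper's multi-switched variant, and all names, dimensions, and codebook sizes are hypothetical:

```python
import numpy as np

def split_vq(vec, codebooks):
    # Split vector quantization: the input vector is cut into equal
    # sub-vectors, each matched against its own small codebook, so search
    # and storage costs grow with the small sub-codebooks rather than
    # with one huge full-dimension codebook.
    parts = np.split(vec, len(codebooks))
    indices, recon = [], []
    for part, cb in zip(parts, codebooks):
        i = int(np.argmin(np.linalg.norm(cb - part, axis=1)))
        indices.append(i)                 # transmitted index for this split
        recon.append(cb[i])               # decoder-side reconstruction
    return indices, np.concatenate(recon)

rng = np.random.default_rng(0)
codebooks = [rng.standard_normal((16, 2)) for _ in range(3)]  # 3 splits, 4 bits each
vec = np.concatenate([cb[5] for cb in codebooks])  # a vector lying exactly on codewords
idx, recon = split_vq(vec, codebooks)
```

Since the test vector is built from codeword 5 of each codebook, the quantizer recovers it exactly; a real speech vector would incur a small reconstruction distortion instead.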
Abstract: Laser Profiler (LP) data from aerial laser surveys have
been increasingly used as topographical inputs to numerical
simulations of flooding and inundation in river basins. LP data has
great potential for reproducing topography, but its effective usage has
not yet been fully established. In this study, flooding and inundation
are simulated numerically using LP data for the Jobaru River basin of
Japan’s Saga Plain. The analysis shows that the topography is
reproduced satisfactorily in the computational domain with urban and
agricultural areas requiring different grid sizes. A 2-D numerical
simulation shows that flood flow behavior changes as grid size is
varied.
Abstract: Fast forecasting of stock market prices is very important for
strategic planning. In this paper, a new approach for fast forecasting of
stock market prices is presented. The proposed algorithm uses new
high-speed time delay neural networks (HSTDNNs). The operation of these
networks relies on performing cross correlation in the frequency
domain between the input data and the input weights of neural
networks. It is proved mathematically and practically that the number
of computation steps required for the presented HSTDNNs is less
than that needed by traditional time delay neural networks
(TTDNNs). Simulation results using MATLAB confirm the
theoretical computations.
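The core idea of the HSTDNNs, computing cross-correlation in the frequency domain, can be sketched with NumPy; the FFT route needs O(n log n) operations versus O(n^2) for the direct sum, and the variable names below are illustrative:

```python
import numpy as np

def cross_correlation_fft(x, w):
    # Circular cross-correlation via the frequency domain:
    # IFFT(FFT(x) * conj(FFT(w))), O(n log n) operations.
    return np.real(np.fft.ifft(np.fft.fft(x) * np.conj(np.fft.fft(w))))

def cross_correlation_direct(x, w):
    # Direct O(n^2) circular cross-correlation, for comparison
    n = len(x)
    return np.array([sum(x[(i + k) % n] * w[i] for i in range(n))
                     for k in range(n)])

rng = np.random.default_rng(0)
x = rng.standard_normal(64)       # stand-in for the input data window
w = rng.standard_normal(64)       # stand-in for the input weights
```

Both routes produce identical results, which is the correlation theorem the speed-up relies on.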
Abstract: Encryption and decryption in RSA are done by modular exponentiation, which is achieved by repeated modular multiplication. Hence the efficiency of modular multiplication directly determines the efficiency of the RSA cryptosystem. This paper designs a Modified Montgomery Modular Multiplication in which the addition of operands is computed by a 4:2 compressor. The basic logic operations in the addition are partitioned over two iterations such that parallel computations are performed. This reduces the critical path delay of the proposed Montgomery design. The proposed design and RSA are implemented on Virtex 2 and Virtex 5 FPGAs. The two factors, partitioning and parallelism, improve the frequency and throughput of the proposed design.
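Radix-2 (bit-serial) Montgomery multiplication, the operation such hardware designs accelerate, can be sketched in software; this generic version omits the 4:2-compressor partitioning that is the paper's contribution, and the function names are illustrative:

```python
def montgomery_multiply(a, b, n, k):
    # Bit-serial Montgomery multiplication: returns a*b*2^(-k) mod n
    # for odd n, using only shifts and additions (no trial division).
    t = 0
    for i in range(k):
        t += ((a >> i) & 1) * b       # add b if bit i of a is set
        if t & 1:
            t += n                    # n is odd, so this makes t even
        t >>= 1                       # exact division by 2
    return t - n if t >= n else t

def montgomery_modmul(a, b, n):
    # Plain a*b mod n built from two Montgomery passes (a, b < n, n odd)
    k = n.bit_length()
    r2 = pow(2, 2 * k, n)                       # precomputed R^2 mod n, R = 2^k
    a_bar = montgomery_multiply(a, r2, n, k)    # map a to Montgomery form a*R mod n
    return montgomery_multiply(a_bar, b, n, k)  # (a*R)*b*R^(-1) = a*b mod n
```

In repeated use (as in modular exponentiation) operands stay in Montgomery form throughout, so the conversion cost is paid only once per exponentiation.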
Abstract: Variable speed drives are a growing and evolving field. Their expansion depends on progress in several branches of science, such as power systems, microelectronics, and control methods. Artificial intelligence comprises hard computing and soft computing, and it has found wide application in nonlinear systems such as motor drives, because it offers human-like intelligence without human emotional weaknesses such as anger. Artificial intelligence is used for various tasks such as approximation, control, and monitoring. Because artificial intelligence techniques can serve as controllers for any system without requiring a mathematical model of that system, they have been used in electrical drive control. In this manner, the efficiency and reliability of drives increase, while their volume, weight, and cost decrease.
Abstract: In this paper we propose a method for vision systems
to consistently represent functional dependencies between different
visual routines along with relational short- and long-term knowledge
about the world. Here the visual routines are bound to visual properties
of objects stored in the memory of the system. Furthermore,
the functional dependencies between the visual routines are seen
as a graph also belonging to the object's structure. This graph is
parsed in the course of acquiring a visual property of an object to
automatically resolve the dependencies of the bound visual routines.
Using this representation, the system is able to dynamically rearrange
the processing order while keeping its functionality. Additionally, the
system is able to estimate the overall computational costs of a certain
action. We will also show that the system can efficiently use that
structure to incorporate already acquired knowledge and thus reduce
the computational demand.
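Parsing a dependency graph of visual routines to resolve an execution order and estimate the overall cost can be sketched as a depth-first traversal; the routine names and costs below are hypothetical, not from the paper's system:

```python
def resolve_order(dependencies, goal):
    # Depth-first parse of the dependency graph: every prerequisite
    # routine is scheduled before the routines that consume its result.
    order, seen = [], set()
    def visit(node):
        if node in seen:
            return
        seen.add(node)
        for dep in dependencies.get(node, []):
            visit(dep)
        order.append(node)
    visit(goal)
    return order

# Hypothetical routine graph: acquiring an object's color needs a
# segmented region, which in turn needs a captured image.
deps = {"color": ["segment"], "segment": ["capture"], "capture": []}
costs = {"capture": 5, "segment": 20, "color": 2}     # per-routine costs
plan = resolve_order(deps, "color")
total_cost = sum(costs[r] for r in plan)              # overall cost estimate
```

Because already-visited routines are skipped, knowledge acquired earlier is reused for free, mirroring the reduction in computational demand described above.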
Abstract: In this paper, we have combined some spatial derivatives with the optimised time derivative proposed by Tam and Webb in order to approximate the linear advection equation, which is given by ∂u/∂t + ∂f/∂x = 0. These spatial derivatives are as follows: a standard 7-point 6th-order central difference scheme (ST7), a standard 9-point 8th-order central difference scheme (ST9), and optimised schemes designed by Tam and Webb, Lockard et al., Zingg et al., Zhuang and Chen, and Bogey and Bailly. Thus, these seven different spatial derivatives have been coupled with the optimised time derivative to obtain seven different finite-difference schemes to approximate the linear advection equation. We have analysed the variation of the modified wavenumber and the group velocity, both with respect to the exact wavenumber, for each spatial derivative. The problems considered are the 1-D propagation of a boxcar function, the propagation of an initial disturbance consisting of a sine and a Gaussian function, and the propagation of a Gaussian profile. It is known that the choice of the CFL number affects the quality of results in terms of dissipation and dispersion characteristics. Based on the numerical experiments solved and the numerical methods used to approximate the linear advection equation, it is observed in this work that the quality of results depends on the choice of the CFL number, even for optimised numerical methods. The errors from the numerical results have been quantified into dispersion and dissipation using a technique devised by Takacs. Also, the quantity Exponential Error for Low Dispersion and Low Dissipation, eeldld, has been computed from the numerical results. Moreover, it has been found that the quantity eeldld can be used as a measure of the total error; in particular, the total error is a minimum when eeldld is a minimum.
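The modified-wavenumber analysis mentioned above can be reproduced for the ST7 scheme: for a (2M+1)-point central difference with one-sided coefficients a_m, the modified wavenumber satisfies k*Δx = Σ_m 2 a_m sin(m kΔx). The coefficients below are the standard 6th-order values; the function name is illustrative:

```python
import numpy as np

def modified_wavenumber(theta, a):
    # k*Δx of a central difference with one-sided coefficients a_m,
    # evaluated at theta = kΔx:  k*Δx = sum_m 2 a_m sin(m·theta)
    return sum(2 * am * np.sin(m * theta) for m, am in enumerate(a, start=1))

st7 = [3 / 4, -3 / 20, 1 / 60]    # standard 7-point 6th-order coefficients

# A well-resolved wave is differentiated almost exactly...
low = modified_wavenumber(0.1, st7)
# ...while the 2-cell wave (kΔx = π) is invisible to any central scheme
nyq = modified_wavenumber(np.pi, st7)
```

Plotting k*Δx against kΔx over (0, π) reproduces the dispersion curves compared in the paper; the optimised schemes trade formal order for a wider range over which k*Δx stays close to kΔx.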
Abstract: As networking has become popular, web-learning has become
a trend in tool design. Moreover, five-axis machining has been
widely used in industry recently; however, it has potential
axial-table collision problems. Thus this paper aims at proposing an
efficient web-learning collision detection tool for five-axis
machining. However, collision detection consumes heavy computing
resources that few devices can support, so this research uses a
systematic approach based on web knowledge to detect collisions. The
methodologies include the kinematics analyses for five-axis motions,
separating axis method for collision detection, and computer
simulation for verification. The machine structure is modeled as STL
format in CAD software. The input to the detection system is the
g-code part program, which describes the tool motions to produce the
part surface. This research produced a simulation program with C
programming language and demonstrated a five-axis machining
example with collision detection on a web site. The system simulates
the five-axis CNC tool-trajectory motion and checks for any collisions
according to the input g-codes, and also supports high-performance
web service benefiting from C. The results show that our method
improves computational efficiency by a factor of 4.5 compared to the
conventional detection method.
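The separating axis method used for collision detection can be sketched in 2-D for convex polygons (the paper applies it in 3-D to machine components; the geometry and names here are hypothetical):

```python
import numpy as np

def project(poly, axis):
    # Interval covered by the polygon's vertices projected onto an axis
    dots = poly @ axis
    return dots.min(), dots.max()

def sat_collide(poly_a, poly_b):
    # Separating axis test for 2-D convex polygons: the shapes are
    # disjoint iff some edge normal yields non-overlapping projections.
    for poly in (poly_a, poly_b):
        n = len(poly)
        for i in range(n):
            edge = poly[(i + 1) % n] - poly[i]
            axis = np.array([-edge[1], edge[0]])   # normal to this edge
            a_min, a_max = project(poly_a, axis)
            b_min, b_max = project(poly_b, axis)
            if a_max < b_min or b_max < a_min:
                return False                       # separating axis found
    return True

square = np.array([[0, 0], [2, 0], [2, 2], [0, 2]], float)
overlapping = square + np.array([1.0, 1.0])        # shifted, still intersecting
separated = square + np.array([5.0, 0.0])          # clearly apart
```

In 3-D the same idea applies with face normals and edge cross-products as candidate axes, which is why the test is popular for checking CNC component interference.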
Abstract: This paper investigates a possible optimization of some
linear algebra problems which can be solved by parallel processing
using special arrays called systolic arrays. Some special types of
transformations are used for the design of these arrays, and we
show the characteristics of these arrays. The main focus is on
discussing the advantages of these arrays in the parallel
computation of matrix products, with a special approach to the
design of a systolic array for matrix multiplication.
Multiplication of large matrices requires a lot of computational
time, and its complexity is O(n^3). Many algorithms (both sequential
and parallel) have been developed with the purpose of minimizing the
calculation time, and systolic arrays are well suited for this
purpose. In this paper we show that using an appropriate
transformation leads to finding better arrays for doing the
calculations of this type.
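A cycle-by-cycle software model of an output-stationary systolic array for matrix multiplication can be sketched as follows; operand pair A[i, k], B[k, j] meets at cell (i, j) at cycle t = i + j + k, so the whole product drains in 3n − 2 cycles (the names and scheduling convention below are illustrative):

```python
import numpy as np

def systolic_matmul(A, B):
    # Cycle-accurate model of an n×n output-stationary systolic array:
    # rows of A stream in from the left and columns of B from the top,
    # skewed in time so that A[i, k] and B[k, j] meet at cell (i, j)
    # exactly at cycle t = i + j + k, where they are multiplied and
    # accumulated locally.
    n = A.shape[0]
    C = np.zeros((n, n))
    for t in range(3 * n - 2):                  # the array drains in 3n-2 cycles
        for i in range(n):
            for j in range(n):
                k = t - i - j                   # operand index arriving this cycle
                if 0 <= k < n:
                    C[i, j] += A[i, k] * B[k, j]
    return C

A = np.arange(9.0).reshape(3, 3)
B = np.arange(9.0, 18.0).reshape(3, 3)
```

The O(n^3) multiply-accumulate work is unchanged, but it is spread over O(n) cycles across n^2 cells, which is the parallel speed-up systolic designs exploit.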
Abstract: The main goal of the present work is to decrease the
computational burden for optimum design of steel frames with
frequency constraints using a new type of neural networks called
Wavelet Neural Network (WNN). It is proposed to train a suitable
neural network for frequency approximation to work as the analysis
program. The combination of wavelet theory and neural networks has
led to the development of wavelet neural networks, which are
feed-forward networks using wavelets as activation functions.
Wavelets are mathematical functions with suitable inner parameters,
which help them to approximate arbitrary functions. The WNN was used
to predict the frequency of the structures, with a RAtional function
with Second-order Poles (RASP) wavelet as the transfer function. It
is shown that the convergence speed is faster than that of other
neural networks. Comparisons of the WNN with the embedded Artificial
Neural Network (ANN), with approximate techniques, and with
analytical solutions available in the literature are also given.
Abstract: This paper presents a computational methodology
based on matrix operations for a computer based solution to the
problem of performance analysis of software reliability models
(SRMs). A set of seven comparison criteria has been formulated to
rank various non-homogenous Poisson process software reliability
models proposed during the past 30 years to estimate software
reliability measures such as the number of remaining faults, software
failure rate, and software reliability. Selection of optimal SRM for
use in a particular case has been an area of interest for researchers in
the field of software reliability. Tools and techniques for software
reliability model selection found in the literature cannot be used with
high level of confidence as they use a limited number of model
selection criteria. A real data set from a medium-size software
project, taken from published papers, has been used to demonstrate
the matrix method. The result of this study is a ranking of SRMs
based on the Permanent value of the criteria matrix formed for each
model from the comparison criteria. The software reliability model
with the highest value of the Permanent is ranked first, and so on.
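Evaluating the Permanent of a criteria matrix, the quantity the ranking is based on, can be done with Ryser's formula in O(2^n · n^2) time rather than the n! cofactor expansion; this is a generic sketch of the Permanent itself, not the paper's full matrix method:

```python
from itertools import combinations

def permanent(M):
    # Ryser's formula:
    #   per(M) = (-1)^n * sum over non-empty column subsets S of
    #            (-1)^|S| * prod_i (sum_{j in S} M[i][j])
    # which costs O(2^n * n^2) instead of the n! term expansion.
    n = len(M)
    total = 0
    for r in range(1, n + 1):
        for cols in combinations(range(n), r):
            prod = 1
            for row in M:
                prod *= sum(row[j] for j in cols)
            total += (-1) ** r * prod
    return (-1) ** n * total
```

The Permanent resembles the determinant but with all signs positive (per([[1, 2], [3, 4]]) = 1·4 + 2·3 = 10), so, unlike the determinant, it aggregates all criteria contributions without cancellation.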
Abstract: A state-of-the-art Speaker Identification (SI) system requires a robust feature extraction unit followed by a speaker modeling scheme for generalized representation of these features. Over the years, Mel-Frequency Cepstral Coefficients (MFCC), modeled on the human auditory system, have been used as a standard acoustic feature set for SI applications. However, due to the structure of its filter bank, MFCC captures vocal tract characteristics more effectively in the lower frequency regions. This paper proposes a new set of features using a complementary filter bank structure which improves the distinguishability of speaker-specific cues present in the higher frequency zone. Unlike high-level features that are difficult to extract, the proposed feature set involves little computational burden during the extraction process. When combined with MFCC via a parallel implementation of speaker models, the proposed feature set outperforms baseline MFCC significantly. This proposition is validated by experiments conducted on two different kinds of public databases, namely YOHO (microphone speech) and POLYCOST (telephone speech), with Gaussian Mixture Models (GMM) as the classifier for various model orders.
Abstract: Corporate credit rating prediction using statistical and
artificial intelligence (AI) techniques has been one of the attractive
research topics in the literature. In recent years, multiclass
classification models such as artificial neural network (ANN) or
multiclass support vector machine (MSVM) have become very
appealing machine learning approaches due to their good
performance. However, most of them have focused only on classifying
samples into nominal categories, so the unique characteristic of
credit ratings, ordinality, has seldom been considered in their
approaches. This study proposes new types of ANN and MSVM
classifiers, which are named OMANN and OMSVM respectively.
OMANN and OMSVM are designed to extend binary ANN or SVM
classifiers by applying ordinal pairwise partitioning (OPP) strategy.
These models can handle ordinal multiple classes efficiently and
effectively. To validate the usefulness of these two models, we applied
them to the real-world bond rating case. We compared the results of
our models to those of conventional approaches. The experimental
results showed that our proposed models improve classification
accuracy in comparison to typical multiclass classification techniques
while requiring fewer computational resources.
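The ordinal pairwise partitioning idea, decomposing an ordinal prediction into a chain of binary "above grade k?" decisions, can be sketched as follows; the thresholds and lambda classifiers here are hypothetical stand-ins for trained binary ANN/SVM models:

```python
def opp_predict(x, binary_classifiers):
    # One-vs-following ordinal decomposition (a common OPP variant):
    # classifier k answers "is the rating above grade k?"; the predicted
    # grade is the number of consecutive "yes" answers.
    grade = 0
    for clf in binary_classifiers:
        if not clf(x):
            break
        grade += 1
    return grade

# Hypothetical 1-D stand-ins for trained binary classifiers:
# grades 0..3 separated by score thresholds 2, 5 and 8
clfs = [lambda x, t=t: x > t for t in (2, 5, 8)]
```

Stopping at the first "no" exploits the ordering of the classes, so a K-grade problem needs only K − 1 binary models instead of one monolithic multiclass model.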
Abstract: Independent component analysis (ICA) is a computational method for finding underlying signals or components from multivariate statistical data. The ICA method has been successfully applied in many fields, e.g. in vision research, brain imaging, geological signals, and telecommunications. In this paper, we apply the ICA method to the analysis of mass spectra of oligomeric species emerging from aluminium sulphate. Mass spectra are typically complex, because they are linear combinations of spectra from different types of oligomeric species. The results show that ICA can decompose the spectra into components from which useful information can be extracted. This information is essential in developing the coagulation phases of water treatment processes.
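A minimal ICA decomposition of linearly mixed signals can be sketched with whitening plus a symmetric FastICA iteration; the sources, mixing matrix, and iteration count below are illustrative toy data, not the mass spectra of the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 8, 2000)
s1 = np.sign(np.sin(3 * t))                     # square-wave source
s2 = np.sin(7 * t)                              # sinusoidal source
S = np.vstack([s1, s2])
X = np.array([[0.6, 0.4], [0.3, 0.7]]) @ S      # two observed linear mixtures

# Whitening: rotate and scale the mixtures to unit covariance
X = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(X @ X.T / X.shape[1])
Z = E @ np.diag(d ** -0.5) @ E.T @ X

# Symmetric FastICA fixed-point iteration with the tanh contrast
W = rng.standard_normal((2, 2))
for _ in range(200):
    Y = np.tanh(W @ Z)
    W_new = (Y @ Z.T) / Z.shape[1] - np.diag((1 - Y ** 2).mean(axis=1)) @ W
    U, _, Vt = np.linalg.svd(W_new)             # symmetric decorrelation
    W = U @ Vt
recovered = W @ Z                               # estimated sources (up to sign/order)
```

Each recovered row matches one original source up to sign and scale, the well-known indeterminacies of ICA; for spectra the same machinery separates component spectra from their observed linear combinations.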
Abstract: Octree compression techniques have been used
for several years for compressing large three dimensional data
sets into homogeneous regions. This compression technique
is ideally suited to datasets which have similar values in
clusters. Oil engineers represent reservoirs as a three dimensional
grid where hydrocarbons occur naturally in clusters. This
research looks at the efficiency of storing these grids using
octree compression techniques where grid cells are broken
into active and inactive regions. Initial experiments yielded
high compression ratios, as only active leaf nodes and their
ancestor (header) nodes are stored as a bitstream in a file on
disk. Savings in computational time and memory were possible
at decompression, as only active leaf nodes are sent to the
graphics card, eliminating the need to reconstruct the original
matrix. This results in a more compact vertex table, which can
be loaded into the graphics card more quickly, giving shorter
refresh delay times.
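The compression idea, storing homogeneous regions as single nodes, can be sketched with a 2-D quadtree analogue of the octree; the 8×8 grid, its single active cluster, and all names are hypothetical:

```python
def quadtree(grid, x, y, size):
    # 2-D quadtree analogue of octree compression: a homogeneous block
    # becomes a single leaf; a mixed block splits into four children.
    vals = {grid[j][i] for j in range(y, y + size) for i in range(x, x + size)}
    if len(vals) == 1:
        return vals.pop()                       # homogeneous leaf (active or inactive)
    h = size // 2
    return (quadtree(grid, x, y, h),     quadtree(grid, x + h, y, h),
            quadtree(grid, x, y + h, h), quadtree(grid, x + h, y + h, h))

def count_leaves(node):
    # Number of stored leaf nodes, i.e. the compressed size in blocks
    return sum(count_leaves(c) for c in node) if isinstance(node, tuple) else 1

# 8x8 reservoir grid: a single 2x2 active cluster among inactive cells
grid = [[0] * 8 for _ in range(8)]
for j in (0, 1):
    for i in (0, 1):
        grid[j][i] = 1
tree = quadtree(grid, 0, 0, 8)
```

Here the 64-cell grid compresses to 7 leaves; because hydrocarbons cluster spatially, real reservoir grids show the same strong homogeneity, one octant deeper per dimension.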
Abstract: In this paper we present a soft timing phase estimation (STPE) method for wireless mobile receivers operating at low signal-to-noise ratios (SNRs). Discrete Polyphase Matched (DPM) filters, a Log-maximum a posteriori probability (MAP) algorithm and/or a Soft-output Viterbi algorithm (SOVA) are combined to derive a new timing recovery (TR) scheme. We apply this scheme to a wireless cellular communication system model that comprises a raised cosine filter (RCF), a bit-interleaved turbo-coded multi-level modulation (BITMM) scheme, and a channel assumed to be memory-less. Furthermore, no clock signals are transmitted to the receiver, contrary to the classical data aided (DA) models. This new model ensures that both the bandwidth and the power of the communication system are conserved. However, the computational complexity of ideal turbo synchronization is increased by 50%. Several simulation tests on bit error rate (BER) and block error rate (BLER) versus low SNR reveal that the proposed iterative soft timing recovery (ISTR) scheme outperforms the conventional schemes.
Abstract: A Self-Excited Induction Generator (SEIG) builds up voltage as it enters its magnetic saturation region. Due to non-linear magnetic characteristics, the performance analysis of an SEIG involves cumbersome mathematical computations. The dependence of the air-gap voltage on the saturated magnetizing reactance can only be established at rated frequency by conducting a laboratory test commonly known as the synchronous run test. However, there is no laboratory method to determine the saturated magnetizing reactance and air-gap voltage of an SEIG at varying speed, terminal capacitance, and other loading conditions. For overall analysis of an SEIG, prior information on the magnetizing reactance, generated frequency, and air-gap voltage is essentially required; thus, analytical methods are the only alternative to determine these variables. The absence of a direct mathematical relationship between these variables for different terminal conditions has forced researchers to evolve new computational techniques. Artificial Neural Networks (ANNs) are very useful for the solution of such complex problems, as they do not require any a priori information about the system. In this paper, an attempt is made to use cascaded neural networks to first determine the generated frequency and magnetizing reactance under varying terminal conditions, and then the air-gap voltage of the SEIG. The results obtained from the ANN model are used to evaluate the overall performance of the SEIG and are found to be in good agreement with experimental results. Hence, it is concluded that the analysis of an SEIG can be carried out effectively using ANNs.
Abstract: This paper presents a new approach using a Combined Artificial Neural Network (CANN) module for daily peak load forecasting. Five different computational techniques (the constrained method, the unconstrained method, Evolutionary Programming (EP), Particle Swarm Optimization (PSO), and the Genetic Algorithm (GA)) have been used to identify the CANN module for peak load forecasting. In this paper, a set of neural networks has been trained with different architectures and training parameters. The networks are trained and tested on actual load data for the city of Chennai (India). A set of better-trained conventional ANNs is selected to develop the CANN module using these different algorithms, instead of using the single best conventional ANN. The results obtained using the CANN module confirm its validity.