Abstract: Wind energy has been shown to be one of the most
viable sources of renewable energy. With current technology, the low
cost of wind energy is competitive with more conventional sources of
energy such as coal. Most blades available for commercial grade
wind turbines incorporate a straight span-wise profile and airfoil
shaped cross sections. These blades are found to be very efficient at
lower wind speeds in comparison to the potential energy that can be
extracted. However as the oncoming wind speed increases the
efficiency of the blades decreases as they approach a stall point. This
paper explores the possibility of increasing the efficiency of the
blades at higher wind speeds while maintaining efficiency at the
lower wind speeds. The design intends to maintain efficiency at
lower wind speeds by selecting the appropriate orientation and size
of the airfoil cross sections based on a low oncoming wind speed and
a given constant rotation rate. The blades will be made more efficient
at higher wind speeds by implementing a swept blade profile.
Performance was investigated using computational fluid
dynamics (CFD).
Abstract: The Extended Kalman Filter (EKF) is probably the most
widely used estimation algorithm for nonlinear systems. However,
it not only has difficulties arising from linearization but also often
becomes numerically unstable because of the computer round-off
errors that occur in the course of its implementation. To overcome
linearization limitations, the unscented transformation (UT) was
developed as a method to propagate mean and covariance
information through nonlinear transformations. A Kalman filter that
uses the UT to calculate the first two statistical moments is called
the Unscented Kalman Filter (UKF). The square-root form of the UKF
(SR-UKF) was developed by Rudolph van der Merwe and Eric Wan to
achieve numerical stability and guarantee positive semi-definiteness
of the Kalman filter covariances. This paper develops another
implementation of the SR-UKF for the sequential measurement-update
equation, and also derives a new UD covariance factorization filter
for the implementation of the UKF. This filter is equivalent to the UKF
but is computationally more efficient.
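As a point of reference, a minimal NumPy sketch of the scaled unscented transform at the heart of the UKF is given below. The function name, the default scaling parameters, and the interface are illustrative assumptions; the paper's SR-UKF and UD-factorization variants are deliberately not reproduced.

    import numpy as np

    def unscented_transform(f, mean, cov, alpha=1e-3, beta=2.0, kappa=0.0):
        """Propagate (mean, cov) through a nonlinear function f via the UT.

        A minimal sketch of the scaled unscented transform; the SR-UKF
        and UD-factorization variants of the paper are not shown.
        """
        n = len(mean)
        lam = alpha**2 * (n + kappa) - n
        # Matrix square root of the scaled covariance (Cholesky factor).
        S = np.linalg.cholesky((n + lam) * cov)
        # 2n + 1 sigma points: the mean plus/minus the columns of S.
        sigma = np.vstack([mean, mean + S.T, mean - S.T])
        wm = np.full(2 * n + 1, 1.0 / (2.0 * (n + lam)))
        wc = wm.copy()
        wm[0] = lam / (n + lam)
        wc[0] = wm[0] + (1.0 - alpha**2 + beta)
        # Push every sigma point through the nonlinearity.
        y = np.array([f(s) for s in sigma])
        y_mean = wm @ y
        d = y - y_mean
        y_cov = (wc * d.T) @ d
        return y_mean, y_cov

A UKF applies this transform twice per cycle: through the process model for the time update and through the measurement model for the measurement update.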
Abstract: In this paper, a completely new algorithm for solving the
three-dimensional Poisson equation is developed and implemented. This
equation is used in studies of turbulent mixing, computational fluid
dynamics, atmospheric fronts, ocean flows, and so on. Moreover, to
raise the performance of such demanding calculations, the most
up-to-date and effective parallel programming technology was applied:
MPI in combination with OpenMP directives, which makes it possible
to handle problems with very large data volumes. The resulting
software can be used for solving important applied and fundamental
problems in mathematics and physics.
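The abstract does not state the algorithm itself; for orientation, a minimal serial sketch of one Jacobi relaxation sweep for the 3-D Poisson equation on a uniform grid is shown below. The function and grid layout are assumptions, and the MPI+OpenMP parallelization (block domain decomposition with halo exchange) is only noted in the comments.

    import numpy as np

    def jacobi_sweep(u, f, h):
        """One Jacobi iteration for the 3-D Poisson equation lap(u) = f.

        u : (nx, ny, nz) array with boundary values fixed on the faces.
        f : right-hand side on the same grid; h : grid spacing.
        In a hybrid MPI+OpenMP code each rank would own a sub-block of
        the grid and exchange one layer of halo cells with its
        neighbours before every sweep; only the serial update is shown.
        """
        u_new = u.copy()
        u_new[1:-1, 1:-1, 1:-1] = (
            u[:-2, 1:-1, 1:-1] + u[2:, 1:-1, 1:-1] +
            u[1:-1, :-2, 1:-1] + u[1:-1, 2:, 1:-1] +
            u[1:-1, 1:-1, :-2] + u[1:-1, 1:-1, 2:] -
            h * h * f[1:-1, 1:-1, 1:-1]
        ) / 6.0
        return u_new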
Abstract: Detecting protein-protein interactions is a central problem in computational biology, and aberrant interactions have been implicated in a number of neurological disorders. As a result, the prediction of protein-protein interactions has recently received considerable attention from biologists around the globe. Computational tools that are capable of effectively identifying protein-protein interactions are much needed. In this paper, we propose a method to detect protein-protein interactions based on a substring similarity measure. Two protein sequences may interact by means of the similarities of the substrings they contain. When applied to the currently available protein-protein interaction data for the yeast Saccharomyces cerevisiae, the proposed method delivered a reasonable improvement over existing ones.
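The abstract does not specify the similarity measure in detail; a minimal sketch of one plausible substring-based score, counting shared fixed-length substrings (k-mers) between two sequences, is given below. The function name, the choice k = 3, and the Jaccard-style normalization are illustrative assumptions.

    def kmer_similarity(seq_a, seq_b, k=3):
        """Score two protein sequences by their shared length-k substrings.

        A hedged illustration of a substring similarity measure; the
        paper's exact definition may weight or align substrings
        differently.
        """
        kmers = lambda s: {s[i:i + k] for i in range(len(s) - k + 1)}
        a, b = kmers(seq_a), kmers(seq_b)
        if not a or not b:
            return 0.0
        # Jaccard index over the two k-mer sets.
        return len(a & b) / len(a | b)

    # Example: two short artificial peptide fragments.
    print(kmer_similarity("MKVLAAGIV", "MKVLSAGIV"))  # 0.4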
Abstract: In this paper, a neural tree (NT) classifier having a
simple perceptron at each node is considered. A new concept for
making a balanced tree is applied in the learning algorithm of the
tree. At each node, if the perceptron classification is not accurate and
unbalanced, then it is replaced by a new perceptron. This separates
the training set in such a way that almost the equal number of patterns
fall into each of the classes. Moreover, each perceptron is trained only
for the classes which are present at respective node and ignore other
classes. Splitting nodes are employed into the neural tree architecture
to divide the training set when the current perceptron node repeats
the same classification of the parent node. A new error function based
on the depth of the tree is introduced to reduce the computational
time for training a perceptron. Experiments are performed to
assess the method's efficiency, and encouraging results are obtained
in terms of accuracy and computational cost.
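The abstract does not give the node-level training loop; as a hedged sketch, the classical perceptron rule that each tree node could apply to its local subset is shown below. The function, its parameters, and the {-1, +1} label convention are illustrative assumptions; the paper's depth-based error function and balancing test are not reproduced.

    import numpy as np

    def train_perceptron(X, y, epochs=100, lr=0.1):
        """Classical perceptron learning rule for one binary tree node.

        X : (n_samples, n_features) patterns reaching this node.
        y : labels in {-1, +1} for the two classes split at this node.
        The neural tree's depth-based error function and balancing test
        would wrap around a loop like this one.
        """
        w = np.zeros(X.shape[1])
        b = 0.0
        for _ in range(epochs):
            errors = 0
            for xi, yi in zip(X, y):
                if yi * (xi @ w + b) <= 0:   # misclassified pattern
                    w += lr * yi * xi
                    b += lr * yi
                    errors += 1
            if errors == 0:                  # node separates its subset
                break
        return w, b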
Abstract: A new, rapidly convergent numerical procedure for
computing the internal load distribution in statically loaded,
single-row, angular-contact ball bearings, subjected to a known
combined radial and thrust load applied so as to avoid tilting
between the inner and outer rings, is used to find the load
distribution differences between a loaded unfitted bearing at room
temperature and the same loaded bearing with interference fits that
may experience radial temperature gradients between the inner and
outer rings. Each step of the procedure requires the iterative
solution of Z + 2 simultaneous nonlinear equations, where Z is the
number of balls, to yield the exact axial and radial deflections and
contact angles.
Abstract: A concern that researchers usually face in different
applications of Artificial Neural Networks (ANN) is the determination
of the size of the effective domain in time series. In this paper, a
trial-and-error method was used on a groundwater depth time series to
determine the size of the effective domain of the series in an
observation well in Union County, New Jersey, U.S. Different domains
of 20, 40, 60, 80, 100, and 120 preceding days were examined, and 80
days was taken as the effective length of the domain. Data sets in different
domains were fed to a Feed Forward Back Propagation ANN with
one hidden layer and the groundwater depths were forecasted. Root
Mean Square Error (RMSE) and the correlation factor (R2) of
estimated and observed groundwater depths for all domains were
determined. In general, groundwater depth forecast improved, as
evidenced by lower RMSEs and higher R2s, when the domain length
increased from 20 to 120. However, 80 days was selected as the
effective domain because the improvement was less than 1% beyond
that. Forecasted ground water depths utilizing measured daily data
(set #1) and data averaged over the effective domain (set #2) were
compared. It was postulated that more accurate nature of measured
daily data was the reason for a better forecast with lower RMSE
(0.1027 m compared to 0.255 m) in set #1. However, the size of input
data in this set was 80 times the size of input data in set #2; a factor
that may increase the computational effort unpredictably. It was
concluded that 80 daily data may be successfully utilized to lower the
size of input data sets considerably, while maintaining the effective
information in the data set.
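For reference, the two goodness-of-fit measures reported above can be computed as in the following sketch (NumPy; the function name is an assumption, and the ANN producing the forecasts is not reproduced).

    import numpy as np

    def forecast_scores(observed, predicted):
        """RMSE and R^2 between observed and forecasted groundwater depths."""
        observed = np.asarray(observed, dtype=float)
        predicted = np.asarray(predicted, dtype=float)
        rmse = np.sqrt(np.mean((observed - predicted) ** 2))
        ss_res = np.sum((observed - predicted) ** 2)
        ss_tot = np.sum((observed - observed.mean()) ** 2)
        r2 = 1.0 - ss_res / ss_tot
        return rmse, r2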
Abstract: The Residue Number System (RNS) is a modular representation and has proved to be an instrumental tool in many digital signal processing (DSP) applications that require high-speed computations. RNS is an integer, non-weighted number system; it can support parallel, carry-free, high-speed, and low-power arithmetic. A very interesting correspondence exists between the concepts of Multiple-Valued Logic (MVL) and residue number arithmetic. If the number of levels used to represent MVL signals is chosen to be consistent with the moduli that create the finite rings in the RNS, MVL becomes a very natural representation for the RNS. There are two concerns related to the application of this number system: reaching the highest possible speed and the largest dynamic range. There is a conflict when one wants to resolve both problems, because augmenting the dynamic range reduces the speed at the same time. For achieving the best performance, a method named the "One-Hot Residue Number System" (OHRNS) is considered; in this implementation the propagation delay is only equal to one transistor delay. The problem with this method is the huge increase in the number of transistors, which grows on the order of m²; in real applications this is practically infeasible. In this paper, combining Multiple-Valued Logic and the One-Hot Residue Number System, we present a new method to resolve both of these problems: a novel design of an OHRNS-based adder circuit. This circuit is usable for Multiple-Valued Logic moduli; in comparison to other RNS designs, this circuit considerably improves the number of transistors and the power consumption.
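To make the carry-free property concrete, a minimal sketch of RNS encoding and addition is given below. The moduli set {7, 8, 9} and the function names are illustrative assumptions, and the one-hot circuit realization is only noted in a comment.

    def rns_encode(x, moduli):
        """Represent integer x by its residues modulo each modulus."""
        return [x % m for m in moduli]

    def rns_add(a, b, moduli):
        """Carry-free RNS addition: each residue channel adds independently.

        A minimal sketch of residue arithmetic; an OHRNS circuit would
        realize each channel as a rotation of a one-hot-encoded signal.
        """
        return [(ra + rb) % m for ra, rb, m in zip(a, b, moduli)]

    # Example with moduli {7, 8, 9}: dynamic range 7 * 8 * 9 = 504.
    moduli = (7, 8, 9)
    a, b = rns_encode(123, moduli), rns_encode(256, moduli)
    print(rns_add(a, b, moduli))          # residues of 379
    print(rns_encode(123 + 256, moduli))  # same residues, no carries used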
Abstract: Cloud Computing is a new technology that lets us use the
Cloud to meet our computation needs. The Cloud refers to a scalable
network of computers that work together like the Internet. An
important element of Cloud Computing is that we shift the processing,
managing, storing, and implementing of our data from a local machine
into the Cloud, which helps us improve efficiency. Because it is a new
technology, it has both advantages and disadvantages, which are
scrutinized in this article. Some pioneers of this technology are then
studied. We conclude that Cloud Computing will play an important role
in our future lives.
Abstract: One of the long-standing challenges in mobile robotics is the ability to navigate autonomously, avoiding modeled and unmodeled obstacles, especially in crowded and unpredictably changing environments. A successful way of structuring the navigation task in order to deal with this problem is through behavior-based navigation approaches. In this study, issues of individual behavior design and action coordination of the behaviors are addressed using fuzzy logic. A layered approach is employed in which a supervision layer makes a context-based decision as to which behavior(s) to process (activate), rather than processing all behaviors and then blending the appropriate ones; as a result, time and computational resources are saved.
Abstract: Large-scale systems such as computational Grids are
distributed computing infrastructures that can provide globally
available network resources. The evolution of information processing
systems in Data Grids is characterized by a strong decentralization of
data across several sites, with the objective of ensuring the
availability and reliability of the data in order to provide fault
tolerance and scalability, which is only possible through the use of
replication techniques. Unfortunately, the use of these techniques
has a high cost, because it is necessary to maintain consistency
between the distributed replicas. Nevertheless, agreeing to live with
certain imperfections can improve the performance of the system by
improving concurrency. In this paper, we propose a multi-layer protocol
combining the pessimistic and optimistic approaches, conceived
for data consistency maintenance in large-scale systems. Our
approach is based on a hierarchical representation model with three
layers and serves a double purpose: it first makes it possible to
reduce response times compared to a completely pessimistic approach,
and it secondly improves the quality of service compared to an
optimistic approach.
Abstract: The Power Spectral Density (PSD) of quasi-stationary processes can be efficiently estimated using the short-time Fourier transform (STFT). In this paper, an algorithm is proposed that computes the PSD of a quasi-stationary process efficiently using an offline autoregressive model order estimation algorithm, a recursive parameter estimation technique, and a modified sliding-window discrete Fourier transform algorithm. The main difference between this algorithm and the STFT is that the sliding window (SW) and the window for spectral estimation (WSA) are defined separately. The WSA is updated and its PSD is computed only when a change in statistics is detected in the SW. The computational complexity of the proposed algorithm is found to be lower than that of the standard STFT technique.
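The abstract names a modified sliding-window DFT without spelling it out; the classical sliding-DFT recurrence that such algorithms build on is sketched below (the interface is an illustrative assumption).

    import numpy as np

    def sliding_dft_update(X, x_new, x_old, N):
        """Update all N DFT bins when the window slides by one sample.

        X     : current length-N complex DFT of the window.
        x_new : sample entering the window; x_old : sample leaving it.
        Each bin k obeys X_k <- (X_k + x_new - x_old) * exp(j*2*pi*k/N),
        costing O(N) per sample instead of an O(N log N) fresh FFT.
        """
        k = np.arange(N)
        return (X + x_new - x_old) * np.exp(2j * np.pi * k / N)

The PSD estimate of the current window then follows as |X_k|^2 / N.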
Abstract: Future space vehicles will require the use of non-toxic, cryogenic propellants, because of their performance advantages over toxic hypergolic propellants and also because of environmental and handling concerns. A prototypical capillary flow liquid acquisition device (LAD) for cryogenic propellants was fabricated with a mesh screen covering a rectangular flow channel with a cylindrical outlet tube, and was tested with liquid oxygen (LOX). In order to better understand the performance in various gravity environments and orientations with different submersion depths of the LAD, a series of computational fluid dynamics (CFD) simulations of LOX flow through the LAD screen channel, including horizontal and vertical submersion of the LAD channel assembly in a normal gravity environment, was conducted. Gravity effects on the flow field in the LAD channel are inspected and analyzed by comparing the simulations.
Abstract: The dorsal hand vein pattern is an emerging biometric which has been attracting the attention of researchers of late. Research is being carried out on existing techniques in the hope of improving them or finding more efficient ones. In this work, Principal Component Analysis (PCA), a successful method originally applied to the face biometric, is modified using the Cholesky decomposition and the Lanczos algorithm to extract the dorsal hand vein features. This modified technique decreases the number of computations and hence decreases the processing time. The eigenveins were successfully computed and projected onto the vein space. The system was tested on a database of 200 images, using a threshold value of 0.9, to obtain the False Acceptance Rate (FAR) and False Rejection Rate (FRR). This modified algorithm is desirable when developing biometric security systems, since it significantly decreases the matching time.
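For orientation, a minimal NumPy sketch of the underlying PCA step, computing eigenveins through the small Gram matrix and projecting images into the vein space, is shown below. The names are illustrative assumptions, and the Cholesky/Lanczos acceleration described above is deliberately not reproduced.

    import numpy as np

    def pca_vein_space(images, k):
        """Project vectorized vein images onto their top-k principal axes.

        images : (n_samples, n_pixels) matrix, one flattened image per row.
        Works in the small n_samples x n_samples space (the classic
        eigenface trick); the paper replaces this eigen-solve with a
        Cholesky/Lanczos variant to cut computation, which is not shown.
        """
        mean = images.mean(axis=0)
        A = images - mean
        # n x n Gram matrix instead of the huge pixel-space covariance.
        vals, vecs = np.linalg.eigh(A @ A.T)
        order = np.argsort(vals)[::-1][:k]
        # Map Gram eigenvectors back to pixel space and normalize.
        eigenveins = A.T @ vecs[:, order]
        eigenveins /= np.linalg.norm(eigenveins, axis=0)
        weights = A @ eigenveins          # coordinates in "vein space"
        return mean, eigenveins, weights

Matching would then compare the weight vector of a probe image against stored weight vectors, accepting when the distance falls under the threshold.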
Abstract: In this paper we present discretization and decomposition methods for a multi-component transport model of a chemical vapor deposition (CVD) process. CVD processes are used to manufacture deposition layers or bulk materials. In our transport model we simulate the deposition of thin layers. The microscopic model is based on the heavy particles and is derived by approximately solving a linearized multi-component Boltzmann equation. For the drift process of the particles we propose diffusion-reaction equations, as well as for the effects of heat conduction. We concentrate on solving the diffusion-reaction equation with analytical and numerical methods. For the chemical processes, modelled with reaction equations, we propose decomposition methods and decouple the multi-component models into simpler systems of differential equations. In the numerical experiments we present the computational results of our proposed models.
Abstract: Long-number multiplications (n ≥ 128 bits) are a
primitive in most cryptosystems. They can be performed better by
using the Karatsuba-Ofman technique. This algorithm is easy to
parallelize on workstation networks and on distributed memory, and
it is known as the practical method of choice. Multiplying long
numbers using the Karatsuba-Ofman algorithm is fast but highly
recursive. In this paper, we propose different designs for
implementing the Karatsuba-Ofman multiplier. A mixture of sequential
and combinational system design techniques involving pipelining is
applied to our proposed designs, so that multiplying large numbers
can be adapted flexibly to time, area, and power criteria. In
computationally and area-constrained embedded systems such as smart
cards and mobile phones, multiplication of finite field elements can
thus be achieved more efficiently. The proposed designs are compared
to other existing techniques. Mathematical models (Area(n), Delay(n))
of our proposed designs are also elaborated and evaluated on
different FPGA devices.
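For reference, a short software sketch of the Karatsuba-Ofman recursion underlying the proposed hardware designs is given below; the base-case threshold is an arbitrary illustrative choice, and this is not the paper's FPGA implementation.

    def karatsuba(x, y):
        """Multiply two non-negative integers with the Karatsuba-Ofman split.

        One n-bit product becomes three ~n/2-bit products, giving the
        O(n^1.585) recursion the hardware designs build on; this
        software sketch is for reference only.
        """
        if x < 16 or y < 16:          # small base case: multiply directly
            return x * y
        half = max(x.bit_length(), y.bit_length()) // 2
        hi_x, lo_x = x >> half, x & ((1 << half) - 1)
        hi_y, lo_y = y >> half, y & ((1 << half) - 1)
        a = karatsuba(hi_x, hi_y)                        # high * high
        c = karatsuba(lo_x, lo_y)                        # low * low
        b = karatsuba(hi_x + lo_x, hi_y + lo_y) - a - c  # cross terms
        return (a << (2 * half)) + (b << half) + c

    assert karatsuba(12345678901234567890, 98765432109876543210) == \
           12345678901234567890 * 98765432109876543210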
Abstract: In order to make the conventional implicit algorithm applicable to large-scale parallel computers, an interface prediction and correction discontinuous finite element method is presented to solve time-dependent neutron transport equations in 2-D cylindrical geometry. Domain decomposition is adopted in the computational domain. The numerical experiments show that our parallel algorithm, with explicit prediction and implicit correction, has good precision, parallelism, and simplicity. In particular, it can reach perfect speedup even on hundreds of processors for large-scale problems.
Abstract: Electronics products that achieve high levels of integrated communications, computing, and entertainment multimedia features in small, stylish, and robust new form factors are winning in the marketplace. Because of the high costs that the industry may incur, and because a high yield is directly proportional to high profits, IC (Integrated Circuit) manufacturers struggle to maximize yield; but today's customers demand miniaturization, low costs, high performance, and excellent reliability, making yield maximization a never-ending search for an enhanced assembly process. With factors such as minimum tolerances and tighter parameter variations, a systematic approach is needed in order to predict the assembly process. In order to evaluate the quality of upcoming circuits, yield models are used which not only predict manufacturing costs but also provide vital information to ease the process of correction when yields fall below expectations. For an IC manufacturer to obtain higher assembly yields, all factors such as boards, placement, components, the material from which the components are made, and processes must be taken into consideration. Effective placement yield depends heavily on machine accuracy and on the vision system, which needs the ability to recognize the features on the board and component in order to place the device accurately on the pads and bumps of the PCB. There are currently two methods for accurate positioning: using the edge of the package, and using solder ball locations, also called footprints. The only assumption that a yield model makes is that all boards and devices are completely functional. This paper focuses on the Monte Carlo method, which consists of a class of computational algorithms that depend on repeated random sampling to compute their results. This method is utilized in order to recreate, through simulation, the placement and assembly processes within a production line.
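As a hedged illustration of the Monte Carlo idea applied to placement, the sketch below estimates placement yield from random machine-accuracy offsets checked against a pad alignment tolerance; the Gaussian model, the numbers, and the function name are all assumptions, not the paper's yield model.

    import random

    def placement_yield(n_trials=100_000, sigma=0.02, tolerance=0.05):
        """Monte Carlo estimate of placement yield.

        Draws random x/y placement offsets (machine accuracy modeled as
        a Gaussian with standard deviation sigma, in mm) and counts the
        fraction landing within the pad alignment tolerance. A full
        yield model would add board, component, and process variations.
        """
        good = 0
        for _ in range(n_trials):
            dx = random.gauss(0.0, sigma)
            dy = random.gauss(0.0, sigma)
            if dx * dx + dy * dy <= tolerance * tolerance:
                good += 1
        return good / n_trials

    print(f"estimated placement yield: {placement_yield():.3f}")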
Abstract: Temperature, relative humidity and overhygroscopic
moisture fields in a sandstone wall provided with interior thermal
insulation were calculated in order to assess the hygric performance
of the retrofitted wall. Computational simulations showed that during
the investigated time period of 10 years no overhygroscopic moisture
appeared in the analyzed building envelope, so that it performed in a
satisfactory way from the hygric point of view.
Abstract: In competitive electricity markets all over the world, the adoption of a suitable transmission pricing model is a problem, as the transmission segment still operates as a monopoly. Transmission pricing is an important tool for promoting investment in various transmission services in order to provide economic, secure, and reliable electricity to bulk and retail customers. Nodal pricing based on the SRMC (Short-Run Marginal Cost) has been found extremely useful by researchers for sending correct economic signals. The marginal prices must be determined as part of the solution to an optimization problem, i.e., maximizing the social welfare. The need to maximize the social welfare subject to a number of system operational constraints is a major challenge from both the computational and the societal points of view. The purpose of this paper is to present a nodal transmission pricing model based on SRMC by developing new mathematical expressions for real and reactive power marginal prices using a GA-Fuzzy based optimal power flow framework. The impacts of selecting different social welfare functions on power marginal prices are analyzed and verified against results reported in the literature. Network revenues for two different power systems are determined using the expressions derived in this paper for real and reactive power marginal prices.