Abstract: In this study, a new root-finding method for solving nonlinear equations is proposed. The method requires two starting values that need not bracket a root; however, when the starting values are chosen close to a root, it converges to the root faster than the secant method. A further advantage over other iterative methods is that the proposed method usually converges to two distinct roots when the given function has more than one root: the odd iterations converge to one root while the even iterations converge to another. Some numerical examples, including a sine-polynomial equation, are solved using the proposed method and compared with results obtained by the secant method; excellent agreement is found.
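The proposed method itself is not detailed in the abstract; as a point of reference, the secant baseline it is compared against can be sketched as follows (the test function, starting values, and tolerances are illustrative assumptions, not taken from the paper):

```python
def secant(f, x0, x1, tol=1e-12, max_iter=100):
    """Secant method: the comparison baseline. The two starting
    values x0, x1 need not bracket a root."""
    for _ in range(max_iter):
        f0, f1 = f(x0), f(x1)
        if abs(f1 - f0) < 1e-300:   # avoid division by (near) zero
            break
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x2 - x1) < tol:
            return x2
        x0, x1 = x1, x2
    return x1

# Example: a root of x^2 - 2 from starting values near 1.5
root = secant(lambda x: x * x - 2.0, 1.0, 2.0)
```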
Abstract: This paper provides an in-depth tutorial on the mathematical construction of maximal-length sequences (m-sequences) via primitive polynomials and on how to map these constructions onto shift-register implementations. It is equally important to check whether a polynomial is primitive, so as to obtain proper m-sequences. A fast method to identify primitive polynomials over binary fields is proposed, whose complexity is considerably lower than that of the standard procedures for the same purpose.
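One fast test consistent with the LFSR view (a sketch, not necessarily the authors' exact procedure) is to compute the multiplicative order of x modulo the candidate polynomial: a degree-n binary polynomial with nonzero constant term is primitive exactly when that order, i.e. the period of the corresponding LFSR, equals 2^n - 1.

```python
def order_of_x(poly, n):
    """Multiplicative order of x in GF(2)[x]/(poly) -- equivalently,
    the period of the n-stage LFSR whose feedback polynomial is
    `poly`.  `poly` is a bitmask: bit i holds the coefficient of x^i."""
    top = 1 << n
    state, steps = 1, 0
    while True:
        state <<= 1                 # multiply by x
        if state & top:
            state ^= poly           # reduce modulo poly (bit n is set)
        steps += 1
        if state == 1:
            return steps
        if steps > top:             # safety: cannot exceed 2^n - 1
            return 0

def is_primitive(poly, n):
    """Degree-n polynomial over GF(2) with nonzero constant term is
    primitive iff the order of x modulo it is 2^n - 1, i.e. the LFSR
    runs through all 2^n - 1 nonzero states."""
    return (poly & 1) == 1 and poly >> n == 1 and \
        order_of_x(poly, n) == (1 << n) - 1
```

For example, x^4 + x + 1 (bitmask 0b10011) is primitive and yields a period-15 m-sequence, while x^4 + x^2 + 1 (0b10101) is reducible and is rejected.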
Abstract: This paper presents a methodology for emulating the electrical power consumption of the RF device of a cellular phone/handset in LTE transmission mode. The emulation methodology takes the physical environmental variables and the logical interface between the baseband and the RF system as inputs to compute the emulated power dissipation of the RF device. The emulated power between the measured points, corresponding to the discrete values of the logical interface parameters, is computed as a polynomial interpolation using polynomial basis functions. The evaluation of polynomial and spline curve-fitting models showed a respective divergence (test error) of 8% and 0.02% from the physically measured power consumption. The precisions of the instruments used for the physical measurements have been modeled as intervals. We have also been able to model the power consumption of the RF device operating at 5 MHz using a homotopy between the two continuous power-consumption models of the RF device operating at the bandwidths 3 MHz and 10 MHz.
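The abstract does not give the interpolation formula; one standard choice consistent with "polynomial basis functions" is Lagrange interpolation between the measured points. A minimal sketch with hypothetical measurements (the parameter values and power readings below are illustrative, not the paper's data):

```python
def lagrange_interp(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial through the
    measured points (xs[i], ys[i]) at the query point x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

# Hypothetical measured power (mW) at discrete interface-parameter values
xs = [0.0, 1.0, 2.0, 3.0]
ys = [10.0, 12.5, 16.0, 21.5]
p = lagrange_interp(xs, ys, 1.5)   # emulated power between measurements
```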
Abstract: The present work investigates the hydrolysis of hull-less pumpkin (Cucurbita pepo L.) oil cake protein isolate (PuOC PI) by pepsin. To examine the effectiveness and suitability of pepsin towards PuOC PI, the kinetic parameters for pepsin on PuOC PI were determined, and the hydrolysis process was then studied using Response Surface Methodology (RSM). The hydrolysis was carried out at a temperature of 30 °C and pH 3.00. Time and initial enzyme/substrate ratio (E/S), at three levels each, were selected as the independent parameters. The degree of hydrolysis, DH, was measured after 20, 30 and 40 minutes, at initial E/S of 0.7, 1 and 1.3 mA/mg protein. Since the proposed second-order polynomial model showed a good fit with the experimental data (R2 = 0.9822), the obtained mathematical model can be used for monitoring the hydrolysis of PuOC PI by pepsin, under the studied experimental conditions, varying the time and initial E/S. To achieve the highest value of DH (39.13%), the obtained optimum conditions for time and initial E/S were 30 min and 1.024 mA/mg protein.
Abstract: The Boundary Representation of a 3D manifold contains FACES (connected subsets of a parametric surface S : R^2 → R^3). In many science and engineering applications it is cumbersome and algebraically difficult to deal with the polynomial set and constraints (LOOPs) representing the FACE. For this reason, a Piecewise Linear (PL) approximation of the FACE is needed, which is usually represented in terms of triangles (i.e. 2-simplices). Solving the problem of FACE triangulation requires producing quality triangles which are: (i) independent of the arguments of S, (ii) sensitive to the local curvatures, (iii) compliant with the boundaries of the FACE, and (iv) topologically compatible with the triangles of the neighboring FACEs. The existing literature offers no guarantees for point (iii). This article contributes to the topic of triangulations conforming to the boundaries of the FACE by applying the concept of the parameter-independent Gabriel complex, which improves the correctness of the triangulation regarding aspects (iii) and (iv). In addition, the article applies the geometric concept of the tangent ball to a surface at a point to address points (i) and (ii). Additional research is needed on algorithms that (i) take advantage of the concepts presented in the proposed heuristic algorithm and (ii) can be proved correct.
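As a hedged illustration of the Gabriel criterion invoked above (a sketch of the standard definition, not the authors' algorithm): an edge between two sample points belongs to the Gabriel complex when the smallest ball having that edge as a diameter contains no other sample point.

```python
def is_gabriel_edge(p, q, points):
    """True iff the edge (p, q) is a Gabriel edge of the sample set:
    the ball with segment pq as diameter contains no other sample."""
    center = [(a + b) / 2 for a, b in zip(p, q)]
    r2 = sum((a - b) ** 2 for a, b in zip(p, q)) / 4   # squared radius
    return all(
        sum((x - c) ** 2 for x, c in zip(s, center)) >= r2
        for s in points if s != p and s != q
    )

# Point (1, 0.5, 0) lies inside the diametral ball of (0,0,0)-(2,0,0),
# so that edge is rejected; shorter edges avoiding it are accepted.
pts = [(0.0, 0.0, 0.0), (2.0, 0.0, 0.0), (1.0, 0.5, 0.0), (5.0, 5.0, 0.0)]
```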
Abstract: A color image edge detection algorithm is proposed in
this paper using Pseudo-complement and matrix rotation operations.
First, pseudo-complement method is applied on the image for each
channel. Then, matrix operations are applied on the output image of
the first stage. Dominant pixels are obtained by image differencing
between the pseudo-complement image and the matrix operated
image. Median filtering is carried out to smooth the image, thereby
removing isolated pixels. Finally, the dominant or core pixels
occurring in at least two channels are selected. On plotting the
selected edge pixels, the final edge map of the given color image is
obtained. The algorithm is also tested in HSV and YCbCr color
spaces. Experimental results on both synthetic and real world images
show that the accuracy of the proposed method is comparable to
other color edge detectors. All the proposed procedures can be
applied to any image domain and run in polynomial time.
Abstract: Finding synchronizing sequences for finite automata is an important problem in many practical applications (part orienters in industry, the reset problem in biocomputing theory, network issues, etc.). The problem of finding the shortest synchronizing sequence is NP-hard, so polynomial-time algorithms can likely serve only as heuristics. In this paper we propose two versions of polynomial algorithms which work better than the well-known Eppstein's Greedy and Cycle algorithms.
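For concreteness, a greedy heuristic in the spirit of Eppstein's algorithm can be sketched as follows (a sketch of the classical baseline, not the paper's improved variants): repeatedly find, by breadth-first search over state pairs, a shortest word merging two of the currently possible states.

```python
from collections import deque

def apply_word(delta, s, w):
    """Run the automaton from state s on the word w."""
    for a in w:
        s = delta[s][a]
    return s

def greedy_synchronize(delta, n):
    """Greedy synchronizing-sequence heuristic.  delta[s][a] is the
    successor of state s under letter a; returns a synchronizing word
    (list of letters) or None if the automaton is not synchronizing."""
    alphabet = range(len(delta[0]))
    current = set(range(n))
    word = []
    while len(current) > 1:
        # BFS over unordered state pairs until some pair merges
        starts = [(p, q) for p in current for q in current if p < q]
        prev = {s: None for s in starts}
        queue, goal = deque(starts), None
        while queue:
            p, q = queue.popleft()
            if p == q:
                goal = (p, q)
                break
            for a in alphabet:
                nxt = tuple(sorted((delta[p][a], delta[q][a])))
                if nxt not in prev:
                    prev[nxt] = ((p, q), a)
                    queue.append(nxt)
        if goal is None:
            return None
        # reconstruct the shortest merging word for that pair
        w, s = [], goal
        while prev[s] is not None:
            s, a = prev[s]
            w.append(a)
        w.reverse()
        word.extend(w)
        current = {apply_word(delta, s, w) for s in current}
    return word

# Cerny-style 4-state automaton: letter 0 rotates, letter 1 maps 0 -> 1
delta = [[1, 1], [2, 1], [3, 2], [0, 3]]
w = greedy_synchronize(delta, 4)
```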
Abstract: The temperature distribution and the heat transfer
rates through a multi-layer door of a furnace were investigated. The
inside of the door was in contact with hot air and the other side of the
door was in contact with room air. Radiation heat transfer from the
walls of the furnace to the door and the door to the surrounding area
was included in the problem. This work is a two dimensional steady
state problem. The Churchill and Chu correlation was used to find
local convection heat transfer coefficients at the surfaces of the
furnace door. The thermophysical properties of air were treated as functions of temperature, and polynomial curve fitting for the fluid properties was carried out. The finite difference method was used to discretize the conduction heat transfer within the furnace door, and Gauss-Seidel iteration was employed to compute the temperature distribution in the door.
The temperature distribution in the horizontal mid-plane of the furnace door in the two-dimensional problem agrees with that of the one-dimensional problem. The local convection heat transfer coefficients at the inside and outside surfaces of the furnace door are presented.
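The Gauss-Seidel step can be sketched for the simplest case, steady two-dimensional conduction with constant properties and fixed boundary temperatures (an illustrative reduction of the paper's problem, which additionally has temperature-dependent properties and convection/radiation boundaries):

```python
def gauss_seidel_laplace(T, tol=1e-6, max_sweeps=10000):
    """Gauss-Seidel sweeps for the steady 2-D conduction (Laplace)
    equation on a uniform grid with Dirichlet boundaries: each interior
    node is replaced, in place, by the average of its four neighbours."""
    ny, nx = len(T), len(T[0])
    for _ in range(max_sweeps):
        delta = 0.0
        for i in range(1, ny - 1):
            for j in range(1, nx - 1):
                new = 0.25 * (T[i - 1][j] + T[i + 1][j]
                              + T[i][j - 1] + T[i][j + 1])
                delta = max(delta, abs(new - T[i][j]))
                T[i][j] = new       # updated value used immediately
        if delta < tol:             # converged when the sweep changes little
            break
    return T

# Tiny grid: hot top wall at 100, other walls at 0
grid = [[100.0, 100.0, 100.0], [0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]
out = gauss_seidel_laplace(grid)
```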
Abstract: We construct an exponentially weighted Legendre-Gauss Tau method for solving differential equations with oscillatory solutions. The proposed method is applied to Sturm-Liouville problems. Numerical examples illustrating the efficiency and high accuracy of our results are presented.
Abstract: In this paper we use quintic non-polynomial spline functions to develop numerical methods for approximating the solution of a system of fourth-order boundary-value problems associated with obstacle, unilateral and contact problems. The convergence of the methods is analysed, and it is shown that the given approximations are better than those of collocation and finite difference methods. Numerical examples are presented to illustrate the applications of these methods and to compare the computed results with other known methods.
Abstract: Psoriasis is a widespread skin disease affecting up to 2% of the population, with plaque psoriasis accounting for about 80% of cases. It can be identified as a red lesion, and at higher severities the lesion is usually covered with rough scale. Psoriasis Area Severity Index (PASI) scoring is the gold-standard method for measuring psoriasis severity, and scaliness is one of the PASI parameters that needs to be quantified. The surface roughness of a lesion can be used as a scaliness feature, since scale on the lesion surface makes the lesion rougher. The dermatologist usually assesses the severity through the tactile sense, so direct contact between doctor and patient is required, and the doctor may not assess the lesion objectively. In this paper, a digital image analysis technique is developed to objectively determine the scaliness of the psoriasis lesion and provide the PASI scaliness score. The psoriasis lesion is modelled as a rough surface, created by superimposing a smooth average (curve) surface with a triangular waveform. For roughness determination, a polynomial surface fitting is used to estimate the average surface, followed by a subtraction between the rough and average surfaces to give the elevation surface (surface deviations). The roughness index is calculated by applying the average roughness equation to the height-map matrix. The roughness algorithm has been tested on 444 lesion models. In the roughness validation, only 6 models could not be accepted (percentage error greater than 10%); these errors occur due to the scanned image quality. The roughness algorithm is also validated by roughness measurements on flat abrasive papers. The Pearson's correlation coefficient between the grade value (G) of abrasive paper and Ra is -0.9488, which shows a strong relation between G and Ra. The algorithm needs to be improved by surface filtering, especially to overcome problems with noisy data.
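The roughness computation described above can be sketched in one dimension (a deliberate simplification: a profile rather than a surface, with an assumed fitting degree): fit a low-degree polynomial as the average curve, subtract it, and average the absolute deviations to get Ra.

```python
def polyfit(xs, ys, deg):
    """Least-squares polynomial fit via the normal equations,
    solved by Gaussian elimination with partial pivoting."""
    m = deg + 1
    A = [[sum(x ** (i + j) for x in xs) for j in range(m)] for i in range(m)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(m)]
    for col in range(m):
        piv = max(range(col, m), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, m):
            f = A[r][col] / A[col][col]
            for c in range(col, m):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coeffs = [0.0] * m
    for i in reversed(range(m)):
        coeffs[i] = (b[i] - sum(A[i][j] * coeffs[j]
                                for j in range(i + 1, m))) / A[i][i]
    return coeffs                 # coeffs[i] multiplies x**i

def average_roughness(xs, zs, deg=2):
    """Ra: mean absolute deviation of the profile from its fitted
    smooth (average) curve."""
    c = polyfit(xs, zs, deg)
    smooth = [sum(ci * x ** i for i, ci in enumerate(c)) for x in xs]
    return sum(abs(z - s) for z, s in zip(zs, smooth)) / len(zs)

xs = list(range(8))
ra_smooth = average_roughness(xs, [x * x for x in xs])          # ~0
ra_rough = average_roughness(xs, [0.5 if x % 2 == 0 else -0.5
                                  for x in xs])                 # > 0
```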
Abstract: The majority of existing predictors for time series are
model-dependent and therefore require some prior knowledge for the
identification of complex systems, usually involving system
identification, extensive training, or online adaptation in the case of
time-varying systems. Additionally, since a time series is usually
generated by complex processes such as the stock market or other
chaotic systems, identification, modeling or the online updating of
parameters can be problematic. In this paper a model-free predictor
(MFP) for a time series produced by an unknown nonlinear system or
process is derived using tracking theory. An identical derivation of the
MFP using the property of the Newton form of the interpolating
polynomial is also presented. The MFP is able to accurately predict
future values of a time series, is stable, has few tuning parameters and
is desirable for engineering applications due to its simplicity, fast
prediction speed and extremely low computational load. The
performance of the proposed MFP is demonstrated using the
prediction of the Dow Jones Industrial Average stock index.
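The Newton-form derivation mentioned above admits a simple concrete instance (a sketch only; the paper's MFP and its tracking-theory tuning are not reproduced here): for equally spaced samples, extrapolating the degree-d interpolating polynomial through the last d+1 points is equivalent to setting the (d+1)-th finite difference to zero, giving an alternating-sign binomial combination of recent samples.

```python
from math import comb

def mfp_predict(history, d=2):
    """One-step-ahead prediction by extrapolating the Newton
    interpolating polynomial of degree d through the last d+1
    equally spaced samples:
        x[n+1] = sum_{k=1..d+1} (-1)^(k+1) * C(d+1, k) * x[n+1-k]."""
    last = history[-(d + 1):]
    return sum((-1) ** (k + 1) * comb(d + 1, k) * last[-k]
               for k in range(1, d + 2))

# A quadratic sequence is predicted exactly with d = 2:
seq = [n * n for n in range(6)]   # 0, 1, 4, 9, 16, 25
pred = mfp_predict(seq, d=2)      # predicted next value (6^2)
```

With d = 2 the formula reduces to 3·x[n] - 3·x[n-1] + x[n-2], which shows why the predictor has essentially no computational load: it is a fixed short linear combination of the most recent samples.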
Abstract: The purpose of this investigation is to relate the rain
power and the overland flow power to soil erodibility to assess the
effects of both parameters on soil erosion using variable rainfall
intensity on remoulded agricultural soil. Six rainfall intensities were
used to simulate the natural rainfall, as follows: 12.4 mm/h, 20.3 mm/h, 28.6 mm/h, 52 mm/h, 73.5 mm/h and 103 mm/h. The results
have shown that the relationship between overland flow power and
rain power is best represented by a linear function (R2=0.99). As
regards the relationships between the soil erodibility factor and the rain and overland flow powers, the evolution of both parameters with the erodibility factor follows a polynomial function with a high coefficient of determination. From these coefficients of determination, R2 = 0.95 for rain power and R2 = 0.96 for overland flow power, we can conclude that the flow has more power to detach particles than the rain. This could be explained by the fact that particles already detached by the rain and transported by the flow give the flow more mass, and thus contribute to the detachment of further particles by collision.
Abstract: In control theory, one attempts to find a controller that provides the best possible performance with respect to some given measures of performance. There are many kinds of controllers, e.g. the typical PID controller, the LQR controller, the fuzzy controller, etc. This paper introduces a polynomial controller with a novel tuning method based on a special pole-placement encoding scheme and optimization by Genetic Algorithms (GA). Examples show the performance of the newly designed polynomial controller in comparison with a common PID controller.
Abstract: A new theory for functionally graded (FG) shells, based on the expansion of the equations of elasticity for functionally graded materials (FGMs) into Legendre polynomial series, has been developed. The stress and strain tensors and the vectors of displacements, traction and body forces have been expanded into Legendre polynomial series in the thickness coordinate, and the functions that describe the functionally graded relations have been expanded in the same way. Thereby all equations of elasticity, including Hooke's law, have been transformed into corresponding equations for the Fourier coefficients, and a system of differential equations in terms of displacements, together with boundary conditions, for the Fourier coefficients has been obtained. The cases of the first and second approximations have been considered in more detail. The resulting boundary-value problems have been solved with the finite element (FE) method; numerical calculations have been done with Comsol Multiphysics and Matlab.
Abstract: We deal with the numerical solution of time-dependent convection-diffusion-reaction equations. We combine the local projection stabilization method for the space discretization with two different time discretization schemes: the continuous Galerkin-Petrov (cGP) method and the discontinuous Galerkin (dG) method with polynomials of degree k. We establish optimal error estimates and present numerical results which show that the cGP(k)- and dG(k)-methods are accurate of order k+1 in the whole time interval. Moreover, the cGP(k)-method is superconvergent of order 2k and the dG(k)-method of order 2k+1 at the discrete time points. Furthermore, the dependence of the results on the choice of the stabilization parameter is discussed and compared.
Abstract: The density estimates considered in this paper comprise
a base density and an adjustment component consisting of a linear
combination of orthogonal polynomials. It is shown that, in the
context of density approximation, the coefficients of the linear combination
can be determined either from a moment-matching technique
or a weighted least-squares approach. A kernel representation of
the corresponding density estimates is obtained. Additionally, two
refinements of the Kronmal-Tarter stopping criterion are proposed
for determining the degree of the polynomial adjustment. By way of
illustration, the density estimation methodology advocated herein is
applied to two data sets.
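A minimal instance of the moment-matching variant can be sketched as follows (assuming a uniform base density on [-1, 1] and Legendre polynomials as the orthogonal family; the paper's kernel representation and Kronmal-Tarter stopping criteria are not reproduced). Orthogonality gives the coefficients directly as c_k = (2k+1)·E[P_k(X)], estimated by sample means.

```python
def legendre(k, x):
    """P_k(x) via the three-term recurrence
    (n+1) P_{n+1} = (2n+1) x P_n - n P_{n-1}."""
    p0, p1 = 1.0, x
    if k == 0:
        return p0
    for n in range(1, k):
        p0, p1 = p1, ((2 * n + 1) * x * p1 - n * p0) / (n + 1)
    return p1

def density_estimate(sample, degree):
    """Uniform base density on [-1, 1] adjusted by a linear combination
    of Legendre polynomials; coefficients from moment matching:
    c_k = (2k+1) * E[P_k(X)], with the expectation replaced by the
    sample mean.  Returns the estimated density as a function."""
    n = len(sample)
    c = [(2 * k + 1) * sum(legendre(k, x) for x in sample) / n
         for k in range(degree + 1)]
    def f(x):
        return 0.5 * sum(ck * legendre(k, x) for k, ck in enumerate(c))
    return f

# Tiny symmetric sample; odd-degree coefficients vanish by symmetry
f = density_estimate([-0.5, 0.5], degree=2)
```

Since c_0 = 1 always and the higher Legendre polynomials integrate to zero over [-1, 1], the estimate integrates to one by construction.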
Abstract: Algorithm 2, for moving an n-link manipulator amidst arbitrary unknown static obstacles in the case when a sensor system supplies information about local neighborhoods of different points in the configuration space, is presented. Algorithm 2 guarantees reaching a target position in a finite number of steps, and reduces to a finite number of calls of a subroutine for planning a trajectory in the presence of known forbidden states. The polynomial approximation algorithm used as this subroutine is presented, and the results of implementing Algorithm 2 are given.
Abstract: In a wireless communication system, a predistorter (PD) is often employed to alleviate the nonlinear distortions caused by operating a power amplifier near saturation, thereby improving the system performance and reducing the interference to adjacent channels. This paper presents a new adaptive polynomial digital predistorter (DPD). The proposed DPD uses Coordinate Rotation Digital Computer (CORDIC) processors and performs the PD process with a pipelined architecture. It is simpler and faster than a conventional adaptive polynomial DPD. The performance of the proposed DPD is verified by MATLAB simulation.
Abstract: Image searching has always been a problem, especially when images are not properly managed or are distributed over different locations. Different techniques are currently used for image search. At one extreme, many features of the image are captured and stored to obtain better results, but storing and managing such features is itself a time-consuming job. At the other extreme, if fewer features are stored, the accuracy rate is not satisfactory; moreover, the same image stored with different visual properties can further reduce the accuracy. In this paper we present a new concept of using polynomials of the sorted histogram of the image. This approach needs less overhead and can cope with differences in the visual features of an image.