Abstract: A cropping system is a method used by farmers. It is an
environmentally friendly method that protects natural resources
(soil, water, air, nutrients) while increasing production, taking
particular crop characteristics into account. Combining this powerful
method with the concepts of genetic algorithms makes it possible to
generate sequences of crops that form a rotation. Algorithms of this
type have proven efficient at solving optimization problems, and
their polynomial complexity allows them to be applied to harder and
more varied problems. In our case, the optimization consists of
finding the most profitable rotation of crops. One of the expected
results is to optimize the usage of resources in order to minimize
costs and maximize profit. To achieve these goals, a genetic
algorithm was designed. This algorithm finds several optimized
cropping-system solutions that have the highest profit and thus the
lowest costs. The algorithm uses genetic operators (mutation,
crossover) and structures (genes, chromosomes): a candidate cropping
system is treated as a chromosome, and each crop within the rotation
is a gene within that chromosome. Results on the efficiency of this
method are presented in a dedicated section. Implementing this method
would benefit farmers' activity by giving them hints and helping them
use their resources efficiently.
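The chromosome/gene encoding described above can be illustrated with a minimal genetic-algorithm sketch. The crops, profit values, and repeat penalty below are hypothetical, and the operators (truncation selection, one-point crossover, per-gene mutation) are generic choices, not necessarily those used in the paper:

```python
import random

# Hypothetical crops and per-season profits (illustrative values only)
PROFIT = {"wheat": 5, "maize": 6, "soy": 4, "potato": 7}
CROPS = list(PROFIT)
ROTATION_LEN = 6       # genes per chromosome
REPEAT_PENALTY = 4     # assumed cost of planting the same crop twice in a row

def fitness(chrom):
    """Total profit of a rotation, penalizing consecutive repeats."""
    score = sum(PROFIT[c] for c in chrom)
    score -= REPEAT_PENALTY * sum(a == b for a, b in zip(chrom, chrom[1:]))
    return score

def crossover(a, b):
    """One-point crossover of two rotations."""
    cut = random.randrange(1, ROTATION_LEN)
    return a[:cut] + b[cut:]

def mutate(chrom, rate=0.1):
    """Replace each gene with a random crop with probability `rate`."""
    return [random.choice(CROPS) if random.random() < rate else g for g in chrom]

def evolve(pop_size=40, generations=60):
    pop = [[random.choice(CROPS) for _ in range(ROTATION_LEN)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]          # truncation selection
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(best, fitness(best))
```

In this toy setting the best rotations alternate the two most profitable crops, since repeating a crop costs more than switching.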
Abstract: A uniquely restricted matching is defined to be a
matching M whose set of matched vertices induces a subgraph with
exactly one perfect matching. In this paper, we make progress on the
open question of the status of this problem on interval graphs (graphs
obtained as the intersection graph of intervals on a line). We give
an algorithm to compute maximum cardinality uniquely restricted
matchings on certain sub-classes of interval graphs. We consider two
sub-classes of interval graphs, the former contained in the latter, and
give O(|E|^2) time algorithms for both of them. It is to be noted that
both sub-classes are incomparable to proper interval graphs (graphs
obtained as the intersection graph of intervals in which no interval
completely contains another interval), on which the problem can be
solved in polynomial time.
Abstract: In this paper, a numerical algorithm for approximate Laplace transform inversion, based on Chebyshev polynomials of the second kind, is developed using an odd cosine series. The technique has been tested on three different functions and found to work efficiently. The illustrations show that the newly developed numerical inverse Laplace transform is very close to the classical analytic inverse Laplace transform.
Abstract: The piecewise polynomial regression model is a very flexible model for modeling data. When a piecewise polynomial regression model is matched against data, its parameters are generally unknown. This paper studies the parameter estimation problem for the piecewise polynomial regression model using the Bayesian method. Unfortunately, the Bayes estimator cannot be found analytically, so a reversible jump MCMC algorithm is proposed to solve this problem. The reversible jump MCMC algorithm generates a Markov chain that converges to the limiting posterior distribution of the piecewise polynomial regression model parameters. The resulting Markov chain is used to calculate the Bayes estimator for the parameters of the piecewise polynomial regression model.
Abstract: CO2 capture and storage technologies play a significant role in contributing to the control of climate change through the reduction of carbon dioxide emissions into the atmosphere. The present study evaluates and optimizes CO2 capture through a process in which carbon dioxide is passed into pH-adjusted high-salinity water and reacted with sodium chloride to form a precipitate of sodium bicarbonate. This process is based on a modified Solvay process with higher CO2 capture efficiency, higher sodium removal, and a higher pH level without the use of ammonia. The process was tested in a bubble-column semi-batch reactor and was optimized using response surface methodology (RSM). CO2 capture efficiency and sodium removal were optimized in terms of the major operating parameters, based on four levels and four variables in a Central Composite Design (CCD). The operating parameters were gas flow rate (0.5–1.5 L/min), reactor temperature (10–50 °C), buffer concentration (0.2–2.6%) and water salinity (25–197 g NaCl/L). The experimental data were fitted to a second-order polynomial using multiple regression and analyzed using analysis of variance (ANOVA). The optimum values of the selected variables were obtained using a response optimizer. The optimum conditions were tested experimentally using desalination reject brine with salinity ranging from 65,000 to 75,000 mg/L. The CO2 capture efficiency in 180 min was 99% and the maximum sodium removal was 35%. The experimental and predicted values were within the 95% confidence interval, which demonstrates that the developed model can successfully predict the capture efficiency and sodium removal of the modified Solvay method.
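The second-order polynomial fit at the heart of the RSM analysis can be sketched with ordinary least squares. This is a generic two-factor illustration with synthetic, assumed coefficients, not the study's four-factor CCD data:

```python
import numpy as np

# Sketch: fit a second-order response surface
#   y = b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2
# to synthetic data (the real study fits four coded factors from CCD runs).
rng = np.random.default_rng(0)
x1 = rng.uniform(-1, 1, 30)     # e.g. coded gas flow rate (hypothetical)
x2 = rng.uniform(-1, 1, 30)     # e.g. coded reactor temperature (hypothetical)
y = 90 + 3*x1 - 5*x2 - 4*x1**2 - 2*x2**2 + 1.5*x1*x2   # assumed response

# Design matrix of the full quadratic model, solved by least squares
X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1*x2])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.round(coef, 2))        # recovers the assumed coefficients exactly
```

With real CCD data the residuals would then feed the ANOVA, and the fitted surface would be passed to a response optimizer.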
Abstract: Medical digital images usually have low resolution because of the nature of their acquisition. This paper therefore focuses on zooming these images to obtain the better level of information required for medical diagnosis. For this purpose, a strategy for selecting pixels in the zooming operation is proposed. It is based on the principle of an analog clock and utilizes a combination of point and neighborhood image processing. In this approach, the hour hand of the clock covers the portion of the image to be processed. For alignment, the center of the clock points at the middle pixel of the selected portion of the image. The minute hand is longer and is used to gain information about pixels of the surrounding area, called the neighborhood pixels region. This information is used to zoom the selected portion of the image. The proposed algorithm is implemented and its performance is evaluated on many medical images obtained from various sources such as X-ray, Computerized Tomography (CT) scan and Magnetic Resonance Imaging (MRI). For illustration and simplicity, however, only the results obtained from a CT-scanned image of the head are presented. The performance of the algorithm is evaluated against various traditional algorithms in terms of peak signal-to-noise ratio (PSNR), maximum error, SSIM index, mutual information and processing time. From the results, the proposed algorithm is found to give better performance than the traditional algorithms.
Abstract: In this paper, we present the development of an explosion-proof, portable combustible gas leak detector, together with an algorithm that improves the accuracy of gas concentration measurement. The presented techniques apply a flame-proof enclosure and intrinsically safe explosion protection to an infrared gas leak detector, a first in Korea, and improve accuracy using a linearization recursion equation and Lagrange interpolation polynomials. We also tested the sensor characteristics and calibrated suitable input gases against output voltages, and we improved the performance of the combustible gas detectors by reflecting the demands of gas safety management in the field. To check the performance of two companies' detectors, we carried out measurement tests with eight standard gases produced by the Korea Gas Safety Corporation. The experimental results demonstrate that our instrument's detection accuracy is better than that of the other detectors.
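Lagrange interpolation of calibration points, one of the accuracy-improvement techniques mentioned above, can be sketched as follows. The voltage/concentration pairs are hypothetical, not the detector's actual calibration data:

```python
def lagrange(points, x):
    """Evaluate the Lagrange interpolation polynomial through `points` at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(points):
        term = yi
        for j, (xj, _) in enumerate(points):
            if i != j:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

# Hypothetical calibration pairs: (sensor output voltage, concentration %LEL)
cal = [(0.40, 0.0), (1.10, 25.0), (1.95, 50.0), (3.10, 100.0)]
print(round(lagrange(cal, 1.50), 1))   # estimated concentration at 1.50 V
```

The interpolating polynomial passes exactly through every calibration point, so readings at the calibration voltages are reproduced without error.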
Abstract: In this era of online communication, which transacts data in 0s and 1s, confidentiality is a prized commodity. Ensuring safe transmission of encrypted data and their uncorrupted recovery is a matter of prime concern. Among the several techniques for secure sharing of images, this paper proposes a k-out-of-n region-incrementing image sharing scheme for color images. The highlight of this scheme is the use of simple Boolean and arithmetic operations for generating shares and the Lagrange interpolation polynomial for authenticating shares. Additionally, this scheme addresses problems faced by existing algorithms such as color reversal and pixel expansion. The proposed scheme regenerates the original secret image, whereas existing systems regenerate only a half-toned secret image.
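As one concrete form of k-out-of-n sharing with Lagrange-based reconstruction, a textbook Shamir sketch for a single pixel value is shown below. This is a generic scheme over GF(257), named plainly as Shamir's scheme; it is not the paper's exact Boolean/arithmetic construction:

```python
import random

P = 257  # prime just above 255, so any 8-bit pixel value is a valid secret

def make_shares(secret, k, n):
    """Shamir k-of-n sharing of one pixel value over GF(P)."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    def poly(x):
        acc = 0
        for c in reversed(coeffs):      # Horner evaluation mod P
            acc = (acc * x + c) % P
        return acc
    return [(x, poly(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret from k shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % P
                den = (den * (xi - xj)) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = make_shares(173, k=3, n=5)    # pixel value 173, 3-of-5 scheme
print(reconstruct(shares[:3]))         # -> 173 from any three shares
```

Any k shares reconstruct the pixel exactly, while k-1 shares reveal nothing; a full image scheme applies this per pixel (or per region, in the region-incrementing setting).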
Abstract: Experimental data for the refractive index, excess molar volume and viscosity of a binary mixture of morpholine with cumene over the whole composition range at 298.15 K, 303.15 K, 308.15 K and normal atmospheric pressure have been measured. The experimental data were used to compute the density, deviation in molar refraction, deviation in viscosity and excess Gibbs free energy of activation as functions of composition. The experimental viscosity data have been correlated with empirical equations such as the Grunberg–Nissan equation, the Herric correlation and the three-body McAllister equation. The excess thermodynamic properties were fitted to the Redlich–Kister polynomial equation. The variation of these properties with the composition and temperature of the binary mixtures is discussed in terms of intermolecular interactions.
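Fitting a Redlich–Kister expansion to excess-property data reduces to a linear least-squares problem, since the unknown coefficients enter linearly. A minimal sketch with synthetic data and assumed coefficients (not the measured morpholine/cumene values):

```python
import numpy as np

# Hypothetical excess-property data for a binary mixture (illustrative only)
x1 = np.linspace(0.05, 0.95, 19)          # mole fraction of component 1
x2 = 1.0 - x1
A_true = [-1.20, 0.35, -0.10]             # assumed Redlich-Kister coefficients
YE = x1 * x2 * sum(a * (x1 - x2)**k for k, a in enumerate(A_true))

# Fit Y^E = x1*x2 * sum_k A_k (x1 - x2)^k by linear least squares
M = np.column_stack([x1 * x2 * (x1 - x2)**k for k in range(3)])
A_fit, *_ = np.linalg.lstsq(M, YE, rcond=None)
print(np.round(A_fit, 3))                 # recovers A_true
```

The form x1·x2·Σ A_k(x1−x2)^k guarantees the excess property vanishes at both pure-component limits, which is why Redlich–Kister is the standard choice here.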
Abstract: Vertex enumeration algorithms explore the methods and procedures for generating the vertices of general polyhedra formed by systems of equations or inequalities. These problems of enumerating the extreme points (vertices) of general polyhedra are shown to be NP-hard. This led to exploring how to count the vertices of general polyhedra without listing them, which is shown to be #P-complete. Some fully polynomial randomized approximation schemes (fpras) for counting the vertices of special classes of polyhedra associated with down-sets, independent sets, 2-knapsack problems and 2 × n transportation problems are presented, together with some open problems discovered along the way.
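For intuition, in two dimensions the vertices of {x : Ax ≤ b} can be enumerated by the textbook method of intersecting every pair of constraint boundaries and keeping the feasible points. This brute-force approach is exponential in higher dimensions, which is exactly why the hardness and counting results above matter:

```python
from itertools import combinations

def vertices_2d(A, b, eps=1e-9):
    """Enumerate vertices of {x : A x <= b} in the plane by intersecting
    every pair of constraint boundaries and keeping feasible points."""
    verts = []
    for i, j in combinations(range(len(A)), 2):
        a1, a2 = A[i], A[j]
        det = a1[0]*a2[1] - a1[1]*a2[0]
        if abs(det) < eps:
            continue                          # parallel boundaries
        x = (b[i]*a2[1] - b[j]*a1[1]) / det   # Cramer's rule
        y = (a1[0]*b[j] - a2[0]*b[i]) / det
        if all(r[0]*x + r[1]*y <= c + eps for r, c in zip(A, b)):
            # round for de-duplication; + 0.0 normalizes -0.0
            verts.append((round(x, 9) + 0.0, round(y, 9) + 0.0))
    return sorted(set(verts))

# Unit square: x >= 0, y >= 0, x <= 1, y <= 1
A = [(-1, 0), (0, -1), (1, 0), (0, 1)]
b = [0, 0, 1, 1]
print(vertices_2d(A, b))   # -> [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0), (1.0, 1.0)]
```

With m constraints this inspects C(m, 2) candidate points, fine in the plane but hopeless as a general enumeration strategy.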
Abstract: The very well-known stacked sets of numbers referred
to as Pascal's triangle present the coefficients of the binomial
expansion of the form (x+y)^n. This paper presents an approach (the
Staircase Horizontal Vertical, SHV-method) to the generalization of
the planar Pascal's triangle for polynomial expansions of the form
(x+y+z+w+r+⋯)^n. The presented generalization of Pascal's triangle
is different from other generalizations of Pascal's triangle given in
the literature. The coefficients of the generalized Pascal's
triangles presented in this work are generated by inspection, using
embedded Pascal's triangles. The coefficients of an I-variable
expansion are generated by horizontally laying out the Pascal's
elements of the (I-1)-variable expansion in a staircase manner and
multiplying them with the relevant columns of vertically laid-out
classical Pascal's elements, hence avoiding factorial calculations
for generating the coefficients of the polynomial expansion.
Furthermore, the classical Pascal's triangle has a pattern built into
it regarding its odd and even numbers, known as the Sierpinski
triangle. In this study, a presentation of Sierpinski-like patterns
of the generalized Pascal's triangles is given. Applications related
to these coefficients of the binomial expansion (Pascal's triangle)
or polynomial expansion (generalized Pascal's triangles) can be found
in areas such as combinatorics and probability.
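The idea of building multivariable coefficients from embedded Pascal triangles, without any factorials, can be sketched as follows. The recursion below nests additively built Pascal rows (each (I-1)-variable coefficient is multiplied by a classical binomial coefficient); it reproduces the coefficients themselves rather than the SHV staircase layout:

```python
def pascal_row(n):
    """Row n of Pascal's triangle, built additively (no factorials)."""
    row = [1]
    for _ in range(n):
        row = [a + b for a, b in zip([0] + row, row + [0])]
    return row

def multinomial_coeffs(n, m):
    """Coefficients of (x1 + ... + xm)^n, keyed by exponent tuples,
    obtained by nesting binomial coefficients (embedded Pascal rows)."""
    if m == 1:
        return {(n,): 1}
    coeffs = {}
    row = pascal_row(n)                      # row[k] = C(n, k)
    for k in range(n + 1):                   # exponent of x1
        for rest, c in multinomial_coeffs(n - k, m - 1).items():
            coeffs[(k,) + rest] = row[k] * c
    return coeffs

c = multinomial_coeffs(3, 3)       # (x + y + z)^3
print(c[(1, 1, 1)])                # -> 6, the coefficient of xyz
print(sum(c.values()))             # -> 27 = 3**3
```

Setting all variables to 1 checks the result: the coefficients of (x+y+z)^3 must sum to 3³ = 27.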
Abstract: We investigate large-scale networks in the
context of network survivability under attack. We use appropriate
techniques to evaluate both the attacker-based and the defender-based
network survivability. The attacker is unaware of the links operated
by the defender. Each attacked link has some pre-specified
probability of being disconnected. The defender chooses so as to
maximize the chance of successfully sending the flow to the
destination node. The attacker, however, selects the cut-set with
the highest chance of being disabled in order to partition the network.
Moreover, we extend the problem to the case of selecting the best p
paths to operate by the defender and the best k cut-sets to target by
the attacker, for arbitrary integers p,k>1. We investigate some
variations of the problem and suggest polynomial-time solutions.
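The defender's side of the single-path case, choosing the path that maximizes the product of per-link survival probabilities, reduces to a shortest-path computation on -log weights. A sketch on a hypothetical network (not one of the paper's instances):

```python
import heapq
import math

def most_reliable_path(graph, src, dst):
    """Defender's choice: the path maximizing the product of per-link
    survival probabilities, via Dijkstra on -log(probability) weights."""
    dist = {src: 0.0}
    prev = {}
    heap = [(0.0, src)]
    seen = set()
    while heap:
        d, u = heapq.heappop(heap)
        if u in seen:
            continue
        seen.add(u)
        if u == dst:
            break
        for v, p in graph.get(u, []):
            nd = d - math.log(p)       # multiplying probabilities = adding -logs
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    path, node = [dst], dst
    while node != src:                 # walk predecessors back to the source
        node = prev[node]
        path.append(node)
    return path[::-1], math.exp(-dist[dst])

# Hypothetical network: graph[u] = [(v, survival probability of link u-v)]
graph = {
    "s": [("a", 0.9), ("b", 0.5)],
    "a": [("t", 0.8)],
    "b": [("t", 0.99)],
}
path, prob = most_reliable_path(graph, "s", "t")
print(path, round(prob, 3))    # -> ['s', 'a', 't'] 0.72
```

Here s-a-t survives with probability 0.9 × 0.8 = 0.72, beating s-b-t at 0.5 × 0.99 = 0.495; the attacker's cut-set problem and the p-path/k-cut-set extension are not covered by this sketch.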
Abstract: Digital images are widely used in computer
applications. Storing or transmitting uncompressed images
requires considerable storage capacity and transmission bandwidth.
Image compression is a means to perform transmission or storage of
visual data in the most economical way. This paper explains how
images can be encoded for transmission over a multiplexed
time-frequency domain channel. Multiplexing involves packing
signals together whose representations are compact in the working
domain. In order to optimize transmission resources each 4 × 4
pixel block of the image is transformed by a suitable polynomial
approximation, into a minimal number of coefficients. Using fewer
than 4 × 4 coefficients per block saves a significant amount of
transmitted information, but some information is lost. Different
approximations for the image transformation have been evaluated:
polynomial representation (Vandermonde matrix), least squares +
gradient descent, 1-D Chebyshev polynomials, 2-D Chebyshev
polynomials or singular value decomposition (SVD). Results have
been compared in terms of nominal compression rate (NCR),
compression ratio (CR) and peak signal-to-noise ratio (PSNR)
in order to minimize the error function defined as the difference
between the original pixel gray levels and the approximated
polynomial output. The polynomial coefficients have later been
encoded and handled to generate chirps at a target rate of about two
chirps per 4 × 4 pixel block and then submitted to a transmission
multiplexing operation in the time-frequency domain.
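The 2-D Chebyshev variant of the block approximation can be sketched with NumPy. The pixel block below is hypothetical, and the degree-(2,2) fit keeps 9 coefficients instead of the 16 original pixel values:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Hypothetical 4x4 block of pixel gray levels
block = np.array([[ 52,  55,  61,  66],
                  [ 63,  59,  55,  90],
                  [ 62,  59,  68, 113],
                  [ 63,  58,  71, 122]], dtype=float)

# Sample points mapped to [-1, 1], the natural Chebyshev domain
t = np.linspace(-1, 1, 4)
X, Y = np.meshgrid(t, t)

# Least-squares fit of a degree-(2,2) 2-D Chebyshev surface:
# 9 coefficients approximate the 16 pixel values
V = C.chebvander2d(X.ravel(), Y.ravel(), [2, 2])
coef, *_ = np.linalg.lstsq(V, block.ravel(), rcond=None)
approx = (V @ coef).reshape(4, 4)

mse = float(np.mean((block - approx) ** 2))
print(coef.shape, round(mse, 2))
```

The reconstruction error of such a fit is what the PSNR comparison in the paper quantifies; the coefficients, not the pixels, would then be encoded into chirps.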
Abstract: Fading noise degrades the performance of cellular
communication, most notably in femto- and pico-cells in 3G and 4G
systems. When the wireless channel consists of a small number of
scattering paths, the statistics of the fading noise are not
analytically tractable and pose a serious challenge to developing
closed
canonical forms that can be analysed and used in the design of
efficient and optimal receivers. In this context, noise is multiplicative
and is referred to as stochastically local fading. In many analytical
investigations of multiplicative noise, exponential or Gamma
statistics are invoked. More recent advances by the author of this
paper utilized Poisson-modulated weighted generalized Laguerre
polynomials with controlling parameters and uncorrelated-noise
assumptions. In this paper, we investigate the statistics of a
multi-diversity stochastically local area fading channel when the channel
consists of randomly distributed Rayleigh and Rician scattering
centers with a coherent Nakagami-distributed line of sight component
and an underlying doubly stochastic Poisson process driven by a
lognormal intensity. These combined statistics form a unifying triply
stochastic filtered marked Poisson point process model.
Abstract: The steady flow of a second order fluid through
constricted tube with slip velocity at wall is modeled and analyzed
theoretically. The governing equations are simplified by assuming no
slip in the radial direction. Based on the Kármán–Pohlhausen
procedure, a polynomial solution for the axial velocity profile is
presented. Expressions for the pressure gradient, shear stress,
separation and reattachment points, and radial velocity are also
calculated. The effects of slip and no-slip velocity on the velocity
magnitude, shear stress, and pressure gradient are discussed and
depicted graphically. It is noted that as the Reynolds number
increases, the velocity magnitude of the fluid decreases under both
slip and no-slip conditions. It is also found
that the wall shear stress, separation, and reattachment points are
strongly affected by Reynolds number.
Abstract: Presently various computational techniques are used
in modeling and analyzing environmental engineering data. In the
present study, an intra-comparison of polynomial and radial basis
kernel functions based on Support Vector Regression and, in turn, an
inter-comparison with Multi Linear Regression has been attempted in
modeling mass transfer capacity of vertical (θ = 90O) and inclined (θ
multiple plunging jets (varying from 1 to 16 numbers). The data set
used in this study consists of four input parameters with a total of
eighty eight cases, forty four each for vertical and inclined multiple
plunging jets. For testing, tenfold cross validation was used.
Correlation coefficient values of 0.971 and 0.981 along with
corresponding root mean square error values of 0.0025 and 0.0020
were achieved by using polynomial and radial basis kernel functions
based Support Vector Regression respectively. An intra-comparison
suggests improved performance by radial basis function in
comparison to polynomial kernel based Support Vector Regression.
Further, an inter-comparison with Multi Linear Regression
(correlation coefficient = 0.973 and root mean square error = 0.0024)
reveals that radial basis kernel functions based Support Vector
Regression performs better in modeling and estimating mass transfer
by multiple plunging jets.
Abstract: The need to save time and cost of soil testing at the
planning stage of road work has necessitated developing predictive
models. This study proposes a model for predicting the dry density of
lateritic soils stabilized with corn cob ash (CCA) and blended cement
- CCA. Lateritic soil was first stabilized with CCA at 1.5, 3.0, 4.5 and
6% of the weight of the soil, and then stabilized with the same
proportions as a replacement for cement. Dry density, specific gravity,
maximum degree of saturation and moisture content were determined
for each stabilized soil specimen, following standard procedure.
Polynomial equations containing alpha and beta parameters for CCA
and blended CCA-cement were developed. Experimental values were
correlated with the values predicted from the Matlab curve fitting
tool and with the Solver function of Microsoft Excel 2010. A
correlation coefficient (R²) of 0.86 was obtained, indicating that
the model can be accepted for predicting the maximum dry density of
CCA-stabilized
soils to facilitate quick decision making in roadworks.
Abstract: In this study, we examine some spectral properties
of non-selfadjoint matrix-valued difference equations that have a
polynomial-type Jost solution. The aim of this study is to
investigate the eigenvalues and spectral singularities of the difference
operator L which is expressed by the above-mentioned difference
equation. Firstly, thanks to the representation of polynomial type Jost
solution of this equation, we obtain asymptotics and some analytical
properties. Then, using the uniqueness theorems of analytic functions,
we guarantee that the operator L has a finite number of eigenvalues
and spectral singularities.
Abstract: A mixed method for model order reduction is
presented in this paper. The denominator polynomial is derived by
matching both Markov parameters and time moments, whereas
the numerator polynomial derivation and error minimization are done
using a Genetic Algorithm. The efficiency of the proposed method can
be evaluated in terms of the closeness of the response of the
reduced-order model to that of the original higher-order model, as
well as a comparison of the integral square error.
Abstract: Tamil handwritten documents are taken as a key source
of data for identifying the writer. Tamil is a classical language
with 247 characters, including compound characters, consonants,
vowels and special characters. Most Tamil characters are multifaceted
in nature. Handwriting is a unique feature of an individual, yet
writers may change their handwriting according to their frame of
mind, and this poses a serious challenge in identifying the writer. A
new discriminative model with pooled handwriting features is proposed
and implemented using a support vector machine. A prediction accuracy
of 100% has been reported by the RBF and polynomial kernel based
classification models.