Abstract: A mathematical model for the Dynamics of Economic Profit is constructed by proposing a characteristic differential one-form for this dynamics (analogous to the action in Hamiltonian dynamics). After processing this form with exterior calculus, a pair of characteristic differential equations is generated and solved for the rate of change of profit P as a function of revenue R(t) and cost C(t). By contracting the characteristic differential one-form with a vortex vector, the Lagrangian is obtained for the Dynamics of Economic Profit.
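A minimal worked identity consistent with this setup (an illustration only, not the paper's one-form): since profit is revenue minus cost, any characteristic equations for the rate of change of profit must be consistent with

```latex
P(t) = R(t) - C(t)
\quad\Longrightarrow\quad
\frac{dP}{dt} = \frac{dR}{dt} - \frac{dC}{dt}.
```

The proposed one-form and its exterior-calculus processing must reduce to this accounting identity.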
Abstract: The subset-selection approach to polynomial regression model building assumes that the chosen fixed full set of predefined basis functions contains a subset sufficient to describe the target relation well. However, in most cases the necessary set of basis functions is not known and needs to be guessed – a potentially non-trivial (and long) trial-and-error process. In our research we consider a potentially more efficient approach – Adaptive Basis Function Construction (ABFC). It lets the model building method itself construct the basis functions necessary for creating a model of arbitrary complexity with adequate predictive performance. However, two issues to some extent plague both subset selection and ABFC methods, especially when working with relatively small data samples: selection bias and selection instability. We try to correct these issues by model post-evaluation using cross-validation and model ensembling. To evaluate the proposed method, we empirically compare it to ABFC methods without ensembling, to a widely used subset-selection method, and to several other well-known regression modeling methods, using publicly available data sets.
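A minimal sketch of the two correction ingredients named above, cross-validation scoring of candidate basis subsets followed by averaging the best few models; the basis functions, data, and ensemble size here are illustrative assumptions, not the paper's ABFC procedure:

```python
# Sketch: k-fold CV scoring of polynomial basis subsets, then simple
# averaging ("ensembling") of the best few fitted models.
import itertools
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 80)
y = 1.0 + 2.0 * x - 3.0 * x**2 + rng.normal(0, 0.1, x.size)  # toy target

basis = [lambda t: np.ones_like(t), lambda t: t,
         lambda t: t**2, lambda t: t**3]

def design(xv, subset):
    return np.column_stack([basis[j](xv) for j in subset])

def cv_mse(subset, k=5):
    """Mean k-fold cross-validation MSE of a least-squares fit."""
    folds = np.array_split(np.arange(x.size), k)
    errs = []
    for f in folds:
        tr = np.setdiff1d(np.arange(x.size), f)
        w, *_ = np.linalg.lstsq(design(x[tr], subset), y[tr], rcond=None)
        errs.append(np.mean((design(x[f], subset) @ w - y[f]) ** 2))
    return float(np.mean(errs))

subsets = [s for r in range(1, 5) for s in itertools.combinations(range(4), r)]
top = sorted(subsets, key=cv_mse)[:3]   # ensemble the three best subsets

def ensemble_predict(xq):
    """Average the predictions of the top-ranked models."""
    preds = []
    for s in top:
        w, *_ = np.linalg.lstsq(design(x, s), y, rcond=None)
        preds.append(design(xq, s) @ w)
    return np.mean(preds, axis=0)
```

Averaging several near-best models rather than committing to the single CV winner is what dampens the selection instability mentioned above.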
Abstract: The study of a real function of two real variables can be supported by visualization using a Computer Algebra System (CAS). One class of constraints of such systems is due to the algorithms implemented, which yield continuous approximations of the given function by interpolation. This often masks discontinuities of the function and can produce strange plots that are not compatible with the mathematics. In recent years, point-based geometry has gained increasing attention as an alternative surface representation, both for efficient rendering and for flexible geometry processing of complex surfaces. In this paper we present different artifacts created by mesh surfaces near discontinuities and propose a point-based method that controls and reduces these artifacts. A least-squares penalty method for automatic generation of a mesh that controls the behavior of the chosen function is presented. The special feature of this method is its ability to improve the accuracy of the surface visualization near a set of interior points where the function may be discontinuous. The method is formulated as a minimax problem, and the non-uniform mesh is generated using an iterative algorithm. Results show that, for large poorly conditioned matrices, the new algorithm gives more accurate results than the classical preconditioned conjugate gradient algorithm.
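A simple illustration of the kind of artifact control discussed above (plain jump detection on a grid, not the paper's minimax penalty method; the test function and threshold are assumptions): cells whose corner values jump too much are excluded, so a plotting routine will not draw a spurious "wall" across the discontinuity.

```python
import numpy as np

def mask_jumps(z, threshold):
    """Boolean mask of grid cells whose corner values do not jump."""
    jump_x = np.abs(np.diff(z, axis=0))[:, :-1]
    jump_y = np.abs(np.diff(z, axis=1))[:-1, :]
    return (jump_x < threshold) & (jump_y < threshold)

# f(x, y) = sign(x): a jump discontinuity along the line x = 0
z = np.tile(np.sign(np.linspace(-1.0, 1.0, 5)), (5, 1))
safe = mask_jumps(z, 0.5)      # cells straddling x = 0 are excluded
```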
Abstract: In this paper, we consider the problem of logic simplification for a special class of logic functions, namely complementary Boolean functions (CBF), targeting low-power implementation using the static CMOS logic style. The functions are uniquely characterized by the presence of terms where, for a canonical binary 2-tuple, D(mj) ∪ D(mk) = { } and therefore | D(mj) ∪ D(mk) | = 0 [19]. Similarly, D(Mj) ∪ D(Mk) = { } and hence | D(Mj) ∪ D(Mk) | = 0. Here, 'mk' and 'Mk' represent a minterm and a maxterm respectively. We compare the circuits minimized with our proposed method with those corresponding to the factored Reed-Muller (f-RM) form, the factored Pseudo Kronecker Reed-Muller (f-PKRM) form, and the factored Generalized Reed-Muller (f-GRM) form. We have opted for algebraic factorization of the Reed-Muller (RM) form and its different variants, using the factorization rules of [1], as it is simple and requires much less CPU execution time than Boolean factorization. This technique has enabled us to greatly reduce the literal count as well as the gate count needed for such RM realizations, which are generally prone to consuming more cells and consequently more power. However, this leads to a drawback in terms of the design-for-test attribute associated with the various RM forms. Though we still preserve the definition of those forms, viz. realizing such functionality with only select types of logic gates (AND and XOR gates), the structural integrity of the logic levels is not preserved. This would consequently alter the testability properties of such circuits, i.e. it may increase, decrease, or maintain the number of test input vectors needed for their exhaustive testing, subsequently affecting their generalized test vector computation.
We do not consider the issue of design-for-testability here but instead focus on the power consumption of the final logic implementation, after realization with a conventional CMOS process technology (0.35 micron TSMC process). The quality of the resulting circuits, evaluated on the basis of an established cost metric, viz. power consumption, demonstrates average savings of 26.79% for the samples considered in this work, besides reductions in the number of gates and input literals of 39.66% and 12.98% respectively, in comparison with other factored RM forms.
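For readers unfamiliar with AND/XOR realizations, the unfactored starting point of the RM forms compared above can be sketched as follows; this is the standard positive-polarity Reed-Muller (binary Möbius) transform, not the authors' CBF minimization or factoring procedure:

```python
def reed_muller_coeffs(truth):
    """Positive-polarity Reed-Muller (AND/XOR) coefficients of a Boolean
    function, computed from its 2**n-entry truth table by the GF(2)
    Moebius ("butterfly") transform."""
    c = list(truth)
    n = len(c).bit_length() - 1          # assumes len(truth) is 2**n
    for i in range(n):
        bit = 1 << i
        for j in range(len(c)):
            if j & bit:
                c[j] ^= c[j ^ bit]       # XOR in the coefficient below
    return c
```

A coefficient of 1 at index j means the product of the variables selected by the bits of j appears in the XOR-of-ANDs expansion.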
Abstract: DNA shuffling is a powerful method used for the in vitro evolution of molecules with specific functions and has applications in areas such as pharmaceutical, medical and agricultural research. The success of such experiments depends on a variety of parameters and conditions that sometimes cannot be properly pre-established. Here, two computational models predicting DNA shuffling results are presented, and their use and results are evaluated against an empirical experiment. The in silico and in vitro results are in agreement, indicating the value of these two models and motivating the study and development of new models.
Abstract: The join dependency provides the basis for obtaining lossless join decomposition in a classical relational schema. The existence of a join dependency ensures that the tables always represent the correct data after being joined. Since classical relational databases cannot handle imprecise data, they were extended to fuzzy relational databases so that uncertain, ambiguous, imprecise and partially known information can also be stored in a formal way. However, like classical databases, fuzzy relational databases also undergo decomposition during normalization, and the issue of joining the decomposed fuzzy relations remains open. Our effort in the present paper is to emphasize this issue. We define fuzzy join dependency in the framework of type-1 and type-2 fuzzy relational databases using the concept of fuzzy equality, which is defined using fuzzy functions. We use the fuzzy equi-join operator for computing the fuzzy equality of two attribute values. We also discuss the dependency preservation property on execution of this fuzzy equi-join and derive the necessary condition for fuzzy functional dependencies to be preserved on joining the decomposed fuzzy relations. We further derive the conditions for fuzzy join dependency to exist in the context of both type-1 and type-2 fuzzy relational databases. We find that, unlike in classical relational databases, even the existence of a trivial join dependency does not ensure lossless join decomposition in type-2 fuzzy relational databases. Finally, we derive the conditions for the fuzzy equality to be non-zero and for the qualification of an attribute as a fuzzy key.
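The fuzzy equi-join idea can be sketched as follows; the distance-based similarity measure, threshold, and min-combination used here are illustrative assumptions, since the paper defines fuzzy equality through fuzzy functions:

```python
def fuzzy_eq(a, b, spread=10.0):
    """Degree in [0, 1] to which two numeric values are considered equal
    (hypothetical linear similarity; the paper uses fuzzy functions)."""
    return max(0.0, 1.0 - abs(a - b) / spread)

def fuzzy_equi_join(r, s, threshold=0.5):
    """Join two fuzzy relations given as lists of (value, membership)
    tuples; a result tuple's membership is the minimum of the operand
    memberships and the fuzzy-equality degree of the join values."""
    out = []
    for (va, ma) in r:
        for (vb, mb) in s:
            eq = fuzzy_eq(va, vb)
            if eq >= threshold:
                out.append(((va, vb), min(ma, mb, eq)))
    return out
```

Tuples whose join attributes are only approximately equal still combine, but with a degraded membership degree.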
Abstract: Load forecasting has always been an essential part of efficient power system operation and planning. A novel approach based on support vector machines is proposed in this paper for annual power load forecasting. Different kernel functions are selected to construct a combinatorial algorithm. The performance of the new model is evaluated with a real-world dataset and compared with two neural networks and some traditional forecasting techniques. The results show that the proposed method exhibits superior performance.
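One way to combine different kernel functions is to sum them; the sketch below does this inside kernel ridge regression on toy annual-load data (the kernels, data, and regression variant are assumptions, not the paper's SVM formulation):

```python
import numpy as np

def combined_kernel(a, b, gamma=0.5, degree=2):
    """Sum of an RBF kernel and a polynomial kernel on 1-D inputs."""
    rbf = np.exp(-gamma * (a[:, None] - b[None, :]) ** 2)
    poly = (1.0 + np.outer(a, b)) ** degree
    return rbf + poly

def fit_predict(x_tr, y_tr, x_te, lam=1e-3):
    """Kernel ridge fit with the combined kernel."""
    K = combined_kernel(x_tr, x_tr)
    alpha = np.linalg.solve(K + lam * np.eye(len(x_tr)), y_tr)
    return combined_kernel(x_te, x_tr) @ alpha

years = np.arange(2000, 2010, dtype=float)
load = 50.0 + 2.0 * (years - 2000)         # hypothetical annual load (MW)
x = (years - years.mean()) / years.std()   # normalised model input
held_out = 5                               # leave one year out
pred = fit_predict(np.delete(x, held_out), np.delete(load, held_out),
                   x[held_out:held_out + 1])
```

A sum of a local (RBF) and a global (polynomial) kernel lets the model capture both smooth trends and local irregularities, which is the motivation for combining kernels.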
Abstract: In this study, active tendons with Proportional-Integral-Derivative (PID) type controllers were applied to an SDOF and an MDOF building model. Physical models of the buildings were constituted with virtual springs, dampers and rigid masses. After that, the equations of motion for all degrees of freedom were obtained. Matlab Simulink was utilized to obtain the block diagrams for these equations of motion. Parameters for the controller actions were found by trial and error. After earthquake acceleration data were applied to the systems, building characteristics such as displacements, velocities, accelerations and transfer functions were analyzed for all degrees of freedom. Comparisons of displacement vs. time, velocity vs. time, acceleration vs. time and transfer function (dB) vs. frequency (Hz) were made for the uncontrolled and controlled buildings. The results show that the method seems feasible.
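The uncontrolled-versus-controlled comparison can be sketched with a minimal SDOF simulation; all parameters, the base acceleration, and the PD (rather than full PID) control law are illustrative assumptions, and the paper itself works in Matlab Simulink:

```python
import math

def peak_displacement(kp=0.0, kd=0.0, m=1000.0, c=200.0, k=50000.0,
                      dt=0.001, steps=5000):
    """Semi-implicit Euler simulation of m*x'' + c*x' + k*x = -m*ag + u,
    with an active-tendon control force u = -kp*x - kd*x'."""
    x = v = peak = 0.0
    for i in range(steps):
        ag = math.sin(5.0 * i * dt)        # toy base acceleration (m/s^2)
        u = -kp * x - kd * v               # tendon force on the mass
        a = (-c * v - k * x + u) / m - ag
        v += a * dt
        x += v * dt
        peak = max(peak, abs(x))
    return peak
```

Turning on the tendon gains effectively adds stiffness and damping, so the controlled peak displacement drops, which is the comparison the displacement-vs-time plots make.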
Abstract: Evolutionary algorithms are population-based, stochastic search techniques, widely used as efficient global optimizers. However, many real-life optimization problems require finding optimal solutions to complex, high-dimensional, multimodal problems involving computationally very expensive fitness function evaluations. Use of evolutionary algorithms in such problem domains is thus practically prohibitive. An attractive alternative is to build meta-models, i.e. approximations of the actual fitness functions, which are orders of magnitude cheaper to evaluate than the actual function. Many regression and interpolation tools are available to build such meta-models. This paper briefly discusses the architectures and use of such meta-modeling tools in an evolutionary optimization context. We further present two evolutionary algorithm frameworks which use meta-models for fitness function evaluation. The first framework, the Dynamic Approximate Fitness based Hybrid EA (DAFHEA) model [14], reduces computation time by controlled use of meta-models (in this case an approximate model generated by support vector machine regression) to partially replace actual function evaluations with approximate ones. However, the underlying assumption in DAFHEA is that the training samples for the meta-model are generated from a single uniform model, which does not account for uncertain scenarios involving noisy fitness functions. The second model, DAFHEA-II, an enhanced version of the original framework, incorporates a multiple-model based learning approach for the support vector machine approximator to handle noisy functions [15]. Empirical results obtained by evaluating the frameworks on several benchmark functions demonstrate their efficiency.
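A generic surrogate-assisted loop of the kind described above can be sketched as follows; this is not the DAFHEA code, and a quadratic least-squares fit stands in for the SVM regression meta-model:

```python
import numpy as np

rng = np.random.default_rng(1)

def expensive(x):
    """Stand-in for a costly fitness function (sphere function)."""
    return float(np.sum(x ** 2))

def fit_surrogate(X, y):
    """Cheap quadratic least-squares meta-model of the fitness."""
    A = np.column_stack([np.ones(len(X)), X, X ** 2])
    w, *_ = np.linalg.lstsq(A, y, rcond=None)
    return lambda x: float(np.concatenate(([1.0], x, x ** 2)) @ w)

dim = 2
best = rng.normal(0.0, 1.0, dim)
best_y = expensive(best)
X, y = [best], [best_y]
for _ in range(9):                      # initial archive of true evaluations
    p = rng.normal(0.0, 1.0, dim)
    X.append(p); y.append(expensive(p))

for _ in range(30):                     # surrogate-assisted generations
    model = fit_surrogate(np.array(X), np.array(y))
    offspring = [best + rng.normal(0.0, 0.3, dim) for _ in range(8)]
    offspring.sort(key=model)           # rank all 8 with the cheap model
    for cand in offspring[:2]:          # only the best 2 cost a true call
        fy = expensive(cand)
        X.append(cand); y.append(fy)
        if fy < best_y:
            best, best_y = cand, fy
```

Per generation only 2 of the 8 candidates are truly evaluated; the meta-model screens the rest, which is where the computational saving comes from.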
Abstract: Computations with higher precision than the IEEE 754 standard double precision (about 16 significant digits) have recently become necessary. Although software routines for high-precision computation are available in Fortran and C, users are required to install such routines on their own computers and need detailed knowledge about them. We have constructed a user-friendly online system for octuple-precision computation. In our Web system, users with no knowledge of high-precision computation can easily perform octuple-precision computations by choosing mathematical functions with inputted argument(s), by writing simple mathematical expression(s), or by uploading C program(s). In this paper we enhance the Web system by adding the facility of uploading Fortran programs, which have been widely used in scientific computing. To this end we construct converter routines in two stages.
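As a point of comparison for the precision level involved (this uses Python's standard-library decimal module, not the system described in the paper): octuple precision corresponds to roughly 71 significant decimal digits, which can be reproduced locally as follows.

```python
from decimal import Decimal, getcontext

# Roughly the significand of binary256 ("octuple") precision.
getcontext().prec = 71
sqrt2 = Decimal(2).sqrt()   # sqrt(2) to 71 significant digits
```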
Abstract: Environmental factors affect agricultural productivity and efficiency, resulting in changes in profit efficiency. This paper attempts to estimate the impacts of environmental factors on the profitability of rice farmers in the Red River Delta of Vietnam. The dataset was collected from 349 rice farmers using personal interviews. Both OLS and MLE trans-log profit functions were used in this study. Five production inputs and four environmental factors were included in these functions. The stochastic profit frontier was estimated with a two-stage approach to measure profitability. The results showed that profit efficiency was about 75% on average and that environmental factors, besides farm-specific characteristics, change profit efficiency significantly. Plant disease, soil fertility, irrigation application and water pollution were the four environmental factors causing profit loss in rice production. The results indicated that farmers should reduce household size and the number of farm plots, apply the row seeding technique, and improve environmental factors to obtain high profit efficiency, with special consideration given to improving irrigation water quality.
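The trans-log functional form used above can be sketched with synthetic data (the data, coefficients, and two-input restriction are assumptions; the study estimates the frontier from survey data by OLS and MLE):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200
lx = rng.uniform(0.5, 2.0, (n, 2))          # log input quantities (toy)
ly = (1.0 + 0.6 * lx[:, 0] - 0.3 * lx[:, 1]
      + 0.2 * lx[:, 0] * lx[:, 1]
      + rng.normal(0.0, 0.05, n))           # log profit with noise

# Trans-log design: constant, linear, squared and cross terms of the logs
X = np.column_stack([np.ones(n), lx[:, 0], lx[:, 1],
                     lx[:, 0] ** 2, lx[:, 1] ** 2, lx[:, 0] * lx[:, 1]])
beta, *_ = np.linalg.lstsq(X, ly, rcond=None)
residual_mse = float(np.mean((X @ beta - ly) ** 2))
```

The squared and cross terms are what distinguish the trans-log form from a plain Cobb-Douglas (log-linear) specification.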
Abstract: We present a new method to reconstruct a temporally coherent 3D animation from single- or multi-view RGB-D video data using unbiased feature point sampling. Given RGB-D video data in the form of a 3D point cloud sequence, our method first extracts feature points using both color and depth information. In the subsequent steps, these feature points are used to match two 3D point clouds in consecutive frames, independent of their resolution. Our new motion-vector-based dynamic alignment method then fully reconstructs a spatio-temporally coherent 3D animation. We perform extensive quantitative validation using novel error functions to analyze the results. We show that, despite the limiting factors of temporal and spatial noise associated with RGB-D data, it is possible to extract temporal coherence and faithfully reconstruct a temporally coherent 3D animation from RGB-D video data.
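The frame-to-frame matching step can be illustrated with plain mutual nearest-neighbour matching on 3D positions (an assumption for illustration; the paper's matching also uses colour and is resolution-independent):

```python
import numpy as np

def match_points(a, b):
    """Index pairs (i, j) where a[i] and b[j] are mutual nearest neighbours."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
    ab = d.argmin(axis=1)        # nearest point in b for each point of a
    ba = d.argmin(axis=0)        # nearest point in a for each point of b
    return [(i, j) for i, j in enumerate(ab) if ba[j] == i]

frame0 = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0]])
frame1 = frame0[::-1] + 0.01     # next frame: reordered, slightly moved
matches = match_points(frame0, frame1)
```

The matched pairs define per-point motion vectors between consecutive frames, which is the raw material for the dynamic alignment step.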
Abstract: In this work, we improve a previously developed segmentation scheme aimed at extracting edge information from speckled images using a maximum likelihood edge detector. The scheme was based on finding a threshold for the probability density function of a new kernel, defined as the arithmetic mean-to-geometric mean ratio field over a circular neighborhood set, and, in a general context, is founded on a likelihood random field model (LRFM). The segmentation algorithm was applied to discriminated speckle areas obtained using simple elliptic discriminant functions based on measures of the signal-to-noise ratio with fractional-order moments. A rigorous stochastic analysis was used to derive an exact expression for the cumulative distribution function of the random field. Based on this, an accurate probability of error was derived and the performance of the scheme was analysed. The improved segmentation scheme performed well for both simulated and real images and showed superior results to those previously obtained using the original LRFM scheme and standard edge detection methods. In particular, the false alarm probability was markedly lower than that of the original LRFM method, with over-segmentation artifacts virtually eliminated. The importance of this work lies in the development of a stochastic-based segmentation allowing an accurate quantification of the probability of false detection. Non-visual quantification and misclassification in medical ultrasound speckled images is relatively new and of interest to clinicians.
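The arithmetic mean-to-geometric mean ratio field at the heart of the kernel can be sketched as follows (a square window is used here instead of the paper's circular neighborhood, and the example image is synthetic):

```python
import numpy as np

def am_gm_ratio(img, radius=1):
    """Arithmetic-to-geometric mean ratio over a (2r+1)x(2r+1) window;
    pixels must be strictly positive. The ratio is 1 on homogeneous
    areas and grows where the neighbourhood is heterogeneous (edges)."""
    h, w = img.shape
    out = np.ones((h, w))
    for i in range(radius, h - radius):
        for j in range(radius, w - radius):
            patch = img[i - radius:i + radius + 1,
                        j - radius:j + radius + 1]
            out[i, j] = patch.mean() / np.exp(np.log(patch).mean())
    return out

flat = np.ones((5, 5))
edged = np.ones((5, 6))
edged[:, 3:] = 9.0               # a step edge between columns 2 and 3
```

By the AM-GM inequality the ratio is always at least 1, so thresholding its distribution separates edge neighbourhoods from homogeneous speckle.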
Abstract: Fuzzy logic control (FLC) systems have been tested in many technical and industrial applications as a useful modeling tool that can handle the uncertainties and nonlinearities of modern control systems. The main drawback of FLC methodologies in the industrial environment is the challenge of selecting the optimum tuning parameters.
In this paper, a method is proposed for finding the optimum membership functions of a fuzzy system using the particle swarm optimization (PSO) algorithm. A hybrid algorithm combining fuzzy logic control and PSO is used to design a controller for a continuous stirred tank reactor (CSTR) with the aim of achieving accurate and acceptable results. To exhibit the effectiveness of the proposed algorithm, it is used to optimize the Gaussian membership functions of the fuzzy model of a nonlinear CSTR system as a case study. The optimized membership functions (MFs) are shown to provide better performance than heuristically defined MFs for the same system.
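The PSO-over-MF-parameters idea can be sketched with a toy objective (recovering the centre and width of a single Gaussian MF); the cost function, swarm settings, and target are assumptions, since the paper's objective is closed-loop CSTR performance:

```python
import numpy as np

rng = np.random.default_rng(2)
xs = np.linspace(-3.0, 3.0, 61)
target = np.exp(-((xs - 0.7) ** 2) / (2 * 0.5 ** 2))    # MF to recover

def cost(p):
    """Mean squared mismatch between a candidate Gaussian MF and target."""
    centre, width = p
    mf = np.exp(-((xs - centre) ** 2) / (2 * width ** 2))
    return float(np.mean((mf - target) ** 2))

n, inertia, c1, c2 = 20, 0.7, 1.5, 1.5
pos = rng.uniform([-3.0, 0.1], [3.0, 2.0], (n, 2))      # (centre, width)
vel = np.zeros((n, 2))
pbest = pos.copy()
pbest_cost = np.array([cost(p) for p in pos])
g = pbest[pbest_cost.argmin()].copy()

for _ in range(100):
    r1, r2 = rng.random((n, 2)), rng.random((n, 2))
    vel = inertia * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (g - pos)
    pos = pos + vel
    pos[:, 1] = np.clip(pos[:, 1], 0.05, 3.0)           # keep widths positive
    costs = np.array([cost(p) for p in pos])
    better = costs < pbest_cost
    pbest[better] = pos[better]
    pbest_cost[better] = costs[better]
    g = pbest[pbest_cost.argmin()].copy()
```

In the paper's setting, `cost` would instead run the fuzzy CSTR controller and score the closed-loop response; the swarm mechanics are unchanged.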
Abstract: Interest in human consciousness has been revived in the late 20th century in different scientific disciplines. Consciousness studies involve both its understanding and its application. In this paper, a computational model of the minimum consciousness functions necessary, in the author's view, for Artificial Intelligence applications is presented, with the aim of improving the way computations will be made in the future. In Section I, human consciousness is briefly described according to the scope of this paper. In Section II, a minimum set of consciousness functions is defined, based on the literature reviewed, to be modelled; a computational model of these functions is then presented in Section III. In Section IV, an analysis of the model is carried out to describe its functioning in detail.
Abstract: Enhancement of the performance of a reverse osmosis (RO) unit through periodic control is studied. The periodic control manipulates the feed pressure and flow rate of the RO unit. To ensure the periodic behavior of the inputs, the manipulated variables (MVs) are transformed into sinusoidal functions. In this case, the amplitude and period of the sinusoidal functions become the surrogate MVs and are regulated via a nonlinear model predictive control algorithm. The simulation results indicate that the control system can generate the cyclic inputs necessary to enhance the closed-loop performance, in the sense of increasing permeate production and lowering salt concentration. The proposed control system can attain its objective with arbitrary set points for the controlled outputs. Successful results were also obtained in the presence of modeling errors.
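The surrogate-MV transformation can be made concrete in a few lines (values are hypothetical; the controller in the paper chooses the amplitude and period, and the expansion below turns them back into the plant input):

```python
import math

def periodic_input(base, amplitude, period, t):
    """Expand surrogate MVs (amplitude, period) into the actual
    time-domain manipulated variable, e.g. feed pressure."""
    return base + amplitude * math.sin(2.0 * math.pi * t / period)

# Averaged over one full period, the input returns to the base point.
samples = [periodic_input(10.0, 2.0, 4.0, k * 4.0 / 400) for k in range(400)]
```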
Abstract: This article presents a voltage-mode universal biquadratic filter simultaneously performing three standard functions: low-pass, high-pass and band-pass, employing a differential difference current conveyor (DDCC) and current controlled current conveyors (CCCIIs) as active elements. The features of the circuit are that the quality factor and pole frequency can be tuned independently via the input bias currents, and that the circuit description is very simple, consisting of 1 DDCC, 2 CCCIIs, 2 electronic resistors and 2 grounded capacitors. Since it requires no component matching conditions, the proposed circuit is very appropriate for further development into an integrated circuit. PSPICE simulation results are presented and agree well with the theoretical anticipation.
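The three simultaneous responses can be sketched with the generic second-order transfer functions they realize (generic biquad algebra with hypothetical w0 and Q, not the DDCC/CCCII circuit equations):

```python
import numpy as np

w0, Q = 2 * np.pi * 1e3, 1.0           # hypothetical pole freq / quality
f = np.logspace(2, 5, 400)             # 100 Hz .. 100 kHz sweep
s = 1j * 2 * np.pi * f
den = s ** 2 + (w0 / Q) * s + w0 ** 2  # shared biquad denominator
lp = w0 ** 2 / den                     # low-pass
hp = s ** 2 / den                      # high-pass
bp = (w0 / Q) * s / den                # band-pass
```

Since the three numerators sum to the denominator, lp + hp + bp = 1 identically, reflecting that all three outputs share the same poles set by w0 and Q.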
Abstract: Soil chemical and physical properties play important roles in the environment, agricultural sustainability and human health. The objective of this research is to determine the spatial distribution patterns of Cd, Zn, K, pH, TNV, organic material and electrical conductivity (EC) in agricultural soils of the Natanz region in Esfehan province. In this study, geostatistical and non-geostatistical methods were used to predict the spatial distribution of these parameters. 64 composite soil samples were taken at 0-20 cm depth. The study area is located in the south of the NATANZ agricultural lands, with an area of 21660 hectares. The spatial distributions of Cd, Zn, K, pH, TNV, organic material and electrical conductivity (EC) were determined using geostatistics and a geographic information system. Results showed that the Cd, pH, TNV and K data had normal distributions, while the Zn, OC and EC data did not. Kriging, Inverse Distance Weighting (IDW), Local Polynomial Interpolation (LPI) and Radial Basis Function (RBF) methods were used for interpolation. Trend analysis showed that organic carbon had no trend in the north-south and east-west directions, while K and TNV had second-degree trends. We used several error measures, including the mean absolute error (MAE), mean squared error (MSE) and mean biased error (MBE). Ordinary kriging (exponential model), LPI, RBF and IDW were chosen as the best methods for interpolating the soil parameters. Prediction maps produced by disjunctive kriging showed an intensive shortage of organic matter over the whole study area and a shortage of K over more than 63.4 percent of it.
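Of the interpolators compared above, IDW is the simplest to state; a minimal sketch together with two of the error measures used (the sample points and values are hypothetical):

```python
import numpy as np

def idw(known_xy, known_v, query_xy, power=2.0):
    """Inverse Distance Weighting interpolation of scattered samples."""
    d = np.linalg.norm(query_xy[:, None, :] - known_xy[None, :, :], axis=2)
    w = 1.0 / np.maximum(d, 1e-12) ** power   # clamp avoids div-by-zero
    w /= w.sum(axis=1, keepdims=True)
    return w @ known_v

def mae(a, b):
    return float(np.mean(np.abs(a - b)))

def mse(a, b):
    return float(np.mean((a - b) ** 2))

xy = np.array([[0.0, 0], [1, 0], [0, 1], [1, 1]])   # toy sample locations
v = np.array([1.0, 2.0, 3.0, 4.0])                  # toy soil values
```

In a cross-validation comparison like the study's, each method predicts held-out sample locations and MAE/MSE/MBE rank the methods.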
Abstract: In this paper, we employ the approach of linear programming to propose a new interactive broadcast method. In our method, a film S is divided into n equal parts and broadcast via k channels. The user simultaneously downloads these segments from the k channels into the user's set-top box (STB) and plays them in order. Our method assumes that the initial p segments will not have fast-forwarding capabilities. Every time the user wants to initiate d-times fast-forwarding, according to our broadcasting strategy, the necessary segments are either already saved in the user's STB or downloaded just in time for playing. The proposed broadcasting strategy thus allows the user not only to pause and rewind, but also to fast-forward.
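The delay/channel trade-off that such schemes navigate can be illustrated with a toy round-robin baseline (not the paper's linear-programming construction): each channel cyclically repeats its block of segments, the STB records all channels from the moment the user tunes in, and we measure the worst-case wait before continuous playback can start.

```python
def worst_startup_delay(n, k):
    """Worst-case startup delay (in segment slots) when a film of n
    equal segments is cycled round-robin over k channels (k divides n)
    and the STB records all k channels from the moment of tuning in."""
    per = n // k                     # segments carried per channel
    worst = 0
    for t0 in range(per):            # the schedule repeats every `per` slots
        recv = {}
        for c in range(k):
            for t in range(t0, t0 + per):
                recv[c * per + t % per] = t + 1 - t0   # slots until received
        worst = max(worst, max(recv[s] - s for s in range(n)))
    return worst
```

Doubling the channel count halves the worst-case startup delay in this baseline; the paper's LP-derived schedule optimizes this trade-off while also supporting pause, rewind, and fast-forward.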
Abstract: With the exponential progress of technological development comes a strong sense that events are moving too quickly for our schools and that teachers may be losing control of them in the process. This paper examines the impact of e-learning and e-teaching in universities, from both the student and the teacher perspective. In particular, it is shown that e-teachers should focus not only on the technical capacities and functions of IT materials and activities, but must attempt to more fully understand how their e-learners perceive the learning environment. From the e-learner perspective, this paper indicates that simply having IT tools available does not automatically translate into all students becoming effective learners. More evidence-based evaluative research is needed to allow e-learning and e-teaching to reach their full potential.