Abstract: One of the major challenges in solving initial and boundary value problems is finding approximate solutions that deviate minimally from the exact solution without excessive rigor or complication. The Taylor series method provides a simple way of obtaining an infinite series that converges to the exact solution of an initial value problem, but the method is somewhat limited for two-point boundary value problems, since the infinite series must be truncated to incorporate the boundary conditions. In this paper, the Ying Buzu Shu algorithm is used to solve a two-point boundary value nonlinear diffusion problem at fourth and sixth order, and the relative errors and rates of convergence of the two solutions to the exact solution are compared.
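The Ying Buzu Shu idea is closely related to the classical rule of double false position: two trial solutions are combined, weighted by their residuals, to produce an improved one. A minimal scalar sketch of that update (an illustration of the principle only, not the paper's boundary value algorithm) is:

```python
def ying_buzu_step(f, x1, x2):
    """One 'excess and deficiency' (double false position) update:
    combine two trial values x1, x2 weighted by their residuals f(x1), f(x2)."""
    f1, f2 = f(x1), f(x2)
    return (x1 * f2 - x2 * f1) / (f2 - f1)

# Example: one step toward a root of x**2 - 2 = 0 from trials 1 and 2.
f = lambda x: x * x - 2
x = ying_buzu_step(f, 1.0, 2.0)  # a first approximation to sqrt(2)
```

In the boundary value setting, the residual f would measure how far a trial solution misses the second boundary condition.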
Abstract: Machining instability, or chatter, can impose an important limitation on discrete part machining. In this work, a networked implementation of milling stability optimization with Bayesian learning is presented. The milling process was monitored with a wireless sensory tool holder instrumented with an accelerometer at TU Wien, Vienna, Austria. The recorded data from a milling test cut were used to classify the cut as stable or unstable based on a frequency analysis. The test cut result was used in a Bayesian stability learning algorithm at the University of Tennessee, Knoxville, Tennessee, USA. The algorithm calculated the probability of stability as a function of axial depth of cut and spindle speed based on the test result and recommended parameters for the next test cut. The iterative process between the two transatlantic locations was repeated until convergence to a stable optimal process parameter set was achieved.
Abstract: The paper is a comparative study of two classical variants of parallel projection methods for solving the convex feasibility problem and of their equivalents that involve variable weights in the construction of the solutions. We used a graphical representation of these methods to inpaint a convex area of an image in order to investigate their effectiveness in image reconstruction applications. We also present a numerical analysis of the convergence of these four algorithms in terms of the average number of steps and execution time, in a classical CPU implementation and, alternatively, in a parallel GPU implementation.
Abstract: Medicines regulatory authorities expect pharmaceutical companies and contract research organizations to seek ways to certify that their laboratory control measurements are reliable. Establishing and maintaining laboratory quality standards are essential in ensuring the accuracy of test results. 'ISO/IEC 17025:2017' and 'WHO Good Practices for Pharmaceutical Quality Control Laboratories (GPPQCL)' are two quality standards commonly employed in developing laboratory quality systems. A review was conducted on the two standards to elaborate on their areas of convergence and divergence. The goal was to understand how differences in each standard's requirements may influence laboratories' choices as to which document is easier to adopt for quality systems. A qualitative review method compared similar items in the two standards while mapping out areas where there were specific differences in the requirements of the two documents. The review also provided a detailed description of the clauses and parts covering management and technical requirements in these laboratory standards. The review showed that both documents share requirements for over ten critical areas covering objectives, infrastructure, management systems, and laboratory processes. There were, however, differences in standard expectations: GPPQCL emphasizes system procedures for planning and future budgets that will ensure continuity, whereas ISO 17025 is more focused on a risk management approach to establishing laboratory quality systems. Elements in the two documents form common standard requirements to assure the validity of laboratory test results and promote mutual recognition. The ISO standard currently has more global patronage than GPPQCL.
Abstract: We analyze a first class on the convergence of real-number sequences, hereafter sequences, designed to foster exploration and discovery of concepts through graphical representations before engaging students in proving. The main goal was to differentiate between sequences and continuous functions of a real variable and to better understand both concepts at an initial stage. We applied the analytic frame of Mathematical Working Spaces, which we expect this work to help extend to sequences since, as far as we know, it has so far been developed only for other objects; the frame is relevant for analyzing how mathematical work is built systematically by connecting the epistemological and cognitive perspectives and involving the semiotic, instrumental, and discursive dimensions.
Abstract: A sequence of finite tandem queues is considered in this
study. Each queue has a single server, which operates under the
egalitarian processor-sharing discipline. External customers arrive at
each queue according to a renewal input process and have a general
service-time distribution. Upon completing service, customers leave
the current queue and enter the next. Under mild assumptions,
including critical data, we prove the existence and uniqueness
of the fluid solution. For the asymptotic behavior, we provide
necessary and sufficient conditions for the invariant state and for the
convergence to this invariant state. Finally, we establish the
convergence of a correctly normalized state process to a fluid limit
characterized by a system of algebraic and integral equations.
Abstract: The Firefly algorithm (FA) and the Sine Cosine algorithm (SCA) are two very popular and advanced metaheuristic algorithms. However, when applied to multi-objective optimization problems, these algorithms have respective shortcomings, such as premature convergence and limited exploration capability. Combining the advantages of FA and SCA while avoiding their deficiencies may improve the accuracy and efficiency of the algorithm. This paper proposes a hybridization of the FA and SCA algorithms, named the multi-objective firefly-sine cosine algorithm (MFA-SCA), to develop a more efficient metaheuristic algorithm than FA and SCA.
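For context, the textbook SCA position update that such a hybrid inherits can be sketched as follows (a generic illustration following common SCA descriptions, not the MFA-SCA operator itself; parameter names are illustrative):

```python
import math
import random

def sca_update(x, best, t, t_max, a=2.0):
    """One Sine Cosine Algorithm position update for a single solution.
    x, best: coordinate lists; t: current iteration; t_max: iteration budget."""
    r1 = a - t * a / t_max  # shrinks over time: exploration -> exploitation
    new_x = []
    for xi, pi in zip(x, best):
        r2 = random.uniform(0.0, 2.0 * math.pi)   # oscillation phase
        r3 = random.uniform(0.0, 2.0)             # weight on the best solution
        step = r1 * abs(r3 * pi - xi)
        # Choose the sine or cosine branch with equal probability.
        new_x.append(xi + (math.sin(r2) if random.random() < 0.5 else math.cos(r2)) * step)
    return new_x
```

At t = t_max the schedule r1 reaches zero, so the update leaves the solution unchanged; the hybrid's firefly component would contribute an attraction move on top of this.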
Abstract: Noise estimation is essential in today's wireless systems
for power control, adaptive modulation, interference suppression, and
quality of service. Deep learning (DL) has already been applied in the
physical layer for modulation and signal classification. Unacceptably
low accuracy, below 50%, is found to undermine the traditional
application of DL classification to SNR prediction. In this paper,
we use a divide-and-conquer strategy and a classifier-fusion method
to simplify SNR classification and thereby enhance DL learning
and prediction. Specifically, multiple CNNs are used for classification
rather than a single CNN. Each CNN performs a binary classification
against a single SNR value with two labels: 'less than' and 'greater
than or equal to'. Together, the multiple CNNs are combined to
effectively classify over a range of SNR values, −20 dB ≤ SNR ≤ 32 dB.
We use pre-trained CNNs to predict SNR over a wide range of joint
channel parameters, including multiple Doppler shifts (0, 60, 120 Hz),
power-delay profiles, and signal modulation types (QPSK, 16-QAM,
64-QAM). The approach achieves an individual SNR prediction accuracy
of 92%, a composite accuracy of 70%, and prediction convergence one
order of magnitude faster than that of traditional estimation.
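The fusion idea, with toy threshold detectors standing in for the trained CNNs, can be sketched as follows (an illustration of combining per-threshold binary decisions into one SNR estimate; the names and the stand-in classifiers are not from the paper):

```python
def fuse_binary_classifiers(classifiers, thresholds, x):
    """Each classifier answers 'is SNR >= threshold?'; the fused estimate
    is the highest threshold still answered 'yes'."""
    estimate = thresholds[0]  # default: at or below the lowest threshold
    for clf, thr in zip(classifiers, thresholds):
        if clf(x):            # binary decision: SNR >= thr
            estimate = thr
    return estimate

# Toy stand-ins: perfect threshold detectors on a scalar input x.
thresholds = list(range(-20, 33, 2))           # -20 .. 32 dB in 2 dB steps
classifiers = [lambda x, t=t: x >= t for t in thresholds]
fuse_binary_classifiers(classifiers, thresholds, 7.3)  # -> 6
```

With real CNNs, each clf would run inference on the received waveform rather than compare a scalar, but the fusion logic is the same.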
Abstract: The State Estimator has become an intrinsic part of Energy Management Systems (EMS). The SCADA measurements received from the field are processed by the State Estimator in order to accurately determine the actual operating state of the power system and provide that information to other real-time network applications. All EMS vendors offer State Estimator functionality in their baseline products. However, setting up a State Estimator and ensuring that it consistently produces a reliable solution often consumes a substantial engineering effort. This paper provides generic recommendations and describes a simple practical approach to efficient tuning of the State Estimator, based on working experience with major EMS software platforms and consulting projects at many electrical utilities in the USA.
Abstract: Over-parameterized neural networks have attracted a
great deal of attention in recent deep learning theory research,
as they challenge the classic perspective of over-fitting when
the model has excessive parameters and have gained empirical
success in various settings. While a number of theoretical works
have been presented to demystify properties of such models, the
convergence properties of such models are still far from being
thoroughly understood. In this work, we study the convergence
properties of training two-hidden-layer partially over-parameterized
fully connected networks with the Rectified Linear Unit activation via
gradient descent. To our knowledge, this is the first theoretical work
to understand convergence properties of deep over-parameterized
networks without the equally-wide-hidden-layer assumption and
other unrealistic assumptions. We provide a probabilistic lower bound
on the widths of the hidden layers and prove a linear convergence
rate for gradient descent. We also conduct experiments on synthetic
and real-world datasets to validate our theory.
Abstract: This paper studies path planning methods based on ant colony optimization (ACO) and proposes a heuristic-integration ant colony optimization (HIACO). The paper not only analyzes and optimizes the underlying principle but also simulates and analyzes the parameters relevant to applying HIACO to path planning. Compared with the original algorithm, the improved algorithm optimizes the probability formula, the tabu table mechanism, and the updating mechanism, and introduces more reasonable heuristic factors. The optimized HIACO not only draws on the strong ideas of the original algorithm but also mitigates, to some extent, premature convergence, convergence to suboptimal solutions, and improper exploration. HIACO can be used to achieve better simulation results and the desired optimization. Combined with the probability and update formulas, several parameters of HIACO are tested. This paper validates the principle of HIACO and gives the best parameter ranges for path planning.
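The probability formula that HIACO modifies builds on the standard ACO transition rule, in which an ant moves to node j with probability proportional to pheromone^alpha * heuristic^beta over the nodes not yet visited. A minimal sketch of that textbook rule (not HIACO's modified version) is:

```python
import random

def choose_next(pheromone, heuristic, allowed, alpha=1.0, beta=2.0):
    """Roulette-wheel choice of the next node: probability proportional to
    tau^alpha * eta^beta, restricted to nodes outside the tabu table."""
    weights = [pheromone[j] ** alpha * heuristic[j] ** beta for j in allowed]
    total = sum(weights)
    r = random.uniform(0.0, total)
    for j, w in zip(allowed, weights):
        r -= w
        if r <= 0:
            return j
    return allowed[-1]  # guard against floating-point round-off
```

The heuristic factor eta is typically the inverse of the step cost (e.g. 1/distance), which is the term HIACO's "more reasonable heuristic factors" would refine.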
Abstract: The present study deals with the finite element (FE) analysis of thermally-induced bistable plates using various plate elements. The quadrilateral plate elements include the 4-node conforming plate element based on the classical laminated plate theory (CLPT), the 4-node and 9-node Mindlin plate elements based on the first-order shear deformation laminated plate theory (FSDT), and a displacement-based 4-node quadrilateral element (RDKQ-NL20). Using von Karman's large-deflection theory and the total Lagrangian (TL) approach, the nonlinear FE governing equations for a plate under thermal load are derived. A convergence analysis for the four elements is first conducted. The elements are then used to predict the stable shapes of a thermally-induced bistable plate. Numerical tests show that the FSDT-based plate elements, namely the 4-node and 9-node Mindlin, and the RDKQ-NL20 plate element predict two stable cylindrical shapes, while the 4-node conforming plate element predicts a saddle shape. Comparing the simulation results with ABAQUS, the RDKQ-NL20 element shows the best accuracy among all the elements.
Abstract: An Upgraded Cuckoo Search Algorithm is proposed here to solve optimization problems, building on the improvements made in earlier versions of the Cuckoo Search Algorithm. Shortcomings of the earlier versions, such as slow convergence and trapping in local optima, are addressed in the proposed version by random initialization of solutions via an Improved Lambda Iteration Relaxation method, a Random Gaussian Distribution Walk to improve local search, a Greedy Selection step to accelerate convergence to the optimized solution, and a "Study Nearby Strategy" to improve global search performance by avoiding entrapment in local optima. A Crossover Operation is further proposed to generate better solutions. The proposed strategy shows superiority in convergence speed over several classical algorithms. Three standard algorithms were tested on a 6-generator standard test system, and the results presented clearly demonstrate the proposed algorithm's superiority over the other established algorithms. The algorithm is also capable of handling systems with more units.
Abstract: This work focuses on the continuous-time symmetric alpha-stable process, frequently used in modeling signals with indefinitely growing variance and often observed with an unknown additive error. The objective of this paper is to estimate this error from discrete observations of the signal. To that end, we propose a method based on smoothing the observations via the Jackson polynomial kernel, taking into account the width of the interval where the spectral density is non-zero. This technique avoids the aliasing phenomenon encountered when the estimation is made from discrete observations of a continuous-time process. We studied the convergence rate of the estimator and showed that it improves when the spectral density is zero at the origin. Thus, we set up an estimator of the additive error that can be subtracted to approach the original, error-free signal.
Abstract: Assessing several individuals intensively over time
yields intensive longitudinal data (ILD). Even though ILD provide
rich information, they also bring other data analytic challenges. One
of these is the increased occurrence of missingness with increased
study length, possibly under non-ignorable missingness scenarios.
Multiple imputation (MI) handles missing data by creating several
imputed data sets, and pooling the estimation results across imputed
data sets to yield final estimates for inferential purposes. In this
article, we introduce dynr.mi(), a function in the R package,
Dynamic Modeling in R (dynr). The package dynr provides a suite
of fast and accessible functions for estimating and visualizing the
results from fitting linear and nonlinear dynamic systems models in
discrete as well as continuous time. By integrating the estimation
functions in dynr and the MI procedures available from the R
package, Multivariate Imputation by Chained Equations (MICE), the
dynr.mi() routine is designed to handle possibly non-ignorable
missingness in the dependent variables and/or covariates in a
user-specified dynamic systems model via MI, with convergence
diagnostic checks. We utilized dynr.mi() to examine, in the context
of a vector autoregressive model, the relationships among individuals’
ambulatory physiological measures, and self-report affect valence
and arousal. The results from MI were compared to those from
listwise deletion of entries with missingness in the covariates.
When we determined the number of iterations based on the
convergence diagnostics available from dynr.mi(), differences in
the statistical significance of the covariate parameters were observed
between the listwise deletion and MI approaches. These results
underscore the importance of considering diagnostic information in
the implementation of MI procedures.
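The pooling step described above typically follows Rubin's rules: the point estimate is the mean across imputed data sets, and the total variance combines within- and between-imputation variance. A minimal sketch (independent of dynr.mi()'s actual implementation) is:

```python
def pool_rubin(estimates, variances):
    """Pool results from m imputed data sets by Rubin's rules:
    point estimate = mean; total variance = within-imputation variance
    + (1 + 1/m) * between-imputation variance."""
    m = len(estimates)
    qbar = sum(estimates) / m
    within = sum(variances) / m
    between = sum((q - qbar) ** 2 for q in estimates) / (m - 1)
    total = within + (1 + 1 / m) * between
    return qbar, total

# Three imputed data sets for one parameter (toy numbers).
pool_rubin([1.0, 1.2, 0.8], [0.04, 0.05, 0.03])
```

The between-imputation term is what inflates standard errors to reflect missing-data uncertainty, which is why MI and listwise deletion can disagree on statistical significance.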
Abstract: The statistical modelling of precipitation data for a
given portion of territory is fundamental for the monitoring of
climatic conditions and for Hydrogeological Management Plans
(HMP). This modelling is rendered particularly complex by the
changes taking place in the frequency and intensity of precipitation,
presumably to be attributed to the global climate change. This paper
applies the Wakeby distribution (with 5 parameters) as a theoretical
reference model. The number and the quality of the parameters
indicate that this distribution may be the appropriate choice for
the interpolations of the hydrological variables and, moreover, the
Wakeby is particularly suitable for describing phenomena producing
heavy tails. The proposed estimation methods for determining the
value of the Wakeby parameters are the same as those used for
density functions with heavy tails. The commonly used procedure
is the classic method of moments weighed with probabilities
(probability weighted moments, PWM) although this has often shown
difficulty of convergence, or rather, convergence to a configuration
of inappropriate parameters. In this paper, we analyze the problem of
the likelihood estimation of a random variable expressed through its
quantile function. The method of maximum likelihood, in this case,
is more demanding than in the situations of more usual estimation.
The reasons for this lie, in the sampling and asymptotic properties of
the estimators of maximum likelihood which improve the estimates
obtained with indications of their variability and, therefore, their
accuracy and reliability. These features are highly appreciated in
contexts where poor decisions, attributable to an inefficient or
incomplete information base, can cause serious damages.
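For reference, the five-parameter Wakeby distribution is usually defined through its quantile function (standard parameterization; the paper's notation may differ):

```latex
x(F) = \xi + \frac{\alpha}{\beta}\left[1 - (1-F)^{\beta}\right]
           - \frac{\gamma}{\delta}\left[1 - (1-F)^{-\delta}\right],
\qquad 0 \le F < 1 .
```

Because no closed-form density or distribution function exists in general, the likelihood must be evaluated through the quantile function itself, which is the source of the extra difficulty the abstract refers to.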
Abstract: In this paper, ways of modeling dynamic measurement
systems are discussed. Specifically, a linear system with a single
input and single output can be modeled with a shallow neural
network. Gradient-based optimization algorithms are then used to
search for the proper coefficients. In addition, methods based on
the normal equation and on second-order gradient descent are
proposed to accelerate the modeling process, and ways of obtaining
better gradient estimates are discussed. It is shown that the
mathematical essence of the learning objective is maximum
likelihood under Gaussian noise. For conventional gradient descent,
mini-batch learning and gradient momentum contribute to faster
convergence and enhanced model capability. Lastly, experimental
results proved the effectiveness of the second-order gradient descent
algorithm and indicated that optimization with the normal equation
was the most suitable for linear dynamic models.
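For a model that is linear in its parameters, y ≈ Xw, the normal-equation approach referred to above has the closed form XᵀXw = Xᵀy, so no iterative descent is needed. A minimal sketch (a generic least-squares fit, not the paper's specific dynamic model):

```python
import numpy as np

def fit_normal_equation(X, y):
    """Closed-form least-squares coefficients for y ~ X @ w,
    obtained by solving the normal equations X^T X w = X^T y."""
    return np.linalg.solve(X.T @ X, X.T @ y)

# Toy single-input single-output example: noise-free data from y = 2*x + 1.
x = np.array([0.0, 1.0, 2.0, 3.0])
X = np.column_stack([x, np.ones_like(x)])      # columns: slope, intercept
w = fit_normal_equation(X, np.array([1.0, 3.0, 5.0, 7.0]))  # -> approx [2., 1.]
```

This closed form is also the maximum-likelihood solution under Gaussian noise, consistent with the objective identified in the abstract.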
Abstract: Multiobjective Particle Swarm Optimization (MOPSO) has shown effective performance in solving test functions and real-world optimization problems. However, this method has a premature convergence problem, which may lead to a lack of diversity. In order to improve its performance, this paper presents a hybrid approach that embeds MOPSO in an island model and integrates a local search technique, Variable Neighborhood Search, to enhance diversity in the swarm. Experiments on two series of test functions have shown the effectiveness of the proposed approach. A comparison with other evolutionary algorithms shows that the proposed approach performs well in solving multiobjective optimization problems.
Abstract: A coupled two-layer finite volume/finite element
method was proposed for solving dam-break flow problem
over deformable beds. The governing equations consist of the
well-balanced two-layer shallow water equations for the water flow
and a linear elastic model for the bed deformations. Deformations
in the topography can be caused by a sudden localized force or
simply by a class of sliding displacements of the bathymetry.
This deformation of the bed is a source of perturbations on
the water surface, generating water waves which propagate with
different amplitudes and frequencies. Coupling conditions at the
interface are also investigated in the current study, and a two-mesh
procedure is proposed for the transfer of information across the
interface. In the present work a new procedure is implemented at
the soil-water interface, using the finite element and two-layer finite
volume meshes with a conservative distribution of the forces at
their intersections. The finite element method employs quadratic
elements on an unstructured triangular mesh, and the finite volume
method uses the Rusanov scheme to reconstruct the numerical fluxes.
The coupled numerical method is highly efficient, accurate, and well
balanced, and it can handle complex geometries as well as rapidly
varying flows. Numerical results are presented for several test
examples of dam-break flows over deformable beds. A mesh
convergence study is performed for both methods; the overall model
provides new insight into these problems at minimal computational cost.
Abstract: This paper presents a comparative study of the Gauss-Seidel and Newton-Raphson (polar coordinates) methods for power flow analysis. The effectiveness of these methods is evaluated and tested on different IEEE bus test systems on the basis of the number of iterations, computational time, tolerance value, and convergence.
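The Gauss-Seidel idea, in which each unknown is updated in turn using the most recent values of the others, is illustrated below on a linear system (the power flow equations are nonlinear, so this is a generic sketch of the iteration, not the power flow solver itself):

```python
def gauss_seidel(A, b, x0, tol=1e-10, max_iter=1000):
    """Iteratively solve A x = b, using each updated component immediately.
    The iteration count and tolerance are exactly the kinds of criteria
    used to compare such methods."""
    n = len(b)
    x = list(x0)
    for _ in range(max_iter):
        max_delta = 0.0
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            new_xi = (b[i] - s) / A[i][i]
            max_delta = max(max_delta, abs(new_xi - x[i]))
            x[i] = new_xi
        if max_delta < tol:  # converged within tolerance
            break
    return x

# Diagonally dominant 2x2 system, for which Gauss-Seidel converges.
gauss_seidel([[4.0, 1.0], [1.0, 3.0]], [1.0, 2.0], [0.0, 0.0])
```

Newton-Raphson instead linearizes the full system at each step, trading more work per iteration for quadratic convergence, which is the trade-off such comparisons quantify.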