Abstract: Assessing several individuals intensively over time
yields intensive longitudinal data (ILD). Even though ILD provide
rich information, they also bring data analytic challenges. One
of these is the increased occurrence of missingness with increased
study length, possibly under non-ignorable missingness scenarios.
Multiple imputation (MI) handles missing data by creating several
imputed data sets, and pooling the estimation results across imputed
data sets to yield final estimates for inferential purposes. In this
article, we introduce dynr.mi(), a function in the R package,
Dynamic Modeling in R (dynr). The package dynr provides a suite
of fast and accessible functions for estimating and visualizing the
results from fitting linear and nonlinear dynamic systems models in
discrete as well as continuous time. By integrating the estimation
functions in dynr and the MI procedures available from the R
package, Multivariate Imputation by Chained Equations (MICE), the
dynr.mi() routine is designed to handle possibly non-ignorable
missingness in the dependent variables and/or covariates in a
user-specified dynamic systems model via MI, with convergence
diagnostic checks. We utilized dynr.mi() to examine, in the context
of a vector autoregressive model, the relationships among individuals'
ambulatory physiological measures and self-reported affect valence
and arousal. The results from MI were compared to those from
listwise deletion of entries with missingness in the covariates.
When we determined the number of iterations based on the
convergence diagnostics available from dynr.mi(), differences in
the statistical significance of the covariate parameters were observed
between the listwise deletion and MI approaches. These results
underscore the importance of considering diagnostic information in
the implementation of MI procedures.
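The pooling step described above follows Rubin's rules. As a rough illustration (in Python rather than R, and not the dynr.mi() implementation itself), a minimal sketch of pooling a single parameter across m imputed data sets might look like this:

```python
import numpy as np

def pool_rubin(estimates, variances):
    """Pool one parameter across m imputed data sets using Rubin's rules.

    estimates : length-m array of point estimates, one per imputed data set
    variances : length-m array of squared standard errors from each data set
    Returns the pooled estimate and its total standard error.
    """
    estimates = np.asarray(estimates, dtype=float)
    variances = np.asarray(variances, dtype=float)
    m = len(estimates)

    q_bar = estimates.mean()                 # pooled point estimate
    w_bar = variances.mean()                 # within-imputation variance
    b = estimates.var(ddof=1)                # between-imputation variance
    total_var = w_bar + (1.0 + 1.0 / m) * b  # Rubin's total variance
    return q_bar, np.sqrt(total_var)

# Example: pooling one coefficient estimated on m = 5 imputed data sets
est, se = pool_rubin([0.42, 0.39, 0.45, 0.41, 0.40],
                     [0.010, 0.011, 0.009, 0.012, 0.010])
print(est, se)
```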
Abstract: Optimization is an important tool for making decisions and for analysing physical systems. In mathematical terms, an optimization problem is the problem of finding the best solution from among the set of all feasible solutions. This paper discusses the Whale Optimization Algorithm (WOA) and its applications in different fields. The algorithm is tested in MATLAB because of its unique and powerful features. The benchmark functions used to evaluate the WOA are grouped as unimodal (F1-F7), multimodal (F8-F13), and fixed-dimension multimodal (F14-F23). Of these benchmark functions, we show the experimental results for F7, F11, and F19 for different numbers of iterations. The search space and objective space for each selected function are plotted, and finally the best solution, as well as the best value of the objective function found by the WOA, is presented. The results demonstrate that the WOA performs better than state-of-the-art meta-heuristic and conventional algorithms.
Abstract: This paper presents a comparative study of the Gauss-Seidel and Newton-Raphson (polar coordinates) methods for power flow analysis. The effectiveness of these methods is evaluated and tested on different IEEE bus test systems on the basis of the number of iterations, computational time, tolerance value, and convergence.
Abstract: This work addresses the automatic planning of paths
for Unmanned Aerial Vehicles (UAVs) through the application of the
Rapidly-exploring Random Tree Star-Smart (RRT*-Smart) algorithm.
RRT*-Smart samples positions of a navigation environment and
organizes them in a tree-type graph. The algorithm consists of
randomly expanding a tree from an initial position (root node) until
one of its branches reaches the final position of the path to be
planned. The algorithm guarantees planning of the shortest path as
the number of iterations tends to infinity. When a new node is
inserted into the tree, each neighbouring node of the new node is
reconnected to it if and only if the length of the path from the root
node to that neighbour through the new connection is shorter than the
current length of the path between those two nodes. RRT*-Smart uses
an intelligent sampling strategy to plan shorter routes in a smaller
number of iterations. This strategy is based on the creation of
samples/nodes near the convex vertices of the obstacles in the
navigation environment.
The planned paths are smoothed through the application of quintic
Pythagorean hodograph curves. The smoothing process converts a route
into a dynamically viable one based on the kinematic constraints of
the vehicle. This smoothing method models the hodograph components of
a curve with polynomials that obey the Pythagorean theorem. Its
advantage is that the resulting structure allows the curve length to
be computed exactly, without the need for quadrature techniques to
evaluate integrals.
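The rewiring rule described in this abstract can be sketched as follows; the Node class, the Euclidean cost, and the omission of collision checking and descendant cost updates are simplifying assumptions, not the paper's implementation:

```python
import math

class Node:
    def __init__(self, x, y, parent=None, cost=0.0):
        self.x, self.y = x, y
        self.parent = parent   # parent node in the tree
        self.cost = cost       # path length from the root to this node

def dist(a, b):
    return math.hypot(a.x - b.x, a.y - b.y)

def rewire(new_node, neighbours):
    """Reconnect each neighbour to new_node iff that shortens its path from the root."""
    for nb in neighbours:
        new_cost = new_node.cost + dist(new_node, nb)
        if new_cost < nb.cost:          # shorter path through the new node
            nb.parent = new_node
            nb.cost = new_cost
```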
Abstract: Over communication networks, images can easily be copied and distributed illegally, so copyright protection for authors and owners is necessary. Digital watermarking techniques therefore play an important role as a valid solution to ownership problems. Digital image watermarking techniques are used to hide watermarks in images to achieve copyright protection and prevent illegal copying. Watermarks need to be robust to attacks and to maintain data quality. In this paper we discuss two approaches to image watermarking: the first is based on Particle Swarm Optimization (PSO) and the second on a Genetic Algorithm (GA). The Discrete Wavelet Transform (DWT) is used with both approaches to transform the cover image for the embedding process. Both PSO and GA use the correlation coefficient to detect the high-energy coefficients of the original image in which to hide the watermark bits. Many experiments were conducted for the two approaches with different values of the PSO and GA parameters. In the experiments, the PSO approach obtained better results, with a PSNR of 53 and an MSE of 0.0039, whereas the GA approach obtained a PSNR of 50.5 and an MSE of 0.0048 when using a population size of 100, 150 iterations, and 3×3 blocks. The results indicate that a small block size can affect the quality of PSO/GA-based image watermarking because a small block size increases the search area of the watermarked image. Better PSO results were obtained when using a swarm size of 100.
Abstract: This paper presents a method for modelling and analysing end plate beam-to-column connections to obtain their quasi-static behaviour using non-linear dynamic explicit integration. In addition to its importance for studying the static behaviour of a structural member, the quasi-static behaviour is largely needed for comparison with the dynamic behaviour of such members in order to investigate the dynamic effect by proposing dynamic increase factors (DIFs). Bolted beam-to-column connections contain various contact surfaces at which an implicit procedure may have difficulty converging, resulting in a large number of iterations. In contrast, an explicit procedure can deal effectively with complex contacts without convergence problems. Hence, finite element modelling with ABAQUS/Explicit is used in this study to address the dynamic effect that may be produced by the explicit procedure. The effects of loading rate and mass scaling on the analysis time are also discussed. The results show that the explicit procedure is valuable for modelling end plate beam-to-column connections in terms of failure mode and load-displacement relationships. It is also concluded that the loading rate and mass scaling should be carefully selected to avoid dynamic effects in the solution.
Abstract: The Vehicle Routing Problem (VRP) is a complex combinatorial optimization problem, and it is quite difficult to find an optimal solution consisting of a set of vehicle routes whose total cost is minimum. Evolutionary and swarm intelligence (SI) algorithms play a vital role in solving such optimization problems. While SI algorithms perform their search, the diversity among the solutions they exploit is very important, both to avoid premature convergence and to obtain an appropriate balance between exploration and exploitation. It is therefore important to check how diverse the solutions are. In this paper, we measure the similarity between the solutions that the Artificial Bee Colony (ABC) algorithm exploits while optimizing the VRP. The similar solutions found are discarded at the end of each iteration, and only unique solutions are passed on to the next iteration. The bees of the discarded solutions become scouts and start searching for new solutions. This process is continued, and the results show that the solution is optimized in fewer iterations, but with the overhead of computing similarity in every iteration. Problem instances from the Solomon benchmark dataset have been used to evaluate the presented methodology.
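As an illustration of the discard-and-scout step described above, the following Python sketch uses an assumed edge-overlap (Jaccard) similarity measure, since the abstract does not specify how similarity is computed:

```python
def edge_set(route_plan):
    """Collect the set of directed edges used by a VRP solution (list of routes)."""
    edges = set()
    for route in route_plan:
        stops = [0] + route + [0]            # 0 denotes the depot
        edges.update(zip(stops, stops[1:]))
    return edges

def similarity(sol_a, sol_b):
    """Jaccard similarity of the edge sets of two solutions (an assumed measure)."""
    ea, eb = edge_set(sol_a), edge_set(sol_b)
    return len(ea & eb) / len(ea | eb)

def filter_similar(solutions, threshold, new_random_solution):
    """Keep only sufficiently distinct solutions; replace the rest by scouting."""
    kept = []
    for sol in solutions:
        if all(similarity(sol, other) < threshold for other in kept):
            kept.append(sol)                 # unique enough: pass to next iteration
        else:
            kept.append(new_random_solution())  # bee becomes a scout
    return kept
```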
Abstract: In this paper, the specific sound Transmission Loss
(TL) of the Laminated Composite Plate (LCP) with different material
properties in each layer is investigated. A numerical method based on
elastic plate theory is proposed to obtain the TL of the LCP. The
transfer matrix approach is newly presented for computational
efficiency in handling the dynamic stiffness matrices (D-matrices) of
the numerous layers of the LCP. Besides the numerical simulations for
calculating the TL of the LCP, an inverse method for the material
properties is presented for designing a laminated composite plate
analogous to a metallic plate with a specified TL. The results
demonstrate that the proposed computational algorithm achieves this
goal with high efficiency and a small number of iterations. This
method can be
effectively employed to design and develop tailor-made materials for
various applications.
Abstract: The conjugate gradient method has been used extensively
to solve large-scale unconstrained optimization problems because of
its favourable iteration count, memory requirements, CPU time, and
convergence properties. In this paper we derive a new class of
nonlinear conjugate gradient coefficients whose global convergence is
proved under an exact line search. The numerical results for our new
βK compare favourably with those of well-known formulas.
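To show where a conjugate gradient coefficient such as βK enters the iteration, here is a minimal nonlinear CG sketch in Python; the Fletcher-Reeves formula and the crude grid line search are placeholders, since the paper's new βK and its exact line search are not given in the abstract:

```python
import numpy as np

def conjugate_gradient(f, grad, x0, max_iter=200, tol=1e-6):
    """Nonlinear CG with an (assumed) Fletcher-Reeves beta standing in for beta_K."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                                   # initial search direction
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        alpha = line_search(f, x, d)         # stand-in for the exact line search
        x = x + alpha * d
        g_new = grad(x)
        beta = (g_new @ g_new) / (g @ g)     # Fletcher-Reeves; the paper's beta_K would replace this
        d = -g_new + beta * d
        g = g_new
    return x

def line_search(f, x, d, alphas=np.linspace(1e-4, 1.0, 200)):
    """Crude grid search over the step length (illustrative only)."""
    values = [f(x + a * d) for a in alphas]
    return alphas[int(np.argmin(values))]
```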
Abstract: The aim of the current work is to present a comparison among three popular optimization methods in the inverse elastostatics problem (IESP) of flaw detection within a solid. In more detail, the performance of a simulated annealing, a Hooke & Jeeves, and a sequential quadratic programming algorithm was studied in the test case of one circular flaw in a plate, solved by both the boundary element method (BEM) and the finite element method (FEM). The proposed optimization methods use a cost function that utilizes the displacements of the static response. The methods were ranked according to the number of iterations required to converge and their ability to locate the global optimum. Hence, a clear impression regarding the performance of the aforementioned algorithms in flaw identification problems was obtained. Furthermore, the coupling of BEM or FEM with these optimization methods was investigated in order to track differences in their performance.
Abstract: Given a large sparse signal, the goal is to reconstruct
the signal precisely and accurately from as few measurements as
possible. Although this seems possible in theory, the difficulty lies
in building an algorithm that achieves accurate and efficient
reconstruction. This paper proposes a new, provably effective method
for reconstructing sparse signals that combines the Least Support
Orthogonal Matching Pursuit (LS-OMP) method with the theory of
partially known support (PSK), yielding a new method called Partially
Known Least Support Orthogonal Matching Pursuit (PKLS-OMP).
The new methods rely on a greedy algorithm to compute the support,
whose cost depends on the number of iterations. To make the algorithm
faster, PKLS-OMP incorporates the idea of partially known support.
The method recovers the original signal efficiently, simply, and
accurately provided the sampling matrix satisfies the Restricted
Isometry Property (RIP).
Simulation results also show that it outperforms many algorithms,
especially for compressible signals.
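A minimal sketch of OMP-style greedy recovery seeded with a partially known support is given below; the selection rule is standard OMP, used here only to illustrate the idea, not the LS-OMP/PKLS-OMP algorithms themselves:

```python
import numpy as np

def omp_partial_support(A, y, k, known_support=()):
    """Greedy sparse recovery: start from a partially known support and add
    the column most correlated with the residual until k atoms are selected."""
    m, n = A.shape
    support = list(known_support)            # indices assumed known a priori
    x_hat = np.zeros(n)
    residual = y.copy()
    if support:                               # fit the known atoms first
        x_hat[support] = np.linalg.lstsq(A[:, support], y, rcond=None)[0]
        residual = y - A[:, support] @ x_hat[support]
    while len(support) < k:
        if np.linalg.norm(residual) < 1e-12:  # already explained the signal
            break
        idx = int(np.argmax(np.abs(A.T @ residual)))   # most correlated atom
        if idx not in support:
            support.append(idx)
        coeffs = np.linalg.lstsq(A[:, support], y, rcond=None)[0]
        x_hat = np.zeros(n)
        x_hat[support] = coeffs
        residual = y - A @ x_hat
    return x_hat
```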
Abstract: Optimal structural design plays a major role in reducing material usage, which in turn lowers the final cost of construction projects. Evolutionary approaches are found to be more successful techniques for solving size and shape structural optimization problems since they use a stochastic random search instead of a gradient search. A review of recent literature identified the optimization of weight as the main problem. A recent meta-heuristic algorithm, the Cuckoo Search (CS) algorithm, is used to optimize the total weight of truss structures. This paper uses a set of 10-bar and 25-bar trusses for testing purposes. The main objective of this work is to reduce the number of iterations, the weight, and the total computation time. In order to demonstrate the effectiveness of the present method, minimum-weight design of truss structures is performed and the results of the CS are compared with those of other algorithms.
Abstract: This paper presents a generalized form of the
mechanistic deconvolution technique (GMD) for modeling image sensors
applicable to various pan–tilt planes of view. The mechanistic
deconvolution technique (UMD) is modified with the given angles of a
pan–tilt plane of view to formulate constraint parameters and
characterize distortion effects, and thereby determine the corrected
image data. As a result, no experimental setup or calibration is
required. Due to the mechanistic nature of the sensor model, the
necessity for the sensor image plane to be orthogonal to its z-axis is
eliminated, and the dependency on image data is reduced. An experiment
was constructed to evaluate the accuracy of a model created by GMD and
its insensitivity to changes in sensor properties and in pan and tilt
angles. This was compared with a pre-calibrated model and a model
created by UMD using two sensors with different specifications. The
GMD model achieved similar accuracy with one-seventh the number of
iterations and attained a mean error lower by a factor of 2.4 when
compared to the pre-calibrated and UMD models, respectively. The model
has also shown itself to be robust and, in comparison with the
pre-calibrated and UMD models, improved the accuracy significantly.
Abstract: The objective of this paper is to design a pattern
classification model based on the back-propagation (BP) algorithm for
a decision support system. The standard BP model fully connects every
node in each layer, from the input layer to the output layer. It
therefore requires a lot of computing time and many training
iterations to reach good performance and an acceptable error rate when
generating patterns or training the network.
In contrast, the proposed model uses exclusive connections between
hidden layer nodes and output nodes. Its advantages are a smaller
number of iterations and better performance compared with the standard
back-propagation model. We simulated several classification data sets
with different settings of the network factors (e.g., number of hidden
layers and nodes, number of classes, and number of iterations). In
these simulations, we found that most cases were handled better by the
exclusive-connection BP network model than by the standard BP model.
We expect that this algorithm can be applied to user face
identification, data analysis, and mapping between environmental data
and information.
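One way to read "exclusive connections" is as a binary mask on the hidden-to-output weights; the even partition of hidden nodes across output nodes in the sketch below is an assumption, since the abstract does not specify the connection scheme:

```python
import numpy as np

def exclusive_mask(n_hidden, n_output):
    """Assign each hidden node to exactly one output node (an assumed scheme)."""
    mask = np.zeros((n_hidden, n_output))
    for h in range(n_hidden):
        mask[h, h % n_output] = 1.0          # hidden node h feeds only one output
    return mask

def forward(x, W_in, W_out, mask):
    """One forward pass: standard input-to-hidden layer, masked hidden-to-output layer."""
    hidden = 1.0 / (1.0 + np.exp(-(x @ W_in)))          # sigmoid hidden activations
    return 1.0 / (1.0 + np.exp(-(hidden @ (W_out * mask))))

rng = np.random.default_rng(0)
n_in, n_hidden, n_out = 4, 6, 3
W_in = rng.normal(size=(n_in, n_hidden))
W_out = rng.normal(size=(n_hidden, n_out))
mask = exclusive_mask(n_hidden, n_out)       # the same mask would also gate the
                                             # weight updates during back-propagation
print(forward(rng.normal(size=n_in), W_in, W_out, mask))
```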
Abstract: This paper proposes a re-modification of the minimum
moment approach to resource leveling, which is itself a modification
of the traditional method by Harris. The method is based on the
critical path method. The new approach differs from the existing
methods in the criterion used to select the activity to be shifted for
leveling the resource histogram. In the traditional method, the
improvement factor is computed first to select the activity for each
possible day of shifting. In the modified method, the maximum value of
the product of resource rate and free float is found first, and the
improvement factor is then calculated for the activity that needs to
be shifted. In the proposed method, the activity selected first for
shifting is the one with the largest resource rate. The process is
repeated for all remaining activities that can be shifted to obtain
the updated histogram. The proposed method significantly reduces the
number of iterations and is easier for manual computation.
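The two selection criteria contrasted above can be sketched as follows; the Activity fields and the sample values are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Activity:
    name: str
    resource_rate: float   # resources consumed per day
    free_float: float      # days the activity can shift without delaying successors

def select_modified(activities):
    """Modified minimum moment: largest product of resource rate and free float."""
    return max(activities, key=lambda a: a.resource_rate * a.free_float)

def select_proposed(activities):
    """Proposed re-modification: largest resource rate decides which activity shifts first."""
    return max(activities, key=lambda a: a.resource_rate)

acts = [Activity("A", 4, 2), Activity("B", 6, 1), Activity("C", 3, 5)]
print(select_modified(acts).name, select_proposed(acts).name)   # C (3*5=15) vs B (rate 6)
```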
Abstract: This paper presents the usefulness of a quasi-Newton
iteration procedure, the BHHH algorithm, for estimating the parameters
of the conditional variance equation. Maximizing the likelihood
function analytically using first and second derivatives is too
complex when the variance is time-varying. The advantage of the BHHH
algorithm over other optimization algorithms is that it requires no
third derivatives and has assured convergence. To simplify the
optimization procedure, the BHHH algorithm approximates the matrix of
second derivatives according to the information identity. However,
parameter estimation in symmetric and asymmetric GARCH(1,1) models
assuming normally distributed returns is not simple, i.e., it is
difficult to solve analytically. The maximum of the likelihood
function can be found by an iterative procedure that continues until
no further increase can be found. Because the solutions of the
numerical optimization are very sensitive to the initial values,
starting parameters for the GARCH(1,1) model are defined. The number
of iterations can be reduced by using starting values close to the
global maximum. The optimization procedure is illustrated in the
framework of modeling the daily volatility of the most liquid stocks
on the Croatian capital market: Podravka (food industry), Petrokemija
(fertilizer industry), and Ericsson Nikola Tesla (information and
communications industry).
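As an illustration of the BHHH idea in this setting, the sketch below evaluates a Gaussian GARCH(1,1) log-likelihood and takes one BHHH step in which the Hessian is replaced by the outer product of the per-observation scores (computed here by simple numerical differentiation); this is an assumed illustration, not the paper's implementation:

```python
import numpy as np

def garch_loglik_terms(params, returns):
    """Per-observation Gaussian log-likelihood of a GARCH(1,1):
    sigma2_t = omega + alpha * r_{t-1}^2 + beta * sigma2_{t-1}."""
    omega, alpha, beta = params
    returns = np.asarray(returns, dtype=float)
    sigma2 = np.empty_like(returns)
    sigma2[0] = returns.var()                # a common initialisation choice
    for t in range(1, len(returns)):
        sigma2[t] = omega + alpha * returns[t - 1] ** 2 + beta * sigma2[t - 1]
    return -0.5 * (np.log(2 * np.pi) + np.log(sigma2) + returns ** 2 / sigma2)

def bhhh_step(params, returns, step=1e-5, lam=1.0):
    """One BHHH update: Hessian approximated by the outer product of the scores."""
    params = np.asarray(params, dtype=float)
    base = garch_loglik_terms(params, returns)
    scores = np.empty((len(returns), len(params)))
    for j in range(len(params)):             # numerical per-observation scores
        bumped = params.copy()
        bumped[j] += step
        scores[:, j] = (garch_loglik_terms(bumped, returns) - base) / step
    g = scores.sum(axis=0)                   # gradient of the total log-likelihood
    B = scores.T @ scores                    # information-identity approximation
    return params + lam * np.linalg.solve(B, g)   # ascent step toward the maximum
```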
Abstract: An edge based local search algorithm, called ELS, is proposed for the maximum clique problem (MCP), a well-known combinatorial optimization problem. ELS is a two-phase local search method that effectively finds near-optimal solutions for the MCP. A vertex parameter called 'support', defined in ELS, greatly reduces the number of random selections among vertices, as well as the number of iterations and the running time. Computational results on BHOSLIB and DIMACS benchmark graphs indicate that ELS is capable of achieving state-of-the-art performance for the maximum clique problem with reasonable average running times.
Abstract: The L-system is a tool commonly used for modeling and simulating the growth of fractal plants. The aim of this paper is to join problems of computational geometry with fractal geometry by using the L-system technique to generate fractal plants in 3D. An L-system constructs the fractal structure by applying rewriting rules sequentially, a technique that relies on a recursive process with a large number of iterations to obtain different shapes of 3D fractal plants. Here, instead, the rewriting is repeated a specific, small number of iterations, up to three. The vertices generated in the last stage of the L-system rewriting process are used as input to a triangulation algorithm that constructs a triangulated shape from these vertices. The resulting shapes can be used as covers for architectural objects and in different computer graphics fields. The paper presents a gallery of triangulated forms whose application in architecture offers an alternative to domes and other traditional types of roofs.
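A minimal L-system rewriting sketch, with an illustrative bracketed rule rather than the paper's actual production rules, shows how a fixed, small number of iterations is applied:

```python
def rewrite(axiom, rules, iterations=3):
    """Apply L-system rewriting rules sequentially for a fixed number of iterations."""
    s = axiom
    for _ in range(iterations):
        s = "".join(rules.get(ch, ch) for ch in s)   # replace each symbol that has a rule
    return s

# Illustrative bracketed L-system (not the paper's rules): F = step forward,
# +/- = turn, [ ] = push/pop the turtle state used to branch the plant.
rules = {"F": "F[+F]F[-F]F"}
print(rewrite("F", rules, iterations=3))
```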
Abstract: A complex-valued neural network is a neural network whose inputs, weights, thresholds, and/or activation functions are complex-valued. Complex-valued neural networks have been widening the scope of applications not only in electronics and informatics, but also in social systems. One of the most important applications of complex-valued neural networks is in image and vision processing. In neural networks, radial basis functions are often used for interpolation in multidimensional space. A radial basis function is a function with a built-in distance criterion with respect to a centre. Radial basis functions have often been applied in neural networks, where they may be used as a replacement for the sigmoid hidden-layer transfer characteristic in multi-layer perceptrons. This paper presents exhaustive results of using RBF units in a complex-valued neural network model that uses the back-propagation algorithm (called 'Complex-BP') for learning. Our experimental results demonstrate the effectiveness of radial basis functions in a complex-valued neural network for image recognition compared with a real-valued neural network. We also report observations on the effects of the learning rate, the range of the randomly selected initial weights, the error function used, and the number of iterations required for the error to converge in a neural network model with RBF units. Some inherent properties of this complex back-propagation algorithm are also studied and discussed.
Abstract: The design of a modern aircraft is based on three pillars: theoretical results, experimental tests, and computational simulations.
As a result, Computational Fluid Dynamics (CFD) solvers are widely
used in the aeronautical field. These solvers require the correct
selection of many parameters in order to obtain successful results.
Moreover, the computational time spent on a simulation depends on the
proper choice of these parameters.
In this paper we create an expert system capable of making an
accurate prediction of the number of iterations and the time required
for the convergence of a computational fluid dynamics (CFD) solver.
An artificial neural network (ANN) has been used to design the expert
system. It is shown that the developed expert system is capable of
accurately predicting the number of iterations and the time required
for the convergence of a CFD solver.