Abstract: The purpose of this paper is to analyze cooperative learning behavior patterns based on data on students' movement. The study first reviewed cooperative learning theory and its research status, and briefly introduced the k-means clustering algorithm. It then used the clustering algorithm and mathematical statistics to analyze the activity rhythms of individual students and groups in different functional areas, according to movement data provided by 10 first-year graduate students. It also focused on analyzing students' behavior in the learning area and explored patterns of cooperative learning behavior. The results showed that the cooperative learning behavior analysis method based on movement data proposed in this paper is feasible. From the results of the data analysis, the behavioral characteristics of students and their cooperative learning behavior patterns could be identified.
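The clustering step this abstract mentions can be sketched as follows; the sample coordinates, the two-cluster setting, and the area labels are illustrative assumptions, not the paper's data:

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Minimal k-means: assign each point to its nearest centroid,
    then move each centroid to the mean of its assigned points."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda i: (p[0] - centroids[i][0]) ** 2
                                + (p[1] - centroids[i][1]) ** 2)
            clusters[j].append(p)
        centroids = [
            (sum(p[0] for p in c) / len(c), sum(p[1] for p in c) / len(c))
            if c else centroids[i]
            for i, c in enumerate(clusters)
        ]
    return centroids, clusters

# Two well-separated groups of (x, y) positions, e.g. a "learning area"
# and a "rest area" (hypothetical coordinates)
pts = [(0.1, 0.2), (0.0, 0.1), (0.2, 0.0), (5.0, 5.1), (5.2, 4.9), (4.9, 5.0)]
cents, clus = kmeans(pts, 2)
```

On such well-separated data the two recovered centroids sit at the group means, which is the sense in which clustering exposes distinct activity areas.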
Abstract: In electrical discharge machining (EDM), a complete and clear theory has not yet been established. The developed theories (physical models) yield results far from reality due to the complexity of the physics. It is difficult to select proper parameter settings in order to achieve better EDM performance. However, modelling can solve this critical problem concerning the parameter settings. Therefore, the purpose of the present work is to develop mathematical models to predict the performance characteristics of EDM on Ti-5Al-2.5Sn titanium alloy. Response surface method (RSM) and artificial neural network (ANN) approaches are employed to develop the mathematical models. The developed models are verified through analysis of variance (ANOVA). The ANN models are trained, tested, and validated utilizing a set of data. It is found that the developed ANN and RSM models can predict the performance of EDM effectively. Thus, the models provide a precise tool for making the EDM process more cost-effective and efficient.
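As a rough illustration of the RSM side, a response surface is typically a low-order polynomial fitted by least squares. The single-factor quadratic below, with synthetic data standing in for measured EDM responses, is an assumed minimal sketch, not the paper's model:

```python
def fit_quadratic(xs, ys):
    """Least-squares fit of y ≈ a + b*x + c*x^2 via the normal equations."""
    X = [[1.0, x, x * x] for x in xs]          # design matrix columns: 1, x, x^2
    XtX = [[sum(r[i] * r[j] for r in X) for j in range(3)] for i in range(3)]
    Xty = [sum(r[i] * y for r, y in zip(X, ys)) for i in range(3)]
    # Gauss-Jordan elimination with partial pivoting on the augmented system
    A = [row[:] + [b] for row, b in zip(XtX, Xty)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(3):
            if r != col:
                f = A[r][col] / A[col][col]
                A[r] = [u - f * v for u, v in zip(A[r], A[col])]
    return [A[i][3] / A[i][i] for i in range(3)]

xs = [0, 1, 2, 3, 4]
ys = [1 + 2 * x + 3 * x * x for x in xs]   # synthetic response data (assumed)
a, b, c = fit_quadratic(xs, ys)
```

With exact quadratic data the fit recovers the generating coefficients; in practice the same normal-equations machinery is applied to noisy measured responses, usually with several factors and interaction terms.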
Abstract: Regularity has often been present in the form of regular
polyhedra or tessellations; classical examples are the nine regular
polyhedra consisting of the five Platonic solids (regular convex
polyhedra) and the four Kepler-Poinsot polyhedra. These polytopes
can be seen as regular maps. Maps are cellular embeddings of
graphs (with possibly multiple edges, loops or dangling edges) on
compact connected (closed) surfaces with or without boundary. The
n-dimensional abstract polytopes, particularly the regular ones, have
gained popularity over recent years. The main focus of research
has been their symmetries and regularity. Planification of polyhedra
helps with their spatial construction, yet it destroys their symmetries. To our
knowledge there is no “planification” for n-dimensional polytopes.
However we show that it is possible to make a “surfacification”
of the n-dimensional polytope, that is, it is possible to construct a
restrictedly-marked map representation of the abstract polytope on
some surface that describes its combinatorial structures as well as
all of its symmetries. We also show that there are infinitely many
ways to do this; yet one is more natural than the others, describing
reflections on the sides ((n−1)-faces) of n-simplices by reflections
on the sides of n-polygons. We illustrate this construction with the
4-tetrahedron (a regular 4-polytope with automorphism group of size
120) and the 4-cube (a regular 4-polytope with automorphism group
of size 384).
Abstract: This paper is devoted to the numerical solution of
large-scale linear ill-posed systems. A multilevel regularization
method is proposed. This method is based on a synthesis of
the Arnoldi-Tikhonov regularization technique and the multilevel
technique. We show that if the Arnoldi-Tikhonov method is
a regularization method, then the multilevel method is also a
regularization one. Numerical experiments presented in this paper
illustrate the effectiveness of the proposed method.
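The Tikhonov building block of the method above can be sketched on a tiny dense system; the full Arnoldi-Tikhonov approach first projects the large problem onto a Krylov subspace, which this illustrative fragment omits, and the matrix and regularization parameter here are assumed values:

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for small dense systems."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def tikhonov(A, b, lam):
    """Tikhonov-regularized least squares: minimize ||Ax - b||^2 + lam^2 ||x||^2
    by solving the normal equations (A^T A + lam^2 I) x = A^T b."""
    m, n = len(A), len(A[0])
    AtA = [[sum(A[r][i] * A[r][j] for r in range(m)) + (lam * lam if i == j else 0.0)
            for j in range(n)] for i in range(n)]
    Atb = [sum(A[r][i] * b[r] for r in range(m)) for i in range(n)]
    return solve(AtA, Atb)

# Nearly rank-deficient 2x2 system: regularization keeps the solution bounded
A = [[1.0, 1.0], [1.0, 1.0001]]
b = [2.0, 2.0001]
x = tikhonov(A, b, 0.1)
```

The regularized solution stays close to the well-posed answer (1, 1) instead of being amplified by the near-singularity, which is the behavior the multilevel scheme inherits from its Arnoldi-Tikhonov component.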
Abstract: The Wiener and Lévy driven processes are well-known
self-standing Gaussian-Markov processes for fitting the non-linear
dynamical Vasicek model. In this paper, a coincidental Gaussian
density stationarity condition and autocorrelation function of the
two driven processes were established. This led to the conflation
of the Wiener and Lévy processes in order to investigate the
efficiency of estimates incorporated into the one-dimensional
Vasicek model, which was estimated via the Maximum Likelihood (ML)
technique. The conditional laws of drift, diffusion and stationarity
were ascertained for the individual Wiener and Lévy processes, as
well as for the combination of the two processes, in fixed-effect
and autoregressive-like Vasicek models applied to a financial
series: the Naira-CFA Franc exchange rate. In addition, the model
performance error of the combined driven process was small compared
to that of the self-standing Wiener and Lévy driven processes.
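As background, the Wiener-driven Vasicek model dr = a(b − r)dt + σ dW discretizes to an AR(1) process, whose conditional ML estimates reduce to least squares on the lagged series. The sketch below uses assumed parameter values and a simulated path rather than the Naira-CFA Franc series, and it illustrates only this standard Wiener case, not the paper's Lévy conflation:

```python
import random, math

def simulate_vasicek(a, b, sigma, r0, dt, n, seed=1):
    """Euler discretization of the Wiener-driven Vasicek model
    dr = a(b - r) dt + sigma dW."""
    rng = random.Random(seed)
    r = [r0]
    for _ in range(n):
        r.append(r[-1] + a * (b - r[-1]) * dt + sigma * math.sqrt(dt) * rng.gauss(0, 1))
    return r

def fit_ar1(r):
    """Conditional ML for the AR(1) form r[t+1] = c + phi*r[t] + eps,
    which is ordinary least squares on the lagged series."""
    x, y = r[:-1], r[1:]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    phi = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
           / sum((xi - mx) ** 2 for xi in x))
    c = my - phi * mx
    return c, phi

# Assumed parameters: mean-reversion 0.5, long-run mean 0.05, daily steps
r = simulate_vasicek(a=0.5, b=0.05, sigma=0.01, r0=0.03, dt=1/252, n=20000)
c, phi = fit_ar1(r)
a_hat = (1 - phi) * 252   # recover mean-reversion speed from phi ≈ 1 - a*dt
b_hat = c / (1 - phi)     # recover the long-run mean
```

The long-run mean is recovered accurately; the mean-reversion speed is estimated much more noisily, a well-known feature of ML estimation for mean-reverting models on finite samples.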
Abstract: Associations between life events and various forms of cancer have been identified. The purpose of a recent random-effects meta-analysis was to identify studies that examined the association between adverse events associated with changes to financial status, including decreased income, and breast cancer risk. The same association was studied in four separate studies which displayed traits that were not consistent between studies, such as the study design, location, and time frame. It was of interest to pool information from various studies to help identify characteristics that differentiated study results. Two random-effects Bayesian meta-analysis models are proposed to combine the reported estimates of the described studies. The proposed models allow major sources of variation to be taken into account, including study-level characteristics and between-study and within-study variance, and illustrate the ease with which uncertainty can be incorporated using a hierarchical Bayesian modelling approach.
Abstract: In this work, we present an efficient approach for
solving variable-order time-fractional partial differential
equations, based on Legendre and Laguerre polynomials. First, we
introduce the pseudo-operational matrices of integer-order and
variable-fractional-order integration using properties of the
Riemann-Liouville fractional integral. These matrices are then
applied, together with the collocation method and Legendre-Laguerre
functions, to solve variable-order time-fractional partial
differential equations. An estimate of the error is also presented.
Finally, we investigate numerical examples arising in physics to
demonstrate the accuracy of the present method. Comparison of the
results obtained by the present method with the exact solution and
with other methods reveals that the method is very effective.
Abstract: Due to many applications and problems in the fields of plasma physics, geophysics, and many other topics, the interaction between the strain field and the magnetic field has to be considered. Adomian introduced the decomposition method for solving linear and nonlinear functional equations. This method leads to accurate, computable, approximately convergent solutions of linear and nonlinear partial and ordinary differential equations, even equations with variable coefficients. This paper deals with a mathematical model of generalized thermoelasticity of a half-space conducting medium. A magnetic field of constant intensity acting normal to the bounding plane is assumed. Adomian’s decomposition method has been used to solve the model when the bounding plane is taken to be traction free and thermally loaded by harmonic heating. The numerical results for the temperature increment, the stress, the strain, the displacement, and the induced magnetic and electric fields have been represented in figures. The magnetic field, the relaxation time, and the angular thermal load have significant effects on all the studied fields.
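Adomian's decomposition method builds the solution as a series whose terms are obtained by repeated integration of the previous term. The minimal sketch below applies the idea to the linear test problem u′ = u, u(0) = 1, where the scheme reduces to successive integrations and reproduces the Taylor series of eᵗ; it is illustrative only and far simpler than the thermoelastic model treated in the paper:

```python
def integrate_poly(c):
    """Antiderivative from 0 to t of a polynomial given by coefficients c[k] of t^k."""
    return [0.0] + [ck / (k + 1) for k, ck in enumerate(c)]

def adomian_series(terms):
    """Adomian decomposition for u' = u, u(0) = 1:
    u0 = 1 and u_{k+1}(t) = ∫_0^t u_k(s) ds; return the partial sum."""
    u = [1.0]          # u0 as the constant polynomial 1
    total = [1.0]
    for _ in range(terms - 1):
        u = integrate_poly(u)
        total += [0.0] * (len(u) - len(total))
        total = [a + b for a, b in zip(total, u)]
    return total

def poly_eval(c, t):
    return sum(ck * t ** k for k, ck in enumerate(c))

approx = poly_eval(adomian_series(10), 1.0)   # 10-term partial sum at t = 1
err = abs(approx - 2.718281828459045)         # distance from e
```

Ten decomposition terms already match e to about seven digits, illustrating the rapid (here factorial) convergence the abstract alludes to for well-behaved problems.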
Abstract: Electricity markets throughout the world have
undergone substantial changes. Accurate, reliable, clear and
comprehensible modeling and forecasting of different variables
(loads and prices in the first instance) have achieved increasing
importance. In this paper, we describe the actual state of the
art focusing on reg-SARMA methods, which have proven to be
flexible enough to accommodate the electricity price/load behavior
satisfactorily. More specifically, we will discuss: 1) The dichotomy
between point and interval forecasts; 2) The difficult choice between
stochastic predictors (e.g. climatic variation) and deterministic
predictors (e.g. calendar variables); 3) The choice between modelling
a single aggregate time series and creating separate and potentially
different models for sub-series. The noteworthy point we would like
to bring out is that prices and loads require different approaches
that appear irreconcilable, even though they must be reconciled in
the interests and activities of energy companies.
Abstract: Networks are often presented as containing a “core”
and a “periphery.” The existence of a core suggests that some
vertices are central and form the skeleton of the network, to which
all other vertices are connected. An alternative view of graphs is
through communities. Multiple measures have been proposed for
dense communities in graphs, the most classical being k-cliques,
k-cores, and k-plexes, all presenting groups of tightly connected
vertices. We here show that the edge number thresholds for such
communities to emerge and for their percolation into a single dense
connectivity component are very close, in all networks studied. These
percolating cliques produce a natural core and periphery structure.
This result is generic and is tested in configuration models and in
real-world networks. This is also true for k-cores and k-plexes. Thus,
the emergence of this connectedness among communities leading to
a core is not dependent on some specific mechanism but a direct
result of the natural percolation of dense communities.
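A k-core of the kind discussed above can be computed by the standard peeling procedure: repeatedly delete vertices of degree below k until none remain. A minimal sketch on a toy graph (the edge list is assumed for illustration):

```python
from collections import defaultdict

def k_core(edges, k):
    """Return the vertex set of the k-core: repeatedly remove vertices
    whose remaining degree is below k."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    changed = True
    while changed:
        changed = False
        for v in list(adj):
            if v in adj and len(adj[v]) < k:
                for u in adj[v]:
                    if u in adj:
                        adj[u].discard(v)
                del adj[v]
                changed = True
    return set(adj)

# A 4-clique (every vertex has degree 3) with a pendant path attached
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3), (3, 4), (4, 5)]
core = k_core(edges, 3)
```

The pendant path peels away and only the clique survives as the 3-core, the kind of tightly connected "core" whose percolation the abstract studies.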
Abstract: Urban flooding resulting from a sudden release of
water due to dam-break or excessive rainfall is a serious
environmental hazard that causes loss of human life and large
economic losses. Anticipating floods before they occur could
minimize human and economic losses through the implementation
of appropriate protection, provision, and rescue plans. This work
reports on the numerical modelling of flash flood propagation
in urban areas after an excessive rainfall event or dam-break.
A two-dimensional (2D) depth-averaged shallow water model is
used with a refined unstructured grid of triangles for representing
the urban area topography. The 2D shallow water equations are
solved using a second-order well-balanced discontinuous Galerkin
scheme. A theoretical test case and three flood events are described
to demonstrate the potential benefits of the scheme: (i) wetting and
drying in a parabolic basin; (ii) flash flood over a physical model of
the urbanized Toce River valley in Italy; (iii) wave propagation on
the Reyran river valley in consequence of the Malpasset dam-break
in 1959 (France); and (iv) dam-break flood in October 1982 at the
town of Sumacarcel (Spain). The capability of the scheme is also
verified against alternative models. Computational results compare
well with recorded data and show that the scheme is at least as
efficient as comparable second-order finite volume schemes, with
notable efficiency speedup due to parallelization.
Abstract: In this paper, we deal with the optimal I/O point location in an automated parking system. In this system, the S/R (storage and retrieval) machine travels independently in the vertical and horizontal directions. Based on the characteristics of the parking system and the basic principle of an AS/RS (Automated Storage and Retrieval System), we obtain a continuous model in units of time. For the single command cycle under the randomized storage policy, we calculate the probability density function of the system travel time and thus develop the travel time model, and we confirm that the travel time model performs well by comparison with the discrete case. Finally, we establish the optimal model by minimizing the expected travel time, and it is shown that the optimal location of the I/O point is at the middle of the left-hand side, above the corner.
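The shape of this result can be checked with a quick Monte Carlo sketch: when the machine moves independently and simultaneously in both directions, the single-command travel time to a uniformly random slot is the maximum of the two coordinate travel times (Chebyshev metric). The rack dimensions and candidate I/O positions below are illustrative assumptions, not the paper's model:

```python
import random

def expected_travel(tx, ty, io=(0.0, 0.0), n=200000, seed=2):
    """Monte Carlo estimate of the expected single-command travel time to a
    uniformly random slot in a rack of horizontal extent tx and vertical
    extent ty (both expressed in travel-time units), for an I/O point io."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x, y = rng.uniform(0, tx), rng.uniform(0, ty)
        total += max(abs(x - io[0]), abs(y - io[1]))   # simultaneous axes
    return total / n

corner = expected_travel(1.0, 1.0, io=(0.0, 0.0))      # I/O at a corner
mid_left = expected_travel(1.0, 1.0, io=(0.0, 0.5))    # I/O at mid-height of the left side
```

On a unit-square rack the corner placement gives an expected travel time of about 2/3, while the mid-height left-side placement gives roughly 0.54, consistent with a mid-side location beating the corner.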
Abstract: In the present work, we consider one category of curves
denoted by L(p, k, r, n). These curves are continuous arcs which are
trajectories of roots of the trinomial equation z^n = αz^k + (1 − α),
where z is a complex number, n and k are two integers such that
1 ≤ k ≤ n − 1 and α is a real parameter greater than 1. Denoting
by L the union of all trinomial curves L(p, k, r, n) and using the
box counting dimension as fractal dimension, we will prove that the
dimension of L is equal to 3/2.
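The box-counting dimension used above can be estimated numerically by counting occupied grid cells at two scales. The sketch below applies the estimator to a densely sampled smooth arc (a quarter circle, an assumed stand-in for a trinomial curve), for which the estimate should be close to 1:

```python
import math

def box_count(points, eps):
    """Number of eps-by-eps grid boxes needed to cover a point set."""
    return len({(math.floor(x / eps), math.floor(y / eps)) for x, y in points})

def box_dimension(points, eps1, eps2):
    """Two-scale estimate of the box-counting dimension:
    dim ≈ log(N(eps1)/N(eps2)) / log(eps2/eps1)."""
    n1, n2 = box_count(points, eps1), box_count(points, eps2)
    return math.log(n1 / n2) / math.log(eps2 / eps1)

# Densely sampled quarter circle: a smooth arc has box dimension 1
ts = [i * (math.pi / 2) / 100000 for i in range(100001)]
pts = [(math.cos(t), math.sin(t)) for t in ts]
d = box_dimension(pts, 0.001, 0.01)
```

A set like L, which is strictly "thicker" than a single rectifiable arc, yields an estimate between 1 and 2; the paper's claim is that the exact value is 3/2.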
Abstract: The back propagation algorithm (BP) is a widely used
technique in artificial neural networks and has served as a tool
for solving time series problems; improving it involves decreasing
training time, reducing the tendency to fall into local minima, and
reducing the sensitivity to the initial weights and bias. This paper
proposes an improvement of the BP technique called the IM-COH
algorithm (IM-COH). Combining the IM-COH algorithm with the cuckoo
search algorithm (CS) yields the cuckoo search improved control
output hidden layer algorithm (CS-IM-COH). This new algorithm is
less sensitive to the initial weights and bias than the original
BP algorithm. In this research, the CS-IM-COH algorithm is compared
with the original BP, the IM-COH, and the original BP with CS
(CS-BP). Furthermore, the selected benchmarks, four time series
samples, are shown in this research for illustration. The research
shows that the CS-IM-COH algorithm gives the best forecasting
results on the selected samples.
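As a rough illustration of the cuckoo search ingredient, the sketch below minimizes a sphere function with heavy-tailed proposal steps (a Cauchy draw standing in for a Lévy flight) and abandonment of the worst nests; all parameter values are assumed, and the paper's IM-COH coupling to network weights is not reproduced:

```python
import random, math

def cuckoo_search(f, dim, n=15, pa=0.25, iters=500, seed=3):
    """Minimal cuckoo-search sketch: propose heavy-tailed steps around the
    current best nest, keep a proposal if it improves some nest, and abandon
    the worst pa-fraction of nests each generation (fresh random restarts)."""
    rng = random.Random(seed)
    lo, hi = -5.0, 5.0
    nests = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    fit = [f(x) for x in nests]
    for _ in range(iters):
        b = min(range(n), key=lambda i: fit[i])
        for i in range(n):
            # Cauchy step (heavy tails, as a simple Lévy-flight stand-in)
            trial = [v + 0.1 * math.tan(math.pi * (rng.random() - 0.5))
                     for v in nests[b]]
            ft = f(trial)
            if ft < fit[i]:
                nests[i], fit[i] = trial, ft
        # Abandonment: re-seed the worst pa-fraction of nests
        for i in sorted(range(n), key=lambda i: fit[i], reverse=True)[:int(pa * n)]:
            nests[i] = [rng.uniform(lo, hi) for _ in range(dim)]
            fit[i] = f(nests[i])
    b = min(range(n), key=lambda i: fit[i])
    return nests[b]

best = cuckoo_search(lambda x: sum(v * v for v in x), dim=3)
```

In the paper's setting the objective would be the network's training error as a function of the initial weights and bias, which is how CS reduces BP's sensitivity to initialization.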
Abstract: Networks can be utilized to represent project planning problems, using nodes for activities and arcs to indicate precedence relationships between them. For fixed activity durations, a simple algorithm calculates the amount of time required to complete a project and identifies the activities that comprise the critical path. The Program Evaluation and Review Technique (PERT) generalizes the above model by incorporating uncertainty, allowing activity durations to be random variables, but nevertheless produces a relatively crude solution in planning problems. In this paper, based on the findings of the relevant literature, which strongly suggest that a Beta distribution can be employed to model earthmoving activities, we utilize Monte Carlo simulation to estimate the project completion time distribution and measure the influence of skewness, an element inherent in activities of modern technical projects. We also extract the activity criticality index, with the ultimate goal of producing more accurate planning estimations.
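The simulation described above can be sketched on a hypothetical four-activity network: each duration is drawn from a Beta distribution scaled to its [min, max] range, a forward pass gives one realization of the completion time, and repeating yields the completion-time distribution and a criticality estimate. The network, shape parameters, and duration ranges below are invented for illustration:

```python
import random

# Toy network: activity -> (predecessors, Beta shape a, shape b, min, max)
acts = {
    "A": ([], 2, 5, 2.0, 6.0),
    "B": (["A"], 2, 2, 1.0, 4.0),
    "C": (["A"], 5, 2, 2.0, 8.0),       # right-shifted (skewed) long activity
    "D": (["B", "C"], 2, 2, 1.0, 3.0),
}

def sample_completion(rng):
    """One Monte Carlo realization: a forward pass in topological order
    with Beta-distributed activity durations scaled to [lo, hi]."""
    finish = {}
    for name in ("A", "B", "C", "D"):
        preds, a, b, lo, hi = acts[name]
        d = lo + (hi - lo) * rng.betavariate(a, b)
        finish[name] = max((finish[p] for p in preds), default=0.0) + d
    return finish

rng = random.Random(4)
n = 20000
times, crit_b = [], 0
for _ in range(n):
    fin = sample_completion(rng)
    times.append(fin["D"])
    if fin["B"] > fin["C"]:        # B, rather than C, lies on the critical path
        crit_b += 1
mean_time = sum(times) / n
criticality_B = crit_b / n
```

The empirical distribution of `times` is the completion-time estimate, and `criticality_B` is the criticality index of activity B; here the long, skewed activity C dominates the critical path in the vast majority of runs.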
Abstract: It can frequently be observed that data arising in our environment have a hierarchical or nested structure. Multilevel modelling is a modern approach to handling this kind of data. When multilevel modelling is combined with a binary response, the estimation methods become complex in nature, and the usual techniques are derived from the quasi-likelihood method. The estimation methods compared in this study are marginal quasi-likelihood (order 1 & order 2) (MQL1, MQL2) and penalized quasi-likelihood (order 1 & order 2) (PQL1, PQL2). A statistical model is of no use if it does not reflect the given dataset. Therefore, checking the adequacy of the fitted model through a goodness-of-fit (GOF) test is an essential stage in any modelling procedure. However, prior to usage, it is also equally important to confirm that the GOF test performs well and is suitable for the given model. This study assesses the suitability of the GOF test developed for binary response multilevel models with respect to the method used in model estimation. An extensive set of simulations was conducted using MLwiN (v 2.19) with varying numbers of clusters, cluster sizes and intra-cluster correlations. The test maintained the desirable Type-I error for models estimated using PQL2, and it failed for almost all the combinations of MQL. The power of the test was adequate for most of the combinations in all estimation methods except MQL1. Moreover, models were fitted using the four methods to a real-life dataset and the performance of the test was compared for each model.
Abstract: The incubation period is defined as the time from infection with a microorganism to the development of symptoms. In this research, two disease models, one with an incubation period and another without, were studied. The study involves the use of a mathematical model with a single incubation period. Tests for the existence and stability of the disease-free and endemic equilibrium states of both models were carried out. The fourth-order Runge-Kutta method was used to solve both models numerically. Finally, a computer program in MATLAB was developed to run the numerical experiments. From the results, we are able to show that the endemic equilibrium state of the model with incubation period is locally asymptotically stable, whereas the endemic equilibrium state of the model without incubation period is unstable under certain conditions on the given model parameters. It was also established that the disease-free equilibrium states of the models with and without incubation period are locally asymptotically stable. Furthermore, results from numerical experiments using empirical data obtained from the Nigeria Centre for Disease Control (NCDC) showed that the overall population of infected people for the model with incubation period is higher than that without incubation period. We also established from the results obtained that as the transmission rate from the susceptible to the infected population increases, the peak values of the infected population for the model with incubation period decrease and are always less than those for the model without incubation period.
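For context, a textbook formulation contrasts an SIR model (no incubation) with an SEIR model (an exposed compartment with mean incubation period 1/σ), both integrated with classical RK4. The rates below are assumed illustrative values, and this generic pair does not reproduce the paper's specific model or its NCDC-fitted results:

```python
def rk4(f, y, t, h, steps):
    """Classical fourth-order Runge-Kutta integrator for a system y' = f(t, y)."""
    out = [y[:]]
    for _ in range(steps):
        k1 = f(t, y)
        k2 = f(t + h/2, [yi + h/2 * ki for yi, ki in zip(y, k1)])
        k3 = f(t + h/2, [yi + h/2 * ki for yi, ki in zip(y, k2)])
        k4 = f(t + h, [yi + h * ki for yi, ki in zip(y, k3)])
        y = [yi + h/6 * (a + 2*b + 2*c + d)
             for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]
        t += h
        out.append(y[:])
    return out

beta, gamma, sigma = 0.5, 0.1, 0.2   # transmission, recovery, incubation rates (assumed)

def sir(t, y):                        # no incubation period
    s, i, r = y
    return [-beta*s*i, beta*s*i - gamma*i, gamma*i]

def seir(t, y):                       # with an exposed (incubating) compartment
    s, e, i, r = y
    return [-beta*s*i, beta*s*i - sigma*e, sigma*e - gamma*i, gamma*i]

sir_path = rk4(sir, [0.99, 0.01, 0.0], 0.0, 0.1, 2000)
seir_path = rk4(seir, [0.99, 0.0, 0.01, 0.0], 0.0, 0.1, 2000)
peak_sir = max(y[1] for y in sir_path)
peak_seir = max(y[2] for y in seir_path)
```

In this generic setting the incubation delay lowers and postpones the peak prevalence; comparisons of total infected populations, as in the paper, depend on the specific model structure and fitted parameters.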
Abstract: The greatest common divisor (GCD) attack is an attack that relies on the polynomial structure of the cryptosystem. This attack requires two plaintexts that differ by a fixed amount and are encrypted under the same modulus. This paper reports the security response of the Lucas Based El-Gamal Cryptosystem in the Elliptic Curve group over a finite field under the GCD attack. The Lucas Based El-Gamal Cryptosystem in the Elliptic Curve group over a finite field was exposed mathematically to the GCD attack using GCD and Dickson polynomials. The result shows that the cryptanalyst is able to recover the plaintext without decryption by using the GCD attack. Thus, the study concludes that it is highly perilous when two plaintexts differ by a fixed amount in the same Elliptic Curve group over a finite field.
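The principle behind the GCD attack can be illustrated on its simplest classical analogue, squaring modulo n: for related messages m and m + d, gcd(x² − c₁, (x + d)² − c₂) is the linear polynomial x − m, which in this degree-2 case reduces to a closed form. This RSA-style toy, with assumed primes and messages, only illustrates the idea; the paper's attack on the Lucas-based El-Gamal elliptic-curve scheme works with Dickson polynomials instead:

```python
def recover_m(c1, c2, d, n):
    """Related messages m and m + d, both 'encrypted' by squaring mod n:
    gcd(x^2 - c1, (x + d)^2 - c2) = x - m, i.e.
    m = (c2 - c1 - d^2) * (2d)^(-1) mod n."""
    return (c2 - c1 - d * d) * pow(2 * d, -1, n) % n

n = 10007 * 10009        # toy modulus (product of two small primes, assumed)
m, d = 123456, 7         # plaintext and the known fixed difference
c1 = m * m % n
c2 = (m + d) * (m + d) % n
recovered = recover_m(c1, c2, d, n)
```

The plaintext is recovered without any decryption key, which is exactly the danger the abstract highlights when two related plaintexts share a modulus.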
Abstract: Lenstra’s attack uses the Chinese remainder theorem as a tool and requires a faulty signature to be successful. This paper reports on the security responses of the fourth and sixth order Lucas based (LUC4,6) cryptosystem under Lenstra’s attack, as compared to two other Lucas based cryptosystems, the LUC and LUC3 cryptosystems. All the Lucas based cryptosystems were exposed mathematically to Lenstra’s attack using the Chinese remainder theorem and Dickson polynomials. Results show that the possibility of a successful Lenstra’s attack is lower against the LUC4,6 cryptosystem than against the LUC3 and LUC cryptosystems. The current study concludes that the LUC4,6 cryptosystem is more secure than the LUC and LUC3 cryptosystems in withstanding Lenstra’s attack.
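Lenstra's fault attack is easiest to see in its original RSA-CRT setting: a signature whose mod-p half is faulty remains correct modulo q, so gcd(s′ᵉ − m, n) reveals a factor of n. The toy parameters below are assumed for illustration; the paper's analysis transfers this idea to the Lucas-based cryptosystems via Dickson polynomials:

```python
import math

# Toy RSA-CRT setup (assumed parameters, for illustration only)
p, q = 10007, 10009
n = p * q
e = 65537
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)
m = 42

def crt(a_p, a_q):
    """Combine residues mod p and mod q into the residue mod n."""
    return (a_p * q * pow(q, -1, p) + a_q * p * pow(p, -1, q)) % n

# Correct CRT signature: s = m^d mod n computed from its two halves
sp, sq = pow(m, d % (p - 1), p), pow(m, d % (q - 1), q)
s = crt(sp, sq)
assert pow(s, e, n) == m        # signature verifies

# Faulty signature: the mod-p half is corrupted, the mod-q half is intact
s_faulty = crt((sp + 1) % p, sq)
# s_faulty^e ≡ m (mod q) but not (mod p), so the gcd exposes q
factor = math.gcd(pow(s_faulty, e, n) - m, n)
```

A single faulty signature thus factors the modulus, which is why the paper measures how readily each Lucas-based scheme admits an analogous fault.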
Abstract: Harmonic functions are solutions to Laplace’s equation
that are known to have an advantage as a global approach in providing
the potential values for autonomous vehicle navigation. However,
the computation for obtaining harmonic functions is often too slow,
particularly when it involves a very large environment. This paper
presents a two-stage iterative method, namely the Modified Arithmetic
Mean (MAM) method, for solving 2D Laplace’s equation. Once
the harmonic functions are obtained, the standard Gradient Descent
Search (GDS) is performed for path finding of an autonomous vehicle
from an arbitrary initial position to the specified goal position. Details
of the MAM method are discussed. Several simulations of vehicle
navigation with path planning in a static known indoor environment
were conducted to verify the efficiency of the MAM method. The
generated paths obtained from the simulations are presented. The
performance of the MAM method in computing harmonic functions in a
2D environment for solving the path planning problem in autonomous
vehicle navigation is also provided.
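The pipeline described in the abstract (relax Laplace's equation to obtain a harmonic potential, then descend it) can be sketched on a small grid. Plain Gauss-Seidel averaging is used below as a stand-in for the two-stage MAM iteration, whose details are not reproduced; the grid size, obstacles, and start/goal cells are assumed:

```python
def harmonic_potential(rows, cols, goal, obstacles, sweeps=2000):
    """Harmonic potential on a grid: boundary and obstacle cells fixed at 1
    (high), the goal fixed at 0, and free interior cells relaxed by averaging
    their four neighbours (plain Gauss-Seidel, not the paper's MAM method)."""
    u = [[1.0] * cols for _ in range(rows)]
    u[goal[0]][goal[1]] = 0.0
    free = ({(r, c) for r in range(1, rows - 1) for c in range(1, cols - 1)}
            - obstacles - {goal})
    for _ in range(sweeps):
        for r, c in free:
            u[r][c] = 0.25 * (u[r-1][c] + u[r+1][c] + u[r][c-1] + u[r][c+1])
    return u

def descend(u, start, goal, max_steps=200):
    """Greedy gradient-descent search: step to the lowest-valued 4-neighbour."""
    path, (r, c) = [start], start
    while (r, c) != goal and len(path) < max_steps:
        r, c = min([(r-1, c), (r+1, c), (r, c-1), (r, c+1)],
                   key=lambda p: u[p[0]][p[1]])
        path.append((r, c))
    return path

goal = (8, 8)
u = harmonic_potential(10, 10, goal, obstacles={(4, 4), (4, 5), (5, 4)})
path = descend(u, (1, 1), goal)
```

Because a harmonic function has no local minima in the free space, greedy descent cannot get stuck: the path slides around the obstacle block and terminates at the goal, which is the property that makes harmonic potentials attractive for navigation.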