Abstract: The Rubik's Cube was invented in 1974. Since then, speedcubers all over the world have tried their best to break the world record again and again; the current record is 3.47 seconds. Many factors affect the timing, including turns per second (TPS), the algorithm, finger tricks, and the hardware of the cube. In this paper, the lower bound of the cube-solving time is discussed using convex optimization. An extended analysis of the world records is used to understand how to improve the timing. With an understanding of each part of the solving process, the paper suggests a list of speed-improvement techniques. Based on the analysis of the world records, there is a high possibility that the 3-second mark will be broken soon.
Abstract: The emergence of digital twin technology, a digital replica of the physical world, has improved real-time access to sensor data about the performance of buildings. This digital transformation has opened up many opportunities to improve building management by using the collected data to monitor consumption patterns and energy leakages. One example is the integration of predictive models for anomaly detection. In this paper, we use the Generalised Additive Model (GAM) for anomaly detection in the power consumption pattern of Air Handling Units (AHUs). There is ample research on the use of GAM for predicting power consumption at the office-building and nation-wide levels. However, there is limited illustration of its anomaly detection capabilities, prescriptive-analytics case studies, and its integration with the latest developments in digital twin technology. In this paper, we applied the general GAM modelling framework to historical data on AHU power consumption and building cooling load from Jan 2018 to Aug 2019, collected at an education campus in Singapore, to train prediction models that, in turn, yield predicted values and ranges. The historical data are seamlessly extracted from the digital twin for modelling purposes. We enhanced the utility of the GAM model by using it to power a real-time anomaly detection system based on the forward predicted ranges. The magnitude of deviation from the upper and lower bounds of the uncertainty intervals is used to identify anomalous data points, all based on historical data, without explicit intervention from domain experts. Notwithstanding, the domain expert fits in through an optional feedback loop through which iterative data cleansing is performed. After an anomalously high or low level of power consumption is detected, a set of rule-based conditions is evaluated in real time to help determine the next course of action for the facilities manager.
The performance of GAM is then compared with other approaches to evaluate its effectiveness. Lastly, we discuss the successful deployment of this approach for the detection of anomalous power consumption patterns and illustrate it with real-world use cases.
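The interval-based flagging logic described in the abstract can be sketched in a few lines. This is an illustrative stand-alone example, with hypothetical hourly readings and predicted ranges, not the deployed system: a fitted model is assumed to supply a lower/upper uncertainty bound per timestamp, and readings outside the band are flagged along with the magnitude of deviation used for downstream rule-based triage.

```python
def flag_anomalies(readings, predictions):
    """readings: {timestamp: observed_kW};
    predictions: {timestamp: (lower_kW, upper_kW)}.
    Returns (timestamp, observed, deviation) for out-of-band points."""
    anomalies = []
    for ts, observed in readings.items():
        lower, upper = predictions[ts]
        if observed < lower:
            anomalies.append((ts, observed, lower - observed))   # below band
        elif observed > upper:
            anomalies.append((ts, observed, observed - upper))   # above band
    return anomalies

# Hypothetical hourly AHU power readings against predicted ranges.
preds = {"08:00": (40.0, 55.0), "09:00": (45.0, 60.0), "10:00": (50.0, 65.0)}
obs   = {"08:00": 47.2,         "09:00": 72.5,         "10:00": 52.1}
print(flag_anomalies(obs, preds))   # 09:00 exceeds its upper bound by 12.5
```

The deviation magnitude, rather than a binary flag, is what lets rule-based conditions grade the severity of an alert.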
Abstract: Over-parameterized neural networks have attracted a
great deal of attention in recent deep learning theory research,
as they challenge the classic perspective of over-fitting when
the model has excessive parameters and have gained empirical
success in various settings. While a number of theoretical works
have been presented to demystify properties of such models, the
convergence properties of such models are still far from being
thoroughly understood. In this work, we study the convergence
properties of training two-hidden-layer partially over-parameterized
fully connected networks with the Rectified Linear Unit activation via
gradient descent. To our knowledge, this is the first theoretical work
to understand convergence properties of deep over-parameterized
networks without the equally-wide-hidden-layer assumption and
other unrealistic assumptions. We provide a probabilistic lower bound
on the widths of the hidden layers and prove a linear convergence rate
for gradient descent. We also conduct experiments on synthetic and
real-world datasets to validate our theory.
Abstract: This paper employs the Jeffreys prior technique in the
process of estimating the periodograms and frequency of a sinusoidal
model for unknown noisy time-variant or oscillating events (data) in
a Bayesian setting. The non-informative Jeffreys prior was adopted
for the posterior trigonometric function of the sinusoidal model
such that Cramer-Rao Lower Bound (CRLB) inference was used
in carving-out the minimum variance needed to curb the invariance
structure effect for unknown noisy time observational and repeated
circular patterns. An average monthly oscillating temperature series
measured in degrees Celsius (°C) from 1901 to 2014 was subjected to
the posterior solution of the unknown noisy events of the sinusoidal
model via Markov Chain Monte Carlo (MCMC). It was deduced not only
that a two-minute period is required to complete a cycle of changing
temperature from one particular degree Celsius to another, but also
that the sinusoidal model via the CRLB-Jeffreys prior for unknown
noisy events produced a smaller posterior Maximum A Posteriori (MAP)
estimate compared to known noisy events.
Abstract: In construction industry, reinforced concrete (RC) slabs
represent fundamental elements of buildings and bridges. Different
methods are available for analysing the structural behaviour of
slabs. In the early years of the last century, the yield-line method
was proposed to solve such problems. Simple-geometry
problems could easily be solved by using traditional hand analyses
which include plasticity theories. Nowadays, advanced finite element
(FE) analyses have mainly found their way into applications of
many engineering fields due to the wide range of geometries to
which they can be applied. In such cases, the application of an
elastic or a plastic constitutive model would completely change the
approach of the analysis itself. Elastic methods are popular due to
their easy applicability to automated computations. However, elastic
analyses are limited since they do not consider any aspect of the
material behaviour beyond its yield limit, which turns out to be an
essential aspect of RC structural performance. Non-linear analyses
that model plastic behaviour, on the other hand, give very reliable
results; per contra, this type of analysis is computationally quite
expensive, i.e. not well suited for solving
daily engineering problems. In the past years, many researchers have
worked on filling this gap between easy-to-implement elastic methods
and computationally complex plastic analyses. This paper aims at
proposing a numerical procedure, through which a pseudo-lower
bound solution, not violating the yield criterion, is achieved. The
advantages of moment distribution are taken into account, hence the
increase in strength provided by plastic behaviour is considered. The
lower bound solution is improved by detecting over-yielded moments,
which are used to artificially rule the moment distribution among
the rest of the non-yielded elements. The proposed technique obeys
Nielsen’s yield criterion. The outcome of this analysis provides a
simple, accurate, and non-time-consuming tool for predicting the
lower-bound solution of the collapse load of RC slabs. By using
this method, structural engineers can find the fracture patterns and
ultimate load bearing capacity. The collapse triggering mechanism is
found by detecting yield-lines. An application to the simple case of
a square clamped slab is shown, and a good match was found with
the exact values of collapse load.
Abstract: The goal of this project is to investigate constant
properties (called the Liouville-type Problem) for a p-stable map
as a local or global minimum of a p-energy functional where
the domain is a Euclidean space and the target space is a
closed half-ellipsoid. The First and Second Variation Formulas
for a p-energy functional have been applied as computational
techniques from the calculus of variations. Stokes’ Theorem,
Cauchy-Schwarz Inequality, Hardy-Sobolev type Inequalities, and
the Bochner Formula as estimation techniques have been used to
estimate the lower bound and the upper bound of the derived
p-Harmonic Stability Inequality. One challenging point in this project
is to construct a family of variation maps such that the images
of variation maps must be guaranteed in a closed half-ellipsoid.
The other challenging point is to find a contradiction between the
lower bound and the upper bound in an analysis of p-Harmonic
Stability Inequality when a p-energy minimizing map is not constant.
Therefore, the possibility of a non-constant p-energy minimizing
map has been ruled out and the constant property for a p-energy
minimizing map has been obtained. Our research establishes the
constant property for a p-stable map from a Euclidean space into
a closed half-ellipsoid for a certain range of p. This range of p
is determined by the dimension values of the Euclidean space (the
domain) and the ellipsoid (the target space). The range of p
is also bounded by the curvature values on the ellipsoid (that is, the
ratio of the longest axis to the shortest axis). Regarding Liouville-type
results for a p-stable map, our research finding on an ellipsoid is a
generalization of mathematicians’ results on a sphere. Our result is
also an extension of mathematicians’ Liouville-type results from a
special ellipsoid with only one parameter to any ellipsoid with (n+1)
parameters in the general setting.
Abstract: A new relative efficiency comparing the LSE and the BLUE in the generalized linear model is defined, based on the ratio of their least eigenvalues. In this paper, we discuss its lower bound and its relationship to the generalized relative coefficient. Finally, the paper proves that, to some degree, the new estimation is better under the Stein loss function and a special condition.
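To illustrate what an eigenvalue-ratio efficiency looks like numerically, the sketch below compares the smallest eigenvalues of two hypothetical 2×2 covariance matrices (one attributed to the BLUE, one to the LSE). The matrices and the interpretation are assumptions for illustration, not the paper's exact definition.

```python
import math

def min_eig_sym2(a, b, c):
    """Smallest eigenvalue of the symmetric 2x2 matrix [[a, b], [b, c]],
    via the closed-form quadratic formula for its characteristic polynomial."""
    return ((a + c) - math.sqrt((a - c) ** 2 + 4 * b * b)) / 2

# Hypothetical covariance matrices for the two estimators of one parameter vector.
cov_blue = (2.0, 0.5, 1.0)   # [[2.0, 0.5], [0.5, 1.0]]
cov_lse  = (3.0, 0.8, 1.5)   # [[3.0, 0.8], [0.8, 1.5]]

# Efficiency as a ratio of least eigenvalues of the two matrices.
efficiency = min_eig_sym2(*cov_blue) / min_eig_sym2(*cov_lse)
print(round(efficiency, 4))
```

For larger matrices one would replace the closed-form 2×2 eigenvalue with a general symmetric eigensolver; the ratio-of-least-eigenvalues structure stays the same.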
Abstract: In statistical parameter estimation theory, there are
usually two kinds of parameter estimation: one is the least-squares
estimation (LSE), and the other is the best linear unbiased
estimation (BLUE). By the determining theorem of the
minimum variance unbiased estimator (MVUE), the BLUE is the
ideal parameter estimation in the linear model. But since
the calculations are complicated, or the covariance is not
given, the solution is hard to obtain. Therefore, people
prefer to use the LSE rather than the BLUE, and this substitution
incurs some loss of efficiency. To quantify the loss, many
scholars have presented many kinds of relative efficiencies from
different views. For the linear weighted regression model, this
paper discusses the relative efficiencies of LSE of β to BLUE
of β. It also defines two new relative efficiencies and gives
their lower bounds.
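The loss incurred by substituting the LSE for the BLUE can be made concrete with a minimal sketch, assuming a hypothetical one-parameter weighted regression y_i = β·x_i + e_i with known heteroscedastic error variances (not the paper's model): the BLUE is the weighted least-squares estimator, while the plain LSE ignores the weights and pays a variance penalty.

```python
# Hypothetical design points and error variances Var(e_i) = s2_i.
x  = [1.0, 2.0, 3.0, 4.0]
s2 = [1.0, 4.0, 9.0, 16.0]

# Var(LSE)  = sum(x_i^2 s2_i) / (sum x_i^2)^2   (ordinary LS, weights ignored)
# Var(BLUE) = 1 / sum(x_i^2 / s2_i)             (weighted LS, inverse-variance weights)
var_lse  = sum(xi * xi * si for xi, si in zip(x, s2)) / sum(xi * xi for xi in x) ** 2
var_blue = 1.0 / sum(xi * xi / si for xi, si in zip(x, s2))
print(var_blue <= var_lse)          # BLUE variance never exceeds LSE variance
```

The ratio var_blue / var_lse is exactly the kind of relative efficiency the abstract refers to; it equals 1 only when the error variances are equal.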
Abstract: We study the anomalous WWγ and WWZ couplings by
calculating total cross sections of two processes at the LHeC with
electron beam energy Ee=140 GeV and the proton beam energy Ep=7
TeV, and at the FCC-ep collider with the polarized electron beam
energy Ee=80 GeV and the proton beam energy Ep=50 TeV. At the
LHeC with electron beam polarization, we obtain the results for the
difference of upper and lower bounds as (0.975, 0.118) and (0.285,
0.009) for the anomalous (Δκγ, λγ) and (Δκz, λz) couplings,
respectively. For the FCC-ep collider, these bounds are obtained as
(1.101, 0.065) and (0.320, 0.002) at an integrated luminosity of
Lint=100 fb^-1.
Abstract: A new relative efficiency for the linear model, introduced
in the literature, is extended to the linear weighted regression
model, and its upper and lower bounds are proposed. In the linear
weighted regression model, for the best linear unbiased estimation of
the mean matrix with respect to the least-squares estimation, two new
relative efficiencies are given, and their upper and lower bounds are
also studied.
Abstract: Let A and B be nonnegative matrices. A new upper bound on the spectral radius ρ(A◦B) of their Hadamard product is obtained. Meanwhile, a new lower bound on the smallest eigenvalue q(A★B) of the Fan product, and a new lower bound on the minimum eigenvalue q(B◦A⁻¹) of the Hadamard product of B and A⁻¹, for two nonsingular M-matrices A and B, are given. Some comparison results are also given in theory. To illustrate our results, numerical examples are considered.
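The paper's new bound is not reproduced here; as background, the classical inequality ρ(A◦B) ≤ ρ(A)·ρ(B) for nonnegative matrices can be checked numerically. The sketch below estimates spectral radii by plain power iteration (adequate for these small symmetric nonnegative examples) and verifies the classical bound on a hypothetical pair of matrices.

```python
def mat_vec(m, v):
    """Multiply a square matrix (list of rows) by a vector."""
    return [sum(m[i][j] * v[j] for j in range(len(v))) for i in range(len(m))]

def spectral_radius(m, iters=200):
    """Estimate the spectral radius by power iteration with max-norm scaling."""
    v = [1.0] * len(m)
    for _ in range(iters):
        w = mat_vec(m, v)
        norm = max(abs(x) for x in w)
        v = [x / norm for x in w]
    return norm

def hadamard(a, b):
    """Entrywise (Hadamard) product of two same-sized square matrices."""
    return [[a[i][j] * b[i][j] for j in range(len(a))] for i in range(len(a))]

A = [[2.0, 1.0], [1.0, 3.0]]
B = [[1.0, 0.5], [0.5, 2.0]]
print(spectral_radius(hadamard(A, B)) <= spectral_radius(A) * spectral_radius(B))
```

Bounds like the one proposed in the abstract are useful precisely because they sharpen such classical product inequalities without computing eigenvalues exactly.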
Abstract: All-to-all personalized communication, also known as complete exchange, is one of the densest communication patterns in parallel computing. In this paper, we propose new indirect algorithms for complete exchange on all-port ring and torus networks. The new algorithms fully utilize all communication links and transmit messages along shortest paths to achieve the theoretical lower bounds on message transmission, which had not been achieved by other existing indirect algorithms. For a 2D r × c ( r % c ) all-port torus, the algorithm has an optimal transmission cost and an O(c) message startup cost. In addition, the proposed algorithms accommodate non-power-of-two tori, where the number of nodes in each dimension need not be a power of two or a square. Finally, the algorithms are conceptually simple and symmetrical for every message and every node, so that they can be easily implemented and achieve the optimum in practice.
Abstract: A perfect secret-sharing scheme is a method to distribute a secret among a set of participants in such a way that only qualified subsets of participants can recover the secret, while the joint share of the participants in any unqualified subset is statistically independent of the secret. The collection of all qualified subsets is called the access structure of the perfect secret-sharing scheme. In a graph-based access structure, each vertex of a graph G represents a participant and each edge of G represents a minimal qualified subset. The average information ratio of a perfect secret-sharing scheme realizing the access structure based on G is defined as AR = (Σ_{v∈V(G)} H(s_v)) / (|V(G)| · H(s)), where s is the secret and s_v is the share of participant v, both regarded as random variables, and H is the Shannon entropy. The infimum of the average information ratio over all possible perfect secret-sharing schemes realizing a given access structure is called the optimal average information ratio of that access structure. Most known results about the optimal average information ratio give upper or lower bounds on it. In this paper, we present access structures based on bipartite graphs and determine the exact values of the optimal average information ratio for some infinite classes of them.
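The AR formula can be computed directly once the entropies of the secret and the shares are known. The sketch below uses hypothetical empirical share distributions on a three-participant graph (the graph, samples, and participant names are assumptions for illustration) with Shannon entropy estimated from sample frequencies.

```python
import math
from collections import Counter

def shannon_entropy(outcomes):
    """Entropy in bits of a discrete random variable given as a list of samples."""
    counts = Counter(outcomes)
    n = len(outcomes)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Hypothetical shares for three participants; each share and the secret
# are uniform over two values here, so every entropy is 1 bit.
secret_samples = [0, 1, 0, 1]
share_samples = {"u": [0, 1, 0, 1], "v": [0, 0, 1, 1], "w": [1, 0, 1, 0]}

Hs = shannon_entropy(secret_samples)
AR = sum(shannon_entropy(s) for s in share_samples.values()) / (len(share_samples) * Hs)
print(AR)   # each H(s_v) equals H(s) = 1 bit, so AR = 1.0
```

A ratio of 1.0 corresponds to an ideal scheme where every share carries no more information than the secret itself; realistic graph access structures generally force AR above 1.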
Abstract: This paper is concerned with the exponential stability and stabilization of switched linear systems with interval time-varying delays. The time delay is any continuous function belonging to a given interval, in which the lower bound of the delay is not restricted to zero. By constructing a suitable augmented Lyapunov-Krasovskii functional combined with the Leibniz-Newton formula, a switching rule for the exponential stability and stabilization of switched linear systems with interval time-varying delays, together with new delay-dependent sufficient conditions for the exponential stability and stabilization of the systems, is first established in terms of LMIs. Numerical examples are included to illustrate the effectiveness of the results.
Abstract: The paper presents a new hybridization methodology involving Neural, Fuzzy and Rough Computing. A Rough Sets based approximation technique is proposed, built on a certain Neuro-Fuzzy architecture. A new Rough Neuron composition, consisting of a combination of a Lower Bound neuron and a Boundary neuron, is also described. The conventional convergence of error in back-propagation has been replaced by a new framework based on an 'Output Excitation Factor' and an inverse input transfer function. The paper also presents a brief comparison of the performance of the existing Rough Neural Networks and the ANFIS architecture against the proposed methodology. It can be observed that the rough approximation based neuro-fuzzy architecture is superior to its counterparts.
Abstract: A new data fusion method called the joint probability density matrix (JPDM) is proposed, which can associate and fuse measurements from spatially distributed heterogeneous sensors to identify the real target in a surveillance region. Using a probabilistic grid representation, we numerically combine the uncertainty regions of all the measurements in a general framework. The NP-hard multisensor data fusion problem is thereby converted into a peak-picking problem on the grid map. Unlike most existing data fusion methods, the JPDM method does not need association processing and will not lead to combinatorial explosion. Its convergence to the CRLB with a diminishing grid size has been proved. Simulation results are presented to illustrate the effectiveness of the proposed technique.
Abstract: The System Identification problem looks for a
suitably parameterized model, representing a given process. The
parameters of the model are adjusted to optimize a performance
function based on error between the given process output and
identified process output. The linear system identification field is
well established with many classical approaches whereas most of
those methods cannot be applied to nonlinear systems. The problem
becomes tougher if the system is completely unknown and only the
output time series is available. It has been reported that the
capability of Artificial Neural Networks to approximate all linear and
nonlinear input-output maps makes them predominantly suitable for the
identification of nonlinear systems where only the output time series
is available [1][2][4][5]. The work reported here is an attempt to
implement a few of the well-known algorithms in the context of
modeling nonlinear systems, and to make a performance
comparison to establish their relative merits and demerits.
Abstract: In this paper we use classical linear stability theory
to investigate the effects of uniform internal heat generation on the
onset of Marangoni convection in a horizontal layer of fluid heated
from below. We use an analytical technique to obtain the closed-form
analytical expression for the onset of Marangoni convection when
the lower boundary is conducting with a free-slip condition. We show
that the effect of increasing the internal heat generation is always to
destabilize the layer.
Abstract: In this work, we study the impact of dynamically
changing link slowdowns on the stability properties of packet-switched
networks under the Adversarial Queueing Theory
framework. In particular, we consider the Adversarial, Quasi-Static
Slowdown Queueing Theory model, where each link slowdown may
take on values in the two-valued set of integers {1, D} with D > 1,
which remain fixed for a long time, under a (w, ρ)-adversary. In this
framework, we present an innovative systematic construction for the
estimation of adversarial injection rate lower bounds which, if
exceeded, cause instability in networks that use the LIS (Longest-in-
System) protocol for contention resolution. In addition, we show that
a network that uses the LIS protocol for contention resolution may
see its instability bound drop at injection rates ρ > 0 when
the network size and the high slowdown D take large values. This is
the best known instability lower bound for LIS networks.
Abstract: In this paper we present a photo mosaic smartphone
application for client-server based large-scale image databases. Photo
mosaic is not a new concept, but there are very few smartphone
applications especially for a huge number of images in the
client-server environment. To support large-scale image databases,
we first propose an overall framework working as a client-server
model. We then present a concept of image-PAA features to efficiently
handle a huge number of images and discuss its lower bounding
property. We also present a best-match algorithm that exploits the
lower bounding property of image-PAA. We finally implement an
efficient Android-based application and demonstrate its feasibility.
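The image-PAA feature and its exact definition are not given in the abstract; the sketch below illustrates the classic time-series Piecewise Aggregate Approximation (PAA) lower-bounding property that such best-match algorithms rely on, under the assumption that image-PAA behaves analogously: the distance between PAA summaries never exceeds the true Euclidean distance, so candidates can be pruned without false dismissals.

```python
import math

def paa(series, m):
    """Piecewise Aggregate Approximation: mean of each of m equal segments
    (len(series) is assumed divisible by m for simplicity)."""
    seg = len(series) // m
    return [sum(series[i * seg:(i + 1) * seg]) / seg for i in range(m)]

def euclid(x, y):
    """True Euclidean distance between two equal-length sequences."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

def paa_dist(x, y, m):
    """PAA distance: provably a lower bound on euclid(x, y)."""
    seg = len(x) // m
    return math.sqrt(seg * sum((a - b) ** 2 for a, b in zip(paa(x, m), paa(y, m))))

x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
y = [2.0, 1.0, 4.0, 3.0, 6.0, 5.0, 8.0, 7.0]
print(paa_dist(x, y, 4) <= euclid(x, y))   # lower bound never exceeds true distance
```

In a best-match search, any candidate whose PAA distance already exceeds the current best true distance can be discarded without computing its full Euclidean distance; the lower-bounding property guarantees no true match is lost.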