Abstract: Simulation modeling can be used to solve real-world problems and provides insight into complex systems. To develop a simplified model of a process simulation, a suitable experimental design is required to capture the surface characteristics. This paper presents the experimental design and algorithm used to model a process simulation for an optimization problem. CO2 liquefaction based on external refrigeration with two refrigeration circuits was used as the simulation case study. Latin Hypercube Sampling (LHS) was proposed to be combined with existing Central Composite Design (CCD) samples to improve the performance of CCD in generating the second-order model of the system. The second-order model was then used as the objective function of the optimization problem. The results showed that adding LHS samples to CCD samples helps capture surface curvature characteristics. A suitable number of LHS sample points should be chosen in order to obtain an accurate nonlinear model with a minimum number of simulation experiments.
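Since the abstract's central sampling tool is Latin Hypercube Sampling, a minimal generic sketch may help fix ideas (plain Python with our own function names, not the authors' implementation): each variable's range is cut into N equal strata, and each stratum receives exactly one sample.

```python
import random

def latin_hypercube(n_samples, n_dims, seed=0):
    """Generate n_samples points in [0, 1)^n_dims with one point per
    stratum in every dimension (the defining property of an LHS)."""
    rng = random.Random(seed)
    samples = [[0.0] * n_dims for _ in range(n_samples)]
    for d in range(n_dims):
        # randomly assign the n_samples strata to the points, then
        # draw one uniform value inside each assigned stratum
        perm = list(range(n_samples))
        rng.shuffle(perm)
        for i in range(n_samples):
            samples[i][d] = (perm[i] + rng.random()) / n_samples
    return samples

pts = latin_hypercube(5, 2)
# each dimension contains exactly one point per stratum [k/5, (k+1)/5)
for d in range(2):
    assert sorted(int(p[d] * 5) for p in pts) == [0, 1, 2, 3, 4]
```

The stratification is what lets LHS cover a response surface with far fewer runs than plain random sampling, which is why it complements the axial/center structure of a CCD.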
Abstract: Probability of failure (PF) often appears alongside factor of safety (FS) in design acceptance criteria for rock slope, underground excavation and open pit mine designs. However, the design acceptance criteria generally provide no guidance on how PF should be calculated for homogeneous and heterogeneous rock masses, or on what qualifies as a 'reasonable' PF assessment for a given slope design. Observational and kinematic methods were widely used in the 1990s until advances in computing permitted the routine use of numerical modelling. In the 2000s and early 2010s, PF in numerical models was generally calculated using the point estimate method. More recently, some limit equilibrium analysis software offers statistical parameter inputs along with Monte-Carlo or Latin-Hypercube sampling methods to automatically calculate PF. Factors including rock type and density, weathering and alteration, intact rock strength, rock mass quality and shear strength, the location and orientation of geologic structure, shear strength of geologic structure and groundwater pore pressure influence the stability of rock slopes. Significant engineering and geological judgement, interpretation and data interpolation are usually applied in determining these factors and amalgamating them into a geotechnical model which can then be analysed. Most factors are estimated 'approximately' or with allowances for some variability rather than 'exactly'. When it comes to numerical modelling, some of these factors are then treated deterministically (i.e. as exact values), while others have probabilistic inputs based on the user's discretion and understanding of the problem being analysed. This paper discusses the importance of understanding the key aspects of slope design for homogeneous and heterogeneous rock masses and how they can be translated into reasonable PF assessments where the data permits.
A case study from a large open pit gold mine in a complex geological setting in Western Australia is presented to illustrate how PF can be calculated using different methods to obtain markedly different results. Ultimately, sound engineering judgement and logic are often required to decipher the true meaning and significance (if any) of some PF results.
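For readers unfamiliar with how a sampling-based PF is typically obtained from factors of safety, a minimal Monte Carlo sketch follows (the normal distribution and its parameters are purely hypothetical illustrations, not values from the case study):

```python
import random

def probability_of_failure(fs_samples):
    """PF estimated as the fraction of sampled factors of safety below 1.0."""
    return sum(fs < 1.0 for fs in fs_samples) / len(fs_samples)

rng = random.Random(1)
# hypothetical input: FS normally distributed with mean 1.3, std dev 0.2
fs = [rng.gauss(1.3, 0.2) for _ in range(100_000)]
pf = probability_of_failure(fs)
# analytic check for this toy case: P(Z < (1 - 1.3)/0.2) = Phi(-1.5) ~ 0.0668
assert abs(pf - 0.0668) < 0.01
```

The same estimator applies whether the FS samples come from Monte Carlo, Latin Hypercube sampling or point estimate combinations; the methods differ only in how the input parameter space is sampled, which is precisely why they can yield markedly different PF values for the same slope.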
Abstract: This paper presents the performance of two robust gradient-based heuristic optimization procedures, based on 3^n enumeration and a tunneling approach, for seeking the global optimum of constrained integer problems. Both procedures consist of two distinct phases for locating the global optimum of integer problems with a linear or nonlinear objective function subject to linear or nonlinear constraints. In the first phase of both procedures, a local minimum of the function is found using the gradient approach, coupled with hemstitching moves when a constraint is violated in order to return the search to the feasible region. In the second phase, the first procedure examines the 3^n integer combinations on the boundary of, and within, the hypercube volume neighboring the result from the first phase, whereas the second procedure constructs a tunneling function at the local minimum of the first phase so as to find another point on the other side of the barrier where the function value is approximately the same. In the next cycle, the search for the global optimum recommences in both procedures using this newly found point as the starting vector. The search is repeated for various step sizes along the function gradient, as well as along the vector normal to the violated constraints, until no improvement in the optimum value is found. The results from both proposed optimization methods are presented and compared with those produced by the popular MS Excel Solver provided within the MS Office suite, and with other published results.
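The 3^n enumeration of the second phase can be sketched generically as follows (the toy objective and constraint are our own illustrations, not the paper's test problems): around the rounded continuous local optimum, every combination of per-coordinate offsets in {-1, 0, +1} is examined, and the best feasible integer point is kept.

```python
from itertools import product

def best_integer_neighbor(x, objective, feasible):
    """Enumerate the 3**n integer points around a continuous local optimum x
    (offsets -1, 0, +1 applied per coordinate of the rounded point) and
    return the feasible candidate with the lowest objective value."""
    base = [round(v) for v in x]
    best, best_val = None, float("inf")
    for offsets in product((-1, 0, 1), repeat=len(base)):
        cand = tuple(b + o for b, o in zip(base, offsets))
        if feasible(cand):
            val = objective(cand)
            if val < best_val:
                best, best_val = cand, val
    return best, best_val

# toy example: minimize (x - 1.4)^2 + (y - 2.6)^2 subject to x + y <= 4
obj = lambda p: (p[0] - 1.4) ** 2 + (p[1] - 2.6) ** 2
feas = lambda p: p[0] + p[1] <= 4
best, val = best_integer_neighbor((1.4, 2.6), obj, feas)
assert best == (1, 3)
```

In the full procedure this point would seed the next gradient-descent cycle; the 3^n cost grows quickly with dimension, which is the trade-off motivating the alternative tunneling phase.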
Abstract: Delay Tolerant Networks (DTNs) that have sufficient state information, including trajectory and contact information, can maintain routing efficiency. However, state information is dynamic and hard to obtain without a global and/or long-term collection process. To deal with these problems, the internal social features of each node in the network are introduced to perform the routing process. This approach is motivated by several human contact networks, in which people contact each other more frequently if they have more social features in common. Two processes were developed: social feature extraction and multipath routing. The routing method then becomes a hypercube-based feature-matching process. Furthermore, the effectiveness of multipath routing is evaluated and compared to that of single-path routing.
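Hypercube-based feature matching can be illustrated with a small sketch (a toy encoding of our own, not the authors' protocol): nodes are binary social-feature vectors, a destination is reached by correcting mismatched feature bits one at a time, and the several one-bit corrections available at each step are what multipath routing exploits.

```python
def hamming(a, b):
    """Number of differing social-feature bits between two nodes."""
    return sum(x != y for x, y in zip(a, b))

def feature_match_next_hops(current, dest):
    """Hypercube-style routing step: each neighbor that corrects exactly one
    mismatched feature bit is a candidate next hop; returning all of them
    (rather than just one) enables multipath forwarding."""
    hops = []
    for i, (c, d) in enumerate(zip(current, dest)):
        if c != d:
            hops.append(current[:i] + (d,) + current[i + 1:])
    return hops

src, dst = (0, 1, 0, 1), (1, 1, 1, 1)
assert hamming(src, dst) == 2
assert feature_match_next_hops(src, dst) == [(1, 1, 0, 1), (0, 1, 1, 1)]
```

The number of candidate next hops equals the remaining Hamming distance, so the path diversity shrinks as the message approaches nodes with matching features.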
Abstract: In this paper, a thorough review of dual-cubes, DC_n, the related studies and their variations is given. DC_n was introduced as a network that retains the pleasing properties of the hypercube Q_n but has a much smaller diameter. In fact, it is constructed so that the number of vertices of DC_n equals the number of vertices of Q_{2n+1}. However, each vertex in DC_n is adjacent to n + 1 neighbors, so DC_n has (n + 1) × 2^{2n} edges in total, which is roughly half the number of edges of Q_{2n+1}. In addition, the diameter of any DC_n is 2n + 2, which is of the same order as that of Q_{2n+1}. For self-completeness, basic definitions, construction rules and symbols are provided. We chronicle the results, presenting eleven significant theorems, and include some open problems at the end.
Abstract: Latin hypercube designs (LHDs) are among the most widely applied space-filling designs in computer experiments. An LHD can be generated at random, but a randomly chosen LHD may have poor properties and thus perform badly in estimation and prediction. There is a connection between Latin squares and orthogonal arrays (OAs). A Latin square of order s is an arrangement of s symbols in s rows and s columns such that every symbol occurs exactly once in each row and once in each column; such a square exists for every positive integer s. In this paper, a computer program was written to construct orthogonal array-based Latin hypercube designs (OA-LHDs). Orthogonal arrays were constructed from a Latin square of order s, and the constructed OAs were then used to build the desired Latin hypercube designs for three input variables for use in computer experiments. The LHDs constructed have better space-filling properties and can be used in computer experiments that involve only three input factors. The MATLAB 2012a computer package (www.mathworks.com/) was used for the development of the program that constructs the designs.
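The Latin-square-to-OA step described above can be sketched in a few lines (a generic cyclic construction for illustration, not the MATLAB program itself): a Latin square of order s, read as triples (row, column, symbol), is an orthogonal array of strength 2 with three factors, which is exactly the three-input-variable setting of the abstract.

```python
def latin_square(s):
    """Cyclic Latin square of order s: row i is (i, i+1, ..., i+s-1) mod s."""
    return [[(i + j) % s for j in range(s)] for i in range(s)]

def is_latin(square):
    """Check that every symbol occurs once per row and once per column."""
    s = len(square)
    symbols = set(range(s))
    rows_ok = all(set(row) == symbols for row in square)
    cols_ok = all({square[i][j] for i in range(s)} == symbols for j in range(s))
    return rows_ok and cols_ok

L = latin_square(5)
assert is_latin(L)

# the triples (row, column, symbol) form an orthogonal array OA(s^2, 3, s, 2):
oa = [(i, j, L[i][j]) for i in range(5) for j in range(5)]
# strength 2: any pair of columns contains every ordered pair exactly once
for c1, c2 in [(0, 1), (0, 2), (1, 2)]:
    assert len({(r[c1], r[c2]) for r in oa}) == 25
```

An OA-based LHD is then obtained by replacing each symbol level with a randomly permuted fine-grained level within its stratum, preserving the strength-2 balance that plain random LHDs lack.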
Abstract: This paper presents an optimal broadcast algorithm for hypercube networks. The main focus of the paper is the effectiveness of the algorithm in the presence of many node faults. For the optimal solution, our algorithm builds a spanning tree connecting all nodes of the network, through which messages are propagated from the source node to the remaining nodes. At any given time, at most n − 1 nodes may fail by crashing. We show that hypercube networks are strongly fault-tolerant. Simulation results are analyzed to assess the algorithm's characteristics under many node faults. We compare the simulation results of our proposed method with those of Fu's method: Fu's approach cannot tolerate n − 1 faulty nodes in the worst case, but our approach can.
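The fault-free baseline of such a broadcast can be sketched with the standard binomial spanning tree of the n-cube (a generic textbook construction for illustration, not the authors' fault-tolerant algorithm):

```python
def broadcast_tree(n, source=0):
    """Binomial spanning tree of the n-cube rooted at source: in step i every
    informed node forwards the message along dimension i, so all 2^n nodes
    are reached in n steps. Returns a child -> parent map."""
    parent = {source: None}
    informed = [source]
    for dim in range(n):
        for node in list(informed):
            child = node ^ (1 << dim)  # neighbor across dimension `dim`
            if child not in parent:
                parent[child] = node
                informed.append(child)
    return parent

tree = broadcast_tree(3)
assert len(tree) == 8  # all 2^3 nodes reached
# every tree edge is a genuine hypercube edge (endpoints differ in one bit)
assert all(bin(c ^ p).count("1") == 1 for c, p in tree.items() if p is not None)
```

A fault-tolerant broadcast must reroute around crashed nodes in this tree; the abstract's claim is that up to n − 1 such crashes can always be survived, matching the n-cube's connectivity of n.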
Abstract: Two finite element (FEM) models are presented in this paper to address the random nature of the response of glued timber structures made of wood segments with variable elastic moduli evaluated from 3600 indentation measurements. This database served to create the same number of ensembles as there were segments in the tested beam. Statistics of these ensembles were then assigned to the given segments of the beams, and the Latin Hypercube Sampling (LHS) method was used to perform 100 simulations, resulting in an ensemble of 100 deflections subjected to statistical evaluation. Here, a detailed geometrical arrangement of the individual segments in the laminated beam was considered in the construction of a two-dimensional FEM model subjected to four-point bending to comply with the laboratory tests. Since laboratory measurements of local elastic moduli may in general suffer from significant experimental error, it appears advantageous to exploit the full-scale measurements of the timber beams, i.e. deflections, to improve their prior distributions with the help of Bayesian statistical methods. This, however, requires an efficient computational model when simulating the laboratory tests numerically. To this end, a simplified model based on Mindlin's beam theory was established. The improved posterior distributions show that the most significant change of the Young's modulus distribution takes place in the laminae in the most strained zones, i.e. in the top and bottom layers within the beam center region. Posterior distributions of the moduli of elasticity were subsequently utilized in the 2D FEM model and compared with the original simulations.
Abstract: Let maxζG(m) denote the maximum number of edges in a subgraph of a graph G induced by m nodes. The n-dimensional augmented cube, denoted AQn, is a variation of the hypercube that possesses some properties superior to those of the hypercube. We study the case where G is the augmented cube AQn.
Abstract: A complex sensitivity analysis of stresses in a concrete slab of a real type of rigid pavement made from recycled materials is performed. The computational model of the pavement is designed as a spatial (3D) model based on a nonlinear variant of the finite element method that respects the structural nonlinearity, enables modeling of different arrangements of joints, and allows the entire model to be subjected to thermal loading. The interaction of adjacent slabs in the joints and the contact between the slab and the underlying layer are modeled with the help of special contact elements. Four concrete slabs separated by transverse and longitudinal joints, the additional structural layers and the soil to a depth of about 3 m are modeled. The thickness of the individual layers, the physical and mechanical properties of the materials, the characteristics of the joints, and the temperature of the upper and lower surfaces of the slabs are treated as random variables. The modern simulation technique Updated Latin Hypercube Sampling, with 20 simulations, is used. For the sensitivity analysis, a sensitivity coefficient based on the Spearman rank correlation coefficient is utilized. As a result, estimates of the influence of the random variability of the individual input variables on the random variability of the principal stresses σ1 and σ3 at 53 points on the upper and lower surfaces of the concrete slabs are obtained.
Abstract: This paper investigates the suitability of Latin Hypercube Sampling (LHS) for composite electric power system reliability analysis. Each sample generated in LHS is mapped into an equivalent system state and used for evaluating the annualized system and load point indices. A DC load-flow-based state evaluation model is solved for each sampled contingency state. The indices evaluated are the loss of load probability, loss of load expectation, expected demand not served and expected energy not supplied. The application of LHS is illustrated through case studies carried out using the RBTS and IEEE-RTS test systems. The results obtained are compared with non-sequential Monte Carlo simulation and state enumeration analytical approaches. An error analysis is also carried out to check the LHS method's ability to capture the distributions of the reliability indices. It is found that the LHS approach estimates indices closer to the actual values and gives tighter bounds on the indices than non-sequential Monte Carlo simulation.
Abstract: Q^k_n has been shown to be an alternative to the hypercube family. For any even integer k ≥ 4 and any integer n ≥ 2, Q^k_n is a bipartite graph. In this paper, we prove that, given any pair of vertices w and b from different partite sets of Q^k_n, there exist 2n internally disjoint paths between w and b, denoted by {P_i | 0 ≤ i ≤ 2n − 1}, such that ∪_{i=0}^{2n−1} P_i covers all vertices of Q^k_n. The result is optimal since each vertex of Q^k_n has exactly 2n neighbors.
Abstract: Global approximation using a metamodel for a complex mathematical function or computer model over a large variable domain is often needed in sensitivity analysis, computer simulation, optimal control, and global design optimization of complex, multi-physics systems. To overcome the limitations of the existing response surface (RS), surrogate or metamodeling methods for complex models over large variable domains, a new adaptive and regressive RS modeling method using quadratic functions and local area model improvement schemes is introduced. The method applies an iterative, Latin hypercube sampling-based RS update process; divides the entire domain of the design variables into multiple cells; identifies rougher cells with large modeling error; and further divides these cells along the roughest dimension direction. A small number of additional sampling points from the original, expensive model are added over the small, isolated rough cells to improve the RS model locally until the model accuracy criteria are satisfied. The method then combines the local RS cells to regenerate the global RS model with satisfactory accuracy. An effective RS cell sorting algorithm is also introduced to improve the efficiency of model evaluation. Benchmark tests are presented, and the use of the new metamodeling method to replace a complex hybrid electric vehicle powertrain performance model in vehicle design optimization and optimal control is discussed.
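The quadratic response surface fitting at the core of such a method can be sketched as an ordinary least-squares fit over sampled points (one-dimensional for brevity, with our own helper names; the paper's scheme is multivariate and adaptive):

```python
def fit_quadratic(xs, ys):
    """Least-squares fit of y = a*x^2 + b*x + c by solving the 3x3 normal
    equations (X^T X) coef = X^T y with Gaussian elimination."""
    rows = [[x * x, x, 1.0] for x in xs]          # design matrix rows
    A = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    b = [sum(r[i] * y for r, y in zip(rows, ys)) for i in range(3)]
    # forward elimination with partial pivoting
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, 3):
            f = A[r][col] / A[col][col]
            for c in range(col, 3):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    # back substitution
    coef = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        coef[r] = (b[r] - sum(A[r][c] * coef[c] for c in range(r + 1, 3))) / A[r][r]
    return coef  # [a, b, c]

xs = [0.0, 0.5, 1.0, 1.5, 2.0]
a, b_, c = fit_quadratic(xs, [2 * x * x - 3 * x + 1 for x in xs])
assert abs(a - 2) < 1e-9 and abs(b_ + 3) < 1e-9 and abs(c - 1) < 1e-9
```

In the adaptive scheme, each cell carries such a local quadratic fit, and cells whose fit error exceeds a threshold are subdivided and resampled.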
Abstract: Decision support systems are usually based on multidimensional structures, which use the concept of the hypercube. Dimensions are the axes along which facts are analyzed; they form a space in which a fact is located by a set of coordinates at the intersections of members of the dimensions. Conventional multidimensional structures deal with discrete facts linked to discrete dimensions. However, when dealing with naturally continuous phenomena, the discrete representation is not adequate. There is a need to integrate spatiotemporal continuity within multidimensional structures to enable the analysis and exploration of continuous field data. The research issues raised by the integration of spatiotemporal continuity into multidimensional structures are numerous. In this paper, we discuss these research issues, briefly present a multidimensional model for continuous field data, and define new aggregation operations. The model and the associated operations and measures are validated by a prototype.
Abstract: Independent spanning trees (ISTs) provide a number of advantages in data broadcasting, for example in fault-tolerant network protocols for distributed computing and in bandwidth utilization. However, the problem of constructing multiple ISTs is considered hard for arbitrary graphs. In this paper we present an efficient algorithm that constructs ISTs on hypercubes using minimum resources.
Abstract: To improve the efficiency of parametric studies and test planning, a method is proposed that takes into account all input parameters while performing only a few simulation runs to assess the relative importance of each input parameter. For K input parameters with N input values each, the total number of possible combinations of input values equals N^K. To limit the number of runs, only some (N in total) of the possible combinations are taken into account. The sampling procedure Updated Latin Hypercube Sampling is used to choose the optimal combinations. To measure the relative importance of each input parameter, the Spearman rank correlation coefficient is proposed. The sensitivity and influence of all parameters are analyzed within one procedure, and the key parameters with the largest influence are immediately identified.
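The Spearman coefficient used here as the sensitivity measure is simply the Pearson correlation computed on ranks, which makes it robust to monotone nonlinearities; a small self-contained sketch (helper names are ours):

```python
def rank(values):
    """Average 1-based ranks; tied values receive the mean of their ranks."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

# a monotone relation gives |rho| = 1 even when it is nonlinear
x = [0.1, 0.4, 0.9, 1.6, 2.5]
assert abs(spearman(x, [v ** 3 for v in x]) - 1.0) < 1e-12
assert abs(spearman(x, [-v for v in x]) + 1.0) < 1e-12
```

Ranking each input parameter's samples against the output samples this way yields one sensitivity coefficient per parameter, so all K parameters can be screened from the same N runs.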
Abstract: Star graphs are Cayley graphs of symmetric groups of permutations, with transpositions as the generating sets. A star graph is a preferred interconnection network topology to a hypercube for its ability to connect a greater number of nodes with lower degree. However, an attractive property of the hypercube is that it has a Hamiltonian decomposition, i.e. its edges can be partitioned into disjoint Hamiltonian cycles, and therefore a simple routing can be found in the case of an edge failure. The existence of Hamiltonian cycles in Cayley graphs has been known for some time. So far, there are no published results on the much stronger condition of the existence of Hamiltonian decompositions. In this paper, we give a construction of a Hamiltonian decomposition of the star graph 5-star of degree 4, by defining an automorphism for 5-star and a Hamiltonian cycle which is edge-disjoint with its image under the automorphism.
Abstract: Faults in a network may take various forms such as hardware/software errors, vertex/edge faults, etc. The folded hypercube is a well-known variation of the hypercube structure and can be constructed from a hypercube by adding a link to every pair of nodes with complementary addresses. Let FFv (respectively, FFe) be the set of faulty nodes (respectively, faulty links) in an n-dimensional folded hypercube FQn. Hsieh et al. have shown that, for n ≥ 3, FQn − FFv − FFe contains a fault-free cycle of length at least 2^n − 2|FFv|, under the constraints that (1) |FFv| + |FFe| ≤ 2n − 4 and (2) every node in FQn is incident to at least two fault-free links. In this paper, we further consider the constraint |FFv| + |FFe| ≤ 2n − 3. We prove that, for n ≥ 5, FQn − FFv − FFe still has a fault-free cycle of length at least 2^n − 2|FFv|, under the constraints: (1) |FFv| + |FFe| ≤ 2n − 3, (2) |FFe| ≥ n + 2, and (3) every vertex is still incident with at least two fault-free links.
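The construction rule for FQn (a hypercube plus one link between each pair of complementary addresses) is easy to verify computationally for small n (a generic sketch for illustration, not part of the paper's proof machinery):

```python
def folded_hypercube_edges(n):
    """Edge set of the n-dimensional folded hypercube FQn: all hypercube
    edges plus one edge between every pair of complementary addresses."""
    edges = set()
    full_mask = (1 << n) - 1
    for v in range(1 << n):
        for d in range(n):
            edges.add(frozenset((v, v ^ (1 << d))))   # cube edge along dim d
        edges.add(frozenset((v, v ^ full_mask)))      # complementary edge
    return edges

E = folded_hypercube_edges(4)
# FQn is (n+1)-regular: n cube links plus 1 complementary link per node
degree = {v: 0 for v in range(16)}
for e in E:
    for v in e:
        degree[v] += 1
assert all(deg == 5 for deg in degree.values())
assert len(E) == 16 * 5 // 2  # (n+1) * 2^n / 2 edges
```

The extra complementary links raise the connectivity from n to n + 1, which is why FQn can tolerate the larger fault sets (up to 2n − 3 combined faults) considered in the abstract.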
Abstract: The crossed cube is one of the most notable variations of the hypercube, and some of its properties are superior to those of the hypercube. For example, the diameter of the crossed cube is almost half that of the hypercube. In this paper, we focus on the problem of embedding a Hamiltonian cycle through an arbitrary given edge of the crossed cube. We give a necessary and sufficient condition for determining whether a given permutation of n elements over Zn generates a Hamiltonian cycle pattern of the crossed cube. Moreover, we obtain a lower bound on the number of different Hamiltonian cycles passing through a given edge in an n-dimensional crossed cube. Our work extends some recently obtained results.
Abstract: The hypercube Q_n is one of the most well-known and popular interconnection networks, and the k-ary n-cube Q^k_n is an enlarged family derived from Q_n that keeps many of the pleasing properties of hypercubes. In this article, we study the panpositionable hamiltonicity of Q^k_n for k ≥ 3 and n ≥ 2. Let x, y ∈ V(Q^k_n) be two arbitrary vertices and C be a hamiltonian cycle of Q^k_n. We use d_C(x, y) to denote the distance between x and y on the hamiltonian cycle C. Define l as an integer satisfying d(x, y) ≤ l ≤ (1/2)|V(Q^k_n)|. We prove the following:
• When k = 3 and n ≥ 2, there exists a hamiltonian cycle C of Q^k_n such that d_C(x, y) = l.
• When k ≥ 5 is odd and n ≥ 2, we require that l ∉ S, where S is a set of specific integers. Then there exists a hamiltonian cycle C of Q^k_n such that d_C(x, y) = l.
• When k ≥ 4 is even and n ≥ 2, we require l − d(x, y) to be even. Then there exists a hamiltonian cycle C of Q^k_n such that d_C(x, y) = l.
The result is optimal since the restrictions on l are due to the structure of Q^k_n by definition.