Abstract: This work proposes a multi-objective mathematical programming approach to select the appropriate supply network elements. A multi-item, multi-objective production-distribution inventory model is formulated with the relevant constraints in a fuzzy environment; the unit cost is treated as a fuzzy quantity. The inventory model and the warehouse location model are combined to formulate the production-distribution inventory model. Warehouse location is important in a supply chain network: in particular, a company that maintains many selling stores cannot maintain an individual secondary warehouse near each store, so maintaining the optimum number of secondary warehouses is important. The combined mathematical model is therefore formulated to reduce the total expenditure of the organization by arranging a network with the minimum number of secondary warehouses. A numerical example illustrates the proposed model.
Abstract: As DNA microarray data contain a relatively small sample size compared to the number of genes, high-dimensional models are often employed. In high-dimensional models, the selection of the tuning parameter (or penalty parameter) is often one of the crucial parts of the modeling. Cross-validation is one of the most common methods for tuning parameter selection; it selects the parameter value with the smallest cross-validated score. However, selecting a single value as an ‘optimal’ value for the parameter can be very unstable due to sampling variation, since the sample sizes of microarray data are often small. Our approach is to first choose multiple candidates for the tuning parameter and then average the candidates with different weights depending on their performance. The additional step of estimating the weights and averaging the candidates rarely increases the computational cost, while it can considerably improve traditional cross-validation. We show, via real and simulated data sets, that the value selected by the suggested methods often leads to more stable parameter selection as well as improved detection of significant genetic variables compared to traditional cross-validation.
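The candidate-averaging step can be sketched as follows. This is a minimal illustration using ridge regression and an inverse-CV-error weighting; the weighting scheme and the number of retained candidates are illustrative assumptions, not the paper's exact method.

```python
import numpy as np

def kfold_cv_scores(X, y, lambdas, k=5, seed=0):
    """Cross-validated MSE of ridge regression for each candidate lambda."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    folds = rng.permutation(n) % k          # random fold assignment
    scores = np.zeros(len(lambdas))
    for j, lam in enumerate(lambdas):
        err = 0.0
        for f in range(k):
            tr, te = folds != f, folds == f
            # closed-form ridge solution on the training fold
            beta = np.linalg.solve(X[tr].T @ X[tr] + lam * np.eye(p),
                                   X[tr].T @ y[tr])
            err += np.mean((y[te] - X[te] @ beta) ** 2)
        scores[j] = err / k
    return scores

def averaged_lambda(lambdas, scores, m=5):
    """Average the m best candidates, weighted by inverse CV error
    (one plausible weighting; the paper's weights may differ)."""
    best = np.argsort(scores)[:m]
    w = 1.0 / scores[best]
    return float(np.sum(w * lambdas[best]) / np.sum(w))
```

Because the CV scores for the whole grid are computed anyway, the averaging step adds essentially no extra cost, which matches the claim above.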
Abstract: Bianchi type cosmological models have been studied on the basis of Lyra’s geometry. Exact solutions have been obtained by considering a time-dependent displacement field for a constant deceleration parameter and a varying cosmological term of the universe. The physical behavior of the different models has been examined for different cases.
Abstract: The objective of this research is to forecast the monthly exchange rate between the Thai baht and the US dollar and to compare two forecasting methods: the Box-Jenkins method and Holt’s method. Results show that the Box-Jenkins method is the more suitable of the two for the monthly exchange rate between the Thai baht and the US dollar. The suitable forecasting model is ARIMA(1,1,0) without constant, and the forecasting equation is Yt = Yt-1 + 0.3691(Yt-1 - Yt-2), where Yt is the value of the series at time t.
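The fitted equation can be applied directly for a one-step-ahead forecast; a minimal sketch (the sample values in the usage are illustrative, not actual exchange rates):

```python
def arima_110_forecast(y, phi=0.3691):
    """One-step-ahead forecast from the fitted ARIMA(1,1,0) without
    constant: Y_t = Y_{t-1} + phi * (Y_{t-1} - Y_{t-2})."""
    if len(y) < 2:
        raise ValueError("need at least two past observations")
    return y[-1] + phi * (y[-1] - y[-2])
```

For example, with the two most recent observations 30.0 and 31.0, the forecast is 31.0 + 0.3691 × 1.0 = 31.3691.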
Abstract: In this work we present a family of new convergent splitting methods of high order with nonnegative substeps, a feature that allows their application to irreversible problems. The methods consist of affine combinations of results obtained with Lie-Trotter integrators of different step sizes. Some examples are compared with symplectic methods, in particular on a pair of semilinear differential equations. The number of basic integrations required is comparable with that of symplectic integrators, but this technique allows the computations to be carried out in parallel, thus reducing run times, which we exemplify by exhibiting some implementations with simple schemes; the approach is modular and scalable.
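A minimal instance of such an affine combination is sketched below, under the assumption that the family includes the Richardson-type combination 2·Φ_{h/2}∘Φ_{h/2} − Φ_h of Lie-Trotter steps (the paper's actual coefficients may differ). All substeps use nonnegative step sizes, so the scheme stays applicable to irreversible problems; only the combination coefficients are signed.

```python
import numpy as np

# Split the linear problem y' = (A + B) y, with A diagonal (exact flow is
# a componentwise exponential) and B strictly nilpotent (exact flow I + hB).
A = np.diag([-1.0, -2.0])
B = np.array([[0.0, 1.0], [0.0, 0.0]])

def flow_A(h, y):
    return np.exp(np.diag(A) * h) * y

def flow_B(h, y):
    return y + h * (B @ y)        # exact, since B @ B = 0

def lie_trotter(h, y):
    """First-order Lie-Trotter step: e^{hB} e^{hA} y."""
    return flow_B(h, flow_A(h, y))

def affine_step(h, y):
    """Affine combination 2*Phi_{h/2}^2 - Phi_h, which cancels the leading
    local error term and raises the order to two."""
    return 2.0 * lie_trotter(h / 2, lie_trotter(h / 2, y)) - lie_trotter(h, y)
```

The composed half-step result and the full-step result are independent of each other, which is what permits the parallel evaluation mentioned above.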
Abstract: Particle swarm optimization (PSO) is becoming one of the most important swarm intelligence paradigms for solving global optimization problems. Although some progress has been made to improve PSO algorithms over the last two decades, additional work is still needed to balance parameters to achieve better numerical properties of accuracy, efficiency, and stability. In the optimal PSO algorithm, the optimal weightings of (√5 − 1)/2 and (3 − √5)/2 are used for the cognitive factor and the social factor,
respectively. By the same token, the same optimal weightings have
been applied for intensification searches and diversification searches,
respectively. Perturbation and constriction effects are optimally
balanced. Simulations of the de Jong, the Rosenbrock, and the
Griewank functions show that the optimal PSO algorithm indeed
achieves better numerical properties and outperforms the canonical
PSO algorithm.
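A minimal PSO sketch using the golden-ratio weightings from the abstract as cognitive and social factors on the de Jong (sphere) function; the inertia schedule, bounds and swarm size here are illustrative choices, not the paper's exact settings.

```python
import numpy as np

PHI1 = (np.sqrt(5) - 1) / 2   # ~0.618, cognitive weighting (from the abstract)
PHI2 = (3 - np.sqrt(5)) / 2   # ~0.382, social weighting

def pso_sphere(dim=2, n_particles=20, iters=200, seed=0):
    """Minimal PSO on the de Jong (sphere) function f(x) = sum(x^2)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pval = np.sum(x ** 2, axis=1)
    g = pbest[np.argmin(pval)].copy()
    for t in range(iters):
        w = 0.9 - 0.5 * t / iters          # linearly decreasing inertia (assumed)
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + PHI1 * r1 * (pbest - x) + PHI2 * r2 * (g - x)
        x = x + v
        f = np.sum(x ** 2, axis=1)
        improved = f < pval
        pbest[improved], pval[improved] = x[improved], f[improved]
        g = pbest[np.argmin(pval)].copy()
    return g, float(pval.min())
```

Note that the two weightings sum to 1, so the total acceleration is fixed while its split between intensification (cognitive) and diversification (social) follows the golden ratio.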
Abstract: Fuzzy regression models are useful for investigating
the relationship between explanatory variables and responses in fuzzy
environments. To overcome the deficiencies of previous models and
increase the explanatory power of fuzzy data, the graded mean
integration (GMI) representation is applied to determine
representative crisp regression coefficients. A fuzzy regression model
is constructed based on the modified dissemblance index (MDI),
which can precisely measure the actual total error. Compared with previous studies, results from commonly used test examples, evaluated with the proposed MDI and a distance criterion, show that the proposed fuzzy linear regression model has higher explanatory power and forecasting accuracy.
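For triangular fuzzy numbers, the GMI representation has the closed form (a + 4b + c)/6. Below is a sketch of how representative crisp coefficients could be obtained from GMI-defuzzified observations; the full MDI-based model in the paper involves more than this plain least-squares step.

```python
import numpy as np

def gmi_triangular(a, b, c):
    """Graded mean integration representation of a triangular fuzzy
    number (a, b, c): (a + 4b + c) / 6."""
    return (a + 4 * b + c) / 6

def gmi_crisp_regression(X_fuzzy, y_fuzzy):
    """Fit crisp coefficients after defuzzifying triangular fuzzy
    observations with GMI (a sketch of the representative-coefficient
    idea; rows of X_fuzzy are lists of (a, b, c) triples)."""
    X = np.array([[gmi_triangular(*t) for t in row] for row in X_fuzzy])
    y = np.array([gmi_triangular(*t) for t in y_fuzzy])
    X1 = np.column_stack([np.ones(len(X)), X])     # add intercept column
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return beta
```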
Abstract: In this paper, numerous robust fitting procedures are considered for estimating spatial variograms. In spatial statistics, the conventional variogram fitting procedure (non-linear weighted least squares) suffers from the outlier problem that has plagued this method from its inception. Even a 3-parameter model, like the variogram, can be adversely affected by a single outlier. This paper uses Hogg-type adaptive procedures to select an optimal score function for a rank-based estimator for these non-linear models. Numerical examples and simulation studies demonstrate the robustness, utility, efficiency, and validity of these estimates.
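The conventional baseline that the robust procedures target can be sketched as follows: a 3-parameter spherical variogram fitted by weighted least squares with pair-count weights. The spherical model and the weighting are standard choices assumed for illustration; the paper's rank-based estimators replace this criterion.

```python
import numpy as np
from scipy.optimize import curve_fit

def spherical(h, nugget, psill, a):
    """Spherical variogram model (3 parameters: nugget, partial sill, range a)."""
    h = np.asarray(h, float)
    inside = nugget + psill * (1.5 * h / a - 0.5 * (h / a) ** 3)
    return np.where(h < a, inside, nugget + psill)

def wls_fit(h, gamma_hat, n_pairs, p0=(0.1, 1.0, 1.0)):
    """Non-linear weighted least-squares fit of the empirical variogram,
    weighting each lag by its pair count (the non-robust baseline)."""
    popt, _ = curve_fit(spherical, h, gamma_hat, p0=p0,
                        sigma=1.0 / np.sqrt(n_pairs))
    return popt
```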
Abstract: In this paper, we present a binary cat swarm optimization for solving the set covering problem. The set covering problem is a well-known NP-hard problem with many practical applications, including scheduling, production planning and location problems. Binary cat swarm optimization is a recent swarm metaheuristic technique based on the behavior of discrete cats. Domestic cats show the ability to hunt and are curious about moving objects; the cats have two modes of behavior: seeking mode and tracing mode. We illustrate this approach with 65 instances of the problem from the OR-Library. Moreover, we solve this problem with 40 new binarization techniques and select the technique with the best results. Finally, we compare the results obtained in previous studies with the new binarization technique, that is, with roulette wheel as transfer function and V3 as discretization technique.
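The V3 function is one of the standard V-shaped family, V3(x) = |x/√(1+x²)|. A sketch of how such a function converts continuous velocities into bit flips follows; the complement rule below is the common V-shaped discretization, and the paper's exact pairing of roulette wheel selection with V3 may differ in detail.

```python
import numpy as np

def v3_transfer(velocity):
    """V3 (V-shaped) function: maps a continuous velocity to a
    flip probability in [0, 1)."""
    return np.abs(velocity / np.sqrt(1.0 + velocity ** 2))

def binarize(velocity, x, rng):
    """Complement-rule discretization: with probability V3(v), flip the
    bit; otherwise keep it (a common V-shaped scheme, assumed here)."""
    p = v3_transfer(velocity)
    flip = rng.random(x.shape) < p
    return np.where(flip, 1 - x, x)
```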
Abstract: A Samurai Sudoku consists of five overlapping Sudoku square designs, each having nine treatments that appear in each row (column or sub-block) only once. Two or more Samurai designs can be joined together to give an extended Samurai design. In addition, two Samurai designs, each containing five Sudoku square designs, can be mutually orthogonal (Graeco): if we superimpose two Samurai designs and obtain each pair of Latin and Greek letters in every row (column or sub-block) of the five Sudoku designs only once, then we have a Graeco Samurai design. In this paper, simple methods of constructing Samurai designs and mutually orthogonal Samurai designs are proposed. In addition, linear models and methods of data analysis for the designs are proposed.
Abstract: We present in this work our model of road traffic emissions (line sources) and of the dispersion of these emissions, named DISPOLSPEM (Dispersion of Poly Sources and Pollutants Emission Model). In its emission part, the model was designed to keep the bottom-up and top-down approaches consistent. It also makes it possible to generate emission inventories from a reduced set of input parameters, adapted to the conditions existing in Morocco and in other developing countries. Although several simplifications are made, the performance of the model is preserved. A further important advantage of the model is that it allows calculating the uncertainty of the emission rate with respect to each of the input parameters. In the dispersion part of the model, an improved line source model has been developed, implemented and tested against a reference solution. It improves on the accuracy of previous line-source Gaussian plume formulas without being too demanding in terms of computational resources. In the case study presented here, the biggest errors were associated with the ends of line source sections; these errors are canceled by adjacent sections of line sources during the simulation of a road network. In cases where the wind is parallel to the source line, combining discretized point-source and analytical line-source formulas remarkably reduces the error. Because this combination is applied only for a small number of wind directions, it should not excessively increase the calculation time.
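The discretized half of the combined approach can be sketched as follows: the line source is split into point sources whose standard Gaussian plume contributions are summed at the receptor. The fixed σ values, release height, receptor height and wind direction below are illustrative assumptions; DISPOLSPEM's analytical line-source formulas are not reproduced here.

```python
import numpy as np

def point_plume(q, u, x, y, z, h, sigma_y, sigma_z):
    """Standard Gaussian plume concentration of a point source of strength q
    at downwind distance x, crosswind offset y, receptor height z; h is the
    release height. sigma_y, sigma_z are the dispersion parameters at x
    (passed in directly; real models derive them from stability classes)."""
    if x <= 0:
        return 0.0                       # no contribution upwind of the source
    lateral = np.exp(-y ** 2 / (2 * sigma_y ** 2))
    vertical = (np.exp(-(z - h) ** 2 / (2 * sigma_z ** 2))
                + np.exp(-(z + h) ** 2 / (2 * sigma_z ** 2)))   # ground reflection
    return q / (2 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

def line_source(q_per_m, u, receptor, p0, p1, n=50, h=0.5, sy=10.0, sz=5.0):
    """Discretized line source: split the segment p0-p1 into n point sources.
    Wind is assumed along +x, with fixed sigma values for simplicity."""
    pts = np.linspace(p0, p1, n)
    seg = np.linalg.norm(np.array(p1) - np.array(p0)) / n
    c = 0.0
    for px, py in pts:
        dx, dy = receptor[0] - px, receptor[1] - py
        c += point_plume(q_per_m * seg, u, dx, dy, z=1.5, h=h,
                         sigma_y=sy, sigma_z=sz)
    return c
```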
Abstract: This study solves a phylogeny problem by using modified wild dog pack optimization. The least squares error is considered as the cost function to be minimized. Therefore, in each iteration, new distance matrices based on the constructed trees are calculated and used to select the alpha dog. To test the suggested algorithm, ten homologous genes were selected and collected from the National Center for Biotechnology Information (NCBI) databanks (16S, 18S, 28S, Cox 1, ITS1, ITS2, ETS, ATPB, Hsp90, and STN). The data are divided into three categories: 50 taxa, 100 taxa and 500 taxa. The empirical results show that the proposed algorithm is more reliable and accurate than the other implemented methods.
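The least-squares cost that drives the alpha-dog selection can be written down directly; a minimal sketch comparing an observed distance matrix with the path-length distances implied by a candidate tree:

```python
import numpy as np

def ls_cost(d_obs, d_tree):
    """Least-squares error between the observed pairwise distance matrix and
    the distances implied by a candidate tree. The candidate with the
    smallest cost would be selected as the alpha dog."""
    iu = np.triu_indices_from(d_obs, k=1)   # each unordered pair once
    return float(np.sum((d_obs[iu] - d_tree[iu]) ** 2))
```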
Abstract: In this paper, we present a Multi-Objective Random Drift Particle Swarm Optimization algorithm (MORDPSO-CD) based on RDPSO and crowding-distance sorting to improve convergence and distribution with less computational cost. MORDPSO-CD exploits RDPSO to approach the true Pareto optimal solutions quickly, and adopts the crowding-distance sorting technique to update and maintain the archived optimal solutions. Introducing the crowding-distance technique into MORDPSO helps the leader particles ultimately find the true Pareto front. The simulation results reveal that the proposed algorithm has better convergence and distribution.
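Assuming the crowding distance follows the standard NSGA-II definition (the abstract does not spell out a variant), the archive-maintenance measure can be sketched as:

```python
import numpy as np

def crowding_distance(objs):
    """Crowding distance of each solution in an objective matrix
    (rows = solutions, columns = objectives). Boundary solutions get
    infinite distance so they are always retained in the archive."""
    n, m = objs.shape
    d = np.zeros(n)
    for j in range(m):
        order = np.argsort(objs[:, j])
        d[order[0]] = d[order[-1]] = np.inf
        span = objs[order[-1], j] - objs[order[0], j]
        if span == 0:
            continue                      # degenerate objective, skip
        for k in range(1, n - 1):
            d[order[k]] += (objs[order[k + 1], j]
                            - objs[order[k - 1], j]) / span
    return d
```

Solutions with larger crowding distance lie in sparser regions of the front, so keeping them preserves the distribution of the archive.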
Abstract: The construction of Intensity-Duration-Frequency (IDF) curves is one of the most common and useful tools for designing hydraulic structures and for providing a mathematical relationship between rainfall characteristics. IDF curves, especially those in Peninsular Malaysia, are often built using moving windows of rainfall. However, these windows do not represent actual rainfall events, since the duration of rainfall is usually prefixed. Hence, instead of using moving windows, this study aims to find regionalized distributions for IDF curves of extreme rainfall based on storm events. A homogeneity test is performed on the annual maxima of storm intensities to identify homogeneous regions of storms in Peninsular Malaysia. The L-moment method is then used to regionalize the Generalized Extreme Value (GEV) distribution of these annual maxima, and IDF curves are subsequently constructed using the regional distributions. The differences between the IDF curves obtained and those found using at-site GEV distributions are observed through the coefficient of variation of the root mean square error, the mean percentage difference and the coefficient of determination. The small differences imply that the construction of IDF curves could be simplified by finding a general probability distribution for each region. This will also help in constructing IDF curves for sites with no rainfall station.
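Once regional GEV parameters are available, each IDF point is a return level from the GEV quantile function. A sketch using Hosking's parameterization (location ξ, scale α, shape κ ≠ 0), which is the convention usual in L-moment work:

```python
import math

def gev_quantile(xi, alpha, kappa, T):
    """Return level (e.g. storm intensity) for return period T years from a
    GEV with location xi, scale alpha and shape kappa != 0 (Hosking's
    convention): x_T = xi + (alpha / kappa) * (1 - (-ln(1 - 1/T)) ** kappa).
    Evaluating this for each duration's regional GEV gives one IDF point."""
    y = -math.log(1.0 - 1.0 / T)
    return xi + alpha / kappa * (1.0 - y ** kappa)
```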
Abstract: Storm Event Analysis (SEA) provides a method to define rainfall as storm events, where each storm has its own amount and duration. By modelling the daily probability of different types of storms, the onset, offset and cycle of rainfall seasons can be determined and investigated. Furthermore, researchers in the field of meteorology will be able to study the dynamical characteristics of rainfall and make predictions for future reference. In this study, four categories of storms (short, intermediate, long and very long) are introduced based on the length of storm duration. Daily probability models of storms are built for these four categories in Peninsular Malaysia. The models are constructed by using the Bernoulli distribution and by applying linear regression on the first Fourier harmonic equation. From the models obtained, it is found that the daily probability of storms in the eastern part of Peninsular Malaysia shows a unimodal pattern, with a high probability of rain beginning at the end of the year and lasting until early the next year. This is very likely due to the Northeast monsoon season, which occurs from November to March every year. Meanwhile, short and intermediate storms in other regions of Peninsular Malaysia exhibit a bimodal cycle due to the two inter-monsoon seasons. Overall, these models indicate that Peninsular Malaysia can be divided into four distinct regions based on the daily pattern of the probability of various storm events.
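The Bernoulli-plus-first-harmonic modelling step can be sketched as linear regression of the 0/1 daily occurrence indicator on the first Fourier harmonic; the period and the synthetic data in the usage are illustrative assumptions.

```python
import numpy as np

def fit_first_harmonic(days, occurred, period=365.25):
    """Fit the daily storm-occurrence probability with the first Fourier
    harmonic, p(t) = a0 + a1*cos(2*pi*t/T) + b1*sin(2*pi*t/T), by ordinary
    linear regression on the 0/1 occurrence indicator (a sketch of the
    modelling step; the study's exact estimation details may differ)."""
    t = 2 * np.pi * np.asarray(days) / period
    X = np.column_stack([np.ones_like(t), np.cos(t), np.sin(t)])
    coef, *_ = np.linalg.lstsq(X, np.asarray(occurred, float), rcond=None)
    return coef                                    # (a0, a1, b1)

def predict(coef, day, period=365.25):
    """Fitted daily probability for a given day of the cycle."""
    t = 2 * np.pi * day / period
    return coef[0] + coef[1] * np.cos(t) + coef[2] * np.sin(t)
```

A unimodal annual cycle corresponds to a single dominant peak of this harmonic; a bimodal cycle would need a second harmonic, so the four-category classification above can also be read off from the fitted coefficients.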
Abstract: Several experiments are conducted in different environments, such as locations or periods (seasons), with identical treatments in each experiment, purposely to study the interaction between the treatments and the environments or periods. The commonly used designs of experiments for this purpose are the randomized block design, Latin square design, balanced incomplete block design, Youden design, and designs with one or more factors. The interest is in carrying out a combined analysis of the data from these multi-environment experiments, instead of analyzing each experiment separately. This paper proposes a combined analysis of experiments conducted via Sudoku square designs of odd order with the same experimental treatments.
Abstract: The piecewise polynomial regression model is a very flexible model for modeling data. When the piecewise polynomial regression model is fitted to data, its parameters are generally unknown. This paper studies the parameter estimation problem for the piecewise polynomial regression model using the Bayesian method. Unfortunately, the Bayes estimator cannot be found analytically, so a reversible jump MCMC algorithm is proposed to solve this problem. The reversible jump MCMC algorithm generates a Markov chain that converges to the limiting distribution, namely the posterior distribution of the piecewise polynomial regression model parameters. The resulting Markov chain is used to calculate the Bayes estimator for these parameters.
Abstract: In this paper, a numerical approximate Laplace transform inversion algorithm based on Chebyshev polynomials of the second kind is developed using an odd cosine series. The technique has been tested on three different functions and works efficiently. The illustrations show that the newly developed numerical inverse Laplace transform is very close to the classical analytic inverse Laplace transform.
Abstract: In this paper, applying the frequency domain approach, a delayed competitive web-site system is investigated. By choosing the parameter α as a bifurcation parameter, it is found that a Hopf bifurcation occurs as α passes a critical value; that is, a family of periodic solutions bifurcates from the equilibrium when the bifurcation parameter exceeds the critical value. Some numerical simulations are included to justify the theoretical analysis. Finally, the main conclusions are given.
Abstract: Course timetabling problems occur every semester in a university and involve the allocation of resources (subjects, lecturers and students) to a number of fixed rooms and timeslots. The assignment is carried out so that there are no conflicts among rooms, students and lecturers, while fulfilling a range of constraints. The constraints consist of rules and policies set up by the universities as well as lecturers’ and students’ preferences for courses to be allocated to specific timeslots. This paper focuses specifically on the preferences in the course timetabling problem at one of the public universities in Malaysia. These demands are incorporated into our existing mathematical model to make it more general and widely applicable. We have distributed questionnaires to a number of lecturers and students of the university to investigate their demands and preferences for their desired course timetable. We classify the preferences and convert them into constraints to construct one mathematical model that can produce such a timetable.