Abstract: In this paper, multiple positive solutions for semipositone discrete eigenvalue problems are obtained by using a three critical points theorem for nondifferentiable functionals.
Abstract: In this paper, the generalized (2+1)-dimensional Calogero-Bogoyavlenskii-Schiff (CBS) equations are investigated. We employ Hirota's bilinear method to obtain the bilinear form of the CBS equations. Then, using the idea of the extended homoclinic test approach (EHTA), some exact soliton solutions, including breather-type solutions, are presented.
Abstract: The object of this work is the probabilistic performance evaluation of safety instrumented systems (SIS), i.e. the average probability of dangerous failure on demand (PFDavg) and the average frequency of dangerous failure (PFH), taking into account the uncertainties related to the different parameters that come into play: failure rate (λ), common cause failure proportion (β), diagnostic coverage (DC), etc. This leads to an accurate and safe assessment of the safety integrity level (SIL) inherent to the safety function performed by such systems. This aim is in keeping with the requirements of the IEC 61508 standard with respect to handling uncertainty. To do this, we propose an approach that combines (1) Monte Carlo simulation and (2) fuzzy sets. Indeed, the first method is appropriate where representative statistical data are available (using the probability density functions of the relevant parameters), while the latter applies in cases characterized by vague and subjective information (using membership functions). The proposed approach is fully supported by a suitable computer code.
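As an illustration of the Monte Carlo side of such an approach, the sketch below propagates uncertainty in a dangerous undetected failure rate λ_DU through the standard 1oo1 approximation PFDavg ≈ λ_DU·T1/2. The lognormal parameters and the one-year proof-test interval are illustrative assumptions, not values from the paper.

```python
import random
import statistics

def pfd_avg(lam_du, proof_test_interval_h):
    # Standard single-channel (1oo1) approximation: PFDavg ~ lambda_DU * T1 / 2
    return lam_du * proof_test_interval_h / 2.0

def monte_carlo_pfd(n=10000, seed=1):
    rng = random.Random(seed)
    T1 = 8760.0  # one-year proof-test interval, in hours (assumption)
    samples = []
    for _ in range(n):
        # Lognormal spread around a nominal lambda_DU of about 1e-6 /h
        # (illustrative uncertainty model, not the paper's data)
        lam = rng.lognormvariate(-13.8, 0.5)
        samples.append(pfd_avg(lam, T1))
    return samples

samples = monte_carlo_pfd()
mean_pfd = statistics.fmean(samples)
```

The resulting distribution of PFDavg values, rather than a single point estimate, is what allows a SIL band to be assigned with a stated confidence.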
Abstract: Over the past decade, biclustering has become a popular data mining technique, not only in the field of biological data analysis but also in other applications such as text mining and market data analysis with high-dimensional two-way datasets. Biclustering clusters both rows and columns of a dataset simultaneously, as opposed to traditional clustering, which clusters either the rows or the columns. It retrieves subgroups of objects that are similar in one subgroup of variables and different in the remaining variables. The Firefly Algorithm (FA) is a recently proposed metaheuristic inspired by the collective behavior of fireflies. This paper provides a preliminary assessment of a discrete version of FA (DFA) for the task of mining coherent, large-volume biclusters from web usage data. The experiments were conducted on two web usage datasets from a public dataset repository, and the performance of DFA was compared with that of another population-based metaheuristic, binary Particle Swarm Optimization (PSO). The results achieved demonstrate the usefulness of DFA in tackling the biclustering problem.
Abstract: In this paper, He's amplitude-frequency formulation is used to obtain a periodic solution for a nonlinear oscillator with a fractional potential. Calculations and computer simulations, in comparison with the exact solution, show that the result obtained is of high accuracy.
Abstract: A fully implicit finite-difference method is proposed for the numerical solution of the one-dimensional coupled nonlinear Burgers' equations on uniform mesh points. The method forms a system of nonlinear difference equations to be solved at each time step. Newton's iterative method has been implemented to solve this assembled nonlinear system of equations. The linear system is solved by the Gauss elimination method with partial pivoting at each iteration of Newton's method. Three test examples are carried out to illustrate the accuracy of the method. Solutions computed by the proposed scheme are compared with analytical solutions and with those already available in the literature by computing the L2 and L∞ errors.
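The core of such a scheme, Newton's method with the inner linear solve done by Gauss elimination with partial pivoting, can be sketched as follows. The toy 2x2 system at the end merely stands in for the actual Burgers' discretization, which is not given in the abstract.

```python
def gauss_solve(A, b):
    # Gaussian elimination with partial pivoting on the augmented matrix
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def newton(F, J, x0, tol=1e-12, maxit=50):
    # Solve F(x) = 0: at each iteration solve J(x) dx = -F(x), update x
    x = list(x0)
    for _ in range(maxit):
        dx = gauss_solve(J(x), [-fi for fi in F(x)])
        x = [xi + di for xi, di in zip(x, dx)]
        if max(abs(d) for d in dx) < tol:
            break
    return x

# Toy system (NOT the Burgers' discretization): x^2 + y^2 = 4, x*y = 1
F = lambda v: [v[0] ** 2 + v[1] ** 2 - 4, v[0] * v[1] - 1]
J = lambda v: [[2 * v[0], 2 * v[1]], [v[1], v[0]]]
root = newton(F, J, [2.0, 0.0])
```

In the actual method the unknown vector would hold the values of both Burgers' variables at all interior mesh points, and the Jacobian would be the banded matrix of the difference scheme.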
Abstract: Let G_{α,β}(γ, δ) denote the class of functions f(z), with f(0) = f′(0) − 1 = 0, which satisfy Re[e^{iδ}{αf′(z) + βzf″(z)}] > γ in the open unit disk D = {z ∈ ℂ : |z| < 1} for some α ∈ ℂ (α ≠ 0), β ∈ ℂ and γ ∈ ℝ (γ ≥ 0). In this paper, we determine some extremal properties, including a distortion theorem and the argument of f′(z).
Abstract: In a graph G, a cycle is a Hamiltonian cycle if it contains all vertices of G. Two Hamiltonian cycles C_1 = 〈u_0, u_1, u_2, ..., u_{n−1}, u_0〉 and C_2 = 〈v_0, v_1, v_2, ..., v_{n−1}, v_0〉 in G are independent if u_0 = v_0 and u_i ≠ v_i for all 1 ≤ i ≤ n−1. In G, a set of Hamiltonian cycles C = {C_1, C_2, ..., C_k} is mutually independent if any two Hamiltonian cycles of C are independent. The mutually independent Hamiltonicity IHC(G) = k means that k is the maximum integer such that there exist k mutually independent Hamiltonian cycles starting from any vertex of G. In this paper, we prove that IHC(C_n × C_n) = 4 for n ≥ 3.
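The independence condition on pairs of cycles can be checked mechanically; a minimal sketch, where cycles are given as vertex sequences u_0, ..., u_{n−1} (the return to u_0 left implicit) and the example uses the 5-cycle C_5 for illustration:

```python
def independent(c1, c2):
    # Same start vertex, and u_i != v_i for every 1 <= i <= n-1
    if len(c1) != len(c2) or c1[0] != c2[0]:
        return False
    return all(u != v for u, v in zip(c1[1:], c2[1:]))

def mutually_independent(cycles):
    # Every unordered pair of cycles must be independent
    return all(independent(a, b)
               for i, a in enumerate(cycles)
               for b in cycles[i + 1:])

# The two traversal directions of C_5 from vertex 0 are independent:
c1 = [0, 1, 2, 3, 4]
c2 = [0, 4, 3, 2, 1]
```

Note that the same trick fails on C_4: traversing [0, 1, 2, 3] and [0, 3, 2, 1] puts vertex 2 at the same position in both cycles, so they are not independent.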
Abstract: Clustering is a process to identify homogeneous groups of objects, called clusters, and is an interesting topic in data mining. Objects in a group or class share similar characteristics. This paper discusses a robust clustering process for image data with two dimension-reduction approaches: two-dimensional principal component analysis (2DPCA) and principal component analysis (PCA). A standard approach to overcome the problem of high dimensionality is dimension reduction, which transforms high-dimensional data into a lower-dimensional space with limited loss of information. One of the most common forms of dimensionality reduction is principal component analysis (PCA). 2DPCA, often called a variant of PCA, treats the image matrices directly as 2D matrices; they do not need to be transformed into vectors, so the image covariance matrix can be constructed directly from the original image matrices. The classical covariance matrix being decomposed is, however, very sensitive to outlying observations. The objective of this paper is to compare the performance of the robust minimum vector variance (MVV) estimator in the two-dimensional projection (2DPCA) and in PCA for clustering arbitrary image data when outliers are hidden in the data set. Simulation studies of robustness and an illustration of image clustering are discussed at the end of the paper.
Abstract: Soft topological spaces are considered as mathematical tools for dealing with uncertainties, and a fuzzy topological space is a special case of the soft topological space. The purpose of this paper is to study soft topological spaces. We introduce some new concepts in soft topological spaces such as soft first-countable spaces, soft second-countable spaces and soft separable spaces, and some basic properties of these concepts are explored.
Abstract: The Einstein vacuum equations, a system of nonlinear partial differential equations (PDEs), are derived from the Weyl metric by using the relation between the Einstein tensor and the metric tensor. The symmetries of the Einstein vacuum equations for static axisymmetric gravitational fields are obtained using the Lie classical method. We have examined the optimal system of vector fields, which is further used to reduce the nonlinear PDEs to nonlinear ordinary differential equations (ODEs). Some exact solutions of the Einstein vacuum equations in general relativity are also obtained.
Abstract: Q^k_n has been shown to be an alternative to the hypercube family. For any even integer k ≥ 4 and any integer n ≥ 2, Q^k_n is a bipartite graph. In this paper, we prove that given any pair of vertices, w and b, from different partite sets of Q^k_n, there exist 2n internally disjoint paths between w and b, denoted by {P_i | 0 ≤ i ≤ 2n−1}, such that ∪_{i=0}^{2n−1} P_i covers all vertices of Q^k_n. The result is optimal since each vertex of Q^k_n has exactly 2n neighbors.
Abstract: The aim of this paper is to investigate the performance of a two-point block method designed for two processors for directly solving non-stiff large systems of higher-order ordinary differential equations (ODEs). The method calculates the numerical solution at two points simultaneously, producing two new equally spaced solution values within a block, which makes it possible to assign the computational task at each time step to a single processor. The algorithm was developed in the C language, and the parallel computation was carried out in a shared-memory parallel environment. Numerical results are given to compare the efficiency of the parallel implementation against the sequential timing. For large problems, the parallel implementation produced a 1.95 speed-up and 98% efficiency on the two processors.
Abstract: A generalized Dirichlet-to-Neumann map is one of the main aspects characterizing a recently introduced method for analyzing linear elliptic PDEs, through which it became possible to couple known and unknown components of the solution on the boundary of the domain without solving in its interior. For its numerical solution, a well-conditioned, quadratically convergent sine-collocation method was developed, which yielded a linear system of equations with the diagonal blocks of its associated coefficient matrix being point diagonal. This structural property, among others, motivated interest in employing iterative methods for its solution. In this work we present a conclusive numerical study of the behavior of classical (Jacobi and Gauss-Seidel) and Krylov subspace (GMRES and Bi-CGSTAB) iterative methods when applied to the solution of the Dirichlet-to-Neumann map associated with Laplace's equation on regular polygons with the same boundary conditions on all edges.
Abstract: The visualization of geographic information on mobile devices has become popular with the widespread use of the mobile Internet. The mobility of these devices brings much convenience to people's lives. Through the add-on location-based services of the devices, people can access timely information relevant to their tasks. However, visual analysis of geographic data on mobile devices presents several challenges due to the small display and restricted computing resources. These limitations on screen size and resources may impair the usability of visualization applications. In this paper, a variable-scale visualization method is proposed to handle the challenge of the small mobile display. By merging multiple scales of information into a single image, the viewer is able to focus on the region of interest while keeping a good grasp of the surrounding context. This is essentially visualizing the map through a fisheye lens. However, the fisheye lens induces undesirable geometric distortion in the periphery, which renders the information there meaningless. The proposed solution is to apply map generalization, which removes excessive information around the periphery, together with an automatic smoothing process that corrects the distortion while keeping the local topology consistent. The proposed method is applied to both artificial and real geographical data for evaluation.
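One possible radial lens of the kind described is the classic graphical fisheye transformation; the formula below (a Sarkar-Brown-style magnification with distortion factor d) is an assumption for illustration, not necessarily the paper's exact lens:

```python
import math

def fisheye(x, y, cx, cy, radius, d=4.0):
    """Graphical fisheye sketch (illustrative, not the paper's lens):
    points near the focus (cx, cy) are magnified, points near the lens
    boundary are compressed, and points outside are left unchanged."""
    dx, dy = x - cx, y - cy
    r = math.hypot(dx, dy)
    if r == 0 or r >= radius:
        return x, y  # focus itself, or outside the lens
    rn = r / radius                      # normalized distance in (0, 1)
    rn2 = (d + 1) * rn / (d * rn + 1)    # fisheye magnification of rn
    s = rn2 * radius / r                 # radial scale factor
    return cx + dx * s, cy + dy * s

# A point halfway to the lens boundary is pushed outward, toward the rim:
x2, y2 = fisheye(0.5, 0.0, 0.0, 0.0, 1.0)
```

It is exactly this outward push that crowds and distorts features near the rim, which is why the abstract pairs the lens with generalization and a smoothing pass in the periphery.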
Abstract: Consider the Gregory integration (G) formula with end corrections, where Δ_h is the forward difference operator with step size h. In this study, we show that the formula can be optimized by minimizing some of the coefficients a_k in the remainder term by particle swarm optimization. Experimental tests show that the optimized formula can be rendered powerful enough for library use.
Abstract: Feature selection has recently been the subject of intensive research in data mining, especially for datasets with a large number of attributes. Recent work has shown that feature selection can have a positive effect on the performance of machine learning algorithms. The success of many learning algorithms, in their attempts to construct models of data, hinges on the reliable identification of a small set of highly predictive attributes. The inclusion of irrelevant, redundant and noisy attributes in the model building phase can result in poor predictive performance and increased computation. In this paper, a novel feature search procedure that utilizes Ant Colony Optimization (ACO) is presented. ACO is a metaheuristic inspired by the behavior of real ants in their search for the shortest paths to food sources. It looks for optimal solutions by considering both local heuristics and previous knowledge. When applied to two different classification problems, the proposed algorithm achieved very promising results.
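A toy version of an ACO-driven feature search illustrates the idea of a per-feature pheromone trail reinforced by good subsets; the scoring function below is a made-up stand-in for a real classifier evaluation, and the whole procedure is a sketch rather than the paper's exact algorithm:

```python
import random

def aco_feature_select(score, n_features, n_ants=15, iters=40,
                       evap=0.2, seed=0):
    """Minimal ACO sketch: each ant includes feature j with probability
    tau_j / (tau_j + 1); pheromone evaporates each iteration and is
    deposited on the best subset found in that iteration."""
    rng = random.Random(seed)
    tau = [1.0] * n_features
    best_subset, best_score = None, float("-inf")
    for _ in range(iters):
        iter_best, iter_score = None, float("-inf")
        for _ in range(n_ants):
            subset = [j for j in range(n_features)
                      if rng.random() < tau[j] / (tau[j] + 1.0)]
            s = score(subset)
            if s > iter_score:
                iter_best, iter_score = subset, s
        if iter_score > best_score:
            best_subset, best_score = iter_best, iter_score
        for j in range(n_features):
            tau[j] *= (1.0 - evap)            # evaporation
            if j in iter_best:                # reinforcement
                tau[j] += iter_score if iter_score > 0 else 0.1
    return best_subset, best_score

# Hypothetical score: features 0-2 are "predictive", extras are penalized.
relevant = {0, 1, 2}
score = lambda sub: len(relevant & set(sub)) - 0.5 * len(set(sub) - relevant)
subset, s = aco_feature_select(score, n_features=10)
```

In a real application the score would be, for example, cross-validated classifier accuracy penalized by subset size, combining the "local heuristic" and "previous knowledge" the abstract mentions.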
Abstract: This paper presents a comparison of the average outgoing quality limit of the MCSP-2-C plan with that of MCSP-C, where MCSP-2-C has been developed from MCSP-C. The parameters used in MCSP-2-C are: i (the clearance number), c (the acceptance number), m (the number of conforming units to be found before allowing c nonconforming units in the sampling inspection), and f1 and f2 (the sampling frequencies at levels 1 and 2, respectively). The average outgoing quality limit (AOQL) values of the two plans were compared, and we found that for all sets of i, r, and c values, MCSP-2-C gives higher values than MCSP-C. For all sets of i, r, and c values, the average outgoing quality values of MCSP-C and MCSP-2-C are similar when p is low or high but differ when p is moderate.
Abstract: This paper describes a new method for affine parameter estimation between image sequences. Usually, parameter estimation is carried out by least squares with a quadratic error function. However, this technique can be sensitive to the presence of outliers. Therefore, parameter estimation techniques for various image processing applications must be robust enough to withstand the influence of outliers. Progressively, robust estimation functions requiring non-quadratic and perhaps non-convex potentials, adopted from the statistics literature, have been used to address this. To steer the optimization of the error function toward a global optimal solution, the minimization can begin with a convex estimator at the coarser level and gradually introduce non-convexity, i.e., move from soft to hard redescending non-convex estimators as the iteration reaches the finer levels of the multiresolution pyramid. A comparison has been made between the performance of the proposed method and the results found individually using two different estimators.
Abstract: Three service providers in competition try to optimize their quality of service/content level and their service access price. However, they have to deal with uncertainty about consumers' preferences. To reduce their uncertainty, they have the opportunity to buy information and to build alliances. We determine the Shapley value, which is a fair way to allocate the grand coalition's revenue between the service providers. Then, we identify the values of β (the consumers' sensitivity coefficient to the quality of service/content) for which allocating the grand coalition's revenue using the Shapley value guarantees the system's stability. For other values of β, we prove that it is possible for the regulator to impose a per-period interest rate maximizing the market coverage under equal allocation rules.