Abstract: Object manipulation techniques in robotics can be
categorized into two major groups: manipulation with grasping and
manipulation without grasping. The aim of this paper is to develop an
object manipulation method that, in addition to being grasp-less,
performs the manipulation task passively. In this method, the linear
and angular positions of the object are changed and its manipulation
path is controlled. The manipulation path is a helical track with
constant radius and incline. The system proposed in this paper has
neither an actuator nor an active controller, so it requires a passive
mechanical intelligence to convey the object from the source state
along the specified path to the goal state. This intelligence is created
by exploiting the geometry of the system components. A general setup
of the system components is considered to satisfy the required
conditions. Then, after kinematic analysis, the detailed dimensions and
geometry of the mechanism are obtained. The kinematic results are
verified by simulation in ADAMS.
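The constant-radius, constant-incline helical path described above admits a simple parametric form. The following is a minimal sketch; the radius and pitch values are assumed for illustration, whereas the paper's actual dimensions come from its kinematic analysis:

```python
import math

def helix_point(theta, radius=0.1, pitch=0.02):
    """Point on a constant-radius, constant-incline helix.

    theta  : rotation angle in radians
    radius : helix radius (assumed illustrative value)
    pitch  : axial rise per radian (assumed illustrative value)
    """
    x = radius * math.cos(theta)
    y = radius * math.sin(theta)
    z = pitch * theta
    return (x, y, z)

# Sample waypoints along one full turn of the manipulation path.
path = [helix_point(2 * math.pi * k / 20) for k in range(21)]
```

Sampling the angle uniformly yields evenly spaced waypoints along the track, which is how such a path would typically be discretized for simulation.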
Abstract: The objective of this research is to forecast the monthly exchange rate between the Thai baht and the US dollar and to compare two forecasting methods: the Box-Jenkins method and Holt's method. Results show that the Box-Jenkins method is the more suitable method for the monthly exchange rate between the Thai baht and the US dollar. The suitable forecasting model is ARIMA(1,1,0) without a constant, and the forecasting equation is Y(t) = Y(t-1) + 0.3691 [Y(t-1) - Y(t-2)], where Y(t) is the value of the time series at time t.
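The reported forecasting equation can be applied recursively to produce multi-step forecasts. Below is a minimal sketch; the two starting exchange-rate values are illustrative, not data from the study:

```python
def arima_110_forecast(history, phi=0.3691, steps=1):
    """Forecasts from the ARIMA(1,1,0) equation without constant:
        Y(t) = Y(t-1) + phi * (Y(t-1) - Y(t-2)).
    `history` must contain at least two observations; multi-step
    forecasts feed each forecast back in as the newest value.
    """
    series = list(history)
    forecasts = []
    for _ in range(steps):
        y_next = series[-1] + phi * (series[-1] - series[-2])
        forecasts.append(y_next)
        series.append(y_next)
    return forecasts

# Illustrative (not actual) baht/USD values:
rates = [34.10, 34.50]
forecast = arima_110_forecast(rates, steps=1)
```

With these inputs the one-step forecast is 34.50 + 0.3691 × 0.40.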
Abstract: Bianchi type cosmological models have been studied
on the basis of Lyra's geometry. Exact solutions have been obtained by
considering a time dependent displacement field for constant
deceleration parameter and varying cosmological term of the
universe. The physical behavior of the different models has been
examined for different cases.
Abstract: MSN used to be the most popular application for
communicating among social networks, but Facebook chat is now the
most popular. Facebook and MSN have similar characteristics,
including usefulness, ease of use, and a similar function, namely the
exchange of information with friends, and Facebook outperforms MSN
in these areas. However, the adoption of Facebook and the
abandonment of MSN have occurred for other reasons. Functions can
be improved, but users' willingness to use an application does not
depend on functionality alone. Flow status has been established to be
crucial to users' adoption of cyber applications and to affect users'
adoption of software applications. If users experience flow in using a
software application, they will enjoy using it frequently, and may even
change their preferred application from an old one to this new one.
However, no investigation has examined choice behavior related to
switching from MSN to Facebook based on a consideration of flow
experiences and functions. This investigation discusses the flow
experiences and functions of social-networking applications. Flow
experience is found to affect perceived ease of use and perceived
usefulness; perceived ease of use influences information exchange
with friends and perceived usefulness; information exchange
influences perceived usefulness but has no effect on flow experience.
Abstract: This paper presents confidence intervals for the effect
size based on the bootstrap resampling method. A meta-analytic
confidence interval for the effect size that is easy to compute is
proposed. A Monte Carlo simulation study was conducted to compare
the performance of the proposed confidence intervals with that of
existing confidence intervals. The best confidence interval method
will have a coverage probability close to 0.95. Simulation results
show that our proposed confidence intervals perform well in terms of
coverage probability and expected length.
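As a generic illustration of the interval construction family discussed here, the following sketch computes a percentile bootstrap confidence interval for a sample mean. The paper's meta-analytic effect-size intervals are more involved; this is only a baseline, with all data synthetic:

```python
import random
import statistics

def percentile_bootstrap_ci(sample, stat=statistics.mean,
                            n_boot=2000, alpha=0.05, rng=None):
    """Percentile bootstrap confidence interval for a statistic:
    resample with replacement, sort the bootstrap statistics, and
    read off the alpha/2 and 1 - alpha/2 quantiles."""
    rng = rng or random.Random(0)
    n = len(sample)
    boots = sorted(
        stat([sample[rng.randrange(n)] for _ in range(n)])
        for _ in range(n_boot)
    )
    lo = boots[int((alpha / 2) * n_boot)]
    hi = boots[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

rng = random.Random(1)
sample = [rng.gauss(0.5, 1.0) for _ in range(50)]  # true mean 0.5
lo, hi = percentile_bootstrap_ci(sample)
```

In a simulation study like the one described, this construction would be repeated over many generated samples and the fraction of intervals covering the true value compared against the nominal 0.95.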
Abstract: Cell volume, together with membrane potential and
intracellular hydrogen ion concentration, is an essential biophysical
parameter for normal cellular activity. Cell volumes can be altered by
osmotically active compounds and extracellular tonicity.
In this study, a simple mathematical model of osmotically induced
cell swelling and shrinking is presented. Emphasis is given to water
diffusion across the membrane. The mathematical description of the
cellular behavior consists of a system of coupled ordinary differential
equations. We compare experimental data of cell volume alterations
driven by differences in osmotic pressure with mathematical
simulations under hypotonic and hypertonic conditions. Implications
for a future model are also discussed.
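A minimal sketch of an osmotically driven volume model of the kind described above, assuming a van 't Hoff relation for the intracellular osmotic pressure and arbitrary units throughout; the paper's actual equations and parameter values may differ:

```python
def simulate_cell_volume(v0=1.0, c_out=1.0, lp_a=0.1,
                         dt=0.01, t_end=50.0):
    """Euler integration of a minimal osmotic water-flux model:
        dV/dt = Lp*A * (pi_in - pi_out),
    with pi_in = n_s / V for a fixed amount of intracellular
    solute n_s (van 't Hoff), all quantities in arbitrary units.
    """
    n_s = v0 * 1.0          # initial internal osmolarity of 1.0
    v = v0
    t = 0.0
    history = [(t, v)]
    while t < t_end:
        pi_in = n_s / v
        v += dt * lp_a * (pi_in - c_out)
        t += dt
        history.append((t, v))
    return history

# Hypotonic bath (c_out < 1): the cell swells toward n_s / c_out.
hist = simulate_cell_volume(c_out=0.5)
final_v = hist[-1][1]
```

Setting c_out > 1 instead reproduces shrinking under hypertonic conditions, mirroring the two regimes compared in the study.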
Abstract: The aim of this paper is to use the matrix representation
of fuzzy soft sets to prove some equalities for fuzzy soft sets based
on set operations.
Abstract: This paper deals with the problem of passivity
analysis for stochastic neural networks with leakage, discrete and
distributed delays. By using delay partitioning technique, free
weighting matrix method and stochastic analysis technique, several
sufficient conditions for the passivity of the addressed neural
networks are established in terms of linear matrix inequalities
(LMIs), in which both the time-delay and its time derivative can be
fully considered. A numerical example is given to show the
usefulness and effectiveness of the obtained results.
Abstract: An image segmentation process based on mathematical morphology is studied in this paper. It is established from the first principles of the morphological process that, although the entire segmentation is a nonlinear signal-processing task, its constituent intermediate steps are linear, bilinear, and conformal transformations, and they give rise to a nonlinear effect only in a cumulative manner.
Abstract: Photoacoustic imaging (PAI) is a non-invasive and
non-ionizing imaging modality that combines the absorption contrast
of light with ultrasound resolution. Laser is used to deposit optical
energy into a target (i.e., optical fluence). Consequently, the target
temperature rises, and then thermal expansion occurs that leads to
generating a PA signal. In general, most image reconstruction
algorithms for PAI assume uniform fluence within an imaging object.
However, it is known that optical fluence distribution within the
object is non-uniform. This could affect the reconstruction of PA
images. In this study, we have investigated the influence of optical
fluence distribution on PA back-propagation imaging using finite
element method. The uniform fluence was simulated as a triangular
waveform within the object of interest. The non-uniform fluence
distribution was estimated by solving light propagation within a
tissue model via Monte Carlo method. The results show that the PA
signal in the case of non-uniform fluence is wider than in the uniform
case by 23%. The frequency spectrum of the PA signal due to the
non-uniform fluence is missing some high-frequency components in
comparison to the uniform case. Consequently, the reconstructed
image with the non-uniform fluence exhibits a strong smoothing
effect.
Abstract: In this paper, a backward semi-Lagrangian scheme
combined with the second-order backward difference formula
is designed to calculate the numerical solutions of nonlinear
advection-diffusion equations. The primary aims of this paper are
to remove any iteration process and to get an efficient algorithm
with the convergence order of accuracy 2 in time. In order to achieve
these objectives, we use the second-order central finite difference and the
B-spline approximations of degree 2 and 3 in order to approximate
the diffusion term and the spatial discretization, respectively. For the
temporal discretization, the second order backward difference formula
is applied. To calculate the numerical solution of the starting point
of the characteristic curves, we use the error correction methodology
developed by the authors recently. The proposed algorithm turns out
to be completely iteration free, which resolves the main weakness
of the conventional backward semi-Lagrangian method. Also, the
adaptability of the proposed method is indicated by numerical
simulations for Burgers’ equations. Throughout these numerical
simulations, it is shown that the numerical results are in good
agreement with the analytic solution and that the present scheme offers
better accuracy in comparison with other existing numerical schemes.
Abstract: In this paper, we introduce a method for improving
the embedded Runge-Kutta-Fehlberg4(5) method. At each integration
step, the proposed method is comprised of two equations for the
solution and the error, respectively. The solution and error are
obtained by solving an initial value problem whose solution has the
information of the error at each integration step. The constructed algorithm
controls both the error and the time step size simultaneously and
possesses good performance in computational cost compared with the
original method. To assess its effectiveness, the EULR problem is
solved numerically.
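For reference, a single step of the classical embedded Runge-Kutta-Fehlberg 4(5) method that this paper sets out to improve can be sketched as follows, using the standard Fehlberg coefficients (this is the baseline scheme, not the paper's modified one):

```python
import math

def rkf45_step(f, t, y, h):
    """One embedded Runge-Kutta-Fehlberg 4(5) step.
    Returns the 4th-order solution, the 5th-order solution, and
    the local error estimate (their difference), which drives
    step-size control."""
    k1 = f(t, y)
    k2 = f(t + h/4, y + h*k1/4)
    k3 = f(t + 3*h/8, y + h*(3*k1/32 + 9*k2/32))
    k4 = f(t + 12*h/13,
           y + h*(1932*k1 - 7200*k2 + 7296*k3)/2197)
    k5 = f(t + h,
           y + h*(439*k1/216 - 8*k2 + 3680*k3/513 - 845*k4/4104))
    k6 = f(t + h/2,
           y + h*(-8*k1/27 + 2*k2 - 3544*k3/2565
                  + 1859*k4/4104 - 11*k5/40))
    y4 = y + h*(25*k1/216 + 1408*k3/2565 + 2197*k4/4104 - k5/5)
    y5 = y + h*(16*k1/135 + 6656*k3/12825 + 28561*k4/56430
                - 9*k5/50 + 2*k6/55)
    return y4, y5, y5 - y4

# Example: y' = y, y(0) = 1, exact solution e^t.
y4, y5, err = rkf45_step(lambda t, y: y, 0.0, 1.0, 0.1)
```

The paper's contribution is to couple such an error estimate with the solution update so that both are controlled simultaneously at each step.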
Abstract: In this article, the problem of estimating distributional moments is considered. A new approach to moment estimation based on the characteristic function is proposed. Using statistical simulation, the author shows that the new approach has some robustness properties. Numerical differentiation is used to calculate the derivatives of the characteristic function. The results confirm that the proposed idea works efficiently and can be recommended for statistical applications.
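The idea of recovering moments from numerical derivatives of the empirical characteristic function can be sketched as follows, using the identity E[X^k] = phi^(k)(0) / i^k and central differences; the step size h is an assumed tuning value, and the data are illustrative:

```python
import cmath

def empirical_cf(t, data):
    """Empirical characteristic function phi(t) = mean of exp(i t x)."""
    return sum(cmath.exp(1j * t * x) for x in data) / len(data)

def moment_via_cf(data, k=1, h=1e-3):
    """k-th raw moment from the k-th numerical derivative of the
    empirical characteristic function at t = 0, via central
    differences: E[X^k] = phi^(k)(0) / i^k."""
    if k == 1:
        d = (empirical_cf(h, data) - empirical_cf(-h, data)) / (2 * h)
    elif k == 2:
        d = (empirical_cf(h, data) - 2 * empirical_cf(0.0, data)
             + empirical_cf(-h, data)) / h**2
    else:
        raise ValueError("only k = 1, 2 sketched here")
    return (d / 1j**k).real

data = [1.0, 2.0, 3.0, 4.0]
m1 = moment_via_cf(data, k=1)   # close to the sample mean 2.5
m2 = moment_via_cf(data, k=2)   # close to the raw second moment 7.5
```

Higher-order moments follow the same pattern with higher-order difference stencils; the choice of h trades truncation error against floating-point cancellation.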
Abstract: Two new algorithms for nonparametric estimation of errors-in-variables models are proposed. The first algorithm is based on a penalized regression spline. The spline is represented as a piecewise-linear function, and for each linear portion an orthogonal regression is estimated; this algorithm is iterative. The second algorithm involves locally weighted regression estimation. When the independent variable is measured with error, such estimation is a complex nonlinear optimization problem. The simulation results show the advantage of the second algorithm under the assumption that the true smoothing-parameter values are known. Nevertheless, using some indexes of fit for smoothing-parameter selection gives similar results but has an oversmoothing effect.
Abstract: The problem of estimating a proportion has important
applications in the field of economics, and in general, in many areas
such as social sciences. A common application in economics is
the estimation of the headcount index. In this paper, we define the
general headcount index as a proportion. Furthermore, we introduce
a new quantitative method for estimating the headcount index. In
particular, we suggest to use the logistic regression estimator for the
problem of estimating the headcount index. Based on a real data set,
results derived from Monte Carlo simulation studies indicate that the
logistic regression estimator can be more accurate than the traditional
estimator of the headcount index.
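A generic version of a logistic-regression estimator of a proportion can be sketched as follows. The data are synthetic, the fit is a plain gradient-descent logistic regression, and the headcount estimate is taken as the mean of the fitted probabilities; the paper's estimator may differ in its details:

```python
import math
import random

def fit_logistic(xs, ys, lr=0.1, epochs=2000):
    """Fit P(y = 1 | x) = sigmoid(a + b*x) by batch gradient
    descent on the average logistic log-loss."""
    a = b = 0.0
    n = len(xs)
    for _ in range(epochs):
        ga = gb = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(a + b * x)))
            ga += (p - y) / n
            gb += (p - y) * x / n
        a -= lr * ga
        b -= lr * gb
    return a, b

rng = random.Random(0)
# y = 1 marks a unit below a hypothetical poverty line;
# x is an auxiliary covariate correlated with y.
xs = [rng.gauss(0, 1) for _ in range(200)]
ys = [1 if rng.random() < 1 / (1 + math.exp(-(x - 0.5))) else 0
      for x in xs]

a, b = fit_logistic(xs, ys)
# Logistic-regression estimator of the proportion (headcount index):
# the mean of the fitted probabilities.
h_hat = sum(1 / (1 + math.exp(-(a + b * x))) for x in xs) / len(xs)
```

At the maximum-likelihood solution with an intercept, the mean fitted probability equals the sample proportion exactly; the gain from such estimators comes when auxiliary information is exploited across a survey design, as in the paper's setting.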
Abstract: The paper is concerned with the existence of solutions
of nonlinear second order neutral stochastic differential inclusions
with infinite delay in a Hilbert space. Sufficient conditions for the
existence are obtained by using a fixed point theorem for condensing
maps.
Abstract: In this paper, an Economic Order Quantity (EOQ) based model for non-instantaneously deteriorating items with Weibull-distributed deterioration and a power demand pattern is presented. In this model, the holding cost per unit of the item per unit time is assumed to be an increasing linear function of the time spent in storage. The retailer is allowed a trade-credit offer by the supplier to encourage buying more items. Shortages are allowed and partially backlogged, with the backlogging rate dependent on the waiting time for the next replenishment. The model minimizes the total inventory cost by finding the optimal time interval and the optimal order quantity. The optimal solution of the model is illustrated with numerical examples. Finally, sensitivity analysis and graphical representations are given to demonstrate the model.
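For orientation, the classical EOQ baseline that such models extend can be sketched as follows; it ignores deterioration, trade credit, and backlogging, and the parameter values are illustrative only:

```python
import math

def classical_eoq(demand_rate, order_cost, holding_cost):
    """Classical EOQ lot size Q* = sqrt(2*D*S/H), minimizing
    ordering plus holding cost per unit time under constant
    demand, no deterioration, and no shortages."""
    return math.sqrt(2 * demand_rate * order_cost / holding_cost)

def total_cost(q, demand_rate, order_cost, holding_cost):
    """Ordering + holding cost per unit time for lot size q."""
    return demand_rate / q * order_cost + holding_cost * q / 2

# Illustrative values: D = 1200 units/yr, S = 50 per order, H = 6/unit/yr.
q_star = classical_eoq(demand_rate=1200, order_cost=50, holding_cost=6)
```

The paper's model replaces each of these simplifying assumptions (adding Weibull deterioration, time-dependent holding cost, trade credit, and partial backlogging), so its cost function and optimality conditions are correspondingly richer.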
Abstract: Coloring the vertices of a graph so that every two
adjacent vertices receive different colors is a very common problem in
graph theory. This is known as proper coloring of graphs. The
number of different proper colorings of a graph with a given
number of colors can be represented by a function called the
chromatic polynomial. Two graphs G and H are said to be
chromatically equivalent if they share the same chromatic
polynomial. A graph G is chromatically unique if G is isomorphic to
H for any graph H that is chromatically equivalent to G. The
study of chromatic equivalence and chromatic uniqueness
is called chromaticity. This paper shows that the wheel W12 is
chromatically unique.
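The chromatic polynomial of a wheel follows from that of a cycle: the hub may take any of the k colors, and the rim is then a cycle colored with the remaining k - 1 colors. A sketch (note that conventions for the index in W12 vary between authors; here the argument m counts rim vertices, so m = 11 is one reading of a 12-vertex wheel):

```python
def chromatic_poly_cycle(m, k):
    """Chromatic polynomial of the cycle C_m at k colors:
    P(C_m, k) = (k-1)^m + (-1)^m * (k-1)."""
    return (k - 1)**m + (-1)**m * (k - 1)

def chromatic_poly_wheel(m, k):
    """Wheel = a hub joined to every vertex of C_m.  The hub takes
    any of the k colors; the rim is then a cycle properly colored
    with the remaining k-1 colors:
    P(W, k) = k * P(C_m, k-1)."""
    return k * chromatic_poly_cycle(m, k - 1)

# Proper colorings of a wheel with an 11-vertex rim using 4 colors:
n_colorings = chromatic_poly_wheel(11, 4)
```

Sanity checks: a wheel over C_3 is K_4, which has no proper 3-coloring, and C_3 itself has 3! = 6 proper 3-colorings.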
Abstract: The aim of this paper is to study the internal
stabilization of the Bernoulli-Euler equation numerically. For this,
we consider a square plate subjected to a feedback/damping force
distributed only in a subdomain. An algorithm for obtaining an
approximate solution to this problem was proposed and implemented.
The numerical method used was the Finite Difference Method.
Numerical simulations were performed and showed the behavior of
the solution, confirming the theoretical results that have already been
proved in the literature. In addition, we studied the validation of the
numerical scheme proposed, followed by an analysis of the numerical
error; and we conducted a study on the decay of the associated energy.
Abstract: Natural resources management, including water resources, requires reliable estimation of time-variant environmental parameters. Small improvements in the estimation of environmental parameters would have great effects on management decisions. Noise reduction using wavelet techniques is an effective approach to the preprocessing of practical data sets. The predictability enhancement of river flow time series is assessed using fractal approaches before and after applying wavelet-based preprocessing. Time series correlation and persistency, the minimum sufficient length for training the predicting model, and the maximum valid length of predictions were also investigated through a fractal assessment.
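A minimal sketch of wavelet-based noise reduction of the kind used as preprocessing here, implemented as a single-level Haar transform with soft thresholding in pure Python; the study's actual choice of wavelet, decomposition depth, and threshold is not specified in the abstract:

```python
def haar_transform(x):
    """One level of the orthonormal Haar wavelet transform:
    returns approximation and detail coefficients (len(x) even)."""
    s = 2 ** 0.5
    approx = [(x[2*i] + x[2*i + 1]) / s for i in range(len(x) // 2)]
    detail = [(x[2*i] - x[2*i + 1]) / s for i in range(len(x) // 2)]
    return approx, detail

def haar_inverse(approx, detail):
    """Inverse of haar_transform."""
    s = 2 ** 0.5
    out = []
    for a, d in zip(approx, detail):
        out.extend([(a + d) / s, (a - d) / s])
    return out

def soft_threshold(coeffs, thr):
    """Shrink each coefficient toward zero by thr."""
    return [max(abs(v) - thr, 0.0) * (1 if v >= 0 else -1)
            for v in coeffs]

def denoise(x, thr=0.5):
    """Single-level Haar denoising: threshold the detail
    coefficients and reconstruct."""
    approx, detail = haar_transform(x)
    return haar_inverse(approx, soft_threshold(detail, thr))

noisy = [1.0, 1.2, 1.1, 0.9, 3.0, 3.1, 2.9, 3.2]
smooth = denoise(noisy)
```

In practice a multi-level transform with a longer wavelet (e.g. a Daubechies family member) and a data-driven threshold would be used; the structure of the preprocessing step is the same.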