Abstract: Both minimum energy consumption and smoothness, which is quantified as a function of jerk, are generally required in many dynamic systems, such as automobiles and pick-and-place robot manipulators that handle fragile equipment. Nevertheless, many researchers concern themselves solely with either the minimum-energy or the minimum-jerk trajectory. This paper proposes a simple yet interesting approach that combines the minimum-energy and indirect minimum-jerk formulations in designing a time-dependent system, yielding an alternative optimal solution. Extremal solutions for the cost functions of minimum energy, minimum jerk, and their combination are found using dynamic optimization methods together with numerical approximation. This allows us to simulate and compare, visually and statistically, the time histories of the state inputs produced by the combined minimum-energy-and-jerk design. The numerical solutions of the minimum direct-jerk problem and the combined energy-and-jerk problem are exactly the same, while the minimum-energy problem yields a similar solution, especially in terms of its tendency.
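A minimal sketch of this kind of combined design is given below, assuming a discretized single-axis trajectory, a weighted sum of squared acceleration (energy) and squared jerk as the cost, and simple endpoint constraints; none of these choices are taken from the paper itself.

```python
# Hedged sketch: direct transcription of a combined minimum-energy /
# minimum-jerk trajectory problem (weights, horizon, and boundary
# conditions are assumptions, not the paper's values).
import numpy as np
from scipy.optimize import minimize

T, N = 1.0, 60
dt = T / (N - 1)
w_energy, w_jerk = 1.0, 1.0              # relative weighting of the two costs

def cost(x):
    a = np.gradient(np.gradient(x, dt), dt)   # acceleration ~ control effort
    j = np.gradient(a, dt)                    # jerk
    return dt * (w_energy * np.sum(a**2) + w_jerk * np.sum(j**2))

cons = [{'type': 'eq', 'fun': lambda x: x[0]},         # start position 0
        {'type': 'eq', 'fun': lambda x: x[-1] - 1.0}]  # end position 1
res = minimize(cost, np.linspace(0.0, 1.0, N), constraints=cons,
               method='SLSQP')
```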
Abstract: This paper describes an automatic algorithm to restore
the shape of three-dimensional (3D) left ventricle (LV) models created
from magnetic resonance imaging (MRI) data using a geometry-driven
optimization approach. Our basic premise is to restore the LV shape
such that the LV epicardial surface is smooth after the restoration. A
geometrical measure known as the Minimum Principle Curvature (κ2)
is used to assess the smoothness of the LV. This measure is used to
construct the objective function of a two-step optimization process.
The objective of the optimization is to achieve a smooth epicardial
shape by iterative in-plane translation of the MRI slices.
Quantitatively, this yields a minimum sum of the magnitudes of κ2 over the regions where κ2 is negative. A limited-memory quasi-Newton algorithm,
L-BFGS-B, is used to solve the optimization problem. We tested our
algorithm on an in vitro theoretical LV model and 10 in vivo
patient-specific models which contain significant motion artifacts. The
results show that our method is able to automatically restore the shape
of LV models back to smoothness without altering the general shape of
the model. The magnitudes of in-plane translations are also consistent
with existing registration techniques and experimental findings.
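The following is a self-contained sketch of the idea, assuming slice centroids in place of the full epicardial mesh: the two-step structure and the true κ2-based objective are replaced by a simple bending proxy, but the decision variables (per-slice in-plane translations) and the L-BFGS-B solver match the description above.

```python
# Self-contained sketch: restore slice alignment by minimizing a smoothness
# proxy over per-slice in-plane translations with L-BFGS-B. The real method
# evaluates the minimum principal curvature kappa_2 on the epicardial surface.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n_slices = 10
observed = rng.normal(0.0, 2.0, (n_slices, 2))   # centroids with artifacts

def objective(translations):
    centers = observed + translations.reshape(n_slices, 2)
    bend = np.diff(centers, n=2, axis=0)          # discrete second difference
    return np.sum(np.linalg.norm(bend, axis=1))   # proxy for sum of |kappa_2|

res = minimize(objective, np.zeros(2 * n_slices), method='L-BFGS-B')
restored = observed + res.x.reshape(n_slices, 2)
```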
Abstract: Statistical distributions are used to model and explain the nature of various types of data sets. Although these distributions are mostly unimodal, it is quite common to see multiple modes in the observed distribution of the underlying variables, which makes precise modeling unrealistic. The observed data may lack smoothness not necessarily because of randomness; the cause could also be non-randomness, resulting in zigzag curves, oscillations, humps, etc. The present paper argues that trigonometric functions, which have not been used in the probability functions of distributions so far, have the potential to address this if incorporated in the distribution appropriately. A simple distribution involving trigonometric functions, named the Sinoform distribution, is illustrated in the paper with a data set. The paper demonstrates the importance of trigonometric functions, which have the capacity to make statistical distributions exotic. They admit multiple modes, oscillations, and zigzag curves in the density, which could be suitable for explaining the underlying nature of selected data sets.
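Since the paper's exact density is not reproduced here, the sketch below only illustrates the general mechanism: grafting a sine term onto a base density produces multiple modes and oscillations while the function remains a valid pdf; the amplitude and frequency are arbitrary choices.

```python
# Illustration only: a sinusoidally modulated uniform density. With |a| <= 1
# and an integer number of periods on [0, 1], the sine term integrates to
# zero, so the modulated form is still a proper pdf, yet it has several modes.
import numpy as np

a, b = 0.8, 6 * np.pi        # assumed amplitude and frequency (3 full periods)

def sinoform_pdf(x):
    return np.where((x >= 0) & (x <= 1), 1.0 + a * np.sin(b * x), 0.0)

xs = np.linspace(0.0, 1.0, 1001)
dx = xs[1] - xs[0]
print(np.sum(sinoform_pdf(xs)) * dx)   # approximately 1: a valid density
```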
Abstract: Shrunken patterning for integrated device manufacturing requires surface cleanliness and surface smoothness in wet chemical processing [1]. It is necessary to control all process parameters precisely, especially for the common cleaning technique, the RCA clean (SC-1 and SC-2) [2]. In this paper the characteristics and effects of the surface preparation parameters are discussed. The properties of RCA wet chemical processing in silicon technology depend on the processing time, temperature, concentration, and megasonic power of SC-1 and the QDR (quick dump rinse). An improvement of wafer surface preparation through enhanced wet-cleaning process variables is proposed.
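As a purely illustrative configuration (the numeric values below are typical textbook-style placeholders, not the optimized settings reported in this work), the parameters named above could be recorded per cleaning step like this:

```python
# Placeholder recipe record; values are illustrative, not the paper's results.
from dataclasses import dataclass

@dataclass
class CleanStep:
    name: str
    time_s: float        # processing time
    temp_c: float        # bath temperature
    chemistry: str       # mixing ratio / medium
    megasonic_w: float   # megasonic power

recipe = [
    CleanStep("SC-1", 600, 75, "NH4OH:H2O2:H2O 1:1:5", 50),
    CleanStep("QDR",  120, 25, "DI water", 0),
]
```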
Abstract: Subdivision surfaces are applied to entire meshes in order to produce smooth surface refinements from a coarse mesh. Several schemes have been introduced in this area to provide sets of rules that converge to smooth surfaces. However, computing and rendering all the vertices is inconvenient in terms of memory consumption and runtime during the subdivision process, and it leads to a heavy computational load, especially at higher levels of subdivision. Adaptive subdivision is a method that subdivides only certain areas of the mesh while the rest is kept at a lower polygon count. Although adaptive subdivision occurs only in the selected areas, the smoothness of the produced surfaces can be preserved at a level similar to regular subdivision. Nevertheless, the adaptive subdivision process is burdened by two overheads: the calculations needed to define the areas that are required to be subdivided, and the removal of cracks created by the subdivision-depth difference between the selected and unselected areas. Unfortunately, adaptive subdivision at higher levels of subdivision still suffers from high memory consumption. This research introduces an iterative adaptive subdivision process, applied to triangular meshes, that improves on the previous adaptive method by reducing memory consumption. The results of this iterative process show better memory usage and acceptable appearance, producing fewer polygons while preserving smooth surfaces.
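A minimal sketch of one such iterative pass on a triangle mesh follows; the area test stands in for the paper's actual selection criterion, and crack removal at depth transitions is deliberately omitted for brevity.

```python
# Sketch of iterative adaptive subdivision on a triangle soup: only flagged
# triangles are 1-to-4 split each iteration; the rest keep fewer polygons.
import numpy as np

def area(tri):
    a, b, c = tri
    return 0.5 * np.linalg.norm(np.cross(b - a, c - a))

def adaptive_subdivide(tris, needs_split, max_iters=3):
    for _ in range(max_iters):
        out, split_any = [], False
        for tri in tris:
            if needs_split(tri):
                a, b, c = tri
                ab, bc, ca = (a + b) / 2, (b + c) / 2, (c + a) / 2
                out += [np.array(t) for t in
                        ([a, ab, ca], [ab, b, bc], [ca, bc, c], [ab, bc, ca])]
                split_any = True
            else:
                out.append(tri)     # unselected areas are left coarse
        tris = out
        if not split_any:
            break
    return tris

start = [np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0]], dtype=float)]
refined = adaptive_subdivide(start, lambda t: area(t) > 0.1)
```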
Abstract: The world has entered the 21st century. Computer graphics and digital camera technology is prevalent, and high-resolution displays and printers are available. High-resolution images are therefore needed in order to produce high-quality displayed images and high-quality prints. However, since high-resolution images are not usually provided, there is a need to magnify the original images. One common difficulty in previous magnification techniques is preserving details, i.e. edges, while at the same time smoothing the data so as not to introduce spurious artefacts. A definitive solution to this is still an open issue. In this paper an image magnification method using adaptive interpolation by pixel-level data-dependent geometrical shapes is proposed that takes into account information about the edges (sharp luminance variations) and the smoothness of the image. It calculates a threshold, classifies the interpolation regions in the form of geometrical shapes, and then assigns suitable values to the undefined pixels inside each interpolation region while preserving the sharp luminance variations and smoothness at the same time. The results of the proposed technique have been compared qualitatively and quantitatively with five other techniques. The qualitative results show that the proposed method clearly outperforms nearest-neighbour (NN), bilinear (BL), and bicubic (BC) interpolation, while the quantitative results are competitive and consistent with NN, BL, BC, and the others.
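As a rough illustration of data-dependent interpolation (not the authors' threshold-and-shape classification), the sketch below performs a 2× magnification in which each diagonal pixel is averaged along the diagonal of least luminance variation, so edges are preserved while smooth areas stay smooth; a float grayscale image is assumed.

```python
# Hedged sketch of edge-directed 2x magnification: diagonal samples follow
# the diagonal with the smaller luminance difference.
import numpy as np

def magnify2x(img):
    h, w = img.shape
    out = np.zeros((2 * h - 1, 2 * w - 1), dtype=float)
    out[::2, ::2] = img                              # keep original pixels
    d1 = np.abs(img[:-1, :-1] - img[1:, 1:])         # '\' diagonal variation
    d2 = np.abs(img[1:, :-1] - img[:-1, 1:])         # '/' diagonal variation
    diag1 = (img[:-1, :-1] + img[1:, 1:]) / 2
    diag2 = (img[1:, :-1] + img[:-1, 1:]) / 2
    out[1::2, 1::2] = np.where(d1 <= d2, diag1, diag2)
    out[::2, 1::2] = (img[:, :-1] + img[:, 1:]) / 2  # horizontal sites
    out[1::2, ::2] = (img[:-1, :] + img[1:, :]) / 2  # vertical sites
    return out
```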
Abstract: The problem of estimating time-varying regression is
inevitably concerned with the necessity to choose the appropriate
level of model volatility, ranging from the full stationarity of instant regression models to their complete independence from each other. In the
stationary case the number of regression coefficients to be estimated
equals that of regressors, whereas the absence of any smoothness
assumptions augments the dimension of the unknown vector by a factor of the time-series length. The Akaike Information Criterion (AIC)
is a commonly adopted means of adjusting a model to the given
data set within a succession of nested parametric model classes,
but its crucial restriction is that the classes are rigidly defined by
the growing integer-valued dimension of the unknown vector. To
make the Kullback information maximization principle underlying the
classical AIC applicable to the problem of time-varying regression
estimation, we extend it to a wider class of data models in which
the dimension of the parameter is fixed, but the freedom of its values
is softly constrained by a family of continuously nested a priori
probability distributions.
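A toy version of the idea, under assumptions not taken from the paper: a scalar regression with a time-varying coefficient whose variability is softly constrained by a random-walk penalty of strength lam, so that lam sweeps continuously between the stationary model (lam large) and fully independent instant models (lam near zero).

```python
# Toy formulation (assumed): y_t = b_t * x_t + noise, with the volatility of
# b_t softly constrained by a first-difference (random-walk) penalty lam.
import numpy as np

def tv_regression(x, y, lam):
    n = len(x)
    # minimize sum (y_t - b_t x_t)^2 + lam * sum (b_t - b_{t-1})^2
    D = np.diff(np.eye(n), axis=0)            # first-difference operator
    return np.linalg.solve(np.diag(x**2) + lam * D.T @ D, x * y)

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 200)
beta = np.sin(2 * np.pi * t)                  # slowly varying true coefficient
x = rng.normal(size=200)
y = beta * x + 0.1 * rng.normal(size=200)
b_hat = tv_regression(x, y, lam=50.0)   # lam sweeps stationary <-> independent
```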
Abstract: The exploration of this paper focuses on the C-shaped transition curve. This curve is designed using the circle-to-circle concept, where one circle lies inside the other. The degree of smoothness employed is curvature continuity. The function used in designing the C-curve is a Bézier-like cubic function. This function has a low degree, is flexible for the interactive design of curves and surfaces, and has a shape parameter, which is used to control the C-shaped curve. Once the C-shaped curve design is completed, the curve is applied to design a spur gear tooth. After the tooth design procedure is finished, the design is analyzed using Finite Element Analysis (FEA). This analysis is used to determine the applicability of the tooth design and of the chosen gear material. In this research, cast iron with 4.5% carbon, ASTM A-48, is selected as the gear material.
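The paper's exact basis functions are not given here; the sketch below shows one common construction of a rational Bézier-like cubic with a single shape parameter s (s = 3 recovers the standard cubic Bézier), which is enough to see how s controls the fullness of a C-shaped segment.

```python
# One possible rational Bezier-like cubic with shape parameter s; this is an
# illustrative construction, not the paper's exact function.
import numpy as np

def bezier_like(p0, p1, p2, p3, s, n=100):
    t = np.linspace(0.0, 1.0, n)[:, None]
    b0, b3 = (1 - t) ** 3, t ** 3
    b1 = s * t * (1 - t) ** 2        # shape parameter weights the
    b2 = s * t ** 2 * (1 - t)        # interior basis functions
    num = b0 * p0 + b1 * p1 + b2 * p2 + b3 * p3
    return num / (b0 + b1 + b2 + b3) # normalize: partition of unity

# larger s pulls the C-shaped segment toward the interior control points
curve = bezier_like(np.array([0.0, 0.0]), np.array([0.0, 1.0]),
                    np.array([1.0, 1.0]), np.array([1.0, 0.0]), s=2.5)
```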
Abstract: Animation is simply defined as the sequencing of a series of static images to generate the illusion of movement. Most people believe that the actual drawing or creation of the individual images is the animation, when in actuality it is the arrangement of those static images that conveys the motion. To become an animator, it is often assumed that one needs the ability to quickly design masterpiece after masterpiece. Although some semblance of artistic skill is a necessity for the job, the real key to becoming a great animator is the comprehension of timing. This paper uses a combination of sprite animation, frame animation, and some other techniques to cause a group of multi-colored static images to slither around in a bounded area. In addition to slithering, the images also change the color of different parts of their bodies, much like the real-world creatures that have the amazing ability to change the colors of their bodies. This work was implemented using Java 2 Standard Edition (J2SE).
It is both time-consuming and expensive to create animations, regardless of whether they are created by hand or with motion-capture equipment. If animators could reuse old animations and even blend different animations together, a lot of work would be saved in the process. The main objective of this paper is to examine a method for blending several animations together in real time. This paper presents and analyses a solution using Weighted Skeleton Animation (WSA) that limits CPU time and memory waste while saving time for the animators. The idea presented is described in detail and implemented. In this paper, text animation, vertex animation, sprite-part animation, and whole-sprite animation were tested.
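As a language-neutral illustration of the blending idea (the WSA implementation described here is in Java and is not reproduced), a per-joint weighted mix of two clips' poses might look like the following; the names and the linear blend are assumptions, and a production system would interpolate rotations with slerp.

```python
# Assumed illustration of weighted blending of two animation poses.
import numpy as np

def blend_poses(pose_a, pose_b, w):
    """pose_*: (n_joints, 3) joint angles; w in [0, 1] favors pose_b."""
    return (1.0 - w) * pose_a + w * pose_b

walk = np.zeros((20, 3))                  # placeholder pose from clip A
run = np.ones((20, 3))                    # placeholder pose from clip B
blended = blend_poses(walk, run, w=0.3)   # 70% walk, 30% run
```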
In this research paper, the resolution, smoothness, and movement of the animated images are evaluated using parameters obtained from the experimental implementation of this work.
Abstract: The fuzzy technique is an operator introduced in order to simulate, at a mathematical level, the compensatory behavior in processes of decision making or subjective evaluation. The following paper introduces such operators by way of a computer vision application.
In this paper a novel method based on a fuzzy logic reasoning strategy is proposed for edge detection in digital images without determining a threshold value. The proposed approach begins by segmenting the images into regions using a floating 3×3 binary matrix. The edge pixels are mapped to a range of values distinct from each other. To assess robustness, the results of the proposed method for different captured images are compared with those obtained with the linear Sobel operator. The proposed method consistently improves the smoothness and straightness of straight lines and the roundness of curved lines; at the same time, corners become sharper and can be defined easily.
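A small sketch of fuzzy edge scoring follows, assuming a sliding 3×3 window and a sigmoid membership function; the paper's actual rule base and region segmentation are not reproduced.

```python
# Sketch: fuzzy "edgeness" degrees from local 3x3 contrast; the sigmoid
# membership replaces a hard threshold. Parameters are assumptions.
import numpy as np

def fuzzy_edges(img, midpoint=20.0, slope=5.0):
    h, w = img.shape
    edge = np.zeros((h, w))
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            window = img[i - 1:i + 2, j - 1:j + 2].astype(float)
            contrast = window.max() - window.min()
            # membership degree of "this pixel lies on an edge"
            edge[i, j] = 1.0 / (1.0 + np.exp(-(contrast - midpoint) / slope))
    return edge
```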
Abstract: The standard approach to image reconstruction is to stabilize the problem by including an edge-preserving roughness penalty in addition to faithfulness to the data. However, this methodology produces noisy object boundaries and creates a staircase effect. The existing attempts to favor the formation of smooth contour lines take the edge field explicitly into account; they either are computationally expensive or produce disappointing results. In this paper, we propose to incorporate the smoothness of the edge field in an implicit way by means of an additional penalty term defined in the wavelet domain. We also derive an efficient half-quadratic algorithm to solve the resulting optimization problem, including the case when the data fidelity term is non-quadratic and the cost function is nonconvex. Numerical experiments show that our technique preserves edge sharpness while smoothing contour lines; it produces visually pleasing reconstructions which are quantitatively better than those obtained without wavelet-domain constraints.
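To make the half-quadratic idea concrete, here is a minimal 1-D denoising variant with a smoothed edge-preserving penalty; the wavelet-domain contour term and the nonconvex case treated in the paper are omitted.

```python
# Minimal 1-D half-quadratic iteration for the smoothed edge-preserving
# penalty phi(t) = sqrt(t^2 + eps^2): auxiliary weights turn each step into
# an exactly solvable reweighted quadratic problem.
import numpy as np

def half_quadratic_denoise(y, lam=1.0, eps=1e-3, iters=50):
    n = len(y)
    D = np.diff(np.eye(n), axis=0)               # discrete gradient
    x = y.copy()
    for _ in range(iters):
        g = D @ x
        w = 1.0 / np.sqrt(g**2 + eps**2)         # half-quadratic weights
        A = np.eye(n) + lam * D.T @ (w[:, None] * D)
        x = np.linalg.solve(A, y)                # reweighted quadratic step
    return x
```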
Abstract: The ultimate goal of this article is to develop a robust and accurate numerical method for solving hyperbolic conservation laws in one and two dimensions. A hybrid numerical method is considered, coupling a cheap fourth-order total variation diminishing (TVD) scheme [1] in smooth regions with a robust seventh-order weighted essentially non-oscillatory (WENO) scheme [2] near discontinuities. High-order multi-resolution analysis is used to detect the high-gradient regions of the numerical solution in order to capture shocks with the WENO scheme, while the smooth regions are computed with the fourth-order TVD scheme. For time integration, we use the third-order TVD Runge-Kutta scheme. The accuracy of the resulting hybrid high-order scheme is comparable with that of the WENO scheme, but with a significant decrease in CPU cost. Numerical results demonstrate that the proposed scheme matches the high-order WENO scheme, is superior to the fourth-order TVD scheme, and has the added advantages of simplicity and computational efficiency. Numerical tests are presented which show the robustness and effectiveness of the proposed scheme.
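A sketch of the multiresolution sensor that decides where the expensive WENO flux is needed; the project-and-predict transform and the tolerance below are assumptions, not the paper's exact detector.

```python
# Multiresolution smoothness sensor: cells whose fine-grid value is poorly
# predicted from the coarser grid are flagged for the WENO flux; the rest
# can be handled by the cheap TVD scheme.
import numpy as np

def mr_flags(u, tol=1e-3):
    coarse = 0.5 * (u[0:-1:2] + u[1::2])   # project to the coarser level
    pred = np.repeat(coarse, 2)            # predict back to the fine level
    detail = np.abs(u[:len(pred)] - pred)  # multiresolution detail coefficients
    return detail > tol * np.max(np.abs(u))

x = np.linspace(0.0, 1.0, 128)
u = np.tanh(50 * (x - 0.5))                # smooth profile with a sharp front
use_weno = mr_flags(u)                     # True near the front only
```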