Abstract: In this paper, the linear regression model is estimated by
the ordinary least squares method and the partially linear regression
model is estimated by the penalized least squares method using
smoothing splines. We then investigate the differences and
similarities between the sums of squares of the linear regression and
partially linear (semiparametric) regression models. It is shown that
the sums of squares in linear regression reduce to the sums of squares
in partially linear regression models. Furthermore, we show that
various sums of squares in linear regression correspond to different
deviance statements in partially linear regression. In addition, the
coefficient of determination derived for the linear regression model
is easily generalized to a coefficient of determination for the
partially linear regression model. To this end, two applications are
carried out: a simulated data set and a real data set are used to
support the claims made here.
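The coefficient of determination being generalized here is, in the ordinary linear case, a simple ratio of sums of squares; a minimal sketch (the data and fitted values below are hypothetical, not from the paper's applications):

```python
# R^2 = 1 - SSE/SST, computed from observed values and model fits.
def r_squared(y, y_hat):
    y_bar = sum(y) / len(y)
    sse = sum((yi - fi) ** 2 for yi, fi in zip(y, y_hat))  # residual sum of squares
    sst = sum((yi - y_bar) ** 2 for yi in y)               # total sum of squares
    return 1 - sse / sst

y = [1.0, 2.0, 3.0, 4.0]
y_hat = [1.1, 1.9, 3.2, 3.8]
print(round(r_squared(y, y_hat), 2))  # → 0.98
```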
Abstract: This paper develops an unscented grid-based filter
and a smoother for accurate nonlinear modeling and analysis
of time series. The filter uses unscented deterministic sampling
during both the time- and measurement-updating phases to approximate
the distributions of the latent state variable directly. A
complementary grid smoother is also developed to enable computation
of the likelihood. This allows us to formulate an expectation
maximisation algorithm for maximum likelihood estimation of
the state noise and the observation noise. Empirical investigations
show that the proposed unscented grid filter/smoother compares
favourably with other similar filters on nonlinear estimation tasks.
Abstract: ANNARIMA, which combines the autoregressive integrated moving average (ARIMA) model and the artificial neural network (ANN) model, is a valuable tool for modeling and forecasting nonlinear time series, yet the over-fitting problem is more likely to occur in neural network models. This paper proposes a hybrid methodology, called BS-RBF-AR, that combines a radial basis function (RBF) neural network and an autoregressive (AR) model based on the binomial smoothing (BS) technique, which is efficient in data processing. The method is examined using the Canadian lynx data. Empirical results indicate that the over-fitting problem can be eased by an RBF neural network based on binomial smoothing (called BS-RBF), and that the hybrid BS-RBF-AR model can be an effective way to improve the forecasting accuracy achieved by BS-RBF used separately.
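The paper's exact binomial smoothing formulation is not given in the abstract; a common form of the technique convolves the series with the binomial kernel [1, 2, 1]/4, sketched here under that assumption:

```python
def binomial_smooth(x, passes=1):
    """Smooth a series by repeated convolution with the binomial
    kernel [1, 2, 1] / 4; the endpoints are left unchanged."""
    for _ in range(passes):
        s = list(x)
        for i in range(1, len(x) - 1):
            s[i] = 0.25 * x[i - 1] + 0.5 * x[i] + 0.25 * x[i + 1]
        x = s
    return x

noisy = [1.0, 5.0, 1.0, 5.0, 1.0]
print(binomial_smooth(noisy))  # → [1.0, 3.0, 3.0, 3.0, 1.0]
```

Repeated passes approach a Gaussian smoother, since convolving binomial kernels yields higher-order binomial weights.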
Abstract: We describe a novel method for removing noise (in the wavelet domain) of unknown variance from microarrays. The method is based on smoothing the coefficients of the highest subbands. Specifically, we decompose the noisy microarray into wavelet subbands, apply smoothing within each highest subband, and reconstruct a microarray from the modified wavelet coefficients. This process is applied a single time, and exclusively to the first level of decomposition; i.e., in most cases a multiresolution analysis is not necessary. Denoising results compare favorably with most methods currently in use.
Abstract: Texture classification is an important image processing
task with a broad application range. Many different techniques for
texture classification have been explored. Using sparse approximation
as a feature extraction method for texture classification is a relatively
new approach, and Skretting et al. recently presented the Frame
Texture Classification Method (FTCM), showing very good results on
classical texture images. As an extension of that work, the FTCM is
here tested on a real-world application: the detection of abnormalities
in mammograms. Some extensions to the original FTCM that are
useful in certain applications are implemented: two different smoothing
techniques and a vector augmentation technique. Both the detection of
microcalcifications (as a primary detection technique and as the last
stage of a detection scheme) and of soft-tissue lesions in mammograms
are explored. All the results are interesting, and especially the results
using FTCM on regions of interest as the last stage in a detection
scheme for microcalcifications are promising.
Abstract: Spare parts inventory management is one of the major
areas of inventory research. Analysis of recent literature showed that
an approach integrating spare parts classification, demand
forecasting, and stock control policies is essential; however, adoption
of this integrated approach remains limited. This work presents an
integrated framework for spare parts inventory management and an
Excel-based application developed for the implementation of the
proposed framework. A multi-criteria analysis has been used for spare
parts classification. Forecasting of spare parts' intermittent demand
has been incorporated into the application using three different
forecasting models, namely the normal distribution, exponential
smoothing, and Croston's method. The application is also capable of
running with different inventory control policies. To illustrate the
performance of the proposed framework and the developed
application, the framework is applied to different items at a service
organization. The results achieved are presented, and possible areas
for future work are highlighted.
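Croston's method mentioned above separately smooths the nonzero demand sizes and the inter-demand intervals; a minimal sketch of the standard formulation (the smoothing constant and demand series are illustrative, not from the paper):

```python
def croston(demand, alpha=0.1):
    """Croston's method for intermittent demand: exponentially smooth
    the nonzero demand sizes (z) and the inter-demand intervals (p)
    separately; the per-period forecast is z / p."""
    z = p = None
    q = 1                      # periods since the last nonzero demand
    forecasts = []
    for d in demand:
        if d > 0:
            if z is None:      # initialise on the first nonzero demand
                z, p = float(d), float(q)
            else:
                z += alpha * (d - z)
                p += alpha * (q - p)
            q = 1
        else:
            q += 1
        forecasts.append(z / p if z is not None else 0.0)
    return forecasts

# illustrative intermittent series: demand occurs in only two of six periods
print([round(f, 3) for f in croston([0, 3, 0, 0, 6, 0])])
# → [0.0, 1.5, 1.5, 1.5, 1.571, 1.571]
```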
Abstract: The general idea behind the filter is to average a pixel
using other pixel values from its neighborhood, while simultaneously
taking care of important image structures such as edges. The main
concern of the proposed filter is to distinguish between variations
of the captured digital image due to noise and those due to image
structure. Edges give an image its appearance of depth and sharpness;
a loss of edges makes the image appear blurred or unfocused.
However, noise smoothing and edge enhancement are traditionally
conflicting tasks. Since most noise filtering behaves like a low-pass
filter, the blurring of edges and loss of detail seem a natural
consequence. Techniques that remedy this inherent conflict often
generate new noise as a by-product of enhancement.
In this work a new fuzzy filter is presented for the noise reduction
of images corrupted with additive noise. The filter consists of three
stages: (1) fuzzy sets are defined in the input space to compute a fuzzy
derivative for eight different directions; (2) a set of IF-THEN rules
is constructed to perform fuzzy smoothing according to the
contributions of neighboring pixel values; and (3) fuzzy sets are
defined in the output space to obtain the filtered, edge-preserved image.
Experimental results show the feasibility of the
proposed approach on two-dimensional objects.
Abstract: Smoothing or filtering of data is the first preprocessing step
for noise suppression in many applications involving data analysis.
The moving average is the most popular method of smoothing data;
a generalization of it led to the development of the Savitzky-Golay filter.
Many window-smoothing methods were developed by convolving
the data with different window functions for different applications;
the most widely used window functions are the Gaussian and the Kaiser.
Function approximation of the data by polynomial regression, Fourier
expansion, or wavelet expansion also yields smoothed data; wavelets
smooth the data to a great extent by thresholding the wavelet
coefficients. Almost all smoothing methods destroy the peaks and
flatten them as the support of the window is increased. In certain
applications it is desirable to retain the peaks while smoothing the data
as much as possible. In this paper we present a methodology, called
peak-wise smoothing, that smooths the data to any desired level
without losing the major peak features.
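The peak-flattening behaviour the abstract describes is easy to reproduce with a plain moving average; a minimal sketch (not the paper's peak-wise method):

```python
def moving_average(x, window=3):
    """Centered moving average; shorter windows are used at the
    edges so the output has the same length as the input."""
    half = window // 2
    out = []
    for i in range(len(x)):
        lo, hi = max(0, i - half), min(len(x), i + half + 1)
        seg = x[lo:hi]
        out.append(sum(seg) / len(seg))
    return out

signal = [0.0, 0.0, 10.0, 0.0, 0.0]   # a single sharp peak
print([round(v, 2) for v in moving_average(signal, window=3)])
# → [0.0, 3.33, 3.33, 3.33, 0.0]  (the peak is flattened and spread)
```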
Abstract: A numerical method is developed for simulating
the motion of particles with arbitrary shapes in an effectively
infinite or bounded viscous flow. The particle translational and
angular motions are numerically investigated using a fluid-structure
interaction (FSI) method based on the Arbitrary-Lagrangian-Eulerian
(ALE) approach and the dynamic mesh method (smoothing and
remeshing) in FLUENT (ANSYS Inc., USA). Also, the effects of
arbitrary shapes on the dynamics are studied using the FSI method
which could be applied to the motions and deformations of a single
blood cell and multiple blood cells, and the primary thrombogenesis
caused by platelet aggregation. It is expected that, combined with a
sophisticated large-scale computational technique, the simulation
method will be useful for understanding the overall properties of blood
flow, from the blood-cell level (microscopic) to the resulting
rheological properties of blood as a mass (macroscopic).
Abstract: A new fuzzy filter is presented for noise reduction of
images corrupted with additive noise. The filter consists of two
stages. In the first stage, all the pixels of the image are processed to
determine the noisy pixels. For this, a fuzzy rule-based system
associates a degree with each pixel. The degree of a pixel is a real
number in the range [0, 1] that denotes the probability that the pixel
is not a noisy pixel. In the second stage, another fuzzy
rule-based system is employed. It uses the output of the first
fuzzy system to perform fuzzy smoothing by weighting the
contributions of neighboring pixel values. Experimental results
show the feasibility of the proposed filter. These results
are also compared with other filters by numerical measures and visual
inspection.
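The second-stage weighting can be sketched generically: each neighbor's contribution is weighted by its first-stage "not noisy" degree. The membership degrees below are hypothetical placeholders, not the paper's rule-based outputs:

```python
def weighted_smooth(values, degrees):
    """Smooth a pixel neighborhood by weighting each neighbor's
    contribution with its 'not noisy' degree in [0, 1]."""
    total = sum(degrees)
    if total == 0:
        return sum(values) / len(values)  # fall back to a plain mean
    return sum(v * d for v, d in zip(values, degrees)) / total

# 3x3 neighborhood, flattened; the center pixel (value 200) looks noisy,
# so the first stage would assign it a low 'not noisy' degree
values  = [100, 102, 101, 99, 200, 100, 101, 100, 98]
degrees = [0.9, 0.9, 0.9, 0.9, 0.1, 0.9, 0.9, 0.9, 0.9]
print(round(weighted_smooth(values, degrees), 2))  # → 101.49
```

The outlier barely influences the result because its weight is small, which is the effect the abstract describes.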
Abstract: By utilizing the echoic intensity and distribution from different organs and the local details of the human body, ultrasonic images can capture important medical pathological changes, which unfortunately may be affected by ultrasonic speckle noise. A feature-preserving ultrasonic image denoising and edge enhancement scheme is put forth, which includes two terms, anisotropic diffusion and edge enhancement, controlled by the optimum smoothing time. In this scheme, the anisotropic diffusion is governed by the local coordinate transformation and the first- and second-order normal derivatives of the image, while the edge enhancement is done by the hyperbolic tangent function. Experiments on real ultrasonic images indicate that our scheme effectively preserves edges, local details, and ultrasonic echoic bright strips while denoising.
Abstract: Cluster analysis is the name given to a diverse collection of techniques that can be used to classify objects (e.g. individuals, quadrats, species, etc.). While Kohonen's Self-Organizing Feature Map (SOFM) or Self-Organizing Map (SOM) networks have been successfully applied as a classification tool to various problem domains, including speech recognition, image data compression, image or character recognition, robot control, and medical diagnosis, their potential as a robust substitute for cluster analysis remains relatively unresearched. SOM networks combine competitive learning with dimensionality reduction by smoothing the clusters with respect to an a priori grid and provide a powerful tool for data visualization. In this paper, SOM is used for creating a toroidal mapping of a two-dimensional lattice to perform cluster analysis on the results of a chemical analysis of wines produced in the same region in Italy but derived from three different cultivators, referred to as the "wine recognition data" located in the University of California-Irvine database. The results are encouraging, and it is believed that SOM would make an appealing and powerful decision-support tool for clustering tasks and for data visualization.
Abstract: The world has entered the 21st century. The technology of
computer graphics and digital cameras is prevalent, and high-resolution
displays and printers are available. High-resolution images are
therefore needed in order to produce high-quality display images and
high-quality prints. However, since high-resolution images are not
usually provided, there is a need to magnify the original images. One
common difficulty in previous magnification techniques is
preserving details, i.e. edges, while at the same time smoothing the data
so as not to introduce spurious artefacts. A definitive solution to this
is still an open issue. In this paper an image magnification method using
adaptive interpolation by pixel-level data-dependent geometrical
shapes is proposed that tries to take into account information about
the edges (sharp luminance variations) and the smoothness of the image.
It calculates a threshold, classifies the interpolation region in the
form of geometrical shapes, and then assigns suitable values to the
undefined pixels inside the interpolation region while preserving the
sharp luminance variations and smoothness at the same time.
The results of the proposed technique have been compared qualitatively
and quantitatively with five other techniques. The qualitative
results show that the proposed method clearly outperforms nearest
neighbour (NN), bilinear (BL), and bicubic (BC) interpolation, while the
quantitative results are competitive and consistent with NN, BL, BC,
and the others.
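Bilinear interpolation, one of the baselines compared against, can be sketched as follows (a generic textbook formulation, not the proposed adaptive method):

```python
def bilinear(img, y, x):
    """Bilinear interpolation at fractional coordinates (y, x) in a
    2-D grayscale image given as a list of rows; interpolates
    linearly along x on the two bracketing rows, then along y."""
    y0, x0 = int(y), int(x)
    y1 = min(y0 + 1, len(img) - 1)
    x1 = min(x0 + 1, len(img[0]) - 1)
    fy, fx = y - y0, x - x0
    top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx
    bot = img[y1][x0] * (1 - fx) + img[y1][x1] * fx
    return top * (1 - fy) + bot * fy

img = [[0, 100],
       [100, 200]]
print(bilinear(img, 0.5, 0.5))  # → 100.0 (the average of the four corners)
```

Because it averages across edges indiscriminately, bilinear interpolation blurs sharp luminance variations, which is exactly the weakness the adaptive method targets.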
Abstract: The aim of this paper is to present a three-step methodology
for forecasting supply chain demand. In the first step, various data
mining techniques are applied in order to prepare the data for entry
into the forecasting models. In the second step, the modeling step, an
artificial neural network and a support vector machine are presented
after defining the Mean Absolute Percentage Error (MAPE) index for
measuring error. The structure of the artificial neural network is
selected based on previous researchers' results, and in this article the
accuracy of the network is increased by using sensitivity analysis. The
best forecast from the classical forecasting methods (moving average,
exponential smoothing, and exponential smoothing with trend) is
obtained from the prepared data, and this forecast is compared with the
results of the support vector machine and the proposed artificial
neural network. The results show that the artificial neural network
forecasts more precisely than the other methods. Finally, the stability
of the forecasting methods is analyzed using the raw data, and the
effectiveness of the clustering analysis is also measured.
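The MAPE index used above for measuring error has the standard form; a minimal sketch with illustrative numbers:

```python
def mape(actual, forecast):
    """Mean Absolute Percentage Error, in percent:
    100/n * sum(|actual - forecast| / |actual|)."""
    return 100.0 * sum(abs((a - f) / a)
                       for a, f in zip(actual, forecast)) / len(actual)

actual   = [100.0, 200.0, 400.0]
forecast = [110.0, 190.0, 400.0]
print(round(mape(actual, forecast), 2))  # → 5.0
```

Note that MAPE is undefined when an actual value is zero, a known limitation when the demand series is intermittent.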
Abstract: This paper studies the use of nonparametric
models for Gross National Product data in Turkey and the Stanford
heart transplant data. Two nonparametric techniques, called
smoothing spline and kernel regression, are discussed. The main goal
is to compare the techniques used for prediction with the
nonparametric regression models. According to the results of the
numerical studies, it is concluded that the smoothing spline regression
estimators are better than those of kernel regression.
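Kernel regression, one of the two techniques compared, is commonly estimated with the Nadaraya-Watson formula; a minimal sketch with a Gaussian kernel (the data and bandwidth are illustrative, not the paper's):

```python
import math

def kernel_regression(x_train, y_train, x, bandwidth=1.0):
    """Nadaraya-Watson kernel regression estimate at point x:
    a weighted average of y_train, with Gaussian kernel weights
    that decay with distance from x."""
    weights = [math.exp(-0.5 * ((x - xi) / bandwidth) ** 2)
               for xi in x_train]
    return sum(w * yi for w, yi in zip(weights, y_train)) / sum(weights)

xs = [0.0, 1.0, 2.0, 3.0]
ys = [0.0, 1.0, 4.0, 9.0]   # noiseless y = x^2 for illustration
print(round(kernel_regression(xs, ys, 1.5, bandwidth=0.5), 3))  # → 2.536
```

The bandwidth controls the bias-variance trade-off, playing the same role as the smoothing parameter in a smoothing spline.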
Abstract: Median filters with larger windows offer greater smoothing and are more robust than median filters with smaller windows. However, the larger median smoothers (median filters with larger windows) fail to track low-order polynomial trends in the signal. As a result, constant regions are produced at the signal corners, leading to the loss of fine details. In this paper, an algorithm which combines the ability of the 3-point median smoother to preserve low-order polynomial trends with the superior noise-filtering characteristics of the larger median smoother is introduced. The proposed algorithm (called the combiner algorithm in this paper) is evaluated for its performance on a test image corrupted with different types of noise, and the results obtained are included.
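The corner-flattening behaviour of larger median smoothers can be seen in one dimension; a minimal sketch (not the paper's combiner algorithm) comparing a 3-point and a 5-point median smoother on a linear ramp:

```python
def median_smooth(x, window):
    """1-D median smoother; the window is clamped at the edges so
    every output sample is the median of a full-size window."""
    half = window // 2
    out = []
    for i in range(len(x)):
        lo = max(0, min(i - half, len(x) - window))
        out.append(sorted(x[lo:lo + window])[half])
    return out

ramp = [0, 1, 2, 3, 4, 5, 6]           # a low-order (linear) trend
print(median_smooth(ramp, 3))          # → [1, 1, 2, 3, 4, 5, 5]
print(median_smooth(ramp, 5))          # → [2, 2, 2, 3, 4, 4, 4]
```

The 5-point smoother produces longer constant regions at the ends of the ramp, illustrating why the combiner falls back on the 3-point smoother to preserve trends.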
Abstract: Echocardiography imaging is one of the most common diagnostic tests widely used for assessing abnormalities of regional heart ventricle function. The main goal of the image enhancement task in 2D echocardiography (2DE) is to address two major anatomical-structure problems: speckle noise and low quality. Speckle noise reduction is therefore one of the important pre-processing steps used to reduce distortion effects in 2DE image segmentation. In this paper, we present the common filters that are based on some form of low-pass spatial smoothing, such as the mean, Gaussian, and median filters; the Laplacian filter was used as a high-pass sharpening filter. A comparative analysis is presented to test the effectiveness of these filters after they are applied to original 2DE images of 4-chamber and 2-chamber views. Three statistical quantity measures, root mean square error (RMSE), peak signal-to-noise ratio (PSNR), and signal-to-noise ratio (SNR), are used to evaluate the filter performance quantitatively on the output enhanced image.
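PSNR, one of the three quality measures named, has the standard definition 10 log10(MAX² / MSE); a minimal sketch with hypothetical pixel values:

```python
import math

def psnr(original, processed, max_val=255.0):
    """Peak signal-to-noise ratio in dB between two equally sized
    images, given here as flat lists of pixel values."""
    mse = sum((o - p) ** 2
              for o, p in zip(original, processed)) / len(original)
    if mse == 0:
        return float("inf")   # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)

orig = [100, 120, 130, 140]
proc = [102, 118, 131, 139]
print(round(psnr(orig, proc), 2))  # → 44.15
```

Higher PSNR indicates the filtered image is closer to the reference; RMSE is simply the square root of the MSE used inside this formula.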
Abstract: Robust face recognition under various illumination
environments is very difficult and needs to be accomplished for
successful commercialization. In this paper, we propose an improved
illumination normalization method for face recognition. The illumination
normalization algorithm based on anisotropic smoothing is well known
to be effective among illumination normalization methods, but it
deteriorates the intensity contrast of the original image and produces
less sharp edges. The method proposed in this paper improves the previous
anisotropic smoothing-based illumination normalization method so
that it increases the intensity contrast and enhances the edges while
diminishing the effect of illumination variations. As a result of
these improvements, face images preprocessed by the proposed
illumination normalization method have more distinctive
feature vectors (Gabor feature vectors) for face recognition. Through
experiments of face recognition based on Gabor feature vector
similarity, the effectiveness of the proposed illumination
normalization method is verified.
Abstract: In this paper, we present a simple circuit for
Manchester decoding that does not use any complicated or
programmable devices. This circuit can decode transmitted
encoded data at 90 kbps; higher transmission rates can be
decoded if high-speed devices are used. We also present a new
method for extracting the embedded clock from Manchester data in
order to use it for serial-to-parallel conversion. All of our
experimental measurements were carried out using simulation.
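Manchester decoding itself can be sketched in software: each bit occupies two half-bit samples, and the direction of the mid-bit transition carries the value. The convention below (high-to-low = 1) is the G. E. Thomas convention; IEEE 802.3 swaps it, and the abstract does not say which the circuit uses:

```python
def manchester_decode(samples):
    """Decode a Manchester-encoded stream sampled at two samples
    per bit: a high-to-low pair (1, 0) decodes to 1 and a
    low-to-high pair (0, 1) to 0 (G. E. Thomas convention)."""
    bits = []
    for i in range(0, len(samples) - 1, 2):
        pair = (samples[i], samples[i + 1])
        if pair == (1, 0):
            bits.append(1)
        elif pair == (0, 1):
            bits.append(0)
        else:
            # no mid-bit transition: invalid Manchester coding
            raise ValueError(f"invalid Manchester pair at sample {i}")
    return bits

# bits 1 0 1 1 encoded as the pairs (1,0) (0,1) (1,0) (1,0)
print(manchester_decode([1, 0, 0, 1, 1, 0, 1, 0]))  # → [1, 0, 1, 1]
```

The guaranteed mid-bit transition is what allows the embedded clock to be recovered from the data stream, as the abstract describes.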
Abstract: An important step in studying the statistics of
fingerprint minutia features is to reliably extract minutia features from
the fingerprint images. A new reliable method of computation for
minutiae feature extraction from fingerprint images is presented. A
fingerprint image is treated as a textured image. An orientation flow
field of the ridges is computed for the fingerprint image. To
accurately locate the ridges, a new ridge-orientation-based computation
method is proposed. After ridge segmentation, a new method of
computation is proposed for smoothing the ridges. The ridge skeleton
image is obtained and then smoothed using morphological operators
to detect the features. A post processing stage eliminates a large
number of false features from the detected set of minutiae features.
The detected features are observed to be reliable and accurate.