Abstract: The focus of this work is to assess which method
allows better forecasting of malaria cases in Bujumbura (Burundi)
when the association between climatic factors and the disease is
taken into account. For the period 1996-2007, real monthly data on both
malaria epidemiology and climate in Bujumbura are described and
analyzed. We propose a hierarchical approach to achieve this
objective. We first fit a Generalized Additive Model to the malaria cases
to obtain an accurate predictor, which is then used to predict future
observations. Various well-known forecasting methods are compared,
leading to different results. Based on the in-sample mean absolute
percentage error (MAPE), the exponential smoothing state space
model with multiplicative error and multiplicative seasonality
performed best.
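The in-sample MAPE criterion used above for model comparison can be sketched as follows; the case counts and fitted values here are purely hypothetical, standing in for the fits of two competing forecasting methods.

```python
import numpy as np

def mape(actual, fitted):
    """In-sample mean absolute percentage error, in percent."""
    actual = np.asarray(actual, dtype=float)
    fitted = np.asarray(fitted, dtype=float)
    return 100.0 * np.mean(np.abs((actual - fitted) / actual))

# Hypothetical monthly case counts and the in-sample fits of two methods.
cases = [120, 150, 200, 180]
fit_a = [110, 160, 190, 185]   # e.g. an ETS state space model
fit_b = [100, 170, 230, 160]   # e.g. a seasonal naive benchmark

# The method with the lower in-sample MAPE is preferred.
best = min(("method A", mape(cases, fit_a)),
           ("method B", mape(cases, fit_b)),
           key=lambda t: t[1])
```

In practice the fitted values would come from estimated models (e.g. via a statistics package), not hand-entered vectors; the selection rule is the same.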
Abstract: Medical image segmentation based on image smoothing followed by edge detection is of great importance in the field of image processing. In this regard, this paper proposes a novel algorithm for medical image segmentation based on robust smoothing, driven by identification of the noise type, followed by edge detection, an approach that holds considerable promise for medical image diagnosis. The main objective of the algorithm is to take a particular medical image as input, preprocess it to remove the noise content by applying a suitable filter after identifying the type of noise, and finally carry out edge detection for image segmentation. The algorithm consists of three parts. First, the type of noise present in the medical image is identified as additive, multiplicative, or impulsive by analysis of local histograms, and the image is denoised with a Median, Gaussian, or Frost filter as appropriate. Second, edge detection on the filtered medical image is carried out using the Canny edge detection technique. Third, the edge-detected medical image is segmented by the method of Normalized Cut eigenvectors. The method is validated through experiments on real images. The proposed algorithm has been simulated on the MATLAB platform. The simulation results show that the proposed algorithm is very effective and can deal with low-quality or marginally vague images that have high spatial redundancy, low contrast, and substantial noise, and that it has potential for practical use in medical image diagnosis.
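The noise-type-dependent filtering step can be sketched in a few lines; this is a minimal stand-in (3x3 windows, pure numpy) for the paper's MATLAB pipeline, with the impulsive-to-median and additive-to-Gaussian pairings that are conventional in the denoising literature. The Frost filter for multiplicative (speckle) noise is omitted for brevity.

```python
import numpy as np

def median_filter_3x3(img):
    """3x3 median filter, the usual choice for impulsive (salt-and-pepper) noise."""
    p = np.pad(img, 1, mode="edge")
    stack = [p[r:r + img.shape[0], c:c + img.shape[1]]
             for r in range(3) for c in range(3)]
    return np.median(np.stack(stack), axis=0)

def gaussian_filter_3x3(img):
    """3x3 Gaussian smoothing, the usual choice for additive Gaussian noise."""
    k = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], dtype=float) / 16.0
    p = np.pad(img, 1, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for r in range(3):
        for c in range(3):
            out += k[r, c] * p[r:r + img.shape[0], c:c + img.shape[1]]
    return out

def denoise(img, noise_type):
    """Route the image to a filter according to the identified noise type."""
    if noise_type == "impulsive":
        return median_filter_3x3(img)
    if noise_type == "additive":
        return gaussian_filter_3x3(img)
    raise ValueError("multiplicative noise would need a Frost filter (not shown)")

# A flat patch with one impulse: the median filter removes it completely.
img = np.full((5, 5), 10.0)
img[2, 2] = 255.0
clean = denoise(img, "impulsive")
```

The denoised image would then go on to Canny edge detection and Normalized Cut segmentation, which require a full image-processing library and are not reproduced here.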
Abstract: The system development life cycle (SDLC) is a
process used during the development of any system. The SDLC
consists of four main phases: analysis, design, implementation, and
testing. During the analysis phase, a context diagram and data flow
diagrams are used to produce the process model of a system.
Consistency between the context diagram and the lower-level data flow
diagrams is very important in smoothing the development
process of a system. However, manually checking consistency from
the context diagram to the lower-level data flow diagrams using a
checklist is a time-consuming process. At the same time, the
limited human ability to detect errors is one of the
factors that influence the correctness and balancing of the
diagrams. This paper presents a tool that automates the
consistency check between Data Flow Diagrams (DFDs)
based on the rules of DFDs. The tool serves two purposes: as
an editor to draw the diagrams and as a checker to verify the
correctness of the diagrams drawn. The consistency check
from the context diagram to the lower-level data flow diagrams is
embedded in the tool to overcome the manual checking
problem.
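The core balancing rule such a checker automates can be sketched as a set comparison: a process in the parent diagram is consistent with its child DFD when the data flows entering and leaving it match the external flows of the decomposition. The process and flow names below are illustrative, not taken from the tool.

```python
def is_balanced(parent_inputs, parent_outputs, child_inputs, child_outputs):
    """A parent process and the child DFD that decomposes it are consistent
    when their external input and output flow sets are identical."""
    return (set(parent_inputs) == set(child_inputs)
            and set(parent_outputs) == set(child_outputs))

# A hypothetical process "1.0 Handle Order" vs. its level-1 decomposition.
ok = is_balanced(parent_inputs={"order"},
                 parent_outputs={"invoice", "receipt"},
                 child_inputs={"order"},
                 child_outputs={"invoice", "receipt"})

# An unbalanced case: the child diagram produces a flow the parent lacks.
bad = is_balanced(parent_inputs={"order"},
                  parent_outputs={"invoice"},
                  child_inputs={"order"},
                  child_outputs={"invoice", "receipt"})
```

A real checker would walk every decomposed process in the diagram hierarchy and report each unbalanced pair, rather than testing one pair at a time.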
Abstract: Discretization of spatial derivatives is an important
issue in meshfree methods especially when the derivative terms
contain non-linear coefficients. In this paper, various methods used
for discretization of second-order spatial derivatives are investigated
in the context of Smoothed Particle Hydrodynamics. Three popular
forms (i.e. "double summation", "second-order kernel derivation",
and "difference scheme") are studied using one-dimensional unsteady
heat conduction equation. To assess these schemes, transient response
to a step-function initial condition is considered. Due to the parabolic
nature of the heat equation, one can expect smooth and monotone
solutions. It is shown in this paper, however, that regardless of
the type of kernel function used and the size of the smoothing radius,
the double-summation discretization form leads to non-physical
oscillations which persist in the solution. Results also show that when
the second-order kernel derivative is used, a high-order kernel function
should be employed such that the distance of the kernel's inflection
point from the origin is less than the nearest particle distance.
Otherwise, solutions may exhibit oscillations near discontinuities,
unlike the "difference scheme", which unconditionally produces
monotone results.
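For reference, the three forms compared above can be written out for the 1D heat equation; these are the expressions as commonly given in the SPH literature (sign and normalization conventions vary by author, and the paper's exact forms may differ).

```latex
% Second-derivative discretizations for dT/dt = alpha * d^2T/dx^2 in SPH.

% (1) Double summation: apply the first-derivative operator twice.
\left(\frac{dT}{dx}\right)_i = \sum_j \frac{m_j}{\rho_j}\,(T_j - T_i)\,
      \frac{\partial W_{ij}}{\partial x_i},
\qquad
\left(\frac{d^2T}{dx^2}\right)_i = \sum_j \frac{m_j}{\rho_j}
      \left[\left(\frac{dT}{dx}\right)_j - \left(\frac{dT}{dx}\right)_i\right]
      \frac{\partial W_{ij}}{\partial x_i}

% (2) Second-order kernel derivative: differentiate the kernel twice.
\left(\frac{d^2T}{dx^2}\right)_i = \sum_j \frac{m_j}{\rho_j}\,(T_j - T_i)\,
      \frac{\partial^2 W_{ij}}{\partial x_i^2}

% (3) Difference scheme (Brookshaw-type), with a small regularizer eta^2
%     to keep the denominator bounded for close particle pairs.
\left(\frac{d^2T}{dx^2}\right)_i = 2 \sum_j \frac{m_j}{\rho_j}\,
      \frac{(T_i - T_j)\,(x_i - x_j)}{(x_i - x_j)^2 + \eta^2}\,
      \frac{\partial W_{ij}}{\partial x_i}
```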
Abstract: In this paper we propose and examine an Adaptive
Neuro-Fuzzy Inference System (ANFIS) for Smoothing Transition
Autoregressive (STAR) modeling. Because STAR models follow a
fuzzy logic approach, fuzzy rules can be incorporated in the
non-linear part, or other training or computational methods, such as
the error backpropagation algorithm, can be applied instead of
nonlinear least squares. Furthermore, additional fuzzy membership
functions can be examined besides the logistic and exponential, such
as the triangular, Gaussian, and generalized bell functions, among
others. We examine two macroeconomic variables of the US economy,
the inflation rate and the six-month treasury bill interest rate.
Abstract: Discrete Cosine Transform (DCT) based transform coding is very popular in image, video and speech compression due to its good energy compaction and decorrelating properties. However, at low bit rates, the reconstructed images generally suffer from visually annoying blocking artifacts as a result of coarse quantization. Lapped transform was proposed as an alternative to the DCT with reduced blocking artifacts and increased coding gain. Lapped transforms are popular for their good performance, robustness against oversmoothing and availability of fast implementation algorithms. However, there is no proper study reported in the literature regarding the statistical distributions of block Lapped Orthogonal Transform (LOT) and Lapped Biorthogonal Transform (LBT) coefficients. This study performs two goodness-of-fit tests, the Kolmogorov-Smirnov (KS) test and the chi-square (χ2) test, to determine the distribution that best fits the LOT and LBT coefficients. The experimental results show that the distribution of a majority of the significant AC coefficients can be modeled by the Generalized Gaussian distribution. The knowledge of the statistical distribution of transform coefficients greatly helps in the design of optimal quantizers that may lead to minimum distortion and hence achieve optimal coding efficiency.
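The KS-based model selection described above can be sketched with synthetic data; here "AC coefficients" are drawn from a generalized Gaussian (a shape parameter below 2 gives the peaked, heavy-tailed histograms typical of transform coefficients), and the chosen shape value and sample size are assumptions for illustration only.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
beta = 0.8                                   # assumed shape parameter (< 2: heavy-tailed)
coeffs = stats.gennorm.rvs(beta, size=2000, random_state=rng)

# Fit a generalized Gaussian and a plain Gaussian, then compare KS statistics:
# the smaller statistic indicates the better-fitting distribution.
b_hat, loc, scale = stats.gennorm.fit(coeffs)
ks_gg = stats.kstest(coeffs, "gennorm", args=(b_hat, loc, scale)).statistic
mu, sigma = stats.norm.fit(coeffs)
ks_n = stats.kstest(coeffs, "norm", args=(mu, sigma)).statistic
```

With real LOT/LBT coefficients the same comparison would be run per subband; the study's conclusion corresponds to `ks_gg` being consistently smaller than the statistics of the competing distributions.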
Abstract: This paper introduces an effective speckle reduction method
for synthetic aperture radar (SAR) images using inner product spaces
in the undecimated wavelet domain. There are two major areas in the
projection-onto-span algorithm where improvement can be made. The
first is the use of the undecimated wavelet transform instead of the
discrete wavelet transform. The second is the use of a smoothing
filter, namely a directional smoothing filter, as an additional step.
The proposed method does not need any noise estimation or
thresholding technique. Moreover, it gives good results on both
single-polarimetric and fully polarimetric SAR images.
Abstract: The visualization of geographic information on mobile devices has become popular with the widespread use of the mobile Internet. The mobility of these devices brings much convenience to people's lives. Through the devices' add-on location-based services, people can access timely information relevant to their tasks. However, visual analysis of geographic data on mobile devices presents several challenges due to the small display and restricted computing resources. These limitations on screen size and resources may impair the usability of visualization applications. In this paper, a variable-scale visualization method is proposed to handle the challenge of the small mobile display. By merging multiple scales of information into a single image, the viewer is able to focus on a region of interest while keeping a good grasp of the surrounding context. This is essentially visualizing the map through a fisheye lens. However, the fisheye lens induces undesirable geometric distortion in the periphery, which renders the information there meaningless. The proposed solution is to apply map generalization, which removes excessive information around the periphery, together with an automatic smoothing process that corrects the distortion while keeping the local topology consistent. The proposed method is applied to both artificial and real geographical data for evaluation.
Abstract: A simple and easy algorithm is presented for fast calculation of the kernel functions required in fluid simulations using the Smoothed Particle Hydrodynamics (SPH) method. The proposed algorithm improves the linked-list algorithm and adopts the pair-wise interaction technique, both of which are widely used for evaluating kernel functions in fluid simulations with the SPH method. The algorithm is easy to implement without any complexity in programming. Some benchmark examples are used to show the simulation time saved by the proposed algorithm. Parametric studies on the number of divisions for sub-domains, the smoothing length, and the total number of particles are conducted to show the effectiveness of the present technique. A compact formulation is proposed for practical usage.
Abstract: Due to the liberalization of countless electricity markets, load forecasting has become crucial to all public utilities for which electricity is a strategic variable. With the goal of contributing to the forecasting process inside public utilities, this paper addresses the issue of applying the Holt-Winters exponential smoothing technique and the time series analysis for forecasting the hourly electricity load curve of the Italian railways. The results of the analysis confirm the accuracy of the two models and therefore the relevance of forecasting inside public utilities.
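The Holt-Winters technique applied above can be sketched in its additive seasonal form; the initialization below is deliberately simple (real implementations estimate the starting level, trend, and seasonal indices more carefully), and the toy series stands in for an hourly load curve.

```python
import numpy as np

def holt_winters_additive(y, m, alpha, beta, gamma, horizon):
    """Additive Holt-Winters exponential smoothing (level, trend, seasonality).
    y: observed series; m: season length (e.g. 24 for hourly load)."""
    y = np.asarray(y, dtype=float)
    # Crude initialization from the first two seasons (an assumption of this sketch).
    level = y[:m].mean()
    trend = (y[m:2 * m].mean() - y[:m].mean()) / m
    season = list(y[:m] - level)
    for t in range(len(y)):
        s = season.pop(0)                 # seasonal index for this period
        last_level = level
        level = alpha * (y[t] - s) + (1 - alpha) * (level + trend)
        trend = beta * (level - last_level) + (1 - beta) * trend
        season.append(gamma * (y[t] - level) + (1 - gamma) * s)
    # h-step-ahead forecasts extrapolate the trend and reuse the seasonal indices.
    return [level + (h + 1) * trend + season[h % m] for h in range(horizon)]

# A perfectly seasonal toy "load" series of period 4, repeated six times:
y = [10, 20, 30, 40] * 6
fc = holt_winters_additive(y, m=4, alpha=0.3, beta=0.1, gamma=0.2, horizon=4)
```

On this noise-free series the forecast reproduces the seasonal pattern exactly; the paper's multiplicative variant differs only in how the seasonal indices enter the recursions.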
Abstract: Detection of player identity is a challenging task in sports video content analysis. In the case of soccer video, player number recognition is an effective and precise solution. Jersey numbers can be considered scene text, and difficulties in localization and recognition arise due to variations in orientation, size, illumination, motion, etc. This paper proposes a new method for player number localization and recognition. By observing the hue, saturation, and value for 50 different jersey examples, we noticed that a combination of low- and high-saturation pixels is most often used to separate the number and jersey regions. An image segmentation method based on this observation is introduced. Then, a novel method for player number localization based on internal contours is proposed. False number candidates are filtered using area and aspect ratio. Before OCR processing, the extracted numbers are enhanced using image smoothing and rotation normalization.
Abstract: In this paper we apply an Adaptive Network-Based
Fuzzy Inference System (ANFIS) with one input, the dependent
variable with one lag, to the forecasting of four macroeconomic
variables of the US economy: the Gross Domestic Product, the
inflation rate, the six-month treasury bill interest rate, and the
unemployment rate. We compare the forecasting performance of ANFIS
with those of the widely used linear autoregressive and nonlinear
smoothing transition autoregressive (STAR) models. The results are
greatly in favour of ANFIS, indicating that it is an effective tool
for macroeconomic forecasting, suitable for use in academic research
and in research and applications by governmental and other
institutions.
Abstract: Edge detection is usually the first step in medical
image processing. However, the difficulty increases when a
conventional kernel-based edge detector is applied to ultrasonic
images with a textural pattern and speckle noise. We designed an
adaptive diffusion filter to remove speckle noise while preserving the
initial edges detected by using a Sobel edge detector. We also propose
a genetic algorithm for edge selection to form complete boundaries of
the detected entities. We designed two fitness functions to evaluate
whether a criterion with a complex edge configuration can render a
better result than a simple criterion such as the strength of gradient.
The edges obtained by using a complex fitness function are thicker and
more fragmented than those obtained by using a simple fitness
function, suggesting that a complex edge selecting scheme is not
necessary for good edge detection in medical ultrasonic images;
instead, a proper noise-smoothing filter is the key.
Abstract: This paper investigates the problem of automated defect
detection for textile fabrics and proposes a new optimal filter design
method to solve this problem. Gabor Wavelet Network (GWN) is
chosen as the major technique to extract the texture features from
textile fabrics. Based on the features extracted, an optimal Gabor filter
can be designed. In view of this optimal filter, a new semi-supervised
defect detection scheme is proposed, which consists of one real-valued
Gabor filter and one smoothing filter. The performance of the scheme
is evaluated by using an offline test database with 78 homogeneous
textile images. The test results exhibit accurate defect detection with
low false alarm, thus showing the effectiveness and robustness of the
proposed scheme. To evaluate the detection scheme comprehensively,
a prototyped detection system is developed to conduct a real time test.
The experiment results obtained confirm the efficiency and
effectiveness of the proposed detection scheme.
Abstract: This paper presents a new method for estimating the nonstationary
noise power spectral density given a noisy signal. The
method is based on averaging the noisy speech power spectrum using
time and frequency dependent smoothing factors. These factors are
adjusted based on signal-presence probability in individual frequency
bins. Signal presence is determined by computing the ratio of the
noisy speech power spectrum to its local minimum, which is updated
continuously by averaging past values of the noisy speech power
spectra with a look-ahead factor. This method adapts very quickly to
highly non-stationary noise environments. The proposed method
achieves significant improvements over a system that uses a voice
activity detector (VAD) for noise estimation.
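The key recursion, averaging the noisy power spectrum with a time- and frequency-dependent smoothing factor driven by signal-presence probability, can be sketched per frame; this is a simplified stand-in for the full method (minimum tracking and look-ahead omitted), and the constant `alpha_d` is an assumed value.

```python
import numpy as np

def update_noise_psd(noise_psd, noisy_psd, p_speech, alpha_d=0.85):
    """One frame of recursive noise-PSD averaging. The effective smoothing
    factor rises toward 1 in bins where speech is likely present, freezing
    the noise estimate there, while noise-only bins update quickly."""
    alpha = alpha_d + (1 - alpha_d) * p_speech   # per-bin smoothing factor
    return alpha * noise_psd + (1 - alpha) * noisy_psd

# Two frequency bins: bin 0 is noise-only (p=0), bin 1 holds speech (p=1).
noise = np.array([1.0, 1.0])                 # current noise-PSD estimate
frame = np.array([4.0, 100.0])               # observed noisy-speech power spectrum
p     = np.array([0.0, 1.0])                 # signal-presence probability per bin
noise = update_noise_psd(noise, frame, p)
```

In the full method `p` itself is derived from the ratio of the noisy power spectrum to its continuously tracked local minimum, which is what lets the estimator adapt quickly to non-stationary noise.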
Abstract: This paper presents a unified theory for local (Savitzky-
Golay) and global polynomial smoothing. The algebraic framework
can represent any polynomial approximation and scales seamlessly from
low-degree local to high-degree global approximations. The representation
of the smoothing operator as a projection onto orthonormal
basis functions enables the computation of: the covariance matrix
for noise propagation through the filter; the noise gain; and the
frequency response of the polynomial filters. A virtually perfect Gram
polynomial basis is synthesized, whereby polynomials of degree
d = 1000 can be synthesized without significant errors. The perfect
basis ensures that the filters are strictly polynomial preserving. Given
n points and a support length ls = 2m + 1, the smoothing
operator is strictly linear-phase for the points xi, i = m+1, ..., n-m.
The method is demonstrated on geometric surface data lying on an
invariant 2D lattice.
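The projection view of polynomial smoothing can be sketched numerically; here QR factorization of a Vandermonde matrix stands in for the paper's synthesized Gram polynomial basis (at the modest degrees of this example the two are numerically equivalent).

```python
import numpy as np

def poly_projection(x, degree):
    """Smoothing operator as a projection onto an orthonormal polynomial basis.
    The columns of Q span the polynomials up to `degree` on the abscissae x."""
    V = np.vander(np.asarray(x, dtype=float), degree + 1, increasing=True)
    Q, _ = np.linalg.qr(V)      # orthonormal basis (a stand-in for Gram polynomials)
    return Q @ Q.T              # projection matrix H: y_smooth = H @ y

x = np.linspace(-1.0, 1.0, 21)
H = poly_projection(x, degree=3)

# The operator is polynomial preserving: any cubic passes through unchanged,
# while components outside the polynomial span are attenuated.
y_cubic = 2.0 - x + 0.5 * x**2 + 3.0 * x**3
y_noisy = y_cubic + 0.1 * np.sin(37.0 * x)   # high-frequency disturbance
y_smooth = H @ y_noisy
```

The same matrix `H` also yields the quantities the paper lists: its rows are the filter impulse responses, `H @ H.T` gives the noise covariance propagation for white noise, and the row norms give the per-point noise gain.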
Abstract: In wavelet regression, choosing the threshold value is a crucial issue. Too large a value cuts too many coefficients, resulting in over-smoothing. Conversely, too small a threshold value allows many coefficients into the reconstruction, giving a wiggly estimate that results in under-smoothing. A proper choice of threshold is therefore a careful balance between these two extremes. This paper gives a brief introduction to several threshold selection methods: Universal, SURE, EBayes, two-fold cross-validation, and level-dependent cross-validation. A simulation study over a variety of sample sizes, test functions, and signal-to-noise ratios is conducted to compare their numerical performance under three different noise structures. For Gaussian noise, EBayes performs best in all cases for all test functions, while two-fold cross-validation provides the best results in the case of long-tailed noise. For large signal-to-noise ratios, level-dependent cross-validation works well in the correlated-noise case. As expected, increasing both the sample size and the signal-to-noise ratio increases estimation efficiency.
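The universal threshold and the soft-thresholding rule it is usually paired with can be sketched directly; the coefficient vector below is a toy stand-in for one level of a wavelet decomposition.

```python
import numpy as np

def universal_threshold(coeffs, sigma=None):
    """The 'universal' (VisuShrink) threshold sigma * sqrt(2 log n); when sigma
    is unknown it is commonly estimated from the coefficients via the MAD."""
    coeffs = np.asarray(coeffs, dtype=float)
    if sigma is None:
        sigma = np.median(np.abs(coeffs)) / 0.6745   # robust noise-level estimate
    return sigma * np.sqrt(2.0 * np.log(coeffs.size))

def soft_threshold(coeffs, t):
    """Shrink toward zero: small (noise) coefficients are killed, large ones
    kept but reduced by t. Too large a t over-smooths; too small under-smooths."""
    coeffs = np.asarray(coeffs, dtype=float)
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - t, 0.0)

w = np.array([0.1, -0.2, 5.0, 0.05, -4.0])   # toy wavelet coefficients
t = universal_threshold(w, sigma=0.15)
w_hat = soft_threshold(w, t)                  # small coefficients zeroed out
```

SURE, EBayes, and the cross-validation rules compared in the paper differ only in how `t` is chosen (globally or per level); the shrinkage step itself is the same.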
Abstract: This paper proposes a new technique based on a nonlinear Minmax Detector Based (MDB) filter for image restoration. The aim of image enhancement is to reconstruct the true image from the corrupted image. The process of image acquisition frequently leads to degradation, and the quality of the digitized image becomes inferior to the original. Image degradation can be due to the addition of different types of noise to the original image. Image noise can be of many types, and impulse noise is one of them. Impulse noise generates pixels with gray values not consistent with their local neighborhood. It appears as a sprinkle of both light and dark spots, or of only light spots, in the image. Filtering is a technique for enhancing the image. In linear filtering, the value of an output pixel is a linear combination of neighborhood values, which can blur the image. Thus a variety of nonlinear smoothing techniques have been developed. The median filter is one of the most popular nonlinear filters. For a small neighborhood it is highly efficient, but for large windows and in cases of high noise it introduces more blurring into the image. The Center-Weighted Median (CWM) filter has a better average performance than the median filter; however, original pixels remain corrupted and noise reduction is limited under high-noise conditions, so this technique also has a blurring effect on the image. To illustrate the superiority of the proposed approach, the new scheme has been simulated alongside the standard ones, and various restoration performance measures have been compared.
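The trade-off between the plain median filter and the center-weighted median (the usual reading of "CWM") discussed above can be shown on a toy 1D signal; the window size and weight here are illustrative, not the paper's settings.

```python
import numpy as np

def cwm_filter_1d(x, weight=3):
    """Center-weighted median: the center sample is repeated `weight` times
    before taking the window median, so fine detail is better preserved.
    With weight=1 this reduces to the standard 3-point median filter."""
    x = np.asarray(x, dtype=float)
    out = x.copy()
    for i in range(1, len(x) - 1):
        window = [x[i - 1]] + [x[i]] * weight + [x[i + 1]]
        out[i] = np.median(window)
    return out

signal = np.array([10.0, 10.0, 255.0, 10.0, 10.0])   # one "salt" impulse
plain = cwm_filter_1d(signal, weight=1)              # standard median: impulse removed
cwm = cwm_filter_1d(signal, weight=3)                # CWM: impulse survives
```

The example makes the abstract's point concrete: the extra center weight that lets the CWM preserve detail is exactly what lets an impulse leak through under heavy noise, motivating a detector-based scheme like the proposed MDB filter.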
Abstract: In this paper, the sums of squares in linear regression are
extended to sums of squares in semi-parametric regression. We show
that the different sums of squares in linear regression correspond
to various deviance statistics in semi-parametric regression. In
addition, the coefficient of determination derived for the linear
regression model is easily generalized to a coefficient of
determination for the semi-parametric regression model. An
application is then presented to support the theory of linear and
semi-parametric regression, and the study is illustrated with a
simulated data example.
Abstract: In this paper we propose and examine additional
membership functions for Smoothing Transition Autoregressive
(STAR) models. More specifically, we present the hyperbolic
tangent, Gaussian, and generalized bell functions. Because
Smoothing Transition Autoregressive (STAR) models follow a fuzzy
logic approach, more fuzzy membership functions should be tested.
Furthermore, fuzzy rules can be incorporated, or other training or
computational methods, such as error backpropagation or genetic
algorithms, can be applied instead of nonlinear least squares. We
examine two macroeconomic variables of the US economy, the
inflation rate and the six-month treasury bill interest rate.