Abstract: The model-based approach to user interface design
relies on developing separate models capturing various aspects of
users, tasks, the application domain, and presentation and dialog structures.
This paper presents a task modeling approach for user interface
design and aims at exploring mappings between task, domain and
presentation models. The basic idea of our approach is to identify
typical configurations in task and domain models and to investigate
how they relate to each other. Special emphasis is put on application-specific
functions and on mappings between domain objects and
operational task structures. In this respect, we will address two
layers in task decomposition: a functional (planning) layer and an
operational layer.
Abstract: In this article we discuss improving multi-class
classification using the multilayer perceptron. The considered
approach consists in breaking down the n-class problem into
two-class subproblems. Each two-class subproblem is trained
independently; in the test phase, the vector to be classified is
confronted with all the two-class models, and the elected class is the
strongest one, the one that loses no competition against the other
classes. Recognition rates obtained with the multi-class approach by
two-class decomposition are clearly better than those obtained with
the plain multi-class approach.
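The pairwise decomposition-and-election scheme described above can be sketched generically. This is not the authors' MLP implementation; the pairwise models below are hypothetical threshold classifiers standing in for trained two-class networks:

```python
# One-vs-one decomposition of an n-class problem (illustrative sketch;
# the paper trains a multilayer perceptron per pair -- here each
# pairwise "model" is a hypothetical decision function that takes a
# feature value and returns which of its two classes it prefers).
from itertools import combinations

def ovo_predict(x, classes, pairwise_models):
    """Elect the class that wins every pairwise competition it enters;
    fall back to the class with the most wins if no class is unbeaten."""
    wins = {c: 0 for c in classes}
    for (a, b) in combinations(classes, 2):
        winner = pairwise_models[(a, b)](x)   # returns a or b
        wins[winner] += 1
    n = len(classes)
    for c in classes:
        if wins[c] == n - 1:                  # unbeaten: won all its duels
            return c
    return max(wins, key=wins.get)            # fallback: most wins

# Toy 1-D example with three classes separated by thresholds.
models = {
    (0, 1): lambda x: 0 if x < 1.0 else 1,
    (0, 2): lambda x: 0 if x < 2.0 else 2,
    (1, 2): lambda x: 1 if x < 3.0 else 2,
}
print(ovo_predict(0.5, [0, 1, 2], models))   # -> 0
print(ovo_predict(2.0, [0, 1, 2], models))   # -> 1
```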
Abstract: The wavelet transform, or wavelet analysis, is a recently
developed tool in applied mathematics. In numerical analysis,
wavelets also serve as a Galerkin basis for solving partial
differential equations. The Haar transform, or Haar wavelet transform,
is the simplest and earliest example of an orthonormal wavelet
transform. Owing to its popularity in wavelet analysis, several
definitions and various generalizations of, and algorithms for, the
Haar transform exist. The Fast Haar Transform (FHT) is one of the
algorithms that reduce the tedious calculations involved in the Haar
transform. In this paper, we present a modified fast and exact
algorithm for the FHT, namely the Modified Fast Haar Transform
(MFHT). The proposed algorithm allows certain calculations in the
decomposition process to be skipped without affecting the results.
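The baseline fast Haar transform that the MFHT modifies can be sketched as repeated pairwise averaging and differencing. This is a generic FHT under the common /2 normalisation convention, not the authors' modified algorithm:

```python
# Classical fast Haar transform (FHT): repeated pairwise averaging and
# differencing.  This is the baseline the paper's MFHT modifies (the
# MFHT skips some of these calculations); the /2 normalisation is one
# common convention among several.
def fht(x):
    """Return [overall average] + detail coefficients, coarse to fine.
    The length of x must be a power of two."""
    details = []
    while len(x) > 1:
        avg = [(x[2*i] + x[2*i+1]) / 2 for i in range(len(x) // 2)]
        dif = [(x[2*i] - x[2*i+1]) / 2 for i in range(len(x) // 2)]
        details = dif + details       # finer details go to the right
        x = avg
    return x + details

print(fht([4.0, 2.0, 5.0, 5.0]))     # -> [4.0, -1.0, 1.0, 0.0]
```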
Abstract: In this work we adopt a combination of the Laplace
transform and the decomposition method to find numerical solutions
of a system of multi-pantograph equations. The procedure leads to a
rapid convergence of the series to the exact solution after computing a
few terms. The effectiveness of the method is demonstrated in some
examples by obtaining the exact solution and in others by computing
the absolute error which decreases as the number of terms of the series
increases.
Abstract: In this work, we apply the Modified Laplace
decomposition algorithm to find a numerical solution of Blasius’
boundary-layer equation for the flat plate in a uniform stream. The
series solution is found by first applying the Laplace transform to the
differential equation and then decomposing the nonlinear term by the
use of Adomian polynomials. The resulting series, which is exactly the
same as that obtained by Weyl (1942a), is expressed as a rational
function by use of the diagonal Padé approximant.
Abstract: This article provides empirical evidence on the effect
of domestic and international factors on the U.S. current account
deficit. Linear dynamic regression and vector autoregression models
are employed to estimate the relationships during the period from 1986
to 2011. The findings of this study suggest that the current and lagged
private saving rate and foreign current account for East Asian
economies have played a vital role in affecting the U.S. current
account. Additionally, using Granger causality tests and variance
decompositions, the changes in productivity growth and foreign
domestic demand are found to significantly influence the change in
the U.S. current account. To summarize, the empirical relationship
between the U.S. current account deficit and its determinants is
sensitive to alternative regression models and specifications.
Abstract: Intermetallic Ni3Al-based alloys belong to a group
of advanced materials characterized by good chemical and physical
properties (such as structural stability and corrosion resistance) which
offer advanced technological applications. The paper presents a
study of the catalytic properties of Ni3Al foils (thickness approximately
50 µm) in methanol and hexane decomposition. The examined
material possesses a microcrystalline structure without any additional
catalysts on the surface. The superior catalytic activity of Ni3Al foils
with respect to quartz plates in both methanol and hexane
decomposition was confirmed. On thin Ni3Al foils the methanol
conversion reaches approximately 100% above 480 °C, while the
hexane conversion reaches approximately 100% (98.5%) at 500 °C.
The deposit formed during methanol decomposition is built up of
carbon nanofibers decorated with metal-like nanoparticles.
Abstract: Hydrogen peroxide treatment was able to
remediate soil contaminated with chlorophenols, polycyclic aromatic
hydrocarbons, diesel and transformer oil. Chemical treatment of
contaminants adsorbed in peat resulted in lower contaminant
removal and required a higher addition of chemicals than the treatment
of contaminants in sand. The hydrogen peroxide treatment was found
to be feasible for soil remediation at natural soil pH. Contaminants in
soil could degrade with the addition of hydrogen peroxide alone,
indicating the ability of transition metal ions, and minerals of these
metals, present in soil to catalyse the decomposition of hydrogen
peroxide.
Abstract: The current era is characterised by strengthened interactions among financial markets and increased capital mobility globally. Within this frame we examine the effects of the international financial integration process on the European bond markets. We perform a comparative study of the interactions of the European and international bond markets and exploit cointegration analysis results on the elimination of stochastic trends and the decomposition of the underlying long-run equilibria and short-run causal relations. Our investigation provides evidence on the relation between the European integration process and that of globalisation, viewed through the bond markets sector. Additionally, the structural formulation applied offers significant implications for the findings. All in all, our analysis offers a number of answers to crucial queries concerning the European bond markets integration process.
Abstract: In 1990 [1] the subband-DFT (SB-DFT) technique was proposed. This technique uses Hadamard filters in the decomposition step to split the input sequence into low- and high-pass sequences. In the next step, either two DFTs are performed on both bands to compute the full-band DFT, or one DFT on one of the two bands to compute an approximate DFT. A combination network with correction factors is applied after the DFTs. Another approach was proposed in 1997 [2], using a special discrete wavelet transform (DWT) to compute the discrete Fourier transform (DFT). In the first step of that algorithm, the input sequence is decomposed, in a manner similar to the SB-DFT, into two sequences using wavelet decomposition with Haar filters. The second step is to perform DFTs on both bands to obtain the full-band DFT, or to obtain a fast approximate DFT by pruning at both the input and output sides. In this paper, the wavelet-based DFT (W-DFT) with Haar filters is interpreted as the SB-DFT with Hadamard filters. The only difference is a constant factor in the combination network. This result is important because it completes the analysis of the W-DFT: all results concerning accuracy and approximation errors in the SB-DFT become applicable. An application example in spectral analysis is given for both the SB-DFT and the W-DFT (with different filters). The adaptive capability of the SB-DFT is included in the W-DFT algorithm to select the band of most energy as the band to be computed. Finally, the W-DFT is extended to the two-dimensional case. An application in image transformation is given using two different types of wavelet filters.
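The Haar/Hadamard subband split and the combination network described above can be sketched as follows. This is a generic decimation-style recombination with unnormalised sum/difference bands; the exact placement of the constant factor is where, per the abstract, the SB-DFT and W-DFT differ:

```python
# Subband decomposition of an N-point DFT with Haar/Hadamard-style
# filters: split x into sum (low-pass) and difference (high-pass)
# half-bands, take two N/2-point DFTs, and recombine them into the
# exact full-band DFT through a combination network.
import cmath

def dft(x):
    """Naive reference DFT."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                for n in range(N)) for k in range(N)]

def subband_dft(x):
    N = len(x)
    low  = [x[2*n] + x[2*n+1] for n in range(N // 2)]   # sum band
    high = [x[2*n] - x[2*n+1] for n in range(N // 2)]   # difference band
    L, H = dft(low), dft(high)
    X = [0j] * N
    for k in range(N // 2):
        e = (L[k] + H[k]) / 2            # DFT of even-indexed samples
        o = (L[k] - H[k]) / 2            # DFT of odd-indexed samples
        w = cmath.exp(-2j * cmath.pi * k / N)
        X[k]          = e + w * o        # decimation-in-time butterfly
        X[k + N // 2] = e - w * o
    return X

x = [1.0, 2.0, 0.5, -1.0, 3.0, 0.0, -2.0, 1.5]
err = max(abs(a - b) for a, b in zip(subband_dft(x), dft(x)))
print(err < 1e-9)                        # full-band DFT recovered -> True
```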
Abstract: This paper deals with the current space-vector
decomposition in three-phase, three-wire systems on the basis of
some case studies. We propose four components of the current
space-vector in terms of the DC and AC components of the instantaneous
active and reactive powers. The notion of a supplementary useless
current vector is also pointed out. The analysis shows that the current
decomposition which respects the definition of the instantaneous
apparent power vector is useful for compensation reasons only if the
supply voltages are sinusoidal. A modified definition of the
components of the current is proposed for the operation under
nonsinusoidal voltage conditions.
Abstract: Fixed-point simulation results are used to measure the performance of matrix inversion using a reconfigurable processing element. Matrices are inverted using the Cholesky decomposition algorithm. The reconfigurable processing element is capable of all required mathematical operations. The fixed-point word-length analysis is based on simulations with different condition numbers and different matrix sizes.
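Cholesky-based inversion of a symmetric positive-definite matrix can be sketched as follows. Floating point is used here for illustration, whereas the paper's analysis is fixed-point; the solve-per-column structure is one common choice, not necessarily the processing element's:

```python
# Matrix inversion via Cholesky decomposition: factor A = L L^T, then
# solve A X = I column by column with one forward and one backward
# substitution each.  Valid for symmetric positive-definite A.
def cholesky(A):
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = (A[i][i] - s) ** 0.5
            else:
                L[i][j] = (A[i][j] - s) / L[j][j]
    return L

def invert_spd(A):
    n = len(A)
    L = cholesky(A)
    inv = [[0.0] * n for _ in range(n)]
    for col in range(n):
        # forward substitution: L y = e_col
        y = [0.0] * n
        for i in range(n):
            y[i] = ((1.0 if i == col else 0.0)
                    - sum(L[i][k] * y[k] for k in range(i))) / L[i][i]
        # backward substitution: L^T x = y
        x = [0.0] * n
        for i in reversed(range(n)):
            x[i] = (y[i] - sum(L[k][i] * x[k]
                               for k in range(i + 1, n))) / L[i][i]
        for i in range(n):
            inv[i][col] = x[i]
    return inv

A = [[4.0, 2.0], [2.0, 3.0]]
Ainv = invert_spd(A)
# A * Ainv should be close to the identity
prod = [[sum(A[i][k] * Ainv[k][j] for k in range(2)) for j in range(2)]
        for i in range(2)]
print(all(abs(prod[i][j] - (1.0 if i == j else 0.0)) < 1e-12
          for i in range(2) for j in range(2)))   # -> True
```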
Abstract: An ecofriendly Citrus paradisi peel extract mediated synthesis of TiO2 nanoparticles under sonication is reported. UV-vis spectroscopy, transmission electron microscopy, dynamic light scattering, and X-ray analyses are performed to characterize the formation of the TiO2 nanoparticles. The particles are almost spherical in shape, with sizes of 60–140 nm, and the XRD peak at 2θ = 25.363° confirms the characteristic facets of the anatase form. The synthesized nanocatalyst is highly active in the decomposition of methyl orange (64 mg/L) in sunlight (~73%) over 2.5 h.
Abstract: SDMA (Space-Division Multiple Access) is a MIMO
(Multiple-Input and Multiple-Output) based wireless communication
network architecture which has the potential to significantly increase
the spectral efficiency and the system performance. The maximum
likelihood (ML) detection provides the optimal performance, but its
complexity increases exponentially with the constellation size of
modulation and the number of users. The QR decomposition (QRD)
MUD can be a substitute for ML detection due to its low complexity
and near-optimal performance. The minimum mean-squared-error
(MMSE) multiuser detection (MUD) minimises the mean square
error (MSE), which does not guarantee that the BER of the
system is also minimum. The minimum bit error rate (MBER)
MUD, however, performs better than the classic MMSE MUD in terms of
minimum probability of error by directly minimising the BER cost
function. The MBER MUD is also able to support more users than
the number of receiving antennas, whereas the other MUDs fail in
this scenario. In this paper the performance of various MUD
techniques is verified for correlated MIMO channel models based
on the IEEE 802.16n standard.
Abstract: Powder of La0.6Sr0.4CoO3-α (LSCO) was synthesized
by a combined citrate-EDTA method. The as-synthesized LSCO
powder was calcined at temperatures of 800, 900 and
1000 °C with different heating/cooling rates of 2, 5, 10 and
15 °C min-1. The effects of these heat treatments on the formation of
the perovskite phase of LSCO were investigated by powder X-ray
diffraction (XRD). The XRD patterns revealed that the rate of
5 °C min-1 is the optimum heating/cooling rate to obtain a single
perovskite phase of LSCO with calcination temperature of 800 °C.
This result was confirmed by thermogravimetric analysis (TGA),
which showed a complete decomposition of the intermediate compounds
to form the oxide material at 800 °C.
Abstract: In a previous work, we presented the numerical
solution of the two dimensional second order telegraph partial
differential equation discretized by the centred and rotated five-point
finite difference discretizations, namely the explicit group (EG) and
explicit decoupled group (EDG) iterative methods, respectively. In
this paper, we utilize a domain decomposition algorithm on these
group schemes to divide the tasks involved in solving the same
equation. The objective of this study is to describe the development
of the parallel group iterative schemes under OpenMP programming
environment as a way to reduce the computational costs of the
solution processes using multicore technologies. A detailed
performance analysis of the parallel implementations of the point and
group iterative schemes is reported and discussed.
Abstract: This paper introduces a new signal denoising method based on the empirical mode decomposition (EMD) framework. The method is a fully data-driven approach. The noisy signal is decomposed adaptively into oscillatory components called intrinsic mode functions (IMFs) by means of a process called sifting. EMD denoising involves filtering or thresholding each IMF and reconstructing the estimated signal from the processed IMFs. The EMD can be combined with a filtering approach or with a nonlinear transformation. In this work the Savitzky-Golay filter and soft-thresholding are investigated. For thresholding, IMF samples are shrunk or scaled below a threshold value. The standard deviation of the noise is estimated for every IMF, and the threshold is derived for Gaussian white noise. The method is tested on simulated and real data and compared with averaging, median and wavelet approaches.
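The per-IMF thresholding step can be sketched in isolation. A MAD-based noise estimate combined with the universal threshold is assumed here as one common choice; the sifting process that produces the IMFs is omitted:

```python
# Per-IMF soft-thresholding sketch: estimate the noise level of an IMF
# with the robust median-absolute-deviation (MAD) estimator, derive a
# universal threshold for Gaussian white noise, and soft-threshold the
# samples (shrink toward zero, kill anything below the threshold).
import math

def soft_threshold_imf(imf):
    med = sorted(abs(v) for v in imf)[len(imf) // 2]
    sigma = med / 0.6745                              # MAD noise std estimate
    T = sigma * math.sqrt(2.0 * math.log(len(imf)))   # universal threshold
    return [math.copysign(max(abs(v) - T, 0.0), v) for v in imf]

imf = [0.1, -0.05, 2.0, 0.08, -1.5, 0.02, 0.07, -0.04]
print(soft_threshold_imf(imf))   # small samples go to 0, large ones shrink
```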
Abstract: Speckle noise affects all coherent imaging systems
including medical ultrasound. In medical images, noise suppression
is a particularly delicate and difficult task. A tradeoff between noise
reduction and the preservation of actual image features has to be made
in a way that enhances the diagnostically relevant image content.
Even though wavelets have been extensively used for denoising
speckle images, we have found that denoising using contourlets gives
much better performance in terms of SNR, PSNR, MSE, variance and
correlation coefficient. The objective of the paper is to determine the
number of levels of Laplacian pyramidal decomposition, the number
of directional decompositions to perform on each pyramidal level, and
the thresholding schemes that yield optimal despeckling of medical
ultrasound images in particular. In the proposed method, the
log-transformed original ultrasound image is subjected to the contourlet
transform to obtain contourlet coefficients. The transformed
image is denoised by applying thresholding techniques to the individual
bandpass subbands using a Bayes shrinkage rule. We quantify the
achieved performance improvement.
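The Bayes shrinkage rule applied to a subband can be sketched in isolation. The contourlet transform itself is omitted, and the noise level sigma_n is assumed given, e.g. estimated from the finest subband:

```python
# BayesShrink-style thresholding of one bandpass subband: threshold
# T = sigma_n^2 / sigma_x, where sigma_x is the estimated noise-free
# signal standard deviation of the subband.
import math

def bayes_shrink_threshold(subband, sigma_n):
    var_y = sum(v * v for v in subband) / len(subband)
    sigma_x = math.sqrt(max(var_y - sigma_n ** 2, 0.0))
    if sigma_x == 0.0:
        return max(abs(v) for v in subband)   # pure noise: kill everything
    return sigma_n ** 2 / sigma_x

def denoise_subband(subband, sigma_n):
    T = bayes_shrink_threshold(subband, sigma_n)
    return [math.copysign(max(abs(v) - T, 0.0), v) for v in subband]

band = [3.0, -2.5, 0.2, -0.1, 2.8, 0.05]
print(denoise_subband(band, sigma_n=0.5))   # weak coefficients suppressed
```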
Abstract: In this paper, we focus on the fusion of images from
different sources using multiresolution wavelet transforms. Based on
a review of popular image fusion techniques used in data analysis,
different pixel- and energy-based methods are evaluated experimentally. A novel
architecture with a hybrid algorithm is proposed which applies pixel
based maximum selection rule to low frequency approximations and
filter mask based fusion to high frequency details of wavelet
decomposition. The key feature of hybrid architecture is the
combination of advantages of pixel and region based fusion in a
single image which can help the development of sophisticated
algorithms enhancing the edges and structural details. A Graphical
User Interface is developed for image fusion to make the research
outcomes available to the end user. To utilize GUI capabilities for
medical, industrial and commercial activities without MATLAB
installation, a standalone executable application is also developed
using Matlab Compiler Runtime.
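The hybrid rule, maximum selection on approximations and a magnitude-based rule on details, can be sketched on a one-level 1-D Haar decomposition. The paper works on 2-D images and uses a filter-mask rule for the details; plain absolute-maximum selection stands in for it here:

```python
# Wavelet-domain fusion sketch: decompose two signals with one Haar
# level, take the pixel-wise maximum of the approximations and the
# absolute-maximum of the details, then reconstruct the fused signal.
def haar1(x):
    a = [(x[2*i] + x[2*i+1]) / 2 for i in range(len(x) // 2)]
    d = [(x[2*i] - x[2*i+1]) / 2 for i in range(len(x) // 2)]
    return a, d

def ihaar1(a, d):
    out = []
    for ai, di in zip(a, d):
        out += [ai + di, ai - di]
    return out

def fuse(x1, x2):
    a1, d1 = haar1(x1)
    a2, d2 = haar1(x2)
    a = [max(p, q) for p, q in zip(a1, a2)]                     # max on approximations
    d = [p if abs(p) >= abs(q) else q for p, q in zip(d1, d2)]  # abs-max on details
    return ihaar1(a, d)

# Fused output keeps the brighter background AND the strong edge.
print(fuse([1.0, 1.0, 8.0, 2.0], [5.0, 5.0, 2.0, 2.0]))  # -> [5.0, 5.0, 8.0, 2.0]
```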
Abstract: In this paper, the implementation of low-power,
high-throughput convolutional filters for the one-dimensional
Discrete Wavelet Transform and its inverse is presented. The
analysis filters have already been used for the implementation of a
high performance DWT encoder [15] with minimum memory
requirements for the JPEG 2000 standard. This paper presents the
design techniques and the implementation of the convolutional filters
included in the JPEG2000 standard for the forward and inverse DWT
for achieving low-power operation, high performance and reduced
memory accesses. Moreover, they have the ability of performing
progressive computations so as to minimize the buffering between
the decomposition and reconstruction phases. The experimental
results illustrate the filters' low-power, high-throughput characteristics
as well as their memory-efficient operation.
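As an illustration of the kind of filter pair involved, the reversible 5/3 lifting steps from the JPEG 2000 standard can be sketched as follows. Integer arithmetic with symmetric border extension makes reconstruction exact; this shows the standard's filter, not the paper's hardware implementation:

```python
# Reversible 5/3 wavelet (JPEG 2000 lossless path) as two lifting
# steps, plus its exact inverse.  Symmetric extension at the borders;
# integer arithmetic throughout, so decomposition followed by
# reconstruction is bit-exact.
def dwt53(x):
    """One forward level for an even-length integer signal x."""
    n = len(x)
    get = lambda i: x[i] if i < n else x[2 * n - 2 - i]   # mirror border
    # predict step: high-pass (detail) coefficients
    d = [get(2*i + 1) - (get(2*i) + get(2*i + 2)) // 2 for i in range(n // 2)]
    # update step: low-pass (approximation) coefficients
    s = [x[2*i] + (d[max(i - 1, 0)] + d[i] + 2) // 4 for i in range(n // 2)]
    return s, d

def idwt53(s, d):
    """Exact inverse: undo the update step, then the predict step."""
    m = len(s)
    x = [0] * (2 * m)
    for i in range(m):
        x[2*i] = s[i] - (d[max(i - 1, 0)] + d[i] + 2) // 4
    for i in range(m):
        right = x[2*i + 2] if i < m - 1 else x[2*m - 2]   # mirror border
        x[2*i + 1] = d[i] + (x[2*i] + right) // 2
    return x

x = [5, 3, 2, 8, 7, 1, 0, 4]
s, d = dwt53(x)
print(idwt53(s, d) == x)      # perfect reconstruction -> True
```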