Abstract: In the past, research on recommender systems focused mostly on electronic commerce. However, as all sectors actively promote the integration of information technology into instruction, the number of instructional-resource websites on the Internet keeps growing, yet few of them offer a recommendation service, especially one for teachers. This study built an instructional-resource recommendation website that analyzes a teacher's teaching style and then immediately recommends appropriate instructional resources. A questionnaire survey was used to collect teachers' suggestions and their satisfaction with the resource contents and the recommendation results. The study shows that: (1) using the "Transactional Ability Inventory", the website identifies a teacher's style and provides appropriate instructional resources in a short time, reducing the effort of filtering data; (2) according to the content-satisfaction part of the survey, teachers of all four styles were largely satisfied with the contents of the recommended resources, so the idea of developing instructional resources for different teaching styles is supported; (3) according to the recommendation-satisfaction part of the survey, teachers of all four styles were largely satisfied with the recommendation service, so the strategy of providing different results for teachers with different teaching styles is supported.
Abstract: For about two decades, scientists have been developing techniques for enhancing the quality of medical images using the Fourier transform, the Discrete Wavelet Transform (DWT), PDE models, and similar tools. In this work, a Gabor wavelet on a hexagonally sampled grid of the image is proposed. The method has optimal approximation-theoretic performance and yields good image quality, while its computational cost is considerably lower than that of similar processing in the rectangular domain. Because X-ray images contain light-scattered pixels, a range of the parameter sigma from 0.5 to 3, rather than a single value, is found to satisfy most image-interpolation requirements in terms of higher Peak Signal-to-Noise Ratio (PSNR), lower Mean Squared Error (MSE), and better image quality when a windowing technique is adopted.
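The abstract does not give the exact kernel, so the following is only a sketch of a Gabor function sampled on a hexagonal lattice (alternate rows offset by half the spacing, row pitch spacing*sqrt(3)/2); the wavelength, orientation, and lattice-spacing values are illustrative assumptions, not the paper's parameters.

```python
import math

def gabor(x, y, sigma=1.0, lam=4.0, theta=0.0, gamma=1.0):
    """Real-valued Gabor function evaluated at a continuous point (x, y)."""
    xr = x * math.cos(theta) + y * math.sin(theta)
    yr = -x * math.sin(theta) + y * math.cos(theta)
    envelope = math.exp(-(xr * xr + gamma * gamma * yr * yr) / (2.0 * sigma * sigma))
    return envelope * math.cos(2.0 * math.pi * xr / lam)

def hexagonal_gabor_kernel(radius=3, spacing=1.0, sigma=1.0, lam=4.0):
    """Sample the Gabor function on a hexagonal lattice: every other row is
    shifted by half the horizontal spacing; rows are spacing*sqrt(3)/2 apart."""
    kernel = {}
    for row in range(-radius, radius + 1):
        y = row * spacing * math.sqrt(3.0) / 2.0
        offset = (spacing / 2.0) if row % 2 else 0.0
        for col in range(-radius, radius + 1):
            x = col * spacing + offset
            kernel[(row, col)] = gabor(x, y, sigma=sigma, lam=lam)
    return kernel

kernel = hexagonal_gabor_kernel()
center = kernel[(0, 0)]  # envelope and cosine both peak at the lattice origin
```

In practice one kernel per value of sigma in the 0.5 to 3 range mentioned above would be generated and the responses combined or selected per region.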
Abstract: Nowadays, multimedia data are transmitted and processed in compressed form. Because of the decoding procedure and the filtering needed for edge detection, feature extraction for the MPEG-7 Edge Histogram Descriptor is time-consuming and computationally expensive. To improve the efficiency of compressed-image retrieval, this paper proposes a new edge-histogram generation algorithm that works in the DCT domain. Using the edge information carried by only two AC coefficients among the DCT coefficients, edge directions and strengths can be obtained directly in the DCT domain. Experimental results demonstrate that the system performs well in terms of retrieval efficiency and effectiveness.
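The abstract does not say which two AC coefficients are used; a common assumption in DCT-domain edge analysis is F(0,1) and F(1,0) of each 8x8 block, which capture horizontal and vertical intensity variation respectively. The sketch below classifies a block under that assumption, with an illustrative strength threshold and angle bins loosely modelled on the five MPEG-7 EHD categories (the diagonal sign convention is also illustrative).

```python
import math

# The five MPEG-7 EHD edge categories.
EDGE_TYPES = ("vertical", "horizontal", "diag45", "diag135", "nondirectional")

def edge_from_ac(f01, f10, strength_threshold=8.0):
    """Classify a block's edge from two AC coefficients.
    f01 captures horizontal intensity variation (suggesting a vertical edge);
    f10 captures vertical variation (suggesting a horizontal edge)."""
    strength = math.hypot(f01, f10)
    if strength < strength_threshold:
        return "nondirectional", strength
    angle = math.degrees(math.atan2(abs(f10), abs(f01)))  # 0..90 degrees
    if angle < 22.5:
        return "vertical", strength
    if angle < 67.5:
        return "diag45" if f01 * f10 < 0 else "diag135", strength
    return "horizontal", strength

def edge_histogram(blocks):
    """Accumulate an edge histogram over (f01, f10) pairs, one pair per block."""
    hist = {t: 0 for t in EDGE_TYPES}
    for f01, f10 in blocks:
        kind, _ = edge_from_ac(f01, f10)
        hist[kind] += 1
    return hist
```

The point of the approach above is that these two coefficients are available without full decoding, so the per-block classification skips the inverse DCT entirely.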
Abstract: Many real-world data sets have a very high dimensional feature space. Most clustering techniques use the distance or similarity between objects to build clusters, but in high-dimensional spaces distances between points become relatively uniform; in such cases, density-based approaches may give better results. Subspace clustering algorithms automatically identify lower-dimensional subspaces of the high-dimensional feature space in which clusters exist. In this paper, we propose a new clustering algorithm, ISC (Intelligent Subspace Clustering), which tries to overcome three major limitations of existing state-of-the-art techniques. First, ISC determines input parameters such as the ε-distance at each level of subspace clustering, which helps in finding meaningful clusters, since a uniform-parameter approach is not suitable for different kinds of databases. Second, ISC determines meaningful clustering parameters dynamically and adaptively, based on a hierarchical filtering approach. Third, and most important, ISC supports incremental learning and the dynamic inclusion and exclusion of subspaces, which leads to better cluster formation.
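ISC's actual parameter-determination procedure is not specified in the abstract; one plausible building block for an adaptive ε-distance is the k-distance heuristic sketched below, where ε is drawn from the distribution of k-th-nearest-neighbour distances in the current subspace rather than fixed uniformly. The function names and the choice of the median are illustrative assumptions.

```python
def kth_neighbor_distance(points, k):
    """For each point, the distance to its k-th nearest neighbour (brute force)."""
    dists = []
    for i, p in enumerate(points):
        d = sorted(
            sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
            for j, q in enumerate(points) if j != i
        )
        dists.append(d[k - 1])
    return dists

def adaptive_epsilon(points, k=3):
    """Pick epsilon as the median k-distance, so the density threshold adapts
    to the scale of the data instead of using one uniform parameter."""
    d = sorted(kth_neighbor_distance(points, k))
    n = len(d)
    return d[n // 2] if n % 2 else 0.5 * (d[n // 2 - 1] + d[n // 2])
```

Recomputing ε this way at each level of the subspace hierarchy is one way to realize the "dynamic and adaptive" parameter determination described above.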
Abstract: A noteworthy point in the advancement of Brain Machine Interface (BMI) research is the ability to accurately extract features of brain signals and to classify them into targeted control actions with the simplest possible procedures, since the expected beneficiaries are disabled people. In this paper, a new feature extraction method combining adaptive band-pass filters and adaptive autoregressive (AAR) modelling is proposed and applied to the classification of right and left motor-imagery signals extracted from the brain. The adaptive band-pass filter improves the characterization of the autocorrelation functions of the AAR models, as it enhances and strengthens the EEG signal, which is noisy and stochastic in nature. Experimental results on the Graz BCI data set show that, with the proposed feature extraction method, LDA and SVM classifiers outperform other AAR approaches of the BCI 2003 competition in terms of the mutual information (the competition criterion) and the misclassification rate.
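Adaptive AR coefficients are tracked sample by sample with a recursive estimator; a minimal sketch using a plain LMS update for a second-order model is shown below. This is an illustrative stand-in: the paper's exact adaptation rule (e.g. an RLS- or Kalman-based AAR estimator) may differ, and the test signal here is a sinusoid, which exactly satisfies a second-order recursion.

```python
import math

def lms_ar2(signal, mu=0.005):
    """Track AR(2) coefficients with an LMS update: predict x[n] from the two
    previous samples and nudge the coefficients along the error gradient."""
    a1 = a2 = 0.0
    errors = []
    for n in range(2, len(signal)):
        pred = a1 * signal[n - 1] + a2 * signal[n - 2]
        e = signal[n] - pred
        a1 += mu * e * signal[n - 1]
        a2 += mu * e * signal[n - 2]
        errors.append(e)
    return (a1, a2), errors

# A pure sinusoid exactly satisfies x[n] = 2*cos(w)*x[n-1] - x[n-2],
# so the tracked coefficients drift toward (2*cos(w), -1).
w = 0.6
x = [math.cos(w * n) for n in range(4000)]
(a1, a2), errors = lms_ar2(x)
```

In a BCI pipeline the trajectory of (a1, a2) over a trial, not the prediction itself, would serve as the feature vector fed to the classifier.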
Abstract: Diagnosis and detection of arterial stiffness is very important, as it indicates an increased risk of cardiovascular disease. To provide a cheap and easy general screening technique that helps avoid future cardiovascular complications caused by rising arterial stiffness, an algorithm based on the photoplethysmogram is proposed. The photoplethysmograph signals are processed in MATLAB: the signal is filtered, baseline wander is removed, peaks and valleys are detected, and the signal is normalized. The area under the catacrotic phase of the photoplethysmogram pulse curve is calculated with the trapezoidal rule and then used, together with other parameters such as age, height, and blood pressure, as input to a neural network for arterial-stiffness detection. The neural network achieved a sensitivity of 80%, an accuracy of 85%, and a specificity of 90% on the patient data. It is concluded that a neural network can detect arterial stiffness from these risk-factor parameters.
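The preprocessing chain described above can be sketched in a few lines of plain Python; the moving-average baseline estimate and the window length are illustrative stand-ins for whatever MATLAB routines the authors used, and the band-pass filtering step is omitted.

```python
def moving_average(x, win):
    """Centered moving average used as a crude baseline estimate."""
    half = win // 2
    out = []
    for i in range(len(x)):
        seg = x[max(0, i - half):i + half + 1]
        out.append(sum(seg) / len(seg))
    return out

def remove_baseline(x, win=51):
    """Subtract a slow moving-average trend to remove baseline wander."""
    base = moving_average(x, win)
    return [xi - bi for xi, bi in zip(x, base)]

def find_peaks(x):
    """Indices of strict local maxima (valleys: apply to the negated signal)."""
    return [i for i in range(1, len(x) - 1) if x[i - 1] < x[i] > x[i + 1]]

def trapezoid_area(x, start, end, dt=1.0):
    """Trapezoidal-rule area under x between two sample indices."""
    seg = x[start:end + 1]
    return dt * (sum(seg) - 0.5 * (seg[0] + seg[-1]))
```

The area of the catacrotic phase would be `trapezoid_area` evaluated from a pulse peak to the following valley, and that number joins age, height, and blood pressure as one input of the neural network.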
Abstract: In this paper, we propose a new class of Volterra-series-based filters for image enhancement and restoration. Linear filters generally reduce noise but cause blurring at the edges, while nonlinear filters based on median or rank operators deal only with impulse noise and fail to cancel the most common noise, which is Gaussian distributed. A class of second-order Volterra filters is proposed to optimize the trade-off between noise removal and edge preservation. We consider both Gaussian and mixed Gaussian-impulse noise to test the robustness of the filter. Image enhancement and restoration results using the proposed Volterra filter are found to be superior to those obtained with standard linear and nonlinear filters.
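A minimal one-dimensional illustration of the second-order Volterra structure named above is given below; the kernels h1 and h2 are left as free parameters rather than the optimized kernels of the paper, and the real filter operates on 2-D image neighbourhoods.

```python
def volterra2(x, h1, h2):
    """Second-order Volterra filter on a 1-D signal:
    y[n] = sum_i h1[i]*x[n-i] + sum_i sum_j h2[i][j]*x[n-i]*x[n-j]."""
    m = len(h1)
    y = []
    for n in range(len(x)):
        past = [x[n - i] if n - i >= 0 else 0.0 for i in range(m)]
        lin = sum(h1[i] * past[i] for i in range(m))
        quad = sum(h2[i][j] * past[i] * past[j]
                   for i in range(m) for j in range(m))
        y.append(lin + quad)
    return y
```

With h2 set to zero the filter reduces to an ordinary linear FIR filter; the quadratic kernel is what gives the class its extra freedom to suppress noise without smearing edges.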
Abstract: In this paper, we seek the optimal multiwavelet for compression of electrocardiogram (ECG) signals. At present, it is not well known which multiwavelet is the best choice for optimal ECG compression. In this work, we examine different multiwavelets on 24 sets of ECG data with entirely different characteristics, selected from the MIT-BIH database. To assess how well the different multiwavelets compress ECG signals, in addition to factors known from the compression literature, such as the Compression Ratio (CR), Percent Root Difference (PRD), Distortion (D), and Root Mean Square Error (RMSE), we also employ the Cross Correlation (CC) criterion, to study the morphological relation between the reconstructed and the original ECG signal, and the Signal-to-reconstruction-Noise Ratio (SNR). The simulation results show that cardbal2 with identity (Id) prefiltering is the most effective transformation.
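The evaluation criteria named above have standard closed forms; a sketch of CR, PRD, and zero-lag cross-correlation follows. The exact normalizations used in the paper (e.g. whether the mean is removed in the PRD denominator) may differ.

```python
import math

def compression_ratio(original_bits, compressed_bits):
    """CR: how many times smaller the compressed representation is."""
    return original_bits / compressed_bits

def prd(x, y):
    """Percent Root Difference between original x and reconstruction y."""
    num = sum((a - b) ** 2 for a, b in zip(x, y))
    den = sum(a * a for a in x)
    return 100.0 * math.sqrt(num / den)

def cross_correlation(x, y):
    """Normalized cross-correlation of the two signals at zero lag,
    used here to compare the morphology of original and reconstruction."""
    mx = sum(x) / len(x)
    my = sum(y) / len(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))
    return num / den
```

A perfect reconstruction gives PRD 0 and CC 1; the multiwavelets are then ranked by how close they stay to those limits at a given CR.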
Abstract: In this paper, a second-order autoregressive (AR) model is proposed to discriminate alcoholics from controls using single-trial gamma-band Visual Evoked Potential (VEP) signals and three different classifiers: a Simplified Fuzzy ARTMAP (SFA) neural network (NN), a Multilayer-Perceptron Backpropagation (MLP-BP) NN, and Linear Discriminant (LD) analysis. Electroencephalogram (EEG) signals were recorded from alcoholic and control subjects during the presentation of visuals from the Snodgrass and Vanderwart picture set. Single-trial VEP signals were extracted from the EEG signals using elliptic filtering in the gamma-band spectral range. A second-order AR model was used because gamma-band VEP exhibits pseudo-periodic behaviour, which a second-order AR model represents optimally; this circumvents the need for a criterion to choose the correct model order. Averaged discrimination errors of 2.6%, 2.8% and 11.9% were given by the LD, MLP-BP and SFA classifiers, respectively. The high LD discrimination accuracy shows the validity of the proposed method for discriminating between alcoholic and control subjects.
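Fitting a second-order AR model reduces to the order-2 Yule-Walker equations. The sketch below solves them in closed form and checks them on the exact autocorrelation of a pure sinusoid, the simplest pseudo-periodic case (this is illustrative; the paper's estimator is not specified in the abstract).

```python
import math

def yule_walker_ar2(r0, r1, r2):
    """Solve the order-2 Yule-Walker equations
         r1 = a1*r0 + a2*r1
         r2 = a1*r1 + a2*r0
    for the AR(2) coefficients (a1, a2) by Cramer's rule."""
    det = r0 * r0 - r1 * r1
    a1 = (r1 * r0 - r1 * r2) / det
    a2 = (r0 * r2 - r1 * r1) / det
    return a1, a2

# For a sinusoid of frequency w the autocorrelation is proportional to
# cos(w*k), and the AR(2) solution is exactly a1 = 2*cos(w), a2 = -1.
w = 0.6
a1, a2 = yule_walker_ar2(1.0, math.cos(w), math.cos(2 * w))
```

The pole pair implied by (a1, a2) sits on the unit circle at angle w, which is why a second-order model is a natural fit for a pseudo-periodic gamma-band oscillation.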
Abstract: The Discrete Wavelet Transform (DWT) has proved far superior to the earlier Discrete Cosine Transform (DCT) and standard JPEG in both natural and medical image compression. Owing to its localization properties in both the spatial and the transform domain, quantization error introduced in the DWT does not propagate globally as it does in the DCT; moreover, the DWT is a global approach that avoids the block artifacts of JPEG. However, recent reports on natural-image compression have shown the superior performance of the contourlet transform, a new two-dimensional extension of the wavelet transform using nonseparable and directional filter banks, compared with the DWT. This is mostly due to the optimality of contourlets in representing edges that are smooth curves. In this work, we investigate this question for medical images, especially CT images, for which it has not yet been reported. To do so, we propose a compression scheme in the transform domain and compare the performance of the DWT and the contourlet transform in PSNR at different compression ratios (CR) using this scheme. Results obtained on different types of computed-tomography images show that the DWT still performs well at lower CRs, but the contourlet transform performs better at higher CRs.
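A toy version of such a transform-domain compression scheme, using a one-level 1-D Haar DWT with detail thresholding and PSNR evaluation, can be sketched as follows. The actual scheme operates on 2-D images with multi-level wavelet and contourlet decompositions; this sketch only shows the shared threshold-and-reconstruct skeleton.

```python
import math

def haar_1d(x):
    """One level of the orthonormal 1-D Haar DWT: approximation and detail."""
    s = math.sqrt(2.0)
    approx = [(x[2 * i] + x[2 * i + 1]) / s for i in range(len(x) // 2)]
    detail = [(x[2 * i] - x[2 * i + 1]) / s for i in range(len(x) // 2)]
    return approx, detail

def ihaar_1d(approx, detail):
    """Inverse of haar_1d."""
    s = math.sqrt(2.0)
    x = []
    for a, d in zip(approx, detail):
        x.append((a + d) / s)
        x.append((a - d) / s)
    return x

def compress(x, threshold):
    """Zero out small detail coefficients, then reconstruct."""
    a, d = haar_1d(x)
    d = [v if abs(v) >= threshold else 0.0 for v in d]
    return ihaar_1d(a, d)

def psnr(x, y, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB between reference x and test y."""
    mse = sum((a - b) ** 2 for a, b in zip(x, y)) / len(x)
    return float("inf") if mse == 0 else 10.0 * math.log10(peak * peak / mse)

x = [10.0, 12.0, 200.0, 202.0]
exact = compress(x, 0.0)   # threshold 0 keeps every detail: lossless
lossy = compress(x, 5.0)   # small details dropped: sample pairs collapse to their mean
```

The DWT-versus-contourlet comparison above amounts to swapping `haar_1d`/`ihaar_1d` for the respective 2-D transforms and plotting `psnr` against CR.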
Abstract: This paper studies the high-level modelling and design of delta-sigma (ΔΣ) noise shapers for an audio Digital-to-Analog Converter (DAC), so as to eliminate the in-band Signal-to-Noise Ratio (SNR) degradation that accompanies channel mismatch in the audio signal. The converter combines cascaded digital signal interpolation with a single-loop noise-shaping delta-sigma modulator that has a 5-bit quantizer in the final stage. To reduce sensitivity to the nonlinearities of the last-stage DAC, a high-pass second-order Data Weighted Averaging (R2DWA) scheme is introduced. The paper presents a MATLAB modelling approach for the proposed DAC architecture with low-distortion and swing-suppression integrator designs. The ΔΣ modulator can be configured as third-order and accepts 24-bit PCM at a sampling rate of 64 kHz for Digital Video Disc (DVD) audio applications. The modelled design provides 139.38 dB of dynamic range over a 32 kHz signal band at a -1.6 dBFS input signal level.
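As a minimal illustration of the noise-shaping idea only (not the paper's third-order, 5-bit design), a first-order delta-sigma modulator with a 1-bit quantizer can be sketched as follows; the integrator accumulates the quantization error so that it is pushed out of the signal band, and a DC input is recovered as the average of the output bit stream.

```python
def delta_sigma_1st(x):
    """First-order delta-sigma modulator with a 1-bit (+/-1) quantizer.
    v[n] = v[n-1] + x[n] - y[n-1];  y[n] = sign(v[n])."""
    v = 0.0
    out = []
    for sample in x:
        v += sample - (out[-1] if out else 0.0)  # integrate the error feedback
        y = 1.0 if v >= 0.0 else -1.0            # 1-bit quantizer
        out.append(y)
    return out

# A constant input of 0.5 produces a bit pattern whose average converges to 0.5.
bits = delta_sigma_1st([0.5] * 1000)
avg = sum(bits) / len(bits)
```

The paper's design extends this loop to third order with a multibit quantizer, which is what buys the large dynamic range quoted above, at the price of needing DWA to linearize the multibit DAC elements.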
Abstract: With the explosive increase in information published on the Web, researchers must filter information when searching for conference-related material. To make such searching easier, this paper uses Topic Maps and social information to implement an ontology, since an ontology can provide the formalisms and knowledge structuring needed for the comprehensive, transportable machine understanding that digital information requires. Besides enriching the information in Topic Maps, the paper proposes a method of constructing research Topic Maps that takes social information into account. First, conference data are extracted from the Web; then conference topics and the relationships between them are extracted through the proposed method; finally, the result is visualized for users to search and browse. The ontology, with its rich knowledge-hierarchy structure, helps researchers obtain useful search results. Most previous ontology-construction methods, however, did not take "people" into account, so this paper also analyzes social information, which helps researchers find possibilities for cooperation and combination as well as associations between research topics, and thereby tries to offer better results.
Abstract: This paper addresses the problem of how to improve the performance of a non-optimal filter. First, the theoretical question of a dynamical representation for a given time-correlated random process is studied. It is demonstrated that for a wide class of random processes having a canonical form there exists an equivalent dynamical system, in the sense that its output has the same covariance function. It is shown that the dynamical approach is more effective for simulating and estimating Markov and non-Markovian random processes, and is computationally less demanding, especially as the dimension of the simulated processes increases. Numerical examples and estimation problems in low-dimensional systems are given to illustrate the advantages of the approach. A very useful application of the proposed approach is shown for the problem of state estimation in very high dimensional systems: a modified filter for data assimilation in an oceanic numerical model is presented, which proves very efficient because it introduces a simple Markovian structure for the output prediction-error process and adaptively tunes some parameters of the Markov equation.
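The simplest instance of such an equivalent dynamical system is the first-order (AR(1)) shaping filter, whose output covariance decays exponentially; the sketch below gives the filter and its stationary covariance function. This only illustrates the canonical-form idea for the scalar Markov case, not the paper's oceanic data-assimilation filter.

```python
def simulate_ar1(a, noise):
    """Output of the shaping filter x[k+1] = a*x[k] + w[k], started at zero,
    driven by the supplied noise sequence w."""
    x = [0.0]
    for w in noise:
        x.append(a * x[-1] + w)
    return x[1:]

def ar1_covariance(a, q, lags):
    """Stationary covariance r(n) = q/(1 - a^2) * a^n of the AR(1) output,
    where q is the driving-noise variance and |a| < 1."""
    r0 = q / (1.0 - a * a)
    return [r0 * a ** n for n in lags]
```

Any process with covariance r(n) = c * a^n is thus reproduced by a one-state dynamical system, which is cheaper to simulate and to filter against than storing the full covariance, and this is the gain that grows with dimension.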
Abstract: Image interpolation is a common problem in imaging applications; however, most existing interpolation algorithms suffer to some extent from visible blurred edges and jagged artifacts in the image. This paper presents an adaptive, feature-preserving bidirectional flow process in which an inverse diffusion is performed to enhance edges along the normal directions to the isophote lines (edges), while a normal diffusion is performed to remove artifacts ("jaggies") along the tangent directions. To preserve image features such as edges, angles and textures, the nonlinear diffusion coefficients are adjusted locally according to the first- and second-order directional derivatives of the image. Experimental results on synthetic and natural images demonstrate that our interpolation algorithm substantially improves the subjective quality of the interpolated images over conventional interpolation.
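The locally adjusted diffusion idea can be illustrated in one dimension with a Perona-Malik-style conductivity (a simpler relative of the bidirectional flow above): large gradients get a small conductivity and are preserved, while small gradients are smoothed away. The time step and contrast parameter below are illustrative.

```python
def diffusion_step(u, dt=0.2, kappa=2.0):
    """One explicit step of nonlinear diffusion on a 1-D signal.
    The conductivity g shrinks where the local gradient is large, so
    edges survive while low-amplitude jaggies are smoothed out."""
    def g(grad):
        return 1.0 / (1.0 + (grad / kappa) ** 2)

    out = list(u)
    for i in range(1, len(u) - 1):
        right = u[i + 1] - u[i]
        left = u[i - 1] - u[i]
        out[i] = u[i] + dt * (g(abs(right)) * right + g(abs(left)) * left)
    return out
```

The bidirectional scheme of the paper goes further by making the coefficient negative (inverse diffusion) across edges, sharpening them, while keeping ordinary forward diffusion along them.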
Abstract: Edge detection is usually the first step in medical
image processing. However, the difficulty increases when a
conventional kernel-based edge detector is applied to ultrasonic
images with a textural pattern and speckle noise. We designed an
adaptive diffusion filter to remove speckle noise while preserving the
initial edges detected by using a Sobel edge detector. We also propose
a genetic algorithm for edge selection to form complete boundaries of
the detected entities. We designed two fitness functions to evaluate whether a criterion with a complex edge configuration can render a better result than a simple criterion such as the strength of the gradient.
The edges obtained by using a complex fitness function are thicker and
more fragmented than those obtained by using a simple fitness
function, suggesting that a complex edge selecting scheme is not
necessary for good edge detection in medical ultrasonic images;
instead, a proper noise-smoothing filter is the key.
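The initial edge map mentioned above comes from a standard Sobel detector; a minimal gradient-magnitude sketch is shown below on a synthetic vertical step edge. The diffusion-filter and genetic edge-selection stages of the paper are omitted.

```python
def sobel_magnitude(img):
    """Gradient magnitude from the 3x3 Sobel kernels (borders left at 0)."""
    h, w = len(img), len(img[0])
    mag = [[0.0] * w for _ in range(h)]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            gx = (img[i-1][j+1] + 2 * img[i][j+1] + img[i+1][j+1]
                  - img[i-1][j-1] - 2 * img[i][j-1] - img[i+1][j-1])
            gy = (img[i+1][j-1] + 2 * img[i+1][j] + img[i+1][j+1]
                  - img[i-1][j-1] - 2 * img[i-1][j] - img[i-1][j+1])
            mag[i][j] = (gx * gx + gy * gy) ** 0.5
    return mag

# A vertical step edge: dark left half, bright right half.
img = [[0, 0, 0, 10, 10, 10] for _ in range(4)]
mag = sobel_magnitude(img)
```

On a real ultrasonic image the same operator also fires on speckle, which is exactly why the adaptive diffusion pre-filter described above is needed before edges are selected.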
Abstract: This paper investigates the problem of automated defect
detection for textile fabrics and proposes a new optimal filter design
method to solve this problem. Gabor Wavelet Network (GWN) is
chosen as the major technique to extract the texture features from
textile fabrics. Based on the features extracted, an optimal Gabor filter
can be designed. In view of this optimal filter, a new semi-supervised
defect detection scheme is proposed, which consists of one real-valued
Gabor filter and one smoothing filter. The performance of the scheme
is evaluated by using an offline test database with 78 homogeneous
textile images. The test results exhibit accurate defect detection with
low false alarm, thus showing the effectiveness and robustness of the
proposed scheme. To evaluate the detection scheme comprehensively,
a prototyped detection system is developed to conduct a real time test.
The experimental results obtained confirm the efficiency and effectiveness of the proposed detection scheme.
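The two-stage structure named above (a real-valued Gabor filter followed by a smoothing filter) can be caricatured in one dimension: rectify the filter response, smooth it, and threshold the smoothed energy to flag defect candidates. The window length and threshold here are illustrative, not the optimally designed values of the paper.

```python
def detect_defects(response, win=3, threshold=2.0):
    """Smooth the rectified filter response with a centered moving average,
    then flag samples whose smoothed energy exceeds a threshold."""
    energy = [abs(r) for r in response]
    half = win // 2
    smooth = []
    for i in range(len(energy)):
        seg = energy[max(0, i - half):i + half + 1]
        smooth.append(sum(seg) / len(seg))
    return [s > threshold for s in smooth]
```

Because a defect disturbs the regular texture, the Gabor response deviates sharply there; smoothing suppresses isolated spikes so that only sustained deviations are flagged, keeping the false-alarm rate low.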
Abstract: Suppose KY and KX are large sets of observed and reference signals, respectively, each containing N signals. Is it possible to construct a filter F : KY → KX that requires a priori information on only a few signals, p ≪ N, from KX, yet performs better than the known filters based on a priori information on every reference signal from KX? It is shown that a positive answer is achievable under quite unrestrictive assumptions. The device behind the proposed method is a special extension of the piecewise-linear interpolation technique to the case of random signal sets. The proposed technique provides a single filter to process any signal from the arbitrarily large signal set. The filter is determined in terms of pseudo-inverse matrices, so it always exists.
Abstract: Optimal mean-square fusion formulas with scalar and matrix weights are presented, and the relationship between them is established. The fusion formulas are compared on the continuous-time filtering problem. The basic differential equation for the cross-covariance of the local errors, the key quantity for distributed fusion, is derived. It is shown that the fusion filters are effective for multi-sensor systems containing different types of sensors. An example demonstrating the reasonably good accuracy of the proposed filters is given.
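The scalar-weight case admits a one-line closed form when the local errors are uncorrelated; the cross-covariance term that the paper actually derives is deliberately omitted here for simplicity.

```python
def fuse_scalar(x1, var1, x2, var2):
    """Optimal mean-square fusion of two unbiased, uncorrelated estimates:
    each estimate is weighted inversely to its error variance."""
    w1 = var2 / (var1 + var2)
    w2 = var1 / (var1 + var2)
    fused = w1 * x1 + w2 * x2
    fused_var = var1 * var2 / (var1 + var2)  # never worse than either input
    return fused, fused_var
```

The fused variance is strictly smaller than the better of the two local variances, which is the basic reason fusing heterogeneous sensors pays off; the matrix-weight formulas generalize the same weighting to vector states.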
Abstract: In the framework of adaptive parametric modelling of images, we propose in this paper a new technique based on the Chandrasekhar fast adaptive filter for texture characterization. An autoregressive (AR) linear model of the texture is obtained by scanning the image row by row and modelling the data with an adaptive Chandrasekhar linear filter. The characterization efficiency of the obtained model is compared with that of the model adapted with the Least Mean Square (LMS) 2-D adaptive algorithm and with co-occurrence-method features. The comparison criterion is based on the computation of a characterization degree, defined as the ratio of the between-class variance to the within-class variance of the estimated coefficients. Extensive experiments show that the coefficients estimated with the Chandrasekhar adaptive filter give better texture discrimination than those estimated by the other algorithms, even in a noisy context.
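The characterization degree described above can be sketched, for a single scalar coefficient measured over two texture classes, as a between-class over within-class variance ratio (a Fisher-style criterion; the paper's multi-class, multi-coefficient version may be normalized differently).

```python
def characterization_degree(class_a, class_b):
    """Ratio of between-class to within-class variance for one scalar
    model coefficient measured over two texture classes."""
    def mean(v):
        return sum(v) / len(v)

    def var(v):
        m = mean(v)
        return sum((x - m) ** 2 for x in v) / len(v)

    n_a, n_b = len(class_a), len(class_b)
    grand = mean(class_a + class_b)
    between = (n_a * (mean(class_a) - grand) ** 2
               + n_b * (mean(class_b) - grand) ** 2) / (n_a + n_b)
    within = (n_a * var(class_a) + n_b * var(class_b)) / (n_a + n_b)
    return between / within
```

A larger ratio means the estimated coefficients separate the texture classes more cleanly, which is how the Chandrasekhar-filter coefficients are ranked against the LMS and co-occurrence features.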
Abstract: The monitored 3-Dimensional (3D) video experience can be utilized as feedback information for fine-tuning service parameters, so as to provide a better service to demanding 3D service customers. The 3D video experience, which includes both video quality and depth perception, is influenced by several contextual and content-related factors (e.g., ambient illumination and content characteristics) owing to the complex nature of 3D video; therefore, the factors that affect this experience should be exploited when assessing it. In this paper, the structural information of the depth-map sequences of the 3D video is considered as a content-related factor that affects depth-perception assessment. A cartoon-like filter is utilized to abstract the significant depth levels in the depth-map sequences and so determine the structural information. Moreover, subjective experiments are conducted with 3D videos associated with cartoon-like depth-map sequences to investigate the effect of ambient illumination, a contextual factor, on depth perception. Using the knowledge gained through this study, 3D video experience metrics can be developed to deliver a better service to 3D video service users.
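The "cartoon-like filter" above presumably performs an edge-preserving abstraction of the depth map; the sketch below caricatures only its level-flattening aspect by snapping each depth value to the nearest of a few significant levels, so fine variation disappears while the coarse depth structure survives. The level set is an illustrative assumption.

```python
def abstract_depth_levels(depth, levels):
    """Map each depth value to the nearest of a few significant levels,
    flattening fine variation while keeping the coarse depth structure."""
    return [min(levels, key=lambda lv: abs(d - lv)) for d in depth]
```

Applied frame by frame to a depth-map sequence, the surviving level boundaries give the structural information used in the depth-perception assessment described above.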