Abstract: With the rapid growth in business size, today's businesses are orienting toward electronic technologies. Amazon.com and eBay.com are some of the major stakeholders in this regard. Unfortunately, the enormous volume of largely unstructured data on the web, even for a single commodity, has become a cause of ambiguity for consumers. Extracting valuable information from such ever-increasing data is an extremely tedious task and is fast becoming critical to the success of businesses. Web content mining can play a major role in solving these issues. It involves using efficient algorithmic techniques to search and retrieve the desired information from the seemingly unsearchable mass of unstructured data on the Internet. Applications of web content mining can be very promising in the areas of customer relations modeling, billing records, logistics investigations, product cataloguing, and quality management. In this paper we present a review of some very interesting, efficient, yet implementable techniques from the field of web content mining and study their impact in areas specific to business user needs, focusing on the customer as well as the producer. The techniques reviewed include mining by developing a knowledge-base repository of the domain, iterative refinement of user queries for personalized search, using a graph-based approach for the development of a web crawler, and filtering information for personalized search using website captions. These techniques have been analyzed and compared on the basis of their execution time and the relevance of the results they produce for a particular search.
Abstract: A hybrid-feature-based adaptive particle filter algorithm is presented for object tracking in real scenarios with a static camera. The hybrid feature combines two effective features: the Grayscale Arranging Pairs (GAP) feature and the color histogram feature. The GAP feature has high discriminative ability even under conditions of severe illumination variation and dynamic background elements, while the color histogram feature has high reliability for identifying detected objects. The combination of the two features compensates for the shortcomings of each single feature. Furthermore, we adopt an updating target model so that external problems such as changes in visual angle can be handled well. An automatic initialization algorithm is introduced which provides precise initial positions of objects. The experimental results show the good performance of the proposed method.
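The bootstrap particle filter underlying such trackers can be sketched in a few lines. The following is a minimal 1-D illustration with a Gaussian observation likelihood; it omits the paper's hybrid GAP/color-histogram weighting and automatic initialization, and all names and parameters are our own assumptions:

```python
import math
import random

random.seed(0)

def particle_filter_1d(observations, n_particles=500, motion_std=1.0, obs_std=2.0):
    """Minimal bootstrap particle filter for a 1-D position."""
    # Initialize particles around the first observation.
    particles = [random.gauss(observations[0], obs_std) for _ in range(n_particles)]
    estimates = []
    for z in observations:
        # Predict: diffuse particles with a random-walk motion model.
        particles = [p + random.gauss(0.0, motion_std) for p in particles]
        # Update: weight particles by the Gaussian observation likelihood.
        weights = [math.exp(-((p - z) ** 2) / (2 * obs_std ** 2)) for p in particles]
        total = sum(weights)
        weights = [w / total for w in weights]
        # Estimate: posterior mean; then resample to avoid degeneracy.
        estimates.append(sum(p * w for p, w in zip(particles, weights)))
        particles = random.choices(particles, weights=weights, k=n_particles)
    return estimates
```

In the paper's setting, the scalar likelihood would be replaced by a weight combining GAP similarity and color-histogram similarity over the image patch under each particle.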
Abstract: We study in this paper the effect of scene changes on an image sequence coding system using the Embedded Zerotree Wavelet (EZW). The scene change considered here is the full motion that may occur. A special image sequence is generated in which scene changes occur randomly. Two scenarios are considered. In the first scenario, the system must provide the best possible reconstruction quality by managing the bit rate (BR) when a scene change occurs. In the second scenario, the system must keep the bit rate as constant as possible by managing the reconstruction quality. The first scenario may be motivated by the availability of a wideband transmission channel, where an increase of the bit rate is possible to keep the reconstruction quality above a given threshold. The second scenario concerns narrowband transmission channels, where an increase of the bit rate is not possible. In the latter case, applications for which the reconstruction quality is not a constraint may be considered. The simulations are performed with a five-scale wavelet decomposition using the biorthogonal 9/7-tap filter bank. The entropy coding is performed using a specifically defined binary codebook and the EZW algorithm. Experimental results are presented and compared to LEAD H263 EVAL. It is shown that if the reconstruction quality is the constraint, the system increases the bit rate to obtain the required quality. In the case where the bit rate must be constant, the system is unable to provide the required quality when a scene change occurs; however, the system is able to improve the quality once the scene change has passed.
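EZW operates on a multiscale wavelet decomposition of each frame. As a self-contained illustration (using the simple Haar wavelet rather than the biorthogonal 9/7-tap filter bank of the paper), one decomposition level splits a signal into a coarse approximation and detail coefficients, and the step is exactly invertible:

```python
def haar_dwt(signal):
    """One level of the Haar wavelet transform: (approximation, detail)."""
    approx = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return approx, detail

def haar_idwt(approx, detail):
    """Inverse of the transform above; reconstruction is exact."""
    out = []
    for a, d in zip(approx, detail):
        out.extend([a + d, a - d])
    return out
```

EZW then scans such coefficients from coarse to fine scales, coding zerotrees of insignificant coefficients, which is what allows the bit rate to be traded against reconstruction quality.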
Abstract: A new concept of a two-dimensional (2D) image processing implementation for an auto-guiding system is presented in this paper. It is dedicated to astrophotography and operates with astronomy CCD guide cameras or with self-guided dual-detector CCD cameras and ST4-compatible equatorial mounts. The idea was verified with a MATLAB model, which was used to test all procedures and data conversions. Next, the circuit prototype was implemented on an Altera MAX II CPLD device and tested on real astronomical object images. The digital processing speed of the CPLD prototype board was sufficient for correct equatorial mount guiding in a real-time system.
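The core 2D processing step of an auto-guider is locating the guide star and converting its drift into a mount correction. A minimal sketch of that step (our own illustration, not the CPLD implementation):

```python
def star_centroid(image, threshold):
    """Intensity-weighted centroid (x, y) of pixels above a threshold."""
    sx = sy = total = 0.0
    for y, row in enumerate(image):
        for x, v in enumerate(row):
            if v > threshold:
                sx += x * v
                sy += y * v
                total += v
    return (sx / total, sy / total)

def guide_correction(centroid, reference):
    """Error vector the mount must null out (e.g. via ST4 pulses)."""
    return (reference[0] - centroid[0], reference[1] - centroid[1])
```

In a real guider the error vector would be scaled and translated into RA/Dec correction pulse durations for the ST4 port.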
Abstract: We have previously introduced an ultrasonic imaging
approach that combines harmonic-sensitive pulse sequences with a
post-beamforming quadratic kernel derived from a second-order
Volterra filter (SOVF). This approach is designed to produce images
with high sensitivity to nonlinear oscillations from microbubble
ultrasound contrast agents (UCA) while maintaining high levels of
noise rejection. In this paper, a two-step algorithm is presented for computing the coefficients of the quadratic kernel that reduces the tissue component introduced by motion, maximizes noise rejection, and increases specificity while optimizing sensitivity to the UCA. In the first step, quadratic kernels from individual singular modes of the PI data matrix are compared in terms of their ability to maximize the contrast-to-tissue ratio (CTR). In the second step, the quadratic kernels resulting in the highest CTR values are
convolved. The imaging results indicate that a signal processing
approach to this clinical challenge is feasible.
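The post-beamforming quadratic kernel is an instance of a discrete second-order Volterra filter. A generic sketch of such a filter follows (the paper's actual kernel coefficients come from singular modes of the PI data matrix and are not reproduced here):

```python
def volterra2(x, h1, h2):
    """Second-order Volterra filter:
    y[n] = sum_i h1[i]*x[n-i] + sum_i sum_j h2[i][j]*x[n-i]*x[n-j]."""
    m = len(h1)
    y = []
    for n in range(len(x)):
        # Delayed samples x[n], x[n-1], ..., zero-padded before the start.
        d = [x[n - i] if n - i >= 0 else 0.0 for i in range(m)]
        lin = sum(h1[i] * d[i] for i in range(m))
        quad = sum(h2[i][j] * d[i] * d[j] for i in range(m) for j in range(m))
        y.append(lin + quad)
    return y
```

The quadratic term is what makes the filter sensitive to products of frequency components, i.e. the nonlinear harmonic content generated by the microbubbles.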
Abstract: This paper describes a low-voltage and low-power
channel selection analog front end with continuous-time low pass
filters and highly linear programmable gain amplifier (PGA). The
filters were realized as balanced Gm-C biquadratic filters to achieve a
low current consumption. High linearity and a constant wide
bandwidth are achieved by using a new transconductance (Gm) cell.
The PGA has a voltage gain varying from 0 to 65 dB, while
maintaining a constant bandwidth. A filter tuning circuit that requires
an accurate time base but no external components is presented.
With a 1-Vrms differential input and output, the filter achieves
-85 dB THD and a 78 dB signal-to-noise ratio. Both the filter and PGA
were implemented in a 0.18 µm 1P6M n-well CMOS process. They
consume 3.2 mW from a 1.8 V power supply and occupy an area of
0.19 mm².
Abstract: In this paper, a new approach for target recognition is introduced, based on the Empirical Mode Decomposition (EMD) algorithm of Huang et al. [11] and the energy tracking operator of Teager [13]-[14]. The conjunction of these two methods is called Teager-Huang analysis. This approach is well suited for nonstationary signal analysis. The impulse response (IR) of the target is first band-pass filtered into subsignals (components) called Intrinsic Mode Functions (IMFs) with well-defined instantaneous frequency (IF) and instantaneous amplitude (IA). Each IMF is a zero-mean AM-FM component. In the second step, the energy of each IMF is tracked using the Teager energy operator (TEO). The IF and IA, useful for describing the time-varying characteristics of the signal, are estimated using the Energy Separation Algorithm (ESA) of Maragos et al. [16]-[17]. In the third step, a set of features such as skewness and kurtosis is extracted from the IF, IA, and IMF energy functions. The Teager-Huang analysis is tested on a set of synthetic IRs of sonar targets with different physical characteristics (density, velocity, shape, etc.). PCA is first applied to the features to discriminate between manufactured and natural targets. The manufactured patterns are then classified into spheres and cylinders. One hundred percent correct recognition is achieved with twenty-three echoes, where sixteen IRs, used for training, are noise-free and seven IRs, used for the testing phase, are corrupted with white Gaussian noise.
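The Teager energy operator and the DESA-2 energy separation estimates used in such an analysis are short enough to state directly. A sketch under our own naming; for a pure tone x[n] = A·cos(Ωn) the operator returns exactly A²·sin²(Ω):

```python
import math

def teager(x):
    """Discrete Teager energy: psi[n] = x[n]^2 - x[n-1] * x[n+1]."""
    return [x[n] ** 2 - x[n - 1] * x[n + 1] for n in range(1, len(x) - 1)]

def desa2(x, n):
    """DESA-2 estimates of instantaneous frequency and amplitude at n."""
    psi_x = x[n] ** 2 - x[n - 1] * x[n + 1]
    # Teager energy of the symmetric difference y[n] = x[n+1] - x[n-1].
    psi_y = (x[n + 1] - x[n - 1]) ** 2 - (x[n] - x[n - 2]) * (x[n + 2] - x[n])
    omega = 0.5 * math.acos(1.0 - psi_y / (2.0 * psi_x))
    amp = 2.0 * psi_x / math.sqrt(psi_y)
    return omega, amp
```

Applied to each IMF, these estimates yield the IF and IA trajectories from which features such as skewness and kurtosis can then be computed.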
Abstract: The behavior of Radial Basis Function (RBF) networks greatly depends on how the center points of the basis functions are selected. In this work we investigate the use of instance reduction techniques, originally developed to reduce the storage requirements of instance-based learners, for this purpose. Five instance-based reduction techniques were used to determine the set of center points, and RBF networks were trained using these sets of centers. The performance of the RBF networks is studied in terms of classification accuracy and training time. The results obtained were compared with two reference Radial Basis Function networks: RBF networks that use all instances of the training set as center points (RBF-ALL) and Probabilistic Neural Networks (PNN). The former achieves high classification accuracies and the latter requires shorter training times. Results showed that RBF networks trained using sets of centers located by noise-filtering techniques (ALLKNN and ENN), rather than pure reduction techniques, produce the best results in terms of classification accuracy. These networks require shorter training times than RBF-ALL and achieve higher classification accuracy than PNN. Thus, using ALLKNN and ENN to select center points gives a better combination of classification accuracy and training time. Our experiments also show that using the reduced sets to train the networks is beneficial, especially in the presence of noise in the original training sets.
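ENN, one of the noise-filtering techniques named above, is simple to sketch: an instance is discarded when the majority of its k nearest neighbours disagree with its label. A minimal version (our own illustration; the retained instances would then serve as RBF center points):

```python
def enn_filter(X, y, k=3):
    """Edited Nearest Neighbor: keep indices of instances whose label
    agrees with the majority vote of their k nearest neighbours."""
    def dist(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    keep = []
    for i in range(len(X)):
        # k nearest neighbours of X[i], excluding itself.
        nbrs = sorted((j for j in range(len(X)) if j != i),
                      key=lambda j: dist(X[i], X[j]))[:k]
        votes = [y[j] for j in nbrs]
        majority = max(set(votes), key=votes.count)
        if majority == y[i]:
            keep.append(i)
    return keep
```

Because mislabeled or noisy instances are exactly the ones their neighbourhoods contradict, the surviving set makes a cleaner pool of basis-function centers.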
Abstract: Intelligent Video-Surveillance (IVS) systems are becoming more and more popular in security applications. The analysis and recognition of abnormal behaviours in a video sequence has gradually drawn attention in the field of IVS, since it allows filtering out a large amount of useless information, which ensures high efficiency in security protection and saves considerable human and material resources. We present in this paper ADABeV, an intelligent video-surveillance framework for event recognition in crowded scenes to detect abnormal human behaviour. This framework is intended to achieve real-time alarming, reducing the lags in traditional monitoring systems. The proposed architecture addresses four main challenges: behaviour understanding in crowded scenes, hard lighting conditions, multiple kinds of input sensors, and contextual-based adaptability to recognize the active context of the scene.
Abstract: In view of stricter drinking water regulations in the future, reducing humic acid and disinfection byproducts in raw water, namely trihalomethanes (THMs) and haloacetic acids (HAAs), merits research. To investigate the removal of waterborne organic material using a lab-scale bio-activated carbon filter under different empty bed contact times (EBCTs), humic acid concentrations of 0.01, 0.03, 0.06, 0.12, 0.17, 0.23, and 0.29 mg/L were prepared. We then conducted experiments using an in-field pilot plant with serially connected bio-activated carbon filters and hollow-fiber membrane processes of the kind employed in traditional water purification plants. Results showed that under low influent TOC conditions of humic acid (0.69 to 1.03 mg TOC/L) with EBCTs of 30 min, 40 min, and 50 min, the TOC removal rate increases with greater EBCT, attaining about 39%. The removal rates of THMs and HAAs by the bio-activated carbon filter (BACF) were 54.8% and 89.0%, respectively.
Abstract: An experiment was conducted using two aeration methods (water-into-air and air-into-water) followed by filtration processes using manganese greensand material. The properties of the groundwater, such as pH, dissolved oxygen, turbidity, and heavy metal concentrations (iron and manganese), were assessed. The objectives of this study are i) to determine the more effective aeration method and ii) to assess the effectiveness of manganese greensand as a filter medium for removing iron and manganese from groundwater. Results showed that the final pH for all samples after treatment was in the range from 7.40 to 8.40. Both aeration methods increased the dissolved oxygen content. Final turbidity for the groundwater samples was between 3 NTU and 29 NTU. Only three out of eight samples achieved an iron concentration of 0.3 mg/L or less, while all samples reached a manganese concentration of 0.1 mg/L or less. The air-into-water aeration method gives a higher percentage of iron and manganese removal than the water-into-air method.
Abstract: A performance control law is studied for an interconnected fractional-order nonlinear system. Applying a backstepping algorithm, a backstepping sliding mode controller (BSMC) is developed for the fractional-order nonlinear system. To improve control law performance, the BSMC is coupled to an adaptive sliding mode observer that uses a filtered error as its sliding surface. The performance of both architectures is studied on an inverted pendulum mounted on a cart. Simulation results show that the BSMC coupled to the adaptive sliding mode observer yields a stable control law with a more acceptable control amplitude than the BSMC alone.
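As a much-simplified illustration of the sliding mode idea (an integer-order double integrator with a classical signum control law, not the fractional-order backstepping design of the paper; all parameters are our own):

```python
def simulate_smc(x0, v0, k=5.0, c=1.0, dt=0.001, steps=20000):
    """Sliding-mode control of a double integrator x'' = u.

    Sliding surface s = c*x + v; control u = -k*sign(s) drives s to 0,
    after which the state decays along the surface via x' = -c*x.
    """
    x, v = x0, v0
    for _ in range(steps):
        s = c * x + v
        u = -k * (1.0 if s > 0 else -1.0 if s < 0 else 0.0)
        # Explicit Euler integration of the plant.
        x += v * dt
        v += u * dt
    return x, v
```

The discontinuous signum term is what produces the chattering that observer-based refinements, such as the filtered-error sliding surface above, aim to reduce.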
Abstract: In this paper a class of analog algorithms based on the concept of the Cellular Neural Network (CNN) is applied to processing operations on an important class of medical images, namely retina images, for detecting various symptoms connected with diabetic retinopathy. Specific processing tasks such as morphological operations, linear filtering, and thresholding are proposed; the corresponding template values are given, and simulations on real retina images are provided.
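In feedforward form, applying one CNN template reduces to a convolution with the control template B plus a bias, followed by a hard output nonlinearity. A toy sketch with a hypothetical edge-extraction template (the paper's actual template values and the feedback dynamics are not reproduced here):

```python
def apply_template(image, b, bias):
    """Convolve with a 3x3 control template b, add bias, threshold to +/-1."""
    h, w = len(image), len(image[0])
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            acc = bias
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:
                        acc += b[dy + 1][dx + 1] * image[yy][xx]
            # Hard-limit the cell output, as in the CNN standard nonlinearity.
            row.append(1 if acc > 0 else -1)
        out.append(row)
    return out
```

Morphological operations and thresholding on the retina images can be expressed in the same template-plus-nonlinearity form, which is what makes the analog CNN implementation attractive.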
Abstract: Background noise is particularly damaging to speech intelligibility for people with hearing loss, especially for sensorineural loss patients. Several investigations of speech intelligibility have demonstrated that sensorineural loss patients need a 5-15 dB higher SNR than normal-hearing subjects. This paper describes a Discrete Cosine Transform Power-Normalized Least Mean Square (DCT-LMS) algorithm to improve the SNR and the convergence rate of the LMS for sensorineural loss patients. Since it requires only real arithmetic, it establishes a faster convergence rate compared to the time-domain LMS, and the transformation improves the eigenvalue distribution of the input autocorrelation matrix of the LMS filter. The DCT has good orthonormality, separability, and energy compaction properties. Although the DCT does not separate frequencies, it is a powerful signal decorrelator. It is a real-valued function and thus can be used effectively in real-time operation. The advantages of DCT-LMS over the standard LMS algorithm are shown via SNR and eigenvalue ratio computations. Exploiting the symmetry of the basis functions, the DCT transform matrix [AN] can be factored into a series of ±1 butterflies and rotation angles. This factorization results in one of the fastest DCT implementations. There are different ways to obtain such factorizations; this work uses the fast factored DCT algorithm developed by Chen and colleagues. The computer simulation results show the superior convergence characteristics of the proposed algorithm: the SNR improves by at least 10 dB for input SNRs less than or equal to 0 dB, with faster convergence speed and better time and frequency characteristics.
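For reference, the baseline time-domain LMS recursion that DCT-LMS accelerates is only a few lines. A sketch identifying an unknown 2-tap FIR channel (all names and parameters are our own; the DCT and its power normalization are not included):

```python
import random

random.seed(1)

def lms(x, d, n_taps=4, mu=0.05):
    """Standard LMS: adapt weights w to minimize e[n] = d[n] - w . x_vec."""
    w = [0.0] * n_taps
    errors = []
    for n in range(len(x)):
        # Tapped delay line, zero-padded before the start of the signal.
        xv = [x[n - i] if n - i >= 0 else 0.0 for i in range(n_taps)]
        yhat = sum(wi * xi for wi, xi in zip(w, xv))
        e = d[n] - yhat
        errors.append(e)
        # Stochastic-gradient weight update.
        w = [wi + mu * e * xi for wi, xi in zip(w, xv)]
    return w, errors
```

DCT-LMS applies the same update in the transform domain with per-bin power normalization, which whitens the input and therefore equalizes the convergence modes that a spread eigenvalue distribution would otherwise slow down.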
Abstract: The error diffusion method generates worm artifacts and weakens the edges of the halftone image when a continuous gray-scale image is reproduced as a binary image. First, to enhance the edges, we propose an edge-enhancing filter that considers the quantization error information and the gradient of the neighboring pixels. Furthermore, to remove the worm artifacts that often appear in a halftone image, we adaptively add random noise to the weights of the error filter.
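For context, the classical Floyd-Steinberg error diffusion that such methods modify can be sketched as follows (our own minimal version; the proposed edge-enhancing filter and adaptive noise are not included):

```python
def floyd_steinberg(image):
    """Binary halftone via Floyd-Steinberg error diffusion (values in 0..1)."""
    h, w = len(image), len(image[0])
    img = [row[:] for row in image]  # work on a copy
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            old = img[y][x]
            new = 1 if old >= 0.5 else 0
            out[y][x] = new
            err = old - new
            # Diffuse the quantization error to unprocessed neighbours
            # with the classical 7/16, 3/16, 5/16, 1/16 weights.
            for dx, dy, wgt in ((1, 0, 7 / 16), (-1, 1, 3 / 16),
                                (0, 1, 5 / 16), (1, 1, 1 / 16)):
                xx, yy = x + dx, y + dy
                if 0 <= xx < w and 0 <= yy < h:
                    img[yy][xx] += err * wgt
    return out
```

The worm artifacts arise from the fixed error weights and raster scan order; perturbing the weights with adaptive noise, as proposed above, breaks up these correlated structures.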
Abstract: Single sideband (SSB) modulation is a widespread technique in communication, with significant impact on communication technologies such as DSL modems and ATSC TV. Its widespread utilization is due to its bandwidth- and power-saving characteristics. In this paper, we present a new scheme for SSB signal generation which is cost-efficient and enjoys superior characteristics in terms of frequency stability, selectivity, and robustness to noise. In the process, we develop novel Hilbert transform properties.
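The classical phasing method that underlies Hilbert-transform SSB generation is easy to verify for a single tone: s(t) = m(t)cos(2πf_c t) − m̂(t)sin(2πf_c t), where m̂ is the Hilbert transform of m. For m(t) = cos(2πf_m t), m̂ = sin(2πf_m t) and the result collapses to the upper sideband alone. A sketch (our illustration; the paper's new generation scheme is not reproduced):

```python
import math

def ssb_upper(t, fc, fm):
    """Phasing-method SSB for a single tone m(t) = cos(2*pi*fm*t).

    Since the Hilbert transform of cos is sin,
    s(t) = m(t)cos(wc t) - m_hat(t)sin(wc t) = cos((wc + wm)t):
    only the upper sideband remains.
    """
    m = math.cos(2 * math.pi * fm * t)
    m_hat = math.sin(2 * math.pi * fm * t)  # Hilbert transform of m
    return (m * math.cos(2 * math.pi * fc * t)
            - m_hat * math.sin(2 * math.pi * fc * t))
```

The bandwidth saving is visible directly: the lower sideband at f_c − f_m is cancelled exactly, halving the occupied spectrum relative to double-sideband AM.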
Abstract: Biological treatment of secondary effluent wastewater by two combined denitrification/oxic filtration systems packed with Lock-type media (denitrification filter) and ceramic balls (oxic filter) has been studied for 5 months. Two phases of operating conditions were carried out, with influent nitrate and ammonia concentrations varying from 5.8 to 11.7 mg/L and 5.4 to 12.4 mg/L, respectively. The denitrification/oxic filter treatment system was operated under an EBCT (Empty Bed Contact Time) of 4 h at system recirculation ratios ranging from 0 to 300% (linear velocity increased from 19.5 m/d to 78 m/d). The system efficiencies of denitrification and nitrification were each over 95%. Total nitrogen and COD removal ranged from 54.6% (recirculation 0%) to 92.3% (recirculation 300%) and from 10% to 62.5%, respectively.
Abstract: A theory for optimal filtering of infinite sets of random
signals is presented. There are several new distinctive features of the
proposed approach. First, a single optimal filter for processing any
signal from a given infinite signal set is provided. Second, the filter is
presented in the special form of a sum with p terms where each term
is represented as a combination of three operations. Each operation
is a special stage of the filtering aimed at facilitating the associated
numerical work. Third, an iterative scheme is incorporated into the
filter structure to improve the filter performance at each step of the
scheme. The final step of the scheme concerns signal
compression and decompression. This step is based on the solution of
a new rank-constrained matrix approximation problem. The solution
to the matrix problem is described in this paper. A rigorous error
analysis is given for the new filter.
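The abstract does not state the new rank-constrained problem explicitly, but the classical special case any such solution must generalize is the Eckart-Young theorem: for a matrix $A$ with singular value decomposition $A=\sum_i \sigma_i u_i v_i^{T}$ (with $\sigma_1\ge\sigma_2\ge\cdots$), the best approximation of rank at most $r$ in the Frobenius norm is the truncated sum:

```latex
\min_{\operatorname{rank}(X)\le r}\|A-X\|_F
  \;=\; \Big\|A-\sum_{i=1}^{r}\sigma_i u_i v_i^{T}\Big\|_F
  \;=\; \Big(\sum_{i>r}\sigma_i^{2}\Big)^{1/2}.
```

Truncating to $r$ terms is what makes the compression step lossy, with the discarded singular values quantifying the reconstruction error.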
Abstract: In this paper, an extended study is performed on the effect of different factors on the quality of vector data, building on a previous study. For the noise factor, a kind of noise that appears in document images, namely Gaussian noise, is studied, whereas the previous study involved only salt-and-pepper noise. Both high and low levels of noise are studied. For the noise cleaning methods, algorithms that were not covered in the previous study are used, namely median filters and their variants. For the vectorization factor, one of the best available commercial raster-to-vector software packages, namely VPstudio, is used to convert raster images into vector format. The performance of line detection is judged using an objective performance evaluation method. The output of the performance evaluation is then analyzed statistically to highlight the factors that affect vector quality.
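The median filter at the heart of the cleaning methods above can be sketched in a few lines (a minimal grayscale version with window clamping at the image borders; our own illustration):

```python
def median_filter(image, size=3):
    """Apply a size x size median filter; strong against salt-and-pepper noise."""
    h, w = len(image), len(image[0])
    r = size // 2
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Collect the neighbourhood, clamped to the image bounds.
            window = [image[yy][xx]
                      for yy in range(max(0, y - r), min(h, y + r + 1))
                      for xx in range(max(0, x - r), min(w, x + r + 1))]
            window.sort()
            out[y][x] = window[len(window) // 2]
    return out
```

Because the median discards outliers rather than averaging them in, isolated impulse pixels vanish while step edges, critical for subsequent line detection, are largely preserved; its variants trade this behaviour against performance on Gaussian noise.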
Abstract: The empirical mode decomposition (EMD) decomposes any time series into a finite set of basis functions. The bases are termed intrinsic mode functions (IMFs), which are mutually orthogonal and contain a minimum amount of cross-information. The EMD successively extracts the IMFs with the highest local frequencies in a recursive way, which effectively yields a set of low-pass filters based entirely on the properties exhibited by the data. In this paper, EMD is applied to explore the properties of multi-year air temperature records and to observe their effects on climate change under global warming. The method decomposes the original time series into intrinsic time scales. It is capable of analyzing the nonlinear, non-stationary climatic time series that cause problems for many linear statistical methods and their users. The analysis results show that the modes of the EMD present seasonal variability. Most of the IMFs have a normal distribution, and the energy density distribution of the IMFs follows a chi-square distribution. The IMFs are effective in isolating physical processes of various time scales and are also statistically significant. The results also show that the EMD method does a good job of revealing many characteristics of interannual climate. They suggest that climate fluctuations of every single element, such as temperature, are the result of variations in the global atmospheric circulation.