Abstract: This paper addresses the problem of asymptotic tracking
control of a linear parabolic partial differential equation with in-domain point actuation. As the considered model is a non-standard partial differential equation, we first develop a map that transforms this problem into a standard boundary control problem to which existing infinite-dimensional system control methods can be applied. Then, a combination of energy multiplier and differential flatness methods is used to design an asymptotic tracking controller.
This control scheme consists of a stabilizing state feedback derived from the energy multiplier method and a feed-forward control based on the flatness property of the system. This approach represents
a systematic procedure to design tracking control laws for a class
of partial differential equations with in-domain point actuation. The
applicability and system performance are assessed by simulation
studies.
Abstract: Association rule mining is an important problem in data mining. The massively increasing volume of data in real-life databases has motivated researchers to design novel and incremental algorithms for association rule mining. In this paper, we propose an incremental association rule mining algorithm that integrates a shocking interestingness criterion during the process of building the model. A new interestingness measure, called the shocking measure, is introduced. One
of the main features of the proposed approach is that it captures the user's background knowledge, which is monotonically augmented. An incremental model that reflects the changing data and the user's beliefs is attractive for making the overall KDD process more effective and efficient. We implemented the proposed approach, evaluated it on several public datasets, and found the results quite promising.
Abstract: This paper presents an evaluation of a wavelet-based digital watermarking technique used to estimate the quality of video sequences transmitted over an Additive White Gaussian Noise (AWGN) channel in terms of a classical objective metric, the Peak Signal-to-Noise Ratio (PSNR), without the need for the original video. In this method, a watermark is embedded into the Discrete
Wavelet Transform (DWT) domain of the original video frames
using a quantization method. The degradation of the extracted
watermark can be used to estimate the video quality in terms of
PSNR with good accuracy. We calculated PSNR for video frames
contaminated with AWGN and compared the values with those
estimated using the DWT-based watermarking approach. The calculated and estimated quality measures of the video frames are found to be highly correlated, suggesting that this method can provide a good quality measure for video frames transmitted over an AWGN channel without the need for the original video.
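PSNR, the metric this abstract estimates from the degraded watermark, has a standard definition; as a minimal pure-Python sketch (the function name and the flat-list frame representation are our own illustrative choices, not the paper's implementation):

```python
import math

def psnr(original, degraded, max_val=255.0):
    """Peak Signal-to-Noise Ratio between two equally sized frames,
    each given as a flat list of pixel intensities."""
    mse = sum((a - b) ** 2 for a, b in zip(original, degraded)) / len(original)
    if mse == 0:
        return float("inf")  # identical frames
    return 10.0 * math.log10(max_val ** 2 / mse)
```

Higher PSNR means less distortion; identical frames give an infinite PSNR by convention.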
Abstract: In H.264/AVC video encoding, rate-distortion optimization for mode selection plays a significant role in achieving outstanding compression efficiency and video quality. However, the mode selection process also makes encoding extremely complex, especially the computation of the rate-distortion cost function, which includes the sum of squared differences (SSD) between the original and reconstructed image blocks and context-based entropy coding of the block. In this
paper, a transform-domain rate-distortion optimization accelerator
based on a fast SSD (FSSD) and a VLC-based rate estimation algorithm is proposed. This algorithm significantly simplifies the hardware architecture for the rate-distortion cost computation with only negligible performance degradation. An efficient hardware structure
for implementing the proposed transform-domain rate-distortion
optimization accelerator is also proposed. Simulation results demonstrate that the proposed algorithm reduces total encoding time by about 47% with negligible degradation of coding performance. The proposed method can be easily applied to many mobile video applications such as digital cameras and DMB (Digital Multimedia Broadcasting) phones.
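The paper's FSSD algorithm itself is not reproduced here, but the transform-domain idea it relies on can be illustrated: for an orthonormal transform, Parseval's relation makes the SSD between coefficient vectors equal the pixel-domain SSD, so the distortion term can be evaluated without an inverse transform. A hedged pure-Python sketch, where an orthonormal 1-D DCT-II stands in for the H.264 integer transform (which is only orthogonal up to scaling):

```python
import math

def dct(x):
    """Orthonormal DCT-II of a 1-D signal (pure-Python illustration)."""
    N = len(x)
    out = []
    for k in range(N):
        a = math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N)
        out.append(a * sum(x[n] * math.cos(math.pi * (2 * n + 1) * k / (2 * N))
                           for n in range(N)))
    return out

def ssd(a, b):
    """Sum of squared differences between two equal-length vectors."""
    return sum((u - v) ** 2 for u, v in zip(a, b))
```

Because the transform is orthonormal, `ssd(x, y) == ssd(dct(x), dct(y))` up to rounding, which is what lets the cost be computed in the transform domain.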
Abstract: A generalized Dirichlet to Neumann map is
one of the main aspects characterizing a recently introduced
method for analyzing linear elliptic PDEs, through which it
became possible to couple known and unknown components
of the solution on the boundary of the domain without
solving on its interior. For its numerical solution, a well-conditioned, quadratically convergent sine-Collocation method was developed, which yields a linear system of equations whose coefficient matrix has point-diagonal diagonal blocks. This structural property, among others, motivated the use of iterative methods for its solution. In this work we present a conclusive numerical
study for the behavior of classical (Jacobi and Gauss-Seidel)
and Krylov subspace (GMRES and Bi-CGSTAB) iterative
methods when they are applied to the solution of the Dirichlet to Neumann map associated with Laplace's equation on regular polygons with the same boundary conditions on all edges.
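Of the classical methods studied, Jacobi iteration is the simplest; a minimal pure-Python sketch on a small diagonally dominant system (illustrative only, not the paper's collocation matrix) is:

```python
def jacobi(A, b, iters=100):
    """Plain Jacobi iteration for A x = b, with A given as a list of rows.
    Each sweep updates every unknown from the previous iterate only."""
    n = len(b)
    x = [0.0] * n
    for _ in range(iters):
        x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
             for i in range(n)]
    return x
```

Convergence is guaranteed here because the example matrix is strictly diagonally dominant; Gauss-Seidel differs only in using updated components within a sweep.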
Abstract: A simple algorithm is presented for the fast calculation of the kernel functions required in fluid simulations using the Smoothed Particle Hydrodynamics (SPH) method. The proposed algorithm improves on the linked-list algorithm and adopts the pair-wise interaction technique, both of which are widely used for evaluating kernel functions in SPH fluid simulations. The algorithm is easy to implement without any programming complexity. Benchmark examples are used to show the simulation time saved by the proposed algorithm. Parametric studies on the number of sub-domain divisions, the smoothing length and the total number of particles are conducted to show the effectiveness of the present technique. A compact formulation is proposed for practical use.
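The paper's exact improvement is not given in the abstract, but the underlying cell linked-list idea can be sketched: bin particles into cells the size of the smoothing length, so each particle is tested only against its own and adjacent cells rather than all others. A hedged 2-D pure-Python illustration (function name and data layout are our own):

```python
from collections import defaultdict
from itertools import product

def neighbour_pairs(points, h):
    """All particle pairs closer than the smoothing length h, found via a
    cell linked-list: bin points into cells of side h, then test only the
    particles in each cell's 3x3 (2-D) neighbourhood."""
    cells = defaultdict(list)
    for i, (x, y) in enumerate(points):
        cells[(int(x // h), int(y // h))].append(i)
    pairs = set()
    for (cx, cy), members in cells.items():
        for dx, dy in product((-1, 0, 1), repeat=2):
            for j in cells.get((cx + dx, cy + dy), []):
                for i in members:
                    if i < j:
                        x1, y1 = points[i]
                        x2, y2 = points[j]
                        if (x1 - x2) ** 2 + (y1 - y2) ** 2 < h * h:
                            pairs.add((i, j))
    return pairs
```

Since any two particles closer than h differ by at most one cell index per axis, the 3x3 neighbourhood scan finds every interacting pair while skipping most distant ones.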
Abstract: This paper presents on-going research in the area of Model-Driven Engineering (MDE). The premise is that UML is too unwieldy to serve as the basis for model-driven engineering; a smaller, simpler notation with cleaner semantics is needed. We propose some ideas for such a notation. The result is known as μML, or the Micro-Modelling Language.
Abstract: The proper operation of common active filters depends on the accuracy with which system distortion is identified. Moreover, a suitable method of current injection and reactive power compensation increases filter performance. Accordingly, this paper presents a method based on predictive current control theory for shunt active filter applications. The harmonics of the load current are identified by applying the o–d–q reference frame transformation to the load current and eliminating the DC part of the d–q components. The remaining components are then delivered to the predictive current controller as a three-phase reference current via the inverse Park transformation. The system is modeled in the discrete-time domain. The proposed method has been tested using a MATLAB model with a nonlinear load (Total Harmonic Distortion = 20%). The simulation results indicate that the proposed filter causes a sinusoidal current (THD = 0.15%) to flow through the source. In addition, the results show that the filter tracks the reference current accurately.
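The Park transformation and its inverse, which the abstract uses to move between the three-phase and d–q frames, have standard amplitude-invariant forms; the sketch below shows those textbook formulas, not the paper's implementation:

```python
import math

TWO_THIRDS_PI = 2.0 * math.pi / 3.0

def park(a, b, c, theta):
    """Amplitude-invariant Park (abc -> d-q-0) transform at rotor angle theta."""
    d = (2.0 / 3.0) * (a * math.cos(theta)
                       + b * math.cos(theta - TWO_THIRDS_PI)
                       + c * math.cos(theta + TWO_THIRDS_PI))
    q = -(2.0 / 3.0) * (a * math.sin(theta)
                        + b * math.sin(theta - TWO_THIRDS_PI)
                        + c * math.sin(theta + TWO_THIRDS_PI))
    z = (a + b + c) / 3.0  # zero-sequence component
    return d, q, z

def inverse_park(d, q, z, theta):
    """Inverse Park (d-q-0 -> abc) transform."""
    a = d * math.cos(theta) - q * math.sin(theta) + z
    b = d * math.cos(theta - TWO_THIRDS_PI) - q * math.sin(theta - TWO_THIRDS_PI) + z
    c = d * math.cos(theta + TWO_THIRDS_PI) - q * math.sin(theta + TWO_THIRDS_PI) + z
    return a, b, c
```

A balanced sinusoidal set maps to a constant d component and zero q, which is what makes the load-current harmonics separable from the fundamental as a DC part in the d–q frame.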
Abstract: The main focus of this paper is on human-induced forces. Almost all existing force models for this type of load (defined either in the time or the frequency domain) are developed from the assumption of perfect periodicity of the force and are based on force measurements conducted on rigid (i.e. high-frequency) surfaces. To verify the conclusions of different authors, vertical pressure measurements during walking were performed using pressure gauges in various configurations. The obtained forces are analyzed using the Fourier transform. This load is often decisive in the design of footbridges. Design criteria and load models proposed by widely used standards and by other researchers are introduced and compared.
Abstract: Camera calibration plays an important role in the analysis of sports video. For soccer video, in most cases the cross-points at the center of the soccer field that can be used for calibration are not sufficient, so this paper introduces a new automatic camera calibration algorithm that solves this problem by using the properties of images of the center circle, the halfway line and a touch line. After a theoretical analysis, a practicable automatic algorithm is proposed. Although very little information is used, experiments with both synthetic and real data show that the algorithm is applicable.
Abstract: The seemingly ambiguous title of this paper – use of the terms maturity and innovation in concord – signifies the imperative of every organisation within the competitive domain. Where organisational maturity and innovativeness were traditionally considered antonymous, the assimilation of these two seemingly contradictory notions is fundamental to the assurance of long-term organisational prosperity. Organisations are required, now more than ever, to grow and mature their innovation capability – rendering consistent innovative outputs. This paper describes research conducted to consolidate the principles of innovation and identify the fundamental components that constitute organisational innovation capability. The process of developing an Innovation Capability Maturity Model is presented. A brief description is provided of the basic components of the model, followed by a description of the case studies that were conducted to evaluate the model. The paper concludes with a summary of the findings and potential future research.
Abstract: This paper describes a new method for extracting the fetal heart rate (fHR) and the fetal heart rate variability (fHRV) signal non-invasively from abdominal maternal electrocardiogram (mECG) recordings. The extraction is based on the fundamental frequency (Fourier) theorem. The fundamental frequency of the mother's electrocardiogram signal (f0,m) is calculated directly from the abdominal signal. The heart rate of the fetus is usually higher than that of the mother; as a result, the fundamental frequency of the fetal electrocardiogram signal (f0,f) is higher than that of the mother's (f0,f > f0,m). Notch filters to suppress the mother's higher harmonics were designed; then a bandpass filter to pass f0,f and reject f0,m was implemented. Although the bandpass filter also passes some other frequencies (harmonics), we show in this study that those harmonics are actually carried on f0,f, and thus have no impact on the evaluation of the beat-to-beat changes (RR intervals). The oscillations of the time-domain extracted signal represent the RR intervals. We also show that zero-to-zero evaluation of the periods is more accurate than peak-to-peak evaluation. The method is evaluated both on simulated signals and on abdominal recordings obtained at different gestational ages.
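The zero-to-zero period evaluation favoured by the study can be sketched as follows: detect positive-going zero crossings, refine each crossing instant by linear interpolation between samples, and take successive differences as the periods (RR intervals). The function name and sampling setup below are illustrative assumptions, not the paper's code:

```python
import math

def zero_crossing_periods(signal, fs):
    """Estimate oscillation periods (e.g. RR intervals, in seconds) from
    positive-going zero crossings of a sampled signal at rate fs (Hz)."""
    crossings = []
    for n in range(1, len(signal)):
        if signal[n - 1] < 0.0 <= signal[n]:
            # linear interpolation of the exact crossing instant
            frac = -signal[n - 1] / (signal[n] - signal[n - 1])
            crossings.append((n - 1 + frac) / fs)
    return [t2 - t1 for t1, t2 in zip(crossings, crossings[1:])]
```

On a clean tone the recovered periods match the true one closely; the interpolation step is what makes zero-to-zero timing less sensitive to sampling than picking discrete peaks.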
Abstract: Response Surface Methodology (RSM) is a powerful
and efficient mathematical approach widely applied in the
optimization of cultivation processes. Cellulase enzyme production by Trichoderma reesei RutC30 using the agricultural wastes rice straw and banana fiber as carbon sources was investigated. In this work, a sequential optimization strategy based on statistical design was employed to enhance cellulase production through submerged cultivation. A fractional factorial design (2^(6-2)) was applied to elucidate the process parameters that significantly affect cellulase production. Temperature, substrate concentration, inducer concentration, pH, inoculum age and agitation speed were identified as important process parameters affecting cellulase enzyme synthesis. The concentrations of lignocellulose and lactose (inducer) in the cultivation medium were found to be the most significant factors. The
steepest ascent method was used to locate the optimal domain and a
Central Composite Design (CCD) was used to estimate the quadratic
response surface from which the factor levels for maximum
production of cellulase were determined.
Abstract: The automatic classification of non-stationary signals is an important practical goal in several domains. An essential classification task is to allocate the incoming signal to a group associated with the kind of physical phenomenon producing it. In this paper, we present a modular system composed of three blocks: 1) representation, 2) dimensionality reduction and 3) classification. The originality of our work lies in the use of a new wavelet, called the "Ben wavelet", in the representation stage. For dimensionality reduction, we propose a new algorithm based on random projection and principal component analysis.
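The paper's combined algorithm is not specified in the abstract; as a hedged illustration of the random projection half alone, each feature vector can be projected onto k random Gaussian directions (Johnson-Lindenstrauss style; the function name and 1/sqrt(k) scaling convention are our own choices):

```python
import math
import random

def random_projection(data, k, seed=0):
    """Reduce each d-dimensional vector in `data` to k dimensions by
    projecting onto k random Gaussian directions; scaling by 1/sqrt(k)
    keeps pairwise distances roughly comparable in expectation."""
    rng = random.Random(seed)
    d = len(data[0])
    directions = [[rng.gauss(0.0, 1.0) for _ in range(d)] for _ in range(k)]
    scale = 1.0 / math.sqrt(k)
    return [[scale * sum(r[j] * x[j] for j in range(d)) for r in directions]
            for x in data]
```

In a pipeline like the one described, PCA would then be applied to the projected vectors, so the expensive eigendecomposition runs in k dimensions instead of d.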
Abstract: In this paper, the application of the Mode Matching
(MM) method in the case of photonic crystal waveguide
discontinuities is presented. The structure under consideration is
divided into a number of cells, each of which supports a number of guided and evanescent modes. These modes can be calculated numerically
by an alternative formulation of the plane wave expansion method
for each frequency. A matrix equation is then formed relating the
modal amplitudes at the beginning and at the end of the structure.
The theory is highly efficient and accurate and can be applied to
study the transmission sensitivity of photonic crystal devices due to
fabrication tolerances. The accuracy of the MM method is compared
to the Finite Difference Frequency Domain (FDFD) and the Adjoint
Variable Method (AVM) and good agreement is observed.
Abstract: This paper investigates the effects of a sharp-edged gust on the aeroelastic behavior and time-domain response of a typical section model undergoing pure plunging motion, using Jones' approximate aerodynamics. Flutter analysis is performed using the p and p-k methods developed for the presented finite-state aerodynamic model of a typical section model (airfoil). The gust analysis is introduced as a linear set of ordinary differential equations in a simplified procedure by transformation into an eigenvalue problem.
Abstract: In this paper, a delayed predator-prey fishery model with a prey reserve is investigated using a frequency-domain approach. Choosing the delay τ as a bifurcation parameter, it is found that a Hopf bifurcation occurs as τ passes through a sequence of critical values; that is, a family of periodic solutions bifurcates from the equilibrium when the bifurcation parameter exceeds a critical value. The length of delay that preserves the stability of the positive equilibrium is calculated. Some numerical simulations are included to justify the theoretical analysis. Finally, the main conclusions are given.
Abstract: Precipitation prediction is required for proper planning and management of water resources. Prediction with neural network models has received increasing interest in various research and application domains. However, it is difficult to determine the best neural network architecture for prediction, since it is not immediately obvious how many input or hidden nodes should be used in the model. In this paper, a neural network model is used as a forecasting tool. The major aim is to evaluate a suitable neural network model for monthly precipitation mapping of Myanmar. Using 3-layered neural network models, 100 cases are tested by varying the number of input and hidden nodes from 1 to 10 each, with a single output node. The optimum model is selected according to the minimum forecast error. Measuring network performance by the Root Mean Square Error (RMSE), the experiments show that a 3-input, 10-hidden, 1-output architecture gives the best prediction results for monthly precipitation in Myanmar.
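The model-selection procedure described, 100 architectures scored by forecast RMSE, can be sketched generically; `evaluate` below is a hypothetical stand-in for training a network with the given node counts and returning its validation RMSE:

```python
import math
from itertools import product

def rmse(actual, predicted):
    """Root Mean Square Error between observed and forecast values."""
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted))
                     / len(actual))

def select_architecture(evaluate):
    """Try every (inputs, hiddens) pair in 1..10 x 1..10 (100 cases, one
    output node) and keep the pair with the smallest forecast error."""
    return min(product(range(1, 11), repeat=2), key=lambda nh: evaluate(*nh))
```

With a real training routine plugged in, this exhaustive sweep reproduces the paper's selection rule: smallest RMSE wins.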
Abstract: Question answering (QA) aims at retrieving precise information from a large collection of documents. Most question answering systems are composed of three main modules: question processing, document processing and answer processing. The question processing module plays an important role in QA systems by reformulating questions. The answer processing module, an emerging topic in QA systems, is required to rank and validate candidate answers. These techniques, aimed at finding short and precise answers, are often based on semantic relations and co-occurring keywords. This paper discusses a new model for question answering that improves the two modules, question processing and answer processing, that most affect the evaluation of system operations. Two components form the basis of question processing. The first is question classification, which specifies the types of question and answer. The second is reformulation, which converts the user's question into one understandable by the QA system in a specific domain. The objective of an answer validation task is to judge the correctness of an answer returned by a QA system according to the text snippet given to support it. To validate answers, we apply candidate answer filtering and candidate answer ranking, followed by a final validation step based on user voting. The paper also describes a new architecture for the question and answer processing modules, with modeling, implementation and evaluation of the system. The system differs from most question answering systems in its answer validation model, which makes it more suitable for finding exact answers. Evaluation of the model on a total of 50 questions shows a 92% improvement in the system's decisions.
Abstract: In this paper an alternative analysis in the time
domain is described and the results of the interpolation process are
presented by means of functions that are based on the rule of
conditional mathematical expectation and the covariance function. A
comparison between the interpolation error caused by low-order filters and by the classic truncated sinc(t) function is also presented. When fewer samples are used, low-order filters have less error; as the number of samples increases, sinc(t)-type functions become the better alternative. Generally speaking, there is an optimal filter for each input signal, which depends on the filter length and the covariance function of the signal. A novel scheme for adaptive interpolation filters is also presented.
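The truncated sinc(t) interpolation used as the comparison baseline can be sketched in pure Python; the window length, function name and sampling setup are our own illustrative choices:

```python
import math

def sinc_interp(samples, T, t, taps=10):
    """Truncated-sinc reconstruction of a uniformly sampled signal at
    time t, using up to `taps` samples on each side of t (sample spacing T)."""
    k0 = int(t / T)
    total = 0.0
    for k in range(max(0, k0 - taps), min(len(samples), k0 + taps + 1)):
        u = t / T - k
        total += samples[k] * (1.0 if u == 0
                               else math.sin(math.pi * u) / (math.pi * u))
    return total
```

Truncating the window is exactly what introduces the error discussed above: a short window behaves like a low-order filter, while widening it approaches ideal sinc reconstruction.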