Abstract: Unified Speech and Audio Coding (USAC), the latest MPEG standard for unified speech and audio coding, uses a speech/audio classification algorithm to distinguish speech and audio segments of the input signal. Owing to a shortcoming of this system, introducing an orchestra/percussion classification stage and modifying the subsequent processing can considerably increase the quality of the recovered audio. This paper proposes an orchestra/percussion classification algorithm for the USAC system that extracts only 3 scales of Mel-Frequency Cepstral Coefficients (MFCCs) rather than the traditional 13 and uses an Iterative Dichotomiser 3 (ID3) decision tree rather than more complex learning methods, so the proposed algorithm has lower computational complexity than most existing algorithms. Considering that frequent switching of attributes may degrade the recovered audio signal, this paper also designs a modified subsequent process that helps the whole classification system reach an accuracy as high as 97%, comparable to the classical 99%.
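As a rough sketch of the classification stage described above, the following toy ID3 decision tree is trained on hand-made, coarsely quantized "MFCC" features; the feature values and labels are invented for illustration, and the real system would extract the 3 MFCC scales from audio frames:

```python
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def best_attribute(rows, labels, attrs):
    # ID3 criterion: pick the attribute with the highest information gain.
    base = entropy(labels)
    def gain(a):
        g = base
        for v in set(r[a] for r in rows):
            sub = [l for r, l in zip(rows, labels) if r[a] == v]
            g -= len(sub) / len(labels) * entropy(sub)
        return g
    return max(attrs, key=gain)

def id3(rows, labels, attrs):
    if len(set(labels)) == 1:
        return labels[0]
    if not attrs:
        return Counter(labels).most_common(1)[0][0]
    a = best_attribute(rows, labels, attrs)
    node = {"attr": a, "children": {}}
    for v in set(r[a] for r in rows):
        idx = [i for i, r in enumerate(rows) if r[a] == v]
        node["children"][v] = id3([rows[i] for i in idx],
                                  [labels[i] for i in idx],
                                  [x for x in attrs if x != a])
    return node

def classify(node, row, default="orchestra"):
    while isinstance(node, dict):
        node = node["children"].get(row[node["attr"]], default)
    return node

# Invented training data: each row is 3 coarsely quantized MFCC scales.
rows = [("low", "mid", "high"), ("low", "low", "high"),
        ("high", "mid", "low"), ("high", "high", "low"),
        ("low", "high", "high"), ("high", "low", "low")]
labels = ["orchestra", "orchestra", "percussion",
          "percussion", "orchestra", "percussion"]
tree = id3(rows, labels, [0, 1, 2])
print(classify(tree, ("low", "mid", "high")))  # → orchestra
```

Because only 3 features are used and ID3 builds a shallow tree, both training and classification stay cheap, which is the complexity argument the abstract makes.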
Abstract: This paper deals with the comparison between two proposed control strategies for a DC-DC boost converter. The first is a classical Sliding Mode Control (SMC) and the second is a distance-based Fuzzy Sliding Mode Control (FSMC). The SMC is an analytical control approach based on the mathematical model of the boost converter, whereas the FSMC is a non-conventional approach that does not need the mathematical model of the controlled system; it needs only measurements of the output voltage to generate the control signal. The simulation results obtained show that the two proposed control methods are robust to load resistance and input voltage variations. However, the proposed FSMC gives a better step voltage response than the SMC.
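As a hedged illustration of the SMC side of this comparison, the sketch below simulates a switched boost converter under a current-mode sliding surface; all component values, the current-reference rule, and the switching law are illustrative assumptions rather than the paper's design:

```python
# Sliding-mode current control of an ideal boost converter (Euler simulation).
# Component values and the power-balance reference are made-up illustration.
L_ind, C_cap, R_load, V_in = 1e-3, 1e-4, 50.0, 12.0
V_ref = 24.0
i_ref = V_ref ** 2 / (R_load * V_in)    # power-balance inductor-current reference

dt = 1e-6
i_L, v_out = 0.0, 0.0
for _ in range(50_000):                 # simulate 50 ms
    s = i_L - i_ref                     # sliding surface s = i_L - i_ref
    u = 1.0 if s < 0 else 0.0           # switch law: u = (1 - sign(s)) / 2
    i_L += dt * (V_in - (1.0 - u) * v_out) / L_ind
    v_out += dt * ((1.0 - u) * i_L - v_out / R_load) / C_cap

print(round(v_out, 1))  # settles near the 24 V reference
```

The discontinuous switch law drives the state onto the surface s = 0 and keeps it there by chattering, after which the output voltage relaxes to the value fixed by power balance.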
Abstract: Weather systems use enormously complex
combinations of numerical tools for study and forecasting.
Unfortunately, due to phenomena in the world climate, such
as the greenhouse effect, classical models may become
insufficient, mostly because they lack adaptation. Therefore,
the weather forecasting problem is well suited to heuristic
approaches, such as Evolutionary Algorithms.
Experimentation with heuristic methods like Particle Swarm
Optimization (PSO) algorithm can lead to the development of
new insights or promising models that can be fine-tuned with
more focused techniques. This paper describes a PSO
approach for analysis and prediction of data and provides
experimental results of the aforementioned method on real-world
meteorological time series.
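A minimal PSO loop of the kind referred to above can be sketched as follows; the toy sphere cost function stands in for the model-fit error a forecasting application would use, and all parameter values are conventional defaults, not the paper's settings:

```python
import random

# Minimal particle swarm optimizer: inertia w, cognitive c1, social c2.
def pso(cost, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    pos = [[random.uniform(-10, 10) for _ in range(dim)]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_cost = [cost(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_cost[i])
    gbest, gbest_cost = pbest[g][:], pbest_cost[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            c = cost(pos[i])
            if c < pbest_cost[i]:           # update personal best
                pbest[i], pbest_cost[i] = pos[i][:], c
                if c < gbest_cost:          # update global best
                    gbest, gbest_cost = pos[i][:], c
    return gbest, gbest_cost

# Toy objective: minimize the 3-D sphere function, optimum 0 at the origin.
best, err = pso(lambda x: sum(v * v for v in x), dim=3)
```

For time-series prediction, `cost` would instead measure the error of a parameterized model against the observed meteorological data.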
Abstract: The data exchanged on the Web are of a different nature
from those treated by classical database management systems;
these data are called semi-structured data since they do not have a
regular and static structure like data found in a relational database;
their schema is dynamic and may contain missing data or types.
Therefore, the need for further techniques and
algorithms to exploit and integrate such data and extract relevant
information for the user has arisen. In this paper we present
the system OSIX (Osiris based System for Integration of XML
Sources). This system has a Data Warehouse model designed for the
integration of semi-structured data and more precisely for the
integration of XML documents. The architecture of OSIX relies on
the Osiris system, a DL-based model designed for the representation
and management of databases and knowledge bases. Osiris is a view-based
data model whose indexing system supports semantic query
optimization. We show that query processing on an XML source is
optimized by the indexing approach proposed by
Osiris.
Abstract: Representing objects in a dynamic domain is essential
in commonsense reasoning under some circumstances. Classical logics
and their nonmonotonic consequences, however, are usually not
able to deal with reasoning with dynamic domains due to the fact that
every constant in the logical language denotes some existing object
in the static domain. In this paper, we explore a logical formalization
which allows us to represent nonexisting objects in commonsense
reasoning. A formal system named N-theory is proposed for this
purpose and its possible application in computer security is briefly
discussed.
Abstract: A model of vortex wake is suggested to determine the
induced power during animal hovering flight. The wake is modeled
by a series of equi-spaced rigid rectangular vortex plates, positioned
horizontally and moving vertically downwards with identical speeds;
each plate is generated during the power phase of the wing
stroke. The vortex representation of the wake considered in the
current theory allows a considerable loss of momentum to occur. The
current approach accords well with the nature of the wingbeat since it
considers the unsteadiness in the wake as an important fluid
dynamical characteristic. Induced power in hovering is calculated as
the aerodynamic power required to generate the vortex wake system.
The ratio of mean specific induced power to mean wing-tip velocity is
determined solely by the normal spacing parameter (f) for a given
wing stroke amplitude. The current theory gives a much higher specific
induced power estimate than anticipated by classical methods.
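For context, the classical momentum-theory (Rankine-Froude) estimate that such wake models are usually compared against treats the hovering wake as a uniform actuator-disc jet; with weight support (thrust) T, air density rho and disc area A, the ideal induced power is:

```latex
P_{\mathrm{ind}} = T\,v_i = \frac{T^{3/2}}{\sqrt{2\rho A}},
\qquad
v_i = \sqrt{\frac{T}{2\rho A}}
```

The unsteady vortex-plate wake above departs from this estimate precisely because it allows momentum loss that the steady actuator-disc picture cannot represent.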
Abstract: The study of a real function of two real variables can be supported by visualization using a Computer Algebra System (CAS). One type of constraint of such systems is due to the algorithms implemented, which yield continuous approximations of the given function by interpolation. This often masks discontinuities of the function and can produce strange plots that are not compatible with the mathematics. In recent years, point-based geometry has gained increasing attention as an alternative surface representation, both for efficient rendering and for flexible geometry processing of complex surfaces. In this paper we present different artifacts created by mesh surfaces near discontinuities and propose a point-based method that controls and reduces these artifacts. A least-squares penalty method for the automatic generation of a mesh that captures the behavior of the chosen function is presented. The special feature of this method is its ability to improve the accuracy of the surface visualization near a set of interior points where the function may be discontinuous. The present method is formulated as a minimax problem, and the non-uniform mesh is generated using an iterative algorithm. Results show that for large, poorly conditioned matrices the new algorithm gives more accurate results than the classical preconditioned conjugate gradient algorithm.
Abstract: The join dependency provides the basis for obtaining
lossless join decomposition in a classical relational schema. The
existence of a join dependency ensures that the tables always
represent the correct data after being joined. Since classical
relational databases cannot handle imprecise data, they were
extended to fuzzy relational databases so that uncertain, ambiguous,
imprecise and partially known information can also be stored in
databases in a formal way. However, like classical databases,
fuzzy relational databases also undergo decomposition during
normalization, and the issue of joining the decomposed fuzzy
relations remains open. The present paper addresses this
issue. We define fuzzy join dependency in the
framework of type-1 and type-2 fuzzy relational databases
using the concept of fuzzy equality, which is
defined using fuzzy functions. We use the fuzzy equi-join operator
for computing the fuzzy equality of two attribute values. We also
discuss the dependency preservation property on execution of this
fuzzy equi-join and derive the necessary condition for the fuzzy
functional dependencies to be preserved on joining the decomposed
fuzzy relations. We also derive the conditions for fuzzy join
dependency to exist in the context of both type-1 and type-2 fuzzy
relational databases. We find that, unlike in classical relational
databases, even the existence of a trivial join dependency does not
ensure lossless join decomposition in type-2 fuzzy relational
databases. Finally, we derive the conditions for the fuzzy equality to
be nonzero and for an attribute to qualify as a fuzzy key.
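A toy version of the fuzzy equi-join described above can be sketched as follows; the linear resemblance function and the threshold alpha are illustrative choices, not the paper's definitions:

```python
# Toy fuzzy equi-join: two tuples join when the fuzzy equality of their
# join attributes reaches a threshold alpha.
def fuzzy_eq(a, b, spread=10.0):
    # Fuzzy equality of two numeric attribute values in [0, 1]:
    # 1 for identical values, decaying linearly with their distance.
    return max(0.0, 1.0 - abs(a - b) / spread)

def fuzzy_equi_join(r1, r2, key1, key2, alpha=0.8):
    # Keep each matched pair of tuples together with its membership degree.
    return [(t1, t2, fuzzy_eq(t1[key1], t2[key2]))
            for t1 in r1 for t2 in r2
            if fuzzy_eq(t1[key1], t2[key2]) >= alpha]

# Invented relations: join on approximately equal ages.
employees = [{"name": "A", "age": 30}, {"name": "B", "age": 45}]
candidates = [{"id": 1, "age": 31}, {"id": 2, "age": 60}]
joined = fuzzy_equi_join(employees, candidates, "age", "age")
```

Here ages 30 and 31 join with degree 0.9, while the other pairings fall below the threshold; a type-2 variant would make the membership degrees themselves fuzzy.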
Abstract: The coalescer process is one of the methods for oily water treatment; it increases the oil droplet size in order to enhance the separation velocity and thus achieve effective separation. However, the presence of surfactants in an oily emulsion can limit these mechanisms owing to the small oil droplet size associated with a stabilized emulsion. In this regard, the purpose of this research is to improve the efficiency of the coalescer process for treating stabilized emulsions. The effects of bed type, bed height, liquid flow rate and staged coalescer (step-bed) configuration on the treatment efficiencies in terms of COD values were studied. Note that the treatment efficiency obtained experimentally was estimated using the COD values and the oil droplet size distribution. The study has shown that the plastic media are more effective at attaching oil particles than the stainless ones owing to their hydrophobic properties. Furthermore, a suitable bed height (3.5 cm) and step bed (3.5 cm with 2 steps) were necessary to obtain good coalescer performance. The application of the step-bed coalescer process in the reactor provided higher treatment efficiencies in terms of COD removal than those obtained with the classical process. The proposed model for predicting the area under the curve, and thus the treatment efficiency, based on the single collector efficiency (ηT) and the attachment efficiency (α), shows relatively good agreement between the experimental and predicted values of the treatment efficiencies in this study.
Abstract: This paper describes a simple implementation of a
homotopy (also called continuation) algorithm for determining the proper resistance of a resistor to dissipate energy at a specified rate in an electric circuit. The homotopy algorithm can be considered a development of classical numerical methods such as Newton-Raphson and fixed-point iteration. In homotopy methods, an embedding parameter is used to control the convergence. The method proposed in this work utilizes a special homotopy called the Newton homotopy. A numerical example solved in MATLAB is given to show the effectiveness of the proposed method.
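A minimal Newton-homotopy iteration of the kind described can be sketched in Python (the abstract's example uses MATLAB). The circuit numbers (V = 120 V, target dissipation P = 240 W) are made-up illustration, for which the exact root is R = V²/P = 60 ohms:

```python
# Newton homotopy: H(x, t) = f(x) - (1 - t) * f(x0) has the known root x0 at
# t = 0 and deforms continuously into the original equation f(x) = 0 at t = 1.
# The embedding parameter t is stepped from 0 to 1 with Newton corrections.
def newton_homotopy(f, df, x0, steps=20, newton_iters=5):
    fx0 = f(x0)
    x = x0
    for k in range(1, steps + 1):
        t = k / steps
        for _ in range(newton_iters):       # Newton corrector at fixed t
            h = f(x) - (1.0 - t) * fx0
            x -= h / df(x)
    return x

# Illustrative circuit problem: find R so that V^2 / R equals the target power.
V, P = 120.0, 240.0
f = lambda R: V * V / R - P                 # dissipation mismatch
df = lambda R: -V * V / (R * R)
R = newton_homotopy(f, df, x0=10.0)
print(round(R, 6))  # → 60.0
```

Because each continuation step moves the target root only slightly, the Newton corrector always starts close to its solution, which is how the embedding parameter controls convergence.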
Abstract: This paper presents a generalized formulation for the
problem of buckling optimization of anisotropic, radially graded,
thin-walled, long cylinders subject to external hydrostatic pressure.
The main structure to be analyzed is built of multi-angle fibrous
laminated composite lay-ups having different volume fractions of the
constituent materials within the individual plies. This yields a
piecewise grading of the material in the radial direction; that is, the
physical and mechanical properties of the composite material are
allowed to vary radially. The objective is to maximize the critical
buckling pressure while preserving the total structural mass at a
constant value equal to that of a baseline reference design. In the
selection of the significant optimization variables, the fiber volume
fractions are added to the standard design variables, namely the
fiber orientation angles and ply thicknesses. The
mathematical formulation employs the classical lamination theory,
where an analytical solution that accounts for the effective axial and
flexural stiffness separately as well as the inclusion of the coupling
stiffness terms is presented. The proposed model deals with
dimensionless quantities in order to be valid for thin shells having
arbitrary thickness-to-radius ratios. The critical buckling pressure
level curves augmented with the mass equality constraint are given
for several types of cylinders showing the functional dependence of
the constrained objective function on the selected design variables. It
was shown that material grading can have significant contribution to
the whole optimization process in achieving the required structural
designs with enhanced stability limits.
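The classical lamination theory invoked above relates the in-plane force and moment resultants to the mid-surface strains and curvatures through the extensional (A), coupling (B) and bending (D) stiffness matrices, obtained by through-thickness integration of the transformed ply stiffnesses:

```latex
\begin{Bmatrix} \mathbf{N} \\ \mathbf{M} \end{Bmatrix}
=
\begin{bmatrix} \mathbf{A} & \mathbf{B} \\ \mathbf{B} & \mathbf{D} \end{bmatrix}
\begin{Bmatrix} \boldsymbol{\varepsilon}^{0} \\ \boldsymbol{\kappa} \end{Bmatrix},
\qquad
\left(A_{ij},\,B_{ij},\,D_{ij}\right)
=\sum_{k=1}^{n}\int_{z_{k-1}}^{z_{k}} \bar{Q}_{ij}^{(k)}\,(1,\,z,\,z^{2})\,dz
```

Radial grading of the fiber volume fraction changes the ply stiffnesses Q̄ from layer to layer, which is how the grading enters the axial, flexural and coupling stiffness terms the abstract distinguishes.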
Abstract: Nature conducts its actions in a very private manner, and
classical science has made great efforts to reveal them. But
classical science can experiment only with things that can be seen
with the eyes; beyond its scope, quantum science
works very well. It is based on postulates such as the qubit,
superposition of two states, entanglement, measurement and
evolution of states that are briefly described in the present paper.
One application of quantum computing, namely the
implementation of a novel quantum evolutionary algorithm (QEA) to
automate the time tabling problem of Dayalbagh Educational Institute
(Deemed University) is also presented in this paper. Making a good
timetable is a scheduling problem: it is an NP-hard,
multi-constrained, complex combinatorial optimization problem whose
solution cannot be obtained in polynomial time. The QEA uses
genetic operators on the Q-bits as well as a quantum-gate updating
operator, introduced as a variation operator to converge
toward better solutions.
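The Q-bit representation and rotation-gate update mentioned above can be sketched on a toy problem. The code below runs a quantum-inspired EA on OneMax rather than timetabling, and the rotation step size, population and generation counts are illustrative assumptions:

```python
import math
import random

# Quantum-inspired EA on OneMax: each bit is a Q-bit angle theta; observing
# bit i yields 1 with probability sin(theta_i)^2, and a small rotation nudges
# each theta toward the corresponding bit of the best solution observed so far.
def qea_onemax(n_bits=20, pop=10, gens=60, delta=0.05 * math.pi):
    theta = [math.pi / 4] * n_bits          # equal superposition per Q-bit
    best, best_fit = None, -1
    for _ in range(gens):
        for _ in range(pop):
            # "Measure" the Q-bit string to obtain a classical bit string.
            x = [1 if random.random() < math.sin(t) ** 2 else 0
                 for t in theta]
            fit = sum(x)                    # OneMax fitness
            if fit > best_fit:
                best, best_fit = x, fit
        # Rotation-gate update: rotate each Q-bit toward the best bit value,
        # clamped away from 0 and pi/2 so probabilities never fully collapse.
        theta = [min(math.pi / 2 - 1e-3, t + delta) if b == 1
                 else max(1e-3, t - delta)
                 for t, b in zip(theta, best)]
    return best_fit

fit = qea_onemax()
```

The rotation gate plays the role of the variation operator: it biases future measurements toward the best solution found so far while the residual superposition keeps some exploration alive.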
Abstract: Knowledge Discovery in Databases (KDD) has
evolved into an important and active area of research because of
theoretical challenges and practical applications associated with the
problem of discovering (or extracting) interesting and previously
unknown knowledge from very large real-world databases. Rough
Set Theory (RST) is a mathematical formalism for representing
uncertainty that can be considered an extension of the classical set
theory. It has been used in many different research areas, including
those related to inductive machine learning and reduction of
knowledge in knowledge-based systems. One important concept
related to RST is that of a rough relation. In this paper we present
the current status of research on applying rough set theory to KDD,
which will be helpful for handling the characteristics of real-world
databases. The main aim is to show how rough sets and rough set
analysis can be effectively used to extract knowledge from large
databases.
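The core RST construction referred to above, the lower and upper approximations of a concept under an attribute-induced indiscernibility relation, can be sketched as follows; the small decision table is invented for illustration:

```python
# Rough set approximations: objects indiscernible on the chosen attributes
# fall into the same equivalence block; a concept X is approximated from
# below (blocks fully inside X) and from above (blocks overlapping X).
def partition(universe, objects, attrs):
    blocks = {}
    for obj in universe:
        key = tuple(objects[obj][a] for a in attrs)
        blocks.setdefault(key, set()).add(obj)
    return list(blocks.values())

def approximations(universe, objects, attrs, X):
    lower, upper = set(), set()
    for block in partition(universe, objects, attrs):
        if block <= X:
            lower |= block          # block certainly inside the concept
        if block & X:
            upper |= block          # block possibly inside the concept
    return lower, upper

# Invented decision table: condition attributes for four patients.
objects = {
    1: {"headache": "yes", "temp": "high"},
    2: {"headache": "yes", "temp": "high"},
    3: {"headache": "no",  "temp": "high"},
    4: {"headache": "no",  "temp": "normal"},
}
U = set(objects)
flu = {1, 3}                        # decision class "flu = yes"
low, up = approximations(U, objects, ["headache", "temp"], flu)
```

Patients 1 and 2 are indiscernible yet differ on the decision, so patient 1 ends up only in the upper approximation; the boundary region `up - low` is exactly the uncertainty that rough-set-based KDD quantifies.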
Abstract: In the classical buckling analysis of rectangular plates
subjected to the concurrent action of shear and uniaxial forces, the
Euler shear buckling stress is generally evaluated separately, so that
no influence on the shear buckling coefficient, due to the in-plane
tensile or compressive forces, is taken into account.
In this paper the buckling problem of simply supported rectangular
plates, under the combined action of shear and uniaxial forces, is
discussed from the beginning, in order to obtain new design formulas
for the shear buckling coefficient that take into account the presence
of uniaxial forces.
Furthermore, as the classical expression of the shear buckling
coefficient for simply supported rectangular plates is only a
"rough" approximation (the exact one is defined by a system of
intersecting curves), the convergence and accuracy of the classical
solution are analyzed, too.
Finally, as the problem of the Euler shear buckling stress
evaluation is a very important topic for a variety of structures (e.g.
ship structures), two numerical applications are carried out, in order to
highlight the role of the uniaxial stresses on the plating scantling
procedures and the validity of the proposed formulas.
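For context, the classical approximation of the shear buckling coefficient for simply supported rectangular plates referred to above is commonly written, for plate sides a ≥ b, thickness t, Young's modulus E and Poisson's ratio ν, as:

```latex
\tau_{cr} = k_s\,\frac{\pi^2 E}{12\left(1-\nu^2\right)}\left(\frac{t}{b}\right)^{2},
\qquad
k_s \approx 5.34 + 4.00\left(\frac{b}{a}\right)^{2}
```

The paper's contribution is to replace the constant coefficient k_s with expressions that also depend on the applied uniaxial tensile or compressive stress.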
Abstract: In this paper we present a generic approach to the problem of blind estimation of the parameters of linear and convolutional error correcting codes. In a non-cooperative context, an adversary has access only to the noisy transmission he has intercepted. The interceptor has no knowledge of the parameters used by the legitimate users. So, before gaining access to the information, he first has to blindly estimate the parameters of the error correcting code of the communication. The presented approach has the main advantage that the problem of reconstruction of such codes can be expressed in a very simple way. This allows us to evaluate theoretical bounds on the complexity of the reconstruction process, as well as bounds on the estimation rate. We show that some classical reconstruction techniques are optimal and also explain why some of them have theoretical complexities greater than those observed experimentally.
Abstract: This paper deals with efficient quadrature formulas involving functions that are observed only at fixed sampling points. The approach that we develop is derived from efficient continuous quadrature formulas, such as Gauss-Legendre or Clenshaw-Curtis quadrature. We select nodes at sampling positions that are as close as possible to those of the associated classical quadrature and we update quadrature weights accordingly. We supply the theoretical quadrature error formula for this new approach. We show on examples the potential gain of this approach.
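A minimal sketch of this idea: snap the nodes of a 3-point Gauss-Legendre rule on [-1, 1] to the nearest fixed sampling positions, then recompute the weights by moment matching so the displaced rule stays exact for polynomials of degree < 3. The uniform sampling grid is an illustrative assumption, not the paper's setting:

```python
import math

# Solve a small dense linear system by Gaussian elimination with pivoting.
def solve(A, b):
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k]
                              for k in range(r + 1, n))) / M[r][r]
    return x

samples = [-1.0 + 0.05 * i for i in range(41)]          # fixed sampling grid
gl_nodes = [-math.sqrt(3 / 5), 0.0, math.sqrt(3 / 5)]   # classical rule nodes
nodes = [min(samples, key=lambda s: abs(s - t)) for t in gl_nodes]

# Moment matching: sum_i w_i x_i^k = integral of x^k over [-1, 1], k = 0..2.
A = [[x ** k for x in nodes] for k in range(3)]
m = [2.0, 0.0, 2.0 / 3.0]
w = solve(A, m)

approx = sum(wi * xi ** 2 for wi, xi in zip(w, nodes))  # integrate x^2
```

The snapped rule loses the full degree-5 exactness of Gauss-Legendre, but the recomputed weights preserve exactness up to degree 2, which is the trade-off the abstract quantifies with its error formula.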
Abstract: Musculoskeletal problems are common in the high-
performance dance population. This study attempts to identify lower
extremity muscle flexibility parameters prevailing among
bharatanatyam dancers and to analyze whether any significant
difference exists between normal and injured dancers in these
parameters. Four hundred and one female dancers and 17 male
dancers participated in this study. Flexibility parameters
(hamstring tightness, hip internal and external rotation and
tendoachilles in supine and sitting posture) were measured using
goniometer. The results of our study make it evident that injured female
bharatanatyam dancers had significantly (p < 0.05) higher hamstring
tightness in the left lower extremity compared to normal female
dancers. The range of motion of the left tendoachilles was significantly
(p < 0.05) higher for the normal female group than for the injured
dancers in the supine lying posture. The majority of the injured
dancers had high hamstring tightness, which could be a possible reason
for pain and musculoskeletal disorders (MSDs).