Abstract: The Fractional Fourier Transform is a powerful tool
that generalizes the classical Fourier Transform. This paper
derives a mathematical relation between the span in the Fractional
Fourier domain and the amplitude and phase functions of the signal,
which is then used to study the variation of the quality factor
with different values of the transform order. It is observed that,
as the number of transients in the signal increases, the deviation
of the average Fractional Fourier span from the frequency bandwidth
increases. Moreover, as the signal becomes more transient in
nature, the optimum value of the transform order can be estimated
from the variation of the quality factor, and this value is found
to be very close to the one that yields the most compact
representation. Through the mathematical analysis and
experimentation, we consolidate the fact that the Fractional
Fourier Transform gives more compact representations than the
Fourier transform for a number of transform orders.
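For reference, the transform in question is the standard Fractional Fourier Transform of order a, with rotation angle α = aπ/2 (α not a multiple of π):

```latex
F_a\{f\}(u) \;=\; \sqrt{\frac{1 - i\cot\alpha}{2\pi}}
\int_{-\infty}^{\infty}
\exp\!\left( i\,\frac{u^{2}+t^{2}}{2}\cot\alpha \;-\; i\,u t\,\csc\alpha \right)
f(t)\,dt ,
\qquad \alpha = \frac{a\pi}{2}.
```

Setting a = 1 (α = π/2) recovers the ordinary Fourier transform, and a = 0 the identity.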
Abstract: Heavy rainfall greatly affects the aerodynamic performance of aircraft, and many aircraft accidents have been caused by the degradation of aerodynamic efficiency in heavy rain. In this paper we study the effects of heavy rain on the aerodynamic efficiency of the NACA 64-210 and NACA 0012 airfoils. For our analysis, a CFD method and a preprocessing grid generator are used as the main analytical tools, and the rain is simulated via the two-phase flow approach's Discrete Phase Model (DPM). Raindrops are assumed to be non-interacting, non-deforming, non-evaporating, and non-spinning spheres. Both airfoil sections exhibited a significant reduction in lift and an increase in drag for a given lift condition in simulated rain. The most significant difference between the two airfoils was the sensitivity of the NACA 64-210 to liquid water content (LWC), whereas the performance losses of the NACA 0012 in the rain environment are not a function of LWC. It is expected that the quantitative information gained in this paper will be useful to the operational airline industry, and that greater effort, such as small-scale and full-scale flight tests, should be put in this direction to further improve aviation safety.
Abstract: Many digital signal processing techniques have been used to automatically distinguish protein coding regions (exons) from non-coding regions (introns) in DNA sequences. In this work, we characterize these sequences according to nonlinear dynamical features such as moment invariants, correlation dimension, and largest Lyapunov exponent estimates. We apply our model to a number of real sequences encoded into time series using EIIP sequence indicators. In order to discriminate between coding and non-coding DNA regions, the phase space trajectory is first reconstructed for both region types. Nonlinear dynamical features are then extracted from these regions and used to investigate the differences between them. Our results indicate that the nonlinear dynamical characteristics yield significant differences between coding regions (CR) and non-coding regions (NCR) in DNA sequences. Finally, the classifier is tested on real genes in which the coding and non-coding regions are well known.
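The EIIP encoding and phase-space reconstruction steps mentioned above can be sketched as follows. The EIIP values are the standard nucleotide indicators; the embedding dimension and delay shown are illustrative choices, not necessarily the paper's.

```python
import numpy as np

# Standard EIIP (electron-ion interaction potential) values per nucleotide,
# used to turn a DNA string into a numerical time series.
EIIP = {"A": 0.1260, "C": 0.1340, "G": 0.0806, "T": 0.1335}

def encode(seq):
    """Map a DNA string to its EIIP time series."""
    return np.array([EIIP[b] for b in seq.upper()])

def delay_embed(x, dim=3, tau=2):
    """Takens delay embedding for phase-space reconstruction:
    each row is [x(t), x(t+tau), ..., x(t+(dim-1)*tau)].
    dim and tau here are illustrative, not fitted values."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau: i * tau + n] for i in range(dim)])
```

Features such as the correlation dimension or largest Lyapunov exponent would then be estimated from the embedded trajectory.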
Abstract: The technology of thin film deposition is of interest in
many engineering fields, from electronics manufacturing to
corrosion-protective coating. A typical deposition process, like
that developed at the University of Eindhoven, considers the
deposition of a thin, amorphous film of C:H or of Si:H on the
substrate, using the
Expanding Thermal arc Plasma technique. In this paper a computing
procedure is proposed to simulate the flow field in a deposition
chamber similar to that at the University of Eindhoven and a
sensitivity analysis is carried out in terms of: precursor mass flow
rate, electrical power supplied to the torch, and fluid-dynamic
characteristics of the plasma jet using different nozzles. For this
purpose, a deposition chamber similar in shape, dimensions, and
operating parameters to the above-mentioned chamber is considered.
Furthermore, a method is proposed for a very preliminary evaluation
of the film thickness distribution on the substrate. The computing
procedure relies on two codes working in tandem; the output from
the first code is the input to the second one. The first code simulates
the flow field in the torch, where Argon is ionized according to
Saha's equation, and in the nozzle. The second code simulates the
flow field in the chamber; due to the high rarefaction level, this
is a commercial Direct Simulation Monte Carlo (DSMC) code. The gas
is a mixture of 21 chemical species, and 24 chemical reactions
involving Argon plasma and Acetylene are implemented in both codes.
The effects of the above-mentioned operating parameters are
evaluated and discussed
by 2-D maps and profiles of some important thermo-fluid-dynamic
parameters, such as Mach number, velocity, and temperature. The
intensity, position, and extent of the shock wave are evaluated,
and the influence of the above-mentioned test conditions on the
film thickness and the uniformity of its distribution is also
assessed.
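The ionization model referred to above is the standard Saha equation; for singly ionized Argon it reads

```latex
\frac{n_{e}\, n_{\mathrm{Ar}^{+}}}{n_{\mathrm{Ar}}}
= \frac{2\,g_{\mathrm{Ar}^{+}}}{g_{\mathrm{Ar}}}
\left( \frac{2\pi m_{e} k_{B} T}{h^{2}} \right)^{3/2}
e^{-E_{i}/k_{B}T},
```

where n_e, n_Ar+, and n_Ar are the electron, ion, and neutral number densities, g the statistical weights, and E_i ≈ 15.76 eV the first ionization energy of Argon.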
Abstract: In today's information age, many organizations are still
debating how to capitalize on the value of Information Technology
(IT) and Knowledge Management (KM), from which individuals can
benefit and through which effective communication among individuals
can be established. IT enables positive improvement in
communication among knowledge workers (k-workers) across a number
of social network technology domains in the workplace. The
acceptance of digital discourse for sharing knowledge and
facilitating knowledge and information flows in most organizations
indeed instils the culture of knowledge sharing in Digital Social
Networks (DSN). Therefore, this study examines whether k-workers
with an IT background confer an effect on three knowledge
characteristics: conceptual, contextual, and operational. Derived
from these three knowledge characteristics, five potential factors
are examined for their effects on knowledge exchange via the e-mail
domain as the chosen query. It is expected that the results could
provide a parameter for exploring how DSN contributes to supporting
k-workers' virtues, performance, and qualities, as well as
revealing the common ground between IT and KM.
Abstract: Over the last two decades, owing to the hostile
environment of the internet, concerns about the confidentiality of
information have increased at a phenomenal rate. Therefore, to
safeguard information from attacks, a number of data/information
hiding methods have evolved, mostly in the spatial and transform
domains. In spatial-domain data hiding techniques, the information
is embedded directly on the image plane itself. In transform-domain
data hiding techniques, the image is first converted from the
spatial domain to some other domain, and the secret information is
then embedded so that it remains more secure against attack.
Information hiding algorithms in the time or spatial domain have
high capacity but relatively low robustness. In contrast,
algorithms in transform domains such as the DCT and DWT have a
certain robustness against some multimedia processing. In this work
the authors propose a novel steganographic method for hiding
information in the transform domain of a gray scale image. The
proposed approach works by converting the gray level image to the
transform domain using a discrete integer wavelet technique through
the lifting scheme. This approach performs a 2-D lifting wavelet
decomposition of the cover image through the Haar lifted wavelet
and computes the approximation coefficients matrix CA and detail
coefficients matrices CH, CV, and CD. The next step is to apply the
PMM technique to those coefficients to form the stego image. The
aim of this paper is to propose a high-capacity image steganography
technique that uses the pixel mapping method (PMM) in the integer
wavelet domain with acceptable levels of imperceptibility and
distortion in the cover image and a high level of overall security.
This solution is independent of the nature of the data to be hidden
and produces a stego image with minimum degradation.
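A minimal sketch of the one-level 2-D integer Haar lifting decomposition described above, producing CA, CH, CV, and CD; the PMM embedding step is omitted and the function names are ours.

```python
import numpy as np

def haar_lift_1d(x):
    """One level of the integer Haar lifting scheme (S-transform):
    split into even/odd samples, predict, then update.
    d >> 1 is floor division by 2, keeping everything integer."""
    even, odd = x[..., 0::2].astype(int), x[..., 1::2].astype(int)
    d = odd - even            # predict step: detail coefficients
    s = even + (d >> 1)       # update step: approximation coefficients
    return s, d

def haar_lift_2d(img):
    """One 2-D decomposition level: returns (CA, CH, CV, CD)."""
    lo, hi = haar_lift_1d(img)        # transform along rows
    ca, ch = haar_lift_1d(lo.T)       # then along columns of each half
    cv, cd = haar_lift_1d(hi.T)
    return ca.T, ch.T, cv.T, cd.T

img = np.arange(16).reshape(4, 4)     # toy 4x4 "cover image"
ca, ch, cv, cd = haar_lift_2d(img)
```

Because the lifting steps are integer-to-integer and exactly invertible, embedding in these coefficients allows lossless reconstruction of the transform.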
Abstract: Recent advances in both the testing and verification of software based on formal specifications of the system to be built have reached a point where the ideas can be applied in a powerful way in the design of agent-based systems. Software engineering research has highlighted a number of important issues: the importance of the type of modeling technique used; the careful design of the model to enable powerful testing techniques to be used; the automated verification of the behavioural properties of the system; and the need to provide a mechanism for translating the formal models into executable software in a simple and transparent way. This paper introduces the use of the X-machine formalism as a tool for modeling biology-inspired agents, proposing the use of the techniques built around X-machine models for the construction of effective and reliable agent-based software systems.
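To make the formalism concrete: an X-machine couples a finite control with a memory, and each transition is labelled by a processing function mapping (memory, input) to (output, memory). The toy agent below is our own illustrative sketch (states, inputs, and functions are not from the paper).

```python
# Minimal stream X-machine sketch: a finite state control whose transitions
# carry processing functions over a shared memory (here, a food counter).

def eat(mem, sym):
    """Processing function: applicable only to 'food' inputs."""
    if sym == "food":
        return "ate", mem + 1        # (output, new memory)
    return None                      # not applicable

def rest(mem, sym):
    if sym == "rest":
        return "rested", mem
    return None

# state -> list of (processing_function, next_state)
TRANSITIONS = {
    "foraging": [(eat, "foraging"), (rest, "idle")],
    "idle":     [(eat, "foraging")],
}

def run(inputs, state="foraging", memory=0):
    """Drive the machine over an input stream; inputs with no applicable
    processing function are ignored in this sketch."""
    outputs = []
    for sym in inputs:
        for fn, nxt in TRANSITIONS[state]:
            result = fn(memory, sym)
            if result is not None:
                out, memory = result
                outputs.append(out)
                state = nxt
                break
    return outputs, state, memory

outs, state, mem = run(["food", "rest", "food"])
```

Testing techniques for X-machines exploit exactly this separation between the finite control and the processing functions.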
Abstract: This paper presents a generalized form of the mechanistic
deconvolution technique (GMD) for modeling image sensors applicable
in various pan-tilt planes of view. The mechanistic deconvolution
technique (UMD) is modified with the given angles of a pan-tilt
plane of view to formulate constraint parameters and characterize
distortion effects, and thereby determine the corrected image data.
As a result, no experimental setup or calibration is required. Due
to the mechanistic nature of the sensor model, the necessity for
the sensor image plane to be orthogonal to its z-axis is
eliminated, and the dependency on image data is reduced. An
experiment was constructed to evaluate the accuracy of a model
created by GMD and its insensitivity to changes in sensor
properties and in pan and tilt angles. This was compared with a
pre-calibrated model and a model created by UMD using two sensors
with different specifications. GMD achieved similar accuracy with
one-seventh the number of iterations, and attained a mean error
lower by a factor of 2.4 compared to the pre-calibrated and UMD
models, respectively. The model has also shown itself to be robust
and, in comparison to the pre-calibrated and UMD models, improved
the accuracy significantly.
Abstract: The objective of this paper is to develop a forecast
model for HW flows. The research methodology comprised six modules:
historical data, assumptions, choice of indicators, data
processing, data analysis with STATGRAPHICS, and forecast models.
The proposed methodology was validated in a case study for Latvia.
Hypotheses on the changes in HW for the time period 2010-2020 were
developed and mathematically described with confidence levels of
95.0% and 50.0%. A sensitivity analysis of the analyzed scenarios
was performed. The results show that GDP growth affects the total
amount of HW in the country. The total amount of HW is projected to
lie within a corridor from -27.7% in the optimistic scenario up to
+87.8% in the pessimistic scenario, with a confidence level of
50.0%, for the period 2010-2020. The optimistic scenario proved to
be the least flexible with respect to changes in GDP growth.
Abstract: In this paper the problem of face recognition under variable illumination conditions is considered. Most of the works in the literature exhibit good performance under strictly controlled acquisition conditions, but performance drops drastically when changes in pose and illumination occur, so a number of approaches have recently been proposed to deal with such variability. The aim of this work is to introduce an efficient local appearance feature extraction method based on the steerable pyramid (SP) for face recognition. Local information is extracted from the SP sub-bands using the Local Binary Pattern (LBP). The underlying statistics allow us to reduce the required amount of data to be stored. The experiments carried out on different face databases confirm the effectiveness of the proposed approach.
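The LBP descriptor applied to each SP sub-band can be sketched as a minimal 8-neighbour, radius-1 implementation; the bit ordering below is one of several common conventions, and the function name is ours.

```python
import numpy as np

def lbp_8_1(img):
    """Basic 3x3 LBP: each pixel's 8 neighbours are thresholded against
    the centre pixel and packed into an 8-bit code (border pixels are
    skipped). Histograms of these codes form the local descriptor."""
    img = np.asarray(img, dtype=int)
    h, w = img.shape
    center = img[1:h - 1, 1:w - 1]
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    # neighbour offsets, clockwise from the top-left corner
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[1 + dy: h - 1 + dy, 1 + dx: w - 1 + dx]
        codes |= (neigh >= center).astype(np.uint8) << (7 - bit)
    return codes
```

In the setting above, one histogram of LBP codes per sub-band (or per sub-band block) would be concatenated into the face descriptor.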
Abstract: The paper reports the results of an experimental and
numerical study of nonstationary swirling flow in an isothermal
model of a vortex burner. It has been identified that the main
source of the instability is related to the precessing vortex core
(PVC) phenomenon. The PVC-induced flow pulsation characteristics,
such as the precession frequency and its variation as a function of
flowrate and swirl number, have been explored using acoustic
probes. Additionally, pressure transducers were used to measure the
pressure drops across the working chamber and across the vortex
flow. The experiments also included mean velocity measurements
using laser-Doppler anemometry. The features of the instantaneous
flowfield generated by the PVC were analyzed employing a commercial
CFD code (Star-CCM+) based on the Detached Eddy Simulation (DES)
approach. The validity of the numerical code was checked by
comparing the calculated flowfield data with the experimental
results. In particular, it has been confirmed that the CFD code
correctly reproduces the flow features.
Abstract: This research introduces a new use of Artificial Intelligence (AI) approaches in the field of Stepping Stone Detection (SSD). Using the Self-Organizing Map (SOM) approach as the engine, the experiments show that SOM has the capability to detect the number of connection chains involved in a stepping stone. Since counting the number of connection chains is one of the important steps of stepping stone detection and is a current research focus, this research chose SOM as the AI technique because of its capabilities. Through the experiments, it is shown that SOM can detect the number of involved connection chains in Network-based Stepping Stone Detection (NSSD).
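A minimal SOM in the spirit described above can be sketched as follows; the mapping from network connection features to input vectors is the paper's contribution and is not reproduced here, and the grid size, learning rate, and toy data are illustrative.

```python
import numpy as np

def train_som(data, grid=(4, 4), epochs=200, lr0=0.5, sigma0=2.0, seed=0):
    """Minimal Self-Organizing Map: competitive learning on a 2-D grid.
    Returns the trained weight vectors, one per grid node."""
    rng = np.random.default_rng(seed)
    rows, cols = grid
    w = rng.random((rows * cols, data.shape[1]))
    # grid coordinates of each node, used by the neighbourhood function
    coords = np.array([(r, c) for r in range(rows) for c in range(cols)],
                      dtype=float)
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)                 # decaying learning rate
        sigma = sigma0 * (1 - t / epochs) + 1e-3    # shrinking neighbourhood
        for x in rng.permutation(data):
            bmu = np.argmin(((w - x) ** 2).sum(axis=1))   # best matching unit
            dist2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
            h = np.exp(-dist2 / (2 * sigma ** 2))         # neighbourhood kernel
            w += lr * h[:, None] * (x - w)
    return w
```

In an SSD setting, the number of distinct clusters that form on the trained map would indicate the number of connection chains.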
Abstract: Waste management is now a global concern due to its
high environmental impact on climate change. Because huge amounts
of waste are generated through our daily activities, managing waste
in an efficient way has become more important than ever.
Alternative Waste Technology (AWT), a new category of waste
treatment technology, has been developed for energy recovery in
recent years to address this issue. AWT describes a technology that
redirects waste
away from landfill, recovers more useable resources from the waste
flow and reduces the impact on the surroundings. Australia is one of
the largest producers of waste per capita. A number of AWTs are in
use in Australia to produce energy from waste. Presently, it is vital
to identify an appropriate AWT to establish a sustainable waste
management system in Australia. Identification of an appropriate
AWT through multi-criteria analysis (MCA) of four AWTs using
five key decision-making criteria is presented and discussed in this
paper.
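A weighted-sum MCA of the kind described can be sketched as follows. The alternative technologies, criteria, weights, and scores below are purely illustrative placeholders, not the study's actual data.

```python
# Weighted-sum multi-criteria analysis (MCA) sketch. All names and numbers
# are illustrative placeholders for the study's four AWTs and five criteria.
weights = {"energy recovery": 0.30, "cost": 0.20, "emissions": 0.25,
           "residual waste": 0.15, "maturity": 0.10}

scores = {  # each criterion scored 0-10 (higher is better)
    "gasification":        {"energy recovery": 8, "cost": 5, "emissions": 7,
                            "residual waste": 6, "maturity": 5},
    "anaerobic digestion": {"energy recovery": 6, "cost": 7, "emissions": 8,
                            "residual waste": 7, "maturity": 8},
    "incineration":        {"energy recovery": 7, "cost": 6, "emissions": 5,
                            "residual waste": 5, "maturity": 9},
    "mechanical biological treatment":
                           {"energy recovery": 5, "cost": 8, "emissions": 7,
                            "residual waste": 6, "maturity": 7},
}

def rank(scores, weights):
    """Weighted-sum score per alternative, best first."""
    totals = {alt: sum(weights[c] * s[c] for c in weights)
              for alt, s in scores.items()}
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

ranking = rank(scores, weights)
```

With these placeholder numbers, the ranking simply surfaces the alternative with the highest weighted sum; real MCA studies would also normalize criteria and test weight sensitivity.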
Abstract: In recent years, high dynamic range (HDR) imaging has
gained popularity with the advancement of digital photography. In
this contribution we present a subjective evaluation of various
tone production and tone mapping techniques by a number of
participants. First, standard HDR images were used, and the
participants were asked to rate them based on a given rating
scheme. After that, the participants were asked to rate HDR images
generated using linear and nonlinear combination approaches over
multiple-exposure images. The experimental results showed that
linearly generated HDR images have better visualization than the
nonlinearly combined ones. In addition, the Reinhard et al. and
exponential tone mapping operators showed better results compared
to the logarithmic and Garrett et al. tone mapping operators.
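The global operator of Reinhard et al. used in the comparison can be sketched as follows; the key value 0.18 is the common default, and the full operator also has a local (dodging-and-burning) variant not shown here.

```python
import numpy as np

def reinhard_global(lum, key=0.18, eps=1e-6):
    """Global Reinhard et al. photographic tone mapping: scale the
    luminance by key / log-average luminance, then compress the
    result with L / (1 + L) into [0, 1)."""
    log_avg = np.exp(np.mean(np.log(eps + lum)))   # log-average luminance
    scaled = key / log_avg * lum
    return scaled / (1.0 + scaled)
```

The exponential and logarithmic operators in the study differ only in the compression curve applied to the scaled luminance.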
Abstract: The electromagnetic spectrum is a natural resource, and
hence the well-organized usage of this limited natural resource is
a necessity for better communication. The present static frequency
allocation schemes cannot accommodate the demands of the rapidly
increasing number of higher-data-rate services. Therefore, dynamic
usage of the spectrum must be distinguished from static usage to
increase the availability of the frequency spectrum. Cognitive
radio is not a single piece of apparatus but a technology that can
incorporate components spread across a network. It offers great
promise for improving system efficiency, spectrum utilization, more
effective applications, reduction in interference, and reduced
complexity of usage for users. A cognitive radio is aware of its
environment, internal state, and location, and autonomously adjusts
its operations to achieve designed objectives. It first senses its
spectral environment over a wide frequency band, and then adapts
its parameters to maximize spectrum efficiency with high
performance. This paper focuses on the analysis of the Bit Error
Rate (BER) in cognitive radio using the Particle Swarm Optimization
(PSO) algorithm. The problem is analyzed and interpreted both
theoretically and practically, in terms of advantages and drawbacks
and of how the BER affects the efficiency and performance of the
communication system.
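A minimal PSO loop of the kind used for such parameter adaptation can be sketched as follows; the quadratic objective at the bottom is a stand-in, since the paper's actual BER objective depends on its system model.

```python
import numpy as np

def pso(objective, dim, n_particles=30, iters=100, seed=0,
        w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
    """Minimal Particle Swarm Optimization: each particle blends inertia,
    a pull toward its personal best, and a pull toward the global best."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_val = np.array([objective(p) for p in x])
    g = pbest[pbest_val.argmin()].copy()            # global best position
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))  # stochastic weights
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([objective(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        g = pbest[pbest_val.argmin()].copy()
    return g, pbest_val.min()

# Stand-in objective: in the paper's setting this would be the BER of the
# candidate transmission parameters; here we minimise a simple quadratic.
best, val = pso(lambda p: float((p ** 2).sum()), dim=3)
```

In the cognitive radio setting, each particle would encode candidate transmission parameters and the objective would return the resulting BER.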
Abstract: Thirty-six samples each of aerobic and anoxic activated
sludge were collected from two wastewater treatment plants with
MBRs in Berlin, Germany. The samples were prepared for the counting
and identification of fungal isolates; these isolates were purified
by conventional techniques and identified by microscopic
examination. Sixty-two species belonging to 28 genera were isolated
from the activated sludge samples under aerobic conditions (28
genera and 58 species) and anoxic conditions (26 genera and 52
species). The obtained data show that Aspergillus was found in
94.4% of samples, followed by Penicillium (61.1%), Fusarium
(61.1%), Trichoderma (44.4%), and Geotrichum candidum (41.6%);
these were the most prevalent species in all activated sludge
samples. The study confirmed that fungi can thrive and sporulate in
activated sludge, but are isolated in different numbers depending
on the effect of the aeration system. Some fungal species in our
study are saprophytic, and others are pathogenic to plants and
animals.
Abstract: The vertex connectivity of a graph is the smallest number of vertices whose deletion disconnects the graph or makes it trivial. This work is devoted to the problem of testing the vertex connectivity of graphs in a distributed environment based on a general and constructive approach. The contribution of this paper is threefold. First, using a preconstructed spanning tree of the considered graph, we present a protocol to test whether a given graph is 2-connected using only local knowledge. Second, we present an encoding of this protocol using graph relabeling systems. The last contribution is the implementation of this protocol in the message passing model. For a given graph G, where M is the number of its edges, N the number of its nodes and Δ is its degree, our algorithms have the following requirements: the first one uses O(Δ×N²) steps and O(Δ×logΔ) bits per node; the second one uses O(Δ×N²) messages, O(N²) time and O(Δ×logΔ) bits per node. Furthermore, the studied network is semi-anonymous: only the root of the pre-constructed spanning tree needs to be identified.
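For contrast with the distributed protocol, the classical centralized 2-connectivity test (connected and no articulation vertex, via one depth-first search) can be sketched as follows; this is the standard Hopcroft-Tarjan check, not the paper's protocol.

```python
from collections import defaultdict

def is_biconnected(n, edges):
    """Centralized 2-vertex-connectivity test: a graph with n >= 3 nodes
    (labelled 0..n-1) is 2-connected iff it is connected and has no
    articulation vertex. Single-pass DFS with discovery/low values."""
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    disc, low, timer = {}, {}, [0]

    def dfs(u, parent):
        disc[u] = low[u] = timer[0]
        timer[0] += 1
        children = 0
        for v in adj[u]:
            if v == parent:
                continue
            if v in disc:                       # back edge
                low[u] = min(low[u], disc[v])
            else:
                children += 1
                if not dfs(v, u):
                    return False
                low[u] = min(low[u], low[v])
                # non-root cut vertex: child subtree cannot climb above u
                if parent is not None and low[v] >= disc[u]:
                    return False
        # the DFS root is a cut vertex iff it has two or more children
        return not (parent is None and children >= 2)

    return n >= 3 and dfs(0, None) and len(disc) == n
```

The distributed protocol in the paper performs an equivalent test over the preconstructed spanning tree using only each node's local knowledge.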
Abstract: Data mining uses a variety of techniques, each of which
is useful for some particular task. It is important to have a deep
understanding of each technique and to be able to perform
sophisticated analysis. In this article we describe a tool built to
simulate a variation of the Kohonen network to perform unsupervised
clustering and support the entire data mining process up to results
visualization. A graphical representation helps the user to find a
strategy to optimize classification by adding, moving, or deleting
a neuron in order to change the number of classes. The tool is able
to automatically suggest a strategy to optimize the number of
classes, and also supports both tree classifications and
semi-lattice organizations of the classes, giving users the
possibility of passing from one class to the ones with which it
shares some aspects. Examples of using tree and semi-lattice
classifications are given to illustrate advantages and problems.
The tool is applied to classify macroeconomic data that report the
most developed countries' imports and exports. It is possible to
classify the countries based on their economic behaviour and to use
the tool to characterize the commercial behaviour of a country in a
selected class from the analysis of the positive and negative
features that contribute to class formation. Possible
interrelationships between the classes and their meaning are also
discussed.
Abstract: In this paper, we propose a practical digital music matching system that is robust to variation in sound quality. The proposed system is subdivided into two parts: client and server. The client part consists of the input, preprocessing, and feature extraction modules. The preprocessing module, which includes a music onset module, corrects the gap occurring on the time axis between identical songs in different formats. The proposed method uses delta-grouped Mel-frequency cepstral coefficients (MFCCs) to extract music features that are robust to changes in sound quality. According to the number of sound quality formats (SQFs) used, the music server is constructed with a feature database (FD) that contains different sub-feature databases (SFDs). When the proposed system receives a music file, the selection module selects an appropriate SFD from the feature database; the selected SFD is subsequently used by the matching module. In this study, we used 3,000 queries for matching experiments in three cases with different FDs. In each case, we used 1,000 queries constructed by mixing 8 SQFs and 125 songs. The success rate of music matching improved from 88.6% when using a single SFD to 93.2% when using quadruple SFDs. Through these experiments, we show that the proposed method is robust to various sound qualities.
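The delta (differential) coefficients commonly appended to MFCC features can be sketched as follows; whether this matches the paper's exact delta-grouped construction is an assumption on our part.

```python
import numpy as np

def delta(feat, n=2):
    """Standard delta coefficients over a feature matrix of shape
    (frames, coeffs), as commonly applied to MFCCs:
    d_t = sum_i i * (c_{t+i} - c_{t-i}) / (2 * sum_i i^2),
    with edge frames replicated at the boundaries."""
    feat = np.asarray(feat, dtype=float)
    padded = np.pad(feat, ((n, n), (0, 0)), mode="edge")
    denom = 2 * sum(i * i for i in range(1, n + 1))
    return sum(i * (padded[n + i: len(feat) + n + i]
                    - padded[n - i: len(feat) + n - i])
               for i in range(1, n + 1)) / denom
```

Delta features capture the temporal trajectory of the cepstrum, which is what makes them less sensitive to static spectral distortions introduced by different sound quality formats.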
Abstract: Let F(x, y) = ax² + bxy + cy² be a positive definite
binary quadratic form with discriminant Δ whose base points lie on
the line x = -1/m for an integer m ≥ 2, let p be a prime number
and let F_p be a finite field. Let E_F : y² = ax³ + bx² + cx be an
elliptic curve over F_p and let C_F : ax³ + bx² + cx ≡ 0 (mod p) be
the cubic congruence corresponding to F. In this work we consider
some properties of positive definite quadratic forms, elliptic
curves and cubic congruences.