Abstract: We seek exact solutions of the coupled Klein-Gordon-Schrödinger equation with variable coefficients using the classical Lie symmetry approach. By the classical Lie method, we derive symmetries that reduce the coupled system of partial differential equations to ordinary differential equations. From the reduced equations we obtain new exact solutions of the coupled Klein-Gordon-Schrödinger equations involving special functions such as Airy functions, Bessel functions, and Mathieu functions.
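As a hedged illustration of how such special functions arise, the SymPy sketch below solves a prototypical reduced ODE of Airy type; the equation is a generic stand-in, not the authors' actual symmetry reduction.

```python
# Minimal sketch (not the paper's reduction): a symmetry-reduced ODE of
# Airy type, f''(x) - x*f(x) = 0, solved symbolically with SymPy.
import sympy as sp

x = sp.symbols('x')
f = sp.Function('f')

ode = sp.Eq(f(x).diff(x, 2) - x*f(x), 0)   # the Airy equation
sol = sp.dsolve(ode, f(x))
print(sol)   # f(x) = C1*airyai(x) + C2*airybi(x)
```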
Abstract: Attitude determination (AD) of a spacecraft using the phase measurements of the Global Navigation Satellite System (GNSS) is an active area of research. Various attitude determination algorithms have been developed over the years for spacecraft using different sensors, but the last two decades have witnessed a phenomenal increase in research on GNSS receivers as stand-alone sensors for determining the attitude of a satellite from the phase measurements of GNSS signals. GNSS-based attitude determination algorithms have been tested in many real missions. The problem of AD using GNSS phase measurements has two important parts: integer ambiguity resolution and attitude determination. Ambiguity resolution is the most widely addressed topic in the literature on implementing AD algorithms with GNSS phase measurements, as it is the key to achieving millimeter-level accuracy. This paper broadly surveys the different techniques for resolving the integer ambiguities encountered in AD using GNSS phase measurements.
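As a purely illustrative sketch of the ambiguity-resolution step, the snippet below fixes float carrier-phase ambiguities by naive rounding; practical receivers use integer least-squares search methods such as LAMBDA, and every number here is made up for demonstration.

```python
# Naive integer ambiguity "fix" by rounding the float solution; toy
# values only, not a real GNSS algorithm.
import numpy as np

wavelength = 0.1903                              # GPS L1 wavelength [m]
phase_cycles = np.array([103.62, -47.31, 12.08]) # made-up phase obs
ranges_m = np.array([18.95, -9.21, 1.90])        # made-up ranges

float_amb = phase_cycles - ranges_m / wavelength
fixed_amb = np.round(float_amb)                  # naive rounding
residual = float_amb - fixed_amb                 # small if the fix holds
print(fixed_amb, residual)
```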
Abstract: A new deployment of two multiple criteria decision making (MCDM) techniques, the Simple Additive Weighting (SAW) and the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS), for portfolio allocation is demonstrated in this paper. Rather than relying exclusively on mean and variance, as in the traditional mean-variance method, the criteria used in this demonstration are the first four moments of the portfolio distribution. Each asset is evaluated based on its marginal impact on the portfolio's higher moments, which are characterized by trapezoidal fuzzy numbers. Centroid-based defuzzification is then applied to convert the fuzzy numbers into crisp numbers, on which SAW and TOPSIS can be deployed. Experimental results suggest that these MCDM approaches are similarly efficient at selecting dominant assets for an optimal portfolio under higher moments. The proposed approaches allow investors to flexibly adjust their risk preferences regarding higher moments via different schemes, adapting to various kinds of investors, from conservative to risky. The other significant advantage is that, compared to mean-variance analysis, the portfolio weights obtained by SAW and TOPSIS are consistently well-diversified.
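A minimal sketch of the TOPSIS step on an already defuzzified (crisp) decision matrix may clarify the ranking procedure; the assets, weights, and criterion directions below are illustrative assumptions, not values from the paper.

```python
# TOPSIS on a crisp decision matrix: assets as rows, the four portfolio
# moments as columns; all numbers are toy data.
import numpy as np

X = np.array([[0.12, 0.04, 0.8, 3.1],   # asset A: mean, var, skew, kurt
              [0.10, 0.03, 0.5, 2.9],   # asset B
              [0.15, 0.06, 0.2, 3.5]])  # asset C
w = np.array([0.4, 0.3, 0.2, 0.1])      # assumed criterion weights
benefit = np.array([True, False, True, False])  # max mean/skew, min var/kurt

R = X / np.linalg.norm(X, axis=0)       # vector normalization
V = R * w                               # weighted normalized matrix
ideal = np.where(benefit, V.max(0), V.min(0))
anti = np.where(benefit, V.min(0), V.max(0))
d_pos = np.linalg.norm(V - ideal, axis=1)
d_neg = np.linalg.norm(V - anti, axis=1)
closeness = d_neg / (d_pos + d_neg)     # higher = closer to ideal
print(closeness)                        # rank assets by this score
```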
Abstract: Multi-user interference (MUI) is the main cause of performance deterioration in Spectral Amplitude Coding Optical Code Division Multiple Access (SAC-OCDMA) systems. MUI increases with the number of simultaneous users, resulting in a higher bit error probability and limiting the maximum number of simultaneous users. In addition, phase-induced intensity noise (PIIN), which originates from the spontaneous emission of the broadband source and from MUI, severely limits system performance and should be addressed as well. Since MUI is caused by the interference of simultaneous users, keeping the MUI value as small as possible is desirable. In this paper, an extensive study of system performance in terms of MUI and PIIN reduction is presented. Vector Combinatorial (VC) code families are adopted as signature sequences for the performance analysis, and a comparison with previously reported codes is performed. The results show that, as the received power increases, the PIIN noise for all the codes increases linearly. The results also show that the effect of PIIN can be minimized by increasing the code weight, which preserves an adequate signal-to-noise ratio and bit error probability. A comparative study between the proposed code and existing codes such as Modified Frequency Hopping (MFH) and Modified Quadratic Congruence (MQC) has been carried out.
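For illustration, SAC-OCDMA analyses commonly estimate the bit error rate from the signal-to-noise ratio via the Gaussian approximation BER = (1/2)·erfc(√(SNR/8)); the sketch below evaluates this for made-up SNR values and is not the paper's full MUI/PIIN noise model.

```python
# Gaussian-approximation BER from SNR; SNR values are illustrative.
import numpy as np
from scipy.special import erfc

snr = np.array([50.0, 100.0, 200.0, 400.0])
ber = 0.5 * erfc(np.sqrt(snr / 8.0))
for s, b in zip(snr, ber):
    print(f"SNR = {s:5.0f}  ->  BER = {b:.3e}")
```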
Abstract: The self-organizing map (SOM) is a well-known neural network model with a wide range of applications. The main characteristics of the SOM are two-fold, namely dimension reduction and topology preservation. Using a SOM, a high-dimensional data space is mapped to a low-dimensional space while the topological relations among the data are preserved. With such characteristics, the SOM has usually been applied to data clustering and visualization tasks. However, the SOM has the main disadvantage that the number and structure of its neurons must be known prior to training, and these are difficult to determine. Several schemes have been proposed to tackle this deficiency, for example the growing/expandable SOM, the hierarchical SOM, and the growing hierarchical SOM. These schemes can dynamically expand the map, and even generate hierarchical maps, during training, and encouraging results have been reported. Basically, these schemes adapt the size and structure of the map according to the distribution of the training data; that is, they are data-driven or data-oriented SOM schemes. In this work, a topic-oriented SOM scheme suitable for document clustering and organization is developed. The proposed SOM automatically adapts the number as well as the structure of the map according to identified topics. Unlike other data-oriented SOMs, our approach expands the map and generates the hierarchies according to both the topics and the characteristics of the neurons. Preliminary experiments give promising results and demonstrate the plausibility of the method.
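For reference, a minimal sketch of classical fixed-size SOM training (best-matching-unit search, Gaussian neighbourhood, weight update) is given below; the topic-driven growing scheme proposed in the paper is not reproduced, and the map size and parameters are arbitrary.

```python
# Classical SOM training loop on toy data (numpy); fixed 5x5 map.
import numpy as np

rng = np.random.default_rng(0)
grid_h, grid_w, dim = 5, 5, 10
weights = rng.random((grid_h, grid_w, dim))
coords = np.stack(np.meshgrid(np.arange(grid_h), np.arange(grid_w),
                              indexing='ij'), axis=-1)

def train_step(x, lr=0.5, sigma=1.5):
    # 1. best-matching unit (BMU): neuron closest to the input
    dists = np.linalg.norm(weights - x, axis=-1)
    bmu = np.unravel_index(dists.argmin(), dists.shape)
    # 2. Gaussian neighbourhood around the BMU on the grid
    grid_d2 = ((coords - np.array(bmu)) ** 2).sum(-1)
    h = np.exp(-grid_d2 / (2 * sigma**2))[..., None]
    # 3. pull weights toward the input, scaled by the neighbourhood
    weights[:] = weights + lr * h * (x - weights)

for x in rng.random((100, dim)):   # toy training data
    train_step(x)
```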
Abstract: An adaptive spatial Gaussian mixture model is proposed for clustering-based color image segmentation. A new clustering objective function that incorporates spatial information is introduced in the Bayesian framework. The weighting parameter controlling the importance of the spatial information is made adaptive to the image content, to augment smoothness towards piecewise-homogeneous regions and diminish the edge-blurring effect; hence the name adaptive spatial finite mixture model. The proposed approach is compared with the spatially variant finite mixture model for pixel labeling. Experimental results on synthetic images and the Berkeley dataset demonstrate that the proposed method is effective in improving the segmentation and can be employed in various practical image content understanding applications.
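As a baseline illustration only, the scikit-learn sketch below performs plain GMM-based color segmentation; the adaptive spatial prior that is the paper's contribution is not implemented, and the image and component count are toy assumptions.

```python
# Plain (non-spatial) GMM pixel clustering for color segmentation.
import numpy as np
from sklearn.mixture import GaussianMixture

def segment(image_rgb, n_segments=4):
    h, w, _ = image_rgb.shape
    pixels = image_rgb.reshape(-1, 3).astype(float)
    gmm = GaussianMixture(n_components=n_segments,
                          covariance_type='full', random_state=0)
    labels = gmm.fit(pixels).predict(pixels)  # one label per pixel
    return labels.reshape(h, w)

toy = np.random.default_rng(0).integers(0, 256, (32, 32, 3))
print(segment(toy).shape)   # (32, 32) label map
```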
Abstract: This paper deals with e-government issues at several levels. Initially we look at the concept of e-government itself in order to give it a sound framework. Then we look at e-government issues at three levels: first we analyse them at the global level, second at the level of transition economies, and finally we take a closer look at developments in Croatia. The analysis covers the actual progress made in selected transition economies relative to Euro area averages, along with the e-government potential for the demanding period ahead.
Abstract: This paper presents preliminary results on the modeling and control of a quadrotor UAV. Based on aerodynamic concepts, a mathematical model is first proposed to describe the dynamics of the quadrotor UAV. The parameters of this model are identified through experiments with the Matlab System Identification Toolbox. A group of PID controllers is then designed based on the developed model. To verify the developed model and controllers, simulations and experiments for altitude control, position control, and trajectory tracking are carried out. The results show that the quadrotor UAV follows the reference commands well, which clearly demonstrates the effectiveness of the proposed approach.
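A minimal sketch of a discrete PID loop on a toy double-integrator altitude model may help illustrate the control design; the gains and unit-mass model are illustrative assumptions, not the identified parameters from the paper.

```python
# Discrete PID altitude control of a toy unit-mass quadrotor model.
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = None

    def update(self, error):
        self.integral += error * self.dt
        deriv = (0.0 if self.prev_error is None
                 else (error - self.prev_error) / self.dt)
        self.prev_error = error
        return self.kp*error + self.ki*self.integral + self.kd*deriv

dt = 0.01
z, vz = 0.0, 0.0                      # altitude [m], vertical speed [m/s]
pid = PID(kp=8.0, ki=2.0, kd=4.0, dt=dt)
for _ in range(2000):                 # simulate 20 s
    thrust = pid.update(1.0 - z)      # track a 1 m altitude setpoint
    az = thrust - 9.81                # unit mass, gravity pulls down
    vz += az * dt
    z += vz * dt
print(round(z, 2))                    # approaches the 1.0 m setpoint
```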
Abstract: Both minimum energy consumption and smoothness, which is quantified as a function of jerk, are generally needed in many dynamic systems, such as automobiles and pick-and-place robot manipulators that handle fragile equipment. Nevertheless, most researchers have concentrated solely on either minimum energy consumption or minimum-jerk trajectories. This paper proposes a simple yet interesting approach that combines the minimum-energy and indirect minimum-jerk criteria in designing a time-dependent system, yielding an alternative optimal solution. Extremal solutions for the cost functions of minimum energy, minimum jerk, and their combination are found using dynamic optimization methods together with numerical approximation. This allows us to simulate and compare, visually and statistically, the time histories of the states and inputs produced by the combined minimum energy and jerk design. The numerical solutions of the minimum direct-jerk and combined energy problems are exactly the same; the solutions of the minimum-energy problem are similar, especially in terms of their overall tendency.
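The combined criterion can be illustrated with a hedged sketch: a discretized trajectory is optimized for a weighted sum of squared acceleration (an energy proxy) and squared jerk under rest-to-rest boundary conditions; the weights, horizon, and finite-difference model are illustrative, not the paper's formulation.

```python
# Discretized minimum energy-plus-jerk trajectory via SciPy (SLSQP).
import numpy as np
from scipy.optimize import minimize

n, T = 50, 1.0
dt = T / (n - 1)

def cost(x, w_energy=1.0, w_jerk=0.1):
    acc = np.diff(x, 2) / dt**2       # second difference ~ acceleration
    jerk = np.diff(x, 3) / dt**3      # third difference ~ jerk
    return w_energy*np.sum(acc**2)*dt + w_jerk*np.sum(jerk**2)*dt

def rest_to_rest(x):                  # endpoints fixed, zero end velocity
    return [x[0], x[-1] - 1.0, x[1] - x[0], x[-1] - x[-2]]

x0 = np.linspace(0.0, 1.0, n)         # straight-line initial guess
res = minimize(cost, x0, constraints={'type': 'eq', 'fun': rest_to_rest})
print(res.success, res.x[[0, n//2, -1]])
```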
Abstract: User-based collaborative filtering (CF), one of the most prevalent and efficient recommendation techniques, provides personalized recommendations to users based on the opinions of other users. Although the CF technique has been successfully applied in various applications, it suffers from a serious sparsity problem. The cloud-model approach addresses the sparsity problem by constructing the user's global preference, represented by a cloud eigenvector. The user-based CF approach works well on dense datasets, while the cloud-model CF approach performs better when the dataset is sparse. In this paper, we present a hybrid approach that integrates the predictions of both the user-based CF and the cloud-model CF approaches. The experimental results show that the proposed hybrid approach can ameliorate the sparsity problem and provide improved prediction quality.
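A minimal sketch of the user-based CF component (cosine similarity, similarity-weighted averaging) follows; the cloud-model component and the hybrid combination are not shown, and the rating matrix is a toy example with 0 denoting an unrated item.

```python
# User-based CF prediction on a toy rating matrix (0 = unrated).
import numpy as np

R = np.array([[5, 3, 0, 1],
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [0, 1, 5, 4.]])

def predict(user, item):
    rated = R[:, item] > 0                  # users who rated this item
    sims = np.array([np.dot(R[user], R[v]) /
                     (np.linalg.norm(R[user])*np.linalg.norm(R[v]) + 1e-9)
                     for v in range(len(R))])
    sims[user] = 0.0                        # exclude the target user
    w = sims * rated
    return np.dot(w, R[:, item]) / (w.sum() + 1e-9)

print(round(predict(0, 2), 2))              # predict user 0 on item 2
```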
Abstract: In mobile environments, unspecified numbers of transactions arrive in continuous streams. To prove the correctness of their concurrent execution, a method of modelling an infinite number of transactions is needed. Standard database techniques model fixed
finite schedules of transactions. Lately, techniques based on temporal
logic have been proposed as suitable for modelling infinite schedules.
The drawback of these techniques is that proving the basic
serializability correctness condition is impractical, as encoding (the
absence of) conflict cyclicity within large sets of transactions results
in prohibitively large temporal logic formulae. In this paper, we show
that, under certain common assumptions on the graph structure of
data items accessed by the transactions, conflict cyclicity need only
be checked within all possible pairs of transactions. This results in
formulae of considerably reduced size in any temporal-logic-based
approach to proving serializability, and scales to arbitrary numbers
of transactions.
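The pairwise reduction can be sketched as follows: under the stated assumptions on the data-access graph, it suffices to check that no pair of transactions conflicts in both directions; the conflict relation below is toy data, not the paper's temporal-logic encoding.

```python
# Pairwise conflict-cycle check on a toy conflict relation.
from itertools import combinations

# conflicts[(a, b)] is True if some operation of transaction a precedes
# and conflicts with some operation of transaction b.
conflicts = {(1, 2): True, (2, 1): False,
             (1, 3): True, (3, 1): True}   # pair (1, 3) forms a 2-cycle

def pairwise_serializable(txns):
    for a, b in combinations(txns, 2):
        if conflicts.get((a, b)) and conflicts.get((b, a)):
            return False                   # conflict cycle within a pair
    return True

print(pairwise_serializable([1, 2, 3]))    # False, due to pair (1, 3)
```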
Abstract: This paper describes an automatic algorithm to restore
the shape of three-dimensional (3D) left ventricle (LV) models created
from magnetic resonance imaging (MRI) data using a geometry-driven
optimization approach. Our basic premise is to restore the LV shape
such that the LV epicardial surface is smooth after the restoration. A
geometrical measure known as the Minimum Principle Curvature (κ2)
is used to assess the smoothness of the LV. This measure is used to
construct the objective function of a two-step optimization process.
The objective of the optimization is to achieve a smooth epicardial
shape by iterative in-plane translation of the MRI slices.
Quantitatively, this yields a minimum sum of the magnitudes of κ2 where κ2 is negative. A limited-memory quasi-Newton algorithm,
L-BFGS-B, is used to solve the optimization problem. We tested our
algorithm on an in vitro theoretical LV model and 10 in vivo
patient-specific models which contain significant motion artifacts. The
results show that our method is able to automatically restore the shape
of LV models back to smoothness without altering the general shape of
the model. The magnitudes of in-plane translations are also consistent
with existing registration techniques and experimental findings.
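As a hedged illustration of the optimization machinery, the SciPy sketch below uses L-BFGS-B to choose per-slice in-plane shifts that minimize a stand-in smoothness objective (sum of squared second differences), not the paper's curvature-based measure; the data are synthetic.

```python
# L-BFGS-B over per-slice shifts, minimizing a smoothness proxy.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
observed = rng.normal(0.0, 0.5, 10)        # misaligned slice offsets

def roughness(shifts):
    aligned = observed + shifts
    return np.sum(np.diff(aligned, 2) ** 2)  # squared second differences

res = minimize(roughness, np.zeros(10), method='L-BFGS-B',
               bounds=[(-2.0, 2.0)] * 10)  # bound the translations
print(res.fun)                             # near-zero residual roughness
```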
Abstract: The aim of this research is to design a collaborative
framework that integrates risk analysis activities into the geospatial
database design (GDD) process. Risk analysis is rarely undertaken
iteratively as part of the present GDD methods in conformance to
requirement engineering (RE) guidelines and risk standards.
Accordingly, when risk analysis is performed during the GDD, some
foreseeable risks may be overlooked and not reach the output
specifications, especially when user intentions are not systematically collected. This may lead to ill-defined requirements and ultimately to
higher risks of geospatial data misuse. The adopted approach consists
of 1) reviewing risk analysis process within the scope of RE and
GDD, 2) analyzing the challenges of risk analysis within the context
of GDD, and 3) presenting the components of a risk-based
collaborative framework that improves the collection of the
intended/forbidden usages of the data and helps geo-IT experts to
discover implicit requirements and risks.
Abstract: In order to develop forest management strategies for tropical forest in Malaysia, surveying the forest resources and monitoring the forest area affected by logging activities are essential. Tremendous effort has been devoted to the classification of land cover related to forest resource management in this country, as it is a priority in all aspects of forest mapping using remote sensing and related technologies such as GIS. In fact, classification is a compulsory step in any remote sensing research. The main objective of this paper is therefore to assess the classification accuracy of a classified forest map derived from Landsat TM data using different numbers of reference data points (200 and 388). The comparison was made between an observation approach (200 reference points) and a combined interpretation and observation approach (388 reference points). Five land cover classes, namely primary forest, logged-over forest, water bodies, bare land, and agricultural crop/mixed horticulture, can be identified by their differences in spectral response. Results showed that the overall accuracy from 200 reference points was 83.5% (kappa value 0.7502459; kappa variance 0.002871), which is considered acceptable or good for optical data. However, when the 200 reference points were increased to 388 in the confusion matrix, the accuracy improved slightly from 83.5% to 89.17%, with the kappa statistic increasing from 0.7502459 to 0.8026135. The accuracy of this classification suggests that the strategy for selecting training areas, the interpretation approach, and the number of reference points used are important for obtaining better classification results.
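For reference, the overall accuracy and kappa statistic are computed from a confusion matrix as sketched below; the matrix is toy data, not the study's actual results.

```python
# Overall accuracy and Cohen's kappa from a confusion matrix.
import numpy as np

cm = np.array([[50,  3,  2],
               [ 4, 60,  1],
               [ 2,  2, 76.]])   # rows: reference classes, cols: mapped

n = cm.sum()
po = np.trace(cm) / n                        # observed agreement
pe = np.sum(cm.sum(0) * cm.sum(1)) / n**2    # chance agreement
kappa = (po - pe) / (1 - pe)
print(f"overall accuracy = {po:.4f}, kappa = {kappa:.4f}")
```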
Abstract: Many approaches have been proposed for solving Sudoku puzzles. One of them is to model the puzzles as block world problems. Three models of Sudoku solvers have been based on this approach, each expressing the Sudoku solver as a parameterized multi-agent system. In this work, we propose a new model that improves on the existing models. This paper presents the development of a Sudoku solver that implements all the proposed models. Experiments have been conducted to determine the performance of each model.
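For orientation only, a plain backtracking Sudoku solver is sketched below as a baseline; it is not one of the multi-agent block-world models discussed in the paper.

```python
# Baseline backtracking Sudoku solver; board is a 9x9 list of lists
# with 0 marking empty cells.
def valid(board, r, c, v):
    if v in board[r] or v in (row[c] for row in board):
        return False
    br, bc = 3 * (r // 3), 3 * (c // 3)
    return all(board[i][j] != v
               for i in range(br, br + 3) for j in range(bc, bc + 3))

def solve(board):
    for r in range(9):
        for c in range(9):
            if board[r][c] == 0:
                for v in range(1, 10):
                    if valid(board, r, c, v):
                        board[r][c] = v
                        if solve(board):
                            return True
                        board[r][c] = 0
                return False           # dead end: backtrack
    return True                        # no empty cell left: solved

empty = [[0] * 9 for _ in range(9)]
solve(empty)
print(empty[0])                        # first row of one valid grid
```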
Abstract: In this paper, we propose a fully-utilized, block-based 2D DWT (discrete wavelet transform) architecture, which consists of four 1D DWT filters with a two-channel QMF lattice structure. The proposed architecture requires about 2MN-3N registers to save the intermediate results for higher-level decomposition, where M and N stand for the filter length and the row width of the image, respectively. Furthermore, the proposed 2D DWT processes the horizontal and vertical directions simultaneously without an idle period, so that it computes the DWT of an N×N image in a period of N^2(1-2^(-2J))/3, where J is the number of decomposition levels. Compared to existing approaches, the proposed architecture achieves 100% hardware utilization and high throughput rates. To mitigate the long critical path delay due to the cascaded lattices, the pipeline technique can be applied with four stages while retaining 100% hardware utilization. The proposed architecture can be applied to real-time video signal processing.
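As a software illustration of the row/column decomposition that the architecture performs in hardware, the sketch below computes one separable 2D DWT level with the Haar filter; it does not model the QMF lattice, the register organization, or the pipelining.

```python
# One separable 2D DWT level (Haar) on a toy image: rows, then columns.
import numpy as np

def haar_1d(x):
    lo = (x[0::2] + x[1::2]) / np.sqrt(2)   # lowpass (average)
    hi = (x[0::2] - x[1::2]) / np.sqrt(2)   # highpass (detail)
    return lo, hi

def dwt2_level(img):
    L, H = zip(*[haar_1d(row) for row in img])        # transform rows
    L, H = np.array(L), np.array(H)
    LL, LH = zip(*[haar_1d(col) for col in L.T])      # transform columns
    HL, HH = zip(*[haar_1d(col) for col in H.T])
    return (np.array(LL).T, np.array(LH).T,
            np.array(HL).T, np.array(HH).T)

img = np.arange(64.0).reshape(8, 8)
LL, LH, HL, HH = dwt2_level(img)
print(LL.shape)   # (4, 4) subband, one quarter of the input
```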
Abstract: The sensitivity of orifice plate metering to disturbed
flow (either asymmetric or swirling) is a subject of great concern to
flow meter users and manufacturers. The distortions caused by pipe
fittings and pipe installations upstream of the orifice plate are major
sources of this type of non-standard flows. These distortions can alter
the accuracy of metering to an unacceptable degree. In this work, a
multi-scale object known as metal foam has been used to generate a
predetermined turbulent flow upstream of the orifice plate. The
experimental results showed that the combination of an orifice plate
and metal foam flow conditioner is broadly insensitive to upstream
disturbances. This metal foam demonstrated a good performance in
terms of removing swirl and producing a repeatable flow profile
within a short distance downstream of the device. The results of using a combination of a metal foam flow conditioner and an orifice plate under non-standard flow conditions, including swirling and asymmetric flow, show that this package can preserve metering accuracy to the level required by the standards.
Abstract: This paper applies an anthropological approach to illuminate the dynamic cultural geography of Kazakhstani Korean ethnicity, focusing on its turning point, the historic “1988 Seoul Olympic Games.” The Korean ethnic group was readily regarded by outsiders as a harmonious and homogeneous community, but deep-seated conflicts and hostilities existed within the ethnic group. The majority's oppositional dichotomy of superiority and inferiority toward the minority was continuously reorganized and reinforced by differences in experience, memory, and sentiment. However, this chronic exclusive boundary collapsed following the patriotism ignited by the Olympics held in their mother country. This paper explores the fluidity of the subject through the formation of a boundary in which constructed cultural differences are continuously essentialized and reproduced, and through the dissolution of the cultural barrier in certain contexts.
Abstract: In this paper, we improve the quasilinearization method (QLM) with barycentric Lagrange interpolation, chosen for its numerical stability and computation speed, to achieve a stable semi-analytical solution. We then apply the improved method to the fin problem, a nonlinear equation that occurs in heat transfer. In the quasilinearization approach, the nonlinear differential equation is treated by approximating the nonlinear terms by a sequence of linear expressions. The modified QLM is iterative but not perturbative, and gives stable semi-analytical solutions to nonlinear problems without depending on the existence of a smallness parameter. Comparison with numerical solutions shows that the present method is applicable.
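A minimal sketch of the barycentric Lagrange interpolation ingredient (on Chebyshev-Lobatto nodes, via SciPy) follows; the quasilinearization iteration and the fin equation itself are not shown, and the sample function is arbitrary.

```python
# Barycentric Lagrange interpolation on Chebyshev-Lobatto nodes.
import numpy as np
from scipy.interpolate import BarycentricInterpolator

n = 16
k = np.arange(n + 1)
x = np.cos(np.pi * k / n)            # Chebyshev-Lobatto nodes on [-1, 1]
f = np.exp(x) * np.sin(2 * x)        # arbitrary smooth sample function

p = BarycentricInterpolator(x, f)
xt = np.linspace(-1.0, 1.0, 5)
err = np.max(np.abs(p(xt) - np.exp(xt) * np.sin(2 * xt)))
print(err)                            # small interpolation error
```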
Abstract: This research proposes a Preemptive Possibilistic Linear Programming (PPLP) approach for solving the multiobjective Aggregate Production Planning (APP) problem with interval demand and imprecise unit price and related operating costs. The proposed approach attempts to maximize profit and minimize changes in the workforce. It transforms the total profit objective, which contains imprecise information, into three crisp objective functions: maximizing the most possible value of the profit, minimizing the risk of obtaining a lower profit, and maximizing the opportunity of obtaining a higher profit. The workforce-level change objective is converted likewise. The problem is then solved according to the objective priorities, which is easier than solving the multiobjective problem simultaneously as is done in existing approaches. The possible range of the interval demand is also used to increase the flexibility of obtaining a better production plan. A practical application to an electronics company illustrates the effectiveness of the proposed model.
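The preemptive (priority-ordered) idea can be sketched with SciPy's linprog: solve the highest-priority objective first, then fix its attained value as a constraint before optimizing the next objective; the toy objectives and constraints below are illustrative stand-ins, not the APP model.

```python
# Lexicographic (preemptive) LP: optimize priorities one at a time.
import numpy as np
from scipy.optimize import linprog

A_ub = np.array([[1.0, 1.0]])        # toy capacity constraint x + y <= 10
b_ub = np.array([10.0])
bounds = [(0, None), (0, None)]

c1 = np.array([-3.0, -2.0])          # priority 1: maximize 3x + 2y
r1 = linprog(c1, A_ub=A_ub, b_ub=b_ub, bounds=bounds)

# fix the priority-1 objective at its optimum, then optimize priority 2
A_eq, b_eq = c1.reshape(1, -1), np.array([r1.fun])
c2 = np.array([1.0, 0.0])            # priority 2: minimize x (toy proxy)
r2 = linprog(c2, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print(r1.x, r2.x)
```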