Abstract: This paper deals with e-government issues at several
levels. Initially we look at the concept of e-government itself in order
to give it a sound framework. Then we examine e-government issues at
three levels: first we analyse them at the global level, second at the
level of transition economies, and finally we take a closer look at
developments in Croatia. The analysis covers the actual progress made
in selected transition economies relative to Euro-area averages, along
with the potential of e-government in the demanding period ahead.
Abstract: In this paper, we develop an accurate and efficient Haar wavelet method for the well-known FitzHugh-Nagumo equation. The proposed scheme can be applied to a wide class of nonlinear reaction-diffusion equations. The power of this simple method is confirmed: the use of Haar wavelets is found to be accurate, fast, flexible, convenient, and computationally attractive, with small computational cost.
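The Haar basis at the heart of such schemes is easy to construct. As a minimal illustration (not the authors' full collocation scheme), the sketch below builds the normalized Haar transform matrix recursively and checks that it is orthonormal; the recursion and row normalization follow the standard Haar construction.

```python
def haar_matrix(n):
    """Return the n x n orthonormal Haar matrix (n must be a power of 2).

    Built recursively: approximation rows duplicate the parent entries,
    detail rows are disjoint (+1, -1) pairs; every row is then scaled to
    unit Euclidean norm so that H * H^T = I.
    """
    if n == 1:
        return [[1.0]]
    half = haar_matrix(n // 2)
    rows = []
    # Approximation (low-pass) rows: repeat each parent entry twice.
    for r in half:
        rows.append([v for x in r for v in (x, x)])
    # Detail (high-pass) rows: +1/-1 pairs on disjoint supports.
    for i in range(n // 2):
        r = [0.0] * n
        r[2 * i], r[2 * i + 1] = 1.0, -1.0
        rows.append(r)
    # Normalize each row to unit length.
    return [[v / sum(x * x for x in r) ** 0.5 for v in r] for r in rows]

H4 = haar_matrix(4)
```

Expanding a sampled solution in this basis and collocating the PDE at dyadic points is the essence of the Haar wavelet method the abstract describes.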
Abstract: Explosive forming is one of the unconventional
techniques in which, most commonly, water is used as the
pressure-transmission medium. One of the newest methods in
explosive forming is gas detonation forming, which uses a normal
shock wave derived from a gas detonation to form sheet metals. For this
purpose a detonation is developed from the reaction of H2+O2
mixture in a long cylindrical detonation tube. The detonation wave
goes through the detonation tube and acts as a blast load on the steel
blank and forms it. Experimental results are compared with a finite
element model; and the comparison of the experimental and
numerical results obtained from strain, thickness variation and
deformed geometry is carried out. Numerical and experimental
results showed approximately 75-90 % agreement in the formability of
the desired shape. The optimum gas mixture was obtained by mixing
68% H2 with 32% O2.
Abstract: Recently, data mining has been applied to scientific bibliographic databases to analyze the pathways of knowledge or the core scientific relevance of a Nobel laureate or a country. This specific case of data mining has been named citation mining; it is the integration of citation bibliometrics and text mining. In this paper we present an improved web implementation of statistical-physics algorithms to perform the text-mining component of citation mining. In particular, we use an entropy-like distance between compressed texts as an indicator of the similarity between them. Finally, we have included the recently proposed h index to characterize scientific production. We have used this web implementation to identify users, applications and impact of the Mexican scientific institutions located in the State of Morelos.
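The two ingredients the abstract combines are both compact enough to sketch. Below is a hedged Python illustration (our own minimal reading, not the authors' web implementation): a normalized compression distance (NCD) between texts using zlib as the compressor, and the h index computed from a list of citation counts.

```python
import zlib

def csize(text):
    """Compressed size of a string, used as a computable entropy proxy."""
    return len(zlib.compress(text.encode("utf-8")))

def ncd(x, y):
    """Normalized compression distance: small for similar texts."""
    cx, cy, cxy = csize(x), csize(y), csize(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    cites = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(cites, start=1):
        if c >= i:
            h = i
    return h

# Illustrative sample texts (not from any real corpus).
text_a = "citation mining integrates bibliometrics and text mining " * 20
text_b = "explosive forming of sheet metal with detonation waves " * 20
```

A similar text pair compresses well jointly, so its NCD is lower than that of an unrelated pair, which is exactly the similarity signal the abstract exploits.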
Abstract: User-based Collaborative filtering (CF), one of the
most prevalent and efficient recommendation techniques, provides
personalized recommendations to users based on the opinions of other
users. Although the CF technique has been successfully applied in
various applications, it suffers from serious sparsity problems. The
cloud-model approach addresses the sparsity problems by
constructing the user's global preference, represented by a cloud
eigenvector. The user-based CF approach works well with dense
datasets, while the cloud-model CF approach performs better
when the dataset is sparse. In this paper, we present a
hybrid approach that integrates the predictions from both the
user-based CF and the cloud-model CF approaches. The experimental
results show that the proposed hybrid approach can ameliorate the
sparsity problem and provide an improved prediction quality.
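A minimal sketch of the hybrid idea, with illustrative ratings and a simple item-mean baseline standing in for the cloud-model component (the cloud eigenvector construction itself is beyond a snippet): each final prediction is a weighted blend of the user-based CF estimate and the baseline.

```python
from math import sqrt

def cosine_sim(ra, rb):
    """Cosine similarity over items both users rated."""
    common = set(ra) & set(rb)
    if not common:
        return 0.0
    dot = sum(ra[i] * rb[i] for i in common)
    na = sqrt(sum(ra[i] ** 2 for i in common))
    nb = sqrt(sum(rb[i] ** 2 for i in common))
    return dot / (na * nb) if na and nb else 0.0

def user_based_predict(ratings, user, item):
    """Similarity-weighted average of neighbours' ratings on the item."""
    num = den = 0.0
    for other, r in ratings.items():
        if other != user and item in r:
            s = cosine_sim(ratings[user], r)
            num += s * r[item]
            den += s
    return num / den if den else None

def item_mean(ratings, item):
    vals = [r[item] for r in ratings.values() if item in r]
    return sum(vals) / len(vals) if vals else None

def hybrid_predict(ratings, user, item, lam=0.6):
    """Blend: lam * user-based CF + (1 - lam) * baseline."""
    p_cf = user_based_predict(ratings, user, item)
    p_base = item_mean(ratings, item)
    if p_cf is None:            # sparsity fallback: no usable neighbour
        return p_base
    return lam * p_cf + (1 - lam) * p_base

# Tiny illustrative rating matrix (users x items, 1-5 scale).
ratings = {"alice": {"a": 5, "b": 3, "c": 4},
           "bob":   {"a": 4, "b": 2, "c": 4, "d": 5},
           "carol": {"a": 1, "b": 5, "d": 2}}
```

When the neighbourhood is empty (the sparse case), the blend degrades gracefully to the baseline, which mirrors the motivation for the hybrid.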
Abstract: In order to develop forest management strategies in
tropical forests in Malaysia, surveying the forest resources and
monitoring the forest areas affected by logging activities are essential.
Tremendous effort has been devoted to the classification of land
cover related to forest resource management in this country, as it is a
priority in all aspects of forest mapping using remote sensing and
related technology such as GIS. In fact, classification is a
compulsory step in any remote sensing research. Therefore, the main
objective of this paper is to assess the classification accuracy of a
classified forest map derived from Landsat TM data using different
numbers of reference data (200 and 388 reference points). This
comparison was made through an observation approach (200 reference
points) and a combined interpretation-and-observation approach (388
reference points). Five land cover
classes, namely primary forest, logged-over forest, water bodies, bare
land and agricultural crop/mixed horticulture, can be identified by
the differences in spectral wavelength. Results showed that the overall
accuracy from 200 reference points was 83.5 % (kappa value
0.7502459; kappa variance 0.002871), which is considered
acceptable or good for optical data. However, when the 200 reference
points were increased to 388 in the confusion matrix, the accuracy
improved slightly from 83.5% to 89.17%, and the kappa statistic
increased from 0.7502459 to 0.8026135. The accuracy
in this classification suggests that the strategy for the selection of
training areas, the interpretation approaches and the number of
reference data used were important for achieving a better
classification result.
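The accuracy figures quoted above come from a confusion matrix. For readers who want to reproduce such numbers, here is a short self-contained sketch (the matrix is illustrative, not the paper's data) of overall accuracy and Cohen's kappa.

```python
def overall_accuracy(cm):
    """Fraction of reference points on the matrix diagonal."""
    n = sum(sum(row) for row in cm)
    return sum(cm[i][i] for i in range(len(cm))) / n

def cohens_kappa(cm):
    """Agreement corrected for chance: (po - pe) / (1 - pe)."""
    n = sum(sum(row) for row in cm)
    po = overall_accuracy(cm)
    rows = [sum(row) for row in cm]
    cols = [sum(cm[i][j] for i in range(len(cm))) for j in range(len(cm))]
    pe = sum(r * c for r, c in zip(rows, cols)) / (n * n)
    return (po - pe) / (1 - pe)

# Illustrative 2-class confusion matrix (rows = reference, cols = mapped).
cm = [[20, 5],
      [10, 15]]
```

With more reference points the marginals stabilise, which is why kappa as well as overall accuracy improved when 200 points were extended to 388.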
Abstract: Statistical distributions are used to model various
types of observed data. Although these distributions are
mostly uni-modal, it is quite common to see multiple modes in the
observed distribution of the underlying variables, which makes
precise modeling unrealistic. The observed data do not exhibit
smoothness not necessarily due to randomness, but could also be due
to non-randomness resulting in zigzag curves, oscillations, humps
etc. The present paper argues that trigonometric functions, which
have not been used in the probability functions of distributions so far,
have the potential to take care of this if incorporated in the
distribution appropriately. A simple distribution involving
trigonometric functions (named the Sinoform distribution) is illustrated in the
paper with a data set. The paper demonstrates that trigonometric
functions have the characteristics to make statistical distributions
exotic: it is possible to have multiple modes, oscillations and zigzag
curves in the density, which could be suitable for explaining the
underlying nature of a selected data set.
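The paper's exact Sinoform density is not reproduced here, but the mechanism is easy to illustrate: modulating a uniform density with a sine term yields a proper multi-modal density. The sketch below uses the assumed form f(x) = (1 + a sin(kx)) / 2π on [0, 2π], which integrates to one for any integer k and |a| ≤ 1, and has k interior modes.

```python
import math

def sinoform_pdf(x, a=0.8, k=3):
    """Illustrative trigonometric density on [0, 2*pi]."""
    if not 0.0 <= x <= 2.0 * math.pi:
        return 0.0
    return (1.0 + a * math.sin(k * x)) / (2.0 * math.pi)

def trapezoid(f, lo, hi, n=20000):
    """Composite trapezoid rule for checking normalization."""
    h = (hi - lo) / n
    s = 0.5 * (f(lo) + f(hi)) + sum(f(lo + i * h) for i in range(1, n))
    return s * h

def count_modes(f, lo, hi, n=2000):
    """Count interior local maxima on a uniform grid."""
    xs = [lo + i * (hi - lo) / n for i in range(n + 1)]
    ys = [f(x) for x in xs]
    return sum(1 for i in range(1, n) if ys[i] > ys[i - 1] and ys[i] > ys[i + 1])
```

The sine term contributes zero net area over a full period, so normalization is preserved while the density oscillates, which is the key property the abstract advocates.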
Abstract: Model Predictive Control (MPC) is increasingly being
proposed for real-time applications and embedded systems. However,
compared to the PID controller, implementation of MPC in
miniaturized devices like Field Programmable Gate Arrays (FPGAs)
and microcontrollers has historically been limited due to its
implementation complexity and computation-time requirements.
At the same time, such embedded technologies have become an
enabler for future manufacturing enterprises as well as a transformer
of organizations and markets. Recently, advances in microelectronics
and software allow such technique to be implemented in embedded
systems. In this work, we take advantage of these recent advances
to deploy one of the most studied and widely applied control
techniques in industrial engineering. Specifically, in this paper
we propose an efficient framework for the implementation
of Generalized Predictive Control (GPC) on the STM32
microcontroller. The STM32 Keil starter kit, based on a JTAG interface
and the STM32 board was used to implement the proposed GPC
firmware. Besides the GPC, a PID anti-windup algorithm was
also implemented using Keil development tools designed for ARM
processor-based microcontroller devices, working with the C
language. A performance comparison study was carried out between
both firmwares, showing good execution speed and low computational
burden. These results encourage the development of simple predictive
algorithms to be programmed on industrial standard hardware.
The main features of the proposed framework are illustrated
through two examples and compared with the anti-windup PID
controller.
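The anti-windup PID used as the comparison baseline can be sketched compactly. The following illustrative Python snippet (not the STM32 firmware; the plant model, gains and actuator limits are assumed values) implements a PID with conditional-integration anti-windup driving a first-order plant.

```python
def pid_antiwindup_step(state, sp, y, dt, kp=2.0, ki=1.0, kd=0.05,
                        umin=0.0, umax=2.0):
    """One controller update with conditional-integration anti-windup."""
    e = sp - y
    d = (e - state["e_prev"]) / dt           # derivative of the error
    u_raw = kp * e + ki * state["integ"] + kd * d
    u = min(max(u_raw, umin), umax)          # actuator saturation
    # Anti-windup: only accumulate the integral when the output is not
    # saturated, or when the error drives the output back into range.
    if u_raw == u or (u_raw > umax and e < 0) or (u_raw < umin and e > 0):
        state["integ"] += e * dt
    state["e_prev"] = e
    return u

def simulate(t_end=20.0, dt=0.01, tau=1.0, sp=1.0):
    """Closed loop on an illustrative first-order plant tau*y' = -y + u."""
    y, state = 0.0, {"integ": 0.0, "e_prev": 0.0}
    for _ in range(int(t_end / dt)):
        u = pid_antiwindup_step(state, sp, y, dt)
        y += dt * (-y + u) / tau
    return y

y_final = simulate()
```

Freezing the integrator while the actuator is saturated is what keeps the overshoot small after the initial saturation phase, which is the behaviour such a firmware comparison would measure.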
Abstract: The application of a simple microcontroller to a
fuzzy logic controller with three input variables and a single output,
with built-in Proportional-Integral-Derivative (PID) response control,
has been tested for an automatic voltage regulator. The
fuzzifiers are based on fixed range of the variables of output voltage.
The control output is used to control the wiper motor of the auto
transformer to adjust the voltage, using fuzzy logic principles, so that
the voltage is stabilized. In this report, the author will demonstrate
how fuzzy logic might provide elegant and efficient solutions in the
design of multivariable control based on experimental results rather
than on mathematical models.
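The fuzzy machinery involved is standard and fits in a few lines. As an illustrative single-variable slice of such a controller (the membership ranges and output weights below are assumptions, not the report's tuned values), triangular fuzzifiers on the voltage error drive a Sugeno-style weighted-average motor command:

```python
def tri(x, a, b, c):
    """Triangular membership with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def motor_command(error):
    """Fuzzify the voltage error, then defuzzify to a wiper command.

    Rules: NEG error -> lower tap (-1), ZERO -> hold (0), POS -> raise (+1).
    """
    mu = {"NEG": tri(error, -10.0, -5.0, 0.0),
          "ZERO": tri(error, -5.0, 0.0, 5.0),
          "POS": tri(error, 0.0, 5.0, 10.0)}
    w = {"NEG": -1.0, "ZERO": 0.0, "POS": 1.0}
    den = sum(mu.values())
    return sum(mu[k] * w[k] for k in mu) / den if den else 0.0
```

Between rule peaks the command interpolates smoothly, which is why the fuzzy surface can mimic a tuned proportional response without an explicit plant model.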
Abstract: In this work, we developed the concept of
supercompression, i.e., compression above the compression standard
used. In this context, the two compression ratios are multiplied. In fact,
supercompression is based on super-resolution. That is to say,
supercompression is a data compression technique that superposes
spatial image compression on top of bit-per-pixel compression to
achieve very high compression ratios. If the compression ratio is very
high, then we use a convolutive mask inside the decoder that restores
the edges, eliminating the blur. Finally, both the encoder and the
complete decoder are implemented on General-Purpose computation
on Graphics Processing Units (GPGPU) cards. Specifically, the
mentioned mask is coded inside the texture memory of a GPGPU.
Abstract: Recently, various services such as television and the
Internet have come to be received through a variety of terminals.
It would be more convenient if we could receive these services
through a cellular phone terminal while out and then continue to
receive the same services through a large-screen digital television
after coming home. At present, however, it is necessary to go
through the same authentication processing again when using the TV
at home. In this study, we have developed an
authentication method that enables users to switch terminals in
environments in which the user receives service from a server through
a terminal. Specifically, the method simplifies the authentication of
the server side when switching from one terminal to another terminal
by using previously authenticated information.
Abstract: In this paper a study on the vibration of thin
cylindrical shells with ring supports and made of functionally graded
materials (FGMs) composed of stainless steel and nickel is presented.
Material properties vary along the thickness direction of the shell
according to a volume-fraction power law. The cylindrical shells have
ring supports, arbitrarily placed along the shell, which impose
zero lateral deflection. The study is carried out based on third-order
shear deformation shell theory (TSDT). The analysis uses
Hamilton's principle. The governing equations of motion of
FGM cylindrical shells are derived based on shear deformation
theory. Results are presented on the frequency characteristics,
influence of ring support position and the influence of boundary
conditions. The present analysis is validated by comparing results
with those available in the literature.
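The volume-fraction power law mentioned above has a compact form. A hedged sketch (the constituent values below are illustrative placeholders, not the paper's stainless-steel/nickel data): with thickness coordinate z in [-h/2, h/2], V(z) = (z/h + 1/2)^p, and an effective property is the rule-of-mixtures blend of the two constituents.

```python
def volume_fraction(z, h, p):
    """Power-law volume fraction across the shell thickness."""
    assert -h / 2 <= z <= h / 2 and p >= 0
    return (z / h + 0.5) ** p

def effective_property(z, h, p, p_top, p_bottom):
    """Rule of mixtures: blends the two constituent properties."""
    v = volume_fraction(z, h, p)
    return p_top * v + p_bottom * (1.0 - v)

# Illustrative values: Young's moduli (GPa) of two metals, h in metres.
E_top, E_bottom, h = 208.0, 205.0, 0.01
```

At the two surfaces the property reduces to a pure constituent, and the exponent p controls how quickly the graded region transitions between them.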
Abstract: Flow movement in unsaturated soil can be expressed
by a partial differential equation named the Richards equation. The
objective of this study is to find an appropriate implicit
numerical solution for the head-based Richards equation. Some
well-known finite difference schemes (fully implicit, Crank-Nicolson
and Runge-Kutta) have been utilized in this study. In addition, the
effects of different approximations of moisture capacity function,
convergence criteria and time stepping methods were evaluated. Two
different infiltration problems were solved to investigate the
performance of the different schemes. These problems involve vertical
water flow in wet and very dry soils. The numerical solutions of
two problems were compared using four evaluation criteria and the
results of the comparisons showed that the fully implicit scheme is
better than the other schemes. In addition, using the standard
chord-slope method to approximate the moisture capacity function, an
automatic time-stepping method, and the difference between two
successive iterations as the convergence criterion in the fully
implicit scheme can lead to better and more reliable results for
simulating fluid movement in different unsaturated soils.
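The fully implicit scheme favoured by the study reduces, at each time step, to a tridiagonal linear solve. As a hedged illustration on a linearized, constant-diffusivity form of the equation rather than the full nonlinear head-based Richards equation, the sketch below advances θ_t = D θ_xx with backward Euler and the Thomas algorithm:

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system; a is sub-, b main, c super-diagonal."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def implicit_step(u, r, u_left, u_right):
    """One backward-Euler step for theta_t = D*theta_xx, r = D*dt/dx**2.

    u holds the interior nodes; u_left/u_right are Dirichlet boundaries.
    """
    n = len(u)
    a, b, c = [-r] * n, [1.0 + 2.0 * r] * n, [-r] * n
    d = list(u)
    d[0] += r * u_left
    d[-1] += r * u_right
    return thomas(a, b, c, d)

# Infiltration-like demo: saturated top (1.0), dry bottom (0.0).
u = [0.0] * 9                 # interior nodes on a unit column, dx = 0.1
for _ in range(300):          # r = D*dt/dx**2 with D = 1, dt = 0.1
    u = implicit_step(u, 10.0, 1.0, 0.0)
```

Unconditional stability is what lets the implicit scheme take the large automatic time steps the study recommends; here r = 10 would make any explicit scheme blow up.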
Abstract: Competitive learning is an adaptive process in
which the neurons in a neural network gradually become sensitive to
different input pattern clusters. The basic idea behind Kohonen's
Self-Organizing Feature Maps (SOFM) is competitive learning.
SOFM can generate mappings from high-dimensional signal spaces
to lower-dimensional topological structures. The main features of such
mappings are topology preservation, feature mapping and approximation
of the probability distribution of input patterns. To overcome
some limitations of SOFM, e.g., a fixed number of neural units and a
topology of fixed dimensionality, Growing Self-Organizing Neural
Network (GSONN) can be used. GSONN can change its topological
structure during learning. It grows by learning and shrinks by
forgetting. To speed up the training and convergence, a new variant
of GSONN, twin growing cell structures (TGCS) is presented here.
This paper first gives an introduction to competitive learning, SOFM
and its variants. Then, we discuss some GSONNs with fixed
dimensionality, including growing cell structures, their variants
and the author's model, TGCS. The paper ends with a comparison of
test results and conclusions.
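The competitive-learning update at the core of SOFM, inherited by the growing variants, fits in a few lines. A minimal 1-D map sketch with hand-made two-cluster data (illustrative only, not the paper's TGCS):

```python
import math

def train_sofm(data, n_units=4, epochs=200):
    """Tiny 1-D Kohonen map on 2-D points.

    Each epoch: find the best-matching unit (BMU) for every input and
    pull the BMU and its grid neighbours toward the input; the learning
    rate and neighbourhood width both decay over the epochs.
    """
    w = [[1.0 + i, 1.0 + i] for i in range(n_units)]   # spread-out init
    for t in range(epochs):
        lr = 0.5 * (1.0 - t / epochs) + 0.01
        sigma = 2.0 * (1.0 - t / epochs) + 0.1
        for x in data:
            bmu = min(range(n_units),
                      key=lambda i: (w[i][0] - x[0]) ** 2 + (w[i][1] - x[1]) ** 2)
            for i in range(n_units):
                h = math.exp(-((i - bmu) ** 2) / (2.0 * sigma ** 2))
                w[i][0] += lr * h * (x[0] - w[i][0])
                w[i][1] += lr * h * (x[1] - w[i][1])
    return w

clusters = [(0.0, 0.0), (0.2, 0.1), (-0.1, 0.2),   # cluster A
            (5.0, 5.0), (4.9, 5.1), (5.1, 4.8)]    # cluster B
weights = train_sofm(clusters)
```

A growing network such as GSONN starts from very few units and inserts new ones where quantization error accumulates, instead of fixing n_units in advance as this sketch does.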
Abstract: Image enhancement is the most important and challenging preprocessing step for almost all applications of image processing. To date, various methods such as the median filter and the α-trimmed mean filter have been suggested. It has been proved that the α-trimmed mean filter is a modification of the median and mean filters. On the other hand, ε-filters have shown excellent performance in suppressing noise; in spite of their simplicity, they achieve good results. However, the conventional ε-filter is based on a moving average. In this paper, we suggest a new ε-filter that utilizes the α-trimmed mean. We argue that this new method gives better outcomes than previous ones, and the experimental results confirm this claim.
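One plausible reading of the proposed filter, sketched in plain Python on a 1-D signal (the window size, ε and α below are illustrative choices): clip each neighbour's deviation from the current sample to ±ε, as in the conventional ε-filter, but replace the moving average of the clipped deviations with an α-trimmed mean.

```python
def alpha_trimmed_mean(values, alpha):
    """Mean after dropping floor(alpha*len) smallest and largest values."""
    v = sorted(values)
    k = int(alpha * len(v))
    v = v[k:len(v) - k] if k else v
    return sum(v) / len(v)

def eps_alpha_filter(x, radius=2, eps=0.5, alpha=0.2):
    """Edge-preserving smoother: epsilon-clipped deviations, trimmed mean."""
    y = []
    n = len(x)
    for i in range(n):
        devs = []
        for j in range(i - radius, i + radius + 1):
            jc = min(max(j, 0), n - 1)            # clamp at the borders
            d = x[jc] - x[i]
            devs.append(min(max(d, -eps), eps))   # epsilon clipping
        y.append(x[i] + alpha_trimmed_mean(devs, alpha))
    return y
```

Because the correction is a trimmed mean of values already clipped into [-ε, ε], the filter can never move a sample by more than ε, which is what preserves large edges while small fluctuations are averaged out.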
Abstract: Face recognition is a technique to automatically
identify or verify individuals. It receives great attention in
identification, authentication, security and many more applications.
Diverse methods have been proposed for this purpose, and many
comparative studies have been performed; however, researchers have
not reached a unified conclusion. In this paper, we report an extensive
quantitative accuracy analysis of the four most widely used face
recognition algorithms: Principal Component Analysis (PCA),
Independent Component Analysis (ICA), Linear Discriminant
Analysis (LDA) and Support Vector Machine (SVM) using AT&T,
Sheffield and Bangladeshi people face databases under diverse
situations such as illumination, alignment and pose variations.
Abstract: We constructed a method of phase unwrapping for a typical wave-front by utilizing the maximizer of the posterior marginal (MPM) estimate corresponding to equilibrium statistical mechanics of the three-state Ising model on a square lattice on the basis of an analogy between statistical mechanics and Bayesian inference. We investigated the static properties of an MPM estimate from a phase diagram using Monte Carlo simulation for a typical wave-front with synthetic aperture radar (SAR) interferometry. The simulations clarified that the surface-consistency conditions were useful for extending the phase where the MPM estimate was successful in phase unwrapping with a high degree of accuracy and that introducing prior information into the MPM estimate also made it possible to extend the phase under the constraint of the surface-consistency conditions with a high degree of accuracy. We also found that the MPM estimate could be used to reconstruct the original wave-fronts more smoothly, if we appropriately tuned hyper-parameters corresponding to temperature to utilize fluctuations around the MAP solution. Also, from the viewpoint of statistical mechanics of the Q-Ising model, we found that the MPM estimate was regarded as a method for searching the ground state by utilizing thermal fluctuations under the constraint of the surface-consistency condition.
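The full MPM/Ising machinery is beyond a snippet, but the elementary operation it generalizes, restoring the 2π multiples lost by wrapping, is easy to show in one dimension. A minimal Itoh-style sketch (illustrative only; SAR interferograms are two-dimensional and noisy, which is precisely why the statistical-mechanical treatment is needed):

```python
import math

def wrap(phi):
    """Wrap a phase value into [-pi, pi)."""
    return (phi + math.pi) % (2.0 * math.pi) - math.pi

def unwrap_1d(wrapped):
    """Itoh unwrapping: exact when true increments stay below pi.

    Re-wrap each successive difference and accumulate it, so the 2*pi
    jumps introduced by wrapping are removed.
    """
    out = [wrapped[0]]
    for i in range(1, len(wrapped)):
        out.append(out[-1] + wrap(wrapped[i] - wrapped[i - 1]))
    return out
```

When noise pushes local differences past π this simple rule fails, and consistency constraints plus prior information, as in the MPM estimate above, are needed to pick the correct 2π multiples.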
Abstract: In this work, the bending fatigue life of notched
specimens with various notch geometries and dimensions is
investigated by experiment and by the Manson-Coffin theoretical
method. In this theoretical method, the fatigue life of notched
specimens is calculated using the fatigue life obtained from
experiments on plain (un-notched) specimens. Three notch geometries,
namely U-shaped, V-shaped and C-shaped notches, are considered in this
investigation. The experiments are conducted on a rotary bending
Moore machine. The specimens are made of a low carbon steel alloy,
which has wide application in industry. The stress-life curves are
obtained for all notched specimens by experiment. The results indicate
that the Manson-Coffin analytical method cannot adequately predict
the fatigue life of notched specimens. However, it seems that the
difference between the experiments and the Manson-Coffin predictions
can be compensated by a proportional factor.
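For reference, the Manson-Coffin (strain-life) relation underlying the comparison is Δε/2 = (σ'f/E)(2N)^b + ε'f(2N)^c. The sketch below uses illustrative material constants (assumed values typical of a low-carbon steel, not the paper's measured data) and inverts the monotone curve for life by bisection:

```python
PROPS = {"sigma_f": 1000.0,  # fatigue strength coefficient, MPa (assumed)
         "E": 200000.0,      # Young's modulus, MPa
         "b": -0.09,         # fatigue strength exponent (assumed)
         "eps_f": 0.26,      # fatigue ductility coefficient (assumed)
         "c": -0.53}         # fatigue ductility exponent (assumed)

def strain_amplitude(n, p=PROPS):
    """Manson-Coffin: elastic plus plastic strain amplitude at life n."""
    rev = 2.0 * n                        # reversals to failure
    return (p["sigma_f"] / p["E"]) * rev ** p["b"] + p["eps_f"] * rev ** p["c"]

def life_from_strain(ea, p=PROPS, lo=1.0, hi=1e12):
    """Invert the strain-life curve by bisection (monotone decreasing)."""
    for _ in range(200):
        mid = (lo * hi) ** 0.5           # bisect in log space
        if strain_amplitude(mid, p) > ea:
            lo = mid                     # strain too high -> life is longer
        else:
            hi = mid
    return (lo * hi) ** 0.5
```

A proportional correction factor of the kind the abstract suggests would simply rescale the strain amplitude fed into `life_from_strain` for each notch geometry.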
Abstract: In this work a new method for low-complexity
image coding is presented that permits different settings and great
scalability in the generation of the final bit stream. The coder is
a continuous-tone still-image compression system that combines
lossy and lossless compression, making use of finite-arithmetic
reversible transforms. Both the color-space transformation and the
wavelet transformation are reversible. The transformed coefficients
are coded by means of a coding system based on a
subdivision into smaller components (CFDS), similar to
bit-importance codification. The subcomponents so obtained are
reordered by means of a highly configurable alignment system that,
depending on the application, makes it possible to reconfigure the
elements of the image and obtain different importance levels
from which the bit stream will be generated. The subcomponents of
each importance level are coded using a variable length entropy
coding system (VBLm) that permits the generation of an embedded
bit stream. This bit stream by itself encodes a
compressed still image. However, the use of a packing system on the
bit stream after the VBLm stage allows the creation of a final, highly
scalable bit stream consisting of a basic image level and one or
several improvement levels.
Abstract: Testable software has two inherent properties: observability and controllability. Observability facilitates observation of the internal behavior of software to the required degree of detail. Controllability allows the creation of difficult-to-achieve states prior to the execution of various tests. In this paper, we describe COTT, a Controllability and Observability Testing Tool, used to create testable object-oriented software. COTT provides a framework that helps the user instrument object-oriented software to build the required controllability and observability. During testing, the tool facilitates the creation of difficult-to-achieve states required for testing difficult-to-test conditions and the observation of internal details of execution at the unit, integration and system levels. The execution observations are logged in a test log file, which is used for post-analysis and to generate test coverage reports.
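The two concepts map naturally onto a tiny instrumentation layer. A hedged Python sketch (COTT itself targets a different toolchain; the names here are invented for illustration): an `observe` decorator logs calls, arguments and results to a test log, while `force_state` plants a difficult-to-achieve object state before a test runs.

```python
import functools

TEST_LOG = []

def observe(method):
    """Observability: record each call and its result in the test log."""
    @functools.wraps(method)
    def wrapper(self, *args, **kwargs):
        TEST_LOG.append(("enter", method.__name__, args))
        result = method(self, *args, **kwargs)
        TEST_LOG.append(("exit", method.__name__, result))
        return result
    return wrapper

def force_state(obj, **attrs):
    """Controllability: drive an object into a hard-to-reach state."""
    for name, value in attrs.items():
        setattr(obj, name, value)
    return obj

class Account:
    def __init__(self):
        self.balance = 0

    @observe
    def withdraw(self, amount):
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount
        return self.balance

# Create a difficult-to-achieve state directly, then exercise the unit.
acct = force_state(Account(), balance=100)
remaining = acct.withdraw(30)
```

The entries accumulated in `TEST_LOG` play the role of the test log file: they can be inspected after a run for post-analysis or summarised into coverage-style reports.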