Abstract: This paper presents the design and implementation of WebGD, a CORBA-based document classification and retrieval system for the Internet. WebGD makes use of techniques such as the Web, CORBA, Java, NLP, fuzzy techniques, knowledge-based processing and database technology. Its main features include a unified classification and retrieval model, classification and retrieval with a single reasoning engine, and flexible working-mode configuration. The architecture of WebGD, the unified classification and retrieval model, the components of the WebGD server and the fuzzy inference engine are discussed in detail.
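The abstract does not show the fuzzy inference engine itself; as a generic illustration of the kind of membership computation such an engine performs, here is a toy triangular membership function (the function, its parameters and the "relevance" interpretation are illustrative assumptions, not taken from the paper):

```python
# Toy fuzzy-membership step of the kind a fuzzy inference engine evaluates.
# All names and numbers below are illustrative, not the paper's rule base.

def tri_membership(x, a, b, c):
    """Triangular membership function peaking at b on the support [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# degree to which a document's term weight of 0.7 is "highly relevant"
deg = tri_membership(0.7, 0.4, 0.8, 1.0)
```

In a full engine, degrees like this would be combined across rules (e.g. by min-max inference) to drive both classification and retrieval from the same rule base.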
Abstract: The aim of this research is to design a collaborative framework that integrates risk analysis activities into the geospatial database design (GDD) process. In current GDD methods, risk analysis is rarely undertaken iteratively, in conformance with requirement engineering (RE) guidelines and risk standards. Consequently, when risk analysis is performed during GDD, some foreseeable risks may be overlooked and fail to reach the output specifications, especially when user intentions are not systematically collected. This may lead to ill-defined requirements and, ultimately, to higher risks of geospatial data misuse. The adopted approach consists of 1) reviewing the risk analysis process within the scope of RE and GDD, 2) analyzing the challenges of risk analysis within the context of GDD, and 3) presenting the components of a risk-based collaborative framework that improves the collection of the intended/forbidden usages of the data and helps geo-IT experts discover implicit requirements and risks.
Abstract: In order to develop forest management strategies for the tropical forests of Malaysia, surveying forest resources and monitoring the forest areas affected by logging activities are essential. Tremendous effort has gone into the classification of land cover related to forest resource management in this country, as it is a priority in all aspects of forest mapping using remote sensing and related technology such as GIS. Indeed, classification is a compulsory step in any remote sensing research. The main objective of this paper is therefore to assess the classification accuracy of a forest map classified from Landsat TM data using different numbers of reference data (200 and 388 reference points). The comparison was made between an observation approach (200 reference points) and a combined interpretation and observation approach (388 reference points). Five land cover classes, namely primary forest, logged-over forest, water bodies, bare land and agricultural crop/mixed horticulture, could be identified by their differences in spectral response. Results showed that the overall accuracy from 200 reference points was 83.5% (kappa value 0.7502459; kappa variance 0.002871), which is considered acceptable or good for optical data. However, when the reference data were increased from 200 to 388 in the confusion matrix, the accuracy improved slightly from 83.5% to 89.17%, with the kappa statistic increasing from 0.7502459 to 0.8026135. The accuracy of this classification suggests that the strategy for selecting training areas, the interpretation approach and the number of reference data used are important for producing a better classification result.
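The overall accuracy and kappa statistic quoted above are both derived from a confusion matrix; a minimal sketch of that computation, using a hypothetical 2-class matrix rather than the paper's 5-class data, might look like:

```python
# Overall accuracy and Cohen's kappa from a confusion matrix.
# The 2x2 matrix below is hypothetical, not the paper's data.

def kappa_from_confusion(matrix):
    """Return (overall accuracy, Cohen's kappa) for a square confusion matrix
    whose rows are reference classes and columns are mapped classes."""
    k = len(matrix)
    n = sum(sum(row) for row in matrix)
    p_o = sum(matrix[i][i] for i in range(k)) / n        # observed agreement
    p_e = sum(sum(matrix[i]) * sum(row[i] for row in matrix)
              for i in range(k)) / (n * n)               # chance agreement
    return p_o, (p_o - p_e) / (1 - p_e)

acc, kappa = kappa_from_confusion([[45, 5], [10, 40]])
```

Kappa discounts the agreement expected by chance, which is why it is lower than the raw overall accuracy.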
Abstract: Construction projects generally take place in uncontrolled and dynamic environments, and construction waste is a serious environmental problem in many large cities. The total amount of waste and of carbon dioxide emissions from transportation vehicles remains out of control, owing to the growing number of construction projects, massive urban development projects and the lack of effective tools for minimizing adverse environmental impacts in construction. This research concerns the integrated application of automated tracking and data storage technologies in environmental management to monitor and control adverse environmental impacts such as construction waste and carbon dioxide emissions. Radio Frequency Identification (RFID) integrated with the Global Positioning System (GPS) provides an opportunity to uniquely identify materials, components and equipment and to locate and track them with minimal or no worker input. The transmission of data to the central database is carried out with the help of the Global System for Mobile Communications (GSM).
Abstract: Model Predictive Control (MPC) is increasingly being proposed for real-time applications and embedded systems. However, compared with the PID controller, implementation of MPC in miniaturized devices such as Field Programmable Gate Arrays (FPGAs) and microcontrollers has historically been very limited, owing to its implementation complexity and its computation time requirements. At the same time, such embedded technologies have become an enabler for future manufacturing enterprises as well as a transformer of organizations and markets. Recently, advances in microelectronics and software have allowed such techniques to be implemented in embedded systems. In this work, we take advantage of these recent advances to deploy one of the most studied and applied control techniques in industrial engineering. Specifically, this paper proposes an efficient framework for the implementation of Generalized Predictive Control (GPC) on the STM32 microcontroller. The STM32 Keil starter kit, based on a JTAG interface and the STM32 board, was used to implement the proposed GPC firmware. Besides the GPC, the PID anti-windup algorithm was also implemented, using Keil development tools designed for ARM processor-based microcontroller devices and working with the C/Cµ language. A performance comparison between the two firmwares shows good execution speed and low computational burden. These results encourage the development of simple predictive algorithms to be programmed in industrial standard hardware. The main features of the proposed framework are illustrated through two examples and compared with the anti-windup PID controller.
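The abstract does not give the GPC or PID firmware itself; as a rough illustration of the anti-windup idea it benchmarks against, here is a minimal discrete PID with integrator clamping (the gains, output limits and first-order test plant are assumed for the sketch, not taken from the paper):

```python
# Minimal discrete PID with anti-windup by conditional integration
# (integrator clamping). Gains, limits and the plant are illustrative.

class PID:
    def __init__(self, kp, ki, kd, dt, u_min, u_max):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.u_min, self.u_max = u_min, u_max
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, setpoint, y):
        err = setpoint - y
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        u = (self.kp * err + self.ki * (self.integral + err * self.dt)
             + self.kd * deriv)
        if self.u_min < u < self.u_max:
            self.integral += err * self.dt   # integrate only while unsaturated
        return min(max(u, self.u_min), self.u_max)

# closed loop with a first-order plant y' = (u - y) / tau, tau = 1 s
pid = PID(kp=2.0, ki=1.0, kd=0.0, dt=0.01, u_min=-5.0, u_max=5.0)
y = 0.0
for _ in range(2000):
    u = pid.step(1.0, y)
    y += 0.01 * (u - y)
```

Freezing the integrator while the actuator is saturated prevents the overshoot that an unclamped integral term would cause after a large setpoint change.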
Abstract: To decompose organochlorides by bioremediation, co-culturing a biohydrogen producer with dehalogenating microorganisms is a useful method. In this study, we combined these two characteristics from a biohydrogen producer, Rhodopseudomonas palustris, and a dehalogenating microorganism, Pseudomonas putida, to enhance halorespiration in R. palustris. The genes encoding the cytochrome P450cam system (camC, camA and camB) from P. putida were expressed in R. palustris from a designated expression plasmid. All tested strains were cultured to log phase, and pentachloroethane (PCA) was then added to the media. The vector control strain degraded about 78% of the PCA after 16 hours; however, the strain expressing the cytochrome P450cam system, CGA-camCAB, completely degraded the PCA within 12 hours. When grown on the chlorinated aromatic 3-chlorobenzoate as the sole carbon source, or with benzoate present as a co-substrate, CGA-camCAB showed a faster growth rate than the vector control strain.
Abstract: In this work, we develop the concept of supercompression, i.e., compression on top of the compression standard in use, so that the two compression rates are multiplied. Supercompression is based on super-resolution: it is a data compression technique that superposes spatial image compression on top of bit-per-pixel compression to achieve very high compression ratios. If the compression ratio is very high, a convolutive mask inside the decoder restores the edges, eliminating the blur. Finally, both the encoder and the complete decoder are implemented on General-Purpose computation on Graphics Processing Units (GPGPU) cards; specifically, the mentioned mask is coded inside the texture memory of a GPGPU.
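The paper's convolutive deblurring mask is not specified; as a generic sketch of applying a 3x3 edge-enhancing convolution on the decoder side (the kernel values here are a common sharpening mask, an assumption, not the paper's), one might write:

```python
# Apply a 3x3 edge-enhancing convolution to a grayscale image (nested lists).
# The kernel is a standard sharpening mask, illustrative only; it sums to 1,
# so flat regions are preserved while edges are amplified.

KERNEL = [[ 0, -1,  0],
          [-1,  5, -1],
          [ 0, -1,  0]]

def convolve3x3(img, kernel):
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = 0.0
            for ky in range(3):
                for kx in range(3):
                    yy = min(max(y + ky - 1, 0), h - 1)  # clamp at borders
                    xx = min(max(x + kx - 1, 0), w - 1)
                    acc += kernel[ky][kx] * img[yy][xx]
            out[y][x] = acc
    return out
```

On a GPGPU, a mask like this maps naturally onto a per-pixel kernel with the image and mask held in texture memory.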
Abstract: This paper is intended to assist anyone with some general technical experience but perhaps limited specific knowledge of heat transfer equipment. A characteristic of heat exchanger design is the procedure of specifying a design, with its heat transfer area and pressure drops, and checking whether the assumed design satisfies all requirements. The purpose of this paper is to show how to design an oil cooler, especially a shell-and-tube heat exchanger, which is the most common type of liquid-to-liquid heat exchanger. General design considerations and the design procedure are also illustrated, and a flow diagram is provided as an aid to the design procedure. In the design calculations, MATLAB and AutoCAD are used. Fundamental heat transfer concepts and the complex relationships involved in such an exchanger are also presented. The primary aim of this design is to obtain a high heat transfer rate without exceeding the allowable pressure drop. The computer program is highly useful for designing shell-and-tube heat exchangers and for modifying existing designs.
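A central step in shell-and-tube sizing of the kind described is the log-mean temperature difference (LMTD) relation Q = U·A·ΔT_lm; a minimal sketch with illustrative numbers (not taken from the paper) is:

```python
# Heat duty from the log-mean temperature difference (LMTD), the standard
# shell-and-tube rating relation. All numeric inputs below are illustrative.
import math

def lmtd(dt1, dt2):
    """Log-mean of the terminal temperature differences (counterflow)."""
    if abs(dt1 - dt2) < 1e-12:
        return dt1
    return (dt1 - dt2) / math.log(dt1 / dt2)

def heat_duty(U, A, dt1, dt2):
    """Q = U * A * LMTD, with U in W/(m^2 K), A in m^2, Q in W."""
    return U * A * lmtd(dt1, dt2)

# e.g. U = 300 W/(m^2 K), A = 10 m^2, terminal differences 60 K and 20 K
Q = heat_duty(300.0, 10.0, 60.0, 20.0)
```

In the iterative procedure the abstract describes, a trial area A is assumed, Q and the pressure drops are checked, and A is revised until all requirements are met.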
Abstract: In this paper, we propose a fully-utilized, block-based 2D DWT (discrete wavelet transform) architecture, which consists of four 1D DWT filters with a two-channel QMF lattice structure. The proposed architecture requires about 2MN-3N registers to save the intermediate results for higher-level decomposition, where M and N stand for the filter length and the row width of the image, respectively. Furthermore, the proposed 2D DWT processes the horizontal and vertical directions simultaneously without an idle period, so that it computes the DWT of an N×N image in a period of N^2(1-2^(-2J))/3, where J is the number of decomposition levels. Compared to existing approaches, the proposed architecture achieves 100% hardware utilization and high throughput. To mitigate the long critical path delay due to the cascaded lattices, the pipeline technique can be applied with four stages while retaining 100% hardware utilization. The proposed architecture can be applied to real-time video signal processing.
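The period N^2(1-2^(-2J))/3 is just the geometric sum of the work per decomposition level; the sketch below checks that identity and also illustrates one level of a separable 2D DWT using the Haar filter (an assumption for simplicity — the paper's architecture uses a QMF lattice, not necessarily Haar):

```python
# One separable 2D DWT level (Haar, for illustration) and the cycle-count
# formula N^2 * (1 - 2^(-2J)) / 3 as the geometric sum over J levels.
import math

def haar_1d(vec):
    """One level of the 1D orthonormal Haar DWT: low band then high band."""
    s = 1.0 / math.sqrt(2.0)
    lo = [(vec[i] + vec[i + 1]) * s for i in range(0, len(vec), 2)]
    hi = [(vec[i] - vec[i + 1]) * s for i in range(0, len(vec), 2)]
    return lo + hi

def dwt2_level(img):
    """One 2D DWT level: filter all rows, then all columns (separable)."""
    rows = [haar_1d(r) for r in img]
    cols = [haar_1d(list(c)) for c in zip(*rows)]
    return [list(r) for r in zip(*cols)]

def cycle_count(N, J):
    """Cycles to transform an N x N image over J levels: N^2*(1-2^(-2J))/3."""
    return N * N * (1 - 2 ** (-2 * J)) / 3
```

Each level works only on the LL band of the previous one, i.e. on (N/2^j)^2 samples, which is what the closed-form period sums.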
Abstract: In this paper a study is presented on the vibration of thin cylindrical shells with ring supports, made of functionally graded materials (FGMs) composed of stainless steel and nickel. Material properties vary along the thickness direction of the shell according to a volume fraction power law. The cylindrical shells have ring supports which are arbitrarily placed along the shell and impose zero lateral deflection. The study is carried out using third-order shear deformation shell theory (TSDT), and the analysis employs Hamilton's principle, from which the governing equations of motion of the FGM cylindrical shells are derived. Results are presented on the frequency characteristics, the influence of the ring support position and the influence of boundary conditions. The present analysis is validated by comparing results with those available in the literature.
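The volume fraction power law referred to is commonly written V = (z/h + 1/2)^N, with the effective property interpolating between the inner and outer constituents; a small sketch (the stainless steel and nickel moduli below are typical published figures, assumed here rather than taken from the paper) is:

```python
# Volume-fraction power law for an FGM shell: V = (z/h + 1/2)**N, with z the
# distance from the mid-surface and h the thickness. Values are illustrative.

def graded_property(P_outer, P_inner, z, h, N):
    """Effective property at -h/2 <= z <= h/2 for power-law exponent N."""
    V = (z / h + 0.5) ** N
    return (P_outer - P_inner) * V + P_inner

# Young's modulus graded from nickel (inner) to stainless steel (outer);
# the two moduli are typical assumed figures in Pa.
E_ni, E_ss = 205.98e9, 207.79e9
E_mid = graded_property(E_ss, E_ni, 0.0, 0.01, 1.0)
```

Varying the exponent N shifts the through-thickness profile, which is what drives the frequency changes such studies report.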
Abstract: Flow movement in unsaturated soil can be expressed by a partial differential equation known as the Richards equation. The objective of this study is to find an appropriate implicit numerical solution for the head-based Richards equation. Several well-known finite difference schemes (fully implicit, Crank-Nicolson and Runge-Kutta) are utilized in this study. In addition, the effects of different approximations of the moisture capacity function, convergence criteria and time stepping methods are evaluated. Two different infiltration problems were solved to investigate the performance of the different schemes; these problems involve vertical water flow in a wet soil and in a very dry soil. The numerical solutions of the two problems were compared using four evaluation criteria, and the comparisons showed that the fully implicit scheme performs better than the other schemes. In addition, using the standard chord-slope method to approximate the moisture capacity function, automatic time stepping, and the difference between two successive iterations as the convergence criterion in the fully implicit scheme leads to better and more reliable results for simulating fluid movement in different unsaturated soils.
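The standard chord-slope method mentioned replaces the analytical moisture capacity C = dθ/dh with a finite difference between successive head values; a sketch using an assumed van Genuchten retention curve (parameter values illustrative, not the paper's soils) is:

```python
# Standard chord-slope approximation of the moisture capacity C = d(theta)/dh.
# The van Genuchten retention curve and its parameters are illustrative.

def theta_vg(h, theta_r=0.102, theta_s=0.368, alpha=0.0335, n=2.0):
    """van Genuchten water content; h is pressure head in cm (negative when
    unsaturated)."""
    if h >= 0:
        return theta_s
    m = 1.0 - 1.0 / n
    return theta_r + (theta_s - theta_r) / (1.0 + (alpha * abs(h)) ** n) ** m

def chord_slope_capacity(h_new, h_old):
    """C ~= (theta(h_new) - theta(h_old)) / (h_new - h_old), evaluated between
    two successive heads (e.g. iterates of the implicit solver)."""
    if abs(h_new - h_old) < 1e-12:
        return 0.0   # degenerate chord; a solver would fall back to d(theta)/dh
    return (theta_vg(h_new) - theta_vg(h_old)) / (h_new - h_old)
```

Using the chord slope instead of the pointwise derivative helps the head-based form conserve mass between iterations, which is why it pairs well with the fully implicit scheme.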
Abstract: Competitive learning is an adaptive process in which the neurons in a neural network gradually become sensitive to different input pattern clusters. The basic idea behind Kohonen's Self-Organizing Feature Maps (SOFM) is competitive learning. SOFM can generate mappings from high-dimensional signal spaces to lower-dimensional topological structures. The main features of such mappings are topology preservation, feature mapping and approximation of the probability distribution of the input patterns. To overcome some limitations of SOFM, e.g., the fixed number of neural units and the topology of fixed dimensionality, a Growing Self-Organizing Neural Network (GSONN) can be used. A GSONN can change its topological structure during learning: it grows by learning and shrinks by forgetting. To speed up training and convergence, a new variant of GSONN, twin growing cell structures (TGCS), is presented here. This paper first gives an introduction to competitive learning, SOFM and its variants. Then, we discuss some GSONNs with fixed dimensionality, including growing cell structures, their variants and the author's model, TGCS. The paper ends with a comparison of test results and conclusions.
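The winner-take-all update at the heart of competitive learning (shared by SOFM and the growing networks above) can be sketched as follows; the two-cluster toy data, unit count and learning rate are illustrative assumptions:

```python
# Minimal winner-take-all competitive learning in 2-D: the closest unit wins
# and moves toward the input. Data, unit count and learning rate are toys.
import random

def train_competitive(data, k=2, lr=0.1, epochs=50, seed=0):
    rng = random.Random(seed)
    weights = [[rng.random(), rng.random()] for _ in range(k)]
    for _ in range(epochs):
        for x in data:
            # the unit nearest the input wins the competition...
            win = min(range(k),
                      key=lambda i: (weights[i][0] - x[0]) ** 2
                                    + (weights[i][1] - x[1]) ** 2)
            # ...and only the winner is pulled toward the input
            weights[win][0] += lr * (x[0] - weights[win][0])
            weights[win][1] += lr * (x[1] - weights[win][1])
    return weights

clusters = [(0.0, 0.0), (0.1, 0.0), (1.0, 1.0), (0.9, 1.0)]
w = train_competitive(clusters)
```

SOFM adds a neighborhood update around the winner, and the growing networks insert or remove units during this same loop rather than fixing k in advance.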
Abstract: Image enhancement is one of the most important and challenging preprocessing steps for almost all applications of image processing. Various methods, such as the median filter and the α-trimmed mean filter, have been suggested, and it has been shown that the α-trimmed mean filter is a modification of the median and mean filters. On the other hand, ε-filters have shown excellent performance in suppressing noise: in spite of their simplicity, they achieve good results. However, the conventional ε-filter is based on a moving average. In this paper, we suggest a new ε-filter which utilizes an α-trimmed mean. We argue that this new method gives better outcomes than previous ones, and the experimental results confirm this claim.
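A 1-D sketch of the proposed idea — an ε-filter whose averaging step uses an α-trimmed mean instead of a plain moving average — might look like the following; the window size, ε and α values are illustrative choices, not the paper's:

```python
# 1-D epsilon-filter with an alpha-trimmed mean as the averaging step.
# Only neighbours within eps of the centre sample contribute, so large
# discontinuities (edges) pass through unchanged while small-amplitude
# noise is smoothed. Parameter values are illustrative.

def alpha_trimmed_mean(values, alpha):
    """Mean after discarding the int(alpha*len) smallest and largest samples."""
    s = sorted(values)
    t = int(alpha * len(s))
    core = s[t:len(s) - t] if len(s) - 2 * t > 0 else s
    return sum(core) / len(core)

def eps_filter(signal, eps=20.0, half=2, alpha=0.25):
    out = []
    for i, x in enumerate(signal):
        lo, hi = max(0, i - half), min(len(signal), i + half + 1)
        near = [v for v in signal[lo:hi] if abs(v - x) <= eps]
        out.append(alpha_trimmed_mean(near, alpha))
    return out
```

Trimming the extremes of the within-ε samples is what lets this variant reject residual outliers that a plain moving-average ε-filter would blend in.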
Abstract: The development of the entrepreneurial competences of farmers has been pointed out as a necessary condition for the modernization of the countryside in the face of globalization. However, the educational processes involved in such development have been little studied, especially in emerging economies. This research aims to shed light on some of the critical issues behind the early stages of the transformation of farmers into entrepreneurs, through in-depth interviews with farmers, entrepreneurial promoters and public officials participating in a public pilot project in Mexico. Although major impacts were expected only in the long run, important positive changes in the mindset of farmers and other participants were found in the early stages of the intervention. The farmers apparently began to become more conscious of the importance of preserving aquifer resources, as well as more market- and entrepreneurially oriented.
Abstract: A lightning surge causes traveling waves and a temporary voltage rise in the transmission line system. Lightning is the most harmful cause of damage to transmission lines and installed equipment, so studying and analyzing the temporary voltage rise is necessary for designing and siting surge arresters. This analysis describes the shape of the lightning wave in a 115 kV transmission line in Thailand, using the ATP/EMTP program to model the transmission line and the lightning surge. Because of the limits of this program, the geometry of the transmission line and the surge parameters must be calculated by hand, following the manual, to obtain the closest parameter values. Furthermore, the surge arrester model must be correct and standardized according to the Metropolitan Electricity Authority's standard. The calculated results were also compared with real data. The results of the analysis show that the temporary voltage rise at the struck line reaches 326.59 kV when no surge arrester is installed in the system, whereas it reaches only 182.83 kV when a surge arrester is installed, and the duration of the traveling wave is also reduced. The surge arrester should be installed as near to the transformer as possible; it is therefore necessary to know the right installation distance and the size of the surge arrester to prevent the temporary voltage rise effectively.
Abstract: An envelope echo-signal measurement is proposed in this paper, based on observing the echo signal from a 200 kHz echo sounder receiver. The envelope signal without any object present is compared with the envelope signal of a sphere. Two steel balls of different diameters (3.1 cm and 2.2 cm) and two air-filled stainless steel balls of different diameters (4.8 cm and 7.4 cm) were used in this experiment. The targets were positioned about 0.5 m and 1.0 m from the transducer face using nylon rope. The time-domain echo observations clearly show that the echo signal structure differs with the size, distance and type of metal sphere. The amplitude of the envelope voltage is higher for the bigger spheres than for the smaller ones, confirming that the bigger spheres have a higher target strength. Although the signal structure without any object differs from the signal with a sphere present, the signal reflected from the tank floor increases linearly with sphere size; we consider that this occurs because of the proximity of the object to the tank floor.
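The size dependence of target strength observed above can be illustrated with the textbook geometric-scattering approximation TS = 10·log10(a²/4) for a rigid sphere of radius a in metres; this is a generic high-frequency estimate, an assumption for illustration, not the paper's measurement model:

```python
# Geometric-scattering (high-frequency) estimate of sphere target strength:
# TS = 10*log10(a^2 / 4) dB re 1 m^2, radius a in metres. Illustrative only.
import math

def sphere_ts(diameter_cm):
    a = diameter_cm / 100.0 / 2.0            # radius in metres
    return 10.0 * math.log10(a * a / 4.0)    # dB re 1 m^2
```

Because TS grows with a², the 7.4 cm sphere is predicted to return the strongest echo and the 2.2 cm sphere the weakest, matching the ordering seen in the envelope voltages.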
Abstract: Amongst the constantly fluctuating conditions prevailing today, changeability represents a key strategic factor for a manufacturing company seeking success on international markets. In order to cope with turbulence and the increasing level of unpredictability, the focus here is not only on the flexible design of production systems but in particular on the employee as an enabler of change. It is important to enable the employees of manufacturing companies to participate actively in change events and change decisions. To this end, the learning factory has been created, which is intended to serve the development of change-promoting competences and to sensitize employees to the necessity of change.
Abstract: A computer model of Quantum Theory (QT) has been developed by the author. The major goal of the computer model was to support and demonstrate as large a scope of QT as possible. This includes simulations of the major QT (Gedanken) experiments such as, for example, the famous double-slit experiment. Besides the anticipated difficulties of (1) transforming exacting mathematics into a computer program, two further types of problem showed up: (2) areas where QT provides a complete mathematical formalism, but where, for concrete applications, the equations are not solvable at all, or only with extremely high effort; and (3) QT rules which are formulated in natural language and which seem translatable neither into precise mathematical expressions nor into a computer program. The paper lists problems in all three categories and also describes the possible solutions or circumventions developed for the computer model.
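For the double-slit case, the idealized far-field interference pattern has a closed form, I(θ) = cos²(π·d·sinθ/λ) for two point slits; a toy computation (the slit spacing and wavelength are assumed values, and the full model would also include the single-slit envelope) is:

```python
# Ideal two-slit interference intensity for point slits in the far field:
# I(theta) = cos^2(pi * d * sin(theta) / lam). A toy formula, not the
# model's full QT simulation; d and lam below are illustrative.
import math

def two_slit_intensity(theta, d, lam):
    return math.cos(math.pi * d * math.sin(theta) / lam) ** 2

d, lam = 2e-6, 500e-9                 # slit spacing 2 um, wavelength 500 nm
theta_max1 = math.asin(lam / d)       # first off-axis bright fringe
```

A closed form like this is exactly the kind of case (1) above: the mathematics is exact, and the programming difficulty lies only in the translation, unlike cases (2) and (3).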
Abstract: The paper deals with the perspectives and possibilities of "smart solutions" for critical infrastructure protection: common computer-aided technologies are used with a view to new, better protection of selected infrastructure objects. The paper focuses on a co-product of the Czech defence research project ADAPTIV, carried out by the Department of Civil Protection, Faculty of Economics and Management, University of Defence. The project creates a system and technology for the adaptive cybernetic camouflage of armed forces objects, armaments, vehicles and troops, and of mobilization infrastructure. This adaptive camouflage system and technology will be useful for protecting army tactical activities and also for generating decoys. The fourth chapter of the paper concerns the possibilities of using the introduced technology for the protection of selected civil (economically important) critical infrastructure objects. The aim of this section is to introduce the scientific capabilities and potential of the University of Defence research results and solutions for practice.
Abstract: Face recognition is a technique for automatically identifying or verifying individuals. It receives great attention in identification, authentication, security and many other applications. Diverse methods have been proposed for this purpose, and many comparative studies have been performed; however, researchers have not reached a unified conclusion. In this paper, we report an extensive quantitative accuracy analysis of four of the most widely used face recognition algorithms: Principal Component Analysis (PCA), Independent Component Analysis (ICA), Linear Discriminant Analysis (LDA) and Support Vector Machines (SVM), using the AT&T, Sheffield and Bangladeshi-people face databases under diverse conditions such as illumination, alignment and pose variations.
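Of the four algorithms compared, PCA ("eigenfaces") is the simplest to sketch; the tiny 4-pixel "faces" below are made-up data, and the SVD route to the principal axes is one standard implementation choice, not necessarily the paper's:

```python
# Minimal PCA ("eigenfaces") fit and projection. The 6 four-pixel "faces"
# are made-up data; real use would flatten full face images the same way.
import numpy as np

def pca_fit(X, k):
    """Return the mean and the top-k principal axes of row-vector data X."""
    mean = X.mean(axis=0)
    Xc = X - mean
    # principal axes via SVD of the centred data (rows of Vt are the axes)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return mean, Vt[:k]

def pca_project(X, mean, axes):
    return (X - mean) @ axes.T

X = np.array([[1.0, 0.0, 0.0, 1.0], [0.9, 0.0, 0.0, 1.1],
              [0.0, 1.0, 1.0, 0.0], [0.1, 1.0, 1.0, 0.2],
              [1.0, 0.1, 0.0, 1.0], [0.0, 0.9, 1.0, 0.0]])
mean, axes = pca_fit(X, 2)
Z = pca_project(X, mean, axes)
```

Recognition then reduces to nearest-neighbour matching in the projected space; LDA, ICA and SVM replace or follow this projection step with their own criteria.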