Abstract: Virtual Reality (VR) is becoming increasingly important
for business, education, and entertainment; accordingly, VR
technology has been applied for training purposes in areas such as
the military, safety training, and flight simulation. In particular, a
highly reliable VR training system depends critically on immersion.
Manipulation training in immersive virtual environments is difficult
partly because users must do without the haptic contact with real
objects that they rely on in the real world to orient themselves and
their manipulated objects.
In this paper, we construct a questionnaire on immersion and an
experiment to assess the influence of immersion on performance in
a VR training system. The Immersion Questionnaire (IQ) covers
spatial immersion, psychological immersion, and sensory immersion.
We also examine users' visual attention and signal detection while
using the training system. Twenty subjects were allocated to a
factorial design consisting of two different VR systems (desktop VR
and projector VR). The results indicate that the different VR
representation methods significantly affected the participants'
immersion dimensions.
Abstract: This paper proposes improved delay-dependent stability conditions for linear time-delay systems of neutral type. The proposed methods employ a suitable Lyapunov-Krasovskii functional and a new form of the augmented system. New delay-dependent stability criteria for these systems are established in terms of linear matrix inequalities (LMIs), which can be easily solved by various effective optimization algorithms. Numerical examples show that the proposed method is effective and provides less conservative results.
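For reference, a Lyapunov-Krasovskii functional of the kind commonly used for neutral time-delay systems can be written as follows (an illustrative standard form; the paper's augmented functional and the resulting LMIs may differ):

```latex
% Neutral time-delay system:
%   \dot{x}(t) - C\dot{x}(t-\tau) = A x(t) + B x(t-h)
% Candidate Lyapunov-Krasovskii functional:
V(x_t) = x^{T}(t)\,P\,x(t)
       + \int_{t-h}^{t} x^{T}(s)\,Q\,x(s)\,ds
       + \int_{t-\tau}^{t} \dot{x}^{T}(s)\,R\,\dot{x}(s)\,ds,
\qquad P,\, Q,\, R \succ 0 .
```

Requiring \(\dot{V}(x_t) < 0\) along trajectories yields conditions that are linear in \(P\), \(Q\), \(R\), i.e., an LMI feasibility problem that standard semidefinite programming solvers handle directly.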
Abstract: A multi-agent system for processing bio-signals helps
medical practitioners follow a standard examination procedure
stored on a web server. Web servers supporting a standard search
engine follow all possible combinations of the search keywords
entered by the user; as a result, a huge number of web pages are
shown in the web browser. The system also helps the medical
practitioner interact with an expert in the relevant field in order to
make a proper judgment in the diagnosis phase [3]. The web server
uses a plug-in to establish and maintain connections, enabling the
medical practitioner to make a fast analysis. A client of the web
server can retrieve data related to its search request. The DB agent
and the EEG/ECG/EMG agents relieve the user of the difficult
aspects of updating medical information on the web server.
Abstract: This article is concerned with the translation of Quranic
verses into Braille symbols using a Visual Basic program. The
system has the ability to translate the special recitation marks of the
Quran; this study is limited to the (Noon + Sukoon) marks. It builds
on an existing translation system that combines a finite state machine
with left and right context matching and a set of translation rules.
This allows Arabic text to be translated into Braille symbols after
detecting the recitation marks in the Quranic verses.
Abstract: The distinction among urban, periurban and rural areas represents a classical example of uncertainty in land classification. Satellite images, geostatistical analysis and all kinds of spatial data are very useful in urban sprawl studies, but it is important to define precise rules for combining large amounts of data to build complex knowledge about a territory. Rough Set theory may be a useful method to employ in this field. It represents a different mathematical approach to uncertainty, capturing indiscernibility: two different phenomena can be indiscernible in some contexts and classified in the same way when the available information about them is combined. This approach has been applied in a case study, comparing the results achieved with the Map Algebra technique and with Spatial Rough Sets. The case study area, Potenza Province, is particularly suitable for the application of this theory because it includes 100 municipalities with varying numbers of inhabitants and morphologic features.
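The notion of indiscernibility at the core of Rough Set theory can be sketched as follows (the municipalities and attribute values below are hypothetical, for illustration only):

```python
from collections import defaultdict

def indiscernibility_classes(table, attrs):
    """Group objects that are indiscernible on the chosen attributes:
    objects with identical attribute values fall in the same class."""
    classes = defaultdict(list)
    for name, row in table.items():
        key = tuple(row[a] for a in attrs)
        classes[key].append(name)
    return [sorted(v) for v in classes.values()]

# Hypothetical land-classification table (values are illustrative).
table = {
    "A": {"pop_density": "high", "built_up": "high"},
    "B": {"pop_density": "high", "built_up": "high"},
    "C": {"pop_density": "low",  "built_up": "high"},
    "D": {"pop_density": "low",  "built_up": "low"},
}

print(indiscernibility_classes(table, ["pop_density", "built_up"]))
# A and B are indiscernible: with this information alone they must
# receive the same land classification (e.g. "urban").
```

Classification rules are then stated over these equivalence classes rather than over individual objects, which is what makes the approach suitable for uncertain urban/periurban/rural boundaries.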
Abstract: Much work has been done on predicting the fault proneness of software systems. However, the severity of faults is more important than the number of faults existing in the developed system, since major faults matter most to a developer and need immediate attention. In this paper, we try to predict the level of impact of the existing faults in software systems. A neuro-fuzzy based predictor model is applied to NASA's public domain defect dataset, coded in the C programming language. Correlation-based Feature Selection (CFS) evaluates the worth of a subset of attributes by considering the individual predictive ability of each feature along with the degree of redundancy between them; therefore, CFS is used to select the metrics most highly correlated with the severity level of faults. The results are compared with the prediction results of Logistic Model Trees (LMT), earlier quoted as the best technique in [17]. The results are recorded in terms of accuracy, Mean Absolute Error (MAE) and Root Mean Squared Error (RMSE). They show that the neuro-fuzzy based model provides relatively better prediction accuracy than the other models and hence can be used for modeling the level of impact of faults in function-based systems.
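The CFS criterion described above rewards features that correlate with the target while penalizing redundancy among features. A minimal pure-Python sketch (the metric vectors are made up; the paper applies Weka-style CFS to NASA code metrics):

```python
import math

def pearson(x, y):
    """Plain Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def cfs_merit(features, target):
    """CFS merit of a feature subset:
    k * r_cf / sqrt(k + k*(k-1) * r_ff), where r_cf is the mean
    feature-target correlation and r_ff the mean feature-feature
    correlation (absolute values)."""
    k = len(features)
    r_cf = sum(abs(pearson(f, target)) for f in features) / k
    pairs = [(i, j) for i in range(k) for j in range(i + 1, k)]
    if pairs:
        r_ff = sum(abs(pearson(features[i], features[j]))
                   for i, j in pairs) / len(pairs)
    else:
        r_ff = 0.0
    return k * r_cf / math.sqrt(k + k * (k - 1) * r_ff)

# Made-up metric vectors: m1 tracks severity perfectly, m2 is a
# noisier, largely redundant copy of m1.
severity = [1, 2, 3, 4, 5]
m1 = [1, 2, 3, 4, 5]
m2 = [2, 1, 4, 3, 5]
merit_m1 = cfs_merit([m1], severity)
merit_both = cfs_merit([m1, m2], severity)
print(merit_m1, merit_both)  # adding the redundant metric lowers the merit
```

Subset search (greedy or best-first) over this merit function then picks the metric subset used to train the predictor.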
Abstract: The research focused on the design, development and
evaluation of a sustainable web-based network system to be used as
an interoperable environment for university process workflow and
document management. In this manner, most of the process
workflows in universities can be realized entirely electronically,
promoting an integrated university. Defining the most commonly
used university process workflows enabled the creation of electronic
workflows and their execution on standard workflow execution
engines. The definition or reengineering of workflows increased
work efficiency and helped standardize processes across different
faculties. The concept, the process definition, and the solution
applied as a case study are evaluated, and the findings are reported.
Abstract: Biological data has several characteristics that strongly differentiate it from typical business data. It is much more complex, usually large in size, and continuously changing. Until recently, business data has been the main target for discovering trends, patterns or future expectations. However, with the recent rise of biotechnology, the powerful technology that was used for analyzing business data is now being applied to biological data. With this advanced technology at hand, the main trend in biological research is rapidly shifting from structural DNA analysis to understanding the cellular functions of DNA sequences. DNA chips are now being used to perform experiments, and DNA analysis processes are being used by researchers. Clustering is one of the important processes used for grouping together similar entities. There are many clustering algorithms, such as hierarchical clustering, self-organizing maps and K-means clustering. In this paper, we propose a clustering algorithm that imitates an ecosystem, taking into account the features of biological data. We implemented the system using an ant-colony clustering algorithm. The system decides the number of clusters automatically: it processes the input biological data, runs the ant-colony algorithm, draws the topic map, assigns clusters to the genes and displays the output. We tested the algorithm with test data of 100 to 1,000 genes and 24 samples, and we show promising results for applying this algorithm to clustering DNA chip data.
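The core mechanism behind ant-colony clustering can be illustrated with the pick-up/drop probabilities of the classic Lumer-Faieta scheme (a sketch with assumed constants k1, k2 and a toy 1-D similarity; the paper's implementation and parameters may differ):

```python
def local_density(item, neighbours, alpha=0.5):
    """Lumer-Faieta local density f: average similarity between the
    carried item and the items in the ant's neighbourhood (toy 1-D
    similarity; real gene-expression vectors would use e.g. Euclidean
    or correlation distance)."""
    if not neighbours:
        return 0.0
    s = sum(1.0 - abs(item - n) / alpha for n in neighbours) / len(neighbours)
    return max(0.0, s)

def p_pick(f, k1=0.3):
    # Low local density: the item looks out of place, so pick it up.
    return (k1 / (k1 + f)) ** 2

def p_drop(f, k2=0.3):
    # High local density: the item fits here, so drop it.
    return (f / (k2 + f)) ** 2

f_fits = local_density(0.5, [0.5, 0.5])   # surrounded by similar items
f_alone = local_density(0.5, [0.0])       # surrounded by dissimilar items
print(p_drop(f_fits), p_drop(f_alone))    # ants drop where the item fits
print(p_pick(f_fits), p_pick(f_alone))    # ants pick up where it does not
```

Iterating these pick/drop decisions makes similar items accumulate in the same regions, so the number of clusters emerges from the data rather than being fixed in advance, which matches the abstract's claim that the system decides the number of clusters automatically.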
Abstract: This paper presents a new fingerprint coding technique
based on contourlet transform and multistage vector quantization.
Wavelets have shown their ability in representing natural images that
contain smooth areas separated with edges. However, wavelets
cannot efficiently take advantage of the fact that the edges usually
found in fingerprints are smooth curves. This issue is addressed by
directional transforms, known as contourlets, which have the
property of preserving edges. The contourlet transform is a new
extension to the wavelet transform in two dimensions using
nonseparable and directional filter banks. Computation and storage
requirements are the major difficulty in implementing a vector
quantizer: in the full-search algorithm, the computation and storage
complexity grows exponentially with the number of bits used in
quantizing each frame of spectral information. The storage
requirement of multistage vector quantization is lower than that of
full-search vector quantization. The coefficients of the contourlet
transform are quantized by multistage vector quantization, and the
quantized coefficients are encoded by Huffman coding. The results
obtained are tabulated and compared with those of existing
wavelet-based methods.
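The multistage quantization idea, in which each stage quantizes the residual left by the previous one so that several small codebooks replace one exponentially large codebook, can be sketched as follows (toy 2-D codebooks, purely illustrative):

```python
def nearest(codebook, vec):
    """Full search within one stage: index of the closest codeword."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(c, vec))
    return min(range(len(codebook)), key=lambda i: dist(codebook[i]))

def msvq_encode(vec, stages):
    """One index per stage; each stage quantizes the residual left
    by the previous stages."""
    residual = list(vec)
    indices = []
    for codebook in stages:
        i = nearest(codebook, residual)
        indices.append(i)
        residual = [r - c for r, c in zip(residual, codebook[i])]
    return indices

def msvq_decode(indices, stages):
    """Reconstruction is the sum of the selected codewords."""
    out = [0.0] * len(stages[0][0])
    for i, codebook in zip(indices, stages):
        out = [o + c for o, c in zip(out, codebook[i])]
    return out

# Toy 2-D codebooks: a coarse first stage and a fine residual stage.
stages = [
    [[0, 0], [4, 4]],                    # stage 1: coarse
    [[0, 0], [1, 0], [0, 1], [1, 1]],    # stage 2: residual
]
idx = msvq_encode([4.9, 4.1], stages)
print(idx, msvq_decode(idx, stages))
```

With b bits split evenly over two stages, the encoder stores 2 * 2^(b/2) codewords instead of 2^b, which is the storage saving over full-search vector quantization that the abstract refers to.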
Abstract: This study, focusing on the importance of encouraging
outdoor activities for children, aims to propose, implement and
evaluate a Web-GIS based outdoor education program for junior
high schools. Specifically, to improve outdoor activities in junior
high school education, an outdoor education program chiefly using
Web-GIS, which provides a good tool for information provision and
sharing, is proposed and implemented before being evaluated by
users. The conclusions of this study can be summarized in the
following two points.
(1) A five-step outdoor education program based on Web-GIS was
proposed for a "second school" at junior high schools and was then
implemented before being evaluated by teachers as users.
(2) Based on the results of the evaluation by teachers, it was clear
that teachers alone find it difficult to operate the Web-GIS based
outdoor education program due to their lack of knowledge regarding
Web-GIS, and that support staff who can effectively utilize Web-GIS
are essential.
Abstract: The self-organizing map (SOM) provides both clustering and visualization capabilities in data mining. Dynamic self-organizing maps such as the Growing Self-Organizing Map (GSOM) have been developed to overcome the fixed structure of the SOM and enable better representation of the discovered patterns. However, when mining large datasets or historical data, the hierarchical structure of the data is also useful for viewing cluster formation at different levels of abstraction. In this paper, we present a technique to generate concept trees from the GSOM. The formation of trees from different spread factor values of the GSOM is also investigated, and the quality of the trees is analyzed. The results show that concept trees can be generated from the GSOM, thus eliminating the need to re-cluster the data from scratch to obtain a hierarchical view of the data under study.
Abstract: This work proposes a recursive weighted ELS
algorithm for system identification that applies numerically robust
orthogonal Householder transformations. The properties of the
proposed algorithm show that it obtains acceptable results in a noisy
environment: fast convergence and asymptotically unbiased
estimates. A comparative analysis with other robust methods well
known from the literature is also presented.
Abstract: The task of face recognition has been actively
researched in recent years. This paper provides an up-to-date review of major human face recognition research. We first present an
overview of face recognition and its applications. Then, a literature review of the most recent face recognition techniques is presented.
Description and limitations of face databases which are used to test
the performance of these face recognition algorithms are given. A
brief summary of the face recognition vendor test (FRVT) 2002, a
large scale evaluation of automatic face recognition technology, and
its conclusions are also given. Finally, we give a summary of the research results.
Abstract: As more people from non-technical backgrounds
are becoming directly involved with large-scale ontology
development, the focal point of ontology research has shifted
from the more theoretical ontology issues to problems
associated with the actual use of ontologies in real-world,
large-scale collaborative applications. Recently the National
Science Foundation funded a large collaborative ontology
development project for which a new formal ontology model,
the Ontology Abstract Machine (OAM), was developed to
satisfy some unique functional and data representation
requirements. This paper introduces the OAM model and the
related algorithms that enable maintenance of an ontology that
supports node-based user access. The successful software
implementation of the OAM model and its subsequent
acceptance by a large research community proves its validity
and its real-world application value.
Abstract: In this paper, approaches for increasing the
effectiveness of error detection in computer network channels with
Pulse-Amplitude Modulation (PAM) are proposed. The proposed
approaches are based on the special features of the errors that appear
in lines with PAM. The first approach consists of a CRC
modification designed specifically for lines with PAM. The second
approach is based on the use of weighted checksums; a method for
coding the checksum components has been developed. It is shown
that the proposed checksum modifications ensure superior reliability
of digital data control for channels with PAM compared to CRC.
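As a hedged illustration of why weighting the checksum components helps, the sketch below uses a generic position-weighted checksum (not the paper's specific component coding): position-dependent weights make the check sensitive to where an error occurs, not only to its magnitude, so error patterns that cancel in a plain sum are still detected.

```python
def plain_checksum(symbols, mod=256):
    """Plain modular sum: blind to the positions of the symbols."""
    return sum(symbols) % mod

def weighted_checksum(symbols, mod=257):
    """Position-weighted sum: weight i+1 for the i-th symbol, so the
    checksum changes when symbols move even if their sum does not."""
    return sum((i + 1) * s for i, s in enumerate(symbols)) % mod

data = [3, 1, 2, 3]
swapped = [3, 2, 1, 3]  # two adjacent PAM symbols transposed
print(plain_checksum(data) == plain_checksum(swapped))        # undetected
print(weighted_checksum(data) == weighted_checksum(swapped))  # detected
```

The choice of weights and modulus determines which error classes are guaranteed detectable; the paper develops a specific coding of the checksum components tuned to the error statistics of PAM lines.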
Abstract: The imprecision of manufacturing processes means that parts cannot be realized exactly to the specified dimensions. It is thus necessary to ensure that the effectively realized product strictly belongs to intervals compatible with the correct functioning of the parts. In this paper we present an approach based on mixing two theories with different characteristics: fuzzy systems and Petri nets. This tool is proposed to model and control quality in an assembly system. A robust control of a mechanical assembly process is presented as an application. This control must then keep the parts within their specification intervals in the face of variations. It also illustrates how the technique reacts when the product quality is high, medium, or low.
Abstract: Taking into account the link between the efficiency of
a detector and the complexity of a stealth mechanism, we propose in
this paper a new formalism for stealth using graph theory.
Abstract: Resource-constrained project scheduling is an NP-hard
optimisation problem. There are many different heuristic
strategies for shifting activities in time when resource requirements
exceed the available amounts. These strategies are frequently based
on priorities of activities. In this paper, we assume that a suitable
heuristic has been chosen to decide which activities should be
performed immediately and which should be postponed, and we
investigate the resource-constrained project scheduling problem
(RCPSP) from the implementation point of view. We propose an
efficient routine that, instead of shifting the activities, extends their
duration, making it possible to break their duration down into active
and sleeping subintervals. We can then apply the classical Critical
Path Method, which needs only polynomial running time. This
algorithm can easily be adapted for multiproject scheduling with
limited resources.
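The polynomial-time Critical Path Method forward pass that the routine relies on can be sketched as follows (the activity names and the split of B's duration into active and sleeping subintervals are hypothetical):

```python
from collections import defaultdict, deque

def cpm_earliest_times(duration, preds):
    """Classical CPM forward pass over a precedence graph: earliest
    start (es) and finish (ef) of every activity, computed in time
    polynomial in the number of activities and precedence arcs."""
    succs = defaultdict(list)
    indeg = {a: 0 for a in duration}
    for a, ps in preds.items():
        for p in ps:
            succs[p].append(a)
            indeg[a] += 1
    queue = deque(a for a in duration if indeg[a] == 0)
    es, ef = {}, {}
    while queue:
        a = queue.popleft()
        # Earliest start: all predecessors must have finished.
        es[a] = max((ef[p] for p in preds.get(a, [])), default=0)
        ef[a] = es[a] + duration[a]
        for s in succs[a]:
            indeg[s] -= 1
            if indeg[s] == 0:
                queue.append(s)
    return es, ef

# Hypothetical project: B's nominal 2-unit activity was extended to
# 5 units (2 active + 3 sleeping while a resource is unavailable),
# instead of shifting B later in time.
durations = {"A": 3, "B": 5, "C": 2}
preds = {"B": ["A"], "C": ["A", "B"]}
es, ef = cpm_earliest_times(durations, preds)
print(es, ef)
```

Because the resource conflict is encoded inside the (extended) duration, the unmodified CPM recursion above schedules the project; no shifting logic is needed.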
Abstract: This paper addresses multisensor data fusion under
non-Gaussian channel noise. M-estimates are known to be a robust
solution, at the cost of some accuracy. In order to improve the
estimation accuracy while maintaining equivalent robustness, a
two-stage robust fusion algorithm is proposed: preliminary rejection
of outliers followed by optimal linear fusion. Numerical experiments
show that the proposed algorithm is equivalent to the M-estimates in
the case of uncorrelated local estimates and significantly outperforms
the M-estimates when the local estimates are correlated.
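The two-stage idea can be sketched as follows (the outlier test here is a simple median/MAD rule and the fusion is inverse-variance weighting; the paper's rejection rule and fusion weights may differ):

```python
import statistics

def two_stage_fusion(estimates, variances, k=3.0):
    """Stage 1: reject local estimates farther than k robust standard
    deviations (MAD-based) from the median.  Stage 2: optimal linear
    fusion, i.e. inverse-variance weighting of the survivors."""
    med = statistics.median(estimates)
    mad = statistics.median(abs(e - med) for e in estimates) or 1e-12
    kept = [(e, v) for e, v in zip(estimates, variances)
            if abs(e - med) <= k * 1.4826 * mad]
    weights = [1.0 / v for _, v in kept]
    fused = sum(w * e for w, (e, _) in zip(weights, kept)) / sum(weights)
    return fused, 1.0 / sum(weights)

# Three consistent local estimates plus one heavy-tailed outlier.
fused, fused_var = two_stage_fusion([10.1, 9.9, 10.0, 25.0],
                                    [1.0, 1.0, 1.0, 1.0])
print(fused, fused_var)  # the outlier 25.0 is rejected before fusion
```

Once the outliers are gone, the second stage is the standard minimum-variance linear fusion, which is why accuracy approaches that of the optimal estimator while the first stage preserves robustness to heavy-tailed noise.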
Abstract: Ontologies are broadly used in the context of networked home environments. With ontologies it is possible to define and store context information, as well as to model different kinds of physical environments. Ontologies are central to networked home environments as they carry the meaning. However, ontologies and the OWL language are complex. Several ontology visualization approaches have been developed to enhance the understanding of ontologies, and the domain of networked home environments sets some special requirements for an ontology visualization approach. The visualization tool presented here visualizes ontologies in a domain-specific way: it effectively represents the physical structures and spatial relationships of networked home environments, and it provides extensive interaction possibilities for editing and manipulating the visualization. The tool shortens the path from beginner to intermediate OWL ontology reader by visualizing instances in their actual locations and by making OWL ontologies more interesting and concrete, and above all easier to comprehend.