Abstract: This paper covers key aspects of 2D-to-3D stereoscopic conversion and presents a conversion approach successfully applied in the current visual effects industry. Its purpose is to describe in detail a workflow, and the concepts behind it, that have been used successfully in 3D stereoscopic conversion of feature films, thereby clarifying the stereoscopic conversion production process. The aim is to give entry-level artists a clear overall understanding of 3D stereoscopy in the digital compositing field, to support higher education in visual effects, and to encourage further collaboration and participation between academia and industry.
Abstract: The ElectroEncephaloGram (EEG) is useful for
clinical diagnosis and biomedical research. EEG signals often
contain strong ElectroOculoGram (EOG) artifacts produced
by eye movements and eye blinks especially in EEG recorded
from frontal channels. These artifacts obscure the underlying
brain activity, making its visual or automated inspection
difficult. The goal of ocular artifact removal is to remove
ocular artifacts from the recorded EEG, leaving the underlying
background signals due to brain activity. In recent times,
Independent Component Analysis (ICA) algorithms have
demonstrated superior potential in obtaining the least
dependent source components. In this paper, the independent
components are obtained using the JADE algorithm, chosen for
its separation performance, and each component is classified as
either an artifact component or a neural component. A neural
network is used for this classification.
A neural network requires input features that faithfully represent
the character of the input signals, so that it can classify
signals based on the key characteristics that differentiate
them. In this
work, Auto Regressive (AR) coefficients are used as the input
features for classification. Two neural network approaches
are used to learn classification rules from EEG data. First, a
Polynomial Neural Network (PNN) trained by GMDH (Group
Method of Data Handling) algorithm is used; second, a
feed-forward neural network classifier trained by standard
back-propagation is used for classification. The results show
that JADE-FNN performs better than JADE-PNN.
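The AR feature-extraction step described above can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation: the model order and the toy AR(1) signal are assumptions, and the classifier stage is omitted.

```python
import numpy as np

def ar_coefficients(signal, order=6):
    """Estimate AR(order) coefficients via the Yule-Walker equations:
    solve R a = r, where R is the autocorrelation matrix and r the
    autocorrelation vector at lags 1..order."""
    x = np.asarray(signal, dtype=float)
    x = x - x.mean()
    n = len(x)
    # Biased autocorrelation estimates at lags 0..order
    acf = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(order + 1)])
    R = np.array([[acf[abs(i - j)] for j in range(order)] for i in range(order)])
    return np.linalg.solve(R, acf[1:order + 1])

# Toy example: an AR(1) process x[t] = 0.8 x[t-1] + noise, so the
# estimated coefficient should be close to 0.8
rng = np.random.default_rng(0)
x = np.zeros(2000)
for t in range(1, 2000):
    x[t] = 0.8 * x[t - 1] + rng.standard_normal()
coeffs = ar_coefficients(x, order=1)
```

In a full pipeline, such coefficient vectors (one per independent component epoch) would form the input features to the neural network classifier.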
Abstract: Information hiding, especially watermarking is a
promising technique for the protection of intellectual property rights.
This technology has advanced mainly for multimedia, while
comparatively little has been done for text. Web pages, like other
documents, need protection against piracy. In this paper, some techniques are
proposed to show how to hide information in web pages using some
features of the markup language used to describe these pages. Most
of the techniques proposed here hide information in white space
or in the alternative ways the language offers for representing
elements. Experiments on a very small page, together with an
analysis of five thousand web pages, show that these techniques
offer a wide bandwidth for information hiding and might form a
solid base to develop a robust algorithm for web page watermarking.
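One white-space technique can be sketched as follows. The bit-per-line scheme below is a simplified assumption for illustration, not necessarily one of the paper's exact techniques: HTML renderers ignore trailing spaces, so a trailing space on a line can encode a 1 and its absence a 0.

```python
def embed_bits(html_lines, bits):
    """Hide one bit per line: a trailing space encodes 1, none encodes 0."""
    if len(bits) > len(html_lines):
        raise ValueError("cover page too small for the payload")
    out = []
    for i, line in enumerate(html_lines):
        line = line.rstrip()
        if i < len(bits) and bits[i] == "1":
            line += " "
        out.append(line)
    return out

def extract_bits(html_lines, n_bits):
    """Recover the payload by inspecting line endings."""
    return "".join("1" if line.endswith(" ") else "0"
                   for line in html_lines[:n_bits])

page = ["<html>", "<body>", "<p>Hello</p>", "</body>", "</html>"]
stego = embed_bits(page, "1011")
recovered = extract_bits(stego, 4)   # → "1011"
```

The rendered page is visually unchanged, which is what gives such techniques their bandwidth; robustness against re-formatting tools is a separate concern.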
Abstract: One of the main advantages of the LO paradigm is to
allow the availability of good quality, shareable learning material
through the Web. The effectiveness of the retrieval process requires a
formal description of the resources (metadata) that closely fits the
user's search criteria; in spite of the huge international efforts in this
field, educational metadata schemata often fail to fulfil this
requirement. This work aims to improve the situation, by the
definition of a metadata model capturing specific didactic features of
shareable learning resources. It classifies LOs into “teacher-oriented”
and “student-oriented” categories, in order to describe the role a LO
is to play when it is integrated into the educational process. This
article describes the model and a first experimental validation process
that has been carried out in a controlled environment.
Abstract: In this paper, we propose novel algorithmic models
based on information fusion and feature transformation in cross-modal
subspace for different types of residue features extracted from
several intra-frame and inter-frame pixel sub-blocks in video
sequences for detecting digital video tampering or forgery. An
evaluation of proposed residue features – the noise residue features
and the quantization features, their transformation in cross-modal
subspace, and their multimodal fusion, for emulated copy-move
tamper scenario shows a significant improvement in tamper detection
accuracy as compared to single mode features without transformation
in cross-modal subspace.
Abstract: In the process of globalization, where a struggle for people's minds and values is taking place, the impact of virtual space can cause unexpected effects and consequences in how young people adjust to the world. Its particular significance lies in its unconscious influence on underlying meaning-making processes; the values it promotes are therefore much more effective and affect both personal characteristics and the adjustment process itself. The challenge, then, is to identify the factors influencing how virtual subjects are reflected upon, and to measure their impact on the personal characteristics of students.
Abstract: In this paper we propose a computational model for the representation and processing of morpho-phonological phenomena in a natural language, like Modern Greek. We aim at a unified treatment of inflection, compounding, and word-internal phonological changes, in a model that is used for both analysis and generation. After discussing certain difficulties caused by well-known finite-state approaches, such as Koskenniemi's two-level model [7], when applied to a computational treatment of compounding, we argue that a morphology-based model provides a more adequate account of word-internal phenomena. Contrary to the finite-state approaches, which cannot handle hierarchical word constituency in a satisfactory way, we propose a unification-based word grammar, as the nucleus of our strategy, which takes into consideration word representations that are based on affixation and [stem stem] or [stem word] compounds. In our formalism, feature-passing operations are formulated with the use of the unification device, and phonological rules modeling the correspondence between lexical and surface forms apply at morpheme boundaries. In the paper, examples from Modern Greek illustrate our approach. Morpheme structures, stress, and morphologically conditioned phoneme changes are analyzed and generated in a principled way.
Abstract: Many approaches to handwritten Thai character
recognition have been proposed, such as comparing the heads of
characters, fuzzy logic, and structure trees. This paper presents a
system of handwritten Thai character recognition based on
the Ant-miner algorithm (data mining based on Ant colony
optimization). Zoning is first used to partition each character.
Then three distinct features (also called attributes) of each character
in each zone are extracted. The attributes are Head zone, End point,
and Feature code. All attributes are used to construct the
classification rules with the Ant-miner algorithm in order to classify
112 Thai characters. For this experiment, the Ant-miner algorithm is
adapted, with a small change to increase the recognition rate. The
result of this experiment is a 97% recognition rate of the training set
(11200 characters) and 82.7% recognition rate of unseen data test
(22400 characters).
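The zoning step can be illustrated with a minimal sketch. The 3×3 grid and the per-zone pixel-density feature below are assumptions for illustration; the paper's actual attributes are Head zone, End point, and Feature code.

```python
import numpy as np

def zoning_features(char_image, grid=(3, 3)):
    """Split a binary character image into grid zones and return the
    foreground-pixel density of each zone as a flat feature vector."""
    img = np.asarray(char_image, dtype=float)
    rows, cols = grid
    h, w = img.shape
    feats = []
    for i in range(rows):
        for j in range(cols):
            zone = img[i * h // rows:(i + 1) * h // rows,
                       j * w // cols:(j + 1) * w // cols]
            feats.append(zone.mean())   # fraction of foreground pixels
    return np.array(feats)

# Toy 6x6 "character": foreground only in the top-left corner,
# so only the first zone has non-zero density
img = np.zeros((6, 6))
img[:2, :2] = 1
feats = zoning_features(img, grid=(3, 3))
```

Per-zone features like these would then feed the rule-construction stage of a classifier such as Ant-miner.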
Abstract: The new framework in which Higher Education is
immersed involves a complete change in the way lecturers must
teach and students must learn. Whereas the lecturer was the main
character in traditional education, the essential goal now is to
increase the students' participation in the process. Thus, one of the
main tasks of lecturers in this new context is to design activities of
different nature in order to encourage such participation. Seminars
are one of the activities included in this environment. They are active
sessions that enable going in depth into specific topics as support of
other activities. They are characterized by some features such as
favoring interaction between students and lecturers or improving
their communication skills. Hence, planning and organizing strategic
seminars is indeed a great challenge for lecturers with the aim of
acquiring knowledge and abilities. This paper proposes a method
using Artificial Intelligence techniques to obtain student profiles
from their marks and preferences. The goal of building such profiles
is twofold. First, it facilitates the task of splitting the students into
different groups, each group with similar preferences and learning
difficulties. Second, it makes it easy to select adequate
candidate topics for the seminars. The results obtained can either
confirm what lecturers observe during the development of the
course or point to new methodological strategies to consider for
certain topics.
Abstract: In the automotive industry test drives are being conducted
during the development of new vehicle models or as a part of
quality assurance of series-production vehicles. The communication
on the in-vehicle network, data from external sensors, or internal
data from the electronic control units is recorded by automotive
data loggers during the test drives. The recordings are used for fault
analysis. Since the resulting data volume is tremendous, manually
analysing each recording in great detail is not feasible.
This paper proposes to use machine learning to support domain
experts by sparing them from sifting through irrelevant data and
instead pointing them to the relevant parts of the recordings. The
underlying idea is to learn the normal behaviour from available
recordings, i.e. a training set, and then to autonomously detect
unexpected deviations and report them as anomalies.
The one-class support vector machine “support vector data description”
is utilised to calculate distances between feature vectors. SVDDSUBSEQ
is proposed as a novel approach that classifies subsequences
in multivariate time series data. The approach allows
unexpected faults to be detected without modelling effort, as is
shown with experimental results on recordings from test drives.
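The subsequence idea can be sketched with a minimal distance-based stand-in: score each test subsequence by its distance to the nearest training subsequence, flagging large distances as anomalies. This is an assumption-laden simplification of the SVDD boundary, not the paper's SVDDSUBSEQ implementation, and the toy signal is invented.

```python
import numpy as np

def sliding_windows(series, width):
    """Flatten overlapping subsequences of a (time x channels) series."""
    return np.array([series[i:i + width].ravel()
                     for i in range(len(series) - width + 1)])

def anomaly_scores(train_series, test_series, width=5):
    """Score each test subsequence by its distance to the nearest
    training subsequence (learned 'normal behaviour')."""
    train = sliding_windows(train_series, width)
    test = sliding_windows(test_series, width)
    scores = []
    for w in test:
        d = np.linalg.norm(train - w, axis=1)
        scores.append(d.min())
    return np.array(scores)

# Normal behaviour: a flat two-channel signal; the test drive
# contains an injected fault (a spike) at time step 25
train = np.zeros((100, 2))
test = np.zeros((50, 2))
test[25, 0] = 10.0
scores = anomaly_scores(train, test, width=5)
flagged = int(np.argmax(scores))   # window index covering the spike
```

Any subsequence whose score exceeds a threshold learned from the training set would be reported to the domain expert, which mirrors the "point to the relevant parts" workflow.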
Abstract: We present a system that finds road boundaries and
constructs a virtual lane based on fused data from a laser and a
monocular sensor, and detects the position of the forward vehicle even
when lane markers are absent or environmental conditions are bad.
When the road environment is dark, or many vehicles are parked on
both sides of the road, it is difficult to detect the lane and road
boundary. For this reason, we fuse the laser and vision sensors to
acquire three-dimensional data and extract the road boundary. We use
a parabolic road model, based on the vehicle and sensor state
parameters, to calculate road boundaries and construct the virtual
lane. We then determine the vehicle position in each lane.
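A parabolic road model can be illustrated with a simple least-squares fit. The coordinate convention (y ahead of the vehicle, x lateral offset) and the coefficient values below are assumptions for illustration, not the paper's parameters.

```python
import numpy as np

def fit_parabolic_boundary(points):
    """Fit x = a*y^2 + b*y + c to road-boundary points, where y is the
    longitudinal distance ahead and x the lateral offset."""
    y = np.array([p[1] for p in points], dtype=float)
    x = np.array([p[0] for p in points], dtype=float)
    return np.polyfit(y, x, 2)   # returns (a, b, c), highest degree first

# Synthetic boundary points generated from x = 0.01*y^2 + 0.1*y + 1.5
ys = np.linspace(0, 30, 20)
xs = 0.01 * ys**2 + 0.1 * ys + 1.5
a, b, c = fit_parabolic_boundary(list(zip(xs, ys)))
```

Fitting the left and right boundaries separately and averaging the two polynomials would give a centreline for a virtual lane of this kind.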
Abstract: Results of Chilean wine classification based on the
information provided by an electronic nose are reported in this paper.
The classification scheme consists of two parts; in the first stage,
Principal Component Analysis is used as feature extraction method to
reduce the dimensionality of the original information. Then, a Radial
Basis Function Neural Network is used as the pattern recognition
technique to perform the classification. The objective of this study is
to classify different Cabernet Sauvignon, Merlot and Carménère wine
samples from different years, valleys and vineyards of Chile.
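The PCA feature-extraction stage can be sketched in plain NumPy. The toy "sensor array" data below is invented, and the subsequent RBF-network classifier is not shown.

```python
import numpy as np

def pca_transform(X, n_components):
    """Project X onto its first n_components principal directions,
    computed from the SVD of the mean-centred data."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

# Toy electronic-nose responses: 10 samples x 8 sensors with an
# underlying rank-2 structure plus a little noise
rng = np.random.default_rng(1)
latent = rng.standard_normal((10, 2))
X = latent @ rng.standard_normal((2, 8)) + 0.01 * rng.standard_normal((10, 8))
Z = pca_transform(X, 2)   # reduced 2-D features for the classifier
```

Because singular values are returned in descending order, the first projected component always carries at least as much variance as the second, which is what makes the reduced features useful for the downstream classifier.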
Abstract: A key to success of high quality software development
is to define valid and feasible requirements specification. We have
proposed a method of model-driven requirements analysis using
Unified Modeling Language (UML). The main feature of our method
is to automatically generate a Web user interface mock-up from UML
requirements analysis model so that we can confirm validity of
input/output data for each page and page transition on the system by
directly operating the mock-up. This paper proposes a support method
to check the validity of a data life cycle by using a model checking tool
“UPPAAL" focusing on CRUD (Create, Read, Update and Delete).
Exhaustive checking improves the quality of the requirements analysis
model, which is validated by the customers through the automatically
generated mock-up. The effectiveness of our method is discussed
through a case study of requirements modeling for two small projects:
a library management system and a textbook sales support system for
a university.
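The CRUD focus can be illustrated with a much-simplified static completeness check. This is not UPPAAL model checking, which verifies temporal orderings over a data life cycle; the page and entity names below are invented for a library-system flavour.

```python
# Each page maps to the CRUD operations it performs on each data entity.
pages = {
    "register_book": {"Book": {"C"}},
    "list_books":    {"Book": {"R"}},
    "edit_book":     {"Book": {"U"}},
    "lend_book":     {"Loan": {"C"}, "Book": {"R", "U"}},
}

def crud_gaps(pages):
    """Aggregate the operations seen per entity across all pages and
    report which CRUD letters are never exercised."""
    seen = {}
    for ops_by_entity in pages.values():
        for entity, ops in ops_by_entity.items():
            seen.setdefault(entity, set()).update(ops)
    return {e: set("CRUD") - ops
            for e, ops in seen.items() if set("CRUD") - ops}

gaps = crud_gaps(pages)   # e.g. no page ever deletes a Book
```

A gap such as a missing Delete for an entity is exactly the kind of data-life-cycle defect that exhaustive checking of the requirements model is meant to surface before mock-up validation.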
Abstract: This research work is aimed at speech recognition
using scaly neural networks. A small vocabulary of 11 words was
established first: “word, file, open, print, exit, edit,
cut, copy, paste, doc1, doc2”. These words are associated with
computer functions such as opening a file, printing a text
document, cutting, copying, pasting, editing, and exiting.
Each word is input to the computer and then subjected to feature
extraction using LPC (linear prediction coefficients). These features
are used as input to an artificial neural network in speaker-dependent
mode. Half of the words are used for training the artificial neural
network and the other half for testing the system; the latter are
used for information retrieval.
The system consists of three parts: speech processing and feature
extraction, training and testing using neural networks, and
information retrieval.
The retrieval process proved to be 79.5-88% successful, which is
quite acceptable considering variations in the surroundings, the
state of the speaker, and the microphone type.
Abstract: This paper presents a region-based segmentation method for ultrasound images using local statistics. In this segmentation approach the homogeneous regions depend on the image granularity features, where the structures of interest, with dimensions comparable to the speckle size, are to be extracted. The method uses a look-up table comprising the local statistics of every pixel, which consist of the homogeneity and similarity bounds according to the kernel size. The shape and size of the growing regions depend on these look-up table entries. The algorithm is implemented using a connected seeded region growing procedure where each pixel is taken as a seed point. Region merging after the region growing also suppresses high-frequency artifacts. The updated merged regions produce the output in the form of a segmented image. This algorithm produces results that are less sensitive to the pixel location, and it also allows accurate segmentation of homogeneous regions.
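Connected seeded region growing can be sketched as follows. A fixed intensity tolerance stands in for the paper's per-pixel homogeneity and similarity bounds, and the toy image is invented.

```python
import numpy as np
from collections import deque

def region_grow(image, seed, tol):
    """Grow a 4-connected region from seed, accepting neighbours whose
    intensity is within tol of the seed pixel (a simplified homogeneity
    bound; the paper derives per-pixel bounds from local statistics)."""
    h, w = image.shape
    seed_val = image[seed]
    mask = np.zeros((h, w), dtype=bool)
    queue = deque([seed])
    mask[seed] = True
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and not mask[nr, nc] \
                    and abs(image[nr, nc] - seed_val) <= tol:
                mask[nr, nc] = True
                queue.append((nr, nc))
    return mask

# Toy image: a bright 3x3 square on a dark background; growing from a
# seed inside the square recovers exactly those 9 pixels
img = np.zeros((8, 8))
img[2:5, 2:5] = 100.0
mask = region_grow(img, (3, 3), tol=10.0)
```

Running this from every pixel as a seed and then merging overlapping regions, as the paper describes, yields the final segmented image.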
Abstract: It is well recognized that one feature of a
successful company is its ability to align its business goals with its information and communication technology platform.
Enterprise Resource Planning (ERP) systems contribute to better performance by integrating various business functions and
providing support for information flows. However, the complexity
of these technological systems is known to prevent business users from exploiting ERP systems efficiently.
This paper aims to investigate the role of training in improving the
usage of ERP systems. To this end, we designed a survey instrument
administered to employees of a Norwegian multinational global provider of
technology solutions. Based on the analysis of the collected data, we have delineated a training model that could be of high relevance for
both researchers and practitioners as a step towards a better
understanding of ERP system implementation.
Abstract: Traffic Engineering (TE) is the process of controlling
how traffic flows through a network in order to facilitate efficient and
reliable network operations while simultaneously optimizing network
resource utilization and traffic performance. TE improves the
management of data traffic within a network and provides better
utilization of network resources. Many research works consider intra-
and inter-AS Traffic Engineering separately, but in reality one
influences the other, so the network performance of both inter- and
intra-Autonomous System (AS) traffic is not optimized properly. To
achieve better joint optimization of both intra- and inter-AS TE, we
propose a joint optimization technique that considers intra-AS
features during inter-AS TE and vice versa. This work considers an
important criterion, latency, both within an AS and between ASes,
and proposes a bi-criteria latency optimization model. Overall
network performance can thereby be improved, in terms of latency, by
this joint-optimization technique.
Abstract: The aim of this article is to explain how attack features can be extracted from packets, and how feature vectors can be built and then applied to the input of any analysis stage. For the analysis, the work deploys a feed-forward back-propagation neural network to act as a misuse intrusion detection system. It uses ten types of attacks as examples for training and testing the neural network, and explains how the packets are analyzed to extract features. The work shows how selecting the right features, building correct vectors, and correctly choosing the training method and the number of nodes in the hidden layer of the neural network affect the accuracy of the system. In addition, the work shows how to obtain optimal weight values and use them to initialize the Artificial Neural Network.
Abstract: In this paper we describe a computer-aided diagnosis (CAD) system for automated detection of pulmonary nodules in computed-tomography (CT) images. After extracting the pulmonary parenchyma using a combination of image processing techniques, a region growing method is applied to detect nodules based on 3D geometric features. We applied the CAD system to CT scans collected in a screening program for lung cancer detection. Each scan consists of a sequence of about 300 slices stored in DICOM (Digital Imaging and Communications in Medicine) format. All malignant nodules were detected and a low false-positive detection rate was achieved.
Abstract: Sickness absence represents a major economic and
social issue. Analysis of sick leave data is a recurrent challenge to analysts because of the complexity of the data structure which is
often time dependent, highly skewed and clumped at zero. Ignoring these features to make statistical inference is likely to be inefficient
and misguided. Traditional approaches do not address these problems. In this study, we discuss model methodologies in terms of statistical techniques for addressing the difficulties with sick leave data. We also introduce and demonstrate a new method by performing a longitudinal assessment of long-term absenteeism, using
a large register dataset from the Helsinki Health Study of Finnish municipal employees during the period 1990-1999 as a working example. We present a comparative study on model
selection and a critical analysis of the temporal trends, the occurrence,
and the degree of long-term sickness absences among municipal employees. The strengths of this working example include the large
sample size over a long follow-up period, providing strong evidence in support of the new model. Our main goal is to propose a way to
select an appropriate model and to introduce a new methodology for analysing sickness absence data as well as to demonstrate model
applicability to complicated longitudinal data.