Abstract: Social, mobility and information aggregation in business
environments need to converge to reach the next step of
collaboration and to enhance interaction and innovation. This
article is based on the “Assemblage” concept, seen as a framework
for formalizing new user interfaces and applications. The area of
research is the Energy Social Business Environment, especially
Energy Smart Grids, which are considered the functional and
technical foundations of the coming revolution of the energy
sector. The assemblages are modeled by means of mereology and
simplicial complexes. The objective is to offer end-users new tools
for focused attention and decision-making.
Abstract: H.264/AVC offers considerably higher coding efficiency
than earlier compression standards such as MPEG-2, but its
computational complexity is significantly increased. In this paper,
we propose selective mode decision schemes for fast intra
prediction mode selection. The objective is to reduce the
computational complexity of the H.264/AVC encoder without
significant rate-distortion performance degradation. In the
proposed schemes, intra prediction complexity is reduced by
limiting the luma and chroma prediction modes using the directional
information of the 16×16 prediction mode. Experimental results show
that the proposed schemes reduce complexity by up to 78% while
maintaining similar PSNR quality, with an average bit-rate increase
of about 1.46%.
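The pruning idea can be sketched as follows. The mode numbers follow the H.264/AVC standard, but the specific candidate sets below are illustrative assumptions, not the paper's exact scheme:

```python
# Sketch of directional mode pruning for H.264/AVC intra prediction.
# 16x16 luma modes (per the standard): 0=vertical, 1=horizontal,
# 2=DC, 3=plane; 4x4 luma modes: 0=vertical, 1=horizontal, 2=DC,
# 3..8=diagonal/angled modes.  The candidate sets are hypothetical.

CANDIDATES_4X4 = {
    0: [0, 2, 3, 7],   # vertical 16x16 -> vertical-leaning 4x4 modes
    1: [1, 2, 6, 8],   # horizontal 16x16 -> horizontal-leaning modes
    2: [0, 1, 2],      # DC 16x16 -> the three cheapest modes
    3: [2, 3, 4, 5],   # plane 16x16 -> diagonal modes
}

def pruned_4x4_modes(best_16x16_mode):
    """Return a reduced 4x4 candidate set instead of all nine modes."""
    return CANDIDATES_4X4.get(best_16x16_mode, list(range(9)))
```

Searching three or four candidates instead of nine is where the complexity saving comes from; the rate-distortion cost is then evaluated only over the reduced set.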
Abstract: In this paper, a novel algorithm based on the Ridgelet
transform and support vector machines is proposed for human action
recognition. The Ridgelet transform is a directional
multi-resolution transform, well suited to describing human actions
by capturing their directional information in spatial feature
vectors. The dynamic transitions between these spatial features are
modeled using Principal Component Analysis and the k-means
clustering algorithm. First, Principal Component Analysis is used
to reduce the dimensionality of the obtained vectors. Then, the
k-means algorithm clusters the reduced vectors into a
spatio-temporal pattern, called a set-of-labels, according to the
given periodicity of the human action. Finally, a Support Vector
Machine classifier is used to discriminate between the different
human actions. Tests are conducted on popular datasets such as
Weizmann and KTH. The obtained results show that the proposed
method provides a high accuracy rate and remains robust in very
challenging situations such as lighting changes, scaling and
dynamic environments.
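The PCA-then-k-means stage of such a pipeline can be sketched in a few lines. The implementation below is a minimal stand-in (SVD-based PCA and plain Lloyd's algorithm with a deterministic initialization), not the authors' code:

```python
import numpy as np

def pca_reduce(X, k):
    # Center the feature vectors and project onto the top-k
    # principal directions, obtained via SVD.
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

def kmeans_labels(X, k, iters=50):
    # Lloyd's algorithm; returns one cluster label per frame vector,
    # i.e. the "set-of-labels" spatio-temporal pattern.
    centers = X[np.linspace(0, len(X) - 1, k).astype(int)].copy()
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    return labels
```

The resulting label sequence would then be fed, per action period, to the SVM classifier.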
Abstract: Information about nodes’ locations is an important
requirement for many applications in Wireless Sensor Networks. In
hop-based range-free localization methods, anchors broadcast
localization messages carrying a hop-count value to the whole
network. Each node receives these messages, computes its distance
to each anchor in hops, and then approximates its own position.
However, the estimated distances can introduce large errors and
degrade localization precision. To address this problem, this paper
proposes an algorithm in which each unknown node takes the nearest
anchor as a reference and selects the two other most accurate
anchors to estimate its location. Compared to the DV-Hop algorithm,
experimental results show that the proposed algorithm has lower
average localization error and is more effective.
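The hop-to-distance conversion and the three-anchor position estimate at the heart of DV-Hop-style methods can be sketched as follows; the proposed algorithm's anchor-selection rule is not reproduced here, only the standard building blocks:

```python
import math

def hop_size(anchor, other_anchors, hops_between):
    # DV-Hop correction factor: average physical distance per hop,
    # computed from known anchor-to-anchor distances and hop counts.
    total_dist = sum(math.dist(anchor, a) for a in other_anchors)
    return total_dist / sum(hops_between)

def estimate_position(anchors, dists):
    # Linearized trilateration with three anchors in 2-D, solved in
    # closed form.  A node's distance to each anchor is its hop count
    # times the hop size of its reference anchor.
    (x1, y1), (x2, y2), (x3, y3) = anchors
    d1, d2, d3 = dists
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x2), 2 * (y3 - y2)
    c2 = d2**2 - d3**2 + x3**2 - x2**2 + y3**2 - y2**2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)
```

With exact distances the solver recovers the true position; in practice the hop-based distance estimates carry the error the paper's anchor selection aims to reduce.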
Abstract: The vast amount of information hidden in huge
databases has created tremendous interest in the field of data
mining. This paper examines the possibility of using data
clustering techniques in oral medicine to identify functional
relationships between different attributes and to classify similar
patient examinations. Commonly used data clustering algorithms are
reviewed, and several interesting results are reported.
Abstract: The Continuously Adaptive Mean-Shift (CamShift)
algorithm, incorporating scene depth information, is combined with
an l1-minimization sparse-representation method to form a hybrid
kernel- and state-space-based tracking algorithm. We take advantage
of the efficiency of the former and the robustness to occlusion of
the latter. A simple interchange scheme transfers control between
the algorithms based on drift and occlusion likelihood, quantified
by projecting target candidates onto a depth map of the 2D scene
obtained with a low-cost stereo vision webcam. The result is
improved tracking, in terms of drift, over each algorithm
individually in a challenging practical outdoor multiple-occlusion
test case.
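The interchange scheme reduces to a simple rule: estimate the occlusion likelihood from the depth map and hand control to the sparse tracker when occlusion or drift is likely. The thresholds and the depth margin below are illustrative assumptions, not values from the paper:

```python
def occlusion_likelihood(candidate_depths, target_depth, margin=0.1):
    # Fraction of pixels in the candidate window that lie in front of
    # the target according to the stereo depth map.
    closer = [d for d in candidate_depths if d < target_depth - margin]
    return len(closer) / len(candidate_depths)

def choose_tracker(drift_score, occ_likelihood,
                   drift_thresh=0.5, occ_thresh=0.5):
    # Hand control to the l1-minimization sparse tracker when
    # occlusion or drift is likely; otherwise use the cheaper
    # CamShift tracker.  Thresholds are hypothetical.
    if occ_likelihood > occ_thresh or drift_score > drift_thresh:
        return "l1-sparse"
    return "camshift"
```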
Abstract: This paper deals with automatic sentence modality
recognition in French. In this work, only prosodic features are
considered. Sentences are classified into three modalities:
declarative, interrogative and exclamatory. This information will
be used to animate a talking head for deaf and hearing-impaired
children. We first statistically study a real radio corpus in order
to assess the feasibility of automatically modeling sentence types.
Then, we test two sets of prosodic features as well as two
different classifiers and their combination. We further focus on
question recognition, as this modality is certainly the most
important one for the target application.
Abstract: This study aimed at developing a forecasting model for the number of Dengue Haemorrhagic Fever (DHF) incidences in Northern Thailand using time series analysis. We developed Seasonal Autoregressive Integrated Moving Average (SARIMA) models on data collected between 2003 and 2006, and then validated the models using data collected between January and September 2007. The results showed that the forecast curves were consistent with the pattern of actual values. The most suitable model was the SARIMA(2,0,1)(0,2,0)12 model, with an Akaike Information Criterion (AIC) of 12.2931 and a Mean Absolute Percent Error (MAPE) of 8.91713. The SARIMA(2,0,1)(0,2,0)12 model fit the data adequately, with a Portmanteau statistic Q20 = 8.98644 (χ²0.95 = 27.5871, P > 0.05). This indicated that there was no significant autocorrelation between residuals at different lag times in the SARIMA(2,0,1)(0,2,0)12 model.
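The validation metrics used above are standard and easy to state in code. A minimal sketch (the study's actual fitting would be done with a SARIMA estimation package, not these helpers):

```python
def mape(actual, forecast):
    # Mean Absolute Percent Error, as used to validate the forecasts.
    return 100 * sum(abs((a - f) / a)
                     for a, f in zip(actual, forecast)) / len(actual)

def aic(log_likelihood, n_params):
    # Akaike Information Criterion: lower is better.
    return 2 * n_params - 2 * log_likelihood

def seasonal_diff(series, lag=12, order=1):
    # Seasonal differencing at lag 12; applied with order=2 it
    # corresponds to the (0,2,0)12 seasonal part of the model.
    for _ in range(order):
        series = [series[i] - series[i - lag]
                  for i in range(lag, len(series))]
    return series
```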
Abstract: In today's highly globalised and competitive world,
access to information plays a key role in gaining an upper hand
over business rivals. Hence, proper protection of such a crucial
resource is core to any modern business. Implementing a successful
information security system is essentially centered around three
pillars: a technical solution involving both software and hardware,
information security controls that translate the policies and
procedures into the system, and the people who implement them. This
paper shows that much remains to be done in countries adopting
information technology to process, store and distribute
information, if such a core resource is to be adequately secured.
Abstract: The aim of this study is to discover secondary school students' perceptions of information technologies and the connections between concepts in their cognitive structures. A word association test consisting of six concepts related to information technologies was used to collect data from 244 secondary school students. Concept maps presenting the students' cognitive structures were drawn with the help of frequency data, and the data were analyzed and interpreted according to the connections revealed by the concept maps. Of the given concepts, students associate most with computer, Internet, and communication, and least with computer-assisted education and information technologies. These results show that the concepts Internet, communication, and computer are an important part of students' cognitive structures. In addition, students mostly answer computer, phone, game, Internet and Facebook as the key concepts. These answers show that students regard information technologies as a means of entertainment and free-time activity, not as a means of education.
Abstract: Environmental impact assessment (EIA) is a procedural tool of environmental management for identifying, predicting, evaluating and mitigating the adverse effects of development proposals. EIA reports usually analyze whether the amounts or concentrations of pollutants comply with the relevant standards. In fact, many analytical tools can deepen the analysis of environmental impacts in EIA reports, such as life cycle assessment (LCA) and environmental risk assessment (ERA). Life cycle impact assessment (LCIA) is the step in LCA that introduces the causal relationships between environmental hazards and damage. Incorporating the LCIA concept into ERA as an integrated tool for EIA can extend the focus from the regulatory compliance of environmental impacts to determining their significance. When using such integrated tools, it is sometimes necessary to consider fuzzy situations due to insufficient information; therefore, ERA should be generalized to fuzzy risk assessment (FRA). Finally, the use of the proposed methodology is demonstrated through a case study of the expansion plan of the world's largest plastics processing factory.
Abstract: The purpose of this paper is to provide an overview of methodological aspects of information technology outsourcing (ITO) surveys, in an attempt to improve data quality and reporting in survey research. It is based on a review of thirty articles on ITO surveys and focuses on two commonly explored dimensions of ITO, namely what is outsourced and why. This study highlights weaknesses in ITO surveys, including the lack of a clear definition of the population, lack of information regarding the sampling method used, failure to cite the response rate, no information pertaining to pilot testing of the survey instrument, and absence of information on internal validity in the use or reporting of surveys. This study represents an attempt, with a limited scope, to point to shortfalls in the use of survey methodology in ITO, and thus to raise awareness among researchers of the need to enhance the reliability of survey findings.
Abstract: People from different cultures favor web pages
characterized by the values of their culture and therefore tend to
prefer different characteristics of a website, according to their
cultural values, in terms of navigation, security, product
information, customer service, shopping and design tools. For a
company aiming to globalize its market, it is useful to implement
country-specific cultural interfaces and different websites for
countries with different cultures. Following the conclusions of the
models of Hall and Hofstede and the studies of Marcus and Gould,
this paper defines, through an empirical analysis, web design
guidelines for both the Scandinavian countries and Malaysia.
Abstract: Learning Management Systems provide a learning
environment offering a collection of e-learning tools in a package
with a common interface and information sharing among the tools.
South East European University's initial LMS experience was with
the commercial LMS ANGEL. After three years of using ANGEL, its
very high cost led to the decision to develop our own software. As
part of the research project team for the in-house design and
development of the new LMS, we first had to select the features
that would cover our needs and comply with current trends in
software development, and then design and develop the system. In
this paper we present the process of in-house LMS development for
South East European University, its architecture, conception and
strengths, with special emphasis on the process of migration and
integration with other enterprise applications.
Abstract: Using animated videos of teaching materials is an
effective learning method. However, we believe an even more
effective method is for learners to produce the teaching videos
themselves. Learners who act as producers must learn and understand
the material well in order to produce and present videos of
teaching materials to others. The purpose of this study is to
propose a project-based learning (PBL) technique based on
co-producing videos of IT (information technology) teaching
materials. We used the T2V player to produce the videos, based on
TVML, a TV program description language. Using the proposed method,
we assigned learners to produce animated videos for the “National
Examination for Information Processing Technicians (IPA
examination)” in Japan, in order to have them learn various
knowledge and skills in the IT field. Experimental results showed
that a learning effect occurred during the video production
process, which is useful for developing IT personnel resources.
Abstract: The problem of Small Area Estimation (SAE) is complex because of the variety of information sources and insufficient data. In this paper, an approach to SAE is presented for decision-making at the national, regional and local levels. We propose an Empirical Best Linear Unbiased Predictor (EBLUP) as an estimator that combines several information sources to evaluate various indicators. First, we present the urban audit project and its environmental, social and economic indicators. Second, we propose an approach for decision-making based on estimating these indicators. An application is used to validate the theoretical proposal. Finally, a decision support system built on an open-source environment is presented.
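A common area-level form of the EBLUP is a Fay-Herriot-style composite estimator; whether the paper uses exactly this form is an assumption, but it illustrates how a direct survey estimate and a model-based estimate are combined:

```python
def eblup(direct_est, synthetic_est, model_var, sampling_var):
    # Fay-Herriot style composite estimator: shrink the direct survey
    # estimate toward the synthetic (model-based) estimate.  The
    # shrinkage weight gamma grows when the model variance dominates
    # the sampling variance, i.e. when the direct estimate is reliable.
    gamma = model_var / (model_var + sampling_var)
    return gamma * direct_est + (1 - gamma) * synthetic_est
```

Areas with small samples (large sampling variance) thus borrow strength from the model, which is the point of combining several information sources.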
Abstract: Traffic density, an indicator of traffic
conditions, is one of the most critical characteristics for
Intelligent Transport Systems (ITS). This paper investigates
recursive traffic density estimation using information
provided by inductive loop detectors. On the basis of the
phenomenological relationship between speed and density,
existing studies incorporate a state space model and update
the density estimate using vehicular speed observations via
the extended Kalman filter, where an approximation is made by
linearizing the nonlinear observation equation. In practice,
this may lead to substantial estimation errors. This paper
incorporates a suitable transformation to deal with the
nonlinear observation equation, so that the approximation is
avoided when using the Kalman filter to estimate traffic
density. A numerical study shows that the developed method
outperforms existing methods for traffic density estimation.
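To illustrate the idea of transforming the observation equation, consider an assumed Greenshields-type speed-density law v = vf(1 - k/kj): the transformed observation y = vf - v = (vf/kj)k is linear in the density k, so a plain Kalman filter applies with no linearization. A minimal scalar sketch, with all parameter values purely illustrative:

```python
def kalman_density(speeds, vf=100.0, kj=120.0, q=1.0, r=25.0,
                   k0=30.0, p0=10.0):
    # Scalar Kalman filter for density k under an assumed
    # Greenshields law v = vf * (1 - k / kj).  The transformed
    # observation y = vf - v = (vf / kj) * k is linear in k.
    h = vf / kj                       # linear observation coefficient
    k, p = k0, p0
    estimates = []
    for v in speeds:
        p = p + q                     # predict (random-walk state)
        y = vf - v                    # transformed observation
        s = h * p * h + r             # innovation variance
        gain = p * h / s
        k = k + gain * (y - h * k)    # update
        p = (1 - gain * h) * p
        estimates.append(k)
    return estimates
```

With speed held at 50 km/h and the parameters above, the estimate converges to y/h = 60 vehicles/km, the density implied by the assumed law, without any EKF-style approximation.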
Abstract: Text categorization techniques are widely used in many Information Retrieval (IR) applications. In this paper, we propose a simple but efficient method that can automatically find the relationship between any pair of terms and documents; an indexing matrix is also established for text categorization. We call this method the Indexing Matrix Categorization Machine (IMCM). Several experiments are conducted to show the efficiency and robustness of our algorithm.
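A minimal sketch of a term-document indexing matrix and a centroid-based category assignment; this illustrates the general idea only, not the paper's IMCM formulation:

```python
from collections import Counter

def build_index(docs, vocab):
    # Term-document indexing matrix: rows are terms, columns are
    # documents, entries are raw term frequencies.
    return [[Counter(doc)[t] for doc in docs] for t in vocab]

def categorize(doc, vocab, category_centroids):
    # Assign the category whose centroid vector scores highest by
    # dot product with the document's term-frequency vector.
    counts = Counter(doc)
    vec = [counts[t] for t in vocab]
    scores = {c: sum(a * b for a, b in zip(vec, cen))
              for c, cen in category_centroids.items()}
    return max(scores, key=scores.get)
```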
Abstract: Electro-optical devices are increasingly used in
military sea, land and air applications to detect, recognize and
track objects. Typically, these devices produce video information
that is presented to an operator. However, with the increasing
availability of electro-optical devices, the data volume is
becoming very large, creating a rising need for automated analysis.
In a military setting, this typically involves detecting and
recognizing objects at a large distance, i.e. when they are
difficult to distinguish from background and noise. One may
consider combining multiple images from a video stream into a
single enhanced image that provides more information to the
operator. In this paper we investigate a simple algorithm to
enhance simulated images from a military context and study how the
enhancement is affected by various types of disturbance.
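One simple way to combine multiple frames into an enhanced image, assuming the frames are already registered, is temporal averaging, which suppresses uncorrelated noise by roughly a factor of sqrt(N) while preserving the static object signal. This is an illustrative stand-in, since the abstract does not specify the algorithm used:

```python
def enhance(frames):
    # Temporal mean of N registered frames (lists of pixel rows):
    # uncorrelated noise averages out, the static scene remains.
    n = len(frames)
    h, w = len(frames[0]), len(frames[0][0])
    return [[sum(f[i][j] for f in frames) / n for j in range(w)]
            for i in range(h)]
```

For moving targets, frame registration (or a shift-and-add step) would be required before averaging.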
Abstract: Transferring information developed by other people is an ordinary event that happens during daily conversations: for example, when employees see each other in the organization, have lunch together, or attend a meeting, they tend to talk about their experience, discuss their current projects, and share their successes with specific problems. Despite the potential value of leveraging organizational memory and expertise by using OMS and ER, small organizations have still not been able to capitalize on their promised value. Each organization has its internal knowledge management system; in some organizations the system suffers from a lack of expert people to save their experience in the repository, while in others there are many expert people but the organization does not make maximum use of their knowledge.