Abstract: In this study, the Karhunen-Loeve Transform (KLT) has been
used to compress ECG signals. The aim of the method is to achieve
effective ECG coding by exploiting the relationship between the frame
length and the number of basis vectors retained for the ECG signals.
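As a sketch of the transform step described above, a minimal KLT (eigen-decomposition) compressor for framed 1-D signals can look as follows; the frame length, the number of retained basis vectors, and the synthetic test signal are illustrative assumptions, not the paper's actual configuration:

```python
import numpy as np

def klt_compress(signal, frame_len=32, n_vectors=8):
    """Compress by projecting frames onto the top-k KLT (eigen) basis."""
    n_frames = len(signal) // frame_len
    X = signal[:n_frames * frame_len].reshape(n_frames, frame_len)
    mean = X.mean(axis=0)
    Xc = X - mean
    # KLT basis = eigenvectors of the frame covariance matrix
    cov = Xc.T @ Xc / n_frames
    eigval, eigvec = np.linalg.eigh(cov)          # ascending eigenvalues
    basis = eigvec[:, ::-1][:, :n_vectors]        # keep top-k eigenvectors
    coeffs = Xc @ basis                           # compressed representation
    return coeffs, basis, mean

def klt_reconstruct(coeffs, basis, mean):
    return coeffs @ basis.T + mean

# Synthetic quasi-periodic test signal (illustration only, not ECG data)
t = np.linspace(0, 8, 1024)
sig = np.sin(2 * np.pi * 1.2 * t) + 0.3 * np.sin(2 * np.pi * 6 * t)
coeffs, basis, mean = klt_compress(sig, frame_len=32, n_vectors=8)
rec = klt_reconstruct(coeffs, basis, mean)
err = np.linalg.norm(rec.ravel() - sig[:rec.size]) / np.linalg.norm(sig[:rec.size])
```

The compression ratio is governed exactly by the two quantities the abstract relates: frame length (here 32) versus number of retained vectors (here 8).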
Abstract: The promises of component-based technology can only be fully realized when the system contains in its design a necessary level of separation of concerns. The authors propose to focus on the concerns that emerge throughout the life cycle of the system and use them as an architectural foundation for the design of a component-based framework. The proposed model comprises a set of superimposed views of the system describing its functional and non-functional concerns. This approach is illustrated by the design of a specific framework for data analysis and data acquisition and supplemented with experiences from using the systems developed with this framework at the Fermi National Accelerator Laboratory.
Abstract: This paper presents an approach based on the
adoption of a distributed cognition framework and a non-parametric
multicriteria evaluation methodology (DEA) designed specifically to
compare e-commerce websites from the consumer/user viewpoint. In
particular, the framework considers a website's relative efficiency as a
measure of its quality and usability. A website is modelled as a black
box capable of providing the consumer/user with a set of
functionalities. When the consumer/user interacts with the website to
perform a task, he/she is involved in a cognitive activity, sustaining a
cognitive cost to search, interpret and process information, and
experiencing a sense of satisfaction. The degree of ambiguity and
uncertainty he/she perceives, together with the required search time,
determines the size of the effort, and hence the amount of cognitive
cost, he/she has to sustain to perform the task. Conversely,
completing the task and achieving the result induce a sense of
gratification, satisfaction and usefulness. In total, 9 variables are
measured,
classified in a set of 3 website macro-dimensions (user experience,
site navigability and structure). The framework is implemented to
compare 40 websites of businesses performing electronic commerce
in the information technology market. A questionnaire to collect
subjective judgements for the websites in the sample was purposely
designed and administered to 85 university students enrolled in
computer science and information systems engineering
undergraduate courses.
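The DEA step can be sketched as a small linear program. The input-oriented CCR formulation below and the toy input/output data are illustrative assumptions (the paper's 9 variables and 40 websites are not reproduced here), and `scipy` is assumed for the LP solver:

```python
import numpy as np
from scipy.optimize import linprog

def dea_efficiency(X, Y, o):
    """Input-oriented CCR efficiency score of DMU o.
    X: (m, n) inputs, Y: (s, n) outputs for n decision-making units."""
    m, n = X.shape
    s = Y.shape[0]
    # decision variables: [theta, lambda_1 .. lambda_n]
    c = np.zeros(n + 1)
    c[0] = 1.0                                  # minimize theta
    # constraints: X @ lam <= theta * x_o  and  Y @ lam >= y_o
    A_ub = np.zeros((m + s, n + 1))
    A_ub[:m, 0] = -X[:, o]
    A_ub[:m, 1:] = X
    A_ub[m:, 1:] = -Y
    b_ub = np.concatenate([np.zeros(m), -Y[:, o]])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None)] * (n + 1), method="highs")
    return res.fun

# Toy example: 3 websites, 1 "cognitive cost" input, 1 "satisfaction" output
X = np.array([[2.0, 4.0, 4.0]])   # hypothetical input levels
Y = np.array([[1.0, 2.0, 1.0]])   # hypothetical output levels
scores = [dea_efficiency(X, Y, o) for o in range(3)]
```

A score of 1.0 marks a website on the efficiency frontier; scores below 1.0 quantify how far a website's cognitive-cost-to-satisfaction ratio is from best practice.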
Abstract: Traditional higher-education classrooms allow lecturers to observe students' behaviours and responses to a particular pedagogy during learning, in a way that can inform changes to the pedagogical approach. Within current e-learning systems it is difficult to perform continuous analysis of a cohort's behavioural tendencies, making real-time pedagogical decisions difficult. This paper presents a Virtual Learning Process Environment (VLPE) based on the Business Process Management (BPM) conceptual framework. Within the VLPE, course designers can model various educational pedagogies in the form of learning process workflows using an intuitive flow diagram interface. These diagrams are used to visually track the learning progress of a cohort of students. This helps assess the effectiveness of the chosen pedagogy, providing the information required to improve course design. A case scenario of a cohort of students is presented, and quantitative statistical analysis of their learning process performance is gathered and displayed in real time using dashboards.
Abstract: The rapid adoption of the Internet has transformed Millennial Teens' lives at lightning speed. Empirical evidence has illustrated that Pathological Internet Use (PIU) among them ensures long-term success for market players in the children's industry. However, it creates concern among their caretakers, as it generates mental disorders in some of these teens. The purpose of this paper is to examine the determinants of PIU and identify its outcomes among urban Millennial Teens. It aims to develop a theoretical framework, based on a modified Media System Dependency (MSD) Theory, that integrates the important systems and components that determine, and result from, PIU.
Abstract: In this paper we propose a multi-agent architecture for web information retrieval using fuzzy logic based result fusion mechanism. The model is designed in JADE framework and takes advantage of JXTA agent communication method to allow agent communication through firewalls and network address translators. This approach enables developers to build and deploy P2P applications through a unified medium to manage agent-based document retrieval from multiple sources.
Abstract: Omni-directional mobile robots have been widely
employed in several applications, especially as soccer-player robots
in RoboCup competitions. However, an omni-directional navigation
system, an omni-vision system and a solenoid kicking mechanism have
never before been combined in such mobile robots. This situation
gives rise to the idea of a robot with no fixed head direction: a
comprehensive omni-directional mobile robot. Such a robot can respond
more quickly, and it would be capable of more sophisticated
behaviours, using a multi-sensor data fusion algorithm for global
localization. This paper focuses on the research improvements in the
mechanical, electrical and software design of the robots of team
ADRO Iran. The main improvements are the world model, the new
strategy framework, the mechanical structure, the omni-vision sensor
for object detection, robot path planning, the active ball-handling
mechanism, the new kicker design, and other subjects related to
mobile robots.
Abstract: This paper introduces a new signal denoising method based on the Empirical Mode Decomposition (EMD) framework. The method is a fully data-driven approach. The noisy signal is decomposed adaptively into oscillatory components called Intrinsic Mode Functions (IMFs) by means of a process called sifting. EMD denoising involves filtering or thresholding each IMF and reconstructing the estimated signal from the processed IMFs. EMD can be combined with a filtering approach or with a nonlinear transformation. In this work, the Savitzky-Golay filter and soft-thresholding are investigated. For thresholding, IMF samples are shrunk or scaled relative to a threshold value. The standard deviation of the noise is estimated for every IMF, and the threshold is derived for Gaussian white noise. The method is tested on simulated and real data and compared with averaging, median and wavelet approaches.
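The per-IMF thresholding step can be sketched as follows; the MAD-based noise estimate and the universal threshold used here are common stand-ins from the wavelet-denoising literature, not the paper's exact derivation, so the constants and the per-IMF treatment are assumptions:

```python
import numpy as np

def soft_threshold(x, thr):
    """Shrink samples toward zero by thr; zero out those below it."""
    return np.sign(x) * np.maximum(np.abs(x) - thr, 0.0)

def denoise_imfs(imfs):
    """Soft-threshold each IMF using a universal threshold from a
    per-IMF MAD noise estimate, then sum the processed IMFs.
    Illustrative only: in practice typically just the noise-dominated
    (lower-order) IMFs are thresholded, and the paper derives its own
    threshold for Gaussian white noise."""
    n = imfs.shape[1]
    out = np.zeros(n)
    for imf in imfs:
        sigma = np.median(np.abs(imf)) / 0.6745   # robust noise estimate
        thr = sigma * np.sqrt(2.0 * np.log(n))    # universal threshold
        out += soft_threshold(imf, thr)
    return out
```

Computing the actual IMFs requires an EMD implementation (sifting), which is omitted here; the sketch starts from an already-decomposed stack of IMFs.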
Abstract: IEEE 802.11e is the enhanced version of the IEEE
802.11 MAC, designed to provide Quality of Service (QoS) in wireless
networks. It supports QoS through service differentiation and
prioritization mechanisms. Data traffic receives different priority
based on QoS requirements. Fundamentally, applications are divided
into four Access Categories (AC). Each AC has its own buffer queue
and behaves as an independent backoff entity. Every frame with a
specific priority of data traffic is assigned to one of these access
categories. IEEE 802.11e EDCA (Enhanced Distributed Channel
Access) is designed to enhance the IEEE 802.11 DCF (Distributed
Coordination Function) mechanisms by providing a distributed
access method that can support service differentiation among
different classes of traffic. The performance of the IEEE 802.11e
MAC layer with different ACs is evaluated to understand the actual
benefits deriving from the MAC enhancements.
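The service differentiation can be illustrated with the commonly tabulated default EDCA parameter set for an 802.11a/g PHY; the numeric values below are quoted from memory of that table and should be checked against the amendment text, so treat them as an illustration of how AIFS and contention-window settings encode priority rather than as normative values:

```python
# Assumed default EDCA parameters for an 802.11a/g PHY
# (aCWmin=15, aCWmax=1023); verify against the standard before use.
SIFS_US, SLOT_US = 16, 9      # 802.11a timing, microseconds

EDCA = {                      # AC: (AIFSN, CWmin, CWmax)
    "AC_BK": (7, 15, 1023),   # background
    "AC_BE": (3, 15, 1023),   # best effort
    "AC_VI": (2, 7, 15),      # video
    "AC_VO": (2, 3, 7),       # voice
}

def aifs_us(ac):
    """Arbitration inter-frame space: AIFS[AC] = SIFS + AIFSN[AC] * slot."""
    return SIFS_US + EDCA[ac][0] * SLOT_US

# Smaller AIFS and smaller contention window -> higher access priority
priorities = sorted(EDCA, key=lambda ac: (aifs_us(ac), EDCA[ac][1]))
```

Sorting by (AIFS, CWmin) recovers the intended priority order, voice first and background last, which is exactly the differentiation the EDCA evaluation in the paper measures.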
Abstract: This paper tests the level of integration of the Malaysian and Singaporean stock markets with the world market. The Kalman Filter (KF) methodology is applied to the International Capital Asset Pricing Model (ICAPM), and the pricing errors estimated within the ICAPM framework are used as a measure of market integration or segmentation. The advantage of the KF technique is that it allows for time-varying coefficients in estimating the ICAPM and hence is able to capture the varying degree of market integration. Empirical results show clear evidence of a varying degree of market integration for both Malaysia and Singapore. Furthermore, the changes in the level of market integration are found to coincide with certain economic events that have taken place. The findings provide evidence of the practicability of the KF technique for estimating stock market integration. Comparing the two markets, the trends of the market integration indices for Malaysia and Singapore look similar through time, but their magnitudes differ notably, with the Malaysian stock market showing a greater degree of integration. Finally, the significant evidence of a varying degree of market integration shows that OLS is inappropriate for estimating the level of market integration.
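A minimal sketch of the estimation idea, assuming a scalar random-walk coefficient observed through a market-model regression, is given below; the state-space form, noise variances and synthetic data are illustrative assumptions, not the paper's ICAPM specification:

```python
import numpy as np

def kalman_tv_beta(r, m, q=1e-5, rvar=1e-4):
    """Estimate a time-varying market exposure with a random-walk state:
        beta_t = beta_{t-1} + w_t,    r_t = beta_t * m_t + v_t
    r: asset returns, m: world-market returns; q, rvar are the assumed
    state and observation noise variances."""
    n = len(r)
    beta, P = 0.0, 1.0                 # diffuse-ish initial state
    betas = np.empty(n)
    for t in range(n):
        P = P + q                      # predict step
        S = m[t] * P * m[t] + rvar     # innovation variance
        K = P * m[t] / S               # Kalman gain
        beta = beta + K * (r[t] - m[t] * beta)
        P = (1.0 - K * m[t]) * P
        betas[t] = beta
    return betas

# Synthetic check: a constant true exposure of 1.2 should be recovered
rng = np.random.default_rng(1)
mret = rng.normal(0, 0.05, 500)
ret = 1.2 * mret + rng.normal(0, 0.01, 500)
est = kalman_tv_beta(ret, mret)
```

The point of the recursion is exactly the advantage the abstract names: the coefficient path `betas` can drift over time, so episodes of greater or lesser integration show up as movements in the filtered estimate rather than being averaged away as they would be under OLS.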
Abstract: The Norwegian Military Academy (Army) has
initiated a project with the main ambition of exploring possible
avenues for enhancing operational effectiveness through increased use of
simulation-based training and exercises. Within a cost/benefit
framework, we discuss opportunities and limitations of vertical and
horizontal integration of the existing tactical training system. Vertical
integration implies expanding the existing training system to span the
full range of training from tactical level (platoon, company) to
command and staff level (battalion, brigade). Horizontal integration
means including other domains than army tactics and staff
procedures in the training, such as military ethics, foreign languages,
leadership and decision making. We discuss each of the integration
options with respect to purpose and content of training, "best
practice" for organising and conducting simulation-based training,
and suggest how to evaluate training procedures and measure
learning outcomes. We conclude by giving guidelines towards further
explorative work and possible implementation.
Abstract: In this paper, we propose a robust face relighting
technique using spherical space properties. The proposed method
aims to reduce the effects of illumination on face recognition.
Given a single 2D face image, we relight the face object by
extracting the nine spherical harmonic bases and the face spherical
illumination coefficients. First, an internal training illumination
database is generated by computing face albedo and face normal
from 2D images under different lighting conditions. Based on the
generated database, we analyze the target face pixels and compare
them with the training bootstrap by using pre-generated tiles. In this
work, practical real-time processing speed and small image size were
considered when designing the framework. In contrast to other works,
our technique requires no 3D face models for the training process
and takes a single 2D image as an input. Experimental results on
publicly available databases show that the proposed technique works
well under severe lighting conditions with significant improvements
on the face recognition rates.
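The nine spherical harmonic bases mentioned above can be evaluated from surface normals as in the sketch below; the constants are the standard real SH coefficients used in the relighting literature, and the Lambertian shading shown is an illustrative assumption about how bases and lighting coefficients combine, not the paper's full pipeline:

```python
import numpy as np

def sh9_basis(normals):
    """First nine real spherical harmonic bases evaluated at unit
    normals of shape (n, 3); constants from the SH lighting literature."""
    x, y, z = normals[:, 0], normals[:, 1], normals[:, 2]
    return np.stack([
        0.282095 * np.ones_like(x),        # Y_00
        0.488603 * y,                      # Y_1-1
        0.488603 * z,                      # Y_10
        0.488603 * x,                      # Y_11
        1.092548 * x * y,                  # Y_2-2
        1.092548 * y * z,                  # Y_2-1
        0.315392 * (3.0 * z**2 - 1.0),     # Y_20
        1.092548 * x * z,                  # Y_21
        0.546274 * (x**2 - y**2),          # Y_22
    ], axis=1)

def relight(albedo, normals, coeffs):
    """Lambertian shading: albedo * (SH basis . 9 lighting coefficients)."""
    return albedo * (sh9_basis(normals) @ coeffs)

# Basis evaluated at the "up" normal, for illustration
B = sh9_basis(np.array([[0.0, 0.0, 1.0]]))
```

Given per-pixel albedo and normals recovered from the training database, the nine lighting coefficients can be fitted by least squares against observed intensities and then swapped to render the face under a new illumination.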
Abstract: Imprecision is a long-standing problem in CAD design
and high accuracy image-based reconstruction applications. The visual
hull which is the closed silhouette equivalent shape of the objects
of interest is an important concept in image-based reconstruction.
We extend the domain-theoretic framework, a robust geometric model
that captures imprecision, to analyze the imprecision in
the output shape when the input vertices are given with imprecision.
Under this framework, we show an efficient algorithm to generate the
2D partial visual hull which represents the exact information of the
visual hull with only basic imprecision assumptions. We also show
how the visual hull from polyhedra problem can be efficiently solved
in the context of imprecise input.
Abstract: The connection between solar activity and adverse phenomena in the Earth's environment that can affect space- and ground-based technologies has spurred interest in Space Weather (SW) research. Great effort has been put into the development of suitable models that can provide advance forecasts of SW events. With the progress in computational technology, it is becoming possible to develop operational large-scale physics-based models that incorporate the most important physical processes and domains of the Sun-Earth system. In order to enhance our SW prediction capabilities, we are developing advanced numerical tools. With operational requirements in mind, our goal is to develop a modular simulation framework for the propagation of disturbances from the Sun through interplanetary space to the Earth. Here, we report on and discuss the development of the coronal field and solar wind components of a large-scale MHD code. The model for these components is based on a potential field source surface model and an empirical Wang-Sheeley-Arge solar wind relation.
Abstract: An application framework provides a reusable design
and implementation for a family of software systems. Frameworks
are introduced to reduce the cost of a product line (i.e., a family of
products that share common features). Software testing is a
time-consuming and costly ongoing activity during the application
software development process. Generating reusable test cases for the
framework applications during the framework development stage,
and providing and using the test cases to test part of the framework
application whenever the framework is used reduces the application
development time and cost considerably. This paper introduces the
Framework Interface State Transition Tester (FIST2), a tool for
automated unit testing of Java framework applications. During the
framework development stage, given the formal descriptions of the
framework hooks, the specifications of the methods of the
framework's extensible classes, and the illegal behavior description
of the Framework Interface Classes (FICs), FIST2 generates
unit-level test cases for the classes. At the framework application
development stage, given the customized method specifications of
the implemented FICs, FIST2 automates the use, execution, and
evaluation of the already generated test cases to test the implemented
FICs. The paper illustrates the use of the FIST2 tool for testing
several applications that use the SalesPoint framework.
Abstract: This paper argues that increased uncertainty, in certain
situations, may actually encourage investment. Since earlier studies
mostly base their arguments on the assumption of geometric Brownian
motion, the study extends the assumption to alternative stochastic
processes, such as mixed diffusion-jump, mean-reverting process, and
jump amplitude process. A general approach of Monte Carlo
simulation is developed to derive optimal investment trigger for the
situation that the closed-form solution could not be readily obtained
under the assumption of alternative process. The main finding is that
the overall effect of uncertainty on investment is interpreted by the
probability of investing, and the relationship between uncertainty
and investment appears to be an inverted U-shaped curve. The implication
is that uncertainty does not always discourage investment even under
several sources of uncertainty. Furthermore, high-risk projects are not
always dominated by low-risk projects because the high-risk projects
may have a positive realization effect on encouraging investment.
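The Monte Carlo approach can be sketched for the simplest (geometric Brownian motion) case as follows; the trigger level, drift, volatilities and horizon are illustrative assumptions, and the alternative processes studied in the paper (mixed diffusion-jump, mean reversion, jump amplitude) would replace the path-generation step:

```python
import numpy as np

def prob_trigger_hit(v0, trigger, mu, sigma, T=5.0, steps=250,
                     n_paths=20000, seed=0):
    """Monte Carlo estimate of the probability that a GBM project value
    reaches the investment trigger within horizon T (discrete monitoring).
    All parameter values here are illustrative assumptions."""
    rng = np.random.default_rng(seed)
    dt = T / steps
    z = rng.standard_normal((n_paths, steps))
    # log-value paths under GBM: d(log V) = (mu - sigma^2/2) dt + sigma dW
    logv = np.log(v0) + np.cumsum((mu - 0.5 * sigma**2) * dt
                                  + sigma * np.sqrt(dt) * z, axis=1)
    hit = logv.max(axis=1) >= np.log(trigger)
    return hit.mean()

# Probability of investing at low vs. high uncertainty (same trigger)
p_low = prob_trigger_hit(1.0, 1.5, mu=0.02, sigma=0.1)
p_high = prob_trigger_hit(1.0, 1.5, mu=0.02, sigma=0.4)
```

In this toy setting higher volatility raises the probability of reaching the trigger within the horizon, illustrating the paper's point that the overall effect of uncertainty on investment works through the probability of investing rather than being uniformly negative.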
Abstract: There has been a growing emphasis in
communication management from simple coordination of
promotional tools to a complex strategic process. This study will
examine the current marketing communications and engagement
strategies used in addressing the key stakeholders. In the case of
fertilizer industry in Malaysia, there has been little empirical
research on stakeholder communication when major challenges
facing the modern corporation is the need to communicate its
identity, its values and products in order to distinguish itself from
competitors. The study will employ both quantitative and qualitative
methods and the use of Structural Equation Modeling (SEM) to
establish a causal relationship amongst the key factors of stakeholder
communication strategies, increases in consumers'
choice/acceptance, and impact on financial performance. One of the
major contributions is a conceptual framework for communication
strategies and engagement in increasing consumers' acceptance level
and the firm's financial performance.
Abstract: The aim of the work presented here was either to use
existing forest dynamics simulation models or to calibrate a new one,
in both cases within the SYMFOR framework, in order to examine
changes in stand-level basal area and functional composition in
response to selective logging, considering trees > 10 cm d.b.h., for
two areas of undisturbed Amazonian non-flooded tropical forest in
Brazil and one in Peru. The biological realism of the models was
evaluated for forest in the undisturbed and selectively logged
states, and it was concluded that forest dynamics were realistically
represented. Results of the logging simulation experiments showed
that, relative to undisturbed-forest simulations subject to no
harvesting intervention, there was a significant amount of change
over a 90-year simulation period that was positively proportional to
the intensity of logging. Areas whose undisturbed dynamic equilibrium
contained a greater proportion of the ecological guild of trees known
as light hardwoods (LHWs) seemed to respond more favourably, in the
sense of deviating less, but only within a specific range of baseline
forest composition, beyond which compositional diversity became more
important. These findings are partially in line with practical
management experience and partially with basic systematics theory,
respectively.
Abstract: An image texture analysis and target recognition approach using an improved image texture feature coding method (TFCM) and a Support Vector Machine (SVM) for target detection is presented. With the proposed target detection framework, targets of interest can be detected accurately. A Cascade-Sliding-Window technique was also developed for automated target localization. Application to mammograms showed that over 88% of normal mammograms and 80% of abnormal mammograms can be correctly identified. The approach was also successfully applied to Synthetic Aperture Radar (SAR) and Ground Penetrating Radar (GPR) images for target detection.
Abstract: In this paper, we present an improved fast and robust
search algorithm for copy detection, using histogram-based features
for short MPEG video clips from a large video database. Two types of
histogram features are used to generate more robust features. The
first is based on the adjacent pixel intensity difference
quantization (APIDQ) algorithm, which had previously been reliably
applied to human face recognition. An APIDQ histogram is utilized as
the feature vector of the frame image. The other is an ordinal
histogram feature, which is robust to color distortion. Furthermore,
by combining these with a temporal division method, the spatial and
temporal features of the video sequence are integrated to realize
fast and robust video search for copy detection. Experimental results
show that the proposed algorithm can detect similar video clips more
accurately and robustly than a conventional fast video search
algorithm.
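A simplified stand-in for the histogram feature extraction can be sketched as follows; the scheme below (a joint histogram of quantized horizontal and vertical neighbour differences) is an illustrative variant inspired by the APIDQ idea, not the published APIDQ algorithm, and the bin counts and clipping range are assumptions:

```python
import numpy as np

def apidq_histogram(frame, n_levels=8, clip=64):
    """Simplified adjacent-pixel intensity-difference quantization:
    quantize horizontal and vertical neighbour differences into
    n_levels bins each and return a normalized joint histogram
    (an illustrative stand-in for the APIDQ frame feature)."""
    f = frame.astype(np.int32)
    dx = np.clip(f[:, 1:] - f[:, :-1], -clip, clip - 1)[:-1, :]
    dy = np.clip(f[1:, :] - f[:-1, :], -clip, clip - 1)[:, :-1]
    qx = ((dx + clip) * n_levels) // (2 * clip)    # bins 0..n_levels-1
    qy = ((dy + clip) * n_levels) // (2 * clip)
    hist = np.zeros((n_levels, n_levels))
    np.add.at(hist, (qx.ravel(), qy.ravel()), 1.0)
    return hist.ravel() / hist.sum()

def frame_distance(h1, h2):
    """L1 distance between histogram feature vectors of two frames."""
    return np.abs(h1 - h2).sum()
```

For copy detection, such per-frame vectors would be accumulated over the temporal divisions of a clip and matched against the database by histogram distance.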