Abstract: There are now more than thirty maturity models across
different knowledge areas. Maturity models are of interest because
they help organizations determine where they stand in a specific
knowledge area and how to improve. Since Information Resource
Management (IRM) holds that information is a major corporate
resource and must be managed using the same basic principles
applied to other assets, assessing the current IRM status and
revealing points for improvement can play a critical role in
developing an appropriate information structure in organizations.
In this paper we propose a framework for an information resource
management maturity model (IRM3) that includes ten best practices
for assessing the maturity of an organization's IRM.
Abstract: This paper presents an algorithm that extends the rapidly-exploring random tree (RRT) framework to deal with changes in the task environment. The algorithm, called the Retrieval RRT Strategy (RRS), combines a support vector machine (SVM) with RRT and plans the robot's motion in the presence of changes in the surrounding environment. The algorithm operates on two levels. At the first level, the SVM is built and selects a proper path from the bank of RRTs for a given environment. At the second level, a real path is planned by the RRT planners for that environment. The suggested method is applied to the control of KUKA™, a commercial 6-DOF robot manipulator, and its feasibility and efficiency are demonstrated via the co-simulation of MATLAB™ and RecurDyn™.
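As context for the RRT component of the strategy above, the basic tree-growing loop can be sketched as follows. This is a generic 2-D RRT only; the SVM retrieval level and the KUKA/RecurDyn setup are not reproduced, and all geometry (a 10x10 workspace with one circular obstacle) is assumed purely for illustration.

```python
import math
import random

def rrt_plan(start, goal, obstacles, step=0.5, max_iters=5000,
             goal_tol=0.5, seed=0):
    """Grow a tree from start toward random samples until goal is reached.

    obstacles: list of (cx, cy, r) circles treated as forbidden regions.
    Returns the path as a list of (x, y) points, or None on failure.
    """
    rng = random.Random(seed)
    nodes = [start]
    parent = {0: None}

    def collides(p):
        return any(math.hypot(p[0] - cx, p[1] - cy) <= r
                   for cx, cy, r in obstacles)

    for _ in range(max_iters):
        # 10% goal bias, otherwise a uniform sample in the 10x10 workspace.
        sample = goal if rng.random() < 0.1 else (rng.uniform(0, 10),
                                                  rng.uniform(0, 10))
        # Extend the nearest tree node one step toward the sample.
        i = min(range(len(nodes)),
                key=lambda k: math.hypot(nodes[k][0] - sample[0],
                                         nodes[k][1] - sample[1]))
        nx, ny = nodes[i]
        d = math.hypot(sample[0] - nx, sample[1] - ny) or 1e-9
        new = (nx + step * (sample[0] - nx) / d,
               ny + step * (sample[1] - ny) / d)
        if collides(new):
            continue
        nodes.append(new)
        parent[len(nodes) - 1] = i
        if math.hypot(new[0] - goal[0], new[1] - goal[1]) < goal_tol:
            path, j = [], len(nodes) - 1
            while j is not None:          # backtrack to recover the path
                path.append(nodes[j])
                j = parent[j]
            return path[::-1]
    return None

path = rrt_plan((0.0, 0.0), (9.0, 9.0), obstacles=[(5.0, 5.0, 1.5)])
```

The RRS idea is that a whole bank of such trees, indexed by environment, can be reused rather than regrown from scratch.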
Abstract: Computed tomography and laminography are heavily investigated in a compressive sensing based image reconstruction framework to reduce the dose to patients as well as to radiosensitive devices such as multilayer microelectronic circuit boards. Researchers are actively working on optimizing compressive sensing based iterative image reconstruction algorithms to obtain better quality images. However, the effects of the sampled data's properties on the reconstructed image's quality, particularly under insufficiently sampled data conditions, have not been explored in computed laminography. In this paper, we investigate the effects of two data properties, i.e., sampling density and data incoherence, on the reconstructed image obtained by conventional computed laminography and a recently proposed method called the spherical sinusoidal scanning scheme. We have found that in a compressive sensing based image reconstruction framework, the image quality mainly depends upon the data incoherence when the data is uniformly sampled.
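The "data incoherence" property can be made concrete with the standard notion of mutual coherence of a sampling matrix; lower coherence is generally more favorable for compressive-sensing recovery. The matrices below are hypothetical stand-ins, not the laminography sampling geometries studied in the paper.

```python
import math
import random

def mutual_coherence(cols):
    """Largest |inner product| between distinct normalized columns.

    cols: list of equal-length columns (lists of floats). Lower values
    mean more incoherent sampling, which compressive sensing favors.
    """
    def norm(v):
        return math.sqrt(sum(x * x for x in v))
    unit = [[x / norm(c) for x in c] for c in cols]
    best = 0.0
    for i in range(len(unit)):
        for j in range(i + 1, len(unit)):
            best = max(best, abs(sum(a * b
                                     for a, b in zip(unit[i], unit[j]))))
    return best

rng = random.Random(0)
# Dense random (incoherent) sampling matrix: 32 columns in 64 dimensions.
gaussian_cols = [[rng.gauss(0, 1) for _ in range(64)] for _ in range(32)]
# A highly coherent variant: two columns made identical.
dup_cols = [c[:] for c in gaussian_cols]
dup_cols[1] = dup_cols[0][:]
```

The random matrix has coherence well below 1, while the duplicated-column variant reaches the worst-case value of 1.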
Abstract: A great deal of research work in the field of information
systems security has been based on a positivist paradigm. Applying
the reductionism of the positivist paradigm to information security
means missing the bigger picture; the resulting lack of holism
could be one of the reasons why security is still overlooked,
comes as an afterthought, or is perceived from a purely technical
dimension. We need to reshape our thinking and attitudes towards
security, especially in a complex and dynamic environment such as e-
Business, to develop a holistic understanding of e-Business security
in relation to its context, as well as considering all the
stakeholders in the problem area. In this paper we argue for the
suitability of, and need for, a more inductive, interpretive approach
and qualitative research methods to investigate e-Business security.
Our discussion is based on a holistic framework of enquiry, the
nature of the research problem, the underlying theoretical lens and
the complexity of the e-Business environment. Finally, we present a
research strategy for developing a holistic framework for
understanding e-Business security problems in the context of
developing countries, based on an interdisciplinary inquiry that
considers their needs and requirements.
Abstract: In this paper, we propose a morphing method by which face color images can be freely transformed. The main focus of this work is the transformation of one face image into another. The method is fully automatic in that it can morph two face images by automatically detecting all the control points necessary to perform the morph. A face detection neural network, edge detection and median filters are employed to detect the face position and features. Five control points, for both the source and target images, are then extracted based on the facial features. A triangulation method is then used to match and warp the source image to the target image using the control points. Finally, color interpolation is performed using a color Gaussian model that calculates the color for each particular frame depending on the number of frames used. A real-coded genetic algorithm is used in both the image warping and color blending steps to assist in step-size decisions and speed up the morphing. This method produces very smooth morphs and is fast to process.
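The per-frame interpolation underlying such a morph can be sketched as below: control points are linearly interpolated between source and target, and colors are cross-dissolved according to the frame index. This omits the paper's triangulation warping, Gaussian color model and genetic algorithm; the point coordinates and colors are invented for illustration.

```python
def morph_frame(src_pts, dst_pts, src_color, dst_color, frame, n_frames):
    """Interpolated control points and blended color for one morph frame."""
    t = frame / (n_frames - 1)      # 0.0 at the source, 1.0 at the target
    pts = [((1 - t) * sx + t * dx, (1 - t) * sy + t * dy)
           for (sx, sy), (dx, dy) in zip(src_pts, dst_pts)]
    # Per-channel cross-dissolve of the two colors.
    color = tuple(round((1 - t) * s + t * d)
                  for s, d in zip(src_color, dst_color))
    return pts, color

# Midpoint frame of an 11-frame morph (hypothetical points and colors):
pts, color = morph_frame([(10, 10), (50, 40)], [(20, 12), (60, 50)],
                         (200, 150, 100), (100, 120, 180),
                         frame=5, n_frames=11)
```

At the midpoint frame both the geometry and the color sit halfway between source and target, which is the baseline a warping-plus-blending scheme refines.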
Abstract: The study aims to develop a framework of social
network management to enhance customer relationships. Social
network management in this research is derived from social network
site management and from individual and organizational social
network usage motivation. The survey was conducted with employees
who have used social networks to interact with customers. The results
reveal that content, link, privacy and security, page design and
interactivity are the major issues of social network site management.
Content, link, privacy and security, and individual and organizational
motivation have major impacts on encouraging business knowledge
sharing among employees. Moreover, page design and interactivity,
content, organizational motivation and knowledge sharing can improve
customer relationships.
Abstract: This research documents a qualitative study of
selected Native Americans who have successfully graduated from
mainstream higher education institutions. The research framework
explored the Bicultural Identity Formation Model as a means of
understanding the expressions of the students' adaptations to
mainstream education. This approach led to an awareness of how
the participants in the study used specific cultural and social
strategies to enhance their educational success and also to an
awareness of how they coped with cultural dissonance to achieve a
new academic identity. Research implications impact a larger
audience of bicultural, foreign, or international students experiencing
cultural dissonance.
Abstract: This paper proposes a new circuit design that
monitors the total leakage current during standby mode and generates
the optimal reverse body bias voltage, using the adaptive body bias
(ABB) technique to compensate for die-to-die parameter variations.
Design details of the power monitor are examined using a simulation
framework in 65 nm and 32 nm BPTM-model CMOS processes.
Experimental results show that the power consumption overhead of the
proposed circuit is about 10 μW for the 32 nm technology and
about 12 μW for the 65 nm technology at the same power supply voltage
as the core power supply. Moreover, the results show that the
proposed circuit design is not very sensitive to temperature
variations or process variations. It also uses simple
blocks that offer good sensitivity, high speed and a continuous
feedback loop.
Abstract: From the perspective of systems of systems (SoS) and
emergent behaviors, this paper describes large-scale application
software systems and proposes framework methods to further depict
the systems' functional and non-functional characteristics. The
paper also specifically discusses some functional frameworks. In the
end, the framework's applications in system disintegration, system
architecture and stable intermediate forms are additionally dealt
with in the context of building, deploying and maintaining large-scale
software applications.
Abstract: Recent scientific investigations indicate that
multimodal biometrics overcome the technical limitations of
unimodal biometrics, making them ideally suited for everyday life
applications that require a reliable authentication system. However,
for a successful adoption of multimodal biometrics, such systems
would require large heterogeneous datasets with complex multimodal
fusion and privacy schemes spanning various distributed
environments. From experimental investigations of current
multimodal systems, this paper reports the various issues related to
speed, error recovery and privacy that impede the diffusion of such
systems in real life. This calls for a robust mechanism that caters to
the desired real-time performance, robust fusion schemes,
interoperability and adaptable privacy policies.
The main objective of this paper is to present a framework that
addresses the above-mentioned issues by leveraging the
heterogeneous resource-sharing capacities of Grid services and the
efficient machine learning capabilities of artificial neural networks
(ANN). Hence, this paper proposes a Grid-based neural network
framework for adopting multimodal biometrics with the view of
overcoming the barriers of performance, privacy and risk issues that
are associated with shared heterogeneous multimodal data centres.
The framework combines the concept of Grid services for reliable
brokering and privacy policy management of shared biometric
resources along with a momentum back propagation ANN (MBPANN)
model of machine learning for efficient multimodal fusion and
authentication schemes. Real-life applications would be able to adopt
the proposed framework to cater to the varying business requirements
and user privacies for a successful diffusion of multimodal
biometrics in various day-to-day transactions.
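As a sketch of the momentum back-propagation idea behind the MBPANN model, each weight update adds a fraction of the previous update, which smooths gradient descent. The learning rate, momentum value and the toy one-dimensional objective below are assumptions for illustration, not the paper's network or configuration.

```python
def mbp_step(weights, grads, velocity, lr=0.1, momentum=0.9):
    """One momentum update: v <- momentum*v - lr*grad; w <- w + v."""
    new_v = [momentum * v - lr * g for v, g in zip(velocity, grads)]
    new_w = [w + v for w, v in zip(weights, new_v)]
    return new_w, new_v

# Minimizing the toy objective f(w) = w^2 (gradient 2w) from w = 1.0;
# the momentum term carries the step across shallow gradient regions.
w, v = [1.0], [0.0]
for _ in range(200):
    w, v = mbp_step(w, [2 * x for x in w], v)
```

In a full network the same rule is applied per weight, with the gradients supplied by back-propagation through the fusion layers.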
Abstract: The purpose of this paper is to propose an integrated
consumer health informatics utilization framework that can be used
to gauge the online health information needs and usage patterns
among Malaysian women. The proposed framework was developed
based on four different theories/models: the Uses and Gratifications
Theory, the Technology Acceptance Model 3, the Health Belief Model,
and the Multi-level Model of Information Seeking. The relevant
constructs and research hypotheses are also presented in this paper.
The framework will be tested so that it can be used successfully to
identify Malaysian women's preferences for online health information
resources and health information seeking activities.
Abstract: Producing IT products and services requires careful
design. The IT development process is intangible and labour-intensive.
Making optimal use of available resources, both soft (knowledge,
skill sets, etc.) and hard (computer systems, ancillary equipment,
etc.), is vital if IT development is to achieve sensible economic
advantages. Apart from the norms of the Project Life Cycle and System
Development Life Cycle (SDLC), there is an urgent need to establish
a general yet widely acceptable guideline on the most effective and
efficient way to proceed with an IT project in the broader view of the
Product Life Cycle. This paper proposes such a framework with two
major areas of concern: (1) the integration of IT Products and IT
Services within an existing IT Process architecture; and (2) how IT
Products and IT Services are built into the framework of the Product
Life Cycle, Project Life Cycle and SDLC.
Abstract: A sophisticated simulator provides a cost-effective means to carry out preliminary mission testing and diagnostics while reducing potential failures during real-life sea trials. The presented simulation framework covers three key areas: AUV modeling, sensor modeling, and environment modeling. AUV modeling mainly covers AUV dynamics. Sensor modeling deals with the physical and mathematical models that govern each sensor installed on the AUV. The environment model incorporates the hydrostatics, hydrodynamics, and ocean currents that affect the AUV in a real-time mission. Based on this simulation framework, custom scenarios provided by the user can be modeled and their corresponding behaviors observed. This paper focuses on the accuracy of the simulated data from the AUV model and the environment model, derived from a developed AUV test-bed that was jointly upgraded by DSTO and the University of Adelaide. The main contribution of this paper is the experimental verification of the accuracy of the proposed simulation framework.
Abstract: This paper presents findings from an evaluation study carried out to review the UAE national ID card software. The paper consults the relevant literature to explain many of the concepts and frameworks discussed herein. The findings of the evaluation work, which was primarily based on the ISO 9126 standard for system quality measurement, highlighted many practical areas that, if taken into account, are argued to increase the likelihood of success of similar system implementation projects.
Abstract: Low frequency power oscillations may be triggered
by many events in the system. Most oscillations are damped by the
system, but undamped oscillations can lead to system collapse.
Oscillations develop as a result of rotor acceleration/deceleration
following a change in active power transfer from a generator. Like
operating limits, the monitoring of power system oscillating
modes is a relevant aspect of power system operation and control.
Unchecked low-frequency power swings can cause cascading
outages that rapidly spread over a wide region. In this regard,
Wide Area Monitoring, Protection and Control Systems
(WAMPCS) help detect such phenomena and assess power
system dynamic security. The monitoring of power system
electromechanical oscillations is very important in the frame of
modern power system management and control. The first part of this
paper compares different techniques for identifying power
system oscillations. The second part analyzes the possible
identification of some power system dynamic behaviors using Wide
Area Monitoring Systems (WAMS) based on Phasor Measurement Units
(PMUs) and the wavelet technique.
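A minimal illustration of identifying a low-frequency oscillating mode from sampled measurements: project the signal onto candidate frequencies and pick the strongest. The 50 Hz sampling rate and the damped 0.7 Hz swing below are invented test values; the wavelet-based techniques the paper analyzes are considerably more capable than this basic spectral scan.

```python
import math

def dominant_frequency(samples, fs, candidates):
    """Return the candidate frequency (Hz) with the largest DFT power."""
    def power(f):
        re = sum(s * math.cos(2 * math.pi * f * k / fs)
                 for k, s in enumerate(samples))
        im = sum(s * math.sin(2 * math.pi * f * k / fs)
                 for k, s in enumerate(samples))
        return re * re + im * im
    return max(candidates, key=power)

fs = 50.0                          # assumed PMU reporting rate in Hz
t = [k / fs for k in range(500)]   # a 10 s observation window
# A damped 0.7 Hz inter-area-style swing (synthetic test signal):
sig = [math.exp(-0.05 * x) * math.sin(2 * math.pi * 0.7 * x) for x in t]
mode = dominant_frequency(sig, fs, candidates=[0.3, 0.5, 0.7, 1.0, 2.0])
```

Real identification schemes must additionally estimate the damping of each mode, since a poorly damped mode is what threatens stability.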
Abstract: In this work, we present a novel active learning approach
for learning a visual object detection system. Our system
is composed of an active learning mechanism wrapped around
a sub-algorithm that implements an online boosting-based
object detector. At its core is a combination of a bootstrap procedure
and a semi-automatic learning process based on the online boosting
procedure. The idea is to exploit the availability of the classifier
during learning to automatically label training samples and
incrementally improve the classifier. This addresses the issue of
reducing labeling effort while obtaining better performance. In
addition, we propose a verification process for further improving the
classifier; the idea is to allow re-updating on previously seen data
during learning to stabilize the detector. The main contribution of
this empirical study is a demonstration that active learning based on
an online boosting approach trained in this manner can achieve results
comparable to, or even better than, a framework trained in the
conventional manner with much more labeling effort. Empirical
experiments on challenging data sets for specific object detection
problems show the effectiveness of our approach.
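The labeling-effort idea can be sketched as a single active-learning round: samples the current detector scores confidently are auto-labeled, and only the uncertain ones are sent to an oracle. The one-dimensional threshold "detector" below is a stand-in for the paper's online-boosting detector, and the confidence threshold is an assumption.

```python
def active_learning_round(unlabeled, predict_prob, oracle, conf=0.8):
    """Split a pool into auto-labeled samples and oracle queries."""
    labeled, queries = [], 0
    for x in unlabeled:
        p = predict_prob(x)
        if p >= conf or p <= 1 - conf:
            labeled.append((x, int(p >= 0.5)))   # trust the detector
        else:
            labeled.append((x, oracle(x)))       # pay labeling effort
            queries += 1
    return labeled, queries

predict_prob = lambda x: min(1.0, max(0.0, x / 10.0))  # toy detector score
oracle = lambda x: int(x >= 5)                         # ground-truth labeler
labeled, queries = active_learning_round([0, 1, 4, 5, 9, 10],
                                         predict_prob, oracle)
```

Here only the two borderline samples cost labeling effort, while the four confident ones are labeled by the detector itself.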
Abstract: In this study, a classification-based video
super-resolution method using an artificial neural network (ANN) is
proposed to enhance low-resolution (LR) to high-resolution (HR)
frames. The proposed method consists of four main steps:
classification, motion-trace volume collection, temporal adjustment,
and ANN prediction. A classifier is designed based on the edge
properties of a pixel in the LR frame to identify the spatial information.
To exploit the spatio-temporal information, a motion-trace volume is
collected using motion estimation, which can eliminate unfathomable
object motion in the LR frames. In addition, a temporal lateral process
is employed for volume adjustment to reduce unnecessary temporal
features. Finally, an ANN is applied to each class to learn the complicated
spatio-temporal relationship between LR and HR frames. Simulation
results show that the proposed method successfully improves both
peak signal-to-noise ratio and perceptual quality.
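The first step (edge-property-based classification) can be illustrated with a toy classifier that bins a pixel by its local gradient magnitude, so that a separate predictor could later be trained per class. The thresholds and the three-class scheme are illustrative assumptions, not the paper's design.

```python
def classify_pixel(frame, x, y, t_low=10, t_high=40):
    """Bin pixel (x, y) as 'flat', 'texture' or 'edge' from its gradients."""
    gx = abs(frame[y][x + 1] - frame[y][x - 1])   # horizontal difference
    gy = abs(frame[y + 1][x] - frame[y - 1][x])   # vertical difference
    g = max(gx, gy)
    if g < t_low:
        return "flat"
    return "edge" if g >= t_high else "texture"

# Toy 5x5 LR frame: flat background, one mild bump, one strong edge.
frame = [[5, 5, 5, 5, 5],
         [5, 5, 25, 5, 5],
         [5, 5, 5, 90, 5],
         [5, 5, 5, 5, 5],
         [5, 5, 5, 5, 5]]
```

Training one predictor per class lets each network specialize in a narrower LR-to-HR mapping than a single global model.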
Abstract: A fault detection and identification (FDI) technique is
presented to create a fault tolerant control system (FTC). The fault
detection is achieved by monitoring the position of the light source
using an array of light sensors. When a decision is made about the
presence of a fault, an identification process is initiated to locate the
faulty component and reconfigure the controller signals. The signals
provided by the sensors are predictable; therefore the existence of a
fault is easily identified. Identification of the faulty sensor is based on
the dynamics of the frame. The technique is not restricted to a
particular type of controller, and the results show consistency.
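The residual idea behind such fault detection can be sketched as follows: each sensor reading is compared with its predicted value, and a residual above a threshold flags the faulty component. The sensor values and the threshold below are hypothetical, not taken from the paper.

```python
def detect_faulty_sensors(readings, predictions, threshold=0.2):
    """Indices of sensors whose residual exceeds the threshold."""
    return [i for i, (r, p) in enumerate(zip(readings, predictions))
            if abs(r - p) > threshold]

predicted = [0.9, 0.7, 0.4, 0.1]     # expected sensor response (assumed)
measured  = [0.88, 0.71, 0.0, 0.12]  # sensor 2 reads zero: a stuck fault
faulty = detect_faulty_sensors(measured, predicted)
```

Because the sensor signals are predictable, the residual test alone both detects the fault and identifies which sensor to exclude when reconfiguring the controller.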
Abstract: In this paper, a framework is presented that aims to
build the most secure web system possible from the available generic
and web security technologies, and that can serve as a guideline for
organizations building their web sites. The framework is designed to
provide the necessary security services, to address known security
threats, and to provide some cover against other security problems,
especially unknown threats. The requirements that guided the design
of the secure web system are discussed. The designed security
framework is then simulated, and various quality of service (QoS)
metrics are calculated to measure the performance of the system.
Abstract: The World Wide Web, coupled with the ever-increasing
sophistication of online technologies and software applications,
puts greater emphasis on the need for even more sophisticated and
consistent quality-requirements modeling than traditional software
applications. Web sites and Web applications (WebApps) are
becoming more information-driven and content-oriented, raising
concerns about their information quality (InQ). The consistent and
consolidated modeling of InQ requirements for WebApps at different
stages of the life cycle still poses a challenge. This paper proposes
an approach to specifying InQ requirements for WebApps by reusing and
extending the ISO 25012:2008(E) data quality model. We also
discuss the learnability aspect of information quality for WebApps.
The proposed ISO 25012-based InQ framework is a step towards a
standardized approach to evaluating WebApps' InQ.