Abstract: This paper discusses an intelligent system to be installed in ambulances, providing professional support to the paramedics on board. A video-conferencing device over mobile 4G services enables specialists to virtually attend the patient being transferred to the hospital. The data centre holds detailed databases on patients' past medical histories and on hospitals and their specialists. It also hosts software modules that, given the patient's symptoms, compute in real time the shortest, least-congested path to the closest hospital with the required facilities.
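The route computation described above could, for example, use Dijkstra's algorithm over a road graph whose edge weights encode distance adjusted for live traffic. The graph, node names, and weights below are purely illustrative assumptions; the paper's actual routing module is not specified:

```python
import heapq

def dijkstra(graph, source):
    """Return the minimum travel cost from source to every reachable node.
    graph: {node: [(neighbor, cost), ...]} where cost is distance
    scaled by a (hypothetical) live traffic factor."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Hypothetical road graph: ambulance at "A", hospitals at "H1" and "H2".
roads = {
    "A":  [("B", 2.0), ("C", 5.0)],
    "B":  [("H1", 4.0), ("C", 1.0)],
    "C":  [("H2", 2.0)],
    "H1": [],
    "H2": [],
}
costs = dijkstra(roads, "A")
# The data centre would then pick, among hospitals with the required
# facilities, the one with the lowest travel cost.
nearest = min(["H1", "H2"], key=costs.get)
```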
Abstract: Like any sentient organism, a smart environment
relies first and foremost on sensory data captured from the real
world. The sensory data come from sensor nodes of different
modalities deployed at different locations, forming a Wireless Sensor
Network (WSN). Embedding smart sensors in humans has been a
research challenge due to the limitations imposed by these sensors
from computational capabilities to limited power. In this paper, we
first propose a practical WSN application that will enable blind
people to see what their neighboring partners can see. The challenge
is that the actual mapping from input images to brain patterns is too complex and not yet well understood. We also study the connectivity problem in 3D/2D wireless sensor networks and propose efficient distributed algorithms to achieve the required connectivity of the system. We provide a new connectivity algorithm, CDCA, to connect disconnected parts of a network using cooperative
diversity. Through simulations, we analyze the connectivity gains
and energy savings provided by this novel form of cooperative
diversity in WSNs.
Abstract: A virtualized and virtual approach is presented for academically preparing students to engage, from a strategic perspective, with both the structured and the unstructured concerns and measures in the area of cyber security and information assurance. The Master of Science in Cyber Security and
Information Assurance (MSCSIA) is a professional degree for those
who endeavor through technical and managerial measures to ensure
the security, confidentiality, integrity, authenticity, control,
availability, and utility of the world's computing and information systems infrastructure. The National University Cyber Security and Information Assurance program is offered as a Master's degree. The
emphasis of the MSCSIA program uniquely includes hands-on
academic instruction using virtual computers. This past year, 2011,
the NU facility has become fully operational using system
architecture to provide a Virtual Education Laboratory (VEL)
accessible to both onsite and online students. The first student cohort
completed their MSCSIA training this past March 2, 2012 after
fulfilling 12 courses, for a total of 54 units of college credits. The
rapid-paced scheduling of one course per month is immensely challenging, perpetually changing, and virtually multifaceted. This paper analyses these descriptive terms in light of the globalized penetration breaches present in today's world of cyber security. In addition, we present current NU practices to
mitigate risks.
Abstract: In a virtual organization, a Knowledge Discovery (KD) service comprises distributed data resources and computing grid nodes. A computational grid is integrated with a data grid to form a Knowledge Grid, which implements the Apriori algorithm for mining association rules over the grid network. This paper describes the development of a parallel and distributed version of the Apriori algorithm on the Globus Toolkit using the Message Passing Interface extended with Grid Services (MPICH-G2). The Knowledge Grid is created on top of the data and computational grids to support decision making in real-time applications. In this paper, a case study describes the design and implementation of local and global mining of frequent item sets. The experiments were conducted on different configurations of the grid network, and the computation time was recorded for each operation. We analyzed our results with various grid configurations, and they show that the speedup in computation time is almost superlinear.
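As an illustration, the candidate-generation and support-counting loop at the heart of Apriori can be sketched on a single node as follows. The transactions, item names, and threshold are invented for illustration; the paper's version distributes the support counts across grid nodes and merges them globally:

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Single-node sketch of Apriori frequent-itemset mining.
    transactions: list of sets of items; min_support: absolute count."""
    items = {i for t in transactions for i in t}
    # L1: frequent 1-itemsets
    current = [frozenset([i]) for i in items
               if sum(i in t for t in transactions) >= min_support]
    frequent = {c: sum(c <= t for t in transactions) for c in current}
    k = 2
    while current:
        # Candidate generation: join frequent (k-1)-itemsets
        candidates = {a | b for a in current for b in current
                      if len(a | b) == k}
        # Support counting (the step a grid version parallelizes:
        # local counts per node, then a global merge)
        current = [c for c in candidates
                   if sum(c <= t for t in transactions) >= min_support]
        frequent.update({c: sum(c <= t for t in transactions)
                         for c in current})
        k += 1
    return frequent

baskets = [{"bread", "milk"}, {"bread", "beer"},
           {"bread", "milk", "beer"}, {"milk"}]
freq = apriori(baskets, min_support=2)
```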
Abstract: This paper proposes a novel multi-format stream grid
architecture for real-time image monitoring system. The system, based
on a three-tier architecture, includes stream receiving unit, stream
processor unit, and presentation unit. It is a distributed computing and
a loose coupling architecture. The benefit is the amount of required
servers can be adjusted depending on the loading of the image
monitoring system. The stream receive unit supports multi capture
source devices and multi-format stream compress encoder. Stream
processor unit includes three modules; they are stream clipping
module, image processing module and image management module.
Presentation unit can display image data on several different platforms.
We verified the proposed grid architecture with an actual test of image
monitoring. We used a fast image-matching method with adjustable parameters for different monitoring situations. A background subtraction method is also implemented in the system. Experimental
results showed that the proposed architecture is robust, adaptive, and
powerful in the image monitoring system.
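The background subtraction step can be sketched as simple per-pixel differencing against a background model. The frame sizes, pixel values, and threshold below are illustrative assumptions; the paper's actual method may differ:

```python
def background_subtract(frame, background, threshold):
    """Pixel-wise background subtraction: mark a pixel as foreground (1)
    when it deviates from the background model by more than threshold.
    frame/background: 2D lists of grayscale values (0-255)."""
    return [[1 if abs(p - b) > threshold else 0
             for p, b in zip(frow, brow)]
            for frow, brow in zip(frame, background)]

# Hypothetical 2x3 grayscale frame against a flat background model.
background = [[10, 10, 10],
              [10, 10, 10]]
frame      = [[10, 200, 12],
              [10, 11, 180]]
mask = background_subtract(frame, background, threshold=30)
# mask marks the two bright "intruder" pixels as foreground
```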
Abstract: In this note, we consider a family of iterative formulas for computing the weighted Minkowski inverse A_{M,N} in Minkowski space; we give two kinds of iterations and necessary and sufficient conditions for their convergence.
Abstract: In this paper an algorithm based on the adaptive
neuro-fuzzy controller is provided to enhance the tipover stability of
mobile manipulators when they are subjected to predefined
trajectories for the end-effector and the vehicle. The controller
creates proper configurations for the manipulator to prevent the robot
from being overturned. The optimal configuration and thus the most
favorable control are obtained through soft computing approaches
including a combination of genetic algorithm, neural networks, and
fuzzy logic. In the proposed algorithm, a look-up table is designed using the values obtained from the genetic algorithm so as to minimize the performance index; from this database, rule bases are designed for the ANFIS controller, whose output is exerted on the actuators to enhance the tipover stability of the
mobile manipulator. A numerical example is presented to
demonstrate the effectiveness of the proposed algorithm.
Abstract: Program slicing is the task of finding all statements in a program that directly or indirectly influence the value of a variable occurrence. The set of statements that can affect the value of a variable at some point in a program is called a program slice. In several software engineering applications, such as program debugging and measuring program cohesion and parallelism, several slices are computed at different program points. In this paper, algorithms are introduced to compute all backward and forward static slices of a computer program by traversing the program representation graph once. The program representation graph used in this paper is called the Program Dependence Graph (PDG). We have conducted an experimental comparison study using 25 software modules to show the effectiveness of the introduced algorithm for computing all backward static slices over single-point slicing approaches in computing the parallelism and functional cohesion of program modules. The effectiveness of the algorithm is measured in terms of execution time and the number of traversed PDG edges. The comparison study results indicate that using the introduced algorithm considerably saves the slicing time and effort required to measure module parallelism and functional cohesion.
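As a minimal illustration of the underlying operation, a backward static slice can be computed as reverse reachability over dependence edges. The toy PDG below is an invented example; it shows neither the paper's representation nor its single-traversal all-slices algorithm:

```python
from collections import deque

def backward_slice(pdg, criterion):
    """Backward static slice: all PDG nodes the slicing criterion
    transitively depends on, plus the criterion itself.
    pdg: {node: set of nodes it depends on} (dependence edges point
    from a use back to its definitions and controlling predicates)."""
    slice_set = {criterion}
    worklist = deque([criterion])
    while worklist:
        n = worklist.popleft()
        for dep in pdg.get(n, ()):
            if dep not in slice_set:
                slice_set.add(dep)
                worklist.append(dep)
    return slice_set

# Hypothetical PDG for:  s1: x = 1;  s2: y = 2;  s3: z = x + 1
pdg = {"s1": set(), "s2": set(), "s3": {"s1"}}
slice_s3 = backward_slice(pdg, "s3")  # s2 is irrelevant to z at s3
```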
Abstract: Using state space technique and GF(2) theory, a
simulation model for external exclusive NOR type LFSR structures is
developed. Through this tool a systematic procedure is devised for
computing pseudo-random binary sequences from such structures.
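Such a simulation can be sketched as follows; the register length, tap positions, and seed are illustrative assumptions, not the structures analyzed in the paper:

```python
def xnor_lfsr(seed, taps, length):
    """Simulate an external (Fibonacci-type) exclusive-NOR LFSR over GF(2).
    seed: initial register contents (list of bits, index 0 = output end);
    taps: stage positions combined by XNOR to form the feedback bit.
    Returns the first `length` output bits."""
    state = list(seed)
    out = []
    for _ in range(length):
        out.append(state[0])
        fb = 1
        for t in taps:  # XNOR = complement of the XOR over the taps
            fb ^= state[t]
        state = state[1:] + [fb]  # shift left, feed back at the far end
    return out

# 4-bit register with taps at stages 0 and 3. For XNOR feedback the
# all-ones state is the lock-up state, so we seed with all zeros.
seq = xnor_lfsr([0, 0, 0, 0], taps=[0, 3], length=8)
```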
Abstract: With the advent of emerging personal computing paradigms such as ubiquitous and mobile computing, Web contents are becoming accessible from a wide range of mobile devices. Since these devices do not have the same rendering capabilities, Web contents need to be adapted for transparent access from a variety of client agents. Such content adaptation results in better rendering and faster delivery to the client device. Nevertheless, Web content adaptation sets new challenges for semantic markup. This paper presents an advanced components platform, called MorfeoSMC, enabling the development of mobility applications and services according to a channel model based on Service-Oriented Architecture (SOA) principles. It then goes on to describe the potential for integration with the Semantic Web through a novel framework of external semantic annotation of mobile Web contents. The role of semantic annotation in this framework is to describe the contents of individual documents themselves, assuring the preservation of the semantics during the process of adapting content rendering, as well as to exploit these semantic annotations in a novel user profile-aware content adaptation process. Semantic Web content adaptation is a way of adding value to Web contents and facilitating their repurposing (enhanced browsing, Web Services location and access, etc.).
Abstract: In this paper, we propose a fuzzy aggregate
production planning (APP) model for blending problem in a brass
factory which is the problem of computing optimal amounts of raw
materials for the total production of several types of brass in a
period. The model has deterministic and imprecise parameters which follow triangular possibility distributions. The brass casting APP model cannot always be solved using the common approaches in the literature; therefore, a mathematical model is presented for solving this problem. In the proposed model, Lai and Hwang's fuzzy ranking concept is relaxed by using one constraint instead of three. An application of the brass casting
APP model in a brass factory shows that the proposed model
successfully solves the multi-blend problem in casting process and
determines the optimal raw material purchasing policies.
Abstract: The implementation of new software and hardware technologies in tritium processing nuclear plants, especially those of an experimental character or based on new technology developments, shows a degree of complexity due to issues raised by integrating the high-performance instrumentation and equipment into a unitary monitoring system for the nuclear technological process of tritium removal. Keeping the system flexible is a requirement of experimental nuclear plants, in which changes of configuration, process, and parameters are usual. The large amount of data that must be processed, stored, and accessed for real-time simulation and optimization demands a virtual technological platform on which the data acquisition, control, and analysis systems of the technological process can be integrated with a dedicated technological monitoring system. Thus, the integrated computing and monitoring systems needed to supervise the technological process will be implemented, followed by an optimization system built on new, high-performance methods suited to the technological processes within tritium removal processing plants. The software applications are developed using program packages dedicated to industrial processes and will include acquisition and monitoring sub-modules, termed "virtual", as well as a storage sub-module for the process data later required by the optimization and simulation software for the tritium removal process. The system plays an important role in environmental protection and sustainable development through new technologies, namely the reduction of, and fight against, industrial accidents at tritium processing nuclear plants. Research on monitoring optimization of nuclear processes is also a major driving force for economic and social development.
Abstract: Resource Discovery in Grids is critical for efficient
resource allocation and management. Heterogeneous nature and
dynamic availability of resources make resource discovery a
challenging task. As the number of nodes increases from tens to thousands, scalability becomes essential. Peer-to-Peer (P2P)
techniques, on the other hand, provide effective implementation of
scalable services and applications. In this paper we propose a model
for resource discovery in Condor Middleware by using the four axis
framework defined in the P2P approach. The proposed model enhances Condor to incorporate the functionality of a P2P system, thus aiming to make Condor more scalable, flexible, reliable, and robust.
Abstract: Due to memory leaks, valuable system memory is often wasted and denied to other processes, thereby degrading computational performance. If an application's memory usage exceeds the virtual memory size, it can lead to a system crash. Current memory leak detection techniques for clusters are reactive and display the memory leak information only after the process has finished executing (they detect a memory leak only after it occurs).
This paper presents a Dynamic Memory Monitoring Agent (DMMA) technique. The DMMA framework detects memory leaks while an application is still in its execution phase. When DMMA identifies a memory leak in any process in the cluster, it informs the end users so that they can take corrective actions, and it also submits the affected process to a healthy node in the system, thus providing reliable service to the user. DMMA maintains information about the memory consumption of executing processes, and based on this information and on critical states, DMMA can improve the reliability and effectiveness of cluster computing.
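A much-simplified stand-in for such runtime monitoring is a heuristic that flags sustained growth in a process's sampled memory usage. The window size, threshold, and sample values below are invented for illustration; DMMA's actual detection criteria are not specified in the abstract:

```python
def detect_leak(samples, window=3, growth_threshold=1024):
    """Flag a suspected leak when memory usage grows monotonically by
    more than growth_threshold bytes over the last `window` samples
    (a toy stand-in for an agent's runtime leak heuristic)."""
    if len(samples) < window:
        return False
    recent = samples[-window:]
    monotonic = all(b > a for a, b in zip(recent, recent[1:]))
    return monotonic and (recent[-1] - recent[0]) > growth_threshold

# Hypothetical per-process memory samples (bytes) reported by an agent
healthy = [5000, 5100, 5050, 5080]   # fluctuates, no sustained growth
leaking = [5000, 7000, 9500, 13000]  # grows steadily every sample
```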
Abstract: The purpose of this research is to study motivation factors and their relation to job performance, to compare motivation factors across personal characteristics such as gender, age, income, educational level, marital status, and working duration, and to study the relationship between motivation factors, job performance, and job satisfaction. The sample
groups utilized in this research were 400 Suan Sunandha Rajabhat
University employees. This research is a quantitative research using
questionnaires as the research instrument. The statistics applied for data analysis included percentage, mean, and standard deviation. In addition, difference analysis was conducted by computing t values, one-way analysis of variance, and Pearson's correlation coefficients. The findings showed that the aspects of job promotion and salary were at moderate levels. Additionally, the findings showed that the motivations affecting the revenue branch chiefs' job performance were job security, job accomplishment, policy and management, job promotion, and interpersonal relations.
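For reference, Pearson's correlation coefficient used in the analysis can be computed as follows; the sample scores below are invented for illustration only and are not the study's data:

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson's product-moment correlation coefficient, the statistic
    used to relate motivation factors to job performance."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical 5-point-scale scores for five respondents
motivation  = [3.2, 4.1, 2.8, 4.6, 3.9]
performance = [3.0, 4.3, 2.9, 4.4, 4.0]
r = pearson_r(motivation, performance)  # strongly positive here
```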
Abstract: Wavelet transforms are multiresolution
decompositions that can be used to analyze signals and images.
Image compression is one of the major applications of wavelet transforms in image processing. It is considered one of the most powerful methods, providing a high compression ratio; however, its implementation is very time-consuming. On the other hand, parallel computing technologies offer an efficient means of wavelet-based image compression. In this paper, we propose a parallel wavelet compression algorithm based on quadtrees. We implement the algorithm using MatlabMPI (a parallel, message-passing version of Matlab), compute its isoefficiency function, and show that it is scalable. Our experimental results also confirm the efficiency of the algorithm.
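A single level of the 1-D Haar transform, the building block wavelet compression rests on, can be sketched as follows. This serial sketch assumes the standard Haar filter pair; the paper's quadtree-based parallel algorithm is not reproduced here:

```python
from math import sqrt

def haar_step(signal):
    """One level of the 1-D Haar wavelet transform: pairwise averages
    (approximation) and differences (detail), both scaled by 1/sqrt(2).
    Compression then amounts to thresholding small detail coefficients.
    Assumes an even-length signal."""
    approx = [(signal[i] + signal[i + 1]) / sqrt(2)
              for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / sqrt(2)
              for i in range(0, len(signal), 2)]
    return approx, detail

row = [4.0, 4.0, 8.0, 8.0]
approx, detail = haar_step(row)
# Flat regions yield zero detail coefficients, which compress well.
```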
Abstract: In this paper, we present an algorithm for computing a
Schur factorization of a real nonsymmetric matrix with ordered diagonal blocks such that the upper-left blocks contain the largest-magnitude eigenvalues. Especially in the case of multiple eigenvalues, when the matrix is not diagonalizable, we construct invariant subspaces with a few additional heuristic tricks; numerical results show the stability and accuracy of the algorithm.
Abstract: Over the past few years, a number of efforts have
been exerted to build parallel processing systems that utilize the idle
power of LANs and PCs available in many homes and corporations.
The main advantage of these approaches is that they provide cheap
parallel processing environments for those who cannot afford the
expenses of supercomputers and parallel processing hardware.
However, most of the solutions provided are not very flexible in their use of available resources and are difficult to install and set up.
In this paper, a multi-level web-based parallel processing system
(MWPS) is designed (appendix). MWPS is based on the idea of
volunteer computing, very flexible, easy to setup and easy to use.
MWPS allows three types of subscribers: simple volunteers (single
computers), super volunteers (full networks) and end users. All of
these entities are coordinated transparently through a secure web site.
Volunteer nodes provide the required processing power needed by
the system end users. There is no limit on the number of volunteer
nodes, and accordingly the system can grow indefinitely. Both
volunteer and system users must register and subscribe. Once they
subscribe, each entity is provided with the appropriate MWPS
components. These components are very easy to install.
Super volunteer nodes are provided with special components that
make it possible to delegate some of the load to their inner nodes.
These inner nodes may in turn delegate some of the load to other lower-level inner nodes, and so on. It is the responsibility of the
parent super nodes to coordinate the delegation process and deliver
the results back to the user.
MWPS uses a simple behavior-based scheduler that takes into
consideration the current load and previous behavior of processing
nodes. Nodes that fulfill their contracts within the expected time get a
high degree of trust. Nodes that fail to satisfy their contract get a
lower degree of trust.
MWPS is based on the .NET framework and provides the minimal
level of security expected in distributed processing environments.
Users and processing nodes are fully authenticated. Communications
and messages between nodes are very secure. The system has been
implemented using C#.
MWPS may be used by any group of people or companies to
establish a parallel processing or grid environment.
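The behavior-based trust idea can be sketched as a simple update rule plus a load- and trust-aware node selection. The update formula, weights, and node data below are illustrative assumptions; the abstract does not specify MWPS's exact scheduler:

```python
def update_trust(trust, fulfilled, alpha=0.2):
    """Exponential trust update for a volunteer node: reward contracts
    fulfilled within the expected time, penalize failures
    (an illustrative scheme, not MWPS's actual formula)."""
    target = 1.0 if fulfilled else 0.0
    return (1 - alpha) * trust + alpha * target

def pick_node(nodes):
    """Prefer the least-loaded node, breaking ties by higher trust."""
    return min(nodes, key=lambda n: (n["load"], -n["trust"]))

nodes = [
    {"name": "vol1", "load": 2, "trust": 0.9},
    {"name": "vol2", "load": 1, "trust": 0.4},
    {"name": "vol3", "load": 1, "trust": 0.8},
]
chosen = pick_node(nodes)              # vol3: as lightly loaded as vol2,
                                       # but more trusted
t = update_trust(0.8, fulfilled=True)  # trust rises toward 1.0
```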
Abstract: In general, fuzzy sets are used to analyze fuzzy system reliability. Here, intuitionistic fuzzy set theory is used for analyzing fuzzy system reliability. To analyze the fuzzy system reliability, the reliability of each component of the system is considered as a triangular intuitionistic fuzzy number. Triangular intuitionistic fuzzy numbers and their arithmetic operations are introduced. Expressions for computing the fuzzy reliability of a series system and of a parallel system with triangular intuitionistic fuzzy numbers are described. An imprecise reliability model of the electric network of a dark room is taken as an example; to compute the imprecise reliability of this system, the reliability of each component is represented by a triangular intuitionistic fuzzy number. A numerical example is presented.
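The series-system case can be sketched with componentwise arithmetic on the two triangles of each number. The component values and the componentwise-product approximation below are illustrative assumptions, not the paper's exact expressions:

```python
def tifn_mul(x, y):
    """Approximate product of two triangular intuitionistic fuzzy
    numbers, each written ((a1, a2, a3), (b1, b2, b3)) for the
    membership and non-membership triangles (a common componentwise
    approximation for positive supports)."""
    (xa, xb), (ya, yb) = x, y
    return (tuple(p * q for p, q in zip(xa, ya)),
            tuple(p * q for p, q in zip(xb, yb)))

def series_reliability(components):
    """Series system: the fuzzy reliabilities of the components multiply."""
    r = components[0]
    for c in components[1:]:
        r = tifn_mul(r, c)
    return r

# Two hypothetical components, reliability "around 0.9" and "around 0.8"
r1 = ((0.85, 0.90, 0.95), (0.80, 0.90, 0.97))
r2 = ((0.75, 0.80, 0.85), (0.70, 0.80, 0.90))
r_series = series_reliability([r1, r2])  # series reliability "around 0.72"
```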
Abstract: In this paper, based on the work in [1], we further give a general model for acquiring knowledge. The model first focuses on how and when the things involved in problems are made, and then describes the goals, the energy, and the time required, giving an optimal model for deciding how many related things should be involved. Finally, we acquire knowledge from this model, which contains the attributes, actions, and connections of the things involved both at the time they are created and during their lifetime. This model not only improves AI theories, but also brings effectiveness and accuracy to AI systems, because systems are given more knowledge when reasoning or computing to produce results.