Abstract: To maximize the efficiency of an information management platform and to assist in decision making, the collection, storage, and analysis of performance-relevant data have become of fundamental importance. This paper addresses the merits and drawbacks of the OLAP paradigm for efficiently navigating large volumes of performance measurement data hierarchically. System managers or database administrators navigate through adequately (re)structured measurement data to detect performance bottlenecks, identify causes of performance problems, or assess the impact of configuration changes on the system and its representative metrics. Of particular importance is finding the root cause of an imminent problem that threatens the availability and performance of an information system. Leveraging OLAP techniques, in contrast to traditional static reporting, this can be accomplished in a moderate amount of time and with little processing complexity. It is shown how OLAP techniques can improve the understandability and manageability of measurement data and, hence, the whole performance analysis process.
Abstract: Organizational communication is an administrative
function crucial especially for executives in the implementation of
organizational and administrative functions. Executives spend a
significant part of their time on communicative activities. Whether carrying out daily routines, arranging meeting schedules, speaking on the telephone, reading or replying to business correspondence, or
fulfilling control functions within the organization, an executive typically engages in communication processes.
Efficient communication is the principal device for the adequate implementation of administrative and organizational activities. For
this purpose, management needs to specify the kind of
communication system to be set up and the kind of communication
devices to be used. Communication is vital for any organization.
In conventional offices, communication takes place within the hierarchical pyramid called the organizational structure, and is known as formal or informal communication. Formal communication
is the type that operates within specified structures, following organizational rules and serving organizational goals. Informal communication, on the other hand, is the unofficial type taking place among staff through
face-to-face or telephone interaction.
Communication in virtual as well as conventional offices is
essential for obtaining the right information for administrative
activities and decision-making. Virtual communication technologies
increase the efficiency of communication, especially in virtual teams.
Group communication is strengthened through an inter-group central
channel. Further, the ease of information transmission makes it possible
to access information at its source, allowing efficient and correct decisions. Virtual offices can present as a whole the elements of information which conventional offices produce in separate
environments.
At present, virtual work has become a reality with its pros and
cons, and it will probably spread very rapidly in the coming years, in line
with the growth of information technologies.
Abstract: An integrated vehicle dynamics control system is developed in this paper by combining active front steering (AFS) and direct yaw-moment control (DYC) based on fuzzy logic control. The control system has a hierarchical structure consisting of two layers. A fuzzy logic controller is used in the upper layer (yaw rate controller) to keep the yaw rate at its desired value. The yaw rate error and its rate of change are applied to the upper controlling layer as inputs, while the direct yaw-moment control signal and the steering angle correction of the front wheels are the outputs. In the lower layer (fuzzy integrator), a fuzzy logic controller is designed based on the working region of the lateral tire forces. Depending on the directions of the lateral forces at the front wheels, a switching function is activated to adjust the scaling factor of the fuzzy logic controller. Using a nonlinear seven-degrees-of-freedom vehicle model, the simulation results illustrate the considerable improvements in vehicle handling achieved by the integrated AFS/DYC control system in comparison with the individual AFS or DYC controllers.
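The upper-layer yaw-rate controller described above can be sketched as a minimal fuzzy inference loop. The membership functions, rule table, and output singletons below are illustrative assumptions, not the paper's tuned design:

```python
# Minimal Mamdani/Sugeno-style fuzzy sketch of a yaw-rate controller.
# Inputs: yaw-rate error e and its rate of change de (normalized to [-1, 1]);
# output: a normalized corrective yaw moment.

def tri(x, a, b, c):
    """Triangular membership function with support (a, c) and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Linguistic terms on [-1, 1]: Negative, Zero, Positive.
TERMS = {
    "N": (-2.0, -1.0, 0.0),
    "Z": (-1.0, 0.0, 1.0),
    "P": (0.0, 1.0, 2.0),
}

# Rule base: (e term, de term) -> output singleton (normalized yaw moment).
RULES = {
    ("N", "N"): -1.0, ("N", "Z"): -0.5, ("N", "P"): 0.0,
    ("Z", "N"): -0.5, ("Z", "Z"):  0.0, ("Z", "P"): 0.5,
    ("P", "N"):  0.0, ("P", "Z"):  0.5, ("P", "P"): 1.0,
}

def fuzzy_yaw_moment(e, de):
    """Weighted-average defuzzification over the rule base (AND = min)."""
    num = den = 0.0
    for (te, tde), out in RULES.items():
        w = min(tri(e, *TERMS[te]), tri(de, *TERMS[tde]))
        num += w * out
        den += w
    return num / den if den else 0.0

print(fuzzy_yaw_moment(0.0, 0.0))  # zero error -> no corrective moment
print(fuzzy_yaw_moment(0.8, 0.2))  # large positive error -> positive moment
```

In the paper's scheme the lower layer would additionally scale this output according to the working region of the lateral tire forces.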
Abstract: Image clustering is the process of grouping images
based on their similarity. It usually uses color, texture, edge, or shape
features, or a mixture of these components.
This research aims to explore image clustering using color
composition. To perform such clustering, three main
components should be considered: the color space, the image
representation (feature extraction), and the clustering method itself. We
aim to explore which combination of these factors produces the
best clustering results by combining various techniques from the
three components. The color spaces used are RGB, HSV, and L*a*b*.
The image representations are the Histogram and the Gaussian
Mixture Model (GMM), whereas the clustering methods are K-Means
and the Agglomerative Hierarchical Clustering algorithm. The
experimental results show that the GMM representation combines better
with the RGB and L*a*b* color spaces, whereas the Histogram
combines better with HSV. The experiments also show that K-Means
outperforms Agglomerative Hierarchical Clustering for image clustering.
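The Histogram + K-Means combination above can be sketched with a small k-means over per-image color features. The feature vectors here are toy 3-bin color histograms and the "image" data is made up for illustration:

```python
# Minimal k-means over per-image color-histogram features.
import random

def kmeans(points, k, iters=50, seed=0):
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        # Assign each point to its nearest center (squared Euclidean distance).
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[i].append(p)
        # Recompute centers as cluster means; keep the old center if a cluster is empty.
        new = [tuple(sum(d) / len(c) for d in zip(*c)) if c else centers[i]
               for i, c in enumerate(clusters)]
        if new == centers:
            break
        centers = new
    return centers, clusters

# Toy normalized 3-bin histograms: two "reddish" and two "bluish" images.
feats = [(0.8, 0.1, 0.1), (0.7, 0.2, 0.1), (0.1, 0.1, 0.8), (0.2, 0.1, 0.7)]
centers, clusters = kmeans(feats, k=2)
print([len(c) for c in clusters])  # the reddish and bluish images separate
```

In practice the histograms would be computed in the chosen color space (e.g. HSV) with many more bins, but the clustering step is the same.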
Abstract: Among various HLM techniques, the Multivariate Hierarchical Linear Model (MHLM) is desirable, particularly when multivariate criterion variables are collected and the covariance structure carries information valuable for data analysis. To reflect prior information or to obtain stable results when the sample size and the number of groups are not sufficiently large, the Bayes method has often been employed in hierarchical data analysis. In these cases, although the Markov Chain Monte Carlo (MCMC) method is a rather powerful tool for parameter estimation, MCMC procedures have not been formulated for the MHLM. For this reason, this research presents concrete procedures for parameter estimation through the use of the Gibbs sampler. Lastly, several future topics for the use of the MCMC approach for HLM are discussed.
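The Gibbs-sampling pattern can be illustrated on a far simpler model than the MHLM: a two-level univariate normal hierarchy with known variances, alternating draws from each full conditional. The model, data, and variance values below are assumptions for illustration only:

```python
# Gibbs sampler for y_ij ~ N(theta_j, s2), theta_j ~ N(mu, t2), flat prior on mu.
import random, statistics

def gibbs(groups, s2=1.0, t2=1.0, n_iter=2000, burn=500, seed=1):
    rng = random.Random(seed)
    mu = 0.0
    thetas = [statistics.mean(g) for g in groups]
    draws = []
    for it in range(n_iter):
        # theta_j | rest: conjugate normal update combining data and prior.
        for j, g in enumerate(groups):
            n = len(g)
            post_var = 1.0 / (n / s2 + 1.0 / t2)
            post_mean = post_var * (sum(g) / s2 + mu / t2)
            thetas[j] = rng.gauss(post_mean, post_var ** 0.5)
        # mu | rest ~ N(mean(theta), t2/J) under the flat prior.
        J = len(groups)
        mu = rng.gauss(sum(thetas) / J, (t2 / J) ** 0.5)
        if it >= burn:
            draws.append(mu)
    return statistics.mean(draws)

# Three groups centered near 5: the posterior mean of mu should land near 5.
data = [[4.8, 5.1, 5.3], [4.9, 5.2, 5.0], [5.1, 4.7, 5.4]]
mu_hat = gibbs(data)
print(round(mu_hat, 2))
```

The MHLM procedures in the paper follow the same scheme with multivariate normal and inverse-Wishart conditionals in place of these scalar updates.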
Abstract: Nowadays, the pace of business change is such that,
increasingly, new functionality has to be realized and reliably
installed in a matter of days, or even hours. Consequently, more and
more business processes are subject to continuous change. The
objective of this research in progress is to use the MAP model in a
conceptual modeling method for flexible and adaptive business
processes. This method can be used to capture the flexibility
dimensions of a business process; it takes inspiration from the
modularity concept in the object-oriented paradigm to establish a
hierarchical construction of the BP model. Its intent is to provide
flexible modeling that allows companies to adapt their
business processes quickly.
Abstract: Bone material is heterogeneous and hierarchical in nature; therefore, an appropriate size of bone specimen is required to analyze its tensile properties at a particular hierarchical level. The tensile properties of cortical bone are important for investigating the effects of drug treatment, disease, and aging, as well as for the development of computational and analytical models. In the present study, the tensile properties of buffalo and goat femoral and tibial cortical bone are analyzed using sub-size tensile specimens. Femoral cortical bone was found to be stronger in tension than tibial cortical bone, and the tensile properties obtained using sub-size specimens closely resemble those of full-size cortical specimens. A two-dimensional finite element (FE) model was also applied to simulate the tensile behavior of the sub-size specimens. Good agreement between the experiments and the FE model was obtained for sub-size tensile specimens of cortical bone.
Abstract: A spatial classification technique incorporating a state-of-the-art feature extraction algorithm is proposed in this paper for classifying the heterogeneous classes present in hyperspectral images. The classification accuracy can be improved only if both the feature extraction and the classifier selection are proper. As the classes in hyperspectral images are assumed to have different textures, textural classification is pursued. Run Length feature extraction is employed along with Principal Components and Independent Components. A hyperspectral image of the Indiana site taken by AVIRIS is used for the experiment. Among the original 220 bands, a subset of 120 bands is selected. The Gray Level Run Length Matrix (GLRLM) is calculated for the first forty of the selected bands, and from the GLRLMs the Run Length features for individual pixels are calculated. The Principal Components are calculated for another forty bands, and the Independent Components for the remaining forty bands. As Principal and Independent Components have the ability to represent the textural content of pixels, they are treated as features. The combination of the Run Length features, Principal Components, and Independent Components forms the combined features used for classification. An SVM with a Binary Hierarchical Tree is used to classify the hyperspectral image. Results are validated against ground truth and accuracies are calculated.
Abstract: Biologically, the human brain processes information in both unimodal and multimodal approaches: information is progressively abstracted and seamlessly fused, and the fusion of multimodal inputs allows a holistic understanding of a problem. The proliferation of technology has produced exponentially growing and varied sources of data, a state that can be likened to multimodality in the human brain. This is therefore an inspiration to develop a methodology for exploring multimodal data and identifying multi-view patterns. Specifically, we propose a brain-inspired conceptual model that allows the exploration and identification of patterns at different levels of granularity, different types of hierarchies, and different types of modalities. A structurally adaptive neural network is deployed to implement the proposed model. Furthermore, the acquisition of multi-view patterns with the proposed model is demonstrated and discussed with experimental results.
Abstract: Automated rule discovery is, due to its applicability, one of the most fundamental and important methods in KDD, and it has been an active research area in the recent past. Hierarchical representation allows us to easily manage the complexity of knowledge, to view the knowledge at different levels of detail, and to focus our attention on the interesting aspects only. One such efficient and easy-to-understand system is the Hierarchical Production Rule (HPR) system. An HPR, a standard production rule augmented with generality and specificity information, is of the form: Decision If <condition> Generality <generality information> Specificity <specificity information>. HPR systems are capable of handling the taxonomical structures inherent in knowledge about the real world. This paper focuses on the issue of mining quantified rules with a crisp hierarchical structure using a Genetic Programming (GP) approach to knowledge discovery. The post-processing scheme presented in this work uses quantified production rules as the initial individuals of GP and discovers the hierarchical structure. In the proposed approach, rules are quantified using Dempster-Shafer theory. Suitable genetic operators are proposed for the suggested encoding, and an appropriate fitness function based on the Subsumption Matrix (SM) is suggested. Finally, Quantified Hierarchical Production Rules (HPRs) are generated from the discovered hierarchy using Dempster-Shafer theory. Experimental results are presented to demonstrate the performance of the proposed algorithm.
Abstract: In this paper, we propose an efficient hierarchical DNA
sequence search method to improve search speed while keeping
accuracy constant. For a given query DNA sequence,
a fast local search method using histogram features is first used as a
filtering mechanism before scanning the sequences in the database.
Overlapping processing is newly added to improve the robustness
of the algorithm. A large number of DNA sequences with low
similarity are thereby excluded from later searching. The Smith-Waterman
algorithm is then applied to each remaining sequence. Experimental
results using GenBank sequence data show that the proposed method,
combining histogram information with the Smith-Waterman algorithm, is
more efficient for DNA sequence search.
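The two-stage idea can be sketched as follows: a cheap nucleotide-histogram filter first, then Smith-Waterman local alignment on the surviving sequences. The scoring parameters and similarity threshold are illustrative assumptions, not the paper's values, and the overlapping-window refinement is omitted:

```python
# Histogram filter + Smith-Waterman search over a toy DNA database.

def histogram(seq):
    """Normalized base-composition histogram of a DNA sequence."""
    return {b: seq.count(b) / len(seq) for b in "ACGT"}

def hist_similarity(s1, s2):
    """Histogram intersection in [0, 1]; 1 means identical base composition."""
    h1, h2 = histogram(s1), histogram(s2)
    return sum(min(h1[b], h2[b]) for b in "ACGT")

def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
    """Classic O(len(a)*len(b)) local-alignment score."""
    rows = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    best = 0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            diag = rows[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch)
            rows[i][j] = max(0, diag, rows[i-1][j] + gap, rows[i][j-1] + gap)
            best = max(best, rows[i][j])
    return best

def search(query, database, threshold=0.7):
    # Stage 1: exclude sequences whose base composition differs too much.
    candidates = [s for s in database if hist_similarity(query, s) >= threshold]
    # Stage 2: rank the remainder by Smith-Waterman score.
    return sorted(candidates, key=lambda s: smith_waterman(query, s), reverse=True)

db = ["ACGTACGTAA", "TTTTTTTTTT", "ACGTACGTAC"]
print(search("ACGTACGT", db))  # the all-T sequence is filtered out in stage 1
```

The filter is linear in sequence length, so the quadratic alignment cost is paid only for plausible candidates.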
Abstract: This study searched for a desirable direction for
sidewalk planning in Korea by establishing the concepts of
walking and pedestrian space and analyzing advanced precedents
at home and abroad. Based on the precedent studies and the
relevant laws, regulations, and systems, it followed a
sequential process: first, design elements were derived from the
functions and characteristics of sidewalks, similar elements were
clustered by their characteristics, and representative
characteristics were sampled and arranged hierarchically; then, their
significance was analyzed via a first questionnaire survey, and the relative
weights and priorities of each element via the Analytic Hierarchy
Process (AHP); finally, based on the analysis results, a framework was
established for suggesting policy directions to improve the pedestrian
environment of sidewalks in urban commercial districts for the future
planning and design of pedestrian space.
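The AHP priority computation used in the study can be sketched with the common row-geometric-mean approximation of the principal eigenvector, plus a consistency check. The 3x3 comparison matrix below is a made-up example, not the study's survey data:

```python
# AHP: pairwise comparison matrix -> relative weights + consistency ratio.
import math

def ahp_weights(M):
    """Approximate the principal eigenvector by normalized row geometric means."""
    gm = [math.prod(row) ** (1.0 / len(row)) for row in M]
    total = sum(gm)
    return [g / total for g in gm]

def consistency_ratio(M, weights, ri=0.58):
    """CR = CI / RI, with RI = 0.58 for a 3x3 matrix (Saaty's random index)."""
    n = len(M)
    # lambda_max estimated by comparing M.w with w, row by row.
    lam = sum(sum(M[i][j] * weights[j] for j in range(n)) / weights[i]
              for i in range(n)) / n
    ci = (lam - n) / (n - 1)
    return ci / ri

# Hypothetical pairwise comparison of three sidewalk design criteria.
M = [[1,   3,   5],
     [1/3, 1,   3],
     [1/5, 1/3, 1]]
w = ahp_weights(M)
print([round(x, 3) for x in w])           # weights sum to 1; criterion 1 dominates
print(round(consistency_ratio(M, w), 3))  # CR < 0.1 means acceptably consistent
```

A CR below 0.1 is the usual acceptance rule before the weights are used for prioritization.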
Abstract: The operational safety of critical systems, such as nuclear power plants, industrial chemical processes, and means of transportation, is a major concern for system engineers and operators. One means of assuring it is on-line safety monitors that deliver three safety tasks: fault detection and diagnosis, alarm annunciation, and fault control. While current monitors deliver these tasks, both benefits and limitations of their approaches have been highlighted. Drawing on those benefits, this paper develops a distributed monitor based on semi-independent agents, i.e. a multi-agent system, with monitoring knowledge derived from a safety assessment model of the monitored system. Agents are deployed hierarchically and provided with knowledge portions and collaboration protocols to reason about and integrate the operational conditions of the components of the monitored system. The monitor aims to address the limitations arising from the large-scale, complicated behaviour and distributed nature of monitored systems and to deliver the three monitoring tasks effectively.
Abstract: The disposal of health-care waste (HCW) is considered
an important environmental problem, especially in large cities.
Multiple criteria decision making (MCDM) techniques are apt to deal
with the quantitative and qualitative considerations of health-care
waste management (HCWM) problems. This research proposes a
fuzzy multi-criteria group decision making approach with a multi-level
hierarchical structure including qualitative as well as
quantitative performance attributes for evaluating HCW disposal
alternatives for Istanbul. Using the entropy weighting method,
objective weights as well as subjective weights are taken into account
to determine the importance weights of the quantitative performance
attributes. The results obtained using the proposed methodology are
thoroughly analyzed.
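The entropy weighting step can be sketched as follows: objective weights for quantitative attributes are derived from the dispersion of each column of the normalized decision matrix. The alternatives-by-attributes matrix below is made up for illustration; attributes with no variation get zero weight:

```python
# Entropy weighting for a decision matrix (rows: alternatives, cols: attributes).
import math

def entropy_weights(X):
    m, n = len(X), len(X[0])
    k = 1.0 / math.log(m)          # normalizes entropy to [0, 1]
    divs = []
    for j in range(n):
        col = [X[i][j] for i in range(m)]
        s = sum(col)
        p = [v / s for v in col]   # column normalized to a probability vector
        e = -k * sum(pi * math.log(pi) for pi in p if pi > 0)
        divs.append(1.0 - e)       # divergence: more spread -> more information
    total = sum(divs)
    return [d / total for d in divs]

# Hypothetical HCW disposal alternatives vs. quantitative attributes.
X = [[100, 5, 0.2],
     [120, 5, 0.8],
     [ 90, 5, 0.5]]
w = entropy_weights(X)
print([round(x, 3) for x in w])  # the constant attribute gets zero weight
```

In the proposed approach these objective weights would then be combined with the subjective (expert-assigned) weights.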
Abstract: Over the past few years, a number of efforts have
been exerted to build parallel processing systems that utilize the idle
power of the LANs and PCs available in many homes and corporations.
The main advantage of these approaches is that they provide cheap
parallel processing environments for those who cannot afford the
expense of supercomputers and parallel processing hardware.
However, most of the solutions provided are not very flexible in their
use of available resources and are very difficult to install and set up.
In this paper, a multi-level web-based parallel processing system
(MWPS) is designed (see appendix). MWPS is based on the idea of
volunteer computing; it is very flexible, easy to set up, and easy to use.
MWPS allows three types of subscribers: simple volunteers (single
computers), super volunteers (full networks) and end users. All of
these entities are coordinated transparently through a secure web site.
Volunteer nodes provide the required processing power needed by
the system end users. There is no limit on the number of volunteer
nodes, and accordingly the system can grow indefinitely. Both
volunteers and system users must register and subscribe. Once they
subscribe, each entity is provided with the appropriate MWPS
components. These components are very easy to install.
Super volunteer nodes are provided with special components that
make it possible to delegate some of the load to their inner nodes.
These inner nodes may in turn delegate some of the load to other,
lower-level inner nodes, and so on. It is the responsibility of the
parent super nodes to coordinate the delegation process and deliver
the results back to the user.
MWPS uses a simple behavior-based scheduler that takes into
consideration the current load and previous behavior of processing
nodes. Nodes that fulfill their contracts within the expected time get a
high degree of trust. Nodes that fail to satisfy their contract get a
lower degree of trust.
MWPS is based on the .NET framework and provides the minimal
level of security expected in distributed processing environments.
Users and processing nodes are fully authenticated. Communications
and messages between nodes are very secure. The system has been
implemented using C#.
MWPS may be used by any group of people or companies to
establish a parallel processing or grid environment.
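The behaviour-based scheduling idea above can be sketched as a trust score per node that rises when a contract is fulfilled on time and falls when it is not, with tasks going to the most trusted, least loaded node. The update constants and class names are illustrative assumptions, not MWPS internals:

```python
# Behaviour-based trust scheduling sketch.

class Node:
    def __init__(self, name):
        self.name = name
        self.trust = 0.5   # new nodes start with neutral trust
        self.load = 0

    def report(self, fulfilled_on_time):
        # Bounded update keeps trust within (0, 1): slow to earn, quick to lose.
        if fulfilled_on_time:
            self.trust += 0.1 * (1.0 - self.trust)
        else:
            self.trust -= 0.2 * self.trust
        self.load = max(0, self.load - 1)

class Scheduler:
    def __init__(self, nodes):
        self.nodes = nodes

    def assign(self):
        # Prefer high trust; break ties by lower current load.
        node = max(self.nodes, key=lambda n: (n.trust, -n.load))
        node.load += 1
        return node

a, b = Node("a"), Node("b")
sched = Scheduler([a, b])
for _ in range(3):
    a.report(True)    # "a" fulfils every contract on time
    b.report(False)   # "b" keeps missing its deadlines
print(sched.assign().name)  # the trustworthy node is chosen
```

Real MWPS scheduling would also weigh current load and contract size, but the trust dynamics are the core of the behaviour-based idea.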
Abstract: Quantitative trait loci (QTL) experiments have yielded
important biological and biochemical information necessary for
understanding the relationship between genetic markers and
quantitative traits. For many years, most QTL algorithms only
allowed one observation per genotype. Recently, there has been an
increasing demand for QTL algorithms that can accommodate more
than one observation per genotypic distribution. The Bayesian
hierarchical model is very flexible and can easily incorporate this
information into the model. Herein, a methodology is presented that
uses a Bayesian hierarchical model to capture the complexity of the
data. Furthermore, the Markov chain Monte Carlo model composition
(MC3) algorithm is used to search for and identify important markers. An
extensive simulation study illustrates that the method captures the
true QTL, even under non-normal noise and with up to 6 QTL.
Abstract: Video-on-demand (VOD) systems are designed using content delivery networks (CDNs) to minimize the overall operational cost and to maximize scalability. Estimation of the viewing pattern (i.e., the relationship between the number of viewings and the ranking of VOD contents) plays an important role in minimizing the total operational cost and maximizing the performance of VOD systems. In this paper, we have analyzed a large body of commercial VOD viewing data and found that the viewing rank distribution fits well with the parabolic fractal distribution. A weighted linear model fitting function is used to estimate the parameters (coefficients) of the parabolic fractal distribution. This paper presents an analytical basis for designing an optimal hierarchical VOD content distribution system in terms of its cost and performance.
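The estimation step can be sketched as fitting the parabolic fractal law, log(y) = a + b·log(r) + c·log(r)², by weighted least squares. The data below is synthetic (the commercial viewing data is not public) and the rank-based weighting is an illustrative choice:

```python
# Weighted quadratic fit in log-log space for the parabolic fractal law.
import numpy as np

# Synthetic, noise-free viewing counts following a parabolic fractal.
a_true, b_true, c_true = 10.0, -0.8, -0.05
ranks = np.arange(1, 101)
logs = np.log(ranks)
views = np.exp(a_true + b_true * logs + c_true * logs ** 2)

# Weighted fit: top-ranked titles get higher weight, as they dominate cost.
weights = 1.0 / ranks
c_hat, b_hat, a_hat = np.polyfit(logs, np.log(views), deg=2, w=weights)

print(round(a_hat, 3), round(b_hat, 3), round(c_hat, 3))
```

With the fitted coefficients, the expected viewings of any rank can be predicted, which is what the cost/performance design of the hierarchical distribution system builds on.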
Abstract: Automated discovery of hierarchical structures in
large data sets has been an active research area in the recent past.
This paper focuses on the issue of mining generalized rules with a crisp
hierarchical structure using a Genetic Programming (GP) approach to
knowledge discovery. The post-processing scheme presented in this
work uses flat rules as initial individuals of GP and discovers
hierarchical structure. Suitable genetic operators are proposed for the
suggested encoding. Based on the Subsumption Matrix (SM), an
appropriate fitness function is suggested. Finally, Hierarchical
Production Rules (HPRs) are generated from the discovered
hierarchy. Experimental results are presented to demonstrate the
performance of the proposed algorithm.
Abstract: In this paper we present a way of controlling
concurrent access to data in a distributed application using the
Pessimistic Offline Lock design pattern. In our case, the application
processes a complex entity, which contains, in a hierarchical structure,
various other entities (objects). It is shown how the complex
entity and the contained entities must be locked in order to control
concurrent access to the data.
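The hierarchical locking idea can be sketched as follows: locking the root entity acquires locks on every contained entity first (all-or-nothing), so no concurrent session can edit a child while the parent is being processed. Class and method names are illustrative, not from the paper:

```python
# Pessimistic Offline Lock sketch for a hierarchical entity.

class LockManager:
    def __init__(self):
        self._owners = {}  # entity id -> owning session id

    def acquire(self, session, entity_id):
        owner = self._owners.get(entity_id)
        if owner is not None and owner != session:
            raise RuntimeError(f"{entity_id} already locked by {owner}")
        self._owners[entity_id] = session

    def release_all(self, session):
        self._owners = {k: v for k, v in self._owners.items() if v != session}

def walk(entity):
    """Yield an entity and, recursively, every entity it contains."""
    yield entity
    for child in entity.get("children", []):
        yield from walk(child)

def lock_tree(manager, session, entity):
    """Lock the whole hierarchy atomically: back out on any conflict."""
    acquired = []
    try:
        for node in walk(entity):
            manager.acquire(session, node["id"])
            acquired.append(node["id"])
    except RuntimeError:
        manager.release_all(session)  # all-or-nothing
        raise
    return acquired

order = {"id": "order-1", "children": [{"id": "line-1"}, {"id": "line-2"}]}
mgr = LockManager()
ids = lock_tree(mgr, "session-A", order)
print(ids)  # the parent and both contained entities are locked
```

In a real distributed application the lock table would live in shared storage (e.g. a database table) rather than in memory, which is exactly what makes the lock "offline".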
Abstract: This paper describes the design and results of FROID,
an outbound intrusion detection system built with agent technology
and supported by an attacker-centric ontology. The prototype
features a misuse-based detection mechanism that identifies remote
attack tools in execution. Misuse signatures, composed of attributes
selected through entropy analysis of outgoing traffic streams and
process runtime data, are derived from execution variants of attack
programs. The core of the architecture is a mesh of self-contained
detection cells, organized non-hierarchically, that group agents in a
functional fashion. The experiments show performance gains when
the ontology is enabled, as well as an increase in accuracy
when correlation cells combine detection evidence received from
independent detection cells.