Abstract: A high-performance computer includes a fast processor and millions of bytes of memory. During data processing, huge amounts of information are shuffled between the memory and the processor. Because of its small size and its speed, the cache has become a common feature of high-performance computers. Enhancing cache performance has proved essential to speeding up cache-based computers. Most enhancement approaches can be classified as either software-based or hardware-controlled. Cache performance is quantified in terms of the hit ratio or miss ratio. In this paper, we optimize cache performance by enhancing the cache hit ratio. Optimum cache performance is obtained by modifying the cache hardware so that mismatched line tags are rejected quickly at the hit-or-miss comparison stage, thus achieving a low hit time for the wanted line in the cache. In the proposed technique, which we call Even-Odd Tabulation (EOT), the cache lines coming from main memory into the cache are classified into two types, even line tags and odd line tags, depending on the least significant bit (LSB) of the tag. The EOT technique exploits this division to reject mismatched line tags in far less time than the main cache comparator requires, giving an optimal hit time for the wanted cache line. The simulation results show the high performance of the EOT technique against the familiar mapping technique FAM.
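The quick-rejection mechanism can be illustrated in software. The following Python sketch is a functional model only, not the paper's hardware design: it assumes a fully associative tag store and shows how partitioning tags by their least significant bit lets a lookup bypass roughly half of the candidate tags before any full comparison. The class and method names are hypothetical.

```python
# Functional model of the EOT idea: partition stored tags by LSB so
# that, on a lookup, half of the candidates are rejected before the
# (expensive) full tag comparison ever runs.

class EOTCache:
    def __init__(self):
        # Two tag stores, indexed by the tag's least significant bit.
        self.tag_stores = {0: set(), 1: set()}

    def insert(self, tag):
        self.tag_stores[tag & 1].add(tag)

    def lookup(self, tag):
        # Quick rejection: only tags with a matching LSB reach the
        # full comparator; the other half are skipped outright.
        return tag in self.tag_stores[tag & 1]

cache = EOTCache()
cache.insert(0b10110)          # even tag (LSB = 0)
print(cache.lookup(0b10110))   # True: only the even store is searched
print(cache.lookup(0b10111))   # False: the even store is never touched
```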
Abstract: In this paper, we present a new algorithm for clustering data in large datasets using image processing approaches. First, the dataset is mapped onto a binary image plane. The synthesized image is then processed with efficient image processing techniques to cluster the data in the dataset. Hence, the algorithm avoids an exhaustive search to identify clusters. The algorithm considers only a small subset of the data, one that contains the critical boundary information sufficient to identify the contained clusters. Compared to available data clustering techniques, the proposed algorithm produces results of similar quality and outperforms them in execution time and storage requirements.
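As an illustration of the idea, the following minimal Python sketch assumes that "mapping to a binary image plane" means rasterizing 2D points onto a grid, and it uses morphological smoothing plus connected-component labeling as a stand-in for the paper's image processing stage; all names and parameters are illustrative.

```python
# Sketch: rasterize points to a binary image, smooth, and take
# connected components as clusters (a stand-in for the paper's stage).
import numpy as np
from scipy import ndimage

def cluster_via_image(points, grid=64):
    pts = np.asarray(points, dtype=float)
    # Normalize points into [0, grid-1] and rasterize to a binary image.
    lo, hi = pts.min(axis=0), pts.max(axis=0)
    ij = ((pts - lo) / (hi - lo + 1e-12) * (grid - 1)).astype(int)
    img = np.zeros((grid, grid), dtype=bool)
    img[ij[:, 0], ij[:, 1]] = True
    # Light dilation bridges gaps between pixels of the same cluster.
    img = ndimage.binary_dilation(img, iterations=2)
    # Connected-component labeling identifies the clusters.
    labels, n = ndimage.label(img)
    return labels[ij[:, 0], ij[:, 1]], n

data = np.vstack([np.random.randn(100, 2), np.random.randn(100, 2) + 8])
assignments, n_clusters = cluster_via_image(data)
print(n_clusters)  # ~2 for this well-separated synthetic data
```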
Abstract: Concept maps can be generated manually or
automatically. It is important to recognize the differences between the two
types of concept maps. The automatically generated concept maps
are dynamic, interactive, and full of associations between the terms
on the maps and the underlying documents. Through a specific
concept mapping system, Visual Concept Explorer (VCE), this paper
discusses how automatically generated concept maps are different
from manually generated concept maps and how different
applications and learning opportunities might be created with the
automatically generated concept maps. The paper presents several
examples of learning strategies that take advantage of the
automatically generated concept maps for concept learning and
exploration.
Abstract: Previous algorithms for generating and mapping 3D model textures from multi-view images have issues in texture chart generation, namely self-intersection and concentration of the texture in texture space. They may also suffer from problems due to occluded areas, such as the inner parts of the thighs. In this paper we propose a texture mapping technique for 3D models using multi-view images on the GPU. We perform texture mapping directly in the GPU fragment shader, per pixel, without generating a texture map, and we resolve occluded areas using the 3D model's depth information. Our method needs more computation on the GPU than previous work, but it has shown real-time performance, and the previously mentioned problems do not occur.
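A CPU-side sketch of the per-pixel selection can be given in Python (the paper performs this in a GPU fragment shader). The depth comparison used to skip occluded views is the standard shadow-map-style test; the data layout, the blending by simple averaging, and all names below are assumptions.

```python
# Per-pixel view selection with a depth test, CPU sketch of what a
# fragment shader would do for each rasterized surface point.
import numpy as np

def shade_point(p, views, eps=1e-3):
    """views: dicts with 'project' (3D -> pixel coords + depth),
    a per-view 'depth_map', and the view 'image'."""
    samples = []
    for v in views:
        u, vpix, depth = v["project"](p)          # project point into view
        stored = v["depth_map"][vpix, u]          # depth of nearest surface
        if depth <= stored + eps:                 # visible in this view?
            samples.append(v["image"][vpix, u])   # sample its color
    # Occluded views contribute nothing; blend the visible ones.
    return np.mean(samples, axis=0) if samples else np.zeros(3)

# Toy single view: orthographic projection onto the x-y plane.
view = {"project": lambda p: (int(p[0]), int(p[1]), p[2]),
        "depth_map": np.full((4, 4), 5.0),
        "image": np.ones((4, 4, 3))}
print(shade_point(np.array([1.0, 2.0, 5.0]), [view]))  # visible -> white
```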
Abstract: Mapping between local and global coordinates is an important issue in the finite element method, as all calculations are performed in local coordinates. The concern arises when sub-parametric elements are used, in which the shape functions of the field variable and of the geometry of the element are not the same. This is particularly the case for C* elements, in which the extra degrees of freedom added to the nodes make the elements sub-parametric. In the present work, the transformation matrix for C1* (an 8-noded hexahedron element with 12 degrees of freedom at each node) is obtained using equivalent C0 elements (with the same number of degrees of freedom). The convergence rate of the 8-noded C1* element is nearly equal to that of its equivalent C0 element, while it consumes less CPU time than the C0 element. The existence of derivative degrees of freedom at the nodes of the C1* element, along with its excellent convergence, makes it superior to its equivalent C0 element.
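For reference, the local-to-global mapping at issue can be stated in standard form (a textbook formulation, not reproduced from the paper): the geometry and the field variable are interpolated with different shape functions, which is what makes the element sub-parametric.

```latex
% Sub-parametric element: geometry uses M shape functions \hat{N}_j,
% the field variable uses n (> M) shape functions N_i, and the
% Jacobian J links derivatives in local and global coordinates.
\mathbf{x}(\xi,\eta,\zeta) = \sum_{j=1}^{M} \hat{N}_j(\xi,\eta,\zeta)\,\mathbf{x}_j,
\qquad
u(\xi,\eta,\zeta) = \sum_{i=1}^{n} N_i(\xi,\eta,\zeta)\,u_i,
\qquad
J_{kl} = \frac{\partial x_l}{\partial \xi_k}
```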
Abstract: Cosmic showers, during their transit through space, produce sub-products as a result of interactions with the intergalactic or interstellar medium, which, after entering the earth's atmosphere, generate secondary particles called extensive air showers (EAS). The detection and analysis of high-energy particle showers involve a plethora of theoretical and experimental work, with a host of constraints that result in measurement inaccuracies. Therefore, there is a need to develop a readily available system based on soft-computational approaches that can be used for EAS analysis. This is due to the fact that soft-computational tools such as artificial neural networks (ANNs) can be trained as classifiers to adapt to and learn the surrounding variations. But single classifiers fail to reach optimal decision making in many situations, for which multiple classifier systems (MCS) are preferred, enhancing the system's ability to make decisions that adjust to finer variations. This work describes the formation of an MCS using a multilayer perceptron (MLP), a recurrent neural network (RNN) and a probabilistic neural network (PNN), with data inputs from correlation-mapping self-organizing map (SOM) blocks and the output optimized by another SOM. The results show that the setup can be adopted in real-time practical applications for predicting the primary energy and location of EAS from density values captured by detectors in a circular grid.
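As a minimal illustration of the multiple-classifier stage, the Python sketch below combines the class-probability outputs of three classifiers by simple averaging; in the paper this combination stage is optimized by a SOM, and the stand-in classifiers here are hypothetical.

```python
# Minimal multiple-classifier system: average the class-probability
# vectors of several trained classifiers and take the argmax.
import numpy as np

def mcs_predict(classifiers, x):
    # Each classifier returns a probability vector over the classes.
    probs = np.array([clf(x) for clf in classifiers])
    return int(np.argmax(probs.mean(axis=0)))  # combined decision

# Hypothetical stand-ins for the trained MLP, RNN and PNN:
mlp = lambda x: np.array([0.7, 0.3])
rnn = lambda x: np.array([0.4, 0.6])
pnn = lambda x: np.array([0.8, 0.2])
print(mcs_predict([mlp, rnn, pnn], x=None))  # -> 0
```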
Abstract: An effective approach to realizing a binary tree structure representing combinational logic functionality with enhanced throughput is discussed in this paper. The optimization of the maximum operating frequency was achieved through delay minimization, which in turn was made possible by reducing the depth of the binary network. The proposed synthesis methodology has been validated by experimentation with FPGAs as the target technology. Though our proposal is technology-independent, the heuristic enables better throughput optimization, even after technology mapping, for Boolean functions whose reduced CNF form has a lower literal cost than their reduced DNF form at the Boolean equation level. In other cases, our method converges to results similar to those of [12]. The practical results obtained for a variety of case studies demonstrate an improvement in the maximum throughput rate for the Spartan IIE (XC2S50E-7FT256) and Spartan 3 (XC3S50-4PQ144) FPGA logic families of 10.49% and 13.68%, respectively. With respect to the LUTs and IOBUFs required for the physical implementation of the requisite non-regenerative logic functionality, the proposed method enabled savings of 44.35% and 44.67%, respectively, over the existing efficient method available in the literature [12].
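The depth-reduction principle can be sketched independently of the technology mapping: an associative Boolean operation over n literals realized as a balanced binary tree has depth ceil(log2 n) rather than the n-1 of a linear chain, which is what shortens the critical path. The Python sketch below is illustrative only and does not reproduce the paper's CNF/DNF-aware heuristic.

```python
# Build a minimum-depth expression tree for an associative Boolean
# operation; a balanced tree over n literals has depth ceil(log2 n),
# versus n-1 for a left-to-right chain.
from math import ceil, log2

def balanced_tree(literals, op="AND"):
    """Return a nested-tuple expression tree of minimum depth."""
    if len(literals) == 1:
        return literals[0]
    mid = len(literals) // 2
    return (op, balanced_tree(literals[:mid], op),
                balanced_tree(literals[mid:], op))

inputs = ["a", "b", "c", "d", "e", "f", "g", "h"]
print(balanced_tree(inputs))       # depth 3 = ceil(log2 8)
print(ceil(log2(len(inputs))))     # vs. depth 7 for a chained a.b.c...
```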
Abstract: Cluster analysis is the name given to a diverse collection of techniques that can be used to classify objects (e.g. individuals, quadrats, species, etc.). While Kohonen's Self-Organizing Feature Map (SOFM) or Self-Organizing Map (SOM) networks have been successfully applied as a classification tool to various problem domains, including speech recognition, image data compression, image or character recognition, robot control and medical diagnosis, their potential as a robust substitute for cluster analysis remains relatively unresearched. SOM networks combine competitive learning with dimensionality reduction by smoothing the clusters with respect to an a priori grid, and they provide a powerful tool for data visualization. In this paper, SOM is used to create a toroidal mapping of a two-dimensional lattice to perform cluster analysis on the results of a chemical analysis of wines produced in the same region of Italy but derived from three different cultivars, referred to as the "wine recognition data" located in the University of California, Irvine database. The results are encouraging, and it is believed that SOM would make an appealing and powerful decision-support tool for clustering tasks and for data visualization.
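For concreteness, the sketch below implements a small SOM from scratch with a toroidal (wrap-around) lattice distance, which is the property referred to above; the grid size, training schedule and random stand-in data are placeholders rather than the paper's settings.

```python
# From-scratch SOM with a toroidal 2D lattice: the neighborhood
# distance wraps around both grid edges, so the map has no border.
import numpy as np

def train_toroidal_som(data, rows=6, cols=6, epochs=200, lr=0.5, sigma=2.0):
    rng = np.random.default_rng(0)
    w = rng.normal(size=(rows, cols, data.shape[1]))
    r, c = np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij")
    for t in range(epochs):
        x = data[rng.integers(len(data))]
        # Best matching unit (BMU) by Euclidean distance in feature space.
        bi, bj = np.unravel_index(
            np.argmin(((w - x) ** 2).sum(axis=2)), (rows, cols))
        # Toroidal lattice distance: wrap around both grid edges.
        dr = np.minimum(abs(r - bi), rows - abs(r - bi))
        dc = np.minimum(abs(c - bj), cols - abs(c - bj))
        h = np.exp(-(dr ** 2 + dc ** 2) / (2 * sigma ** 2))
        w += lr * (1 - t / epochs) * h[..., None] * (x - w)
    return w

X = np.random.rand(178, 13)        # stand-in for the 13-feature wine data
weights = train_toroidal_som(X)
```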
Abstract: This article is an extension and a practical application approach of Wheeler's NEBIC theory (Net Enabled Business Innovation Cycle). NEBIC theory is a new approach in IS research and can be used in dynamic environments related to new technology. Firms can follow market changes rapidly with the support of IT resources. Flexible firms adapt their market strategies and respond more quickly to customers' changing behaviors. When every leading firm in an industry has access to the same IT resources, the way those IT resources are managed will determine a firm's competitive advantages or disadvantages. From the dynamic capabilities perspective and from the NEBIC theory newly introduced by Wheeler, we know that IT resources alone cannot deliver customer value, but a good configuration of those resources can guarantee customer value by choosing the right emerging technology and grasping economic opportunities through business innovation and growth. We found evidence in the literature that SOA (Service Oriented Architecture) is a promising emerging technology that can deliver the desired economic opportunity through modularity, flexibility and loose coupling. SOA can also help firms connect in networks, which can open a new window of opportunity to collaborate in innovation and in the right kind of outsourcing.
Abstract: In this paper, a semi-fragile watermarking scheme is proposed for color image authentication. In this scheme, the color image is first transformed from the RGB to the YST color space, which is suitable for watermarking color media. Each channel is divided into 4×4 non-overlapping blocks, and each of their 2×2 sub-blocks is selected. The embedding space is created by setting the two LSBs of the selected sub-block to zero; these will hold the authentication and recovery information. For verification, authentication and parity bits, denoted by 'a' and 'p', are computed for each 2×2 sub-block. For recovery, the intensity mean of each 2×2 sub-block is computed and encoded in six to eight bits, depending on the channel selected. The size of the sub-block is important for correct localization and fast computation. For watermark distribution, a 2D torus automorphism is implemented using a private key to obtain a secure mapping of blocks. The perceptibility of the watermarked image is quite reasonable, both subjectively and objectively. Our scheme is oblivious, correctly localizes tampering, and is able to recover the original work with probability near one.
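The embedding-space step can be sketched as follows. The Python fragment handles only one 2×2 sub-block: it clears the two LSBs of each pixel and writes a 6-bit intensity mean into them, leaving one pixel's LSBs free for the 'a' and 'p' bits; the color transform, parity computation and torus automorphism are omitted, and the exact bit layout shown is an assumption.

```python
# Sketch of the embedding-space step for one 2x2 sub-block: clear the
# two LSBs of each pixel, then store the sub-block's 6-bit intensity
# mean in three of the freed 2-bit slots.
import numpy as np

def embed_subblock(sub):                 # sub: 2x2 uint8 intensity block
    mean6 = int(sub.mean()) >> 2         # intensity mean encoded in 6 bits
    bits = [(mean6 >> k) & 0b11 for k in (4, 2, 0)]  # three 2-bit chunks
    out = (sub & 0b11111100).ravel()     # zero the two LSBs of each pixel
    for i, b in enumerate(bits):         # place chunks in first 3 pixels;
        out[i] |= b                      # the 4th pixel is left for 'a'/'p'
    return out.reshape(2, 2)

block = np.array([[120, 122], [119, 121]], dtype=np.uint8)
print(embed_subblock(block))
```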
Abstract: This paper proposes an Interactive Chinese Character Learning System (ICCLS) based on pictorial evolution as an edutainment concept in computer-based language learning. The origination of the language itself is taken as the learning platform, owing to the complexity of Chinese compared to other types of languages. Users, especially children, enjoy this learning system more because they are able to memorize Chinese characters easily and better understand their origins in a pleasurable learning environment, compared to the traditional approach in which children must learn Chinese characters by rote in an unpleasant environment. Skeletonization is used as the representation of Chinese characters and objects, with an animated pictograph evolution to facilitate learning of the language. A shortest-skeleton-path matching technique is employed for fast and accurate matching in our implementation. The user is required to either write a word or draw a simple 2D object in the input panel, and the matched word and object are displayed, along with the pictograph evolution, to instill learning. The target of this computer-based learning system is pre-school children between 4 and 6 years old, allowing them to learn Chinese characters in a flexible and entertaining manner while utilizing visual and mind-mapping strategies as the learning methodology.
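The skeletonization step might look like the following minimal sketch, which assumes a binarized grayscale image of a handwritten character; the paper does not specify an implementation, so skimage's skeletonize is used purely for illustration.

```python
# Skeletonize a handwritten character: threshold to a binary stroke
# mask, then thin it to a one-pixel-wide skeleton.
import numpy as np
from skimage.morphology import skeletonize

def character_skeleton(gray, threshold=128):
    binary = np.asarray(gray) < threshold   # dark strokes on light paper
    return skeletonize(binary)              # 1-pixel-wide stroke skeleton

img = np.full((9, 9), 255); img[4] = 0      # toy "stroke": one dark bar
print(character_skeleton(img).astype(int))
# The skeleton is then the representation on which
# shortest-skeleton-path matching would operate.
```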
Abstract: Genetic Folding (GF), a new class of evolutionary algorithm (EA), is introduced for the first time. It is based on chromosomes composed of floating genes structurally organized in a parent form and separated by dots. The genotype/phenotype system of GF generates a kernel expression, which is the objective function of a superior classifier. In this work, the question of satisfying mapping rules in evolving populations is addressed by analyzing populations that either do or do not satisfy Mercer's condition. The results presented here show that populations satisfying Mercer's condition practically improve model selection for the Support Vector Machine (SVM). The experiment trains a multi-classification problem and tests it on the nonlinear Ionosphere dataset. The target of this paper is to answer the question of whether kernels evolved by genetic folding must satisfy Mercer's condition when applied to complicated domains and problems in SVMs.
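The Mercer test implied above can be sketched numerically: a candidate kernel satisfies Mercer's condition on a sample when its Gram matrix is positive semidefinite. The following check is illustrative and not the paper's procedure; all names are placeholders.

```python
# Numerical Mercer check: the Gram matrix of a valid kernel must be
# positive semidefinite (all eigenvalues >= 0, up to round-off).
import numpy as np

def satisfies_mercer(kernel, X, tol=1e-8):
    n = len(X)
    gram = np.array([[kernel(X[i], X[j]) for j in range(n)]
                     for i in range(n)])
    eigvals = np.linalg.eigvalsh(gram)       # symmetric eigenvalues
    return eigvals.min() >= -tol             # PSD up to numerical noise

rbf = lambda a, b: np.exp(-np.sum((a - b) ** 2))   # a valid Mercer kernel
X = np.random.randn(20, 5)
print(satisfies_mercer(rbf, X))   # True: RBF Gram matrices are PSD
```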
Abstract: CIM is the standard formalism for modeling management
information developed by the Distributed Management Task
Force (DMTF) in the context of its WBEM proposal, designed to
provide a conceptual view of the managed environment. In this
paper, we propose the inclusion of formal knowledge representation
techniques, based on Description Logics (DLs) and the Web Ontology
Language (OWL), in CIM-based conceptual modeling, and then we
examine the benefits of such a decision. The proposal is specified as a
CIM metamodel-level mapping to a highly expressive subset of DLs
capable of capturing all the semantics of the models. The paper shows
how the proposed mapping can be used for automatic reasoning
about the management information models, as a design aid, by means
of new-generation CASE tools, thanks to the use of state-of-the-art
automatic reasoning systems that support the proposed logic and use
algorithms that are sound and complete with respect to the semantics.
Such a CASE tool framework has been developed by the authors and
its architecture is also introduced. The proposed formalization is not
only useful at design time, but also at run time through the use of
rational autonomous agents, in response to a need recently recognized
by the DMTF.
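The flavor of such a mapping can be sketched with rdflib, turning a CIM class, its superclass, and a property into OWL axioms; the paper's metamodel-level mapping targets a more expressive DL and is considerably richer than this fragment, whose namespace URI is assumed.

```python
# Sketch: map a CIM class hierarchy and property into OWL triples.
from rdflib import Graph, Namespace
from rdflib.namespace import RDF, RDFS, OWL

CIM = Namespace("http://example.org/cim#")   # assumed namespace
g = Graph()
# CIM class -> OWL class; CIM inheritance -> rdfs:subClassOf.
g.add((CIM.LogicalDevice, RDF.type, OWL.Class))
g.add((CIM.LogicalDevice, RDFS.subClassOf, CIM.ManagedElement))
# A CIM property becomes an OWL datatype property with a domain.
g.add((CIM.DeviceID, RDF.type, OWL.DatatypeProperty))
g.add((CIM.DeviceID, RDFS.domain, CIM.LogicalDevice))
print(g.serialize(format="turtle"))
```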
Abstract: This paper examines the problem of designing robust H∞ controllers for an HIV/AIDS infection system with dual drug dosages described by a Takagi-Sugeno (TS) fuzzy model. Based on a linear matrix inequality (LMI) approach, we develop an H∞ controller which guarantees that the L2-gain of the mapping from the exogenous input noise to the regulated output is less than some prescribed value for the system. A sufficient condition for the controller for this system is given in terms of linear matrix inequalities (LMIs). The effectiveness of the proposed controller design methodology is finally demonstrated through simulation results. It has been shown that anti-HIV vaccines are critically important in reducing the number of infected cells.
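The L2-gain requirement invoked here is the standard H∞ performance bound, stated below for reference (with γ the prescribed value, z the regulated output and w the exogenous input; this is the textbook condition, not quoted from the paper):

```latex
% L2-gain below gamma: for zero initial state and all admissible w,
\int_{0}^{T} z^{\top}(t)\, z(t)\, dt \;\le\; \gamma^{2} \int_{0}^{T} w^{\top}(t)\, w(t)\, dt,
\qquad \forall\, T \ge 0
```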
Abstract: Functional magnetic resonance imaging (fMRI) is a noninvasive imaging technique that measures the hemodynamic response related to neural activity in the human brain. Event-related fMRI (efMRI) is a form of fMRI in which a series of fMRI images is time-locked to a stimulus presentation and averaged together over many trials. Similarly, an event-related potential (ERP) is a measured brain response that is directly the result of a thought or perception. Here, the neuronal response of the human visual cortex in normal healthy subjects has been studied. The subjects were asked to perform a visual three-choice reaction task; from the relative response of each subject, the corresponding neuronal activity in the visual cortex was imaged. The average number of neurons in the adult human primary visual cortex, in each hemisphere, has been estimated at around 140 million. The statistical analysis of this experiment was done with the SPM5 (Statistical Parametric Mapping, version 5) software. The results show a robust design for imaging the neuronal activity of the human visual cortex.
Abstract: In this paper, we propose a pre-processor based on the Evidence Supporting Measure of Similarity (ESMS) filter, and we also propose a unified fusion approach (UFA) based on the general fusion machine coupled with the ESMS filter, which improves the correctness and precision of information fusion in many fields of application. Here we mainly apply the new approach to the Simultaneous Localization And Mapping (SLAM) of Pioneer II mobile robots. A simulation experiment was performed in which an autonomous virtual mobile robot with sonar sensors evolves in a virtual world map with obstacles. By comparing the maps built with the general fusion machine (here, a DSmT-based fusion machine with a PCR5-based conflict redistributor) coupled with the ESMS filter and without the ESMS filter, we show the benefit of source selection as a prerequisite for improving information fusion, and we also demonstrate the superiority of the UFA in dealing with SLAM.
Abstract: A novel low-cost impedance control structure is proposed for monitoring the contact force between the end-effector and the environment without installing an expensive force/torque sensor. Theoretically, the end-effector contact force can be estimated from the superposition of the joint control torques. There is a nonlinear matrix mapping between the joint motor control inputs and the end-effector force/torque vector. The new force control structure can be implemented based on this estimated mapping matrix. First, the robot end-effector is manipulated to specified positions; then the force controller is actuated based on the Hall-sensor current feedback of each joint motor. The model-free fuzzy sliding mode control (FSMC) strategy is employed to design the position and force controllers, respectively. All the hardware circuits and software control programs are implemented on an Altera Nios II embedded development kit to constitute an embedded system structure for a retrofitted Mitsubishi 5-DOF robot. Experimental results show that the PI and FSMC force control algorithms can achieve a reasonable contact force monitoring objective with this hardware control structure.
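The static relation behind such an estimation is the standard manipulator identity τ = Jᵀ(q) F, so a wrench estimate follows from measured joint torques via a pseudoinverse. The sketch below uses a random square Jacobian as a placeholder, not the retrofitted robot's kinematics, and the torque-from-current step is only noted in a comment.

```python
# Estimate the end-effector wrench from joint torques via tau = J^T F.
import numpy as np

def estimate_wrench(J, tau):
    """J: 6xN manipulator Jacobian; tau: N joint torques (obtained in
    practice from motor current feedback, tau ~ k_t * i)."""
    return np.linalg.pinv(J.T) @ tau          # least-squares wrench

J = np.random.randn(6, 6)            # placeholder Jacobian (square case
                                     # so the recovery below is exact)
F = np.array([0, 0, 9.8, 0, 0, 0])   # a pure 9.8 N contact force along z
tau = J.T @ F                        # torques such a contact would induce
print(np.round(estimate_wrench(J, tau), 3))   # recovers F
```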
Abstract: When designing information systems that deal with large amounts of domain knowledge, system designers need to consider ambiguities in the labeling terms of the domain vocabulary when navigating users through the information space. The goal of this study is to develop a methodology for system designers to label navigation items, taking into account ambiguities that stem from synonyms or polysemes among the labeling terms. In this paper, we propose a method for concept labeling based on mappings between a domain ontology and a thesaurus, and we report the results of an empirical evaluation.
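A minimal sketch of the ambiguity check suggested above: flag a candidate label when, via thesaurus synonymy, it can denote more than one ontology concept. The toy thesaurus and concept set are placeholders, not from the paper.

```python
# Flag labels whose thesaurus senses map to multiple ontology concepts.
thesaurus = {"bank": ["financial institution", "riverbank"]}
ontology_concepts = {"financial institution", "riverbank", "loan"}

def ambiguous(label):
    senses = [s for s in thesaurus.get(label, [label])
              if s in ontology_concepts]
    return len(senses) > 1   # polysemous label: needs disambiguation

print(ambiguous("bank"))   # True -> choose a less ambiguous label
```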
Abstract: A straightforward and intuitive combination of single simulations into an aggregated master simulation is not trivial. There are many problems that trigger specific difficulties during the modeling and execution of such a simulation. In this paper we identify these problems and aim to solve them by mapping the task onto the field of multi-agent systems. The solution is a new meta-model named AGENTMAP, which is able to mitigate most of the problems while supporting intuitive modeling at the same time. This meta-model is introduced and explained on the basis of an example from the e-commerce domain.
Abstract: In distributed resource allocation, a set of agents must assign their resources to a set of tasks. This problem arises in many real-world domains, such as distributed sensor networks, disaster rescue, hospital scheduling and others. Despite the variety of approaches proposed for distributed resource allocation, a systematic formalization of the problem that explains the different sources of difficulty, together with a formal explanation of the strengths and limitations of key approaches, is missing. We take a step towards this goal by using a formalization of distributed resource allocation that represents both the dynamic and the distributed aspects of the problem. In this paper we present a new idea for target tracking in sensor networks and compare it with previous approaches. The central contribution of the paper is a generalized mapping from distributed resource allocation to DDCSP. This mapping is proven to correctly solve resource allocation problems of specific difficulty. This theoretical result is verified in practice by a simulation on a real-world distributed sensor network.
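The flavor of such a mapping can be sketched on a toy instance: each resource becomes a variable whose domain is the set of tasks it can serve, and a coverage constraint plays the role of the allocation requirement. The brute-force solver below is purely illustrative; the paper's mapping targets dynamic, distributed CSPs rather than this centralized enumeration.

```python
# Toy resource-allocation-as-CSP: variables = resources, domains =
# servable tasks, constraint = every task is covered by some resource.
from itertools import product

resources = {"r1": ["t1", "t2"], "r2": ["t2"], "r3": ["t1", "t3"]}
tasks = {"t1", "t2", "t3"}

def allocate():
    names = list(resources)
    for assign in product(*(resources[n] for n in names)):
        if set(assign) == tasks:          # constraint: all tasks covered
            return dict(zip(names, assign))
    return None                           # over-constrained instance

print(allocate())   # {'r1': 't1', 'r2': 't2', 'r3': 't3'}
```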