Abstract: Many problems in computer vision and image
processing present potential for parallel implementations through one
of the three major paradigms of geometric parallelism, algorithmic
parallelism and processor farming. Static process scheduling
techniques are used successfully to exploit geometric and algorithmic
parallelism, while dynamic process scheduling is better suited to
dealing with the independent processes inherent in the process
farming paradigm. This paper considers the application of parallel or
multi-computers to a class of problems exhibiting spatial data
characteristic of the geometric paradigm. However, by using the
processor farming paradigm, a dynamic scheduling technique is
developed to suit the MIMD structure of the multi-computers. A
hybrid scheme of scheduling is also developed and compared with
the other schemes. The specific problem chosen for the investigation
is the Hough transform for line detection.
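As a concrete reference point for the farmed workload, here is a minimal sequential sketch of the voting stage of the Hough transform; the function name and bin counts are illustrative, not the paper's implementation:

```python
import numpy as np

def hough_accumulate(points, n_theta=180, n_rho=100):
    """Vote each (x, y) edge point into a (theta, rho) accumulator.

    A peak at (theta, rho) indicates many points on the line
    rho = x*cos(theta) + y*sin(theta). In a processor farm, disjoint
    chunks of `points` would be distributed to workers and the partial
    accumulators summed afterwards.
    """
    points = np.asarray(points, dtype=float)
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    # |rho| never exceeds the hypotenuse of the largest coordinates
    max_rho = np.hypot(np.abs(points[:, 0]).max(), np.abs(points[:, 1]).max())
    acc = np.zeros((n_theta, n_rho), dtype=int)
    for x, y in points:
        rhos = x * np.cos(thetas) + y * np.sin(thetas)  # one sinusoid per point
        bins = ((rhos + max_rho) / (2 * max_rho) * (n_rho - 1)).round().astype(int)
        acc[np.arange(n_theta), bins] += 1
    return acc, thetas

# Ten points on the horizontal line y = 5: the votes pile up at theta = pi/2.
points = [(x, 5.0) for x in range(10)]
acc, thetas = hough_accumulate(points)
```

Because each point votes independently, the accumulator suits dynamic farming well: points are handed out as work packets and a single reduction merges the partial accumulators.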
Abstract: A new fast correlation algorithm for calibrating the
wavelength of Optical Spectrum Analyzers (OSAs) was introduced
in [1]. The minima of acetylene gas spectra were measured and
correlated with saved theoretical data [2]. This makes it possible to find the
correct wavelength calibration data even from a noisy reference spectrum.
First tests showed good algorithmic performance for gas line spectra
with high noise. In this article, extensive performance tests were performed
to validate the noise resistance of this algorithm. The filter and
correlation parameters of the algorithm were optimized for improved
noise performance. With these parameters the performance of this
wavelength calibration was simulated to predict the resulting
wavelength error in real OSA systems. Long-term simulations were
performed to evaluate the performance of the algorithm over the lifetime
of a real OSA.
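The core of such a correlation-based calibration can be sketched as follows: the measured absorption spectrum is mean-removed and cross-correlated with the stored reference, and the correlation peak gives the wavelength-index offset. This is a generic illustration, not the optimized filter and correlation parameter set evaluated in the paper:

```python
import numpy as np

def wavelength_offset(measured, reference):
    """Estimate the index shift between a measured and a stored reference
    spectrum from the peak of their cross-correlation (mean-removed, so
    the flat baseline does not dominate the correlation)."""
    m = np.asarray(measured, float) - np.mean(measured)
    r = np.asarray(reference, float) - np.mean(reference)
    corr = np.correlate(m, r, mode="full")
    return int(np.argmax(corr)) - (len(r) - 1)

# Example: a reference with one absorption line, measured shifted by 7 bins.
x = np.arange(200.0)
reference = 1.0 - np.exp(-((x - 100.0) / 3.0) ** 2)
measured = np.roll(reference, 7)
print(wavelength_offset(measured, reference))  # 7
```

The recovered index shift would then be mapped back to a wavelength correction through the analyzer's dispersion grid.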
Abstract: The paper investigates a downtrend algorithm and a
trading strategy based on chart pattern recognition and technical
analysis in the futures market. The proposed chart formation is a pattern
with the lowest low in the middle and one higher low on each side.
The contribution of this paper lies in the reinforcement of statements
about the profitability of momentum trend trading strategies.
A practical benefit of the research is a trading algorithm for falling
markets and a back-test analysis in futures markets. Based on
daily data, the algorithm has generated positive results, especially
during downtrend periods. The downtrend algorithm can be
applied as a hedging strategy against possible sudden market crashes.
The proposed strategy may be of interest to futures traders, hedge
funds or scientific researchers performing technical or algorithmic
market analysis based on momentum trend trading.
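The described formation (a lowest low flanked by one higher low on each side) can be checked mechanically on a series of bar lows. A toy sketch, assuming simple strict local minima as the swing lows; the paper's exact swing definition may differ:

```python
def local_lows(lows):
    """Indices of strict local minima in a sequence of bar lows."""
    return [i for i in range(1, len(lows) - 1)
            if lows[i] < lows[i - 1] and lows[i] < lows[i + 1]]

def has_downtrend_pattern(lows):
    """True if three consecutive swing lows have the lowest one in the
    middle, with a higher low on each side."""
    idx = local_lows(lows)
    return any(lows[b] < lows[a] and lows[b] < lows[c]
               for a, b, c in zip(idx, idx[1:], idx[2:]))

# Swing lows at 8, 5 and 6: the lowest low sits in the middle.
print(has_downtrend_pattern([10, 8, 9, 5, 7, 6, 9]))  # True
```

A back-test would scan a rolling window of daily lows with such a predicate and open the short position when the formation completes.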
Abstract: In the self-stabilizing algorithmic paradigm, each node has only a local view of the system, yet in a finite amount of time the system converges to a global state with the desired property. In a graph G =
(V, E), a subset S ⊆ V is a 2-packing if ∀i ∈ V: |N[i] ∩ S| ≤ 1.
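The 2-packing condition can be verified directly from the definition. A sketch of the global check on an adjacency-set representation (the self-stabilizing algorithm itself would build S using local rules only):

```python
def is_2_packing(adj, S):
    """Check that |N[i] ∩ S| <= 1 for every node i, where N[i] is the
    closed neighbourhood {i} ∪ adj[i]. adj: dict node -> set of neighbours."""
    S = set(S)
    return all(len(({i} | adj[i]) & S) <= 1 for i in adj)

# Path graph 0-1-2-3-4.
path = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
print(is_2_packing(path, {0, 3}))  # True: no closed neighbourhood sees two of S
print(is_2_packing(path, {0, 2}))  # False: N[1] = {0, 1, 2} contains both
```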
Abstract: Modeling and simulation of biochemical reactions is of great interest in the context of systems biology. The central dogma of this re-emerging area states that it is the system dynamics and organizing principles of complex biological phenomena that give rise to the functioning and function of cells. Cell functions, such as growth, division, differentiation and apoptosis, are temporal processes that can be understood if they are treated as dynamic systems. Systems biology focuses on an understanding of functional activity from a system-wide perspective and, consequently, it is defined by two key questions: (i) How do the components within a cell interact, so as to bring about its structure and functioning? (ii) How do cells interact, so as to develop and maintain higher levels of organization and functions? In recent years, wet-lab biologists have embraced mathematical modeling and simulation as two essential means toward answering the above questions. The credo of dynamical systems theory is that the behavior of a biological system is given by the temporal evolution of its state. Our understanding of the time behavior of a biological system can be measured by the extent to which a simulation mimics the real behavior of that system. Deviations of a simulation indicate either limitations or errors in our knowledge. The aim of this paper is to summarize and review the main conceptual frameworks in which models of biochemical networks can be developed. In particular, we review the stochastic molecular modeling approaches, by reporting the principal conceptualizations suggested by A. A. Markov, P. Langevin, A. Fokker, M. Planck, D. T. Gillespie, N. G. van Kampen, and recently by D. Wilkinson, O. Wolkenhauer, P. Sjöberg and by the author.
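Of the stochastic approaches surveyed, Gillespie's direct method is the most commonly implemented. A minimal sketch for the single degradation reaction A → ∅ with rate constant k; a real model would track many species and reactions:

```python
import random

def gillespie_decay(n0, k, t_end, seed=0):
    """Gillespie direct method for A -> 0 at rate k: draw exponential
    waiting times from the total propensity k*n, fire one reaction per
    step, and return the copy number of A at time t_end."""
    rng = random.Random(seed)
    t, n = 0.0, n0
    while n > 0:
        t += rng.expovariate(k * n)   # time to next reaction ~ Exp(k*n)
        if t > t_end:
            break                     # next event falls beyond the horizon
        n -= 1                        # one molecule of A degrades
    return n
```

Each run is one exact sample path of the underlying Markov jump process; averaging many runs recovers the mean behavior described by the deterministic rate equation.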
Abstract: One of the major problems of programming a cruise
circuit is deciding which destinations to include and which not to.
Thus a decision problem emerges that might be solved using a linear
and goal programming approach. The problem becomes more
complex if several boats in the fleet must be programmed in a limited
schedule, trying to best match their capacity to seasonal demand while
also attempting to minimize operation costs. Moreover, the
company's scheduler should consider the passengers' time as a
limited asset and would like to maximize its usage.
The aim of this work is to design a method in which, using linear and
goal programming techniques, a circuit-design model allows the
cruise company's decision maker to achieve an optimal solution
within the fleet schedule.
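To make the modeling idea concrete, here is a toy version of the boat-to-circuit assignment decision with made-up numbers, solved by brute force; the paper's actual method uses linear and goal programming, and a realistic instance would go to an LP solver:

```python
from itertools import product

# Hypothetical instance: assign each of 3 boats to one of 2 circuits so
# that total capacity on each circuit meets its seasonal demand while
# operating cost is minimized.
capacity = [100, 150, 200]            # berths per boat
cost = [[5, 7], [6, 6], [9, 8]]       # cost[boat][circuit]
demand = [150, 180]                   # passengers per circuit

best = None
for assign in product(range(2), repeat=3):        # circuit chosen per boat
    cap = [0, 0]
    for boat, circ in enumerate(assign):
        cap[circ] += capacity[boat]
    if all(cap[c] >= demand[c] for c in range(2)):  # demand constraints
        total = sum(cost[b][c] for b, c in enumerate(assign))
        if best is None or total < best[0]:
            best = (total, assign)

print(best)  # (19, (0, 0, 1)): boats 0 and 1 serve circuit 0, boat 2 circuit 1
```

In the LP formulation the same constraints become linear inequalities over 0/1 assignment variables, and the goal-programming layer adds soft objectives such as maximizing passenger time usage.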
Abstract: Moral decisions are considered as an intuitive process,
while conscious reasoning is mostly used only to justify those
intuitions. This problem is described in a few different dual-process
theories of mind, developed e.g. by Frederick and
Kahneman, Stanovich, and Evans. Those theories have recently evolved
into tri-process theories with a proposed process that makes the ultimate
decision or allows paraformal processing with focal bias.
The presented experiment compares the observed decision patterns with the
implications of those models.
In the presented study, participants (n=179) considered different
aspects of the trolley dilemma or its footbridge version and then made
their decision.
Results show that in the control group 70% of people decided to
use the lever to change tracks for the running trolley, and 20% chose
to push the fat man down the tracks. In contrast, after experimental
manipulation almost no one decided to act. Also, the decision-time
difference between the dilemmas disappeared after the experimental
manipulation.
The results support the idea of three co-working processes:
the intuitive (TASS), the paraformal (reflective mind) and the algorithmic
process.
Abstract: This paper attempts to explore a new method to
improve the teaching of algorithmics for beginners. It is well known
that algorithmics is a difficult field to teach and complex for learners
to assimilate. These difficulties are due to the intrinsic characteristics
of this field and to the manner in which (the majority of) teachers
apprehend its foundations. Moreover, in a Technology Enhanced
Learning (TEL) environment, assessment, which is important and
indispensable, is the most delicate phase to implement because of all
the problems it generates (noise, ...). Our objective lies at the
confluence of these two axes. For this purpose, EASEL focuses
essentially on elaborating an approach for assessing algorithmic
competences in a TEL environment. This approach consists in
modeling an algorithmic solution in terms of basic, elementary
operations, which lets learners plot their own path with full autonomy
and independently of any programming language. This approach
supports a three-fold assessment: summative, formative and diagnostic.
Abstract: Biological sequences from different species are called orthologs if they evolved from a sequence of a common ancestor species and they have the same biological function. Approximations of Kolmogorov complexity or entropy of biological sequences are already well known to be useful in extracting similarity information between such sequences - in the interest, for example, of ortholog detection. As is well known, the exact Kolmogorov complexity is not algorithmically computable. In practice one can approximate it by computable compression methods. However, such compression methods do not provide a good approximation to Kolmogorov complexity for short sequences. Herein is suggested a new approach to overcome the problem that compression approximations may not work well on short sequences. This approach is inspired by new, conditional computations of Kolmogorov entropy. A main contribution of the empirical work described shows the new set of entropy-based machine learning attributes provides good separation between positive (ortholog) and negative (non-ortholog) data - better than with good, previously known alternatives (which do not employ some means to handle short sequences well). Also empirically compared are the new entropy-based attribute set and a number of other, more standard similarity attribute sets commonly used in genomic analysis. The various similarity attributes are evaluated by cross validation, through boosted decision tree induction C5.0, and by Receiver Operating Characteristic (ROC) analysis. The results point to the conclusion: the new, entropy-based attribute set by itself is not the one giving the best prediction; however, it is the best attribute set for use in improving the other, standard attribute sets when conjoined with them.
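The computable stand-in for Kolmogorov complexity mentioned here is typically a real compressor; the standard similarity measure built from it is the Normalized Compression Distance of Cilibrasi and Vitányi. A minimal sketch with zlib (the paper's conditional-entropy attributes are more elaborate):

```python
import zlib

def c(s: bytes) -> int:
    """Compressed length: a computable upper-bound proxy for K(s)."""
    return len(zlib.compress(s, 9))

def ncd(x: bytes, y: bytes) -> float:
    """Normalized Compression Distance: near 0 for similar strings,
    larger for unrelated ones."""
    cx, cy = c(x), c(y)
    return (c(x + y) - min(cx, cy)) / max(cx, cy)

# A repetitive "sequence" is far closer to itself than to a different one.
a = b"ACGTACGT" * 100
b = b"GGCATTCA" * 100
print(ncd(a, a) < ncd(a, b))  # True
```

The short-sequence weakness the abstract targets shows up exactly here: for very short x and y, the compressor's fixed overhead dominates c(x) and c(x + y), washing out the distance.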
Abstract: Modern spatial database management systems require a unique Spatial Access Method (SAM) in order to solve complex spatial queries efficiently. In this case the spatial data structure takes a prominent place in the SAM. An inadequate data structure leads to poor algorithmic choices and a deficient understanding of algorithm behavior on the spatial database. A key step in developing a better semantic spatial object data structure is to quantify the performance effects of semantic and outlier detections that are not reflected in previous tree structures (the R-Tree and its variants). This paper explores a novel SSRO-Tree, a SAM based on the Topo-Semantic approach. The paper shows how to identify and handle semantic spatial objects with outlier objects during page overflow/underflow, using gain/loss metrics. We introduce a new SSRO-Tree algorithm which achieves better performance in practice than the R*-Tree and RO-Tree algorithms on selection queries.
Abstract: Markov games are a generalization of the Markov
decision process to a multi-agent setting. The two-player zero-sum
Markov game framework offers an effective platform for designing
robust controllers. This paper presents two novel controller design
algorithms that use ideas from game-theory literature to produce
reliable controllers that are able to maintain performance in the presence
of noise and parameter variations. A more widely used approach for
controller design is the H∞ optimal control, which suffers from high
computational demand and at times, may be infeasible. Our approach
generates an optimal control policy for the agent (controller) via a
simple Linear Program enabling the controller to learn about the
unknown environment. The controller is facing an unknown
environment, and in our formulation this environment corresponds to
the behavior rules of the noise, modeled as the opponent. The proposed
controller architectures attempt to improve controller reliability by a
gradual mixing of algorithmic approaches drawn from the game
theory literature and the Minimax-Q Markov game solution
approach, in a reinforcement-learning framework. We test the
proposed algorithms on a simulated Inverted Pendulum Swing-up
task and compare their performance against standard Q-learning.
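In general, the minimax policy at each state comes from a small linear program over the agent's mixed strategies. For a 2×2 zero-sum game without a saddle point the LP has a closed form, which is enough to sketch the idea; Minimax-Q solves the same problem at every state of the Markov game:

```python
def solve_2x2_zero_sum(A):
    """Maximin mixed strategy for the row player of a 2x2 zero-sum game
    with row-player payoff matrix A, assuming no saddle point (so the
    optimum equalizes the payoff against both opponent columns)."""
    (a, b), (c, d) = A
    denom = a - b - c + d
    p = (d - c) / denom               # P(play row 0)
    value = (a * d - b * c) / denom   # game value to the row player
    return p, value

# Matching pennies: the safe policy randomizes 50/50 and the value is 0.
p, v = solve_2x2_zero_sum([[1, -1], [-1, 1]])
print(p, v)  # 0.5 0.0
```

The controller-design reading: the opponent columns are the noise's behavior rules, and the maximin mixture is the policy whose worst-case performance is best.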
Abstract: Factoring Boolean functions is one of the basic operations in algorithmic logic synthesis. A novel algebraic factorization heuristic for single-output combinational logic functions is presented in this paper, developed based on the set theory paradigm. The impact of factoring is analyzed mainly from a low-power design perspective for standard cell based digital designs. The physical implementations of a number of MCNC/IWLS combinational benchmark functions and sub-functions are compared before and after factoring, based on a simple technology mapping procedure utilizing only standard gate primitives (readily available as standard cells in a technology library) and not cells corresponding to optimized complex logic. The power results were obtained at the gate level by means of an industry-standard power analysis tool from Synopsys, targeting a 130nm (0.13μm) UMC CMOS library, for the typical case. The wire-loads were inserted automatically and the simulations were performed with maximum input activity. The gate-level simulations demonstrate the advantage of the proposed factoring technique in comparison with other existing methods from a low-power perspective, for arbitrary examples. Though the benchmark experimentation reports mixed results, the mean savings in total power and dynamic power for the factored solution over a non-factored solution were 6.11% and 5.85%, respectively. In terms of leakage power, the average savings for the factored forms were significant, to the tune of 23.48%. The factored solution is expected to better its non-factored counterpart in terms of the power-delay product, as it is well known that factoring, in general, yields a delay-efficient multi-level solution.
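For illustration, one greedy algebraic-factoring step can be written over a sum-of-products represented as sets of literals. This toy single-literal division is not the paper's set-theoretic heuristic, and production tools use kernel/co-kernel extraction, but it shows what factoring does to a cover:

```python
from collections import Counter

def factor_once(cubes):
    """Pull the most frequent literal out of a sum-of-products.
    cubes: list of frozensets of literals, e.g. ab + ac + d.
    Returns (literal, quotient_cubes, remainder_cubes), or None if no
    literal is shared by at least two cubes."""
    counts = Counter(lit for cube in cubes for lit in cube)
    lit, n = counts.most_common(1)[0]
    if n < 2:
        return None                       # nothing shared, nothing to factor
    quotient = [cube - {lit} for cube in cubes if lit in cube]
    remainder = [cube for cube in cubes if lit not in cube]
    return lit, quotient, remainder

# f = ab + ac + d factors to a(b + c) + d: one AND gate input shared.
f = [frozenset("ab"), frozenset("ac"), frozenset("d")]
lit, q, r = factor_once(f)
```

Fewer literal occurrences after factoring generally means fewer gate inputs and less switched capacitance, which is where the power savings discussed above originate.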
Abstract: In the proposed method for Web page-ranking, a
novel theoretic model is introduced and tested by examples of order
relationships among IP addresses. Ranking is induced using a
convexity feature, which is learned according to these examples
using a self-organizing procedure. We consider the problem of
self-organizing learning from IP data to be represented by a semi-random
convex polygon procedure, in which the vertices correspond to IP
addresses. Based on recent developments in our regularization
theory for convex polygons and corresponding Euclidean distance
based methods for classification, we develop an algorithmic
framework for learning ranking functions based on a Computational
Geometric Theory. We show that our algorithm is generic, and
present experimental results explaining the potential of our approach.
In addition, we explain the generality of our approach by showing its
possible use as a visualization tool for data obtained from diverse
domains, such as Public Administration and Education.
Abstract: In this paper, we propose novel algorithmic models
based on information fusion and feature transformation in a cross-modal
subspace for different types of residue features extracted from
several intra-frame and inter-frame pixel sub-blocks in video
sequences for detecting digital video tampering or forgery. An
evaluation of proposed residue features – the noise residue features
and the quantization features, their transformation in cross-modal
subspace, and their multimodal fusion, for emulated copy-move
tamper scenario shows a significant improvement in tamper detection
accuracy as compared to single mode features without transformation
in cross-modal subspace.
Abstract: Nowadays, the Gene Ontology is used widely by many researchers for biological data mining and information retrieval, integration of biological databases, finding genes, and incorporating knowledge in the Gene Ontology for gene clustering. However, the increase in the size of the Gene Ontology has caused problems in maintaining and processing it. One way to improve its accessibility is to cluster it into fragmented groups. Clustering the Gene Ontology is a difficult combinatorial problem and can be modeled as a graph partitioning problem. Additionally, deciding the number k of clusters to use is not easily perceived and is a hard algorithmic problem. Therefore, an approach for the automatic clustering of the Gene Ontology is proposed by incorporating a cohesion-and-coupling metric into a hybrid algorithm consisting of a genetic algorithm and a split-and-merge algorithm. Experimental results and an example of the modularized Gene Ontology in RDF/XML format are given to illustrate the effectiveness of the algorithm.
Abstract: Breast carcinoma is the most common form of cancer
in women. Multicolour fluorescent in-situ hybridisation (m-FISH) is
a common method for staging breast carcinoma. The interpretation
of m-FISH images is complicated due to two effects: (i) Spectral
overlap in the emission spectra of fluorochrome marked DNA probes
and (ii) tissue autofluorescence. In this paper hyper-spectral images of
m-FISH samples are used and spectral unmixing is applied to produce
false colour images with higher contrast and better information
content than standard RGB images. The spectral unmixing is realised
by combinations of: Orthogonal Projection Analysis (OPA), Alternating
Least Squares (ALS), Simple-to-use Interactive Self-Modeling
Mixture Analysis (SIMPLISMA) and VARIMAX. These are applied
on the data to reduce tissue autofluorescence and resolve the spectral
overlap in the emission spectra. The results show that spectral unmixing
methods reduce the intensity caused by tissue autofluorescence by
up to 78% and enhance image contrast by algorithmically reducing
the overlap of the emission spectra.
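When the endmember emission spectra are known, the unmixing at each pixel reduces to a least-squares fit. A minimal linear-unmixing sketch with synthetic Gaussian spectra; the OPA/ALS/SIMPLISMA/VARIMAX pipeline in the paper also estimates the spectra themselves:

```python
import numpy as np

lam = np.linspace(500.0, 700.0, 64)              # wavelengths, nm (synthetic)
fluo = np.exp(-((lam - 550.0) / 15.0) ** 2)      # fluorochrome emission
auto = np.exp(-((lam - 610.0) / 30.0) ** 2)      # tissue autofluorescence
E = np.column_stack([fluo, auto])                # endmember matrix

pixel = 0.7 * fluo + 0.3 * auto                  # measured pixel spectrum
abund, *_ = np.linalg.lstsq(E, pixel, rcond=None)  # least-squares unmixing
print(abund)                                     # ~[0.7, 0.3]
```

Once the abundances are separated per pixel, the autofluorescence channel can be suppressed and the remaining channels mapped to false colours, which is where the contrast gain comes from.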
Abstract: This paper proposes new algorithms for the computer-aided
design and manufacture (CAD/CAM) of 3D woven multi-layer
textile structures. Existing commercial CAD/CAM systems are often
restricted to the design and manufacture of 2D weaves. Those
CAD/CAM systems that do support the design and manufacture of
3D multi-layer weaves are often limited to manual editing of design
paper grids on the computer display and weave retrieval from stored
archives. This complex design activity is time-consuming, tedious
and error-prone and requires considerable experience and skill of a
technical weaver. Recent research reported in the literature has
addressed some of the shortcomings of commercial 3D multi-layer
weave CAD/CAM systems. However, earlier research results have
shown the need for further work on weave specification, weave
generation, yarn path editing and layer binding. Analysis of 3D
multi-layer weaves in this research has led to the design and
development of efficient and robust algorithms for the CAD/CAM of
3D woven multi-layer textile structures. The resulting algorithmically
generated weave designs can be used as a basis for lifting plans that
can be loaded onto looms equipped with electronic shedding
mechanisms for the CAM of 3D woven multi-layer textile structures.
Abstract: This paper covers the present situation and problems of
experimental teaching in the mathematics specialty in recent years, and puts
forward and demonstrates experimental teaching methods for different
levels of education. From the aspects of content and experimental teaching
approach, it uses the course “Experiment for Program
Designing & Algorithmic Language” as an example and discusses teaching practice
and laboratory course work. In addition, a series of successful methods
and measures used in experimental teaching are introduced.
Abstract: The paper proposes a novel technique for iris
recognition using texture and phase features. Texture features are
extracted on the normalized iris strip using Haar Wavelet while phase
features are obtained using the Log-Gabor Wavelet. The matching
scores generated from the individual modules are combined using the
sum-of-scores technique. The system is tested on databases obtained
from Bath University and the Indian Institute of Technology Kanpur,
giving accuracies of 95.62% and 97.66%, respectively. The FAR and
FRR of the combined system are also comparatively reduced.
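Sum-of-scores fusion normalizes each matcher's scores to a common range before adding them. A minimal min-max version (the normalization step used in the paper is not specified here, so this is an illustrative assumption):

```python
def min_max(scores):
    """Map a list of scores linearly onto [0, 1]."""
    lo, hi = min(scores), max(scores)
    return [(s - lo) / (hi - lo) for s in scores]

def fuse(texture_scores, phase_scores):
    """Sum-of-scores fusion of the two matchers' scores per candidate."""
    return [t + p for t, p in zip(min_max(texture_scores),
                                  min_max(phase_scores))]

# Hypothetical scores from the texture (Haar) and phase (Log-Gabor) matchers.
fused = fuse([0.2, 0.9, 0.5], [10.0, 80.0, 40.0])
best = max(range(3), key=fused.__getitem__)
print(best)  # 1: the second candidate scores highest on both matchers
```

Normalizing first matters because the two matchers produce scores on very different scales; without it, the matcher with the larger numeric range would dominate the sum.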
Abstract: With the rapid growth in business size, today's businesses orient towards electronic technologies. Amazon.com and eBay.com are some of the major stakeholders in this regard. Unfortunately, the enormous size and hugely unstructured data on the web, even for a single commodity, has become a cause of ambiguity for consumers. Extracting valuable information from such ever-increasing data is an extremely tedious task and is fast becoming critical to the success of businesses. Web content mining can play a major role in solving these issues. It involves using efficient algorithmic techniques to search and retrieve the desired information from the seemingly impossible-to-search unstructured data on the Internet. Application of web content mining can be very encouraging in the areas of Customer Relations Modeling, billing records, logistics investigations, product cataloguing and quality management. In this paper we present a review of some very interesting, efficient yet implementable techniques from the field of web content mining and study their impact on areas specific to business user needs, focusing on both the customer and the producer. The techniques we review include: mining by developing a knowledge-base repository of the domain, iterative refinement of user queries for personalized search, using a graph-based approach for the development of a web-crawler, and filtering information for personalized search using website captions. These techniques have been analyzed and compared on the basis of their execution time and the relevance of the results they produce against a particular search.