Abstract: t-SNE is an embedding method widely used by the data science community. It serves two main tasks: displaying results, by coloring items according to their class or feature value, and forensics, giving a first overview of a dataset's distribution. Two interesting characteristics of t-SNE are its structure-preservation property and its answer to the crowding problem, whereby all neighbors in high-dimensional space cannot be represented faithfully in low-dimensional space. t-SNE preserves the local neighborhood, and similar items are nicely spaced by adjusting to the local density. These two characteristics produce a meaningful representation, where the area of a cluster is proportional to its number of elements, and relationships between clusters are materialized by closeness in the embedding. The algorithm is non-parametric: the transformation from high- to low-dimensional space is computed but not learned, so two initializations of the algorithm lead to two different embeddings. In a forensic approach, analysts would like to compare two or more datasets through their embeddings. A naive approach is to embed all datasets together; however, this is costly, since the complexity of t-SNE is quadratic, and it becomes infeasible for too many datasets. Another approach is to learn a parametric model over an embedding built from a subset of the data. While highly scalable, such a model can map distinct points to exactly the same position, making them indistinguishable, and it cannot adapt to new outliers or to concept drift. This paper presents a methodology for reusing an embedding to create a new one in which cluster positions are preserved. The optimization process minimizes two costs: one relative to the shape of the embedding, and one relative to the match with the support embedding. The embedding-with-support process can be repeated, each time using the newly obtained embedding as support. The successive embeddings can be used to study the impact of one variable on the dataset's distribution, or to monitor changes over time. The method has the same complexity as t-SNE per embedding, and memory requirements are only doubled. For a dataset of n elements sorted and split into k subsets, the total embedding complexity is reduced from O(n²) to O(n²/k), and the memory requirement from n² to 2(n/k)², which enables computation on recent laptops. The method showed promising results on a real-world dataset, allowing the birth, evolution and death of clusters to be observed. The proposed approach facilitates the identification of significant trends and changes, enabling the monitoring of high-dimensional datasets' dynamics.
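The two-cost formulation can be sketched compactly. The following is a minimal NumPy illustration, not the authors' implementation: it uses a fixed-bandwidth Gaussian affinity instead of the usual per-point perplexity calibration, and lambda_support, lr and n_iter are illustrative assumptions.

```python
# Minimal sketch of t-SNE with a support term (illustrative, not the paper's code).
# Total cost = KL(P||Q) (shape cost) + a quadratic support-matching cost that
# pins shared support points to their coordinates in the previous embedding.
import numpy as np

def hd_affinities(X, sigma=1.0):
    """High-dimensional affinities P; fixed bandwidth instead of the
    usual per-point perplexity calibration (a simplification)."""
    D = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    P = np.exp(-D / (2 * sigma ** 2))
    np.fill_diagonal(P, 0.0)
    P /= P.sum()
    return (P + P.T) / 2  # symmetrize, as in standard t-SNE

def embed_with_support(X, support_idx, support_pos,
                       lambda_support=0.5, n_iter=500, lr=50.0, seed=0):
    rng = np.random.default_rng(seed)
    P = hd_affinities(X)
    Y = rng.normal(scale=1e-2, size=(X.shape[0], 2))
    Y[support_idx] = support_pos          # warm start from the previous embedding
    for _ in range(n_iter):
        D = np.sum((Y[:, None, :] - Y[None, :, :]) ** 2, axis=-1)
        W = 1.0 / (1.0 + D)               # Student-t kernel
        np.fill_diagonal(W, 0.0)
        Q = W / W.sum()
        PQ = (P - Q) * W
        grad = 4 * (np.diag(PQ.sum(axis=1)) - PQ) @ Y   # gradient of KL(P||Q)
        # Gradient of the support-matching cost (anchor shared points)
        grad[support_idx] += lambda_support * (Y[support_idx] - support_pos)
        Y -= lr * grad
    return Y
```

Chaining calls, each time passing the previous output as support_pos for the points shared between consecutive subsets, reproduces the successive-embedding idea; each call costs O((n/k)²) in time and memory.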
Abstract: Analysis of the human microbiome using metagenomic sequencing data has demonstrated a high ability to discriminate between various human diseases. Raw metagenomic sequencing data require
multiple complex and computationally heavy bioinformatics steps
prior to data analysis. Such data contain millions of short reads obtained from fragmented DNA sequences and stored as fastq files. Conventional processing pipelines consist of multiple steps, including
quality control, filtering, alignment of sequences against genomic
catalogs (genes, species, taxonomic levels, functional pathways,
etc.). These pipelines are complex to use, time consuming and
rely on a large number of parameters that often introduce variability and affect the estimation of the microbiome elements. Training
Deep Neural Networks directly from raw sequencing data is a
promising approach to bypass some of the challenges associated with
mainstream bioinformatics pipelines. Most of these methods use the
concept of word and sentence embeddings, which creates a meaningful numerical representation of DNA sequences while extracting
features and reducing the dimensionality of the data. In this paper
we present an end-to-end approach that classifies patients into disease
groups directly from raw metagenomic reads: metagenome2vec. This
approach is composed of four steps (i) generating a vocabulary of
k-mers and learning their numerical embeddings; (ii) learning DNA
sequence (read) embeddings; (iii) identifying the genome from which
the sequence is most likely to come and (iv) training a multiple
instance learning classifier which predicts the phenotype based on
the vector representation of the raw data. An attention mechanism is applied in the network so that the model can be interpreted, assigning to each genome a weight that reflects its influence on the prediction.
Using two public real-life datasets as well as a simulated one, we demonstrated that this original approach reaches high performance, comparable with that of state-of-the-art methods applied directly to data processed through mainstream bioinformatics workflows. These results are encouraging for this proof-of-concept work. We believe that, with further development, DNN models have the potential to
surpass mainstream bioinformatics workflows in disease classification
tasks.
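To make the four steps concrete, here is a minimal sketch under stated assumptions: real k-mer embeddings would be learned with a word2vec-style model rather than drawn at random, step (iii) is omitted, and the function names are illustrative, not the paper's API.

```python
# Illustrative sketch of the metagenome2vec idea (assumptions throughout):
# (i) k-mer vocabulary, (ii) read embeddings as averaged k-mer vectors,
# (iv) attention pooling whose weights expose each genome's influence.
# Step (iii), genome identification, is omitted here for brevity.
import numpy as np

def kmers(read, k=4):
    """Step (i): tokenize a DNA read into overlapping k-mers."""
    return [read[i:i + k] for i in range(len(read) - k + 1)]

def read_embedding(read, kmer_vectors, k=4):
    """Step (ii): a read vector as the mean of its k-mer embeddings
    (assumes at least one k-mer of the read is in the vocabulary)."""
    vecs = [kmer_vectors[m] for m in kmers(read, k) if m in kmer_vectors]
    return np.mean(vecs, axis=0)

def attention_pool(genome_vectors, w):
    """Step (iv): softmax attention over genome-level instance vectors;
    alpha weights each genome's influence on the phenotype prediction."""
    scores = genome_vectors @ w
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()
    return alpha @ genome_vectors, alpha

# Toy usage with random embeddings (real ones would be learned):
rng = np.random.default_rng(0)
vocab = {m: rng.normal(size=8) for m in ("ACGT", "CGTA", "GTAC", "TACG")}
r = read_embedding("ACGTACG", vocab)
pooled, alpha = attention_pool(np.stack([r, rng.normal(size=8)]),
                               rng.normal(size=8))
```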
Abstract: This paper presents and benchmarks a number of
end-to-end Deep Learning based models for metaphor detection in
Greek. We bring Convolutional Neural Networks and Recurrent Neural Networks, combined with representation learning, to bear on the metaphor detection problem for the Greek language. The models presented achieve exceptional accuracy scores, significantly improving on the previous state-of-the-art results, which had already reached an accuracy of 0.82. Furthermore, no special preprocessing, feature engineering or
linguistic knowledge is used in this work. The methods presented achieve an accuracy of 0.92 and an F-score of 0.92 with Convolutional Neural Networks (CNNs) and bidirectional Long Short-Term Memory networks (LSTMs). Comparable results of 0.91 accuracy and 0.91
F-score are also achieved with bidirectional Gated Recurrent Units
(GRUs) and Convolutional Recurrent Neural Nets (CRNNs). The
models are trained and evaluated solely on the basis of training tuples, i.e., the sentences and their labels. The outcome is a state-of-the-art
collection of metaphor detection models, trained on limited labelled
resources, which can be extended to other languages and similar
tasks.
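One member of the model family benchmarked here can be written in a few lines. The sketch below stacks a 1D convolution over learned token embeddings with a bidirectional LSTM, a CRNN-style variant; the vocabulary size, filter counts and other hyperparameters are assumptions, not values from the paper.

```python
# Sketch of a CNN + bidirectional LSTM metaphor classifier over token
# indices only (no feature engineering or linguistic resources).
# Hyperparameters below are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers

vocab_size, embed_dim = 20000, 128  # assumed sizes

model = tf.keras.Sequential([
    layers.Embedding(vocab_size, embed_dim),                  # learned representations
    layers.Conv1D(64, 5, activation="relu", padding="same"),  # local n-gram features
    layers.MaxPooling1D(2),
    layers.Bidirectional(layers.LSTM(64)),                    # sentence-level context
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),                    # metaphorical vs. literal
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
# model.fit(padded_token_ids, labels, ...) trains on (sentence, label) tuples only.
```

Swapping the LSTM layer for layers.GRU gives the bidirectional GRU variant mentioned above.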
Abstract: Regularity has often appeared in the form of regular polyhedra or tessellations; classical examples are the nine regular polyhedra, consisting of the five Platonic solids (regular convex polyhedra) and the four Kepler-Poinsot polyhedra. These polytopes
can be seen as regular maps. Maps are cellular embeddings of
graphs (with possibly multiple edges, loops or dangling edges) on
compact connected (closed) surfaces with or without boundary. The
n-dimensional abstract polytopes, particularly the regular ones, have
gained popularity over recent years. The main focus of research
has been their symmetries and regularity. Planification of a polyhedron aids its spatial construction, yet it destroys its symmetries. To our knowledge there is no “planification” for n-dimensional polytopes. However, we show that it is possible to make a “surfacification”
of the n-dimensional polytope, that is, it is possible to construct a
restrictedly-marked map representation of the abstract polytope on
some surface that describes its combinatorial structures as well as
all of its symmetries. We also show that there are infinitely many ways to do this; yet there is one that is more natural, describing reflections on the sides ((n−1)-faces) of n-simplices via reflections on the sides of n-polygons. We illustrate this construction with the
4-tetrahedron (a regular 4-polytope with automorphism group of size
120) and the 4-cube (a regular 4-polytope with automorphism group
of size 384).
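As a side note, not part of the paper, the two group orders quoted above follow from standard facts: the symmetry group of the 4-tetrahedron (the regular 4-simplex) is S5, and that of the 4-cube is the hyperoctahedral group:

```latex
\[
  |\mathrm{Aut}(\text{4-simplex})| = |S_5| = 5! = 120,
  \qquad
  |\mathrm{Aut}(\text{4-cube})| = |\mathbb{Z}_2^4 \rtimes S_4| = 2^4 \cdot 4! = 384.
\]
```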