Abstracts – Workshop on Geometry and Machine Learning

Abstracts

Keynotes

Representation Learning and Generative Modeling on Manifolds

Speaker: Maximilian Nickel
Abstract: Representation learning and generative modeling on Riemannian manifolds have received increasing attention in the machine learning community in recent years. In the first part of this talk, I will provide an overview of geometric representation learning using the example of hyperbolic embeddings of graphs. I will discuss the advantages of a geometric approach in terms of representational efficiency as well as for capturing and preserving desired semantics in a latent space. In the second part of the talk, I will connect these results with a recent approach to generative modeling on manifolds, namely Riemannian continuous normalizing flows (CNFs), and discuss its applications for modeling scientific data. To overcome the computational issues of Riemannian CNFs (which need to solve ODEs during training), I will also discuss how Moser Flow allows the model density to be computed in closed form, and show how this leads to substantial improvements on Euclidean and non-Euclidean manifolds.
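
For concreteness, the hyperbolic embeddings in the first part of the talk rest on the Poincaré-ball distance; below is a minimal numpy sketch of that distance (the standard formula for curvature -1; the function name and sample points are illustrative):

```python
import numpy as np

def poincare_distance(u, v, eps=1e-9):
    """Geodesic distance in the Poincare ball of curvature -1:
    d(u, v) = arcosh(1 + 2 |u - v|^2 / ((1 - |u|^2)(1 - |v|^2)))."""
    sq = np.sum((u - v) ** 2)
    denom = max((1.0 - np.sum(u ** 2)) * (1.0 - np.sum(v ** 2)), eps)
    return np.arccosh(1.0 + 2.0 * sq / denom)

# Points near the boundary are exponentially far apart geodesically;
# this is what lets trees and hierarchies embed with low distortion.
u, v = np.array([0.0, 0.9]), np.array([0.0, -0.9])
print(poincare_distance(u, v))
```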

Geometric Statistics for Computational Anatomy

Speaker: Xavier Pennec
Abstract: At the interface of geometry, statistics, image analysis and medicine, computational anatomy aims at analysing and modelling the biological variability of organ shapes and their dynamics at the population level. The goal is to model the mean anatomy, its normal variation, its motion/evolution, and to discover morphological differences between normal and pathological groups. However, shapes are usually described by equivalence classes of sets of points, curves, surfaces or images under the action of a transformation group, or directly by the diffeomorphic deformation of a template in diffeomorphometry. This implies that they live in non-linear spaces, while statistics was essentially developed in a Euclidean framework. For instance, adding or subtracting curves or surfaces does not really make sense. Thus, there is a need to redefine a consistent statistical framework for objects living in manifolds and Lie groups, a field which is now called geometric statistics. The objective of this talk is to give an overview of the Riemannian computational tools and of simple statistics in these spaces. The talk is motivated and illustrated by applications in medical image analysis, such as the regression of simple and efficient models of the atrophy of the brain in Alzheimer’s disease and the groupwise analysis of the motion of the heart in sequences of images using the parallel transport of surface and image deformations.

[slides]
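
A minimal numpy sketch of the most basic tool of geometric statistics mentioned above, the Fréchet mean, computed on the unit sphere by the standard fixed-point iteration over exponential and log maps (function names, iteration count and sample points are illustrative):

```python
import numpy as np

def sphere_log(p, q):
    """Log map on the unit sphere: the tangent vector at p pointing to q."""
    w = q - np.dot(p, q) * p
    theta = np.arccos(np.clip(np.dot(p, q), -1.0, 1.0))
    n = np.linalg.norm(w)
    return theta * w / n if n > 1e-12 else np.zeros_like(p)

def sphere_exp(p, v):
    """Exp map on the unit sphere: follow the geodesic from p along v."""
    t = np.linalg.norm(v)
    return p if t < 1e-12 else np.cos(t) * p + np.sin(t) * v / t

def frechet_mean(points, n_iter=50):
    """Fixed-point iteration: move along the average log map until it vanishes."""
    m = points[0]
    for _ in range(n_iter):
        m = sphere_exp(m, np.mean([sphere_log(m, q) for q in points], axis=0))
    return m

# Three points clustered near the north pole of the 2-sphere.
pts = np.array([[0.1, 0.0, 0.995], [0.0, 0.1, 0.995], [-0.1, 0.0, 0.995]])
pts /= np.linalg.norm(pts, axis=1, keepdims=True)
print(frechet_mean(pts))
```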

From Sound to Metric Priors: A New Paradigm for Shape Generation

Speaker: Emanuele Rodolà
Abstract: Spectral and metric geometry are at the heart of various problems in computer vision, graphics, pattern recognition, and machine learning. Ultimately, the core reason for their success can be traced back to questions of stability and to the informativeness of the eigenvalues of certain operators. In this talk, I will discuss and show tangible examples of such properties and showcase some dramatic implications for a selection of notoriously hard problems in computer vision and graphics. First, I will address the question of whether one can recover the shape of a geometric object from its vibration frequencies (‘hear the shape of the drum’); while theoretically the answer to this question is negative, little is known about the practical possibility of using the spectrum for shape reconstruction and optimization. I will introduce a numerical procedure called isospectralization, as well as a data-driven variant, showing how this *practical* problem is solvable. Then, I will discuss the increasingly popular task of designing an effective generative model for deformable 3D shapes. I will demonstrate how injecting metric distortion priors into a simple geometric reconstruction loss can lead to the formation of a very informative latent space, which can be trained with extremely scarce data (fewer than 10 examples) and still yield competitive generation quality while also aiding geometric disentanglement.
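
As a rough illustration of the isospectralization idea, the sketch below measures how far one shape’s Laplacian spectrum is from a target spectrum; the real method optimizes the vertex positions of a mesh under the Laplace-Beltrami operator, whereas this toy version uses a plain graph Laplacian (all names are illustrative):

```python
import numpy as np

def laplacian_spectrum(W, k):
    """First k eigenvalues of the graph Laplacian L = D - W for a symmetric
    weight matrix W (a crude stand-in for the Laplace-Beltrami operator of a
    discretized shape)."""
    L = np.diag(W.sum(axis=1)) - W
    return np.sort(np.linalg.eigvalsh(L))[:k]

def isospectral_loss(W, target_spectrum):
    """Objective behind isospectralization: deform the shape (here, the edge
    weights) until its spectrum matches the target spectrum."""
    diff = laplacian_spectrum(W, len(target_spectrum)) - target_spectrum
    return np.sum(diff ** 2)

# Toy usage: how far is a 4-cycle's spectrum from a 4-path's?
C4 = np.array([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]], float)
P4 = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], float)
print(isospectral_loss(C4, laplacian_spectrum(P4, 4)))
```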

Talks

Geometric and Physical Quantities improve E(3) Equivariant Message Passing

Speaker: Erik Bekkers
Abstract: Including covariant information, such as position, force, velocity or spin, is important in many tasks in computational physics and chemistry. We introduce Steerable E(3) Equivariant Graph Neural Networks (SEGNNs) that generalise equivariant graph networks, such that node and edge attributes are not restricted to invariant scalars but can contain covariant information, such as vectors or tensors. Our model, composed of steerable MLPs, is able to incorporate geometric and physical information in both the message and update functions. Through the definition of steerable node attributes, the MLPs provide a new class of activation functions for general use with steerable feature fields. We discuss our work and related work through the lens of equivariant non-linear convolutions, which further allows us to pinpoint the successful components of SEGNNs: non-linear message aggregation improves upon classic linear (steerable) point convolutions, and steerable messages improve upon recent equivariant graph networks that send invariant messages. We demonstrate the effectiveness of our method on several tasks in computational physics and chemistry and provide extensive ablation studies.

Ref:

Brandstetter, J., Hesselink, R., van der Pol, E., Bekkers, E., & Welling, M. (2022). Geometric and Physical Quantities Improve E(3) Equivariant Message Passing. In ICLR 2022.

https://openreview.net/forum?id=_xwr8gOBeV1
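
For context on the comparison drawn in the abstract, here is a minimal numpy sketch of an equivariant graph layer that sends *invariant* messages, the baseline that SEGNNs generalise with steerable messages; the random linear maps stand in for learned MLPs, and all names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 5, 8                                      # nodes, feature width
x = rng.normal(size=(n, 3))                      # positions (covariant)
h = rng.normal(size=(n, d))                      # node features (invariant)
W_msg = rng.normal(size=(2 * d + 1, d)) * 0.1    # stand-in for the message MLP
W_upd = rng.normal(size=(2 * d, d)) * 0.1        # stand-in for the update MLP

def layer(x, h):
    """One E(n)-equivariant layer with invariant messages: each message sees
    only node features and a squared distance (rotation/translation
    invariant), while positions are updated along relative directions
    (equivariant). SEGNNs generalise this by letting the messages themselves
    carry steerable vectors and tensors."""
    x_new, h_new = x.copy(), h.copy()
    for i in range(n):
        m_sum, shift = np.zeros(d), np.zeros(3)
        for j in range(n):
            if i == j:
                continue
            r = x[i] - x[j]
            m = np.tanh(np.concatenate([h[i], h[j], [r @ r]]) @ W_msg)
            m_sum += m
            shift += r * m.mean()                # scalar gate times direction
        h_new[i] = np.tanh(np.concatenate([h[i], m_sum]) @ W_upd)
        x_new[i] = x[i] + shift / (n - 1)
    return x_new, h_new

x2, h2 = layer(x, h)
```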

Gauge Equivariant Mesh Convolutional Neural Networks

Speaker: Pim de Haan
Abstract: Convolutional neural networks are widely successful in deep learning on image datasets. However, some data, like that resulting from MRI scans, do not reside on a square grid but instead live on curved manifolds, discretized as meshes. A key issue on such meshes is that they lack a local notion of direction, and hence the convolutional kernel cannot be canonically oriented. By doing message passing on the mesh and defining a groupoid of similar messages that should share weights, we propose a gauge equivariant method of building a CNN on such meshes that is direction-aware, yet agnostic to how the directions are chosen. It is scalable, invariant to how the mesh is rotated, and achieves state-of-the-art performance on a medical application: estimating blood flow through human arteries.

[slides][GitHub]
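
A toy numpy sketch of the transport step underlying gauge equivariance in mesh message passing: neighbor features living in tangent planes are rotated into the receiving vertex’s frame before aggregation, so nothing depends on the arbitrary reference direction chosen at each vertex. The mesh, the angles and the function names are illustrative; the actual method also learns direction-aware kernels:

```python
import numpy as np

def rot(a):
    """2D rotation matrix; tangent features transform under frame changes."""
    return np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])

# Toy mesh: 3 vertices, a 2D tangent feature at each vertex, and a transport
# angle per directed edge (how a tangent vector rotates when carried from
# vertex j's local frame into vertex i's local frame).
feats = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
neighbors = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
angle = {(0, 1): 0.3, (1, 0): -0.3, (0, 2): -0.5,
         (2, 0): 0.5, (1, 2): 0.2, (2, 1): -0.2}

def aggregate(feats):
    """Average neighbor tangent features after transporting them into the
    receiving vertex's frame; under a gauge change (rotating each local
    frame), inputs and outputs rotate together."""
    out = np.zeros_like(feats)
    for i, nbrs in neighbors.items():
        out[i] = np.mean([rot(angle[(i, j)]) @ feats[j] for j in nbrs], axis=0)
    return out

print(aggregate(feats))
```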

Embedding Guarantees for Representations by Small Probabilistic Graph Transformers

Speaker: Anastasis Kratsios
Abstract: The problem of representing a finite dataset equipped with a dissimilarity metric using a trainable feature map is one of the basic questions in machine learning. A hallmark of deep learning models is the capacity of a model’s hidden layers to efficiently encode discrete Euclidean data into low-dimensional Euclidean feature spaces, from which their last layer can read out predictions. However, when the finite dataset is not Euclidean, it has been empirically confirmed that representations in non-Euclidean spaces systematically outperform traditional Euclidean feature representations. In this work, we prove that given any n-point metric space X, there exists a probabilistic graph transformer (PGT) which can bi-Hölder embed X into the space MG(R) of univariate Gaussian mixtures of [Delon et al., 2020] with small distortion of X’s metric, for any exponent 1/2 < α < 1. Moreover, this PGT’s depth and width are approximately linear in n. We then show that, for any “distortion level” D > 2, there is a PGT which can represent X in MG(R) such that any uniformly sampled pair of points in X is bi-Lipschitz embedded with distortion at most D^2, with probability O(n^(-4e/D)). We show that if X has a suitable geometric prior (e.g. X is a combinatorial tree or a finite subspace of a suitable Riemannian manifold), then the PGT architecture can deterministically bi-Lipschitz embed X into MG(R) with low metric distortion. As applications, we consider PGT embeddings of 2-hop combinatorial graphs (such as friendship graphs, cocktail graphs, complete bipartite graphs, etc.), trees, and n-point subsets of Riemannian manifolds.

[slides]
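
For intuition about the target space MG(R): mixtures of univariate Gaussians are compared with a Wasserstein-type metric, and between two single Gaussians the 2-Wasserstein distance has the closed form below, which makes it easy to check the distortion of a candidate embedding. This is a toy sketch; the metric of [Delon et al., 2020] is defined on full mixtures:

```python
import numpy as np

def w2_gauss(a, b):
    """2-Wasserstein distance between univariate Gaussians a = (mean, std)
    and b = (mean, std): W2 = sqrt((m1 - m2)^2 + (s1 - s2)^2)."""
    return np.hypot(a[0] - b[0], a[1] - b[1])

def distortion(D_X, emb):
    """Worst-case multiplicative distortion when point i of a finite metric
    space (distance matrix D_X) is mapped to the Gaussian emb[i]."""
    n = len(emb)
    ratios = [w2_gauss(emb[i], emb[j]) / D_X[i][j]
              for i in range(n) for j in range(n) if i != j]
    return max(ratios) / min(ratios)

# Toy check: a 3-point path metric embeds isometrically along the mean axis.
D_X = [[0, 1, 2], [1, 0, 1], [2, 1, 0]]
emb = [(0.0, 0.5), (1.0, 0.5), (2.0, 0.5)]
print(distortion(D_X, emb))   # -> 1.0
```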

Deep Metric and Representation Learning

Speaker: Björn Ommer
Abstract: The ultimate goal of computer vision, and of AI in general, is models that help us understand our (visual) world. A key challenge of this inverse problem is to learn a metric in data space that reflects semantic relations in the real world. Visual similarity learning is therefore crucial for numerous other tasks such as content-based retrieval, clustering, or detection. The currently predominant approach to learning representations that capture similarity is Deep Metric Learning (DML), which specifically aims at establishing relations for novel, unseen classes. Moreover, similarity learning is closely related to contrastive learning, the leading approach to self-supervised learning and, by extension, transfer learning.
In this talk, I will review the leading learning paradigms for DML and highlight the main directions of current research in the field. Thereafter, I will present novel approaches to address open challenges such as out-of-distribution generalization.
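
For readers new to DML, the canonical training signal is a ranking loss over triplets; a minimal numpy sketch (the margin value and sample vectors are illustrative):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Canonical DML objective: push the anchor-negative distance to exceed
    the anchor-positive distance by at least `margin` in embedding space."""
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, d_pos - d_neg + margin)

a, p, ng = np.zeros(8), np.full(8, 0.1), np.full(8, 0.5)
print(triplet_loss(a, p, ng))   # -> 0.0: the negative is already far enough
```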

Graph Embeddings in Symmetric Spaces

Speaker: Beatrice Pozzetti
Abstract: Learning faithful graph representations as sets of vertex embeddings has become a fundamental intermediary step in a wide range of machine learning applications. I will discuss joint work with Lopez, Trettel, Strube and Wienhard in which we propose the systematic use of symmetric spaces in representation learning, a versatile class of Riemannian manifolds generalising both Euclidean and hyperbolic spaces, which I will introduce during my talk and illustrate through examples. This enables us to introduce new methods: the use of Finsler metrics integrated into a Riemannian optimization scheme, which better adapt to dissimilar structures in the graph, and the use of a vector-valued distance that allows us to visualise and analyse embeddings. I will also discuss applications to graph reconstruction tasks on various synthetic and real-world datasets, as well as to downstream tasks such as recommender systems and node classification. Time permitting, I will discuss how gyrovector calculus can be adapted to some symmetric spaces, giving rise to analogues of vector space operations.

[slides]
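
On the prototypical symmetric space of symmetric positive definite matrices, the vector-valued distance mentioned in the abstract is explicit: it is the vector of log-eigenvalues of X^(-1/2) Y X^(-1/2), and different norms on that vector recover the Riemannian and Finsler distances. A minimal numpy sketch (the example matrices are illustrative):

```python
import numpy as np

def inv_sqrt_spd(X):
    """X^{-1/2} for a symmetric positive definite matrix, via eigendecomposition."""
    w, U = np.linalg.eigh(X)
    return (U / np.sqrt(w)) @ U.T

def vector_valued_distance(X, Y):
    """Vector-valued distance on the symmetric space SPD(n): the sorted
    log-eigenvalues of X^{-1/2} Y X^{-1/2}."""
    Xi = inv_sqrt_spd(X)
    return np.sort(np.log(np.linalg.eigvalsh(Xi @ Y @ Xi)))

X, Y = np.diag([1.0, 4.0]), np.diag([2.0, 2.0])
v = vector_valued_distance(X, Y)
riemannian = np.linalg.norm(v)   # L2 norm: the affine-invariant distance
finsler_f1 = np.abs(v).sum()     # L1 norm: a Finsler (F1) distance
print(v, riemannian, finsler_f1)
```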

Mini-Courses

Introduction to Geometric Statistics with Geomstats

Speaker: Nicolas Guigui
Abstract: Geomstats is an open-source Python package for computations, statistics, and machine learning on nonlinear manifolds. Data from many application fields are elements of common manifolds such as the sphere, the space of rotation matrices, the space of positive definite matrices, shape spaces, etc. In the first session, I will give a general introduction to the topic, then introduce the fundamental notions needed from Riemannian geometry and the most common learning models for data on manifolds. All the notions will be exemplified with the package. The second session will be dedicated to running hands-on tutorials. Participants are welcome to bring their own dataset to work on.

[slides]
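
A small usage sketch of the package’s core Riemannian primitives on the sphere, written against a recent geomstats release; exact signatures may differ across versions:

```python
from geomstats.geometry.hypersphere import Hypersphere

sphere = Hypersphere(dim=2)                  # the 2-sphere embedded in R^3
points = sphere.random_uniform(n_samples=4)

# Core Riemannian primitives: geodesic distance, log (point -> tangent
# vector at a base point) and exp (tangent vector -> point).
base = points[0]
dists = sphere.metric.dist(base, points[1:])
tangent = sphere.metric.log(points[1], base_point=base)
recovered = sphere.metric.exp(tangent, base_point=base)  # ~ points[1]
print(dists, recovered)
```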

Hyperbolic Manifolds in Deep Learning

Speaker: Maxim Kochurov
Abstract: Hyperbolic manifolds are relatively new in deep learning. Their mathematical elegance and theoretical advantages make them attractive for dimensionality reduction and rich representations, and a considerable body of research has investigated the opportunities they offer in graph-based deep learning and language models. In the talk, I’ll give an overview of the main advances in the area, highlighting where the theory and the motivation remain most problematic. During the practical session, we’ll get familiar with models and implementations that use hyperbolic space to its fullest potential.

[slides]
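
Since the practical session works with hyperbolic implementations, here is a short sketch using geoopt, the speaker’s Riemannian optimization library for PyTorch, to optimize a point on the Poincaré ball; the hyperparameters are illustrative and the exact API may differ across versions:

```python
import torch
import geoopt

ball = geoopt.PoincareBall(c=1.0)           # curvature parameter c

# A trainable point constrained to the manifold, optimized with a
# Riemannian optimizer instead of plain SGD/Adam.
p = geoopt.ManifoldParameter(ball.expmap0(torch.randn(2) * 0.1), manifold=ball)
target = ball.expmap0(torch.tensor([0.3, -0.2]))
opt = geoopt.optim.RiemannianAdam([p], lr=1e-2)

for _ in range(200):
    opt.zero_grad()
    loss = ball.dist(p, target) ** 2        # squared geodesic distance
    loss.backward()
    opt.step()                              # retraction keeps p on the ball

print(p, ball.dist(p, target))
```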