# Abstracts

## Keynotes

### Representation Learning and Generative Modeling on Manifolds

**Speaker**: Maximilian Nickel
**Abstract**: Representation learning and generative modeling on Riemannian manifolds have received increasing attention in the machine learning community in recent years. In the first part of this talk, I will provide an overview of *geometric representation learning* using the example of hyperbolic embeddings of graphs. I will discuss the advantages of a geometric approach in terms of representational efficiency as well as for capturing and preserving desired semantics in a latent space. In the second part of the talk, I will connect these results with a recent approach to generative modeling on manifolds, i.e., *Riemannian CNFs*, and discuss its applications for modeling scientific data. To overcome the computational issues of Riemannian CNFs (which need to solve ODEs during training), I will also discuss how *Moser Flow* allows one to compute the normalizing equation in closed form, and show how this leads to substantial improvements on Euclidean and non-Euclidean manifolds.
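
The closed-form claim can be made concrete with a short sketch; this is a paraphrase of the Moser Flow construction (Rozen et al., 2021), with the notation (prior density μ, learned vector field u) chosen here for illustration, not taken from the talk:

```latex
% Moser Flow (sketch): rather than integrating an ODE to obtain the model
% density, define it directly from a prior density \mu and a learned
% vector field u on the manifold M via the divergence:
\nu(x) \;=\; \mu(x) - \operatorname{div} u(x), \qquad x \in M .
% Training can then maximize the likelihood of \nu without any ODE solves
% during training; positivity of \nu is encouraged with a penalty term.
```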

### Geometric Statistics for Computational Anatomy

**Speaker**: Xavier Pennec
**Abstract**: At the interface of geometry, statistics, image analysis, and medicine, computational anatomy aims at analysing and modelling the biological variability of organ shapes and their dynamics at the population level. The goal is to model the mean anatomy, its normal variation, and its motion/evolution, and to discover morphological differences between normal and pathological groups. However, shapes are usually described by equivalence classes of sets of points, curves, surfaces, or images under the action of a transformation group, or directly by the diffeomorphic deformation of a template in diffeomorphometry. This implies that they live in non-linear spaces, while statistics were essentially developed in a Euclidean framework. For instance, adding or subtracting curves or surfaces does not really make sense. Thus, there is a need to redefine a consistent statistical framework for objects living in manifolds and Lie groups, a field which is now called geometric statistics. The objective of this talk is to give an overview of the Riemannian computational tools and of simple statistics in these spaces. The talk is motivated and illustrated by applications in medical image analysis, such as the regression of simple and efficient models of brain atrophy in Alzheimer's disease and the groupwise analysis of the motion of the heart in sequences of images using the parallel transport of surface and image deformations.
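
To give one concrete instance of "simple statistics" on a manifold, the Fréchet mean replaces the arithmetic mean; this is standard material rather than a statement taken from the talk:

```latex
% Fréchet mean of samples x_1, ..., x_N on a metric space (M, d):
\bar{x} \;=\; \operatorname*{arg\,min}_{x \in M} \; \frac{1}{N} \sum_{i=1}^{N} d(x, x_i)^2 .
% On a Riemannian manifold it is typically computed by a fixed-point
% iteration: average the Log-mapped samples in the tangent space at the
% current estimate, then map back with Exp:
%   \bar{x} \leftarrow \operatorname{Exp}_{\bar{x}} \Big( \tfrac{1}{N} \sum_i \operatorname{Log}_{\bar{x}}(x_i) \Big).
```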

[slides]

### From Sound to Metric Priors: A New Paradigm for Shape Generation

**Speaker**: Emanuele Rodolà
**Abstract**: Spectral and metric geometry are at the heart of various problems in computer vision, graphics, pattern recognition, and machine learning. Ultimately, the core reason for their success can be traced back to questions of stability and to the informativeness of the eigenvalues of certain operators. In this talk, I will discuss and show tangible examples of such properties and showcase some dramatic implications for a selection of notoriously hard problems in computer vision and graphics. First, I will address the question of whether one can recover the shape of a geometric object from its vibration frequencies ('hear the shape of the drum'); while theoretically the answer to this question is negative, little is known about the practical possibility of using the spectrum for shape reconstruction and optimization. I will introduce a numerical procedure called isospectralization, as well as a data-driven variant, showing how this *practical* problem is solvable. Then, I will discuss the increasingly popular task of designing an effective generative model for deformable 3D shapes. I will demonstrate how injecting metric distortion priors into a simple geometric reconstruction loss can lead to the formation of a very informative latent space, which can be trained with extremely scarce data (fewer than 10 examples) and still yield competitive generation quality as well as aid geometric disentanglement.
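
A hedged sketch of what an isospectralization objective can look like (after Cosmo et al., "Isospectralization, or how to hear shape, style, and correspondence"; the exact loss used in the talk may differ):

```latex
% Isospectralization (sketch): deform the vertex positions X of a mesh so
% that the eigenvalues \lambda_i(\Delta_X) of its Laplace–Beltrami operator
% match a target spectrum \mu_1 \le \dots \le \mu_k, plus a regularizer \rho:
\min_{X} \;\; \sum_{i=1}^{k} w_i \, \big( \lambda_i(\Delta_X) - \mu_i \big)^2 \;+\; \rho(X) .
```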

## Talks

### Geometric and Physical Quantities improve E(3) Equivariant Message Passing

**Speaker**: Erik Bekkers
**Abstract**: Including covariant information such as position, force, velocity, or spin is important in many tasks in computational physics and chemistry. We introduce Steerable E(3) Equivariant Graph Neural Networks (SEGNNs), which generalise equivariant graph networks such that node and edge attributes are not restricted to invariant scalars but can contain covariant information such as vectors or tensors. Our model, composed of steerable MLPs, is able to incorporate geometric and physical information in both the message and update functions. Through the definition of steerable node attributes, the MLPs provide a new class of activation functions for general use with steerable feature fields. We discuss our and related work through the lens of equivariant non-linear convolutions, which further allows us to pinpoint the successful components of SEGNNs: non-linear message aggregation improves upon classic linear (steerable) point convolutions; steerable messages improve upon recent equivariant graph networks that send invariant messages. We demonstrate the effectiveness of our method on several tasks in computational physics and chemistry and provide extensive ablation studies.
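
To make the contrast with invariant messages concrete, here is a minimal sketch of E(n)-equivariant message passing with *invariant* scalar messages, i.e. the kind of baseline SEGNN improves upon (in the spirit of EGNN, Satorras et al. 2021). All names, shapes, and weights are illustrative; SEGNN itself replaces these scalar messages with steerable feature fields:

```python
import numpy as np

def mlp(z, w1, w2):
    """Tiny two-layer ReLU MLP (stand-in for a learned network)."""
    return np.maximum(z @ w1, 0.0) @ w2

def egnn_layer(h, x, w):
    """h: (n, f) invariant node features, x: (n, d) coordinates."""
    n = h.shape[0]
    h_new, x_new = h.copy(), x.copy()
    for i in range(n):
        agg = np.zeros(h.shape[1])
        for j in range(n):
            if i == j:
                continue
            d2 = np.sum((x[i] - x[j]) ** 2)  # E(n)-invariant: squared distance
            m = mlp(np.concatenate([h[i], h[j], [d2]]), w["m1"], w["m2"])
            agg += m
            # coordinate update along the equivariant direction x_i - x_j,
            # scaled by an invariant scalar derived from the message
            x_new[i] += (x[i] - x[j]) * m.mean() / n
        h_new[i] = mlp(np.concatenate([h[i], agg]), w["u1"], w["u2"])
    return h_new, x_new

# Toy usage with random weights and five nodes in 3D.
f, d, hid = 4, 3, 8
rng = np.random.default_rng(0)
w = {"m1": rng.normal(size=(2 * f + 1, hid)), "m2": rng.normal(size=(hid, f)),
     "u1": rng.normal(size=(2 * f, hid)), "u2": rng.normal(size=(hid, f))}
h, x = rng.normal(size=(5, f)), rng.normal(size=(5, d))
h2, x2 = egnn_layer(h, x, w)
```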

Reference:

Brandstetter, J., Hesselink, R., van der Pol, E., Bekkers, E., & Welling, M. (2022). **Geometric and Physical Quantities improve E(3) Equivariant Message Passing.** In *ICLR 2022*.

https://openreview.net/forum?id=_xwr8gOBeV1

### Gauge Equivariant Mesh Convolutional Neural Networks

**Speaker**: Pim de Haan
**Abstract**: Convolutional neural networks are widely successful in deep learning on image datasets. However, some data, like that resulting from MRI scans, do not reside on a square grid but instead live on curved manifolds, discretized as meshes. A key issue on such meshes is that they lack a local notion of direction, and hence the convolutional kernel cannot be canonically oriented. By doing message passing on the mesh and defining a groupoid of similar messages that should share weights, we propose a gauge equivariant method of building a CNN on such meshes that is direction-aware, yet agnostic to how the directions are chosen. It is scalable, invariant to how the mesh is rotated, and achieves state-of-the-art performance on a medical application: estimating blood flow through human arteries.
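
For orientation, the constraint behind gauge equivariance can be sketched roughly as follows (after the gauge equivariant CNN line of work by Cohen et al.; the mesh construction in this talk differs in its details):

```latex
% Gauge equivariance (sketch): under a change of local frame (gauge) g,
% input and output features transform by representations \rho_{in} and
% \rho_{out}; a kernel K defined on tangent vectors v must then satisfy
K(g^{-1} v) \;=\; \rho_{\text{out}}(g)^{-1} \, K(v) \, \rho_{\text{in}}(g),
% so that the convolution output does not depend on the arbitrary choice
% of reference direction at each mesh vertex.
```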

[slides] [GitHub]

### Embedding Guarantees for Representations by Small Probabilistic Graph Transformers

**Speaker**: Anastasis Kratsios
**Abstract**: The problem of representing a finite dataset equipped with a dissimilarity metric using a trainable feature map is one of the basic questions in machine learning. A hallmark of deep learning models is the capacity of a model's hidden layers to efficiently encode discrete Euclidean data into low-dimensional Euclidean feature spaces, from which their last layer can read out predictions. However, when the finite dataset is not Euclidean, it has been empirically confirmed that representations in non-Euclidean spaces systematically outperform traditional Euclidean feature representations. In this paper, we prove that given any n-point metric space X there exists a probabilistic graph transformer (PGT) which can bi-Hölder embed X into the univariate Gaussian mixtures MG(R) of [Delon et al. 2020] with small distortion of X's metric, for any 1/2 < α < 1. Moreover, this PGT's depth and width are approximately linear in n. We then show that, for any "distortion level" D > 2, there is a PGT which can represent X in MG(R) such that any uniformly sampled pair of points in X is bi-Lipschitz embedded with distortion at most D², with probability O(n^{-4e/D}). We show that if X has a suitable geometric prior (e.g. X is a combinatorial tree or a finite subspace of a suitable Riemannian manifold), then the PGT architecture can deterministically bi-Lipschitz embed X into MG(R) with low metric distortion. As applications, we consider PGT embeddings of 2-hop combinatorial graphs (such as friendship graphs, cocktail graphs, complete bipartite graphs, etc.), trees, and n-point subsets of Riemannian manifolds.
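
For readers less familiar with the terminology, the distortion notions above can be unpacked as follows; these are standard definitions, not statements from the paper:

```latex
% A map f : (X, d_X) -> (Y, d_Y) is a bi-Lipschitz embedding with
% distortion D >= 1 if (up to rescaling) for all x, x' in X:
\tfrac{1}{D} \, d_X(x, x') \;\le\; d_Y\big(f(x), f(x')\big) \;\le\; d_X(x, x') .
% A bi-Hölder embedding relaxes this to two-sided power bounds, e.g.
%   c \, d_X(x, x')^{1/\alpha} \le d_Y(f(x), f(x')) \le C \, d_X(x, x')^{\alpha},
% which is where the exponent range 1/2 < \alpha < 1 in the abstract enters.
```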

[slides]

### Deep Metric and Representation Learning

**Speaker**: Björn Ommer
**Abstract**: The ultimate goal of computer vision, and of AI in general, is models that help to understand our (visual) world. A key challenge of this inverse problem is to learn a metric in data space that reflects semantic relations in the real world. Visual similarity learning is therefore crucial for numerous other tasks such as content-based retrieval, clustering, or detection. The currently predominant approach to learning representations that capture similarity is Deep Metric Learning (DML), which specifically aims at establishing relations for novel, unseen classes. Moreover, similarity learning is closely related to contrastive learning, which is the leading approach to self-supervised and transfer learning.
In this talk, I will review the leading learning paradigms for DML and highlight the main directions of current research in the field. Thereafter, I will present novel approaches to address open challenges such as out-of-distribution generalization.
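
As a minimal illustration of a DML objective of the kind reviewed here, consider a generic triplet margin loss; this is a textbook example, not the speaker's specific method:

```python
import numpy as np

# Generic triplet margin loss: pull an anchor a toward a positive p of the
# same class and push it away from a negative n of a different class,
# measured in the learned embedding space.
def triplet_loss(a, p, n, margin=0.2):
    d_ap = np.linalg.norm(a - p, axis=-1)  # anchor-positive distance
    d_an = np.linalg.norm(a - n, axis=-1)  # anchor-negative distance
    return np.maximum(d_ap - d_an + margin, 0.0).mean()

# Toy usage with random stand-in "embeddings".
rng = np.random.default_rng(0)
a, p, n = (rng.normal(size=(8, 16)) for _ in range(3))
print(triplet_loss(a, p, n))
```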

### Graph Embeddings in Symmetric Spaces

**Speaker**: Beatrice Pozzetti
**Abstract**: Learning faithful graph representations as sets of vertex embeddings has become a fundamental intermediary step in a wide range of machine learning applications. I will discuss joint work with Lopez, Trettel, Strube, and Wienhard in which we propose the systematic use of symmetric spaces in representation learning: a versatile class of Riemannian manifolds generalising both Euclidean and hyperbolic spaces, which I will introduce during the talk and illustrate through examples. This enables us to introduce new methods: the use of Finsler metrics integrated into a Riemannian optimization scheme, which better adapt to dissimilar structures in the graph, and the use of a vector-valued distance that allows us to visualise and analyse embeddings. I will also discuss applications to graph reconstruction tasks on various synthetic and real-world datasets, as well as to downstream tasks such as recommender systems and node classification. Time permitting, I will discuss how gyrovector calculus can be adapted to some symmetric spaces, giving rise to analogues of vector space operations.
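
As one concrete example of a vector-valued distance, consider the symmetric space of positive definite matrices; the example is chosen here for illustration:

```latex
% For the symmetric space SPD(n) of positive definite matrices, the
% vector-valued distance between P and Q collects the (sorted)
% log-eigenvalues of P^{-1/2} Q P^{-1/2}:
\operatorname{vd}(P, Q) = \big( \log \lambda_1, \ldots, \log \lambda_n \big),
\qquad \lambda_i \in \operatorname{spec}\big( P^{-1/2} Q P^{-1/2} \big).
% The Riemannian distance is the Euclidean norm of this vector; Finsler
% distances arise by applying other norms (e.g. \ell^1 or \ell^\infty)
% to the same vector.
```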

[slides]

## Mini-Courses

### Introduction to Geometric Statistics with Geomstats

**Speaker**: Nicolas Guigui
**Abstract**: Geomstats is an open-source Python package for computations, statistics, and machine learning on nonlinear manifolds. Data from many application fields are elements of familiar manifolds such as the sphere, the space of rotation matrices, the space of positive definite matrices, shape spaces, etc. In the first session, I will give a general introduction to the topic, then introduce the fundamental notions needed from Riemannian geometry and the most common learning models for data on manifolds. All the notions will be exemplified with the package. The second session will be dedicated to running hands-on tutorials. Participants are welcome to bring their own dataset to work on.
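
A minimal usage sketch of the kind of computation covered in the tutorials, assuming a geomstats release from around 2022 (the API may differ in later versions):

```python
from geomstats.geometry.hypersphere import Hypersphere
from geomstats.learning.frechet_mean import FrechetMean

# Fréchet mean of random points on the 2-sphere: a typical first example.
sphere = Hypersphere(dim=2)
data = sphere.random_uniform(n_samples=10)  # (10, 3) points on S^2

mean = FrechetMean(metric=sphere.metric)    # gradient-based mean on the manifold
mean.fit(data)
print(mean.estimate_)                        # a point on the sphere, unlike the ambient average
```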

[slides]

### Hyperbolic Manifolds in Deep Learning

**Speaker**: Maxim Kochurov
**Abstract**: Hyperbolic manifolds are quite new in deep learning. Their mathematical elegance and theoretical advantages are very attractive for dimensionality reduction and rich representations, and a lot of research has investigated opportunities in graph-based deep learning and language models. In the talk, I'll give an overview of the main advances in the area, highlighting the most problematic parts of the theory and motivation. During the practical session, we'll get familiar with models and implementations that use the hyperbolic space to its fullest potential.
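
A minimal sketch of hyperbolic optimization with geoopt, a Riemannian optimization library co-developed by the speaker; the toy task and all numbers below are made up for illustration:

```python
import torch
import geoopt

# Toy task: fit points on the Poincaré ball so that their hyperbolic
# pairwise distances match some target distances.
ball = geoopt.PoincareBall(c=1.0)

torch.manual_seed(0)
with torch.no_grad():
    # hypothetical "ground truth" configuration and its distance matrix
    target_pts = ball.expmap0(0.3 * torch.randn(5, 2))
    target = ball.dist(target_pts[:, None], target_pts[None, :])

# parameter constrained to stay on the manifold
x = geoopt.ManifoldParameter(ball.expmap0(0.01 * torch.randn(5, 2)), manifold=ball)
opt = geoopt.optim.RiemannianAdam([x], lr=1e-2)

for step in range(500):
    opt.zero_grad()
    d = ball.dist(x[:, None], x[None, :])  # pairwise hyperbolic distances
    loss = ((d - target) ** 2).mean()
    loss.backward()
    opt.step()  # the Riemannian update keeps x on the ball
```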

[slides]
