Transport of Algebraic Structure to Latent Embeddings

   Samuel Pfrommer,   Brendon G. Anderson,   Somayeh Sojoudi  

University of California, Berkeley   

ICML 2024 (Spotlight)    

Paper | Code

Machine learning often aims to produce latent embeddings of inputs which lie in a larger, abstract mathematical space. For example, in the field of 3D modeling, subsets of Euclidean space can be embedded as vectors using implicit neural representations. Such subsets also have a natural algebraic structure including operations (e.g., union) and corresponding laws (e.g., associativity). How can we learn to “union” two sets using only their latent embeddings while respecting associativity? We propose a general procedure for parameterizing latent space operations that are provably consistent with the laws on the input space. This is achieved by learning a bijection from the latent space to a carefully designed mirrored algebra which is constructed on Euclidean space in accordance with desired laws. We evaluate these structural transport nets for a range of mirrored algebras against baselines that operate directly on the latent space. Our experiments provide strong evidence that respecting the underlying algebraic structure of the input space is key for learning accurate and self-consistent operations.


Algebraic structure and embedding spaces

Learned embeddings often represent mathematical objects with algebraic structure. Consider the space of subsets of $\mathbb{R}^n$, which can be endowed with operations (e.g., $\cap$, $\cup$) and laws (e.g., $A \cup B = B \cup A$). Implicit Neural Representations (INRs) capture subsets as the sublevel sets of learned networks; we can then extract latent embeddings $z$ of sets via an autoencoder-style architecture on INR weights $\phi$. Specifically, we assume that we are provided with encoder/decoder maps $E$ and $D$ connecting latent embeddings in $L=\mathbb{R}^l$ to elements of the power set $\mathcal{P}(\mathbb{R}^n)$, where the latter source space carries algebraic structure.

INR autoencoder
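
As a rough sketch of this setup, the snippet below (assuming PyTorch) treats $E$ and $D$ as plain MLPs over flattened INR weight vectors; the module name INRAutoencoder and all architectural details are illustrative placeholders rather than the architecture used in the paper.

import torch
import torch.nn as nn

class INRAutoencoder(nn.Module):
    """Illustrative encoder/decoder pair over flattened INR weights phi.

    E maps INR weights to a latent embedding z in L = R^l;
    D maps z back to (approximate) INR weights.
    """

    def __init__(self, inr_weight_dim: int, latent_dim: int, hidden: int = 512):
        super().__init__()
        # E: phi -> z
        self.encoder = nn.Sequential(
            nn.Linear(inr_weight_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, latent_dim),
        )
        # D: z -> reconstructed phi
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, inr_weight_dim),
        )

    def forward(self, phi: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]:
        z = self.encoder(phi)       # latent embedding of the underlying set
        phi_hat = self.decoder(z)   # reconstructed INR weights
        return z, phi_hat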

The source algebra operations (e.g., $\cup$ and $\cap$) are often foundational for downstream tasks where we may only have access to latent embeddings. We thus intuitively want to learn latent-space operations which yield the correct results in set space. Informally, for the union operation this would mean that $$ A \cup^{\mathcal{S}} B \approx D(E(A) \cup^{\mathcal{L}} E(B)),$$ where $\cup^\mathcal{S}: \mathcal{P}(\mathbb{R}^n) \times \mathcal{P}(\mathbb{R}^n) \to \mathcal{P}(\mathbb{R}^n)$ is the standard set union and $\cup^{\mathcal{L}}: L \times L \to L$ is a learned latent-space analog. We could imagine directly parameterizing maps $\cup^{\mathcal{L}}: L \times L \to L$ and $\cap^{\mathcal{L}}: L \times L \to L$ as MLPs:

Learned operations
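
A minimal sketch of such a direct parameterization (assuming PyTorch; DirectLatentOp and its architecture are hypothetical, for illustration only):

import torch
import torch.nn as nn

class DirectLatentOp(nn.Module):
    """Naive baseline: a latent binary operation L x L -> L as an MLP.

    Nothing in this parameterization enforces commutativity, associativity,
    or any other law of the source algebra.
    """

    def __init__(self, latent_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, latent_dim),
        )

    def forward(self, z1: torch.Tensor, z2: torch.Tensor) -> torch.Tensor:
        # Concatenate the two embeddings and map them back into the latent space.
        return self.net(torch.cat([z1, z2], dim=-1))

# union_L = DirectLatentOp(latent_dim=64)
# In general union_L(z1, z2) != union_L(z2, z1): no law is guaranteed.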

But such a naive parameterization would not capture the symmetries provided by the laws of the source algebra; $D(E(A) \cup^{\mathcal{L}} E(B))$ and $D(E(B) \cup^{\mathcal{L}} E(A))$ could yield completely different sets! This leads us to ask a key question:

Can we learn operations on the latent space which provably respect the algebraic laws of the source algebra?

While we focus our work on sets for exposition, we note that this connection between latent embeddings and source-space algebraic structure is quite general. A similar idea applies to learned (continuous) functions, probability distributions, and textual embeddings.

Algebraic structure overview

Transport of structure

A precise formulation of our method relies on machinery from universal algebra. We provide an informal explanation here and refer to the full paper for additional details.

Our key idea is to parameterize a learned bijection $\varphi: L \to M$, where the mirrored space $M = \mathbb{R}^l$ has the same dimensionality as $L$. We endow $M$ with one operation for each operation on the source algebra, attempting to ensure that these operations satisfy all required laws. For instance, we can define mirrored-space operations $\cup^{\mathcal{M}}: M \times M \to M$ and $\cap^{\mathcal{M}}: M \times M \to M$ which give $M$ the structure of a Riesz space: $$ \begin{aligned} a \cup^{\mathcal{M}} b &= \max(a, b), \\ a \cap^{\mathcal{M}} b &= \min(a, b). \end{aligned} $$ This choice of operations satisfies all the same laws as $\cup^{\mathcal{S}}$ and $\cap^{\mathcal{S}}$ do for the space of sets: commutativity, associativity, absorption, and distributivity. We can then transport the operations $f \in \{\cup, \cap\}$ to the latent space via $\varphi$: $$ f^{\mathcal{L}}(z_1,\dots,z_n) := \varphi^{-1}\big(f^{\mathcal{M}}(\varphi(z_1),\dots,\varphi(z_n))\big). $$ Crucially, since $\varphi$ is a learned bijection, a good choice of operations on $M$ yields operations on $L$ which provably satisfy the desired source algebra laws. Details on how we train $\varphi$ and choose mirrored-space operations are deferred to the full paper.

Algebraic structure overview
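
To make the construction concrete, here is a minimal sketch (assuming PyTorch) of transporting the Riesz-space operations through a learned bijection. The single affine coupling layer and the names CouplingBijection and transported_op are illustrative assumptions, not the paper's exact parameterization of $\varphi$; any invertible network yields the same guarantee.

import torch
import torch.nn as nn

class CouplingBijection(nn.Module):
    """A single affine coupling layer: an easily invertible stand-in for the
    learned bijection phi: L -> M."""

    def __init__(self, latent_dim: int, hidden: int = 256):
        super().__init__()
        assert latent_dim % 2 == 0
        self.half = latent_dim // 2
        self.scale_shift = nn.Sequential(
            nn.Linear(self.half, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * self.half),
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        # phi: L -> M
        a, b = z[..., :self.half], z[..., self.half:]
        s, t = self.scale_shift(a).chunk(2, dim=-1)
        return torch.cat([a, b * torch.exp(s) + t], dim=-1)

    def inverse(self, m: torch.Tensor) -> torch.Tensor:
        # phi^{-1}: M -> L
        a, b = m[..., :self.half], m[..., self.half:]
        s, t = self.scale_shift(a).chunk(2, dim=-1)
        return torch.cat([a, (b - t) * torch.exp(-s)], dim=-1)

def transported_op(phi: CouplingBijection, mirrored_op):
    """Transport a mirrored-space operation f^M to the latent space:
    f^L(z_1, ..., z_n) = phi^{-1}(f^M(phi(z_1), ..., phi(z_n)))."""
    def f_latent(*zs: torch.Tensor) -> torch.Tensor:
        return phi.inverse(mirrored_op(*(phi(z) for z in zs)))
    return f_latent

# Riesz-space mirrored operations: elementwise max and min.
def union_M(m1, m2): return torch.maximum(m1, m2)
def inter_M(m1, m2): return torch.minimum(m1, m2)

phi = CouplingBijection(latent_dim=64)
union_L = transported_op(phi, union_M)  # commutative and associative by construction
inter_L = transported_op(phi, inter_M)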

Experimental results

Our problem setting concerns learning $\cap$ and $\cup$ over a dataset of synthetic sets in $\mathbb{R}^2$. We evaluate a variety of mirrored-space operation combinations and two directly parameterized MLP baselines.

Results

The left-hand figure above plots the performance of learned operations against the number of satisfied source algebra laws. Each scatter point represents one combination of learned operations, with the solid line capturing the mean. The increasing trend confirms our primary hypothesis: learned latent-space operations achieve higher performance when constructed to satisfy source algebra laws.

The right-hand figure examines the self-consistency of learned operations: do they yield similar results for equivalent terms? For instance, does the prediction for $A \cup B$ match that for $B \cup A$? The Riesz space transported algebra is perfectly self-consistent, as it satisfies all source algebra laws. All other learned operations, however, degrade as more random laws are applied to generate equivalent terms.
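
As a simple probe of this self-consistency (assuming PyTorch; the helper names are hypothetical, and union_L refers to any learned latent union such as the one sketched above):

import torch

def commutativity_gap(op, z1, z2) -> float:
    """Distance between the outputs for the equivalent terms op(z1, z2) and op(z2, z1)."""
    return torch.linalg.norm(op(z1, z2) - op(z2, z1)).item()

def associativity_gap(op, z1, z2, z3) -> float:
    """Distance between op(op(z1, z2), z3) and op(z1, op(z2, z3))."""
    return torch.linalg.norm(op(op(z1, z2), z3) - op(z1, op(z2, z3))).item()

# For transported operations these gaps are zero up to floating-point error;
# for a directly parameterized MLP they are generally nonzero.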


Reference

@inproceedings{pfrommer2024transport,
   title={Transport of Algebraic Structure to Latent Embeddings},
   author={Pfrommer, Samuel and Anderson, Brendon G and Sojoudi, Somayeh},
   booktitle={International Conference on Machine Learning},
   year={2024}
}