From https://www.youtube.com/watch?v=bIZB1hIJ4u8&t=6445s
The developers of Geometric Deep Learning argue that nature is symmetric. Symmetry, in my terms, is the repetition of a finite number of unique objects. This implies that the information in a symmetric system/representation can be recovered by knowing just the unique objects and how they repeat.
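One way I'd make this precise (my own framing, not from the talk): if a symmetry group $G$ acts on a system $X$, then $X$ decomposes into orbits, and it is fully determined by one representative per orbit (the "unique objects") together with the action of $G$:

$$
X \;\cong\; \bigsqcup_{r \in R} G \cdot r, \qquad R = \text{set of orbit representatives}.
$$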
## Category
![[Pasted image 20220621130313.png]]
Category theory studies the relationships between objects, independently of the identity of the objects. Consider a graph of sentences where each node contains a different sentence, and suppose we're interested in translating a sentence written in language A into one in language B. Although the two sentences are written in different languages, they are semantically equivalent. Machine translation is a transformation from the syntax of language A to the syntax of language B that preserves semantics; we can view the two sentences as symmetric under semantics.
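A way to write down "semantics are preserved" (my own notation): if $\operatorname{sem}_A$ and $\operatorname{sem}_B$ map sentences of each language into a shared space of meanings, then a correct translation $t_{A \to B}$ makes the diagram commute:

$$
\operatorname{sem}_B \circ t_{A \to B} \;=\; \operatorname{sem}_A .
$$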
Are modern neural nets capable of identifying this "transformation"? A neural net takes a sentence and maps it to a position in a high-dimensional latent space, which can be viewed as a semantic space. The latent space is too high-dimensional and abstract for us to interpret directly, but if sentences with distinct semantics are mapped one-to-one into this semantic domain, we can map from the space of language-A sentences into the latent space, and then from the latent space into the space of language-B sentences. Finding this unique mapping is the hard part; to me, geometric deep learning attempts to make the notion of "semantic symmetry" rigorous so that these relational mappings can be discovered more easily.
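As a conceptual sketch (my own illustration, not from the talk): translation as encode-then-decode through a shared latent space, where the latent vector plays the role of the language-independent semantic representation. The model names, sizes, and the single-step decoder are hypothetical toy simplifications, assuming PyTorch.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Maps language-A token ids into a latent vector (the 'semantic space')."""
    def __init__(self, vocab_size: int, d_model: int = 256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.rnn = nn.GRU(d_model, d_model, batch_first=True)

    def forward(self, tokens):              # tokens: (batch, seq_len)
        h, _ = self.rnn(self.embed(tokens))
        return h.mean(dim=1)                 # (batch, d_model) latent summary

class Decoder(nn.Module):
    """Maps a latent vector to scores over language-B tokens (one step, toy)."""
    def __init__(self, vocab_size: int, d_model: int = 256):
        super().__init__()
        self.proj = nn.Linear(d_model, vocab_size)

    def forward(self, latent):               # latent: (batch, d_model)
        return self.proj(latent)              # (batch, vocab_size) scores

# translate_A_to_B = dec_B ∘ enc_A: composition through the shared latent space.
enc_A = Encoder(vocab_size=1000)
dec_B = Decoder(vocab_size=1200)
sentence_A = torch.randint(0, 1000, (1, 7))   # dummy 7-token sentence in A
logits_B = dec_B(enc_A(sentence_A))
```

The point of the sketch is only the composition: the encoder and decoder never see each other's vocabulary, so any shared structure they exploit has to live in the latent space.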