Transformers for Higher-Order Logic

The use of automated theorem provers is on the rise, and they have recently helped establish new mathematical results. With the rise of AI, could these models be extended to help humans prove complicated mathematical theorems?

Mathematical theorems are, at their core, logical sequences of symbols, which makes them a natural target for Natural Language Processing techniques. In this work, I collaborated with two other team members to study automated theorem proving for higher-order logic. We implemented attention-based models and trained them on the HolStep dataset.
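As a rough illustration of the approach, the sketch below applies single-head scaled dot-product self-attention to a character-level encoding of a HolStep-style statement and pools the result into a relevance score (HolStep's core task is binary classification of whether a step is useful for a proof). All weights here are random placeholders, not the trained model; in a real model the queries, keys, and values would be learned linear projections.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X):
    """Single-head scaled dot-product self-attention.
    For simplicity X serves as queries, keys, and values; a trained
    model would use learned projections of X instead."""
    d = X.shape[-1]
    scores = X @ X.T / np.sqrt(d)          # (seq_len, seq_len)
    return softmax(scores, axis=-1) @ X    # contextualized tokens

# Character-level tokenization of an illustrative HOL-style statement.
statement = "|- (a /\\ b) ==> a"
vocab = {c: i for i, c in enumerate(sorted(set(statement)))}
rng = np.random.default_rng(0)
embed = rng.normal(size=(len(vocab), 16))  # toy embedding table

tokens = np.array([vocab[c] for c in statement])
X = embed[tokens]        # (seq_len, 16) token embeddings
H = self_attention(X)    # same shape, now context-aware
pooled = H.mean(axis=0)  # mean-pool over the sequence

# A linear "usefulness" head scores the step for the binary
# relevance task; the weight vector is a random placeholder.
w = rng.normal(size=16)
score = 1 / (1 + np.exp(-pooled @ w))
print(f"relevance score: {score:.3f}")
```

The full models add learned projections, multiple heads, positional information, and a training loop over HolStep's labeled positive and negative steps; this snippet only shows the attention computation at their core.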

Figure: a first-order theorem from the HolStep dataset.

GitHub: Code