Perspectives on neural proof nets

Richard Moot

Submitted on 8 November 2022


In this paper I will present a novel way of combining proof net proof search with neural networks. It contrasts with the 'standard' approach, which has been applied to proof search in type-logical grammars in various forms. In the standard approach, we first transform words into formulas (supertagging) and then match atomic formulas to obtain a proof. I will introduce an alternative way to split the task in two: first, we generate the graph structure in a way which guarantees it corresponds to a lambda-term; then, we obtain the detailed structure using vertex labelling. Vertex labelling is a well-studied task for graph neural networks, and I will explore different ways of implementing graph generation using neural networks.
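To make the vertex-labelling step concrete, the following is a minimal illustrative sketch, not the paper's implementation: each vertex of a (hypothetical) proof-net graph receives a label computed from its own features together with one round of aggregated neighbour features, the basic operation underlying graph neural network node classification. The graph, the feature vectors, and the `classify` rule are all toy placeholders.

```python
def label_vertices(adj, features, classify):
    """Label each vertex from its own features plus the summed
    features of its neighbours (one message-passing round)."""
    labels = {}
    for v, nbrs in adj.items():
        agg = list(features[v])          # start from the vertex's own features
        for u in nbrs:                   # add each neighbour's contribution
            for i, x in enumerate(features[u]):
                agg[i] += x
        labels[v] = classify(agg)        # map aggregated features to a label
    return labels

# Toy path graph 0 - 1 - 2 with 2-dimensional features; the classifier
# picks whichever component dominates, standing in for choosing a
# vertex label (e.g. an atomic formula) in the real task.
adj = {0: [1], 1: [0, 2], 2: [1]}
features = {0: [1.0, 0.0], 1: [0.2, 0.3], 2: [0.0, 1.0]}
classify = lambda h: "np" if h[0] >= h[1] else "s"

print(label_vertices(adj, features, classify))
# → {0: 'np', 1: 's', 2: 's'}
```

In a real graph neural network the aggregation would be learned (weight matrices and a nonlinearity, stacked over several rounds) rather than a plain sum, but the structure of the computation is the same.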


Comment: This is an extended version of an invited talk for the workshop End-to-End Compositional Models of Vector-Based Semantics

Subject: Computer Science - Computation and Language