
Neural-Symbolic Learning for Semantic Parsing

Abstract: Our goal in this thesis is to build a system that answers a natural language (NL) question by representing its semantics as a logical form (LF) and then computing the answer by executing the LF over a knowledge base. The core part of such a system is the semantic parser, which maps questions to logical forms. Our focus is on how to build high-performance semantic parsers by learning from (NL, LF) pairs. We propose to combine recurrent neural networks (RNNs) with symbolic prior knowledge expressed through context-free grammars (CFGs) and automata. By integrating CFGs over LFs into the RNN training and inference processes, we guarantee that the generated logical forms are well-formed; by integrating, through weighted automata, prior knowledge about the presence of certain entities in the LF, we further improve the performance of our models. Experimentally, we show that our approach outperforms both previous semantic parsers that do not use neural networks and RNNs not informed by such prior knowledge.
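The grammar-constrained decoding idea described in the abstract can be sketched as follows. This is a minimal illustration, not the thesis's actual model: the toy LF grammar, the `choose` scorer (standing in for the RNN), and all symbol names are invented for the example. The key point it demonstrates is that when each decoding step is restricted to grammar-legal expansions, every completed output is a well-formed logical form by construction.

```python
# Toy LF grammar (an assumed stand-in for the grammars over logical forms
# used in the thesis). Productions map a nonterminal to possible right-hand
# sides; terminals are plain token strings.
GRAMMAR = {
    "S": [["answer", "(", "E", ")"]],
    "E": [["entity"], ["and", "(", "E", ",", "E", ")"]],
}
NONTERMINALS = set(GRAMMAR)

def constrained_generate(choose, max_steps=50):
    """Leftmost derivation in which, at each step, only grammar-legal
    expansions are offered to the scorer `choose` (an RNN in the thesis;
    here any function picking one right-hand side from the options)."""
    stack, output = ["S"], []
    while stack and len(output) < max_steps:
        top = stack.pop()
        if top not in NONTERMINALS:
            output.append(top)        # terminal: emit it directly
            continue
        options = GRAMMAR[top]
        rhs = choose(top, options)    # scorer may only pick a legal expansion
        assert rhs in options
        stack.extend(reversed(rhs))   # push RHS so leftmost symbol is on top
    return output

# A trivial "scorer" that always picks the first production:
lf = constrained_generate(lambda nt, opts: opts[0])
print(" ".join(lf))  # answer ( entity )
```

Because ill-formed token sequences are never reachable, the neural scorer's probability mass is effectively renormalized over well-formed logical forms only, which is the guarantee the abstract refers to.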

Cited literature: 103 references
Submitted on: Friday, February 2, 2018 - 3:05:06 PM
Last modification on: Wednesday, November 3, 2021 - 7:09:11 AM
Long-term archiving on: Thursday, May 3, 2018 - 1:24:54 PM


Version validated by the jury (STAR)


  • HAL Id: tel-01699569, version 1


Chunyang Xiao. Neural-Symbolic Learning for Semantic Parsing. Computation and Language [cs.CL]. Université de Lorraine, 2017. English. ⟨NNT : 2017LORR0268⟩. ⟨tel-01699569⟩


