
Prise en compte du contexte inter-phrastique pour l'extraction d'événements supervisée (Taking inter-sentential context into account for supervised event extraction)

Abstract: The extraction of structured information from documents is one of the main tasks of natural language processing (NLP). This extraction usually consists of three steps: named entity recognition, relation extraction, and event extraction. The last step is considered the most challenging. The notion of event covers a broad range of phenomena, each characterized by a varying number of roles. Event extraction therefore consists in detecting the occurrence of an event and then determining its arguments, that is, the entities filling its specific roles. These two steps are usually performed one after the other; in this case, the first step revolves around detecting triggers indicating the occurrence of events. The current best approaches, based on neural networks, focus on the direct neighborhood of the target word within its sentence, so information in the rest of the document is usually ignored. This thesis presents different approaches aiming at exploiting this document-level context.

We begin by reproducing a state-of-the-art convolutional neural network and analyzing some of its parameters. We then present an experiment showing that, despite its good performance, our model exploits only a narrow, intra-sentential context. Subsequently, we present two methods to generate a representation of the inter-sentential context and integrate it into a neural network operating on an intra-sentential context. The first contribution produces a task-specific representation of the inter-sentential context by aggregating the predictions of a first intra-sentential model. This representation is then integrated into a second model, allowing it to use the document-level distribution of events to improve its performance.
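The abstract does not specify how the first model's predictions are aggregated; the following is a minimal sketch of the general idea, assuming a simple max-pooling aggregation and concatenation (all function names and shapes are illustrative, not taken from the thesis).

```python
import numpy as np

def document_context(sentence_predictions: np.ndarray) -> np.ndarray:
    """Aggregate per-sentence probability distributions over event types
    (shape: n_sentences x n_types) into one document-level vector,
    here by max-pooling over sentences."""
    return sentence_predictions.max(axis=0)

def enrich(word_repr: np.ndarray, doc_ctx: np.ndarray) -> np.ndarray:
    """Concatenate the intra-sentential word representation with the
    document-level event distribution before classification."""
    return np.concatenate([word_repr, doc_ctx])

# Toy example: 3 sentences, 4 event types, 5-dim word representation.
preds = np.array([[0.9, 0.05, 0.03, 0.02],
                  [0.1, 0.70, 0.10, 0.10],
                  [0.2, 0.10, 0.60, 0.10]])
ctx = document_context(preds)        # column-wise max: [0.9, 0.7, 0.6, 0.1]
features = enrich(np.zeros(5), ctx)  # 5 + 4 = 9 features
```

The second model thus sees, for each target word, both its local sentence context and a fixed-size summary of which event types the first model detected anywhere in the document.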
We also show that this task-specific representation outperforms an existing generic representation of the inter-sentential context. Our second contribution, which addresses the limitations of the first, dynamically generates a specific context for each target word. This method yields the best performance for a single model on multiple datasets. Finally, we take a different tack on exploiting the inter-sentential context: we model the dependencies between multiple event instances within a document more directly in order to produce a joint prediction. To do so, we use the PSL (Probabilistic Soft Logic) framework, which models such dependencies through logic formulas.
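The abstract does not give the PSL rules used in the thesis, but the core mechanism can be sketched: PSL relaxes a logical rule "body → head" over [0, 1] truth values with the Łukasiewicz t-norm, and the rule's distance to satisfaction penalizes joint predictions that violate it. The example rule below (an Attack trigger in one sentence making one more likely in a related sentence) is purely illustrative.

```python
def lukasiewicz_and(a: float, b: float) -> float:
    """Lukasiewicz conjunction: max(0, a + b - 1)."""
    return max(0.0, a + b - 1.0)

def rule_penalty(body: float, head: float) -> float:
    """Distance to satisfaction of the rule body -> head:
    max(0, body - head). Zero when the rule is satisfied."""
    return max(0.0, body - head)

# Hypothetical rule: Attack(s1) & Related(s1, s2) -> Attack(s2).
# Soft truth values for the three atoms:
attack_s1, related, attack_s2 = 0.9, 0.8, 0.3
body = lukasiewicz_and(attack_s1, related)  # ~0.7
penalty = rule_penalty(body, attack_s2)     # ~0.4: the rule is violated
```

During joint inference, PSL minimizes the weighted sum of such penalties over all grounded rules, so raising `attack_s2` here would reduce the penalty, which is how document-level dependencies push individual trigger predictions.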

Cited literature: 134 references
Submitted on : Wednesday, July 1, 2020 - 4:51:13 PM
Last modification on : Friday, October 9, 2020 - 10:02:38 AM
Long-term archiving on: Wednesday, September 23, 2020 - 5:08:40 PM


Version validated by the jury (STAR)


  • HAL Id: tel-02886672, version 1



Dorian Kodelja Bonan. Prise en compte du contexte inter-phrastique pour l'extraction d'événements supervisée. Machine Learning [stat.ML]. Université Paris-Saclay, 2020. Français. ⟨NNT : 2020UPASS005⟩. ⟨tel-02886672⟩


