Learning Dependency-Based Compositional Semantics
Paper summary

* Supervised semantic parsers must first map questions into logical forms, which requires data manually annotated with logical forms.
* All we really care about is the resulting denotation for a given input, so we are free to choose how logical forms are represented.
* The paper introduces a new semantic representation: dependency-based compositional semantics (DCS).
* Logical forms are represented as DCS trees, where nodes represent predicates (State, Country, Genus, ...) and edges represent relations between them.
* This representation makes the correspondence between syntax and semantics transparent, and hence gives a streamlined framework for program induction.
* The denotation is computed at the root node.
* DCS trees mirror syntactic dependency structure, which facilitates parsing and also enables efficient computation of the denotation defined on a given tree (see the sketch after this list).
* To handle divergence between syntactic and semantic scope in more complicated expressions, nodes low in the tree are marked with a *mark* relation (E, Q, or C) and then invoked higher up with an *execute* relation to create the desired semantic scope.
* A discriminative semantic parsing model places a log-linear distribution over the set of permissible DCS trees given an utterance (see the scoring sketch below).
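
The snippet below is a minimal sketch, not the authors' implementation, of how a DCS tree's denotation can be computed bottom-up against a toy database. The names `DCSNode`, `denote`, `WORLD`, and the toy predicates are assumptions for illustration; each edge carries a join relation j/j' requiring the j-th argument of a parent tuple to equal the j'-th argument of some child tuple, and the mark/execute machinery for scope is omitted.

```python
# Toy "world": each predicate denotes a set of tuples (illustrative data,
# not the GeoQuery database used in the paper).
WORLD = {
    "state":  {("california",), ("oregon",), ("nevada",)},
    "border": {("california", "oregon"), ("oregon", "california"),
               ("california", "nevada"), ("nevada", "california"),
               ("oregon", "nevada"), ("nevada", "oregon")},
    "oregon": {("oregon",)},   # a constant treated as a singleton predicate
}


class DCSNode:
    """A DCS tree node: a predicate plus join-labelled child edges.

    Each edge (j, jc, child) encodes the join relation j/jc: the j-th
    argument of this node's tuples must equal the jc-th argument of
    some tuple in the child's denotation (1-indexed, as in the paper).
    """

    def __init__(self, predicate, edges=()):
        self.predicate = predicate
        self.edges = list(edges)


def denote(node):
    """Compute the denotation of a DCS tree bottom-up.

    Start from the tuples of the node's predicate and filter them by
    each child's denotation via the join constraint on the edge.
    """
    tuples = set(WORLD[node.predicate])
    for j, jc, child in node.edges:
        child_values = {t[jc - 1] for t in denote(child)}
        tuples = {t for t in tuples if t[j - 1] in child_values}
    return tuples


if __name__ == "__main__":
    # "states bordering Oregon": border(s1, s2) with s1 a state and
    # s2 constrained to the constant oregon.
    query = DCSNode("border", edges=[
        (1, 1, DCSNode("state")),      # first argument must be a state
        (2, 1, DCSNode("oregon")),     # second argument must equal oregon
    ])
    print({t[0] for t in denote(query)})   # -> {'california', 'nevada'}
```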
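
And a minimal sketch of the log-linear distribution over candidate DCS trees given an utterance, p(z | x; θ) ∝ exp(θ · φ(x, z)). The feature map `phi` and the candidate set are placeholders; in the paper, features are defined over the utterance and the DCS tree, and the parameters are trained from question-answer pairs alone by marginalizing over latent trees.

```python
import math

def log_linear_distribution(candidate_trees, phi, theta):
    """Return p(z | x) proportional to exp(theta . phi(z)) over candidates.

    `phi(z)` maps a candidate tree to a dict of feature counts and
    `theta` maps feature names to weights; both are placeholders.
    """
    scores = [sum(theta.get(f, 0.0) * v for f, v in phi(z).items())
              for z in candidate_trees]
    m = max(scores)                         # subtract max for stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]
```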

Summary by Mihail Eric