Self-training with Weak Supervision [Code]
State-of-the-art deep neural networks require large-scale labeled training data that is often expensive to obtain or unavailable for many tasks. Weak supervision in the form of domain-specific rules has been shown to be useful in such settings for automatically generating weakly labeled data. However, learning with weak rules is challenging due to their inherently heuristic and noisy nature. An additional challenge is rule coverage and overlap: prior work on weak supervision only considers instances to which at least one domain-specific rule applies. In contrast, we develop a weak supervision framework, WST, that leverages all available data for a given task. To this end, we use task-specific unlabeled data, which allows us to harness contextualized representations even for instances where no weak rule applies. To integrate this knowledge with the domain-specific heuristic rules, we develop a rule attention network that learns how to aggregate rule predictions conditioned on their fidelity and the underlying context of an instance. Finally, we develop a semi-supervised learning objective for training this framework with a small amount of labeled data, domain-specific rules, and unlabeled data. Extensive experiments on six benchmark datasets demonstrate the effectiveness of our approach, with significant improvements over state-of-the-art baselines.
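To make the rule attention idea concrete, the sketch below illustrates one plausible way such a component could be implemented; it is not the released WST code, and all class names, dimensions, and tensor layouts are illustrative assumptions. Each weak rule contributes a (possibly abstaining) class vote, and an attention score computed from the instance's contextualized representation and a learned per-rule embedding decides how much that vote contributes to the aggregated soft label.

```python
# Illustrative sketch of a rule attention network (not the authors' code).
# Assumed inputs: a contextualized instance embedding, each rule's hard
# class vote, and a mask marking which rules fire on the instance.
import torch
import torch.nn as nn
import torch.nn.functional as F


class RuleAttentionNetwork(nn.Module):
    def __init__(self, num_rules: int, num_classes: int, hidden_dim: int):
        super().__init__()
        # One learned embedding per weak rule; rule reliability ("fidelity")
        # is captured implicitly through these parameters during training.
        self.rule_embeddings = nn.Embedding(num_rules, hidden_dim)
        # Projects the instance representation into the rule-embedding space
        # so that instance-conditioned attention scores can be computed.
        self.instance_proj = nn.Linear(hidden_dim, hidden_dim)
        self.num_classes = num_classes

    def forward(self, instance_repr, rule_votes, rule_mask):
        """
        instance_repr: (batch, hidden_dim) contextualized instance embedding
        rule_votes:    (batch, num_rules) long tensor of per-rule class votes
        rule_mask:     (batch, num_rules) 1.0 where the rule fires, else 0.0
        Returns a (batch, num_classes) aggregated soft label distribution.
        """
        # Attention score for each (instance, rule) pair in [0, 1].
        queries = self.instance_proj(instance_repr)             # (B, H)
        keys = self.rule_embeddings.weight                      # (R, H)
        scores = torch.sigmoid(queries @ keys.t())              # (B, R)
        # Rules that do not fire on an instance get zero weight.
        scores = scores * rule_mask.float()
        # Turn each rule's hard vote into a one-hot distribution.
        votes = F.one_hot(rule_votes, self.num_classes).float()  # (B, R, C)
        # Weighted average of the fired rules' votes.
        weighted = (scores.unsqueeze(-1) * votes).sum(dim=1)     # (B, C)
        norm = scores.sum(dim=1, keepdim=True).clamp(min=1e-6)
        return weighted / norm
```

In the full framework described in the abstract, the aggregated soft labels produced by such a component would, together with the small clean labeled set, supervise a downstream model within the semi-supervised self-training objective.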