
Semantic Supervision: Enabling Generalization over Output Spaces.

CoRR (2022)

Abstract
In this paper, we propose Semantic Supervision (SemSup) – a unified paradigm for training classifiers that generalize over output spaces. In contrast to standard classification, which treats classes as discrete symbols, SemSup represents them as dense vector features obtained from descriptions of classes (e.g., "The cat is a small carnivorous mammal"). This allows the output space to be unbounded (in the space of descriptions) and enables models to generalize both over unseen inputs and unseen outputs (e.g., "The aardvark is a nocturnal burrowing mammal with long ears"). Specifically, SemSup enables four types of generalization, to – (1) unseen class descriptions, (2) unseen classes, (3) unseen super-classes, and (4) unseen tasks. Through experiments on four classification datasets across two variants (multi-class and multi-label), two input modalities (text and images), and two output description modalities (text and JSON), we show that our SemSup models significantly outperform standard supervised models and existing models that leverage word embeddings over class names. For instance, our model outperforms baselines by 40% and 20% precision points on unseen descriptions and classes, respectively, on a news categorization dataset (RCV1). SemSup can serve as a pathway for scaling neural models to large unbounded output spaces and enabling better generalization and model reuse for unseen tasks and domains.
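The core idea — scoring inputs against embeddings of class *descriptions* rather than fixed class indices, so that a new class only needs a new description — can be sketched as follows. This is a hypothetical toy illustration, not the paper's method: a bag-of-words encoder and cosine similarity stand in for SemSup's learned neural encoders.

```python
from collections import Counter
import math

def embed(text: str) -> Counter:
    # Bag-of-words "embedding" -- a deliberately simple stand-in for the
    # learned description encoder used in the actual SemSup models.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def classify(text: str, class_descriptions: dict) -> str:
    # Score the input against each class description, not a class index.
    # An unseen class works as soon as a description is written for it.
    x = embed(text)
    return max(class_descriptions,
               key=lambda c: cosine(x, embed(class_descriptions[c])))

classes = {
    "cat": "the cat is a small carnivorous mammal that purrs",
    "aardvark": "the aardvark is a nocturnal burrowing mammal with long ears",
}

print(classify("a small mammal that purrs softly", classes))             # -> cat
print(classify("a nocturnal mammal burrowing with long ears", classes))  # -> aardvark
```

Because the output space lives in the space of descriptions, swapping or extending the `classes` dictionary changes the label set without retraining anything in this sketch — the analogue of SemSup's generalization to unseen classes and descriptions.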