Basic Information
Biography
My research interests lie at the intersection of computer vision and natural language processing. I believe that, like humans (and other animals), AI systems should have a holistic understanding of the world around them. This means working with multiple sensory modalities, among which vision and language arise as particularly interesting. On one hand, they are complementary: vision is a low-level perceptual modality, while language is an abstract human construct. On the other hand, they are believed to be two essential modalities for solving AI-complete problems.
I am generally interested in multimodal vision-language generative models, i.e. models capable of generating images and/or text conditioned on multimodal inputs. Generating new content requires learning and composing patterns from existing data, i.e. modeling the underlying data distribution. When this data represents the real world, generative models become effective “world models”. This idea has numerous applications. For example, text-conditioned image generation models can synthesize data on demand for training recognition/representation learning models on new tasks/skills. Furthermore, given the semantic and compositional nature of language, (large) language models can serve as reasoning engines. By aligning language models with vision encoders, we can build powerful multimodal systems capable of both perceiving and reasoning, which can be deployed as multimodal assistants (e.g. to aid visually-impaired users).
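The alignment idea described above (connecting a language model to a vision encoder) is often realized by projecting image features into the language model's token-embedding space so both modalities share one input sequence. A minimal sketch of that projection step, with all dimensions and names being illustrative assumptions rather than any specific model's configuration:

```python
import numpy as np

# Hypothetical dimensions, chosen for illustration only
VISION_DIM = 512   # output width of a frozen vision encoder
EMBED_DIM = 768    # token-embedding width of the language model
NUM_PATCHES = 16   # image patch features per image
NUM_TOKENS = 8     # text tokens in the prompt

rng = np.random.default_rng(0)

# A learned linear projector maps vision features into the LM's
# embedding space; here its weights are random stand-ins.
W_proj = rng.normal(scale=0.02, size=(VISION_DIM, EMBED_DIM))

def align(vision_feats, text_embeds):
    """Project patch features and prepend them to the text embeddings,
    yielding one multimodal sequence the language model can attend over."""
    visual_tokens = vision_feats @ W_proj              # (NUM_PATCHES, EMBED_DIM)
    return np.concatenate([visual_tokens, text_embeds], axis=0)

vision_feats = rng.normal(size=(NUM_PATCHES, VISION_DIM))
text_embeds = rng.normal(size=(NUM_TOKENS, EMBED_DIM))
seq = align(vision_feats, text_embeds)
print(seq.shape)  # (24, 768): visual tokens followed by text tokens
```

In practice the projector is trained (while the encoder and/or language model may stay frozen) so that visual tokens land in regions of embedding space the language model can reason over.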
Research Interests
Papers (3 total)

1. Oscar Mañas, Pietro Astolfi, Melissa Hall, Candace Ross, Jack Urbanek, Adina Williams, Aishwarya Agrawal, Adriana Romero-Soriano, Michal Drozdzal. CoRR (2024). Citations: 0.
2. AAAI 2024, no. 5 (2024): 4171-4179.
3. 17th Conference of the European Chapter of the Association for Computational Linguistics (EACL 2023), pp. 2523-2548 (2023).
Data Disclaimer

All data on this page comes from publicly available internet sources, partner publishers, and automated AI-based analysis. We make no promise or guarantee regarding the validity, accuracy, correctness, reliability, completeness, or timeliness of this data. For questions, contact us by email: report@aminer.cn