Basic Information
Career Trajectory
Biography
My research interests lie at the intersection of computer vision and natural language processing. I believe that, like humans (and other animals), AI systems should have a holistic understanding of the world around them. This means working with multiple sensory modalities, among which vision and language arise as particularly interesting. On one hand, they are complementary: vision is a low-level perceptual modality, while language is an abstract human construct. On the other hand, they are believed to be two essential modalities for solving AI-complete problems.
I am generally interested in multimodal vision-language generative models, i.e. models capable of generating images and/or text conditioned on multimodal inputs. Generating new content requires learning and composing patterns from existing data, i.e. modeling the underlying data distribution. When this data represents the real world, generative models become effective “world models”. This idea has numerous applications. For example, text-conditioned image generation models can synthesize data on demand for training recognition/representation learning models on new tasks/skills. Furthermore, given the semantic and compositional nature of language, (large) language models can serve as reasoning engines. By aligning language models with vision encoders, we can build powerful multimodal systems capable of both perceiving and reasoning, which can be deployed as multimodal assistants (e.g. to aid visually-impaired users).
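As a minimal, illustrative sketch of the alignment idea mentioned above (not drawn from any of the papers listed below, and with all module names and dimensions assumed for illustration), one common recipe is to learn a projection from visual features into a language model's token-embedding space so that visual tokens can be consumed alongside text tokens:

# Illustrative sketch only; a stand-in linear layer replaces a real pretrained
# vision encoder so the example stays self-contained and runnable.
import torch
import torch.nn as nn

class VisionLanguageBridge(nn.Module):
    def __init__(self, vision_dim=1024, lm_dim=4096):
        super().__init__()
        # Stand-in for a pretrained vision encoder (e.g. a ViT).
        self.vision_encoder = nn.Linear(3 * 16 * 16, vision_dim)
        # Learned projector mapping visual features into the language
        # model's token-embedding space.
        self.projector = nn.Linear(vision_dim, lm_dim)

    def forward(self, patches, text_embeddings):
        # patches: (batch, num_patches, 3*16*16) flattened image patches
        # text_embeddings: (batch, seq_len, lm_dim) token embeddings
        visual_tokens = self.projector(self.vision_encoder(patches))
        # Prepend visual tokens to the text sequence; the combined sequence
        # would then be fed to the (frozen or fine-tuned) language model.
        return torch.cat([visual_tokens, text_embeddings], dim=1)

bridge = VisionLanguageBridge()
patches = torch.randn(1, 256, 3 * 16 * 16)
text = torch.randn(1, 32, 4096)
print(bridge(patches, text).shape)  # torch.Size([1, 288, 4096])

In practice the projector is often trained on image-text pairs while the large pretrained components stay frozen, which keeps this style of alignment relatively cheap compared to training a multimodal model from scratch.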
Research Interests
Papers (8 total)
- Florian Bordes, Richard Yuanzhe Pang, Anurag Ajay, Alexander C. Li, Adrien Bardes, Suzanne Petryk, Oscar Mañas, Zhiqiu Lin, Anas Mahmoud, Bargav Jayaraman, Mark Ibrahim, Melissa Hall, Yunyang Xiong, Jonathan Lebensold, Candace Ross, Srihari Jayakumar, Chuan Guo, Diane Bouchacourt, Haider Al-Tahan, Karthik Padthe, Vasu Sharma, Hu Xu, Xiaoqing Ellen Tan, Megan Richards, Samuel Lavoie, Pietro Astolfi, Reyhane Askari Hemmat, Jun Chen, Kushal Tirumala, Rim Assouel, Mazda Moayeri, Arjang Talattof, Kamalika Chaudhuri, Zechun Liu, Xilun Chen, Quentin Garrido, Karen Ullrich, Aishwarya Agrawal, Kate Saenko, Asli Celikyilmaz, Vikas Chandra. arXiv (Cornell University) / CoRR, 2024. Citations: 0.
- Melissa Hall, Oscar Mañas, Reyhane Askari-Hemmat, Mark Ibrahim, Candace Ross, Pietro Astolfi, Tariq Berrada Ifriqi, Marton Havasi, Yohann Benchetrit, Karen Ullrich, Carolina Braga, Abhishek Charnalia, Maeve Ryan, Mike Rabbat, Michal Drozdzal, Jakob Verbeek, Adriana Romero-Soriano. arXiv, 2024. Citations: 0.
- Thirty-Eighth AAAI Conference on Artificial Intelligence, Vol. 38, No. 5, pp. 4171-4179, 2024.
- 17th Conference of the European Chapter of the Association for Computational Linguistics (EACL 2023), pp. 2523-2548, 2023.
Author Statistics
#Papers: 8
#Citations: 271
H-Index: 2
G-Index: 2
Sociability: 4
Diversity: 1
Activity: 2
Collaborating Scholars / Collaborating Institutions (collaboration graph; legend: D-Core, collaborator, student, advisor)
Data Disclaimer
The data on this page is drawn from publicly available internet sources, partner publishers, and results produced automatically by AI analysis. We make no commitments or guarantees regarding the validity, accuracy, correctness, reliability, completeness, or timeliness of the data on this page. If you have questions, you can contact us by email: report@aminer.cn