Holistic Safety and Responsibility Evaluations of Advanced AI Models
CoRR (2024)
Abstract
Safety and responsibility evaluations of advanced AI models are a critical
but developing field of research and practice. In the development of Google
DeepMind's advanced AI models, we innovated on and applied a broad set of
approaches to safety evaluation. In this report, we summarise and share
elements of our evolving approach as well as lessons learned for a broad
audience. Key lessons learned include: First, theoretical underpinnings and
frameworks are invaluable to organise the breadth of risk domains, modalities,
forms, metrics, and goals. Second, theory and practice of safety evaluation
development each benefit from collaboration to clarify goals, methods and
challenges, and facilitate the transfer of insights between different
stakeholders and disciplines. Third, similar key methods, lessons, and
institutions apply across the range of concerns in responsibility and safety -
including established and emerging harms. For this reason, it is important that
a wide range of actors working on safety evaluation and safety research
communities work together to develop, refine and implement novel evaluation
approaches and best practices, rather than operating in silos. The report
concludes by outlining the clear need to rapidly advance the science of
evaluations, to integrate new evaluations into the development and governance
of AI, to establish scientifically-grounded norms and standards, and to promote
a robust evaluation ecosystem.