We introduce RecurrentGemma, an open language model which uses Google's novel
Griffin architecture. Griffin combines linear recurrences with local attention
to achieve strong performance on language tasks. It has a fixed-size state,
which reduces memory use and enables efficient inference on long sequences. We
provide a pre-trained model with 2B non-embedding parameters and an
instruction-tuned variant. Both models achieve performance comparable to
Gemma-2B despite being trained on fewer tokens.