Physical Time: A Model for Generating Rhythmic Gestures Based on Time Metaphors

MOCO '20: 7th International Conference on Movement and Computing, Jersey City, NJ, USA (virtual), July 2020

Abstract
Possibilities for cross-disciplinary interactive performance continue to grow as new tools are developed and adapted. Yet the qualitative aspects of cross-disciplinary interaction have not advanced at the same rate. We suggest that new models for understanding gesture in different media will support the development of nuanced interaction for interactive performance. We have explored this premise by considering models for generating musical rhythmic gestures that enable implicit interaction between the gestures of a dancer and the generated music. We create a model that treats rhythms as dynamic gestures that flow in, around, or out of goal points. Goal points can be layered and quantized to a meter, providing the rhythmic structure expected in music, while the figurations enable the generated rhythms to flow with the performer, responding to the more qualitative aspects of the performance. We have made a simple implementation of this model to test its conceptual and technical viability. We discuss both the model and our implementation, suggesting that the model, even with a simple implementation, affords a unique ability to reflect the dynamic flow of gestures in movement paradigms while still providing a sense of structured time indicative of a musical paradigm.
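The abstract gives no implementation details, so the following Python sketch is only a rough illustration of the core idea: goal points quantized to a metric grid, with gesture-like figurations that flow into them, modulated by a continuous movement-quality parameter. Every name here (GoalPoint, metric_goal_points, gesture_onsets, the flow parameter) is a hypothetical assumption, not the authors' code.

import random
from dataclasses import dataclass

@dataclass
class GoalPoint:
    beat: float      # metric position, quantized to the meter
    weight: float    # metric strength, e.g. downbeats weighted heavier

def metric_goal_points(bars: int, beats_per_bar: int = 4) -> list[GoalPoint]:
    """Lay goal points out on a quantized metric grid."""
    points = []
    for bar in range(bars):
        for beat in range(beats_per_bar):
            weight = 1.0 if beat == 0 else 0.5   # downbeat vs. other beats
            points.append(GoalPoint(bar * beats_per_bar + beat, weight))
    return points

def gesture_onsets(goal: GoalPoint, flow: float, rng: random.Random) -> list[float]:
    """Generate onsets that flow into a goal point.

    `flow` in [0, 1] stands in for a qualitative feature of the dancer's
    movement: low flow yields a single onset on the goal, high flow a run
    of anticipatory onsets curving into it.
    """
    n_approach = int(round(flow * 3))            # up to 3 lead-in onsets
    onsets = [goal.beat - (i + 1) * 0.25 * (1 - 0.3 * flow)
              for i in reversed(range(n_approach))]
    jitter = (1 - goal.weight) * 0.02 * rng.uniform(-1, 1)
    onsets.append(goal.beat + jitter)            # the goal point itself
    return onsets

if __name__ == "__main__":
    rng = random.Random(0)
    flow_curve = [0.1, 0.4, 0.8, 0.6]            # stand-in dancer data, one value per beat
    rhythm = []
    for goal, flow in zip(metric_goal_points(bars=1), flow_curve):
        rhythm.extend(gesture_onsets(goal, flow, rng))
    print([round(t, 3) for t in sorted(rhythm)])

Keeping the goal points on a fixed metric grid while letting only the figurations bend with the flow parameter mirrors the abstract's split between structured musical time and the dynamic, qualitative flow of movement.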