
Universal Motion Generator: Trajectory Autocompletion by Motion Prompts


dc.creator Wang, Yanwei
dc.creator Shah, Julie
dc.date 2022-06-15T14:42:32Z
dc.date 2022-06-15
dc.date.accessioned 2023-02-17T20:00:21Z
dc.date.available 2023-02-17T20:00:21Z
dc.identifier https://hdl.handle.net/1721.1/143430
dc.description Foundation models, which are large neural networks trained on massive datasets, have shown impressive generalization in both the language and vision domains. While fine-tuning foundation models for new tasks at test time is impractical due to the billions of parameters in those models, prompts have been employed to re-purpose models for test-time tasks on the fly. In this report, we ideate an equivalent foundation model for motion generation and the corresponding prompt formats that can condition such a model. The central goal is to learn a behavior prior for motion generation that can be re-used in a novel scene.
dc.description CSAIL NSF MI project – 6939398
dc.format application/pdf
dc.language en_US
dc.rights Attribution-NonCommercial-NoDerivs 3.0 United States
dc.rights http://creativecommons.org/licenses/by-nc-nd/3.0/us/
dc.subject Robot Learning, Large Language Models, Motion Generation
dc.title Universal Motion Generator: Trajectory Autocompletion by Motion Prompts
dc.type Working Paper
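
The abstract above describes conditioning a motion-generation model on a "motion prompt" to autocomplete a trajectory. The sketch below is not from the paper; it is a minimal illustration of the general idea, assuming a hypothetical one-step predictor (here a toy constant-velocity callable) that is prompted with a partial trajectory and rolled out autoregressively.

    # Illustrative sketch (not the paper's method): trajectory autocompletion
    # by prompting a pretrained sequence model with a partial trajectory.
    # `model` is a hypothetical placeholder for any one-step waypoint predictor.
    import numpy as np

    def autocomplete_trajectory(model, prompt_waypoints, horizon=50):
        """Autoregressively extend a partial trajectory (the "motion prompt").

        model            -- callable mapping a (T, D) waypoint history to the
                            next predicted waypoint of shape (D,)
        prompt_waypoints -- (T0, D) array of observed waypoints used as the prompt
        horizon          -- number of future waypoints to generate
        """
        trajectory = list(prompt_waypoints)
        for _ in range(horizon):
            history = np.stack(trajectory)   # condition on prompt + generated motion
            next_wp = model(history)         # one-step prediction
            trajectory.append(next_wp)
        return np.stack(trajectory)

    if __name__ == "__main__":
        # Toy "model": continue the motion at constant velocity.
        toy_model = lambda h: h[-1] + (h[-1] - h[-2])
        prompt = np.array([[0.0, 0.0], [0.1, 0.0], [0.2, 0.0]])  # observed motion
        full = autocomplete_trajectory(toy_model, prompt, horizon=5)
        print(full)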


Files in this item

Files Size Format View
Universal_Motion_Generator.pdf 4.934Mb application/pdf View/Open

This item appears in the following Collection(s)

  • DSpace@MIT [2699]
    DSpace@MIT is a digital repository for MIT's research, including peer-reviewed articles, technical reports, working papers, theses, and more.

