We focus on full-body human interactions with large-sized daily objects and aim to predict the future states of objects and humans given a sequential observation of human-object interaction. As there is no existing dataset dedicated to full-body human interactions with large-sized daily objects, we collected a large-scale dataset containing thousands of interactions for training and evaluation purposes. We also observe that an object's intrinsic physical properties are useful for object motion prediction, and thus design a set of object dynamic descriptors to encode such intrinsic properties. We treat the object dynamic descriptors as a new modality and propose a graph neural network, HO-GCN, to fuse motion data and dynamic descriptors for the prediction task.
Learn to Predict How Humans Manipulate Large-Sized Objects From Interactive Motions. IEEE Robotics and Automation Letters, 2022.
@article{wan2022learn,
  title={Learn to Predict How Humans Manipulate Large-Sized Objects From Interactive Motions},
  author={Wan, Weilin and Yang, Lei and Liu, Lingjie and Zhang, Zhuoying and Jia, Ruixing and Choi, Yi-King and Pan, Jia and Theobalt, Christian and Komura, Taku and Wang, Wenping},
  journal={IEEE Robotics and Automation Letters},
  volume={7},
  number={2},
  pages={4702--4709},
  year={2022},
  publisher={IEEE}
}