Emotion_walk_with_videopose3d

2D pose estimation uses YOLOv8 for person detection and RTMPose for keypoint estimation; only ONNX inference is supported.

3D pose lifting uses VideoPose3D; only PyTorch inference is supported.

Gait emotion recognition uses an ST-GCN classifier; only PyTorch inference is supported.
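Putting the three stages together, the processing flow looks roughly like the sketch below. This is a minimal sketch, not the repository's actual API: apart from TopDownEstimation, the module paths, class names, file names, and call signatures are hypothetical placeholders.

```python
# Minimal end-to-end sketch. Only TopDownEstimation is named in this README;
# the other class names, module paths, and method signatures are hypothetical
# placeholders for the repository's actual code. Assumes OpenCV is installed
# for video decoding.
import cv2

from pose2d import TopDownEstimation                # hypothetical module path
from videopose3d_wrapper import VideoPose3DLifter   # hypothetical wrapper
from emotion_walk_wrapper import GaitEmotionSTGCN   # hypothetical wrapper

pose2d = TopDownEstimation(det_model="pose/yolov8.onnx",   # assumed paths
                           pose_model="pose/rtmpose.onnx")
lifter = VideoPose3DLifter("videopose3d/checkpoint/pretrained_h36m_detectron_coco.bin")
classifier = GaitEmotionSTGCN("emotion_walk/weights")      # assumed weights dir

cap = cv2.VideoCapture("walk.mp4")
keypoints_2d = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    keypoints_2d.append(pose2d(frame))   # per-frame 2D keypoints (ONNX)
cap.release()

keypoints_3d = lifter(keypoints_2d)      # lift the 2D sequence to 3D (PyTorch)
emotion = classifier(keypoints_3d)       # classify emotion from the 3D gait
print(emotion)
```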

Requirements

pytorch==2.1.0 onnxruntime==1.18.0
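A quick way to confirm the pinned versions are installed:

```python
# Check that the pinned PyTorch and ONNX Runtime versions are installed.
import torch
import onnxruntime

print("torch:", torch.__version__)               # expected 2.1.0
print("onnxruntime:", onnxruntime.__version__)   # expected 1.18.0
print("available ORT providers:", onnxruntime.get_available_providers())
```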

Model download

You can find the RTMPose and YOLOv8 ONNX models at https://drive.google.com/drive/folders/1DfTw0aEpuEyXpo7XJXIvCzDTuZ-wNOy8?usp=drive_link (in the pose folder). You can put them anywhere, but you must pass the corresponding paths when initializing the TopDownEstimation class.
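For example, if the ONNX files are saved under a local pose/ directory, initialization might look like the sketch below. Only the class name TopDownEstimation comes from this README; the module path, argument names, and file names are assumptions.

```python
# Hypothetical initialization sketch: the module path, argument names, and ONNX
# file names are assumptions; adapt them to the actual TopDownEstimation
# signature and to wherever you saved the downloaded models.
from pose2d import TopDownEstimation  # hypothetical module path

estimator = TopDownEstimation(
    det_model="pose/yolov8n.onnx",     # assumed YOLOv8 detector file name
    pose_model="pose/rtmpose-m.onnx",  # assumed RTMPose file name
)
```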

The 3D pose model can be found at https://github.com/facebookresearch/VideoPose3D. Please use the COCO model 'pretrained_h36m_detectron_coco.bin' and put it in videopose3d/checkpoint.
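A quick sanity check that the checkpoint is in place and loadable (the key listing is just for inspection; the exact dictionary layout is defined by VideoPose3D, not by this README):

```python
# Verify the VideoPose3D checkpoint is where the code expects it and loads.
import torch

ckpt_path = "videopose3d/checkpoint/pretrained_h36m_detectron_coco.bin"
checkpoint = torch.load(ckpt_path, map_location="cpu")
print(type(checkpoint))
print(list(checkpoint.keys()) if isinstance(checkpoint, dict) else "raw state dict")
```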

The emotion model can be found at https://github.com/PeterZs/take_an_emotion_walk. Put it in emotion_walk/weights.
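You can run a similar check on the emotion weights. Since this README does not name the weight file, the sketch below simply tries to load whatever is in the weights directory:

```python
# Verify the emotion-recognition weights load with PyTorch. No specific file
# name is given in this README, so iterate over the weights directory.
import os
import torch

weights_dir = "emotion_walk/weights"
for name in os.listdir(weights_dir):
    path = os.path.join(weights_dir, name)
    state = torch.load(path, map_location="cpu")
    print(name, "->", len(state) if hasattr(state, "__len__") else type(state))
```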

About

Recognize emotion from a gait video.
