
How to export OBJ files instead of the video? #3

Open
hungdche opened this issue Oct 27, 2023 · 7 comments

@hungdche commented Oct 27, 2023

Thanks for such awesome work! How could I export the OBJ files of the human meshes? I notice there is this function: extract_mesh_with_marching_cube. Can I use it in the gen_multiview script for mesh extraction?

@Xuanmeng-Zhang (Collaborator)

Yes, I think we can extract the textureless mesh with the marching cube function.
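
For reference, here is a minimal sketch of the general marching-cubes-to-OBJ flow, independent of the repo's own extract_mesh_with_marching_cube helper. It assumes you can sample the generator's density/SDF values on a regular 3D grid (density_grid is a placeholder, not something the repo provides under that name) and uses scikit-image and trimesh for the extraction and export:

```python
import numpy as np
import trimesh
from skimage import measure

# Placeholder: a (res, res, res) grid of density/SDF values sampled from the
# pretrained generator. How you obtain this grid depends on the repo's code.
density_grid = np.load("density_grid.npy")

# Extract the iso-surface (level=0.0 for an SDF; pick a density threshold otherwise).
verts, faces, normals, _ = measure.marching_cubes(density_grid, level=0.0)

# Export a textureless OBJ.
mesh = trimesh.Trimesh(vertices=verts, faces=faces, vertex_normals=normals)
mesh.export("human_mesh.obj")
```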

@hungdche (Author)

Thank you. One more question: the provided inference command requires the dataset. Can I run inference without providing the dataset?

@Xuanmeng-Zhang (Collaborator)

Yes, during inference, you can define your own SMPL parameters to generate the corresponding human.
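
For context (not repo-specific): "SMPL parameters" usually means a 72-dimensional axis-angle pose vector (global orientation plus 23 body joints) and a 10-dimensional shape vector, so defining your own could look like the sketch below. Variable names are illustrative only; how GETAvatar consumes these values is defined in its own dataset/label code.

```python
import numpy as np

# Standard SMPL parameter sizes.
smpl_pose = np.zeros(72, dtype=np.float32)   # axis-angle: 3 for global orient + 23 joints * 3
smpl_shape = np.zeros(10, dtype=np.float32)  # shape coefficients (betas)

# Example: bend the left elbow slightly. Joint 18 is the left elbow in the
# standard SMPL joint ordering (worth double-checking against the SMPL docs).
smpl_pose[18 * 3 + 1] = 0.8
```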

@hungdche (Author) commented Oct 30, 2023

If you could guide me on how to do that, or give a few pointers, I would greatly appreciate it!

From the look of it, this function takes the dataset and generates a dictionary with the dataset path.
(screenshot of the function)

gen_interp_video also seems to require the dataset.
(screenshot of gen_interp_video)

My goal is just to extract a textureless OBJ mesh of a human pose from the pretrained model, without needing a dataset. Thank you so much in advance!

@Xuanmeng-Zhang (Collaborator) commented Nov 2, 2023

We use the dataset to load the labels here, so you can bypass those steps and directly define the labels here.
The label info is defined here.
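
To make that suggestion concrete, here is a hedged sketch of what "directly defining the labels" could look like, assuming the conditioning label concatenates flattened camera extrinsics, intrinsics, and the SMPL parameters. The sizes, ordering, and variable names below are guesses; the authoritative layout is in the label definition linked above.

```python
import numpy as np
import torch

# Guessed label layout: [cam2world (16), intrinsics (9), smpl_pose (72), smpl_shape (10)].
# Verify the real ordering and sizes against the label definition linked above.
cam2world = np.eye(4, dtype=np.float32).reshape(-1)      # camera extrinsics, flattened 4x4
intrinsics = np.array([2.0, 0.0, 0.5,
                       0.0, 2.0, 0.5,
                       0.0, 0.0, 1.0], dtype=np.float32)  # flattened normalized intrinsics
smpl_pose = np.zeros(72, dtype=np.float32)                # axis-angle body pose
smpl_shape = np.zeros(10, dtype=np.float32)               # shape betas

label = np.concatenate([cam2world, intrinsics, smpl_pose, smpl_shape])
c = torch.from_numpy(label).unsqueeze(0)  # (1, label_dim) conditioning vector

# c would then stand in for the label normally read from the dataset, e.g. when
# calling the pretrained generator from the inference script (G and z come from there):
# img = G(z, c)
```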

@hungdche (Author)

Hi @Xuanmeng-Zhang, sorry for reopening this. The only animation I want to use is walking, so I'm thinking of getting the walking SMPL from here. But I have no idea how to convert that SMPL to the one defined here. My ultimate goal is to generate a 3D human mesh with a walking animation. I hope you can help me out a bit. Thank you so much!

@mayank64ce

@hungdche I think I did manage to do that. If I remember correctly, there was some way to render the output of the motion diffusion model using the human_body_prior library or something. It produced a .npz file. I took the 72 parameters from each frame, put them in the smpl_array attribute just like the mocap data GETAvatar uses, and saved it in .pkl format.

Then you can load it like a regular animation sequence as mentioned in their readme. Although I could not stop the camera from rotating around the subject, it did perform the motion.
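
For anyone trying the same route, a rough sketch of that conversion is below. The .npz key name ("poses") and the array shape are assumptions about the human_body_prior/MDM output, so inspect your file first; and only smpl_array is filled in, mirroring the field named above, whereas a real GETAvatar mocap .pkl may contain additional fields.

```python
import pickle
import numpy as np

# Load the .npz produced when rendering the motion-diffusion output.
# The key name "poses" is a guess -- print(data.files) to see what is really inside.
data = np.load("walking_motion.npz")
poses = data["poses"].reshape(-1, 72).astype(np.float32)  # (num_frames, 72) axis-angle params

# Pack the per-frame 72 parameters into the same structure as the mocap data
# GETAvatar loads (field name taken from the comment above; compare against an
# actual mocap .pkl from the repo to confirm).
mocap = {"smpl_array": poses}

with open("walking_motion.pkl", "wb") as f:
    pickle.dump(mocap, f)
```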
