Custom dataset #8
Hi, did you get your pipeline working? I have a task which looks similar to yours. I just need to feed a 3D mesh or point cloud to the network and get the labels back, that's all, but I'm currently searching for a way to do that with the existing model and code.
Hello mzillag. I have a new project starting very soon that will use this, so I'll be getting back to it and making another attempt at using custom data. If I run into anything I'll be sure to post it here.
Thanks for the answer. So you didn't find a way to test the model on an unlabeled point cloud from outside their dataset?
That's right, exactly. I was able to train and test fine with their dataset, but not do either with my own. Hopefully soon though! |
Hi, I'd like to try and make my own dataset of different buildings with parts other than the ones included with BuildingNet. Generating the 3D models (and point clouds) themselves shouldn't be a problem - it's everything else I'm not sure about.
For the minkowski pretrained features, should I simply download their code (https://github.com/NVIDIA/MinkowskiEngine) and run the models through it?
The required json files for each model used to train BuildingNet are fairly extensive (adjacency, containment, support, and similarity) and they don't seem especially trivial to recreate. The code for their creation doesn't appear to be in this repository, and I'm guessing it is part of the labeling application shown in the paper. Can the code for the labeling application be downloaded? I'm hoping in the end to output the label data automatically together with the 3D models when they are generated (that is, a synthetic training data generation pipeline) and being able to see some of that code would be a huge help.
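In case it helps anyone attempting the same thing: if your synthetic models already come with per-face component labels, a relation like adjacency can be derived from the mesh itself. A hypothetical sketch, assuming two components count as adjacent when their faces share a vertex (the actual criteria and json schema BuildingNet uses may well differ, so treat this only as a starting point):

```python
from collections import defaultdict

def component_adjacency(faces, face_labels):
    """Hypothetical adjacency extraction: two labeled components are
    marked adjacent when any of their faces share a vertex.
    faces: list of vertex-index triples; face_labels: component per face."""
    vert_to_comps = defaultdict(set)
    for face, comp in zip(faces, face_labels):
        for v in face:
            vert_to_comps[v].add(comp)
    adj = defaultdict(set)
    for comps in vert_to_comps.values():
        for a in comps:
            for b in comps:
                if a != b:
                    adj[a].add(b)
    # sorted lists make the result easy to dump to json
    return {c: sorted(n) for c, n in adj.items()}

# two triangles sharing an edge, belonging to components "wall" and "roof"
faces = [(0, 1, 2), (1, 2, 3)]
labels = ["wall", "roof"]
adjacency = component_adjacency(faces, labels)
# "wall" and "roof" come out mutually adjacent
```

Containment and support could presumably be derived similarly from bounding-box tests and vertical contact, but without the labeling application's source those remain guesses.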