Dataset used for training #156

Open · Diksha-Moolchandani opened this issue Nov 11, 2020 · 2 comments

Diksha-Moolchandani commented Nov 11, 2020

What is the dataset used for training the NVsmall, NVTiny, and ResNet models present in the stereoDNN/models folder?

Alexey-Kamenev (Collaborator) commented:

For training and validation we used the KITTI dataset. We used the Stereo 2015 benchmark dataset only for evaluation, i.e. we did not train/fine-tune the models on that dataset (it's too small anyway, 200 samples AFAIR).

Diksha-Moolchandani (Author) commented Nov 12, 2020

What is the significance of the input size for these models? The images in the KITTI 2015 training set are of size 375×1242 (height × width), while the input size specified for ResNet-18_2D is 513×257. How am I supposed to run it?
./nvstereo resnet18_2D 1242 375 stereoDNN/models/ResNet-18_2D/TensorRT/trt_weights.bin left_image_path right_image_path ./disp

or

./nvstereo resnet18_2D 513 257 stereoDNN/models/ResNet-18_2D/TensorRT/trt_weights.bin left_image_path right_image_path ./disp

In any case, the ground truth will be 1242×375, so how will I compute the error if I use the second command for inference?
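
For what it's worth, a common approach (not confirmed by the maintainers in this thread) is to resize the stereo pair down to the model's input size for inference, then resize the predicted disparity map back up to the ground-truth resolution and scale the disparity values by the width ratio before computing the error. A minimal sketch with OpenCV and NumPy follows; all file paths are placeholders, as is the assumption that the network output can be loaded as a raw float32 array:

```python
import cv2
import numpy as np

MODEL_W, MODEL_H = 513, 257   # ResNet-18_2D input size (width, height)
GT_W, GT_H = 1242, 375        # KITTI 2015 image size (width, height)

# Pre-processing: resize the stereo pair to the model's input size.
# "left.png" / "right.png" are placeholder paths.
left = cv2.imread("left.png")
right = cv2.imread("right.png")
left_in = cv2.resize(left, (MODEL_W, MODEL_H), interpolation=cv2.INTER_AREA)
right_in = cv2.resize(right, (MODEL_W, MODEL_H), interpolation=cv2.INTER_AREA)

# ... run inference at 513x257 here; assume the predicted disparity is
# available as a float32 array of shape (MODEL_H, MODEL_W). "disp.bin"
# is a placeholder for however the output is stored.
disp_small = np.fromfile("disp.bin", dtype=np.float32).reshape(MODEL_H, MODEL_W)

# Post-processing: resize the disparity map back to ground-truth
# resolution. Disparity is a horizontal pixel offset, so its values must
# also be scaled by the width ratio.
disp_full = cv2.resize(disp_small, (GT_W, GT_H), interpolation=cv2.INTER_LINEAR)
disp_full *= GT_W / MODEL_W

# Error against the 1242x375 ground truth. KITTI stores disparity as a
# uint16 PNG scaled by 256, with 0 marking invalid pixels.
gt = cv2.imread("gt_disp.png", cv2.IMREAD_UNCHANGED).astype(np.float32) / 256.0
valid = gt > 0
err = np.abs(disp_full - gt)[valid]

# D1-all: fraction of valid pixels whose error exceeds both 3 px and 5%
# of the true disparity (the standard KITTI 2015 metric).
d1_all = np.mean((err > 3.0) & (err > 0.05 * gt[valid]))
print(f"D1-all error: {d1_all:.4f}")
```

The value scaling matters because a disparity of d pixels at 513-wide resolution corresponds to an offset of d × (1242/513) pixels at full resolution.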
