tfdsv4 fix carla configs #1788
Conversation
Marking WIP since I still need to add the dataset's .py file.
The only change between …
armory/datasets/adversarial/carla_obj_det_dev/carla_obj_det_dev.py
Ready for re-review @davidslater. Some of the preprocessing code this PR touches is being modified in #1797, but I think we can merge this in and make sure to test carla and xview when reviewing the latter PR.
@lcadalzo can you resolve the merge conflict?
@davidslater ready to go. The commits from today are just renaming a function and removing code that got duplicated in the handling of the merge conflict. I've tested both … Edit: the tide error is occurring on this branch but not …
The error may be due to changes in develop that happened after we branched to tfdsv4.
Meaning there have been fixes in …?
No, on second look, those occurred before develop was merged into tfdsv4, so I don't know.
Ah ok, the issue is occurring because, similar to this issue, the tfdsv4 pipeline outputs labels with a batch dimension, e.g. …
whereas the code is expecting …
The reason mAP doesn't fail with a similar error is because of these lines. I think this PR should be fine to merge, and we'll need to open a separate issue to tweak the tide calculation to be compatible with this form, since the tf preprocessing will always output with a batch dimension.
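For illustration, here is a minimal sketch of the kind of batch-dimension squeeze being discussed. The helper name and all shapes are invented for this example; this is not armory's actual code:

```python
import numpy as np

def maybe_unbatch(y):
    """Drop a leading batch dimension of size 1 from each label array.

    Hypothetical helper (not part of armory); it illustrates converting
    batched labels like {'boxes': (1, N, 4)} into the unbatched
    {'boxes': (N, 4)} form that per-image metric code expects.
    """
    return {
        k: v[0] if isinstance(v, np.ndarray) and v.ndim >= 2 and v.shape[0] == 1 else v
        for k, v in y.items()
    }

# Shapes are illustrative only:
y_batched = {
    "boxes": np.zeros((1, 15, 4)),
    "labels": np.zeros((1, 15), dtype=np.int64),
    "area": np.zeros((1, 15)),
}
y = maybe_unbatch(y_batched)
print(y["boxes"].shape)  # (15, 4)
```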
Throwing error when running …:

```
File "/workspace/armory/datasets/download.py", line 70, in download
    download_file_from_s3(str(key), str(filepath), public=public)
    │ │ │ └ True
    │ │ └ PosixPath('/armory/datasets/new_builds/cache/carla_obj_det_dev__2.0.0__cache.tar.gz')
    │ └ 'datasets/cache/carla_obj_det_dev__2.0.0__cache.tar.gz'
    └ <function download_file_from_s3 at 0x7f8c2f2b2c10>
File "/workspace/armory/datasets/download.py", line 38, in download_file_from_s3
    raise KeyError(f"File {key} not available in {bucket} bucket.")
KeyError: 'File datasets/cache/carla_obj_det_dev__2.0.0__cache.tar.gz not available in armory-public-data bucket.'
```
Convention update.
armory/datasets/adversarial/carla_obj_det_dev/carla_obj_det_dev.py
armory/datasets/adversarial/carla_obj_det_dev/carla_obj_det_dev.py
@jprokos26 try again; it looks like the upload to …
Now throwing an error at end of run using …:

```
File "/workspace/armory/metrics/task.py", line 940, in object_detection_mAP_tide
    tidecv_ground_truth_list = armory_to_tide(y, image_id, is_detection=False)
    │ │ └ array([69734696, 69734696, 69734696, 69734696, 69734696, 69734696,
    │ │ 69734696, 69734696, 69734696, 69734696, 69734696, 6...
    │ └ {'area': array([[1869, 3543, 120, 107, 145, 102, 116, 130, 139, 149, 122,
    │ 154, 3893, 7951, 172]]), 'boxes'...
    └ <function armory_to_tide at 0x7fb2c23231f0>
File "/workspace/armory/metrics/task.py", line 882, in armory_to_tide
    x1, y1, x2, y2 = y["boxes"]
    └ {'area': array([1869, 3543, 120, 107, 145, 102, 116, 130, 139, 149, 122,
      154, 3893, 7951, 172]), 'boxes': a...
ValueError: too many values to unpack (expected 4)
```
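A minimal reproduction of this unpacking failure (the coordinate values are invented; only the shapes matter):

```python
import numpy as np

# A single unbatched box of shape (4,) unpacks cleanly:
box = np.array([10.0, 20.0, 30.0, 40.0])
x1, y1, x2, y2 = box

# With the extra leading axis, "boxes" holds many rows, and the same
# unpacking raises the ValueError seen in the traceback above, because
# iterating a 2-D array yields one row per step:
boxes = np.tile(box, (15, 1))  # shape (15, 4)
try:
    x1, y1, x2, y2 = boxes
except ValueError as err:
    print(err)  # too many values to unpack (expected 4)
```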
@jprokos26 see this comment and the ones above it. I still think there's a decision to be made here that applies to various datasets: do we modify scenario code to remove the batch dimension, or do we update metrics to expect the batch dimension. I'm deferring on this decision for now and not including a fix for that in this PR (we're tracking via #1814) |
Gotcha; then I agree this PR is ready to be merged if that issue is being tracked, since the dataset is now loading as expected.
Note that the carla attacks don't currently work with the new tfdsv4 pipeline. This is because the label keys now have a batch dimension. E.g. the old format that works with the attacks is …
and the new: …
I'm not intending to address that in this PR, so I tested with --skip-attack.
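To make the contrast concrete, a hedged sketch of the two label formats (field names and shapes are illustrative; they are not taken from the actual CARLA labels):

```python
import numpy as np

# Old, unbatched format the attacks were written against:
y_old = {"boxes": np.zeros((3, 4)), "labels": np.zeros(3, dtype=np.int64)}

# New tfdsv4 format with a leading batch dimension of size 1:
y_new = {"boxes": np.zeros((1, 3, 4)), "labels": np.zeros((1, 3), dtype=np.int64)}

# One possible shim (purely illustrative): index away the batch axis
# before handing labels to an attack that expects the old format.
y_shim = {k: v[0] for k, v in y_new.items()}
assert y_shim["boxes"].shape == y_old["boxes"].shape  # (3, 4)
```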