Usage questions & feedback #121
I think on each class where this is an issue, we can have a private class variable that lists the names of the fields that shouldn't be serialized if they are `None`.
Thanks for giving it a spin!
I'm afraid if the spec doesn't require it, then we won't automatically populate it, in line with our ethos of sticking rigidly to the spec. At least I think this is solved in version 0.5 of the spec now, where `version` is mandated?
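For what it's worth, if the class did auto-populate it, a minimal sketch of the obvious Pydantic approach would be a field default (this assumes the models are Pydantic models, as the rest of this thread suggests; the field name and type here are illustrative, not copied from the library):

```python
from typing import Literal

from pydantic import BaseModel


class Multiscale(BaseModel):
    # Illustrative only: defaulting the field to "0.4" would populate
    # `version` automatically while still allowing it to be overridden.
    version: Literal["0.4"] = "0.4"
```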
Ooh, this is a bug on our part. We claim to not be serialising fields set to `None`.
What error did you get? As far as I know it should be …
Yes please! We haven't got round to working out the best way of creating objects and then using them to write out metadata, so if you have done that, a PR to add it to the tutorial would be great!
With: …

I get: …
I see under "Known Issues" you have: "Any fields set to None in the Python objects are currently not written when they are saved back to the JSON metadata using this package." But it doesn't say how you should be saving the JSON metadata? Maybe this isn't the way to do it?:
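The snippet from this comment isn't preserved in the thread. As a stand-in, here is a minimal sketch of one plausible way the metadata might be written back, which would produce the `null` entries discussed below; it assumes Pydantic v2's `model_dump` and zarr-python's `group.attrs.put`, and the `write_attrs` helper is purely illustrative:

```python
import zarr
from pydantic import BaseModel


def write_attrs(model: BaseModel, path: str) -> None:
    """Write a Pydantic metadata model into a zarr group's attributes."""
    group = zarr.open_group(path, mode="w")
    # model_dump() keeps optional fields that are None, so they end up as
    # `null` entries in the JSON written to the group's attributes.
    group.attrs.put(model.model_dump())
```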
Ah right, the … I think when we wrote that we were expecting the model to be serialised using Pydantic, which I think you're doing using …
Maybe we should have our own utility method that serializes the model that way by default?
I think this is a great idea 👍 - does anyone want to have a go at implementing this? For now, you could use the …
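For illustration, a variation of the sketch above, assuming the option being hinted at is Pydantic v2's `exclude_none` (the truncated comment doesn't preserve the actual suggestion):

```python
import zarr
from pydantic import BaseModel


def write_attrs_without_none(model: BaseModel, path: str) -> None:
    group = zarr.open_group(path, mode="w")
    # exclude_none=True drops every field whose value is None, so optional
    # fields no longer appear as `null` in the written JSON metadata.
    group.attrs.put(model.model_dump(exclude_none=True))
```

The next comment instead suggests hard-coding, per model, which fields may be dropped.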
if you add a new method, then pydantic will not know about it, and your model will then not serialize correctly. This will impair embedding these models inside other pydantic data structures. I would recommend just copying exactly (or improving) what I did over in pydantic-ome-ngff, which was to hard-code for each model the fields that should not be serialized if they are `None`. Just using …
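A minimal sketch of that per-model approach, assuming Pydantic v2; the class and field names here are hypothetical and simplified, not copied from pydantic-ome-ngff or from this repository:

```python
from typing import Any, ClassVar

from pydantic import BaseModel, model_serializer


class Multiscale(BaseModel):
    # Hypothetical, simplified model used only for illustration.
    name: str | None = None
    type: str | None = None
    metadata: dict[str, Any] | None = None

    # Per-model list of fields that may be dropped when they are None.
    fields_to_skip_if_none: ClassVar[frozenset[str]] = frozenset({"name", "type", "metadata"})

    @model_serializer(mode="wrap")
    def _drop_unset_optionals(self, handler) -> dict[str, Any]:
        # Serialize normally, then drop only the listed fields, and only when
        # they are None; because this is a Pydantic serializer rather than a
        # plain method, it also applies when the model is nested inside
        # another Pydantic model.
        data = handler(self)
        return {k: v for k, v in data.items() if v is not None or k not in self.fields_to_skip_if_none}
```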
Could we just use the …?
yeah I missed this part of the issue -- users should never have to write code like this.
similarly, users don't need to manually create zarr arrays, or a zarr group.
@d-v-b Do you mean that …? So, in my example above, if you missed out the creation of …, how would that also handle the array creation (since you've not provided any data, shape, dtype etc.)?
There are utilities in …
@will-moore look at this example in the tests: `tests/v04/test_multiscales.py`, lines 442 to 448 (at commit 9c610fe) in ome-zarr-models-py.
I think this is the kind of API we should be aiming for.
That looks nice, yes. Otherwise everyone will have to write their own helpers to do that.
it's not possible to calculate the translations from the pixel size, because that requires knowledge about how the image was downsampled. I address this by using xarray for downsampling, and using bindings between xarray and ome-zarr to ensure that the translation information gets propagated.
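For concreteness, here is the arithmetic for one common case (this is general reasoning about windowed-mean downsampling, not code from this repository or from xarray): with repeated 2× mean downsampling, the first coarse pixel centre shifts by half of the previous level's pixel size at each step, whereas simple subsampling keeps the translation at 0, which is why the translation cannot be recovered from the pixel size alone.

```python
def scale_and_translation(level: int, base_pixel_size: float, factor: int = 2) -> tuple[float, float]:
    """Scale and translation (world units, per axis) for a pyramid level produced
    by repeated windowed-mean downsampling, relative to level 0 at translation 0."""
    scale = base_pixel_size * factor**level
    # Each mean-downsampling step shifts the first pixel centre by half of the
    # difference between the new and old pixel sizes; accumulated over levels
    # this gives (scale - base_pixel_size) / 2.
    translation = (scale - base_pixel_size) / 2
    return scale, translation


# Example: level 2 of a pyramid with 0.5 µm pixels at full resolution.
print(scale_and_translation(2, 0.5))  # (2.0, 0.75)
```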
My reluctance here was having to re-implement the interface/function signature in … How about a model where you create your metadata without any arrays, and then add arrays to it, e.g. something like:

    import zarr

    from ome_zarr_models.v04.axes import Axis
    from ome_zarr_models.v04.image import Image, ImageAttrs
    from ome_zarr_models.v04.multiscales import Multiscale

    multiscale = Multiscale(axes=(Axis(name="x"),), datasets=())
    attrs = ImageAttrs(multiscales=[multiscale])
    image = Image(attributes=attrs)
    image.to_zarr(...)  # This just writes out groups, not arrays
    image.add_multiscale_array(path=..., arr=zarr.empty(...), transform=...)

I haven't thought whether this would work for other groups (e.g., HCS), but it might be a way forward without us having to re-implement function signatures from …
Oh actually this is just a variation of #121 (comment)... so I think we agree that something like #121 (comment) or my comment above would be useful - would anyone be up for putting together a draft PR for how this could work for …?
I'm testing the generation of OME-Zarr metadata with this script (then checking the output in ome-ngff-validator)...
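The script itself isn't preserved here; below is a rough stand-in for the kind of script being described. The `Axis`/`Multiscale`/`ImageAttrs` imports mirror ones quoted elsewhere in this thread, but the `Dataset` import, the transform structure, and the individual field names are assumptions rather than copies of the original script.

```python
import zarr

from ome_zarr_models.v04.axes import Axis
from ome_zarr_models.v04.image import ImageAttrs
from ome_zarr_models.v04.multiscales import Dataset, Multiscale  # Dataset location assumed

# Two spatial axes (the `type` keyword is an assumption).
axes = (Axis(name="y", type="space"), Axis(name="x", type="space"))

# A single pyramid level; the transform layout here is an assumption.
datasets = (
    Dataset(
        path="0",
        coordinateTransformations=({"type": "scale", "scale": [1.0, 1.0]},),
    ),
)

# `version` has to be passed explicitly, which is what the first question
# below is about.
multiscale = Multiscale(axes=axes, datasets=datasets, version="0.4")
attrs = ImageAttrs(multiscales=[multiscale])

# Write the attributes into a local zarr group, then open that group in
# ome-ngff-validator to inspect the generated metadata.
group = zarr.open_group("test.zarr", mode="w")
group.attrs.put(attrs.model_dump())
```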
A couple of questions:

1. I had to add `version="0.4"` to the `Multiscale`. Although the spec doesn't require it (the level is SHOULD), leaving it out causes issues in the validator as we don't know which schemas to load etc. Since the class is `ome_zarr_models.v04.multiscales.Multiscale`, I wonder if it should automatically populate the version in this case?
2. `/multiscales/0/name` is not a string (since it is `null` in the JSON). Although it is optional, if it's present it must be a string. `multiscale.type` and `multiscale.metadata` are also `null`. Maybe there's a better way to populate the `zarr.attrs` from the model that doesn't include `null` items?
3. I needed to use `ImageAttrs`; I guessed `Image` first and got some strange error.

Cheers