investigate effects of masked pixels #214

Open · pmelchior opened this issue Oct 21, 2020 · 1 comment

@pmelchior (Owner)
Masked pixels arise from cosmic ray hits, artifact removal, saturation ...

The question we ask here is: do we prefer to have masked pixels interpolated upstream, or do we fix them by modeling in scarlet?

While we can solve for missing pixels in scarlet, especially when other bands at the same location are unmasked, there is a price to pay when convolution operations are in the forward pass. Mentally inverting the forward model, we apply a convolution (with a deconvolution kernel) to an image with random zeros in it (the masked pixels). This will cause artifacts because a single pixel isn't band-limited.
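To make this concrete, here is a minimal, self-contained sketch (not scarlet code; plain numpy with a hypothetical Gaussian PSF and a naive Fourier deconvolution) showing that a single zeroed pixel produces ringing that spreads well beyond the pixel itself:

```python
import numpy as np

def gaussian_psf(size, sigma):
    y, x = np.mgrid[:size, :size] - size // 2
    psf = np.exp(-(x * x + y * y) / (2 * sigma * sigma))
    return psf / psf.sum()

size = 64
psf = gaussian_psf(size, sigma=3.0)
otf = np.fft.rfft2(np.fft.ifftshift(psf))  # optical transfer function

# Band-limited scene: one bright source, rendered through the PSF
scene = np.zeros((size, size))
scene[32, 32] = 1000.0
observed = np.fft.irfft2(np.fft.rfft2(scene) * otf, s=scene.shape)

masked = observed.copy()
masked[34, 34] = 0.0  # a single masked pixel, set to zero

def naive_deconvolve(img, otf, eps=1e-3):
    # Divide by the OTF in Fourier space, with a floor to avoid division by ~0
    safe_otf = np.where(np.abs(otf) < eps, eps, otf)
    return np.fft.irfft2(np.fft.rfft2(img) / safe_otf, s=img.shape)

residual = naive_deconvolve(masked, otf) - naive_deconvolve(observed, otf)
affected = np.abs(residual) > 0.01 * np.abs(residual).max()
print("pixels affected beyond 1% of the peak residual:", affected.sum())
# The zeroed pixel acts like a (negative) delta function; a delta is not
# band-limited, so its deconvolution rings across a large part of the image.
```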

If the value of the masked pixel is close to zero, the masking will pass through the (de)convolution just fine. So truncation at the edge of postage stamps/LSST footprints should be fine, as should artifacts in regions dominated by sky. But saturation will be trouble.

We should investigate under what conditions we prefer upstream interpolation. This will depend on the shape of the masks and the flux level of the masked pixels.

I suspect that most analysis pipelines would prefer saturated cores of bright objects (mostly stars) to be filled in by, e.g., the properly scaled PSF model, as long as those pixels are flagged as suspect. They can then still be discarded if that's deemed better.

@pmelchior (Owner, Author) commented Oct 28, 2020

Suggestion from Robert was to take care of (some) masked pixels on our end with the following procedure:

  • keep the weights of masked pixels at their nominal (nonzero) value
  • fit the data, including the masked pixels
  • at every epoch: replace the values (of Observation.images) of the masked pixels with the rendered model

This is akin to gappy PCA and should converge as long as the masked region is not larger than the correlation length of the signal, which is at least of order the PSF width.

This can be done in Observation.render because the convolved model is available there. However, we'd have to store the mask image to determine which pixels to trust and which ones to replace.
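A minimal sketch of that replacement step (assuming a boolean `mask` array stored alongside the observation, True for masked pixels; `Observation.images` and `Observation.render` as referenced above, everything else hypothetical):

```python
import numpy as np

def replace_masked_pixels(observation, mask, model):
    """Gappy-PCA-style update: overwrite masked data pixels with the rendered model.

    observation : Observation with an `images` data cube and a `render` method
    mask        : boolean array, same shape as observation.images, True = masked
    model       : current model in the model frame
    """
    rendered = observation.render(model)       # convolved model in the data frame
    observation.images[mask] = rendered[mask]  # weights keep their nominal values

# Pseudocode for the fit loop, after each update of the sources:
#   for it in range(n_iter):
#       ... update sources ...
#       replace_masked_pixels(observation, mask, current_model)
# Convergence should hold as long as the masked regions are smaller than the
# correlation length of the signal, i.e. roughly the PSF width.
```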
