Fix for LogDensityFunction
#621
Conversation
It looks like a good one for @mhauru to review.
I was just today reading through DynamicPPL and noticed that […]. Do I understand correctly that functionally the changes here are equivalent to changing […] to […] and the rest, introducing the […]?
I think we should unify these contexts eventually, although not necessarily in this PR. I lean towards contextualising a model before passing it to `evaluate!!`:

```julia
# check for invalid context composition; note that `contextualising!!` could be called more than once
model_with_context = contextualising!!(model, context)
res = evaluate!!(rng, model_with_context, ...) # remove context argument here
```

If a model is conditioned, when we contextualise it again, it can throw an error in cases where context composition is invalid. This is probably the same as @torfjelde's idea above, removing the `context` argument from `evaluate!!`.
It's not so much about "not storing redundant data", but rather about "lazily" resolving the context in the case of […].
We already have this:

```julia
evaluate!!(contextualize(model, context), varinfo)
```

But, as I said above, this isn't so easy because we use explicit context-passing in quite a few places beyond […].
Is there any difficulty other than finding all the places and then updating them?
It's a question of what you do with methods such as […].
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Something weird is happening with Documenter.jl here? Seems like everything is missing some whitespace o.O
Pull Request Test Coverage Report for Build 9560893923
💛 - Coveralls
This should be ready.
The integration test fails because Turing.jl has code that tries to access the […].

One solution to this would be to go and fix all the dependants to use […].

Another option would be to override […].

Relevant Julia style guide page: https://docs.julialang.org/en/v1/manual/style-guide/#Prefer-exported-methods-over-direct-field-access

I lean towards the former solution. Other thoughts?
@mhauru, can you create a PR for Turing that adopts the suggestion you propose above?

For packages without DynamicPPL bounds, that's unfortunate; maybe this is the opportunity for such bounds to be added. However, I am not aware of any package depending on DynamicPPL without an explicit version bound.

Also, does that mean this PR can be merged as a breaking release?
Yep, we can make this a breaking release and be fine. I'll make the Turing.jl PR tomorrow.
Co-authored-by: Hong Ge <[email protected]>
Pull Request Test Coverage Report for Build 9666211295
💛 - Coveralls
Thanks for getting this through!
Agree that this would have been overkill :)
Issue

When evaluating a `Model`, there are two "sources" of contexts provided: 1) explicitly passed to `evaluate!!` as an argument, and 2) through the context attached to the model itself in `model.context`.

The latter was introduced because in many scenarios it makes sense to "contextualize" a `Model`, e.g. attach a `ConditionContext` to a `Model` to specify which parameters are considered `conditioned`. The former is present because, back in the day, the samplers were heavily tied to DynamicPPL and we passed a `sampler` argument in almost every place where we now pass `context`.

To "bridge" the two approaches, when we call `evaluate!!`, the process of "resolving" the context that eventually ends up as `__context__` in the model itself occurs here:

DynamicPPL.jl/src/model.jl, lines 995 to 997 in d384da2
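The permalinked snippet is not expanded on this page; based on the two-step description that follows, it amounts to something like the composition below. `setleafcontext` and `leafcontext` are real DynamicPPL functions, but the exact expression here is a paraphrase, not the verbatim source:

```julia
# Paraphrase of the referenced lines (not the verbatim source): splice the
# non-leaf part of `model.context` between the explicitly passed `context`
# and its leaf context.
context_new = setleafcontext(
    context, setleafcontext(model.context, leafcontext(context))
)
```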
In short, we do:

1. Replace the leaf context of `model.context` with the leaf context provided by the `context` argument.
2. Set the leaf context of the `context` argument to the context resulting from (1).

This might be a bit strange, but the result is that `context` takes precedence over `model.context`, as it's considered to be "more important" due to the "user" explicitly passing it to `evaluate!!`.

We did this because some samplers were using contexts to specify certain behaviors that had to be respected, e.g. `context` could be a `PriorContext` to indicate that the prior should be evaluated while `model.context` could be a `DefaultContext`, in which case we wanted the result to be `PriorContext`.

This also means that `LogDensityFunction`, effectively a convenient wrapper around `evaluate!!`, also has two sources of contexts: `f.model.context` and `f.context`. By default, i.e. if we call `LogDensityFunction(model)`, we specify these to be the same, i.e. `f.context === f.model.context`. This is clearly very redundant, since we're just specifying the same context twice. Moreover, since, as seen above, we effectively concatenate `context` and `model.context`, this results in `LogDensityFunction` evaluating the model with the context "doubled". In most cases this still results in the intended behavior, but once you start changing certain fields of the `LogDensityFunction`, e.g. `LogDensityFunction(model_new, f.varinfo, f.context)`, interesting things can happen. For example, in TuringLang/Turing.jl#2231, I ran into an issue where I'd get two `ConditionContext`s conditioning the same variable: one from `f.model.context` (what I intended) and one from `f.context` (what I did not intend).

Solution
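To make the "doubling" concrete, here is a self-contained toy sketch. These are invented stand-in types (`Leaf`, `Condition`, `resolve`), not DynamicPPL's actual implementation; they only mimic the splicing described above:

```julia
# Toy stand-ins for DynamicPPL's context types; names are invented for illustration.
abstract type AbstractCtx end
struct Leaf <: AbstractCtx end
struct Condition <: AbstractCtx
    vals::NamedTuple
    child::AbstractCtx
end

# Mimics the resolution in `evaluate!!`: splice `model_ctx` in at the leaf
# of the explicitly passed `ctx`, so `ctx` wraps (takes precedence over) it.
resolve(::Leaf, model_ctx::AbstractCtx) = model_ctx
resolve(ctx::Condition, model_ctx::AbstractCtx) =
    Condition(ctx.vals, resolve(ctx.child, model_ctx))

# If `f.context === f.model.context`, the same conditioning appears twice:
model_ctx = Condition((x = 0,), Leaf())
resolved = resolve(model_ctx, model_ctx)
# resolved is Condition((x = 0,), Condition((x = 0,), Leaf()))
```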
This PR addresses this issue for `LogDensityFunction` by simply allowing `nothing` in `f.context`, which is resolved to `leafcontext(model.context)` if not specified. This addresses the issues I've encountered above, but the proper way of fixing this would, IMO, be to either:

1. Allow `nothing` to be passed in place of `context` "everywhere", i.e. we make them all `Optional{AbstractContext} = Union{Nothing,AbstractContext}` types, and resolve to `model.context` whenever it's `nothing`.
2. Remove the `context` argument from `evaluate!!` completely and instead always just "attach" the `context` to the `Model`. This seems "nicer" overall, but will require quite a bit of work both here and on the Turing.jl side + it's not quite clear to me that this will indeed quite work (`context` is used in many other places than just in `evaluate!!`, e.g. `unflatten`, to allow samplers in `SamplingContext` to change behaviors further).

Addendum
This entire PR arose from the following scenario:

[…]

A `ConditionContext` is such that the "outermost" one takes precedence (since this is the one which was applied last), but in the above scenario this is not respected, since we end up using the `ConditionContext(x = 0, ...)` from `f.context` instead of the outermost one from `f.model.context`.
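The original snippet for the scenario did not survive on this page. As a purely hypothetical reconstruction of its shape (the `demo` model and the specific calls are my guesses; `condition`, `@model`, and the `LogDensityFunction(model, varinfo, context)` constructor are real API mentioned above):

```julia
# Hypothetical reconstruction; the original snippet is not preserved above.
using DynamicPPL, Distributions

@model function demo()
    x ~ Normal()
end

# Condition once: `model.context` now carries `ConditionContext(x = 0, ...)`.
model = condition(demo(); x = 0)
f = DynamicPPL.LogDensityFunction(model)  # here `f.context === f.model.context`

# Re-condition: the new, outermost `ConditionContext(x = 1, ...)` should win...
model_new = condition(f.model; x = 1)
f_new = DynamicPPL.LogDensityFunction(model_new, f.varinfo, f.context)
# ...but the stale `f.context` still carries `ConditionContext(x = 0, ...)`,
# which takes precedence during evaluation, so `x = 0` is used instead.
```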