@ShibiHe
Hi, thanks for your great paper, and sorry to bother you.
In the paper, the upper and lower bounds are incorporated into the algorithm via quadratic penalties, but I cannot find the implementation corresponding to these two penalties.
It seems that the loss function is defined in the __init__ method of the DeepQLearner class, and no penalties are added there.
The main differences from the original DQN code appear in the _do_training function of the OptimalityTightening class, but I am not sure what the targets1 variable means, or how this implementation realizes the two quadratic penalties from the paper.
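For reference, my reading of the paper's objective is that the TD error is augmented with two one-sided quadratic terms, one firing when Q drops below the lower bound and one when it exceeds the upper bound. A minimal sketch of that per-sample loss (all names here are my own, not the repo's, and lam is the penalty coefficient lambda from the paper):

```python
def optimality_tightening_loss(q, target, lower_bound, upper_bound, lam=4.0):
    """Per-sample penalized objective as I understand it from the paper.

    q           -- Q(s, a) predicted by the network
    target      -- the usual one-step bootstrap target y
    lower_bound -- best lower bound L computed from future rewards
    upper_bound -- best upper bound U computed from past rewards
    lam         -- penalty coefficient (value here is an assumption)
    """
    td_error = (q - target) ** 2
    # penalty only when Q falls below the lower bound
    lower_violation = max(lower_bound - q, 0.0) ** 2
    # penalty only when Q rises above the upper bound
    upper_violation = max(q - upper_bound, 0.0) ** 2
    return td_error + lam * (lower_violation + upper_violation)
```

If that reading is right, I would expect the implementation to compute something equivalent to these two hinge-squared terms somewhere, which is why targets1 confused me.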
Please correct me if I'm wrong, and thank you very much!!