Infill a sentence as a continuation of a conditioning token? #6
Comments
Sorry for the delayed response. I don't think this would be a huge job. All you would have to do is create a new mask function which performs exactly this masking task: preserving the first word of a sentence while masking out the rest. For a given sentence, you would want to produce a training example in which the first word stays in the visible context and the remainder of the sentence becomes the infilling target.
Does this help?
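A minimal sketch of what such a mask function might look like. The function names and the special-token strings (`<|blank|>`, `<|sep|>`, `<|ans|>`) are illustrative placeholders rather than the tokens or mask API actually used by the ilm codebase; this just shows the idea of keeping a sentence's first word in the context and turning the rest into the answer span.

```python
# Hypothetical sketch: given a document and the character span of one sentence,
# keep the sentence's first word visible, replace the remainder with a blank
# token, and build an ILM-style training string "context <|sep|> answer <|ans|>".
# Token strings and function names are illustrative, not the repo's actual ones.

def mask_sentence_after_first_word(doc, sent_start, sent_len):
    """Return (masked_doc, answer): the target sentence keeps its first word,
    and the rest of the sentence becomes the infilling answer."""
    sentence = doc[sent_start:sent_start + sent_len]
    first_word, _, rest = sentence.partition(" ")
    masked_sentence = first_word + " <|blank|>"
    masked_doc = doc[:sent_start] + masked_sentence + doc[sent_start + sent_len:]
    return masked_doc, rest

def to_training_example(doc, sent_start, sent_len):
    masked_doc, answer = mask_sentence_after_first_word(doc, sent_start, sent_len)
    return masked_doc + " <|sep|> " + answer + " <|ans|>"

doc = "It was raining. However the game went ahead. Nobody minded."
sent_start = doc.index("However")
sent_len = len("However the game went ahead.")
print(to_training_example(doc, sent_start, sent_len))
# It was raining. However <|blank|> Nobody minded. <|sep|> the game went ahead. <|ans|>
```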
Ah, right—a custom mask function... makes sense. I can't dig into this for another week or so, but will give it a go when I'm back on this task. Thanks! This is a very cool tweak on applied Transformers, btw! 👍
Thank you! Also, I just thought about this again and realized that you absolutely don't need a custom mask function for this. You just need to train a standard ILM to perform sentence infilling, and then condition the generation on the initial word: pass the model the masked context followed by the word you want the infilled sentence to start with, and let it generate the continuation from there.
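A rough sketch of that decode-time trick, assuming a GPT-2-based sentence-infilling ILM loaded through Hugging Face transformers. The checkpoint path and the special-token strings (`<|blank|>`, `<|sep|>`) are placeholders, not the repo's actual names; the point is only that the conditioning word is appended after the separator so the model must continue the infilled sentence from it.

```python
# Hypothetical sketch: condition a trained sentence-infilling model on an
# initial word at generation time. Paths and special tokens are placeholders.

from transformers import GPT2LMHeadModel, GPT2TokenizerFast

model = GPT2LMHeadModel.from_pretrained("path/to/trained-ilm")       # placeholder path
tokenizer = GPT2TokenizerFast.from_pretrained("path/to/trained-ilm")  # placeholder path

context = "It was raining. <|blank|> Nobody minded."
conditioning_word = "However"

# Start the "answer" with the conditioning word; the model continues from it.
prompt = context + " <|sep|> " + conditioning_word
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

output_ids = model.generate(
    input_ids,
    do_sample=True,
    top_p=0.9,
    max_new_tokens=40,
    pad_token_id=tokenizer.eos_token_id,
)
# Decode only the newly generated tokens (the rest of the infilled sentence).
print(conditioning_word + tokenizer.decode(output_ids[0][input_ids.shape[1]:]))
```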
Ah, okay! I'll give this a go first. I can't remember what I tried, but I think I was just trying things in the Jupyter notebook and wasn't sure how to set up the conditioning.
I'd like to be able to do a version of sentence infilling that allows for conditioning the generation on a leading token—i.e., semantically prompting the generated infill sentence. In your estimation, would it be a big job to enable this kind of generation? I'm thinking of the way that initial words like "however", "therefore", "further", and so on can have a strong semantic effect on the kind of sentence infill generated.