Support Standard Serialization Formats #40

Closed
sidjha1 opened this issue Nov 23, 2024 · 5 comments
Labels
feature request (New feature or request), good first issue (Good for newcomers)

Comments

@sidjha1
Collaborator

sidjha1 commented Nov 23, 2024

Is your feature request related to a problem? Please describe.
Currently the df2text function uses a custom serialization format. It would be good to support standard formats like JSON, XML, etc., since those better match model training distributions. In fact, Anthropic recommends using XML.

Describe the solution you'd like
There should be a serialization_format setting that df2text reads to choose the output format. The nice thing about pandas is that it already provides helpers like to_xml and other serialization functions.
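
A rough sketch of what this could look like (the exact setting name, the signature, and the fallback format below are just illustrative, not the final API):

```python
import pandas as pd

def df2text(df: pd.DataFrame, serialization_format: str = "default") -> str:
    """Serialize a dataframe for inclusion in an LM prompt.

    serialization_format is a hypothetical setting here; "default"
    stands in for the current custom format.
    """
    if serialization_format == "json":
        # One JSON object per row: [{"col": value, ...}, ...]
        return df.to_json(orient="records")
    elif serialization_format == "xml":
        # pandas' built-in XML writer (pandas >= 1.3)
        return df.to_xml(index=False)
    elif serialization_format == "csv":
        return df.to_csv(index=False)
    else:
        # Illustrative stand-in for the existing custom row-wise format
        return "\n".join(
            ", ".join(f"{col}: {row[col]}" for col in df.columns)
            for _, row in df.iterrows()
        )
```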

@sidjha1 sidjha1 added the good first issue (Good for newcomers) and feature request (New feature or request) labels Nov 23, 2024
@dhruviyer
Collaborator

@sidjha1 I've started some work on this in #63. It might also be good to add some regression tests or other benchmarks to see whether we're moving the needle; I'd love some input on how to approach that.

I'm wondering if we will eventually need to propagate this serialization through the prompt construction as well? For example, if the dataframe is encoded in XML, it makes sense that the prompt would be too. That seems like a much bigger lift so it might be good to take that on separately from this issue.

My concern is that I don't know if changing the dataframe serialization alone is enough to see the benefit of fitting the model training distribution, so I'm curious to get your or @liana313's guidance on it.

@sidjha1
Collaborator Author

sidjha1 commented Dec 24, 2024

I'm wondering if we will eventually need to propagate this serialization through the prompt construction as well? For example, if the dataframe is encoded in XML, it makes sense that the prompt would be too. That seems like a much bigger lift so it might be good to take that on separately from this issue.

This is a good point. From a research perspective there are also open questions around prompt optimization for these sorts of things (e.g. DSPy), so there's a lot to explore on the prompting side that we can leave for later.

My concern is that I don't know if changing the dataframe serialization alone is enough to see the benefit of fitting the model training distribution

At some point I did a lit review on what folks are doing for table serialization as LM input, but from what I understood there is no real consensus. In any case, Anthropic must have done something special to get their models to favor XML. I suppose we'll have to see empirically whether we also need to put some sort of tags around the user instruction. It would be good to have an easy-to-run benchmark against which these output-changing features can be verified (cc @liana313)
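
For example, the prompt construction could be something as simple as this (purely a sketch, not the actual prompt-building code):

```python
def build_prompt(table_text: str, instruction: str) -> str:
    # Hypothetical: mirror an XML-serialized table by also tagging the
    # user instruction, so the whole prompt uses a consistent format.
    return (
        f"<table>\n{table_text}\n</table>\n"
        f"<instruction>\n{instruction}\n</instruction>"
    )
```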

@liana313
Collaborator

Thanks for the work on this @dhruviyer! I agree it would be great to start a benchmark directory where we can run these types of tests outside of CI. TabFact (https://github.com/wenhuchen/Table-Fact-Checking) might be a good dataset for us to use for this -- prior work (e.g. https://arxiv.org/pdf/2305.13062) evaluated table serialization on it, among other datasets.
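
A rough sketch of how such a harness could look (the examples list, the ask_model call, and the serialization_format kwarg below are placeholders, not existing code):

```python
def evaluate_serialization(formats, examples, ask_model):
    """Hypothetical harness: `examples` is a list of (df, claim, label)
    triples (e.g. drawn from TabFact) and `ask_model` is a stand-in for
    an LM call that returns True/False for a claim about a table."""
    accuracy = {}
    for fmt in formats:
        correct = 0
        for df, claim, label in examples:
            table_text = df2text(df, serialization_format=fmt)
            correct += int(ask_model(table_text, claim) == label)
        accuracy[fmt] = correct / len(examples)
    return accuracy
```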

And on prompts, it makes sense to focus on just the table serialization for now, and we will look at prompt optimization separately.

@dhruviyer
Collaborator

@liana313 opened #64 to track

@sidjha1
Collaborator Author

sidjha1 commented Dec 25, 2024

Issue addressed by #63

@sidjha1 sidjha1 closed this as completed Dec 25, 2024