
Problems when loading data into table blueprint #33

Open
MaxFstl opened this issue Jan 26, 2025 · 4 comments

Comments


MaxFstl commented Jan 26, 2025

Hey,
after calculating a blueprint strategy, I got an error when trying to upload it to the database.
Only the metric and encoder tables got filled with data; every other table is empty. The program crashed while trying to upload to the blueprint table.
When I try to upload the data I get the following error:

```
thread 'main' panicked at src/main.rs:59:53:
called `Result::unwrap()` on an `Err` value: Error { kind: Db, cause: Some(DbError { severity: "ERROR", parsed_severity: Some(Error), code: SqlState(E22P04), message: "row field count is 2048, expected 6", detail: None, hint: None, position: None, where_: Some("COPY blueprint, line 1"), schema: None, table: None, column: None, datatype: None, constraint: None, file: Some("copyfromparse.c"), line: Some(987), routine: Some("NextCopyFrom") }) }
```

I think it has to do with the encoding of the blueprint file, because the error says it expected 6 fields per row but got 2048.
This is a PostgreSQL error, btw.
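
(For what it's worth: PostgreSQL's binary COPY format starts each tuple with a big-endian 16-bit field count, so the framing can be sanity-checked by reading the first tuple header straight out of the generated file. A rough sketch, assuming no header extension; the file path is just a placeholder:)

```rust
use std::fs::File;
use std::io::Read;

// Peek at the first tuple header of a pgcopy binary file.
// Layout: 11-byte signature, 4-byte flags, 4-byte extension length
// (assumed 0 here), then each tuple opens with a big-endian i16 field count.
fn main() -> std::io::Result<()> {
    let mut buf = [0u8; 21];
    File::open("blueprint.pgcopy")?.read_exact(&mut buf)?; // placeholder path
    let fields = i16::from_be_bytes([buf[19], buf[20]]);
    println!("first tuple claims {fields} fields"); // 2048 would confirm the bad framing
    Ok(())
}
```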

If you need to know my configuration just let me know.

Btw: how is your progress on the API? I would like to help, as it would fit nicely with my thesis :)
I have never used Rust, but I think I can learn it. It is really confusing for me right now, tbh.

krukah (Owner) commented Jan 28, 2025

ah yeah, certainly an issue with blueprint serialization. i think i've seen this before but haven't had the chance to investigate it too closely. my guess is it might be a silly error in the Save implementation of the Profile struct; the other implementors of the same trait follow a very similar pattern without any problems, after all.
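
for reference, each tuple in the binary COPY format has to open with its own field count; writing all the policy entries as fields of one giant tuple, instead of many 6-field tuples, would produce exactly that error. a rough sketch of the framing (not the actual Save code; the helper name is made up):

```rust
use byteorder::{BigEndian, WriteBytesExt};
use std::io::{self, Write};

// Sketch of pgcopy tuple framing: every tuple opens with a big-endian
// i16 field count that must match the table's column count (6 for blueprint).
fn write_tuple<W: Write>(out: &mut W, fields: &[&[u8]]) -> io::Result<()> {
    out.write_i16::<BigEndian>(fields.len() as i16)?; // per-tuple field count
    for field in fields {
        out.write_i32::<BigEndian>(field.len() as i32)?; // field byte length (-1 means NULL)
        out.write_all(field)?;
    }
    Ok(())
}
```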

krukah commented Jan 28, 2025

you might see i made some changes to the upload and api files within the analysis module. the biggest changes are in which indices are created on each table. ultimately, after the raw pgcopy files were generated, it took about an hour to run all the COPY FROM and CREATE INDEX commands.
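
roughly this shape, if you want to rerun it yourself (a sketch only: the connection string, file name, and index columns are placeholders, not the repo's actual schema):

```rust
use postgres::{Client, NoTls};
use std::{fs::File, io};

// Sketch: stream a pre-generated pgcopy file into the table, then index it.
fn main() -> Result<(), Box<dyn std::error::Error>> {
    let mut client = Client::connect("host=localhost user=postgres", NoTls)?; // placeholder conn
    let mut writer = client.copy_in("COPY blueprint FROM STDIN BINARY")?;
    io::copy(&mut File::open("blueprint.pgcopy")?, &mut writer)?; // placeholder path
    writer.finish()?;
    // index creation dominates the wall-clock time after the copy finishes
    client.batch_execute("CREATE INDEX ON blueprint (past, present)")?; // columns assumed
    Ok(())
}
```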

in a separate repo, i have a react frontend for exploring the results of the abstraction clusters, and i built out the endpoints necessary to serve that frontend. so i haven't really gotten the opportunity to explore the blueprint generated from MCCFR, since i first want to validate the abstraction results (using default lib.rs parameters) against my intuition: eye-checking equity distributions, distance calculations, efficient index lookups, etc.

[image attachment]

krukah commented Jan 28, 2025

if you're looking to contribute, we can always use more optimized SQL queries/API routes! and more generally there are several other features i'd like to roadmap that i don't have time to explore fully, so maybe we can find time to talk on the phone or something. super curious about the thesis, btw.

there's a decent number of not-quite-LLM'able queries to write that we'll need for things like "lookup what the blueprint strategy should be in this infoset" or "get me all the potential parents of ____ abstraction"
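
as a strawman for the infoset lookup (table and column names are made up, the real schema may differ):

```rust
use postgres::{Client, Error};

// Strawman infoset lookup: fetch the blueprint policy for one observed state.
// Columns (past, present, edge, policy) are assumptions, not the real schema.
fn lookup_policy(client: &mut Client, past: i64, present: i64) -> Result<Vec<(i64, f32)>, Error> {
    let rows = client.query(
        "SELECT edge, policy FROM blueprint WHERE past = $1 AND present = $2",
        &[&past, &present],
    )?;
    Ok(rows.iter().map(|row| (row.get(0), row.get(1))).collect())
}
```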

on a longer time horizon, it would be nice to begin development on the real-time search module (or, test-time inference, as all the cool LLM kids call it these days)

MaxFstl (Author) commented Jan 28, 2025

I'll take a look at the Save implementation when I have time in the next few days.

I would love to talk to you about the project, maybe via Discord or Zoom or whatever you like. Just mail me and we can get in touch.

> lookup what the blueprint strategy should be in this infoset

This is basically what I am trying to achieve in my thesis. Because of the complexity of poker AI, it has shifted more towards a literature review, explaining the concepts to a more general audience.

> on a longer time horizon, it would be nice to begin development on the real-time search module

I've thought about an implementation attempt at the end of my thesis if there is still time. This research paper https://proceedings.neurips.cc/paper/2018/hash/34306d99c63613fad5b2a140398c0420-Abstract.html suggests it needs orders of magnitude fewer computational resources than previous bots: it uses depth-limited subgame solving and was able to outperform previous master-level bots with just a 4-core CPU and 16 GB of memory.
I am not sure how difficult this is to implement, but I guess it will probably take too long :/
Unfortunately, very few details are published in these papers...
