
How to find a large dataset, say 10 million rows and 80 features #6

Open
Sandy4321 opened this issue Sep 12, 2022 · 0 comments

@Sandy4321

Great work on "Why do tree-based models still outperform deep learning on tabular data?"

But can you recommend a dataset with mixed continuous and categorical features for binary classification at a large data size, say 10 million rows and 80 features, where:

1. the features are not independent, for example some features depend on several other features, and
2. the data are imbalanced, with many more NO labels than YES labels?

For example (see also the synthetic sketch below):
https://www.kaggle.com/competitions/amex-default-prediction/data

https://github.com/jxzly/Kaggle-American-Express-Default-Prediction-1st-solution
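For concreteness, here is a minimal sketch of the spec above (assuming scikit-learn and pandas, neither of which is mentioned in this issue) that synthesizes dependent features and an imbalanced binary target; a real dataset like the Amex one has naturally occurring rather than constructed dependencies, so this is only a stand-in:

```python
# Minimal sketch, assuming scikit-learn and pandas: synthesize a dataset
# matching the spec above (10M rows, 80 features, dependent features,
# imbalanced binary target). Scale n_samples down for a quick test.
import pandas as pd
from sklearn.datasets import make_classification

n_rows, n_features = 10_000_000, 80

# 40 informative + 20 redundant features: the redundant columns are linear
# combinations of informative ones, so the features are not independent.
X, y = make_classification(
    n_samples=n_rows,
    n_features=n_features,
    n_informative=40,
    n_redundant=20,
    weights=[0.97],  # ~97% NO labels vs ~3% YES labels
    random_state=0,
)

df = pd.DataFrame(X, columns=[f"f{i}" for i in range(n_features)])

# Discretize a subset of columns into quartile bins to get categorical
# features alongside the continuous ones.
for col in df.columns[:20]:
    df[col] = pd.qcut(df[col], q=4, labels=False)

df["target"] = y
print(df["target"].mean())  # roughly 0.03, confirming the imbalance
```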
