
BUG: aggregation of np.float16/np.float32 is wrong for big dataset #47370

Closed
2 of 3 tasks
ghost opened this issue Jun 15, 2022 · 11 comments
Labels
Bug Dependencies Required and optional dependencies Duplicate Report Duplicate issue or pull request Numeric Operations Arithmetic, Comparison, and Logical operations

Comments

@ghost

ghost commented Jun 15, 2022

Pandas version checks

  • I have checked that this issue has not already been reported.

  • I have confirmed this bug exists on the latest version of pandas.

  • I have confirmed this bug exists on the main branch of pandas.

Reproducible Example

import pandas as pd; print(pd.__version__)
import numpy as np; print(np.__version__)

N = 70_000_000
df = pd.DataFrame({'A': np.random.normal(4,1,N).astype(np.float32)})

print(np.mean(df['A'].values))  # Returns 4.0000944 <-- correct
print(np.mean(df['A']))         # Returns 1.917656660079956 <-- wrong!
print(df['A'].mean())           # Returns 1.917656660079956 <-- written like this, it looks like a pandas-related bug

Issue Description

Hi,

It seems that when using float32, pandas gets mean() and var() wrong beyond roughly 34 million rows.
I suspected rounding errors at first, but it seems to be something far more fundamental than that.

Please note that this bug:

  • is especially nasty, since it neither produces a warning nor raises an exception, yet returns a statistic that is absolutely wrong. The consequences for data pipelines and companies can be serious.
  • mathematically, it looks as if all elements after a certain index (sometimes 2**24, 2**25, ...) are treated as 0 for np.float32 (or NaN for other dtypes)
  • happens at least for np.mean() and np.var(), but probably for other functions as well
  • may, in fact, be related to NumPy (or another library) rather than pandas.

In terms of datatype, I managed to reproduce the bug for np.float32 and np.float16:

  • float64: works OK at least up to 2**28
  • float32: OK up to 1.99 * 2**23, starts failing at 2**24 (treats the last elements as 0)
  • float16: OK up to 1.99 * 2**15, starts failing at 2**16 (treats the last elements as NaN)
  • np.int8, np.int16, np.int32, np.int64: work OK at least up to 2**28
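
The float32 threshold above lines up with IEEE-754 single precision. A quick sketch (not from the issue itself) showing that a sequential float32 accumulation stalls at exactly 2**24, while NumPy's pairwise summation stays exact:

```python
import numpy as np

# Once a float32 running total reaches 2**24, adding 1.0 no longer changes
# it: the gap between adjacent float32 values at that magnitude is 2.0.
xs = np.ones(2**24 + 100, dtype=np.float32)

naive_total = np.cumsum(xs, dtype=np.float32)[-1]  # sequential accumulation
pairwise_total = np.sum(xs, dtype=np.float32)      # NumPy's pairwise summation

print(naive_total)     # 16777216.0 -- stuck at 2**24
print(pairwise_total)  # 16777316.0 -- exact
```

This matches the reported behavior of every element past index 2**24 being "treated as 0": they are added, but rounding discards them.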

Expected Behavior

In the above example, np.mean(df['A']) should return something around 4.0.
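
Until the underlying bug is fixed, two possible workarounds (a sketch, not an official recommendation, using all-ones data so the expected result is obvious):

```python
import numpy as np
import pandas as pd

N = 2**24 + 100  # past the float32 threshold reported above
df = pd.DataFrame({'A': np.ones(N, dtype=np.float32)})

# 1. Reduce over the raw ndarray, so NumPy's pairwise summation is used:
print(np.mean(df['A'].to_numpy()))        # 1.0

# 2. Upcast before aggregating, so the accumulator has float64 precision:
print(df['A'].astype(np.float64).mean())  # 1.0
```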

Installed Versions

INSTALLED VERSIONS

commit : 66e3805
python : 3.7.10.final.0
python-bits : 64
OS : Linux
OS-release : 4.19.0-18-cloud-amd64
Version : #1 SMP Debian 4.19.208-1 (2021-09-29)
machine : x86_64
processor :
byteorder : little
LC_ALL : None
LANG : C.UTF-8
LOCALE : en_US.UTF-8

pandas : 1.3.5
numpy : 1.21.6
pytz : 2021.3
dateutil : 2.8.2
pip : 21.2.4
setuptools : 58.2.0
Cython : 0.29.30
pytest : 7.1.1
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : 3.0.2
IPython : 7.28.0
pandas_datareader: None
bs4 : None
bottleneck : 1.3.2
fsspec : 2021.10.0
fastparquet : 0.8.1
gcsfs : 2021.10.0
matplotlib : 3.4.3
numexpr : None
odfpy : None
openpyxl : 3.0.9
pandas_gbq : 0.17.4
pyarrow : 5.0.0
pyxlsb : None
s3fs : None
scipy : 1.7.1
sqlalchemy : 1.4.25
tables : None
tabulate : None
xarray : None
xlrd : None
xlwt : None
numba : None

@ghost ghost added Bug Needs Triage Issue that has not been reviewed by a pandas team member labels Jun 15, 2022
@twoertwein
Member

On main with numpy==1.22.4 and python 3.10, I get the same result (of almost 4.0)

print(np.mean(df['A'])) # Return 3.999839
print(np.mean(df['A'].values)) # Return 3.999839

@ghost
Author

ghost commented Jun 15, 2022

I can confirm on my side:
the problem does not happen in a fresh environment with Python 3.8, numpy==1.22.4 and pandas installed from source (main)

@bluss

bluss commented Jun 15, 2022

Testing with pandas 1.4.2, the error does not occur after installing numba, nor after Cython, but after I installed bottleneck it reproduces the problem. (The bug also affects pandas 1.3.5.)
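
Since bottleneck seems to be the trigger, one way to check (a sketch): pandas can be told not to dispatch to bottleneck at all via the `compute.use_bottleneck` option, which makes aggregations fall back to NumPy:

```python
import numpy as np
import pandas as pd

# Disable the bottleneck fast path; aggregations fall back to NumPy.
pd.set_option('compute.use_bottleneck', False)

s = pd.Series(np.ones(2**24 + 100, dtype=np.float32))
print(s.mean())  # 1.0 via the NumPy fallback path
```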

Versions

INSTALLED VERSIONS

commit : 4bfe3d0
python : 3.8.10.final.0
python-bits : 64
OS : Linux
OS-release : 5.13.0-51-generic
Version : #58~20.04.1-Ubuntu SMP Tue Jun 14 11:29:12 UTC 2022
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : None
LANG : sv_SE.UTF-8
LOCALE : sv_SE.UTF-8

pandas : 1.4.2
numpy : 1.22.4
pytz : 2022.1
dateutil : 2.8.2
pip : 20.0.2
setuptools : 44.0.0
Cython : 0.29.30
pytest : None
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : 3.1.2
IPython : None
pandas_datareader: None
bs4 : None
bottleneck : 1.3.4
brotli : None
fastparquet : None
fsspec : None
gcsfs : None
markupsafe : 2.1.1
matplotlib : None
numba : 0.55.2
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : None
pyreadstat : None
pyxlsb : None
s3fs : None
scipy : None
snappy : None
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlwt : None
zstandard : None

@twoertwein
Member

but then I installed bottleneck and it reproduces the problem.

Can you confirm that you are still using numpy 1.22.4 after installing bottleneck?

@bluss

bluss commented Jun 15, 2022

It's in the version list

@ghost ghost changed the title BUG: values aggreationg (mean(), var()) inconsistent between numpy and pandas BUG: aggregation of np.float32 largely deviates from truth for big dataset Jun 15, 2022
@ghost
Author

ghost commented Jun 15, 2022

I came up with a simpler example to reproduce...
EDIT: this has been deleted; the example from bluss (next comment) is even better.

@bluss

bluss commented Jun 15, 2022

This is shorter

import numpy as np
import bottleneck

N = int(1.5 * (2**24))
xs = np.repeat(1., N).astype(np.float32)
np.mean(xs)             # 1.0
bottleneck.nanmean(xs)  # 0.6666666865348816

(without pandas)

Edit: Yet it could be called a pandas bug. If bottleneck doesn't want to fix this, pandas can't use it outside of the range where it works well. Most telling is bottleneck.nansum(xs) # 16777216.0, because np.float32(np.float32(16777216.0) + 1.) == 16777216
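
That rounding claim can be verified directly in NumPy, independent of bottleneck (a small sketch):

```python
import numpy as np

big = np.float32(2**24)
print(big + np.float32(1.0) == big)  # True: the added 1.0 is lost to rounding
print(np.spacing(big))               # 2.0: gap between adjacent float32 values at 2**24
```

So a naive float32 running sum can never move past 2**24 when accumulating ones, which is exactly what bottleneck's sequential accumulator does.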

@ghost ghost changed the title BUG: aggregation of np.float32 largely deviates from truth for big dataset BUG: aggregation of np.float16/np.float32 is wrong for big dataset Jun 15, 2022
@phofl
Member

phofl commented Jun 15, 2022

Thanks for figuring this out. Could you open an issue on their side?

@ghost
Author

ghost commented Jun 15, 2022

Will do right now.

@bluss

bluss commented Jun 16, 2022

I think this is caused by pydata/bottleneck#379, and pydata/bottleneck#407 (or a similar change to other methods) would mitigate the issue, though not fix it.

And it looks like this is a duplicate of #42878

@simonjayhawkins
Member

And this is a duplicate of #42878 it looks like

Closing this to keep the discussion in fewer places. Thanks @JeanLescut for the report. Please feel free to add comments to #42878.

@simonjayhawkins simonjayhawkins added Duplicate Report Duplicate issue or pull request Numeric Operations Arithmetic, Comparison, and Logical operations Dependencies Required and optional dependencies and removed Needs Triage Issue that has not been reviewed by a pandas team member labels Jun 17, 2022