BUG: aggregation of np.float16/np.float32 is wrong for big dataset #47370
Comments
On main:

```python
print(np.mean(df['A']))         # returns 3.999839
print(np.mean(df['A'].values))  # returns 3.999839
```
I can confirm this on my side.
Testing with pandas 1.4.2: the error does not occur after installing numba, nor after installing Cython, but after installing bottleneck it reproduces the problem. (The bug also affects pandas 1.3.5.)

Versions: commit 4bfe3d0, pandas 1.4.2
Can you confirm that you are still using numpy 1.22.4 after installing bottleneck?
It's in the version list.
I came up with a simpler example to reproduce... |
This is shorter:

```python
import numpy as np
import bottleneck

N = int(1.5 * (2**24))  # np.repeat needs an integer count
xs = np.repeat(1., N).astype(np.float32)
np.mean(xs)             # 1.0
bottleneck.nanmean(xs)  # 0.6666666865348816 (without pandas)
```

Edit: Yet it could be called a pandas bug. If bottleneck doesn't want to fix this, pandas can't use it outside of the area where it works well.
Thanks for figuring this out. Could you open an issue on their side? |
Will do right now. |
I think this is caused by pydata/bottleneck#379, and this change would mitigate the issue (not fix it): pydata/bottleneck#407, or a similar change to the other methods. It also looks like this is a duplicate of #42878.
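As a scaled-down illustration of why a pairwise-summation change along the lines of pydata/bottleneck#407 helps (this is my own sketch, not bottleneck's actual code, and it uses float16 so the stall appears after thousands of elements instead of millions):

```python
import numpy as np

def naive_sum(xs, dtype=np.float16):
    # single low-precision accumulator, similar in spirit to the
    # float32 accumulator bottleneck uses for float32 input
    acc = dtype(0.0)
    for v in xs:
        acc = dtype(acc + v)
    return acc

def pairwise_sum(xs, dtype=np.float16):
    # split-and-recurse keeps each partial sum small, so rounding
    # error grows like O(log n) instead of O(n)
    if len(xs) <= 8:
        return naive_sum(xs, dtype)
    mid = len(xs) // 2
    return dtype(pairwise_sum(xs[:mid], dtype) + pairwise_sum(xs[mid:], dtype))

xs = np.ones(3 * 2048, dtype=np.float16)  # float16 stalls at 2**11 = 2048
print(naive_sum(xs))     # 2048.0, not 6144
print(pairwise_sum(xs))  # 6144.0
```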
Pandas version checks

- I have checked that this issue has not already been reported.
- I have confirmed this bug exists on the latest version of pandas.
- I have confirmed this bug exists on the main branch of pandas.
Reproducible Example
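The original code block was lost in extraction. Based on the discussion (a float32 column 'A' of roughly 34 million rows whose true mean is 4.0), a reconstruction along these lines should exercise the same code path; the exact values and the 3.999839 result depend on the original data:

```python
import numpy as np
import pandas as pd

# Hypothetical reconstruction of the lost snippet: a constant float32
# column longer than 2**25 rows whose true mean is 4.0.
N = 2**25  # ~34 million rows
df = pd.DataFrame({"A": np.full(N, 4.0, dtype=np.float32)})

# With bottleneck installed, this can drift well below 4.0;
# accumulating in float64 on the raw ndarray stays exact.
print(df["A"].mean())
print(np.mean(df["A"].values, dtype=np.float64))  # 4.0
```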
Issue Description
Hi,
It seems that when using float32, pandas gets mean() or var() wrong beyond about 34 million rows.
I suspected rounding errors at first, but it seems to be something far more fundamental than that.
Please note that this bug:

- appears once the dataset length passes a power of two (2**24, 2**25, ...); the elements past that threshold are considered as 0 for np.float32 (or NaN for other dtypes).

In terms of datatype, I managed to reproduce the bug for np.float32 and np.float16 (tested up to 2**28):

- np.float32: works fine up to (2**23), starts bugging at (2**24) (considers the last elements as 0)
- np.float16: works fine up to (2**15), starts bugging at (2**16) (considers the last elements as NaN)
- np.float64: works fine up to (2**28)

Expected Behavior
In the above example, np.mean(df['A']) should return something around 4.0.
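Until the bottleneck path is changed, one workaround consistent with this expectation is to force a float64 accumulator on the raw ndarray, which bypasses bottleneck entirely (a sketch, assuming a constant-valued stand-in for the original column):

```python
import numpy as np

xs = np.full(2**25, 4.0, dtype=np.float32)

# Accumulating in float64 keeps ~2**25 float32 values well inside the
# 53-bit significand, so the mean is exact here.
print(np.mean(xs, dtype=np.float64))  # 4.0
```

Casting the column first, e.g. df['A'].astype('float64').mean(), achieves the same effect inside pandas.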
Installed Versions
INSTALLED VERSIONS
commit : 66e3805
python : 3.7.10.final.0
python-bits : 64
OS : Linux
OS-release : 4.19.0-18-cloud-amd64
Version : #1 SMP Debian 4.19.208-1 (2021-09-29)
machine : x86_64
processor :
byteorder : little
LC_ALL : None
LANG : C.UTF-8
LOCALE : en_US.UTF-8
pandas : 1.3.5
numpy : 1.21.6
pytz : 2021.3
dateutil : 2.8.2
pip : 21.2.4
setuptools : 58.2.0
Cython : 0.29.30
pytest : 7.1.1
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : 3.0.2
IPython : 7.28.0
pandas_datareader: None
bs4 : None
bottleneck : 1.3.2
fsspec : 2021.10.0
fastparquet : 0.8.1
gcsfs : 2021.10.0
matplotlib : 3.4.3
numexpr : None
odfpy : None
openpyxl : 3.0.9
pandas_gbq : 0.17.4
pyarrow : 5.0.0
pyxlsb : None
s3fs : None
scipy : 1.7.1
sqlalchemy : 1.4.25
tables : None
tabulate : None
xarray : None
xlrd : None
xlwt : None
numba : None