
Install Problem #13

Open

lee-vius opened this issue Jul 28, 2022 · 3 comments

Comments

@lee-vius

Hi! Thanks for your work.

Could you provide the environment required to install the QuadTreeAttention module?

I ran into compile errors on both Linux and Windows. I have several conda environments, including torch 1.10 and torch 1.8 (cu11.1), but every environment hit a different compile problem. I'm not sure what the reason is.

@Tangshitao
Owner

Can you post the error messages? Usually the problem is caused by a CUDA/PyTorch version mismatch. The code can be compiled under both CUDA 10 and CUDA 11; please make sure your PyTorch build matches your CUDA version.
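
For reference, here is a minimal standalone check, not taken from this repository, that prints the CUDA toolkit version the compiler sees and the runtime/driver versions available on the machine; comparing these with the CUDA version PyTorch reports (torch.version.cuda) usually exposes a mismatch:

```cpp
// Hedged sketch, not part of QuadTreeAttention: prints the CUDA toolkit
// version this file is compiled against and the runtime/driver versions
// visible at run time. Compare these with torch.version.cuda.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int runtime_version = 0, driver_version = 0;
    cudaRuntimeGetVersion(&runtime_version);  // e.g. 11010 means CUDA 11.1
    cudaDriverGetVersion(&driver_version);    // highest CUDA version the driver supports
    std::printf("compile-time CUDART_VERSION: %d\n", CUDART_VERSION);
    std::printf("runtime: %d, driver: %d\n", runtime_version, driver_version);
    return 0;
}
```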

@lee-vius
Author

Hi, thanks for the reply.

I checked the environment following your suggestion. On my Linux machine I was using torch 1.10 + cu11.1 while the installed CUDA version is 11.2. I will reinstall torch and try again to see whether it works.

For the Windows system, I finally managed to install the module. First, as Tang said, check that the CUDA and torch versions are consistent. Second, for anyone on Windows who hits the error LNK2001: unresolved external symbol "public: long * __cdecl at::Tensor::data<long>(void)const", I recommend following the guide given by the link: error LNK2001.

To summarize, the problem is caused by the "long" type in the source code: "long" is a 64-bit type (like "long long") on Linux but a 32-bit type (like "int") on Windows, so this is a cross-platform issue. Changing every "long" in the code (.cpp, .h, .cu) to "long long" or "int64_t" should solve the problem on Windows.
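
As an illustration, here is a minimal sketch of the change; the function and tensor names are hypothetical and not taken from the QuadTreeAttention sources:

```cpp
// Hypothetical example of the portability fix described above. On Windows
// (MSVC), "long" is 32-bit, so at::Tensor::data<long>() has no matching
// 64-bit instantiation in the prebuilt LibTorch binaries and the link
// fails with LNK2001. A fixed-width 64-bit type works on both platforms.
#include <torch/extension.h>

void read_index_tensor(const at::Tensor& index) {
    // Before (links on Linux, fails on Windows):
    //   const long* idx = index.data<long>();

    // After: int64_t matches torch.int64 tensors on every platform.
    // (data_ptr<T>() is the non-deprecated spelling of data<T>().)
    const int64_t* idx = index.data_ptr<int64_t>();
    (void)idx;  // real code would launch the kernel / loop over idx here
}
```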

@Coronal-Halo

Coronal-Halo commented May 25, 2023

Hi, my torch version is 2.0.1 and my CUDA version is 11.8.0, which are consistent, and I also changed the "long" type to "int64_t", but I am still getting errors on Windows:

ninja: build stopped: subcommand failed.
Traceback (most recent call last):
  File "C:\ProgramData\anaconda3\envs\deeplearning\lib\site-packages\torch\utils\cpp_extension.py", line 1894, in _run_ninja_build
    subprocess.run(
  File "C:\ProgramData\anaconda3\envs\deeplearning\lib\subprocess.py", line 526, in run
    raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1.

Traceback (most recent call last):
......
RuntimeError: Error compiling objects for extension

Do you know how to solve these?
