deps cleanup #162
Changes from all commits: 3ba429d, 4f00f13, 23f624b, d73307f, 918746c, 2bd8c1a, 6b10a3c
```diff
@@ -1,5 +1,5 @@
 {% set version = "1.13.1" %}
-{% set number = 0 %}
+{% set number = 1 %}

 {% if cuda_compiler_version != "None" %}
 {% set number = number + 200 %}
```
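For readers unfamiliar with the pattern: bumping the base build number (0 → 1) triggers a rebuild of the same version, and the conditional `+ 200` offset gives CUDA builds a higher build number than CPU builds, so the solver prefers the GPU variant on machines where both are installable. A minimal sketch of how the pieces combine (the exact values here are illustrative):

```yaml
# Sketch of the build-number/variant pattern used in this recipe.
{% set number = 1 %}                      # base build number; bump to force a rebuild
{% if cuda_compiler_version != "None" %}
{% set number = number + 200 %}           # CUDA variants get a higher build number,
{% endif %}                               # so the solver prefers them over CPU builds

build:
  number: {{ number }}
```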
```diff
@@ -43,7 +43,6 @@ outputs:
     - python                              # [build_platform != target_platform]
     - cross-python_{{ target_platform }}  # [build_platform != target_platform]
     - numpy                               # [build_platform != target_platform]
     - cffi                                # [build_platform != target_platform]
     - sysroot_linux-64 2.17               # [linux64]
     - {{ compiler('c') }}
     - {{ compiler('cxx') }}
```
```diff
@@ -81,12 +80,13 @@ outputs:
     - requests
     - future
     - six
     - cffi
     - mkl-devel {{ mkl }}   # [x86]
     - libcblas * *_mkl      # [x86]
     - libcblas              # [not x86]
     - liblapack             # [not x86]
     - openblas              # [not x86]
     - libgomp               # [linux]
     - llvm-openmp           # [osx]
     - libprotobuf
     - sleef
     - typing
```
```diff
@@ -100,12 +100,7 @@ outputs:
     - {{ pin_compatible('magma', max_pin='x.x.x') }}  # [cuda_compiler_version != "None"]
     # other requirements
     - python
     - {{ pin_compatible('numpy') }}
```
---

if we cannot remove numpy from the build time dependencies, this should be moved to a run_constraint

---

Sorry, I misunderstood. Do you want us to remove them as build dependency or just run dependencies? I thought run dependencies only...

---

Their package doesn't seem to depend on numpy at runtime. Their recipe seems to look like:

```yaml
{% set build_variant = environ.get('PYTORCH_BUILD_VARIANT', 'cuda') %}
{% set cross_compile_arm64 = environ.get('CROSS_COMPILE_ARM64', 0) %}

package:
  name: pytorch
  version: "{{ environ.get('PYTORCH_BUILD_VERSION') }}"

source:
  path: "{{ environ.get('PYTORCH_GITHUB_ROOT_DIR') }}"

requirements:
  build:
    - cmake
    - {{ compiler('c') }}  # [win]
    - pkg-config           # [unix]
    - libuv                # [unix]
  host:
    - python
    - setuptools
    - pyyaml
{% if cross_compile_arm64 == 0 %}
    - mkl-include    # [x86_64]
    - mkl=2020.2     # [x86_64 and (not win or py <= 39)]
    - mkl=2021.4     # [x86_64 and win and py >= 310]
{% endif %}
    - typing_extensions
    - ninja
    - libuv            # [win]
    - numpy=1.19       # [py <= 39]
    - numpy=1.21.5     # [py >= 310]
    - openssl=1.1.1l   # [py >= 310 and linux]
    {{ environ.get('PYTORCH_LLVM_PACKAGE', ' - llvmdev=9') }}
    {{ environ.get('MAGMA_PACKAGE', '') }}
  run:
    - python
{% if cross_compile_arm64 == 0 %}
    - mkl >=2018       # [x86_64]
{% endif %}
    - libuv            # [win]
    - intel-openmp     # [win]
    - typing_extensions
{% if cross_compile_arm64 == 0 %}
    - blas * mkl
{% endif %}
    - pytorch-mutex 1.0 {{ build_variant }}  # [not osx]
    {{ environ.get('CONDA_CUDATOOLKIT_CONSTRAINT', '') }}
{% if build_variant == 'cpu' %}
  run_constrained:
    - cpuonly
{% elif not osx %}
  run_constrained:
    - cpuonly <0
{% endif %}

build:
  number: {{ environ.get('PYTORCH_BUILD_NUMBER', '1') }}
  detect_binary_files_with_prefix: False
  string: "{{ environ.get('PYTORCH_BUILD_STRING') }}"
  script_env:
    - BUILD_SPLIT_CUDA
    - CUDA_VERSION
    - CUDNN_VERSION
    - CONDA_CUDATOOLKIT_CONSTRAINT
    - USE_CUDA
    - CMAKE_ARGS
    - EXTRA_CAFFE2_CMAKE_FLAGS
    - DEVELOPER_DIR
    - DEBUG
    - USE_FBGEMM
    - USE_GLOO_WITH_OPENSSL      # [unix]
    - USE_SCCACHE                # [win]
    - USE_DISTRIBUTED            # [unix]
    - CMAKE_OSX_ARCHITECTURES    # [unix]
    - USE_MKLDNN                 # [unix]
    - USE_NNPACK                 # [unix]
    - USE_QNNPACK                # [unix]
    - BUILD_TEST                 # [unix]
    - USE_PYTORCH_METAL_EXPORT   # [osx]
    - USE_COREML_DELEGATE        # [osx]
    - _GLIBCXX_USE_CXX11_ABI     # [unix]

test:
  imports:
    - torch

about:
  home: http://pytorch.org/
  license: BSD 3-Clause
  license_family: BSD
  license_file: LICENSE
  summary: PyTorch is an optimized tensor library for deep learning using GPUs and CPUs.
```

Which means that numpy is required at build time, but not runtime. Therefore, we should move the line you added to the …

---

It seems we can just remove numpy from the run section. Which one?
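The two alternatives being discussed would look roughly like this in the feedstock recipe; this is a sketch of the options, not the actual patch:

```yaml
# Option A: drop numpy from the run section entirely; it stays in host,
# where the build itself needs it.
run:
  - python
  # (no numpy entry)

# Option B: move the pin to run_constrained, so numpy's version is only
# constrained when something else installs it, without making it a hard
# install-time dependency.
run_constrained:
  - {{ pin_compatible('numpy') }}
```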
---

I'm not an expert with llvm. I can't comment on that.
```diff
     - setuptools
     - cffi
     - typing_extensions
     # Need ninja to load C++ extensions
     - ninja
     # avoid that people without GPUs needlessly download ~0.5-1GB
     - __cuda  # [cuda_compiler_version != "None"]
    run_constrained:
```
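For context, `__cuda` is one of conda's virtual packages: it appears in an environment only when the machine's NVIDIA driver reports a usable CUDA version, so depending on it makes the GPU build unsatisfiable on CPU-only machines and the solver falls back to the CPU variant. A sketch of the pattern (the version bound is illustrative, not from this recipe):

```yaml
requirements:
  run:
    # Only satisfiable on machines whose driver exposes CUDA;
    # elsewhere the solver selects the CPU build instead.
    - __cuda >=11.2   # illustrative bound
```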
I was only going to ask about this line with respect to the CUDA builds.
I added this to be consistent with the osx additions. However, I highly doubt it is worth the hassle of building locally and uploading; we never faced any issues with the cuda builds. So yes, imo we could skip building cuda this time around. If there are any problems, we will address them in the next release, and perhaps make a more informed decision about adding/removing this. But ultimately you will have to make a decision on this...
Ok, let's skip the builds this time, and maybe upload them over time later.
Ninja was already removed as a dependency, so I think the main issue is resolved via a repodata patch.
Sounds good. I would offer to build them myself, but I just moved jobs (with plenty of access to HPCs) and I need time to navigate the policies 😶🌫️
Congrats on the move!