
Updated GitHub & CircleCI actions #134

Open · wants to merge 14 commits into base: main
30 changes: 23 additions & 7 deletions .circleci/config.yml
@@ -1,6 +1,6 @@
# Python CircleCI 2.0 configuration file
# Python CircleCI 2.1 configuration file
#
# Check https://circleci.com/docs/2.0/language-python/ for more details
# Check https://circleci.com/docs/2.1/language-python/ for more details
#
version: 2.1

@@ -9,18 +9,26 @@ version: 2.1
# -------------------------------------------------------------------------------------
cpu: &cpu
docker:
- image: circleci/python:3.7
- image: cimg/python:3.11.4
auth:
username: $DOCKERHUB_USERNAME
password: $DOCKERHUB_TOKEN
resource_class: medium

gpu: &gpu
machine:
image: ubuntu-2004-cuda-11.4:202110-01
image: linux-cuda-12:2023.05.1
docker_layer_caching: true
resource_class: gpu.nvidia.small

version_parameters: &version_parameters
parameters:
cuda_version:
type: string
default: '12.1'
environment:
CUDA_VERSION: << parameters.cuda_version >>

# -------------------------------------------------------------------------------------
# Re-usable commands
# -------------------------------------------------------------------------------------
@@ -58,6 +66,11 @@ run_unittests: &run_unittests
python -m unittest discover -v -s tests
python -m unittest discover -v -s io_tests

select_cuda: &select_cuda
- run:
name: Select CUDA
command: |
sudo update-alternatives --set cuda /usr/local/cuda-<< parameters.cuda_version >>
# -------------------------------------------------------------------------------------
# Jobs to run
# -------------------------------------------------------------------------------------
@@ -74,14 +87,14 @@ jobs:
# Cache the venv directory that contains dependencies
- restore_cache:
keys:
- cache-key-{{ .Branch }}-ID-20200130
- cache-key-{{ .Branch }}-ID-20230617

- <<: *install_dep

- save_cache:
paths:
- ~/venv
key: cache-key-{{ .Branch }}-ID-20200130
key: cache-key-{{ .Branch }}-ID-20230617

- <<: *install_fvcore

@@ -96,11 +109,13 @@ jobs:

gpu_tests:
<<: *gpu
<<: *version_parameters

working_directory: ~/fvcore

steps:
- checkout
- <<: *select_cuda
- run:
name: Install nvidia-docker
working_directory: ~/
@@ -134,7 +149,7 @@ jobs:

upload_wheel:
docker:
- image: circleci/python:3.7
- image: cimg/python:3.11.4
auth:
username: $DOCKERHUB_USERNAME
password: $DOCKERHUB_TOKEN
@@ -184,6 +199,7 @@ workflows:
context:
- DOCKERHUB_TOKEN
- gpu_tests:
cuda_version: '12.1'
context:
- DOCKERHUB_TOKEN
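The config above leans on YAML anchors (`&cpu`, `&gpu`, `&version_parameters`) pulled into jobs with `<<: *anchor` merge keys. Their semantics are close to Python dict merging, where keys written after the merge override the merged-in defaults. A minimal sketch under that analogy — the dicts below are illustrative stand-ins, not the PR's full config:

```python
# Shared blocks, as the YAML anchors would expand them (illustrative values):
cpu = {"resource_class": "medium", "docker": [{"image": "cimg/python:3.11.4"}]}
version_parameters = {
    "parameters": {"cuda_version": {"type": "string", "default": "12.1"}}
}

# gpu_tests applies `<<: *gpu` and `<<: *version_parameters`, then adds its
# own keys; with dicts that is a merge followed by job-specific overrides:
gpu_tests = {**cpu, **version_parameters, "working_directory": "~/fvcore"}

assert gpu_tests["parameters"]["cuda_version"]["default"] == "12.1"
assert gpu_tests["working_directory"] == "~/fvcore"
```

The override order matters in the same way: a key repeated after `<<:` in the YAML wins, just as the rightmost key wins in a Python dict literal.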

12 changes: 6 additions & 6 deletions .github/workflows/workflow.yml
@@ -9,23 +9,23 @@ jobs:
strategy:
max-parallel: 4
matrix:
python-version: [3.8, 3.9] # importlib-metadata v5 requires 3.8+
python-version: ["3.8", "3.9", "3.10", "3.11"] # importlib-metadata v5 requires 3.8+
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v3
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v2
uses: actions/setup-python@v4
with:
python-version: ${{ matrix.python-version }}
- name: Install dependencies
run: |
python -m pip install --upgrade pip
pip install flake8==3.8.1 flake8-bugbear flake8-comprehensions isort==4.3.21
pip install black==22.3.0
pip install flake8==6.0.0 flake8-bugbear flake8-comprehensions isort==5.12.0
pip install black==23.3.0
flake8 --version
- name: Lint
run: |
echo "Running isort"
isort -c -sp .
isort -c --sp . .
echo "Running black"
black --check .
echo "Running flake8"
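A likely reason the new matrix quotes every entry: in YAML (as in Python), a bare `3.10` is the float 3.1, so an unquoted `python-version: [3.10]` would ask setup-python for Python 3.1. A quick demonstration of the gotcha:

```python
# Unquoted YAML scalars like 3.10 are floats, and the float 3.10 equals 3.1 —
# which is why "3.10" and "3.11" must be quoted while 3.8 and 3.9 happened to
# survive unquoted.
assert float("3.10") == 3.1

# Quoted entries stay distinct strings:
versions = ["3.8", "3.9", "3.10", "3.11"]
assert len(set(versions)) == 4
assert "3.10" != "3.1"
```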
2 changes: 1 addition & 1 deletion README.md
@@ -22,7 +22,7 @@ Besides some basic utilities, fvcore includes the following features:

## Install:

fvcore requires pytorch and python >= 3.6.
fvcore requires pytorch and python >= 3.8.

Use one of the following ways to install:

2 changes: 1 addition & 1 deletion fvcore/__init__.py
@@ -2,4 +2,4 @@

# This line will be programatically read/write by setup.py.
# Leave them at the bottom of this file and don't touch them.
__version__ = "0.1.6"
__version__ = "0.1.7"
1 change: 0 additions & 1 deletion fvcore/common/checkpoint.py
@@ -291,7 +291,6 @@ def _load_model(self, checkpoint: Any) -> _IncompatibleKeys:
shape_model = tuple(model_param.shape)
shape_checkpoint = tuple(checkpoint_state_dict[k].shape)
if shape_model != shape_checkpoint:

has_observer_base_classes = (
TORCH_VERSION >= (1, 8)
and hasattr(quantization, "ObserverBase")
3 changes: 2 additions & 1 deletion fvcore/nn/jit_analysis.py
@@ -13,10 +13,11 @@
import numpy as np
import torch
import torch.nn as nn
from fvcore.common.checkpoint import _named_modules_with_dup
from torch import Tensor
from torch.jit import _get_trace_graph, TracerWarning

from fvcore.common.checkpoint import _named_modules_with_dup

from .jit_handles import Handle


4 changes: 1 addition & 3 deletions io_tests/test_file_io.py
@@ -201,9 +201,7 @@ def test_open_writes(self) -> None:

def test_bad_args(self) -> None:
with self.assertRaises(NotImplementedError):
PathManager.copy(
self._remote_uri, self._remote_uri, foo="foo" # type: ignore
)
PathManager.copy(self._remote_uri, self._remote_uri, foo="foo") # type: ignore
with self.assertRaises(NotImplementedError):
PathManager.exists(self._remote_uri, foo="foo") # type: ignore
with self.assertRaises(ValueError):
6 changes: 3 additions & 3 deletions linter.sh
@@ -4,14 +4,14 @@
# Run this script at project root by "./linter.sh" before you commit.

{
black --version | grep -E "22.3.0" > /dev/null
black --version | grep -E "23.3.0" > /dev/null
} || {
echo "Linter requires 'black==22.3.0' !"
echo "Linter requires 'black==23.3.0' !"
exit 1
}

echo "Running isort..."
isort -y -sp .
isort --sp . .

echo "Running black..."
black .
4 changes: 2 additions & 2 deletions packaging/build_all_conda.sh
@@ -2,14 +2,14 @@
# Copyright (c) Facebook, Inc. and its affiliates. All rights reserved.
set -ex

for PV in 3.6 3.7 3.8
for PV in 3.8 3.9 3.10 3.11
do
PYTHON_VERSION=$PV bash packaging/build_conda.sh
done

ls -Rl packaging

for version in 36 37 38
for version in 38 39 310 311
do
(cd packaging/out && conda convert -p win-64 linux-64/fvcore-*-py$version.tar.bz2)
(cd packaging/out && conda convert -p osx-64 linux-64/fvcore-*-py$version.tar.bz2)
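The two shell loops have to track each other: the dotted version `3.10` corresponds to the `py310` tag in the conda artifact names. A small sketch of that mapping (the `py_tag` helper is hypothetical, introduced only to make the correspondence explicit):

```python
# Map a dotted Python version to the digit-only tag used in conda package
# filenames (e.g. fvcore-*-py310.tar.bz2).
def py_tag(version: str) -> str:
    return version.replace(".", "")

assert [py_tag(v) for v in ["3.8", "3.9", "3.10", "3.11"]] == [
    "38", "39", "310", "311"
]
```

Keeping the second loop derived from the first (rather than hand-maintained) would prevent the two lists from drifting apart on the next version bump.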
4 changes: 2 additions & 2 deletions setup.py
@@ -41,7 +41,7 @@ def get_version():
url="https://github.com/facebookresearch/fvcore",
description="Collection of common code shared among different research "
"projects in FAIR computer vision team",
python_requires=">=3.6",
python_requires=">=3.8",
install_requires=[
"numpy",
"yacs>=0.1.6",
@@ -51,7 +51,7 @@ def get_version():
"Pillow",
"tabulate",
"iopath>=0.1.7",
"dataclasses; python_version<'3.7'",
"dataclasses; python_version<'3.12'",
],
extras_require={"all": ["shapely"]},
packages=find_packages(exclude=("tests",)),
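The `python_requires=">=3.8"` bump changes what pip will install: interpreters below the floor are refused at resolution time. A minimal sketch of that gate, assuming only the `">=MAJOR.MINOR"` form used here (real pip handles the full specifier grammar):

```python
import sys

def satisfies(requirement: str) -> bool:
    # Handles only ">=MAJOR.MINOR"; requirement[2:] strips the ">=" operator.
    major, minor = (int(part) for part in requirement[2:].split("."))
    return sys.version_info[:2] >= (major, minor)

assert satisfies(">=0.1")       # trivially true on any interpreter
assert not satisfies(">=99.0")  # a floor no current interpreter meets
```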
3 changes: 2 additions & 1 deletion tests/bm_common.py
@@ -1,8 +1,9 @@
# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.

from fvcore.common.benchmark import benchmark
from test_common import TestHistoryBuffer

from fvcore.common.benchmark import benchmark


def bm_history_buffer_update() -> None:
kwargs_list = [
3 changes: 2 additions & 1 deletion tests/bm_focal_loss.py
@@ -1,9 +1,10 @@
# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.

import torch
from fvcore.common.benchmark import benchmark
from test_focal_loss import TestFocalLoss, TestFocalLossStar

from fvcore.common.benchmark import benchmark


def bm_focal_loss() -> None:
if not torch.cuda.is_available():
3 changes: 2 additions & 1 deletion tests/test_activation_count.py
@@ -8,9 +8,10 @@

import torch
import torch.nn as nn
from numpy import prod

from fvcore.nn.activation_count import activation_count, ActivationCountAnalysis
from fvcore.nn.jit_handles import Handle
from numpy import prod


class SmallConvNet(nn.Module):
8 changes: 4 additions & 4 deletions tests/test_checkpoint.py
@@ -12,9 +12,10 @@
from unittest.mock import MagicMock

import torch
from fvcore.common.checkpoint import Checkpointer, PeriodicCheckpointer
from torch import nn

from fvcore.common.checkpoint import Checkpointer, PeriodicCheckpointer


TORCH_VERSION: Tuple[int, ...] = tuple(int(x) for x in torch.__version__.split(".")[:2])
if TORCH_VERSION >= (1, 11):
@@ -118,7 +119,6 @@ def test_from_last_checkpoint_model(self) -> None:
nn.DataParallel(self._create_model()),
),
]:

with TemporaryDirectory() as f:
checkpointer = Checkpointer(trained_model, save_dir=f)
checkpointer.save("checkpoint_file")
@@ -264,9 +264,9 @@ def __init__(self, has_y: bool) -> None:
)
logger.info.assert_not_called()

@unittest.skipIf( # pyre-fixme[56]
@unittest.skipIf(
not hasattr(nn, "LazyLinear"), "LazyModule not supported"
)
) # pyre-fixme[56]
def test_load_lazy_module(self) -> None:
def _get_model() -> nn.Sequential:
return nn.Sequential(nn.LazyLinear(10))
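The `TORCH_VERSION` line in this file turns a version string into a comparable tuple of its first two components, so version gates read as tuple comparisons. A torch-free sketch of the same pattern:

```python
# Mirror of the tuple(int(x) for x in torch.__version__.split(".")[:2]) idiom:
# only the first two components are kept, so local suffixes on the patch
# component never reach int().
def version_tuple(version: str) -> tuple:
    return tuple(int(x) for x in version.split(".")[:2])

assert version_tuple("1.11.0") == (1, 11)
assert version_tuple("2.0.1") >= (1, 11)  # tuple comparison, not string
```

The tuple form matters because string comparison gets this wrong: `"1.9" > "1.11"` lexicographically, while `(1, 9) < (1, 11)` as tuples.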
3 changes: 2 additions & 1 deletion tests/test_common.py
@@ -6,11 +6,12 @@
import unittest

import numpy as np
from yaml.constructor import ConstructorError

from fvcore.common.config import CfgNode
from fvcore.common.history_buffer import HistoryBuffer
from fvcore.common.registry import Registry
from fvcore.common.timer import Timer
from yaml.constructor import ConstructorError


class TestHistoryBuffer(unittest.TestCase):
6 changes: 4 additions & 2 deletions tests/test_flop_count.py
@@ -7,11 +7,12 @@

import torch
import torch.nn as nn
from fvcore.nn.flop_count import _DEFAULT_SUPPORTED_OPS, flop_count, FlopCountAnalysis
from fvcore.nn.jit_handles import Handle
from torch.autograd.function import Function
from torch.nn import functional as F

from fvcore.nn.flop_count import _DEFAULT_SUPPORTED_OPS, flop_count, FlopCountAnalysis
from fvcore.nn.jit_handles import Handle


class _CustomOp(Function):
@staticmethod
@@ -189,6 +190,7 @@ def test_customized_ops(self) -> None:
The second case checks when a new handle for a default operation is
passed. The new handle should overwrite the default handle.
"""

# New handle for a new operation.
def dummy_sigmoid_flop_jit(
inputs: typing.List[Any], outputs: typing.List[Any]
3 changes: 2 additions & 1 deletion tests/test_focal_loss.py
@@ -5,13 +5,14 @@

import numpy as np
import torch
from torch.nn import functional as F

from fvcore.nn import (
sigmoid_focal_loss,
sigmoid_focal_loss_jit,
sigmoid_focal_loss_star,
sigmoid_focal_loss_star_jit,
)
from torch.nn import functional as F


def logit(p: torch.Tensor) -> torch.Tensor:
1 change: 1 addition & 0 deletions tests/test_giou_loss.py
@@ -4,6 +4,7 @@

import numpy as np
import torch

from fvcore.nn import giou_loss


1 change: 1 addition & 0 deletions tests/test_jit_model_analysis.py
@@ -10,6 +10,7 @@

import torch
import torch.nn as nn

from fvcore.nn.flop_count import FlopCountAnalysis
from fvcore.nn.jit_analysis import JitModelAnalysis
from fvcore.nn.jit_handles import addmm_flop_jit, conv_flop_jit, Handle, linear_flop_jit
1 change: 1 addition & 0 deletions tests/test_layers_squeeze_excitation.py
@@ -4,6 +4,7 @@
from typing import Iterable

import torch

from fvcore.nn.squeeze_excitation import (
ChannelSpatialSqueezeExcitation,
SpatialSqueezeExcitation,
3 changes: 2 additions & 1 deletion tests/test_param_count.py
@@ -3,9 +3,10 @@

import unittest

from fvcore.nn.parameter_count import parameter_count, parameter_count_table
from torch import nn

from fvcore.nn.parameter_count import parameter_count, parameter_count_table


class NetWithReuse(nn.Module):
def __init__(self, reuse: bool = False) -> None:
3 changes: 2 additions & 1 deletion tests/test_precise_bn.py
@@ -7,9 +7,10 @@

import numpy as np
import torch
from fvcore.nn import update_bn_stats
from torch import nn

from fvcore.nn import update_bn_stats


class TestPreciseBN(unittest.TestCase):
def setUp(self) -> None: