Merge pull request #457 from EducationalTestingService/add-strict-mode
Add strict mode for tests.
desilinguist authored Aug 4, 2020
2 parents 07d8e5c + 267e531 commit b04230c
Showing 3 changed files with 70 additions and 37 deletions.
44 changes: 23 additions & 21 deletions doc/internal/release_process.rst
@@ -3,11 +3,13 @@ RSMTool Release Process

This process is only meant for the project administrators, not users and developers.

1. Run the ``tests/update_files.py`` script with the appropriate arguments to make sure that all test data in the new release have correct experiment IDs and filenames. If any (non-model) files need to be changed, this should be investigated before the branch is released.
1. Make sure any and all tests are passing in ``master``. Make sure you have also run tests locally in strict mode (``STRICT=1 nosetests --nologcapture tests``) to catch any deprecation warnings in the HTML report that can be fixed before the release.

2. Create a release branch ``release/XX`` on GitHub.
2. Run the ``tests/update_files.py`` script with the appropriate arguments to make sure that all test data in the new release have correct experiment IDs and filenames. If any (non-model) files need to be changed, this should be investigated before the branch is released.

3. In the release branch:
3. Create a release branch ``release/XX`` on GitHub.

4. In the release branch:

a. update the version numbers in ``version.py``.

@@ -19,43 +21,43 @@ This process is only meant for the project administrators, not users and developers.

e. update the README and this release documentation, if necessary.

4. Build the PyPI source and wheel distributions using ``python setup.py sdist build`` and ``python setup.py bdist_wheel build`` respectively.
5. Build the PyPI source and wheel distributions using ``python setup.py sdist build`` and ``python setup.py bdist_wheel build`` respectively.

5. Upload the source and wheel distributions to TestPyPI using ``twine upload --repository testpypi dist/*``. You will need to have the ``twine`` package installed and set up your ``$HOME/.pypirc`` correctly. See details `here <https://packaging.python.org/guides/using-testpypi/>`__.
6. Upload the source and wheel distributions to TestPyPI using ``twine upload --repository testpypi dist/*``. You will need to have the ``twine`` package installed and set up your ``$HOME/.pypirc`` correctly. See details `here <https://packaging.python.org/guides/using-testpypi/>`__.

6. Install the TestPyPI package as follows::
7. Install the TestPyPI package as follows::

pip install --index-url https://test.pypi.org/simple/ --extra-index-url https://pypi.org/simple rsmtool

7. Then run some tests from an RSMTool working copy. If the TestPyPI package works, move on to the next step. If it doesn't, figure out why, then rebuild and re-upload the package.
8. Then run some tests from an RSMTool working copy. If the TestPyPI package works, move on to the next step. If it doesn't, figure out why, then rebuild and re-upload the package.

8. Build the new generic conda package by running the following command in the ``conda-recipe`` directory (note that this assumes that you have cloned RSMTool in a directory named ``rsmtool`` and that the latest version of ``numpy`` is ``1.18``)::
9. Build the new generic conda package by running the following command in the ``conda-recipe`` directory (note that this assumes that you have cloned RSMTool in a directory named ``rsmtool`` and that the latest version of ``numpy`` is ``1.18``)::

conda build -c conda-forge -c ets --numpy=1.18 .

9. Upload the package to anaconda.org using ``anaconda upload --user ets <package tarball>``. You will need to have the appropriate permissions for the ``ets`` organization.
10. Upload the package to anaconda.org using ``anaconda upload --user ets <package tarball>``. You will need to have the appropriate permissions for the ``ets`` organization.

10. Create pull requests on the `rsmtool-conda-tester <https://github.com/EducationalTestingService/rsmtool-conda-tester/>`_ and `rsmtool-pip-tester <https://github.com/EducationalTestingService/rsmtool-pip-tester/>`_ repositories to test the conda and TestPyPI packages on Linux and Windows.
11. Create pull requests on the `rsmtool-conda-tester <https://github.com/EducationalTestingService/rsmtool-conda-tester/>`_ and `rsmtool-pip-tester <https://github.com/EducationalTestingService/rsmtool-pip-tester/>`_ repositories to test the conda and TestPyPI packages on Linux and Windows.

11. Draft a release on GitHub while the Linux and Windows package tester builds are running.
12. Draft a release on GitHub while the Linux and Windows package tester builds are running.

12. Once both builds have passed, make a pull request with the release branch to be merged into ``master`` and request code review.
13. Once both builds have passed, make a pull request with the release branch to be merged into ``master`` and request code review.

13. Once the build for the PR passes and the reviewers approve, merge the release branch into ``master``.
14. Once the build for the PR passes and the reviewers approve, merge the release branch into ``master``.

14. Upload the source and wheel packages to PyPI using ``python setup.py sdist upload`` and ``python setup.py bdist_wheel upload`` respectively.
15. Upload the source and wheel packages to PyPI using ``python setup.py sdist upload`` and ``python setup.py bdist_wheel upload`` respectively.

15. Make sure that the ReadTheDocs build for ``master`` passes by examining the badge at this `URL <https://img.shields.io/readthedocs/rsmtool/latest>`_; it should say "passing" in green.
16. Make sure that the ReadTheDocs build for ``master`` passes by examining the badge at this `URL <https://img.shields.io/readthedocs/rsmtool/latest>`_; it should say "passing" in green.

16. Tag the latest commit in ``master`` with the appropriate release tag and publish the release on GitHub.
17. Tag the latest commit in ``master`` with the appropriate release tag and publish the release on GitHub.

17. Make another PR to merge the ``master`` branch into ``stable`` so that the default ReadTheDocs build (which is ``stable``) always points to the latest release.
18. Make another PR to merge the ``master`` branch into ``stable`` so that the default ReadTheDocs build (which is ``stable``) always points to the latest release.

18. Update the CI plan for RSMExtra (only needed for ETS users) to use this newly built RSMTool conda package. Make any other requisite changes for RSMExtra. Once everything is done, do a release of RSMExtra.
19. Update the CI plan for RSMExtra (only needed for ETS users) to use this newly built RSMTool conda package. Make any other requisite changes for RSMExtra. Once everything is done, do a release of RSMExtra.

19. Update the RSMTool conda environment on the ETS Linux servers with the latest packages for both RSMTool and RSMExtra.
20. Update the RSMTool conda environment on the ETS Linux servers with the latest packages for both RSMTool and RSMExtra.

20. Send an email around at ETS announcing the release and the changes.
21. Send an email around at ETS announcing the release and the changes.

21. Create a `Dash <https://kapeli.com/dash>`_ docset from the documentation by following the instructions :ref:`here <dash_docset>`.
22. Create a `Dash <https://kapeli.com/dash>`_ docset from the documentation by following the instructions :ref:`here <dash_docset>`.
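The "install from TestPyPI, then run some tests" step above can be sketched as a small smoke test. The helper name and checks below are hypothetical, not part of RSMTool; they only illustrate verifying that a freshly installed package imports cleanly:

```python
import importlib


def smoke_test(package_name):
    """Try to import a freshly installed package and report its version.

    Returns (ok, version); version is None when the package does not
    expose a __version__ attribute.
    """
    try:
        module = importlib.import_module(package_name)
    except ImportError:
        return False, None
    return True, getattr(module, '__version__', None)


# After `pip install --index-url https://test.pypi.org/simple/ ... rsmtool`,
# one would call smoke_test('rsmtool') and check the reported version.
```

A failed import (or an unexpected version) would indicate the TestPyPI package needs to be rebuilt and re-uploaded before proceeding.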

44 changes: 32 additions & 12 deletions rsmtool/test_utils.py
@@ -1,3 +1,4 @@
import os
import re
import sys
import warnings
@@ -37,6 +38,12 @@
tools_with_output = ['rsmtool', 'rsmeval',
'rsmsummarize', 'rsmpredict']

# check if tests are being run in strict mode
# if so, any deprecation warnings found in HTML
# reports should not be ignored
STRICT_MODE = os.environ.get('STRICT', None)
IGNORE_DEPRECATION_WARNINGS = not STRICT_MODE
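The strict-mode gate added here can be illustrated in isolation. This is a sketch that mirrors the two module-level assignments; the function wrapper and its name are added only for testability and are not part of the commit:

```python
import os


def deprecation_warnings_ignored(environ=None):
    """Mirror the strict-mode gate: any non-empty value of the STRICT
    environment variable turns strict mode on, which means deprecation
    warnings in HTML reports are no longer ignored."""
    environ = os.environ if environ is None else environ
    strict_mode = environ.get('STRICT', None)
    # equivalent to: False if strict_mode else True
    return not strict_mode
```

Note that any truthy value (not just ``1``) enables strict mode, since the flag is only checked for non-emptiness.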


def check_run_experiment(source,
experiment_id,
@@ -127,12 +134,15 @@ def check_run_experiment(source,
if consistency:
check_consistency_files_exist(output_files, experiment_id, file_format=file_format)

# check report for any errors but ignore warnings
# which we check below separately
check_report(html_report, raise_warnings=False)

# we want to ignore deprecation warnings for RSMTool, so we remove
# them from the list; then, we make sure that there are no other warnings
# make sure that there are no warnings in the report
# but ignore deprecation warnings if appropriate
warning_msgs = collect_warning_messages_from_report(html_report)
warning_msgs = [msg for msg in warning_msgs if 'DeprecationWarning' not in msg]
if IGNORE_DEPRECATION_WARNINGS:
warning_msgs = [msg for msg in warning_msgs if 'DeprecationWarning' not in msg]
assert_equal(len(warning_msgs), 0)
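The conditional filtering added in each checker can be factored as a small helper. The helper name is hypothetical (the commit inlines the list comprehension instead), but the filtering logic is the same:

```python
def filter_report_warnings(warning_msgs, ignore_deprecation=True):
    """Drop DeprecationWarning messages from a report's warning list
    when deprecation warnings are being ignored (non-strict mode)."""
    if ignore_deprecation:
        return [msg for msg in warning_msgs if 'DeprecationWarning' not in msg]
    return list(warning_msgs)
```

In strict mode (``ignore_deprecation=False``) every warning survives the filter, so a lingering ``DeprecationWarning`` in the report fails the subsequent zero-warnings assertion.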


@@ -209,12 +219,15 @@ def check_run_evaluation(source,
if consistency:
check_consistency_files_exist(output_files, experiment_id)

# check report for any errors but ignore warnings
# which we check below separately
check_report(html_report, raise_warnings=False)

# we want to ignore deprecation warnings for RSMEval, so we remove
# them from the list; then, we make sure that there are no other warnings
# make sure that there are no warnings in the report
# but ignore deprecation warnings if appropriate
warning_msgs = collect_warning_messages_from_report(html_report)
warning_msgs = [msg for msg in warning_msgs if 'DeprecationWarning' not in msg]
if IGNORE_DEPRECATION_WARNINGS:
warning_msgs = [msg for msg in warning_msgs if 'DeprecationWarning' not in msg]
assert_equal(len(warning_msgs), 0)


@@ -262,12 +275,16 @@ def check_run_comparison(source,
suppress_warnings_for=suppress_warnings_for)

html_report = join('test_outputs', source, '{}_report.html'.format(experiment_id))

# check report for any errors but ignore warnings
# which we check below separately
check_report(html_report, raise_warnings=False)

# we want to ignore deprecation warnings for RSMCompare, so we remove
# them from the list; then, we make sure that there are no other warnings
# make sure that there are no warnings in the report
# but ignore deprecation warnings if appropriate
warning_msgs = collect_warning_messages_from_report(html_report)
warning_msgs = [msg for msg in warning_msgs if 'DeprecationWarning' not in msg]
if IGNORE_DEPRECATION_WARNINGS:
warning_msgs = [msg for msg in warning_msgs if 'DeprecationWarning' not in msg]
assert_equal(len(warning_msgs), 0)


@@ -390,12 +407,15 @@ def check_run_summary(source,
if exists(expected_output_file):
check_file_output(output_file, expected_output_file)

# check report for any errors but ignore warnings
# which we check below separately
check_report(html_report, raise_warnings=False)

# we want to ignore deprecation warnings for RSMSummarize, so we remove
# them from the list; then, we make sure that there are no other warnings
# make sure that there are no warnings in the report
# but ignore deprecation warnings if appropriate
warning_msgs = collect_warning_messages_from_report(html_report)
warning_msgs = [msg for msg in warning_msgs if 'DeprecationWarning' not in msg]
if IGNORE_DEPRECATION_WARNINGS:
warning_msgs = [msg for msg in warning_msgs if 'DeprecationWarning' not in msg]
assert_equal(len(warning_msgs), 0)


19 changes: 15 additions & 4 deletions tests/test_cli.py
@@ -21,6 +21,12 @@
else:
from rsmtool.test_utils import rsmtool_test_dir

# check if tests are being run in strict mode
# if so, any deprecation warnings found in HTML
# reports should not be ignored
STRICT_MODE = os.environ.get('STRICT', None)
IGNORE_DEPRECATION_WARNINGS = not STRICT_MODE


class TestToolCLI:

@@ -118,16 +124,21 @@ def validate_run_output(self, name, experiment_dir):
check_generated_output(list(map(str, output_files)), 'lr', 'rsmtool')

# there's no report for rsmpredict but for the rest we want
# the reports to be free of warnings except deprecation warnings
# that might come from underlying packages
# the reports to be free of errors and warnings
if name != 'rsmpredict':
output_dir = Path(experiment_dir)
report_dir = output_dir / "report" if name != "rsmcompare" else output_dir
html_report = list(report_dir.glob('*_report.html'))[0]
check_report(str(html_report), raise_warnings=False)

# check report for any errors but ignore warnings
# which we check below separately
check_report(html_report, raise_warnings=False)

# make sure that there are no warnings in the report
# but ignore deprecation warnings if appropriate
warning_msgs = collect_warning_messages_from_report(html_report)
warning_msgs = [msg for msg in warning_msgs if 'DeprecationWarning' not in msg]
if IGNORE_DEPRECATION_WARNINGS:
warning_msgs = [msg for msg in warning_msgs if 'DeprecationWarning' not in msg]
eq_(len(warning_msgs), 0)
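For context, a much-simplified stand-in for `collect_warning_messages_from_report()` might scan the report text for warning-category names. This is purely illustrative; the real implementation in `rsmtool.test_utils` may work quite differently (e.g., by parsing the HTML structure):

```python
import re

# Hypothetical pattern: a Python warning-category name (something
# ending in "Warning") followed by the rest of the message up to the
# next tag or newline.
WARNING_PATTERN = re.compile(r'\b\w+Warning\b[^<\n]*')


def collect_warnings_from_text(report_text):
    """Return every warning-like snippet found in the report text."""
    return [match.group(0).strip()
            for match in WARNING_PATTERN.finditer(report_text)]
```

Under this sketch, a report containing a `DeprecationWarning` yields one message that the strict-mode check above would then refuse to filter out.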

def validate_generate_output(self, name, output, subgroups=False):
