Merge branch 'dev' into 715-pan-starrs-forced-photometry

phycodurus committed Dec 15, 2023
2 parents ab5464b + 2ebd481 commit a3b65ce
Showing 40 changed files with 810 additions and 211 deletions.
2 changes: 1 addition & 1 deletion docs/common/customsettings.rst
@@ -211,7 +211,7 @@ the documentation on `Creating an Alert Module for the TOM
Toolkit </brokers/create_broker>`__.

`TOM_ALERT_DASH_CLASSES <#tom_alert_dash_classes>`__
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Default:

3 changes: 3 additions & 0 deletions docs/customization/index.rst
@@ -8,6 +8,7 @@ Customization
customize_templates
adding_pages
customize_template_tags
testing_toms


Start here to learn how to customize the look and feel of your TOM or add new functionality.
@@ -20,3 +21,5 @@ displaying static html pages or dynamic database-driven content.

:doc:`Customizing Template Tags <customize_template_tags>` - Learn how to write your own template tags to display
the data you need.

:doc:`Testing TOMs <testing_toms>` - Learn how to test your TOM's functionality.
313 changes: 313 additions & 0 deletions docs/customization/testing_toms.rst
@@ -0,0 +1,313 @@
Testing TOMs
------------

As functionality is added to a TOM system it is highly beneficial to write
`unittests <https://docs.python.org/3/library/unittest.html>`__ for each
new function and module added. This allows you to easily and repeatably
test that the function behaves as expected, and makes error diagnosis much faster.
Although it can seem like extra work, ultimately writing unittests will save
you time in the long run. For more on why unittests are valuable,
`read this <https://docs.djangoproject.com/en/5.0/intro/tutorial05/>`__.

The TOM Toolkit provides a built-in framework for writing unittests,
including some TOM-specific factory functions that enable you to
generate examples of TOM database entries for testing. This is, in turn,
built on top of `Django's unittest framework <https://docs.djangoproject.com/en/5.0/topics/testing/overview/>`__, inheriting a wealth of
functionality. This allows test code to be written which acts on a
stand-alone test database, meaning that any input used for testing
doesn't interfere with an operational TOM database.

Code Structure and Running Tests
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

There are two main options for where the test code lives in your TOM,
depending on how you want to run the tests. Here we follow convention and refer to
the top-level directory of your TOM as the `project` and the subdirectory
for the TOM as the `application`. Although they are often called the same
name, the distinction matters because TOMs can have multiple `applications`
within the same `project`. The actual test code will be the same regardless
of which option you use - this is described in the next section.

Option 1: Use the built-in manage.py application to run the tests
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

With this option, test code is run using ``manage.py``, just like management
commands. ``manage.py`` is designed to consider any file with a filename
prefixed by ``test_`` within the project directory and subdirectories as test code,
and it will find them automatically.

If you choose this option, then it's good practice to place test code in the subdirectory of
the application it applies to, in a file called ``tests.py``. The code structure for this option would be:

::

    mytom/
    ├── data/
    ├── db.sqlite3
    ├── manage.py
    ├── mytom/
    │   ├── __init__.py
    │   ├── settings.py
    │   ├── urls.py
    │   ├── wsgi.py
    │   └── tests.py
    ├── static/
    ├── templates/
    └── tmp/

This gives you quite fine-grained control when it comes to running the tests. You can either run
all tests at once:

::

    $ ./manage.py test

...run just the tests for a specific application...

::

    $ ./manage.py test mytom.tests


...run a specific test class (more on these below) for a specific application...

::

    $ ./manage.py test mytom.tests.MyTestCase

...or run just a particular method of a single TestClass:

::

    $ ./manage.py test mytom.tests.MyTestCase.test_my_function

Option 2: Use a test runner
+++++++++++++++++++++++++++

Using a test runner script instead of ``manage.py`` can be useful because it
allows more sophisticated control over the settings used specifically for
testing purposes, independently of the settings used for the TOM in
operation. This approach is common in applications that are designed to be
reusable. For more information, see `Testing Reusable Applications
<https://docs.djangoproject.com/en/5.0/topics/testing/advanced/#testing-reusable-applications>`_.

With this option, it's good practice to place your unittest code in a
single subdirectory in the top-level directory of your TOM, normally
called ``tests/``, and to add the test runner script in the project directory:

::

    mytom/
    ├── data/
    ├── db.sqlite3
    ├── manage.py
    ├── runtests.py
    ├── mytom/
    │   ├── __init__.py
    │   ├── settings.py
    │   ├── urls.py
    │   └── wsgi.py
    ├── static/
    ├── templates/
    ├── tmp/
    └── tests/
        ├── test_settings.py
        └── tests.py

All files with test code within the ``tests/`` subdirectory should have
filenames prefixed with ``test_``.

In addition to the test functions, which are located in ``tests.py``, this
option requires two short extra files:

.. code-block:: python

    # test_settings.py
    SECRET_KEY = "fake-key"

    INSTALLED_APPS = [
        'whitenoise.runserver_nostatic',
        'django.contrib.admin',
        'django.contrib.auth',
        'django.contrib.contenttypes',
        'django.contrib.sessions',
        'django.contrib.messages',
        'django.contrib.staticfiles',
        'django.contrib.sites',
        'django_extensions',
        'guardian',
        'tom_common',
        'django_comments',
        'bootstrap4',
        'crispy_bootstrap4',
        'crispy_forms',
        'django_filters',
        'django_gravatar',
        'rest_framework',
        'rest_framework.authtoken',
        'tom_targets',
        'tom_alerts',
        'tom_catalogs',
        'tom_observations',
        'tom_dataproducts',
        'mytom',
        'tests',
    ]

Note that you may need to extend the contents of the ``test_settings.py`` file,
often by adding the corresponding information from your TOM's main
``settings.py``, depending on the information required by your tests.

.. code-block:: python

    # runtests.py
    import argparse
    import os
    import sys

    import django
    from django.conf import settings
    from django.test.utils import get_runner


    def get_args():
        parser = argparse.ArgumentParser()
        parser.add_argument('module', help="name of module to test, or 'all'")
        return parser.parse_args()


    if __name__ == "__main__":
        options = get_args()
        os.environ["DJANGO_SETTINGS_MODULE"] = "tests.test_settings"
        django.setup()
        TestRunner = get_runner(settings)
        test_runner = TestRunner()
        if options.module == 'all':
            failures = test_runner.run_tests(["tests"])
        else:
            failures = test_runner.run_tests([options.module])
        sys.exit(bool(failures))

The test runner offers you similarly fine-grained control over whether to
run all of the tests in your application at once, or a single function,
using the following syntax:

::

    $ python runtests.py tests
    $ python runtests.py tests.test_mytom
    $ python runtests.py tests.test_mytom.TestCase
    $ python runtests.py tests.test_mytom.TestCase.test_my_function

Writing Unittests
~~~~~~~~~~~~~~~~~

Regardless of how they are run, the anatomy of a unittest will be the same.
Unittests are composed as `classes`, inheriting from Django's ``TestCase`` class.

.. code-block:: python

    # tests/test_mytom.py
    from django.test import TestCase


    class TestMyFunctions(TestCase):
        ...

Each test class needs at least one test method to be valid, and will
usually also define a ``setUp`` method. As the name suggests, ``setUp``
configures the parameters of the test, for instance establishing any
input data necessary for the test. These data should then be stored as
attributes of the ``TestCase`` instance so that they are available when the
test is run. As a simple example, suppose you have written a function in
your TOM called ``calc_gal_coords`` that converts a star's RA, Dec to
galactic coordinates. This function is stored in the file ``myfunctions.py``.

::

    mytom/
    ├── data/
    ├── db.sqlite3
    ├── manage.py
    ├── mytom/
    │   ├── __init__.py
    │   ├── settings.py
    │   ├── urls.py
    │   ├── wsgi.py
    │   ├── myfunctions.py
    │   └── tests.py
    ├── static/
    ├── templates/
    └── tmp/
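The page doesn't define ``calc_gal_coords`` itself. A self-contained sketch, purely for illustration (in a real TOM you would more likely wrap ``astropy.coordinates.SkyCoord(...).galactic``), using the standard J2000 equatorial-to-galactic rotation:

```python
# mytom/myfunctions.py -- hypothetical implementation for illustration only
from math import asin, atan2, cos, degrees, radians, sin

# J2000 coordinates of the north galactic pole, and the galactic
# longitude of the north celestial pole
RA_NGP = radians(192.85948)
DEC_NGP = radians(27.12825)
L_NCP = 122.93192


def calc_gal_coords(ra, dec):
    """Convert J2000 RA, Dec (decimal degrees) to galactic (l, b) in degrees."""
    ra_r, dec_r = radians(ra), radians(dec)
    # galactic latitude from the spherical cosine rule
    gal_b = asin(sin(dec_r) * sin(DEC_NGP)
                 + cos(dec_r) * cos(DEC_NGP) * cos(ra_r - RA_NGP))
    # galactic longitude, measured from the north celestial pole's longitude
    gal_l = L_NCP - degrees(atan2(
        cos(dec_r) * sin(ra_r - RA_NGP),
        sin(dec_r) * cos(DEC_NGP)
        - cos(dec_r) * sin(DEC_NGP) * cos(ra_r - RA_NGP)))
    return (gal_l % 360.0, degrees(gal_b))
```

For the coordinates used in the test below, this rotation agrees with the expected ``(l, b)`` values to within a few milli-degrees.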

In order to test this, we need to set up some input data in the form of
coordinates. We could do this just by setting some input RA, Dec values
as purely numerical attributes. However, bearing in
mind that the TOM stores this information as an entry in its
database, a more realistic test would present that information in the
form of a :doc:`Target object <../targets/index>`. The Toolkit includes a number of
``factory`` classes designed to make it easy to create realistic input
data for testing purposes.

.. code-block:: python

    # tests/test_mytom.py
    from django.test import TestCase

    from mytom.myfunctions import calc_gal_coords
    from tom_targets.tests.factories import SiderealTargetFactory


    class TestMyFunctions(TestCase):
        def setUp(self):
            self.target = SiderealTargetFactory.create()
            self.target.name = 'test_target'
            self.target.ra = 262.71041667
            self.target.dec = -28.50847222

A test method can now be added to complete the TestCase, which calls
the TOM's function with the test input and compares the results from
the function with the expected output using an ``assert``
statement. Python includes ``assert`` natively, but you can also use
`Numpy's testing suite <https://numpy.org/doc/stable/reference/routines.testing.html>`__
or the methods inherited from the ``TestCase`` class.

.. code-block:: python

    # tests/test_mytom.py
    from django.test import TestCase

    from mytom.myfunctions import calc_gal_coords
    from tom_targets.tests.factories import SiderealTargetFactory


    class TestMyFunctions(TestCase):
        def setUp(self):
            self.target = SiderealTargetFactory.create()
            self.target.name = 'test_target'
            self.target.ra = 262.71041667
            self.target.dec = -28.50847222

        def test_calc_gal_coords(self):
            expected_l = 358.62948127
            expected_b = 2.96696435
            (test_l, test_b) = calc_gal_coords(self.target.ra,
                                               self.target.dec)
            self.assertEqual(test_l, expected_l)
            self.assertEqual(test_b, expected_b)

You can add as many additional test methods to a ``TestCase`` as you like.
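Note that exact equality checks on floating-point coordinates can be brittle. ``TestCase`` also inherits ``assertAlmostEqual``, which compares to a given precision. A standalone sketch with hypothetical values, runnable outside Django:

```python
import unittest


class TestFloatComparison(unittest.TestCase):
    def test_close_values(self):
        # passes if the values agree to 7 decimal places (the default)...
        self.assertAlmostEqual(358.62948127, 358.629481271)
        # ...or to an explicit absolute tolerance via delta=
        self.assertAlmostEqual(2.96696435, 2.9670, delta=1e-3)


suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestFloatComparison)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```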

TOM's Built-in Tests and Factory Functions
++++++++++++++++++++++++++++++++++++++++++

The Toolkit provides a number of factory functions to generate input
data to test various objects in a TOM system. These can be found in the ``tests``
subdirectory of the core modules of the TOM Toolkit:

- Targets: `tom_base/tom_targets/tests/factories.py <https://github.com/TOMToolkit/tom_base/blob/dev/tom_targets/tests/factories.py>`__
- Observations: `tom_base/tom_observations/tests/factories.py <https://github.com/TOMToolkit/tom_base/blob/dev/tom_observations/tests/factories.py>`__

The ``tests`` subdirectories for the Toolkit's core modules are also a great
resource if you are looking for more complex examples of test code.
14 changes: 14 additions & 0 deletions docs/managing_data/customizing_data_processing.rst
@@ -48,6 +48,20 @@ In order to add new data product types, simply add a new key/value pair,
with the value being a 2-tuple. The first tuple item is the database
value, and the second is the display value.

When a ``DataProduct`` is uploaded, its path is set by a ``data_product_path``. By default this is set to
``{target}/{facility}/{filename}``, but it can be overridden by setting ``DATA_PRODUCT_PATH`` in
``settings.py`` to a dot-separated method path that takes a ``DataProduct`` and a filename as arguments. For example, if
you wanted to set the ``data_product_path`` to ``{target}/{facility}/{observation_id}/{filename}``, you would point
``DATA_PRODUCT_PATH`` to a custom method that looks something like this:

.. code:: python

    def custom_data_product_path(data_product, filename):
        return f'{data_product.target.name}/' \
               f'{data_product.observation_record.facility}/' \
               f'{data_product.observation_record.observation_id}/' \
               f'{filename}'

All data products are automatically “processed” on upload, as well. Of
course, that can mean different things to different TOMs! The TOM has
two built-in data processors, both of which simply ingest the data into
4 changes: 3 additions & 1 deletion docs/managing_data/forced_photometry.rst
@@ -1,5 +1,5 @@
Integrating Forced Photometry Service Queries
---------------------------------------
---------------------------------------------

The base TOM Toolkit comes with Atlas, panSTARRS, and ZTF query services. More services
can be added by extending the base ForcedPhotometryService implementation.
@@ -13,6 +13,7 @@ photometry services. This configuration will go in the ``FORCED_PHOTOMETRY_SERVI
shown below:

.. code:: python

    FORCED_PHOTOMETRY_SERVICES = {
        'ATLAS': {
            'class': 'tom_dataproducts.forced_photometry.atlas.AtlasForcedPhotometryService',
@@ -49,6 +50,7 @@ or `rabbitmq <https://github.com/rabbitmq/rabbitmq-server>`_ server to act as th
a redis server, you would add the following to your ``settings.py``:

.. code:: python

    INSTALLED_APPS = [
        ...
        'django_dramatiq',
4 changes: 3 additions & 1 deletion docs/observing/observation_module.rst
@@ -114,7 +114,9 @@ from two other classes.
logic” for interacting with the remote observatory. This includes
methods to submit observations, check observation status, etc. It
inherits from ``BaseRoboticObservationFacility``, which contains some
functionality that all observation facility classes will want.
functionality that all observation facility classes will want. You
can access the user within your facility implementation using
``self.user`` if you need it for any API requests.

``MyObservationFacilityForm`` is the class that will display a GUI form
for our users to create an observation. We can submit observations