diff --git a/notebooks/21_DLC.ipynb b/notebooks/21_DLC.ipynb
new file mode 100644
index 000000000..1c1756c0d
--- /dev/null
+++ b/notebooks/21_DLC.ipynb
@@ -0,0 +1,2183 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "id": "a93a1550-8a67-4346-a4bf-e5a136f3d903",
+ "metadata": {},
+ "source": [
+ "## Position- DeepLabCut from Scratch"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "13dd3267",
+ "metadata": {},
+ "source": [
+ "### Overview"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "b52aff0d",
+ "metadata": {},
+ "source": [
+ "_Developer Note:_ if you may make a PR in the future, be sure to copy this\n",
+ "notebook, and use the `gitignore` prefix `temp` to avoid future conflicts.\n",
+ "\n",
+ "This is one notebook in a multi-part series on Spyglass.\n",
+ "\n",
+ "- To set up your Spyglass environment and database, see\n",
+ " [the Setup notebook](./00_Setup.ipynb)\n",
+ "- For additional info on DataJoint syntax, including table definitions and\n",
+ " inserts, see\n",
+ " [the Insert Data notebook](./01_Insert_Data.ipynb)\n",
+ "\n",
+ "This tutorial will extract position via DeepLabCut (DLC). It will walk through...\n",
+ "\n",
+ "- creating a DLC project\n",
+ "- extracting and labeling frames\n",
+ "- training your model\n",
+ "- executing pose estimation on a novel behavioral video\n",
+ "- processing the pose estimation output to extract a centroid and orientation\n",
+ "- inserting the resulting information into the `PositionOutput` table\n",
+ "\n",
+ "**Note 2: Make sure you are running this within the spyglass-position Conda environment (instructions for install are in the environment_position.yml)**"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "a8b531f7",
+ "metadata": {},
+ "source": [
+ "Here is a schematic showing the tables used in this pipeline.\n",
+ "\n",
+ "![dlc_scratch.png|2000x900](./../notebook-images/dlc_scratch.png)\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "0c67d88c-c90e-467b-ae2e-672c49a12f95",
+ "metadata": {},
+ "source": [
+ "### Table of Contents\n",
+ "[`DLCProject`](#DLCProject1)
\n",
+ "[`DLCModelTraining`](#DLCModelTraining1)
\n",
+ "[`DLCModel`](#DLCModel1)
\n",
+ "[`DLCPoseEstimation`](#DLCPoseEstimation1)
\n",
+ "[`DLCSmoothInterp`](#DLCSmoothInterp1)
\n",
+ "[`DLCCentroid`](#DLCCentroid1)
\n",
+ "[`DLCOrientation`](#DLCOrientation1)
\n",
+ "[`DLCPosV1`](#DLCPosV1-1)
\n",
+ "[`DLCPosVideo`](#DLCPosVideo1)
\n",
+ "[`PositionOutput`](#PositionOutput1)
"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "70a0a678",
+ "metadata": {},
+ "source": [
+ "__You can click on any header to return to the Table of Contents__"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "c9b98c3d",
+ "metadata": {},
+ "source": [
+ "### Imports"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 12,
+ "id": "968d5189",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "%load_ext autoreload\n",
+ "%autoreload 2"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 13,
+ "id": "0f567531",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import os\n",
+ "import datajoint as dj\n",
+ "from pprint import pprint\n",
+ "\n",
+ "import spyglass.common as sgc\n",
+ "import spyglass.position.v1 as sgp\n",
+ "\n",
+ "from pathlib import Path, PosixPath, PurePath\n",
+ "import glob\n",
+ "import numpy as np\n",
+ "import pandas as pd\n",
+ "import pynwb\n",
+ "from spyglass.position import PositionOutput\n",
+ "\n",
+ "# change to the upper level folder to detect dj_local_conf.json\n",
+ "if os.path.basename(os.getcwd()) == \"notebooks\":\n",
+ " os.chdir(\"..\")\n",
+ "dj.config.load(\"dj_local_conf.json\") # load config for database connection info\n",
+ "\n",
+ "# ignore datajoint+jupyter async warnings\n",
+ "import warnings\n",
+ "\n",
+ "warnings.simplefilter(\"ignore\", category=DeprecationWarning)\n",
+ "warnings.simplefilter(\"ignore\", category=ResourceWarning)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "5e6221a3-17e5-45c0-aa40-2fd664b02219",
+ "metadata": {},
+ "source": [
+ "#### [DLCProject](#TableOfContents) "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "27aed0e1-3af7-4499-bae8-96a64e81041e",
+ "metadata": {},
+ "source": [
+ "
\n",
+ "
Notes:\n",
+ " - \n",
+ " The cells within this
DLCProject
step need to be performed \n",
+ " in a local Jupyter notebook to allow for use of the frame labeling GUI.\n",
+ " \n",
+ " - \n",
+ " Please do not add to the
BodyPart
table in the production \n",
+ " database unless necessary.\n",
+ " \n",
+ "
\n",
+ "
\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "50c9f1c9",
+ "metadata": {},
+ "source": [
+ "### Body Parts"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "96637cb9-519d-41e1-8bfd-69f68dc66b36",
+ "metadata": {},
+ "source": [
+ "We'll begin by looking at the `BodyPart` table, which stores standard names of body parts used in DLC models throughout the lab with a concise description."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 14,
+ "id": "b69f829f-9877-48ae-89d1-f876af2b8835",
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/html": [
+ "\n",
+ " \n",
+ " \n",
+ " \n",
+ " \n",
+ " \n",
+ "
\n",
+ " | |
\n",
+ " back | \n",
+ "middle of the rat's back |
driveBack | \n",
+ "back of drive |
driveFront | \n",
+ "front of drive |
earL | \n",
+ "left ear of the rat |
earR | \n",
+ "right ear of the rat |
forelimbL | \n",
+ "left forelimb of the rat |
forelimbR | \n",
+ "right forelimb of the rat |
greenLED | \n",
+ "greenLED |
hindlimbL | \n",
+ "left hindlimb of the rat |
hindlimbR | \n",
+ "right hindlimb of the rat |
nose | \n",
+ "tip of the nose of the rat |
redLED_C | \n",
+ "redLED_C |
\n",
+ "
\n",
+ "
...
\n",
+ "
Total: 23
\n",
+ " "
+ ],
+ "text/plain": [
+ "*bodypart bodypart_descr\n",
+ "+------------+ +------------+\n",
+ "back middle of the \n",
+ "driveBack back of drive \n",
+ "driveFront front of drive\n",
+ "earL left ear of th\n",
+ "earR right ear of t\n",
+ "forelimbL left forelimb \n",
+ "forelimbR right forelimb\n",
+ "greenLED greenLED \n",
+ "hindlimbL left hindlimb \n",
+ "hindlimbR right hindlimb\n",
+ "nose tip of the nos\n",
+ "redLED_C redLED_C \n",
+ " ...\n",
+ " (Total: 23)"
+ ]
+ },
+ "execution_count": 14,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "sgp.BodyPart()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "9616512e",
+ "metadata": {},
+ "source": [
+ "If the bodyparts you plan to use in your model are not yet in the table, here is code to add bodyparts:\n",
+ "\n",
+ "```python\n",
+ "sgp.BodyPart.insert(\n",
+ " [\n",
+ " {\"bodypart\": \"bp_1\", \"bodypart_description\": \"concise descrip\"},\n",
+ " {\"bodypart\": \"bp_2\", \"bodypart_description\": \"concise descrip\"},\n",
+ " ],\n",
+ " skip_duplicates=True,\n",
+ ")\n",
+ "```"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "57b590d3",
+ "metadata": {},
+ "source": [
+ "### Define videos and camera name (optional) for training set"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "5d5aae37",
+ "metadata": {},
+ "source": [
+ "To train a model, we'll need to extract frames, which we can label as training data. We can construct a list of videos from which we'll extract frames.\n",
+ "\n",
+ "The list can either contain dictionaries identifying behavioral videos for NWB files that have already been added to Spyglass, or absolute file paths to the videos you want to use.\n",
+ "\n",
+ "For this tutorial, we'll use two videos for which we already have frames labeled."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "7b5e157b",
+ "metadata": {},
+ "source": [
+ "Defining camera name is optional: it should be done in cases where there are multiple cameras streaming per epoch, but not necessary otherwise.
\n",
+ "example:\n",
+ "`camera_name = \"HomeBox_camera\" \n",
+ " `"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "56f45e7f",
+ "metadata": {},
+ "source": [
+ "_NOTE:_ The official release of Spyglass does not yet support multicamera\n",
+ "projects. You can monitor progress on the effort to add this feature by checking\n",
+ "[this PR](https://github.com/LorenFrankLab/spyglass/pull/684) or use\n",
+ "[this experimental branch](https://github.com/dpeg22/spyglass/tree/add-multi-camera),\n",
+ "which takes the keys nwb_file_name and epoch, and camera_name in the video_list variable.\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 15,
+ "id": "15971506",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "video_list = [\n",
+ " {\"nwb_file_name\": \"J1620210529_.nwb\", \"epoch\": 2},\n",
+ " {\"nwb_file_name\": \"peanut20201103_.nwb\", \"epoch\": 4},\n",
+ "]"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "a9f8e43d",
+ "metadata": {},
+ "source": [
+ "### Path variables\n",
+ "\n",
+ "The position pipeline also keeps track of paths for project, video, and output.\n",
+ "Just like we saw in [Setup](./00_Setup.ipynb), you can manage these either with\n",
+ "environmental variables...\n",
+ "\n",
+ "```bash\n",
+ "export DLC_PROJECT_DIR=\"/nimbus/deeplabcut/projects\"\n",
+ "export DLC_VIDEO_DIR=\"/nimbus/deeplabcut/video\"\n",
+ "export DLC_OUTPUT_DIR=\"/nimbus/deeplabcut/output\"\n",
+ "```\n",
+ "\n",
+ "\n",
+ "\n",
+ "Or these can be set in your datajoint config:\n",
+ "\n",
+ "```json\n",
+ "{\n",
+ " \"custom\": {\n",
+ " \"dlc_dirs\": {\n",
+ " \"base\": \"/nimbus/deeplabcut/\",\n",
+ " \"project\": \"/nimbus/deeplabcut/projects\",\n",
+ " \"video\": \"/nimbus/deeplabcut/video\",\n",
+ " \"output\": \"/nimbus/deeplabcut/output\"\n",
+ " }\n",
+ " }\n",
+ "}\n",
+ "```\n",
+ "\n",
+ "_NOTE:_ If only `base` is specified as shown above, spyglass will assume the\n",
+ "relative directories shown.\n",
+ "\n",
+ "You can check the result of this setup process with..."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 16,
+ "id": "49d7d9fc",
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "{'debug_mode': False,\n",
+ " 'prepopulate': True,\n",
+ " 'SPYGLASS_BASE_DIR': '/stelmo/nwb',\n",
+ " 'SPYGLASS_RAW_DIR': '/stelmo/nwb/raw',\n",
+ " 'SPYGLASS_ANALYSIS_DIR': '/stelmo/nwb/analysis',\n",
+ " 'SPYGLASS_RECORDING_DIR': '/stelmo/nwb/recording',\n",
+ " 'SPYGLASS_SORTING_DIR': '/stelmo/nwb/sorting',\n",
+ " 'SPYGLASS_WAVEFORMS_DIR': '/stelmo/nwb/waveforms',\n",
+ " 'SPYGLASS_TEMP_DIR': '/stelmo/nwb/tmp/spyglass',\n",
+ " 'SPYGLASS_VIDEO_DIR': '/stelmo/nwb/video',\n",
+ " 'KACHERY_CLOUD_DIR': '/stelmo/nwb/.kachery-cloud',\n",
+ " 'KACHERY_STORAGE_DIR': '/stelmo/nwb/kachery_storage',\n",
+ " 'KACHERY_TEMP_DIR': '/stelmo/nwb/tmp',\n",
+ " 'DLC_PROJECT_DIR': '/nimbus/deeplabcut/projects',\n",
+ " 'DLC_VIDEO_DIR': '/nimbus/deeplabcut/video',\n",
+ " 'DLC_OUTPUT_DIR': '/nimbus/deeplabcut/output',\n",
+ " 'KACHERY_ZONE': 'franklab.default',\n",
+ " 'FIGURL_CHANNEL': 'franklab2',\n",
+ " 'DJ_SUPPORT_FILEPATH_MANAGEMENT': 'TRUE',\n",
+ " 'KACHERY_CLOUD_EPHEMERAL': 'TRUE',\n",
+ " 'HD5_USE_FILE_LOCKING': 'FALSE'}"
+ ]
+ },
+ "execution_count": 16,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "from spyglass.settings import config\n",
+ "\n",
+ "config"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "32c023b0-d00d-40b0-9a37-d0d3e4a4ae2a",
+ "metadata": {},
+ "source": [
+ "Before creating our project, we need to define a few variables.\n",
+ "\n",
+ "- A team name, as shown in `LabTeam` for setting permissions. Here, we'll\n",
+ " use \"LorenLab\".\n",
+ "- A `project_name`, as a unique identifier for this DLC project. Here, we'll use\n",
+ " **\"tutorial_scratch_yourinitials\"**\n",
+ "- `bodyparts` is a list of body parts for which we want to extract position.\n",
+ " The pre-labeled frames we're using include the bodyparts listed below.\n",
+ "- Number of frames to extract/label as `frames_per_video`. Note that the DLC creators recommend having 200 frames as the minimum total number for each project."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 17,
+ "id": "347e98f1",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "project name: tutorial_scratch_DG is already in use.\n"
+ ]
+ }
+ ],
+ "source": [
+ "team_name = \"LorenLab\"\n",
+ "project_name = \"tutorial_scratch_DG\"\n",
+ "frames_per_video = 100\n",
+ "bodyparts = [\"redLED_C\", \"greenLED\", \"redLED_L\", \"redLED_R\", \"tailBase\"]\n",
+ "project_key = sgp.DLCProject.insert_new_project(\n",
+ " project_name=project_name,\n",
+ " bodyparts=bodyparts,\n",
+ " lab_team=team_name,\n",
+ " frames_per_video=frames_per_video,\n",
+ " video_list=video_list,\n",
+ " skip_duplicates=True,\n",
+ ")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "f5d83452-48eb-4669-89eb-a6beb1f2d051",
+ "metadata": {},
+ "source": [
+ "Now that we've intialized our project we'll need to extract frames which we will then label. "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "7d8b1595",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "#comment this line out after you finish frame extraction for each project\n",
+ "sgp.DLCProject().run_extract_frames(project_key)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "68110734",
+ "metadata": {},
+ "source": [
+ "This is the line used to label the frames you extracted, if you wish to use the DLC GUI on the computer you are currently using.\n",
+ "```#comment this line out after frames are labeled for your project\n",
+ "sgp.DLCProject().run_label_frames(project_key)\n",
+ "```"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "8b241030",
+ "metadata": {},
+ "source": [
+ "Otherwise, it is best/easiest practice to label the frames on your local computer (like a MacBook) that can run DeepLabCut's GUI well. Instructions:
\n",
+ "1. Install DLC on your local (preferably into a 'Src' folder): https://deeplabcut.github.io/DeepLabCut/docs/installation.html\n",
+ "2. Upload frames extracted and saved in nimbus (should be `/nimbus/deeplabcut//labeled-data`) AND the project's associated config file (should be `/nimbus/deeplabcut//config.yaml`) to Box (we get free with UCSF)\n",
+ "3. Download labeled-data and config files on your local from Box\n",
+ "4. Create a 'projects' folder where you installed DeepLabCut; create a new folder with your complete project name there; save the downloaded files there.\n",
+ "4. Edit the config.yaml file: line 9 defining `project_path` needs to be the file path where it is saved on your local (ex: `/Users/lorenlab/Src/DeepLabCut/projects/tutorial_sratch_DG-LorenLab-2023-08-16`)\n",
+ "5. Open the DLC GUI through terminal \n",
+ "
(ex: `conda activate miniconda/envs/DEEPLABCUT_M1`\n",
+ "\t\t
`pythonw -m deeplabcut`)\n",
+ "6. Load an existing project; choose the config.yaml file\n",
+ "7. Label frames; labeling tutorial: https://www.youtube.com/watch?v=hsA9IB5r73E.\n",
+ "8. Once all frames are labeled, you should re-upload labeled-data folder back to Box and overwrite it in the original nimbus location so that your completed frames are ready to be used in the model."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "c12dd229-2f8b-455a-a7b1-a20916cefed9",
+ "metadata": {},
+ "source": [
+ "Now we can check the `DLCProject.File` part table and see all of our training files and videos there!"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 18,
+ "id": "3d4f3fa6-cce9-4d4a-a252-3424313c6a97",
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/html": [
+ "\n",
+ " \n",
+ " \n",
+ " \n",
+ " Paths of training files (e.g., labeled pngs, CSV or video)\n",
+ " \n",
+ "
\n",
+ " | | | |
\n",
+ " tutorial_scratch_DG | \n",
+ "20201103_peanut_04_r2 | \n",
+ "mp4 | \n",
+ "/nimbus/deeplabcut/projects/tutorial_scratch_DG-LorenLab-2023-08-16/videos/20201103_peanut_04_r2.mp4 |
tutorial_scratch_DG | \n",
+ "20201103_peanut_04_r2_labeled_data | \n",
+ "h5 | \n",
+ "/nimbus/deeplabcut/projects/tutorial_scratch_DG-LorenLab-2023-08-16/labeled-data/20201103_peanut_04_r2/CollectedData_LorenLab.h5 |
tutorial_scratch_DG | \n",
+ "20210529_J16_02_r1 | \n",
+ "mp4 | \n",
+ "/nimbus/deeplabcut/projects/tutorial_scratch_DG-LorenLab-2023-08-16/videos/20210529_J16_02_r1.mp4 |
tutorial_scratch_DG | \n",
+ "20210529_J16_02_r1_labeled_data | \n",
+ "h5 | \n",
+ "/nimbus/deeplabcut/projects/tutorial_scratch_DG-LorenLab-2023-08-16/labeled-data/20210529_J16_02_r1/CollectedData_LorenLab.h5 |
\n",
+ "
\n",
+ " \n",
+ "
Total: 4
\n",
+ " "
+ ],
+ "text/plain": [
+ "*project_name *file_name *file_ext file_path \n",
+ "+------------+ +------------+ +----------+ +------------+\n",
+ "tutorial_scrat 20201103_peanu mp4 /nimbus/deepla\n",
+ "tutorial_scrat 20201103_peanu h5 /nimbus/deepla\n",
+ "tutorial_scrat 20210529_J16_0 mp4 /nimbus/deepla\n",
+ "tutorial_scrat 20210529_J16_0 h5 /nimbus/deepla\n",
+ " (Total: 4)"
+ ]
+ },
+ "execution_count": 18,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "sgp.DLCProject.File & project_key"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "7e2e3eab-60c7-4a3c-bc8f-fd4e8dcf52a2",
+ "metadata": {},
+ "source": [
+ "\n",
+ " This step and beyond should be run on a GPU-enabled machine.\n",
+ "
"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "0e48ecf0",
+ "metadata": {},
+ "source": [
+ "#### [DLCModelTraining](#ToC)\n",
+ "\n",
+ "Please make sure you're running this notebook on a GPU-enabled machine.\n",
+ "\n",
+ "Now that we've imported existing frames, we can get ready to train our model.\n",
+ "\n",
+ "First, we'll need to define a set of parameters for `DLCModelTrainingParams`, which will get used by DeepLabCut during training. Let's start with `gputouse`,\n",
+ "which determines which GPU core to use.\n",
+ "\n",
+ "The cell below determines which core has space and set the `gputouse` variable\n",
+ "accordingly.\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 19,
+ "id": "a8fc5bb7",
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "{0: 305}"
+ ]
+ },
+ "execution_count": 19,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "sgp.dlc_utils.get_gpu_memory()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "bca035a9",
+ "metadata": {},
+ "source": [
+ "Set GPU core:\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 20,
+ "id": "1ff0e393",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "gputouse = 1 # 1-9"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "2b047686",
+ "metadata": {},
+ "source": [
+ "Now we'll define the rest of our parameters and insert the entry.\n",
+ "\n",
+ "To see all possible parameters, try:\n",
+ "\n",
+ "```python\n",
+ "sgp.DLCModelTrainingParams.get_accepted_params()\n",
+ "```\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 21,
+ "id": "399581ee",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "New param set not added\n",
+ "A param set with name: tutorial already exists\n"
+ ]
+ }
+ ],
+ "source": [
+ "training_params_name = \"tutorial\"\n",
+ "sgp.DLCModelTrainingParams.insert_new_params(\n",
+ " paramset_name=training_params_name,\n",
+ " params={\n",
+ " \"trainingsetindex\": 0,\n",
+ " \"shuffle\": 1,\n",
+ " \"gputouse\": gputouse,\n",
+ " \"net_type\": \"resnet_50\",\n",
+ " \"augmenter_type\": \"imgaug\",\n",
+ " },\n",
+ " skip_duplicates=True,\n",
+ ")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "6b6cc709",
+ "metadata": {},
+ "source": [
+ "Next we'll modify the `project_key` from above to include the necessary entries for `DLCModelTraining`"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 22,
+ "id": "7acd150b",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# project_key['project_path'] = os.path.dirname(project_key['config_path'])\n",
+ "if \"config_path\" in project_key:\n",
+ " del project_key[\"config_path\"]"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "0bc7ddaa",
+ "metadata": {},
+ "source": [
+ "We can insert an entry into `DLCModelTrainingSelection` and populate `DLCModelTraining`.\n",
+ "\n",
+ "_Note:_ You can stop training at any point using `I + I` or interrupt the Kernel. \n",
+ "\n",
+ "The maximum total number of training iterations is 1030000; you can end training before this amount if the loss rate (lr) and total loss plateau and are very close to 0.\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 23,
+ "id": "3c252541",
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "project_name : varchar(100) # name of DLC project\n",
+ "dlc_training_params_name : varchar(50) # descriptive name of parameter set\n",
+ "training_id : int # unique integer,\n",
+ "---\n",
+ "model_prefix=\"\" : varchar(32) # "
+ ]
+ },
+ "execution_count": 23,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "sgp.DLCModelTrainingSelection.heading"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 24,
+ "id": "139d2f30",
+ "metadata": {
+ "tags": []
+ },
+ "outputs": [
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "2024-01-18 10:23:30.406102: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: SSE4.1 SSE4.2 AVX AVX2 FMA\n",
+ "To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.\n"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Loading DLC 2.2.3...\n",
+ "OpenCV is built with OpenMP support. This usually results in poor performance. For details, see https://github.com/tensorpack/benchmarks/blob/master/ImageNet/benchmark-opencv-resize.py\n"
+ ]
+ },
+ {
+ "ename": "PermissionError",
+ "evalue": "[Errno 13] Permission denied: '/nimbus/deeplabcut/projects/tutorial_scratch_DG-LorenLab-2023-08-16/log.log'",
+ "output_type": "error",
+ "traceback": [
+ "\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
+ "\u001b[0;31mPermissionError\u001b[0m Traceback (most recent call last)",
+ "Cell \u001b[0;32mIn[24], line 16\u001b[0m\n\u001b[1;32m 1\u001b[0m sgp\u001b[38;5;241m.\u001b[39mDLCModelTrainingSelection()\u001b[38;5;241m.\u001b[39minsert1(\n\u001b[1;32m 2\u001b[0m {\n\u001b[1;32m 3\u001b[0m \u001b[38;5;241m*\u001b[39m\u001b[38;5;241m*\u001b[39mproject_key,\n\u001b[0;32m (...)\u001b[0m\n\u001b[1;32m 7\u001b[0m }\n\u001b[1;32m 8\u001b[0m )\n\u001b[1;32m 9\u001b[0m model_training_key \u001b[38;5;241m=\u001b[39m (\n\u001b[1;32m 10\u001b[0m sgp\u001b[38;5;241m.\u001b[39mDLCModelTrainingSelection\n\u001b[1;32m 11\u001b[0m \u001b[38;5;241m&\u001b[39m {\n\u001b[0;32m (...)\u001b[0m\n\u001b[1;32m 14\u001b[0m }\n\u001b[1;32m 15\u001b[0m )\u001b[38;5;241m.\u001b[39mfetch1(\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mKEY\u001b[39m\u001b[38;5;124m\"\u001b[39m)\n\u001b[0;32m---> 16\u001b[0m \u001b[43msgp\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mDLCModelTraining\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mpopulate\u001b[49m\u001b[43m(\u001b[49m\u001b[43mmodel_training_key\u001b[49m\u001b[43m)\u001b[49m\n",
+ "File \u001b[0;32m~/anaconda3/envs/spyglass-position/lib/python3.9/site-packages/datajoint/autopopulate.py:241\u001b[0m, in \u001b[0;36mAutoPopulate.populate\u001b[0;34m(self, suppress_errors, return_exception_objects, reserve_jobs, order, limit, max_calls, display_progress, processes, make_kwargs, *restrictions)\u001b[0m\n\u001b[1;32m 237\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m processes \u001b[38;5;241m==\u001b[39m \u001b[38;5;241m1\u001b[39m:\n\u001b[1;32m 238\u001b[0m \u001b[38;5;28;01mfor\u001b[39;00m key \u001b[38;5;129;01min\u001b[39;00m (\n\u001b[1;32m 239\u001b[0m tqdm(keys, desc\u001b[38;5;241m=\u001b[39m\u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m\u001b[38;5;18m__class__\u001b[39m\u001b[38;5;241m.\u001b[39m\u001b[38;5;18m__name__\u001b[39m) \u001b[38;5;28;01mif\u001b[39;00m display_progress \u001b[38;5;28;01melse\u001b[39;00m keys\n\u001b[1;32m 240\u001b[0m ):\n\u001b[0;32m--> 241\u001b[0m error \u001b[38;5;241m=\u001b[39m \u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43m_populate1\u001b[49m\u001b[43m(\u001b[49m\u001b[43mkey\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mjobs\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[43mpopulate_kwargs\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m 242\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m error \u001b[38;5;129;01mis\u001b[39;00m \u001b[38;5;129;01mnot\u001b[39;00m \u001b[38;5;28;01mNone\u001b[39;00m:\n\u001b[1;32m 243\u001b[0m error_list\u001b[38;5;241m.\u001b[39mappend(error)\n",
+ "File \u001b[0;32m~/anaconda3/envs/spyglass-position/lib/python3.9/site-packages/datajoint/autopopulate.py:292\u001b[0m, in \u001b[0;36mAutoPopulate._populate1\u001b[0;34m(self, key, jobs, suppress_errors, return_exception_objects, make_kwargs)\u001b[0m\n\u001b[1;32m 290\u001b[0m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m\u001b[38;5;18m__class__\u001b[39m\u001b[38;5;241m.\u001b[39m_allow_insert \u001b[38;5;241m=\u001b[39m \u001b[38;5;28;01mTrue\u001b[39;00m\n\u001b[1;32m 291\u001b[0m \u001b[38;5;28;01mtry\u001b[39;00m:\n\u001b[0;32m--> 292\u001b[0m \u001b[43mmake\u001b[49m\u001b[43m(\u001b[49m\u001b[38;5;28;43mdict\u001b[39;49m\u001b[43m(\u001b[49m\u001b[43mkey\u001b[49m\u001b[43m)\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[43m(\u001b[49m\u001b[43mmake_kwargs\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;129;43;01mor\u001b[39;49;00m\u001b[43m \u001b[49m\u001b[43m{\u001b[49m\u001b[43m}\u001b[49m\u001b[43m)\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m 293\u001b[0m \u001b[38;5;28;01mexcept\u001b[39;00m (\u001b[38;5;167;01mKeyboardInterrupt\u001b[39;00m, \u001b[38;5;167;01mSystemExit\u001b[39;00m, \u001b[38;5;167;01mException\u001b[39;00m) \u001b[38;5;28;01mas\u001b[39;00m error:\n\u001b[1;32m 294\u001b[0m \u001b[38;5;28;01mtry\u001b[39;00m:\n",
+ "File \u001b[0;32m~/Src/spyglass/src/spyglass/position/v1/position_dlc_training.py:150\u001b[0m, in \u001b[0;36mDLCModelTraining.make\u001b[0;34m(self, key)\u001b[0m\n\u001b[1;32m 144\u001b[0m \u001b[38;5;28;01mfrom\u001b[39;00m \u001b[38;5;21;01mdeeplabcut\u001b[39;00m\u001b[38;5;21;01m.\u001b[39;00m\u001b[38;5;21;01mutils\u001b[39;00m\u001b[38;5;21;01m.\u001b[39;00m\u001b[38;5;21;01mauxiliaryfunctions\u001b[39;00m \u001b[38;5;28;01mimport\u001b[39;00m (\n\u001b[1;32m 145\u001b[0m GetModelFolder \u001b[38;5;28;01mas\u001b[39;00m get_model_folder,\n\u001b[1;32m 146\u001b[0m )\n\u001b[1;32m 147\u001b[0m config_path, project_name \u001b[38;5;241m=\u001b[39m (DLCProject() \u001b[38;5;241m&\u001b[39m key)\u001b[38;5;241m.\u001b[39mfetch1(\n\u001b[1;32m 148\u001b[0m \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mconfig_path\u001b[39m\u001b[38;5;124m\"\u001b[39m, \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mproject_name\u001b[39m\u001b[38;5;124m\"\u001b[39m\n\u001b[1;32m 149\u001b[0m )\n\u001b[0;32m--> 150\u001b[0m \u001b[38;5;28;01mwith\u001b[39;00m \u001b[43mOutputLogger\u001b[49m\u001b[43m(\u001b[49m\n\u001b[1;32m 151\u001b[0m \u001b[43m \u001b[49m\u001b[43mname\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43mDLC_project_\u001b[39;49m\u001b[38;5;132;43;01m{project_name}\u001b[39;49;00m\u001b[38;5;124;43m_training\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m,\u001b[49m\n\u001b[1;32m 152\u001b[0m \u001b[43m 
\u001b[49m\u001b[43mpath\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[38;5;124;43mf\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;132;43;01m{\u001b[39;49;00m\u001b[43mos\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mpath\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mdirname\u001b[49m\u001b[43m(\u001b[49m\u001b[43mconfig_path\u001b[49m\u001b[43m)\u001b[49m\u001b[38;5;132;43;01m}\u001b[39;49;00m\u001b[38;5;124;43m/log.log\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m,\u001b[49m\n\u001b[1;32m 153\u001b[0m \u001b[43m \u001b[49m\u001b[43mprint_console\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[38;5;28;43;01mTrue\u001b[39;49;00m\u001b[43m,\u001b[49m\n\u001b[1;32m 154\u001b[0m \u001b[43m\u001b[49m\u001b[43m)\u001b[49m \u001b[38;5;28;01mas\u001b[39;00m logger:\n\u001b[1;32m 155\u001b[0m dlc_config \u001b[38;5;241m=\u001b[39m read_config(config_path)\n\u001b[1;32m 156\u001b[0m project_path \u001b[38;5;241m=\u001b[39m dlc_config[\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mproject_path\u001b[39m\u001b[38;5;124m\"\u001b[39m]\n",
+ "File \u001b[0;32m~/Src/spyglass/src/spyglass/position/v1/dlc_utils.py:192\u001b[0m, in \u001b[0;36mOutputLogger.__init__\u001b[0;34m(self, name, path, level, **kwargs)\u001b[0m\n\u001b[1;32m 191\u001b[0m \u001b[38;5;28;01mdef\u001b[39;00m \u001b[38;5;21m__init__\u001b[39m(\u001b[38;5;28mself\u001b[39m, name, path, level\u001b[38;5;241m=\u001b[39m\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mINFO\u001b[39m\u001b[38;5;124m\"\u001b[39m, \u001b[38;5;241m*\u001b[39m\u001b[38;5;241m*\u001b[39mkwargs):\n\u001b[0;32m--> 192\u001b[0m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mlogger \u001b[38;5;241m=\u001b[39m \u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43msetup_logger\u001b[49m\u001b[43m(\u001b[49m\u001b[43mname\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mpath\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[43mkwargs\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m 193\u001b[0m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mname \u001b[38;5;241m=\u001b[39m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mlogger\u001b[38;5;241m.\u001b[39mname\n\u001b[1;32m 194\u001b[0m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mlevel \u001b[38;5;241m=\u001b[39m \u001b[38;5;28mgetattr\u001b[39m(logging, level)\n",
+ "File \u001b[0;32m~/Src/spyglass/src/spyglass/position/v1/dlc_utils.py:244\u001b[0m, in \u001b[0;36mOutputLogger.setup_logger\u001b[0;34m(self, name_logfile, path_logfile, print_console)\u001b[0m\n\u001b[1;32m 241\u001b[0m logger\u001b[38;5;241m.\u001b[39maddHandler(\u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_get_stream_handler())\n\u001b[1;32m 243\u001b[0m \u001b[38;5;28;01melse\u001b[39;00m:\n\u001b[0;32m--> 244\u001b[0m file_handler \u001b[38;5;241m=\u001b[39m \u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43m_get_file_handler\u001b[49m\u001b[43m(\u001b[49m\u001b[43mpath_logfile\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m 245\u001b[0m logger\u001b[38;5;241m.\u001b[39maddHandler(file_handler)\n\u001b[1;32m 246\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m print_console:\n",
+ "File \u001b[0;32m~/Src/spyglass/src/spyglass/position/v1/dlc_utils.py:255\u001b[0m, in \u001b[0;36mOutputLogger._get_file_handler\u001b[0;34m(self, path)\u001b[0m\n\u001b[1;32m 253\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m \u001b[38;5;129;01mnot\u001b[39;00m os\u001b[38;5;241m.\u001b[39mpath\u001b[38;5;241m.\u001b[39mexists(output_dir):\n\u001b[1;32m 254\u001b[0m output_dir\u001b[38;5;241m.\u001b[39mmkdir(parents\u001b[38;5;241m=\u001b[39m\u001b[38;5;28;01mTrue\u001b[39;00m, exist_ok\u001b[38;5;241m=\u001b[39m\u001b[38;5;28;01mTrue\u001b[39;00m)\n\u001b[0;32m--> 255\u001b[0m file_handler \u001b[38;5;241m=\u001b[39m \u001b[43mlogging\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mFileHandler\u001b[49m\u001b[43m(\u001b[49m\u001b[43mpath\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mmode\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43ma\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m)\u001b[49m\n\u001b[1;32m 256\u001b[0m file_handler\u001b[38;5;241m.\u001b[39msetFormatter(\u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_get_formatter())\n\u001b[1;32m 257\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m file_handler\n",
+ "File \u001b[0;32m~/anaconda3/envs/spyglass-position/lib/python3.9/logging/__init__.py:1146\u001b[0m, in \u001b[0;36mFileHandler.__init__\u001b[0;34m(self, filename, mode, encoding, delay, errors)\u001b[0m\n\u001b[1;32m 1144\u001b[0m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mstream \u001b[38;5;241m=\u001b[39m \u001b[38;5;28;01mNone\u001b[39;00m\n\u001b[1;32m 1145\u001b[0m \u001b[38;5;28;01melse\u001b[39;00m:\n\u001b[0;32m-> 1146\u001b[0m StreamHandler\u001b[38;5;241m.\u001b[39m\u001b[38;5;21m__init__\u001b[39m(\u001b[38;5;28mself\u001b[39m, \u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43m_open\u001b[49m\u001b[43m(\u001b[49m\u001b[43m)\u001b[49m)\n",
+ "File \u001b[0;32m~/anaconda3/envs/spyglass-position/lib/python3.9/logging/__init__.py:1175\u001b[0m, in \u001b[0;36mFileHandler._open\u001b[0;34m(self)\u001b[0m\n\u001b[1;32m 1170\u001b[0m \u001b[38;5;28;01mdef\u001b[39;00m \u001b[38;5;21m_open\u001b[39m(\u001b[38;5;28mself\u001b[39m):\n\u001b[1;32m 1171\u001b[0m \u001b[38;5;250m \u001b[39m\u001b[38;5;124;03m\"\"\"\u001b[39;00m\n\u001b[1;32m 1172\u001b[0m \u001b[38;5;124;03m Open the current base file with the (original) mode and encoding.\u001b[39;00m\n\u001b[1;32m 1173\u001b[0m \u001b[38;5;124;03m Return the resulting stream.\u001b[39;00m\n\u001b[1;32m 1174\u001b[0m \u001b[38;5;124;03m \"\"\"\u001b[39;00m\n\u001b[0;32m-> 1175\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[38;5;28;43mopen\u001b[39;49m\u001b[43m(\u001b[49m\u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mbaseFilename\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mmode\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mencoding\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mencoding\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 1176\u001b[0m \u001b[43m \u001b[49m\u001b[43merrors\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43merrors\u001b[49m\u001b[43m)\u001b[49m\n",
+ "\u001b[0;31mPermissionError\u001b[0m: [Errno 13] Permission denied: '/nimbus/deeplabcut/projects/tutorial_scratch_DG-LorenLab-2023-08-16/log.log'"
+ ]
+ }
+ ],
+ "source": [
+ "sgp.DLCModelTrainingSelection().insert1(\n",
+ " {\n",
+ " **project_key,\n",
+ " \"dlc_training_params_name\": training_params_name,\n",
+ " \"training_id\": 0,\n",
+ " \"model_prefix\": \"\",\n",
+ " }\n",
+ ")\n",
+ "model_training_key = (\n",
+ " sgp.DLCModelTrainingSelection\n",
+ " & {\n",
+ " **project_key,\n",
+ " \"dlc_training_params_name\": training_params_name,\n",
+ " }\n",
+ ").fetch1(\"KEY\")\n",
+ "sgp.DLCModelTraining.populate(model_training_key)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "da004b3e",
+ "metadata": {},
+ "source": [
+ "Here we'll make sure that the entry made it into the table properly!"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "e5306fd9",
+ "metadata": {
+ "scrolled": true
+ },
+ "outputs": [],
+ "source": [
+ "sgp.DLCModelTraining() & model_training_key"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "ac5b7687",
+ "metadata": {},
+ "source": [
+ "Populating `DLCModelTraining` automatically inserts the entry into\n",
+ "`DLCModelSource`, which is used to select between models trained using Spyglass\n",
+ "vs. other tools."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "a349dc3d",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "sgp.DLCModelSource() & model_training_key"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "92cb8969",
+ "metadata": {},
+ "source": [
+ "The `source` field will only accept _\"FromImport\"_ or _\"FromUpstream\"_ as entries. Let's checkout the `FromUpstream` part table attached to `DLCModelSource` below."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "b0cc1afa",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "sgp.DLCModelSource.FromUpstream() & model_training_key"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "67a9b2c6",
+ "metadata": {},
+ "source": [
+ "#### [DLCModel](#TableOfContents) \n",
+ "\n",
+ "Next we'll populate the `DLCModel` table, which holds all the relevant\n",
+ "information for all trained models.\n",
+ "\n",
+ "First, we'll need to determine a set of parameters for our model to select the\n",
+ "correct model file. Here is the default:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "bb663861",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "sgp.DLCModelParams.get_default()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "8b45a6ed",
+ "metadata": {},
+ "source": [
+ "Here is the syntax to add your own parameter set:\n",
+ "\n",
+ "```python\n",
+ "dlc_model_params_name = \"make_this_yours\"\n",
+ "params = {\n",
+ " \"params\": {},\n",
+ " \"shuffle\": 1,\n",
+ " \"trainingsetindex\": 0,\n",
+ " \"model_prefix\": \"\",\n",
+ "}\n",
+ "sgp.DLCModelParams.insert1(\n",
+ " {\"dlc_model_params_name\": dlc_model_params_name, \"params\": params},\n",
+ " skip_duplicates=True,\n",
+ ")\n",
+ "```\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "7bce9696",
+ "metadata": {},
+ "source": [
+ "We can insert sets of parameters into `DLCModelSelection` and populate\n",
+ "`DLCModel`."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "eaa23fab",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "temp_model_key = (sgp.DLCModelSource & model_training_key).fetch1(\"KEY\")"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "4e418eba",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "#comment these lines out after successfully inserting, for each project\n",
+ "sgp.DLCModelSelection().insert1({\n",
+ " **temp_model_key,\n",
+ " \"dlc_model_params_name\": \"default\"},\n",
+ " skip_duplicates=True)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "ccae03bb",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "model_key = (sgp.DLCModelSelection & temp_model_key).fetch1(\"KEY\")\n",
+ "sgp.DLCModel.populate(model_key)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "f8f1b839",
+ "metadata": {},
+ "source": [
+ "Again, let's make sure that everything looks correct in `DLCModel`."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "c39f72ca",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "sgp.DLCModel() & model_key"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "53ce4ee4",
+ "metadata": {},
+ "source": [
+ "#### [DLCPoseEstimation](#TableOfContents) \n",
+ "\n",
+ "Alright, now that we've trained model and populated the `DLCModel` table, we're ready to set-up Pose Estimation on a behavioral video of your choice.
For this tutorial, you can choose to use an epoch of your choice, we can also use the one specified below. If you'd like to use your own video, just specify the `nwb_file_name` and `epoch` number and make sure it's in the `VideoFile` table!"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 3,
+ "id": "fc2a8dab-7caf-4389-8494-9158d2ec5b20",
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/html": [
+ "\n",
+ " \n",
+ " \n",
+ " \n",
+ " \n",
+ " \n",
+ "
\n",
+ " | | | | |
\n",
+ " J1620210604_.nwb | \n",
+ "1 | \n",
+ "0 | \n",
+ " | \n",
+ "178f5746-30e3-4957-891e-8024e23522dc |
J1620210604_.nwb | \n",
+ "2 | \n",
+ "0 | \n",
+ " | \n",
+ "d64ec979-326b-429f-b3fe-1bbfbf806293 |
J1620210604_.nwb | \n",
+ "3 | \n",
+ "0 | \n",
+ " | \n",
+ "cf14bcd2-c0a9-457b-8791-42f3f28dd912 |
J1620210604_.nwb | \n",
+ "4 | \n",
+ "0 | \n",
+ " | \n",
+ "183c9910-36fd-46c1-a24c-8d1c306d7248 |
J1620210604_.nwb | \n",
+ "5 | \n",
+ "0 | \n",
+ " | \n",
+ "4677c7cd-8cd8-4801-8f6e-5b7bb14a6d6b |
J1620210604_.nwb | \n",
+ "6 | \n",
+ "0 | \n",
+ " | \n",
+ "0e46532b-483f-43af-ba6e-ba75ccf340ea |
J1620210604_.nwb | \n",
+ "7 | \n",
+ "0 | \n",
+ " | \n",
+ "c6d1d037-44ec-4d91-99d1-172d371bf82a |
J1620210604_.nwb | \n",
+ "8 | \n",
+ "0 | \n",
+ " | \n",
+ "4d7e070c-6220-47de-8173-993f013fafa8 |
J1620210604_.nwb | \n",
+ "9 | \n",
+ "0 | \n",
+ " | \n",
+ "b50108ec-f587-46df-b1c8-3ca23091bde0 |
J1620210604_.nwb | \n",
+ "10 | \n",
+ "0 | \n",
+ " | \n",
+ "b9b5da20-da39-4274-9be2-55610cfd1b5b |
J1620210604_.nwb | \n",
+ "11 | \n",
+ "0 | \n",
+ " | \n",
+ "6c827b8d-513c-4dba-ae75-0b36dcf4811f |
J1620210604_.nwb | \n",
+ "12 | \n",
+ "0 | \n",
+ " | \n",
+ "41bd2344-1b41-4737-8dfb-7c860d089155 |
\n",
+ "
\n",
+ "
...
\n",
+ "
Total: 20
\n",
+ " "
+ ],
+ "text/plain": [
+ "*nwb_file_name *epoch *video_file_nu camera_name video_file_obj\n",
+ "+------------+ +-------+ +------------+ +------------+ +------------+\n",
+ "J1620210604_.n 1 0 178f5746-30e3-\n",
+ "J1620210604_.n 2 0 d64ec979-326b-\n",
+ "J1620210604_.n 3 0 cf14bcd2-c0a9-\n",
+ "J1620210604_.n 4 0 183c9910-36fd-\n",
+ "J1620210604_.n 5 0 4677c7cd-8cd8-\n",
+ "J1620210604_.n 6 0 0e46532b-483f-\n",
+ "J1620210604_.n 7 0 c6d1d037-44ec-\n",
+ "J1620210604_.n 8 0 4d7e070c-6220-\n",
+ "J1620210604_.n 9 0 b50108ec-f587-\n",
+ "J1620210604_.n 10 0 b9b5da20-da39-\n",
+ "J1620210604_.n 11 0 6c827b8d-513c-\n",
+ "J1620210604_.n 12 0 41bd2344-1b41-\n",
+ " ...\n",
+ " (Total: 20)"
+ ]
+ },
+ "execution_count": 3,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "nwb_file_name = \"J1620210604_.nwb\"\n",
+ "sgc.VideoFile() & {\"nwb_file_name\": nwb_file_name}"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 5,
+ "id": "4140ece8",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "epoch = 14 #change based on VideoFile entry\n",
+ "video_file_num = 0 #change based on VideoFile entry"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "0f26a081-859d-4dff-bb58-84cec2ff4b3f",
+ "metadata": {},
+ "source": [
+ "Using `insert_estimation_task` will convert out video to be in .mp4 format (DLC\n",
+ "struggles with .h264) and determine the directory in which we'll store the pose\n",
+ "estimation results.\n",
+ "\n",
+ "- `task_mode` (trigger or load) determines whether or not populating\n",
+ " `DLCPoseEstimation` triggers a new pose estimation, or loads an existing.\n",
+ "- `video_file_num` will be 0 in almost all\n",
+ " cases.\n",
+ "- `gputouse` was already set during training. It may be a good idea to make sure\n",
+ " that core is still free before moving forward."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "e60eb2fc",
+ "metadata": {},
+ "source": [
+ "The `DLCPoseEstimationSelection` insertion step will convert your .h264 video to an .mp4 first and save it in `/nimbus/deeplabcut/video`. If this video already exists here, the insertion will never complete.\n",
+ "\n",
+ "We first delete any .mp4 that exists for this video from the nimbus folder:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "130d85d0",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "! find /nimbus/deeplabcut/video -type f -name '*20210604_J16*' -delete # change based on date and rat with which you are training the model"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 6,
+ "id": "9df5644f-febc-49d7-a60d-6991798c20d7",
+ "metadata": {},
+ "outputs": [
+ {
+ "ename": "NameError",
+ "evalue": "name 'model_key' is not defined",
+ "output_type": "error",
+ "traceback": [
+ "\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
+ "\u001b[0;31mNameError\u001b[0m Traceback (most recent call last)",
+ "Cell \u001b[0;32mIn[6], line 6\u001b[0m\n\u001b[1;32m 1\u001b[0m pose_estimation_key \u001b[38;5;241m=\u001b[39m sgp\u001b[38;5;241m.\u001b[39mDLCPoseEstimationSelection\u001b[38;5;241m.\u001b[39minsert_estimation_task(\n\u001b[1;32m 2\u001b[0m {\n\u001b[1;32m 3\u001b[0m \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mnwb_file_name\u001b[39m\u001b[38;5;124m\"\u001b[39m: nwb_file_name,\n\u001b[1;32m 4\u001b[0m \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mepoch\u001b[39m\u001b[38;5;124m\"\u001b[39m: epoch,\n\u001b[1;32m 5\u001b[0m \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mvideo_file_num\u001b[39m\u001b[38;5;124m\"\u001b[39m: video_file_num,\n\u001b[0;32m----> 6\u001b[0m \u001b[38;5;241m*\u001b[39m\u001b[38;5;241m*\u001b[39m\u001b[43mmodel_key\u001b[49m,\n\u001b[1;32m 7\u001b[0m },\n\u001b[1;32m 8\u001b[0m task_mode\u001b[38;5;241m=\u001b[39m\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mtrigger\u001b[39m\u001b[38;5;124m\"\u001b[39m, \u001b[38;5;66;03m#trigger or load\u001b[39;00m\n\u001b[1;32m 9\u001b[0m params\u001b[38;5;241m=\u001b[39m{\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mgputouse\u001b[39m\u001b[38;5;124m\"\u001b[39m: gputouse, \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mvideotype\u001b[39m\u001b[38;5;124m\"\u001b[39m: \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mmp4\u001b[39m\u001b[38;5;124m\"\u001b[39m},\n\u001b[1;32m 10\u001b[0m )\n",
+ "\u001b[0;31mNameError\u001b[0m: name 'model_key' is not defined"
+ ]
+ }
+ ],
+ "source": [
+ "pose_estimation_key = sgp.DLCPoseEstimationSelection.insert_estimation_task(\n",
+ " {\n",
+ " \"nwb_file_name\": nwb_file_name,\n",
+ " \"epoch\": epoch,\n",
+ " \"video_file_num\": video_file_num,\n",
+ " **model_key,\n",
+ " },\n",
+ " task_mode=\"trigger\", #trigger or load\n",
+ " params={\"gputouse\": gputouse, \"videotype\": \"mp4\"},\n",
+ ")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "d19390eb",
+ "metadata": {},
+ "source": [
+ "If the above insertion step fails in either trigger or load mode for an epoch, run the following lines:\n",
+ "```\n",
+ "(pose_estimation_key = sgp.DLCPoseEstimationSelection.insert_estimation_task(\n",
+ " {\n",
+ " \"nwb_file_name\": nwb_file_name,\n",
+ " \"epoch\": epoch,\n",
+ " \"video_file_num\": video_file_num,\n",
+ " **model_key,\n",
+ " }).delete()\n",
+ "```"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "5feb2a26-fae1-41ca-828f-cc6c73ebd24e",
+ "metadata": {},
+ "source": [
+ "And now we populate `DLCPoseEstimation`! This might take some time for full datasets."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "88f28ecc-d3a4-40f9-a1fb-afb4bdd04497",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "sgp.DLCPoseEstimation().populate(pose_estimation_key)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "88757488-cfa4-4e7c-b965-7dacac43810a",
+ "metadata": {},
+ "source": [
+ "Let's visualize the output from Pose Estimation"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "45dd4f3b-7bf4-41b7-be5f-820fe3ee9f69",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "(sgp.DLCPoseEstimation() & pose_estimation_key).fetch_dataframe()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "52f45ab3-9344-4975-b5ff-f80a5727cdac",
+ "metadata": {},
+ "source": [
+ "#### [DLCSmoothInterp](#TableOfContents) "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "0ccd5dbe-097a-4138-a234-da78a5902684",
+ "metadata": {},
+ "source": [
+ "Now that we've completed pose estimation, it's time to identify NaNs and optionally interpolate over low likelihood periods and smooth the resulting positions.
First we need to define some parameters for smoothing and interpolation. We can see the default parameter set below.
__Note__: it is recommended to use the `just_nan` parameters here and save interpolation and smoothing for the centroid step as this provides for a better end result."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "f6e44a34-8d6d-4206-b02a-9ca38a68f1c0",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# The default parameter set to interpolate and smooth over each LED individually\n",
+ "print(sgp.DLCSmoothInterpParams.get_default())"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "3bc4f13c",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# The just_nan parameter set that identifies NaN indices and leaves smoothing and interpolation to the centroid step\n",
+ "print(sgp.DLCSmoothInterpParams.get_nan_params())\n",
+ "si_params_name = \"just_nan\" #could also use \"default\""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "a245c9e5-e8f6-4c6f-b9e1-d71ab3e06d59",
+ "metadata": {},
+ "source": [
+ "To change any of these parameters, one would do the following:\n",
+ "\n",
+ "```python\n",
+ "si_params_name = \"your_unique_param_name\"\n",
+ "params = {\n",
+ " \"smoothing_params\": {\n",
+ " \"smoothing_duration\": 0.00,\n",
+ " \"smooth_method\": \"moving_avg\",\n",
+ " },\n",
+ " \"interp_params\": {\"likelihood_thresh\": 0.00},\n",
+ " \"max_plausible_speed\": 0,\n",
+ " \"speed_smoothing_std_dev\": 0.000,\n",
+ "}\n",
+ "sgp.DLCSmoothInterpParams().insert1(\n",
+ " {\"dlc_si_params_name\": si_params_name, \"params\": params},\n",
+ " skip_duplicates=True,\n",
+ ")\n",
+ "```"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "8139036e-ce7e-41ec-be78-aa15a4b0b795",
+ "metadata": {},
+ "source": [
+ "We'll create a dictionary with the correct set of keys for the `DLCSmoothInterpSelection` table"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "ec730b91-a974-4f54-9d55-35f52e08487f",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "si_key = pose_estimation_key.copy()\n",
+ "fields = list(sgp.DLCSmoothInterpSelection.fetch().dtype.fields.keys())\n",
+ "si_key = {key: val for key, val in si_key.items() if key in fields}\n",
+ "si_key"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "9a47a6de-51ff-4980-b105-42a75ef7f7a3",
+ "metadata": {},
+ "source": [
+ "We can insert all of the bodyparts we want to process into `DLCSmoothInterpSelection`
\n",
+ "First lets visualize the bodyparts we have available to us.
"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "6e5fcad0-e211-4bd7-82b1-d69bec0eb3d7",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "print((sgp.DLCPoseEstimation.BodyPart & pose_estimation_key).fetch(\"bodypart\"))"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "7c6e3ad2-1960-43cd-a223-784c08211013",
+ "metadata": {},
+ "source": [
+ "We can use `insert1` to insert a single bodypart, but would suggest using `insert` to insert a list of keys with different bodyparts."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "1a93ba8d",
+ "metadata": {},
+ "source": [
+ "To insert a single bodypart, one would do the following:\n",
+ "\n",
+ "```python\n",
+ "sgp.DLCSmoothInterpSelection.insert1(\n",
+ " {\n",
+ " **si_key,\n",
+ " 'bodypart': 'greenLED',\n",
+ " 'dlc_si_params_name': si_params_name,\n",
+ " },\n",
+ " skip_duplicates=True)\n",
+ "```"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "3e2f73cd-2534-40a2-86e6-948ccd902812",
+ "metadata": {},
+ "source": [
+ "We'll see a list of bodyparts and then insert them into `DLCSmoothInterpSelection`."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "819e826d-38ef-4219-8d52-5353c6b4b61a",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "bodyparts = [\"greenLED\", \"redLED_C\"]\n",
+ "sgp.DLCSmoothInterpSelection.insert(\n",
+ " [\n",
+ " {\n",
+ " **si_key,\n",
+ " \"bodypart\": bodypart,\n",
+ " \"dlc_si_params_name\": si_params_name,\n",
+ " }\n",
+ " for bodypart in bodyparts\n",
+ " ],\n",
+ " skip_duplicates=True,\n",
+ ")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "6dca5640-3e9a-42b7-bc61-7f3e1a219619",
+ "metadata": {},
+ "source": [
+ "And verify the entry:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "3b347b29-1583-4fbc-9b35-8e062b611d59",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "sgp.DLCSmoothInterpSelection() & si_key"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "af8f0d26-3879-4f50-a076-e60685028083",
+ "metadata": {},
+ "source": [
+ "Now, we populate `DLCSmoothInterp`, which will perform smoothing and\n",
+ "interpolation on all of the bodyparts specified."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "9bf16c32-0f5e-4cd2-b814-56745e836599",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "sgp.DLCSmoothInterp().populate(si_key)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "3d3af0a2-16cc-43dc-af9c-0ec606cfe1e1",
+ "metadata": {},
+ "source": [
+ "And let's visualize the resulting position data using a scatter plot"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "ced96b05-e6dc-4771-bfb8-bcbddfb8e494",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "(sgp.DLCSmoothInterp() & {**si_key, \"bodypart\": bodyparts[0]}\n",
+ ").fetch1_dataframe().plot.scatter(x=\"x\", y=\"y\", s=1, figsize=(5, 5))"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "a838e4c4-8ff9-4b73-aee5-00eb91ea899f",
+ "metadata": {},
+ "source": [
+ "#### [DLCSmoothInterpCohort](#TableOfContents) "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "3cf3d882-2c24-46ca-bfcc-72f21712e47b",
+ "metadata": {},
+ "source": [
+ "After smoothing/interpolation, we need to select bodyparts from which we want to\n",
+ "derive a centroid and orientation, which is performed by the\n",
+ "`DLCSmoothInterpCohort` table."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "5017fd46-2bb9-4349-981b-f9789ffec338",
+ "metadata": {},
+ "source": [
+ "First, let's make a key that represents the 'cohort', using\n",
+ "`dlc_si_cohort_selection_name`. We'll need a bodypart dictionary using bodypart\n",
+ "keys and smoothing/interpolation parameters used as value."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "92fb1af9-20cf-46d9-a518-a7f551334bc8",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "cohort_key = si_key.copy()\n",
+ "if \"bodypart\" in cohort_key:\n",
+ " del cohort_key[\"bodypart\"]\n",
+ "if \"dlc_si_params_name\" in cohort_key:\n",
+ " del cohort_key[\"dlc_si_params_name\"]\n",
+ "cohort_key[\"dlc_si_cohort_selection_name\"] = \"green_red_led\"\n",
+ "cohort_key[\"bodyparts_params_dict\"] = {\n",
+ " \"greenLED\": si_params_name,\n",
+ " \"redLED_C\": si_params_name,\n",
+ "}\n",
+ "print(cohort_key)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "11c6a327-d4b0-4de1-a2c6-10a0443a3f96",
+ "metadata": {},
+ "source": [
+ "We'll insert the cohort into `DLCSmoothInterpCohortSelection` and populate `DLCSmoothInterpCohort`, which collates the separately smoothed and interpolated bodyparts into a single entry."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "805f55c1-3c7b-4cf9-bdd7-98743810c671",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "sgp.DLCSmoothInterpCohortSelection().insert1(cohort_key, skip_duplicates=True)\n",
+ "sgp.DLCSmoothInterpCohort.populate(cohort_key)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "a6b7d361-47c5-4748-ac59-f51b897f7fe6",
+ "metadata": {},
+ "source": [
+ "And verify the entry:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "e7672b63-6dfc-46db-b8df-95c1e6730b6c",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "sgp.DLCSmoothInterpCohort.BodyPart() & cohort_key"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "d871bdca-2278-43ec-a70c-52257ad26170",
+ "metadata": {},
+ "source": [
+ "#### [DLCCentroid](#TableOfContents) "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "4cc37edb-fdd3-4a05-8cd5-91f3c5f7cbbb",
+ "metadata": {},
+ "source": [
+ "With this cohort, we can determine a centroid using another set of parameters."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "4e31c8db-0396-475a-af71-ae38433d2b7d",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Here is the default set\n",
+ "print(sgp.DLCCentroidParams.get_default())\n",
+ "centroid_params_name = \"default\""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "852948f7-e743-4319-be6b-265dadfca713",
+ "metadata": {},
+ "source": [
+ "Here is the syntax to add your own parameters:\n",
+ "\n",
+ "```python\n",
+ "centroid_params = {\n",
+ " \"centroid_method\": \"two_pt_centroid\",\n",
+ " \"points\": {\n",
+ " \"greenLED\": \"greenLED\",\n",
+ " \"redLED_C\": \"redLED_C\",\n",
+ " },\n",
+ " \"speed_smoothing_std_dev\": 0.100,\n",
+ "}\n",
+ "centroid_params_name = \"your_unique_param_name\"\n",
+ "sgp.DLCCentroidParams.insert1(\n",
+ " {\n",
+ " \"dlc_centroid_params_name\": centroid_params_name,\n",
+ " \"params\": centroid_params,\n",
+ " },\n",
+ " skip_duplicates=True,\n",
+ ")\n",
+ "```"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "85ad4e53-43dd-4e05-84c4-7d4504766746",
+ "metadata": {},
+ "source": [
+ "We'll make a key to insert into `DLCCentroidSelection`."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "28ac17cb-4bb3-47b2-b1b9-1c4b37797591",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "centroid_key = cohort_key.copy()\n",
+ "fields = list(sgp.DLCCentroidSelection.fetch().dtype.fields.keys())\n",
+ "centroid_key = {key: val for key, val in centroid_key.items() if key in fields}\n",
+ "centroid_key[\"dlc_centroid_params_name\"] = centroid_params_name\n",
+ "print(centroid_key)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "2674c0d3-d3fd-4cd9-a843-260c442c2d23",
+ "metadata": {},
+ "source": [
+ "After inserting into the selection table, we can populate `DLCCentroid`"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "47fccef4-2fef-4f74-b7a4-8564328b14d4",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "sgp.DLCCentroidSelection.insert1(centroid_key, skip_duplicates=True)\n",
+ "sgp.DLCCentroid.populate(centroid_key)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "6e49c5ad-909f-4f1a-a156-f8f8a84fb78a",
+ "metadata": {},
+ "source": [
+ "Here we can visualize the resulting centroid position"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "29e7e447-fa6f-4f06-9ec9-4b9838b7255e",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "(sgp.DLCCentroid() & centroid_key).fetch1_dataframe().plot.scatter(\n",
+ " x=\"position_x\",\n",
+ " y=\"position_y\",\n",
+ " c=\"speed\",\n",
+ " colormap=\"viridis\",\n",
+ " alpha=0.5,\n",
+ " s=0.5,\n",
+ " figsize=(10, 10),\n",
+ ")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "cb513a9d-5250-404c-8887-639f785516c7",
+ "metadata": {},
+ "source": [
+ "#### [DLCOrientation](#TableOfContents) "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "509076f0-f0b8-4fd0-8884-32c48ca4a125",
+ "metadata": {},
+ "source": [
+ "We'll now go through a similar process to identify the orientation."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "faf244b3-7295-48ed-90ea-cf878e85e122",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "print(sgp.DLCOrientationParams.get_default())\n",
+ "dlc_orientation_params_name = \"default\""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "8ec170be-7a7a-4a20-986c-d055aee1a08b",
+ "metadata": {},
+ "source": [
+ "We'll prune the `cohort_key` we used above and add our `dlc_orientation_params_name` to make it suitable for `DLCOrientationSelection`."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "09e4a6cf-472e-43e3-90aa-f7ff7fb9dc72",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "fields = list(sgp.DLCOrientationSelection.fetch().dtype.fields.keys())\n",
+ "orient_key = {key: val for key, val in cohort_key.items() if key in fields}\n",
+ "orient_key[\"dlc_orientation_params_name\"] = dlc_orientation_params_name\n",
+ "print(orient_key)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "9406d2de-9b71-4591-82f6-ed53f2d4f220",
+ "metadata": {},
+ "source": [
+ "We'll insert into `DLCOrientationSelection` and populate `DLCOrientation`"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "f5d23302-02e3-427a-ac35-2f648e3ae674",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "sgp.DLCOrientationSelection().insert1(orient_key, skip_duplicates=True)\n",
+ "sgp.DLCOrientation().populate(orient_key)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "36f62da0-0cc5-4ffb-b2df-7b68c3f6e268",
+ "metadata": {},
+ "source": [
+ "We can fetch the orientation as a dataframe as quality assurance."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "c5eba7f4-0b32-486a-894a-c97404c74d2b",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "(sgp.DLCOrientation() & orient_key).fetch1_dataframe()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "dc75aeaf-018a-46ed-83a8-6603ae100791",
+ "metadata": {},
+ "source": [
+ "#### [DLCPosV1](#TableOfContents) "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "21d3f9ba-dc89-4c32-a125-1fa85cd4132d",
+ "metadata": {},
+ "source": [
+ "After processing the position data, we have to do a few table manipulations to standardize various outputs. \n",
+ "\n",
+ "To summarize, we brought in a pretrained DLC project, used that model to run pose estimation on a new behavioral video, smoothed and interpolated the result, formed a cohort of bodyparts, and determined the centroid and orientation of this cohort.\n",
+ "\n",
+ "Now we'll populate `DLCPos` with our centroid/orientation entries above."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "2a166dd6-3863-4349-97ac-19d7d6a841b4",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "fields = list(sgp.DLCPosV1.fetch().dtype.fields.keys())\n",
+ "dlc_key = {key: val for key, val in centroid_key.items() if key in fields}\n",
+ "dlc_key[\"dlc_si_cohort_centroid\"] = centroid_key[\"dlc_si_cohort_selection_name\"]\n",
+ "dlc_key[\"dlc_si_cohort_orientation\"] = orient_key[\n",
+ " \"dlc_si_cohort_selection_name\"\n",
+ "]\n",
+ "dlc_key[\"dlc_orientation_params_name\"] = orient_key[\n",
+ " \"dlc_orientation_params_name\"\n",
+ "]\n",
+ "print(dlc_key)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "551e4c5e-7c32-46b0-a138-80064a212fbe",
+ "metadata": {},
+ "source": [
+ "Now we can insert into `DLCPosSelection` and populate `DLCPos` with our `dlc_key`"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "7d7badff-0ad7-48cf-aef6-a4f55df8ded9",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "sgp.DLCPosSelection().insert1(dlc_key, skip_duplicates=True)\n",
+ "sgp.DLCPosV1().populate(dlc_key)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "412f1cff-2ead-4489-8a10-9fa7a5d33292",
+ "metadata": {},
+ "source": [
+ "We can also make sure that all of our data made it through by fetching the dataframe attached to this entry.
We should expect 8 columns:\n",
+ ">time
video_frame_ind
position_x
position_y
orientation
velocity_x
velocity_y
speed"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "853db96b-1cd4-4ff6-91ea-aca7f7d3851d",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "(sgp.DLCPosV1() & dlc_key).fetch1_dataframe()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "2d8623a8-1725-4e02-b1a2-d2f993988102",
+ "metadata": {},
+ "source": [
+ "And even more, we can fetch the `pose_eval_result` that is calculated during this step. This field contains the percentage of frames that each bodypart was below the likelihood threshold of 0.95 as a means of assessing the quality of the pose estimation."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "d4f06244-9d59-44d4-bcbb-062809b3ea6e",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "(sgp.DLCPosV1() & dlc_key).fetch1(\"pose_eval_result\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "b2303147-3657-479c-8f72-b3fc6905a596",
+ "metadata": {},
+ "source": [
+ "#### [DLCPosVideo](#TableOfContents) "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "af0b081d-f619-4c38-ba48-6ae1c0c5ff2b",
+ "metadata": {},
+ "source": [
+ "We can create a video with the centroid and orientation overlaid on the original\n",
+ "video. This will also plot the likelihood of each bodypart used in the cohort.\n",
+ "This is optional, but a good quality assurance step."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "0a725c08-a616-43a0-8925-4a82bf872ba3",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "sgp.DLCPosVideoParams.insert_default()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "84e2f782-ba45-487a-8e8f-e80dd33d9c31",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "params = {\n",
+ " \"percent_frames\": 0.05,\n",
+ " \"incl_likelihood\": True,\n",
+ "}\n",
+ "sgp.DLCPosVideoParams.insert1(\n",
+ " {\"dlc_pos_video_params_name\": \"five_percent\", \"params\": params},\n",
+ " skip_duplicates=True,\n",
+ ")"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "5758e2fc-13e6-46cb-9a93-ae1b4c1f4741",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "sgp.DLCPosVideoSelection.insert1(\n",
+ " {**dlc_key, \"dlc_pos_video_params_name\": \"five_percent\"},\n",
+ " skip_duplicates=True,\n",
+ ")"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "2887c0a5-77c8-421e-935e-0692f3f1fd68",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "sgp.DLCPosVideo().populate(dlc_key)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "5a68bba8-9871-40ac-84c9-51ac0e76d44e",
+ "metadata": {},
+ "source": [
+ "#### [PositionOutput](#TableOfContents) <a id='PositionOutput1'></a>"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "25325173-bbaf-4b85-aef6-201384d9933b",
+ "metadata": {},
+ "source": [
+ "`PositionOutput` is the final table of the pipeline and is automatically\n",
+ "populated when we populate `DLCPosV1`."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "59ec40c9-78d8-4edd-8158-be91fb15af3e",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "sgp.PositionOutput.merge_get_part(dlc_key)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "c414d9e0-e495-42ef-a8b0-1c7d53aed02e",
+ "metadata": {},
+ "source": [
+ "`PositionOutput` also has a part table, similar to the `DLCModelSource` table above. Let's check that out as well."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "50760123-7f09-4a94-a1f7-41a037914fd7",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "PositionOutput.DLCPosV1() & dlc_key"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "c96daaa9-5e70-4a2c-b0a4-c2849e3a1440",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "(PositionOutput.DLCPosV1() & dlc_key).fetch1_dataframe()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "e48c7a4e-0bbc-4101-baf2-e84f1f5739d5",
+ "metadata": {},
+ "source": [
+ "#### [PositionVideo](#TableOfContents)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "388e6602-8e80-47fa-be78-4ae120d52e41",
+ "metadata": {},
+ "source": [
+ "We can use the `PositionVideo` table to create a video that overlays just the\n",
+ "centroid and orientation on the video. This table uses the parameter `plot` to\n",
+ "determine whether to plot the entry deriving from the DLC arm or from the Trodes\n",
+ "arm of the position pipeline. This parameter also accepts 'all', which will plot\n",
+ "both (if they exist) in order to compare results."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "b2a782ce-0a14-4725-887f-ae6f341635f8",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "sgp.PositionVideoSelection().insert1(\n",
+ " {\n",
+ " \"nwb_file_name\": \"J1620210604_.nwb\",\n",
+ " \"interval_list_name\": \"pos 13 valid times\",\n",
+ " \"trodes_position_id\": 0,\n",
+ " \"dlc_position_id\": 1,\n",
+ " \"plot\": \"DLC\",\n",
+ " \"output_dir\": \"/home/dgramling/Src/\",\n",
+ " }\n",
+ ")"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "c32993e7-5b32-46f9-a2f9-9634aef785f2",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "sgp.PositionVideo.populate({\"plot\": \"DLC\"})"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "be097052-3789-4d55-aca1-e44d426c39b4",
+ "metadata": {},
+ "source": [
+ "### _CONGRATULATIONS!!_\n",
+ "Please treat yourself to a nice tea break :-)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "c71c90a2",
+ "metadata": {},
+ "source": [
+ "### [Return To Table of Contents](#TableOfContents)"
+ ]
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python 3 (ipykernel)",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.9.16"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+}
diff --git a/notebooks/21_Position_DLC_1.ipynb b/notebooks/22_DLC_Loop.ipynb
similarity index 54%
rename from notebooks/21_Position_DLC_1.ipynb
rename to notebooks/22_DLC_Loop.ipynb
index dee0d2594..4d9d33e77 100644
--- a/notebooks/21_Position_DLC_1.ipynb
+++ b/notebooks/22_DLC_Loop.ipynb
@@ -5,20 +5,20 @@
"id": "a93a1550-8a67-4346-a4bf-e5a136f3d903",
"metadata": {},
"source": [
- "# Position - DeepLabCut from Scratch\n"
+ "## Position- DeepLabCut from Scratch"
]
},
{
"cell_type": "markdown",
- "id": "cbf56794",
+ "id": "13dd3267",
"metadata": {},
"source": [
- "### Overview\n"
+ "### Overview"
]
},
{
"cell_type": "markdown",
- "id": "de29d04e",
+ "id": "b52aff0d",
"metadata": {},
"source": [
"_Developer Note:_ if you may make a PR in the future, be sure to copy this\n",
@@ -37,11 +37,11 @@
"- creating a DLC project\n",
"- extracting and labeling frames\n",
"- training your model\n",
+ "- executing pose estimation on a novel behavioral video\n",
+ "- processing the pose estimation output to extract a centroid and orientation\n",
+ "- inserting the resulting information into the `PositionOutput` table\n",
"\n",
- "If you have a pre-trained project, you can either skip to the\n",
- "[next tutorial](./22_Position_DLC_2.ipynb) to load it into the database, or skip\n",
- "to the [following tutorial](./23_Position_DLC_3.ipynb) to start pose estimation\n",
- "with a model that is already inserted.\n"
+ "**Note: Make sure you are running this within the spyglass-position Conda environment (instructions for install are in the environment_position.yml)**"
]
},
{
@@ -59,36 +59,62 @@
"id": "0c67d88c-c90e-467b-ae2e-672c49a12f95",
"metadata": {},
"source": [
- "### Table of Contents\n"
+ "### Table of Contents<a id='TableOfContents'></a>\n",
+ "[`DLCProject`](#DLCProject1)<br>\n",
+ "[`DLCModelTraining`](#DLCModelTraining1)<br>\n",
+ "[`DLCModel`](#DLCModel1)<br>\n",
+ "[`DLCPoseEstimation`](#DLCPoseEstimation1)<br>\n",
+ "[`DLCSmoothInterp`](#DLCSmoothInterp1)<br>\n",
+ "[`DLCCentroid`](#DLCCentroid1)<br>\n",
+ "[`DLCOrientation`](#DLCOrientation1)<br>\n",
+ "[`DLCPosV1`](#DLCPosV1-1)<br>\n",
+ "[`DLCPosVideo`](#DLCPosVideo1)<br>\n",
+ "[`PositionOutput`](#PositionOutput1)"
]
},
{
"cell_type": "markdown",
- "id": "3ece5c05",
+ "id": "70a0a678",
"metadata": {},
"source": [
- "- [Imports](#imports)\n",
- "- [`DLCProject`](#DLCProject1)\n",
- "- [`DLCModelTraining`](#DLCModelTraining1)\n",
- "- [`DLCModel`](#DLCModel1)\n",
- "\n",
- "**You can click on any header to return to the Table of Contents**\n"
+ "__You can click on any header to return to the Table of Contents__"
]
},
{
"cell_type": "markdown",
- "id": "c52f2a05",
+ "id": "c9b98c3d",
"metadata": {},
"source": [
- "### Imports\n"
+ "### Imports"
]
},
{
"cell_type": "code",
- "execution_count": 2,
- "id": "5ddbc468",
+ "execution_count": 1,
+ "id": "b36026fa",
"metadata": {},
"outputs": [],
+ "source": [
+ "%load_ext autoreload\n",
+ "%autoreload 2"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 2,
+ "id": "0f567531",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "[2024-01-18 10:12:13,219][INFO]: Connecting ebroyles@lmf-db.cin.ucsf.edu:3306\n",
+ "[2024-01-18 10:12:13,255][INFO]: Connected ebroyles@lmf-db.cin.ucsf.edu:3306\n",
+ "OMP: Info #276: omp_set_nested routine deprecated, please use omp_set_max_active_levels instead.\n"
+ ]
+ }
+ ],
"source": [
"import os\n",
"import datajoint as dj\n",
@@ -97,6 +123,13 @@
"import spyglass.common as sgc\n",
"import spyglass.position.v1 as sgp\n",
"\n",
+ "from pathlib import Path, PosixPath, PurePath\n",
+ "import glob\n",
+ "import numpy as np\n",
+ "import pandas as pd\n",
+ "import pynwb\n",
+ "from spyglass.position import PositionOutput\n",
+ "\n",
"# change to the upper level folder to detect dj_local_conf.json\n",
"if os.path.basename(os.getcwd()) == \"notebooks\":\n",
" os.chdir(\"..\")\n",
@@ -114,7 +147,7 @@
"id": "5e6221a3-17e5-45c0-aa40-2fd664b02219",
"metadata": {},
"source": [
- "#### [DLCProject](#TableOfContents) \n"
+ "#### [DLCProject](#TableOfContents) <a id='DLCProject1'></a>"
]
},
{
@@ -126,7 +159,7 @@
" Notes:\n",
" - \n",
 " The cells within this <code>DLCProject</code> step need to be performed \n",
- " in a local Jupyter notebook to allow for use of the frame labeling GUI\n",
+ " in a local Jupyter notebook to allow for use of the frame labeling GUI.\n",
" \n",
" - \n",
 " Please do not add to the <code>BodyPart</code> table in the production \n",
@@ -138,10 +171,10 @@
},
{
"cell_type": "markdown",
- "id": "1307d3d7",
+ "id": "50c9f1c9",
"metadata": {},
"source": [
- "### Body Parts\n"
+ "### Body Parts"
]
},
{
@@ -149,12 +182,22 @@
"id": "96637cb9-519d-41e1-8bfd-69f68dc66b36",
"metadata": {},
"source": [
- "We'll begin by looking at the `BodyPart` table, which stores standard names of body parts used in DLC models throughout the lab with a concise description.\n"
+ "We'll begin by looking at the `BodyPart` table, which stores standard names of body parts used in DLC models throughout the lab with a concise description."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "b69f829f-9877-48ae-89d1-f876af2b8835",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "sgp.BodyPart()"
]
},
{
"cell_type": "markdown",
- "id": "ca5a15e2-f087-4bd2-9d4a-ea2ac4becd80",
+ "id": "9616512e",
"metadata": {},
"source": [
"If the bodyparts you plan to use in your model are not yet in the table, here is code to add bodyparts:\n",
@@ -167,173 +210,56 @@
" ],\n",
" skip_duplicates=True,\n",
")\n",
- "```\n"
+ "```"
]
},
{
"cell_type": "markdown",
- "id": "78fe7c06-30c9-43e1-9e9a-029a70b0d4dd",
+ "id": "57b590d3",
"metadata": {},
"source": [
- "To train a model, we'll need to extract frames, which we can label as training data. We can construct a list of videos from which we'll extract frames.\n",
- "\n",
- "The list can either contain dictionaries identifying behavioral videos for NWB files that have already been added to Spyglass, or absolute file paths to the videos you want to use.\n",
- "\n",
- "For this tutorial, we'll use two videos for which we already have frames labeled.\n"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 57,
- "id": "b69f829f-9877-48ae-89d1-f876af2b8835",
- "metadata": {},
- "outputs": [
- {
- "data": {
- "text/html": [
- "\n",
- " \n",
- " \n",
- " \n",
- " \n",
- " \n",
- "
\n",
- " | |
\n",
- " driveBack | \n",
- "back of drive |
driveFront | \n",
- "front of drive |
forelimbL | \n",
- "left forelimb of the rat |
forelimbR | \n",
- "right forelimb of the rat |
greenLED | \n",
- "greenLED |
hindlimbL | \n",
- "left hindlimb of the rat |
hindlimbR | \n",
- "right hindlimb of the rat |
nose | \n",
- "tip of the nose of the rat |
redLED_C | \n",
- "redLED_C |
redLED_L | \n",
- "redLED_L |
redLED_R | \n",
- "redLED_R |
tailBase | \n",
- "tailBase |
\n",
- "
\n",
- "
...
\n",
- "
Total: 15
\n",
- " "
- ],
- "text/plain": [
- "*bodypart bodypart_descr\n",
- "+------------+ +------------+\n",
- "driveBack back of drive \n",
- "driveFront front of drive\n",
- "forelimbL left forelimb \n",
- "forelimbR right forelimb\n",
- "greenLED greenLED \n",
- "hindlimbL left hindlimb \n",
- "hindlimbR right hindlimb\n",
- "nose tip of the nos\n",
- "redLED_C redLED_C \n",
- "redLED_L redLED_L \n",
- "redLED_R redLED_R \n",
- "tailBase tailBase \n",
- " ...\n",
- " (Total: 15)"
- ]
- },
- "execution_count": 57,
- "metadata": {},
- "output_type": "execute_result"
- }
- ],
- "source": [
- "sgp.BodyPart()"
+ "### Define videos and camera name (optional) for training set"
]
},
{
"cell_type": "markdown",
- "id": "a0af0110",
+ "id": "5d5aae37",
"metadata": {},
"source": [
- "### Define camera name and videos for training set\n",
+ "To train a model, we'll need to extract frames, which we can label as training data. We can construct a list of videos from which we'll extract frames.\n",
"\n",
- "Defining camera name is optional: it should be done in cases where there are multiple cameras streaming per epoch, but not necessary otherwise.\n"
+ "The list can either contain dictionaries identifying behavioral videos for NWB files that have already been added to Spyglass, or absolute file paths to the videos you want to use.\n",
+ "\n",
+ "For this tutorial, we'll use two videos for which we already have frames labeled."
]
},
{
"cell_type": "markdown",
- "id": "667bcb28",
+ "id": "7b5e157b",
"metadata": {},
"source": [
+ "Defining camera name is optional: it should be done in cases where there are multiple cameras streaming per epoch, but not necessary otherwise.<br>\n",
"example:\n",
"`camera_name = \"HomeBox_camera\" \n",
- " `\n"
+ " `"
]
},
{
"cell_type": "markdown",
+ "id": "56f45e7f",
"metadata": {},
"source": [
"_NOTE:_ The official release of Spyglass does not yet support multicamera\n",
"projects. You can monitor progress on the effort to add this feature by checking\n",
"[this PR](https://github.com/LorenFrankLab/spyglass/pull/684) or use\n",
"[this experimental branch](https://github.com/dpeg22/spyglass/tree/add-multi-camera),\n",
- "which only takes the keys nwb_file_name and epoch in the video_list variable.\n"
+ "which takes the keys nwb_file_name, epoch, and camera_name in the video_list variable.\n"
]
},
{
"cell_type": "code",
- "execution_count": 38,
- "id": "e3aa1c2f",
+ "execution_count": null,
+ "id": "15971506",
"metadata": {},
"outputs": [],
"source": [
@@ -345,7 +271,7 @@
},
{
"cell_type": "markdown",
- "id": "aadce1b3",
+ "id": "a9f8e43d",
"metadata": {},
"source": [
"### Path variables\n",
@@ -380,12 +306,13 @@
"_NOTE:_ If only `base` is specified as shown above, spyglass will assume the\n",
"relative directories shown.\n",
"\n",
- "You can check the result of this setup process with...\n"
+ "You can check the result of this setup process with..."
]
},
{
"cell_type": "code",
"execution_count": null,
+ "id": "49d7d9fc",
"metadata": {},
"outputs": [],
"source": [
@@ -394,13 +321,6 @@
"config"
]
},
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "_NOTE:_ The official release of Spyglass does not yet support master branch only takes the keys nwb_file_name and epoch in the video_list variable. EB is circumventing this by running this on daniel's (dpeg22) branch \"add-multi-camera\"\n"
- ]
- },
{
"cell_type": "markdown",
"id": "32c023b0-d00d-40b0-9a37-d0d3e4a4ae2a",
@@ -414,26 +334,15 @@
" **\"tutorial_scratch_yourinitials\"**\n",
"- `bodyparts` is a list of body parts for which we want to extract position.\n",
" The pre-labeled frames we're using include the bodyparts listed below.\n",
- "- Number of frames to extract/label as `frames_per_video`. A true project might\n",
- " use 200, but we'll use 100 for efficiency.\n"
+ "- Number of frames to extract/label as `frames_per_video`. Note that the DLC creators recommend a minimum of 200 total labeled frames per project."
]
},
{
"cell_type": "code",
- "execution_count": 39,
+ "execution_count": null,
"id": "347e98f1",
- "metadata": {
- "scrolled": true
- },
- "outputs": [
- {
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "project name: 20230607_SC38_home is already in use.\n"
- ]
- }
- ],
+ "metadata": {},
+ "outputs": [],
"source": [
"team_name = \"LorenLab\"\n",
"project_name = \"tutorial_scratch_DG\"\n",
@@ -454,50 +363,66 @@
"id": "f5d83452-48eb-4669-89eb-a6beb1f2d051",
"metadata": {},
"source": [
- "After initializing our project, we would typically extract and label frames. Use the following commands to pull up the DLC GUI:\n"
+ "Now that we've initialized our project, we'll need to extract frames, which we will then label."
]
},
{
"cell_type": "code",
"execution_count": null,
- "id": "cb38f911",
- "metadata": {
- "scrolled": false
- },
+ "id": "7d8b1595",
+ "metadata": {},
"outputs": [],
"source": [
- "sgp.DLCProject().run_extract_frames(project_key)\n",
- "sgp.DLCProject().run_label_frames(project_key)"
+ "# comment this line out after you finish frame extraction for each project\n",
+ "sgp.DLCProject().run_extract_frames(project_key)"
]
},
{
"cell_type": "markdown",
- "id": "df257015",
+ "id": "68110734",
"metadata": {},
"source": [
- "In order to use pre-labeled frames, you'll need to change the values in the\n",
- "labeled-data files. You can do that using the `import_labeled_frames` method,\n",
- "which expects:\n",
- "\n",
- "- `project_key` from your new project.\n",
- "- The absolute path to the project directory from which we'll import labeled\n",
- " frames.\n",
- "- The filenames, without extension, of the videos from which we want frames.\n"
+ "If you wish to use the DLC GUI on the computer you are currently using, this is the line used to label the frames you extracted:\n",
+ "```python\n",
+ "# comment this line out after frames are labeled for your project\n",
+ "sgp.DLCProject().run_label_frames(project_key)\n",
+ "```"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "8b241030",
+ "metadata": {},
+ "source": [
+ "Otherwise, it is easiest to label the frames on a local computer (like a MacBook) that can run DeepLabCut's GUI well. Instructions:<br>\n",
+ "1. Install DLC on your local machine (preferably into a 'Src' folder): https://deeplabcut.github.io/DeepLabCut/docs/installation.html\n",
+ "2. Upload the extracted frames saved on nimbus (should be `/nimbus/deeplabcut/<project_name>/labeled-data`) AND the project's associated config file (should be `/nimbus/deeplabcut/<project_name>/config.yaml`) to Box (free with UCSF)\n",
+ "3. Download the labeled-data folder and config file to your local machine from Box\n",
+ "4. Create a 'projects' folder where you installed DeepLabCut; create a new folder there with your complete project name; save the downloaded files there.\n",
+ "5. Edit the config.yaml file: line 9, which defines `project_path`, needs to be the file path where the project is saved on your local machine (ex: `/Users/lorenlab/Src/DeepLabCut/projects/tutorial_scratch_DG-LorenLab-2023-08-16`)\n",
+ "6. Open the DLC GUI through the terminal<br>\n",
+ "   (ex: `conda activate miniconda/envs/DEEPLABCUT_M1`<br>\n",
+ "   `pythonw -m deeplabcut`)\n",
+ "7. Load an existing project; choose the config.yaml file\n",
+ "8. Label frames; labeling tutorial: https://www.youtube.com/watch?v=hsA9IB5r73E\n",
+ "9. Once all frames are labeled, re-upload the labeled-data folder to Box and overwrite it in the original nimbus location so that your completed frames are ready to be used in the model."
+ ]
+ },
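The `project_path` edit in the instructions above can also be scripted. This is a minimal stdlib-only sketch, assuming a DLC-style `config.yaml` with a top-level `project_path:` line; the file contents and paths here are hypothetical examples, not your real project:

```python
# Minimal sketch (assumption: a DLC-style config.yaml containing a
# top-level "project_path:" line; paths below are hypothetical).
from pathlib import Path
import tempfile

def set_project_path(config_file: Path, new_path: str) -> None:
    """Rewrite the project_path line of a DLC config.yaml in place."""
    lines = config_file.read_text().splitlines()
    lines = [
        f"project_path: {new_path}" if line.startswith("project_path:") else line
        for line in lines
    ]
    config_file.write_text("\n".join(lines) + "\n")

# Demonstrate on a throwaway config file
with tempfile.TemporaryDirectory() as tmp:
    cfg = Path(tmp) / "config.yaml"
    cfg.write_text("Task: tutorial\nproject_path: /nimbus/deeplabcut/old\n")
    set_project_path(cfg, "/Users/lorenlab/Src/DeepLabCut/projects/tutorial")
    print(cfg.read_text())
```

A plain-text rewrite like this avoids a YAML dependency but assumes the key sits at the start of a line, as it does in standard DLC config files.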
+ {
+ "cell_type": "markdown",
+ "id": "c12dd229-2f8b-455a-a7b1-a20916cefed9",
+ "metadata": {},
+ "source": [
+ "Now we can check the `DLCProject.File` part table and see all of our training files and videos there!"
]
},
{
"cell_type": "code",
"execution_count": null,
- "id": "520a9526-fcd1-417b-b368-00d17e0284e2",
+ "id": "3d4f3fa6-cce9-4d4a-a252-3424313c6a97",
"metadata": {},
"outputs": [],
"source": [
- "sgp.DLCProject.import_labeled_frames(\n",
- " project_key.copy(),\n",
- " import_project_path=\"/nimbus/deeplabcut/projects/tutorial_model-LorenLab-2022-07-15/\",\n",
- " video_filenames=[\"20201103_peanut_04_r2\", \"20210529_J16_02_r1\"],\n",
- " skip_duplicates=True,\n",
- ")"
+ "sgp.DLCProject.File & project_key"
]
},
{
@@ -506,8 +431,8 @@
"metadata": {},
"source": [
"\n",
- " This step and beyond should be run on a GPU-enabled machine.\n",
- "
\n"
+ " This step and beyond should be run on a GPU-enabled machine.\n",
+ ""
]
},
{
@@ -553,7 +478,7 @@
"metadata": {},
"outputs": [],
"source": [
- "gputouse = 1 ## 1-9"
+ "gputouse = 1 # 1-9"
]
},
{
@@ -596,8 +521,7 @@
"id": "6b6cc709",
"metadata": {},
"source": [
- "Next we'll modify the `project_key` to include the entries for\n",
- "`DLCModelTraining`\n"
+ "Next we'll modify the `project_key` from above to include the necessary entries for `DLCModelTraining`"
]
},
{
@@ -619,16 +543,16 @@
"source": [
"We can insert an entry into `DLCModelTrainingSelection` and populate `DLCModelTraining`.\n",
"\n",
- "_Note:_ You can stop training at any point using `I + I` or interrupt the Kernel\n"
+ "_Note:_ You can stop training at any point using `I + I` or interrupt the Kernel. \n",
+ "\n",
+ "The maximum total number of training iterations is 1030000; you can end training before this amount if the learning rate (lr) and total loss have plateaued and are very close to 0.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
- "id": "d56d3c39-7b85-4f6a-b9fb-816a1d1912da",
- "metadata": {
- "tags": []
- },
+ "id": "3c252541",
+ "metadata": {},
"outputs": [],
"source": [
"sgp.DLCModelTrainingSelection.heading"
@@ -639,9 +563,6 @@
"execution_count": null,
"id": "139d2f30",
"metadata": {
- "jupyter": {
- "outputs_hidden": true
- },
"tags": []
},
"outputs": [],
@@ -669,7 +590,7 @@
"id": "da004b3e",
"metadata": {},
"source": [
- "Here we'll make sure that the entry made it into the table properly!\n"
+ "Here we'll make sure that the entry made it into the table properly!"
]
},
{
@@ -691,7 +612,7 @@
"source": [
"Populating `DLCModelTraining` automatically inserts the entry into\n",
"`DLCModelSource`, which is used to select between models trained using Spyglass\n",
- "vs. other tools.\n"
+ "vs. other tools."
]
},
{
@@ -709,7 +630,7 @@
"id": "92cb8969",
"metadata": {},
"source": [
- "The `source` field will only accept _\"FromImport\"_ or _\"FromUpstream\"_ as entries. Let's checkout the `FromUpstream` part table attached to `DLCModelSource` below.\n"
+ "The `source` field will only accept _\"FromImport\"_ or _\"FromUpstream\"_ as entries. Let's checkout the `FromUpstream` part table attached to `DLCModelSource` below."
]
},
{
@@ -733,7 +654,7 @@
"information for all trained models.\n",
"\n",
"First, we'll need to determine a set of parameters for our model to select the\n",
- "correct model file. Here is the default:\n"
+ "correct model file. Here is the default:"
]
},
{
@@ -743,7 +664,7 @@
"metadata": {},
"outputs": [],
"source": [
- "pprint(sgp.DLCModelParams.get_default())"
+ "sgp.DLCModelParams.get_default()"
]
},
{
@@ -774,7 +695,7 @@
"metadata": {},
"source": [
"We can insert sets of parameters into `DLCModelSelection` and populate\n",
- "`DLCModel`.\n"
+ "`DLCModel`."
]
},
{
@@ -784,10 +705,30 @@
"metadata": {},
"outputs": [],
"source": [
- "temp_model_key = (sgp.DLCModelSource & model_training_key).fetch1(\"KEY\")\n",
- "sgp.DLCModelSelection().insert1(\n",
- " {**temp_model_key, \"dlc_model_params_name\": \"default\"}, skip_duplicates=True\n",
- ")\n",
+ "temp_model_key = (sgp.DLCModelSource & model_training_key).fetch1(\"KEY\")"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "4e418eba",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# comment these lines out after successfully inserting, for each project\n",
+ "sgp.DLCModelSelection().insert1({\n",
+ " **temp_model_key,\n",
+ " \"dlc_model_params_name\": \"default\"},\n",
+ " skip_duplicates=True)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "ccae03bb",
+ "metadata": {},
+ "outputs": [],
+ "source": [
"model_key = (sgp.DLCModelSelection & temp_model_key).fetch1(\"KEY\")\n",
"sgp.DLCModel.populate(model_key)"
]
@@ -797,7 +738,7 @@
"id": "f8f1b839",
"metadata": {},
"source": [
- "Again, let's make sure that everything looks correct in `DLCModel`.\n"
+ "Again, let's make sure that everything looks correct in `DLCModel`."
]
},
{
@@ -812,15 +753,191 @@
},
{
"cell_type": "markdown",
- "id": "be097052-3789-4d55-aca1-e44d426c39b4",
+ "id": "02202650",
+ "metadata": {},
+ "source": [
+ "## Loop Begins"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "dd886971",
"metadata": {},
"source": [
- "### Next Steps\n",
+ "We can view all `VideoFile` entries with the specified `camera_name` for this project to ensure that the rat whose position you wish to model appears in the `matching_rows` table below."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "844174d2",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "camera_name = \"SleepBox_camera\"\n",
+ "matching_rows = sgc.VideoFile() & {\"camera_name\": camera_name}\n",
+ "matching_rows"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "d0315698",
+ "metadata": {},
+ "source": [
+ "The `DLCPoseEstimationSelection` insertion step will convert your .h264 video to an .mp4 first and save it in `/nimbus/deeplabcut/video`. If this video already exists here, the insertion will never complete.\n",
"\n",
- "With our trained model in place, we're ready to move on to pose estimation\n",
- "(notebook coming soon!).\n",
+ "We first delete any .mp4 that exists for this video from the nimbus folder:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "8884111c",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "! find /nimbus/deeplabcut/video -type f -name '*20230606_SC38*' -delete # change based on date and rat with which you are training the model"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "510cf05b",
+ "metadata": {},
+ "source": [
+ "If the first insertion step (for the pose estimation task) fails in either trigger or load mode for an epoch, delete the partial `DLCPoseEstimationSelection` entry before retrying:\n",
+ "```python\n",
+ "(sgp.DLCPoseEstimationSelection & {\n",
+ "    \"nwb_file_name\": nwb_file_name,\n",
+ "    \"epoch\": epoch,\n",
+ "    \"video_file_num\": video_file_num,\n",
+ "    **model_key,\n",
+ "}).delete()\n",
+ "```"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "7eb99b6f",
+ "metadata": {},
+ "source": [
+ "This loop will generate position data for all epochs associated with the pre-defined camera in one day, for one rat (based on the NWB file; see the *** comment in the loop below).\n",
+ "<br>The output should print Pose Estimation and Centroid plots for each epoch.\n",
"\n",
- "\n"
+ "- It defines `col1val` as each `nwb_file_name` entry in the table, one at a time.\n",
+ "- Next, it checks whether the trial on which you are testing this model appears in the string for the current `col1val`; if not, it re-defines `col1val` as the next `nwb_file_name` entry and retries this step.\n",
+ "- If the previous step succeeds, it saves `col2val` and `col3val` as the `epoch` and the `video_file_num`, respectively, based on the `nwb_file_name`. From there, it iterates through the insertion and population steps required to extract position data, which are laid out in notebook 21_DLC.ipynb."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "f41a51d1",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "for row in matching_rows:\n",
+ " col1val = row[\"nwb_file_name\"]\n",
+ " if \"SC3820230606\" in col1val: #*** change depending on rat/day!!!\n",
+ " col2val = row[\"epoch\"]\n",
+ " col3val = row[\"video_file_num\"]\n",
+ "\n",
+ " ##insert pose estimation task\n",
+ " pose_estimation_key = sgp.DLCPoseEstimationSelection.insert_estimation_task(\n",
+ " {\"nwb_file_name\": col1val,\n",
+ " \"epoch\": col2val,\n",
+ " \"video_file_num\": col3val,\n",
+ " **model_key\n",
+ " },\n",
+ " task_mode = \"trigger\", #load or trigger\n",
+ " params = {\"gputouse\": gputouse, \"videotype\": \"mp4\"}\n",
+ " )\n",
+ "\n",
+ " ##populate DLC Pose Estimation\n",
+ " sgp.DLCPoseEstimation().populate(pose_estimation_key)\n",
+ "\n",
+ " ##start smooth interpolation\n",
+ " si_params_name = \"just_nan\"\n",
+ " si_key = pose_estimation_key.copy()\n",
+ " fields = list(sgp.DLCSmoothInterpSelection.fetch().dtype.fields.keys())\n",
+ " si_key = {key: val for key, val in si_key.items() if key in fields}\n",
+ " bodyparts = [\"greenLED\", \"redLED_C\"]\n",
+ " sgp.DLCSmoothInterpSelection.insert(\n",
+ " [\n",
+ " {\n",
+ " **si_key,\n",
+ " \"bodypart\": bodypart,\n",
+ " \"dlc_si_params_name\": si_params_name,\n",
+ " }\n",
+ " for bodypart in bodyparts\n",
+ " ],\n",
+ " skip_duplicates = True,\n",
+ " )\n",
+ " sgp.DLCSmoothInterp().populate(si_key)\n",
+ " (sgp.DLCSmoothInterp() & {**si_key, \"bodypart\": bodyparts[0]}\n",
+ " ).fetch1_dataframe().plot.scatter(x=\"x\", y=\"y\", s=1, figsize=(5, 5))\n",
+ "\n",
+ " ##smoothinterpcohort\n",
+ " cohort_key = si_key.copy()\n",
+ " if \"bodypart\" in cohort_key:\n",
+ " del cohort_key[\"bodypart\"]\n",
+ " if \"dlc_si_params_name\" in cohort_key:\n",
+ " del cohort_key[\"dlc_si_params_name\"]\n",
+ " cohort_key[\"dlc_si_cohort_selection_name\"] = \"green_red_led\"\n",
+ " cohort_key[\"bodyparts_params_dict\"] = {\"greenLED\": si_params_name, \"redLED_C\": si_params_name,}\n",
+ " sgp.DLCSmoothInterpCohortSelection().insert1(cohort_key, skip_duplicates=True)\n",
+ " sgp.DLCSmoothInterpCohort.populate(cohort_key)\n",
+ "\n",
+ " ##DLC Centroid\n",
+ " centroid_params_name = \"default\"\n",
+ " centroid_key = cohort_key.copy()\n",
+ " fields = list(sgp.DLCCentroidSelection.fetch().dtype.fields.keys())\n",
+ " centroid_key = {key: val for key, val in centroid_key.items() if key in fields}\n",
+ " centroid_key[\"dlc_centroid_params_name\"] = centroid_params_name\n",
+ " sgp.DLCCentroidSelection.insert1(centroid_key, skip_duplicates=True)\n",
+ " sgp.DLCCentroid.populate(centroid_key)\n",
+ " (sgp.DLCCentroid() & centroid_key).fetch1_dataframe().plot.scatter(\n",
+ " x=\"position_x\",\n",
+ " y=\"position_y\",\n",
+ " c=\"speed\",\n",
+ " colormap=\"viridis\",\n",
+ " alpha=0.5,\n",
+ " s=0.5,\n",
+ " figsize=(10, 10),\n",
+ " )\n",
+ "\n",
+ " ##DLC Orientation\n",
+ " dlc_orientation_params_name = \"default\"\n",
+ " fields = list(sgp.DLCOrientationSelection.fetch().dtype.fields.keys())\n",
+ " orient_key = {key: val for key, val in cohort_key.items() if key in fields}\n",
+ " orient_key[\"dlc_orientation_params_name\"] = dlc_orientation_params_name\n",
+ " sgp.DLCOrientationSelection().insert1(orient_key, skip_duplicates=True)\n",
+ " sgp.DLCOrientation().populate(orient_key)\n",
+ "\n",
+ " ##DLCPosV1\n",
+ " fields = list(sgp.DLCPosV1.fetch().dtype.fields.keys())\n",
+ " dlc_key = {key: val for key, val in centroid_key.items() if key in fields}\n",
+ " dlc_key[\"dlc_si_cohort_centroid\"] = centroid_key[\"dlc_si_cohort_selection_name\"]\n",
+ " dlc_key[\"dlc_si_cohort_orientation\"] = orient_key[\n",
+ " \"dlc_si_cohort_selection_name\"\n",
+ " ]\n",
+ " dlc_key[\"dlc_orientation_params_name\"] = orient_key[\n",
+ " \"dlc_orientation_params_name\"\n",
+ " ]\n",
+ " sgp.DLCPosSelection().insert1(dlc_key, skip_duplicates=True)\n",
+ " sgp.DLCPosV1().populate(dlc_key)\n",
+ "\n",
+ " else:\n",
+ " continue"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "be097052-3789-4d55-aca1-e44d426c39b4",
+ "metadata": {},
+ "source": [
+ "### _CONGRATULATIONS!!_\n",
+ "Please treat yourself to a nice tea break :-)"
]
},
{
@@ -828,7 +945,7 @@
"id": "c71c90a2",
"metadata": {},
"source": [
- "### [Return To Table of Contents](#TableOfContents)\n"
+ "### [Return To Table of Contents](#TableOfContents)"
]
}
],
diff --git a/notebooks/22_Position_DLC_2.ipynb b/notebooks/22_Position_DLC_2.ipynb
deleted file mode 100644
index cfc86a985..000000000
--- a/notebooks/22_Position_DLC_2.ipynb
+++ /dev/null
@@ -1,429 +0,0 @@
-{
- "cells": [
- {
- "cell_type": "markdown",
- "id": "de73cd97",
- "metadata": {},
- "source": [
- "# Position - DeepLabCut PreTrained\n"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "3c2ac37a",
- "metadata": {},
- "source": [
- "## Overview\n"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "6bc203b0",
- "metadata": {},
- "source": [
- "_Developer Note:_ if you may make a PR in the future, be sure to copy this\n",
- "notebook, and use the `gitignore` prefix `temp` to avoid future conflicts.\n",
- "\n",
- "This is one notebook in a multi-part series on Spyglass.\n",
- "\n",
- "- To set up your Spyglass environment and database, see\n",
- " [the Setup notebook](./00_Setup.ipynb)\n",
- "- For additional info on DataJoint syntax, including table definitions and\n",
- " inserts, see\n",
- " [the Insert Data notebook](./01_Insert_Data.ipynb)\n",
- "\n",
- "This is a tutorial will cover how to extract position given a pre-trained DeepLabCut (DLC) model. It will walk through adding your DLC model to Spyglass.\n",
- "\n",
- "If you already have a model in the database, skip to the \n",
- "[next tutorial](./23_Position_DLC_3.ipynb)."
- ]
- },
- {
- "cell_type": "markdown",
- "id": "e3ff00d6",
- "metadata": {},
- "source": [
- "## Imports\n"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 13,
- "id": "704fe083",
- "metadata": {},
- "outputs": [],
- "source": [
- "import os\n",
- "import datajoint as dj\n",
- "\n",
- "# change to the upper level folder to detect dj_local_conf.json\n",
- "if os.path.basename(os.getcwd()) == \"notebooks\":\n",
- " os.chdir(\"..\")\n",
- "dj.config.load(\"dj_local_conf.json\") # load config for database connection info\n",
- "\n",
- "from spyglass.settings import load_config\n",
- "\n",
- "load_config(base_dir=\"/home/cb/wrk/zOther/data/\")\n",
- "\n",
- "import spyglass.common as sgc\n",
- "import spyglass.position.v1 as sgp\n",
- "from spyglass.position import PositionOutput\n",
- "\n",
- "# ignore datajoint+jupyter async warnings\n",
- "import warnings\n",
- "\n",
- "warnings.simplefilter(\"ignore\", category=DeprecationWarning)\n",
- "warnings.simplefilter(\"ignore\", category=ResourceWarning)"
- ]
- },
- {
- "attachments": {},
- "cell_type": "markdown",
- "id": "7e3e1854-0baf-44f4-a5a6-ddc1fdb4c3e1",
- "metadata": {},
- "source": [
- "#### Here is a schematic showing the tables used in this notebook.\n",
- "\n",
- "![dlc_existing.png|2000x900](./../notebook-images/dlc_existing.png)"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "0388fc5f",
- "metadata": {},
- "source": [
- "## Table of Contents\n",
- "\n",
- "- [`DLCProject`](#DLCProject)\n",
- "- [`DLCModel`](#DLCModel)\n",
- "\n",
- "\n",
- "You can click on any header to return to the Table of Contents"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "6adc175d",
- "metadata": {},
- "source": [
- "## [DLCProject](#ToC) "
- ]
- },
- {
- "cell_type": "markdown",
- "id": "e7c51888-b05d-4a51-bb9f-b075db4bbf49",
- "metadata": {},
- "source": [
- "We'll look at the BodyPart table, which stores standard names of body parts used within DLC models."
- ]
- },
- {
- "cell_type": "markdown",
- "id": "f8a64a57",
- "metadata": {},
- "source": [
- "\n",
- "Notes:\n",
- "\n",
- "- Please do not add to the `BodyPart` table in the production\n",
- "  database unless necessary."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "id": "c938c639",
- "metadata": {},
- "outputs": [],
- "source": [
- "sgp.BodyPart()"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "f422dd98-728b-4b48-877b-f77c2d60872f",
- "metadata": {},
- "source": [
- "We can add an existing project to the `DLCProject` table with `insert_existing_project`, using:\n",
- "\n",
- "- `project_name`: a short, unique, descriptive project name to reference\n",
- " throughout the pipeline\n",
- "- `lab_team`: team name from `LabTeam`\n",
- "- `config_path`: string path to a DLC `config.yaml`\n",
- "- `bodyparts`: optional list of bodyparts used in the project\n",
- "- `frames_per_video`: optional, number of frames to extract for training from\n",
- " each video"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "id": "f20ecce9",
- "metadata": {},
- "outputs": [],
- "source": [
- "project_name = \"tutorial_DG\"\n",
- "lab_team = \"LorenLab\"\n",
- "project_key = sgp.DLCProject.insert_existing_project(\n",
- " project_name=project_name,\n",
- " lab_team=lab_team,\n",
- " config_path=\"/nimbus/deeplabcut/projects/tutorial_model-LorenLab-2022-07-15/config.yaml\",\n",
- " bodyparts=[\"redLED_C\", \"greenLED\", \"redLED_L\", \"redLED_R\", \"tailBase\"],\n",
- " frames_per_video=200,\n",
- " skip_duplicates=True,\n",
- ")"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "id": "6d9d4223-63da-462e-8164-7cc63c945760",
- "metadata": {},
- "outputs": [],
- "source": [
- "sgp.DLCProject() & {\"project_name\": project_name}"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "1c7876e7",
- "metadata": {},
- "source": [
- "## [DLCModel](#ToC) "
- ]
- },
- {
- "cell_type": "markdown",
- "id": "fa36a042-f13e-4a36-812a-a4efaeb57a09",
- "metadata": {},
- "source": [
- "The `DLCModelInput` table has `dlc_model_name` and `project_name` as primary keys and `project_path` as a secondary key. "
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "id": "25f0a45e-5bd9-48bf-a79d-908bd5a17235",
- "metadata": {},
- "outputs": [],
- "source": [
- "sgp.DLCModelInput()"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "39ee99ae-586a-4cbb-9255-15ddd594b1b7",
- "metadata": {},
- "source": [
- "We can modify the `project_key` to replace `config_path` with `project_path` to\n",
- "fit with the fields in `DLCModelInput`"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "id": "fc961e93-8fe8-4069-a945-a9fc1e1ad993",
- "metadata": {},
- "outputs": [],
- "source": [
- "print(f\"current project_key:\\n{project_key}\")\n",
- "if \"project_path\" not in project_key:\n",
- " project_key[\"project_path\"] = os.path.dirname(project_key[\"config_path\"])\n",
- " del project_key[\"config_path\"]\n",
- " print(f\"updated project_key:\\n{project_key}\")"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "4b958ef7-160c-4141-a7c2-1177fdfd6eb6",
- "metadata": {},
- "source": [
- "After adding a unique `dlc_model_name` to `project_key`, we insert into\n",
- "`DLCModelInput`."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "id": "49650dc2",
- "metadata": {},
- "outputs": [],
- "source": [
- "dlc_model_name = \"tutorial_model_DG\"\n",
- "sgp.DLCModelInput().insert1(\n",
- " {\"dlc_model_name\": dlc_model_name, **project_key}, skip_duplicates=True\n",
- ")\n",
- "sgp.DLCModelInput()"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "d04c4785-23b4-4a79-9ef9-3815c1215422",
- "metadata": {},
- "source": [
- "Inserting into `DLCModelInput` will also populate `DLCModelSource`, which\n",
- "records whether or not a model was trained with Spyglass."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "id": "01021925",
- "metadata": {},
- "outputs": [],
- "source": [
- "sgp.DLCModelSource() & project_key"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "8d8756c5-0d85-490b-a712-a95faa074b43",
- "metadata": {},
- "source": [
- "The `source` field will only accept _\"FromImport\"_ or _\"FromUpstream\"_ as entries. Let's check out the `FromImport` part table attached to `DLCModelSource` below."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "id": "22fb6d58-225f-49fb-86ee-4b3197aa841f",
- "metadata": {},
- "outputs": [],
- "source": [
- "sgp.DLCModelSource.FromImport() & project_key"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "02b9297c-49dc-43b8-ad7b-3897c4d442bf",
- "metadata": {},
- "source": [
- "Next we'll get ready to populate the `DLCModel` table, which holds all the relevant information for both pre-trained models and models trained within Spyglass.\n",
- "\n",
- "First we'll need to determine a set of parameters for our model to select the correct model file. We can visualize a default set below:"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "id": "8e01d109",
- "metadata": {},
- "outputs": [],
- "source": [
- "sgp.DLCModelParams.get_default()"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "8aa565b0-37e4-462f-b0d8-fd1b1686b69c",
- "metadata": {},
- "source": [
- "Here is the syntax to add your own parameter set:\n",
- "\n",
- "```python\n",
- "dlc_model_params_name = \"make_this_yours\"\n",
- "params = {\n",
- " \"params\": {},\n",
- " \"shuffle\": 1,\n",
- " \"trainingsetindex\": 0,\n",
- " \"model_prefix\": \"\",\n",
- "}\n",
- "sgp.DLCModelParams.insert1(\n",
- " {\"dlc_model_params_name\": dlc_model_params_name, \"params\": params},\n",
- " skip_duplicates=True,\n",
- ")\n",
- "```"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "c5acd2c6",
- "metadata": {},
- "source": [
- "We can insert sets of parameters into `DLCModelSelection` and populate\n",
- "`DLCModel`."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "id": "03b10bd6",
- "metadata": {},
- "outputs": [],
- "source": [
- "temp_model_key = (sgp.DLCModelSource.FromImport() & project_key).fetch1(\"KEY\")\n",
- "sgp.DLCModelSelection().insert1(\n",
- " {**temp_model_key, \"dlc_model_params_name\": \"default\"}, skip_duplicates=True\n",
- ")\n",
- "model_key = (sgp.DLCModelSelection & temp_model_key).fetch1(\"KEY\")\n",
- "sgp.DLCModel.populate(model_key)"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "a920fc2d-5b81-4d4b-817b-d7549d2810ac",
- "metadata": {},
- "source": [
- "And of course make sure it populated correctly"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "id": "930df143-c756-4904-b4b6-7eed8c194b9d",
- "metadata": {},
- "outputs": [],
- "source": [
- "sgp.DLCModel() & model_key"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "887c5349",
- "metadata": {},
- "source": [
- "## Next Steps\n",
- "\n",
- "With our trained model in place, we're ready to move on to\n",
- "[pose estimation](./23_Position_DLC_3.ipynb)."
- ]
- },
- {
- "cell_type": "markdown",
- "id": "5dbb3e99",
- "metadata": {},
- "source": [
- "### [`Return To Table of Contents`](#ToC)"
- ]
- }
- ],
- "metadata": {
- "kernelspec": {
- "display_name": "spy",
- "language": "python",
- "name": "python3"
- },
- "language_info": {
- "codemirror_mode": {
- "name": "ipython",
- "version": 3
- },
- "file_extension": ".py",
- "mimetype": "text/x-python",
- "name": "python",
- "nbconvert_exporter": "python",
- "pygments_lexer": "ipython3",
- "version": "3.9.16"
- }
- },
- "nbformat": 4,
- "nbformat_minor": 5
-}
diff --git a/notebooks/23_Position_DLC_3.ipynb b/notebooks/23_Position_DLC_3.ipynb
deleted file mode 100644
index 00a69cd25..000000000
--- a/notebooks/23_Position_DLC_3.ipynb
+++ /dev/null
@@ -1,963 +0,0 @@
-{
- "cells": [
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "# Position - DeepLabCut Estimation"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "## Overview\n"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "_Developer Note:_ if you may make a PR in the future, be sure to copy this\n",
- "notebook, and use the `gitignore` prefix `temp` to avoid future conflicts.\n",
- "\n",
- "This is one notebook in a multi-part series on Spyglass.\n",
- "\n",
- "- To set up your Spyglass environment and database, see\n",
- " [the Setup notebook](./00_Setup.ipynb)\n",
- "- For additional info on DataJoint syntax, including table definitions and\n",
- " inserts, see\n",
- " [the Insert Data notebook](./01_Insert_Data.ipynb)\n",
- "\n",
- "This tutorial will extract position via DeepLabCut (DLC). It will walk through... \n",
- "- executing pose estimation\n",
- "- processing the pose estimation output to extract a centroid and orientation\n",
- "- inserting the resulting information into the `PositionOutput` table\n",
- "\n",
- "This tutorial assumes you already have a model in your database. If that's not\n",
- "the case, you can either [train one from scratch](./21_Position_DLC_1.ipynb)\n",
- "or [load an existing project](./22_Position_DLC_2.ipynb)."
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "Here is a schematic showing the tables used in this pipeline.\n",
- "\n",
- "![dlc_scratch.png|2000x900](./../notebook-images/dlc_scratch.png)"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "### Table of Contents\n",
- "\n",
- "- [Imports](#imports)\n",
- "- [GPU](#gpu)\n",
- "- [`DLCPoseEstimation`](#DLCPoseEstimation1)\n",
- "- [`DLCSmoothInterp`](#DLCSmoothInterp1)\n",
- "- [`DLCSmoothInterpCohort`](#DLCSmoothInterpCohort1)\n",
- "- [`DLCCentroid`](#DLCCentroid1)\n",
- "- [`DLCOrientation`](#DLCOrientation1)\n",
- "- [`DLCPos`](#DLCPos1)\n",
- "- [`DLCPosVideo`](#DLCPosVideo1)\n",
- "- [`PositionOutput`](#PositionOutput1)\n",
- "- [`PositionVideo`](#PositionVideo1)\n",
- "\n",
- "__You can click on any header to return to the Table of Contents__"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "### [Imports](#TableOfContents)\n"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [
- {
- "name": "stderr",
- "output_type": "stream",
- "text": [
- "[2023-07-28 14:45:50,776][INFO]: Connecting root@localhost:3306\n",
- "[2023-07-28 14:45:50,804][INFO]: Connected root@localhost:3306\n"
- ]
- }
- ],
- "source": [
- "import os\n",
- "import datajoint as dj\n",
- "from pprint import pprint\n",
- "\n",
- "import spyglass.common as sgc\n",
- "import spyglass.position.v1 as sgp\n",
- "\n",
- "# change to the upper level folder to detect dj_local_conf.json\n",
- "if os.path.basename(os.getcwd()) == \"notebooks\":\n",
- " os.chdir(\"..\")\n",
- "dj.config.load(\"dj_local_conf.json\") # load config for database connection info\n",
- "\n",
- "# ignore datajoint+jupyter async warnings\n",
- "import warnings\n",
- "\n",
- "warnings.simplefilter(\"ignore\", category=DeprecationWarning)\n",
- "warnings.simplefilter(\"ignore\", category=ResourceWarning)"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "### [GPU](#TableOfContents)"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "For longer videos, we'll need GPU support. The cell below shows how much memory\n",
- "each core is using; set the `gputouse` variable accordingly."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [
- {
- "data": {
- "text/plain": [
- "{0: 80383, 1: 35, 2: 35, 3: 35, 4: 35, 5: 35, 6: 35, 7: 35, 8: 35, 9: 35}"
- ]
- },
- "metadata": {},
- "output_type": "display_data"
- }
- ],
- "source": [
- "sgp.dlc_utils.get_gpu_memory()"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "Set GPU core:"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "gputouse = 1 ## 1-9"
- ]
- },
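The manual choice above can be automated. Here is a minimal sketch (our own helper, not a Spyglass function) that picks the core with the least memory in use, given a `{core: used_MB}` dict shaped like the output of `get_gpu_memory()` above:

```python
def pick_freest_gpu(gpu_memory: dict) -> int:
    """Return the core index with the least GPU memory currently in use."""
    return min(gpu_memory, key=gpu_memory.get)

# With the usage numbers shown earlier, core 0 is busy and the rest are free:
usage = {0: 80383, 1: 35, 2: 35, 3: 35}
gputouse = pick_freest_gpu(usage)  # -> 1 (first of the tied free cores)
```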
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "#### [DLCPoseEstimation](#TableOfContents) \n",
- "\n",
- "With our trained model in place, we're ready to set up Pose Estimation on a\n",
- "behavioral video of your choice. We can select a video with `nwb_file_name` and\n",
- "`epoch`, making sure there's an entry in the `VideoFile` table."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "nwb_file_name = \"J1620210604_.nwb\"\n",
- "epoch = 14\n",
- "sgc.VideoFile() & {\"nwb_file_name\": nwb_file_name, \"epoch\": epoch}"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "Using `insert_estimation_task` will convert our video to .mp4 format (DLC\n",
- "struggles with .h264) and determine the directory in which we'll store the pose\n",
- "estimation results.\n",
- "\n",
- "- `task_mode` (trigger or load) determines whether populating\n",
- "  `DLCPoseEstimation` triggers a new pose estimation or loads an existing one.\n",
- "- `video_file_num` will be 0 in almost all\n",
- "  cases.\n",
- "- `gputouse` was set in the GPU section above. It may be a good idea to make sure\n",
- "  that core is still free before moving forward."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "pose_estimation_key = sgp.DLCPoseEstimationSelection.insert_estimation_task(\n",
- " {\n",
- " \"nwb_file_name\": nwb_file_name,\n",
- " \"epoch\": epoch,\n",
- " \"video_file_num\": 0,\n",
- " **model_key,\n",
- " },\n",
- " task_mode=\"trigger\",\n",
- " params={\"gputouse\": gputouse, \"videotype\": \"mp4\"},\n",
- ")"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "_Note:_ Populating `DLCPoseEstimation` may take some time for full datasets"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "sgp.DLCPoseEstimation().populate(pose_estimation_key)"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "Let's visualize the output from Pose Estimation"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "(sgp.DLCPoseEstimation() & pose_estimation_key).fetch_dataframe()"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "#### [DLCSmoothInterp](#TableOfContents) "
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "After pose estimation, we can interpolate over low likelihood periods and smooth\n",
- "the resulting position.\n",
- "\n",
- "First we define some parameters. We can see the default parameter set below."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "pprint(sgp.DLCSmoothInterpParams.get_default())\n",
- "si_params_name = \"default\""
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "To change any of these parameters, one would do the following:\n",
- "\n",
- "```python\n",
- "si_params_name = \"your_unique_param_name\"\n",
- "params = {\n",
- " \"smoothing_params\": {\n",
- " \"smoothing_duration\": 0.00,\n",
- " \"smooth_method\": \"moving_avg\",\n",
- " },\n",
- " \"interp_params\": {\"likelihood_thresh\": 0.00},\n",
- " \"max_plausible_speed\": 0,\n",
- " \"speed_smoothing_std_dev\": 0.000,\n",
- "}\n",
- "sgp.DLCSmoothInterpParams().insert1(\n",
- " {\"dlc_si_params_name\": si_params_name, \"params\": params},\n",
- " skip_duplicates=True,\n",
- ")\n",
- "```"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "We'll create a dictionary with the correct set of keys for the `DLCSmoothInterpSelection` table"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "si_key = pose_estimation_key.copy()\n",
- "fields = list(sgp.DLCSmoothInterpSelection.fetch().dtype.fields.keys())\n",
- "si_key = {key: val for key, val in si_key.items() if key in fields}\n",
- "si_key"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "We can insert all of the bodyparts we want to process into\n",
- "`DLCSmoothInterpSelection`. Here are the bodyparts we have available to us:"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "pprint((sgp.DLCPoseEstimation.BodyPart & pose_estimation_key).fetch(\"bodypart\"))"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "While `insert1` can insert a single bodypart, we suggest using `insert` with a list of keys to process multiple bodyparts at once."
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "We'll set a list of bodyparts and then insert them into\n",
- "`DLCSmoothInterpSelection`."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "bodyparts = [\"greenLED\", \"redLED_C\"]\n",
- "sgp.DLCSmoothInterpSelection.insert(\n",
- " [\n",
- " {\n",
- " **si_key,\n",
- " \"bodypart\": bodypart,\n",
- " \"dlc_si_params_name\": si_params_name,\n",
- " }\n",
- " for bodypart in bodyparts\n",
- " ],\n",
- " skip_duplicates=True,\n",
- ")"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "And verify the entry:"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "sgp.DLCSmoothInterpSelection() & si_key"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "Now, we populate `DLCSmoothInterp`, which will perform smoothing and\n",
- "interpolation on all of the bodyparts specified."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "sgp.DLCSmoothInterp().populate(si_key)"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "And let's visualize the resulting position data using a scatter plot"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "(\n",
- " sgp.DLCSmoothInterp() & {**si_key, \"bodypart\": bodyparts[0]}\n",
- ").fetch1_dataframe().plot.scatter(x=\"x\", y=\"y\", s=1, figsize=(5, 5))"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "#### [DLCSmoothInterpCohort](#TableOfContents) "
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "After smoothing/interpolation, we need to select bodyparts from which we want to\n",
- "derive a centroid and orientation, which is performed by the\n",
- "`DLCSmoothInterpCohort` table."
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "First, let's make a key that represents the 'cohort', using\n",
- "`dlc_si_cohort_selection_name`. We'll need a bodypart dictionary with bodyparts\n",
- "as keys and the smoothing/interpolation parameter set names as values."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "cohort_key = si_key.copy()\n",
- "if \"bodypart\" in cohort_key:\n",
- " del cohort_key[\"bodypart\"]\n",
- "if \"dlc_si_params_name\" in cohort_key:\n",
- " del cohort_key[\"dlc_si_params_name\"]\n",
- "cohort_key[\"dlc_si_cohort_selection_name\"] = \"green_red_led\"\n",
- "cohort_key[\"bodyparts_params_dict\"] = {\n",
- " \"greenLED\": si_params_name,\n",
- " \"redLED_C\": si_params_name,\n",
- "}\n",
- "print(cohort_key)"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "We'll insert the cohort into `DLCSmoothInterpCohortSelection` and populate `DLCSmoothInterpCohort`, which collates the separately smoothed and interpolated bodyparts into a single entry."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "sgp.DLCSmoothInterpCohortSelection().insert1(cohort_key, skip_duplicates=True)\n",
- "sgp.DLCSmoothInterpCohort.populate(cohort_key)"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "And verify the entry:"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "sgp.DLCSmoothInterpCohort.BodyPart() & cohort_key"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "#### [DLCCentroid](#TableOfContents) "
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "With this cohort, we can determine a centroid using another set of parameters."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# Here is the default set\n",
- "print(sgp.DLCCentroidParams.get_default())\n",
- "centroid_params_name = \"default\""
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "Here is the syntax to add your own parameters:\n",
- "\n",
- "```python\n",
- "centroid_params = {\n",
- " \"centroid_method\": \"two_pt_centroid\",\n",
- " \"points\": {\n",
- " \"greenLED\": \"greenLED\",\n",
- " \"redLED_C\": \"redLED_C\",\n",
- " },\n",
- " \"speed_smoothing_std_dev\": 0.100,\n",
- "}\n",
- "centroid_params_name = \"your_unique_param_name\"\n",
- "sgp.DLCCentroidParams.insert1(\n",
- " {\n",
- " \"dlc_centroid_params_name\": centroid_params_name,\n",
- " \"params\": centroid_params,\n",
- " },\n",
- " skip_duplicates=True,\n",
- ")\n",
- "```"
- ]
- },
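For intuition, `two_pt_centroid` reduces to the midpoint of the two tracked LEDs, and orientation from the same pair is the angle of the vector between them. A simplified sketch (not the Spyglass implementation):

```python
import numpy as np

def two_pt_centroid(p1, p2):
    """Midpoint of two (N, 2) arrays of per-frame LED positions."""
    return (np.asarray(p1) + np.asarray(p2)) / 2

def two_pt_orientation(front, back):
    """Heading angle (radians) of the back->front vector, per frame."""
    d = np.asarray(front) - np.asarray(back)
    return np.arctan2(d[:, 1], d[:, 0])

green = np.array([[1.0, 0.0], [2.0, 2.0]])  # e.g. greenLED
red = np.array([[0.0, 0.0], [2.0, 0.0]])    # e.g. redLED_C
two_pt_centroid(green, red)     # [[0.5, 0.], [2., 1.]]
two_pt_orientation(green, red)  # [0., pi/2]
```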
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "We'll make a key to insert into `DLCCentroidSelection`."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "centroid_key = cohort_key.copy()\n",
- "fields = list(sgp.DLCCentroidSelection.fetch().dtype.fields.keys())\n",
- "centroid_key = {key: val for key, val in centroid_key.items() if key in fields}\n",
- "centroid_key[\"dlc_centroid_params_name\"] = centroid_params_name\n",
- "pprint(centroid_key)"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "After inserting into the selection table, we can populate `DLCCentroid`"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "sgp.DLCCentroidSelection.insert1(centroid_key, skip_duplicates=True)\n",
- "sgp.DLCCentroid.populate(centroid_key)"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "Here we can visualize the resulting centroid position"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "(sgp.DLCCentroid() & centroid_key).fetch1_dataframe().plot.scatter(\n",
- " x=\"position_x\",\n",
- " y=\"position_y\",\n",
- " c=\"speed\",\n",
- " colormap=\"viridis\",\n",
- " alpha=0.5,\n",
- " s=0.5,\n",
- " figsize=(10, 10),\n",
- ")"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "#### [DLCOrientation](#TableOfContents) "
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "We'll go through a similar process for orientation. "
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "pprint(sgp.DLCOrientationParams.get_default())\n",
- "dlc_orientation_params_name = \"default\""
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "We'll prune the `cohort_key` we used above and add our\n",
- "`dlc_orientation_params_name` to make it suitable for `DLCOrientationSelection`."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "fields = list(sgp.DLCOrientationSelection.fetch().dtype.fields.keys())\n",
- "orient_key = {key: val for key, val in cohort_key.items() if key in fields}\n",
- "orient_key[\"dlc_orientation_params_name\"] = dlc_orientation_params_name\n",
- "print(orient_key)"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "We'll insert into `DLCOrientationSelection` and then populate `DLCOrientation`"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "sgp.DLCOrientationSelection().insert1(orient_key, skip_duplicates=True)\n",
- "sgp.DLCOrientation().populate(orient_key)"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "We can fetch the orientation as a dataframe as quality assurance."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "(sgp.DLCOrientation() & orient_key).fetch1_dataframe()"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "#### [DLCPos](#TableOfContents) "
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "After processing the position data, we have to do a few table manipulations to standardize various outputs. \n",
- "\n",
- "To summarize, we brought in a pretrained DLC project, used that model to run pose estimation on a new behavioral video, smoothed and interpolated the result, formed a cohort of bodyparts, and determined the centroid and orientation of this cohort.\n",
- "\n",
- "Now we'll populate `DLCPos` with our centroid/orientation entries above."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "fields = list(sgp.DLCPos.fetch().dtype.fields.keys())\n",
- "dlc_key = {key: val for key, val in centroid_key.items() if key in fields}\n",
- "dlc_key[\"dlc_si_cohort_centroid\"] = centroid_key[\"dlc_si_cohort_selection_name\"]\n",
- "dlc_key[\"dlc_si_cohort_orientation\"] = orient_key[\n",
- " \"dlc_si_cohort_selection_name\"\n",
- "]\n",
- "dlc_key[\"dlc_orientation_params_name\"] = orient_key[\n",
- " \"dlc_orientation_params_name\"\n",
- "]\n",
- "pprint(dlc_key)"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "Now we can insert into `DLCPosSelection` and populate `DLCPos` with our `dlc_key`"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "sgp.DLCPosSelection().insert1(dlc_key, skip_duplicates=True)\n",
- "sgp.DLCPos().populate(dlc_key)"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "Fetched as a dataframe, we expect the following 8 columns:\n",
- "\n",
- "- time\n",
- "- video_frame_ind\n",
- "- position_x\n",
- "- position_y\n",
- "- orientation\n",
- "- velocity_x\n",
- "- velocity_y\n",
- "- speed"
- ]
- },
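A lightweight sanity check on the fetched dataframe could assert those fields are present. This is a sketch on our part; it assumes time lives in the index and the remaining seven fields appear as columns:

```python
import pandas as pd

EXPECTED_COLS = [
    "video_frame_ind", "position_x", "position_y",
    "orientation", "velocity_x", "velocity_y", "speed",
]

def check_position_df(df: pd.DataFrame) -> None:
    """Raise if any expected position column is missing from df."""
    missing = [c for c in EXPECTED_COLS if c not in df.columns]
    if missing:
        raise ValueError(f"missing columns: {missing}")
```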
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "(sgp.DLCPos() & dlc_key).fetch1_dataframe()"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "We can also fetch the `pose_eval_result`, which contains the percentage of\n",
- "frames that each bodypart was below the likelihood threshold of 0.95."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "(sgp.DLCPos() & dlc_key).fetch1(\"pose_eval_result\")"
- ]
- },
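That statistic is easy to recompute by hand from a pose-estimation dataframe. A sketch, assuming one `<bodypart>_likelihood` column per bodypart (a simplification of the fetched layout):

```python
import pandas as pd

def frac_below_thresh(df: pd.DataFrame, thresh: float = 0.95) -> dict:
    """Percent of frames below the likelihood threshold, per bodypart."""
    return {
        col.replace("_likelihood", ""): 100 * (df[col] < thresh).mean()
        for col in df.columns
        if col.endswith("_likelihood")
    }
```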
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "#### [DLCPosVideo](#TableOfContents) "
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "We can create a video with the centroid and orientation overlaid on the original\n",
- "video. This will also plot the likelihood of each bodypart used in the cohort.\n",
- "This is optional, but a good quality assurance step."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "sgp.DLCPosVideoParams.insert_default()"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "params = {\n",
- " \"percent_frames\": 0.05,\n",
- " \"incl_likelihood\": True,\n",
- "}\n",
- "sgp.DLCPosVideoParams.insert1(\n",
- " {\"dlc_pos_video_params_name\": \"five_percent\", \"params\": params},\n",
- " skip_duplicates=True,\n",
- ")"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "sgp.DLCPosVideoSelection.insert1(\n",
- " {**dlc_key, \"dlc_pos_video_params_name\": \"five_percent\"},\n",
- " skip_duplicates=True,\n",
- ")"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "sgp.DLCPosVideo().populate(dlc_key)"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "#### [PositionOutput](#TableOfContents) "
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "`PositionOutput` is the final table of the pipeline and is automatically\n",
- "populated when we populate `DLCPos`"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "sgp.PositionOutput() & dlc_key"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "`PositionOutput` also has a part table, similar to the `DLCModelSource` table above. Let's check that out as well."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "PositionOutput.DLCPosV1() & dlc_key"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "(PositionOutput.DLCPosV1() & dlc_key).fetch1_dataframe()"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "#### [PositionVideo](#TableOfContents)"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "We can use the `PositionVideo` table to create a video that overlays just the\n",
- "centroid and orientation on the video. This table uses the parameter `plot` to\n",
- "determine whether to plot the entry deriving from the DLC arm or from the Trodes\n",
- "arm of the position pipeline. This parameter also accepts 'all', which will plot\n",
- "both (if they exist) in order to compare results."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "sgp.PositionVideoSelection().insert1(\n",
- " {\n",
- " \"nwb_file_name\": \"J1620210604_.nwb\",\n",
- " \"interval_list_name\": \"pos 13 valid times\",\n",
- " \"trodes_position_id\": 0,\n",
- " \"dlc_position_id\": 1,\n",
- " \"plot\": \"DLC\",\n",
- " \"output_dir\": \"/home/dgramling/Src/\",\n",
- " }\n",
- ")"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "sgp.PositionVideo.populate({\"plot\": \"DLC\"})"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "CONGRATULATIONS!! Please treat yourself to a nice tea break :-)"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "### [Return To Table of Contents](#TableOfContents)"
- ]
- }
- ],
- "metadata": {
- "language_info": {
- "name": "python"
- },
- "orig_nbformat": 4
- },
- "nbformat": 4,
- "nbformat_minor": 2
-}