diff --git a/notebooks/21_DLC.ipynb b/notebooks/21_DLC.ipynb new file mode 100644 index 000000000..1c1756c0d --- /dev/null +++ b/notebooks/21_DLC.ipynb @@ -0,0 +1,2183 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "id": "a93a1550-8a67-4346-a4bf-e5a136f3d903", + "metadata": {}, + "source": [ + "## Position- DeepLabCut from Scratch" + ] + }, + { + "cell_type": "markdown", + "id": "13dd3267", + "metadata": {}, + "source": [ + "### Overview" + ] + }, + { + "cell_type": "markdown", + "id": "b52aff0d", + "metadata": {}, + "source": [ + "_Developer Note:_ if you may make a PR in the future, be sure to copy this\n", + "notebook, and use the `gitignore` prefix `temp` to avoid future conflicts.\n", + "\n", + "This is one notebook in a multi-part series on Spyglass.\n", + "\n", + "- To set up your Spyglass environment and database, see\n", + " [the Setup notebook](./00_Setup.ipynb)\n", + "- For additional info on DataJoint syntax, including table definitions and\n", + " inserts, see\n", + " [the Insert Data notebook](./01_Insert_Data.ipynb)\n", + "\n", + "This tutorial will extract position via DeepLabCut (DLC). It will walk through...\n", + "\n", + "- creating a DLC project\n", + "- extracting and labeling frames\n", + "- training your model\n", + "- executing pose estimation on a novel behavioral video\n", + "- processing the pose estimation output to extract a centroid and orientation\n", + "- inserting the resulting information into the `PositionOutput` table\n", + "\n", + "**Note 2: Make sure you are running this within the spyglass-position Conda environment (instructions for install are in the environment_position.yml)**" + ] + }, + { + "cell_type": "markdown", + "id": "a8b531f7", + "metadata": {}, + "source": [ + "Here is a schematic showing the tables used in this pipeline.\n", + "\n", + "![dlc_scratch.png|2000x900](./../notebook-images/dlc_scratch.png)\n" + ] + }, + { + "cell_type": "markdown", + "id": "0c67d88c-c90e-467b-ae2e-672c49a12f95", + "metadata": {}, + "source": [ + "### Table of Contents\n", + "[`DLCProject`](#DLCProject1)
\n", + "[`DLCModelTraining`](#DLCModelTraining1)
\n", + "[`DLCModel`](#DLCModel1)
\n", + "[`DLCPoseEstimation`](#DLCPoseEstimation1)
\n", + "[`DLCSmoothInterp`](#DLCSmoothInterp1)
\n", + "[`DLCCentroid`](#DLCCentroid1)
\n", + "[`DLCOrientation`](#DLCOrientation1)
\n", + "[`DLCPosV1`](#DLCPosV1-1)
\n", + "[`DLCPosVideo`](#DLCPosVideo1)
\n", + "[`PositionOutput`](#PositionOutput1)
" + ] + }, + { + "cell_type": "markdown", + "id": "70a0a678", + "metadata": {}, + "source": [ + "__You can click on any header to return to the Table of Contents__" + ] + }, + { + "cell_type": "markdown", + "id": "c9b98c3d", + "metadata": {}, + "source": [ + "### Imports" + ] + }, + { + "cell_type": "code", + "execution_count": 12, + "id": "968d5189", + "metadata": {}, + "outputs": [], + "source": [ + "%load_ext autoreload\n", + "%autoreload 2" + ] + }, + { + "cell_type": "code", + "execution_count": 13, + "id": "0f567531", + "metadata": {}, + "outputs": [], + "source": [ + "import os\n", + "import datajoint as dj\n", + "from pprint import pprint\n", + "\n", + "import spyglass.common as sgc\n", + "import spyglass.position.v1 as sgp\n", + "\n", + "from pathlib import Path, PosixPath, PurePath\n", + "import glob\n", + "import numpy as np\n", + "import pandas as pd\n", + "import pynwb\n", + "from spyglass.position import PositionOutput\n", + "\n", + "# change to the upper level folder to detect dj_local_conf.json\n", + "if os.path.basename(os.getcwd()) == \"notebooks\":\n", + " os.chdir(\"..\")\n", + "dj.config.load(\"dj_local_conf.json\") # load config for database connection info\n", + "\n", + "# ignore datajoint+jupyter async warnings\n", + "import warnings\n", + "\n", + "warnings.simplefilter(\"ignore\", category=DeprecationWarning)\n", + "warnings.simplefilter(\"ignore\", category=ResourceWarning)" + ] + }, + { + "cell_type": "markdown", + "id": "5e6221a3-17e5-45c0-aa40-2fd664b02219", + "metadata": {}, + "source": [ + "#### [DLCProject](#TableOfContents) " + ] + }, + { + "cell_type": "markdown", + "id": "27aed0e1-3af7-4499-bae8-96a64e81041e", + "metadata": {}, + "source": [ + "
\n", + " Notes:\n", + "
\n" + ] + }, + { + "cell_type": "markdown", + "id": "50c9f1c9", + "metadata": {}, + "source": [ + "### Body Parts" + ] + }, + { + "cell_type": "markdown", + "id": "96637cb9-519d-41e1-8bfd-69f68dc66b36", + "metadata": {}, + "source": [ + "We'll begin by looking at the `BodyPart` table, which stores standard names of body parts used in DLC models throughout the lab with a concise description." + ] + }, + { + "cell_type": "code", + "execution_count": 14, + "id": "b69f829f-9877-48ae-89d1-f876af2b8835", + "metadata": {}, + "outputs": [ + { + "data": { + "text/html": [ + "\n", + " \n", + " \n", + " \n", + " \n", + "
\n", + " \n", + " \n", + " \n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "
\n", + "

bodypart

\n", + " \n", + "
\n", + "

bodypart_description

\n", + " \n", + "
backmiddle of the rat's back
driveBackback of drive
driveFrontfront of drive
earLleft ear of the rat
earRright ear of the rat
forelimbLleft forelimb of the rat
forelimbRright forelimb of the rat
greenLEDgreenLED
hindlimbLleft hindlimb of the rat
hindlimbRright hindlimb of the rat
nosetip of the nose of the rat
redLED_CredLED_C
\n", + "

...

\n", + "

Total: 23

\n", + " " + ], + "text/plain": [ + "*bodypart bodypart_descr\n", + "+------------+ +------------+\n", + "back middle of the \n", + "driveBack back of drive \n", + "driveFront front of drive\n", + "earL left ear of th\n", + "earR right ear of t\n", + "forelimbL left forelimb \n", + "forelimbR right forelimb\n", + "greenLED greenLED \n", + "hindlimbL left hindlimb \n", + "hindlimbR right hindlimb\n", + "nose tip of the nos\n", + "redLED_C redLED_C \n", + " ...\n", + " (Total: 23)" + ] + }, + "execution_count": 14, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "sgp.BodyPart()" + ] + }, + { + "cell_type": "markdown", + "id": "9616512e", + "metadata": {}, + "source": [ + "If the bodyparts you plan to use in your model are not yet in the table, here is code to add bodyparts:\n", + "\n", + "```python\n", + "sgp.BodyPart.insert(\n", + " [\n", + " {\"bodypart\": \"bp_1\", \"bodypart_description\": \"concise descrip\"},\n", + " {\"bodypart\": \"bp_2\", \"bodypart_description\": \"concise descrip\"},\n", + " ],\n", + " skip_duplicates=True,\n", + ")\n", + "```" + ] + }, + { + "cell_type": "markdown", + "id": "57b590d3", + "metadata": {}, + "source": [ + "### Define videos and camera name (optional) for training set" + ] + }, + { + "cell_type": "markdown", + "id": "5d5aae37", + "metadata": {}, + "source": [ + "To train a model, we'll need to extract frames, which we can label as training data. We can construct a list of videos from which we'll extract frames.\n", + "\n", + "The list can either contain dictionaries identifying behavioral videos for NWB files that have already been added to Spyglass, or absolute file paths to the videos you want to use.\n", + "\n", + "For this tutorial, we'll use two videos for which we already have frames labeled." + ] + }, + { + "cell_type": "markdown", + "id": "7b5e157b", + "metadata": {}, + "source": [ + "Defining camera name is optional: it should be done in cases where there are multiple cameras streaming per epoch, but not necessary otherwise.
\n", + "example:\n", + "`camera_name = \"HomeBox_camera\" \n", + " `" + ] + }, + { + "cell_type": "markdown", + "id": "56f45e7f", + "metadata": {}, + "source": [ + "_NOTE:_ The official release of Spyglass does not yet support multicamera\n", + "projects. You can monitor progress on the effort to add this feature by checking\n", + "[this PR](https://github.com/LorenFrankLab/spyglass/pull/684) or use\n", + "[this experimental branch](https://github.com/dpeg22/spyglass/tree/add-multi-camera),\n", + "which takes the keys nwb_file_name and epoch, and camera_name in the video_list variable.\n" + ] + }, + { + "cell_type": "code", + "execution_count": 15, + "id": "15971506", + "metadata": {}, + "outputs": [], + "source": [ + "video_list = [\n", + " {\"nwb_file_name\": \"J1620210529_.nwb\", \"epoch\": 2},\n", + " {\"nwb_file_name\": \"peanut20201103_.nwb\", \"epoch\": 4},\n", + "]" + ] + }, + { + "cell_type": "markdown", + "id": "a9f8e43d", + "metadata": {}, + "source": [ + "### Path variables\n", + "\n", + "The position pipeline also keeps track of paths for project, video, and output.\n", + "Just like we saw in [Setup](./00_Setup.ipynb), you can manage these either with\n", + "environmental variables...\n", + "\n", + "```bash\n", + "export DLC_PROJECT_DIR=\"/nimbus/deeplabcut/projects\"\n", + "export DLC_VIDEO_DIR=\"/nimbus/deeplabcut/video\"\n", + "export DLC_OUTPUT_DIR=\"/nimbus/deeplabcut/output\"\n", + "```\n", + "\n", + "\n", + "\n", + "Or these can be set in your datajoint config:\n", + "\n", + "```json\n", + "{\n", + " \"custom\": {\n", + " \"dlc_dirs\": {\n", + " \"base\": \"/nimbus/deeplabcut/\",\n", + " \"project\": \"/nimbus/deeplabcut/projects\",\n", + " \"video\": \"/nimbus/deeplabcut/video\",\n", + " \"output\": \"/nimbus/deeplabcut/output\"\n", + " }\n", + " }\n", + "}\n", + "```\n", + "\n", + "_NOTE:_ If only `base` is specified as shown above, spyglass will assume the\n", + "relative directories shown.\n", + "\n", + "You can check the result of this setup process with..." 
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 16,
+ "id": "49d7d9fc",
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "{'debug_mode': False,\n",
+ " 'prepopulate': True,\n",
+ " 'SPYGLASS_BASE_DIR': '/stelmo/nwb',\n",
+ " 'SPYGLASS_RAW_DIR': '/stelmo/nwb/raw',\n",
+ " 'SPYGLASS_ANALYSIS_DIR': '/stelmo/nwb/analysis',\n",
+ " 'SPYGLASS_RECORDING_DIR': '/stelmo/nwb/recording',\n",
+ " 'SPYGLASS_SORTING_DIR': '/stelmo/nwb/sorting',\n",
+ " 'SPYGLASS_WAVEFORMS_DIR': '/stelmo/nwb/waveforms',\n",
+ " 'SPYGLASS_TEMP_DIR': '/stelmo/nwb/tmp/spyglass',\n",
+ " 'SPYGLASS_VIDEO_DIR': '/stelmo/nwb/video',\n",
+ " 'KACHERY_CLOUD_DIR': '/stelmo/nwb/.kachery-cloud',\n",
+ " 'KACHERY_STORAGE_DIR': '/stelmo/nwb/kachery_storage',\n",
+ " 'KACHERY_TEMP_DIR': '/stelmo/nwb/tmp',\n",
+ " 'DLC_PROJECT_DIR': '/nimbus/deeplabcut/projects',\n",
+ " 'DLC_VIDEO_DIR': '/nimbus/deeplabcut/video',\n",
+ " 'DLC_OUTPUT_DIR': '/nimbus/deeplabcut/output',\n",
+ " 'KACHERY_ZONE': 'franklab.default',\n",
+ " 'FIGURL_CHANNEL': 'franklab2',\n",
+ " 'DJ_SUPPORT_FILEPATH_MANAGEMENT': 'TRUE',\n",
+ " 'KACHERY_CLOUD_EPHEMERAL': 'TRUE',\n",
+ " 'HD5_USE_FILE_LOCKING': 'FALSE'}"
+ ]
+ },
+ "execution_count": 16,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "from spyglass.settings import config\n",
+ "\n",
+ "config"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "32c023b0-d00d-40b0-9a37-d0d3e4a4ae2a",
+ "metadata": {},
+ "source": [
+ "Before creating our project, we need to define a few variables.\n",
+ "\n",
+ "- A team name, as shown in `LabTeam`, for setting permissions. Here, we'll\n",
+ "  use \"LorenLab\".\n",
+ "- A `project_name`, as a unique identifier for this DLC project. Here, we'll use\n",
+ "  **\"tutorial_scratch_yourinitials\"**\n",
+ "- `bodyparts` is a list of body parts for which we want to extract position.\n",
+ "  The pre-labeled frames we're using include the bodyparts listed below.\n",
+ "- Number of frames to extract/label as `frames_per_video`. Note that the DLC creators recommend having 200 frames as the minimum total number for each project."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 17,
+ "id": "347e98f1",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "project name: tutorial_scratch_DG is already in use.\n"
+ ]
+ }
+ ],
+ "source": [
+ "team_name = \"LorenLab\"\n",
+ "project_name = \"tutorial_scratch_DG\"\n",
+ "frames_per_video = 100\n",
+ "bodyparts = [\"redLED_C\", \"greenLED\", \"redLED_L\", \"redLED_R\", \"tailBase\"]\n",
+ "project_key = sgp.DLCProject.insert_new_project(\n",
+ "    project_name=project_name,\n",
+ "    bodyparts=bodyparts,\n",
+ "    lab_team=team_name,\n",
+ "    frames_per_video=frames_per_video,\n",
+ "    video_list=video_list,\n",
+ "    skip_duplicates=True,\n",
+ ")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "f5d83452-48eb-4669-89eb-a6beb1f2d051",
+ "metadata": {},
+ "source": [
+ "Now that we've initialized our project, we'll need to extract frames, which we will then label."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "7d8b1595",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# comment this line out after you finish frame extraction for each project\n",
+ "sgp.DLCProject().run_extract_frames(project_key)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "68110734",
+ "metadata": {},
+ "source": [
+ "If you wish to use the DLC GUI on the computer you are currently using, this is the line used to label the frames you extracted:\n",
+ "\n",
+ "```python\n",
+ "# comment this line out after frames are labeled for your project\n",
+ "sgp.DLCProject().run_label_frames(project_key)\n",
+ "```"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "8b241030",
+ "metadata": {},
+ "source": [
+ "Otherwise, it is easiest to label the frames on your local computer (like a MacBook) that can run DeepLabCut's GUI well. Instructions:
\n", + "1. Install DLC on your local (preferably into a 'Src' folder): https://deeplabcut.github.io/DeepLabCut/docs/installation.html\n", + "2. Upload frames extracted and saved in nimbus (should be `/nimbus/deeplabcut//labeled-data`) AND the project's associated config file (should be `/nimbus/deeplabcut//config.yaml`) to Box (we get free with UCSF)\n", + "3. Download labeled-data and config files on your local from Box\n", + "4. Create a 'projects' folder where you installed DeepLabCut; create a new folder with your complete project name there; save the downloaded files there.\n", + "4. Edit the config.yaml file: line 9 defining `project_path` needs to be the file path where it is saved on your local (ex: `/Users/lorenlab/Src/DeepLabCut/projects/tutorial_sratch_DG-LorenLab-2023-08-16`)\n", + "5. Open the DLC GUI through terminal \n", + "
(ex: `conda activate miniconda/envs/DEEPLABCUT_M1`\n", + "\t\t
`pythonw -m deeplabcut`)\n", + "6. Load an existing project; choose the config.yaml file\n", + "7. Label frames; labeling tutorial: https://www.youtube.com/watch?v=hsA9IB5r73E.\n", + "8. Once all frames are labeled, you should re-upload labeled-data folder back to Box and overwrite it in the original nimbus location so that your completed frames are ready to be used in the model." + ] + }, + { + "cell_type": "markdown", + "id": "c12dd229-2f8b-455a-a7b1-a20916cefed9", + "metadata": {}, + "source": [ + "Now we can check the `DLCProject.File` part table and see all of our training files and videos there!" + ] + }, + { + "cell_type": "code", + "execution_count": 18, + "id": "3d4f3fa6-cce9-4d4a-a252-3424313c6a97", + "metadata": {}, + "outputs": [ + { + "data": { + "text/html": [ + "\n", + " \n", + " \n", + " \n", + " Paths of training files (e.g., labeled pngs, CSV or video)\n", + "
\n", + " \n", + " \n", + " \n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "
\n", + "

project_name

\n", + " name of DLC project\n", + "
\n", + "

file_name

\n", + " Concise name to describe file\n", + "
\n", + "

file_ext

\n", + " extension of file\n", + "
\n", + "

file_path

\n", + " \n", + "
tutorial_scratch_DG20201103_peanut_04_r2mp4/nimbus/deeplabcut/projects/tutorial_scratch_DG-LorenLab-2023-08-16/videos/20201103_peanut_04_r2.mp4
tutorial_scratch_DG20201103_peanut_04_r2_labeled_datah5/nimbus/deeplabcut/projects/tutorial_scratch_DG-LorenLab-2023-08-16/labeled-data/20201103_peanut_04_r2/CollectedData_LorenLab.h5
tutorial_scratch_DG20210529_J16_02_r1mp4/nimbus/deeplabcut/projects/tutorial_scratch_DG-LorenLab-2023-08-16/videos/20210529_J16_02_r1.mp4
tutorial_scratch_DG20210529_J16_02_r1_labeled_datah5/nimbus/deeplabcut/projects/tutorial_scratch_DG-LorenLab-2023-08-16/labeled-data/20210529_J16_02_r1/CollectedData_LorenLab.h5
\n", + " \n", + "

Total: 4

\n", + " " + ], + "text/plain": [ + "*project_name *file_name *file_ext file_path \n", + "+------------+ +------------+ +----------+ +------------+\n", + "tutorial_scrat 20201103_peanu mp4 /nimbus/deepla\n", + "tutorial_scrat 20201103_peanu h5 /nimbus/deepla\n", + "tutorial_scrat 20210529_J16_0 mp4 /nimbus/deepla\n", + "tutorial_scrat 20210529_J16_0 h5 /nimbus/deepla\n", + " (Total: 4)" + ] + }, + "execution_count": 18, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "sgp.DLCProject.File & project_key" + ] + }, + { + "cell_type": "markdown", + "id": "7e2e3eab-60c7-4a3c-bc8f-fd4e8dcf52a2", + "metadata": {}, + "source": [ + "
\n", + " This step and beyond should be run on a GPU-enabled machine.\n", + "
" + ] + }, + { + "cell_type": "markdown", + "id": "0e48ecf0", + "metadata": {}, + "source": [ + "#### [DLCModelTraining](#ToC)\n", + "\n", + "Please make sure you're running this notebook on a GPU-enabled machine.\n", + "\n", + "Now that we've imported existing frames, we can get ready to train our model.\n", + "\n", + "First, we'll need to define a set of parameters for `DLCModelTrainingParams`, which will get used by DeepLabCut during training. Let's start with `gputouse`,\n", + "which determines which GPU core to use.\n", + "\n", + "The cell below determines which core has space and set the `gputouse` variable\n", + "accordingly.\n" + ] + }, + { + "cell_type": "code", + "execution_count": 19, + "id": "a8fc5bb7", + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "{0: 305}" + ] + }, + "execution_count": 19, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "sgp.dlc_utils.get_gpu_memory()" + ] + }, + { + "cell_type": "markdown", + "id": "bca035a9", + "metadata": {}, + "source": [ + "Set GPU core:\n" + ] + }, + { + "cell_type": "code", + "execution_count": 20, + "id": "1ff0e393", + "metadata": {}, + "outputs": [], + "source": [ + "gputouse = 1 # 1-9" + ] + }, + { + "cell_type": "markdown", + "id": "2b047686", + "metadata": {}, + "source": [ + "Now we'll define the rest of our parameters and insert the entry.\n", + "\n", + "To see all possible parameters, try:\n", + "\n", + "```python\n", + "sgp.DLCModelTrainingParams.get_accepted_params()\n", + "```\n" + ] + }, + { + "cell_type": "code", + "execution_count": 21, + "id": "399581ee", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "New param set not added\n", + "A param set with name: tutorial already exists\n" + ] + } + ], + "source": [ + "training_params_name = \"tutorial\"\n", + "sgp.DLCModelTrainingParams.insert_new_params(\n", + " paramset_name=training_params_name,\n", + " params={\n", + " \"trainingsetindex\": 0,\n", + " \"shuffle\": 1,\n", + " \"gputouse\": gputouse,\n", + " \"net_type\": \"resnet_50\",\n", + " \"augmenter_type\": \"imgaug\",\n", + " },\n", + " skip_duplicates=True,\n", + ")" + ] + }, + { + "cell_type": "markdown", + "id": "6b6cc709", + "metadata": {}, + "source": [ + "Next we'll modify the `project_key` from above to include the necessary entries for `DLCModelTraining`" + ] + }, + { + "cell_type": "code", + "execution_count": 22, + "id": "7acd150b", + "metadata": {}, + "outputs": [], + "source": [ + "# project_key['project_path'] = os.path.dirname(project_key['config_path'])\n", + "if \"config_path\" in project_key:\n", + " del project_key[\"config_path\"]" + ] + }, + { + "cell_type": "markdown", + "id": "0bc7ddaa", + "metadata": {}, + "source": [ + "We can insert an entry into `DLCModelTrainingSelection` and populate `DLCModelTraining`.\n", + "\n", + "_Note:_ You can stop training at any point using `I + I` or interrupt the Kernel. 
\n", + "\n", + "The maximum total number of training iterations is 1030000; you can end training before this amount if the loss rate (lr) and total loss plateau and are very close to 0.\n" + ] + }, + { + "cell_type": "code", + "execution_count": 23, + "id": "3c252541", + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "project_name : varchar(100) # name of DLC project\n", + "dlc_training_params_name : varchar(50) # descriptive name of parameter set\n", + "training_id : int # unique integer,\n", + "---\n", + "model_prefix=\"\" : varchar(32) # " + ] + }, + "execution_count": 23, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "sgp.DLCModelTrainingSelection.heading" + ] + }, + { + "cell_type": "code", + "execution_count": 24, + "id": "139d2f30", + "metadata": { + "tags": [] + }, + "outputs": [ + { + "name": "stderr", + "output_type": "stream", + "text": [ + "2024-01-18 10:23:30.406102: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: SSE4.1 SSE4.2 AVX AVX2 FMA\n", + "To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.\n" + ] + }, + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Loading DLC 2.2.3...\n", + "OpenCV is built with OpenMP support. This usually results in poor performance. For details, see https://github.com/tensorpack/benchmarks/blob/master/ImageNet/benchmark-opencv-resize.py\n" + ] + }, + { + "ename": "PermissionError", + "evalue": "[Errno 13] Permission denied: '/nimbus/deeplabcut/projects/tutorial_scratch_DG-LorenLab-2023-08-16/log.log'", + "output_type": "error", + "traceback": [ + "\u001b[0;31m---------------------------------------------------------------------------\u001b[0m", + "\u001b[0;31mPermissionError\u001b[0m Traceback (most recent call last)", + "Cell \u001b[0;32mIn[24], line 16\u001b[0m\n\u001b[1;32m 1\u001b[0m sgp\u001b[38;5;241m.\u001b[39mDLCModelTrainingSelection()\u001b[38;5;241m.\u001b[39minsert1(\n\u001b[1;32m 2\u001b[0m {\n\u001b[1;32m 3\u001b[0m \u001b[38;5;241m*\u001b[39m\u001b[38;5;241m*\u001b[39mproject_key,\n\u001b[0;32m (...)\u001b[0m\n\u001b[1;32m 7\u001b[0m }\n\u001b[1;32m 8\u001b[0m )\n\u001b[1;32m 9\u001b[0m model_training_key \u001b[38;5;241m=\u001b[39m (\n\u001b[1;32m 10\u001b[0m sgp\u001b[38;5;241m.\u001b[39mDLCModelTrainingSelection\n\u001b[1;32m 11\u001b[0m \u001b[38;5;241m&\u001b[39m {\n\u001b[0;32m (...)\u001b[0m\n\u001b[1;32m 14\u001b[0m }\n\u001b[1;32m 15\u001b[0m )\u001b[38;5;241m.\u001b[39mfetch1(\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mKEY\u001b[39m\u001b[38;5;124m\"\u001b[39m)\n\u001b[0;32m---> 16\u001b[0m \u001b[43msgp\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mDLCModelTraining\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mpopulate\u001b[49m\u001b[43m(\u001b[49m\u001b[43mmodel_training_key\u001b[49m\u001b[43m)\u001b[49m\n", + "File \u001b[0;32m~/anaconda3/envs/spyglass-position/lib/python3.9/site-packages/datajoint/autopopulate.py:241\u001b[0m, in \u001b[0;36mAutoPopulate.populate\u001b[0;34m(self, suppress_errors, return_exception_objects, reserve_jobs, order, limit, max_calls, display_progress, processes, make_kwargs, *restrictions)\u001b[0m\n\u001b[1;32m 237\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m processes \u001b[38;5;241m==\u001b[39m \u001b[38;5;241m1\u001b[39m:\n\u001b[1;32m 238\u001b[0m \u001b[38;5;28;01mfor\u001b[39;00m key 
\u001b[38;5;129;01min\u001b[39;00m (\n\u001b[1;32m 239\u001b[0m tqdm(keys, desc\u001b[38;5;241m=\u001b[39m\u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m\u001b[38;5;18m__class__\u001b[39m\u001b[38;5;241m.\u001b[39m\u001b[38;5;18m__name__\u001b[39m) \u001b[38;5;28;01mif\u001b[39;00m display_progress \u001b[38;5;28;01melse\u001b[39;00m keys\n\u001b[1;32m 240\u001b[0m ):\n\u001b[0;32m--> 241\u001b[0m error \u001b[38;5;241m=\u001b[39m \u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43m_populate1\u001b[49m\u001b[43m(\u001b[49m\u001b[43mkey\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mjobs\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[43mpopulate_kwargs\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m 242\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m error \u001b[38;5;129;01mis\u001b[39;00m \u001b[38;5;129;01mnot\u001b[39;00m \u001b[38;5;28;01mNone\u001b[39;00m:\n\u001b[1;32m 243\u001b[0m error_list\u001b[38;5;241m.\u001b[39mappend(error)\n", + "File \u001b[0;32m~/anaconda3/envs/spyglass-position/lib/python3.9/site-packages/datajoint/autopopulate.py:292\u001b[0m, in \u001b[0;36mAutoPopulate._populate1\u001b[0;34m(self, key, jobs, suppress_errors, return_exception_objects, make_kwargs)\u001b[0m\n\u001b[1;32m 290\u001b[0m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m\u001b[38;5;18m__class__\u001b[39m\u001b[38;5;241m.\u001b[39m_allow_insert \u001b[38;5;241m=\u001b[39m \u001b[38;5;28;01mTrue\u001b[39;00m\n\u001b[1;32m 291\u001b[0m \u001b[38;5;28;01mtry\u001b[39;00m:\n\u001b[0;32m--> 292\u001b[0m \u001b[43mmake\u001b[49m\u001b[43m(\u001b[49m\u001b[38;5;28;43mdict\u001b[39;49m\u001b[43m(\u001b[49m\u001b[43mkey\u001b[49m\u001b[43m)\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[43m(\u001b[49m\u001b[43mmake_kwargs\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;129;43;01mor\u001b[39;49;00m\u001b[43m \u001b[49m\u001b[43m{\u001b[49m\u001b[43m}\u001b[49m\u001b[43m)\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m 293\u001b[0m \u001b[38;5;28;01mexcept\u001b[39;00m (\u001b[38;5;167;01mKeyboardInterrupt\u001b[39;00m, \u001b[38;5;167;01mSystemExit\u001b[39;00m, \u001b[38;5;167;01mException\u001b[39;00m) \u001b[38;5;28;01mas\u001b[39;00m error:\n\u001b[1;32m 294\u001b[0m \u001b[38;5;28;01mtry\u001b[39;00m:\n", + "File \u001b[0;32m~/Src/spyglass/src/spyglass/position/v1/position_dlc_training.py:150\u001b[0m, in \u001b[0;36mDLCModelTraining.make\u001b[0;34m(self, key)\u001b[0m\n\u001b[1;32m 144\u001b[0m \u001b[38;5;28;01mfrom\u001b[39;00m \u001b[38;5;21;01mdeeplabcut\u001b[39;00m\u001b[38;5;21;01m.\u001b[39;00m\u001b[38;5;21;01mutils\u001b[39;00m\u001b[38;5;21;01m.\u001b[39;00m\u001b[38;5;21;01mauxiliaryfunctions\u001b[39;00m \u001b[38;5;28;01mimport\u001b[39;00m (\n\u001b[1;32m 145\u001b[0m GetModelFolder \u001b[38;5;28;01mas\u001b[39;00m get_model_folder,\n\u001b[1;32m 146\u001b[0m )\n\u001b[1;32m 147\u001b[0m config_path, project_name \u001b[38;5;241m=\u001b[39m (DLCProject() \u001b[38;5;241m&\u001b[39m key)\u001b[38;5;241m.\u001b[39mfetch1(\n\u001b[1;32m 148\u001b[0m \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mconfig_path\u001b[39m\u001b[38;5;124m\"\u001b[39m, \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mproject_name\u001b[39m\u001b[38;5;124m\"\u001b[39m\n\u001b[1;32m 149\u001b[0m )\n\u001b[0;32m--> 150\u001b[0m \u001b[38;5;28;01mwith\u001b[39;00m 
\u001b[43mOutputLogger\u001b[49m\u001b[43m(\u001b[49m\n\u001b[1;32m 151\u001b[0m \u001b[43m \u001b[49m\u001b[43mname\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43mDLC_project_\u001b[39;49m\u001b[38;5;132;43;01m{project_name}\u001b[39;49;00m\u001b[38;5;124;43m_training\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m,\u001b[49m\n\u001b[1;32m 152\u001b[0m \u001b[43m \u001b[49m\u001b[43mpath\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[38;5;124;43mf\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;132;43;01m{\u001b[39;49;00m\u001b[43mos\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mpath\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mdirname\u001b[49m\u001b[43m(\u001b[49m\u001b[43mconfig_path\u001b[49m\u001b[43m)\u001b[49m\u001b[38;5;132;43;01m}\u001b[39;49;00m\u001b[38;5;124;43m/log.log\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m,\u001b[49m\n\u001b[1;32m 153\u001b[0m \u001b[43m \u001b[49m\u001b[43mprint_console\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[38;5;28;43;01mTrue\u001b[39;49;00m\u001b[43m,\u001b[49m\n\u001b[1;32m 154\u001b[0m \u001b[43m\u001b[49m\u001b[43m)\u001b[49m \u001b[38;5;28;01mas\u001b[39;00m logger:\n\u001b[1;32m 155\u001b[0m dlc_config \u001b[38;5;241m=\u001b[39m read_config(config_path)\n\u001b[1;32m 156\u001b[0m project_path \u001b[38;5;241m=\u001b[39m dlc_config[\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mproject_path\u001b[39m\u001b[38;5;124m\"\u001b[39m]\n", + "File \u001b[0;32m~/Src/spyglass/src/spyglass/position/v1/dlc_utils.py:192\u001b[0m, in \u001b[0;36mOutputLogger.__init__\u001b[0;34m(self, name, path, level, **kwargs)\u001b[0m\n\u001b[1;32m 191\u001b[0m \u001b[38;5;28;01mdef\u001b[39;00m \u001b[38;5;21m__init__\u001b[39m(\u001b[38;5;28mself\u001b[39m, name, path, level\u001b[38;5;241m=\u001b[39m\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mINFO\u001b[39m\u001b[38;5;124m\"\u001b[39m, \u001b[38;5;241m*\u001b[39m\u001b[38;5;241m*\u001b[39mkwargs):\n\u001b[0;32m--> 192\u001b[0m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mlogger \u001b[38;5;241m=\u001b[39m \u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43msetup_logger\u001b[49m\u001b[43m(\u001b[49m\u001b[43mname\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mpath\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[43mkwargs\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m 193\u001b[0m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mname \u001b[38;5;241m=\u001b[39m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mlogger\u001b[38;5;241m.\u001b[39mname\n\u001b[1;32m 194\u001b[0m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mlevel \u001b[38;5;241m=\u001b[39m \u001b[38;5;28mgetattr\u001b[39m(logging, level)\n", + "File \u001b[0;32m~/Src/spyglass/src/spyglass/position/v1/dlc_utils.py:244\u001b[0m, in \u001b[0;36mOutputLogger.setup_logger\u001b[0;34m(self, name_logfile, path_logfile, print_console)\u001b[0m\n\u001b[1;32m 241\u001b[0m logger\u001b[38;5;241m.\u001b[39maddHandler(\u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_get_stream_handler())\n\u001b[1;32m 243\u001b[0m \u001b[38;5;28;01melse\u001b[39;00m:\n\u001b[0;32m--> 244\u001b[0m file_handler \u001b[38;5;241m=\u001b[39m \u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43m_get_file_handler\u001b[49m\u001b[43m(\u001b[49m\u001b[43mpath_logfile\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m 
245\u001b[0m logger\u001b[38;5;241m.\u001b[39maddHandler(file_handler)\n\u001b[1;32m 246\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m print_console:\n", + "File \u001b[0;32m~/Src/spyglass/src/spyglass/position/v1/dlc_utils.py:255\u001b[0m, in \u001b[0;36mOutputLogger._get_file_handler\u001b[0;34m(self, path)\u001b[0m\n\u001b[1;32m 253\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m \u001b[38;5;129;01mnot\u001b[39;00m os\u001b[38;5;241m.\u001b[39mpath\u001b[38;5;241m.\u001b[39mexists(output_dir):\n\u001b[1;32m 254\u001b[0m output_dir\u001b[38;5;241m.\u001b[39mmkdir(parents\u001b[38;5;241m=\u001b[39m\u001b[38;5;28;01mTrue\u001b[39;00m, exist_ok\u001b[38;5;241m=\u001b[39m\u001b[38;5;28;01mTrue\u001b[39;00m)\n\u001b[0;32m--> 255\u001b[0m file_handler \u001b[38;5;241m=\u001b[39m \u001b[43mlogging\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mFileHandler\u001b[49m\u001b[43m(\u001b[49m\u001b[43mpath\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mmode\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43ma\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m)\u001b[49m\n\u001b[1;32m 256\u001b[0m file_handler\u001b[38;5;241m.\u001b[39msetFormatter(\u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_get_formatter())\n\u001b[1;32m 257\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m file_handler\n", + "File \u001b[0;32m~/anaconda3/envs/spyglass-position/lib/python3.9/logging/__init__.py:1146\u001b[0m, in \u001b[0;36mFileHandler.__init__\u001b[0;34m(self, filename, mode, encoding, delay, errors)\u001b[0m\n\u001b[1;32m 1144\u001b[0m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mstream \u001b[38;5;241m=\u001b[39m \u001b[38;5;28;01mNone\u001b[39;00m\n\u001b[1;32m 1145\u001b[0m \u001b[38;5;28;01melse\u001b[39;00m:\n\u001b[0;32m-> 1146\u001b[0m StreamHandler\u001b[38;5;241m.\u001b[39m\u001b[38;5;21m__init__\u001b[39m(\u001b[38;5;28mself\u001b[39m, \u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43m_open\u001b[49m\u001b[43m(\u001b[49m\u001b[43m)\u001b[49m)\n", + "File \u001b[0;32m~/anaconda3/envs/spyglass-position/lib/python3.9/logging/__init__.py:1175\u001b[0m, in \u001b[0;36mFileHandler._open\u001b[0;34m(self)\u001b[0m\n\u001b[1;32m 1170\u001b[0m \u001b[38;5;28;01mdef\u001b[39;00m \u001b[38;5;21m_open\u001b[39m(\u001b[38;5;28mself\u001b[39m):\n\u001b[1;32m 1171\u001b[0m \u001b[38;5;250m \u001b[39m\u001b[38;5;124;03m\"\"\"\u001b[39;00m\n\u001b[1;32m 1172\u001b[0m \u001b[38;5;124;03m Open the current base file with the (original) mode and encoding.\u001b[39;00m\n\u001b[1;32m 1173\u001b[0m \u001b[38;5;124;03m Return the resulting stream.\u001b[39;00m\n\u001b[1;32m 1174\u001b[0m \u001b[38;5;124;03m \"\"\"\u001b[39;00m\n\u001b[0;32m-> 1175\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[38;5;28;43mopen\u001b[39;49m\u001b[43m(\u001b[49m\u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mbaseFilename\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mmode\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mencoding\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mencoding\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 1176\u001b[0m \u001b[43m \u001b[49m\u001b[43merrors\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43merrors\u001b[49m\u001b[43m)\u001b[49m\n", + 
"\u001b[0;31mPermissionError\u001b[0m: [Errno 13] Permission denied: '/nimbus/deeplabcut/projects/tutorial_scratch_DG-LorenLab-2023-08-16/log.log'" + ] + } + ], + "source": [ + "sgp.DLCModelTrainingSelection().insert1(\n", + " {\n", + " **project_key,\n", + " \"dlc_training_params_name\": training_params_name,\n", + " \"training_id\": 0,\n", + " \"model_prefix\": \"\",\n", + " }\n", + ")\n", + "model_training_key = (\n", + " sgp.DLCModelTrainingSelection\n", + " & {\n", + " **project_key,\n", + " \"dlc_training_params_name\": training_params_name,\n", + " }\n", + ").fetch1(\"KEY\")\n", + "sgp.DLCModelTraining.populate(model_training_key)" + ] + }, + { + "cell_type": "markdown", + "id": "da004b3e", + "metadata": {}, + "source": [ + "Here we'll make sure that the entry made it into the table properly!" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "e5306fd9", + "metadata": { + "scrolled": true + }, + "outputs": [], + "source": [ + "sgp.DLCModelTraining() & model_training_key" + ] + }, + { + "cell_type": "markdown", + "id": "ac5b7687", + "metadata": {}, + "source": [ + "Populating `DLCModelTraining` automatically inserts the entry into\n", + "`DLCModelSource`, which is used to select between models trained using Spyglass\n", + "vs. other tools." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "a349dc3d", + "metadata": {}, + "outputs": [], + "source": [ + "sgp.DLCModelSource() & model_training_key" + ] + }, + { + "cell_type": "markdown", + "id": "92cb8969", + "metadata": {}, + "source": [ + "The `source` field will only accept _\"FromImport\"_ or _\"FromUpstream\"_ as entries. Let's checkout the `FromUpstream` part table attached to `DLCModelSource` below." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "b0cc1afa", + "metadata": {}, + "outputs": [], + "source": [ + "sgp.DLCModelSource.FromUpstream() & model_training_key" + ] + }, + { + "cell_type": "markdown", + "id": "67a9b2c6", + "metadata": {}, + "source": [ + "#### [DLCModel](#TableOfContents) \n", + "\n", + "Next we'll populate the `DLCModel` table, which holds all the relevant\n", + "information for all trained models.\n", + "\n", + "First, we'll need to determine a set of parameters for our model to select the\n", + "correct model file. Here is the default:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "bb663861", + "metadata": {}, + "outputs": [], + "source": [ + "sgp.DLCModelParams.get_default()" + ] + }, + { + "cell_type": "markdown", + "id": "8b45a6ed", + "metadata": {}, + "source": [ + "Here is the syntax to add your own parameter set:\n", + "\n", + "```python\n", + "dlc_model_params_name = \"make_this_yours\"\n", + "params = {\n", + " \"params\": {},\n", + " \"shuffle\": 1,\n", + " \"trainingsetindex\": 0,\n", + " \"model_prefix\": \"\",\n", + "}\n", + "sgp.DLCModelParams.insert1(\n", + " {\"dlc_model_params_name\": dlc_model_params_name, \"params\": params},\n", + " skip_duplicates=True,\n", + ")\n", + "```\n" + ] + }, + { + "cell_type": "markdown", + "id": "7bce9696", + "metadata": {}, + "source": [ + "We can insert sets of parameters into `DLCModelSelection` and populate\n", + "`DLCModel`." 
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "eaa23fab",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "temp_model_key = (sgp.DLCModelSource & model_training_key).fetch1(\"KEY\")"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "4e418eba",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# comment these lines out after successfully inserting, for each project\n",
+ "sgp.DLCModelSelection().insert1(\n",
+ "    {**temp_model_key, \"dlc_model_params_name\": \"default\"},\n",
+ "    skip_duplicates=True,\n",
+ ")"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "ccae03bb",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "model_key = (sgp.DLCModelSelection & temp_model_key).fetch1(\"KEY\")\n",
+ "sgp.DLCModel.populate(model_key)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "f8f1b839",
+ "metadata": {},
+ "source": [
+ "Again, let's make sure that everything looks correct in `DLCModel`."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "c39f72ca",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "sgp.DLCModel() & model_key"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "53ce4ee4",
+ "metadata": {},
+ "source": [
+ "#### [DLCPoseEstimation](#TableOfContents) \n",
+ "\n",
+ "Alright, now that we've trained a model and populated the `DLCModel` table, we're ready to set up Pose Estimation on a behavioral video of your choice.

For this tutorial you can use an epoch of your choice, or the one specified below. If you'd like to use your own video, just specify the `nwb_file_name` and `epoch` number and make sure it's in the `VideoFile` table!"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 3,
+ "id": "fc2a8dab-7caf-4389-8494-9158d2ec5b20",
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/html": [
+ "
\n", + " \n", + " \n", + " \n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "
\n", + "

nwb_file_name

\n", + " name of the NWB file\n", + "
\n", + "

epoch

\n", + " the session epoch for this task and apparatus(1 based)\n", + "
\n", + "

video_file_num

\n", + " \n", + "
\n", + "

camera_name

\n", + " \n", + "
\n", + "

video_file_object_id

\n", + " the object id of the file object\n", + "
J1620210604_.nwb10178f5746-30e3-4957-891e-8024e23522dc
J1620210604_.nwb20d64ec979-326b-429f-b3fe-1bbfbf806293
J1620210604_.nwb30cf14bcd2-c0a9-457b-8791-42f3f28dd912
J1620210604_.nwb40183c9910-36fd-46c1-a24c-8d1c306d7248
J1620210604_.nwb504677c7cd-8cd8-4801-8f6e-5b7bb14a6d6b
J1620210604_.nwb600e46532b-483f-43af-ba6e-ba75ccf340ea
J1620210604_.nwb70c6d1d037-44ec-4d91-99d1-172d371bf82a
J1620210604_.nwb804d7e070c-6220-47de-8173-993f013fafa8
J1620210604_.nwb90b50108ec-f587-46df-b1c8-3ca23091bde0
J1620210604_.nwb100b9b5da20-da39-4274-9be2-55610cfd1b5b
J1620210604_.nwb1106c827b8d-513c-4dba-ae75-0b36dcf4811f
J1620210604_.nwb12041bd2344-1b41-4737-8dfb-7c860d089155
\n", + "

...

\n", + "

Total: 20

\n", + " " + ], + "text/plain": [ + "*nwb_file_name *epoch *video_file_nu camera_name video_file_obj\n", + "+------------+ +-------+ +------------+ +------------+ +------------+\n", + "J1620210604_.n 1 0 178f5746-30e3-\n", + "J1620210604_.n 2 0 d64ec979-326b-\n", + "J1620210604_.n 3 0 cf14bcd2-c0a9-\n", + "J1620210604_.n 4 0 183c9910-36fd-\n", + "J1620210604_.n 5 0 4677c7cd-8cd8-\n", + "J1620210604_.n 6 0 0e46532b-483f-\n", + "J1620210604_.n 7 0 c6d1d037-44ec-\n", + "J1620210604_.n 8 0 4d7e070c-6220-\n", + "J1620210604_.n 9 0 b50108ec-f587-\n", + "J1620210604_.n 10 0 b9b5da20-da39-\n", + "J1620210604_.n 11 0 6c827b8d-513c-\n", + "J1620210604_.n 12 0 41bd2344-1b41-\n", + " ...\n", + " (Total: 20)" + ] + }, + "execution_count": 3, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "nwb_file_name = \"J1620210604_.nwb\"\n", + "sgc.VideoFile() & {\"nwb_file_name\": nwb_file_name}" + ] + }, + { + "cell_type": "code", + "execution_count": 5, + "id": "4140ece8", + "metadata": {}, + "outputs": [], + "source": [ + "epoch = 14 #change based on VideoFile entry\n", + "video_file_num = 0 #change based on VideoFile entry" + ] + }, + { + "cell_type": "markdown", + "id": "0f26a081-859d-4dff-bb58-84cec2ff4b3f", + "metadata": {}, + "source": [ + "Using `insert_estimation_task` will convert out video to be in .mp4 format (DLC\n", + "struggles with .h264) and determine the directory in which we'll store the pose\n", + "estimation results.\n", + "\n", + "- `task_mode` (trigger or load) determines whether or not populating\n", + " `DLCPoseEstimation` triggers a new pose estimation, or loads an existing.\n", + "- `video_file_num` will be 0 in almost all\n", + " cases.\n", + "- `gputouse` was already set during training. It may be a good idea to make sure\n", + " that core is still free before moving forward." + ] + }, + { + "cell_type": "markdown", + "id": "e60eb2fc", + "metadata": {}, + "source": [ + "The `DLCPoseEstimationSelection` insertion step will convert your .h264 video to an .mp4 first and save it in `/nimbus/deeplabcut/video`. If this video already exists here, the insertion will never complete.\n", + "\n", + "We first delete any .mp4 that exists for this video from the nimbus folder:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "130d85d0", + "metadata": {}, + "outputs": [], + "source": [ + "! 
find /nimbus/deeplabcut/video -type f -name '*20210604_J16*' -delete  # change based on date and rat with which you are training the model"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 6,
+ "id": "9df5644f-febc-49d7-a60d-6991798c20d7",
+ "metadata": {},
+ "outputs": [
+ {
+ "ename": "NameError",
+ "evalue": "name 'model_key' is not defined",
+ "output_type": "error",
+ "traceback": [
+ "\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
+ "\u001b[0;31mNameError\u001b[0m                                 Traceback (most recent call last)",
+ "Cell \u001b[0;32mIn[6], line 6\u001b[0m\n\u001b[1;32m 1\u001b[0m pose_estimation_key \u001b[38;5;241m=\u001b[39m sgp\u001b[38;5;241m.\u001b[39mDLCPoseEstimationSelection\u001b[38;5;241m.\u001b[39minsert_estimation_task(\n\u001b[1;32m 2\u001b[0m {\n\u001b[1;32m 3\u001b[0m \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mnwb_file_name\u001b[39m\u001b[38;5;124m\"\u001b[39m: nwb_file_name,\n\u001b[1;32m 4\u001b[0m \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mepoch\u001b[39m\u001b[38;5;124m\"\u001b[39m: epoch,\n\u001b[1;32m 5\u001b[0m \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mvideo_file_num\u001b[39m\u001b[38;5;124m\"\u001b[39m: video_file_num,\n\u001b[0;32m----> 6\u001b[0m \u001b[38;5;241m*\u001b[39m\u001b[38;5;241m*\u001b[39m\u001b[43mmodel_key\u001b[49m,\n\u001b[1;32m 7\u001b[0m },\n\u001b[1;32m 8\u001b[0m task_mode\u001b[38;5;241m=\u001b[39m\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mtrigger\u001b[39m\u001b[38;5;124m\"\u001b[39m, \u001b[38;5;66;03m# trigger or load\u001b[39;00m\n\u001b[1;32m 9\u001b[0m params\u001b[38;5;241m=\u001b[39m{\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mgputouse\u001b[39m\u001b[38;5;124m\"\u001b[39m: gputouse, \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mvideotype\u001b[39m\u001b[38;5;124m\"\u001b[39m: \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mmp4\u001b[39m\u001b[38;5;124m\"\u001b[39m},\n\u001b[1;32m 10\u001b[0m )\n",
+ "\u001b[0;31mNameError\u001b[0m: name 'model_key' is not defined"
+ ]
+ }
+ ],
+ "source": [
+ "pose_estimation_key = sgp.DLCPoseEstimationSelection.insert_estimation_task(\n",
+ "    {\n",
+ "        \"nwb_file_name\": nwb_file_name,\n",
+ "        \"epoch\": epoch,\n",
+ "        \"video_file_num\": video_file_num,\n",
+ "        **model_key,\n",
+ "    },\n",
+ "    task_mode=\"trigger\",  # trigger or load\n",
+ "    params={\"gputouse\": gputouse, \"videotype\": \"mp4\"},\n",
+ ")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "d19390eb",
+ "metadata": {},
+ "source": [
+ "If the above insertion step fails in either trigger or load mode for an epoch, delete the stale selection entry and re-insert it:\n",
+ "\n",
+ "```python\n",
+ "(\n",
+ "    sgp.DLCPoseEstimationSelection\n",
+ "    & {\n",
+ "        \"nwb_file_name\": nwb_file_name,\n",
+ "        \"epoch\": epoch,\n",
+ "        \"video_file_num\": video_file_num,\n",
+ "        **model_key,\n",
+ "    }\n",
+ ").delete()\n",
+ "```"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "5feb2a26-fae1-41ca-828f-cc6c73ebd24e",
+ "metadata": {},
+ "source": [
+ "And now we populate `DLCPoseEstimation`! This might take some time for full datasets."
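,
+ "\n",
+ "Before populating, it can be worth confirming that the selection entry exists (a quick sanity check; `pose_estimation_key` comes from `insert_estimation_task` above):\n",
+ "\n",
+ "```python\n",
+ "# should return exactly one row for this pose estimation task\n",
+ "sgp.DLCPoseEstimationSelection() & pose_estimation_key\n",
+ "```"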
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "88f28ecc-d3a4-40f9-a1fb-afb4bdd04497", + "metadata": {}, + "outputs": [], + "source": [ + "sgp.DLCPoseEstimation().populate(pose_estimation_key)" + ] + }, + { + "cell_type": "markdown", + "id": "88757488-cfa4-4e7c-b965-7dacac43810a", + "metadata": {}, + "source": [ + "Let's visualize the output from Pose Estimation" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "45dd4f3b-7bf4-41b7-be5f-820fe3ee9f69", + "metadata": {}, + "outputs": [], + "source": [ + "(sgp.DLCPoseEstimation() & pose_estimation_key).fetch_dataframe()" + ] + }, + { + "cell_type": "markdown", + "id": "52f45ab3-9344-4975-b5ff-f80a5727cdac", + "metadata": {}, + "source": [ + "#### [DLCSmoothInterp](#TableOfContents) " + ] + }, + { + "cell_type": "markdown", + "id": "0ccd5dbe-097a-4138-a234-da78a5902684", + "metadata": {}, + "source": [ + "Now that we've completed pose estimation, it's time to identify NaNs and optionally interpolate over low likelihood periods and smooth the resulting positions.
First we need to define some parameters for smoothing and interpolation. We can see the default parameter set below.
__Note__: it is recommended to use the `just_nan` parameters here and save interpolation and smoothing for the centroid step as this provides for a better end result." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "f6e44a34-8d6d-4206-b02a-9ca38a68f1c0", + "metadata": {}, + "outputs": [], + "source": [ + "# The default parameter set to interpolate and smooth over each LED individually\n", + "print(sgp.DLCSmoothInterpParams.get_default())" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "3bc4f13c", + "metadata": {}, + "outputs": [], + "source": [ + "# The just_nan parameter set that identifies NaN indices and leaves smoothing and interpolation to the centroid step\n", + "print(sgp.DLCSmoothInterpParams.get_nan_params())\n", + "si_params_name = \"just_nan\" #could also use \"default\"" + ] + }, + { + "cell_type": "markdown", + "id": "a245c9e5-e8f6-4c6f-b9e1-d71ab3e06d59", + "metadata": {}, + "source": [ + "To change any of these parameters, one would do the following:\n", + "\n", + "```python\n", + "si_params_name = \"your_unique_param_name\"\n", + "params = {\n", + " \"smoothing_params\": {\n", + " \"smoothing_duration\": 0.00,\n", + " \"smooth_method\": \"moving_avg\",\n", + " },\n", + " \"interp_params\": {\"likelihood_thresh\": 0.00},\n", + " \"max_plausible_speed\": 0,\n", + " \"speed_smoothing_std_dev\": 0.000,\n", + "}\n", + "sgp.DLCSmoothInterpParams().insert1(\n", + " {\"dlc_si_params_name\": si_params_name, \"params\": params},\n", + " skip_duplicates=True,\n", + ")\n", + "```" + ] + }, + { + "cell_type": "markdown", + "id": "8139036e-ce7e-41ec-be78-aa15a4b0b795", + "metadata": {}, + "source": [ + "We'll create a dictionary with the correct set of keys for the `DLCSmoothInterpSelection` table" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "ec730b91-a974-4f54-9d55-35f52e08487f", + "metadata": {}, + "outputs": [], + "source": [ + "si_key = pose_estimation_key.copy()\n", + "fields = list(sgp.DLCSmoothInterpSelection.fetch().dtype.fields.keys())\n", + "si_key = {key: val for key, val in si_key.items() if key in fields}\n", + "si_key" + ] + }, + { + "cell_type": "markdown", + "id": "9a47a6de-51ff-4980-b105-42a75ef7f7a3", + "metadata": {}, + "source": [ + "We can insert all of the bodyparts we want to process into `DLCSmoothInterpSelection`
\n", + "First lets visualize the bodyparts we have available to us.
" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "6e5fcad0-e211-4bd7-82b1-d69bec0eb3d7", + "metadata": {}, + "outputs": [], + "source": [ + "print((sgp.DLCPoseEstimation.BodyPart & pose_estimation_key).fetch(\"bodypart\"))" + ] + }, + { + "cell_type": "markdown", + "id": "7c6e3ad2-1960-43cd-a223-784c08211013", + "metadata": {}, + "source": [ + "We can use `insert1` to insert a single bodypart, but would suggest using `insert` to insert a list of keys with different bodyparts." + ] + }, + { + "cell_type": "markdown", + "id": "1a93ba8d", + "metadata": {}, + "source": [ + "To insert a single bodypart, one would do the following:\n", + "\n", + "```python\n", + "sgp.DLCSmoothInterpSelection.insert1(\n", + " {\n", + " **si_key,\n", + " 'bodypart': 'greenLED',\n", + " 'dlc_si_params_name': si_params_name,\n", + " },\n", + " skip_duplicates=True)\n", + "```" + ] + }, + { + "cell_type": "markdown", + "id": "3e2f73cd-2534-40a2-86e6-948ccd902812", + "metadata": {}, + "source": [ + "We'll see a list of bodyparts and then insert them into `DLCSmoothInterpSelection`." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "819e826d-38ef-4219-8d52-5353c6b4b61a", + "metadata": {}, + "outputs": [], + "source": [ + "bodyparts = [\"greenLED\", \"redLED_C\"]\n", + "sgp.DLCSmoothInterpSelection.insert(\n", + " [\n", + " {\n", + " **si_key,\n", + " \"bodypart\": bodypart,\n", + " \"dlc_si_params_name\": si_params_name,\n", + " }\n", + " for bodypart in bodyparts\n", + " ],\n", + " skip_duplicates=True,\n", + ")" + ] + }, + { + "cell_type": "markdown", + "id": "6dca5640-3e9a-42b7-bc61-7f3e1a219619", + "metadata": {}, + "source": [ + "And verify the entry:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "3b347b29-1583-4fbc-9b35-8e062b611d59", + "metadata": {}, + "outputs": [], + "source": [ + "sgp.DLCSmoothInterpSelection() & si_key" + ] + }, + { + "cell_type": "markdown", + "id": "af8f0d26-3879-4f50-a076-e60685028083", + "metadata": {}, + "source": [ + "Now, we populate `DLCSmoothInterp`, which will perform smoothing and\n", + "interpolation on all of the bodyparts specified." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "9bf16c32-0f5e-4cd2-b814-56745e836599", + "metadata": {}, + "outputs": [], + "source": [ + "sgp.DLCSmoothInterp().populate(si_key)" + ] + }, + { + "cell_type": "markdown", + "id": "3d3af0a2-16cc-43dc-af9c-0ec606cfe1e1", + "metadata": {}, + "source": [ + "And let's visualize the resulting position data using a scatter plot" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "ced96b05-e6dc-4771-bfb8-bcbddfb8e494", + "metadata": {}, + "outputs": [], + "source": [ + "(sgp.DLCSmoothInterp() & {**si_key, \"bodypart\": bodyparts[0]}\n", + ").fetch1_dataframe().plot.scatter(x=\"x\", y=\"y\", s=1, figsize=(5, 5))" + ] + }, + { + "cell_type": "markdown", + "id": "a838e4c4-8ff9-4b73-aee5-00eb91ea899f", + "metadata": {}, + "source": [ + "#### [DLCSmoothInterpCohort](#TableOfContents) " + ] + }, + { + "cell_type": "markdown", + "id": "3cf3d882-2c24-46ca-bfcc-72f21712e47b", + "metadata": {}, + "source": [ + "After smoothing/interpolation, we need to select bodyparts from which we want to\n", + "derive a centroid and orientation, which is performed by the\n", + "`DLCSmoothInterpCohort` table." 
+ ] + }, + { + "cell_type": "markdown", + "id": "5017fd46-2bb9-4349-981b-f9789ffec338", + "metadata": {}, + "source": [ + "First, let's make a key that represents the 'cohort', using\n", + "`dlc_si_cohort_selection_name`. We'll need a bodypart dictionary using bodypart\n", + "keys and smoothing/interpolation parameters used as value." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "92fb1af9-20cf-46d9-a518-a7f551334bc8", + "metadata": {}, + "outputs": [], + "source": [ + "cohort_key = si_key.copy()\n", + "if \"bodypart\" in cohort_key:\n", + " del cohort_key[\"bodypart\"]\n", + "if \"dlc_si_params_name\" in cohort_key:\n", + " del cohort_key[\"dlc_si_params_name\"]\n", + "cohort_key[\"dlc_si_cohort_selection_name\"] = \"green_red_led\"\n", + "cohort_key[\"bodyparts_params_dict\"] = {\n", + " \"greenLED\": si_params_name,\n", + " \"redLED_C\": si_params_name,\n", + "}\n", + "print(cohort_key)" + ] + }, + { + "cell_type": "markdown", + "id": "11c6a327-d4b0-4de1-a2c6-10a0443a3f96", + "metadata": {}, + "source": [ + "We'll insert the cohort into `DLCSmoothInterpCohortSelection` and populate `DLCSmoothInterpCohort`, which collates the separately smoothed and interpolated bodyparts into a single entry." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "805f55c1-3c7b-4cf9-bdd7-98743810c671", + "metadata": {}, + "outputs": [], + "source": [ + "sgp.DLCSmoothInterpCohortSelection().insert1(cohort_key, skip_duplicates=True)\n", + "sgp.DLCSmoothInterpCohort.populate(cohort_key)" + ] + }, + { + "cell_type": "markdown", + "id": "a6b7d361-47c5-4748-ac59-f51b897f7fe6", + "metadata": {}, + "source": [ + "And verify the entry:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "e7672b63-6dfc-46db-b8df-95c1e6730b6c", + "metadata": {}, + "outputs": [], + "source": [ + "sgp.DLCSmoothInterpCohort.BodyPart() & cohort_key" + ] + }, + { + "cell_type": "markdown", + "id": "d871bdca-2278-43ec-a70c-52257ad26170", + "metadata": {}, + "source": [ + "#### [DLCCentroid](#TableOfContents) " + ] + }, + { + "cell_type": "markdown", + "id": "4cc37edb-fdd3-4a05-8cd5-91f3c5f7cbbb", + "metadata": {}, + "source": [ + "With this cohort, we can determine a centroid using another set of parameters." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "4e31c8db-0396-475a-af71-ae38433d2b7d", + "metadata": {}, + "outputs": [], + "source": [ + "# Here is the default set\n", + "print(sgp.DLCCentroidParams.get_default())\n", + "centroid_params_name = \"default\"" + ] + }, + { + "cell_type": "markdown", + "id": "852948f7-e743-4319-be6b-265dadfca713", + "metadata": {}, + "source": [ + "Here is the syntax to add your own parameters:\n", + "\n", + "```python\n", + "centroid_params = {\n", + " \"centroid_method\": \"two_pt_centroid\",\n", + " \"points\": {\n", + " \"greenLED\": \"greenLED\",\n", + " \"redLED_C\": \"redLED_C\",\n", + " },\n", + " \"speed_smoothing_std_dev\": 0.100,\n", + "}\n", + "centroid_params_name = \"your_unique_param_name\"\n", + "sgp.DLCCentroidParams.insert1(\n", + " {\n", + " \"dlc_centroid_params_name\": centroid_params_name,\n", + " \"params\": centroid_params,\n", + " },\n", + " skip_duplicates=True,\n", + ")\n", + "```" + ] + }, + { + "cell_type": "markdown", + "id": "85ad4e53-43dd-4e05-84c4-7d4504766746", + "metadata": {}, + "source": [ + "We'll make a key to insert into `DLCCentroidSelection`." 
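,
+ "\n",
+ "If you're unsure which centroid parameter sets already exist, you can list them first (a minimal sketch; `dlc_centroid_params_name` is the primary key used in the insert example above):\n",
+ "\n",
+ "```python\n",
+ "# names of all existing DLCCentroidParams entries\n",
+ "print(sgp.DLCCentroidParams().fetch(\"dlc_centroid_params_name\"))\n",
+ "```"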
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "28ac17cb-4bb3-47b2-b1b9-1c4b37797591", + "metadata": {}, + "outputs": [], + "source": [ + "centroid_key = cohort_key.copy()\n", + "fields = list(sgp.DLCCentroidSelection.fetch().dtype.fields.keys())\n", + "centroid_key = {key: val for key, val in centroid_key.items() if key in fields}\n", + "centroid_key[\"dlc_centroid_params_name\"] = centroid_params_name\n", + "print(centroid_key)" + ] + }, + { + "cell_type": "markdown", + "id": "2674c0d3-d3fd-4cd9-a843-260c442c2d23", + "metadata": {}, + "source": [ + "After inserting into the selection table, we can populate `DLCCentroid`" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "47fccef4-2fef-4f74-b7a4-8564328b14d4", + "metadata": {}, + "outputs": [], + "source": [ + "sgp.DLCCentroidSelection.insert1(centroid_key, skip_duplicates=True)\n", + "sgp.DLCCentroid.populate(centroid_key)" + ] + }, + { + "cell_type": "markdown", + "id": "6e49c5ad-909f-4f1a-a156-f8f8a84fb78a", + "metadata": {}, + "source": [ + "Here we can visualize the resulting centroid position" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "29e7e447-fa6f-4f06-9ec9-4b9838b7255e", + "metadata": {}, + "outputs": [], + "source": [ + "(sgp.DLCCentroid() & centroid_key).fetch1_dataframe().plot.scatter(\n", + " x=\"position_x\",\n", + " y=\"position_y\",\n", + " c=\"speed\",\n", + " colormap=\"viridis\",\n", + " alpha=0.5,\n", + " s=0.5,\n", + " figsize=(10, 10),\n", + ")" + ] + }, + { + "cell_type": "markdown", + "id": "cb513a9d-5250-404c-8887-639f785516c7", + "metadata": {}, + "source": [ + "#### [DLCOrientation](#TableOfContents) " + ] + }, + { + "cell_type": "markdown", + "id": "509076f0-f0b8-4fd0-8884-32c48ca4a125", + "metadata": {}, + "source": [ + "We'll now go through a similar process to identify the orientation." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "faf244b3-7295-48ed-90ea-cf878e85e122", + "metadata": {}, + "outputs": [], + "source": [ + "print(sgp.DLCOrientationParams.get_default())\n", + "dlc_orientation_params_name = \"default\"" + ] + }, + { + "cell_type": "markdown", + "id": "8ec170be-7a7a-4a20-986c-d055aee1a08b", + "metadata": {}, + "source": [ + "We'll prune the `cohort_key` we used above and add our `dlc_orientation_params_name` to make it suitable for `DLCOrientationSelection`." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "09e4a6cf-472e-43e3-90aa-f7ff7fb9dc72", + "metadata": {}, + "outputs": [], + "source": [ + "fields = list(sgp.DLCOrientationSelection.fetch().dtype.fields.keys())\n", + "orient_key = {key: val for key, val in cohort_key.items() if key in fields}\n", + "orient_key[\"dlc_orientation_params_name\"] = dlc_orientation_params_name\n", + "print(orient_key)" + ] + }, + { + "cell_type": "markdown", + "id": "9406d2de-9b71-4591-82f6-ed53f2d4f220", + "metadata": {}, + "source": [ + "We'll insert into `DLCOrientationSelection` and populate `DLCOrientation`" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "f5d23302-02e3-427a-ac35-2f648e3ae674", + "metadata": {}, + "outputs": [], + "source": [ + "sgp.DLCOrientationSelection().insert1(orient_key, skip_duplicates=True)\n", + "sgp.DLCOrientation().populate(orient_key)" + ] + }, + { + "cell_type": "markdown", + "id": "36f62da0-0cc5-4ffb-b2df-7b68c3f6e268", + "metadata": {}, + "source": [ + "We can fetch the orientation as a dataframe as quality assurance." 
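,
+ "\n",
+ "Beyond eyeballing the table, you could also plot the orientation over time. A minimal sketch, assuming the dataframe carries an `orientation` column (in radians) indexed by time:\n",
+ "\n",
+ "```python\n",
+ "import matplotlib.pyplot as plt\n",
+ "\n",
+ "# fetch the orientation dataframe; assumes an 'orientation' column in radians\n",
+ "orient_df = (sgp.DLCOrientation() & orient_key).fetch1_dataframe()\n",
+ "orient_df[\"orientation\"].plot(figsize=(10, 3))\n",
+ "plt.ylabel(\"orientation (rad)\")\n",
+ "plt.show()\n",
+ "```"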
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "c5eba7f4-0b32-486a-894a-c97404c74d2b",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "(sgp.DLCOrientation() & orient_key).fetch1_dataframe()"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "dc75aeaf-018a-46ed-83a8-6603ae100791",
+   "metadata": {},
+   "source": [
+    "#### [DLCPosV1](#TableOfContents) "
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "21d3f9ba-dc89-4c32-a125-1fa85cd4132d",
+   "metadata": {},
+   "source": [
+    "After processing the position data, we have to do a few table manipulations to standardize various outputs.\n",
+    "\n",
+    "To summarize, we created a DLC project, extracted and labeled frames, trained a model, used that model to run pose estimation on a new behavioral video, smoothed and interpolated the result, formed a cohort of bodyparts, and determined the centroid and orientation of this cohort.\n",
+    "\n",
+    "Now we'll populate `DLCPosV1` with our centroid/orientation entries above."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "2a166dd6-3863-4349-97ac-19d7d6a841b4",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "fields = list(sgp.DLCPosV1.fetch().dtype.fields.keys())\n",
+    "dlc_key = {key: val for key, val in centroid_key.items() if key in fields}\n",
+    "dlc_key[\"dlc_si_cohort_centroid\"] = centroid_key[\"dlc_si_cohort_selection_name\"]\n",
+    "dlc_key[\"dlc_si_cohort_orientation\"] = orient_key[\n",
+    "    \"dlc_si_cohort_selection_name\"\n",
+    "]\n",
+    "dlc_key[\"dlc_orientation_params_name\"] = orient_key[\n",
+    "    \"dlc_orientation_params_name\"\n",
+    "]\n",
+    "print(dlc_key)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "551e4c5e-7c32-46b0-a138-80064a212fbe",
+   "metadata": {},
+   "source": [
+    "Now we can insert into `DLCPosSelection` and populate `DLCPosV1` with our `dlc_key`"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "7d7badff-0ad7-48cf-aef6-a4f55df8ded9",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "sgp.DLCPosSelection().insert1(dlc_key, skip_duplicates=True)\n",
+    "sgp.DLCPosV1().populate(dlc_key)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "412f1cff-2ead-4489-8a10-9fa7a5d33292",
+   "metadata": {},
+   "source": [
+    "We can also make sure that all of our data made it through by fetching the dataframe attached to this entry.
We should expect 8 columns:\n", + ">time
video_frame_ind
position_x
position_y
orientation
velocity_x
velocity_y
speed" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "853db96b-1cd4-4ff6-91ea-aca7f7d3851d", + "metadata": {}, + "outputs": [], + "source": [ + "(sgp.DLCPosV1() & dlc_key).fetch1_dataframe()" + ] + }, + { + "cell_type": "markdown", + "id": "2d8623a8-1725-4e02-b1a2-d2f993988102", + "metadata": {}, + "source": [ + "And even more, we can fetch the `pose_eval_result` that is calculated during this step. This field contains the percentage of frames that each bodypart was below the likelihood threshold of 0.95 as a means of assessing the quality of the pose estimation." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "d4f06244-9d59-44d4-bcbb-062809b3ea6e", + "metadata": {}, + "outputs": [], + "source": [ + "(sgp.DLCPosV1() & dlc_key).fetch1(\"pose_eval_result\")" + ] + }, + { + "cell_type": "markdown", + "id": "b2303147-3657-479c-8f72-b3fc6905a596", + "metadata": {}, + "source": [ + "#### [DLCPosVideo](#TableOfContents) " + ] + }, + { + "cell_type": "markdown", + "id": "af0b081d-f619-4c38-ba48-6ae1c0c5ff2b", + "metadata": {}, + "source": [ + "We can create a video with the centroid and orientation overlaid on the original\n", + "video. This will also plot the likelihood of each bodypart used in the cohort.\n", + "This is optional, but a good quality assurance step." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "0a725c08-a616-43a0-8925-4a82bf872ba3", + "metadata": {}, + "outputs": [], + "source": [ + "sgp.DLCPosVideoParams.insert_default()" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "84e2f782-ba45-487a-8e8f-e80dd33d9c31", + "metadata": {}, + "outputs": [], + "source": [ + "params = {\n", + " \"percent_frames\": 0.05,\n", + " \"incl_likelihood\": True,\n", + "}\n", + "sgp.DLCPosVideoParams.insert1(\n", + " {\"dlc_pos_video_params_name\": \"five_percent\", \"params\": params},\n", + " skip_duplicates=True,\n", + ")" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "5758e2fc-13e6-46cb-9a93-ae1b4c1f4741", + "metadata": {}, + "outputs": [], + "source": [ + "sgp.DLCPosVideoSelection.insert1(\n", + " {**dlc_key, \"dlc_pos_video_params_name\": \"five_percent\"},\n", + " skip_duplicates=True,\n", + ")" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "2887c0a5-77c8-421e-935e-0692f3f1fd68", + "metadata": {}, + "outputs": [], + "source": [ + "sgp.DLCPosVideo().populate(dlc_key)" + ] + }, + { + "cell_type": "markdown", + "id": "5a68bba8-9871-40ac-84c9-51ac0e76d44e", + "metadata": {}, + "source": [ + "#### [PositionOutput](#TableOfContents) " + ] + }, + { + "cell_type": "markdown", + "id": "25325173-bbaf-4b85-aef6-201384d9933b", + "metadata": {}, + "source": [ + "`PositionOutput` is the final table of the pipeline and is automatically\n", + "populated when we populate `DLCPosV1`" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "59ec40c9-78d8-4edd-8158-be91fb15af3e", + "metadata": {}, + "outputs": [], + "source": [ + "sgp.PositionOutput.merge_get_part(dlc_key)" + ] + }, + { + "cell_type": "markdown", + "id": "c414d9e0-e495-42ef-a8b0-1c7d53aed02e", + "metadata": {}, + "source": [ + "`PositionOutput` also has a part table, similar to the `DLCModelSource` table above. Let's check that out as well." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "50760123-7f09-4a94-a1f7-41a037914fd7", + "metadata": {}, + "outputs": [], + "source": [ + "PositionOutput.DLCPosV1() & dlc_key" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "c96daaa9-5e70-4a2c-b0a4-c2849e3a1440", + "metadata": {}, + "outputs": [], + "source": [ + "(PositionOutput.DLCPosV1() & dlc_key).fetch1_dataframe()" + ] + }, + { + "cell_type": "markdown", + "id": "e48c7a4e-0bbc-4101-baf2-e84f1f5739d5", + "metadata": {}, + "source": [ + "#### [PositionVideo](#TableOfContents)" + ] + }, + { + "cell_type": "markdown", + "id": "388e6602-8e80-47fa-be78-4ae120d52e41", + "metadata": {}, + "source": [ + "We can use the `PositionVideo` table to create a video that overlays just the\n", + "centroid and orientation on the video. This table uses the parameter `plot` to\n", + "determine whether to plot the entry deriving from the DLC arm or from the Trodes\n", + "arm of the position pipeline. This parameter also accepts 'all', which will plot\n", + "both (if they exist) in order to compare results." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "b2a782ce-0a14-4725-887f-ae6f341635f8", + "metadata": {}, + "outputs": [], + "source": [ + "sgp.PositionVideoSelection().insert1(\n", + " {\n", + " \"nwb_file_name\": \"J1620210604_.nwb\",\n", + " \"interval_list_name\": \"pos 13 valid times\",\n", + " \"trodes_position_id\": 0,\n", + " \"dlc_position_id\": 1,\n", + " \"plot\": \"DLC\",\n", + " \"output_dir\": \"/home/dgramling/Src/\",\n", + " }\n", + ")" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "c32993e7-5b32-46f9-a2f9-9634aef785f2", + "metadata": {}, + "outputs": [], + "source": [ + "sgp.PositionVideo.populate({\"plot\": \"DLC\"})" + ] + }, + { + "cell_type": "markdown", + "id": "be097052-3789-4d55-aca1-e44d426c39b4", + "metadata": {}, + "source": [ + "### _CONGRATULATIONS!!_\n", + "Please treat yourself to a nice tea break :-)" + ] + }, + { + "cell_type": "markdown", + "id": "c71c90a2", + "metadata": {}, + "source": [ + "### [Return To Table of Contents](#TableOfContents)
" + ] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Python 3 (ipykernel)", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.9.16" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} diff --git a/notebooks/21_Position_DLC_1.ipynb b/notebooks/22_DLC_Loop.ipynb similarity index 54% rename from notebooks/21_Position_DLC_1.ipynb rename to notebooks/22_DLC_Loop.ipynb index dee0d2594..4d9d33e77 100644 --- a/notebooks/21_Position_DLC_1.ipynb +++ b/notebooks/22_DLC_Loop.ipynb @@ -5,20 +5,20 @@ "id": "a93a1550-8a67-4346-a4bf-e5a136f3d903", "metadata": {}, "source": [ - "# Position - DeepLabCut from Scratch\n" + "## Position- DeepLabCut from Scratch" ] }, { "cell_type": "markdown", - "id": "cbf56794", + "id": "13dd3267", "metadata": {}, "source": [ - "### Overview\n" + "### Overview" ] }, { "cell_type": "markdown", - "id": "de29d04e", + "id": "b52aff0d", "metadata": {}, "source": [ "_Developer Note:_ if you may make a PR in the future, be sure to copy this\n", @@ -37,11 +37,11 @@ "- creating a DLC project\n", "- extracting and labeling frames\n", "- training your model\n", + "- executing pose estimation on a novel behavioral video\n", + "- processing the pose estimation output to extract a centroid and orientation\n", + "- inserting the resulting information into the `PositionOutput` table\n", "\n", - "If you have a pre-trained project, you can either skip to the\n", - "[next tutorial](./22_Position_DLC_2.ipynb) to load it into the database, or skip\n", - "to the [following tutorial](./23_Position_DLC_3.ipynb) to start pose estimation\n", - "with a model that is already inserted.\n" + "**Note 2: Make sure you are running this within the spyglass-position Conda environment (instructions for install are in the environment_position.yml)**" ] }, { @@ -59,36 +59,62 @@ "id": "0c67d88c-c90e-467b-ae2e-672c49a12f95", "metadata": {}, "source": [ - "### Table of Contents\n" + "### Table of Contents\n", + "[`DLCProject`](#DLCProject1)
\n", + "[`DLCModelTraining`](#DLCModelTraining1)
\n", + "[`DLCModel`](#DLCModel1)
\n", + "[`DLCPoseEstimation`](#DLCPoseEstimation1)
\n", + "[`DLCSmoothInterp`](#DLCSmoothInterp1)
\n", + "[`DLCCentroid`](#DLCCentroid1)
\n", + "[`DLCOrientation`](#DLCOrientation1)
\n", + "[`DLCPosV1`](#DLCPosV1-1)
\n", + "[`DLCPosVideo`](#DLCPosVideo1)
\n", + "[`PositionOutput`](#PositionOutput1)
" ] }, { "cell_type": "markdown", - "id": "3ece5c05", + "id": "70a0a678", "metadata": {}, "source": [ - "- [Imports](#imports)\n", - "- [`DLCProject`](#DLCProject1)\n", - "- [`DLCModelTraining`](#DLCModelTraining1)\n", - "- [`DLCModel`](#DLCModel1)\n", - "\n", - "**You can click on any header to return to the Table of Contents**\n" + "__You can click on any header to return to the Table of Contents__" ] }, { "cell_type": "markdown", - "id": "c52f2a05", + "id": "c9b98c3d", "metadata": {}, "source": [ - "### Imports\n" + "### Imports" ] }, { "cell_type": "code", - "execution_count": 2, - "id": "5ddbc468", + "execution_count": 1, + "id": "b36026fa", "metadata": {}, "outputs": [], + "source": [ + "%load_ext autoreload\n", + "%autoreload 2" + ] + }, + { + "cell_type": "code", + "execution_count": 2, + "id": "0f567531", + "metadata": {}, + "outputs": [ + { + "name": "stderr", + "output_type": "stream", + "text": [ + "[2024-01-18 10:12:13,219][INFO]: Connecting ebroyles@lmf-db.cin.ucsf.edu:3306\n", + "[2024-01-18 10:12:13,255][INFO]: Connected ebroyles@lmf-db.cin.ucsf.edu:3306\n", + "OMP: Info #276: omp_set_nested routine deprecated, please use omp_set_max_active_levels instead.\n" + ] + } + ], "source": [ "import os\n", "import datajoint as dj\n", @@ -97,6 +123,13 @@ "import spyglass.common as sgc\n", "import spyglass.position.v1 as sgp\n", "\n", + "from pathlib import Path, PosixPath, PurePath\n", + "import glob\n", + "import numpy as np\n", + "import pandas as pd\n", + "import pynwb\n", + "from spyglass.position import PositionOutput\n", + "\n", "# change to the upper level folder to detect dj_local_conf.json\n", "if os.path.basename(os.getcwd()) == \"notebooks\":\n", " os.chdir(\"..\")\n", @@ -114,7 +147,7 @@ "id": "5e6221a3-17e5-45c0-aa40-2fd664b02219", "metadata": {}, "source": [ - "#### [DLCProject](#TableOfContents) \n" + "#### [DLCProject](#TableOfContents) " ] }, { @@ -126,7 +159,7 @@ " Notes: