
This page contains frequently asked questions and common mistakes.

Why does FIX (FMRIB's ICA-based Xnoiseifier) perform poorly?

FIX performance will depend on the training dataset. In our repository, we include trained-weights files for the ADNI3 and Cam-CAN datasets. If you are not processing data from these datasets, we recommend creating a trained-weights (.RData) file for FIX specific to your dataset. More information on creating the .RData file can be found in the FIX User Guide here. Steps for using your own .RData file in the pipeline can be found here.
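
For reference, a minimal sketch of how training might be invoked, assuming FIX is installed and on your PATH and that each listed Melodic directory contains hand-labelled noise components as described in the FIX User Guide (the directory and output names below are hypothetical):

import subprocess

# Hypothetical Melodic output directories, each containing hand-labelled
# noise components (hand_labels_noise.txt) as described in the FIX User Guide.
melodic_dirs = ["sub-01.ica", "sub-02.ica", "sub-03.ica"]

# "fix -t <output>" trains a classifier and writes <output>.RData;
# "-l" additionally runs leave-one-out accuracy testing.
subprocess.run(["fix", "-t", "my_study_training", "-l", *melodic_dirs], check=True)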

Which parcellation should I use?

The choice of parcellation depends on your research question as well as the modality and acquisition parameters of the data you have. We provide a few parcellations in our repository but you can use any parcellation you wish. More details on implementing your own parcellation can be found here.

Can I use the structural connectivity matrices as-is?

Although the pipeline outputs are directly compatible with TheVirtualBrain, we remind users that probabilistic tractography is prone to generating spurious connections. Beyond symmetrizing the weights and tract lengths matrices, no further post-processing is performed on the structural connectivity outputs. We encourage users to consider thresholding their matrices, and we leave it up to individual users to choose the thresholding method best suited to their needs.
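
As an illustration only, a minimal sketch of an absolute threshold applied to a weights matrix with numpy (the file names and the 1% cutoff are hypothetical; choose a thresholding scheme appropriate for your analysis):

import numpy as np

# Load the structural connectivity weights (hypothetical path).
weights = np.loadtxt("weights.txt")

# The pipeline already symmetrizes weights and tract lengths; shown only for completeness.
weights = (weights + weights.T) / 2.0

# Example absolute threshold: zero out connections below 1% of the strongest weight.
cutoff = 0.01 * weights.max()
weights_thresholded = np.where(weights >= cutoff, weights, 0.0)

np.savetxt("weights_thresholded.txt", weights_thresholded)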

I need to reprocess my subject. Is there anything I need to watch out for?

If you'd like to re-process a previously processed subject (including one whose processing failed), you will need to clear the subject's folder so that it contains only the rawdata subdirectory. The exception to this requirement is reparcellation, in which case the subject folder can be left intact.
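
For example, a minimal sketch of clearing a subject folder while keeping only rawdata (the subject path is hypothetical; double-check it before deleting anything):

import shutil
from pathlib import Path

subject_dir = Path("/path/to/subjects/SUBJECT_ID")  # hypothetical subject folder

for entry in subject_dir.iterdir():
    if entry.name == "rawdata":
        continue  # keep the raw inputs
    if entry.is_dir():
        shutil.rmtree(entry)  # remove derived subdirectories
    else:
        entry.unlink()  # remove stray files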

I used the reparcellation tool but received incomplete outputs. What went wrong?

One common cause is the following: the subject you are processing with the reparcellation tool MUST already have been processed with the regular pipeline and MUST contain parcellation-specific outputs with parcellation-specific filenames (for your original parcellation). If your subject was processed by an earlier version of the regular TVB-UKBB pipeline and does not contain output files with parcellation-specific filenames, you can rename its parcellation-specific outputs to include the parcellation name using bb_pipeline_tools/parcellation_filename_updater.sh. Once the filenames are updated, you can process the subject with the reparcellation tool.

What is the order of ROIs in the TVB output files?

The order of ROIs in the TVB output files corresponds to the sequence in which ROIs appear in your parcellation's LUT, rather than to the ROI numbers listed in the first column of the LUT. For example, when using the TVB_SchaeferTian_220 parcellation, the ROIs follow this order: left hemisphere cortex, left hemisphere subcortex, right hemisphere cortex, and right hemisphere subcortex.
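
To check the ordering for your own parcellation, one way is to read the ROI names straight out of the LUT in file order. This is a sketch that assumes a FreeSurfer-style LUT (one "index name R G B A" row per ROI, with optional comment lines); the file name is hypothetical:

lut_path = "TVB_SchaeferTian_220_LUT.txt"  # hypothetical LUT file name

roi_order = []
with open(lut_path) as f:
    for line in f:
        fields = line.split()
        if not fields or fields[0].startswith("#"):
            continue  # skip blank lines and comments
        roi_order.append(fields[1])  # second column: ROI name

# roi_order[i] is the ROI in row/column i of the TVB output matrices.
for i, name in enumerate(roi_order):
    print(i, name)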

My QC pipeline isn't running correctly. I am getting an AttributeError: 'GLXPlatform' object has no attribute 'OSMesa' error.

We suggest using or adapting from the Cam-CAN_dev_CPU or ADNI3_dev_CPU branches. There is a slightly different environment setup and a corresponding container that should be used with these branches. We are currently working on a GPU implementation of the pipeline that also addresses this error.

You may also choose to run most of the pipeline with a GPU-compatible branch (e.g. Cam-CAN or ADNI3_dev) and run only the QC subpipeline with Cam-CAN_dev_CPU or ADNI3_dev_CPU. Until we publish a GPU implementation of the pipeline that also addresses the OSMesa error, this workaround will allow you to take advantage of GPU acceleration while avoiding the OSMesa error. These instructions assume you're working with the ADNI3_dev and ADNI3_dev_CPU branches on Compute Canada. Exact line numbers and commands may differ if you're working with Cam-CAN and Cam-CAN_dev_CPU.

  1. Clone ADNI3_dev and ADNI3_dev_CPU branches into two separate directories.
  2. Comment out the QC subpipeline in bb_pipeline_tools/bb_pipeline.py of your ADNI3_dev pipeline and comment out all other subpipelines in bb_pipeline_tools/bb_pipeline.py of your ADNI3_dev_CPU pipeline. Specifically, comment out lines 126-129 in ADNI3_dev:
tvb_bb_QC(
            subject,
            fileConfig
        )

and comment out lines 87, 89, 113, 139, 140, 142-144 in ADNI3_dev_CPU:

fileConfig = bb_basic_QC(subject, fileConfig)
logger.info("File configuration after running file manager: " + str(fileConfig))
bb_pipeline_struct(subject, runTopup, fileConfig)
bb_pipeline_func(subject, fileConfig)
bb_pipeline_diff(subject, fileConfig)
bb_IDP(
            subject, fileConfig
        )
  3. Edit your submission script so that it calls both pipelines (in the correct order) and points to both corresponding containers:
#!/bin/bash
#SBATCH --account=ACCOUNT
#SBATCH --mail-user=EMAIL
#SBATCH --mail-type=FAIL
#SBATCH --gres=gpu:1
#SBATCH --cpus-per-task=6
#SBATCH --mem=128000MB
#SBATCH --time=0-23:00


singularity run --nv -B /scratch -B /cvmfs -B /project -B /home -B ~/.Xauthority /path/to/GPU-SIF/tvb-ukbb_1.0.sif ${1} /path/to/ADNI3_dev/tvb-ukbb
apptainer run --nv -B /scratch -B /cvmfs -B /project -B /home -B ~/.Xauthority /path/to/CPU-SIF/tvb-ukbb.sif ${1} /path/to/ADNI3_dev_CPU/tvb-ukbb
  4. Use the submission script as usual!

Looking for more help?

If you haven't been able to find the answer to your question in the wiki or in the FAQ, you may post a question in the Discussions section here.
