This repository contains the processing, analysis, and visualisation scripts used in the following paper:
- Perry, A., Hughes, L., Adams, N., Naessens, M., Murley, A., Rouse, M., ... & Rowe, J. (2022). The neurophysiological effect of NMDA-R antagonism of frontotemporal lobar degeneration is conditional on individual GABA concentration. Translational Psychiatry, available at https://www.nature.com/articles/s41398-022-02114-6
This study investigated the influence of the pharmacological agent memantine on frontotemporal brain networks in persons with frontotemporal lobar degeneration (FTLD).
The resources contained here are organised into three sections:
- Processing pipeline for MEG data
- Analysis of MEG responses, and of group and drug differences
- Code to reproduce publication plots in R, Python, and Markdown
Most importantly, the information below details how the main findings (relating to sections 2 and 3) were produced.
These scripts will produce the following findings:

- MEG responses across placebo and memantine sessions for control and patient populations:
  `source("MMNmean_ConPatDrugInt_Source.R")`
- And the principal finding, that responses to drug (in auditory cortex) in patients are conditional on GABA concentration:
  `source("LMM_druggabaint.R")`
Also included is an RMarkdown file, and its resultant PDF output, which details the workflow steps to reproduce the two main findings above.
The principal findings are also reproduced with Python libraries (i.e. pandas, scipy, seaborn, scikit-learn).
As an added bonus, machine learning tools are used to assess how well we can predict drug responses from patients' GABA concentrations.
Using Random Forest regression with leave-one-out cross-validation, we indeed find a relationship between the drug responses predicted by our model (y-axis) and patients' actual responses (x-axis):
Leave-one-out cross-validation (LOOCV) works by fitting a (Random Forest) regression model to all subjects but one; the procedure is then repeated with each subject left out in turn. By design, it assesses how well our model performs on unseen data (i.e. the left-out subject).
We can run this with the following code:
```python
from sklearn.model_selection import LeaveOneOut
from sklearn.model_selection import cross_val_score
from sklearn.ensemble import RandomForestRegressor

# Create the LOOCV procedure
CrossVal = LeaveOneOut()

# Create the model
RF_model = RandomForestRegressor(random_state=1)

# Evaluate the model
CrossVal_scores = cross_val_score(RF_model, X, y,
                                  scoring='neg_mean_squared_error',
                                  cv=CrossVal)
```
Here X (patients' disease state and age) and y (change in brain response to drug) represent the independent and dependent variables, respectively.
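What `cross_val_score` does under the hood can be sketched as an explicit loop. The sketch below uses hypothetical synthetic data in place of the study variables (20 subjects, two predictors), purely for illustration:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import LeaveOneOut

# Hypothetical stand-ins for the study variables: 20 subjects,
# two predictors (e.g. disease state and age), one response each
rng = np.random.default_rng(1)
X = rng.normal(size=(20, 2))
y = rng.normal(size=20)

squared_errors = []
for train_idx, test_idx in LeaveOneOut().split(X):
    # Fit on all subjects except one
    model = RandomForestRegressor(random_state=1)
    model.fit(X[train_idx], y[train_idx])
    # Predict the held-out subject and record its squared error
    pred = model.predict(X[test_idx])
    squared_errors.append((y[test_idx][0] - pred[0]) ** 2)

# One squared error per left-out subject (here reported unsigned,
# whereas sklearn's 'neg_mean_squared_error' negates it)
print('Mean squared error: %.3f (%.3f)'
      % (np.mean(squared_errors), np.std(squared_errors)))
```

Each subject is held out exactly once, so the loop yields one out-of-sample error per subject, matching the length of `CrossVal_scores` above.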
We can then use the mean squared error (MSE), an appropriate metric for regression, to assess model performance. Note that scikit-learn scorers are maximised, so `'neg_mean_squared_error'` returns negated values:
```python
import numpy as np

# Report performance (scores are negated MSE values)
print('Mean squared error is: %.3f (%.3f)'
      % (np.mean(CrossVal_scores), np.std(CrossVal_scores)))
```
This reveals that the mean (negated) squared error across cross-validation folds in predicting patients' drug responses is -0.002 (SD = 0.002).
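The predicted-versus-actual comparison described above can also be obtained with scikit-learn's `cross_val_predict`, which returns one out-of-sample prediction per subject. The sketch below again uses hypothetical synthetic data, not the study dataset:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import LeaveOneOut, cross_val_predict

# Hypothetical synthetic data standing in for the study variables
rng = np.random.default_rng(1)
X = rng.normal(size=(20, 2))
y = X[:, 0] + 0.1 * rng.normal(size=20)  # response loosely tied to predictor 1

# One prediction per subject, each made by a model trained
# on the remaining 19 subjects
y_pred = cross_val_predict(RandomForestRegressor(random_state=1),
                           X, y, cv=LeaveOneOut())

# Plotting y_pred (predicted) against y (actual) gives the
# scatter described in the text
print(np.corrcoef(y, y_pred)[0, 1])
```

This is a convenient way to build the predicted-vs-actual scatter plot without writing the leave-one-out loop by hand.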