diff --git a/INSTALL.md b/INSTALL.md deleted file mode 100644 index 978af83..0000000 --- a/INSTALL.md +++ /dev/null @@ -1,103 +0,0 @@ -## Installing Python and Django and Databases - -The most important parts of these tools are the database and its interfaces. -The database schema is installed and configured using [Django](https://www.djangoproject.com/download/), - a [Python](https://www.python.org/)-based web framework. - -To get up and running, we need a database server, Python, Django, the python-database connector, - Python's scientific packages, and some other Python packages. Platform-specific instructions are below. - -### On OS X - -1. Install [homebrew](http://brew.sh/). -1. `brew install mysql` -1. Optional - Change default data dir - * `mkdir /path/to/datadir` - * `sudo mysqld --initialize-insecure --user=`whoami` --basedir="$(brew --prefix mysql)" --datadir=/path/to/datadir` - * You may need to `chown -R mysql:wheel /path/to/datadir` and `chmod -R 755 /path/to/datadir` - * Add a `datadir` line to my.cnf (see below)) -1. Edit ~/.profile and add the following lines. - `export PATH=/usr/local/mysql/bin:$PATH` - `export PATH=/usr/local/mysql/lib:$PATH` -1. `source ~/.profile` or you may have to close and reopen terminal. -1. Run the server `mysql.server start` or `mysqld_safe &` -1. [Install conda](https://docs.conda.io/projects/conda/en/latest/user-guide/install/macos.html#installing-on-macos) -1. Create or activate your conda environment. e.g. to create: `conda create -n sql python=3.6` - * `source activate sql` -1. `conda install django numpy scipy pyqt qtpy pyqtgraph mysqlclient openssl` - -### On Linux - -First we'll install and configure MySQL server. -1. [Follow these instructions](https://www.digitalocean.com/community/tutorials/how-to-install-mysql-on-ubuntu-18-04) -1. Optional - [Change default datadir](https://www.digitalocean.com/community/tutorials/how-to-move-a-mysql-data-directory-to-a-new-location-on-ubuntu-18-04) -1. Create or activate your conda environment. e.g. to create: `conda create -n sql python=3.6` - * `source activate sql` -1. `conda install django numpy scipy pyqt qtpy pyqtgraph mysqlclient openssl=1.0.2r` - -### On Windows -1. Install Miniconda and setup conda environment in Anaconda Prompt: - * `conda create -n sql python=3.6 django numpy scipy pyqt qtpy pyqtgraph mysqlclient openssl` - * Activate your environment with `conda activate sql` -1. While inside Anaconda Prompt, navigate to the directory `eerf/django-eerf` and run: - * `python setup.py install` -1. Install [MySQL Community Server](https://dev.mysql.com/downloads/mysql/) - * Choose the Developer package (MySQL Server, Workbench, and Shell) - * Work through pre-requisites - * MySQL80 will be your service name unless you change this - -## Configuring the database server and Django - -Though any DBMS should work, I use MySQL because some of the data I work with were originally recorded into MySQL. -I use the MyISAM storage engine because I had trouble getting good performance out of InnoDB when working with these old data. -If you followed the instructions above then mysql and its python connector should already be installed. - -* You need a settings file to specify the MyISAM storage engine and some other options. 
-  (Use any of the following locations for your settings file: /etc/my.cnf /etc/mysql/my.cnf /usr/local/etc/my.cnf ~/.my.cnf)
-    ```
-    [mysqld]
-    default-storage-engine = MyISAM
-    query_cache_type = 1
-    key_buffer_size = 2G
-    query_cache_limit = 400M
-    ```
-
-Many of the instructions are from the [Django tutorial](https://docs.djangoproject.com/en/2.2/intro/tutorial02/):
-
-1. Create a Django project
-    1. `cd ~`
-    1. `mkdir django_eerf`
-    1. `cd django_eerf`
-    1. `django-admin startproject expdb`
-    1. `cd expdb`
-1. Configure the Django project.
-    1. Edit expdb/settings.py . In the Database section, under 'default', change the ENGINE and NAME, and add OPTIONS:
-    ```
-    ’ENGINE’: ‘django.db.backends.mysql’,
-    ’NAME’: ‘expdb’,
-    'HOST': '127.0.0.1',
-    'USER': 'username',
-    'PASSWORD': 'password',
-    'OPTIONS': {'read_default_file': '/path/to/my.cnf'}
-    ```
-1. Create the Django project database.
-    - `mysql -uroot -p`
-    - `create database expdb character set utf8;`
-    - `exit;`
-    - On Windows (Workbench GUI):
-        - Open Workbench, connect to database (Database > Connect to Database)
-        - Enter your username
-        - Make a new schema called `expdb`
-        - [Refer to this tutorial for a visual guide](https://www.mysqltutorial.org/mysql-create-database/)
-1. Install the base Django tables. From ~/Documents/django_eerf/expdb/ `python manage.py migrate`
-1. Test Django
-    - `python manage.py runserver`
-    - Navigate to `http://127.0.0.1:8000/`
-
-You are now ready to install the Django [eerf web application](django-eerf/README.md).
-
-## Additional Tips for installing MySQL
-Create a defaults file (usually /etc/my.cnf) with all of your settings. You can use the provided my.cnf to start.
-Run `sudo mysql_install_db --user=mysql --defaults-file=/etc/my.cnf`
-Run `mysqld_safe & --defaults-file=/etc/my.cnf`
-It is not necessary to specify the defaults file when using the default location (/etc/my.cnf).
diff --git a/INSTALL_MYSQL.md b/INSTALL_MYSQL.md
new file mode 100644
index 0000000..63fba8f
--- /dev/null
+++ b/INSTALL_MYSQL.md
@@ -0,0 +1,83 @@
+## Installing MySQL Database and Python connector
+
+We assume that you followed other instructions to install Python and Django. The rest of the instructions are for installing the MySQL database server and its interfaces. Though any DBMS should work, I use MySQL because some of the data I work with were originally recorded into MySQL.
+
+### On OS X
+
+1. Install [homebrew](http://brew.sh/) if you don't have it already.
+1. `brew install mysql`
+1. Optional - Change default data dir
+    * `mkdir /path/to/datadir`
+    * `sudo mysqld --initialize-insecure --user="$(whoami)" --basedir="$(brew --prefix mysql)" --datadir=/path/to/datadir`
+    * You may need to `chown -R mysql:wheel /path/to/datadir` and `chmod -R 755 /path/to/datadir`
+    * Add a `datadir` line to my.cnf (see below)
+1. Edit ~/.profile and add the following lines.
+    `export PATH=/usr/local/mysql/bin:$PATH`
+    `export PATH=/usr/local/mysql/lib:$PATH`
+1. `source ~/.profile` or you may have to close and reopen terminal.
+1. Run the server with `mysql.server start` or `mysqld_safe &`
+1. Your Python environment will require some additional packages, e.g. `conda install mysqlclient openssl`
+
+### On Linux
+
+First we'll install and configure MySQL server.
+1. [Follow these instructions](https://www.digitalocean.com/community/tutorials/how-to-install-mysql-on-ubuntu-18-04)
+1. Optional - [Change default datadir](https://www.digitalocean.com/community/tutorials/how-to-move-a-mysql-data-directory-to-a-new-location-on-ubuntu-18-04)
+1. `conda install mysqlclient openssl=1.0.2r` (I haven't tested the openssl version requirement in a long time.)
+
+### On Windows
+
+1. Install [MySQL Community Server](https://dev.mysql.com/downloads/mysql/)
+    * Choose the Developer package (MySQL Server, Workbench, and Shell)
+    * Work through pre-requisites
+    * MySQL80 will be your service name unless you change this
+1. `conda install mysqlclient openssl`
+
+## Configuring the database server for use with serf
+
+The MySQL server needs some minimal configuration. The easiest way to do this is to open MySQL Workbench with Administrator privileges (these are needed if the MySQL ini file is stored in a system directory).
+You can also edit the settings file manually. Possible locations:
+* /etc/my.cnf
+* /etc/mysql/my.cnf
+* /usr/local/etc/my.cnf
+* ~/.my.cnf
+* C:\ProgramData\MySQL\MySQL Server 8.0\my.ini
+
+You'll likely want to change the data directory. In the past I've also needed to change from the default storage engine (InnoDB) to the MyISAM storage engine because I had trouble getting good performance out of InnoDB when working with some old rat data. Here are some changes I've made in the past:
+
+```
+[mysqld]
+datadir = /Volumes/STORE/eerfdata
+default-storage-engine = MyISAM
+default_tmp_storage_engine = MyISAM
+query_cache_type = 1
+key_buffer_size = 2G
+query_cache_limit = 400M
+```
+
+If you changed the datadir then you'll need to copy the original data structure to the new location (Windows) or use `sudo mysql_install_db --user=root --defaults-file=/etc/my.cnf`.
+
+Then restart the server: `mysqld_safe --defaults-file=/etc/my.cnf &`
+
+It is not necessary to specify the defaults file when using the default location (/etc/my.cnf).
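+Before creating the database, you can optionally sanity-check the server and the Python connector together. This is only a minimal sketch: it assumes the `mysqlclient` package from the steps above is installed in your active environment (`MySQLdb` is the module that package provides) and that your option file holds valid `[client]` credentials; adjust the paths to your setup.
+
+```python
+import os
+import MySQLdb  # the module installed by the mysqlclient package
+
+# read_default_file lets the connector take [client] user/password
+# from your option file instead of hard-coding credentials here.
+conn = MySQLdb.connect(
+    host="127.0.0.1",
+    read_default_file=os.path.expanduser("~/.my.cnf"),
+)
+cur = conn.cursor()
+cur.execute("SELECT VERSION()")
+print("Connected to MySQL server version:", cur.fetchone()[0])
+conn.close()
+```
+
+If that prints a version string, the server and connector are working and you are ready to create the database below.
+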
+1. Create the root database for the application:
+    - Mac/Linux:
+        - `mysql -uroot -p`
+        - `create database serf character set utf8;`
+        - `exit;`
+    - On Windows (Workbench GUI):
+        - Open Workbench, connect to database (Database > Connect to Database)
+        - Enter your username and password
+        - Make a new schema called `serf`
+        - [Refer to this tutorial for a visual guide](https://www.mysqltutorial.org/mysql-create-database/)
+
+You are now ready to install the database for the serf app.
+
+## Additional Tips for installing MySQL
+
+1. Optional (mandatory for Matlab ORM): Install additional SQL triggers to
+    - Automatically log a new entry or a change to subject_detail_value
+    - Automatically set the stop_time field of a datum to +1s for trials or +1 day for days/periods.
+    - Automatically set the number of a new datum to be the next integer greater than the latest for that subject/span_type.
+    - After installing the databases for the serf app, from shell/terminal, run `mysql -uroot serf < serf.sql`
diff --git a/django-eerf/README.md b/MISC.md
similarity index 50%
rename from django-eerf/README.md
rename to MISC.md
index 5b282e1..cda338a 100644
--- a/django-eerf/README.md
+++ b/MISC.md
@@ -1,76 +1,16 @@
-=====
-EERFAPP
-=====
-
-eerfapp is a Django app. The core of the app is its database. The database schema is designed to provide a complete and flexible representation of epoched neurophysiological data.
-
-![Database Schema](/models.png?raw=true "Database Schema")
-
-# Setup Instructions
-Before proceeding, consult our guide for [Installing Python and Django and Databases](https://github.com/cboulay/EERF/blob/master/INSTALL.md).
-Ensure that you have already run `python setup.py install` in the `eerf/django-eerf`. - - From `eerf/django-eerf/build/lib`, copy `eerfapp` and `eerfhelper` folders into your python environment's `site-packages` folder. - -## Installing the Django EERF Web Application -1. Navigate inside your expdb project - - If you haven't created it already, `django-admin startproject expdb` -1. Include the eerfapp URL configurations in your project's `urls.py`: - - Add this to your import statements `from django.conf.urls import url, include` - - Add the following under `urlpatterns` list: - - `url(r'^eerfapp/', include(('eerfapp.urls','eerfapp'), namespace="eerfapp")),` -1. Open `settings.py` and apply the following changes: - - Under `INSTALLED_APPS` add `eerfapp,` to the list - - Double check that you have these changes under `DATABASES`: - - `'ENGINE': 'django.db.backends.mysql'` - - `'NAME': 'expdb'` - - Credentials defined here must be correct - - [Create a MySQL OPTIONS file](https://docs.djangoproject.com/en/2.2/ref/databases/#connecting-to-the-database) and assign an username and password - -1. Activate your conda environment (e.g. environment name sql) - - Linux/Mac: `source ~/miniconda3/bin/activate && conda activate sql` - - Windows: Open Anaconda Prompt and type `conda activate sql` -1. Run `python manage.py makemigrations eerfapp` in your expdb project -1. You should see the following output: - ``` -Migrations for 'eerfapp': - /Users/mikkeyboi/miniconda3/envs/sql/lib/python3.6/site-packages/eerfapp/migrations/0001_initial.py - - Create model Datum - - Create model DatumFeatureValue - - Create model DetailType - - Create model FeatureType - - Create model Subject - - Create model System - - Create model DatumFeatureStore - - Create model DatumStore - - Create model SubjectLog - - Add field feature_type to datumfeaturevalue - - Add field subject to datum - - Add field trials to datum - - Create model SubjectDetailValue - - Alter unique_together for datumfeaturevalue (1 constraint(s)) - - Create model DatumDetailValue - - Alter unique_together for datum (1 constraint(s)) - - ``` - -1. Run `python manage.py migrate`, and it should `Applying eerfapp.0001_initial... OK` -1. Create a superuser for administrative control on the django side `python manage.py createsuperuser` - - Take note of these credentials. The username will default as your account name -1. Test your server `python manage.py runserver` and go to `localhost:8000/admin` - - ## Developing in Django -Django has a [good tutorial](https://docs.djangoproject.com/en/3.0/intro/tutorial01/) to familiarize you with the structure of the project you just built. -To send commands to your server, run `python manage.py shell` in your expdb project. [Here are some commands you can try](https://docs.djangoproject.com/en/3.0/intro/tutorial02/#playing-with-the-api). -Running commands like this in REPL may not be ideal for development. If you wish to run scripts that call and write to the database, you have to include the following in the script you wish to run: - ``` python + +Django has a [good tutorial](https://docs.djangoproject.com/en/3.0/intro/tutorial01/) to familiarize you with the structure of the project you just built. To send commands to your server, run `python manage.py shell` in your expdb project. [Here are some commands you can try](https://docs.djangoproject.com/en/3.0/intro/tutorial02/#playing-with-the-api). Running commands like this in REPL may not be ideal for development. 
If you wish to run scripts that call and write to the database, you have to include the following in the script you wish to run:
+
+```python
+import os
+from django.core.management import execute_from_command_line
+os.environ['DJANGO_SETTINGS_MODULE'] = 'expdb.settings'
+execute_from_command_line()
- ```
+```
+
 Below is an example where I pass some multidimensional random data into the server, retrieve it, and compare their values (sent as a blob, retrieved in bytes).
-``` python
+```python
 import os
 import pdb
 import numpy as np
diff --git a/README.md b/README.md
index bc96baa..7bf1c07 100644
--- a/README.md
+++ b/README.md
@@ -1,38 +1,103 @@
-# Evoked Electrophysiological Response Feedback (EERF)
+# Segmented Electrophys Recordings and Features Database
-EERF is a [Django](https://www.djangoproject.com/) web app and some helper tools to manage and analyze evoked electrophysiological response data.
-The data can be analyzed in real-time and complex features can be extracted from the data to drive feedback.
+SERF-DB is a database schema designed to facilitate collection and analysis of segmented electrophysiological recordings and features.

![Database Schema](/models.png?raw=true "Database Schema")

-## Contents List
+- In the `python` folder we provide a Python package `serf` comprising a [Django](https://www.djangoproject.com/) application to administer the database and act as an object-relational mapper (ORM), and a `tools` module to help with feature calculation and data analysis. Using this schema, and interfacing with the Django ORM, it is easy to work with the data in real-time in Python.
+- The [matlab](matlab/README.md) folder contains some (very outdated) code for interfacing with the database in Matlab.
+- serf.sql is some SQL that adds functionality when using a non-Django API.
-- [django-eerf](django-eerf/README.md) is a python package containing my eerfapp Django web app and eerfhelper to facilitate use of this app outside of the web server context.
-- eerfapp.sql Is some SQL to add some functionality when using non-Django API.
-- [eerfmatlab](eerfmatlab/REAMDE.md) contains some tools for working with the data in Matlab (this is very outdated).
-- standalone.py has some examples for how to interact with the data in Python without running a webserver.
+
+> Django applications are normally run in conjunction with a Django **project**, but in this case we are mostly only interested in the ORM. Therefore we default to the standalone approach, but we do provide some untested guidance below on how to use the application with a Django webserver.

## Installation and setup

-1. Install Django and all its dependencies. See [INSTALL.md](./INSTALL.md) for how I setup my system.
-2. Install [django-eerf](django-eerf/README.md)
-3. Optional (mandatory if you will use the database backend outside the Django/Python context (e.g., Matlab ORM)): Additional SQL triggers to
-    - Automatically log a new entry or a change to subject_detail_value
-    - Automatically set the stop_time field of a datum to +1s for trials or +1 day for days/periods.
-    - Automatically set the number of a new datum to be the next integer greater than the latest for that subject/span_type.
-    - From shell/terminal, run `mysql -uroot expdb < eerfapp.sql`
-
-The web app, and especially its database backend, should now be installed.
-
-## Using EERFAPP...
-
-### ...In a web browser (i.e., in Django)
-
-See [django-eerf](django-eerf/README.md).
+1. Install Python and Django. If you came here from the NeuroportDBS repository then you should have already done this.
+1. Install the `serf` package:
+    * Option 1: Download the `serf` wheel from the [releases page](https://github.com/cboulay/SERF/releases) and install it with `pip install {name of wheel.whl}`.
+    * Option 2: `pip install git+https://github.com/cboulay/SERF.git#subdirectory=python`
+1. Install MySQL.
+    * See [INSTALL_MYSQL.md](./INSTALL_MYSQL.md) for how I do it (Mac / Linux / Win)
+1. Install the serf schema:
+    1. Copy [my_serf.cnf](https://raw.githubusercontent.com/cboulay/SERF/master/my_serf.cnf) to what Python considers the home directory. The easiest way to check this is to open a command prompt in the correct Python environment and run `python -c "import os; print(os.path.expanduser('~'))"`.
+    1. Edit the copied file to make sure its database settings are correct. The `[client]` `user` and `password` entries are important.
+    1. Run the bundled migration commands:
+    ```
+    $ serf-makemigrations
+    $ serf-migrate
+    ```
+    You should get output like the following:
+    ```
+    Migrations for 'serf':
+      SERF\python\serf\migrations\0001_initial.py
+        - Create model Datum
+        - Create model DatumFeatureValue
+        - Create model DetailType
+        - Create model FeatureType
+        - Create model Subject
+        - Create model System
+        - Create model DatumFeatureStore
+        - Create model DatumStore
+        - Create model SubjectLog
+        - Create model Procedure
+        - Add field feature_type to datumfeaturevalue
+        - Add field procedure to datum
+        - Add field trials to datum
+        - Create model SubjectDetailValue
+        - Alter unique_together for datumfeaturevalue (1 constraint(s))
+        - Create model DatumDetailValue
+        - Alter unique_together for datum (1 constraint(s))
+    ```
+    `Applying serf.0001_initial... OK`
+
+## Using SERF

### ...In a custom Python program

-See [standalone.py](./standalone.py) for an example of how to load the data into Python without using a web server.
-
-[BCPyElectrophys](https://github.com/cboulay/BCPyElectrophys) should now be able to use this ORM.
+```python
+import serf
+serf.boot_django()
+from serf.models import *
+print(Subject.objects.get_or_create(name='Test')[0])
+```
+
+> [BCPyElectrophys](https://github.com/cboulay/BCPyElectrophys) would normally now be able to use this ORM, except it is out of date. I have some work to do there to get it working again.
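+
+The same standalone pattern extends to ordinary ORM queries. Here is a minimal sketch: the model names (`Subject`, `Datum`) come from the migration output above, but any field you filter on should first be checked against `serf/models.py`, so treat the query as illustrative rather than authoritative.
+
+```python
+import serf
+serf.boot_django()  # must be called before importing the models
+from serf.models import Subject, Datum
+
+# get_or_create returns an (object, created) tuple
+subject, created = Subject.objects.get_or_create(name='Test')
+print(subject, '(new)' if created else '(existing)')
+
+# a schema-agnostic query; swap in real filters once you have data
+print('Total data records:', Datum.objects.count())
+```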
+
+### ...In a web browser (i.e., in a Django project)
+
+We assume you have already created your Django project using instructions similar to [the online tutorial up until "Creating the Polls app"](https://docs.djangoproject.com/en/3.1/intro/tutorial01/#creating-a-project).
+
+Instead of continuing the tutorial to create a new app, edit your Django project to add the pip-installed serf app.
+
+In settings.py, make sure the database info is correct ([online documentation](https://docs.djangoproject.com/en/3.1/ref/databases/#connecting-to-the-database)) and `'serf'` is in the list of INSTALLED_APPS:
+```python
+DATABASES = {
+    'default': {
+        'ENGINE': 'django.db.backends.mysql',
+        # 'NAME': 'serf',
+        # 'HOST': '127.0.0.1',
+        # 'USER': 'username',
+        # 'PASSWORD': 'password',
+        # above options can also be defined in the config file
+        'OPTIONS': {'read_default_file': '/path/to/my_serf.cnf'},
+    }
+}
+
+INSTALLED_APPS = [
+    ...
+    'serf',
+]
+```
+
+Edit urls.py
+```python
+from django.urls import include, path
+urlpatterns = [
+    ...
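+    # note: the next line assumes the installed serf package ships a
+    # `serf.urls` module; the commented `url(...)` line below it is the
+    # older Django 1.x spelling of the same route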
+ path('serf/', include('serf.urls')), + #url(r'^serf/', include(('serf.urls','serf'), namespace="serf")), +] +``` + +Test your server: `python manage.py runserver` and go to `localhost:8000/serf` ### ...In a custom non-Python program diff --git a/django-eerf/MANIFEST.in b/django-eerf/MANIFEST.in deleted file mode 100644 index cbe8dcb..0000000 --- a/django-eerf/MANIFEST.in +++ /dev/null @@ -1,5 +0,0 @@ -include LICENSE -include README.md -recursive-include eerfapp/static * -recursive-include eerfapp/templates * -recursive-include docs * diff --git a/django-eerf/eerfapp/admin.py b/django-eerf/eerfapp/admin.py deleted file mode 100644 index c8c8f3a..0000000 --- a/django-eerf/eerfapp/admin.py +++ /dev/null @@ -1,15 +0,0 @@ -from eerfapp.models import * -from django.contrib import admin - -class SubjectAdmin(admin.ModelAdmin): - fieldsets = [ - (None, {'fields': ['name','id']}), - ('Descriptors', {'fields': ['weight','height','birthday','headsize','sex','handedness']}), - ('Health', {'fields': ['smoking','alcohol_abuse','drug_abuse','medication','visual_impairment','heart_impairment']}), - ] - -admin.site.register(Subject, SubjectAdmin) - -admin.site.register(SubjectLog) -admin.site.register(DetailType) -admin.site.register(FeatureType) \ No newline at end of file diff --git a/django-eerf/eerfhelper/feature_functions.py b/django-eerf/eerfhelper/feature_functions.py deleted file mode 100644 index b82c073..0000000 --- a/django-eerf/eerfhelper/feature_functions.py +++ /dev/null @@ -1,144 +0,0 @@ -import numpy as np -from scipy.optimize import curve_fit -#import statsmodels.api as sm - -#helper functions -def get_submat_for_datum_start_stop_chans(datum,x_start,x_stop,chan_label): - if isinstance(x_start,unicode): x_start=float(x_start) - if isinstance(x_stop,unicode): x_stop=float(x_stop) - temp_store=datum.store - x_vec=temp_store.x_vec - y_mat=temp_store.data - chan_list=temp_store.channel_labels - chan_bool=np.asarray([cl==chan_label for cl in chan_list]) - y_mat = y_mat[chan_bool,:] - for cc in range(0,y_mat.shape[0]): - y_mat[cc,:]=y_mat[cc,:]-np.mean(y_mat[cc,x_vec<-5]) - x_bool=np.logical_and(x_vec>=x_start,x_vec<=x_stop) - return y_mat[:,x_bool] if not isinstance(y_mat,basestring) else None - -def get_aaa_for_datum_start_stop(datum,x_start,x_stop,chan_label): - sub_mat=get_submat_for_datum_start_stop_chans(datum,x_start,x_stop,chan_label) - if not np.any(sub_mat): - return None - sub_mat=np.abs(sub_mat) - ax_ind = 1 if sub_mat.ndim==2 else 0 - return np.average(sub_mat,axis=ax_ind)[0] - -def get_p2p_for_datum_start_stop(datum,x_start,x_stop,chan_label): - sub_mat=get_submat_for_datum_start_stop_chans(datum,x_start,x_stop,chan_label) - if not np.any(sub_mat): - return None - ax_ind = 1 if sub_mat.ndim==2 else 0 - p2p = np.nanmax(sub_mat,axis=ax_ind)-np.nanmin(sub_mat,axis=ax_ind) - return p2p[0] - -def get_ddvs(datum, refdatum=None, keys=None): - if keys: - if refdatum is None: - ddvs = datum.subject.detail_values_dict() - else: - ddvs = refdatum.detail_values_dict() - values = [ddvs[key] for key in keys] - return values - else: - return None - -#feature_functions -def BEMG_aaa(datum, refdatum=None): - my_keys = ['BG_start_ms','BG_stop_ms','BG_chan_label'] - x_start,x_stop,chan_label = get_ddvs(datum, refdatum, my_keys) - return get_aaa_for_datum_start_stop(datum,x_start,x_stop,chan_label) - -def MR_aaa(datum, refdatum=None): - my_keys = ['MR_start_ms','MR_stop_ms','MR_chan_label'] - x_start,x_stop,chan_label = get_ddvs(datum, refdatum, my_keys) - return 
get_aaa_for_datum_start_stop(datum,x_start,x_stop,chan_label) - -def HR_aaa(datum, refdatum=None): - my_keys = ['HR_start_ms','HR_stop_ms','HR_chan_label'] - x_start,x_stop,chan_label = get_ddvs(datum, refdatum, my_keys) - return get_aaa_for_datum_start_stop(datum,x_start,x_stop,chan_label) - -def MEP_aaa(datum, refdatum=None): - my_keys = ['MEP_start_ms','MEP_stop_ms','MEP_chan_label'] - x_start,x_stop,chan_label = get_ddvs(datum, refdatum, my_keys) - return get_aaa_for_datum_start_stop(datum,x_start,x_stop,chan_label) - -def MEP_p2p(datum, refdatum=None): - my_keys = ['MEP_start_ms','MEP_stop_ms','MEP_chan_label'] - x_start,x_stop,chan_label = get_ddvs(datum, refdatum, my_keys) - return get_p2p_for_datum_start_stop(datum,x_start,x_stop,chan_label) - -def HR_res(datum, refdatum=None): - print ("TODO: HR_res") - -def sig_func(x, x0, k): - return 1 / (1 + np.exp(-k*(x-x0))) - -def MEP_res(datum, refdatum=None): - #=========================================================================== - # The MEP residual is the amplitude of the MEP after subtracting the effects - # of the background EMG and the stimulus amplitude. - #=========================================================================== - mep_feat = 'MEP_p2p' #Change this to 'MEP_aaa' if preferred. - prev_trial_limit = 100 - - # Residuals only make sense when calculating for a single trial. - if datum.span_type=='period': - return None - - #TODO: Add a check for enough trials to fill the model. - - - #Get the refdatum - if refdatum is None or refdatum.span_type=='trial': - refdatum = datum.periods.order_by('-datum_id').all()[0] - - #Get the X and Y for this trial - my_bg, my_mep = [datum.calculate_value_for_feature_name(fname, refdatum=refdatum) for fname in ['BEMG_aaa', mep_feat]] - my_stim = datum.detail_values_dict()['TMS_powerA'] - - #Get background EMG, stimulus amplitude, and MEP_p2p for all trials (lim 100?) for this period. - stim_ddvs = DatumDetailValue.objects.filter(datum__periods__pk=refdatum.datum_id, detail_type__name__contains='TMS_powerA').order_by('-id').all()[:prev_trial_limit] - dd_ids = [temp.datum_id for temp in stim_ddvs] - stim_vals = np.array([temp.value for temp in stim_ddvs],dtype=float) - - all_dfvs = DatumFeatureValue.objects.filter(datum__periods__pk=refdatum.datum_id) - bg_dfvs = all_dfvs.filter(feature_type__name__contains='BEMG_aaa').order_by('-id').all()[:prev_trial_limit] - df_ids = [temp.datum_id for temp in bg_dfvs] - bg_vals = np.array([temp.value for temp in bg_dfvs]) - mep_dfvs = all_dfvs.filter(feature_type__name__contains=mep_feat).order_by('-id').all()[:prev_trial_limit] - mep_vals = np.array([temp.value for temp in mep_dfvs]) - - #Restrict ourselves to trials where dd_ids and df_ids match. - uids = np.intersect1d(dd_ids,df_ids,assume_unique=True) - stim_vals = stim_vals[np.in1d(dd_ids, uids)] - bg_vals = bg_vals[np.in1d(df_ids, uids)] - mep_vals = mep_vals[np.in1d(df_ids, uids)] - - #Transform stimulus amplitude into a linear predictor of MEP size. 
- p0=((np.max(stim_vals)-np.min(stim_vals))/2,0.1) #x0, k for sig_func - y = mep_vals - np.min(mep_vals) - mep_scale = np.max(y) - y = y / mep_scale - popt, pcov = curve_fit(sig_func, stim_vals, y, p0) - stim_vals_sig = np.min(mep_vals) + (mep_scale * sig_func(stim_vals, popt[0], popt[1])) - my_stim_sig = np.min(mep_vals) + (mep_scale * sig_func(my_stim, popt[0], popt[1])) - - return get_residual(np.column_stack((my_bg, my_stim_sig)), np.array(my_mep), np.column_stack((bg_vals, stim_vals_sig)), np.array(mep_vals))[0] - -def get_residual(test_x, test_y, train_x, train_y): - #Convert the input into z-scores - x_means = np.mean(train_x,0) - x_std = np.std(train_x,0) - zx = (train_x-x_means)/x_std #Built-in broadcasting - - #Calculate the coefficients for zy = a zx. Prepend zx with column of ones - coeffs = np.linalg.lstsq(np.column_stack((np.ones(zx.shape[0],),zx)),train_y)[0] - - #Calculate expected_y using the coefficients and test_x - test_zx = (test_x - x_means)/x_std - expected_y = dot(coeffs, np.column_stack((np.ones(test_zx.shape[0]),test_zx)).T) - - return test_y - expected_y \ No newline at end of file diff --git a/django-eerf/eerfhelper/online.py b/django-eerf/eerfhelper/online.py deleted file mode 100644 index eead7e3..0000000 --- a/django-eerf/eerfhelper/online.py +++ /dev/null @@ -1,156 +0,0 @@ -#This is pretty old. I'm keeping it around because it has some useful snippets I will likely need. -import numpy as np -import time, os, datetime -from scipy.optimize import curve_fit -#from EERF.API import * -#from sqlalchemy.orm import query -#from sqlalchemy import desc -import BCPy2000.BCI2000Tools.FileReader as FileReader -from matplotlib.mlab import find - -#sigmoid function used for fitting response data -def my_sigmoid(x, x0, k, a, c): return a / (1 + np.exp(-1*k*(x-x0))) + c -#x0 = half-max, k = slope, a = max, c = min -def my_simp_sigmoid(x, x0, k): return 1 / (1 + np.exp(-1*k*(x-x0))) - -#Calculate and return _halfmax and _halfmax err -def model_sigmoid(x,y, mode=None): - #Fit a sigmoid to those values for trials in this period. - n_trials = x.shape[0] - if n_trials>4: - if not mode or mode=='halfmax': - sig_func = my_sigmoid - p0=(np.median(x),0.1,np.max(y)-np.min(y),np.min(y)) #x0, k, a, c - nvars = 4 - elif mode=="threshold": - sig_func = my_simp_sigmoid - p0=(np.median(x),0.1) #x0, k - nvars = 2 - try: popt, pcov = curve_fit(sig_func, x, y, p0=p0) - except RuntimeError: - print("Error - curve_fit failed") - popt=np.empty((nvars,)) - popt.fill(np.NAN) - pcov = np.Inf #So the err is set to nan - #popt = x0, k, a, c - #diagonal pcov is variance of parameter estimates. - if np.isinf(pcov).all(): - perr=np.empty((nvars,)) - perr.fill(np.NAN) - else: perr = np.sqrt(pcov.diagonal()) - return popt,perr - -def _recent_stream_for_dir(dir, maxdate=None): - dir=os.path.abspath(dir) - files=FileReader.ListDatFiles(d=dir) - #The returned list is in ascending order, assume the last is most recent - best_stream = None - for fn in files: - temp_stream = FileReader.bcistream(fn) - temp_date = datetime.datetime.fromtimestamp(temp_stream.datestamp) - if not best_stream\ - or (maxdate and temp_date<=maxdate)\ - or (not maxdate and temp_date > datetime.datetime.fromtimestamp(best_stream.datestamp)): - best_stream=temp_stream - return best_stream - -#http://code.activestate.com/recipes/412717-extending-classes/ -def get_obj(name):return eval(name) -class ExtendInplace(type):#This class enables class definitions here to _extend_ parent classes. 
- def __new__(self, name, bases, dict): - prevclass = get_obj(name) - del dict['__module__'] - del dict['__metaclass__'] - for k,v in dict.iteritems(): - setattr(prevclass, k, v) - return prevclass - -class Subject: - __metaclass__=ExtendInplace - -class Datum: - __metaclass__=ExtendInplace - - def model_erp(self,model_type='halfmax'): - if self.span_type=='period': - - fts = self.datum_type.feature_types - isp2p = any([ft for ft in fts if 'p2p' in ft.Name]) - - if 'hr' in self.type_name: - stim_det_name='dat_Nerve_stim_output' - erp_name= 'HR_p2p' if isp2p else 'HR_aaa' - elif 'mep' in self.type_name: - stim_det_name='dat_TMS_powerA' - erp_name= 'MEP_p2p' if isp2p else 'MEP_aaa' - #get xy_array as dat_TMS_powerA, MEP_aaa - x=self._get_child_details(stim_det_name) - x=x.astype(np.float) - x_bool = ~np.isnan(x) - y=self._get_child_features(erp_name) - if model_type=='threshold': - y=y>self.detection_limit - y=y.astype(int) - elif 'hr' in self.type_name:#Not threshold, and hr, means cut off trials > h-max - h_max = np.max(y) - y_max_ind = find(y==h_max)[0] - x_at_h_max = x[y_max_ind] - x_bool = x <= x_at_h_max - n_trials = 1 if x.size==1 else x[x_bool].shape[0] - #Should data be scaled/standardized? - if n_trials>4: - return model_sigmoid(x[x_bool],y[x_bool],mode=model_type) + (x,) + (y,) - else: return None,None,None,None - - def assign_coords(self, space='brainsight'): - if self.span_type=='period' and self.datum_type.Name=='mep_mapping': - #Find and load the brainsight file - dir_stub=get_or_create(System, Name='bci_dat_dir').Value - bs_file_loc=dir_stub + '/' + self.subject.Name + '/mapping/' + str(self.Number) + '_' + space + '.txt' - #Parse the brainsight file for X-Y coordinates - data = [line.split('\t') for line in file(bs_file_loc)] - data = [line for line in data if 'Sample' in line[0]] - starti = find(['#' in line[0] for line in data])[0] - data = data[starti:] - headers = data[0] - data = data[1:] - x_ind = find(['Loc. X' in col for col in headers])[0] - y_ind = find(['Loc. Y' in col for col in headers])[0] - z_ind = find(['Loc. 
Z' in col for col in headers])[0] - - i = 0 - for tt in self.trials: - tt.detail_values['dat_TMS_coil_x']=float(data[i][x_ind]) - tt.detail_values['dat_TMS_coil_y']=float(data[i][y_ind]) - tt.detail_values['dat_TMS_coil_z']=float(data[i][z_ind]) - i = i+1 - - def add_trials_from_file(self, filename): - if self.span_type=='period' and filename: - bci_stream=FileReader.bcistream(filename) - sig,states=bci_stream.decode(nsamp='all') - sig,chan_labels=bci_stream.spatialfilteredsig(sig) - erpwin = [int(bci_stream.msec2samples(ww)) for ww in bci_stream.params['ERPWindow']] - x_vec = np.arange(bci_stream.params['ERPWindow'][0],bci_stream.params['ERPWindow'][1],1000/bci_stream.samplingfreq_hz,dtype=float) - trigchan = bci_stream.params['TriggerInputChan'] - trigchan_ix = find(trigchan[0] in chan_labels) - trigthresh = bci_stream.params['TriggerThreshold'] - trigdetect = find(np.diff(np.asmatrix(sig[trigchan_ix,:]>trigthresh,dtype='int16'))>0)+1 - intensity_detail_name = 'dat_TMS_powerA' if self.detail_values.has_key('dat_TMS_powerA') else 'dat_Nerve_stim_output' - #Get approximate data segments for each trial - trig_ix = find(np.diff(states['Trigger'])>0)+1 - for i in np.arange(len(trigdetect)): - ix = trigdetect[i] - dat = sig[:,ix+erpwin[0]:ix+erpwin[1]] - self.trials.append(Datum(subject_id=self.subject_id\ - , datum_type_id=self.datum_type_id\ - , span_type='trial'\ - , parent_datum_id=self.datum_id\ - , IsGood=1, Number=0)) - my_trial=self.trials[-1] - my_trial.detail_values[intensity_detail_name]=str(states['StimulatorIntensity'][0,trig_ix[i]]) - if int(bci_stream.params['ExperimentType']) == 1:#SICI intensity - my_trial.detail_values['dat_TMS_powerB']=str(bci_stream.params['StimIntensityB'])#TODO: Use the state. - my_trial.detail_values['dat_TMS_ISI']=str(bci_stream.params['PulseInterval']) - my_trial.store={'x_vec':x_vec, 'data':dat, 'channel_labels': chan_labels} - Session.commit() \ No newline at end of file diff --git a/django-eerf/setup.py b/django-eerf/setup.py deleted file mode 100644 index b9a8ebd..0000000 --- a/django-eerf/setup.py +++ /dev/null @@ -1,31 +0,0 @@ -import os -from setuptools import setup - -with open(os.path.join(os.path.dirname(__file__), 'README.md')) as readme: - README = readme.read() - -# allow setup.py to be run from any path -os.chdir(os.path.normpath(os.path.join(os.path.abspath(__file__), os.pardir))) - -setup( - name='django-eerfapp', - version='0.8', - packages=['eerfapp', 'eerfhelper'], - include_package_data=True, - license='BSD License', # example license - description='A simple Django app to...', - long_description=README, - url='https://github.com/cboulay/EERF', - author='Chadwick Boulay', - author_email='chadwick.boulay@gmail.com', - classifiers=[ - 'Environment :: Web Environment', - 'Framework :: Django', - 'Intended Audience :: Developers', - 'License :: OSI Approved :: BSD License', - 'Operating System :: OS Independent', - 'Programming Language :: Python', - 'Topic :: Internet :: WWW/HTTP', - 'Topic :: Internet :: WWW/HTTP :: Dynamic Content', - ], -) diff --git a/eerfmatlab/+EERF/Datum.m b/matlab/+SERF/Datum.m similarity index 97% rename from eerfmatlab/+EERF/Datum.m rename to matlab/+SERF/Datum.m index 996fd0a..39d2249 100644 --- a/eerfmatlab/+EERF/Datum.m +++ b/matlab/+SERF/Datum.m @@ -1,233 +1,233 @@ -classdef Datum < EERF.Db_obj - properties (Constant) %These are abstract in parent class - table_name='datum'; - key_names={'datum_id'}; - end - properties (Hidden = true) - datum_id; - end - properties (Dependent = true, Transient = true) - 
subject; %subject - Number; %number - span_type; %span_type - IsGood; %is_good - StartTime; %start_time - StopTime; %stop_time - erp; - xvec; - n_channels; - n_samples; - channel_labels; - features; - details; - end - methods - function obj = Datum(varargin) - obj = obj@EERF.Db_obj(varargin{:}); - end - - function subject=get.subject(self) - subject=self.get_x_to_one('subject_id',... - 'Subject', 'subject_id'); - end - %TODO: FIXME -% function set.subject(self,subject) -% self.set_x_to_one(subject,'subject_type_id','subject_type_id'); -% end - - function value=get.Number(obj) - value=obj.get_col_value('number'); - end - - function set.Number(obj,Number) - obj.set_col_value('number',Number); - end - - function value=get.span_type(obj) - value=obj.get_col_value('span_type'); - end - - function set.span_type(obj,span_type) - obj.set_col_value('span_type',span_type); - end - - function value=get.IsGood(obj) - value=obj.get_col_value('is_good'); - end - - function set.IsGood(obj,IsGood) - obj.set_col_value('is_good',IsGood); - end - - function value=get.StartTime(obj) - value=obj.get_col_value('start_time'); - end - - function set.StartTime(obj,StartTime) - StartTime = datestr(StartTime, 'yyyy-mm-dd HH:MM:SS');%reformat StartTime to something mysql likes - obj.set_col_value('start_time',StartTime); - end - - function value=get.StopTime(obj) - value=obj.get_col_value('stop_time'); - end - - function set.StopTime(obj,StopTime) - StopTime = datestr(StopTime, 'yyyy-mm-dd HH:MM:SS');%reformat EndTime to something mysql likes - obj.set_col_value('stop_time', StopTime); - end - - function erp=get.erp(datum)% - sel_stmnt=['SELECT erp FROM datum_store WHERE datum_id=',num2str(datum.datum_id)]; - mo=datum.dbx.statement(sel_stmnt); - erp=mo.erp{1}; %size(erp)=38400,1, but 2 chans and 2400 samps. i.e. 8 erp entries per actual value. - erp=typecast(erp,'double'); - erp=reshape(erp,datum.n_samples,datum.n_channels); - end - - function set.erp(datum, values) - [n_samples, n_channels] = size(values); - values = reshape(values,[],1); - values = typecast(values, 'uint8'); - stmnt = 'UPDATE datum_store SET erp = "{uB}", n_channels={Si}, n_samples={Si} WHERE datum_id={Si}'; - parms = {values;n_channels;n_samples;datum.datum_id}; - datum.dbx.statement(stmnt, parms); - end - - function xvec=get.xvec(datum)% - sel_stmnt=['SELECT x_vec FROM datum_store WHERE datum_id=',num2str(datum.datum_id)]; - mo=datum.dbx.statement(sel_stmnt); - xvec=mo.x_vec{1}; - xvec=typecast(xvec,'double'); - end - - function set.xvec(datum,xvec) - %TODO: Convert xvec to vector of uint8 from vector of double. 
- xvec = typecast(xvec,'uint8'); - stmnt = 'UPDATE datum_store SET x_vec = "{uB}" WHERE datum_id={Si}'; - parms = {xvec;datum.datum_id}; - datum.dbx.statement(stmnt, parms); - end - - function n_channels=get.n_channels(datum)% - sel_stmnt=['SELECT n_channels FROM datum_store WHERE datum_id=',num2str(datum.datum_id)]; - mo=datum.dbx.statement(sel_stmnt); - n_channels=mo.n_channels(1); - end - - function n_samples=get.n_samples(datum)% - sel_stmnt=['SELECT n_samples FROM datum_store WHERE datum_id=',num2str(datum.datum_id)]; - mo=datum.dbx.statement(sel_stmnt); - n_samples=mo.n_samples(1); - end - - function channel_labels=get.channel_labels(datum)% - sel_stmnt=['SELECT channel_labels FROM datum_store WHERE datum_id=',num2str(datum.datum_id)]; - mo=datum.dbx.statement(sel_stmnt); - channel_labels = mo.channel_labels{1}; - channel_labels=char(channel_labels)'; - channel_labels=textscan(channel_labels,'%s','delimiter',','); - channel_labels=channel_labels{1}; - end - - function set.channel_labels(datum,channel_labels) - n_chans = length(channel_labels); - channel_labels = cellstr(channel_labels); - if size(channel_labels,1)==1 && size(channel_labels,2)>1 - channel_labels = channel_labels'; - end - channel_labels = [channel_labels repmat({','},n_chans,1)]; - channel_labels = reshape(channel_labels',1,[]); - channel_labels = cell2mat(channel_labels); - channel_labels = channel_labels(1:end-1); -% channel_labels = cast(channel_labels','uint8'); - stmnt = 'UPDATE datum_store SET channel_labels = "{S}" WHERE datum_id={Si}'; - parms = {channel_labels;datum.datum_id}; - datum.dbx.statement(stmnt, parms); - end - - %TODO: Setters for features and details? -> Subclasses? - %TODO: Further parameters to splice features/details? - function features = get.features(datum)% - features = EERF.Db_obj.get_obj_array(datum.dbx,'DatumFeature','datum_id',datum.datum_id); - end - - function feature = get_single_feature(datum, feature_name) - %Instead of relying on trial.features, it is faster to call upon - %it directly. - stmnt = ['SELECT dfv.value as val FROM datum_feature_value AS dfv, feature_type as ft',... - ' WHERE ft.name LIKE "{S}"',... - ' AND dfv.feature_type_id = ft.feature_type_id',... - ' AND dfv.datum_id = {Si}']; - mo = datum.dbx.statement(stmnt,{feature_name,datum.datum_id}); - feature = mo.val; - end - - function set_single_feature(datum, feature_type, value) - %TODO: Use Db_obj functions. - %TODO: If feature_type is a string it is the feature_name - stmnt = ['INSERT INTO datum_feature_value (datum_id, feature_type_id, value) '... - 'VALUES ({Si}, {Si}, {S4}) '... - 'ON DUPLICATE KEY UPDATE value={S4}']; - mo = datum.dbx.statement(stmnt, {datum.datum_id, feature_type.feature_type_id, value, value}); - end - - function details = get.details(datum) - details = EERF.Db_obj.get_obj_array(datum.dbx, 'DatumDetail', 'datum_id', datum.datum_id); - end - - function detail = get_single_detail(datum, detail_name) - %Instead of relying on trial.details, it is faster to call upon - %it directly. - stmnt = ['SELECT ddv.Value as val FROM datum_detail_value AS ddv, detail_type as dt',... - ' WHERE dt.name LIKE "{S}"',... - ' AND ddv.detail_type_id = dt.detail_type_id',... 
- ' AND ddv.datum_id = {Si}']; - mo = datum.dbx.statement(stmnt,{detail_name,datum.datum_id}); - detail = mo.val{1}; - end - - function result=calculate_feature(datum, feature_type) - if strcmpi(feature_type.Name,'MEP_p2p') - x_start_name = 'MEP_start_ms'; - x_stop_name = 'MEP_stop_ms'; - chan_label_name = 'MEP_chan_label'; - - det_names = {datum.details.name}; - x_start_ms = str2double(datum.details(strcmpi(det_names, x_start_name)).Value); - x_stop_ms = str2double(datum.details(strcmpi(det_names, x_stop_name)).Value); - chan_name = datum.details(strcmpi(det_names, chan_label_name)).Value; - - x_vec = datum.xvec; - x_bool = x_vec >= x_start_ms & x_vec <= x_stop_ms; - y_dat = datum.erp(x_bool,strcmpi(datum.channel_labels,chan_name)); - result=max(y_dat)-min(y_dat); - - dfv=datum.features(strcmpi({datum.features.Name},feature_type.Name)); - dfv.Value=result; - else - result=NaN; - end - end - - function plot(datum) - plot(datum.xvec, datum.erp) - legend(datum.channel_labels); - end - end - - %TODO: Move these to a different class. - methods (Static) - function yhat=sigmoid(b,X) - %b(1) = max value - %b(2) = slope - %b(3) = x at which y is halfmax - %b(4) = offset (when x=0) - yhat = b(1) ./ (1 + exp(-1*b(2)*(X-b(3)))) + b(4); - %yhat = X(:,2) ./ (1 + exp(-1*b(1)*(X(:,1)-b(2)))) + X(:,3); - end - function yhat=sigmoid_simple(b,X) - yhat = 1 ./ (1 + exp(-1*b(1)*(X-b(2)))); - end - end +classdef Datum < EERF.Db_obj + properties (Constant) %These are abstract in parent class + table_name='datum'; + key_names={'datum_id'}; + end + properties (Hidden = true) + datum_id; + end + properties (Dependent = true, Transient = true) + subject; %subject + Number; %number + span_type; %span_type + IsGood; %is_good + StartTime; %start_time + StopTime; %stop_time + erp; + xvec; + n_channels; + n_samples; + channel_labels; + features; + details; + end + methods + function obj = Datum(varargin) + obj = obj@EERF.Db_obj(varargin{:}); + end + + function subject=get.subject(self) + subject=self.get_x_to_one('subject_id',... + 'Subject', 'subject_id'); + end + %TODO: FIXME +% function set.subject(self,subject) +% self.set_x_to_one(subject,'subject_type_id','subject_type_id'); +% end + + function value=get.Number(obj) + value=obj.get_col_value('number'); + end + + function set.Number(obj,Number) + obj.set_col_value('number',Number); + end + + function value=get.span_type(obj) + value=obj.get_col_value('span_type'); + end + + function set.span_type(obj,span_type) + obj.set_col_value('span_type',span_type); + end + + function value=get.IsGood(obj) + value=obj.get_col_value('is_good'); + end + + function set.IsGood(obj,IsGood) + obj.set_col_value('is_good',IsGood); + end + + function value=get.StartTime(obj) + value=obj.get_col_value('start_time'); + end + + function set.StartTime(obj,StartTime) + StartTime = datestr(StartTime, 'yyyy-mm-dd HH:MM:SS');%reformat StartTime to something mysql likes + obj.set_col_value('start_time',StartTime); + end + + function value=get.StopTime(obj) + value=obj.get_col_value('stop_time'); + end + + function set.StopTime(obj,StopTime) + StopTime = datestr(StopTime, 'yyyy-mm-dd HH:MM:SS');%reformat EndTime to something mysql likes + obj.set_col_value('stop_time', StopTime); + end + + function erp=get.erp(datum)% + sel_stmnt=['SELECT erp FROM datum_store WHERE datum_id=',num2str(datum.datum_id)]; + mo=datum.dbx.statement(sel_stmnt); + erp=mo.erp{1}; %size(erp)=38400,1, but 2 chans and 2400 samps. i.e. 8 erp entries per actual value. 
+ erp=typecast(erp,'double'); + erp=reshape(erp,datum.n_samples,datum.n_channels); + end + + function set.erp(datum, values) + [n_samples, n_channels] = size(values); + values = reshape(values,[],1); + values = typecast(values, 'uint8'); + stmnt = 'UPDATE datum_store SET erp = "{uB}", n_channels={Si}, n_samples={Si} WHERE datum_id={Si}'; + parms = {values;n_channels;n_samples;datum.datum_id}; + datum.dbx.statement(stmnt, parms); + end + + function xvec=get.xvec(datum)% + sel_stmnt=['SELECT x_vec FROM datum_store WHERE datum_id=',num2str(datum.datum_id)]; + mo=datum.dbx.statement(sel_stmnt); + xvec=mo.x_vec{1}; + xvec=typecast(xvec,'double'); + end + + function set.xvec(datum,xvec) + %TODO: Convert xvec to vector of uint8 from vector of double. + xvec = typecast(xvec,'uint8'); + stmnt = 'UPDATE datum_store SET x_vec = "{uB}" WHERE datum_id={Si}'; + parms = {xvec;datum.datum_id}; + datum.dbx.statement(stmnt, parms); + end + + function n_channels=get.n_channels(datum)% + sel_stmnt=['SELECT n_channels FROM datum_store WHERE datum_id=',num2str(datum.datum_id)]; + mo=datum.dbx.statement(sel_stmnt); + n_channels=mo.n_channels(1); + end + + function n_samples=get.n_samples(datum)% + sel_stmnt=['SELECT n_samples FROM datum_store WHERE datum_id=',num2str(datum.datum_id)]; + mo=datum.dbx.statement(sel_stmnt); + n_samples=mo.n_samples(1); + end + + function channel_labels=get.channel_labels(datum)% + sel_stmnt=['SELECT channel_labels FROM datum_store WHERE datum_id=',num2str(datum.datum_id)]; + mo=datum.dbx.statement(sel_stmnt); + channel_labels = mo.channel_labels{1}; + channel_labels=char(channel_labels)'; + channel_labels=textscan(channel_labels,'%s','delimiter',','); + channel_labels=channel_labels{1}; + end + + function set.channel_labels(datum,channel_labels) + n_chans = length(channel_labels); + channel_labels = cellstr(channel_labels); + if size(channel_labels,1)==1 && size(channel_labels,2)>1 + channel_labels = channel_labels'; + end + channel_labels = [channel_labels repmat({','},n_chans,1)]; + channel_labels = reshape(channel_labels',1,[]); + channel_labels = cell2mat(channel_labels); + channel_labels = channel_labels(1:end-1); +% channel_labels = cast(channel_labels','uint8'); + stmnt = 'UPDATE datum_store SET channel_labels = "{S}" WHERE datum_id={Si}'; + parms = {channel_labels;datum.datum_id}; + datum.dbx.statement(stmnt, parms); + end + + %TODO: Setters for features and details? -> Subclasses? + %TODO: Further parameters to splice features/details? + function features = get.features(datum)% + features = EERF.Db_obj.get_obj_array(datum.dbx,'DatumFeature','datum_id',datum.datum_id); + end + + function feature = get_single_feature(datum, feature_name) + %Instead of relying on trial.features, it is faster to call upon + %it directly. + stmnt = ['SELECT dfv.value as val FROM datum_feature_value AS dfv, feature_type as ft',... + ' WHERE ft.name LIKE "{S}"',... + ' AND dfv.feature_type_id = ft.feature_type_id',... + ' AND dfv.datum_id = {Si}']; + mo = datum.dbx.statement(stmnt,{feature_name,datum.datum_id}); + feature = mo.val; + end + + function set_single_feature(datum, feature_type, value) + %TODO: Use Db_obj functions. + %TODO: If feature_type is a string it is the feature_name + stmnt = ['INSERT INTO datum_feature_value (datum_id, feature_type_id, value) '... + 'VALUES ({Si}, {Si}, {S4}) '... 
+ 'ON DUPLICATE KEY UPDATE value={S4}']; + mo = datum.dbx.statement(stmnt, {datum.datum_id, feature_type.feature_type_id, value, value}); + end + + function details = get.details(datum) + details = EERF.Db_obj.get_obj_array(datum.dbx, 'DatumDetail', 'datum_id', datum.datum_id); + end + + function detail = get_single_detail(datum, detail_name) + %Instead of relying on trial.details, it is faster to call upon + %it directly. + stmnt = ['SELECT ddv.Value as val FROM datum_detail_value AS ddv, detail_type as dt',... + ' WHERE dt.name LIKE "{S}"',... + ' AND ddv.detail_type_id = dt.detail_type_id',... + ' AND ddv.datum_id = {Si}']; + mo = datum.dbx.statement(stmnt,{detail_name,datum.datum_id}); + detail = mo.val{1}; + end + + function result=calculate_feature(datum, feature_type) + if strcmpi(feature_type.Name,'MEP_p2p') + x_start_name = 'MEP_start_ms'; + x_stop_name = 'MEP_stop_ms'; + chan_label_name = 'MEP_chan_label'; + + det_names = {datum.details.name}; + x_start_ms = str2double(datum.details(strcmpi(det_names, x_start_name)).Value); + x_stop_ms = str2double(datum.details(strcmpi(det_names, x_stop_name)).Value); + chan_name = datum.details(strcmpi(det_names, chan_label_name)).Value; + + x_vec = datum.xvec; + x_bool = x_vec >= x_start_ms & x_vec <= x_stop_ms; + y_dat = datum.erp(x_bool,strcmpi(datum.channel_labels,chan_name)); + result=max(y_dat)-min(y_dat); + + dfv=datum.features(strcmpi({datum.features.Name},feature_type.Name)); + dfv.Value=result; + else + result=NaN; + end + end + + function plot(datum) + plot(datum.xvec, datum.erp) + legend(datum.channel_labels); + end + end + + %TODO: Move these to a different class. + methods (Static) + function yhat=sigmoid(b,X) + %b(1) = max value + %b(2) = slope + %b(3) = x at which y is halfmax + %b(4) = offset (when x=0) + yhat = b(1) ./ (1 + exp(-1*b(2)*(X-b(3)))) + b(4); + %yhat = X(:,2) ./ (1 + exp(-1*b(1)*(X(:,1)-b(2)))) + X(:,3); + end + function yhat=sigmoid_simple(b,X) + yhat = 1 ./ (1 + exp(-1*b(1)*(X-b(2)))); + end + end end \ No newline at end of file diff --git a/eerfmatlab/+EERF/DatumDetail.m b/matlab/+SERF/DatumDetail.m similarity index 96% rename from eerfmatlab/+EERF/DatumDetail.m rename to matlab/+SERF/DatumDetail.m index 5550410..1e85e37 100644 --- a/eerfmatlab/+EERF/DatumDetail.m +++ b/matlab/+SERF/DatumDetail.m @@ -1,14 +1,14 @@ -classdef DatumDetail < EERF.GenericDetail - properties (Constant) %These are abstract in parent class - table_name = 'datum_detail_value'; - key_names = {'datum_id','detail_type_id'}; - end - properties (Hidden=true) - datum_id; - end - methods - function obj = DatumDetail(varargin) - obj = obj@EERF.GenericDetail(varargin{:}); - end - end +classdef DatumDetail < EERF.GenericDetail + properties (Constant) %These are abstract in parent class + table_name = 'datum_detail_value'; + key_names = {'datum_id','detail_type_id'}; + end + properties (Hidden=true) + datum_id; + end + methods + function obj = DatumDetail(varargin) + obj = obj@EERF.GenericDetail(varargin{:}); + end + end end \ No newline at end of file diff --git a/eerfmatlab/+EERF/DatumFeature.m b/matlab/+SERF/DatumFeature.m similarity index 97% rename from eerfmatlab/+EERF/DatumFeature.m rename to matlab/+SERF/DatumFeature.m index 8a6a360..07bccec 100644 --- a/eerfmatlab/+EERF/DatumFeature.m +++ b/matlab/+SERF/DatumFeature.m @@ -1,41 +1,41 @@ -classdef DatumFeature < EERF.Db_obj - properties (Constant) %These are abstract in parent class - table_name='datum_feature_value'; - key_names={'datum_id','feature_type_id'}; - end - properties 
(Hidden=true) - datum_id; - feature_type_id; - end - properties (Dependent=true) - feature_type; - Value; - Name; %Read-only - Description; %Read-only - end - methods - function obj = DatumFeature(varargin) - obj = obj@EERF.Db_obj(varargin{:}); - end - function value=get.Value(feature) - value=feature.get_col_value('value'); - end - function set.Value(feature,value) - feature.set_col_value('value',value); - end - function feature_type=get.feature_type(self) - feature_type=self.get_x_to_one('feature_type_id',... - 'FeatureType','feature_type_id'); - end - function Name=get.Name(feature) - stmnt = ['SELECT name FROM feature_type WHERE feature_type_id=',num2str(feature.feature_type_id)]; - mo=feature.dbx.statement(stmnt); - Name=mo.Name{1}; - end - function Description=get.Description(feature) - stmnt = ['SELECT description FROM feature_type WHERE feature_type_id=',num2str(feature.feature_type_id)]; - mo=feature.dbx.statement(stmnt); - Description=mo.Description{1}; - end - end +classdef DatumFeature < EERF.Db_obj + properties (Constant) %These are abstract in parent class + table_name='datum_feature_value'; + key_names={'datum_id','feature_type_id'}; + end + properties (Hidden=true) + datum_id; + feature_type_id; + end + properties (Dependent=true) + feature_type; + Value; + Name; %Read-only + Description; %Read-only + end + methods + function obj = DatumFeature(varargin) + obj = obj@EERF.Db_obj(varargin{:}); + end + function value=get.Value(feature) + value=feature.get_col_value('value'); + end + function set.Value(feature,value) + feature.set_col_value('value',value); + end + function feature_type=get.feature_type(self) + feature_type=self.get_x_to_one('feature_type_id',... + 'FeatureType','feature_type_id'); + end + function Name=get.Name(feature) + stmnt = ['SELECT name FROM feature_type WHERE feature_type_id=',num2str(feature.feature_type_id)]; + mo=feature.dbx.statement(stmnt); + Name=mo.Name{1}; + end + function Description=get.Description(feature) + stmnt = ['SELECT description FROM feature_type WHERE feature_type_id=',num2str(feature.feature_type_id)]; + mo=feature.dbx.statement(stmnt); + Description=mo.Description{1}; + end + end end \ No newline at end of file diff --git a/eerfmatlab/+EERF/DatumType.m b/matlab/+SERF/DatumType.m similarity index 97% rename from eerfmatlab/+EERF/DatumType.m rename to matlab/+SERF/DatumType.m index 8854b0d..b76137e 100644 --- a/eerfmatlab/+EERF/DatumType.m +++ b/matlab/+SERF/DatumType.m @@ -1,41 +1,41 @@ -classdef DatumType < EERF.GenericType - properties (Constant) %These are abstract in parent class - table_name='datum_type'; - key_names={'datum_type_id'}; - end - properties (Hidden=true) - datum_type_id; - end - properties - detail_types - feature_types - TrialClass - end - methods - function obj = DatumType(varargin) - obj = obj@EERF.GenericType(varargin{:}); - end - function detail_types=get.detail_types(self) - detail_types=self.get_many_to_many('datum_type_has_detail_type',... - 'datum_type_id','datum_type_id','detail_type_id','detail_type_id','DetailType'); - end - function set.detail_types(self,detail_types) - self.set_many_to_many(detail_types,'datum_type_has_detail_type',... - 'datum_type_id','datum_type_id','detail_type_id','detail_type_id'); - end - function feature_types=get.feature_types(self) - feature_types=self.get_many_to_many('datum_type_has_feature_type',... 
- 'datum_type_id','datum_type_id','feature_type_id','feature_type_id','FeatureType'); - end - function set.feature_types(self,feature_types) - self.set_many_to_many(feature_types,'datum_type_has_feature_type',... - 'datum_type_id','datum_type_id','feature_type_id','feature_type_id'); - end - function TrialClass=get.TrialClass(obj) - TrialClass=obj.get_col_value('TrialClass'); - end - function set.TrialClass(obj,TrialClass) - obj.set_col_value('TrialClass',TrialClass); - end - end +classdef DatumType < EERF.GenericType + properties (Constant) %These are abstract in parent class + table_name='datum_type'; + key_names={'datum_type_id'}; + end + properties (Hidden=true) + datum_type_id; + end + properties + detail_types + feature_types + TrialClass + end + methods + function obj = DatumType(varargin) + obj = obj@EERF.GenericType(varargin{:}); + end + function detail_types=get.detail_types(self) + detail_types=self.get_many_to_many('datum_type_has_detail_type',... + 'datum_type_id','datum_type_id','detail_type_id','detail_type_id','DetailType'); + end + function set.detail_types(self,detail_types) + self.set_many_to_many(detail_types,'datum_type_has_detail_type',... + 'datum_type_id','datum_type_id','detail_type_id','detail_type_id'); + end + function feature_types=get.feature_types(self) + feature_types=self.get_many_to_many('datum_type_has_feature_type',... + 'datum_type_id','datum_type_id','feature_type_id','feature_type_id','FeatureType'); + end + function set.feature_types(self,feature_types) + self.set_many_to_many(feature_types,'datum_type_has_feature_type',... + 'datum_type_id','datum_type_id','feature_type_id','feature_type_id'); + end + function TrialClass=get.TrialClass(obj) + TrialClass=obj.get_col_value('TrialClass'); + end + function set.TrialClass(obj,TrialClass) + obj.set_col_value('TrialClass',TrialClass); + end + end end \ No newline at end of file diff --git a/eerfmatlab/+EERF/Db_obj.m b/matlab/+SERF/Db_obj.m similarity index 100% rename from eerfmatlab/+EERF/Db_obj.m rename to matlab/+SERF/Db_obj.m diff --git a/eerfmatlab/+EERF/Dbmym.m b/matlab/+SERF/Dbmym.m similarity index 96% rename from eerfmatlab/+EERF/Dbmym.m rename to matlab/+SERF/Dbmym.m index 5b75f32..35f0a96 100644 --- a/eerfmatlab/+EERF/Dbmym.m +++ b/matlab/+SERF/Dbmym.m @@ -1,69 +1,69 @@ -classdef Dbmym < handle - properties (Constant, Hidden) - host = 'localhost'; - user = 'root'; - pass = ''; - end - properties (Transient) - status - end - properties - cid = -1; - end - properties (Hidden = true) - db; %TODO: Setting this should call obj.keepalive. - end - methods - function obj = Dbmym(db)%constructor - if nargin > 0 - obj.db = db; - keepalive(obj); - else - obj.cid = mym(-1, 'open', obj.host, obj.user, obj.pass); - end - end - function [mo] = statement(obj, SQL_statement, mym_parameters) -% if obj.status~=0 %This probably slows things down. Maybe I should do a try/catch instead. 
-% obj.keepalive; -% end - repeat = true; - while repeat - if nargin>2 - try - repeat = false; - mo = mym(obj.cid, SQL_statement, mym_parameters{:}); - catch err - if strcmpi(err.message,'Not connected') - obj.keepalive; - repeat = true(1,1); - else - rethrow(err); - end - end - else - try - repeat = false; - mo = mym(obj.cid, SQL_statement); - catch err - if strcmpi(err.message,'Not connected') - keepalive(obj); - repeat = true; - else - rethrow(err); - end - end - end - end - end - function delete(obj) - mym(obj.cid,'close'); - end - function keepalive(obj) - obj.cid = mym(obj.cid,'open',obj.host,obj.user,obj.pass); - [~] = mym(obj.cid,'use',obj.db); - end - function val = get.status(obj) - val = mym(obj.cid); - end - end +classdef Dbmym < handle + properties (Constant, Hidden) + host = 'localhost'; + user = 'root'; + pass = ''; + end + properties (Transient) + status + end + properties + cid = -1; + end + properties (Hidden = true) + db; %TODO: Setting this should call obj.keepalive. + end + methods + function obj = Dbmym(db)%constructor + if nargin > 0 + obj.db = db; + keepalive(obj); + else + obj.cid = mym(-1, 'open', obj.host, obj.user, obj.pass); + end + end + function [mo] = statement(obj, SQL_statement, mym_parameters) +% if obj.status~=0 %This probably slows things down. Maybe I should do a try/catch instead. +% obj.keepalive; +% end + repeat = true; + while repeat + if nargin>2 + try + repeat = false; + mo = mym(obj.cid, SQL_statement, mym_parameters{:}); + catch err + if strcmpi(err.message,'Not connected') + obj.keepalive; + repeat = true(1,1); + else + rethrow(err); + end + end + else + try + repeat = false; + mo = mym(obj.cid, SQL_statement); + catch err + if strcmpi(err.message,'Not connected') + keepalive(obj); + repeat = true; + else + rethrow(err); + end + end + end + end + end + function delete(obj) + mym(obj.cid,'close'); + end + function keepalive(obj) + obj.cid = mym(obj.cid,'open',obj.host,obj.user,obj.pass); + [~] = mym(obj.cid,'use',obj.db); + end + function val = get.status(obj) + val = mym(obj.cid); + end + end end \ No newline at end of file diff --git a/eerfmatlab/+EERF/DetailType.m b/matlab/+SERF/DetailType.m similarity index 96% rename from eerfmatlab/+EERF/DetailType.m rename to matlab/+SERF/DetailType.m index 4ba401a..ef7c60c 100644 --- a/eerfmatlab/+EERF/DetailType.m +++ b/matlab/+SERF/DetailType.m @@ -1,23 +1,23 @@ -classdef DetailType < EERF.GenericType - properties (Constant) %These are abstract in parent class - table_name='detail_type'; - key_names={'detail_type_id'}; - end - properties - detail_type_id; - end - properties (Dependent = true, Transient = true) - DefaultValue; - end - methods - function obj = DetailType(varargin) - obj = obj@EERF.GenericType(varargin{:}); - end - function DefaultValue=get.DefaultValue(obj) - DefaultValue=obj.get_col_value('DefaultValue'); - end - function set.DefaultValue(obj,DefaultValue) - obj.set_col_value('DefaultValue',DefaultValue); - end - end +classdef DetailType < EERF.GenericType + properties (Constant) %These are abstract in parent class + table_name='detail_type'; + key_names={'detail_type_id'}; + end + properties + detail_type_id; + end + properties (Dependent = true, Transient = true) + DefaultValue; + end + methods + function obj = DetailType(varargin) + obj = obj@EERF.GenericType(varargin{:}); + end + function DefaultValue=get.DefaultValue(obj) + DefaultValue=obj.get_col_value('DefaultValue'); + end + function set.DefaultValue(obj,DefaultValue) + obj.set_col_value('DefaultValue',DefaultValue); + end + 
end end \ No newline at end of file diff --git a/eerfmatlab/+EERF/ERDMEPTrial.m b/matlab/+SERF/ERDMEPTrial.m similarity index 100% rename from eerfmatlab/+EERF/ERDMEPTrial.m rename to matlab/+SERF/ERDMEPTrial.m diff --git a/eerfmatlab/+EERF/ERDTrial.m b/matlab/+SERF/ERDTrial.m similarity index 100% rename from eerfmatlab/+EERF/ERDTrial.m rename to matlab/+SERF/ERDTrial.m diff --git a/eerfmatlab/+EERF/Experiment.m b/matlab/+SERF/Experiment.m similarity index 100% rename from eerfmatlab/+EERF/Experiment.m rename to matlab/+SERF/Experiment.m diff --git a/eerfmatlab/+EERF/FeatureType.m b/matlab/+SERF/FeatureType.m similarity index 96% rename from eerfmatlab/+EERF/FeatureType.m rename to matlab/+SERF/FeatureType.m index 2381273..333b797 100644 --- a/eerfmatlab/+EERF/FeatureType.m +++ b/matlab/+SERF/FeatureType.m @@ -1,30 +1,30 @@ -classdef FeatureType < EERF.Db_obj - properties (Constant) %These are abstract in parent class - table_name='feature_type'; - key_names={'feature_type_id'}; - end - properties - feature_type_id; - end - properties (Dependent = true, Transient = true) - Name; - Description; - end - methods - function obj = FeatureType(varargin) - obj = obj@EERF.Db_obj(varargin{:}); - end - function Name=get.Name(obj) - Name=obj.get_col_value('name'); - end - function set.Name(obj,Name) - obj.set_col_value('name',Name); - end - function Description=get.Description(obj) - Description=obj.get_col_value('description'); - end - function set.Description(obj,Description) - obj.set_col_value('description',Description); - end - end +classdef FeatureType < EERF.Db_obj + properties (Constant) %These are abstract in parent class + table_name='feature_type'; + key_names={'feature_type_id'}; + end + properties + feature_type_id; + end + properties (Dependent = true, Transient = true) + Name; + Description; + end + methods + function obj = FeatureType(varargin) + obj = obj@EERF.Db_obj(varargin{:}); + end + function Name=get.Name(obj) + Name=obj.get_col_value('name'); + end + function set.Name(obj,Name) + obj.set_col_value('name',Name); + end + function Description=get.Description(obj) + Description=obj.get_col_value('description'); + end + function set.Description(obj,Description) + obj.set_col_value('description',Description); + end + end end \ No newline at end of file diff --git a/eerfmatlab/+EERF/Fifobuffer.m b/matlab/+SERF/Fifobuffer.m similarity index 100% rename from eerfmatlab/+EERF/Fifobuffer.m rename to matlab/+SERF/Fifobuffer.m diff --git a/eerfmatlab/+EERF/GenericDetail.m b/matlab/+SERF/GenericDetail.m similarity index 97% rename from eerfmatlab/+EERF/GenericDetail.m rename to matlab/+SERF/GenericDetail.m index 33a6925..6030c97 100644 --- a/eerfmatlab/+EERF/GenericDetail.m +++ b/matlab/+SERF/GenericDetail.m @@ -1,43 +1,43 @@ -classdef GenericDetail < EERF.Db_obj - %Parent of SubjectDetail and DatumDetail - properties (Hidden=true) - detail_type_id; - end - properties (Dependent=true) - Value; - Name; %Read-only - Description; %Read-only - DefaultValue; %Read-only - detail_type; - end - methods - function obj = GenericDetail(varargin) - obj = obj@EERF.Db_obj(varargin{:}); - end - function value=get.Value(detail) - value=detail.get_col_value('Value'); - end - function set.Value(detail,value) - detail.set_col_value('Value',value); - end - function Name=get.Name(self) - stmnt = ['SELECT Name FROM detail_type WHERE detail_type_id=',num2str(self.detail_type_id)]; - mo=self.dbx.statement(stmnt); - Name=mo.Name{1}; - end - function Description=get.Description(self) - stmnt = ['SELECT Description 
FROM detail_type WHERE detail_type_id=',num2str(self.detail_type_id)]; - mo=self.dbx.statement(stmnt); - Description=mo.Description{1}; - end - function DefaultValue=get.DefaultValue(self) - stmnt = ['SELECT DefaultValue FROM detail_type WHERE detail_type_id=',num2str(self.detail_type_id)]; - mo=self.dbx.statement(stmnt); - DefaultValue=mo.DefaultValue{1}; - end - function detail_type=get.detail_type(self) - detail_type=self.get_x_to_one('detail_type_id',... - 'DetailType','detail_type_id'); - end - end +classdef GenericDetail < EERF.Db_obj + %Parent of SubjectDetail and DatumDetail + properties (Hidden=true) + detail_type_id; + end + properties (Dependent=true) + Value; + Name; %Read-only + Description; %Read-only + DefaultValue; %Read-only + detail_type; + end + methods + function obj = GenericDetail(varargin) + obj = obj@EERF.Db_obj(varargin{:}); + end + function value=get.Value(detail) + value=detail.get_col_value('Value'); + end + function set.Value(detail,value) + detail.set_col_value('Value',value); + end + function Name=get.Name(self) + stmnt = ['SELECT Name FROM detail_type WHERE detail_type_id=',num2str(self.detail_type_id)]; + mo=self.dbx.statement(stmnt); + Name=mo.Name{1}; + end + function Description=get.Description(self) + stmnt = ['SELECT Description FROM detail_type WHERE detail_type_id=',num2str(self.detail_type_id)]; + mo=self.dbx.statement(stmnt); + Description=mo.Description{1}; + end + function DefaultValue=get.DefaultValue(self) + stmnt = ['SELECT DefaultValue FROM detail_type WHERE detail_type_id=',num2str(self.detail_type_id)]; + mo=self.dbx.statement(stmnt); + DefaultValue=mo.DefaultValue{1}; + end + function detail_type=get.detail_type(self) + detail_type=self.get_x_to_one('detail_type_id',... + 'DetailType','detail_type_id'); + end + end end \ No newline at end of file diff --git a/eerfmatlab/+EERF/GenericType.m b/matlab/+SERF/GenericType.m similarity index 96% rename from eerfmatlab/+EERF/GenericType.m rename to matlab/+SERF/GenericType.m index 94215bb..7b296ab 100644 --- a/eerfmatlab/+EERF/GenericType.m +++ b/matlab/+SERF/GenericType.m @@ -1,23 +1,23 @@ -classdef GenericType < EERF.Db_obj - properties (Dependent = true, Transient = true) - Name; - Description; - end - methods - function obj = GenericType(varargin) - obj = obj@EERF.Db_obj(varargin{:}); - end - function Name=get.Name(obj) - Name=obj.get_col_value('Name'); - end - function set.Name(obj,Name) - obj.set_col_value('Name',Name); - end - function Description=get.Description(obj) - Description=obj.get_col_value('Description'); - end - function set.Description(obj,Description) - obj.set_col_value('Description',Description); - end - end +classdef GenericType < EERF.Db_obj + properties (Dependent = true, Transient = true) + Name; + Description; + end + methods + function obj = GenericType(varargin) + obj = obj@EERF.Db_obj(varargin{:}); + end + function Name=get.Name(obj) + Name=obj.get_col_value('Name'); + end + function set.Name(obj,Name) + obj.set_col_value('Name',Name); + end + function Description=get.Description(obj) + Description=obj.get_col_value('Description'); + end + function set.Description(obj,Description) + obj.set_col_value('Description',Description); + end + end end \ No newline at end of file diff --git a/eerfmatlab/+EERF/MEPTrial.m b/matlab/+SERF/MEPTrial.m similarity index 100% rename from eerfmatlab/+EERF/MEPTrial.m rename to matlab/+SERF/MEPTrial.m diff --git a/eerfmatlab/+EERF/Period.m b/matlab/+SERF/Period.m similarity index 98% rename from eerfmatlab/+EERF/Period.m rename to 
matlab/+SERF/Period.m index 6185119..b575380 100644 --- a/eerfmatlab/+EERF/Period.m +++ b/matlab/+SERF/Period.m @@ -1,123 +1,123 @@ -classdef Period < EERF.Datum - properties (Dependent = true, Transient = true) - trials - end -% properties (Hidden = true) -% trial_class = 'Trial'; %Saved to disk. -% end - methods - function period = Period(varargin) - period = period@EERF.Datum(varargin{:}); - end - function trials = get.trials(period) - %Since I have not implemented span_type="day", the only - %possible children are trials, thus I can use the parent class - %method. - stmnt = sprintf(['SELECT datum_id FROM datum WHERE subject_id={Si} '... - 'AND span_type=''trial'' AND start_time>=''%s'' AND stop_time<=''%s'''], period.StartTime, period.StopTime); - mo = period.dbx.statement(stmnt, {period.subject.subject_id}); - n_trials = length(mo.datum_id); - if n_trials>0 - trials(n_trials) = EERF.Trial(period.dbx); - trial_ids = num2cell(mo.datum_id); - [trials(:).datum_id] = trial_ids{:}; - else - trials = []; - end - end - - %The following functions are provided for convenience to speed up - %the retrieval of data and features without requiring each trial to - %retrieve it by itself. - %TODO: Modify this so that it retrieves as many features as - %feature_names provided. - function varargout=get_trials_features(period,feature_name) - %It is important to use a left join so that all trials get a - %return value, even if null, otherwise period.trials and - %period.get_trials_features won't match up. - stmnt = ['SELECT datum_has_datum.child_datum_id, datum_feature_value.Value, feature_type.Name ',... - 'FROM datum_has_datum, datum_feature_value, feature_type ',... - 'WHERE datum_has_datum.parent_datum_id=',num2str(period.datum_id),... - ' AND feature_type.Name LIKE "',feature_name,... - '" AND datum_feature_value.datum_id=datum_has_datum.child_datum_id',... - ' AND datum_feature_value.feature_type_id=feature_type.feature_type_id']; -% 'FROM ((datum_has_datum LEFT JOIN datum_feature_value ON datum_feature_value.datum_id=datum_has_datum.child_datum_id) ',... -% 'INNER JOIN feature_type ON datum_feature_value.feature_type_id=feature_type.feature_type_id) ',... -% 'WHERE datum_has_datum.parent_datum_id=',num2str(period.datum_id),... -% ' AND feature_type.Name LIKE "',feature_name,'"']; - mo=period.dbx.statement(stmnt); - varargout{1}=mo.Value; - if nargout>1 - varargout{2}=mo.child_datum_id; - end - if nargout>2 - varargout{3}=mo.Name; - end - end - function varargout=get_trials_details(period,detail_name) - %It is important to use a left join so that all trials get a - %return value, even if null, otherwise period.trials and - %period.get_trials_features won't match up. - stmnt = ['SELECT datum_has_datum.child_datum_id, datum_detail_value.Value, detail_type.Name ',... - 'FROM (datum_has_datum LEFT JOIN datum_detail_value ON datum_detail_value.datum_id=datum_has_datum.child_datum_id) ',... - 'LEFT JOIN detail_type ON datum_detail_value.detail_type_id=detail_type.detail_type_id ',... - 'WHERE datum_has_datum.parent_datum_id=',num2str(period.datum_id),... - ' AND detail_type.Name LIKE "',detail_name,'"']; - mo=period.dbx.statement(stmnt); - varargout{1}=mo.Value; - if nargout>1 - varargout{2}=mo.child_datum_id; - end - if nargout>2 - varargout{3}=mo.Name; - end - end - - function set_trials_features(period,feature_names,feature_matrix) - trial_id=[period.trials.datum_id]; - %TODO: Throw an error if trial_id length ~= feature_matrix - %length. 
- name_stmnt = 'SELECT feature_type_id FROM feature_type WHERE Name LIKE "{S}"'; - val_stmnt = 'UPDATE datum_feature_value SET Value={S4} WHERE datum_id={Si} AND feature_type_id={Si}'; - for ff=1:size(feature_matrix,2) - %Get the feature_type_id - mo = period.dbx.statement(name_stmnt,feature_names(ff)); - ft_id = mo.feature_type_id; - period.dbx.statement('BEGIN'); - for tt=1:size(feature_matrix,1) - fval = feature_matrix(tt,ff); - if ~isnan(fval) %submitting a nan value really screws things up. - period.dbx.statement(val_stmnt,{fval,trial_id(tt),ft_id}); - end - end - period.dbx.statement('COMMIT'); - end - - end - function set_trials_details(period,detail_names,detail_matrix) - trial_id=[period.trials.datum_id]; - %TODO: Throw an error if trial_id length ~= feature_matrix - %length. - name_stmnt = 'SELECT detail_type_id FROM detail_type WHERE Name LIKE "{S}"'; - val_stmnt = 'UPDATE datum_detail_value SET Value="{S}" WHERE datum_id={Si} AND detail_type_id={Si}'; - for dd=1:size(detail_matrix,2) - mo = period.dbx.statement(name_stmnt,detail_names(dd)); - dt_id = mo.detail_type_id; - period.dbx.statement('BEGIN'); - for tt=1:size(detail_matrix,1) - if isnumeric(detail_matrix(tt,dd)) && ~isnan(detail_matrix(tt,dd)) - detail_value=num2str(detail_matrix(tt,dd)); - elseif iscell(detail_matrix(tt,dd)) - detail_value = detail_matrix{tt,dd}; - else - detail_value=detail_matrix(tt,dd); - end - if ischar(detail_value) || ~isnan(detail_value) - period.dbx.statement(val_stmnt,{detail_value,trial_id(tt),dt_id}); - end - end - period.dbx.statement('COMMIT'); - end - end - end +classdef Period < EERF.Datum + properties (Dependent = true, Transient = true) + trials + end +% properties (Hidden = true) +% trial_class = 'Trial'; %Saved to disk. +% end + methods + function period = Period(varargin) + period = period@EERF.Datum(varargin{:}); + end + function trials = get.trials(period) + %Since I have not implemented span_type="day", the only + %possible children are trials, thus I can use the parent class + %method. + stmnt = sprintf(['SELECT datum_id FROM datum WHERE subject_id={Si} '... + 'AND span_type=''trial'' AND start_time>=''%s'' AND stop_time<=''%s'''], period.StartTime, period.StopTime); + mo = period.dbx.statement(stmnt, {period.subject.subject_id}); + n_trials = length(mo.datum_id); + if n_trials>0 + trials(n_trials) = EERF.Trial(period.dbx); + trial_ids = num2cell(mo.datum_id); + [trials(:).datum_id] = trial_ids{:}; + else + trials = []; + end + end + + %The following functions are provided for convenience to speed up + %the retrieval of data and features without requiring each trial to + %retrieve it by itself. + %TODO: Modify this so that it retrieves as many features as + %feature_names provided. + function varargout=get_trials_features(period,feature_name) + %It is important to use a left join so that all trials get a + %return value, even if null, otherwise period.trials and + %period.get_trials_features won't match up. + stmnt = ['SELECT datum_has_datum.child_datum_id, datum_feature_value.Value, feature_type.Name ',... + 'FROM datum_has_datum, datum_feature_value, feature_type ',... + 'WHERE datum_has_datum.parent_datum_id=',num2str(period.datum_id),... + ' AND feature_type.Name LIKE "',feature_name,... + '" AND datum_feature_value.datum_id=datum_has_datum.child_datum_id',... + ' AND datum_feature_value.feature_type_id=feature_type.feature_type_id']; +% 'FROM ((datum_has_datum LEFT JOIN datum_feature_value ON datum_feature_value.datum_id=datum_has_datum.child_datum_id) ',... 
+% 'INNER JOIN feature_type ON datum_feature_value.feature_type_id=feature_type.feature_type_id) ',... +% 'WHERE datum_has_datum.parent_datum_id=',num2str(period.datum_id),... +% ' AND feature_type.Name LIKE "',feature_name,'"']; + mo=period.dbx.statement(stmnt); + varargout{1}=mo.Value; + if nargout>1 + varargout{2}=mo.child_datum_id; + end + if nargout>2 + varargout{3}=mo.Name; + end + end + function varargout=get_trials_details(period,detail_name) + %It is important to use a left join so that all trials get a + %return value, even if null, otherwise period.trials and + %period.get_trials_features won't match up. + stmnt = ['SELECT datum_has_datum.child_datum_id, datum_detail_value.Value, detail_type.Name ',... + 'FROM (datum_has_datum LEFT JOIN datum_detail_value ON datum_detail_value.datum_id=datum_has_datum.child_datum_id) ',... + 'LEFT JOIN detail_type ON datum_detail_value.detail_type_id=detail_type.detail_type_id ',... + 'WHERE datum_has_datum.parent_datum_id=',num2str(period.datum_id),... + ' AND detail_type.Name LIKE "',detail_name,'"']; + mo=period.dbx.statement(stmnt); + varargout{1}=mo.Value; + if nargout>1 + varargout{2}=mo.child_datum_id; + end + if nargout>2 + varargout{3}=mo.Name; + end + end + + function set_trials_features(period,feature_names,feature_matrix) + trial_id=[period.trials.datum_id]; + %TODO: Throw an error if trial_id length ~= feature_matrix + %length. + name_stmnt = 'SELECT feature_type_id FROM feature_type WHERE Name LIKE "{S}"'; + val_stmnt = 'UPDATE datum_feature_value SET Value={S4} WHERE datum_id={Si} AND feature_type_id={Si}'; + for ff=1:size(feature_matrix,2) + %Get the feature_type_id + mo = period.dbx.statement(name_stmnt,feature_names(ff)); + ft_id = mo.feature_type_id; + period.dbx.statement('BEGIN'); + for tt=1:size(feature_matrix,1) + fval = feature_matrix(tt,ff); + if ~isnan(fval) %submitting a nan value really screws things up. + period.dbx.statement(val_stmnt,{fval,trial_id(tt),ft_id}); + end + end + period.dbx.statement('COMMIT'); + end + + end + function set_trials_details(period,detail_names,detail_matrix) + trial_id=[period.trials.datum_id]; + %TODO: Throw an error if trial_id length ~= feature_matrix + %length. 
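+            %As in set_trials_features above: the loop below resolves the
+            %detail_type_id by name, then updates one row per trial, wrapped
+            %in a single BEGIN/COMMIT transaction per detail column to avoid
+            %per-row commits.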
+ name_stmnt = 'SELECT detail_type_id FROM detail_type WHERE Name LIKE "{S}"'; + val_stmnt = 'UPDATE datum_detail_value SET Value="{S}" WHERE datum_id={Si} AND detail_type_id={Si}'; + for dd=1:size(detail_matrix,2) + mo = period.dbx.statement(name_stmnt,detail_names(dd)); + dt_id = mo.detail_type_id; + period.dbx.statement('BEGIN'); + for tt=1:size(detail_matrix,1) + if isnumeric(detail_matrix(tt,dd)) && ~isnan(detail_matrix(tt,dd)) + detail_value=num2str(detail_matrix(tt,dd)); + elseif iscell(detail_matrix(tt,dd)) + detail_value = detail_matrix{tt,dd}; + else + detail_value=detail_matrix(tt,dd); + end + if ischar(detail_value) || ~isnan(detail_value) + period.dbx.statement(val_stmnt,{detail_value,trial_id(tt),dt_id}); + end + end + period.dbx.statement('COMMIT'); + end + end + end end \ No newline at end of file diff --git a/eerfmatlab/+EERF/Subject.m b/matlab/+SERF/Subject.m similarity index 97% rename from eerfmatlab/+EERF/Subject.m rename to matlab/+SERF/Subject.m index d709343..b2968cc 100644 --- a/eerfmatlab/+EERF/Subject.m +++ b/matlab/+SERF/Subject.m @@ -1,72 +1,72 @@ -classdef Subject < EERF.Db_obj - properties (Constant) %These are abstract in parent class - table_name = 'subject'; - key_names = {'subject_id'}; - end - properties (Hidden = true) - subject_id; %subject_id - end - properties (Dependent = true, Transient = true) - Name; %name - DateOfBirth; %birthday - Sex; %sex - Weight; %weight - Height; - %HeadSize; %headsize - %Handedness; %handedness - %Smoking; %smoking - %AlcoholAbuse; %alcohol_abuse - %DrugAbuse; %drug_abuse - %Medication; %medication - %VisualImpairment; %visual_impairment - %HeartImpairment; %heart_impairment - periods; - days; - details; - end - - methods - function self = Subject(varargin) -% assert(nargin>=5 && strcmpi(varargin{2},'Name') && strcmpi(varargin{4},'subject_type_id'),... -% 'EERF.Subject needs Name and subject_type_id when instantiated.'); - self = self@EERF.Db_obj(varargin{:}); - end - function value=get.Name(self) - value=self.get_col_value('name'); - end - function set.Name(self,name) - self.set_col_value('name',name); - end - function value=get.DateOfBirth(self) - value=self.get_col_value('birthday'); - end - function set.DateOfBirth(self, DateOfBirth) - self.set_col_value('birthday', DateOfBirth); - end - function value=get.Sex(self) - value=self.get_col_value('sex'); - end - function set.Sex(self, Sex) - self.set_col_value('sex', Sex); - end - function value=get.Weight(self) - value=self.get_col_value('weight'); - end - function set.Weight(self,Weight) - self.set_col_value('weight',Weight); - end - function periods=get.periods(self) - periods=EERF.Db_obj.get_obj_array(self.dbx, 'Period',... - 'subject_id', self.subject_id, 'span_type', 'period'); - end - function days=get.days(self) - days=EERF.Db_obj.get_obj_array(self.dbx, 'Period',... - 'subject_id', self.subject_id, 'span_type', 'day'); - end - %To modify a period's subject, do so on the period object. - function details=get.details(self) - details=EERF.Db_obj.get_obj_array(self.dbx,'SubjectDetail',... 
- 'subject_id',self.subject_id); - end - end +classdef Subject < EERF.Db_obj + properties (Constant) %These are abstract in parent class + table_name = 'subject'; + key_names = {'subject_id'}; + end + properties (Hidden = true) + subject_id; %subject_id + end + properties (Dependent = true, Transient = true) + Name; %name + DateOfBirth; %birthday + Sex; %sex + Weight; %weight + Height; + %HeadSize; %headsize + %Handedness; %handedness + %Smoking; %smoking + %AlcoholAbuse; %alcohol_abuse + %DrugAbuse; %drug_abuse + %Medication; %medication + %VisualImpairment; %visual_impairment + %HeartImpairment; %heart_impairment + periods; + days; + details; + end + + methods + function self = Subject(varargin) +% assert(nargin>=5 && strcmpi(varargin{2},'Name') && strcmpi(varargin{4},'subject_type_id'),... +% 'EERF.Subject needs Name and subject_type_id when instantiated.'); + self = self@EERF.Db_obj(varargin{:}); + end + function value=get.Name(self) + value=self.get_col_value('name'); + end + function set.Name(self,name) + self.set_col_value('name',name); + end + function value=get.DateOfBirth(self) + value=self.get_col_value('birthday'); + end + function set.DateOfBirth(self, DateOfBirth) + self.set_col_value('birthday', DateOfBirth); + end + function value=get.Sex(self) + value=self.get_col_value('sex'); + end + function set.Sex(self, Sex) + self.set_col_value('sex', Sex); + end + function value=get.Weight(self) + value=self.get_col_value('weight'); + end + function set.Weight(self,Weight) + self.set_col_value('weight',Weight); + end + function periods=get.periods(self) + periods=EERF.Db_obj.get_obj_array(self.dbx, 'Period',... + 'subject_id', self.subject_id, 'span_type', 'period'); + end + function days=get.days(self) + days=EERF.Db_obj.get_obj_array(self.dbx, 'Period',... + 'subject_id', self.subject_id, 'span_type', 'day'); + end + %To modify a period's subject, do so on the period object. + function details=get.details(self) + details=EERF.Db_obj.get_obj_array(self.dbx,'SubjectDetail',... 
+ 'subject_id',self.subject_id); + end + end end \ No newline at end of file diff --git a/eerfmatlab/+EERF/SubjectDetail.m b/matlab/+SERF/SubjectDetail.m similarity index 96% rename from eerfmatlab/+EERF/SubjectDetail.m rename to matlab/+SERF/SubjectDetail.m index 583d001..9b85f01 100644 --- a/eerfmatlab/+EERF/SubjectDetail.m +++ b/matlab/+SERF/SubjectDetail.m @@ -1,14 +1,14 @@ -classdef SubjectDetail < EERF.GenericDetail - properties (Constant) %These are abstract in parent class - table_name='subject_detail_value'; - key_names={'subject_id','detail_type_id'}; - end - properties (Hidden=true) - subject_id; - end - methods - function obj = SubjectDetail(varargin) - obj = obj@EERF.GenericDetail(varargin{:}); - end - end +classdef SubjectDetail < EERF.GenericDetail + properties (Constant) %These are abstract in parent class + table_name='subject_detail_value'; + key_names={'subject_id','detail_type_id'}; + end + properties (Hidden=true) + subject_id; + end + methods + function obj = SubjectDetail(varargin) + obj = obj@EERF.GenericDetail(varargin{:}); + end + end end \ No newline at end of file diff --git a/eerfmatlab/+EERF/SubjectType.m b/matlab/+SERF/SubjectType.m similarity index 97% rename from eerfmatlab/+EERF/SubjectType.m rename to matlab/+SERF/SubjectType.m index 9f4ccc3..398549f 100644 --- a/eerfmatlab/+EERF/SubjectType.m +++ b/matlab/+SERF/SubjectType.m @@ -1,25 +1,25 @@ -classdef SubjectType < EERF.GenericType - properties (Constant) %These are abstract in parent class - table_name='subject_type'; - key_names={'subject_type_id'}; - end - properties (Hidden=true) - subject_type_id; - end - properties (Dependent = true, Transient = true) - detail_types; - end - methods - function obj = SubjectType(varargin) - obj = obj@EERF.GenericType(varargin{:}); - end - function detail_types=get.detail_types(self) - detail_types=self.get_many_to_many('subject_type_has_detail_type',... - 'subject_type_id','subject_type_id','detail_type_id','detail_type_id','DetailType'); - end - function set.detail_types(self,detail_types) - self.set_many_to_many(detail_types,'subject_type_has_detail_type',... - 'subject_type_id','subject_type_id','detail_type_id','detail_type_id'); - end - end +classdef SubjectType < EERF.GenericType + properties (Constant) %These are abstract in parent class + table_name='subject_type'; + key_names={'subject_type_id'}; + end + properties (Hidden=true) + subject_type_id; + end + properties (Dependent = true, Transient = true) + detail_types; + end + methods + function obj = SubjectType(varargin) + obj = obj@EERF.GenericType(varargin{:}); + end + function detail_types=get.detail_types(self) + detail_types=self.get_many_to_many('subject_type_has_detail_type',... + 'subject_type_id','subject_type_id','detail_type_id','detail_type_id','DetailType'); + end + function set.detail_types(self,detail_types) + self.set_many_to_many(detail_types,'subject_type_has_detail_type',... 
+ 'subject_type_id','subject_type_id','detail_type_id','detail_type_id'); + end + end end \ No newline at end of file diff --git a/eerfmatlab/+EERF/System.m b/matlab/+SERF/System.m similarity index 100% rename from eerfmatlab/+EERF/System.m rename to matlab/+SERF/System.m diff --git a/eerfmatlab/+EERF/TFTrial.m b/matlab/+SERF/TFTrial.m similarity index 100% rename from eerfmatlab/+EERF/TFTrial.m rename to matlab/+SERF/TFTrial.m diff --git a/eerfmatlab/+EERF/Trial.m b/matlab/+SERF/Trial.m similarity index 97% rename from eerfmatlab/+EERF/Trial.m rename to matlab/+SERF/Trial.m index 4eba3d3..50b7581 100644 --- a/eerfmatlab/+EERF/Trial.m +++ b/matlab/+SERF/Trial.m @@ -1,24 +1,24 @@ -classdef Trial < EERF.Datum - properties (Dependent = true) - periods; - end - methods - function obj = Trial(varargin) - obj = obj@EERF.Datum(varargin{:}); - end - function periods = get.periods(trial) - stmnt = sprintf(['SELECT datum_id FROM datum WHERE subject_id={Si} '... - 'AND span_type=''period'' AND start_time<=''%s'' AND stop_time>=''%s'''],... - trial.StartTime, trial.StopTime); - mo = trial.dbx.statement(stmnt, {trial.subject.subject_id}); - n_periods = length(mo.datum_id); - if n_periods>0 - periods(n_periods) = EERF.Period(trial.dbx); - trial_ids = num2cell(mo.datum_id); - [periods(:).datum_id] = trial_ids{:}; - else - periods = []; - end - end - end +classdef Trial < EERF.Datum + properties (Dependent = true) + periods; + end + methods + function obj = Trial(varargin) + obj = obj@EERF.Datum(varargin{:}); + end + function periods = get.periods(trial) + stmnt = sprintf(['SELECT datum_id FROM datum WHERE subject_id={Si} '... + 'AND span_type=''period'' AND start_time<=''%s'' AND stop_time>=''%s'''],... + trial.StartTime, trial.StopTime); + mo = trial.dbx.statement(stmnt, {trial.subject.subject_id}); + n_periods = length(mo.datum_id); + if n_periods>0 + periods(n_periods) = EERF.Period(trial.dbx); + trial_ids = num2cell(mo.datum_id); + [periods(:).datum_id] = trial_ids{:}; + else + periods = []; + end + end + end end \ No newline at end of file diff --git a/eerfmatlab/README.md b/matlab/README.md similarity index 68% rename from eerfmatlab/README.md rename to matlab/README.md index 957274c..795b7c1 100644 --- a/eerfmatlab/README.md +++ b/matlab/README.md @@ -1,6 +1,6 @@ # Introduction -This is an API for accessing the EERF database from within the Matlab +This is an API for accessing the SERF database from within the Matlab programming environment. # Setup @@ -10,8 +10,8 @@ programming environment. `sudo mysqd_safe &` or use your GUI. 3. Open Matlab and change to $repopath. 4. Add the database interface to the path: `addpath(fullfile(pwd, 'mym'));` -5. Import the object interfaces: `import EERF.*;` -6. Create a connection to the MySQL server: `dbx = EERF.Dbmym('mysite');` +5. Import the object interfaces: `import SERF.*;` +6. 
Create a connection to the MySQL server: `dbx = SERF.Dbmym('mysite');` # Using diff --git a/eerfmatlab/mym/LICENSE.txt b/matlab/mym/LICENSE.txt similarity index 100% rename from eerfmatlab/mym/LICENSE.txt rename to matlab/mym/LICENSE.txt diff --git a/eerfmatlab/mym/libmysql.16.dylib b/matlab/mym/libmysql.16.dylib similarity index 100% rename from eerfmatlab/mym/libmysql.16.dylib rename to matlab/mym/libmysql.16.dylib diff --git a/eerfmatlab/mym/mym.cpp b/matlab/mym/mym.cpp similarity index 100% rename from eerfmatlab/mym/mym.cpp rename to matlab/mym/mym.cpp diff --git a/eerfmatlab/mym/mym.h b/matlab/mym/mym.h similarity index 100% rename from eerfmatlab/mym/mym.h rename to matlab/mym/mym.h diff --git a/eerfmatlab/mym/mym.m b/matlab/mym/mym.m similarity index 100% rename from eerfmatlab/mym/mym.m rename to matlab/mym/mym.m diff --git a/eerfmatlab/mym/mym.mexa64 b/matlab/mym/mym.mexa64 similarity index 100% rename from eerfmatlab/mym/mym.mexa64 rename to matlab/mym/mym.mexa64 diff --git a/eerfmatlab/mym/mym.mexmaci b/matlab/mym/mym.mexmaci similarity index 100% rename from eerfmatlab/mym/mym.mexmaci rename to matlab/mym/mym.mexmaci diff --git a/eerfmatlab/mym/mym.mexmaci64 b/matlab/mym/mym.mexmaci64 similarity index 100% rename from eerfmatlab/mym/mym.mexmaci64 rename to matlab/mym/mym.mexmaci64 diff --git a/eerfmatlab/mym/mym.mexw32 b/matlab/mym/mym.mexw32 similarity index 100% rename from eerfmatlab/mym/mym.mexw32 rename to matlab/mym/mym.mexw32 diff --git a/eerfmatlab/mym/mym.mexw64 b/matlab/mym/mym.mexw64 similarity index 100% rename from eerfmatlab/mym/mym.mexw64 rename to matlab/mym/mym.mexw64 diff --git a/eerfmatlab/mym/readme.txt b/matlab/mym/readme.txt similarity index 98% rename from eerfmatlab/mym/readme.txt rename to matlab/mym/readme.txt index 82d5d9d..5d18178 100644 --- a/eerfmatlab/mym/readme.txt +++ b/matlab/mym/readme.txt @@ -1,68 +1,68 @@ -mYm v1.36 readme.txt -Updated May 19, 2010 by J.C. Erlich, J.T. Marsh and Y. Maret - -All feedback appreciated to yannick.maret@epfl.ch - -GPL ---- -mYm is a Matlab interface to MySQL server that support BLOB object - -This program is free software; you can redistribute it and/or modify -it under the terms of the GNU General Public License as published by -the Free Software Foundation; either version 2 of the License, or -(at your option) any later version. - -This program is distributed in the hope that it will be useful, -but WITHOUT ANY WARRANTY; without even the implied warranty of -MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the -GNU General Public License for more details. - -You should have received a copy of the GNU General Public License -along with this program; if not, write to the Free Software -Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA - -Copyright notice: some parts of this code (server connection, fancy print) is based on an original code by Robert Almgren (http://www.mmf.utoronto.ca/resrchres/mysql/). The present code is under GPL license with express agreement of Mr. Almgren. - -WHAT IS MYM? ------------- -mYm is a Matlab interface to MySQL server. It is based on the original 'MySQL and Matlab' by Robert Almgren and adds the support for Binary Large Object (BLOB). That is, it can insert matlab objects (e.g. array, structure, cell) into BLOB fields, as well retrieve from them. To save space, the matlab objects is first compressed (using zlib) before storing it into a BLOB field. Like Almgren's original, mYm supports multiple connections to MySQL server. 
- -INSTALLATION ------------- --Windows run setup.msi --Matlab the source can be compiled using the following command (thanks to Jeffrey Elrich) - mex -I[mysql_include_dir] -I[zlib_include_dir] -L[mysql_lib_dir] -L[zlib_lib_dir] -lz -lmysqlclient mym.cpp - (on Mac OS X you might also need the -lSystemStubs switch to avoid namespace clashes) - Note: to compile, the zlib library should be installed on the system (including the headers). - for more information, cf. http://www.zlib.net/ - -HOW TO USE IT -------------- -see mym.m - -HISTORY -------- +mYm v1.36 readme.txt +Updated May 19, 2010 by J.C. Erlich, J.T. Marsh and Y. Maret + +All feedback appreciated to yannick.maret@epfl.ch + +GPL +--- +mYm is a Matlab interface to MySQL server that support BLOB object + +This program is free software; you can redistribute it and/or modify +it under the terms of the GNU General Public License as published by +the Free Software Foundation; either version 2 of the License, or +(at your option) any later version. + +This program is distributed in the hope that it will be useful, +but WITHOUT ANY WARRANTY; without even the implied warranty of +MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +GNU General Public License for more details. + +You should have received a copy of the GNU General Public License +along with this program; if not, write to the Free Software +Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA + +Copyright notice: some parts of this code (server connection, fancy print) is based on an original code by Robert Almgren (http://www.mmf.utoronto.ca/resrchres/mysql/). The present code is under GPL license with express agreement of Mr. Almgren. + +WHAT IS MYM? +------------ +mYm is a Matlab interface to MySQL server. It is based on the original 'MySQL and Matlab' by Robert Almgren and adds the support for Binary Large Object (BLOB). That is, it can insert matlab objects (e.g. array, structure, cell) into BLOB fields, as well retrieve from them. To save space, the matlab objects is first compressed (using zlib) before storing it into a BLOB field. Like Almgren's original, mYm supports multiple connections to MySQL server. + +INSTALLATION +------------ +-Windows run setup.msi +-Matlab the source can be compiled using the following command (thanks to Jeffrey Elrich) + mex -I[mysql_include_dir] -I[zlib_include_dir] -L[mysql_lib_dir] -L[zlib_lib_dir] -lz -lmysqlclient mym.cpp + (on Mac OS X you might also need the -lSystemStubs switch to avoid namespace clashes) + Note: to compile, the zlib library should be installed on the system (including the headers). + for more information, cf. http://www.zlib.net/ + +HOW TO USE IT +------------- +see mym.m + +HISTORY +------- v1.36 - Fixed case where empty nested structures would cause seg fault - - Support for 64 bit and 32 bit clients. - - WARNING: Version compatibility issue. 
The output of mym is now a struct with fieldnames for each column -v1.0.9 - a space is now used when the variable corresponding to a string placeholder is empty - - we now use strtod instead of sscanf(s, "%lf") - - add support for stored procedure -v1.0.8 - corrected a problem occurring with MySQL commands that do not return results - - the M$ Windows binary now use the correct runtime DLL (MSVC80.DLL insteaLd of MSVC80D.DL) -v1.0.7 - logical values are now correctly considered as numerical value when using placeholder {Si} - - corrected a bug occuring when closing a connection that was not openned - - added the possibility to get the next free connection ID when oppening a connection -v1.0.6 - corrected a bug where mym('use', 'a_schema') worked fine while mym(conn, 'use', 'a_schema') did not work - - corrected a segmentation violation that happened when issuing a MySQL command when not connected - - corrected the mex command (this file) - - corrected a bug where it was impossible to open a connection silently - - use std::max(a, b) instead of max(a, b) -v1.0.5 - added the preamble 'u', permitting to save binary fields without using compression - - corrected a bug in mym('closeall') - - corrected various mistakes in the help file (thanks to Jörg Buchholz) -v1.0.4 corrected the behaviour of mYm with time fields, now return a string dump of the field -v1.0.3 minor corrections -v1.0.2 put mYm under GPL license, official release -v1.0.1 corrected a bug where non-matlab binary objects were not returned + - Support for 64 bit and 32 bit clients. + - WARNING: Version compatibility issue. The output of mym is now a struct with fieldnames for each column +v1.0.9 - a space is now used when the variable corresponding to a string placeholder is empty + - we now use strtod instead of sscanf(s, "%lf") + - add support for stored procedure +v1.0.8 - corrected a problem occurring with MySQL commands that do not return results + - the M$ Windows binary now use the correct runtime DLL (MSVC80.DLL insteaLd of MSVC80D.DL) +v1.0.7 - logical values are now correctly considered as numerical value when using placeholder {Si} + - corrected a bug occuring when closing a connection that was not openned + - added the possibility to get the next free connection ID when oppening a connection +v1.0.6 - corrected a bug where mym('use', 'a_schema') worked fine while mym(conn, 'use', 'a_schema') did not work + - corrected a segmentation violation that happened when issuing a MySQL command when not connected + - corrected the mex command (this file) + - corrected a bug where it was impossible to open a connection silently + - use std::max(a, b) instead of max(a, b) +v1.0.5 - added the preamble 'u', permitting to save binary fields without using compression + - corrected a bug in mym('closeall') + - corrected various mistakes in the help file (thanks to Jörg Buchholz) +v1.0.4 corrected the behaviour of mYm with time fields, now return a string dump of the field +v1.0.3 minor corrections +v1.0.2 put mYm under GPL license, official release +v1.0.1 corrected a bug where non-matlab binary objects were not returned v1.0.0 initial release \ No newline at end of file diff --git a/eerfmatlab/test.m b/matlab/test.m similarity index 97% rename from eerfmatlab/test.m rename to matlab/test.m index e80bd22..07a7680 100644 --- a/eerfmatlab/test.m +++ b/matlab/test.m @@ -1,22 +1,22 @@ -addpath(genpath(fullfile(pwd, 'mym'))); - -import EERF.* % Object interfaces. -dbx = EERF.Dbmym('mysite'); % Open a connection to the database. 
-
-subjects=EERF.Db_obj.get_obj_array(dbx, 'Subject'); %Get all subjects;
-my_sub = subjects(1);
-
-%Alternatively, if you know the name, you could try
-%my_sub = EERF.Subject(dbx, 'name', 'ap4'); %This will search for the
-%subject matching this name.
-
-my_period = my_sub.periods(33);
-my_trial = my_period.trials(1);
-
-%Plot the trial
-plot(my_trial);
-
-%The rest are not working right now.
-meps=pp.get_trials_features('MEP_p2p'); %Get all meps from the period.
-powerA=pp.get_trials_details('dat_TMS_powerA'); %Get TMS intensity for the period.
+addpath(genpath(fullfile(pwd, 'mym')));
+
+import SERF.* % Object interfaces.
+dbx = SERF.Dbmym('mysite'); % Open a connection to the database.
+
+subjects=SERF.Db_obj.get_obj_array(dbx, 'Subject'); %Get all subjects;
+my_sub = subjects(1);
+
+%Alternatively, if you know the name, you could try
+%my_sub = SERF.Subject(dbx, 'name', 'ap4'); %This will search for the
+%subject matching this name.
+
+my_period = my_sub.periods(33);
+my_trial = my_period.trials(1);
+
+%Plot the trial
+plot(my_trial);
+
+%The rest are not working right now.
+meps=my_period.get_trials_features('MEP_p2p'); %Get all meps from the period.
+powerA=my_period.get_trials_details('dat_TMS_powerA'); %Get TMS intensity for the period.
 powerA=str2double(powerA);
\ No newline at end of file
diff --git a/my.cnf b/my.cnf
deleted file mode 100644
index ac52723..0000000
--- a/my.cnf
+++ /dev/null
@@ -1,14 +0,0 @@
-[client]
-#port = 3306
-#socket = /tmp/mysql.sock
-
-[mysqld]
-datadir = /Volumes/STORE/eerfdata
-#port = 3306
-#socket = /tmp/mysql.sock
-#pid-file = /Volumes/STORE/eerfdata/Chadwicks-MacBook-Pro.local.pid
-default-storage-engine = MyISAM
-default_tmp_storage_engine = MyISAM
-query_cache_type = 1
-key_buffer_size = 2G
-query_cache_limit = 400M
diff --git a/my_serf.cnf b/my_serf.cnf
new file mode 100644
index 0000000..d1758bc
--- /dev/null
+++ b/my_serf.cnf
@@ -0,0 +1,5 @@
+[client]
+#port = 3306
+#socket = /tmp/mysql.sock
+user = root
+password = mysql
diff --git a/python/MANIFEST.in b/python/MANIFEST.in
new file mode 100644
index 0000000..66405e1
--- /dev/null
+++ b/python/MANIFEST.in
@@ -0,0 +1,5 @@
+include LICENSE
+include README.md
+recursive-include serf/static *
+recursive-include serf/templates *
+recursive-include docs *
diff --git a/python/README.md b/python/README.md
new file mode 100644
index 0000000..0e968f8
--- /dev/null
+++ b/python/README.md
@@ -0,0 +1,16 @@
+Please see the [parent README](../README.md) for information about SERF.
+
+This is a Python package of SERF including a Django app and some helper modules.
+
+## Usage
+
+```python
+import serf
+serf.boot_django()
+from serf.models import *
+
+
+print(Subject.objects.get_or_create(name='Test')[0])
+# ft = ('HR_aaa', 'H-reflex avg abs amp')
+# myFT = FeatureType.objects.filter(name=ft[0])
+```
diff --git a/python/serf.egg-info/PKG-INFO b/python/serf.egg-info/PKG-INFO
new file mode 100644
index 0000000..1494529
--- /dev/null
+++ b/python/serf.egg-info/PKG-INFO
@@ -0,0 +1,129 @@
+Metadata-Version: 1.1
+Name: serf
+Version: 0.8
+Summary: A simple Django app to...
+Home-page: https://github.com/cboulay/SERF
+Author: Chadwick Boulay
+Author-email: chadwick.boulay@gmail.com
+License: BSD License
+Description: # Segmented Electrophys Recordings and Features Database
+
+        SERF-DB is a database schema designed to facilitate collection and analysis of segmented electrophysiological recordings and features.
![Database Schema](/models.png?raw=true "Database Schema")
+
+        - In the `python` folder we provide a python package `serf` comprising a [Django](https://www.djangoproject.com/) application to administer the database and act as an object-relational map (ORM), and a `tools` module to help with feature calculation and data analysis. Using this schema, and interfacing with the Django ORM, it is easy to work with the data in real time in Python.
+        - The [matlab](matlab/README.md) folder contains some (very outdated) code for interfacing with the database in Matlab.
+        - `serf.sql` is SQL that adds some functionality when using a non-Django API.
+
+        > Django applications are normally run in conjunction with a Django **project**, but in this case we are mostly only interested in the ORM. Therefore we default to the standalone approach, but we do provide some untested guidance below on how to use the application with a Django webserver.
+
+        ## Installation and setup
+
+        1. Install Python and Django. If you came here from the NeuroportDBS repository then you should have already done this.
+        1. Install the `serf` package with pip:
+            * Option 1: Download the `serf` wheel from the [releases page](https://github.com/cboulay/SERF/releases) and install it with `pip install {name of wheel.whl}`.
+            * Option 2: `pip install git+https://github.com/cboulay/SERF.git#subdirectory=python`
+        1. Install MySQL.
+            * See [INSTALL_MYSQL.md](./INSTALL_MYSQL.md) for how I do it (Mac / Linux / Win).
+        1. Install the serf schema:
+            1. Copy [my_serf.cnf](https://raw.githubusercontent.com/cboulay/SERF/master/my_serf.cnf) to what Python considers your home directory. The easiest way to check this is to open a command prompt in the correct python environment and run `python -c "import os; print(os.path.expanduser('~'))"`.
+            1. Edit the copied file to make sure its database settings are correct. The `[client]` `user` and `password` are important.
+            1. Run the provided console scripts:
+                ```
+                $ serf-makemigrations
+                $ serf-migrate
+                ```
+                You should get output like the following:
+                ```
+                Migrations for 'serf':
+                  SERF\python\serf\migrations\0001_initial.py
+                    - Create model Datum
+                    - Create model DatumFeatureValue
+                    - Create model DetailType
+                    - Create model FeatureType
+                    - Create model Subject
+                    - Create model System
+                    - Create model DatumFeatureStore
+                    - Create model DatumStore
+                    - Create model SubjectLog
+                    - Create model Procedure
+                    - Add field feature_type to datumfeaturevalue
+                    - Add field procedure to datum
+                    - Add field trials to datum
+                    - Create model SubjectDetailValue
+                    - Alter unique_together for datumfeaturevalue (1 constraint(s))
+                    - Create model DatumDetailValue
+                    - Alter unique_together for datum (1 constraint(s))
+                ```
+                `Applying serf.0001_initial... OK`
+
+        ## Using SERF
+
+        ### ...In a custom Python program
+
+        ```python
+        import serf
+        serf.boot_django()
+        from serf.models import *
+        print(Subject.objects.get_or_create(name='Test')[0])
+        ```
+
+        > [BCPyElectrophys](https://github.com/cboulay/BCPyElectrophys) would normally now be able to use this ORM, except it is out of date. I have some work to do there to get it working again.
+
+        ### ...In a web browser (i.e., in a Django project)
+
+        We assume you have already created your Django project using instructions similar to [the online tutorial up until "Creating the Polls app"](https://docs.djangoproject.com/en/3.1/intro/tutorial01/#creating-a-project).
+
+        Instead of continuing the tutorial to create a new app, edit your Django project to add the pip-installed serf app. Before doing so, you can sanity-check the standalone connection with the sketch shown below.
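+        The following is a minimal, hypothetical sketch (not a script shipped with the package) that exercises the standalone ORM path described above; it assumes `pip install serf` succeeded, `my_serf.cnf` is in your home directory, and it uses only the `Subject` and `SubjectLog` models defined in the migration.
+
+        ```python
+        # sanity_check.py - hypothetical standalone check, mirroring the
+        # boot_django() usage shown above.
+        import serf
+
+        serf.boot_django()  # configure Django settings; must run before importing models
+        from serf.models import Subject, SubjectLog
+
+        # Create (or fetch) a subject, then attach a free-text log entry to it.
+        sub, created = Subject.objects.get_or_create(name="Test")
+        SubjectLog.objects.create(subject=sub, entry="Standalone ORM sanity check.")
+        print(sub, "(newly created)" if created else "(already existed)")
+        ```
+
+        If this raises `django.db.utils.OperationalError`, re-check the `[client]` credentials in `my_serf.cnf` before continuing with the web-project configuration below.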
+        In settings.py, make sure the database info is correct ([online documentation](https://docs.djangoproject.com/en/3.1/ref/databases/#connecting-to-the-database)) and `'serf'` is in the list of INSTALLED_APPS:
+        ```Python
+        DATABASES = {
+            'default': {
+                'ENGINE': 'django.db.backends.mysql',
+                # 'NAME': 'serf',
+                # 'HOST': '127.0.0.1',
+                # 'USER': 'username',
+                # 'PASSWORD': 'password',
+                # above options can also be defined in config file
+                'OPTIONS': {'read_default_file': '/path/to/my_serf.cnf'},
+            }
+        }
+
+        INSTALLED_APPS = [
+            ...
+            'serf',
+        ]
+        ```
+
+        Edit urls.py (note that Django requires the module-level variable to be named `urlpatterns`):
+        ```Python
+        from django.urls import include, path
+        urlpatterns = [
+            ...
+            path('serf/', include('serf.urls')),
+            #url(r'^serf/', include(('serf.urls','serf'), namespace="serf")),
+        ]
+        ```
+
+        Test your server: `python manage.py runserver` and go to `localhost:8000/serf`
+
+        ### ...In a custom non-Python program
+
+        e.g. [Matlab](matlab/README.md)
+
+        Note that I cannot currently get non-Django interfaces to do CASCADE ON DELETE.
+        This is because Django creates foreign keys with a unique hash, and I cannot
+        use custom SQL (e.g., via migrations) to access the key name, drop it, then
+        add a new foreign key constraint with CASCADE ON DELETE set to on.
+
+        Thus, to delete using an API other than Django, you'll have to delete items
+        in order so as not to violate foreign key constraints.
+        For example, to delete a subject, you'll have to delete all of its data in this order:
+
+        DatumFeatureValue > DatumDetailValue > DatumStore > Datum > SubjectDetailValue > SubjectLog > Subject
+
+Platform: UNKNOWN
+Classifier: Framework :: Django
+Classifier: Intended Audience :: Developers
+Classifier: Operating System :: OS Independent
+Classifier: Programming Language :: Python
diff --git a/python/serf.egg-info/SOURCES.txt b/python/serf.egg-info/SOURCES.txt
new file mode 100644
index 0000000..f1a65ba
--- /dev/null
+++ b/python/serf.egg-info/SOURCES.txt
@@ -0,0 +1,53 @@
+MANIFEST.in
+README.md
+setup.py
+serf/__init__.py
+serf/_shared.py
+serf/admin.py
+serf/boot_django.py
+serf/models.py
+serf/tests.py
+serf/urls.py
+serf/views.py
+serf.egg-info/PKG-INFO
+serf.egg-info/SOURCES.txt
+serf.egg-info/dependency_links.txt
+serf.egg-info/entry_points.txt
+serf.egg-info/top_level.txt
+serf/migrations/0001_initial.py
+serf/migrations/__init__.py
+serf/scripts/Depth_Process.py
+serf/scripts/Features_Process.py
+serf/scripts/__init__.py
+serf/scripts/depth_process.py
+serf/scripts/djangoshell.py
+serf/scripts/features_process.py
+serf/scripts/makemigrations.py
+serf/scripts/migrate.py
+serf/static/eerfapp/raw_erp.js
+serf/static/eerfapp/recruitment_curve.js
+serf/static/eerfapp/trial_settings.js
+serf/templates/eerfapp/base.html
+serf/templates/eerfapp/erp_data.html
+serf/templates/eerfapp/index.html
+serf/templates/eerfapp/period_detail.html
+serf/templates/eerfapp/raw_erp.html
+serf/templates/eerfapp/recruitment_curve.html
+serf/templates/eerfapp/subject_detail.html
+serf/templates/eerfapp/subject_import.html
+serf/templates/eerfapp/subject_list.html
+serf/templates/eerfapp/subject_view_data.html
+serf/templates/eerfapp/trial_settings.html
+serf/tools/__init__.py
+serf/tools/db_wrap.py
+serf/tools/online.py
+serf/tools/features/__init__.py
+serf/tools/features/dbs_features.py
+serf/tools/features/dl_features.py
+serf/tools/features/hreflex_features.py
+serf/tools/features/lfp_features.py
+serf/tools/features/spike_features.py
+serf/tools/features/base/FeatureBase.py
+serf/tools/features/base/__init__.py
+serf/tools/utils/__init__.py
+serf/tools/utils/misc_functions.py \ No newline at end of file diff --git a/python/serf.egg-info/dependency_links.txt b/python/serf.egg-info/dependency_links.txt new file mode 100644 index 0000000..8b13789 --- /dev/null +++ b/python/serf.egg-info/dependency_links.txt @@ -0,0 +1 @@ + diff --git a/python/serf.egg-info/entry_points.txt b/python/serf.egg-info/entry_points.txt new file mode 100644 index 0000000..ae0532c --- /dev/null +++ b/python/serf.egg-info/entry_points.txt @@ -0,0 +1,5 @@ +[console_scripts] +serf-makemigrations = serf.scripts.makemigrations:main +serf-migrate = serf.scripts.migrate:main +serf-shell = serf.scripts.djangoshell:main + diff --git a/python/serf.egg-info/top_level.txt b/python/serf.egg-info/top_level.txt new file mode 100644 index 0000000..ad1c7a1 --- /dev/null +++ b/python/serf.egg-info/top_level.txt @@ -0,0 +1 @@ +serf diff --git a/python/serf/__init__.py b/python/serf/__init__.py new file mode 100644 index 0000000..0d8350a --- /dev/null +++ b/python/serf/__init__.py @@ -0,0 +1 @@ +from serf.boot_django import boot_django \ No newline at end of file diff --git a/python/serf/admin.py b/python/serf/admin.py new file mode 100644 index 0000000..672f0af --- /dev/null +++ b/python/serf/admin.py @@ -0,0 +1,7 @@ +from serf.models import * +from django.contrib import admin + +admin.site.register(Subject) +admin.site.register(SubjectLog) +admin.site.register(DetailType) +admin.site.register(FeatureType) diff --git a/python/serf/boot_django.py b/python/serf/boot_django.py new file mode 100644 index 0000000..141a0ad --- /dev/null +++ b/python/serf/boot_django.py @@ -0,0 +1,33 @@ +# boot_django.py +# +# This file sets up and configures Django. It's used by scripts that need to +# execute as if running in a Django server. +import os +import django +from django.conf import settings + + +BASE_DIR = os.path.abspath(os.path.join(os.path.dirname(__file__))) +USER_DIR = os.path.expanduser("~") +db_default_kwargs = {} +if os.path.isfile(os.path.join(USER_DIR, 'my_serf.cnf')): + db_default_kwargs = {"OPTIONS": {"read_default_file": os.path.join(USER_DIR, "my_serf.cnf")}} + + +def boot_django(): + settings.configure( + BASE_DIR=BASE_DIR, + DEBUG=True, + DATABASES={ + "default": { + "ENGINE": "django.db.backends.mysql", + "NAME": "serf", + "USER": "root", + **db_default_kwargs # read_default_file, if found. Specified values can overwrite above values. 
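+                # NOTE: options read from my_serf.cnf via read_default_file
+                # (e.g. [client] user and password) take precedence over the
+                # literal NAME/USER values specified above.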
+ } + }, + INSTALLED_APPS=( + "serf", + ), + ) + django.setup() diff --git a/django-eerf/eerfapp/migrations/0001_initial.py b/python/serf/migrations/0001_initial.py similarity index 59% rename from django-eerf/eerfapp/migrations/0001_initial.py rename to python/serf/migrations/0001_initial.py index 736871e..3ef5e14 100644 --- a/django-eerf/eerfapp/migrations/0001_initial.py +++ b/python/serf/migrations/0001_initial.py @@ -1,9 +1,10 @@ -# Generated by Django 2.2.6 on 2019-11-23 07:20 +# Generated by Django 3.0.3 on 2020-08-07 22:59 import datetime from django.db import migrations, models import django.db.models.deletion -import eerfapp.models +import numpy +import serf.models class Migration(migrations.Migration): @@ -19,8 +20,8 @@ class Migration(migrations.Migration): fields=[ ('datum_id', models.AutoField(primary_key=True, serialize=False)), ('number', models.PositiveIntegerField(default=0)), - ('span_type', eerfapp.models.EnumField(choices=[('trial', 'trial'), ('day', 'day'), ('period', 'period')], max_length=104)), - ('is_good', models.BooleanField(default=True)), + ('span_type', serf.models.EnumField(choices=[('trial', 'trial'), ('day', 'day'), ('period', 'period')], max_length=104)), + ('is_good', serf.models.NPArrayBlobField(blank=True, editable=True, np_dtype=bool, null=True)), ('start_time', models.DateTimeField(blank=True, default=datetime.datetime.now, null=True)), ('stop_time', models.DateTimeField(blank=True, default=None, null=True)), ], @@ -31,9 +32,9 @@ class Migration(migrations.Migration): migrations.CreateModel( name='DatumFeatureValue', fields=[ - ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')), + ('datum_feature_id', models.AutoField(primary_key=True, serialize=False)), ('value', models.FloatField(blank=True, null=True)), - ('datum', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='_feature_values', to='eerfapp.Datum')), + ('datum', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='_feature_values', to='serf.Datum')), ], options={ 'db_table': 'datum_feature_value', @@ -65,20 +66,10 @@ class Migration(migrations.Migration): name='Subject', fields=[ ('subject_id', models.AutoField(primary_key=True, serialize=False)), - ('name', models.CharField(max_length=135, unique=True)), - ('id', models.CharField(blank=True, max_length=135, null=True)), - ('weight', models.PositiveIntegerField(blank=True, null=True)), - ('height', models.PositiveIntegerField(blank=True, null=True)), + ('id', models.CharField(max_length=135, unique=True)), ('birthday', models.DateField(blank=True, null=True)), - ('headsize', models.CharField(blank=True, max_length=135, null=True)), - ('sex', eerfapp.models.EnumField(choices=[('unknown', 'unknown'), ('male', 'male'), ('female', 'female'), ('unspecified', 'unspecified')], max_length=104)), - ('handedness', eerfapp.models.EnumField(choices=[('unknown', 'unknown'), ('right', 'right'), ('left', 'left'), ('equal', 'equal')], max_length=104)), - ('smoking', eerfapp.models.EnumField(choices=[('unknown', 'unknown'), ('no', 'no'), ('yes', 'yes')], max_length=104)), - ('alcohol_abuse', eerfapp.models.EnumField(choices=[('unknown', 'unknown'), ('no', 'no'), ('yes', 'yes')], max_length=104)), - ('drug_abuse', eerfapp.models.EnumField(choices=[('unknown', 'unknown'), ('no', 'no'), ('yes', 'yes')], max_length=104)), - ('medication', eerfapp.models.EnumField(choices=[('unknown', 'unknown'), ('no', 'no'), ('yes', 'yes')], max_length=104)), - ('visual_impairment', 
eerfapp.models.EnumField(choices=[('unknown', 'unknown'), ('no', 'no'), ('yes', 'yes'), ('corrected', 'corrected')], max_length=104)), - ('heart_impairment', eerfapp.models.EnumField(choices=[('unknown', 'unknown'), ('no', 'no'), ('yes', 'yes'), ('pacemaker', 'pacemaker')], max_length=104)), + ('sex', serf.models.EnumField(choices=[('unknown', 'unknown'), ('male', 'male'), ('female', 'female'), ('unspecified', 'unspecified')], default='unknown', max_length=104)), + ('name', models.CharField(blank=True, max_length=135)), ], options={ 'db_table': 'subject', @@ -97,12 +88,12 @@ class Migration(migrations.Migration): migrations.CreateModel( name='DatumFeatureStore', fields=[ - ('dfv', models.OneToOneField(on_delete=django.db.models.deletion.CASCADE, primary_key=True, related_name='store', serialize=False, to='eerfapp.DatumFeatureValue')), - ('x_vec', eerfapp.models.NPArrayBlobField(blank=True, null=True)), - ('dat_array', eerfapp.models.NPArrayBlobField(blank=True, null=True)), + ('dfv', models.OneToOneField(on_delete=django.db.models.deletion.CASCADE, primary_key=True, related_name='store', serialize=False, to='serf.DatumFeatureValue')), + ('x_vec', serf.models.NPArrayBlobField(blank=True, editable=True, null=True)), + ('dat_array', serf.models.NPArrayBlobField(blank=True, editable=True, null=True)), ('n_channels', models.PositiveSmallIntegerField(blank=True, null=True)), ('n_features', models.PositiveIntegerField(blank=True, null=True)), - ('channel_labels', eerfapp.models.CSVStringField(blank=True, null=True)), + ('channel_labels', serf.models.CSVStringField(blank=True, null=True)), ], options={ 'db_table': 'datum_feature_value_store', @@ -111,12 +102,12 @@ class Migration(migrations.Migration): migrations.CreateModel( name='DatumStore', fields=[ - ('datum', models.OneToOneField(on_delete=django.db.models.deletion.CASCADE, primary_key=True, related_name='store', serialize=False, to='eerfapp.Datum')), - ('x_vec', eerfapp.models.NPArrayBlobField(blank=True, null=True)), - ('erp', eerfapp.models.NPArrayBlobField(blank=True, null=True)), + ('datum', models.OneToOneField(on_delete=django.db.models.deletion.CASCADE, primary_key=True, related_name='store', serialize=False, to='serf.Datum')), + ('x_vec', serf.models.NPArrayBlobField(blank=True, editable=True, null=True)), + ('dat_array', serf.models.NPArrayBlobField(blank=True, editable=True, np_dtype=numpy.int16, null=True)), ('n_channels', models.PositiveSmallIntegerField(blank=True, null=True)), ('n_samples', models.PositiveIntegerField(blank=True, null=True)), - ('channel_labels', eerfapp.models.CSVStringField(blank=True, null=True)), + ('channel_labels', serf.models.CSVStringField(blank=True, null=True)), ], options={ 'db_table': 'datum_store', @@ -128,34 +119,55 @@ class Migration(migrations.Migration): ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')), ('time', models.DateTimeField(blank=True, default=datetime.datetime.now, null=True)), ('entry', models.TextField(blank=True)), - ('subject', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='eerfapp.Subject')), + ('subject', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='serf.Subject')), ], options={ 'db_table': 'subject_log', }, ), + migrations.CreateModel( + name='Procedure', + fields=[ + ('procedure_id', models.AutoField(primary_key=True, serialize=False)), + ('date', models.DateField(blank=True, default=datetime.date.today, null=True)), + ('type', serf.models.EnumField(choices=[('none', 'none'), 
('surgical', 'surgical'), ('experiment', 'experiment'), ('monitoring', 'monitoring'), ('other', 'other')], default='none', max_length=104)), + ('a', serf.models.NPArrayBlobField(blank=True, editable=True, null=True)), + ('distance_to_target', models.FloatField(blank=True, null=True)), + ('e', serf.models.NPArrayBlobField(blank=True, editable=True, null=True)), + ('electrode_config', serf.models.EnumField(choices=[('none', 'none'), ('+', '+'), ('x', 'x'), ('l', 'l')], default='none', max_length=104)), + ('entry', serf.models.NPArrayBlobField(blank=True, editable=True, null=True)), + ('medication_status', serf.models.EnumField(choices=[('none', 'none'), ('on', 'on'), ('off', 'off'), ('half', 'half')], default='none', max_length=104)), + ('name', models.CharField(blank=True, max_length=135)), + ('recording_config', serf.models.EnumField(choices=[('none', 'none'), ('left', 'left'), ('left_2', 'left_2'), ('left_3', 'left_3'), ('left_4', 'left_4'), ('right', 'right'), ('right_2', 'right_2'), ('right_3', 'right_3'), ('right_4', 'right_4'), ('bilateral', 'bilateral'), ('bilateral_2', 'bilateral_2'), ('bilateral_3', 'bilateral_3'), ('bilateral_4', 'bilateral_4'), ('full', 'full'), ('full_2', 'full_2'), ('full_3', 'full_3'), ('full_4', 'full_4'), ('array', 'array'), ('array_2', 'array_2'), ('array_3', 'array_3'), ('array_4', 'array_4')], default='none', max_length=104)), + ('target', serf.models.NPArrayBlobField(blank=True, editable=True, null=True)), + ('subject', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='_procedures', to='serf.Subject')), + ], + options={ + 'db_table': 'procedure', + }, + ), migrations.AddField( model_name='datumfeaturevalue', name='feature_type', - field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='eerfapp.FeatureType'), + field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='serf.FeatureType'), ), migrations.AddField( model_name='datum', - name='subject', - field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='data', to='eerfapp.Subject'), + name='procedure', + field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='procedure', to='serf.Procedure'), ), migrations.AddField( model_name='datum', name='trials', - field=models.ManyToManyField(db_table='datum_has_datum', limit_choices_to={'span_type': 'trial'}, related_name='periods', to='eerfapp.Datum'), + field=models.ManyToManyField(db_table='datum_has_datum', limit_choices_to={'span_type': 'trial'}, related_name='periods', to='serf.Datum'), ), migrations.CreateModel( name='SubjectDetailValue', fields=[ ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')), ('value', models.CharField(blank=True, max_length=135, null=True)), - ('detail_type', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='eerfapp.DetailType')), - ('subject', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='_detail_values', to='eerfapp.Subject')), + ('detail_type', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='serf.DetailType')), + ('subject', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='_detail_values', to='serf.Subject')), ], options={ 'db_table': 'subject_detail_value', @@ -171,8 +183,8 @@ class Migration(migrations.Migration): fields=[ ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')), ('value', models.CharField(blank=True, max_length=135, 
null=True)), - ('datum', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='_detail_values', to='eerfapp.Datum')), - ('detail_type', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='eerfapp.DetailType')), + ('datum', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='_detail_values', to='serf.Datum')), + ('detail_type', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='serf.DetailType')), ], options={ 'db_table': 'datum_detail_value', @@ -181,6 +193,6 @@ class Migration(migrations.Migration): ), migrations.AlterUniqueTogether( name='datum', - unique_together={('subject', 'number', 'span_type')}, + unique_together={('procedure', 'number', 'span_type')}, ), ] diff --git a/django-eerf/eerfapp/__init__.py b/python/serf/migrations/__init__.py similarity index 100% rename from django-eerf/eerfapp/__init__.py rename to python/serf/migrations/__init__.py diff --git a/django-eerf/eerfapp/migrations/old/0001_initial.py b/python/serf/migrations/old/0001_initial.py similarity index 100% rename from django-eerf/eerfapp/migrations/old/0001_initial.py rename to python/serf/migrations/old/0001_initial.py diff --git a/django-eerf/eerfapp/migrations/old/0002_auto_20140419_2253.py b/python/serf/migrations/old/0002_auto_20140419_2253.py similarity index 100% rename from django-eerf/eerfapp/migrations/old/0002_auto_20140419_2253.py rename to python/serf/migrations/old/0002_auto_20140419_2253.py diff --git a/django-eerf/eerfapp/migrations/old/0003_auto_20140419_2253.py b/python/serf/migrations/old/0003_auto_20140419_2253.py similarity index 100% rename from django-eerf/eerfapp/migrations/old/0003_auto_20140419_2253.py rename to python/serf/migrations/old/0003_auto_20140419_2253.py diff --git a/django-eerf/eerfapp/models.py b/python/serf/models.py similarity index 99% rename from django-eerf/eerfapp/models.py rename to python/serf/models.py index d9df4de..f491ac3 100644 --- a/django-eerf/eerfapp/models.py +++ b/python/serf/models.py @@ -2,7 +2,6 @@ # import django.utils.timezone import datetime import numpy as np -from eerfhelper import feature_functions # ========================= @@ -359,6 +358,7 @@ def extend_stop_time(self): self.stop_time = new_time if False: # Remove code to calculate features. + from serf.tools.features import hreflex_features as feature_functions def recalculate_child_feature_values(self): # REcalculate implies we want to calculate using period's details. if self.span_type == 'period': @@ -369,7 +369,7 @@ def calculate_all_features(self, refdatum=None): return [self.calculate_value_for_feature_name(dfv.feature_type.name, refdatum=refdatum) for dfv in self._feature_values] def calculate_value_for_feature_name(self, fname, refdatum=None): - # import EERF.APIextension.feature_functions + # import SERF.APIextension.feature_functions fxn = getattr(feature_functions, fname) # pulls the name of the function from the feature_functions module. 
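+        # e.g. fname == 'MR_aaa' resolves to feature_functions.MR_aaa (defined in
+        # serf.tools.features.hreflex_features) and is invoked below as
+        # fxn(self, refdatum=refdatum).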
return self.update_dfv(fname, fxn(self, refdatum=refdatum)) @@ -395,7 +395,6 @@ def set_data(self, values): self.dat_array = values self.n_channels, self.n_samples = values.shape self.save() - data = property(get_data, set_data) diff --git a/python/serf/scripts/Depth_Process.py b/python/serf/scripts/Depth_Process.py new file mode 100644 index 0000000..87c7639 --- /dev/null +++ b/python/serf/scripts/Depth_Process.py @@ -0,0 +1,291 @@ +import time +import json +import numpy as np +from cerebuswrapper import CbSdkConnection +from pylsl import stream_inlet, resolve_byprop +from django.utils import timezone +from qtpy.QtCore import QSharedMemory +from serf.tools.db_wrap import DBWrapper + + +SIMOK = False +SAMPLINGGROUPS = ["0", "500", "1000", "2000", "10000", "30000"] # , "RAW"] RAW broken in cbsdk + + +class NSPBufferWorker: + + def __init__(self): + self.current_depth = -20.000 + + # try to resolve LSL stream + self.depth_inlet = None + self.resolve_stream() + + # DB wrapper + self.db_wrapper = DBWrapper() + + # shared memory object to receive kill signal + self.shared_memory = QSharedMemory() + self.shared_memory.setKey("Depth_Process") + + # cbSDK; connect using default parameters + self.cbsdk_conn = CbSdkConnection(simulate_ok=False) + self.cbsdk_conn.connect() + + # neural data buffer + self.group_info = self.cbsdk_conn.get_group_config(SAMPLINGGROUPS.index("30000")) + self.n_chan = len(self.group_info) + + # Default values + self.procedure_id = None + self.buffer_length = 6 * 30000 + self.sample_length = 4 * 30000 + self.validity_threshold = [self.sample_length * .9] * self.n_chan + self.threshold = [False] * self.n_chan + self.settings = [] + self.start_time = timezone.now() + + # process settings + if self.shared_memory.attach(QSharedMemory.ReadWrite): + _, settings = self.read_shared_memory() + + if settings != '': + self.process_settings(settings) + + # loop + self.is_running = True + else: + self.is_running = False + + def read_shared_memory(self): + if self.shared_memory.isAttached(): + self.shared_memory.lock() + signal = self.shared_memory.data() + kill_sig = np.frombuffer(signal[-1], dtype=np.bool) + settings = ''.join([x.decode('utf-8') for x in signal[1:-1] if x != b'\x00']) + # clear shared_memory but + # leave the first byte unchanged because this is the output byte + self.shared_memory.data()[1:] = np.zeros((self.shared_memory.size()-1,), dtype=np.int8).tobytes() + self.shared_memory.unlock() + else: + kill_sig = True + settings = '' + return kill_sig, settings + + def write_shared_memory(self, in_use_done): + # The output is 8bit integer: + # -1 : Recording + # 0 : NSP not recording + # 1 : Done + if self.shared_memory.isAttached(): + self.shared_memory.lock() + self.shared_memory.data()[0] = np.array([in_use_done], dtype=np.int8).tobytes() + self.shared_memory.unlock() + + def process_settings(self, sett_str): + # process inputs + sett_dict = json.loads(sett_str) + sett_keys = sett_dict.keys() + + if 'procedure_id' in sett_keys: + self.reset_procedure(sett_dict['procedure_id']) + + if 'buffer_length' in sett_keys: + # TODO: not hard-code the sampling rate? 
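+            # For illustration, a hypothetical settings payload (as serialized by
+            # ProcessWrapper.send_settings) could look like:
+            #   {"procedure_id": 1,
+            #    "buffer_length": "6.0", "sample_length": "4.0",
+            #    "electrode_settings": {"Chan01": {"threshold": true, "validity": 90.0}}}
+            # buffer/sample lengths are seconds (scaled by the 30 kHz rate below) and
+            # validity is a percentage of the sample length.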
+ self.buffer_length = int(float(sett_dict['buffer_length']) * 30000) + self.sample_length = int(float(sett_dict['sample_length']) * 30000) + self.reset_buffer() + + if 'electrode_settings' in sett_keys: + for ii, info in enumerate(self.group_info): + label = info['label'].decode('utf-8') + if label in sett_dict['electrode_settings'].keys(): + self.threshold[ii] = bool(sett_dict['electrode_settings'][label]['threshold']) + self.validity_threshold[ii] = \ + float(sett_dict['electrode_settings'][label]['validity']) / 100 * self.sample_length + + def reset_procedure(self, proc_id): + self.procedure_id = proc_id + self.db_wrapper.select_procedure(self.procedure_id) + + def reset_buffer(self): + self.buffer = np.zeros((self.n_chan, self.buffer_length), dtype=np.int16) + self.buffer_idx = 0 + # for each channel we will keep a bool array whether each sample point is valid or not + # when a condition is met to trigger sending the sample to the DB we will pick the window + # with highest validity count. + self.validity = np.zeros((self.n_chan, self.buffer_length), dtype=bool) + self.valid_idx = (0, 0) + self.update_buffer_status = True + + def clear_buffer(self): + self.buffer.fill(0) + self.buffer_idx = 0 + + self.validity.fill(False) + # list of tuples: (index of validity value, value) + # saves the index with largest validity across all channels + self.valid_idx = (0, 0) + + self.update_buffer_status = True + self.start_time = timezone.now() + + def resolve_stream(self): + # will register to LSL stream to read electrode depth + info = resolve_byprop('source_id', 'depth1214', timeout=1) + if len(info) > 0: + self.depth_inlet = stream_inlet(info[0]) + sample = self.depth_inlet.pull_sample(0) + # If new sample + if sample[0]: + self.current_depth = sample[0][0] + + def run_buffer(self): + while self.is_running: + # check for kill signal + kill_sig, new_settings = self.read_shared_memory() + + if kill_sig: + self.is_running = False + continue + + if new_settings != '': + self.process_settings(new_settings) + + # collect NSP data, regardless of recording status to keep cbsdk buffer empty + # data is a list of lists. + # 1st level is a list of channels + # 2nd level is a list [chan_id, np.array(data)] + data = self.cbsdk_conn.get_continuous_data() + rec_status = self.cbsdk_conn.get_recording_state() + + if not rec_status: + self.write_shared_memory(0) + + # only process the NSP data if Central is recording + elif data and self.update_buffer_status: + + # all data segments should have the same length, so first check if we run out of buffer space + data_length = data[0][1].shape[0] + if (self.buffer_idx + data_length) >= self.buffer_length: + # if we run out of buffer space before data has been sent to the DB few things could have gone + # wrong: + # - data in buffer is not good enough + # - the new data chunk is larger than the difference between buffer and sample length + # (e.g. 6s buffer and 4s sample, if the current buffer has 3s of data and it receives a 4s + # long chunk then the buffer would overrun, and still not have enough data to send to DB. + # Although unlikely in real-life, it happened during debugging.) + + # trim data to only fill the buffer, discarding the rest + # TODO: is this the optimal solution? Slide buffer instead? 
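+                    # (hypothetical alternative: slide the window with np.roll, e.g.
+                    #   self.buffer = np.roll(self.buffer, -n_overflow, axis=1)
+                    # where n_overflow is the excess sample count; chunk-trimming is
+                    # kept here for simplicity)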
+ data_length = self.buffer_length - self.buffer_idx + + # continue to validate received data + for chan_idx, (chan, values) in enumerate(data): + + if data_length > 0: + # Validate data + valid = self.validate_data_sample(values[:data_length]) + + # append data to buffer + self.buffer[chan_idx, + self.buffer_idx:self.buffer_idx + data_length] = values[:data_length] + + self.validity[chan_idx, + self.buffer_idx:self.buffer_idx + data_length] = valid + + # increment buffer index, all data segments should have same length, if they don't, will match the first + # channel + self.buffer_idx += data_length + + # check if data length > sample length + if self.buffer_idx >= self.sample_length: + + # compute total validity of last sample_length and if > threshold, send to DB + sample_idx = self.buffer_idx - self.sample_length + + temp_sum = [np.sum(x[sample_idx:self.buffer_idx]) for x in self.validity] + + # check if validity is better than previous sample, if so, store it + if np.sum(temp_sum) > self.valid_idx[1]: + self.valid_idx = (sample_idx, np.sum(temp_sum)) + + if all(x >= y for x, y in zip(temp_sum, self.validity_threshold)) or \ + self.buffer_idx >= self.buffer_length: + self.send_to_db() + + # check for new depth + # At this point, the individual channels have either been sent to the DB or are still collecting waiting for + # either of the following conditions: acquire sufficient data (i.e. sample_length) or acquire sufficiently + # clean data (i.e. validity_threshold). If the channel is still acquiring data but has sufficiently long + # segments, we will send the cleanest segment to the DB (i.e. valid_idx). + if not self.depth_inlet: + self.resolve_stream() + else: + sample = self.depth_inlet.pull_sample(0) + # If new sample + if sample[0]: + # New depth + if sample[0][0] != self.current_depth: + # check whether the channels are still acquiring data + # it can be because they have insufficient samples or because the samples do not have a high + # enough validity value. If this is the case, send the best one to the DB, even if possibly + # corrupted + if self.update_buffer_status: + self.send_to_db() + + self.clear_buffer() + self.current_depth = sample[0][0] + + # only if recording + if rec_status: + self.write_shared_memory(-1) + + time.sleep(.010) + + def send_to_db(self): + # if we actually have a computed validity (i.e. segment is long enough) + if self.valid_idx[1] != 0: + # the info that needs to be sent the DB_wrapper is: + # Datum: + # - subject_id + # - is_good : to be determined by validity values + # - start_time / stop_time ? + # Datum Store: + # - channel_labels : from group_info + # - erp : actual data + # - n_channels and n_samples : determined by data size + # - x_vec: time ? 
+ # DatumDetailValue: + # - detail_type: depth (fetch from DetailType + # - value: depth value + self.db_wrapper.create_depth_datum(depth=self.current_depth, + data=self.buffer[:, + self.valid_idx[0]:self.valid_idx[0]+self.sample_length], + is_good=np.array([x >= y for x, y in zip( + np.sum(self.validity[:, self.valid_idx[0]: + self.valid_idx[0] + self.sample_length], axis=1), + self.validity_threshold)], dtype=np.bool), + group_info=self.group_info, + start_time=self.start_time, + stop_time=timezone.now()) + + self.write_shared_memory(1) + + self.update_buffer_status = False + + @staticmethod + def validate_data_sample(data): + # TODO: implement other metrics + # SUPER IMPORTANT: when cbpy returns an int16 value, it can be -32768, however in numpy: + # np.abs(-32768) = -32768 for 16 bit integers since +32768 does not exist. + # We therefore can't use the absolute value for the threshold. + threshold = 30000 # arbitrarily set for now + validity = np.array([-threshold < x < threshold for x in data]) + + return validity + + +if __name__ == '__main__': + worker = NSPBufferWorker() + worker.run_buffer() diff --git a/python/serf/scripts/Features_Process.py b/python/serf/scripts/Features_Process.py new file mode 100644 index 0000000..ad7be4b --- /dev/null +++ b/python/serf/scripts/Features_Process.py @@ -0,0 +1,100 @@ +import time +import numpy as np +import json +from qtpy.QtCore import QSharedMemory +from serf.tools.db_wrap import DBWrapper + + +class FeaturesWorker: + def __init__(self): + + # DB wrapper + self.db_wrapper = DBWrapper() + + # attach to shared memory and read settings + # shared memory object to receive kill signal + self.shared_memory = QSharedMemory() + self.shared_memory.setKey("Features_Process") + + self.procedure_id = None + self.settings = [] + self.all_datum_ids = [] + self.gt = 0 # fetch datum ids greater than this value + + # process settings + if self.shared_memory.attach(QSharedMemory.ReadWrite): + # loop + self.is_running = True + else: + self.is_running = False + + def process_settings(self, sett_str): + # process inputs + sett_dict = json.loads(sett_str) + if 'procedure_id' in sett_dict.keys(): + self.reset_procedure(sett_dict['procedure_id']) + + if 'features' in sett_dict.keys(): + self.reset_features(sett_dict['features']) + + def reset_procedure(self, proc_id): + self.procedure_id = proc_id + self.db_wrapper.select_procedure(self.procedure_id) + self.reset_datum() + + def reset_features(self, feats): + self.db_wrapper.select_features(feats) + self.reset_datum() + + def reset_datum(self): + # we will list all datum for the subject and all feature types that match the settings + self.all_datum_ids = [] + self.gt = 0 + + def read_shared_memory(self): + if self.shared_memory.isAttached(): + self.shared_memory.lock() + signal = self.shared_memory.data() + kill_sig = np.frombuffer(signal[-1], dtype=np.bool) + settings = ''.join([x.decode('utf-8') for x in signal[:-1] if x != b'\x00']) + # clear shared_memory + self.shared_memory.data()[:] = np.zeros((self.shared_memory.size(),), dtype=np.int8).tobytes() + self.shared_memory.unlock() + else: + kill_sig = True + settings = '' + return kill_sig, settings + + def run_check(self): + while self.is_running: + # check for kill signal or new settings + kill_sig, new_settings = self.read_shared_memory() + + if kill_sig: + self.is_running = False + continue + + if new_settings != '': + self.process_settings(new_settings) + + new_datum = self.db_wrapper.list_all_datum_ids(gt=self.gt) + + if len(new_datum) > 0: + 
self.all_datum_ids += new_datum + + if len(self.all_datum_ids) > 0: + # get oldest data and check if all features have been computed + # in case we're stuck with a datum whose feature can't compute, we + # want to keep the largest datum id. + self.gt = max(self.all_datum_ids + [self.gt]) + d_id = self.all_datum_ids.pop(0) + if not self.db_wrapper.check_and_compute_features(d_id): + # re append at the end of the list? + self.all_datum_ids.append(d_id) + time.sleep(0.25) # test to slow down process to decrease HDD load + # time.sleep(0.1) + + +if __name__ == '__main__': + worker = FeaturesWorker() + worker.run_check() diff --git a/django-eerf/eerfhelper/utils/MonitorNewTrials.pyw b/python/serf/scripts/MonitorNewTrials.pyw similarity index 60% rename from django-eerf/eerfhelper/utils/MonitorNewTrials.pyw rename to python/serf/scripts/MonitorNewTrials.pyw index 3466f54..3a961ad 100644 --- a/django-eerf/eerfhelper/utils/MonitorNewTrials.pyw +++ b/python/serf/scripts/MonitorNewTrials.pyw @@ -7,7 +7,7 @@ import os sys.path.append(os.path.abspath('d:/tools/eerf/python/eerf')) os.environ.setdefault("DJANGO_SETTINGS_MODULE", "eerf.settings") from eerfd.models import * -#from eerfx.online import * +# from eerfx.online import * #=============================================================================== # #http://stackoverflow.com/questions/3346124/how-do-i-force-django-to-ignore-any-caches-and-reload-data @@ -15,12 +15,12 @@ from eerfd.models import * # @transaction.commit_manually #=============================================================================== + class App: def __init__(self, master): - - #master is the root + # master is the root self.frame = master - plot_frame=Frame(self.frame) + plot_frame = Frame(self.frame) plot_frame.pack(side=TOP, fill=X) pb_frame = Frame(self.frame) pb_frame.pack(side=TOP, fill=X) @@ -40,61 +40,64 @@ class App: def update_plot(self): old_id = self.last_id if self.last_id else 0 - #transaction.commit() + # transaction.commit() query = Datum.objects.filter(datum_id__gt=old_id).filter(span_type='trial').order_by('-datum_id') query.update() - new_trial = query.all()[0] if query.count()>0 else None + new_trial = query.all()[0] if query.count() > 0 else None has_store = False - try: has_store = np.any(new_trial.store) - except: pass - if np.any(new_trial) and has_store: #If new trial, add the trial to the plot - tr_store=new_trial.store + try: + has_store = np.any(new_trial.store) + except: + pass + if np.any(new_trial) and has_store: # If new trial, add the trial to the plot + tr_store = new_trial.store fig = self.fig x = tr_store.x_vec y = tr_store.data - if not isinstance(y,basestring): + if not isinstance(y, basestring): nchans = y.shape[0] - for cc in range(0,nchans): - y[cc,:]=y[cc,:]-np.mean(y[cc,x<-5]) - x_bool = np.logical_and(x>=-10,x<=100) - x=x[x_bool] - #y=y[chan_bool,x_bool] - y=y[:,x_bool] - - + for cc in range(0, nchans): + y[cc, :] = y[cc, :] - np.mean(y[cc, x < -5]) + x_bool = np.logical_and(x >= -10, x <= 100) + x = x[x_bool] + # y = y[chan_bool,x_bool] + y = y[:, x_bool] + naxes = np.size(fig.axes) - while naxes=4])) - y_max = max(y_max, max(temp_data[x>=4])) + y_min = min(y_min, min(temp_data[x >= 4])) + y_max = max(y_max, max(temp_data[x >= 4])) this_ax.lines[-1].set_linewidth(3.0) - #TODO: Scale y-axis to be +/- 10% around displayed trials (excluding stim artifact) + # TODO: Scale y-axis to be +/- 10% around displayed trials (excluding stim artifact) y_margin = 0.1 * np.abs((y_max - y_min)) - 
this_ax.set_ylim(y_min-y_margin,y_max+y_margin) - if cc==nchans-1:this_ax.set_xlabel('TIME AFTER STIM (ms)') + this_ax.set_ylim(y_min-y_margin, y_max + y_margin) + if cc == nchans-1: + this_ax.set_xlabel('TIME AFTER STIM (ms)') this_ax.set_ylabel('AMPLITUDE (uv)') this_ax.set_title(tr_store.channel_labels[cc]) - #fig.tight_layout() #tight_layout shrinks the plots + # fig.tight_layout() # tight_layout shrinks the plots fig.canvas.draw() self.last_id = new_trial.datum_id self.frame.after(500, self.update_plot) - + + if __name__ == "__main__": - #engine = create_engine("mysql://root@localhost/eerat", echo=False)#echo="debug" gives a ton. - #Session = scoped_session(sessionmaker(bind=engine, autocommit=True)) - root = Tk() #Creating the root widget. There must be and can be only one. + # engine = create_engine("mysql://root@localhost/eerat", echo=False)#echo="debug" gives a ton. + # Session = scoped_session(sessionmaker(bind=engine, autocommit=True)) + root = Tk() # Creating the root widget. There must be and can be only one. app = App(root) - root.mainloop() #Event loops \ No newline at end of file + root.mainloop() # Event loops diff --git a/django-eerf/eerfapp/migrations/__init__.py b/python/serf/scripts/__init__.py similarity index 100% rename from django-eerf/eerfapp/migrations/__init__.py rename to python/serf/scripts/__init__.py diff --git a/python/serf/scripts/djangoshell.py b/python/serf/scripts/djangoshell.py new file mode 100644 index 0000000..fbce31b --- /dev/null +++ b/python/serf/scripts/djangoshell.py @@ -0,0 +1,12 @@ +def main(): + from django.core.management import call_command + from serf.boot_django import boot_django + + # call the django setup routine + boot_django() + + call_command("shell") + + +if __name__ == '__main__': + main() diff --git a/python/serf/scripts/makemigrations.py b/python/serf/scripts/makemigrations.py new file mode 100644 index 0000000..0991dfb --- /dev/null +++ b/python/serf/scripts/makemigrations.py @@ -0,0 +1,12 @@ +def main(): + from django.core.management import call_command + from serf.boot_django import boot_django + + # call the django setup routine + boot_django() + + call_command("makemigrations", "serf") + + +if __name__ == '__main__': + main() diff --git a/python/serf/scripts/migrate.py b/python/serf/scripts/migrate.py new file mode 100644 index 0000000..96b3ffa --- /dev/null +++ b/python/serf/scripts/migrate.py @@ -0,0 +1,12 @@ +def main(): + from django.core.management import call_command + from serf.boot_django import boot_django + + # call the django setup routine + boot_django() + + call_command("migrate") + + +if __name__ == '__main__': + main() diff --git a/django-eerf/eerfapp/static/eerfapp/raw_erp.js b/python/serf/static/eerfapp/raw_erp.js similarity index 97% rename from django-eerf/eerfapp/static/eerfapp/raw_erp.js rename to python/serf/static/eerfapp/raw_erp.js index 6536222..da88139 100644 --- a/django-eerf/eerfapp/static/eerfapp/raw_erp.js +++ b/python/serf/static/eerfapp/raw_erp.js @@ -1,147 +1,147 @@ -$(document).ready(function () { - - var erp_options = { - series: { - lines: { - show: true, - lineWidth: 1 - }, - points: { show: false }, - shadowSize: 0 - }, - legend: { show: false }, - xaxis: { tickDecimals: 0 }, - selection: { mode: "x" } - }; - - //Download the data and plot the ERPs. 
- var plot_erps = function() { - $.get("http://127.0.0.1:8000/eerfapp/subject/" + subject_pk + "/erp_data/", {}, function(response) { - response = JSON.parse(response); - - //Empty the wrappers - $('div.channel_wrapper').empty(); - $('div.erp_wrapper').empty(); - - var series_hook = function(plot, canvascontext, series) { - if (series.label==window.newest_label){ - series.color = "rgb(255,0,0)"; - series.lines.lineWidth = 3; //TODO: Make this configurable - series.shadowSize=1; //TODO: Make this configurable - } - }; - - var opt_extend = { - hooks: { drawSeries: [series_hook] }, - }; - if ($("#zoom").attr("checked")==="checked" & parseFloat($('input.first_detail').val())){ - $.extend(opt_extend, { - xaxis: { - min: parseFloat($('input.first_detail').val()), - max: parseFloat($('input.second_detail').val()) - } - }); - } - - for (var i=0; i'); - new_div.addClass(response.channel_labels[i]); - $('div.erp_wrapper').append(new_div); - - //Hook into plot so we can modify the (last?) series' options - var ch_data = response.data[response.channel_labels[0]]; - window.newest_label = ch_data[ch_data.length-1].label; - - //Get min and max values for the y-axis in a certain range. - var miny = Infinity; - var maxy = -Infinity; - var yst = 5.0; - for (var tt=0; tt= yst) { - miny = Math.min(miny, trial_data[ss][1]); - maxy = Math.max(maxy, trial_data[ss][1]); - } - } - /* - miny = trial_data.reduce(function(previousValue, currentValue, index, array){ - return currentValue[0]>= yst ? Math.min(previousValue, currentValue[1]) : previousValue; - }, miny); - maxy = trial_data.reduce(function(previousValue, currentValue, index, array){ - return currentValue[0] >= yst ? Math.max(previousValue, currentValue[1]) : previousValue; - }, maxy);*/ - } - - $.extend(opt_extend, { - yaxis: {min: miny, max: maxy} - }); - - var plot = $.plot(new_div, - response.data[response.channel_labels[i]], - $.extend(erp_options, opt_extend) - ); - - //Checkboxes for channel labels. - var new_hidden = $('') - $('div.channel_wrapper').append(new_hidden); - var new_check = $('' + response.channel_labels[i] + ''); - new_check.addClass(response.channel_labels[i]); - $('div.channel_wrapper').append(new_check); - $("input."+response.channel_labels[i]).change(function(el){ - $('div.'+el.srcElement.className).toggle($('input.'+el.srcElement.className)[0].checked); - }); - - //Binding for clicking on the plot - new_div.bind("plotselected", function (event, ranges) { - selected_range = [ranges.xaxis.from, ranges.xaxis.to]; - //$("#selection").text(ranges.xaxis.from.toFixed(1) + " to " + ranges.xaxis.to.toFixed(1)); - $('input.first_detail').val(selected_range[0].toFixed(1)); - $('input.second_detail').val(selected_range[1].toFixed(1)); - $('input.channel_detail').val(event.target.className); - }); - }; - - //Get the session values and uncheck boxes that were saved as unchecked. - $.get("http://127.0.0.1:8000/eerfapp/my_session/", {}, function(result) { - result = JSON.parse(result); - var checkboxes = $('div.channel_wrapper').children('input:checkbox'); - for (var i=0; i'+response[j]+''); - $('select.'+selector_classes[i])[0].appendChild(new_option[0]); - } - } - }); - - //When the selectors change, the name of the input boxes should change. - for (var i=0; i'); + new_div.addClass(response.channel_labels[i]); + $('div.erp_wrapper').append(new_div); + + //Hook into plot so we can modify the (last?) 
series' options + var ch_data = response.data[response.channel_labels[0]]; + window.newest_label = ch_data[ch_data.length-1].label; + + //Get min and max values for the y-axis in a certain range. + var miny = Infinity; + var maxy = -Infinity; + var yst = 5.0; + for (var tt=0; tt= yst) { + miny = Math.min(miny, trial_data[ss][1]); + maxy = Math.max(maxy, trial_data[ss][1]); + } + } + /* + miny = trial_data.reduce(function(previousValue, currentValue, index, array){ + return currentValue[0]>= yst ? Math.min(previousValue, currentValue[1]) : previousValue; + }, miny); + maxy = trial_data.reduce(function(previousValue, currentValue, index, array){ + return currentValue[0] >= yst ? Math.max(previousValue, currentValue[1]) : previousValue; + }, maxy);*/ + } + + $.extend(opt_extend, { + yaxis: {min: miny, max: maxy} + }); + + var plot = $.plot(new_div, + response.data[response.channel_labels[i]], + $.extend(erp_options, opt_extend) + ); + + //Checkboxes for channel labels. + var new_hidden = $('') + $('div.channel_wrapper').append(new_hidden); + var new_check = $('' + response.channel_labels[i] + ''); + new_check.addClass(response.channel_labels[i]); + $('div.channel_wrapper').append(new_check); + $("input."+response.channel_labels[i]).change(function(el){ + $('div.'+el.srcElement.className).toggle($('input.'+el.srcElement.className)[0].checked); + }); + + //Binding for clicking on the plot + new_div.bind("plotselected", function (event, ranges) { + selected_range = [ranges.xaxis.from, ranges.xaxis.to]; + //$("#selection").text(ranges.xaxis.from.toFixed(1) + " to " + ranges.xaxis.to.toFixed(1)); + $('input.first_detail').val(selected_range[0].toFixed(1)); + $('input.second_detail').val(selected_range[1].toFixed(1)); + $('input.channel_detail').val(event.target.className); + }); + }; + + //Get the session values and uncheck boxes that were saved as unchecked. + $.get("http://127.0.0.1:8000/eerfapp/my_session/", {}, function(result) { + result = JSON.parse(result); + var checkboxes = $('div.channel_wrapper').children('input:checkbox'); + for (var i=0; i'+response[j]+''); + $('select.'+selector_classes[i])[0].appendChild(new_option[0]); + } + } + }); + + //When the selectors change, the name of the input boxes should change. + for (var i=0; i - {% csrf_token %} -
[unrecoverable hunk: the markup in this eerfapp HTML template diff was stripped by tag-removal during extraction; only Django {% csrf_token %} tags survive]
\ No newline at end of file diff --git a/django-eerf/eerfapp/templates/eerfapp/recruitment_curve.html b/python/serf/templates/eerfapp/recruitment_curve.html similarity index 100% rename from django-eerf/eerfapp/templates/eerfapp/recruitment_curve.html rename to python/serf/templates/eerfapp/recruitment_curve.html diff --git a/django-eerf/eerfapp/templates/eerfapp/subject_detail.html b/python/serf/templates/eerfapp/subject_detail.html similarity index 100% rename from django-eerf/eerfapp/templates/eerfapp/subject_detail.html rename to python/serf/templates/eerfapp/subject_detail.html diff --git a/django-eerf/eerfapp/templates/eerfapp/subject_import.html b/python/serf/templates/eerfapp/subject_import.html similarity index 100% rename from django-eerf/eerfapp/templates/eerfapp/subject_import.html rename to python/serf/templates/eerfapp/subject_import.html diff --git a/django-eerf/eerfapp/templates/eerfapp/subject_list.html b/python/serf/templates/eerfapp/subject_list.html similarity index 100% rename from django-eerf/eerfapp/templates/eerfapp/subject_list.html rename to python/serf/templates/eerfapp/subject_list.html diff --git a/django-eerf/eerfapp/templates/eerfapp/subject_view_data.html b/python/serf/templates/eerfapp/subject_view_data.html similarity index 100% rename from django-eerf/eerfapp/templates/eerfapp/subject_view_data.html rename to python/serf/templates/eerfapp/subject_view_data.html diff --git a/django-eerf/eerfapp/templates/eerfapp/trial_settings.html b/python/serf/templates/eerfapp/trial_settings.html similarity index 100% rename from django-eerf/eerfapp/templates/eerfapp/trial_settings.html rename to python/serf/templates/eerfapp/trial_settings.html diff --git a/django-eerf/eerfapp/tests.py b/python/serf/tests.py similarity index 100% rename from django-eerf/eerfapp/tests.py rename to python/serf/tests.py diff --git a/django-eerf/eerfhelper/__init__.py b/python/serf/tools/__init__.py similarity index 100% rename from django-eerf/eerfhelper/__init__.py rename to python/serf/tools/__init__.py diff --git a/python/serf/tools/db_wrap.py b/python/serf/tools/db_wrap.py new file mode 100644 index 0000000..014fdb5 --- /dev/null +++ b/python/serf/tools/db_wrap.py @@ -0,0 +1,412 @@ +import os +import json +import inspect +import numpy as np +from qtpy.QtCore import QProcess, QSharedMemory +import serf.tools.features as features +from serf.tools.features.base import FeatureBase +from serf.tools.utils.misc_functions import * +from serf.tools.utils._shared import singleton +import serf.scripts + +# django app management +from django.forms.models import model_to_dict +from django.core.exceptions import ObjectDoesNotExist +from django.utils import timezone +import django + + +# call the django setup routine +from serf.boot_django import boot_django +boot_django() + +from serf.models import * + + +@singleton +class DBWrapper(object): + def __init__(self): + self.current_subject = None + self.current_procedure = None + + self.active_features = [] # list of feature class instances + self.all_features = dict() + self.list_all_features() + + # SUBJECT ========================================================================================================== + def load_or_create_subject(self, subject_details): + # validate that the subject doesn't already exist + if subject_details['id'] == '': + print('You must enter a subject id.') + return -1 + + self.current_subject, created = Subject.objects.get_or_create(id=subject_details['id']) + + if created: + # add details + for key, val in 
subject_details.items(): + # subject_id is an auto-field + if hasattr(self.current_subject, key) and key != 'subject_id': + setattr(self.current_subject, key, val) + self.current_subject.save() + else: + print('Found existing subject entry. Loading it.') + + return self.current_subject.subject_id + + # Fetching from DB + def select_subject(self, sub_id): + try: + self.current_subject = Subject.objects.get(subject_id=sub_id) + except ObjectDoesNotExist: + self.current_subject = None + + @staticmethod + def list_all_subjects(): + return Subject.objects.order_by('-subject_id').values_list('id', flat=True) + + @staticmethod + def load_subject_details(_id): + try: + return model_to_dict(Subject.objects.get(id=_id)) + except ObjectDoesNotExist: + return model_to_dict(Subject()) + + # PROCEDURE ======================================================================================================== + def load_or_create_procedure(self, procedure_details): + # validate that the subject doesn't already exist + if not procedure_details['subject_id']: + print('You must enter a subject id.') + return -1 + + # subject field needs to be an instance of Subject(), if not, remove key from dict. + # We do have the subject id field set. + if 'subject' in procedure_details.keys(): + if not isinstance(procedure_details['subject'], Subject): + del procedure_details['subject'] + + self.current_procedure, created = Procedure.objects.get_or_create(**procedure_details) + + if not created: + print('Loading existing entry.') + + return self.current_procedure.procedure_id + + def select_procedure(self, proc_id): + try: + self.current_procedure = Procedure.objects.get(procedure_id=proc_id) + except ObjectDoesNotExist: + self.current_procedure = None + + @staticmethod + def list_all_procedures(s_id): + return Procedure.objects.filter(subject=s_id).order_by('date') + + @staticmethod + def load_procedure_details(p_id, fields=None, exclude=None): + # can't use model_to_dict because: + # 1- it skips non editable fields, such as NPArrayBlobField (i.e. 
BinaryField)
+        # 2- would return the binary array, that would still need to be converted to np.array
+        try:
+            curr_proc = model_to_dict(Procedure.objects.get(procedure_id=p_id), fields=fields, exclude=exclude)
+        except ObjectDoesNotExist:
+            curr_proc = model_to_dict(Procedure(), fields=fields, exclude=exclude)
+
+        return curr_proc
+
+    # DATUM ============================================================================================================
+    # return datum ids for values greater than gt
+    # matches current procedure
+    def list_all_datum_ids(self, gt=0):
+        return Datum.objects.filter(procedure=self.current_procedure,
+                                    span_type='period',
+                                    datum_id__gt=gt).order_by('number').values_list('datum_id', flat=True)
+
+    def list_channel_labels(self):
+        labels = set([])
+        for dat in self.list_all_datum_ids():
+            try:
+                labels.update(Datum.objects.get(datum_id=dat).store.channel_labels)
+            except ObjectDoesNotExist:
+                continue
+        return labels
+
+    def load_depth_data(self, chan_lbl='None', gt=0, do_hp=True, return_uV=True):
+        all_data = self.list_all_datum_ids(gt=gt)
+
+        depth_detail_id = DetailType.objects.filter(name='depth').values_list('detail_type_id', flat=True)
+
+        out_info = dict()
+        for d_id in all_data:
+            try:
+                datum = Datum.objects.get(datum_id=d_id)
+                ddv = datum._detail_values.get(detail_type_id=depth_detail_id[0])
+                if ddv.value:
+                    depth_value = float(ddv.value)
+                else:
+                    continue
+
+                # list.index() never returns -1; it raises ValueError when the label is
+                # missing, which is caught below and the datum is skipped.
+                chan_id = datum.store.channel_labels.index(chan_lbl)
+                chan_data = datum.store.get_data()[chan_id, :]
+
+                if return_uV:
+                    chan_data = int_to_voltage(chan_data)
+
+                if do_hp:
+                    chan_data = high_pass_filter(chan_data)
+
+                out_info[datum.datum_id] = [depth_value, chan_data, datum.is_good[chan_id]]
+
+            except (ObjectDoesNotExist, IndexError, ValueError):
+                continue
+
+        return out_info
+
+    # Saving to DB
+    def create_depth_datum(self, depth=0.000, data=None, is_good=np.array([True], dtype=np.bool), group_info=None,
+                           start_time=None, stop_time=None):
+        # resolve timestamps at call time; a timezone.now() default argument would be
+        # evaluated once at import time and frozen
+        if start_time is None:
+            start_time = timezone.now()
+        if stop_time is None:
+            stop_time = timezone.now()
+        if self.current_procedure:
+            dt = Datum()
+            dt.procedure = self.current_procedure
+            dt.is_good = is_good
+            dt.span_type = 'period'
+            if not timezone.is_aware(start_time):
+                start_time = start_time.replace(tzinfo=timezone.utc)
+            dt.start_time = start_time
+            if not timezone.is_aware(stop_time):
+                stop_time = stop_time.replace(tzinfo=timezone.utc)
+            dt.stop_time = stop_time
+            dt.save()
+
+            # add datum detail values
+            dt.update_ddv('depth', depth)
+
+            ds = DatumStore()
+            ds.datum = dt
+            # channel labels need to be a list of strings
+            if type(group_info) is dict:
+                ds.channel_labels = [x['label'].decode('utf-8') for x in group_info]
+            elif type(group_info) is list:
+                if type(group_info[0]) is dict:
+                    ds.channel_labels = [x['label'].decode('utf-8') for x in group_info]
+                else:
+                    ds.channel_labels = group_info
+            ds.set_data(data)
+            ds.save()
+
+    # FEATURES =========================================================================================================
+    def list_all_features(self):  # lists files in the features directory and creates the DB entry if needed
+        # dictionary {category: [list of tuple (class name, class)]}
+        # list all the modules in features
+        modules = inspect.getmembers(features, inspect.ismodule)
+        for mod_name, mod in modules:
+            for cla in inspect.getmembers(mod, inspect.isclass):
+                if issubclass(cla[1], FeatureBase) and cla[1] != FeatureBase:
+                    if cla[1].category not in self.all_features.keys():
self.all_features[cla[1].category] = [] + + # check if already exist in DB + db_feature, created = FeatureType.objects.get_or_create(name=cla[1].name) + if created: + db_feature.description = cla[1].desc + db_feature.save() + self.all_features[cla[1].category].append(cla[1](db_feature.feature_type_id)) + + # to_select is a list of all the categories to process + def select_features(self, to_select): + # list of feature categories + for select in to_select: + # dict: {category: [(feature name, feature class),]} + if select in self.all_features.keys(): + self.active_features.extend(self.all_features[select]) + + def check_and_compute_features(self, datum_id): + output = False + + try: + datum = Datum.objects.get(datum_id=datum_id) + + for feat in self.active_features: + # check if already computed, featuretype gets created when listing all available features + feature_type = FeatureType.objects.get(feature_type_id=feat.db_id) + + data_feature_value = datum._feature_values.filter(feature_type=feature_type) + + if len(data_feature_value) == 0: # does not exist + # compute data + value, x_vec = feat.run(datum.store.get_data()) + + new_data_feature_value = DatumFeatureValue(datum=datum, + feature_type=feature_type) + new_data_feature_value.save() + + dfs = DatumFeatureStore( + dfv=new_data_feature_value, + # datum_store keeps channel labels as a ', ' separated string. Need to convert back to a + # list. If not, all characters get comma separated. + channel_labels=datum.store.channel_labels, + n_channels=datum.store.n_channels, + n_features=1, + x_vec=x_vec + ) + dfs.set_data(value) + + output = True + except ObjectDoesNotExist: + output = False + return output + + def load_features_data(self, category='DBS', chan_lbl='None', gt=0): + if category in self.all_features.keys(): + features = self.all_features[category] + + all_data = self.list_all_datum_ids(gt=gt) + detail_type_id = DetailType.objects.filter(name='depth').values_list('detail_type_id', flat=True) + + out_info = dict() + for id in all_data: + try: + datum = Datum.objects.get(datum_id=id) + + tmp_out = dict() + depth_str = datum._detail_values.filter(detail_type_id=detail_type_id[0]).\ + values_list('value', flat=True) + + if len(depth_str) > 0: + tmp_out['depth'] = float(depth_str[0]) + else: + continue + + for feat in features: + feat_type = FeatureType.objects.filter(name=feat.name).values_list('feature_type_id', flat=True) + feat_value = datum._feature_values.filter(feature_type_id=feat_type[0]) + chan_id = feat_value[0].store.channel_labels.index(chan_lbl) + + if chan_id != -1: + tmp_out[feat.name] = [feat_value[0].store.x_vec, + feat_value[0].store.get_data()[chan_id], + datum.is_good[chan_id]] + + # check if all features are present, if not stop here to avoid having data missing features + if all(x.name in tmp_out.keys() for x in features): + out_info[datum.datum_id] = dict(tmp_out) + else: + continue + except (ObjectDoesNotExist, IndexError) as e: + continue + + return out_info + + @staticmethod + def return_enums(model_name): + + if model_name.lower() == 'subject': + fields = Subject._meta.get_fields() + elif model_name.lower() == 'procedure': + fields = Procedure._meta.get_fields() + else: + fields = None + + out_dict = dict() + for x in fields: + if type(x) == EnumField: + out_dict[x.attname] = [choice[1] for choice in x.choices] + + return out_dict + + +class ProcessWrapper: + def __init__(self, process_name): + + # define process + self.process_name = process_name + + self.worker = QProcess() + 
self.worker.setProcessChannelMode(QProcess.ForwardedChannels) + + # The shared memory object will be used the send the process parameters and the kill signal (the last byte). + # Parameters will be a space separated string and the kill signal will be set to 1 when the process needs to + # terminate. + self.shared_memory = QSharedMemory() + self.shared_memory.setKey(self.process_name) + + self.manage_shared_memory() + + def manage_shared_memory(self): + # before starting the worker, we will check whether the named shared memory object exists and + # as a failsafe, we will send the kill signal to all attached processes. + if self.shared_memory.attach() or self.shared_memory.isAttached(): + # if attached means that the shared memory block already exists, so terminate process + self.shared_memory.lock() + self.shared_memory.data()[-1:] = memoryview(np.array([True], dtype=np.int8).tobytes()) + self.shared_memory.unlock() + # if shared memory is not attached and can't attach (i.e. doesn't exits) create it + elif not self.shared_memory.attach() and not self.shared_memory.isAttached(): + # going to be 4096 regardless of value, as long as < 4096 + self.shared_memory.create(4096) + + def send_settings(self, settings): + """ + Send settings to running process via QSharedMemory. + :param settings: + dictionary of settings e.g.: {'subject_id': int, 'features':['DBS', 'LFP']} + :return: + None + """ + # offset the bytes by 10 to avoid the last one (i.e. kill signal) + sett_str = json.dumps(settings) + # convert settings string into bytes + b_settings = sett_str.encode('utf-8') + len_b_settings = len(b_settings) + + if self.shared_memory.isAttached() and len_b_settings < 4086: + self.shared_memory.lock() + # clear to make sure we don't have leftovers + self.shared_memory.data()[:] = np.zeros((self.shared_memory.size(),), dtype=np.int8).tobytes() + self.shared_memory.data()[-(len_b_settings+10):-10] = b_settings + self.shared_memory.unlock() + + def start_worker(self): + # start process + if self.process_name != '': + + # make sure kill signal is off + self.shared_memory.lock() + self.shared_memory.data()[-1:] = memoryview(np.array([False]).tobytes()) + self.shared_memory.unlock() + + run_command = "python " + os.path.join(os.path.dirname(serf.scripts.__file__), + self.process_name + ".py ") + + self.worker.start(run_command) + + # function to properly terminate QProcess + def kill_worker(self): + if self.shared_memory.isAttached(): + self.shared_memory.lock() + self.shared_memory.data()[-1:] = memoryview(np.array([True]).tobytes()) + self.shared_memory.unlock() + else: + self.worker.kill() + + self.worker.waitForFinished() + # detach shared memory to destroy + self.shared_memory.detach() + + def worker_status(self): + # reads the stdout from the process. The script prints either 'in_use' or 'done' to show the current state of + # the depth recording. 
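+        # (note: the status byte is actually read from shared memory offset 0, not
+        # stdout; per the worker's write_shared_memory: -1 = recording, 0 = NSP not
+        # recording, 1 = done. -1 is also returned when the memory is not attached.)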
+ if self.shared_memory.isAttached(): + self.shared_memory.lock() + out = np.frombuffer(self.shared_memory.data()[0], dtype=np.int8)[0] + self.shared_memory.unlock() + else: + out = -1 + + return out + + def is_running(self): + return self.worker.state() != 0 diff --git a/python/serf/tools/features/__init__.py b/python/serf/tools/features/__init__.py new file mode 100644 index 0000000..e2bc2a7 --- /dev/null +++ b/python/serf/tools/features/__init__.py @@ -0,0 +1,22 @@ +""" +Any features can be added to the list, as long as they are defined as a class with the model: +class ClassName: + name = "ClassName" + desc = "Class description" + category = "DBS", "DL", or any custom value + + def __init__(self): + self.db_id = None # feature id within the db. populated at instantiation + + @staticmethod + def run(data): + return data, x_vec + +""" + +from . import dbs_features +from . import dl_features +from . import lfp_features +from . import spike_features + +__all__ = ['dbs_features', 'dl_features', 'lfp_features', 'spike_features'] diff --git a/python/serf/tools/features/base/FeatureBase.py b/python/serf/tools/features/base/FeatureBase.py new file mode 100644 index 0000000..1dd7c19 --- /dev/null +++ b/python/serf/tools/features/base/FeatureBase.py @@ -0,0 +1,13 @@ +class FeatureBase: + name = "FeatureBase" + desc = """ Base class for all features. """ + category = "Base" + + def __init__(self, db_id): + self.db_id = db_id + + def run(self, data): + """ + Needs to be defined in sub-class. + """ + return None diff --git a/python/serf/tools/features/base/__init__.py b/python/serf/tools/features/base/__init__.py new file mode 100644 index 0000000..affa46f --- /dev/null +++ b/python/serf/tools/features/base/__init__.py @@ -0,0 +1 @@ +from .FeatureBase import FeatureBase \ No newline at end of file diff --git a/python/serf/tools/features/dbs_features.py b/python/serf/tools/features/dbs_features.py new file mode 100644 index 0000000..0316ef1 --- /dev/null +++ b/python/serf/tools/features/dbs_features.py @@ -0,0 +1,164 @@ +import numpy as np +from serf.tools.utils.misc_functions import * +from scipy.signal import hilbert, decimate, filtfilt, iirnotch +from scipy.fft import fft, ifft, next_fast_len +from scipy.stats import linregress, chi2 +from serf.tools.features.base.FeatureBase import FeatureBase +from pytf import FilterBank +from mspacman.algorithm.pac_ import (pad, pac_mi) + +FS = 30000 + +BETABAND = [13, 30] +GAMMABAND = [60, 200] +MAXSEGLEN = 2**14 + + +# DBS Features +class BetaPower(FeatureBase): + name = "BetaPower" + desc = "Band-Pass filter in the beta 13-30Hz range then np.abs(Hilbert)**2." + category = "DBS" + + def run(self, data): + """ + input datum store erp field and returns the mean RMS for the segment + + :param data: + n_channel x n_samples numpy array a + :return: + n_channel x 1 RMS value + """ + out_data = np.zeros((data.shape[0], 1)) + for idx, dat in enumerate(data): + dat = int_to_voltage(dat) + + dat = band_pass_filter(dat, filt_order=4, bp=BETABAND) + + pwr = np.abs(hilbert(dat))**2 + + out_data[idx] = np.mean(pwr) + + return out_data, np.zeros((out_data.shape[1],)) + + +class NoiseRMS(FeatureBase): + name = "NoiseRMS" + desc = "Based on BlackRock's algorithm detailed in: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4332592/" + # The behavior is described for 2 seconds of data (i.e. 
100 mean-squared values of 600 samples = 60 000 samples) + # - Filter for spikes: usually HP 250Hz 4th order Butterworth (BlackRock digital filter) + # - Square all values + # - Average values in 100 windows of 600 samples + # - Discard the 5 lowest values and compute the sqrt of the average of the next 20 values + # Since we will have segments of varying length, we will use 20% of the resulting samples to compute the RMS and + # discard the lowest 5% + # :param data: Neural data segment to compute noise RMS on. It is a Channel x Sample np.array + # :return: RMS values for each channel and an array of 0s, one for each channel as the x_axis values""" + category = "DBS" + + def run(self, data): + """ + input datum store erp field and returns the mean RMS for the segment + + :param data: + n_channel x n_samples numpy array a + :return: + n_channel x 1 RMS value + """ + out_data = np.zeros((data.shape[0], 1)) + for idx, dat in enumerate(data): + dat = high_pass_filter(dat) + dat = np.square(dat) + + bin_size = 600 + n_bins = int(np.floor(dat.shape[0] / 600)) + bins = [np.mean(dat[x*bin_size:(x+1)*bin_size]) for x in range(n_bins)] + + # sort bins values + bins = np.sort(bins) + + # discard the first 5% samples and keep the following 20% + start_idx = int(0.05 * bins.shape[0]) + stop_idx = int(0.2 * bins.shape[0]) + start_idx + + avg = np.sqrt(np.mean(bins[start_idx:stop_idx])) + + # convert to voltage values + avg = int_to_voltage(avg) + + out_data[idx] = avg + + return out_data, np.zeros((out_data.shape[1],)) + + +class PAC(FeatureBase): + + name = "PAC" + desc = """ Using MSPACMAN algorithm to compute Beta to high-gamma PAC. """ + category = "DBS" + + def __init__(self, db_id): + super(PAC, self).__init__(db_id) + + # Create the filter banks + decimate_by = 30 + fpsize = BETABAND[1] - BETABAND[0] + 1 # number of frequencies for phase + fasize = np.int(np.round((GAMMABAND[1] - GAMMABAND[0]) / 10) + 1) # number of frequencies for amp + + # from Dvorak and Fenton, JNeurosciMeth, 2014: + # For accurate PAC estimation, standard PAC algorithms require amplitude filters with a bandwidth + # at least twice the modulatory frequency. The phase filters must be moderately narrow-band, especially + # when the modulatory rhythm is non-sinusoidal. The minimally appropriate analysis window is ∼10 s. + # As our highest beta band frequency is 30 Hz we set the gamma bandwidth to be 60Hz. + bw_p = 2 # band width phase + bw_a = 60 # band width amplitude + + # Get the phase-giving and amplitude-enveloping signals for comodulogram + fp = np.linspace(BETABAND[0], BETABAND[1], fpsize) # 13 - 30 Hz, 18 freqs; 1 Hz steps + fa = np.linspace(GAMMABAND[0], GAMMABAND[1], fasize) # 60 - 200 Hz, 15 steps; 10Hz steps + + fois_lo = np.asarray([(f - bw_p, f + bw_p) for f in fp]) + fois_hi = np.asarray([(f - bw_a, f + bw_a) for f in fa]) + + self.los = FilterBank(binsize=2 ** 14, + freq_bands=fois_lo, + order=2 ** 12, + sample_rate=FS, + decimate_by=decimate_by, + hilbert=True) + + self.his = FilterBank(binsize=2 ** 14, + freq_bands=fois_hi, + order=2 ** 12, + sample_rate=FS, + decimate_by=decimate_by, + hilbert=True) + + def run(self, data): + + """ + input datum store erp field and returns mean and peak MI value for Beta-high-gamma PAC. 
+
+
+class PAC(FeatureBase):
+
+    name = "PAC"
+    desc = """ Uses the MSPACMAN algorithm to compute beta to high-gamma PAC. """
+    category = "DBS"
+
+    def __init__(self, db_id):
+        super(PAC, self).__init__(db_id)
+
+        # Create the filter banks
+        decimate_by = 30
+        fpsize = BETABAND[1] - BETABAND[0] + 1  # number of frequencies for phase
+        fasize = int(np.round((GAMMABAND[1] - GAMMABAND[0]) / 10) + 1)  # number of frequencies for amp
+
+        # from Dvorak and Fenton, JNeurosciMeth, 2014:
+        # For accurate PAC estimation, standard PAC algorithms require amplitude filters with a bandwidth
+        # at least twice the modulatory frequency. The phase filters must be moderately narrow-band, especially
+        # when the modulatory rhythm is non-sinusoidal. The minimally appropriate analysis window is ~10 s.
+        # As our highest beta-band frequency is 30 Hz, we set the gamma bandwidth to 60 Hz.
+        bw_p = 2  # bandwidth, phase
+        bw_a = 60  # bandwidth, amplitude
+
+        # Get the phase-giving and amplitude-enveloping signals for the comodulogram
+        fp = np.linspace(BETABAND[0], BETABAND[1], fpsize)  # 13 - 30 Hz, 18 freqs; 1 Hz steps
+        fa = np.linspace(GAMMABAND[0], GAMMABAND[1], fasize)  # 60 - 200 Hz, 15 freqs; 10 Hz steps
+
+        fois_lo = np.asarray([(f - bw_p, f + bw_p) for f in fp])
+        fois_hi = np.asarray([(f - bw_a, f + bw_a) for f in fa])
+
+        self.los = FilterBank(binsize=2 ** 14,
+                              freq_bands=fois_lo,
+                              order=2 ** 12,
+                              sample_rate=FS,
+                              decimate_by=decimate_by,
+                              hilbert=True)
+
+        self.his = FilterBank(binsize=2 ** 14,
+                              freq_bands=fois_hi,
+                              order=2 ** 12,
+                              sample_rate=FS,
+                              decimate_by=decimate_by,
+                              hilbert=True)
+
+    def run(self, data):
+        """
+        Takes one segment of data and returns the peak, mean, and std of the
+        modulation index (MI) for beta/high-gamma PAC.
+
+        :param data:
+            n_channel x n_samples numpy array
+        :return:
+            n_channel x 3 MI values [peak, average, std]
+        """
+        data = int_to_voltage(data)
+
+        x_los = self.los.analysis(data, window='hanning')
+        x_his = self.his.analysis(data, window='hanning')
+
+        angs, amps = np.angle(x_los), np.abs(x_his)
+
+        # Compute PAC
+        # phase-amplitude distribution (pad)
+        pds = pad(angs, amps, nbins=10)
+        # modulation indices: chan x los x his
+        mis = pac_mi(pds)
+
+        out_data = np.concatenate((np.atleast_2d(mis.max(axis=(2, 1))).T,
+                                   np.atleast_2d(mis.mean(axis=(2, 1))).T,
+                                   np.atleast_2d(mis.std(axis=(2, 1))).T), axis=1)
+
+        return out_data, np.zeros((out_data.shape[1],))
diff --git a/python/serf/tools/features/dl_features.py b/python/serf/tools/features/dl_features.py
new file mode 100644
index 0000000..f1e5d59
--- /dev/null
+++ b/python/serf/tools/features/dl_features.py
@@ -0,0 +1,15 @@
+import numpy as np
+
+# # DL Features
+# class PlaceHolderModel:
+#     name = "PlaceholderDLModel"
+#     desc = "Placeholder to test multiple modules in utils."
+#     category = "DL"
+#
+#     # DB ID is the feature id in the database
+#     def __init__(self, db_id):
+#         self.db_id = db_id
+#
+#     @staticmethod
+#     def run(data):
+#         return data, 0
diff --git a/python/serf/tools/features/hreflex_features.py b/python/serf/tools/features/hreflex_features.py
new file mode 100644
index 0000000..f31cbf5
--- /dev/null
+++ b/python/serf/tools/features/hreflex_features.py
@@ -0,0 +1,164 @@
+import numpy as np
+from scipy.optimize import curve_fit
+# import statsmodels.api as sm
+# MEP_res below also references DatumDetailValue and DatumFeatureValue; they are
+# assumed to be the Django models, e.g.:
+# from eerfapp.models import DatumDetailValue, DatumFeatureValue
+
+
+# helper functions
+def get_submat_for_datum_start_stop_chans(datum, x_start, x_stop, chan_label):
+    if isinstance(x_start, str):
+        x_start = float(x_start)
+    if isinstance(x_stop, str):
+        x_stop = float(x_stop)
+    temp_store = datum.store
+    x_vec = temp_store.x_vec
+    y_mat = temp_store.data
+    chan_list = temp_store.channel_labels
+    chan_bool = np.asarray([cl == chan_label for cl in chan_list])
+    y_mat = y_mat[chan_bool, :]
+    for cc in range(0, y_mat.shape[0]):
+        y_mat[cc, :] = y_mat[cc, :] - np.mean(y_mat[cc, x_vec < -5])
+    x_bool = np.logical_and(x_vec >= x_start, x_vec <= x_stop)
+    return y_mat[:, x_bool] if not isinstance(y_mat, str) else None
+
+
+def get_aaa_for_datum_start_stop(datum, x_start, x_stop, chan_label):
+    sub_mat = get_submat_for_datum_start_stop_chans(datum, x_start, x_stop, chan_label)
+    if not np.any(sub_mat):
+        return None
+    sub_mat = np.abs(sub_mat)
+    ax_ind = 1 if sub_mat.ndim == 2 else 0
+    return np.average(sub_mat, axis=ax_ind)[0]
+
+
+def get_p2p_for_datum_start_stop(datum, x_start, x_stop, chan_label):
+    sub_mat = get_submat_for_datum_start_stop_chans(datum, x_start, x_stop, chan_label)
+    if not np.any(sub_mat):
+        return None
+    ax_ind = 1 if sub_mat.ndim == 2 else 0
+    p2p = np.nanmax(sub_mat, axis=ax_ind) - np.nanmin(sub_mat, axis=ax_ind)
+    return p2p[0]
+
+
+def get_ddvs(datum, refdatum=None, keys=None):
+    if keys:
+        if refdatum is None:
+            ddvs = datum.subject.detail_values_dict()
+        else:
+            ddvs = refdatum.detail_values_dict()
+        values = [ddvs[key] for key in keys]
+        return values
+    else:
+        return None
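+
+# e.g. (hypothetical call) fetch the analysis window and channel for the H-reflex:
+#   x_start, x_stop, chan = get_ddvs(datum, refdatum,
+#                                    ['HR_start_ms', 'HR_stop_ms', 'HR_chan_label'])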
+
+
+# feature_functions
+
+def BEMG_aaa(datum, refdatum=None):
+    my_keys = ['BG_start_ms', 'BG_stop_ms', 'BG_chan_label']
+    x_start, x_stop, chan_label = get_ddvs(datum, refdatum, my_keys)
+    return get_aaa_for_datum_start_stop(datum, x_start, x_stop, chan_label)
+
+
+def MR_aaa(datum, refdatum=None):
+    my_keys = ['MR_start_ms', 'MR_stop_ms', 'MR_chan_label']
+    x_start, x_stop, chan_label = get_ddvs(datum, refdatum, my_keys)
+    return get_aaa_for_datum_start_stop(datum, x_start, x_stop, chan_label)
+
+
+def HR_aaa(datum, refdatum=None):
+    my_keys = ['HR_start_ms', 'HR_stop_ms', 'HR_chan_label']
+    x_start, x_stop, chan_label = get_ddvs(datum, refdatum, my_keys)
+    return get_aaa_for_datum_start_stop(datum, x_start, x_stop, chan_label)
+
+
+def MEP_aaa(datum, refdatum=None):
+    my_keys = ['MEP_start_ms', 'MEP_stop_ms', 'MEP_chan_label']
+    x_start, x_stop, chan_label = get_ddvs(datum, refdatum, my_keys)
+    return get_aaa_for_datum_start_stop(datum, x_start, x_stop, chan_label)
+
+
+def MEP_p2p(datum, refdatum=None):
+    my_keys = ['MEP_start_ms', 'MEP_stop_ms', 'MEP_chan_label']
+    x_start, x_stop, chan_label = get_ddvs(datum, refdatum, my_keys)
+    return get_p2p_for_datum_start_stop(datum, x_start, x_stop, chan_label)
+
+
+def HR_res(datum, refdatum=None):
+    print("TODO: HR_res")
+
+
+def sig_func(x, x0, k):
+    return 1 / (1 + np.exp(-k*(x-x0)))
+
+
+def MEP_res(datum, refdatum=None):
+    #===========================================================================
+    # The MEP residual is the amplitude of the MEP after subtracting the effects
+    # of the background EMG and the stimulus amplitude.
+    #===========================================================================
+    mep_feat = 'MEP_p2p'  # Change this to 'MEP_aaa' if preferred.
+    prev_trial_limit = 100
+
+    # Residuals only make sense when calculating for a single trial.
+    if datum.span_type == 'period':
+        return None
+
+    # TODO: Add a check for enough trials to fill the model.
+
+    # Get the refdatum
+    if refdatum is None or refdatum.span_type == 'trial':
+        refdatum = datum.periods.order_by('-datum_id').all()[0]
+
+    # Get the X and Y for this trial
+    my_bg, my_mep = [datum.calculate_value_for_feature_name(fname, refdatum=refdatum) for fname in ['BEMG_aaa', mep_feat]]
+    my_stim = datum.detail_values_dict()['TMS_powerA']
+
+    # Get background EMG, stimulus amplitude, and MEP size for the last prev_trial_limit trials in this period.
+    stim_ddvs = DatumDetailValue.objects.filter(datum__periods__pk=refdatum.datum_id,
+                                                detail_type__name__contains='TMS_powerA').order_by('-id').all()[:prev_trial_limit]
+    dd_ids = [temp.datum_id for temp in stim_ddvs]
+    stim_vals = np.array([temp.value for temp in stim_ddvs], dtype=float)
+
+    all_dfvs = DatumFeatureValue.objects.filter(datum__periods__pk=refdatum.datum_id)
+    bg_dfvs = all_dfvs.filter(feature_type__name__contains='BEMG_aaa').order_by('-id').all()[:prev_trial_limit]
+    df_ids = [temp.datum_id for temp in bg_dfvs]
+    bg_vals = np.array([temp.value for temp in bg_dfvs])
+    mep_dfvs = all_dfvs.filter(feature_type__name__contains=mep_feat).order_by('-id').all()[:prev_trial_limit]
+    mep_vals = np.array([temp.value for temp in mep_dfvs])
+
+    # Restrict ourselves to trials where dd_ids and df_ids match.
+    uids = np.intersect1d(dd_ids, df_ids, assume_unique=True)
+    stim_vals = stim_vals[np.isin(dd_ids, uids)]
+    bg_vals = bg_vals[np.isin(df_ids, uids)]
+    mep_vals = mep_vals[np.isin(df_ids, uids)]
+
+    # Transform stimulus amplitude into a linear predictor of MEP size.
+    p0 = ((np.max(stim_vals) - np.min(stim_vals)) / 2, 0.1)  # x0, k for sig_func
+    y = mep_vals - np.min(mep_vals)
+    mep_scale = np.max(y)
+    y = y / mep_scale
+    popt, pcov = curve_fit(sig_func, stim_vals, y, p0)
+    stim_vals_sig = np.min(mep_vals) + (mep_scale * sig_func(stim_vals, popt[0], popt[1]))
+    my_stim_sig = np.min(mep_vals) + (mep_scale * sig_func(my_stim, popt[0], popt[1]))
+
+    return get_residual(np.column_stack((my_bg, my_stim_sig)), np.array(my_mep),
+                        np.column_stack((bg_vals, stim_vals_sig)), np.array(mep_vals))[0]
+
+
+def get_residual(test_x, test_y, train_x, train_y):
+    # Convert the input into z-scores
+    x_means = np.mean(train_x, 0)
+    x_std = np.std(train_x, 0)
+    zx = (train_x - x_means) / x_std  # Built-in broadcasting
+
+    # Calculate the coefficients for zy = a zx. Prepend zx with a column of ones.
+    coeffs = np.linalg.lstsq(np.column_stack((np.ones(zx.shape[0],), zx)), train_y, rcond=None)[0]
+
+    # Calculate expected_y using the coefficients and test_x
+    test_zx = (test_x - x_means) / x_std
+    expected_y = np.dot(coeffs, np.column_stack((np.ones(test_zx.shape[0]), test_zx)).T)
+
+    return test_y - expected_y
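+
+# e.g. (hypothetical shapes) with train_x of shape (n_trials, 2) holding
+# [background EMG, sigmoid-transformed stim] and train_y of shape (n_trials,),
+# get_residual returns test_y minus the ordinary-least-squares prediction made
+# from the z-scored predictors.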
diff --git a/python/serf/tools/features/lfp_features.py b/python/serf/tools/features/lfp_features.py
new file mode 100644
index 0000000..22821f5
--- /dev/null
+++ b/python/serf/tools/features/lfp_features.py
@@ -0,0 +1,290 @@
+import numpy as np
+from scipy.signal import decimate, iirnotch, filtfilt
+from scipy.fft import fft, ifft
+from scipy.stats import chi2, linregress
+from serf.tools.utils.misc_functions import *
+import os
+import logging
+from serf.tools.features.base.FeatureBase import FeatureBase
+
+logger = logging.getLogger(__name__)
+
+FS = 30000  # sampling frequency of the raw signal
+SR = 1000  # down-sampled LFP rate
+
+BETABAND = [13, 30]
+GAMMABAND = [60, 200]
+MAXSEGLEN = 2**14
+
+
+# LFP
+class LFPSpectrumAndEpisodes(FeatureBase):
+
+    name = "LFPSpectrumAndEpisodes"
+    desc = "Processes features from the LFPs: average power, p_episodes"
+    category = "LFP"
+
+    def __init__(self, db_id):
+        super(LFPSpectrumAndEpisodes, self).__init__(db_id)
+
+        # We decimate the signal to 1 kHz before running the analyses, so we set a
+        # maximal segment length of 2**14 samples, which is plenty considering our
+        # ~3000-sample wavelets. This accepts segments of up to ~13.5 seconds
+        # (2**14 samples minus the wavelet padding).
+        self.seg_len = MAXSEGLEN
+        self.fo = 4 ** np.arange(1, 4.1, 0.1)  # 4 - 256 Hz, 31 bands
+        self.ds_factor = FS//SR
+        self.wavelets, self.n = define_complex_morlet(self.fo,
+                                                      max_segment_length=self.seg_len,
+                                                      sampling_rate=SR,
+                                                      c=7)
+        self.pwr_thresholds = None
+
+    def pre_process_data(self, data):
+        # Decimate the data (with anti-aliasing filter) to 1 kHz.
+        # From the scipy documentation:
+        # When using IIR downsampling, it is recommended to call decimate multiple times
+        # for downsampling factors higher than 13.
+        # So we decimate by 10 until the remaining factor is <= 10...
+        ds_factor = self.ds_factor
+        while ds_factor > 10:
+            data = decimate(data, 10)
+            ds_factor //= 10
+
+        # ...then decimate by the remaining factor.
+        data = decimate(data, ds_factor)
+
+        # notch filter @ 60Hz
+        b, a = iirnotch(60, 30.0, fs=SR)
+        data = filtfilt(b, a, data)
+
+        # notch filter @ 120Hz
+        b, a = iirnotch(120, 60.0, fs=SR)
+        data = filtfilt(b, a, data)
+
+        # notch filter @ 180Hz
+        b, a = iirnotch(180, 90.0, fs=SR)
+        data = filtfilt(b, a, data)
+
+        # notch filter @ 240Hz
+        b, a = iirnotch(240, 120.0, fs=SR)
+        data = filtfilt(b, a, data)
+        return data - np.atleast_2d(np.mean(data, axis=1)).T
+
+    def compute_power(self, data):
+        # FFT transform of data to speed up convolution with wavelets
+        fft_data = fft(data, n=self.seg_len)
+        # data is channel x samples; reshape to chan x freq x sample
+        fft_data = np.atleast_3d(fft_data).reshape((fft_data.shape[0], 1, fft_data.shape[1]))
+
+        pwr = np.abs(ifft(fft_data * self.wavelets)) ** 2
+        # remove zero padding
+        return pwr[:, :, self.n // 2:self.n // 2 + pwr.shape[2]]
+
+    def compute_pwr_thresholds(self, pwr):
+        # compute the power threshold from the chi-square distribution
+        df = 2
+        thresh_ppf = chi2.ppf(.95, df)
+        reg_pwr = np.zeros((pwr.shape[0], pwr.shape[1]))
+
+        for idx, p in enumerate(pwr):
+            avg_pwr = np.mean(p, axis=1)
+            # linear regression on power data
+            slope, intercept, r_value, p_value, std_err = linregress(np.log2(self.fo), 10 * np.log10(avg_pwr))
+
+            # regression-estimated background power, converted back from dB
+            reg_pwr[idx, :] = 10 ** ((np.log2(self.fo) * slope + intercept) / 10)
+
+        # power thresholds
+        return reg_pwr * (thresh_ppf / df)
+
+    def compute_p_episodes(self, pwr, pwr_thresholds, n_periods=3):
+        # duration thresholds: n_periods cycles at each center frequency (3 periods * 1 kHz)
+        dur_thresholds = (n_periods * SR) // self.fo
+        cross = np.greater(pwr, np.atleast_3d(pwr_thresholds)).astype(int)
+
+        # since diff.shape[2] = cross.shape[2]-1 and the first sample might be above threshold
+        # we need to concatenate the first cross column.
+        diffs = np.concatenate((cross[:, :, 0:1], np.diff(cross, axis=2)), axis=2)
+
+        # if the last sample is > threshold, we need to set its value to -1, but only
+        # if it is not also the first threshold-crossing sample
+        diffs[np.logical_and(cross[:, :, -1] == 1, diffs[:, :, -1] == 0), -1] = -1
+        diffs[diffs[:, :, -1] != -1, -1] = 0
+
+        s_c, s_f, s_idx = np.nonzero(diffs == 1)
+        e_c, e_f, e_idx = np.nonzero(diffs == -1)
+
+        durations = e_idx - s_idx
+        p_idx = np.nonzero(durations > dur_thresholds[s_f])
+        p_episodes = np.zeros((pwr.shape[0], self.fo.shape[0]))
+
+        # accumulate the duration of supra-threshold oscillatory episodes
+        for c, f, dur in zip(s_c[p_idx], s_f[p_idx], durations[p_idx]):
+            p_episodes[c, f] += dur
+
+        # return the proportion of the segment spent in episodes
+        return p_episodes / pwr.shape[2]
+
+    def run(self, data):
+        data = self.pre_process_data(data)
+        pwr = self.compute_power(data)
+        pwr_thresholds = self.compute_pwr_thresholds(pwr)
+        p_episodes = self.compute_p_episodes(pwr, pwr_thresholds)
+
+        # combine the average band power and the p_episodes into one array:
+        # chan x 62 values (31 mean band powers followed by 31 p_episodes)
+        out_data = np.concatenate((pwr.mean(axis=2),
+                                   p_episodes), axis=1)
+
+        return out_data, self.fo
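+
+# A minimal usage sketch (hypothetical input; `seg` is an n_channel x n_samples
+# raw array sampled at FS):
+#   vals, freqs = LFPSpectrumAndEpisodes(db_id=1).run(seg)
+#   # vals: n_channel x 62; freqs: the 31 wavelet center frequencies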
""" + category = "LFP" + + def run(self, data, tapers=None, fs=30000, time_half_bandwidth=2., k=None, onesided=True, + nfft=None, return_with_raw=False): + """Power spectral density estimate using the Thomson multitaper method.""" + from scipy.fftpack import fft, fftfreq + out_data = None + + for idx, X in enumerate(data): + if X.shape[0] == X.size: + X = X.reshape((-1, 1)) + + tapers, N, k = get_tapers(tapers, X.shape, time_half_bandwidth, k) + + if nfft is None: + nfft = N + + # compute spectral density under each taper + taperX = X.reshape(X.shape + (1,)) * tapers.reshape((X.shape[0], 1, -1)) + # noinspection PyTypeChecker + try: + _result = fft(taperX, n=nfft, axis=0) + except MemoryError: + # Workaround for what looks like an MKL bug (uses too much mem) + print("Got MemoryError in fft with data shape %s; running trial by " + "trial." % (taperX.shape,)) + _result = np.nan*np.ones((nfft,) + taperX.shape[1:], dtype=complex) + taperX_tmp = taperX.reshape(taperX.shape[0], -1) + result_tmp = _result.reshape(_result.shape[0], -1) + for k in range(result_tmp.shape[1]): + result_tmp[:, k] = fft(taperX_tmp[:, k]) + + # keep desired number of freq bins + if np.iscomplexobj(X) or not onesided: + num_freqs = nfft + else: + num_freqs = (nfft + 1) // 2 if nfft % 2 else nfft // 2 + 1 + freqs = fftfreq(nfft, 1 / fs)[:num_freqs] + _result = _result[:num_freqs, ...] + + Pxx = (np.abs(_result))**2 + + if fs is None: + Pxx /= (2 * np.pi) + fs = 1.0 + else: + Pxx /= fs + + # make sure that power is doubled in case of a onesided spectrum + if num_freqs < nfft: + if nfft % 2: + Pxx[1:, ...] *= 2 + else: + # (last point is unpaired Nyquist freq and requires special treatment) + Pxx[1:-1, ...] *= 2 + freqs[-1] *= -1 + + # average across tapers + Pxx = np.sum(Pxx, axis=-1) / k + + if return_with_raw: + return Pxx, freqs, _result, nfft + + if out_data is None: + out_data = np.zeros((data.shape[0], Pxx.shape[0])) + + # Pxx is a (60000,1) array, need to flatten + out_data[idx, :] = Pxx.flatten() + + return out_data, freqs + + +def get_tapers(tapers, data_shape, time_half_bandwidth, k): + if tapers is None: + tapers = dpss(data_shape[0], time_half_bandwidth, k) + N = tapers.shape[0] + max_tapers = tapers.shape[1] + + # sanity checks + # To be coherent with other tools, the pmtm function uses 2*nw-1 tapers + if k is None: + k = max_tapers - 1 + if k > max_tapers: + k = max_tapers - 1 + logger.warning('More tapers were requested than have been precomputed. ' + 'We will use a maximum of %s.' % k) + if tapers.shape[0] != data_shape[0]: + raise Exception('The data and tapers arrays must have the same length ' + 'along the first dimension. ') + if float(time_half_bandwidth) / N > 0.5: + logger.warning('Time-bandwidth product is greater than 0.5. Cannot ' + 'satisfy this. Increase your bandwidth, or time window ' + 'appropriately.') + + # prune tapers to first k (if needed) + return tapers[:, 0:k], N, k + + +def dpss(window_length=1000, time_half_bandwidth=2.5, k=None): + """ + Compute the desired discrete prolate spheroidal (Slepian) sequences using + a precomputed version on stored locally. 
+
+
+def dpss(window_length=1000, time_half_bandwidth=2.5, k=None):
+    """
+    Compute the desired discrete prolate spheroidal (Slepian) sequences using
+    a precomputed version stored locally.
+    """
+    from scipy.interpolate import interp1d
+
+    # load precomputed tapers from disk or the mem cache
+    try:
+        w = dpss.cached_w
+    except AttributeError:
+        from scipy.io import loadmat
+        folder = os.path.dirname(__file__)
+        filename = os.path.join(folder, '../resources/dpss.mat')
+        dpss.cached_w = loadmat(os.path.normpath(filename))['w']
+        w = dpss.cached_w
+
+    # get the # of sample points in the db (n), and the # of taper sets in the db (m)
+    n = w[0, 0].shape[0]
+    m = w.shape[1]
+
+    # used to compute the lookup position in the db
+    max_tapers = np.ceil(2 * time_half_bandwidth)
+    # sanity checks
+    if max_tapers > m:
+        max_tapers = m
+        logger.warning('The chosen half-bandwidth requires more tapers than '
+                       'have been precomputed. We will use a maximum of '
+                       '%s.' % max_tapers)
+
+    # To be coherent with other tools, the DPSS function returns 2*nw tapers
+    if k is None:
+        k = max_tapers
+    if k > max_tapers:
+        k = max_tapers
+        logger.warning('More tapers were requested than have been precomputed. '
+                       'We will use a maximum of %s.' % k)
+
+    # get the index of the correct set of DPSS tapers
+    # taper sets are stored in ascending order from num_tapers = 2,3,...,100
+    idx = max_tapers - 2
+
+    # look up the tapers and prune their number to k (also, we want doubles)
+    tapers = np.array(w[0, int(idx)][:, 0:int(k)].T, dtype=float)
+
+    # interpolate onto the desired number of samples
+    f = interp1d(np.linspace(0, 1, n), tapers)
+    return np.sqrt(float(n) / window_length) * f(np.linspace(0, 1,
+                                                             window_length)).T
+
diff --git a/python/serf/tools/features/spike_features.py b/python/serf/tools/features/spike_features.py
new file mode 100644
index 0000000..7319694
--- /dev/null
+++ b/python/serf/tools/features/spike_features.py
@@ -0,0 +1,85 @@
+import numpy as np
+from serf.tools.utils.misc_functions import int_to_voltage, high_pass_filter
+from serf.tools.features.base.FeatureBase import FeatureBase
+
+
+class DBSSpikeFeatures(FeatureBase):
+    name = "DBSSpikeFeatures"
+    desc = "Computes: NoiseRMS, Spike Rate, Burst Index, Fano Factor"
+    category = "Spikes"
+
+    def __init__(self, db_id, sr=30000):
+        super(DBSSpikeFeatures, self).__init__(db_id)
+        self.SR = sr
+
+    def run(self, data):
+        """
+        :param data:
+            n_channel x n_samples numpy array
+        :return:
+            n_channel x 4 [RMS, Rate, BI, FF]
+        """
+        out_data = np.zeros((data.shape[0], 4), dtype=float)
+        for idx, dat in enumerate(data):
+            dat = high_pass_filter(int_to_voltage(dat))
+
+            # RMS value =====================================================
+            # settings
+            bin_size = 600
+            n_bins = int(np.floor(dat.shape[0] / bin_size))
+
+            s_dat = np.square(dat)
+            bins = [np.mean(s_dat[x * bin_size:(x + 1) * bin_size]) for x in range(n_bins)]
+            bins = np.sort(bins)
+
+            # discard the first 5% of bins and keep the following 20%
+            start_idx = int(0.05 * bins.shape[0])
+            stop_idx = int(0.2 * bins.shape[0]) + start_idx
+
+            # RMS
+            _RMS = np.sqrt(np.mean(bins[start_idx:stop_idx]))
+
+            # Spike Rate ====================================================
+            rms_mult = 4
+            # delay before a second threshold crossing can be detected (BlackRock settings)
+            refractory_period = 38
+
+            # find threshold crossings and remove contiguous samples
+            thresh_cross = np.where(dat < - (rms_mult * _RMS))[0]
+            diffs = np.diff(thresh_cross)
+            to_keep = (diffs != 1) & (diffs > refractory_period)
+            # always keep the first threshold crossing
+            to_keep = np.hstack((np.array([True], dtype=bool), to_keep.flatten()))
+            thresh_cross = thresh_cross[to_keep]
+            _Rate = len(thresh_cross) / dat.shape[0] * self.SR
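+
+            # e.g. (hypothetical numbers) with rms_mult = 4 and _RMS = 5 uV,
+            # any sample below -20 uV marks a spike, and crossings within 38
+            # samples (~1.27 ms at 30 kHz) of the previous one are discarded.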
+
+            # Burst index ===================================================
+            # from (Pralong et al., 2004) and (Hutchison et al., 1997; 1998) we define the BurstIndex as:
+            # the mean ISI of the spike train divided by the time of the peak in the ISI histogram
+            # the ISI histogram spans 0-500 ms in 500 bins (i.e. 1 ms per bin)
+            ISI = np.diff(thresh_cross) / self.SR
+            counts, edges = np.histogram(ISI, bins=500, range=(0, 0.5))
+            # upper edge of the peak bin
+            peak = np.where(counts == np.max(counts))[0] + 1
+            _BI = np.mean(ISI) / float(edges[peak[0]])
+
+            # Fano Factor
+            # 100 ms bins @ 30 kHz: 3000 samples
+            # assuming a "trial" of 100 ms, we compute the variance/mean of the spike
+            # counts across these "trials" to get the Fano factor.
+            win_size = 0.100  # sec
+            overlap = 0.50  # fraction of a window
+            sample_per_bin = win_size * self.SR
+            bins = np.arange(0, dat.shape[0], int(overlap * sample_per_bin))
+            counts = [np.sum((thresh_cross >= x) & (thresh_cross < x + sample_per_bin)) for x in bins]
+            _FF = (np.std(counts) ** 2) / np.mean(counts)
+
+            # Cast to built-in floats; some values come out as numpy float64.
+            out_data[idx, 0] = float(_RMS)
+            out_data[idx, 1] = float(_Rate)
+            out_data[idx, 2] = float(_BI)
+            out_data[idx, 3] = float(_FF)
+
+        return out_data, np.zeros((out_data.shape[1],))
diff --git a/python/serf/tools/online.py b/python/serf/tools/online.py
new file mode 100644
index 0000000..06f64e3
--- /dev/null
+++ b/python/serf/tools/online.py
@@ -0,0 +1,174 @@
+# This is pretty old. I'm keeping it around because it has some useful snippets I will likely need.
+import numpy as np
+import time, os, datetime
+from scipy.optimize import curve_fit
+# from EERF.API import *
+# from sqlalchemy.orm import query
+# from sqlalchemy import desc
+import BCPy2000.BCI2000Tools.FileReader as FileReader
+# matplotlib.mlab.find was removed from matplotlib; np.flatnonzero is equivalent
+find = np.flatnonzero
+
+
+# sigmoid function used for fitting response data
+def my_sigmoid(x, x0, k, a, c):
+    # x0 = half-max, k = slope, a = max, c = min
+    return a / (1 + np.exp(-1*k*(x-x0))) + c
+
+
+def my_simp_sigmoid(x, x0, k):
+    return 1 / (1 + np.exp(-1*k*(x-x0)))
+
+
+# Calculate and return _halfmax and _halfmax err
+def model_sigmoid(x, y, mode=None):
+    # Fit a sigmoid to those values for trials in this period.
+    n_trials = x.shape[0]
+    if n_trials > 4:
+        if not mode or mode == 'halfmax':
+            sig_func = my_sigmoid
+            p0 = (np.median(x), 0.1, np.max(y) - np.min(y), np.min(y))  # x0, k, a, c
+            nvars = 4
+        elif mode == "threshold":
+            sig_func = my_simp_sigmoid
+            p0 = (np.median(x), 0.1)  # x0, k
+            nvars = 2
+        try:
+            popt, pcov = curve_fit(sig_func, x, y, p0=p0)
+        except RuntimeError:
+            print("Error - curve_fit failed")
+            popt = np.empty((nvars,))
+            popt.fill(np.nan)
+            pcov = np.inf  # So the err is set to nan
+        # popt = x0, k, a, c
+        # the diagonal of pcov holds the variances of the parameter estimates
+        if np.isinf(pcov).all():
+            perr = np.empty((nvars,))
+            perr.fill(np.nan)
+        else:
+            perr = np.sqrt(pcov.diagonal())
+        return popt, perr
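+
+
+# A minimal usage sketch (hypothetical arrays): estimate a response threshold
+# from stimulus intensities x and binary responses y:
+#   popt, perr = model_sigmoid(x, y, mode='threshold')
+#   threshold, slope = popt   # x0 and k of the fitted sigmoid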
+
+
+def _recent_stream_for_dir(dir, maxdate=None):
+    dir = os.path.abspath(dir)
+    files = FileReader.ListDatFiles(d=dir)
+    # The returned list is in ascending order; assume the last is the most recent
+    best_stream = None
+    for fn in files:
+        temp_stream = FileReader.bcistream(fn)
+        temp_date = datetime.datetime.fromtimestamp(temp_stream.datestamp)
+        if not best_stream\
+                or (maxdate and temp_date <= maxdate)\
+                or (not maxdate and temp_date > datetime.datetime.fromtimestamp(best_stream.datestamp)):
+            best_stream = temp_stream
+    return best_stream
+
+
+# http://code.activestate.com/recipes/412717-extending-classes/
+def get_obj(name):
+    return eval(name)
+
+
+class ExtendInplace(type):  # This class enables class definitions here to _extend_ parent classes.
+    # NB: the __metaclass__ hook used below only has an effect on Python 2.
+    def __new__(cls, name, bases, dict):
+        prevclass = get_obj(name)
+        del dict['__module__']
+        del dict['__metaclass__']
+        for k, v in dict.items():
+            setattr(prevclass, k, v)
+        return prevclass
+
+
+class Subject:
+    __metaclass__ = ExtendInplace
+
+
+class Datum:
+    __metaclass__ = ExtendInplace
+
+    def model_erp(self, model_type='halfmax'):
+        if self.span_type == 'period':
+            fts = self.datum_type.feature_types
+            isp2p = any([ft for ft in fts if 'p2p' in ft.Name])
+
+            if 'hr' in self.type_name:
+                stim_det_name = 'dat_Nerve_stim_output'
+                erp_name = 'HR_p2p' if isp2p else 'HR_aaa'
+            else:  # if 'mep' in self.type_name:
+                stim_det_name = 'dat_TMS_powerA'
+                erp_name = 'MEP_p2p' if isp2p else 'MEP_aaa'
+            # get xy_array as dat_TMS_powerA, MEP_aaa
+            x = self._get_child_details(stim_det_name)
+            x = x.astype(float)
+            x_bool = ~np.isnan(x)
+            y = self._get_child_features(erp_name)
+            if model_type == 'threshold':
+                y = y > self.detection_limit
+                y = y.astype(int)
+            elif 'hr' in self.type_name:  # Not threshold, and hr, means cut off trials > h-max
+                h_max = np.max(y)
+                y_max_ind = find(y == h_max)[0]
+                x_at_h_max = x[y_max_ind]
+                x_bool = x <= x_at_h_max
+            n_trials = 1 if x.size == 1 else x[x_bool].shape[0]
+            # Should data be scaled/standardized?
+            if n_trials > 4:
+                return model_sigmoid(x[x_bool], y[x_bool], mode=model_type) + (x,) + (y,)
+            else:
+                return None, None, None, None
+
+    def assign_coords(self, space='brainsight'):
+        if self.span_type == 'period' and self.datum_type.Name == 'mep_mapping':
+            # Find and load the brainsight file
+            dir_stub = get_or_create(System, Name='bci_dat_dir').Value
+            bs_file_loc = dir_stub + '/' + self.subject.Name + '/mapping/' + str(self.Number) + '_' + space + '.txt'
+            # Parse the brainsight file for X-Y coordinates
+            data = [line.split('\t') for line in open(bs_file_loc)]
+            data = [line for line in data if 'Sample' in line[0]]
+            starti = find(['#' in line[0] for line in data])[0]
+            data = data[starti:]
+            headers = data[0]
+            data = data[1:]
+            x_ind = find(['Loc. X' in col for col in headers])[0]
+            y_ind = find(['Loc. Y' in col for col in headers])[0]
+            z_ind = find(['Loc. Z' in col for col in headers])[0]
+
+            i = 0
+            for tt in self.trials:
+                tt.detail_values['dat_TMS_coil_x'] = float(data[i][x_ind])
+                tt.detail_values['dat_TMS_coil_y'] = float(data[i][y_ind])
+                tt.detail_values['dat_TMS_coil_z'] = float(data[i][z_ind])
+                i = i+1
+
+    def add_trials_from_file(self, filename):
+        if self.span_type == 'period' and filename:
+            bci_stream = FileReader.bcistream(filename)
+            sig, states = bci_stream.decode(nsamp='all')
+            sig, chan_labels = bci_stream.spatialfilteredsig(sig)
+            erpwin = [int(bci_stream.msec2samples(ww)) for ww in bci_stream.params['ERPWindow']]
+            x_vec = np.arange(bci_stream.params['ERPWindow'][0],
+                              bci_stream.params['ERPWindow'][1],
+                              1000 / bci_stream.samplingfreq_hz, dtype=float)
+            trigchan = bci_stream.params['TriggerInputChan']
+            trigchan_ix = find(trigchan[0] in chan_labels)
+            trigthresh = bci_stream.params['TriggerThreshold']
+            trigdetect = find(np.diff(np.asmatrix(sig[trigchan_ix, :] > trigthresh, dtype='int16')) > 0) + 1
+            intensity_detail_name = 'dat_TMS_powerA' if 'dat_TMS_powerA' in self.detail_values else 'dat_Nerve_stim_output'
+            # Get approximate data segments for each trial
+            trig_ix = find(np.diff(states['Trigger']) > 0) + 1
+            for i in np.arange(len(trigdetect)):
+                ix = trigdetect[i]
+                dat = sig[:, ix+erpwin[0]:ix+erpwin[1]]
+                self.trials.append(Datum(subject_id=self.subject_id,
+                                         datum_type_id=self.datum_type_id,
+                                         span_type='trial',
+                                         parent_datum_id=self.datum_id,
+                                         IsGood=1, Number=0))
+                my_trial = self.trials[-1]
+                my_trial.detail_values[intensity_detail_name] = str(states['StimulatorIntensity'][0, trig_ix[i]])
+                if int(bci_stream.params['ExperimentType']) == 1:  # SICI intensity
+                    my_trial.detail_values['dat_TMS_powerB'] = str(bci_stream.params['StimIntensityB'])  # TODO: Use the state.
+                    my_trial.detail_values['dat_TMS_ISI'] = str(bci_stream.params['PulseInterval'])
+                my_trial.store = {'x_vec': x_vec, 'data': dat, 'channel_labels': chan_labels}
+            Session.commit()
diff --git a/python/serf/tools/resources/dpss.mat b/python/serf/tools/resources/dpss.mat
new file mode 100644
index 0000000..ec70660
Binary files /dev/null and b/python/serf/tools/resources/dpss.mat differ
diff --git a/python/serf/tools/utils/__init__.py b/python/serf/tools/utils/__init__.py
new file mode 100644
index 0000000..5a472b8
--- /dev/null
+++ b/python/serf/tools/utils/__init__.py
@@ -0,0 +1,7 @@
+"""
+Utility functions shared by the feature modules in serf.tools.features.
+"""
+
+from . import misc_functions
+
+__all__ = ['misc_functions']
diff --git a/python/serf/tools/utils/_shared.py b/python/serf/tools/utils/_shared.py
new file mode 100644
index 0000000..3bc1a76
--- /dev/null
+++ b/python/serf/tools/utils/_shared.py
@@ -0,0 +1,8 @@
+def singleton(cls):
+    instances = {}
+
+    def getinstance(**kwargs):
+        if cls not in instances:
+            instances[cls] = cls(**kwargs)
+        return instances[cls]
+    return getinstance
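+
+# A minimal usage sketch (hypothetical class, not part of the package):
+#   @singleton
+#   class Config:
+#       def __init__(self, path=None):
+#           self.path = path
+#   a = Config(path='settings.ini')
+#   b = Config()   # same instance as `a`; __init__ does not run a second time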
diff --git a/python/serf/tools/utils/misc_functions.py b/python/serf/tools/utils/misc_functions.py
new file mode 100644
index 0000000..8295ee4
--- /dev/null
+++ b/python/serf/tools/utils/misc_functions.py
@@ -0,0 +1,102 @@
+from scipy import signal
+from scipy.fft import fft
+import numpy as np
+
+
+# Settings
+HP_SPIKE_CUTOFF = 250  # in Hz
+
+
+# common functions
+def high_pass_filter(data, filt_order=4, cut_off=HP_SPIKE_CUTOFF, fs=30000):
+
+    # Filter design (normalized cutoff; equivalent to passing fs=fs on scipy >= 1.2)
+    sos = signal.butter(filt_order, cut_off / (0.5*fs), 'hp', output='sos')
+
+    # mirror-pad the signal to reduce edge effects
+    pad = np.concatenate((np.flip(data[:fs]), data, np.flip(data[-fs:])))
+
+    # filter
+    filtered = signal.sosfilt(sos, pad)
+
+    # remove the pad
+    data = filtered[fs:-fs]
+
+    return data
+
+
+def band_pass_filter(data, filt_order=4, bp=(13, 30), fs=30000):
+    # Filter design
+    sos = signal.butter(filt_order, bp, 'bp', fs=fs, output='sos')
+
+    # mirror-pad the signal to reduce edge effects
+    pad = np.concatenate((np.flip(data[:fs]), data, np.flip(data[-fs:])))
+
+    # filter
+    filtered = signal.sosfilt(sos, pad)
+
+    # remove the pad
+    data = filtered[fs:-fs]
+
+    return data
+
+
+# BlackRock documentation specifies 0.25 uV per bit of digitization.
+def int_to_voltage(data, uV_per_bit=0.25):
+    return data * uV_per_bit
+
+
+def define_complex_morlet(fo, max_segment_length=120000, sampling_rate=30000, c=7):
+    """
+    Taken from an old Matlab script used for Doucet et al. 2019, Hippocampus.
+    %--------------------------------------------------------------------------
+    % SR    Sampling Rate of signal
+    % c     Number of wavelet oscillations, see:
+    %       Hughes, A., Whitten, T., Caplan, J. & Dickson, C. BOSC: A better oscillation
+    %       detection method, extracts both sustained and transient rhythms from rat hippocampal
+    %       recordings. Hippocampus 22, 1417–1428 (2012)
+    % fo    Center frequencies for the wavelet family
+
+    % The value of c should always be > 5; we usually use 7 (Tallon-Baudry et al. 1997).
+    % The higher the number, the tighter the wavelets, meaning higher
+    % frequency resolution and lower time resolution.
+
+    % When using the STFT, you can adjust the transform window to enhance the
+    % desired characteristic. A larger window allows for better frequency
+    % resolution, and a smaller window allows for better temporal resolution.
+    % However, for the STFT, the window size is constant throughout the algorithm.
+    % This can pose a problem for some nonstationary signals. The wavelet
+    % transform provides an alternative to the STFT that often provides a better
+    % frequency/time representation of the signal.
+
+    % From Tallon-Baudry 1997: The time resolution of this method thus
+    % increases with frequency, whereas the frequency resolution decreases.
+ + %Outputs : + %-------------------------------------------------------------------------- + % coeffs Dict with fft transformed wavelet coefficients + % s_t Time decay (gaussian standard deviation) + % s_f Wavelet Spectral Bandwidth + %-------------------------------------------------------------------------- + """ + sampling_period = 1 / sampling_rate + + # Frequency and time bandwidth + s_f = fo / c + s_t = 1 / (2 * np.pi * s_f) + + # our longest wavelet is the lowest frequency + _x = np.arange(-5 * s_t[0], 5 * s_t[0], sampling_period) + + # set coefficients to match input data shape of : chan x freq x samples + coeffs = np.zeros((fo.shape[0], max_segment_length), dtype=np.complex128) + for idx, f in enumerate(fo): + # complex Morlet wavelet formula (FIND SOURCE) + coeffs[idx, :] = fft((s_t[idx] * np.sqrt(np.pi)) ** (-1 / 2) * + np.exp(-_x ** 2 / (2 * s_t[idx] ** 2)) * + np.exp(1j * 2 * np.pi * f * _x), max_segment_length) + + return coeffs, _x.shape[0] diff --git a/django-eerf/eerfapp/urls.py b/python/serf/urls.py similarity index 100% rename from django-eerf/eerfapp/urls.py rename to python/serf/urls.py diff --git a/django-eerf/eerfapp/views.py b/python/serf/views.py similarity index 98% rename from django-eerf/eerfapp/views.py rename to python/serf/views.py index 26d917d..ce71e5e 100644 --- a/django-eerf/eerfapp/views.py +++ b/python/serf/views.py @@ -1,218 +1,218 @@ -import json -import numpy as np -import datetime -from django.shortcuts import render, get_object_or_404 -from django.views.decorators.http import require_http_methods -from django.http import HttpResponse, HttpResponseRedirect#, Http404 -from django.contrib.sessions.models import Session -from django.core.serializers.json import DjangoJSONEncoder -from eerfapp import models -#import pdb -#from django.core.urlresolvers import reverse -#from django.template import RequestContext, loader - -#=============================================================================== -# Index. For now this is a redirect to something useful. -#=============================================================================== -def index(request): - #return render_to_response('eerfapp/index.html') - #pdb.set_trace() - request.session.flush() - return HttpResponseRedirect('/eerfapp/subject/') - -#=============================================================================== -# Helper functions (not views) -#=============================================================================== -def store_man_for_request_subject(request, pk): #Helper function to return a filtered DatumStore manager. - store_man = models.DatumStore.objects.filter(datum__subject__pk=pk).filter(datum__span_type=1).filter(n_samples__gt=0) - my_session = models.Session.objects.get(pk=request.session.session_key).get_decoded() - if my_session.has_key('trial_start'): - store_man = store_man.filter(datum__start_time__gte=my_session['trial_start']) - if my_session.has_key('trial_stop'): - store_man = store_man.filter(datum__stop_time__lte=my_session['trial_stop']) - return store_man - -#=============================================================================== -# Rendering views -#=============================================================================== - -#/subject/, /subject/pk/, and /period/ are all automatic views. 
- -def subject_list(request): #View list of subjects and option to import - mySubjects = models.Subject.objects.all() - context = {'subject_list': mySubjects} - return render(request, 'eerfapp/subject_list.html', context) - -def subject_import(request): - #TODO: Get list of elizan subjects. Mark those that are already imported. - context = {'elizan_subjects': {} } - return render(request, 'eerfapp/subject_import.html', context) - -def view_data(request, pk):#View data for subject with pk - subject = get_object_or_404(models.Subject, pk=pk) - context = {'subject': subject} - return render(request, 'eerfapp/subject_view_data.html', context) - -def erps(request, trial_pk_csv='0'): - #convert trial_pk_csv to trial_pk_list - trial_pk_list = trial_pk_csv.split(',') - if len(trial_pk_list[0])>0: - trial_pk_list = [int(val) for val in trial_pk_list] - stores = models.DatumStore.objects.filter(pk__in=trial_pk_list) - #subject = get_object_or_404(Subject, pk=subject_id) - data = ','.join(['"' + str(st.datum_id) + '": ' + json.dumps(st.erp.tolist()) for st in stores]) - data = '{' + data + '}' - else: - data = '{}' - return render(request, 'eerfapp/erp_data.html',{'data': data}) - -#=============================================================================== -# API: GET or POST. Non-rendering. -#=============================================================================== -@require_http_methods(["GET", "POST"]) -def set_details(request, pk): - subject = get_object_or_404(models.Subject, pk=pk) - my_dict = request.POST.copy() - my_dict.pop('csrfmiddlewaretoken', None)#Remove the token provided by the POST command - for key in my_dict: - subject.update_ddv(key, my_dict[key]) - #return HttpResponseRedirect(reverse('eerfapp.views.monitor', args=(pk,))) - return HttpResponseRedirect(request.META['HTTP_REFERER']) - -@require_http_methods(["GET"]) -def get_detail_values(request, pk, detail_name, json_vals_only=True): - detail_type = get_object_or_404(models.DetailType, name=detail_name) - ddvs_man = models.DatumDetailValue.objects.filter(datum__subject__pk=pk).filter(detail_type=detail_type) - my_session = Session.objects.get(pk=request.session.session_key).get_decoded() - if my_session.has_key('trial_start'): - ddvs_man = ddvs_man.filter(datum__start_time__gte=my_session['trial_start']) - if my_session.has_key('trial_stop'): - ddvs_man = ddvs_man.filter(datum__stop_time__lte=my_session['trial_stop']) - if json_vals_only: - ddvs = [ddv.value for ddv in ddvs_man] - return HttpResponse(json.dumps(ddvs)) - else: - return ddvs_man - -@require_http_methods(["GET"]) -def get_feature_values(request, pk, feature_name, json_vals_only=True): - feature_type = get_object_or_404(models.FeatureType, name=feature_name) - dfvs_man = models.DatumFeatureValue.objects.filter(datum__subject__pk=pk).filter(feature_type=feature_type) - my_session = Session.objects.get(pk=request.session.session_key).get_decoded() - if my_session.has_key('trial_start'): - dfvs_man = dfvs_man.filter(datum__start_time__gte=my_session['trial_start']) - if my_session.has_key('trial_stop'): - dfvs_man = dfvs_man.filter(datum__stop_time__lte=my_session['trial_stop']) - if json_vals_only: - dfvs = [dfv.value for dfv in dfvs_man] - return HttpResponse(json.dumps(dfvs)) - else: - return dfvs_man - -def recalculate_feature(request, pk, feature_name): - trial_man = models.Datum.objects.filter(subject__pk=pk, span_type=1, store__n_samples__gt=0) - my_session = Session.objects.get(pk=request.session.session_key).get_decoded() - if 
my_session.has_key('trial_start'): - trial_man = trial_man.filter(start_time__gte=my_session['trial_start']) - if my_session.has_key('trial_stop'): - trial_man = trial_man.filter(stop_time__lte=my_session['trial_stop']) - for tr in trial_man: - tr.calculate_value_for_feature_name(feature_name) - return HttpResponse('success') - -@require_http_methods(["GET"]) -def count_trials(request, pk): #GET number of trials for subject. Uses session variables. Non-rendering - store_man = store_man_for_request_subject(request, pk) - return HttpResponse(json.dumps(store_man.count())) - -@require_http_methods(["GET"]) -def erp_data(request, pk): #Gets ERP data for a subject. Uses session variables. Non-rendering. - [x_min,x_max] = [-10,100.0] - - #Get the manager for datum_store... and reverse its order - store_man = store_man_for_request_subject(request, pk).order_by('-pk') - - #Get the last trial_limit trials - my_session = Session.objects.get(pk=request.session.session_key).get_decoded() - trial_limit = int(my_session['trial_limit']) if my_session.has_key('trial_limit') and int(my_session['trial_limit'])>0 else store_man.count() - store_man = store_man[0:trial_limit] - - #Return the channel_labels and data - if store_man.count()>0: - n_channels = [st.n_channels for st in store_man] - channel_labels = store_man[np.nonzero(n_channels==np.max(n_channels))[0][0]].channel_labels - data = dict([(chlb, [{'label': st.pk, 'data': np.column_stack((st.x_vec[np.logical_and(st.x_vec>=x_min,st.x_vec<=x_max)], st.data[channel_labels.index(chlb),np.logical_and(st.x_vec>=x_min,st.x_vec<=x_max)])).tolist()} for st in reversed(store_man)]) for chlb in channel_labels]) - else: - channel_labels = '' - data = {} - return HttpResponse(json.dumps({'data': data, 'channel_labels': channel_labels})) - -@require_http_methods(["GET"]) -def get_xy(request): - getter = request.GET.copy() - my_session = Session.objects.get(pk=request.session.session_key).get_decoded() - trial_man = models.Datum.objects.filter(subject__pk=getter['subject_pk'], span_type=1) - trial_man = trial_man.filter(start_time__gte=my_session['trial_start']) if my_session.has_key('trial_start') else trial_man - trial_man = trial_man.filter(stop_time__lte=my_session['trial_stop']) if my_session.has_key('trial_stop') else trial_man - trial_man = trial_man.filter(_detail_values__detail_type__name=getter['x_name']) - trial_man = trial_man.filter(_feature_values__feature_type__name=getter['y_name']) - trial_man = trial_man.distinct() - x = [float(tr._detail_values.get(detail_type__name='TMS_powerA').value) for tr in trial_man] - y = [tr._feature_values.get(feature_type__name='MEP_p2p').value for tr in trial_man] - data = [{ "label": getter['y_name'], "data": np.column_stack((x,y)).tolist()}] - return HttpResponse(json.dumps(data)) - -#GET or POST session dictionary. -@require_http_methods(["GET", "POST"]) -def my_session(request): - my_session = Session.objects.get(pk=request.session.session_key).get_decoded() - if request.method == 'GET': - return HttpResponse(json.dumps(my_session, cls=DjangoJSONEncoder)) - elif request.method == 'POST': - my_post = request.POST.copy()#mutable copy of POST - my_post.pop('csrfmiddlewaretoken', None) - - #Fix date values. 
%b->Sep, %d->11, %Y->2001, %X->locale time - #Returned as '1/25/2013 6:46:03 AM' - #date_format = '%b %d %Y %X'#date_format = '%Y-%m-%dT%H:%M:%S' - date_format = '%m/%d/%Y %I:%M:%S %p' - date_keys = ['trial_start', 'trial_stop'] - for key in date_keys: - my_post[key] = datetime.datetime.strptime(my_post[key], date_format) if my_post.has_key(key) else request.session.get(key, datetime.datetime.now()) - - #Other values to fix - my_post['monitor'] = my_post.has_key('monitor') if my_post.has_key('trial_start') else True - - #Put the values back into request.session - for key in my_post: request.session[key] = my_post[key] - return HttpResponseRedirect(request.META['HTTP_REFERER']) - -@require_http_methods(["GET"]) -def detail_types(request): - dts = models.DetailType.objects.all() - dt_names = [dt.name for dt in dts] - return HttpResponse(json.dumps(dt_names)) - -@require_http_methods(["GET"]) -def feature_types(request): - fts = models.FeatureType.objects.all() - ft_names = [ft.name for ft in fts] - return HttpResponse(json.dumps(ft_names)) - -def store_pk_check(request, pk): - #Given a pk, return how many DatumStore objects we have with pk greater - pk = int(pk) if len(pk)>0 else 0 - n_stores = models.DatumStore.objects.filter(pk__gte=pk).count() - return HttpResponse(json.dumps(n_stores)) - - -@require_http_methods(["POST"]) -def import_elizan(request): - my_session = Session.objects.get(pk=request.session.session_key).get_decoded() - my_post = request.POST.copy()#mutable copy of POST - my_post.pop('csrfmiddlewaretoken', None) - selected_sub = my_post.has_key('subject_select') - subject_json = my_post['subject_json'] - #TODO: parse JSON - #TODO: import subject. +import json +import numpy as np +import datetime +from django.shortcuts import render, get_object_or_404 +from django.views.decorators.http import require_http_methods +from django.http import HttpResponse, HttpResponseRedirect#, Http404 +from django.contrib.sessions.models import Session +from django.core.serializers.json import DjangoJSONEncoder +from eerfapp import models +#import pdb +#from django.core.urlresolvers import reverse +#from django.template import RequestContext, loader + +#=============================================================================== +# Index. For now this is a redirect to something useful. +#=============================================================================== +def index(request): + #return render_to_response('eerfapp/index.html') + #pdb.set_trace() + request.session.flush() + return HttpResponseRedirect('/eerfapp/subject/') + +#=============================================================================== +# Helper functions (not views) +#=============================================================================== +def store_man_for_request_subject(request, pk): #Helper function to return a filtered DatumStore manager. 
+ store_man = models.DatumStore.objects.filter(datum__subject__pk=pk).filter(datum__span_type=1).filter(n_samples__gt=0) + my_session = models.Session.objects.get(pk=request.session.session_key).get_decoded() + if my_session.has_key('trial_start'): + store_man = store_man.filter(datum__start_time__gte=my_session['trial_start']) + if my_session.has_key('trial_stop'): + store_man = store_man.filter(datum__stop_time__lte=my_session['trial_stop']) + return store_man + +#=============================================================================== +# Rendering views +#=============================================================================== + +#/subject/, /subject/pk/, and /period/ are all automatic views. + +def subject_list(request): #View list of subjects and option to import + mySubjects = models.Subject.objects.all() + context = {'subject_list': mySubjects} + return render(request, 'eerfapp/subject_list.html', context) + +def subject_import(request): + #TODO: Get list of elizan subjects. Mark those that are already imported. + context = {'elizan_subjects': {} } + return render(request, 'eerfapp/subject_import.html', context) + +def view_data(request, pk):#View data for subject with pk + subject = get_object_or_404(models.Subject, pk=pk) + context = {'subject': subject} + return render(request, 'eerfapp/subject_view_data.html', context) + +def erps(request, trial_pk_csv='0'): + #convert trial_pk_csv to trial_pk_list + trial_pk_list = trial_pk_csv.split(',') + if len(trial_pk_list[0])>0: + trial_pk_list = [int(val) for val in trial_pk_list] + stores = models.DatumStore.objects.filter(pk__in=trial_pk_list) + #subject = get_object_or_404(Subject, pk=subject_id) + data = ','.join(['"' + str(st.datum_id) + '": ' + json.dumps(st.erp.tolist()) for st in stores]) + data = '{' + data + '}' + else: + data = '{}' + return render(request, 'eerfapp/erp_data.html',{'data': data}) + +#=============================================================================== +# API: GET or POST. Non-rendering. 
+#=============================================================================== +@require_http_methods(["GET", "POST"]) +def set_details(request, pk): + subject = get_object_or_404(models.Subject, pk=pk) + my_dict = request.POST.copy() + my_dict.pop('csrfmiddlewaretoken', None)#Remove the token provided by the POST command + for key in my_dict: + subject.update_ddv(key, my_dict[key]) + #return HttpResponseRedirect(reverse('eerfapp.views.monitor', args=(pk,))) + return HttpResponseRedirect(request.META['HTTP_REFERER']) + +@require_http_methods(["GET"]) +def get_detail_values(request, pk, detail_name, json_vals_only=True): + detail_type = get_object_or_404(models.DetailType, name=detail_name) + ddvs_man = models.DatumDetailValue.objects.filter(datum__subject__pk=pk).filter(detail_type=detail_type) + my_session = Session.objects.get(pk=request.session.session_key).get_decoded() + if my_session.has_key('trial_start'): + ddvs_man = ddvs_man.filter(datum__start_time__gte=my_session['trial_start']) + if my_session.has_key('trial_stop'): + ddvs_man = ddvs_man.filter(datum__stop_time__lte=my_session['trial_stop']) + if json_vals_only: + ddvs = [ddv.value for ddv in ddvs_man] + return HttpResponse(json.dumps(ddvs)) + else: + return ddvs_man + +@require_http_methods(["GET"]) +def get_feature_values(request, pk, feature_name, json_vals_only=True): + feature_type = get_object_or_404(models.FeatureType, name=feature_name) + dfvs_man = models.DatumFeatureValue.objects.filter(datum__subject__pk=pk).filter(feature_type=feature_type) + my_session = Session.objects.get(pk=request.session.session_key).get_decoded() + if my_session.has_key('trial_start'): + dfvs_man = dfvs_man.filter(datum__start_time__gte=my_session['trial_start']) + if my_session.has_key('trial_stop'): + dfvs_man = dfvs_man.filter(datum__stop_time__lte=my_session['trial_stop']) + if json_vals_only: + dfvs = [dfv.value for dfv in dfvs_man] + return HttpResponse(json.dumps(dfvs)) + else: + return dfvs_man + +def recalculate_feature(request, pk, feature_name): + trial_man = models.Datum.objects.filter(subject__pk=pk, span_type=1, store__n_samples__gt=0) + my_session = Session.objects.get(pk=request.session.session_key).get_decoded() + if my_session.has_key('trial_start'): + trial_man = trial_man.filter(start_time__gte=my_session['trial_start']) + if my_session.has_key('trial_stop'): + trial_man = trial_man.filter(stop_time__lte=my_session['trial_stop']) + for tr in trial_man: + tr.calculate_value_for_feature_name(feature_name) + return HttpResponse('success') + +@require_http_methods(["GET"]) +def count_trials(request, pk): #GET number of trials for subject. Uses session variables. Non-rendering + store_man = store_man_for_request_subject(request, pk) + return HttpResponse(json.dumps(store_man.count())) + +@require_http_methods(["GET"]) +def erp_data(request, pk): #Gets ERP data for a subject. Uses session variables. Non-rendering. + [x_min,x_max] = [-10,100.0] + + #Get the manager for datum_store... 
and reverse its order + store_man = store_man_for_request_subject(request, pk).order_by('-pk') + + #Get the last trial_limit trials + my_session = Session.objects.get(pk=request.session.session_key).get_decoded() + trial_limit = int(my_session['trial_limit']) if my_session.has_key('trial_limit') and int(my_session['trial_limit'])>0 else store_man.count() + store_man = store_man[0:trial_limit] + + #Return the channel_labels and data + if store_man.count()>0: + n_channels = [st.n_channels for st in store_man] + channel_labels = store_man[np.nonzero(n_channels==np.max(n_channels))[0][0]].channel_labels + data = dict([(chlb, [{'label': st.pk, 'data': np.column_stack((st.x_vec[np.logical_and(st.x_vec>=x_min,st.x_vec<=x_max)], st.data[channel_labels.index(chlb),np.logical_and(st.x_vec>=x_min,st.x_vec<=x_max)])).tolist()} for st in reversed(store_man)]) for chlb in channel_labels]) + else: + channel_labels = '' + data = {} + return HttpResponse(json.dumps({'data': data, 'channel_labels': channel_labels})) + +@require_http_methods(["GET"]) +def get_xy(request): + getter = request.GET.copy() + my_session = Session.objects.get(pk=request.session.session_key).get_decoded() + trial_man = models.Datum.objects.filter(subject__pk=getter['subject_pk'], span_type=1) + trial_man = trial_man.filter(start_time__gte=my_session['trial_start']) if my_session.has_key('trial_start') else trial_man + trial_man = trial_man.filter(stop_time__lte=my_session['trial_stop']) if my_session.has_key('trial_stop') else trial_man + trial_man = trial_man.filter(_detail_values__detail_type__name=getter['x_name']) + trial_man = trial_man.filter(_feature_values__feature_type__name=getter['y_name']) + trial_man = trial_man.distinct() + x = [float(tr._detail_values.get(detail_type__name='TMS_powerA').value) for tr in trial_man] + y = [tr._feature_values.get(feature_type__name='MEP_p2p').value for tr in trial_man] + data = [{ "label": getter['y_name'], "data": np.column_stack((x,y)).tolist()}] + return HttpResponse(json.dumps(data)) + +#GET or POST session dictionary. +@require_http_methods(["GET", "POST"]) +def my_session(request): + my_session = Session.objects.get(pk=request.session.session_key).get_decoded() + if request.method == 'GET': + return HttpResponse(json.dumps(my_session, cls=DjangoJSONEncoder)) + elif request.method == 'POST': + my_post = request.POST.copy()#mutable copy of POST + my_post.pop('csrfmiddlewaretoken', None) + + #Fix date values. 
%b->Sep, %d->11, %Y->2001, %X->locale time + #Returned as '1/25/2013 6:46:03 AM' + #date_format = '%b %d %Y %X'#date_format = '%Y-%m-%dT%H:%M:%S' + date_format = '%m/%d/%Y %I:%M:%S %p' + date_keys = ['trial_start', 'trial_stop'] + for key in date_keys: + my_post[key] = datetime.datetime.strptime(my_post[key], date_format) if my_post.has_key(key) else request.session.get(key, datetime.datetime.now()) + + #Other values to fix + my_post['monitor'] = my_post.has_key('monitor') if my_post.has_key('trial_start') else True + + #Put the values back into request.session + for key in my_post: request.session[key] = my_post[key] + return HttpResponseRedirect(request.META['HTTP_REFERER']) + +@require_http_methods(["GET"]) +def detail_types(request): + dts = models.DetailType.objects.all() + dt_names = [dt.name for dt in dts] + return HttpResponse(json.dumps(dt_names)) + +@require_http_methods(["GET"]) +def feature_types(request): + fts = models.FeatureType.objects.all() + ft_names = [ft.name for ft in fts] + return HttpResponse(json.dumps(ft_names)) + +def store_pk_check(request, pk): + #Given a pk, return how many DatumStore objects we have with pk greater + pk = int(pk) if len(pk)>0 else 0 + n_stores = models.DatumStore.objects.filter(pk__gte=pk).count() + return HttpResponse(json.dumps(n_stores)) + + +@require_http_methods(["POST"]) +def import_elizan(request): + my_session = Session.objects.get(pk=request.session.session_key).get_decoded() + my_post = request.POST.copy()#mutable copy of POST + my_post.pop('csrfmiddlewaretoken', None) + selected_sub = my_post.has_key('subject_select') + subject_json = my_post['subject_json'] + #TODO: parse JSON + #TODO: import subject. return HttpResponseRedirect(request.META['HTTP_REFERER']) \ No newline at end of file diff --git a/python/setup.py b/python/setup.py new file mode 100644 index 0000000..dd8ba0b --- /dev/null +++ b/python/setup.py @@ -0,0 +1,32 @@ +import os +from setuptools import setup, find_packages + +with open(os.path.join(os.path.dirname(__file__), '..', 'README.md')) as readme: + README = readme.read() + + +setup( + name='serf', + version='0.8', + packages=find_packages(), + include_package_data=True, + license='BSD License', # example license + description='A simple Django app to...', + long_description=README, + url='https://github.com/cboulay/SERF', + author='Chadwick Boulay', + author_email='chadwick.boulay@gmail.com', + classifiers=[ + 'Framework :: Django', + 'Intended Audience :: Developers', + 'Operating System :: OS Independent', + 'Programming Language :: Python', + ], + + entry_points={ + 'console_scripts': ['serf-shell=serf.scripts.djangoshell:main', + 'serf-makemigrations=serf.scripts.makemigrations:main', + 'serf-migrate=serf.scripts.migrate:main', + ], + } +) diff --git a/eerfapp.sql b/serf.sql similarity index 100% rename from eerfapp.sql rename to serf.sql diff --git a/standalone.py b/standalone.py deleted file mode 100644 index 040a680..0000000 --- a/standalone.py +++ /dev/null @@ -1,16 +0,0 @@ -#Here are some examples of how you can run a standalone application using this project's ORM. -import sys -import django -import os - - -apppath = os.path.join(os.path.expanduser('~'), "django_eerf", "expdb") -sys.path.insert(0, apppath) -os.environ.setdefault("DJANGO_SETTINGS_MODULE", "expdb.settings") -django.setup() -from eerfapp.models import * - - -print(Subject.objects.get_or_create(name='Test')[0]) -# ft = ('HR_aaa', 'H-reflex avg abs amp') -# myFT = FeatureType.objects.filter(name=ft[0])