Commit

updated readme
AndreaSalati committed Oct 1, 2024
1 parent 9910639 commit d32d140
Showing 3 changed files with 12 additions and 27 deletions.
2 changes: 1 addition & 1 deletion MLE_droin.ipynb
@@ -925,7 +925,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.18"
"version": "3.9.16"
}
},
"nbformat": 4,
2 changes: 1 addition & 1 deletion README.md
@@ -2,7 +2,7 @@

## Overview
This repository is dedicated to the inference of lobular coordinates from single-cell RNA sequencing (scRNA-seq) data. Our focus is on implementing and comparing three distinct computational approaches to address this challenge. This repository contains the code used in the study:
- [*A sexually dimorphic hepatic cycle of very low density lipoprotein uptake and assembly*](https://www.biorxiv.org/content/10.1101/2023.10.07.561324v2.abstract).
+ [*A sexually dimorphic hepatic cycle of periportal VLDL generation and subsequent pericentral VLDLR-mediated re-uptake*](https://www.nature.com/articles/s41467-024-52751-2).

## Code for the article
1. `MLE.ipynb` is the notebook that assigns the lobular coordinate to cells based on their transcriptome. This coordinate appears in several figures of the paper.
35 changes: 10 additions & 25 deletions module/torch_models.py
@@ -25,8 +25,8 @@ def training(data, x_unif, coef_pau, n_c, dm, clamp, n_iter, batch_size, dev):
Returns:
tuple: A tuple containing the final values of the model's parameters and the loss values:
- x_final (numpy.ndarray): Final optimized values of x.
-    - a0_final (numpy.ndarray): Final optimized values of a0, one of the PAU coefficients.
-    - a1_final (numpy.ndarray): Final optimized values of a1, another PAU coefficient.
+    - a0_final (numpy.ndarray): Final optimized values of a0, the intercepts.
+    - a1_final (numpy.ndarray): Final optimized values of a1, the slopes.
- disp_final (numpy.ndarray): Final dispersion values after optimization.
- losses (list): List of loss values recorded at each training iteration.
@@ -160,7 +160,14 @@ def loss_clamp_batch(x, a0, a1, disp, batch_size, mp, DATA):
a0 is sample-specific; disp and a1 are only gene-specific.
If you want to use all datapoints, set batch_size = DATA.shape[0].
For the 'clamp' genes, the slope coefficient a1 is set to the fixed value (1).
+    Parameters:
+    x (array-like): The latent variable x.
+    a0 (array-like): The intercepts a0.
+    a1 (array-like): The slopes a1.
+    disp (array-like): The dispersion values.
+    batch_size (int): The size of the batch for training.
+    mp (dict): A dictionary containing the mask, fix, and other parameters.
+    DATA (array-like): The count matrix used for training.
"""
NC = DATA.shape[0]
# killing the gradient
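The "killing the gradient" comment above refers to the slope clamping shared by these losses. A minimal sketch of that mechanism under assumed, illustrative shapes (mask, clamp, and fix stand in for the mp.mask, mp.clamp, and mp.fix fields used in the surrounding code):

import torch

NG = 5                                    # number of genes (illustrative)
mask = torch.eye(NG)                      # maps slope parameters to genes
a1 = torch.randn(NG, requires_grad=True)  # learnable slopes
clamp = torch.tensor([0, 3])              # indices of the 'clamp' genes
fix = 1.0                                 # the fixed slope value (1)

a1_ = torch.matmul(mask, a1)              # per-gene slopes
a1_[clamp] = fix                          # overwriting with a constant detaches
                                          # these entries from the graph
(a1_ ** 2).sum().backward()               # toy loss, just to inspect gradients
print(a1.grad)                            # zero at the clamped indices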
@@ -242,28 +249,6 @@ def loss_simple(x, a0, a1, disp, mp, DATA):
return -NB.log_prob(DATA[:, :]).sum()


- # def loss_simple_batch(x, a0, a1, disp, batch_size, mp, DATA):
-
- #     NC = DATA.shape[0]
- #     # killing the gradient
- #     a1_ = torch.matmul(mp.mask, a1)
- #     a1_[mp.clamp] = mp.fix
-
- #     idx = torch.randperm(DATA.shape[0])[:batch_size]
- #     y = x[idx, None] * a1_[None, :] + a0[None, :] + mp.log_n_UMI[idx, None]
- #     alpha = torch.exp(disp)
-
- #     y = mp.cutoff * torch.tanh(y / mp.cutoff)
- #     lmbda = torch.exp(y)
-
- #     r = 1 / alpha
- #     p = alpha * lmbda / (1 + alpha * lmbda)
- #     NB = torch.distributions.NegativeBinomial(
- #         total_count=r, probs=p, validate_args=None
- #     )
- #     return -NB.log_prob(DATA[idx, :]).sum() * (NC / batch_size)
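The commented-out block removed above also documents the negative-binomial parameterization the remaining losses rely on: the dispersion is stored on log scale, and the mean lmbda is converted to torch's (total_count, probs) convention. A self-contained sketch with illustrative numbers:

import torch

disp = torch.tensor(-1.0)                # log-dispersion, as the models optimize it
lmbda = torch.tensor([2.0, 5.0])         # NB means for two observations

alpha = torch.exp(disp)                  # dispersion on the natural scale
r = 1 / alpha                            # total_count
p = alpha * lmbda / (1 + alpha * lmbda)  # per-trial success probability

NB = torch.distributions.NegativeBinomial(total_count=r, probs=p)
print(NB.mean)                           # recovers lmbda
print(-NB.log_prob(torch.tensor([1.0, 4.0])).sum())  # the quantity being minimized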


def loss_gene_selection(x, a0, a1, disp, batch_size, mp, DATA):
"""
This loss function does not optimize x, but only a0 and a1.
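A hedged sketch of what "does not optimize x" can look like in practice; names and sizes are illustrative, not the repository's actual training loop:

import torch

x = torch.randn(100)                      # lobular coordinates, kept frozen
a0 = torch.zeros(50, requires_grad=True)  # per-gene intercepts
a1 = torch.zeros(50, requires_grad=True)  # per-gene slopes

opt = torch.optim.Adam([a0, a1], lr=0.1)  # x is deliberately excluded
# each step would evaluate the NB loss above and call opt.step()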
