some fixes #3341

Merged: 4 commits, Dec 18, 2023

Changes from 3 commits
15 changes: 15 additions & 0 deletions CHANGELOG.md
@@ -1,5 +1,20 @@
# Qiita changelog

Version 2023.12
---------------

* The sample and preparation information pages now display the timestamp of their last update.
* Added a ProcessingJob.complete_processing_job method to retrieve the job that is completing the current job.
* Added a ProcessingJob.complete_processing_job method to retrieve the job that is completing the current job.
Contributor comment: This line looks like a duplicate of the one above.

* Added a ProcessingJob.trace method to trace all the jobs of a processing_job; a usage sketch of these ProcessingJob additions appears after this file's diff.
* Analyses now accept SLURM reservations via the GUI; this will be [helpful for workshops or classes](https://qiita.ucsd.edu/static/doc/html/faq.html#are-you-planning-a-workshop-or-class).
* Admins can now add per-user-level SLURM submission parameters via the DB; this is helpful to prioritize wet-lab and admin jobs.
* Workflow definitions can now use sample or preparation information columns/values to determine which workflows apply to a given preparation.
* Updated the Adapter and host filtering plugin (qp-fastp-minimap2) to v2023.12 addressing a bug in adapter filtering; [more information](https://qiita.ucsd.edu/static/doc/html/processingdata/qp-fastp-minimap2.html).
* Other fixes: [3334](https://github.com/qiita-spots/qiita/pull/3334), [3338](https://github.com/qiita-spots/qiita/pull/3338). Thank you @sjanssen2.
* The internal Sequence Processing Pipeline now uses the human pan-genome reference, together with the GRCh38 genome + PhiX and the CHM13 genome, for human host filtering.


Version 2023.10
---------------

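To make the new ProcessingJob helpers listed in the changelog concrete, here is a minimal usage sketch. It assumes ProcessingJob is constructed from a job id, that complete_processing_job is exposed as a property, and that trace() returns an iterable of related ProcessingJob objects; the job id and these signature details are assumptions rather than confirmed parts of qiita_db.processing_job, so the real API may differ.

```python
# Hedged usage sketch of the ProcessingJob additions described in the
# changelog above. The job id is hypothetical, and the exact signatures
# (property vs. method, return types) are assumptions.
from qiita_db.processing_job import ProcessingJob

job = ProcessingJob('063e553b-327c-4818-ab4a-adfe58e49860')  # hypothetical id

# complete_processing_job: the job that is completing (registering the
# results of) the current job, per the changelog entry; may be None if no
# completion job exists yet.
completer = job.complete_processing_job
if completer is not None:
    print('being completed by:', completer.id, completer.status)

# trace: walk all the jobs connected to this processing job.
for related in job.trace():
    print(related.id, related.status)
```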
2 changes: 1 addition & 1 deletion qiita_core/__init__.py
@@ -6,4 +6,4 @@
# The full license is in the file LICENSE, distributed with this software.
# -----------------------------------------------------------------------------

__version__ = "2023.10"
__version__ = "2023.12"
2 changes: 1 addition & 1 deletion qiita_db/__init__.py
@@ -27,7 +27,7 @@
from . import user
from . import processing_job

__version__ = "2023.10"
__version__ = "2023.12"

__all__ = ["analysis", "artifact", "archive", "base", "commands",
"environment_manager", "exceptions", "investigation", "logger",
14 changes: 12 additions & 2 deletions qiita_db/metadata_template/prep_template.py
@@ -851,20 +851,30 @@ def _get_predecessors(workflow, node):
starting_job = None
pt_artifact = self.artifact.artifact_type

workflows = []
all_workflows = [wk for wk in qdb.software.DefaultWorkflow.iter()]
# are there any workflows with parameters?
check_requirements = False
default_parameters = {'prep': {}, 'sample': {}}
if [wk for wk in all_workflows if wk.parameters != default_parameters]:
check_requirements = True
ST = qdb.metadata_template.sample_template.SampleTemplate
for wk in qdb.software.DefaultWorkflow.iter():
workflows = []
for wk in all_workflows:
if wk.artifact_type == pt_artifact and pt_dt in wk.data_type:
if check_requirements and wk.parameters == default_parameters:
continue
wk_params = wk.parameters
reqs_satisfied = True

if wk_params['sample']:
check_requirements = True
df = ST(self.study_id).to_dataframe(samples=list(self))
for k, v in wk_params['sample'].items():
if k not in df.columns or v not in df[k].unique():
reqs_satisfied = False

if wk_params['prep']:
check_requirements = True
df = self.to_dataframe()
for k, v in wk_params['prep'].items():
if k not in df.columns or v not in df[k].unique():
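The requirement check added above only offers a default workflow when every sample/prep column it requires exists in the corresponding template and contains the required value. Below is a minimal, self-contained sketch of that matching logic using a plain pandas DataFrame in place of Qiita's SampleTemplate/PrepTemplate objects; the column name and values are made up for illustration.

```python
# Standalone sketch of the workflow-requirement check added above, using a
# plain pandas DataFrame instead of Qiita's template objects. The column
# name and values below are illustrative only.
import pandas as pd


def requirements_satisfied(requirements, df):
    """True if df has every required column and it contains the required value."""
    for column, value in requirements.items():
        if column not in df.columns or value not in df[column].unique():
            return False
    return True


# Hypothetical sample information and a workflow that requires
# env_package == 'human-gut'.
sample_df = pd.DataFrame({'env_package': ['human-gut', 'human-gut']})
wk_params = {'sample': {'env_package': 'human-gut'}, 'prep': {}}

offer_workflow = (not wk_params['sample']
                  or requirements_satisfied(wk_params['sample'], sample_df))
print(offer_workflow)  # True -> this workflow would be listed for the prep
```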
14 changes: 12 additions & 2 deletions qiita_db/processing_job.py
@@ -999,7 +999,9 @@ def submit(self, parent_job_id=None, dependent_jobs_list=None):
qdb.sql_connection.TRN.commit()

job_dir = join(qdb.util.get_work_base_dir(), self.id)
software = self.command.software
command = self.command
software = command.software
cname = command.name
plugin_start_script = software.start_script
plugin_env_script = software.environment_script

@@ -1011,7 +1013,15 @@
# case where we are going to execute some command and then wait for the
# plugin to return their own id (first implemented for
# fast-bowtie2+woltka)
if 'ENVIRONMENT' in plugin_env_script:
#
# These are the hardcoded lines described in issue:
# https://github.com/qiita-spots/qiita/issues/3340
# The idea is that in the future we shouldn't check specific command
# names to know whether a command should be executed differently;
# instead, the plugin should let Qiita know whether a specific command
# should be run as a job array or not.
cnames_to_skip = {'Calculate Cell Counts'}
if 'ENVIRONMENT' in plugin_env_script and cname not in cnames_to_skip:
# the job has to be in running state so the plugin can change its
# status
with qdb.sql_connection.TRN:
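The hardcoded exception above reduces to a small predicate: use the plugin-managed ENVIRONMENT submission path only when the environment script contains 'ENVIRONMENT' and the command is not in the skip set. A minimal sketch of that decision follows; the script and command strings are illustrative, not taken from real plugins.

```python
# Sketch of the dispatch decision added above. The environment-script and
# command-name strings are illustrative, not real plugin values.
CNAMES_TO_SKIP = {'Calculate Cell Counts'}


def uses_environment_path(plugin_env_script, command_name):
    """True when the job should follow the plugin-managed ENVIRONMENT path."""
    return ('ENVIRONMENT' in plugin_env_script
            and command_name not in CNAMES_TO_SKIP)


print(uses_environment_path('export ENVIRONMENT="source activate qp-x"',
                            'Some other command'))         # True
print(uses_environment_path('export ENVIRONMENT="source activate qp-x"',
                            'Calculate Cell Counts'))      # False
```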
2 changes: 1 addition & 1 deletion qiita_pet/__init__.py
@@ -6,4 +6,4 @@
# The full license is in the file LICENSE, distributed with this software.
# -----------------------------------------------------------------------------

__version__ = "2023.10"
__version__ = "2023.12"
2 changes: 1 addition & 1 deletion qiita_pet/handlers/api_proxy/__init__.py
@@ -38,7 +38,7 @@
from .user import (user_jobs_get_req)
from .util import check_access, check_fp

__version__ = "2023.10"
__version__ = "2023.12"

__all__ = ['prep_template_summary_get_req', 'data_types_get_req',
'study_get_req', 'sample_template_filepaths_get_req',
@@ -45,7 +45,7 @@ Shotgun sequencing
------------------

Qiita currently has one active shotgun metagenomics data analysis pipeline: a per sample
bowtie2 alignment step with Woltka classification using either the WoLr1, WoLr2 (default) or RS210 databases.
bowtie2 alignment step with Woltka classification using either the WoLr2 (default) or RS210 databases.
Below you will find more information about each of these options.

.. note::
@@ -87,7 +87,7 @@ Note that the command produces up to 6 output artifacts based on the aligner and

- Alignment Profile: contains the raw alignment file and the no rank classification BIOM table
- Per genome Predictions: contains the per genome level predictions BIOM table
- Per gene Predictions: Only WoLr1 & WoLr2, contains the per gene level predictions BIOM table
- Per gene Predictions: Only WoLr2, contains the per gene level predictions BIOM table
- KEGG Pathways: Only WoLr2, contains the functional profile
- KEGG Ontology (KO): Only WoLr2, contains the functional profile
- KEGG Enzyme (EZ): Only WoLr2, contains the functional profile
4 changes: 2 additions & 2 deletions qiita_pet/templates/workflows.html
@@ -73,9 +73,9 @@ <h3>Recommended Default Workflows</h3>
default Earth Microbiome Project protocol and so assumes the uploaded data are multiplexed sequences with the reversed barcodes in your mapping file and index sequence
file (<a href="https://earthmicrobiome.org/protocols-and-standards/" target="_blank">see here</a> for more details). Thus, if the protocol does not apply to your data
you can still use the Default Workflow, however, you should first manually process your data using the appropriate steps until you have a defined step; in our example,
demultiplexed your reads. After demultiplexing the Default Workflow is safe to use with any protocol.
demultiplex your reads. After demultiplexing, the Default Workflow is safe to use with any protocol.
<br/><br/>
If you have already manually performed one of the processing steps in the Defaul Workflow pipeline, the "Add Default Workflow" button will not re-select those steps but
If you have already manually performed one of the processing steps in the Default Workflow pipeline, the "Add Default Workflow" button will not re-select those steps but
instead will only select any remaining steps that have not been completed. You can also add additional workflows on top of the recommended Default Workflow at any time.
<br/><br/>
Note that this is not a full inclusive list of data types accepted by Qiita but only those that have a defined workflow.
2 changes: 1 addition & 1 deletion qiita_ware/__init__.py
@@ -6,4 +6,4 @@
# The full license is in the file LICENSE, distributed with this software.
# -----------------------------------------------------------------------------

__version__ = "2023.10"
__version__ = "2023.12"
2 changes: 1 addition & 1 deletion setup.py
@@ -10,7 +10,7 @@
from setuptools import setup
from glob import glob

__version__ = "2023.10"
__version__ = "2023.12"


classes = """