Replies: 7 comments 1 reply
-
Hi,
I think the dry-run flag is an important feature. I use the command line for most of the steps, and a dry run lets me see the latest features and the changes made since the previous version.
Thanks,
Assaf
-
Hi, thanks for your feedback. I'm not entirely sure I understand the benefits of having a "dry run" for a command-line program with a master configuration file. When you run the pipeline command, you need to provide the YAML, which contains all the necessary parameters for your run. Those parameters are exactly the ones you would see being passed to the command-line.
-
Hi,
I usually don't run FragPipe from the command line; I run the different steps on Linux servers through the command line. I use the dry run to follow the commands of any new version, or when I want to execute a new flow like TMT. The dry run lets me track the logic of the flow, the commands of the different packages, and the order of execution. Then I can implement the steps on the command line and add my own intermediate steps if I need to. Maybe it is more useful for "advanced" users... The YAML file doesn't hold, for example, the folder tree, but when you follow the dry-run commands you understand better which steps are executed on which segment of the data.
Anyway, I like this option.
Assaf
-
FragPipe generates that for all steps (not just philosopher), so you can try using that in the meantime, until Felipe is able to add it to his philosopher YAML pipeline.
By the way, you can run FragPipe on Linux if you have X forwarding.
Alexey
Beta Was this translation helpful? Give feedback.
-
@prvst I guess the utility of the dry run would be to check whether your parameters file actually corresponds to the expected commands, and/or to allow debugging by running them independently. If, for instance, the 6th command in the pipeline is failing, I could run the first 5 and then run the 6th with different parameters for debugging purposes. An additional use would be the (hopefully never-happening) case where philosopher is ignoring one of the parameters in the .yml: it would be possible to notice that easily, instead of looking into the headers of the .pep.xml files and spotting a missing flag. An alternative would be to allow an additional argument to run a specific command with the same parameters file, so you could do something like:
philosopher pipeline --config myparams.yml --subcommand peptideprophet folder1/ folder2/
@anesvi I have indeed tried running the dry run in FragPipe using X forwarding, and the issue I see is that a lot of the copies/directory changes that happen between steps of the pipeline are not explicit in the dry run, so I end up having to manually translate several of the Java calls and comments into system equivalents to get it to run as a bash script. This is why I was wondering whether a pure-philosopher alternative was available, since the API is really clean and would cover a lot of these requirements. I still love the GUI for simple experiments that run on a desktop workstation, but when the number of raw files gets past 100, moving to the cluster and making smaller jobs is just necessary. Thanks for the quick reply!
Keep up the great work (sorry, I don't know Go well enough to help more directly with a PR).
Kindest wishes,
Sebastian
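The debugging workflow described above (run the first five steps, then re-run a failing sixth step on its own) could be sketched as a small bash helper that replays a dry-run-style command list starting from a given step. Everything here is hypothetical: `run_steps` is not a philosopher feature, and the step commands would be whatever a dry run actually prints.

```shell
#!/usr/bin/env bash
# Sketch: replay a dry-run-style command list one step at a time, so a
# failing step can be re-run on its own. The commands passed in are
# placeholders, not real philosopher invocations.
set -euo pipefail

run_steps() {                 # usage: run_steps <first_step> <cmd>...
  local first=$1; shift
  local i=1
  for cmd in "$@"; do
    if [ "$i" -ge "$first" ]; then
      echo "step $i: $cmd"    # show the command string before running it
      bash -c "$cmd"
    fi
    i=$((i + 1))
  done
}
```

For example, `run_steps 6 "${steps[@]}"` would skip the first five commands and start at the failing one, which could also be edited to carry different parameters.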
-
Dear Sebastian,
Thank you for the feedback.
Felipe, let's discuss this sometime. I know your initial thinking was that the YAML file is sufficient, but clearly we have some interest from the users in generating the list of commands that are executed with a particular YAML file.
Best,
Alexey
-
To implement a "dry run," I would have to move some pieces around. Something easier, which would give you the same final result, is the addition of a verbose mode: you would run the pipeline the same way you do now, and for each command executed, the program would print the cmd string to the screen/log along with the rest of the processing. Would that work for you?
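The verbose-mode idea above could look roughly like the following wrapper: print each command string, then execute it (or, with a dry-run switch, only print). This is only a sketch; `run_cmd` and the `DRY_RUN` variable are illustrative, not an existing philosopher interface.

```shell
#!/usr/bin/env bash
# Sketch of a verbose mode: log each command string before executing it.
# With DRY_RUN=1 the command is only printed, as a --dry-run would do.
# run_cmd is a hypothetical helper, not part of philosopher.

run_cmd() {
  echo "[pipeline] $*"             # mirror the cmd string to screen/log
  if [ "${DRY_RUN:-0}" != "1" ]; then
    "$@"                           # execute the command as given
  fi
}
```

Under this design the two proposals converge: verbose mode always prints the line, and a dry-run flag simply skips the execution half of the same wrapper.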
-
I would like to know if there is any desire/interest in adding a --dry-run flag or equivalent to the pipeline sub-command, which would yield the sequential commands that would be run by the pipeline.
Is your feature request related to a problem? Please describe.
I am running philosopher on an HPC cluster, and it would be useful to have a way to generate the commands that pipeline will run, so that I can run them in different instances/jobs.
Describe the solution you'd like
It would be nice to have a CLI interface to what FragPipe does when the dry-run checkbox is set, where all philosopher commands are printed to the console.
Describe alternatives you've considered
So far I have attempted to use FragPipe on my local system and modify the paths manually, which I feel is not optimal.
Is there any alternative way to know the commands that pipeline would use, so as to run them as independent commands (still respecting the order and file paths...)?
Highly appreciate your help!
Sebastian