
[REVIEW]: statConfR: An R Package for Static Models of Decision Confidence and Metacognition #6966

editorialbot opened this issue Jul 7, 2024 · 65 comments
Labels: R, recommend-accept, review, TeX, Track: 4 (SBCS) Social, Behavioral, and Cognitive Sciences

Comments

@editorialbot

editorialbot commented Jul 7, 2024

Submitting author: @ManuelRausch (Manuel Rausch)
Repository: https://github.com/ManuelRausch/StatConfR
Branch with paper.md (empty if default branch):
Version: 0.2.0
Editor: @samhforbes
Reviewers: @haoxue-fan, @christinamaher
Archive: 10.17605/OSF.IO/EQ3XK

Status


Status badge code:

HTML: <a href="https://joss.theoj.org/papers/dbac9fa6ff476bf8eda77f12aa077192"><img src="https://joss.theoj.org/papers/dbac9fa6ff476bf8eda77f12aa077192/status.svg"></a>
Markdown: [![status](https://joss.theoj.org/papers/dbac9fa6ff476bf8eda77f12aa077192/status.svg)](https://joss.theoj.org/papers/dbac9fa6ff476bf8eda77f12aa077192)

Reviewers and authors:

Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)

Reviewer instructions & questions

@haoxue-fan & @christinamaher, your review will be checklist based. Each of you will have a separate checklist that you should update when carrying out your review.
First of all you need to run this command in a separate comment to create the checklist:

@editorialbot generate my checklist

The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. Any questions/concerns please let @samhforbes know.

Please start on your review when you are able, and be sure to complete your review in the next six weeks, at the very latest.

Checklists

📝 Checklist for @haoxue-fan

📝 Checklist for @christinamaher

editorialbot added the R, review, TeX, and Track: 4 (SBCS) Social, Behavioral, and Cognitive Sciences labels on Jul 7, 2024
@editorialbot

Hello humans, I'm @editorialbot, a robot that can help you with some common editorial tasks.

For a list of things I can do to help you, just type:

@editorialbot commands

For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:

@editorialbot generate pdf

@editorialbot

Software report:

github.com/AlDanial/cloc v 1.90  T=0.02 s (1776.2 files/s, 241132.4 lines/s)
-------------------------------------------------------------------------------
Language                     files          blank        comment           code
-------------------------------------------------------------------------------
R                               26            410            710           2513
TeX                              3             54             10            552
Markdown                         2             44              0            156
YAML                             1              1              4             18
Rmd                              1              2              6              0
-------------------------------------------------------------------------------
SUM:                            33            511            730           3239
-------------------------------------------------------------------------------

Commit count by author:

    67	Manuel Rausch
     8	Hellmann

@editorialbot

Paper file info:

📄 Wordcount for paper.md is 794

✅ The paper includes a Statement of need section

@editorialbot

License info:

🟡 License found: GNU General Public License v3.0 (Check here for OSI approval)

@editorialbot

Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- 10.1016/j.concog.2017.02.007 is OK
- 10.1038/s41467-021-23540-y is OK
- 10.3758/s13414-017-1431-5 is OK
- 10.1038/s41562-019-0813-1 is OK
- 10.1016/j.concog.2011.09.021 is OK
- 10.3758/s13414-021-02284-3 is OK
- 10.1037/a0019737 is OK
- 10.1093/nc/nix007 is OK
- 10.1037/rev0000249 is OK
- 10.7554/eLife.75420 is OK
- 10.1007/978-3-642-45190-4_3 is OK
- 10.1093/nc/niw002 is OK
- 10.1038/s41467-022-31727-0 is OK
- 10.1016/j.cognition.2020.104522 is OK
- 10.1016/j.neuroimage.2020.116963 is OK
- 10.1038/s41562-022-01464-x is OK
- 10.1037/rev0000411 is OK
- 10.1177/17456916221075615 is OK
- 10.1037/xge0001524 is OK
- 10.1121/1.1907783 is OK
- 10.1037/met0000634 is OK
- 10.31234/osf.io/5ze8t is OK

MISSING DOIs

- No DOI given, and none found for title: Signal detection theory and psychophysics
- No DOI given, and none found for title: Detection theory: A user’s guide

INVALID DOIs

- None

@editorialbot

👉📄 Download article proof 📄 View article proof on GitHub 📄 👈

@samhforbes

Hi @haoxue-fan, @christinamaher, this is our review thread.
Feel free to raise any issues that need addressing according to the reviewer checklist as individual issues in the software repo and link back here.
If there are any larger comments you need to make, or anything you want me to look at, you can of course post here, or ping me with questions. Thanks again for agreeing to review.

@samhforbes

Hi @haoxue-fan, @christinamaher, I just thought I'd check in and see how things were coming along. Please ping me if there's anything you need.

@christinamaher

Hi @samhforbes! Thanks for checking in, and apologies for the delay. I am returning today from conference travel. I aim to complete this by the end of the week.

@haoxue-fan

haoxue-fan commented Jul 26, 2024

Review checklist for @haoxue-fan

Conflict of interest

  • I confirm that I have read the JOSS conflict of interest (COI) policy and that: I have no COIs with reviewing this work or that any perceived COIs have been waived by JOSS for the purpose of this review.

Code of Conduct

General checks

  • Repository: Is the source code for this software available at https://github.com/ManuelRausch/StatConfR?
  • License: Does the repository contain a plain-text LICENSE or COPYING file with the contents of an OSI approved software license?
  • Contribution and authorship: Has the submitting author (@ManuelRausch) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?
  • Substantial scholarly effort: Does this submission meet the scope eligibility described in the JOSS guidelines?
  • Data sharing: If the paper contains original data, data are accessible to the reviewers. If the paper contains no original data, please check this item.
  • Reproducibility: If the paper contains original results, results are entirely reproducible by reviewers. If the paper contains no original results, please check this item.
  • Human and animal research: If the paper contains original data research on human subjects or animals, does it comply with JOSS's human participants research policy and/or animal research policy? If the paper contains no such data, please check this item.

Functionality

  • Installation: Does installation proceed as outlined in the documentation?
  • Functionality: Have the functional claims of the software been confirmed?
  • Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)

Documentation

  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
  • Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems).
  • Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
  • Automated tests: Are there automated tests or manual steps described so that the functionality of the software can be verified?
  • Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support

Software paper

  • Summary: Has a clear description of the high-level functionality and purpose of the software for a diverse, non-specialist audience been provided?
  • A statement of need: Does the paper have a section titled 'Statement of need' that clearly states what problems the software is designed to solve, who the target audience is, and its relation to other work?
  • State of the field: Do the authors describe how this software compares to other commonly-used packages?
  • Quality of writing: Is the paper well written (i.e., it does not require editing for structure, language, or writing quality)?
  • References: Is the list of references complete, and is everything cited appropriately that should be cited (e.g., papers, datasets, software)? Do references in the text use the proper citation syntax?

@christinamaher

christinamaher commented Jul 31, 2024

Review checklist for @christinamaher

Conflict of interest

  • I confirm that I have read the JOSS conflict of interest (COI) policy and that: I have no COIs with reviewing this work or that any perceived COIs have been waived by JOSS for the purpose of this review.

Code of Conduct

General checks

  • Repository: Is the source code for this software available at https://github.com/ManuelRausch/StatConfR?
  • License: Does the repository contain a plain-text LICENSE or COPYING file with the contents of an OSI approved software license?
  • Contribution and authorship: Has the submitting author (@ManuelRausch) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?
  • Substantial scholarly effort: Does this submission meet the scope eligibility described in the JOSS guidelines?
  • Data sharing: If the paper contains original data, data are accessible to the reviewers. If the paper contains no original data, please check this item.
  • Reproducibility: If the paper contains original results, results are entirely reproducible by reviewers. If the paper contains no original results, please check this item.
  • Human and animal research: If the paper contains original data research on human subjects or animals, does it comply with JOSS's human participants research policy and/or animal research policy? If the paper contains no such data, please check this item.

Functionality

  • Installation: Does installation proceed as outlined in the documentation?
  • Functionality: Have the functional claims of the software been confirmed?
  • Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)

Documentation

  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
  • Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems).
  • Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
  • Automated tests: Are there automated tests or manual steps described so that the functionality of the software can be verified?
  • Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support

Software paper

  • Summary: Has a clear description of the high-level functionality and purpose of the software for a diverse, non-specialist audience been provided?
  • A statement of need: Does the paper have a section titled 'Statement of need' that clearly states what problems the software is designed to solve, who the target audience is, and its relation to other work?
  • State of the field: Do the authors describe how this software compares to other commonly-used packages?
  • Quality of writing: Is the paper well written (i.e., it does not require editing for structure, language, or writing quality)?
  • References: Is the list of references complete, and is everything cited appropriately that should be cited (e.g., papers, datasets, software)? Do references in the text use the proper citation syntax?

@christinamaher

christinamaher commented Jul 31, 2024

Hi @samhforbes! I believe I've gone through the relevant points in the checklist above. The issues (related to the points I left unchecked above) are as follows:
General Checks - Human Research Approvals: The dataset provided is from a human experiment. Although it appears to have been previously published, the documentation does not provide details regarding informed consent for data sharing or ethical approvals.

Documentation - Functionality Documentation: The code would benefit significantly from improved documentation.
  • Function Documentation: Each function lacks an explanation of its purpose, parameters, and outputs.
  • README File: The README file should provide a clearer guide to the package's components, including how to use the functions effectively.

Software Paper - Summary: The paper lacks a high-level summary of the package's primary use case. Including this summary would help readers understand the main objectives and applications of the package.
  • Model Descriptions: The authors should consider providing a detailed description of the models included in the package and an explanation of the rationale behind the choice of models and the fitting procedures used.
  • Visual Aids: Incorporating figures or schematics that visualize the models and the results would greatly enhance the paper's readability and the package's usability.
  • Plotting Utilities: Consider adding plotting utilities to the package to facilitate visualization of results.

@samhforbes

Great, thanks @christinamaher
@ManuelRausch, while we wait for feedback from @haoxue-fan, there's some really useful feedback here to work on.

@haoxue-fan

Thanks for the opportunity to review the package, and sorry for the delayed response! I first want to applaud the authors @ManuelRausch for their effort in putting together this package - it is super useful for researchers in relevant fields, both in encouraging them to try out different models and in lowering the coding barrier. I was able to load the R package and run the code as stated in the README without difficulty. However, I have a couple of comments, listed below, mostly related to the writing and documentation aspects of the package that I think are worth improving:

  • Documentation: I share the same feeling as @christinamaher that the package can benefit greatly from more documentation. The way the paper is currently written, and the README file, make me feel that the author assumes a relatively high degree of expertise in the related research domain in the audience, which may or may not be true. Therefore, I think it would be great to provide a step-by-step walkthrough example in the paper. This could be the current usage example described in the README, but probably with more text description, e.g., what the output means, which functions can be used, and in what order the functions should be called.

  • Relatedly, I notice that the function help pages contain descriptions of the input data format as well as a detailed description of each model. It would be great to make this explicit (e.g., mention that users can refer to the XXX page, or paraphrase them concisely in the paper/README).

  • Is it possible for users to implement their own models? Is there any guideline on how to do that, and is there any platform for people to share models with each other? I understand that this may be too much to ask, but either way (whether community contributions are allowed or not), it may be worth spending some ink describing whether users can implement their own customized models, whether this is recommended, and what tips can be shared.

  • When I was running the example code, the function fitConfModels took a while to run. I am not sure whether this is normal or whether my computer is too old. Either way, it may be worth adding a note to the README about the running time so that users can calibrate their expectations (I was anxiously checking whether my R session was still working, since there was no output for a while). A rough way to measure this is sketched below.
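A rough sketch of how such a timing note could be produced is below. It only relies on base R's system.time(); the data frame df and the model names passed to fitConfModels are placeholders I made up for illustration, so the actual arguments should be taken from the package documentation rather than from this snippet.

# Hedged sketch: time a call to fitConfModels() so the README can report a rough expected runtime.
# `df` and the model names are placeholders; see the statConfR documentation for the real arguments.
library(statConfR)

timing <- system.time({
  fits <- fitConfModels(data = df, models = c("SDT", "WEV"))  # hypothetical data and model choice
})
print(timing["elapsed"])  # wall-clock seconds; handy for a "this may take several minutes" note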

@ManuelRausch

Thank you very much @christinamaher and @haoxue-fan for your feedback. I am sorry that I have not been responsive; I was distracted. I will work on a revision as soon as I can.
[image]

@samhforbes

Oh how lovely @ManuelRausch. Enjoy this wonderful period!

@samhforbes

Hi @ManuelRausch, hope things are well with you and the little one. Not chasing here at all, but I thought I'd check in and see when you might be able to look at this.

@ManuelRausch

ManuelRausch commented Sep 20, 2024

I am back to doing scientific work and I'll be working on a revision.

@samhforbes

Hi @ManuelRausch, that's great - let me know if I can answer any queries in the meantime.

@samhforbes

Hi @ManuelRausch, just checking in to see how things are going here.

@ManuelRausch

ManuelRausch commented Dec 8, 2024

> Hi @ManuelRausch, just checking in to see how things are going here.

I'm so sorry for the long delay. We have now finished a revision of the package.

@ManuelRausch

@haoxue-fan. Thank you very much! Your feedback is much appreciated.

@ManuelRausch

> Documentation: I share the same feeling as @christinamaher that the package can benefit greatly from more documentation. The way the paper is currently written, and the README file, make me feel that the author assumes a relatively high degree of expertise in the related research domain in the audience, which may or may not be true. Therefore, I think it would be great to provide a step-by-step walkthrough example in the paper. This could be the current usage example described in the README, but probably with more text description, e.g., what the output means, which functions can be used, and in what order the functions should be called.

According to the JOSS website, JOSS papers are expected to have metadata, a Statement of need, Summary, Acknowledgements, and References sections. Software documentation should not be in the paper and should instead be outlined in the software documentation (https://joss.readthedocs.io/en/latest/paper.html). Although I have seen example papers where this rule did not seem to be strictly enforced, we feel that it is better to keep the paper and the documentation separate: although I am not planning to change the user interface, sooner or later there will be updates to the package, and the software documentation can be updated more promptly than the paper.

However, we agree that a step-by-step walkthrough example is useful. For this purpose, we have worked out the example in the README file (see section 5). The usage example demonstrates the workflow in which we envision the functions being used and what the output of each function means.
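For readers of this thread, a minimal sketch of that workflow is given below. It is not the README example itself: it assumes a trial-level data frame df in the input format described in the package documentation, and the model names and inspection calls are purely illustrative.

# Minimal workflow sketch (assumption: `df` is a trial-level data frame in the format required by
# statConfR; the model names are illustrative placeholders, not the documented example).
library(statConfR)

fits <- fitConfModels(data = df, models = c("SDT", "WEV"))  # fit one or more static models of confidence
str(fits)  # inspect the returned object: parameter estimates and fit statistics for each model
fits       # compare the fitted models, e.g. via the information criteria reported in the output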

@ManuelRausch

> Relatedly, I notice that the function help pages contain descriptions of the input data format as well as a detailed description of each model. It would be great to make this explicit (e.g., mention that users can refer to the XXX page, or paraphrase them concisely in the paper/README).

In the new version of the README, we describe how the documentation can be accessed (section 6). In addition, the README now includes a mathematical description of each model (section 2). Finally, the revised usage example in the README describes the inputs and outputs of all functions we expect the user to interact with (sections 5.2 – 5.4).
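For completeness, the documentation can also be reached directly from an R session using the standard help mechanisms. The snippet below is a generic sketch: fitConfModels is the only function name taken from this thread, and the remaining topics are whatever the package index lists.

# Standard R help mechanisms for browsing the package documentation.
library(statConfR)

help(package = "statConfR")   # package index: all exported functions and data sets
?fitConfModels                # help page for a specific function (mentioned above)
browseVignettes("statConfR")  # long-form vignettes, if any are shipped with the package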

@samhforbes

Hi @haoxue-fan and @christinamaher, thank you both for reviewing. In that case, would you mind checking off your checklists, please?

@christinamaher

I checked all remaining points - please let me know if anything additional is needed, thanks!

@haoxue-fan

same here, thanks!

@samhforbes

Thank you both! @ManuelRausch, we can now move to the post-review part of this. I will create a checklist for each of us to go through.

@samhforbes

samhforbes commented Jan 10, 2025

Post-Review Checklist for Editor and Authors

Additional Author Tasks After Review is Complete

  • Double check authors and affiliations (including ORCIDs)
  • Make a release of the software with the latest changes from the review and post the version number here. This is the version that will be used in the JOSS paper.
  • Archive the release on Zenodo/figshare/etc and post the DOI here.
  • Make sure that the title and author list (including ORCIDs) in the archive match those in the JOSS paper.
  • Make sure that the license listed for the archive is the same as the software license.

Editor Tasks Prior to Acceptance

  • Read the text of the paper and offer comments/corrections (as either a list or a pull request)
  • Check that the archive title, author list, version tag, and the license are correct
  • Set archive DOI with @editorialbot set <DOI here> as archive
  • Set version with @editorialbot set <version here> as version
  • Double check rendering of paper with @editorialbot generate pdf
  • Specifically check the references with @editorialbot check references and ask author(s) to update as needed
  • Recommend acceptance with @editorialbot recommend-accept

@samhforbes

Hi @ManuelRausch, any joy with the above tasks?

@ManuelRausch

> Hi @ManuelRausch, any joy with the above tasks?

I am still working on LaTeX compatibility issues for the release on CRAN.

@ManuelRausch

  • The package, with all changes from the review, has been released on CRAN with version number 0.2.0: https://cran.r-project.org/web/packages/statConfR/index.html (see the installation sketch below).
  • The release has been archived on OSF: doi:10.17605/OSF.IO/EQ3XK.
  • We confirm that the authors, affiliations, and ORCIDs are correct, both in the paper and on OSF.
  • The package on CRAN and in the archive has been released under the same licence (GPL-3).
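For convenience, installing the released version reduces to the standard CRAN call shown below. The GitHub line is a hedged alternative for the development version and assumes the remotes package is available; it is not required for normal use.

# Install the released version (0.2.0) from CRAN.
install.packages("statConfR")

# Alternatively, assuming the `remotes` package is installed, the development version from GitHub:
# remotes::install_github("ManuelRausch/StatConfR")

library(statConfR)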

@samhforbes

@editorialbot set 0.2.0 as version

@editorialbot

Done! version is now 0.2.0

@samhforbes

@editorialbot set 10.17605/OSF.IO/EQ3XK as archive

@editorialbot

Done! archive is now 10.17605/OSF.IO/EQ3XK

@samhforbes

@editorialbot check references

@editorialbot

Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

✅ OK DOIs

- 10.1038/s41467-022-31727-0 is OK
- 10.1016/j.cognition.2020.104522 is OK
- 10.1016/j.neuroimage.2020.116963 is OK
- 10.3758/s13414-021-02284-3 is OK
- 10.1038/s41562-022-01464-x is OK
- 10.1093/nc/niw002 is OK
- 10.1037/rev0000249 is OK
- 10.1016/j.concog.2017.02.007 is OK
- 10.1016/j.neuroimage.2013.08.065 is OK
- 10.3758/s13414-017-1431-5 is OK
- 10.1038/s41562-019-0813-1 is OK
- 10.1037/a0019737 is OK
- 10.1038/s41467-021-23540-y is OK
- 10.7554/eLife.75420 is OK
- 10.1093/nc/nix007 is OK
- 10.1162/opmi_a_00091 is OK
- 10.1037/xge0001524 is OK
- 10.1007/s42113-024-00205-9 is OK
- 10.1016/j.concog.2011.09.021 is OK
- 10.1121/1.1907783 is OK
- 10.1177/17456916221075615 is OK
- 10.1007/978-3-642-45190-4_3 is OK
- 10.1037/met0000634 is OK
- 10.1037/rev0000411 is OK
- 10.1371/journal.pcbi.1003441 is OK

🟡 SKIP DOIs

- No DOI given, and none found for title: Signal detection theory and psychophysics
- No DOI given, and none found for title: Detection theory: A user’s guide

❌ MISSING DOIs

- None

❌ INVALID DOIs

- None

@samhforbes

@editorialbot generate pdf

@editorialbot

👉📄 Download article proof 📄 View article proof on GitHub 📄 👈

@ManuelRausch

I am very happy with the article proof, except that it does not yet show the DOI, volume, and issue. I suppose this is intended behavior?

[Image: screenshot of the article proof]

Is there anything I can do at this point?

@samhforbes

Ah yeah, that is normal behaviour until a final volume and DOI etc. are assigned!

Just had a read and all looks good, except in the references, where for journal articles the article title should be in sentence case (except after a colon) and the journal name should have its first letters capitalised. See for example Hellmann et al. (2024), which is one that needs fixing.

@ManuelRausch

@editorialbot generate pdf

@editorialbot

👉📄 Download article proof 📄 View article proof on GitHub 📄 👈

@ManuelRausch

> Just had a read and all looks good, except in the references, where for journal articles the article title should be in sentence case (except after a colon) and the journal name should have its first letters capitalised. See for example Hellmann et al. (2024), which is one that needs fixing.

Corrected.

@samhforbes

@editorialbot check references

@editorialbot

Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

✅ OK DOIs

- 10.1371/journal.pcbi.1003441 is OK
- 10.1007/978-3-642-45190-4_3 is OK
- 10.1037/xge0001524 is OK
- 10.1007/s42113-024-00205-9 is OK
- 10.1016/j.concog.2011.09.021 is OK
- 10.1121/1.1907783 is OK
- 10.1177/17456916221075615 is OK
- 10.1037/met0000634 is OK
- 10.1037/rev0000411 is OK
- 10.1016/j.neuroimage.2013.08.065 is OK
- 10.3758/s13414-017-1431-5 is OK
- 10.1038/s41562-019-0813-1 is OK
- 10.1037/a0019737 is OK
- 10.1016/j.concog.2017.02.007 is OK
- 10.1037/rev0000249 is OK
- 10.1038/s41467-021-23540-y is OK
- 10.1093/nc/nix007 is OK
- 10.7554/eLife.75420 is OK
- 10.1093/nc/niw002 is OK
- 10.1162/opmi_a_00091 is OK
- 10.1038/s41467-022-31727-0 is OK
- 10.1016/j.cognition.2020.104522 is OK
- 10.1016/j.neuroimage.2020.116963 is OK
- 10.3758/s13414-021-02284-3 is OK
- 10.1038/s41562-022-01464-x is OK

🟡 SKIP DOIs

- No DOI given, and none found for title: Signal detection theory and psychophysics
- No DOI given, and none found for title: Detection theory: A user’s guide

❌ MISSING DOIs

- None

❌ INVALID DOIs

- None

@samhforbes

Looks great, well done @ManuelRausch

@samhforbes

@editorialbot recommend-accept

@editorialbot

Attempting dry run of processing paper acceptance...

@editorialbot

Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

✅ OK DOIs

- 10.1371/journal.pcbi.1003441 is OK
- 10.1007/978-3-642-45190-4_3 is OK
- 10.1037/xge0001524 is OK
- 10.1007/s42113-024-00205-9 is OK
- 10.1016/j.concog.2011.09.021 is OK
- 10.1121/1.1907783 is OK
- 10.1177/17456916221075615 is OK
- 10.1037/met0000634 is OK
- 10.1037/rev0000411 is OK
- 10.1016/j.neuroimage.2013.08.065 is OK
- 10.3758/s13414-017-1431-5 is OK
- 10.1038/s41562-019-0813-1 is OK
- 10.1037/a0019737 is OK
- 10.1016/j.concog.2017.02.007 is OK
- 10.1037/rev0000249 is OK
- 10.1038/s41467-021-23540-y is OK
- 10.1093/nc/nix007 is OK
- 10.7554/eLife.75420 is OK
- 10.1093/nc/niw002 is OK
- 10.1162/opmi_a_00091 is OK
- 10.1038/s41467-022-31727-0 is OK
- 10.1016/j.cognition.2020.104522 is OK
- 10.1016/j.neuroimage.2020.116963 is OK
- 10.3758/s13414-021-02284-3 is OK
- 10.1038/s41562-022-01464-x is OK

🟡 SKIP DOIs

- No DOI given, and none found for title: Signal detection theory and psychophysics
- No DOI given, and none found for title: Detection theory: A user’s guide

❌ MISSING DOIs

- None

❌ INVALID DOIs

- None

@editorialbot

👋 @openjournals/sbcs-eics, this paper is ready to be accepted and published.

Check final proof 👉📄 Download article

If the paper PDF and the deposit XML files look good in openjournals/joss-papers#6450, then you can now move forward with accepting the submission by compiling again with the command @editorialbot accept

editorialbot added the recommend-accept label on Feb 21, 2025
@ManuelRausch

ManuelRausch commented Feb 21, 2025

> Looks great, well done @ManuelRausch

Thank you very much!
