
Fix/improve description #84

Merged (6 commits merged into main, Feb 5, 2025)
Conversation

@gurdeep330 gurdeep330 (Member) commented Feb 4, 2025

For authors

Description

In this PR, I propose the following changes to improve the readability and efficiency of the codebase:

  1. Introduced a custom reducer function to prevent unintended data accumulation caused by operator.add (see langchain-ai/langgraph#2944, "Support Clearing Annotated[list, operator.add] Fields in LangGraph State").
  2. Split a single test (test_langgraph.py) into multiple focused tests, enabling a one-to-one mapping with corresponding tools.
  3. Added comments and docstrings for better code clarity.
  4. Fixed #78 (BUG: number of simulated steps).
  5. Merged common logic from steady_state and simulate_model tools to eliminate redundancy.
  6. Incorporated species scanning into the param_scan tool to extend its functionality (#20, FEATURE: parameter scanning).
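The custom reducer described in item 1 might look roughly like the sketch below. The function name matches the diff shown later in this conversation, but the clear-on-empty-update convention and the State field are assumptions based on the PR description and the linked LangGraph issue, not the project's actual code:

```python
from typing import Annotated, TypedDict

def add_data(data1: dict, data2: dict) -> dict:
    """Custom reducer sketch: merge dict updates key-by-key instead of
    blindly accumulating (as operator.add would), and treat an empty
    update as a request to clear the field (assumed convention)."""
    if not data2:  # empty update -> reset the accumulated data
        return {}
    merged = dict(data1) if data1 else {}
    merged.update(data2)  # later writes win on key collisions
    return merged

# Hypothetical state definition: the reducer replaces operator.add
# on the annotated field, so LangGraph calls add_data to combine
# the old value with each node's update.
class State(TypedDict):
    model_data: Annotated[dict, add_data]
```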

Fixes #20, #78

Type of change

Please delete options that are not relevant.

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • This change requires a documentation update

How Has This Been Tested?

Please describe the tests you conducted to verify your changes. These may involve creating new test scripts or updating existing ones.

  • Added new test(s) in the tests folder
  • Added new function(s) to existing test(s) (e.g. tests/testX.py)
  • No new tests added (Please explain the rationale in this case)

Checklist

  • My code follows the style guidelines mentioned in the Code/DevOps guides
  • I have commented my code, particularly in hard-to-understand areas
  • I have made corresponding changes to the documentation (e.g. MkDocs)
  • My changes generate no new warnings
  • I have added or updated tests (in the tests folder) that prove my fix is effective or that my feature works
  • New and existing tests pass locally with my changes
  • Any dependent changes have been merged and published in downstream modules

For reviewers

Checklist pre-approval

  • Is there enough documentation?
  • If a new feature has been added, or a bug fixed, has a test been added to confirm good behavior?
  • Does the test(s) successfully test edge/corner cases?
  • Does the PR pass the tests? (if the repository has continuous integration)

Checklist post-approval

  • Does this PR merge develop into main? If so, please make sure to add a prefix (feat/fix/chore) and/or a suffix BREAKING CHANGE (if it's a major release) to your commit message.
  • Does this PR close an issue? If so, please make sure to descriptively close this issue when the PR is merged.

Checklist post-merge

  • When you approve of the PR, merge and close it (Read this article to know about different merge methods on GitHub)
  • Did this PR merge develop into main, and is it supposed to run an automated release workflow (if applicable)? If so, please check under the "Actions" tab to see whether the workflow has been initiated, and return later to verify that it completed successfully.

@gurdeep330 gurdeep330 self-assigned this Feb 4, 2025
@gurdeep330 gurdeep330 added the "enhancement" (New feature or request) and "Talk2Biomodels" labels Feb 4, 2025
@gurdeep330 gurdeep330 linked an issue Feb 4, 2025 that may be closed by this pull request
@gurdeep330 gurdeep330 requested a review from dmccloskey February 4, 2025 21:47
@dmccloskey dmccloskey (Member) left a comment

Nice cleanup 👍. I have only one minor question that we can probably just monitor at this point in case we find some odd behaviour in the future.

    description="species to be observed after each scan."
    " These are the species whose concentration"
    " will be observed after the parameter scan."
    " Do not make up this data.",
Member:

😂

@@ -8,19 +8,33 @@
import operator
from langgraph.prebuilt.chat_agent_executor import AgentState

def add_data(data1: dict, data2: dict) -> dict:
Member:
Has this been tested for when there are concurrent writes?

gurdeep330 (Member, Author):
@dmccloskey
I monitored this by viewing the logs in the Streamlit app and confirmed that concurrent writes are handled correctly. In one of the upcoming PRs, I'll also add a pytest to cover this. Thanks!
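A pytest along the lines promised here could check that a dict-merge reducer combines concurrent branch writes regardless of application order. The reducer below is a self-contained stand-in for illustration, not the project's actual add_data:

```python
def merge_updates(data1: dict, data2: dict) -> dict:
    # Stand-in reducer: merge dict updates key-by-key;
    # an empty update clears the field (assumed convention).
    if not data2:
        return {}
    merged = dict(data1) if data1 else {}
    merged.update(data2)
    return merged

def test_concurrent_writes_merge_both_branches():
    # Two parallel branches each write a disjoint key; whichever
    # order the framework applies the reducer in, both writes
    # survive and the results agree.
    state = {"steps": 10}
    update_a = {"simulation": "done"}
    update_b = {"scan": "done"}
    ab = merge_updates(merge_updates(state, update_a), update_b)
    ba = merge_updates(merge_updates(state, update_b), update_a)
    assert ab == ba == {"steps": 10, "simulation": "done", "scan": "done"}
```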

@dmccloskey dmccloskey merged commit 8e5cddc into main Feb 5, 2025
6 checks passed
@dmccloskey dmccloskey deleted the fix/improve-description branch February 5, 2025 11:17
github-actions bot (Contributor) commented Feb 5, 2025

🎉 This PR is included in version 1.14.1 🎉

The release is available on GitHub releases.

Your semantic-release bot 📦🚀

Successfully merging this pull request may close these issues: BUG: number of simulated steps (#78); FEATURE: parameter scanning (#20).