'dict' object has no attribute 'locals' #384

Open
filowsky opened this issue Jan 3, 2025 · 4 comments


filowsky commented Jan 3, 2025

The pipeline starts but my module is not loaded. After the dependencies are downloaded, the following error appears, with no stack trace or explanation:

Error loading module: test-pipeline
**'dict' object has no attribute 'locals'**
WARNING:root:No Pipeline class found in test-pipeline
INFO:     Application startup complete.
INFO:     Uvicorn running on http://0.0.0.0:9099 (Press CTRL+C to quit)

For requirements I use:

requests~=2.32.3
pydantic>=2.8.0 
llama-index==0.12.5
llama-index-llms-azure-openai==0.3.0 
llama-index-embeddings-azure-openai==0.3.0 
llama-index-vector-stores-qdrant==0.4.2

I am using helm chart for pipelines (chart version 0.0.5) and UI (chart version 4.0.6) deployment.

Worth adding that I managed to run it successfully as a standalone script on my local machine, using the same versions of the dependencies. Also, I noticed that the error shows up when I add this code to the pipeline:

from llama_index.core import StorageContext, VectorStoreIndex
from llama_index.vector_stores.qdrant import QdrantVectorStore
from qdrant_client import QdrantClient

...

self.index = VectorStoreIndex.from_documents(
    documents=self.documents,
    show_progress=True,
    storage_context=StorageContext.from_defaults(
        vector_store=QdrantVectorStore(
            client=QdrantClient(url=QDRANT_URL),
            collection_name="collection_name"
        )
    ),
)
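For context on the `No Pipeline class found` warning: the loader only discovers the `Pipeline` class if the module imports cleanly, so an import-time failure (like the one above) makes the class invisible. Below is a minimal sketch of the shape the open-webui pipelines examples use — names like `Valves`, `on_startup`, and `pipe` follow those examples, but this is an illustration, not the repo's actual code, and it uses a plain dataclass where the real examples use a pydantic `BaseModel`:

```python
from dataclasses import dataclass


class Pipeline:
    @dataclass
    class Valves:
        # placeholder; filled with real data at runtime
        QDRANT_URL: str = ""

    def __init__(self):
        self.name = "test-pipeline"
        self.valves = self.Valves()
        self.index = None

    async def on_startup(self):
        # Heavy third-party imports (qdrant_client, llama_index) can go
        # here instead of module top level, so a broken dependency fails
        # at startup rather than at import time, where the loader would
        # silently report "No Pipeline class found".
        pass

    async def on_shutdown(self):
        pass

    def pipe(self, user_message: str, model_id: str, messages: list, body: dict):
        # Trivial echo body for the sketch
        return f"received: {user_message}"
```

Moving the `qdrant_client` / `llama_index` imports into `on_startup` is one way to surface the real underlying exception instead of the opaque loader message.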

Full output log attached below:
scratch_142.txt


ezavesky commented Jan 5, 2025

You probably don't want to hear this, but since you've isolated it to specific code within llama_index or qdrant, that's probably where the issue lies -- not in anything within this pipelines repo. The code in this repo is very straightforward, and there is no mention of 'locals'.

Without having done any specific research (you didn't include enough code), a few issues in qdrant's issue tracker and SO postings may point there -- langchain-ai/langchain#16962. There's also a small chance that something doesn't behave well when using async responses, but that's just a guess, since 'locals' implies some localized variable scope.

@paulinergt

Hello! I'm facing the same issue; however, I'm not using qdrant.
If anyone has an update on this, it would be very much appreciated :)
Thank you!

@paulinergt

Hello! I resolved the error on my end. :)

It seems the issue was caused by the requirements being installed twice:

  1. From the requirements.txt file.
  2. Directly from the pipeline script header, via the install_frontmatter_requirements function:
title: Custom Llama Index Pipeline
author: open-webui
date: 2024-05-30
version: 1.0
license: MIT
description: A pipeline for retrieving relevant information from a knowledge base using the Llama Index library.
requirements: llama-index-retrievers-bm25, llama-index-embeddings-huggingface, llama-index-readers-github, llama-index-vector-stores-postgres

This duplication led to dependency errors.
I removed the requirements section from the pipeline header, which fixed the issue.
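For anyone unfamiliar with the header mechanism: the `requirements:` line in the script frontmatter is a comma-separated package list that the loader installs on its own. Here is a rough, hypothetical sketch of how such a line could be parsed — the actual `install_frontmatter_requirements` function in the pipelines repo may work differently:

```python
def parse_frontmatter_requirements(source: str) -> list[str]:
    """Extract the comma-separated package list from a 'requirements:'
    line in a pipeline script header (illustrative sketch only)."""
    for line in source.splitlines():
        line = line.strip()
        if line.startswith("requirements:"):
            value = line[len("requirements:"):]
            return [pkg.strip() for pkg in value.split(",") if pkg.strip()]
    return []


header = """\
title: Custom Llama Index Pipeline
requirements: llama-index-retrievers-bm25, llama-index-embeddings-huggingface
"""
print(parse_frontmatter_requirements(header))
# -> ['llama-index-retrievers-bm25', 'llama-index-embeddings-huggingface']
```

Since this install runs in addition to requirements.txt, listing the same packages in both places means two pip resolutions, which is how the duplicate-install conflicts described above can arise.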

filowsky (author) commented Jan 15, 2025

Thank you @paulinergt, but in my case I can't remove the requirements section from the header because, as far as I know, there is no way to pass a requirements.txt via the Helm chart. Also, I tried various combinations of dependency installation locally and couldn't reproduce the error; everything worked fine when running pipelines from the source code (main branch).

But I'm back with an update from my end.

I wasn't able to run the pipeline using the Helm chart, but I managed to do it locally from source code and locally using Docker.

When using Docker, I use exactly the same dependencies and the image ghcr.io/open-webui/pipelines:main, which is the image used by chart version 0.0.5. Interestingly, when running the pipeline in Docker, I get the following message:

WARNING:root:No Pipeline class found in test-pipeline

And when I restart the container, the pipeline gets fetched again and magically starts working without any issues. I tried to replicate this behavior on K8S, but without success. I still get:

Error loading module: test-pipeline
'dict' object has no attribute 'locals'
WARNING:root:No Pipeline class found in test-pipeline

So, to sum up:

  • running pipelines locally from source code using pip install -r requirements.txt && ./start.sh, as mentioned in the README, works 100% fine
  • running pipelines locally with Docker works after the container is restarted (magic)
  • running pipelines on K8S using the Helm chart does not work

Any ideas? I'm attaching the code of the pipeline (Valves values are empty on purpose; I replace them with real data):

scratch_143.txt
