
TXTSearchTool with ollama #89

Open
fre391 opened this issue Aug 8, 2024 · 5 comments
fre391 commented Aug 8, 2024

Search is NOT limited to the given txt file.

```python
from crewai_tools import TXTSearchTool

txt_search_tool = TXTSearchTool(
    txt="kunst.txt",
    config=dict(
        llm=dict(
            provider="ollama",
            config=dict(
                model="llama3.1",
            ),
        ),
        embedder=dict(
            provider="ollama",
            config=dict(
                model="mxbai-embed-large",
            ),
        ),
    ),
)
```

There is no error message, and at the end the best result is shown, but in between (in the verbose output) it also shows snippets from other sources, e.g. from a PDF that was indexed earlier by XMLSearchTool in a separate script...
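A likely explanation (an assumption on my part, not confirmed in this thread) is that the RAG backend used by crewai_tools persists embeddings in a shared on-disk vector database, so vectors indexed by an earlier tool run (e.g. the XMLSearchTool PDF) can surface again in later searches. A minimal sketch of wiping the persisted store before instantiating the tool; the `db` path is a guess and may differ on your machine:

```python
import shutil
from pathlib import Path

def reset_vector_store(db_path: str = "db") -> None:
    """Delete the persisted embedding database (if it exists) so the next
    TXTSearchTool run starts from a clean index.

    The default "db" location is an assumption; check where your
    embedchain/Chroma data actually lives before using this.
    """
    path = Path(db_path)
    if path.exists():
        shutil.rmtree(path)

reset_vector_store()  # call this before constructing TXTSearchTool
```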


iNLyze commented Oct 10, 2024

@fre391, nice solution, though I am not yet getting it to work. I am trying to apply the text tool to a local file like so:

```python
tool_txt_search = TXTSearchTool(
    txt=path/'nonsense.txt',
    config=config,
    verbose=True,
)
```

My config:

```python
config = dict(
    llm=dict(
        provider="ollama",  # change this to your LLM provider
        config=dict(
            model="llama3.1",  # specify the model you want to use
        ),
    ),
    embedder=dict(
        provider="ollama",  # change this to your embedding provider
        config=dict(
            model="mxbai-embed-large",  # specify the embedding model
        ),
    ),
)
```

I am currently running this from Jupyter (on a server via SSH). Unfortunately, the instantiation never seems to finish: I see no text output, and nothing appears in `journalctl -u ollama` either.

How long does it take you to instantiate the tool? Any idea why it might get stuck?
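One thing worth ruling out (an assumption, since the thread never states the cause): if the Ollama server is unreachable or the configured models haven't been pulled, embedding calls can block with no visible output. A small sketch that queries Ollama's `/api/tags` endpoint and checks for the required models; the base URL assumes Ollama's default port:

```python
import json
import urllib.request

def ollama_models(base_url: str = "http://localhost:11434") -> list[str]:
    """Return the model names a local Ollama server reports via /api/tags.
    If this call hangs or raises, TXTSearchTool will stall the same way."""
    with urllib.request.urlopen(f"{base_url}/api/tags", timeout=5) as resp:
        return [m["name"] for m in json.load(resp)["models"]]

def has_required(available: list[str], required: list[str]) -> bool:
    """Check every required model was pulled, ignoring ':latest'-style tags."""
    bases = {name.split(":")[0] for name in available}
    return all(req.split(":")[0] in bases for req in required)
```

For the config above, the check would be `has_required(ollama_models(), ["llama3.1", "mxbai-embed-large"])`.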


kspviswa commented Dec 1, 2024

+1. Also facing the same issue.

siddas27 (Contributor) commented Dec 5, 2024

I got everything working with this:

```python
import os

os.environ['OPENAI_API_BASE'] = 'http://localhost:11434'
os.environ['OPENAI_MODEL_NAME'] = 'ollama/llama3.2'
os.environ['OPENAI_API_KEY'] = 'NA'
```

Note: this makes Ollama your default LLM, so there is no need to configure each tool separately.
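Spelled out as a runnable sanity check (my own sketch, not from this thread): it only confirms the variables are set in the current process, not that Ollama is actually reachable.

```python
import os

# Point crewai's default OpenAI-style client at a local Ollama server.
# Assumptions: Ollama runs on its default port and llama3.2 is pulled.
os.environ["OPENAI_API_BASE"] = "http://localhost:11434"
os.environ["OPENAI_MODEL_NAME"] = "ollama/llama3.2"
os.environ["OPENAI_API_KEY"] = "NA"  # dummy value; Ollama needs no key

def ollama_env_ok() -> bool:
    """Minimal check that the OpenAI-compat variables point at Ollama."""
    return (
        os.environ.get("OPENAI_API_BASE", "").endswith(":11434")
        and os.environ.get("OPENAI_MODEL_NAME", "").startswith("ollama/")
        and bool(os.environ.get("OPENAI_API_KEY"))
    )
```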


kspviswa commented Dec 6, 2024

> I got everything working with this:
>
> ```python
> os.environ['OPENAI_API_BASE'] = 'http://localhost:11434'
> os.environ['OPENAI_MODEL_NAME'] = 'ollama/llama3.2'
> os.environ['OPENAI_API_KEY'] = 'NA'
> ```
>
> Note: Using this will make this your default llm, no need to configure everything separately

Sorry, it is not working for me.

```
/.venv/lib/python3.12/site-packages/crewai/agent.py", line 161, in post_init_setup
    if env_var["key_name"] in unnacepted_attributes:
       ~~~~~~~^^^^^^^^^^^^
```

It is very clear that the Ollama provider doesn't have a key called `key_name`. I wonder how it worked for you. Can you elaborate?


kspviswa commented Dec 6, 2024

OK, I inspected the code and found that I had to upgrade crewai to the latest version. Now it is working.
