
Client.__init__() got an unexpected keyword argument 'proxies' #54

Closed

aidando73 opened this issue Nov 29, 2024 · 1 comment

Issue from llama-stack-apps: meta-llama/llama-stack-apps#131

Getting an error with httpx==0.28.0. Installing httpx==0.27.2 fixes it:

```
(llama-stack) ubuntu@168-138-112-243:~/1xa100-2/llama-stack-apps$ python -m examples.agents.hello localhost 5001
Traceback (most recent call last):
  File "/usr/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/home/ubuntu/1xa100-2/llama-stack-apps/examples/agents/hello.py", line 78, in <module>
    fire.Fire(main)
  File "/home/ubuntu/.local/lib/python3.10/site-packages/fire/core.py", line 135, in Fire
    component_trace = _Fire(component, args, parsed_flag_args, context, name)
  File "/home/ubuntu/.local/lib/python3.10/site-packages/fire/core.py", line 468, in _Fire
    component, remaining_args = _CallAndUpdateTrace(
  File "/home/ubuntu/.local/lib/python3.10/site-packages/fire/core.py", line 684, in _CallAndUpdateTrace
    component = fn(*varargs, **kwargs)
  File "/home/ubuntu/1xa100-2/llama-stack-apps/examples/agents/hello.py", line 17, in main
    client = LlamaStackClient(
  File "/home/ubuntu/.local/lib/python3.10/site-packages/llama_stack_client/_client.py", line 108, in __init__
    super().__init__(
  File "/home/ubuntu/.local/lib/python3.10/site-packages/llama_stack_client/_base_client.py", line 855, in __init__
    self._client = http_client or SyncHttpxClientWrapper(
  File "/home/ubuntu/.local/lib/python3.10/site-packages/llama_stack_client/_base_client.py", line 753, in __init__
    super().__init__(**kwargs)
TypeError: Client.__init__() got an unexpected keyword argument 'proxies'
```

The httpx maintainer replied that the `proxies` argument has been removed:

> Heya. That parameter became deprecated in 0.26.0, and was removed in 0.28.0.
> If you could follow up with llama_stack_client to help their team get this resolved, that'd be helpful.
> You can pin to httpx==0.27.2 in the meantime.

encode/httpx#3425
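
Until a fixed client release is available, there appear to be two workarounds: pin httpx (`pip install httpx==0.27.2`, as the maintainer suggests), or construct the httpx client yourself so the SDK never builds the internal `SyncHttpxClientWrapper(...)` that still passes `proxies`. A minimal, untested sketch, assuming `LlamaStackClient` accepts the `base_url` and `http_client` keyword arguments implied by the traceback:

```python
# Workaround sketch (untested): supply our own httpx client so the SDK skips
# its internal wrapper, which still forwards the removed `proxies` kwarg.
import httpx
from llama_stack_client import LlamaStackClient

# httpx>=0.28-compatible client; pass `proxy=...` here if a proxy is needed.
http_client = httpx.Client(timeout=60.0)

client = LlamaStackClient(
    base_url="http://localhost:5001",
    http_client=http_client,  # assumed keyword, per `http_client or SyncHttpxClientWrapper(...)` in the traceback
)
```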

ashwinb pushed a commit to meta-llama/llama-stack that referenced this issue Dec 11, 2024
# What does this PR do?

Addresses issue (#342)

- PDFs uploaded from a URL are being loaded into the vector db as raw bytes
- Instead, this PR extracts text from the PDF when the mime_type is
  "application/pdf" (see the sketch after this list)
- Adds tests to cover the new cases
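
For reference, a rough sketch of the kind of mime-type-gated extraction described above (not the PR's actual code; it uses pypdf, which the test plan installs, and hypothetical function names):

```python
# Hypothetical sketch of the extraction path, not the PR's implementation.
import io

from pypdf import PdfReader


def parse_pdf(data: bytes) -> str:
    """Extract text from raw PDF bytes instead of storing the bytes verbatim."""
    reader = PdfReader(io.BytesIO(data))
    return "\n".join(page.extract_text() or "" for page in reader.pages)


def content_from_data(data: bytes, mime_type: str) -> str:
    # Only treat the payload as a PDF when the mime type says so;
    # otherwise fall back to decoding it as text.
    if mime_type == "application/pdf":
        return parse_pdf(data)
    return data.decode("utf-8", errors="replace")
```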

## Test Plan

Ran these unit tests:

```bash
llama stack build --template meta-reference-gpu --image-type conda
conda activate llamastack-meta-reference-gpu
pip install pytest pytest-asyncio pypdf
pytest llama_stack/providers/tests/memory/test_vector_store.py -v
```

```
platform linux -- Python 3.10.15, pytest-8.3.3, pluggy-1.5.0 -- /home/ubuntu/1xa100-2/llama-stack/envs/bin/python
cachedir: .pytest_cache
rootdir: /home/ubuntu/1xa100-2/llama-stack
configfile: pyproject.toml
plugins: anyio-4.6.2.post1, asyncio-0.24.0, httpx-0.35.0
asyncio: mode=strict, default_loop_scope=None
collected 3 items                                                                                                                          

llama_stack/providers/tests/memory/test_vector_store.py::TestVectorStore::test_returns_content_from_pdf_data_uri PASSED              [ 33%]
llama_stack/providers/tests/memory/test_vector_store.py::TestVectorStore::test_downloads_pdf_and_returns_content PASSED              [ 66%]
llama_stack/providers/tests/memory/test_vector_store.py::TestVectorStore::test_downloads_pdf_and_returns_content_with_url_object PASSED [100%]

======================================================= 3 passed, 1 warning in 0.62s =======================================================
```

Tested manually via [this
script](https://github.com/aidando73/llama-stack/blob/afc8f8bebf70e1ad065d87e84692e1a3a45d9e19/init.py)
to initialize and [this
script](https://github.com/aidando73/llama-stack/blob/afc8f8bebf70e1ad065d87e84692e1a3a45d9e19/query.py)
to query

```bash
# Ran with meta-reference-gpu with safety
llama stack build --template meta-reference-gpu --image-type conda && llama stack run distributions/meta-reference-gpu/run-with-safety.yaml \
  --port 5001 \
  --env INFERENCE_MODEL=meta-llama/Llama-3.2-11B-Vision-Instruct

# Run init.py script
wget https://raw.githubusercontent.com/aidando73/llama-stack/afc8f8bebf70e1ad065d87e84692e1a3a45d9e19/init.py
pip install httpx==0.27.2 # Due to issue meta-llama/llama-stack-client-python#54
python init.py
# Run query.py script
wget https://raw.githubusercontent.com/aidando73/llama-stack/afc8f8bebf70e1ad065d87e84692e1a3a45d9e19/query.py
python query.py
```

Should output valid text chunks

```
Chunk(content=' that it has a significantly\nlower violation rate than the competing standalone open source model, trading off a higher false refusal rate.\nLong-context safety. Long-context models are vulnerable to many-shot jailbreaking attacks without targeted\nmitigation (Anil et al., 2024). To address this, we finetune our models on SFT datasets that include examples\nof safe behavior in the presence of demonstrations of unsafe behavior in context. We develop a scalable\nmitigation strategy that significantly reduces VR, effectively neutralizing the impact of longer context attacks\neven for 256-shot attacks. This approach shows little to no impact on FRR and most helpfulness metrics.\nTo quantify the effectiveness of our long context safety mitigations, we use two additional benchmarking\nmethods: DocQA and Many-shot. For DocQA, short for “document question answering,” we use long documents\nwith information that could be utilized in adversarial ways. Models are provided both the document and a set\nof prompts related to the document in order to test whether the questions being related to information in the\ndocument affected the model’s ability to respond safely to the prompts. For Many-shot, following Anil et al.\n(2024), we construct a synthetic chat history composed of unsafe prompt-response pairs. A final prompt,\nunrelated to previous messages, is used to test whether the unsafe behavior in-context influenced the model\n45\nto response unsafely. The violation and false refusal rates for both DocQA and Many-shot are shown in\nFigure 20. We see that Llama 405B (with and without Llama Guard) is Pareto-better than the Comp. 2\nsystem across both violation rates and false refusal rates, across both DocQA and Many-shot. Relative to\nComp. 1, we find that Llama 405B is significantly safer, while coming at a trade off on false refusal.\nTool usage safety. The diversity of possible tools and the implementation of the tool usage call and integration\ninto the model make tool usage a challenging capability to fully mitigate (Wallace et al., 2024). We focus on\nthe search usecase. Violation and false refusal rates are shown in Figure 20. We tested against the Comp. 1\nsystem, where we find that Llama 405B is significantly safer, though has a slightly higher false refusal rate.\n5.4.5 Cybersecurity and Chemical/Biological Weapons Safety\nCyberSecurity evaluation results. To evaluate cybersecurity risk, we leverage the Cyber', document_id='num-0', token_count=512)0.7354530813978312
Chunk(content='.\nThrough careful ablations, we observe that mixing0.1% of synthetically generated long-context data with the\noriginal short-context data optimizes the performance across both short-context and long-context benchmarks.\nDPO. We observe that using only short context training data in DPO did not negatively impact long-context\nperformance as long as the SFT model is high quality in long context tasks. We suspect this is due to the\nfact that our DPO recipe has fewer optimizer steps than SFT. Given this finding, we keep the standard\nshort-context recipe for DPO on top of our long-context SFT checkpoints.\n4.3.5 Tool Use\nTeaching LLMs to use tools such as search engines or code interpreters hugely expands the range of tasks\nthey can solve, transforming them from pure chat models into more general assistants (Nakano et al., 2021;\nThoppilan et al., 2022; Parisi et al., 2022; Gao et al., 2023; Mialon et al., 2023a; Schick et al., 2024). We train\nLlama 3 to interact with the following tools:\n• Search engine. Llama 3 is trained to use Brave Search7 to answer questions about recent events that go\nbeyond its knowledge cutoff or that require retrieving a particular piece of information from the web.\n• Python interpreter. Llama 3 can generate and execute code to perform complex computations, read files\nuploaded by the user and solve tasks based on them such as question answering, summarization, data\nanalysis or visualization.\n7https://brave.com/search/api/\n24\n• Mathematical computational engine. Llama 3 can use the Wolfram Alpha API8 to more accurately solve\nmath, science problems, or retrieve accurate information from Wolfram’s database.\nThe resulting model is able to use these tools in a chat setup to solve the user’s queries, including in multi-turn\ndialogs. If a query requires multiple tool calls, the model can write a step-by-step plan, call the tools in\nsequence, and do reasoning after each tool call.\nWe also improve Llama 3’s zero-shot tool use capabilities — given in-context, potentially unseen tool definitions\nand a user query, we train the model to generate the correct tool call.\nImplementation. We implement our core tools as Python objects with different methods. Zero-shot tools can\nbe implemented as Python functions with descriptions, documentation (i.e., examples for', document_id='num-0', token_count=512)0.7350672465928054
Chunk(content=' Embeddings RoPE (θ = 500, 000)\nTable 3 Overview of the key hyperparameters of Llama 3. We display settings for 8B, 70B, and 405B language models.\n• We use a vocabulary with 128K tokens. Our token vocabulary combines 100K tokens from thetiktoken3\ntokenizer with 28K additional tokens to better support non-English languages. Compared to the Llama\n2 tokenizer, our new tokenizer improves compression rates on a sample of English data from 3.17 to\n3.94 characters per token. This enables the model to “read” more text for the same amount of training\ncompute. We also found that adding 28K tokens from select non-English languages improved both\ncompression ratios and downstream performance, with no impact on English tokenization.\n• We increase the RoPE base frequency hyperparameter to 500,000. This enables us to better support\nlonger contexts; Xiong et al. (2023) showed this value to be effective for context lengths up to 32,768.\nLlama 3 405B uses an architecture with 126 layers, a token representation dimension of 16,384, and 128\nattention heads; see Table 3 for details. This leads to a model size that is approximately compute-optimal\naccording to scaling laws on our data for our training budget of3.8 × 1025 FLOPs.\n3.2.1 Scaling Laws\nWe develop scaling laws (Hoffmann et al., 2022; Kaplan et al., 2020) to determine the optimal model size for\nour flagship model given our pre-training compute budget. In addition to determining the optimal model size,\na major challenge is to forecast the flagship model’s performance on downstream benchmark tasks, due to a\ncouple of issues: (1) Existing scaling laws typically predict only next-token prediction loss rather than specific\nbenchmark performance. (2) Scaling laws can be noisy and unreliable because they are developed based on\npre-training runs conducted with small compute budgets (Wei et al., 2022b).\nTo address these challenges, we implement a two-stage methodology to develop scaling laws that accurately\npredict downstream benchmark performance:\n1. We first establish a correlation between the compute-optimal model’s negative log-likelihood on down-\nstream tasks and the training FLOPs.\n2. Next, we correlate the negative log-likelihood on downstream tasks with task accuracy, utilizing both', document_id='num-0', token_count=512)0.7172908346230037
```

## Before submitting

- [x] N/A - This PR fixes a typo or improves the docs (you can dismiss
the other checks if that's the case).
- [x] Ran pre-commit to handle lint / formatting issues.
- [x] Read the [contributor
guideline](https://github.com/meta-llama/llama-stack/blob/main/CONTRIBUTING.md),
      Pull Request section?
- [x] N/A - Updated relevant documentation.
- [x] Wrote necessary unit or integration tests.

yanxi0830 (Contributor) commented Jan 3, 2025

Thanks for raising the issue. This has been fixed in the latest llama-stack-client-python releases (>=0.0.63).
