
500 Internal Server Error During Image Inferencing Because "Request URL is missing an 'http://' or 'https://' protocol" #740

Open

dawenxi-007 opened this issue Jan 10, 2025 · 0 comments

System Info

Python version: 3.10
llama_stack version: 0.0.63
Hardware: 4xH100 (80GB VRAM/GPU)
Docker image: llamastack/distribution-meta-reference-gpu:latest

Information

  • [x] The official example scripts
  • [ ] My own modified scripts

🐛 Describe the bug

Running the Llama Stack vision model meta-llama/Llama-3.2-11B-Vision-Instruct with the example code.

Both the TGI flow (image llamastack/distribution-tgi) and the Meta reference flow (image llamastack/distribution-meta-reference-gpu) return a 500 Internal Server Error with "Request URL is missing an 'http://' or 'https://' protocol." Text inference works as expected. The issue has been present since at least version 0.0.61.

#571 seems to show the same error message and could be related.
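
For context, the failing request is an image chat completion along these lines. This is a minimal sketch rather than the exact example script; the base URL, image URL, and dict-based message schema are illustrative assumptions based on the 0.0.63 client:

```python
from llama_stack_client import LlamaStackClient

# Assumed endpoint; adjust to wherever the distribution is serving.
client = LlamaStackClient(base_url="http://localhost:5000")

# The image is passed as an "image" content item; the server later tries to
# download whatever URI it finds here (see the host-side traceback below).
response = client.inference.chat_completion(
    model_id="meta-llama/Llama-3.2-11B-Vision-Instruct",
    messages=[
        {
            "role": "user",
            "content": [
                {"image": {"uri": "https://example.com/sample.jpg"}},
                "Describe what is in this image.",
            ],
        }
    ],
)
print(response.completion_message.content)
```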

Error logs

Error log on the client side:

Traceback (most recent call last):
  File "/home/tao/llamastk_meta/image_infer_example.py", line 38, in <module>
    response = client.inference.chat_completion(
  File "/home/tao/llamastk_meta/llamastk_meta_2024-12-02-16-23/env/lib/python3.10/site-packages/llama_stack_client/_utils/_utils.py", line 275, in wrapper
    return func(*args, **kwargs)
  File "/home/tao/llamastk_meta/llamastk_meta_2024-12-02-16-23/env/lib/python3.10/site-packages/llama_stack_client/resources/inference.py", line 217, in chat_completion
    self._post(
  File "/home/tao/llamastk_meta/llamastk_meta_2024-12-02-16-23/env/lib/python3.10/site-packages/llama_stack_client/_base_client.py", line 1263, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
  File "/home/tao/llamastk_meta/llamastk_meta_2024-12-02-16-23/env/lib/python3.10/site-packages/llama_stack_client/_base_client.py", line 955, in request
    return self._request(
  File "/home/tao/llamastk_meta/llamastk_meta_2024-12-02-16-23/env/lib/python3.10/site-packages/llama_stack_client/_base_client.py", line 1043, in _request
    return self._retry_request(
  File "/home/tao/llamastk_meta/llamastk_meta_2024-12-02-16-23/env/lib/python3.10/site-packages/llama_stack_client/_base_client.py", line 1092, in _retry_request
    return self._request(
  File "/home/tao/llamastk_meta/llamastk_meta_2024-12-02-16-23/env/lib/python3.10/site-packages/llama_stack_client/_base_client.py", line 1043, in _request
    return self._retry_request(
  File "/home/tao/llamastk_meta/llamastk_meta_2024-12-02-16-23/env/lib/python3.10/site-packages/llama_stack_client/_base_client.py", line 1092, in _retry_request
    return self._request(
  File "/home/tao/llamastk_meta/llamastk_meta_2024-12-02-16-23/env/lib/python3.10/site-packages/llama_stack_client/_base_client.py", line 1058, in _request
    raise self._make_status_error_from_response(err.response) from None
llama_stack_client.InternalServerError: Error code: 500 - {'detail': 'Internal server error: An unexpected error occurred.'}

Error log on the host side:

INFO:     10.141.0.60:41848 - "POST /alpha/inference/chat-completion HTTP/1.1" 500 Internal Server Error
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/site-packages/httpx/_transports/default.py", line 72, in map_httpcore_exceptions
    yield
  File "/usr/local/lib/python3.10/site-packages/httpx/_transports/default.py", line 377, in handle_async_request
    resp = await self._pool.handle_async_request(req)
  File "/usr/local/lib/python3.10/site-packages/httpcore/_async/connection_pool.py", line 207, in handle_async_request
    raise UnsupportedProtocol(
httpcore.UnsupportedProtocol: Request URL is missing an 'http://' or 'https://' protocol.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.10/site-packages/llama_stack/distribution/server/server.py", line 256, in endpoint
    return await maybe_await(value)
  File "/usr/local/lib/python3.10/site-packages/llama_stack/distribution/server/server.py", line 215, in maybe_await
    return await value
  File "/usr/local/lib/python3.10/site-packages/llama_stack/distribution/routers/routers.py", line 123, in chat_completion
    return await provider.chat_completion(**params)
  File "/usr/local/lib/python3.10/site-packages/llama_stack/providers/inline/inference/meta_reference/inference.py", line 225, in chat_completion
    request = await request_with_localized_media(request)
  File "/usr/local/lib/python3.10/site-packages/llama_stack/providers/inline/inference/meta_reference/inference.py", line 426, in request_with_localized_media
    m.content = await _convert_content(m.content)
  File "/usr/local/lib/python3.10/site-packages/llama_stack/providers/inline/inference/meta_reference/inference.py", line 420, in _convert_content
    return [await _convert_single_content(c) for c in content]
  File "/usr/local/lib/python3.10/site-packages/llama_stack/providers/inline/inference/meta_reference/inference.py", line 420, in <listcomp>
    return [await _convert_single_content(c) for c in content]
  File "/usr/local/lib/python3.10/site-packages/llama_stack/providers/inline/inference/meta_reference/inference.py", line 413, in _convert_single_content
    url = await convert_image_media_to_url(content, download=True)
  File "/usr/local/lib/python3.10/site-packages/llama_stack/providers/utils/inference/prompt_adapter.py", line 78, in convert_image_media_to_url
    r = await client.get(media.image.uri)
  File "/usr/local/lib/python3.10/site-packages/httpx/_client.py", line 1814, in get
    return await self.request(
  File "/usr/local/lib/python3.10/site-packages/httpx/_client.py", line 1585, in request
    return await self.send(request, auth=auth, follow_redirects=follow_redirects)
  File "/usr/local/lib/python3.10/site-packages/httpx/_client.py", line 1674, in send
    response = await self._send_handling_auth(
  File "/usr/local/lib/python3.10/site-packages/httpx/_client.py", line 1702, in _send_handling_auth
    response = await self._send_handling_redirects(
  File "/usr/local/lib/python3.10/site-packages/httpx/_client.py", line 1739, in _send_handling_redirects
    response = await self._send_single_request(request)
  File "/usr/local/lib/python3.10/site-packages/httpx/_client.py", line 1776, in _send_single_request
    response = await transport.handle_async_request(request)
  File "/usr/local/lib/python3.10/site-packages/httpx/_transports/default.py", line 376, in handle_async_request
    with map_httpcore_exceptions():
  File "/usr/local/lib/python3.10/contextlib.py", line 153, in __exit__
    self.gen.throw(typ, value, traceback)
  File "/usr/local/lib/python3.10/site-packages/httpx/_transports/default.py", line 89, in map_httpcore_exceptions
    raise mapped_exc(message) from exc
httpx.UnsupportedProtocol: Request URL is missing an 'http://' or 'https://' protocol.
(The same 500 response and traceback are logged again for each automatic client retry.)
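
From the traceback, convert_image_media_to_url passes media.image.uri directly to httpx.AsyncClient.get, so any image URI without an http:// or https:// scheme (for example, a local file path or a raw base64 payload) would produce exactly this failure. A standalone sketch of that httpx behavior, with an illustrative path:

```python
import asyncio

import httpx

async def main() -> None:
    async with httpx.AsyncClient() as client:
        # A URL with no http:// or https:// scheme reaches httpcore's
        # connection pool, which rejects it with UnsupportedProtocol --
        # the same error the server logs above.
        await client.get("images/sample.jpg")

asyncio.run(main())
# httpx.UnsupportedProtocol: Request URL is missing an 'http://' or 'https://' protocol.
```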

Expected behavior

Image inference completes without error, just as text inference does.
