-
Take a look at this example; I was wondering the same question. dbb6800
-
If you are looking for Flux integration, I made this
-
This is just a test. I hope to attach the image after the response is sent, but I haven't passed the image to the model for processing yet. Here is what I used:

```python
from typing import List, Union, Generator, Iterator
from schemas import OpenAIChatMessage
import requests
import base64
import itertools
from io import BytesIO
from PIL import Image


class Pipeline:
    def __init__(self):
        # Optionally, you can set the id and name of the pipeline.
        # Best practice is to not specify the id so that it can be
        # automatically inferred from the filename, so that users can
        # install multiple versions of the same pipeline.
        # The identifier must be unique across all pipelines.
        # The identifier must be an alphanumeric string that can include
        # underscores or hyphens. It cannot contain spaces, special
        # characters, slashes, or backslashes.
        # self.id = "ollama_pipeline"
        self.name = "Ollama Pipeline"

    async def on_startup(self):
        # This function is called when the server is started.
        print(f"on_startup:{__name__}")

    async def on_shutdown(self):
        # This function is called when the server is stopped.
        print(f"on_shutdown:{__name__}")

    def process_messages(self, messages):
        # Collect the photo attached to the last message and wrap its URL
        # in markdown image syntax so open-webui renders it.
        try:
            image_url = messages[-1]["content"][1]["image_url"]["url"]
            image_data = f"\n![image]({image_url})\n"
            # Strip the Base64 data-URL prefix:
            # if image_data.startswith("data:image"):
            #     image_data = image_data.split(",")[1]
            # Decode Base64 to binary:
            # image_binary = base64.b64decode(image_data)
            # Convert to a PIL image:
            # image = Image.open(BytesIO(image_binary))
        except (KeyError, IndexError, TypeError):
            image_data = "No image in user_message"
        return image_data

    def pipe(
        self, user_message: str, model_id: str, messages: List[dict], body: dict
    ) -> Union[str, Generator, Iterator]:
        # This is where you can add your custom pipelines like RAG.
        print(f"pipe:{__name__}")

        OLLAMA_BASE_URL = "http://host.docker.internal:11434"
        MODEL = "llama3"

        if "user" in body:
            print("######################################")
            print(f'# User: {body["user"]["name"]} ({body["user"]["id"]})')
            print(f"# Message: {user_message}")
            print("######################################")

        try:
            r = requests.post(
                url=f"{OLLAMA_BASE_URL}/v1/chat/completions",
                json={**body, "model": MODEL},
                stream=True,
            )
            r.raise_for_status()

            if False:  # body["stream"]: streaming disabled for this test
                return r.iter_lines()
            else:
                # Append the image markdown after the streamed response
                # lines. Note: r.iter_lines() yields bytes while image_data
                # is a str.
                image_data = self.process_messages(messages)
                image_iter = iter([image_data])
                combined = itertools.chain(r.iter_lines(), image_iter)
                return combined  # r.json()
        except Exception as e:
            return f"Error: {e}"
```
-
Hi all,
I have a pipeline which loads and functions OK. It connects to Replicate and sends my prompt, and an image is generated. The output of the replicate.run() command is a list of outputs, including a link to the generated image. I want to send this back to open-webui to display, but I am unable to. Can someone help me?
I think my issue is with the pipe function's return type and with how open-webui renders the value returned from my pipeline.
Any assistance would be very much appreciated!
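In case a sketch helps: the markdown approach from the earlier comment should apply to a Replicate pipeline as well, since pipe can return a plain string and open-webui renders it as markdown. The following is a minimal, untested sketch; the model slug, the shape of replicate.run()'s output, and the assumption that REPLICATE_API_TOKEN is set in the environment are placeholders, not a verified implementation.

```python
from typing import List, Union, Generator, Iterator
import replicate  # assumes REPLICATE_API_TOKEN is set in the environment


class Pipeline:
    def __init__(self):
        self.name = "Replicate Image Pipeline"

    async def on_startup(self):
        pass

    async def on_shutdown(self):
        pass

    def pipe(
        self, user_message: str, model_id: str, messages: List[dict], body: dict
    ) -> Union[str, Generator, Iterator]:
        try:
            # replicate.run() returns the model's output; for image models
            # this is typically a list whose items stringify to URLs.
            outputs = replicate.run(
                "black-forest-labs/flux-schnell",  # hypothetical model slug
                input={"prompt": user_message},
            )
            urls = [str(o) for o in outputs]
            if not urls:
                return "No image was generated."
            # Returning a plain markdown string lets open-webui render
            # the image(s) inline in the chat.
            return "\n".join(f"![image]({u})" for u in urls)
        except Exception as e:
            return f"Error: {e}"
```

The key point is converting whatever replicate.run() returns into URL strings and wrapping them in `![image](...)` so the chat view displays them.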