Thanks for your excellent work. However, I ran into some odd behavior when using the tool with an in-house model client.
Firstly, I define the client:

```python
from openai import OpenAI

client = OpenAI(base_url="IN-HOUSE IP address", api_key="IN-HOUSE KEY")
```
And I register the model by using the `config`:

```python
ell.config.register_model("CUSTOM", client)
```
Then I use `ell.simple`:

```python
ell.init(store="./ell_logdir", verbose=True)

@ell.simple(model="CUSTOM", client=client)
def my_lmp(prompt: str):
    """You are a helpful assistant."""
    return f"Respond to: {prompt}"
```
Finally, I call this simple function:

```python
import logging

logger = logging.getLogger(__name__)  # assumed setup; any configured logger works

temp = my_lmp("Sam")
logger.info(temp)
logger.info("Finish")
```
But I got nothing back.
By the way, if I do not use `ell` to record the prompt, I can get the correct response.
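For reference, this is roughly the direct call that works; a minimal sketch, assuming the same in-house endpoint and that the server exposes the model name as "CUSTOM":

```python
# Direct call without ell in the loop; this returns a response fine.
# Assumes the client defined above and that "CUSTOM" is the model name
# the in-house server actually serves.
response = client.chat.completions.create(
    model="CUSTOM",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Respond to: Sam"},
    ],
)
print(response.choices[0].message.content)
```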
Calling ollama locally works for me:

```python
import ell
from openai import OpenAI

client = OpenAI(
    base_url='http://127.0.0.1:11434/v1',
    api_key='ollama',  # required, but unused
)

ell.init(verbose=True)

class PromptComponent:
    def __init__(self, template: str):
        self.template = template

    @ell.simple(model="qwen2.5:7b-instruct-q4_K_M", client=client)
    def generate_prompt(self, **kwargs) -> str:
        prompt = self.template.format(**kwargs)
        return prompt

    def set_template(self, template: str):
        self.template = template

    def get_template(self) -> str:
        return self.template

def main():
    # "Hello, please implement a {name} component for me in Python"
    template = "你好,请用python给我实现:{name} 组件"
    prompt_component = PromptComponent(template)
    prompt_component.generate_prompt(name="提示工程")  # "prompt engineering"

if __name__ == "__main__":
    main()
```