Integrating DeepSeek-Coder with Zed IDE - Assistance Needed #14199
Replies: 6 comments 6 replies
-
Zed version: https://zed.dev/releases/stable/0.142.4

"provider": {
    "name": "openai",
    "type": "openai",
    "default_model": {
        "custom": {
            "name": "deepseek-coder",
            "max_tokens": 32000
        }
    },
    "available_models": [
        {
            "custom": {
                "name": "deepseek-chat",
                "max_tokens": 32000
            }
        },
        {
            "custom": {
                "name": "deepseek-coder",
                "max_tokens": 32000
            }
        }
    ],
    "api_url": "https://api.deepseek.com/v1"
}
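If the custom models still do not show up in the dropdown, it may also be necessary to point the assistant itself at the openai provider. A minimal sketch, reusing the version "2" assistant settings format that appears later in this thread (the model name must match one of the custom entries above):

"assistant": {
    "version": "2",
    "default_model": {
        // assumption: selecting the custom entry defined in the provider block
        "provider": "openai",
        "model": "deepseek-coder"
    }
}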
-
Also
-
For the newer Zed version 0.149.5, this setting works for me:
Don't forget to set the DeepSeek api_key on the Zed assistant configuration page.
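A minimal sketch of that style of configuration, assuming the language_models settings section used by newer Zed releases (exact field names may differ between versions, so check the current Zed docs):

{
    "language_models": {
        "openai": {
            // assumption: DeepSeek's OpenAI-compatible endpoint, as in the earlier snippet
            "api_url": "https://api.deepseek.com/v1",
            "available_models": [
                { "name": "deepseek-chat", "max_tokens": 32000 },
                { "name": "deepseek-coder", "max_tokens": 32000 }
            ]
        }
    },
    "assistant": {
        "version": "2",
        "default_model": {
            "provider": "openai",
            "model": "deepseek-coder"
        }
    }
}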
-
Does anyone know how to configure the temperature of deepseek-coder in Zed?
-
Has anyone encountered this error?
-
Zed settings.json:

{
    "assistant": {
        "default_model": {
            "provider": "copilot_chat",
            "model": "gpt-4o"
        },
        "version": "2"
    },
    "provider": {
        "name": "openai",
        "type": "openai",
        "api_key": "<key>",
        "api_url": "https://api.openai.com/v1/engines",
        "default_model": {
            "custom": {
                "name": "deepseek-coder",
                "max_tokens": 128000
            }
        },
        "available_models": [
            {
                "custom": {
                    "name": "deepseek-chat",
                    "max_tokens": 128000
                }
            },
            {
                "custom": {
                    "name": "deepseek-coder",
                    "max_tokens": 128000
                }
            }
        ]
    }
}

What am I missing here in order to add DeepSeek as an LLM in Zed? It still only shows those LLMs at the moment.
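One visible difference from the working configuration earlier in this thread is the api_url, which here points at OpenAI's endpoint (with an /engines path) rather than DeepSeek's, and the assistant default_model still selects copilot_chat / gpt-4o instead of one of the custom entries. A sketch of the provider block with the endpoint swapped, assuming the same DeepSeek OpenAI-compatible API used above:

"provider": {
    "name": "openai",
    "type": "openai",
    "api_key": "<key>",
    // assumption: DeepSeek's OpenAI-compatible base URL, as in the first reply
    "api_url": "https://api.deepseek.com/v1",
    "default_model": {
        "custom": { "name": "deepseek-coder", "max_tokens": 128000 }
    },
    "available_models": [
        { "custom": { "name": "deepseek-chat", "max_tokens": 128000 } },
        { "custom": { "name": "deepseek-coder", "max_tokens": 128000 } }
    ]
}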
-
Hello Zed community,
I'm trying to integrate the DeepSeek Coder language model into Zed IDE as an AI assistant, but I'm encountering some issues. I'd appreciate any help or guidance from those who might have experience with this integration.
Current setup:
I've added the following configuration to my settings:
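The exact settings are not reproduced here; a minimal sketch of the kind of provider block being described, along the lines of the snippet quoted in the first reply:

"provider": {
    "name": "openai",
    "type": "openai",
    // assumption: DeepSeek's OpenAI-compatible endpoint
    "api_url": "https://api.deepseek.com/v1",
    "default_model": {
        "custom": { "name": "deepseek-coder", "max_tokens": 32000 }
    }
}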
Issue:
Despite DeepSeek stating that their API is compatible with OpenAI's, I haven't been able to get it working. I've tried both with and without the "v1" in the `api_url`, but there's no progress. The assistant panel still shows only the GPT-4 and GPT-3.5 models in the dropdown menu.
Questions:
Any insights, suggestions, or documentation links would be greatly appreciated. Thank you in advance for your help!
Environment details: