Add Ollama Model for Yorkie intelligence #255
Comments
It seems like a good way for new users to get started without any setup. It would also be helpful to clearly list the required computer specifications.
sihyeong671 added a commit to sihyeong671/codepair that referenced this issue on Aug 18, 2024:
- fix docker compose file (user can change the Ollama container port)
- fix README docs (add --env-file option)
- add usable model
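For illustration, here is a minimal TypeScript sketch of the idea behind these changes: the Ollama container port comes from the environment (e.g. supplied through the `--env-file` passed to docker compose) and only models from a known list are accepted. The variable names and the model list are assumptions, not CodePair's actual configuration.

```ts
// Sketch with assumed names, not CodePair's real config: read the Ollama container
// port from the environment and accept only models from a known-good list.

const USABLE_MODELS = ["llama3.1", "gemma2", "mistral", "phi3"] as const;
type UsableModel = (typeof USABLE_MODELS)[number];

function isUsableModel(name: string): name is UsableModel {
  return (USABLE_MODELS as readonly string[]).includes(name);
}

const port = process.env.OLLAMA_PORT ?? "11434"; // Ollama's default port
const model = process.env.YORKIE_INTELLIGENCE_MODEL ?? "llama3.1";

if (!isUsableModel(model)) {
  throw new Error(`"${model}" is not a usable model (expected one of: ${USABLE_MODELS.join(", ")})`);
}

export const ollamaUrl = `http://localhost:${port}`;
```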
sihyeong671 added a commit to sihyeong671/codepair that referenced this issue on Aug 20, 2024:
- apply GitHub code review
devleejb pushed a commit that referenced this issue on Aug 20, 2024:
* Add .gitignore into root project directory (#279)
* chore: just combine front, back ignore file
  - remove .gitignore in each folder
* chore: fix legacy file & separate OD part in gitignore
* Add ollama llm for yorkie intelligence #255
  - add docker image in docker compose file
  - change yorkie intelligence env var
  - add lib related to ollama
* apply formatting
* Add ollama model option #255
  - fix docker compose file (user can change the Ollama container port)
  - fix README docs (add --env-file option)
  - add usable model
* feat: add modelList type, change port to baseurl #255
  - apply GitHub code review
* fix: apply npm format
* fix: refactor by GitHub review
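Taken together, these commits amount to a provider switch that Yorkie Intelligence reads at startup: an env var selects OpenAI or Ollama, and the Ollama server is addressed by a full base URL rather than a bare port. The TypeScript sketch below only illustrates that idea; the env var names and the `ollama:<model>` value format are made up for this example and are not CodePair's actual implementation.

```ts
// Hypothetical provider switch, not CodePair's actual code.
// YORKIE_INTELLIGENCE selects the backend; OLLAMA_BASE_URL points at the Ollama server
// (a full base URL rather than just a port, so it can live on another host or container).

interface ChatClient {
  complete(prompt: string): Promise<string>;
}

function createChatClient(): ChatClient {
  const provider = process.env.YORKIE_INTELLIGENCE ?? "openai";

  if (provider.startsWith("ollama")) {
    const baseUrl = process.env.OLLAMA_BASE_URL ?? "http://localhost:11434";
    const model = provider.split(":")[1] ?? "llama3.1"; // e.g. "ollama:llama3.1"
    return {
      async complete(prompt) {
        // Ollama's /api/generate endpoint; stream:false returns a single JSON object.
        const res = await fetch(`${baseUrl}/api/generate`, {
          method: "POST",
          headers: { "Content-Type": "application/json" },
          body: JSON.stringify({ model, prompt, stream: false }),
        });
        const data = (await res.json()) as { response: string };
        return data.response;
      },
    };
  }

  // Fall back to the paid OpenAI API.
  return {
    async complete(prompt) {
      const res = await fetch("https://api.openai.com/v1/chat/completions", {
        method: "POST",
        headers: {
          "Content-Type": "application/json",
          Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
        },
        body: JSON.stringify({
          model: "gpt-4o-mini",
          messages: [{ role: "user", content: prompt }],
        }),
      });
      const data = (await res.json()) as {
        choices: { message: { content: string } }[];
      };
      return data.choices[0].message.content;
    },
  };
}
```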
minai621 pushed a commit that referenced this issue on Nov 5, 2024, with the same commit message as above.
What would you like to be added:
An option to use an Ollama model for Yorkie Intelligence, configurable through the env file.
Why is this needed:
CodePair currently uses OpenAI ChatGPT, which costs money, so we need to add an option to use Ollama models, which are free to use and can run locally.
Additional Information:
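Not part of the original issue, but as a small illustration of the local setup: since the Ollama server runs on the user's own machine, the backend could verify at startup that the configured model has actually been pulled, using Ollama's `/api/tags` endpoint (which lists locally available models). The env var names and defaults below are assumptions.

```ts
// Sketch: verify the configured model exists on the local Ollama server before
// enabling Yorkie Intelligence. Uses Ollama's /api/tags endpoint, which lists
// locally pulled models. Env var names are assumptions, not CodePair's.

async function assertModelAvailable(
  baseUrl = process.env.OLLAMA_BASE_URL ?? "http://localhost:11434",
  model = process.env.YORKIE_INTELLIGENCE_MODEL ?? "llama3.1",
): Promise<void> {
  const res = await fetch(`${baseUrl}/api/tags`);
  if (!res.ok) {
    throw new Error(`Ollama server not reachable at ${baseUrl} (HTTP ${res.status})`);
  }
  const data = (await res.json()) as { models: { name: string }[] };
  const available = data.models.map((m) => m.name);
  // Ollama tags include a variant suffix (e.g. "llama3.1:latest"), so match the prefix.
  if (!available.some((name) => name === model || name.startsWith(`${model}:`))) {
    throw new Error(
      `Model "${model}" is not pulled. Available models: ${available.join(", ") || "(none)"}`,
    );
  }
}

assertModelAvailable().catch((err) => {
  console.error(err.message);
});
```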