WASM runs in the web #17
base: main
Conversation
WASM runs in the web now.
To make it run in the offline Docker, the 8 GB LLM model needs to be put into the /public/mistral/ directory.
Model download link: https://huggingface.co/zxrzxr/Mistral-7B-Instruct-v0.1-q4f32_1
TODOs:
Link between biochatter-server and the WASM (should be easy as long as the output of biochatter is loaded into the DOM and then loaded into WASM; see the sketch below)
Chat-UI: synchronize with the next UI.
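For the first TODO, a minimal sketch of what the DOM-to-WASM link could look like, assuming the current @mlc-ai/web-llm API (CreateMLCEngine and the OpenAI-style chat.completions.create call); the element id "biochatter-output", the function name, and the model id string are placeholders, not code from this PR.

```typescript
import { CreateMLCEngine } from "@mlc-ai/web-llm";

// Hypothetical sketch of the "biochatter output -> DOM -> WASM" link described
// in the TODO above. Element id, model id, and function name are assumptions.
async function answerFromBiochatterOutput(question: string): Promise<string> {
  // Read whatever biochatter-server has rendered into the page.
  const context =
    document.getElementById("biochatter-output")?.textContent ?? "";

  // Load the Mistral weights served from /public/mistral/.
  const engine = await CreateMLCEngine("Mistral-7B-Instruct-v0.1-q4f32_1");

  // Hand the DOM content to the in-browser model as context.
  const reply = await engine.chat.completions.create({
    messages: [
      { role: "system", content: `Use this context:\n${context}` },
      { role: "user", content: question },
    ],
  });
  return reply.choices[0]?.message?.content ?? "";
}
```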
Hi @xiaoranzhou,
Sure, I will
New file:
Thanks!
chatgse/app/constant.ts
Outdated
@@ -12,7 +12,7 @@ export const DOCS_URL = 'https://biocypher.org'

 export const DEFAULT_CORS_HOST = "https://a.nextweb.fun";
 export const DEFAULT_API_HOST = `${DEFAULT_CORS_HOST}/api/proxy`;
-export const OPENAI_BASE_URL = "https://api.openai.com";
+export const OPENAI_BASE_URL = "http://localhost:5000/";
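If the localhost URL is only needed for the WASM/offline setup, one option would be to keep the public endpoint as the default and only override it when explicitly configured; a sketch only, where the environment variable name is hypothetical and not defined in this repository.

```typescript
// Sketch: only point at the local backend when it is explicitly configured,
// so the regular OpenAI path keeps working in non-WASM mode.
// NEXT_PUBLIC_LOCAL_API_BASE is a hypothetical variable name.
export const OPENAI_BASE_URL =
  process.env.NEXT_PUBLIC_LOCAL_API_BASE ?? "https://api.openai.com";
```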
Does this interfere with regular behaviour when using the app in non-WASM mode? @xiaoranzhou
@xiaoranzhou could you comment?
TODO: 1. Switch between the OpenAI API and the WASM module. 2. Stop the summarize function because it uses too many tokens.
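A rough sketch of how the switch in TODO 1 could look, dispatching on the selected model name; callOpenAI and callWasmEngine are hypothetical stand-ins for whatever request paths the app already has, not functions in this repository.

```typescript
// Hypothetical helpers; stand-ins for the app's existing request paths.
declare function callOpenAI(model: string, prompt: string): Promise<string>;
declare function callWasmEngine(prompt: string): Promise<string>;

// Sketch: route a request to the in-browser WASM engine when the user has
// selected the "mistral-wasm" model, otherwise use the regular OpenAI API.
async function routeChat(model: string, prompt: string): Promise<string> {
  if (model === "mistral-wasm") {
    return callWasmEngine(prompt);
  }
  return callOpenAI(model, prompt);
}
```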
@slobentanzer Hi Sebastian, I have finished the new integration. Now if you select "mistral-wasm" in the model selection, the WASM model will be used. The usage is exactly the same as the ChatGPT version. I have tested it and it works pretty well. The webllm page can be deleted.
Great, thanks! Will review ASAP. @fengsh27, if you want to take a look, your feedback would also be appreciated. :)
Hi @xiaoranzhou, I merged the current state into your PR and adjusted the biochatter-server with the WASM class, but trying to get a response from the WASM model in my browser still does not work (it just shows the processing dots indefinitely). Could you check the current version of the PR and whether this setup works on your side?
More specifically, could we include some information about the loading status of the WASM model somewhere in the app? Currently I can't tell if the model is being loaded or ready for queries.
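One way the loading status could be surfaced, as a sketch: it assumes the initProgressCallback option from the current @mlc-ai/web-llm documentation (which may differ from the version used in this PR), and the "wasm-status" element id is hypothetical.

```typescript
import { CreateMLCEngine } from "@mlc-ai/web-llm";

// Sketch: report model download/compile progress to the UI so users can tell
// whether the WASM model is still loading or ready for queries.
async function loadModelWithStatus() {
  const statusEl = document.getElementById("wasm-status"); // hypothetical element

  const engine = await CreateMLCEngine("Mistral-7B-Instruct-v0.1-q4f32_1", {
    initProgressCallback: (report) => {
      // report.text is a human-readable progress string from web-llm.
      if (statusEl) statusEl.textContent = report.text;
    },
  });

  if (statusEl) statusEl.textContent = "Model ready";
  return engine;
}
```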
Thanks for the feedback and check. I will have a look later today.
Yes, I was checking on the latest Chrome. Did not see that small grey font text though, just the regular UI components. Where in the frontend code do you do the status update?
Thanks @xiaoranzhou! Is this the latest version after the merge I did? Could you describe how you start the different parts of the service?
No, that was the version before your reversed merge in the fork. The new biochatter-server was giving an "Exit(3)" and had the following debug information:
The issue of biochatter-server has been resolved and a pull request biocypher/biochatter-server#4 has been submitted.
I tested the WASM after the merge, and it works fine. We can organise a meeting to troubleshoot the Mac usage 😄 @slobentanzer
Hi @xiaoranzhou,
@slobentanzer The current version is independent from the biochatter-server. This is because the chatgse-next port cannot reach port 5001 of the biochatter-server.
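If both services run on the same Docker network, the usual approach is to address the backend by its service name rather than localhost; a sketch only, where the service name, port mapping, endpoint path, and environment variable are all assumptions rather than code from this PR.

```typescript
// Sketch: inside a Docker network, "localhost" resolves to the chatgse-next
// container itself, so the backend must be reached via its service name.
// Service name, endpoint path, and env variable are assumptions.
const BIOCHATTER_SERVER_URL =
  process.env.BIOCHATTER_SERVER_URL ?? "http://biochatter-server:5001";

async function queryBiochatterServer(question: string): Promise<unknown> {
  const res = await fetch(`${BIOCHATTER_SERVER_URL}/chat`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ message: question }),
  });
  return res.json();
}
```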