Background
I just realized that avante's auto-suggestion is not Copilot, but Cursor-like (Cursor Tab), and this video demonstrates its capabilities:
My.Movie.mp4
(The video has been sped up; in reality the responses are quite slow.)
You can see that it provides document-wide, multi-line suggestions (Cursor Tab) instead of text-only completion at the cursor (Copilot).
For relatively simple requirements, we don't want the "describe the requirement → wait for the response → check the result" loop in a chat box; entering accurate, complete code directly is much faster.
That is why Cursor Tab is a killer feature for many people: the AI automatically infers the requirement from what we type and offers complete completion suggestions.
It's funny that I was still looking for a Cursor Tab replacement without realizing that avante has already implemented it. Here are some community posts:
Problem
Now there are two problems:
1. Latency doesn't fit the auto-suggestion usage scenario: if the AI doesn't respond immediately, we keep typing instead of waiting for it.
2. The token consumption is huge: a normal project file usually has hundreds or thousands of lines of code, and after just one afternoon of testing I had burned through a few million tokens.
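As a rough back-of-the-envelope check of that observation (all numbers below are my own assumptions for illustration, not measurements):

```python
# Why resending the whole file per keystroke burns millions of tokens.
# Every number here is an assumed ballpark, not a measurement.

file_lines = 1000          # a typical source file
tokens_per_line = 12       # rough average for code
tokens_per_request = file_lines * tokens_per_line  # file resent in full each time

requests_per_hour = 120    # one suggestion every ~30 seconds of typing
hours = 4                  # "one afternoon"

total_tokens = tokens_per_request * requests_per_hour * hours
print(total_tokens)  # 5760000 -- millions of tokens, matching the observation
```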
Solution ideas
For an open-source project, it is difficult to train a dedicated model for this feature the way Cursor does.
We can only rely on the capabilities of the general-purpose models themselves, and the good news is that some of them (DeepSeek, Claude) provide prompt caching.
When the cache is hit, it not only reduces token consumption but also improves response speed.
This means that if we can take advantage of caching, we can solve both problems at once.
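As a sketch of how the cache could be exploited: the large, stable part of the prompt (the file) is marked cacheable, so byte-identical prefixes across consecutive requests hit the provider's cache. The field names follow Anthropic's prompt-caching convention as I understand it (DeepSeek caches matching prefixes automatically); treat the payload shape as an assumption, not a verified integration:

```python
def build_request(file_content: str, cursor_context: str) -> dict:
    """Build a chat payload whose large, stable part (the file) is marked
    cacheable, so repeated requests can hit the provider's prompt cache.
    'cache_control' follows Anthropic's prompt-caching convention; other
    providers (e.g. DeepSeek) cache matching prefixes automatically."""
    return {
        "model": "claude-3-5-sonnet-20241022",  # illustrative model id
        "max_tokens": 256,
        "system": [
            {
                "type": "text",
                "text": "You are a code-completion engine. File:\n" + file_content,
                # mark the big, stable block as cacheable
                "cache_control": {"type": "ephemeral"},
            }
        ],
        "messages": [
            {"role": "user", "content": "Complete the code near: " + cursor_context}
        ],
    }

# Two consecutive keystrokes: the cacheable system prefix stays byte-identical,
# only the short user message changes.
r1 = build_request("def add(a, b):\n    ...", "def add")
r2 = build_request("def add(a, b):\n    ...", "def add(")
assert r1["system"] == r2["system"]  # identical prefix -> cache hit
```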
Concrete implementation
To take advantage of caching, the conversation messages must be transmitted incrementally.
The current suggestion implementation sends the entire contents of the file with every request.
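A minimal sketch of the incremental idea (the class and message format are my own invention, not avante's actual API): send the whole file once, then append only the edits, so the message prefix stays byte-identical and a prefix cache can hit:

```python
# Hypothetical sketch: incremental conversation transfer for suggestions.
# Send the file once; afterwards, append only deltas so earlier messages
# (the cacheable prefix) never change between requests.

class IncrementalSession:
    def __init__(self, file_content: str):
        # The first message carries the full file; it becomes the stable prefix.
        self.messages = [
            {"role": "user", "content": "FILE:\n" + file_content},
        ]

    def on_edit(self, line: int, new_text: str) -> list:
        # Later messages carry only the delta, never the whole file again.
        self.messages.append(
            {"role": "user", "content": f"EDIT line {line}: {new_text}"}
        )
        return self.messages  # earlier entries are unchanged -> prefix reuse

session = IncrementalSession("def add(a, b):\n    return a + b\n")
first = list(session.messages)
session.on_edit(2, "    return a + b  # TODO: overflow")
# The original prefix is untouched; only one short message was appended.
assert session.messages[:1] == first
```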
We may need the following modifications:
I have a simple demo here (ignore the transmitted message format; it is just for my own testing):
Demo
rules.md
chat history:
Finally: is anyone working on something similar? And how do you feel about the feasibility of this approach?