Actions: katsu560/llama.cpp

CI

22 workflow runs

ggml : do not use ARM features not included in the build (#10457)
CI #63: Commit 55ed008 pushed by katsu560
November 23, 2024 16:13 1h 16m 37s master
gguf-py : fix double call to add_architecture() (#8952)
CI #62: Commit 911b437 pushed by katsu560
August 10, 2024 07:02 1h 0m 24s master
batched-bench : handle empty -npl (#8839)
CI #61: Commit ecf6b7f pushed by katsu560
August 4, 2024 11:55 59m 40s master
Vulkan MMQ Fix (#8479)
CI #60: Commit bda62d7 pushed by katsu560
July 15, 2024 09:40 57m 52s master
llama : return nullptr from llama_grammar_init (#8093)
CI #59: Commit e6bf007 pushed by katsu560
June 25, 2024 19:10 53m 21s master
server : new UI (#7633)
CI #58: Commit 2e66683 pushed by katsu560
June 1, 2024 20:46 43m 22s master
ggml: implement quantized KV cache for FA (#7372)
CI #57: Commit 5ca49cb pushed by katsu560
May 19, 2024 15:05 37m 31s master
(run title unavailable) April 27, 2024 02:09 35m 56s
(run title unavailable) April 6, 2024 13:00 29m 34s
gitignore : gguf-split
CI #54: Commit 9556217 pushed by katsu560
March 23, 2024 21:11 29m 10s master
readme : add API changes section
CI #53: Commit 231ae28 pushed by katsu560
March 3, 2024 11:54 33m 41s master
mpt : do not duplicate token_embd.weight on disk (#5670)
CI #52: Commit 15499eb pushed by katsu560
February 23, 2024 14:05 30m 52s master
cmake : fix VULKAN and ROCm builds (#5525)
CI #51: Commit 5bf2b94 pushed by katsu560
February 17, 2024 16:25 27m 7s master
vulkan: Set limit for task concurrency (#5427)
CI #50: Commit 4b7b38b pushed by katsu560
February 10, 2024 00:31 28m 56s master
(run title unavailable) February 6, 2024 14:15 41m 18s
Remove unused data and add fixes (#5154)
CI #48: Commit 35a2ee9 pushed by katsu560
January 27, 2024 14:53 25m 20s master
llama.swiftui : use correct pointer for llama_token_eos (#4797)
CI #47: Commit c75ca5d pushed by katsu560
January 7, 2024 03:17 28m 29s master
flake.lock: update
CI #46: Commit edd1ab7 pushed by katsu560
January 1, 2024 03:31 23m 41s master
fallback to CPU buffer if host buffer alloc fails (#4610)
CI #45: Commit 708e179 pushed by katsu560
December 24, 2023 01:11 20m 40s master
llama : sanity checks for access to logits (#4274)
CI #44: Commit 8a5be3b pushed by katsu560
December 16, 2023 13:29 23m 44s master
llama : avoid using "optional" keyword (#4283)
CI #43: Commit 5a7d312 pushed by katsu560
December 2, 2023 06:53 15m 28s master
server : allow continue edit on completion mode (#3950)
CI #42: Commit 4a4fd3e pushed by katsu560
November 11, 2023 03:33 24m 51s master