1 parent 7587369 · commit 1857bfd
vcpkg-overlays/llama/portfile.cmake
@@ -2,7 +2,7 @@ vcpkg_from_github(
     OUT_SOURCE_PATH SOURCE_PATH
     REPO ggerganov/llama.cpp
     REF "${VERSION}"
-    SHA512 6a130e8ec30dfe6da1b070c38cd5503896e9ea5f194cef1b97c0038705df04f256be35e89634efeb504434179d08513de4a9368d3ec123acf26cd8ae39085649
+    SHA512 c67fe577673da72315a5324ef99bc84a7d80830b55c16e6aa7e87b2a645bf53b0e6930c033bdbb717ac155028173a64e85dfe3cbdc4f3945caebe1e4c00fd56e
     HEAD_REF master
 )
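When bumping the REF of a port fetched via vcpkg_from_github, the SHA512 in portfile.cmake must be recomputed for the new tag's source archive, since vcpkg verifies the download against it. A minimal sketch of that step, assuming `sha512sum` is available and that GitHub's standard tag-archive URL layout applies (the `curl` URL below is an assumption, not taken from this commit); alternatively, setting SHA512 to 0 and running `vcpkg install` makes vcpkg report the expected hash:

```shell
# Hypothetical workflow for recomputing a port's SHA512 after a REF bump:
#   curl -LO https://github.com/ggerganov/llama.cpp/archive/refs/tags/b2865.tar.gz
#   sha512sum b2865.tar.gz
# Demonstrated on a local file so this snippet is self-contained:
printf 'example archive contents' > demo.tar.gz
# Print only the hex digest (first whitespace-separated field).
sha512sum demo.tar.gz | cut -d' ' -f1
```

The resulting digest is what goes after `SHA512` in the portfile; a mismatch there is the most common cause of a failed `vcpkg_from_github` download step after a version bump.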
vcpkg-overlays/llama/vcpkg.json
@@ -1,6 +1,6 @@
 {
     "name": "llama",
-    "version-string": "b2700",
+    "version-string": "b2865",
     "homepage": "https://github.com/ggerganov/llama.cpp",
     "description": "Inference of LLaMA model in pure C/C++.",
     "dependencies": [