This is a PSA about a buffer overflow vulnerability in GGUF files. Commenters update OP that it was fixed quite early on, but they agree the awareness is still valuable for anyone running older versions of llama.cpp.

Others make broader, meta-level recommendations: don't run models as root/admin, and if you're serious about security, run them on isolated machines. Their concern stems from the library being written in a memory-unsafe language.

Got curious about the “fix”: the bug is in the vocabulary-handling code, where the helper _try_copy can apparently be misused so that a length exceeds INT32_MAX, leading to an unchecked memcpy.