Could anyone recommend an LLM that could be run locally or on Google Colab? Thanks
I believe Llama is open source, but I'm not sure how complicated it is to get running locally. Never mind: https://replicate.com/blog/run-llama-locally
You can probably write a bash wrapper around it that feeds in "Can you summarize this text: (text here)" by setting the PROMPT variable in the bash script (probably just PROMPT="Can you summarize this text: $1"). Obviously don't recompile every time, so remove the clone, build, and download code.
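A minimal sketch of that wrapper, assuming llama.cpp is already cloned and built (the binary name, model filename, and token count below are assumptions; older builds ship ./main, newer ones name the binary llama-cli):

    #!/usr/bin/env bash
    # summarize.sh -- hypothetical wrapper around an already-built llama.cpp
    set -euo pipefail

    MODEL="models/your-model.gguf"            # point this at whatever .gguf you downloaded
    PROMPT="Can you summarize this text: $1"  # $1 is the text passed to the script

    # no cloning/building/downloading here -- just run inference
    ./main -m "$MODEL" -p "$PROMPT" -n 256

Then call it like ./summarize.sh "some long text to summarize".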
thx
Just to warn you, it might be very bulky, and the model the script downloads is deprecated, so you'll have to find a different .gguf model on Hugging Face. Try to find a lightweight .gguf model and replace the MODEL variable with its name, as well as the rest of the link. Or just download it from a browser and move it into the models folder.
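For example (the repo and filename below are placeholders, not a real model recommendation; Hugging Face serves raw model files from the .../resolve/main/... path):

    # fetch a lightweight .gguf into the models folder
    wget -P models/ "https://huggingface.co/<user>/<repo>/resolve/main/<model>.gguf"
    # then update the script to match
    MODEL="models/<model>.gguf"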