Free LLM in Your Terminal: Ask AI Directly in Bash
You're deep in the terminal, deep in the zone, and suddenly you need to step out just to ask Google or an LLM (ChatGPT, Claude, whichever you like) something simple. Often the question is really basic: you forgot which tool shows nice system info, or how to check your kernel version on Linux.
Wouldn't it be great if you could ask right there in the terminal and get a quick answer?
Of course, in 2025, for any idea you have, there's already an app for that. Plenty of options exist for chatting with AI models from the terminal. But we don't need some complicated TUI app, just something simple: run a command, pass your query, get an answer. Done.
Thanks to Simon Willison, we have exactly that: https://github.com/simonw/llm. Just call the llm command with your question, get a response, and keep working.
Simon, by the way, is a legend in the space: co-creator of Django and author of an excellent blog that frequently shows up in my X (Twitter) feed.
Here's the catch: the tool works great, but out of the box it expects an API key for a paid provider, and our goal is to avoid spending a cent on quick terminal questions. The solution? Use free models from OpenRouter. Fair warning: "free" isn't truly free. Providers log your prompts and may use them for training. For quick terminal questions, that's an acceptable trade-off.
So here's a step-by-step guide to getting llm working with OpenRouter's free models, letting you ask questions from the terminal without spending any money.
Step 1: Install the tool
We'll do everything on Linux.
uv tool install llm
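If you're not using uv, llm is also published on PyPI, so a plain pip or pipx install should get you the same command:
# either of these installs the same CLI
pip install llm
# or keep it isolated in its own environment
pipx install llm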
Step 2: Check that it works
Try running it with a simple prompt: it'll tell you that you're missing an OpenAI key.
llm "hey"
Error: No key found - add one using 'llm keys set openai' or set the OPENAI_API_KEY environment variable
Step 3: Enable OpenRouter
llm has a plugin system, and of course, there’s already a plugin for OpenRouter.
Install it with:
llm install llm-openrouter
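To confirm the plugin actually registered, ask llm to list its installed plugins; the output should include an entry for llm-openrouter:
llm plugins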
Step 4: Add your OpenRouter key
Get your key from the OpenRouter dashboard, then set it for llm:
llm keys set openrouter
Enter key: <paste key here>
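If you want to double-check that the key was saved, list the keys llm knows about (the plugin should also pick up an OPENROUTER_KEY environment variable, if you'd rather go that route):
llm keys list
You should see openrouter in the list.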
Step 5: Verify it works
llm models list
You should see a long list of models, including ones from OpenRouter.
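The full list is long, so a quick grep narrows it down to just the OpenRouter entries:
llm models list | grep -i openrouter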
Now, let's filter for free models:
llm openrouter models --free
Pick one and test it out.
Personally, I like "glm-4.5-air" – it reminds me of early Claude Sonnet models and handles simple queries really well.
llm "what model are you?" -m "openrouter/z-ai/glm-4.5-air:free"Sample response:
I am a GLM large language model developed and trained by Zhipu AI. My training involves processing and learning from a large amount of text data, enabling me to understand and generate human language. I am designed to be helpful, harmless, and honest, and I can answer questions, provide suggestions, and have conversations with you. Is there something specific you'd like to know about me?
Perfect.
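Side note: llm can also remember a default model, so you can skip the -m flag for one-off questions (the Bash function in step 7 pins the model explicitly, so this is optional):
# make the free model the default for plain `llm "..."` calls
llm models default openrouter/z-ai/glm-4.5-air:free
llm "what model are you?"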
Step 6: Add a short system prompt
Now let's tune the model to respond briefly and to the point.
llm "what model are you?" -m "openrouter/z-ai/glm-4.5-air:free" -s "answer very short. do not use markdown. do not use bullets and other enumeration"Response:
I am GLM, a large language model developed by Zhipu AI.
Nice.
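An alternative to the Bash function coming up in step 7: llm has a templates feature, so (if I'm reading its docs right) you can save the model and system prompt once and reuse them with -t. A sketch, using "short" as the template name:
# save the model and system prompt as a reusable template called "short"
llm -m "openrouter/z-ai/glm-4.5-air:free" -s "answer very short. do not use markdown. do not use bullets and other enumeration" --save short
# reuse it
llm -t short "what model are you?"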
Step 7: Wrap it all into one simple command
Let's make a helper function in your .bashrc.
A function is more flexible than an alias when we want to pass the question as a parameter, so we'll define a small Bash function.
# Bash functions for better aliases
l() {
llm "$1" -m "openrouter/z-ai/glm-4.5-air:free" -s "answer very short. do not use markdown. do not use bullets and other enumeration"
}Step 8: Test your new command
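One note on that "$1": it only picks up the first argument, so the question has to be quoted. If you'd rather type l how do I check the kernel version without quotes, a variant using "$*" joins everything after l into a single prompt:
l() {
    # "$*" joins all arguments into one string, so quoting the question becomes optional
    llm "$*" -m "openrouter/z-ai/glm-4.5-air:free" -s "answer very short. do not use markdown. do not use bullets and other enumeration"
}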
l "how to get kernel version in Debian?"Output:
uname -r
That's it!
Now you've got a CLI command that gives short, helpful answers to your prompts – and it costs zero dollars.
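And since the function is just a thin wrapper, piping still works: llm reads stdin and combines it with your prompt, so you can point it at command output too. For example:
cat /etc/os-release | l "which distro and version is this?"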
