This article is based on a video.
I spent three months debugging a PWA fitness tracker before realizing I could have built it in an afternoon with AI assistance—for free. Claude Code’s $100/month price tag scared me away too, until I discovered OpenRouter lets you access capable AI coding models without spending a dime. Most setup guides skip the gotchas that will waste your first hour. This one won’t.
What Is Claude Code and Why Free Access Matters
Anthropic built Claude Code to be the assistant that actually understands your whole codebase — not just the file you’re editing, but how everything connects. It’s a command-line tool that reads your repository, suggests changes, writes tests, and helps you refactor without the back-and-forth of copying code into a chat window. For developers who work from the terminal, it’s genuinely useful.
The problem? That functionality lives behind a Pro subscription that costs $100/month. I’m not here to debate whether that’s reasonable for a business — but for a hobbyist tinkering with side projects or an indie dev shipping something unpaid, it’s a steep wall.
Claude Code as Anthropic’s official CLI assistant
Claude Code gives you a persistent session where it can read files, run shell commands, and maintain context across your entire project. It’s designed to feel like having a senior developer looking over your shoulder. Unlike chat-based tools, it works directly in your terminal and can edit files, git commit, and execute code. This is the core appeal — and why the pricing matters.
The pricing problem for hobbyists and indie developers
I’ve watched friends abandon Claude Code mid-build because they hit the subscription wall during a weekend project. Official access requires the Pro plan, and at $100/month, it’s hard to justify when you’re not earning revenue from the project yet. The tool is genuinely good. The price just doesn’t fit every workflow.
OpenRouter as a free-tier gateway to AI models
Here’s where it gets interesting: OpenRouter aggregates over 100 AI models behind a single API. They offer free tier credits when you sign up, and critically, they expose endpoints that are compatible with Anthropic’s API format.
This means you can route Claude Code through OpenRouter without modifying your workflow. The CLI connects to OpenRouter’s endpoint instead of Anthropic’s directly, using the same `ANTHROPIC_API_KEY` environment variable. You’re still getting a quality coding model — just through a different gateway that happens to offer free access.
It’s the same pattern as using Cloudflare’s free tier for hosting: the service costs less because someone else is absorbing the infrastructure cost. OpenRouter makes its money from paid users while offering free access as a way to grow the platform.
Setting Up Your OpenRouter Account and API Key
I’ll walk through the setup process, which takes about five minutes if you move quickly. The best part? You don’t hand over any payment details.
Creating Your Account with Initial Credits
Head to openrouter.ai and click Sign Up. You can use email, Google, or GitHub—pick whichever you already have open. After confirming your email, you’ll land on a dashboard showing your balance. New users get free credits loaded immediately.
This is where most tutorials gloss over something important: OpenRouter’s free tier includes daily credit refreshes and access to multiple models. You’re not just getting a one-time trial. According to their documentation, free users can access models like Meta’s Llama and Mistral’s offerings without spending a cent. Think of it like a sample buffet—you get to taste several dishes before deciding what you want more of.
Generating Your API Key
Once signed in, navigate to API Keys in the sidebar, then click Create Key. Here’s where I want you to pause: name it descriptively. Call it something like ‘claude-code-dev’ or ‘testing-environment’. I know it feels like extra work, but trust me—six months from now when you have three keys and can’t remember which one is for production, you’ll wish you’d done this.
After naming your key, click create. You’ll see the key displayed exactly once. Copy it immediately and paste it somewhere safe. OpenRouter won’t show it again, and there’s no “forgot my key” button.
Securing Your Key
This is the step most people skip, and it’s the most critical. Store API keys in environment variables, never in code or git. Create a .env file in your project root, add it to your .gitignore, and reference it that way. If you’re not sure how to set environment variables on your system, a quick search for “how to set environment variable [your OS]” takes about two minutes.
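A minimal sketch of that setup, run from your project root (the key value is a placeholder, not a real key):

```shell
# Store the key in .env and keep that file out of version control.
echo 'ANTHROPIC_API_KEY=sk-or-v1-your-key-here' >> .env
echo '.env' >> .gitignore
```

Tools like Claude Code and most frameworks can read the key from the environment once you source or load the `.env` file.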
Exposed API keys are one of the most common ways developers burn through credits unexpectedly, or worse, get their accounts compromised.
Configuring Claude Code to Use OpenRouter as Your Provider
One of the cleverest things about Claude Code is that it doesn’t actually care where your API calls end up — it just needs somewhere to send them. This means you can point it toward OpenRouter instead of Anthropic’s servers and get access to dozens of models, including free ones, while using the same familiar CLI.
Installing Claude Code CLI if You Haven’t Already
If you’re starting from scratch, grab Claude Code via npm with `npm install -g @anthropic-ai/claude-code`. Make sure Node.js is on your machine first. Once it’s in, run `claude --version` (the package installs a `claude` binary) to confirm the installation worked before moving forward.
Configuring Environment Variables for External API Access
Here’s where the redirect happens. You’ll need to set two environment variables before Claude Code will talk to OpenRouter instead of Anthropic.
Set ANTHROPIC_BASE_URL to `https://openrouter.ai/api/v1`; this tells Claude Code where to send requests. Then set ANTHROPIC_API_KEY to your OpenRouter key (grab one from openrouter.ai/keys if you haven’t yet). Add both to your shell profile (.zshrc, .bashrc, or whatever you’re using) so they persist across sessions.
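In a .zshrc or .bashrc, the redirect looks like this (the key below is a placeholder):

```shell
# Point Claude Code at OpenRouter instead of Anthropic's servers.
export ANTHROPIC_BASE_URL="https://openrouter.ai/api/v1"
# Placeholder: paste the key you generated at openrouter.ai/keys.
export ANTHROPIC_API_KEY="sk-or-v1-your-key-here"
```

Remember to open a new terminal (or `source` the profile) so the variables take effect in your current session.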
Selecting OpenRouter as Your Model Provider in Settings
Once the environment variables are set, you can specify which model you want in `~/.claude/settings.json` or via command flags when you start a session. The config file looks something like this:
```json
{
  "model": "anthropic/claude-3.5-sonnet"
}
```
On OpenRouter, you might swap that for a free model like `google/gemini-2.0-flash-thinking-exp`. Experiment a bit here — different models have different strengths, and what works for one project might not be ideal for another.
Testing Your Connection with a Simple Coding Prompt
Before starting an interactive session, test everything with `claude --print` (or `-p`). This runs a single prompt and exits, so you can verify your API key and model selection are working without committing to a full conversation. Hit an error? Double-check that both environment variables are loaded in your current terminal.
If it works, you’re in. One nice bonus: OpenRouter handles rate limits and retries automatically, which means fewer unexpected interruptions mid-session.
Which Free Models Work Best for Coding Tasks
Top free models ranked by coding capability
I’ve tested a handful of free models for real coding work, and two stand out at the top of the heap: Google Gemini 2.0 Flash and DeepSeek V3. Both handle multi-file generation and complex logic surprisingly well for zero cost. Gemini 2.0 Flash especially shines when you need speed—it’ll churn through boilerplate code faster than most paid alternatives.
For smaller, focused tasks, I’ve found Mistral Small and Qwen 2.5 Coder to be surprisingly capable. They’re not trying to build entire backends in one shot, but ask them to write a single function, refactor a component, or explain a tricky error, and they’ll deliver clean code without the bloat.
Balancing speed, context length, and code quality
Here’s where most people get tripped up: context windows vary wildly across free models. Some cap out at 8K tokens, which sounds fine until you’re debugging a 500-line file with imports and dependencies. If you’re working with anything beyond a single file, aim for models offering 32K or higher context. Without enough breathing room, the model starts forgetting what it wrote three turns ago—and that’s how you get inconsistent code.
Free models handle boilerplate, debugging, and feature additions well. What I’ve noticed is they struggle when the task requires understanding your entire codebase at once. Give them a clear scope, and they’ll surprise you.
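A back-of-the-envelope way to check whether a file will even fit: the four-characters-per-token ratio below is a rough assumption, not an actual tokenizer, but it’s close enough for sizing decisions.

```javascript
// Rough heuristic: ~4 characters per token for English prose and code.
// Good enough to decide whether a file fits a model's context window.
function fitsContext(text, contextTokens, charsPerToken = 4) {
  return Math.ceil(text.length / charsPerToken) <= contextTokens;
}

// A 500-line file at ~80 chars per line is ~40,000 chars, i.e. ~10,000
// tokens: too big for an 8K-context model, comfortable in a 32K one.
```

If a file blows past the window, split the task: paste only the relevant function plus its imports rather than the whole file.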
When to upgrade to paid models for complex refactoring
You’ll know the moment: you hit a wall with a free model around the third iteration of “fix this architecture.” That’s usually the signal to upgrade. Complex architectural decisions or security-critical code (think authentication flows, payment processing, anything where a bug costs money or trust) warrant a paid model. The reasoning quality jump is real.
One thing I appreciate about OpenRouter is the pricing transparency. Even “free” models show cost-per-token, so you can see exactly what you’re burning. For most side projects, though, free tier models will carry you surprisingly far.
Building a Real PWA in 5 Prompts: CardioFlow Case Study
What if I told you I built a complete, installable fitness tracking PWA in under two hours using nothing but well-crafted prompts? That’s exactly what happened with CardioFlow—and the process was more straightforward than I expected.
Prompt 1: Scaffolding a PWA with Service Workers
Start with a detailed spec prompt that describes your app’s purpose, target users, and core features. I told Claude Code: “Build a fitness tracking PWA called CardioFlow for runners who want to log workouts offline. Include heart rate tracking, distance calculation, and workout history with charts.”
This initial prompt sets the entire trajectory. The model understood we needed progressive enhancement from the start—not a simple SPA that we’d retrofit later. Within minutes, I had a project structure with React, a manifest file, and basic routing.
Prompt 2: Implementing Fitness Tracking Logic and State Management
Here’s where iterative prompts that reference your codebase become essential. I followed up with: “Add workout tracking logic using localStorage for persistence. Include start/stop/pause controls, calculate pace and calories, and store each session with timestamp.”
What surprised me was how Claude maintained context across prompts—it didn’t recreate files, it built upon what existed. By the second prompt, CardioFlow had working state management and data persistence.
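Here’s a hypothetical sketch of the kind of logic that prompt produces. The field names and the fixed MET value are my assumptions for illustration, not CardioFlow’s actual code; the calorie line is the standard MET formula.

```javascript
// Summarize one workout session: pace plus an estimated calorie burn.
function summarizeWorkout({ distanceKm, durationMin, weightKg = 70 }) {
  const paceMinPerKm = durationMin / distanceKm;        // minutes per km
  const met = 8.0;                                      // rough MET value for running
  const calories = met * weightKg * (durationMin / 60); // kcal via MET formula
  return { paceMinPerKm, calories };
}

// Example: a 5 km run in 30 minutes is a 6:00/km pace.
```

A real implementation would also validate inputs and vary the MET value by intensity, which is exactly the kind of refinement a follow-up prompt handles well.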
Prompt 3: Adding Installability and Manifest Configuration
Request PWA manifest and service worker setup explicitly. I wrote: “Configure the web app manifest for installability—app name, icons, theme color, and display mode. Add a service worker that caches the app shell for offline access.”
This is where most tutorials get it wrong. They treat the manifest as an afterthought. By asking for it in the third prompt, we ensured the manifest was properly structured from the start.
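For reference, a minimal manifest of the kind that prompt produces looks like this; the names, colors, and icon paths are illustrative, not CardioFlow’s actual files:

```json
{
  "name": "CardioFlow",
  "short_name": "CardioFlow",
  "start_url": "/",
  "display": "standalone",
  "theme_color": "#e53935",
  "background_color": "#ffffff",
  "icons": [
    { "src": "/icons/icon-192.png", "sizes": "192x192", "type": "image/png" },
    { "src": "/icons/icon-512.png", "sizes": "512x512", "type": "image/png" }
  ]
}
```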
Prompt 4: Polishing Responsive UI and Offline Capabilities
Ask for offline-first architecture early to avoid retrofitting later. My fourth prompt: “Implement offline-first architecture using a service worker strategy. Cache workout data locally and sync when online. Make the UI responsive for mobile-first design.”
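One way to sketch that strategy as pure logic, separate from the service worker’s event handlers so it can be tested outside the browser (the app-shell file list is hypothetical):

```javascript
// Hypothetical app-shell list; in a real project this comes from the build.
const APP_SHELL = ["/", "/index.html", "/app.js", "/styles.css"];

// Routing decision: serve the shell cache-first so the app loads offline,
// and try the network first for everything else (e.g. workout sync calls).
function cacheStrategy(url) {
  const { pathname } = new URL(url, "https://cardioflow.example");
  return APP_SHELL.includes(pathname) ? "cache-first" : "network-first";
}
```

The service worker’s `fetch` handler then just calls this function and picks the matching cache behavior, which keeps the tricky logic in one testable place.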
The result was an app that felt native on mobile devices, something a plain web page can’t match.
Prompt 5: Deployment Checklist and Production Optimizations
Final prompts should focus on Lighthouse audits and installability requirements. I concluded with: “Run Lighthouse audit, fix any PWA compliance issues, optimize bundle size, and prepare deployment files.”
The Result
The entire CardioFlow app, with tracking, charts, and offline sync, took under two hours with AI assistance. I’ve saved this five-prompt sequence as a reusable template, and I’d recommend documenting your own prompting strategy the same way for future projects. If you’ve been prototyping apps for weeks without shipping, this approach might be your shortcut.
Frequently Asked Questions
Is Claude Code actually free with OpenRouter?
Yes, Claude Code itself is completely free to download and use — you just need an API provider. OpenRouter lets you access models without an Anthropic subscription. In my experience, you can run the full Claude Code workflow for zero cost as long as you stick to free-tier models or have OpenRouter credits.
Which AI models on OpenRouter are completely free?
Qwen 2.5 (various sizes), Llama 3 8B, and Google’s Gemma models are all free on OpenRouter. What I’ve found is that Qwen 2.5 Coder 7B specifically handles most coding tasks well — it’s been trained on code and performs surprisingly close to larger models for everyday development work.
How do I set up Claude Code with a custom API provider?
Set two environment variables: ANTHROPIC_BASE_URL should point to your provider’s endpoint (e.g., https://openrouter.ai/api/v1), and ANTHROPIC_API_KEY needs your provider’s key. After setting these, Claude Code will route all requests through your chosen provider instead of Anthropic.
What’s the best free model for coding on OpenRouter?
Qwen 2.5 Coder 7B is my go-to for free coding work — it’s specifically fine-tuned for code generation and completion. If you need longer context handling, Llama 3 70B works well but uses more tokens. For small scripts and quick edits, the smaller Qwen variants are fast and efficient.
Can I use Claude Code without paying for Anthropic subscription?
Completely — that’s the whole point of using a third-party API provider. Just configure Claude Code to use OpenRouter instead of Anthropic’s direct API, and you’re set. The trade-off is you won’t get Claude Opus/Sonnet directly, but you gain access to free alternatives that handle most coding tasks adequately.
If you found this setup useful, you can see a full app built step-by-step in the video above—start with the first prompt and work through the CardioFlow project.
Onur
AI Content Strategist & Tech Writer
Covers AI, machine learning, and enterprise technology trends.