Know your tokens before you spend them
Every LLM bills by the token, and every model uses a slightly different tokenizer. Our token counter matches each model's official tokenization: the cl100k and o200k encodings for GPT-4 and GPT-5, Llama 3's tokenizer, and the providers' own token-counting endpoints for Claude and Gemini, so the count you see is the count you'll actually pay for.
Beyond raw counts, we show context-window usage as a progress bar, estimated cost at current public pricing, and a per-section breakdown so you can see which parts of your prompt are eating the budget.
Features
- Exact token counts for GPT-4, GPT-5, Claude 3.5/4, Gemini, Llama 3
- Cost estimate at the latest public per-million pricing
- Context-window usage shown as a percentage of each model’s limit
- Per-section breakdown for Markdown and chat-format inputs
- Compare counts across models in one view
- Bulk mode: drop in a folder of files and see total tokens
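The cost and context-window math behind the features above is simple once you have a token count. A sketch with illustrative placeholder numbers (the prices and window sizes below are made up for the example, not current published pricing):

```python
# Illustrative placeholders -- NOT real pricing or real window sizes.
CONTEXT_WINDOW = {"model-a": 128_000, "model-b": 200_000}       # tokens
PRICE_PER_M_INPUT = {"model-a": 2.50, "model-b": 3.00}          # USD per 1M input tokens

def estimate(model: str, prompt_tokens: int) -> dict:
    """Return context-window usage (%) and estimated input cost (USD)."""
    window = CONTEXT_WINDOW[model]
    return {
        "tokens": prompt_tokens,
        "window_pct": round(100 * prompt_tokens / window, 1),
        "input_cost_usd": round(prompt_tokens / 1_000_000 * PRICE_PER_M_INPUT[model], 4),
    }

print(estimate("model-a", 12_000))
# -> {'tokens': 12000, 'window_pct': 9.4, 'input_cost_usd': 0.03}
```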
Use it to keep prompts under context-window caps, debug runaway costs, and size embedding batches before you call the API.
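The per-section breakdown for Markdown input can be sketched roughly as follows. This is a simplified illustration: it splits on ATX headings and uses whitespace word counts as a stand-in for a real tokenizer, so the sketch stays dependency-free.

```python
import re

def section_breakdown(markdown: str) -> dict:
    """Map each Markdown heading to a rough size for its section's body."""
    sections = {}
    current = "(preamble)"
    for line in markdown.splitlines():
        m = re.match(r"#{1,6}\s+(.*)", line)
        if m:
            current = m.group(1).strip()
            sections.setdefault(current, 0)
        else:
            # Word count here; the real tool would run the tokenizer instead.
            sections[current] = sections.get(current, 0) + len(line.split())
    return sections

doc = "# Intro\nhello world\n# Details\none two three"
print(section_breakdown(doc))
# -> {'Intro': 2, 'Details': 3}
```

Swapping the word count for a tokenizer call turns this into a true per-section token breakdown.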