Token
The small chunks of text that AI models read and generate, roughly three-quarters of a word each.
What It Is
A token is the basic unit of text that an AI model processes. Rather than reading whole words, models break text into smaller pieces called tokens. A token might be a full short word like “the,” a piece of a longer word (“unbelievable” might split into “un,” “believe,” and “able”), or even a single character. As a rough rule, one token equals about three-quarters of an English word, so 1,000 tokens is roughly 750 words. Every time you send a prompt to an AI and receive a response, both your input and the output are measured in tokens.
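The three-quarters-of-a-word rule has an equivalent character-based form: roughly four characters of English text per token. A quick back-of-the-envelope estimator might look like this (a rough heuristic only; real tokenizers split text differently and exact counts vary by model):

```python
def estimate_tokens(text: str) -> int:
    """Estimate token count using the rough rule of ~4 characters per token."""
    return max(1, round(len(text) / 4))

# 9 words, 43 characters -> roughly 11 tokens,
# consistent with the ~0.75 words-per-token rule of thumb.
sentence = "The quick brown fox jumps over the lazy dog"
print(estimate_tokens(sentence))  # 11
```

For precise counts you would use the model provider's own tokenizer, but an estimate like this is usually close enough for budgeting.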
Why It Matters
Tokens are how AI usage is priced and how limits are enforced. When a model has a 200,000-token context window, that is the total budget for your input and the model’s output combined. When you see pricing listed as “$3 per million input tokens,” that is the cost of sending text to the model. Understanding tokens helps you estimate costs, stay within context limits, and write more efficient prompts. Shorter, clearer prompts use fewer tokens and cost less.
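The arithmetic behind that pricing is simple: multiply each token count by its per-million rate. A minimal sketch, using the $3-per-million input figure from above and a hypothetical $15-per-million output rate (actual rates vary by model and provider):

```python
def estimate_cost(input_tokens: int, output_tokens: int,
                  input_price: float = 3.00,    # USD per million input tokens
                  output_price: float = 15.00   # USD per million output tokens (hypothetical)
                  ) -> float:
    """Estimate the USD cost of one request given per-million-token prices."""
    return (input_tokens * input_price + output_tokens * output_price) / 1_000_000

# A ~750-word prompt (~1,000 tokens) with a ~500-token reply:
print(f"${estimate_cost(1_000, 500):.4f}")  # $0.0105
```

Note that output tokens typically cost more than input tokens, which is one reason verbose responses add up faster than long prompts.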
In Practice
If you paste a 10-page document into Claude and ask for a summary, you are spending tokens on both the document (input) and the summary (output). API pricing dashboards show token counts so you can track spending. When building automated workflows that call AI models repeatedly, token awareness is what keeps your costs predictable.
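For repeated calls, the usual pattern is to accumulate the per-call token counts against a total budget. A minimal sketch, where the per-call numbers are stand-ins for the usage figures an API response would actually report:

```python
class TokenBudget:
    """Track cumulative token usage across repeated model calls."""

    def __init__(self, limit: int):
        self.limit = limit  # total tokens you are willing to spend
        self.used = 0

    def record(self, input_tokens: int, output_tokens: int) -> None:
        """Add one call's input and output tokens to the running total."""
        self.used += input_tokens + output_tokens

    def remaining(self) -> int:
        return self.limit - self.used

budget = TokenBudget(limit=100_000)
for _ in range(3):
    # Hypothetical usage numbers; in practice these come from the API response.
    budget.record(input_tokens=1_200, output_tokens=400)
print(budget.remaining())  # 95200
```

Checking `remaining()` before each call is a simple way to keep an automated workflow from silently overrunning its budget.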