Embedded AI
The pattern in which AI capabilities move into the response surface of existing productivity tools, rather than living as a standalone application the user switches to.
What It Is
Embedded AI is what happens when the model moves inside the tool instead of beside it.
For most of AI’s first public decade, the pattern was: you work in your tool, you notice a task the model could help with, you open a new window, paste the context, get an answer, and return to your tool. The friction was invisible because it was constant. Every AI interaction required a context-crossing: leaving the surface where the work lived, carrying the relevant information manually, and re-importing the result.
Embedded AI closes that gap by making the model a resident of the existing workflow surface. The Excel sidebar is not a chat window adjacent to your spreadsheet. It is a participant inside the spreadsheet, with direct read and write access to the cells where your work lives. The Adobe Acrobat productivity agent does not ask you to export your PDF to a chat interface. It works on the document in place. Microsoft 365 Copilot does not open a new application. It surfaces in the tool picker of the applications you already have open.
This is distinct from a browser extension that overlays a chat bubble on top of a web page, and distinct from an API that a developer calls from their own code. Embedded AI is when the vendor of the primary tool ships the model as a native capability of that tool’s interface, with access to the tool’s own data structures, file formats, and action primitives.
The week of May 7-8, 2026 is a useful timestamp for when this pattern became industry-standard rather than experimental: GPT-5.5 Instant in Microsoft 365 Copilot, ChatGPT for Excel and Google Sheets at general availability on all plans, Adobe’s productivity agent inside Acrobat, and OpenAI’s Realtime API entering GA for voice channels all shipped within 48 hours of each other. Different companies, different tools, the same structural move.
How It Actually Works
Embedded AI in a productivity tool typically involves three layers working together.
The model layer is the large language model or specialized model doing the reasoning: generating text, interpreting instructions, running analysis, producing structured outputs. This is the part most users think of as “the AI.”
The integration layer connects the model to the tool’s native data structures. In a spreadsheet, this means the model can read cell values, understand formula relationships, and write changes back to specific cells. In a document editor, it means the model can access the full text, structure, headings, and metadata. In a voice channel, it means the model is on the live audio stream rather than receiving a transcript after the fact. This integration layer is where the real engineering work lives, and it is why embedding is harder than building a standalone assistant.
The action primitives layer is what the model is allowed to do inside the tool: read only, suggest edits for user approval, or execute changes directly. The scope of this layer defines how agentic the embed actually is. ChatGPT for Excel can edit cells from plain-language instructions, which is a meaningful action scope. Adobe’s PDF Spaces allows custom sub-agents, which pushes further toward autonomous operation.
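The three scopes above can be made concrete with a small sketch. This is not any vendor's actual implementation; the names (`ActionScope`, `apply_edit`) and the dict-as-spreadsheet are hypothetical, but the gating logic is the shape the text describes: read-only embeds refuse writes, suggest-mode embeds require a user approval step, execute-mode embeds write directly.

```python
from enum import Enum

class ActionScope(Enum):
    READ_ONLY = "read_only"   # model may inspect data but not change it
    SUGGEST = "suggest"       # model proposes edits; user approves each one
    EXECUTE = "execute"       # model writes changes directly

def apply_edit(scope, cell, value, sheet, approve=None):
    """Gate a model-proposed cell edit on the granted action scope."""
    if scope is ActionScope.READ_ONLY:
        return False  # edit refused: this embed can only read
    if scope is ActionScope.SUGGEST:
        if approve is None or not approve(cell, value):
            return False  # no approval hook, or the user declined
    sheet[cell] = value  # EXECUTE, or SUGGEST with user approval
    return True

sheet = {"B2": "1200"}
apply_edit(ActionScope.READ_ONLY, "B2", "1300", sheet)  # refused, sheet unchanged
apply_edit(ActionScope.SUGGEST, "B2", "1300", sheet,
           approve=lambda cell, value: True)            # approved, write applied
```

The design point is that the scope is a property the host tool grants, not something the model chooses; widening it is a deliberate decision by the vendor or the operator.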
MCP (Model Context Protocol) is becoming the standard plumbing for the integration layer. OpenAI’s announcement of ChatGPT for Excel specifically cited an MCP-powered app ecosystem for the financial data integrations (Moody’s, Dow Jones Factiva, MSCI). The protocol gives models a standardized way to describe tools and data sources, which lowers the cost for productivity tool vendors to ship embedded integrations.
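The standardization MCP provides is easiest to see in the shape of a tool descriptor: a name, a human-readable description, and a JSON Schema for the inputs, which the host advertises to the model so it can issue structured calls. The descriptor below is illustrative only; the tool name and fields are hypothetical, loosely echoing the reference-data integrations mentioned above, not any real Moody's or Factiva API.

```python
import json

# Illustrative MCP-style tool descriptor. The general shape (name,
# description, inputSchema as JSON Schema) follows the protocol; the
# specific tool and its fields are invented for this example.
lookup_tool = {
    "name": "lookup_reference_data",
    "description": "Fetch a reference datapoint for a ticker symbol.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "ticker": {"type": "string"},
            "field": {"type": "string"},
        },
        "required": ["ticker"],
    },
}

# The host tool advertises this descriptor; the model emits calls that
# conform to the declared schema, which the host validates and executes.
print(json.dumps(lookup_tool, indent=2))
```

Because every data source is described the same way, a spreadsheet vendor wires up the protocol once and gains the whole ecosystem of descriptors, which is the cost reduction the paragraph above points at.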
Why It Matters Right Now
The standalone AI assistant model had a structural problem: the gap between where the work lived and where the model lived. Every crossing of that gap cost attention, context fidelity, and time. Busy professionals who needed the most help were also the ones least likely to maintain the discipline of the context-switch.
Embedded AI eliminates the discipline requirement. If the model is already in the spreadsheet, the barrier to using it is the same as the barrier to using any other spreadsheet feature. That changes adoption curves dramatically. It also changes what kinds of tasks the model gets used for: quick judgment calls, single-cell edits, a fast check on a formula, a rapid summary of a section. Tasks that would never justify the friction of a context-switch.
For operators and knowledge workers, the consequence is that AI assistance becomes ambient rather than deliberate. This is simultaneously the value proposition and the concern worth sitting with.
A Concrete Operator Scenario
A sales operations analyst at a mid-size company spends three hours every Monday building a pipeline review deck from four spreadsheet exports. The work is judgment-light but time-heavy: copy values, format tables, write the variance commentary, flag the at-risk deals.
With embedded AI in Excel, the workflow changes: the model reads the exports in place, suggests the summary commentary, flags the outliers, and drafts the variance sentences. The analyst reviews and approves. What took three hours takes forty minutes.
The decision the analyst has to make is not whether to use it. The decision is: which forty minutes did I just create? Do I spend it on the higher-judgment work the model cannot do, or do I absorb it into a busier schedule without changing how I work? That decision is invisible if it is never made explicitly. Embedded AI makes the decision urgent precisely because the friction that used to defer it is gone.
Common Misconceptions
The most common mistake is treating embedded AI as a better autocomplete. Autocomplete suggests the next word. Embedded AI can rewrite the financial model, run the scenario, edit thirty cells based on a plain-language instruction, and produce a presentation from the document you are reading. The action scope is categorically different.
A related mistake is assuming embedded AI requires technical setup. This week’s announcements shipped at general availability on all plans. ChatGPT for Excel is not a developer feature; it is a spreadsheet feature. The operator audience is every knowledge worker with an Excel or Google Sheets subscription.
The third mistake is assuming the context window constraint that limited early AI assistants still applies to embedded AI. GPT-Realtime-2 ships with a 128k context window. For voice use cases, that is enough for a full, long conversation with tool calls and memory. The capacity constraint that defined the first generation of AI tools is largely gone.
How TWO Uses It
TWO’s editorial lens on embedded AI is not neutral. The pattern is worth understanding precisely, not celebrating broadly.
The honest operator question about embedded AI is: what did you lose visibility into when you stopped doing it? When ChatGPT for Excel runs the variance commentary, the analyst does not do the work of noticing what the numbers actually mean. That noticing is where judgment compounds over time. If the model handles the noticing, the analyst’s judgment about pipeline risk either grows more efficient (because it focuses on harder calls) or atrophies (because it never needs to engage with the raw numbers).
Scott’s working heuristic for embedded AI at TWO: use it to eliminate the mechanical, not to skip the analytical. The financial data integrations in ChatGPT for Excel (Moody’s, Factiva, MSCI, Third Bridge) are a good example of the mechanical: pulling reference data that would otherwise require a tab-switch and a manual search. That is a legitimate friction elimination. Using the model to generate the conclusion you would have drawn from the data is a different choice, and worth making consciously.
The tool-use framing from AI infrastructure is useful here: the model is a tool with action primitives. The operator decides which primitives to grant and which to retain. Embedded AI makes that decision urgent precisely because it becomes invisible if you never make it explicitly.