The Wise Operator
Home

Tag

AI

38 entries tagged AI · 38 terms.


Dictionary

Agent Registry

A centralized directory that catalogs every AI agent operating within a system, assigns each one a verifiable identity, and tracks what each agent is authorized to do.

Agentic Coding

A style of software development where an AI agent writes, edits, and manages code semi-autonomously while a human operator guides the direction.

Agentic Commerce

The pattern where an AI agent, not a human, executes the search, decision, checkout, and after-sales steps of a purchase on a user's behalf.

AI Agent

An AI system that can take actions on its own, using tools and making decisions across multiple steps to accomplish a goal.

AI Pair Programming

Working alongside an AI assistant that helps you write, review, and debug code in real time as a collaborative partner.

Chain of Thought

A prompting technique that asks the AI to show its reasoning step by step, which leads to more accurate and reliable answers.

Claude

An AI assistant built by Anthropic, known for strong reasoning, long context windows, and a focus on safety.

Computer-Use Model

A foundation model purpose-built to operate software the way a person does, by clicking, typing, navigating menus, and manipulating files, rather than producing chat or code.

Content Pipeline

An automated system that researches, creates, reviews, and publishes content through a series of connected steps with minimal manual intervention.

Context Engineering

The practice of designing the entire information ecosystem around an AI model (what it sees, what it remembers, what tools it can use) to produce consistently better results.

Context Window

How much an AI can 'remember' in a single conversation. Think of it as the AI's working memory.

Conversational Advertising

The practice of placing paid sponsored content inside the response surface of an AI chat product, where ads appear within the model's answer flow rather than above search results or alongside page content.

Default Model

The model that a chat product silently serves to users who have not specified one, rotating beneath them without notice as providers update their infrastructure.

Drift

When an AI gradually loses track of your project's goals and starts making suggestions that don't fit anymore, usually because it's forgotten earlier context.

Embedded AI

The practice of AI capabilities moving into the response surface of existing productivity tools rather than living as a standalone application the user switches to.

Embeddings

A way of converting text into numbers so that AI can measure how similar or related different pieces of content are.

Fine-Tuning

The process of taking a pre-trained AI model and training it further on specific data to make it better at a particular task.

Hallucination

When an AI model confidently generates information that is incorrect, fabricated, or nonsensical.

Inference

The process of an AI model generating a response to your input, as opposed to the training phase when it originally learned.

LangChain

An open-source framework that helps developers build applications powered by language models, especially multi-step AI workflows.

Large Language Model (LLM)

An AI system trained on massive amounts of text that can understand and generate human language.

Managed Agent

An AI agent hosted by a platform (most commonly Anthropic's infrastructure) rather than running inside a personal terminal session. A Managed Agent persists beyond the operator's session, exposes a stable invocation surface (an API call or a button in an internal app), and reaches external systems through a governed tool gateway.

Model Context Protocol (MCP)

An open standard that lets AI models connect to external tools and data sources through a unified interface.

Model Routing

The practice of sending different tasks to different AI models based on complexity, cost, and speed requirements.

OpenAI

The company behind ChatGPT and the GPT series of AI models, one of the leading providers of large language models.

Plan Mode

A structured scoping process in AI coding tools that interviews you about your project before any code gets written, producing a clear plan that prevents wasted effort.

Project Memory

Systems that help AI tools remember past decisions and project context across sessions, surviving context window resets so you don't have to re-explain your project every time.

Prompt Engineering

The skill of writing clear, structured instructions to get better and more consistent results from AI models.

Retrieval-Augmented Generation (RAG)

A technique that feeds relevant documents to an AI model at query time so it can answer questions using your actual data instead of guessing.

Skill

A saved instruction file that an AI tool reads each time you invoke it, producing consistent output across runs without requiring the operator to remember and retype a long prompt. In Claude Code, Skills live at ~/.claude/skills/<name>/SKILL.md and are invoked by typing /<name>.

Structured Output

AI responses formatted in a predictable, machine-readable structure like JSON, so other software can reliably process the results.

System Prompt

Hidden instructions given to an AI model that define its personality, rules, and behavior before the user ever sends a message.

Temperature

An AI setting that controls how creative or predictable the model's responses are, on a scale from 0 to 1 (or higher).

Token

The small chunks of text that AI models read and generate, roughly three-quarters of a word each.

Tool Use (AI Tool Calling)

The ability of an AI model to call external functions and services during a conversation, going beyond text generation to take real actions.

Training Data

The massive collection of text, code, and other content that an AI model learns from before it can generate responses.

Vibe Coding

A casual approach to building software with AI where you describe what you want in plain language and let the AI handle the implementation details.

Wafer-Scale Engine

A processor built from an entire silicon wafer rather than smaller chips diced from one, integrating dramatically more on-chip memory and compute cores to eliminate the chip-to-chip communication bottleneck that limits conventional GPU clusters at inference time.