Our infra team breaks down the concept of "context engineering":

@martin_casado: "If you are going to call a model, you have to know what to put in the context… At some point you're probably going to use traditional computer science."

@JenniferHli: "What's the new form factor of infra that needs to become part of this context engineering?... How do you have agents, tools, or infrastructure that will provide discovery and guarantees of observability of these systems as well?"

"New infrastructure pieces create new patterns and methods of software and building systems. This is a great example of it emerging before our eyes."
Andrej Karpathy (25.6.2025)
+1 for "context engineering" over "prompt engineering".

People associate prompts with short task descriptions you'd give an LLM in your day-to-day use. But in every industrial-strength LLM app, context engineering is the delicate art and science of filling the context window with just the right information for the next step.

Science because doing this right involves task descriptions and explanations, few-shot examples, RAG, related (possibly multimodal) data, tools, state and history, compacting... Too little or of the wrong form and the LLM doesn't have the right context for optimal performance. Too much or too irrelevant and the LLM costs might go up and performance might come down. Doing this well is highly non-trivial. And art because of the guiding intuition around LLM psychology of people spirits.

On top of context engineering itself, an LLM app has to:

- break up problems just right into control flows
- pack the context windows just right
- dispatch calls to LLMs of the right kind and capability
- handle generation-verification UIUX flows
- a lot more - guardrails, security, evals, parallelism, prefetching, ...

So context engineering is just one small piece of an emerging thick layer of non-trivial software that coordinates individual LLM calls (and a lot more) into full LLM apps. The term "ChatGPT wrapper" is tired and really, really wrong.
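The "filling the context window with just the right information" idea can be made concrete with a small sketch. Assuming a fixed token budget and a priority order (task first, then examples, retrieved documents, and history), a packer greedily keeps what fits and drops the rest. Everything here (`build_context`, `rough_tokens`, the 4-chars-per-token estimate) is a hypothetical illustration, not any real library's API:

```python
def rough_tokens(text: str) -> int:
    """Crude token estimate: roughly 4 characters per token (an assumption)."""
    return max(1, len(text) // 4)

def build_context(task: str, few_shot: list[str], retrieved: list[str],
                  history: list[str], budget: int = 2048) -> str:
    """Pack sections into the context window in priority order.

    Items that would push the total past the budget are dropped,
    which is a simple stand-in for compaction.
    """
    sections = [
        ("task", [task]),
        ("examples", few_shot),
        ("retrieved", retrieved),
        ("history", history),
    ]
    used = 0
    parts = []
    for name, items in sections:
        kept = []
        for item in items:
            cost = rough_tokens(item)
            if used + cost > budget:
                break  # over budget: drop the rest of this section
            used += cost
            kept.append(item)
        if kept:
            parts.append(f"## {name}\n" + "\n".join(kept))
    return "\n\n".join(parts)
```

With a generous budget everything is included; shrinking the budget silently drops the lowest-priority sections first, which is exactly the "too little vs. too much" trade-off the post describes. A real system would use an actual tokenizer and a relevance ranking instead of insertion order.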