Every so often a unique project spawns which gets to run its own race. AI, for the most part, has been nothing other than ChatGPT-style terminals and creative image/video generation. We’ve been hearing for months that we’re on the cusp of everyone losing their jobs to AI. Yes, it’s made people 10x more productive, but we haven’t fully replaced anyone in the workforce. Why?

The dominant AI assistants today, from chatbots in a browser to experimental “agent” frameworks, are strong in conversation but structurally limited in execution. They typically rely on a browser or a simple scripting environment to perform tasks. That works for fetching information or basic web automation, but these agents struggle with complex, multi-step processes and often break when things deviate from their confined path.

Current AI agents fail because they lack persistent memory and fault tolerance: when faced with unexpected errors, they can’t recover or adapt, often stalling or looping indefinitely. Most operate in limited browser-based environments and can’t access the full range of enterprise software, leaving routine work beyond their reach. That is why we haven’t seen AI replace mundane company roles like customer support and administration. Not for lack of capability in the AI models themselves, but because the frameworks around them aren’t reliable enough for critical workflows.

So what’s needed? A reimagined system architecture: one that addresses fault tolerance, memory, access, isolation, and efficiency in a single framework. Rather than stalling at the first unexpected input, agents should catch errors, adapt, and retry different methods, much like humans do when things go wrong. To scale AI into real workflows, agents need persistent memory and task tracking to operate reliably over long durations. They also require full ecosystem access, beyond browser tools, so they can use the same software humans do, including desktop applications.
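As a rough illustration of that catch-adapt-retry loop, here is a minimal Python sketch. The function name, the strategy list, and the callable interface are all invented for illustration; they are not part of any framework mentioned in this post.

```python
import time

def run_with_recovery(task, strategies, max_attempts=3, backoff=0.0):
    """Try each strategy in turn, retrying on failure instead of stalling.

    `task` is a callable that takes a strategy; `strategies` is an ordered
    list of fallback approaches. Instead of looping indefinitely on one
    error, the agent records the failure, backs off, and adapts.
    """
    errors = []
    for strategy in strategies:
        for attempt in range(1, max_attempts + 1):
            try:
                return task(strategy)
            except Exception as exc:  # catch the error, then adapt and retry
                errors.append((strategy, attempt, str(exc)))
                time.sleep(backoff * attempt)
    # Only give up once every strategy is exhausted, with a full error trail.
    raise RuntimeError(f"all strategies exhausted: {errors}")
```

The point is the shape, not the code: a dependable agent runtime keeps a record of what failed and falls through to alternative methods before surfacing an error.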
Without secure isolation, agents can't operate safely in dedicated environments, making large-scale deployment risky due to potential cross-system interference. And if their runtime is to stay consistent and efficient, they also need smart resource management that treats compute like a live, functioning body.

For those who connected the dots: @Codecopenflow's recent Fabric release brings all of this together, giving AI agents reliable, fully dedicated operating systems (OS) that combine the cognitive power of advanced models with the infrastructure they need to function like dependable digital workers. Fabric on its own could be a completely independent licensed software product. It transforms agents from browser-bound scripts into autonomous operators with full OS-level access.

Much like a DEX aggregator routes you the most efficient price, Fabric is the routing layer that serves Codec's deep-level architecture. You list your CPU, GPU, and memory needs along with any region preferences, and Fabric finds the most cost-effective resources, whether servers from clouds like AWS/Google Cloud or GPUs from networks like Render/io.net.

Codec provides clean SDKs and an API for full control of these AI operators. A company can integrate Codec agents into its existing software pipeline (for example, spin up an agent to handle a user request, then spin it down) without needing to reinvent its infrastructure. In customer support, agents can manage entire workflows: query resolution, CRM updates, refunds, reducing labor costs by up to 90% while improving consistency and uptime. For business operations, Codec automates repetitive administrative processes like invoice handling, HR updates, and insurance claims, especially in high-volume sectors like finance and healthcare.

By giving each AI operator a fully isolated, multi-app environment, Codec sidesteps the critical issues of reliability and integration that previous frameworks couldn't address.
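To make the aggregator analogy concrete, here is a minimal sketch of price-based routing over provider offers. Everything here (the `Offer` fields, the `route` function, the provider names) is invented for illustration; Fabric's actual selection logic is not public.

```python
from dataclasses import dataclass

@dataclass
class Offer:
    provider: str      # e.g. "aws", "render" (illustrative names only)
    cpus: int
    gpus: int
    memory_gb: int
    region: str
    hourly_usd: float

def route(offers, cpus, gpus, memory_gb, regions=None):
    """Pick the cheapest offer that satisfies the stated requirements,
    the way a DEX aggregator routes an order to the best price."""
    eligible = [
        o for o in offers
        if o.cpus >= cpus and o.gpus >= gpus and o.memory_gb >= memory_gb
        and (regions is None or o.region in regions)
    ]
    if not eligible:
        raise LookupError("no provider satisfies the request")
    return min(eligible, key=lambda o: o.hourly_usd)
```

Given a list of offers from different clouds and GPU networks, `route(offers, cpus=8, gpus=1, memory_gb=32, regions=["us-east"])` would return the cheapest eligible machine, regardless of which provider it comes from.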
Essentially, this turns cloud computing infrastructure into a flexible assembly line for AI workers. Each "worker" is given the right tools (apps, OS, data access) and a safety harness (isolation plus fault handling) to do its job. Every improvement in AI models (GPT-5 and beyond) only increases the value of Codec's platform, because better "brains" can be plugged into this strong "body" to accomplish even more complex jobs. Codec is model-agnostic (it works with any AI model), so it stands to benefit from general AI progress without being tied to a single provider's fate.

We are at an inflection point similar to the early days of cloud computing. Just as the companies that provided the platforms for cloud (virtualization, AWS's infrastructure, etc.) became indispensable to enterprise IT, a company that provides the go-to platform for AI agents to operate will capture a huge market.

OpenAI has already released a fully agentic cloud coding tool called Codex. A mini local version of Codex can run on your own computer, but more importantly, Codex's primary model lives in the cloud with its own computer. An OpenAI co-founder believes the most successful companies in the future will merge these two types of architecture. Sounds familiar.

What's next? Instead of telling you what's next, maybe it's better I point to what we haven't seen yet:

- No confirmed token utility
- No incentives
- No core roadmap
- No demos
- No marketplace
- Minimal partnerships

Consider how much is in the pipeline: new websites, updated docs, deeper liquidity pools, community campaigns/marketing, and robotics. Codec hasn't revealed many of its cards yet. Sure, there may be more ready-made browser-based products currently on the market, but how long until they're obsolete? This is an investment in the direction of AI and in the primary architecture that will replace human workforces.

Codec coded.
Trissy
Trissy, 13.5.2025
Virtual Environments for Operator Agents: $CODEC

My core thesis around the explosion of AI has always centered on the rise of operator agents. But for these agents to succeed, they require deep system access, effectively granting them control over your personal computer and sensitive data, which introduces serious security concerns.

We've already seen how companies like OpenAI and other tech giants handle user data. While most people don't care, the individuals who stand to benefit most from operator agents, the top 1%, absolutely do. Personally, there's zero chance I'm giving a company like OpenAI full access to my machine, even if it means a 10× boost in productivity.

So why Codec? Codec's architecture is centered on launching isolated, on-demand "cloud desktops" for AI agents. At its core is a Kubernetes-based orchestration service (codenamed Captain) that provisions lightweight virtual machines (VMs) inside Kubernetes pods. Each agent gets its own OS-level isolated environment (a full Linux OS instance) where it can run applications, browsers, or any code, completely sandboxed from other agents and the host. Kubernetes handles scheduling, auto-scaling, and self-healing of these agent pods, ensuring reliability and the ability to spin many agent instances up or down as load demands.

Trusted Execution Environments (TEEs) are used to secure these VMs, meaning the agent's machine can be cryptographically isolated, with its memory and execution protected from the host OS or cloud provider. This is crucial for sensitive tasks: for example, a VM running in an enclave could hold API keys or crypto wallet secrets securely.

When an AI agent (an LLM-based "brain") needs to perform actions, it sends API requests to the Captain service, which then launches or manages the agent's VM pod. The workflow: the agent requests a machine, and Captain (through Kubernetes) allocates a pod and attaches a persistent volume for the VM's disk.
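The request-a-machine step might look something like the sketch below. Every field and name here is a guess for illustration, since Captain's API is not public; the sketch only shows the shape of information an agent would plausibly send.

```python
import json

def provision_request(agent_id, image="ubuntu-22.04", cpus=2, memory_gb=4,
                      disk_gb=20, tee=True):
    """Build the kind of JSON body an agent might POST to Captain
    to request a VM pod (all names hypothetical)."""
    return {
        "agent_id": agent_id,
        "image": image,
        "resources": {"cpus": cpus, "memory_gb": memory_gb},
        # Captain attaches a persistent volume so the VM's disk survives
        # pod restarts and can be snapshotted for later restoration.
        "persistent_volume": {"size_gb": disk_gb},
        # Request a TEE-backed VM so memory and execution are shielded
        # from the host OS and cloud provider.
        "tee": tee,
    }

body = provision_request("agent-42")
print(json.dumps(body, indent=2))
```

Captain would then translate a request like this into Kubernetes objects: a pod for the VM, a PersistentVolumeClaim for its disk, and scheduling constraints for the requested resources.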
The agent can then connect to its VM (via a secure channel or streaming interface) to issue commands. Captain exposes endpoints for the agent to execute shell commands, upload/download files, retrieve logs, and even snapshot the VM for later restoration. This design gives the agent a full operating system to work in, but with controlled, audited access. Because it's built on Kubernetes, Codec can auto-scale horizontally: if 100 agents need environments, it can schedule 100 pods across the cluster and handle failures by restarting pods.

The agent's VM can be equipped with various MCP servers (like a "USB port" for AI). For example, Codec's Conductor module is a container that runs a Chrome browser along with Microsoft's Playwright MCP server for browser control. This allows an AI agent to open web pages, click links, fill forms, and scrape content via standard MCP calls, as if it were a human controlling the browser. Other MCP integrations could include a filesystem/terminal MCP (to let an agent run CLI commands securely) or application-specific MCPs (for cloud APIs, databases, etc.). Essentially, Codec provides the infrastructure "wrappers" (VMs, enclaves, networking) so that high-level agent plans can safely be executed on real software and networks.

Use Cases

Wallet Automation: Codec can embed wallets or keys inside a TEE-protected VM, allowing an AI agent to interact with blockchain networks (trade on DeFi, manage crypto assets) without exposing secret keys. This architecture enables onchain financial agents that execute real transactions securely, something that would be very dangerous in a typical agent setup. The platform's tagline explicitly lists support for "wallets" as a key capability. An agent could, for instance, run a CLI for an Ethereum wallet inside its enclave, sign transactions, and send them, with the assurance that if the agent misbehaves, it's confined to its VM and the keys never leave the TEE.
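The key-confinement idea can be shown with a toy Python analogy: the caller can request signatures but can never read the key. A real TEE enforces this boundary in hardware (and a real wallet would use asymmetric signatures rather than the HMAC stand-in below); this class only mimics the interface.

```python
import hmac
import hashlib

class EnclaveSigner:
    """Toy analogy for TEE-confined keys: the secret lives only inside
    this object, and callers get signatures, never the key itself.
    Illustrative only; not how Codec's enclaves are implemented."""

    def __init__(self, secret: bytes):
        self.__secret = secret  # name-mangled; never returned to callers

    def sign(self, message: bytes) -> str:
        # The secret is used internally to produce a signature, but the
        # public surface of the object exposes only this method.
        return hmac.new(self.__secret, message, hashlib.sha256).hexdigest()
```

An agent holding an `EnclaveSigner` can authorize transactions, but even a misbehaving agent has no code path that reveals the key, which is the property the TEE provides at the hardware level.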
Browser and Web Automation: CodecFlow agents can control full web browsers in their VMs. The Conductor example demonstrates an agent launching Chrome and streaming its screen to Twitch in real time. Through the Playwright MCP, the agent can navigate websites, click buttons, and scrape data just like a human user. This is ideal for tasks like web scraping behind logins, automated web transactions, or testing web apps. Traditional frameworks usually rely on API calls or simple headless browser scripts; in contrast, CodecFlow can run a real browser with a visible UI, making it easier to handle complex web applications (e.g. those with heavy JavaScript or CAPTCHA challenges) under AI control.

Real-World GUI Automation (Legacy Systems): Because each agent has an actual desktop OS, it can automate legacy GUI applications or remote desktop sessions, essentially functioning like robotic process automation (RPA) but driven by AI. For example, an agent could open an Excel spreadsheet in its Windows VM or interface with an old terminal application that has no API. Codec's site explicitly mentions enabling "legacy automation." This opens the door to using AI to operate software that isn't accessible via modern APIs, a task that would be very hacky or unsafe without a contained environment. The included noVNC integration suggests agents can be observed or controlled via VNC, which is useful for monitoring an AI driving a GUI.

Simulating SaaS Workflows: Companies often have complex processes that span multiple SaaS applications or legacy systems. For example, an employee might take data from Salesforce, combine it with data from an internal ERP, then email a summary to a client. Codec can enable an AI agent to perform this entire sequence by actually logging into these apps through a browser or client software in its VM, much like a human would. This is like RPA, but powered by an LLM that can make decisions and handle variability.
Importantly, credentials to these apps can be provided to the VM securely (and even sealed in a TEE), so the agent can use them without ever "seeing" plaintext credentials or exposing them externally. This could accelerate automation of routine back-office tasks while satisfying IT that each agent runs with least privilege and full auditability (since every action in the VM can be logged or recorded).

Roadmap

- Launch public demo at the end of the month
- Feature comparison with other similar platforms (no web3 competitor)
- TAO Integration
- Large Gaming Partnership

In terms of originality, Codec is built on a foundation of existing technologies but integrates them in a novel way for AI agent usage. The idea of isolated execution environments is not new (containers, VMs, and TEEs are standard in cloud computing), but applying them to autonomous AI agents behind a seamless API layer (MCP) is extremely novel. The platform leverages open standards and tools wherever possible: it uses MCP servers like Microsoft's Playwright for browser control instead of reinventing that wheel, and it plans to support AWS's Firecracker micro-VMs for faster virtualization. It has also forked existing solutions like noVNC for streaming desktops. The project stands on the foundations of proven tech (Kubernetes, enclave hardware, open-source libraries) and focuses its original development on glue logic and orchestration; the "secret sauce" is how it all works together. The combination of open-source components and an upcoming cloud service (hinted at by the mention of $CODEC token utility and public product access) means Codec will soon be accessible in multiple forms, both as a service and self-hosted.

Team

Moyai: 15+ years of dev experience, currently leading AI development at Elixir Games.
lil'km: 5+ years as an AI developer, currently working with HuggingFace on the LeRobot project.
HuggingFace is a major AI company with a serious robotics effort (LeRobot), and Moyai works as head of AI at Elixir Games (backed by Square Enix and the Solana Foundation). I've personally video-called the entire team and really like the energy they bring. The friend who put them on my radar also met them all at Token2049 and only had good things to say.

Final Thoughts

There's still a lot left to cover, which I'll save for future updates and posts in my Telegram channel. I've long believed cloud infrastructure is the future for operator agents. I've always respected what Nuit is building, but Codec is the first project that's shown me the full-stack conviction I was looking for.

The team are clearly top-tier engineers. They've openly said marketing isn't their strength, which is likely why this has flown under the radar. I'll be working closely with them to help shape a GTM strategy that actually reflects the depth of what they're building.

With a $4M market cap and this level of infrastructure, it feels massively underpriced. If they can deliver a usable product, I think it could easily mark the beginning of the next AI infra cycle. As always, there's risk, and while I've vetted the team in stealth over the past few weeks, no project is ever completely rug-proof.

Price targets? A lot higher.