Trissy
One final bull run
Protocols and teams should stop paying KOLs for content.
The only time teams should be handing out tokens is to KOLs or community members who are creating high value for their project and returning results.
Your product or narrative should be interesting enough that people with influence naturally want to buy it and write about it.
These are the ones you should be allocating tokens to, not doing 3 paid threads for ambassadors who bot their following.
“What if I can’t get anyone interested in my product to tweet?” Then keep building, you clearly haven’t found the right niche or networked enough if you can’t build a few core supporters.
My message to the builders: build something cool enough that people naturally want to buy and write about, give tokens to the biggest supporters who align with your vision. Since:
1. They’re going to be much more likely to work overtime and go above and beyond to help you succeed since you demonstrated strong moral behaviour (a rarity in this space)
2. The ones you pay for a certain amount of posts will flake as soon as their deal is up and dump the tokens
I don’t do paid promotions of any sort but I’m obviously not going to turn down free tokens for a project I’m extremely bullish on with no strings attached. The best writers can’t be bought and will be turned away if you try to.
Play the long game and don’t take shortcuts, it’ll reflect in your actions across every vertical of the business and smart traders can smell it from a mile away.
KOL campaigns are dead; 99.9% of marketing agencies are a waste of money and will be -EV for your business.
The only way to penetrate this market is having crypto native team members who are willing to get their hands dirty.
OpenAI just confirmed my north star thesis for AI today by releasing their operator agent.
Not only was this my guiding thesis for $CODEC, but every other AI investment I made, including those from earlier in the year during AI mania.
There's been a lot of discussion around Codec with regards to robotics. While that vertical will have its own narrative very soon, the underlying reason I was so bullish on Codec from day 1 is how its architecture powers operator agents.
People still underestimate how much market share is at stake by building software that runs autonomously, outperforming human workers without the need for constant prompts or oversight.
I've seen a lot of comparisons to $NUIT. Firstly I want to say I'm a big fan of what Nuit is building and wish them nothing but success. If you type "nuit" into my telegram, you'll see that back in April I said that if I had to hold one coin for multiple months it would have been Nuit, due to my operator thesis.
Nuit was the most promising operator project on paper, but after extensive research, I found their architecture lacked the depth needed to justify a major investment or putting my reputation behind it.
With this in mind, I was already aware of the architectural gaps in existing operator agent teams and actively searching for a project that addressed them. Shortly after, Codec appeared (thanks to @0xdetweiler insisting I look deeper into them), and this is the difference between the two:
$CODEC vs $NUIT
Codec's architecture is built across three layers (Machine, System, and Intelligence) that separate infrastructure, environment interface, and AI logic. Each Operator agent in Codec runs in its own isolated VM or container, allowing near-native performance and fault isolation. This layered design means components can scale or evolve independently without breaking the system.
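To make that layering concrete, here's a minimal sketch of what the separation of concerns could look like. Every name in it (MachineLayer, SystemLayer, IntelligenceLayer, Observation) is a hypothetical illustration, not Codec's actual code or API.

```python
# Hypothetical sketch of a three-layer operator stack. Each layer owns one
# concern, so any layer can scale or evolve independently of the others.
from dataclasses import dataclass, field


@dataclass
class Observation:
    pixels: bytes                              # raw frame streamed from the agent's isolated VM/container
    metadata: dict = field(default_factory=dict)


class MachineLayer:
    """Infrastructure: owns the isolated VM/container and streams its screen."""
    def capture(self) -> Observation:
        return Observation(pixels=b"")         # stub frame


class SystemLayer:
    """Environment interface: turns abstract actions into keyboard/mouse events."""
    def execute(self, action: dict) -> None:
        print(f"executing {action}")


class IntelligenceLayer:
    """AI logic: a VLA model mapping (observation, goal) -> next action."""
    def decide(self, obs: Observation, goal: str) -> dict:
        return {"type": "click", "target": "OK"}   # stubbed decision
```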
Nuit's architecture takes a different path by being more monolithic. Their stack revolves around a specialized web browser agent that combines parsing, AI reasoning, and action. This means they deeply parse web pages into structured data for the AI to consume and rely on cloud processing for heavy AI tasks.
Codec's approach of embedding a lightweight Vision-Language-Action (VLA) model within each agent means it can run fully locally. It doesn't require constant pinging back to the cloud for instructions, which cuts out latency and avoids dependency on uptime and bandwidth.
Nuit's agent processes tasks by first converting web pages into a semantic format and then using an LLM brain to figure out what to do, which improves over time with reinforcement learning. While effective for web automation, this flow depends on heavy cloud-side AI processing and predefined page structures. Codec's local, on-device intelligence means decisions happen closer to the data, reducing overhead and making the system more robust to unexpected changes (no fragile scripts or DOM assumptions).
Codec's operators follow a continuous perceive–think–act loop. The machine layer streams the environment (e.g. a live app or robot feed) to the intelligence layer via the system layer's optimized channels, giving the AI "eyes" on the current state. The agent's VLA model then interprets the visuals and instructions together to decide on an action, which the system layer executes through keyboard/mouse events or robot control. This integrated loop means it adapts to live events; even if the UI shifts around, the flow doesn't break.
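Building on the same hypothetical sketch, the loop that ties the three layers together could be as simple as:

```python
def run_operator(machine: MachineLayer, system: SystemLayer,
                 brain: IntelligenceLayer, goal: str, max_steps: int = 100) -> None:
    # Continuous perceive -> think -> act loop: stream the live environment,
    # let the VLA model decide, execute the action, then look at the new state.
    for _ in range(max_steps):
        obs = machine.capture()             # "eyes" on the current state
        action = brain.decide(obs, goal)    # visuals + instructions -> action
        if action.get("type") == "done":
            break
        system.execute(action)              # keyboard/mouse events or robot control
```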
To put all of this in a simpler analogy, think of Codec's operators like a self-sufficient employee who adapts to surprises on the job. Nuit's agent is like an employee who needs to pause, describe the situation to a supervisor over the phone, and wait for instructions.
Without going down too much of a technical rabbit hole, this should give you a high level idea on why I chose Codec as my primary bet on Operators.
Yes, Nuit has backing from YC, a stacked team and an S-tier GitHub. But Codec's architecture has been built with horizontal scaling in mind, meaning you can deploy thousands of agents in parallel with zero shared memory or execution context between them. And Codec's team isn't your average devs either.
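Since nothing in that loop touches shared state, scaling out is just running more copies of it. Again a rough sketch continuing the hypothetical classes above, not Codec's actual deployment model:

```python
from multiprocessing import Process


def run_one(goal: str) -> None:
    # Every operator gets its own layers: no memory or execution context is
    # shared between agents, so failures stay isolated and scaling is linear.
    run_operator(MachineLayer(), SystemLayer(), IntelligenceLayer(), goal)


if __name__ == "__main__":
    goals = [f"task-{i}" for i in range(8)]       # bump the count to scale out
    workers = [Process(target=run_one, args=(g,)) for g in goals]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
```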
Their VLA architecture opens up a multitude of use cases which weren't possible with previous agent models, because it sees through pixels, not screenshots.
I could go on but I’ll save that for future posts.
The thing is, if you really want to make it in this space everyone around you will think there’s something wrong with you.
To truly be the 0.001%, life outside of the trenches is almost non existent.
No girls, no hobbies, no social outings, no netflix or anything which takes you away from the grind.
It’s a type of mindset which is extremely unrelatable to even the likes of professional athletes because there’s no reason you can’t be online 24/7.
We’re stuck in our own paradox of freedom.
Everyone wants the magic ability to click buttons for money, until it’s time to say no to 95% of enjoyments.
Friends and family will constantly throw hints suggesting you have a form of mental illness and will never truly see the vision.
Jealousy rises when bits of success creep through. If you watch people closely enough, they always reveal their true intentions, even if they didn't mean to.
The smallest hints will give them away, usually from spontaneous emotional reactions in the moment where you only need to hear a few words slip, most of the time that’s all it takes.
As you become more successful, learn to stay quiet. There’s no need to mention your progress, as great as it would be to share with everyone and enjoy the fruits of your labour, it’ll only attract greed from others.
Most fail this as they make being the “crypto guy” or “investor” their whole persona. Even if you’re online 16 hours a day, you still need to have interests and ambitions outside of this industry.
Friends should want to hang out with you for the quality of your presence and mood difference you make while being there, not how many numbers you’ve made on a screen.
Living a private, secluded life with a small circle of quality individuals is the greatest life hack for peace of mind.
If your presence doesn’t make people feel something without talking about money, you’ve already lost.
What is $CODEC
Robotics, Operators, Gaming?
All of the above and more.
Codec's vision-language-action (VLA) model is framework agnostic, allowing for dozens of use cases due to its unique ability to visualize errors, in contrast to LLMs.
Over the past 12 months, we've seen that LLMs function primarily as looping mechanisms, driven by predefined data and response patterns.
Because they’re built on speech and text, LLMs have a limited ability to evolve beyond the window of linguistic context they’re trained on. They can’t interpret sensory input, like facial expressions or real time emotional cues, as their reasoning is bound to language, not perception.
Most agents today combine transformer-based LLMs with visual encoders. They "see" the interface through screenshots, interpret what's on screen, and generate sequences of actions (clicks, keystrokes, scrolls) to follow instructions and complete tasks.
This is why AI hasn’t replaced large categories of jobs yet: LLMs see screenshots, not pixels. They don’t understand the dynamic visual semantics of the environment, only what’s readable through static frames.
Their typical workflow is repetitive: capture a screenshot, reason about the next action, execute it, then capture another frame and repeat. This perceive–think–act loop continues until the task is completed or the agent fails.
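In rough pseudocode, that workflow looks like this (all helper names are hypothetical, not any specific framework):

```python
def capture_screenshot() -> bytes:
    return b""                                   # placeholder static frame


def execute(action: str) -> None:
    print("executing:", action)                  # click / type / scroll


def screenshot_agent(task: str, plan_next_action, max_steps: int = 50) -> bool:
    # Today's screenshot-driven agent: reason over one static frame at a time,
    # act, then capture another frame and repeat until done or stuck.
    for _ in range(max_steps):
        frame = capture_screenshot()
        action = plan_next_action(frame, task)   # LLM + visual encoder reasoning
        if action == "DONE":
            return True
        execute(action)
    return False                                 # ran out of steps: agent failed
```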
To truly generalize, AI must perceive its environment, reason about its state, and act appropriately to achieve goals, not just interpret snapshots.
We already have macros, RPA bots, and automation scripts, but they’re weak and unstable. A slight pixel shift or layout change breaks the flow and requires manual patching. They can’t adapt when something changes in the workflow. That’s the bottleneck.
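For contrast, the brittle automation being described here is usually little more than hard-coded coordinates. A purely hypothetical example of why a slight layout change kills it:

```python
def export_report_macro(click, type_text) -> None:
    # Classic RPA-style macro: every step is pinned to fixed pixel coordinates.
    # If the "Export" button moves even a few pixels, the whole flow silently
    # breaks and needs manual re-recording -- the bottleneck described above.
    click(x=1204, y=87)             # "Export" button, assumed position
    click(x=640, y=412)             # "CSV" option in the dropdown
    type_text("q2_report.csv")      # filename field, assumed to have focus
    click(x=702, y=530)             # "Save"
```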
Vision-Language-Action (VLA)
Codec's VLA agents run on an intuitive but powerful loop: perceive, think, act. Instead of just spitting out text like most LLMs, these agents see their environment, decide what to do and then execute. It's all packaged into one unified pipeline, which you can visualize as three core layers:
Vision
The agent first perceives its environment through vision. For a desktop Operator agent, this means capturing a screenshot or visual input of the current state (e.g. an app window or text box). The VLA model’s vision component interprets this input, reading on screen text and recognizing interface elements or objects. Aka the eyes of the agent.
Language
Then comes the thinking. Given the visual context (and any instructions or goals), the model analyzes what action is required. Essentially, the AI “thinks” about the appropriate response much like a person would. The VLA architecture merges vision and language internally, so the agent can, for instance, understand that a pop up dialog is asking a yes/no question. It will then decide on the correct action (e.g. click “OK”) based on the goal or prompt. Serving as the agent’s brain, mapping perceived inputs to an action.
Action
Finally, the agent acts by outputting a control command to the environment. Instead of text, the VLA model generates an action (such as a mouse click, keystroke, or API call) that directly interacts with the system. In the dialog example, the agent would execute the click on the “OK” button. This closes the loop: after acting, the agent can visually check the result and continue the perceive–think–act cycle. Actions are the key separator which turns them from chat boxes to actual operators.
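Tying the three stages to the dialog example, a single perceive–think–act step could be sketched like this. The element labels and function names are illustrative only, not Codec's API:

```python
from dataclasses import dataclass


@dataclass
class UIElement:
    label: str
    x: int
    y: int


def perceive() -> list:
    # Vision: read the current frame and return recognized on-screen elements.
    return [UIElement("Save changes?", 400, 300),
            UIElement("OK", 360, 360),
            UIElement("Cancel", 460, 360)]


def think(elements: list, goal: str) -> UIElement:
    # Language: given the goal, decide which element to act on.
    return next(e for e in elements if e.label == "OK")


def act(target: UIElement) -> None:
    # Action: emit the control command that actually performs the click.
    print(f"click at ({target.x}, {target.y})")


act(think(perceive(), goal="save the document"))
```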
Use Cases
As I mentioned, due to the architecture, Codec is narrative agnostic. Just as LLMs aren't confined in what textual outputs they can produce, VLAs aren't confined in what tasks they can complete.
Robotics
Instead of relying on old scripts or imperfect automation, VLA agents take in visual input (camera feed or sensors), pass it through a language model for planning, then output actual control commands to move or interact with the world.
Basically the robot sees what’s in front of it, processes instructions like “move the Pepsi can next to the orange,” figures out where everything is, how to move without knocking anything over, and does it with no hardcoding required.
This is the same class of system as Google's RT-2 or PaLM-E: big models that merge vision and language to create real-world actions. CogAct's VLA work is a good example: a robot scans a cluttered table, gets a natural-language prompt, and runs a full loop of object ID, path planning, and motion execution.
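A hedged sketch of that robotics loop, with an entirely made-up robot interface (this is not RT-2, PaLM-E or CogAct code):

```python
def move_object(camera, planner, arm, instruction: str) -> None:
    # VLA-style robotics loop: camera frame + natural-language instruction in,
    # motor commands out. No hard-coded object positions or scripted paths.
    frame = camera.read()                      # visual input (pixels / sensors)
    plan = planner.plan(frame, instruction)    # e.g. "move the Pepsi can next to the orange"
    for waypoint in plan.trajectory:           # object ID -> path planning -> motion
        arm.move_to(waypoint)                  # execute each step of the motion
    arm.release()
```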
Operators
In the desktop and web environment, VLA agents basically function like digital workers. They “see” the screen through a screenshot or live feed, run that through a reasoning layer built on a language model to understand both the UI and the task prompt, then execute the actions with real mouse and keyboard control, like a human would.
This full loop (perceive, think, act) runs continuously. So the agent isn't just reacting once, it's actively navigating the interface, handling multi-step flows without needing any hard-coded scripts. The architecture is a mix of OCR-style vision to read text/buttons/icons, semantic reasoning to decide what to do, and a control layer that can click, scroll, type, etc.
Where this becomes really interesting is in error handling. These agents can reflect after actions and replan if something doesn’t go as expected. Unlike RPA scripts that break if a UI changes slightly, like a button shifting position or a label being renamed, a VLA agent can adapt to the new layout using visual cues and language understanding. Makes it far more resilient for real world automation where interfaces constantly change.
This is something I've personally struggled with when coding my own research bots with tools like Playwright.
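The key difference from a scripted bot is the verify-and-replan step. A rough sketch of how such an operator could recover when an action doesn't land as expected (hypothetical agent interface, not Codec's or Playwright's API):

```python
def resilient_step(agent, goal: str, max_retries: int = 3) -> bool:
    # Act, then visually verify the result. If the UI didn't end up in the
    # expected state (button moved, label renamed), replan from the new frame
    # instead of failing the way a fixed RPA script would.
    for _ in range(max_retries):
        frame = agent.capture()
        action, expected = agent.plan(frame, goal)    # next action + what success should look like
        agent.execute(action)
        if agent.matches(agent.capture(), expected):  # compare the new frame to the expectation
            return True
    return False
```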
Gaming
Gaming is one of the clearest use cases where VLA agents can shine, think of them less like bots and more like immersive AI players. The whole flow is the same, the agent sees the game screen (frames, menus, text prompts), reasons about what it’s supposed to do, then plays using mouse, keyboard, or controller inputs.
It's not focused on brute force, this is AI learning how to game like a human would. Perception + thinking + control, all tied together. DeepMind's SIMA project has unlocked this by combining a vision-language model with a predictive layer and dropping it into games like No Man's Sky and Minecraft. From just watching the screen and following instructions, the agent could complete abstract tasks like "build a campfire" by chaining together the right steps: gather wood, find matches, and use inventory. And it wasn't limited to just one game either. It transferred that knowledge between different environments.
VLA gaming agents aren’t locked into one rule set. The same agent can adapt to completely different mechanics, just from vision and language grounding. And because it’s built on LLM infrastructure, it can explain what it’s doing, follow natural-language instructions mid game, or collab with players in real time.
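The "build a campfire" example boils down to decomposing one natural-language instruction into sub-steps and running each through the same perceive–think–act loop. A toy sketch (the task breakdown is illustrative, not SIMA's actual output):

```python
def play_instruction(agent, instruction: str) -> None:
    # Break one high-level instruction into sub-tasks, then ground each one in
    # whatever game is on screen via the same perceive-think-act loop.
    subtasks = agent.decompose(instruction)      # "build a campfire" -> ["gather wood",
    for task in subtasks:                        #   "find matches", "open inventory", "place campfire"]
        while not agent.is_done(task):
            frame = agent.see()                  # game screen: frames, menus, text prompts
            action = agent.decide(frame, task)   # choose mouse / keyboard / controller input
            agent.press(action)
```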
We aren’t far from having AI teammates which adapt to your play style and personalizations, all thanks to Codec.

ICM’s success isn’t dependent on Launchcoin or any single platform.
It’s a regime change from how we view utility projects onchain.
We went from multi billion dollar launches to pumpfun due to insane mismatches in price and fundamentals.
Now we’re shifting from vaporware to projects with real users, volume and revenue.
Majority will give up right as we turn the corner of real adoption.
A mismatch in price and fundamentals.
$KNET ($8 mil) vs $ALCH ($120 mil)
@Kingnet_AI
Handles everything from 2D/3D modeling to full character rigs, animations, and even code generation, straight from natural-language prompts. Its no-code UI means anyone can go from idea to playable Web3 game demo without touching a line of code. Speeds up builds, cuts costs, and lowers the barrier massively.
It’s positioned toward Web3 native game devs, indie builders, and small studios. Heavy emphasis on asset generation + end to end prototyping. Basically turns game dev into a visual AI workflow, aimed at getting more content out faster, even if you’re non technical.
KNET powers everything: payments, AI queries, and eventually the marketplace for generated assets. Also has governance hooks. Tied to KingNet (a large public gaming company), and already plugged into Solana, BNB, and TON. Seeing early traction + hackathon wins.
Kingnet AI is backed by Kingnet Network Co. Ltd, a publicly listed Chinese gaming giant founded in 2008. With a track record of hit titles like Happy Tower, Shushan Legend, MU Miracle, and World of Warships Blitz, the company is one of the most renowned incubators in mobile gaming. Kingnet AI is built by SmileCobra Studio (Singapore) in exclusive partnership with Kingnet’s Hong Kong arm. Parent company is valued at over $5 billion with $1 billion on its balance sheet.
@alchemistAIapp
A broader no-code platform that converts user prompts into fully functional apps or games.
It uses a multi agent AI engine (multiple specialized models) to parse user prompts, generate code, create visuals, and assemble full applications in real time. Targets a wide user base, from hobbyists to Web3 builders, looking to rapidly prototype tools, games, or websites.
The UX is very streamlined: for example, you enter "a snake game with a brown wooden background", and Alchemist's Sacred Laboratory interface orchestrates AI agents to produce front-end code, game logic, and even custom graphics on the fly.
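As a rough mental model of that multi-agent flow (entirely hypothetical, not Alchemist's actual engine):

```python
def build_app(prompt: str, agents: dict) -> dict:
    # Multi-agent pipeline: specialized models each handle one slice of the
    # build, and the pieces are assembled into a deployable bundle.
    spec = agents["parser"].parse(prompt)            # "a snake game with a brown wooden background"
    return {
        "frontend": agents["coder"].generate_ui(spec),
        "logic": agents["coder"].generate_game_logic(spec),
        "assets": agents["artist"].generate_graphics(spec),
    }
```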
ALCH is used in the Arcane Forge marketplace and to access Alchemist’s AI services. Users can earn ALCH by selling useful applications or games, the marketplace has tipping and discovery features to reward popular apps.
Alchemist was founded in 2024 by a team in Vietnam and is led by Thien Phung Van (founder/CFO), Trong Pham Van (co-founder), and Duc Loc "Louis" Nguyen (CTO). With backgrounds in software and entrepreneurship (Thien was previously CEO/CFO at Vistia), the small team launched Alchemist as an unfunded startup.
TLDR: Kingnet AI is specialized, with a focus on automating end-to-end game creation for Web3, backed by proven gaming infrastructure. Alchemist AI is broader in scope, offering a fast LLM-based interface for building unique tooling and games with retail appeal. Kingnet is domain-deep in gaming, while Alchemist is domain-wide across several use cases.
Based on this, it's quite clear Kingnet is severely undervalued in comparison. Kingnet is much earlier in its product lifecycle and hasn't fully fleshed out its UX and interfaces, but the quality of its team, experience and backing significantly outweighs Alchemist's platform, while trading at a 15x lower mcap.
People keep congratulating me on $CODEC, what for?
So far, we haven’t even seen:
- Token utility
- Incentives
- Roadmap
- Demos
- New website
- Marketplace
- Future Partnerships
- Use cases
+ more
All we’ve seen is a few partnerships and the release of their resource aggregator (Fabric).
I didn't write multiple threads and multiple Telegram posts, speak with the team on a near-daily basis, and advise on marketing, branding and positioning just to celebrate a 6 mil mcap.
A ChatGPT wrapper of an anime girl with pink hair was enough for a 6 mil mcap back in AI szn.
Projects were sending to 9 figures overnight for winning a hackathon or getting spotlighted from large KOLs/researchers.
Everyone's forgotten what happens when the lights switch on and people believe once again.
The reason I've turned so bullish on onchain this past week is that belief is at all-time lows. The past month has seen some of the largest progress we've made in this industry, along with a positive macro backdrop.
Remember that feeling of money falling from the sky? Might not be too long until we get to experience it again.
