AI’s next frontier is not compute or larger models; it’s better data. Today, we’re bringing on one of the few people who has actually spent his life solving that problem.
Welcome @SPChinchali, our new Chief AI Officer.
The frontier of AI is no longer defined by models with more parameters, or compute clusters with more GPUs.
It will be defined by the scarcity of high-integrity, IP-cleared data from the physical world (think robotics, autonomous hardware, and edge devices). Sandeep has spent his career chasing this frontier. Now he’s helping us unlock it.
When I first met Sandeep, I was struck by his soft-spoken, endearing manner. He has a way of speaking that draws you in, whether he’s explaining neurosymbolic AI or praising the alien-looking ergonomic keyboard he carries wherever he goes.
His background speaks for itself: Stanford PhD, NASA JPL, and now a professor at UT Austin leading research at the intersection of decentralized ML and robotics. Throughout, Sandeep has been obsessed with how to get the data that makes AI useful in the real world:
> creating data incentives for distributed networks,
> tackling the quintessential long-tail sampling problem in edge robotics, and
> designing systems that preserve provenance.
Sandeep also confirmed a thesis I’ve been obsessed with for years: the real moat is data. Not scraped Reddit forums or generic web text, but attributable, rights-cleared, real-world data. The messy, unpredictable data required to make physical systems robust cannot be simulated.
The hard part is sourcing and curating the messy, long-tail data that physical systems see in the wild: the slippery loading-dock robot at 2 a.m., the faint micro-crack on a wind-turbine blade, the corner case a lidar unit has never seen before. Those moments are IP, and they’re precious. If we can make that IP programmable, licensable, traceable, and monetizable in real time, we unlock a flywheel for every AI team on the planet.
Programmable IP is the only backbone that makes this possible. Most crypto x AI attempts bolt “AI” onto existing infra. Sandeep is joining because Story is built from the ground up to solve exactly these data-coordination challenges.
Story is built for dynamic, composable relationships. Our protocol is designed for the graph-based provenance, dynamic licensing, and automated royalty flows that modern AI systems demand. A photo can be licensed, a label can be added, a synthetic variation can be generated, and on Story, each action becomes a new, linked IP asset in a transparent graph, with value flowing back to every contributor.
Sandeep’s arrival is a turning point. Chapter 2 of Story is coming into focus, and the next phase of AI infrastructure is just getting started.
His combo of deep intellect, genuine curiosity, and quiet dedication is exactly what this moment calls for. We couldn’t be more excited to build the future of AI with him, and there’s a whole lot more coming.
Stay tuned!

July 17 at 11:00 PM
I’ve spent my career chasing one question: How do we gather the right data to make AI work in the real world?
From Stanford labs to UT Austin classrooms, I searched everywhere. The answer isn’t another AI lab, but a blockchain built to treat data as IP. That’s why I am joining @StoryProtocol as their Chief AI Officer.
At Stanford, I studied “cloud robotics”: how fleets of robots could use distributed compute to learn together. I even mounted a dashcam in my car to solve this:
If robots could only upload 5–10% of what they see, how do we pick the most valuable data?
Most of it was boring freeway footage. But <1% captured rare scenes: self-driving Waymos, construction sites, unpredictable humans. That “long-tail” data made models smarter. I hand-labeled it, even paid Google Cloud’s labeling service to annotate my footage with niche concepts like “LIDAR unit” and “autonomous vehicle”, and trained models that ran on a USB-sized TPU. But academia only goes so far.
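The selection problem above can be sketched in a few lines. This is a toy illustration of one reasonable approach, ranking frames by label rarity and keeping only the upload budget’s worth; the labels and the scoring rule are my own assumptions, not the actual method from the research described here.

```python
# Toy sketch: a robot can upload only ~5% of its frames, so it ranks them
# by rarity (inverse label frequency) and keeps the long tail.
from collections import Counter

def select_frames(frames, budget_fraction=0.05):
    """Pick the rarest `budget_fraction` of frames by label frequency.

    `frames` is a list of (frame_id, label) pairs; lower label frequency
    means higher value, so those frames sort to the front.
    """
    counts = Counter(label for _, label in frames)
    total = len(frames)
    scored = sorted(frames, key=lambda f: counts[f[1]] / total)
    k = max(1, int(total * budget_fraction))
    return scored[:k]

# 100 frames: mostly boring freeway footage, plus three rare long-tail scenes.
frames = [(i, "freeway") for i in range(97)]
frames += [(97, "waymo"), (98, "construction"), (99, "jaywalker")]

selected = select_frames(frames, budget_fraction=0.05)
print([label for _, label in selected])
# The three rare scenes fill the budget first; freeway frames pad the rest.
```

Real pipelines would score frames with a learned novelty or uncertainty model rather than exact label counts, but the budgeted long-tail ranking is the same shape.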
At UT Austin, my questions shifted:
→ How do we crowdsource rare data to improve ML?
→ What incentive systems actually work?
That pulled me into crypto – blockchains, token economies, even DePIN. I blogged and wrote papers on decentralized ML, but still wondered: who’s actually building this infrastructure?
By total chance, I met the Story team. I was invited to give a talk at their Palo Alto office. It was 6 PM and the room was still packed. I rambled about “Neuro-Symbolic AI” and ended with a slide called “A Dash of Crypto.” That talk turned into an advisory role, which has now turned into something much bigger.
We’re at a pivotal moment. Compute is mostly solved. Model architectures are copied overnight. The real moat is data.
Not scraped Reddit. Not endless language. But rights-cleared, long-tail, real-world data that trains embodied AI – robots, AVs, systems that navigate our messy world.
Imagine this: I capture a rare driving scene on dashcam & register it on Story. A friend labels it. An AI agent creates synthetic variants. On Story’s graph-structured chain, each becomes linked IP. Royalties flow back automatically. Everyone gets paid, every step traceable on-chain.
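That flow (capture, label, synthetic variant, royalties back up the chain) can be sketched as a tiny provenance graph. This is a hypothetical illustration, not Story Protocol’s API: the class, the asset names, and the fixed 20% parent share are all my own assumptions.

```python
# Minimal sketch of a linked-IP graph: each action registers a new asset
# pointing at its parent, and revenue forwards a fixed share upstream.
class IPGraph:
    def __init__(self, parent_share=0.2):
        self.assets = {}    # asset_id -> (owner, parent_id)
        self.balances = {}  # owner -> accumulated royalties
        self.parent_share = parent_share  # assumed fraction sent to each parent

    def register(self, asset_id, owner, parent_id=None):
        self.assets[asset_id] = (owner, parent_id)
        self.balances.setdefault(owner, 0.0)

    def pay(self, asset_id, amount):
        """Credit `amount` to an asset, forwarding a share to each ancestor."""
        while asset_id is not None:
            owner, parent_id = self.assets[asset_id]
            upstream = amount * self.parent_share if parent_id else 0.0
            self.balances[owner] += amount - upstream
            asset_id, amount = parent_id, upstream

g = IPGraph()
g.register("scene-001", owner="sandeep")                     # dashcam capture
g.register("scene-001-labeled", "friend", "scene-001")       # friend's labels
g.register("scene-001-synth", "agent", "scene-001-labeled")  # synthetic variant

g.pay("scene-001-synth", 100.0)  # someone licenses the synthetic variant
print(g.balances)
# -> {'sandeep': 4.0, 'friend': 16.0, 'agent': 80.0}
```

Every payment splits deterministically along the provenance chain, so each contributor’s cut is traceable from the graph itself; an on-chain version would make both the links and the splits publicly auditable.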
That’s why I’m now Chief AI Officer at Story, building the rails for decentralized, IP-cleared training data. It’s time to make data the new IP. Story is the place to do it.
Much more to come soon. Let’s go.


