[Weekend Short] Is It a Problem If AI Doesn't Create Revenue?


We have finally reached an inflection point: even with sufficient time, I cannot keep up with all the significant developments and updates. For instance, Microsoft's GraphRAG, Florence, Meta's Chameleon, Apple's 4M-21, not to mention the backlog of deployments like LLaMA Agents, NuExtract, Jina Reranker, ChatTTS, and so on.

Every time I see a solution launched by a semi-famous startup, I fall into a mental spiral of "finally, you're here" and "why am I so slow." The only cure is to deploy it quickly and run through it once myself.

Even the best frontier models—Claude 3.5, GPT-4o, Gemini 1.5—still have many "silly" and primitive aspects. By the standards of the mobile internet era, they are laboratory products that fall far short of "delivery standards," let alone models and products beyond these.

Of course, there are some applications that go viral quickly, but after just a short while, they are either rarely visited or rendered unrecognizable by their own updates.

Aside from computing resources, the biggest contributor to all of this is likely social media traffic.

That is why Sequoia has spoken out again: the massive gap between AI costs and revenue has widened further.

When our market demands constant hits and targets that yield multi-fold or even ten-fold returns in short periods, Sequoia's judgment is correct.

However, this is a product that exists purely in the digital world. If the digital world is a giant laboratory, and the products within it stay in the lab rather than entering the physical world to generate massive revenue, is that a problem right now, or even in the next two to three years?

Whether it is called the latest "Industrial Revolution" or deemed unworthy of the title, one thing is certain: since the dawn of the Industrial Revolution, all product prototypes have originated from "laboratories."

It's just that today's "laboratory" has the highest number of participants in history—at least 5 million (the number of OpenAI developers), and likely over 10 million (if we include all programmers worldwide). They are all concentrated in the field called "AI," with the majority refining generative models and a minority seeking breakthroughs in underlying models or architectures.

ChatGPT's daily visits have exceeded 100 million, and the llama3-8B-instruct model has been downloaded 2.43 million times in the past month.

Perhaps we are simply in the process of rebuilding a "software world" on an order of magnitude far exceeding the past, in a much shorter timeframe.

Programmers who first saw "Hello, World!" on a screen in a school computer lab over a decade ago may have reached the age of the "35-year-old career crisis," a common anxiety in China's tech industry.

So, starting from a chatbot replying "Hello, I am your AI assistant!", how many years will it take to witness these "laboratory products" reach unimaginably staggering heights?

Perhaps what we are questioning is not the trend itself; we simply want an opportunity for a reshuffle similar to the bursting of the dot-com bubble in 2000.

Yes, valuations are expensive, and a massive adjustment within six months or a year is not a low-probability event. But that is just a market law, driven by natural "greed and fear." What does that have to do with being "buried in the lab"?

Most importantly, one only needs a technical perspective to see the "increasing possibilities." Today, amidst the mental friction of "finally, you're here" and "why am I so slow," the perceived "possibilities" are growing exponentially.

If it hasn't created "revenue" as defined by the "so-called market" (I never believe market practitioners represent the market itself), is there really a problem?
