> Even if you think that OpenAI’s growth is impressive — it went from 700 million to 800 million weekly active users in the last two months — that is not the kind of growth that says “build capacity assuming that literally every single human being on Earth uses this all the time.”
I’d argue the other way around: 100M growth in two months suggests literally every single human being on Earth would benefit from using this all the time, and it’s just a matter of enabling them to.
Beware the sigmoidal curve, though. Growth is exponential till it’s not.
> 100M growth in two months suggests literally every single human being on Earth would benefit from using this all the time, and it’s just a matter of enabling them to.
This doesn’t make any sense. Popular is not the same as useful. You’d have a more compelling argument if you included data showing that all this increased LLM usage has had some kind of impact on productivity metrics.
Instead, some studies have shown that LLMs are making professionals less productive:
>This doesn’t make any sense. Popular is not the same as useful.
If you are using a service weekly for a long period, you find it useful.
>You’d have a more compelling argument if you included data showing that all this increased LLM usage has had some kind of impact on productivity metrics.
Why would you need to do that? Why is a vague (in this instance) notion of 'productivity' the only measure of usefulness? ChatGPT (not the API, just the app) processes over 2.6B messages every single day. Most of those (1.9B) are for non-work purposes. So what 'productivity' would you even be measuring here? Do you think everything that doesn't have to do with work is useless? I hope not, because you'd be wrong.
If something makes you laugh consistently, it's useful. If it makes you happy, it's useful. 'Productivity' is not even close to being the be-all and end-all of usefulness.
This is free users though. The number of paid users is significantly less (like any other freemium product). Finding it useful enough to use weekly doesn't mean finding it useful enough to pay continuously for use.
Free users can still be monetized, or else Google Search would be a money loser. OpenAI's free users are at the moment not monetized in any way. It's clear they plan to change this, given recent hirings, and the value they'd need to extract from their free active userbase to be profitable is pretty low, so it's not really a problem.
> If you are using a service weekly for a long period, you find it useful.
Do alcoholics find their daily usage of alcohol really useful? You can of course make a case for this, but it's quite a stretch. I think people use stuff weekly for all sorts of reasons besides usefulness for the most common interpretation of the word.
> Do alcoholics find their daily usage of alcohol really useful?
Of course they do. They use it to get drunk and to avoid withdrawals. You're trying to confuse useful with productive. Being productive does make a difference, though, because if something isn't productive it doesn't generate enough cash to buy more of it - you have to pull the cash from somewhere else.
So I think your feeling is correct, although your argument is wrong. Buying gas for your car is productive, because it gets you to work which gets you money to pay for more gas and other things (like alcohol.) Buying alcohol is not productive, and that means that people can't pay too much for it.
Exactly, for how many people is Instagram/TikTok and friends actually useful? Sure, they're popular and also used by billions, but would every human on earth benefit from using those services?
I finally used it for a couple of little things, but mostly as a fuzzier replacement for search, where it does do pretty well. Of course, classic search is shambolic nowadays, so it's kind of like a mediocre prime-aged boxer fighting a 70-year-old champion or something.
Anyway, I bet it will be really useful for cool stuff if it can ever run on my laptop!
idk, "it will be really useful" is a bit too fuzzy and vague -- how do I infer anything about the numbers related to return on investment?
of course, it's better than "this is so crap no one would buy it" -- but for investors, they want to know: "if I put X dollars now, would I get 10*X dollars or 1/10 X dollars?"
it's weird that all these comments on "usefulness" don't even attempt to explain whether the numbers add up or not
OpenAI's bottleneck first shifted from GPUs to energy. Next it will shift from energy to meatbags. I'm sure they will figure out some way to produce more of us to keep the growth rate going.
> 100M growth in two months suggests literally every single human being on Earth would benefit from using this all the time, and it’s just a matter of enabling them to.
For OpenAI, I think the problem is this: if browsers, operating systems, phones, word processors [some other system people already use and/or pay for] eventually integrate some form of generative AI that is good enough - and an integrated AI can be a lot less capable than the cutting edge and still win - what will be the market for a standalone AI for the general public?
There will always be a market for professional products, cutting edge research and coding tools, but I don’t think that makes a trillion dollar company.
About 10% of the total world population is using it on a weekly basis. Take out those too old, too young, or illiterate, technically or otherwise. Now subtract the people without reliable internet and a computer or phone. That 10% gets a whole lot bigger.
That seems like pretty strong evidence that it is generally, if not universally, useful to everyone given the opportunity.
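A rough sketch of that arithmetic, with invented exclusion figures standing in for real demographic data (the specific counts below are assumptions for illustration only, not UN/ITU statistics):

```python
# Back-of-envelope: raw vs "addressable" penetration of 800M weekly users.
world = 8.1e9            # approximate world population
waus = 800e6             # weekly active users cited in the thread

# Hypothetical exclusions (assumed figures, for illustration only):
no_internet = 2.6e9      # people without reliable internet access
too_young = 0.6e9        # small children among the connected

addressable = world - no_internet - too_young

print(f"Raw penetration: {waus / world:.0%}")                # ~10%
print(f"Addressable penetration: {waus / addressable:.0%}")  # ~16%
```

Under these assumptions the "10%" figure roughly doubles once you restrict to people who could plausibly use the service at all.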
My work is apparently paying for seats in multiple AI tools for everybody. There's a corporate mandate that you "have to use AI for your job". People seem to mostly be using it for (a) slide decks with cringe images, and (b) making their PRs look more impressive by generating a bunch of ineffective boilerplate unit tests.
It’s interesting with the whole quote:
“OpenAI has 800 million weekly active users, and putting aside the fact that OpenAI’s own research (see page 10, footnote 20) says it double-counts users who are logged out if they use different devices”
The number may not actually be too accurate - but I imagine it's also paired with what another commenter has said: OpenAI is basically giving its product to companies, and the companies are making employees log in and use it in some way. It's not natural growth in any sense of the word.
Utility and destructive addictiveness are two very different things. You could argue this way about opium back when recreational consumption was widespread.
I’m sorry but I don’t see much logic in an argument that boils down to “A lot of people use it and that means it would also be useful to the people who don’t use it”. Maybe the people who don’t use it have an actual reason not to use it.
I just stopped paying for ChatGPT this month, once I found out that they made the projects available to free users too. The free version is just as good and I can shuffle between Grok, Mistral, Deepseek and Gemini when I run out of free quota.
So maybe giving away more and more free stuff is good for growth? The product is excellent; ChatGPT is still my favorite, but the competition isn't that far behind. In fact, I've started liking Grok better for most tasks.
> 100M growth in two months suggests literally every single human being on Earth would benefit from using this all the time, and it’s just a matter of enabling them to.
Only about 5% see enough value to drop $20 a month... It's like VR and AR, if people get a headset for free they'll use them every now and then, but virtually nobody wants to drop money on these.
Reality check: UNICEF and the WHO say there are 2 billion people without access to clean drinking water. They have slightly more pressing issues than trying to log into ChatGPT. Only slightly.
The blockchain/bitcoin bros tried the same marketing spin. "Bitcoin will end poverty once we get it into everyone's hands." When that started slipping, it became "NFTs will save us all."
Yeah. Sure. Been there. Done that. Just needs "more investment"... and then more... then more... all because of self reported "growth".
Nearly six billion people are using mobile phones, most of those are smartphones now. There's no reason to think extending that small cost utility device to the next billion adults isn't a good idea (so long as the cost isn't coming from their pocket, ie it should be subsidized). These are not at all mutually exclusive goals.
The latest LLMs are extraordinarily useful life agents as is. Most people would benefit from using them.
It'd be like pretending it's either water or education (pick one). The answer is both, and you don't have to pick one or the other in reality at all. The entities trying to solve each aspect are typically different organizations anyway.
Spare me the rehash of marketing hype rhetoric. It's either a white collar tool to avoid doing boring work or great to identify targets on a battlefield. That's it and both are still questionable. This techno-fetishism of "New technology good, ugga-ooga-booga." 99% of the blind evangelists just spew that same slop just because of fomo of making a few extra shekels by proving "I'm a true believer, and so should you by buying my AI course."
Someone who doesn't have access to clean water and stable food will not benefit from this, nor will the powers that be who "make it available" actually improve their lives. It's already apparent that the tech nerds of the late 90s and early 2000s were NOT the good guys. Being good at computers does not make you a good person. The business model for AI makes zero sense when you have real-world experience. Without massive, complete social and economic changes, it won't work out. And for those championing that change, what makes you think you'll be the special comrade brought up the ranks to benefit as the truest of believers?
Sorry, but this shit is really starting to rub me the wrong way, especially with the massive bubble of investment growing around all of it. This won't be good. The housing collapse sucked. The same pattern is emerging, and I'm getting a bad, bad feeling this will make the housing collapse look like nothing due to the long-term ramifications.
It's a wonderful way to find new tools and materials. Its knowledge of materials is as encyclopedic as it is for everything else.
I use it as a much more efficient version of Wikipedia for quickly finding the basics on many design options in software and physical artifacts. Also great at finding specific words in English or other languages. Unlike a thesaurus it pulls a lot of background on each word including subjective shades.
I could go on, and on.
I think the usefulness of these models is very different for different people. As a way to quickly start digging into novel problems, with aspects that require discovery, I don't know of anything that could possibly compare. No human being or web resource comes close.
As many people relate, it can save me time, but the greater value is it makes my time far more productive, and I tackle problems without hesitation I wouldn't have ever had the time to even think about before.
I could not imagine going back to Google for instance. Stone-age. I still use Kagi, but often for its quick AI question responses. And I still use Wikipedia, and more specific resources.
It's also useful if you're blind; I know this from personal experience. The ability to recognize objects, read package labels, read BIOS and boot menus, etc. has been very useful to me. Claiming that the only things it's good for are white-collar work or battlefield targeting isn't accurate. In spite of how useful I've found it, I'm not claiming it's going to be a net positive; I have no idea how this will all turn out.
I don't really think people understand there are all sorts of non-chatgpt users that pay OpenAI thousands of dollars PER DAY - (>100k customers like this). They're not going to publish the data, but agentic flows make ChatGPT look like a cereal box.
The number of users is irrelevant. Revenue is only slightly relevant. The only thing that matters is profit. It would even be a decent thing if they could show marginal profit per user.
This was an interesting article, but he completely misses the only real threat to either Open AI or Anthro.
Open source models like deepseek and llama 3 are rapidly catching up. If I can get 90% of the functionality for significantly less (or free, if I want to use my own GPU), what value does OpenAI really have?
I'm a paid subscriber of open AI, but it's really just a matter of convenience. The app is really good, and I find it's really great for double checking some of my math. However I don't know how they're going to ever truly become a corporate necessity at the prices they're going to need to bill at to become profitable.
Then again, open AI is obviously run by some of the smartest people on planet Earth, with other extremely smart people giving them tons of money, so I can be completely wrong here
> Open source models like deepseek and llama 3 are rapidly catching up
Catching up for how long though? Large models are very expensive to train, and the only reason any of them are open is because ostensibly for-profit companies are absorbing the enormous costs on behalf of the open source scene. What's the plan B when those unprofitable companies (or divisions within companies) pull the ladder up behind them in pursuit of profits?
> Catching up for how long though? Large models are expensive to train and the money has to come from somewhere, and for now that somewhere is the charity of ostensibly for-profit companies which release their models for free and just eat the losses. That doesn't seem very sustainable to me.
Catching up enough to execute a business successfully on the open source (and free as in free beer) alternatives to OpenAI. Once you have a single model that works, you can bootstrap your own internal ML infra for just that one use case and grow from there.
Martin Casado from a16z stated an opinion that 80% of US startups are likely using less expensive open models, usually from China. Chamath P. on the All In Podcast said that his company is using Chinese models, but hosted in the US and he cited his company as a huge inference user.
Important to distinguish that they're likely using them for internal workflows (agents, etc) where the scope is well defined and they can tune their prompts and evals to accommodate a lower-performance model.
Nobody is advocating switching their coding agents to open source (yet), but that's not the bulk of the tokens in companies that have automated workflows integrated into their business.
I think what you're missing here is that China would be (is?) happy to fund the development, because it's in their national interest and necessary in order for their companies to stay competitive, so long as there are trade restrictions on chips. Another framing for this is that China and certain other entities (e.g. content distribution channels like Meta, Youtube) have a strong incentive to 'commoditize their [AI] complement' (https://gwern.net/complement).
I don’t think that’s the point he is making. The argument to me is looking at the numbers and grounding them in real life. It takes time to build data centers, it takes people to run them. The article makes the argument that the timelines are not feasible.
Meanwhile, the size of the money asks, the short time frames, and the unclear demand justifying them are huge business problems. Self-service (llama on my own GPUs) is, I suppose, just another way to ask where the demand is coming from that argues for billions in new money. Something smells...
The hardware required to run something like deepseek / kimi / glm locally at any speed fast enough for coding is probably around $50,000. You need hundreds of gigabytes of fast VRAM to run models that can come anywhere close to openai or anthropic.
$50k would be the cost to run it un-quantized; $10k could get you, for example, a 4x 5090 system that would run the 671B q4 model, which is 90% as good, which was the OP's target.
128GB is still not 300GB. Something like 4x 6000 Blackwell is the minimum to run any model that is going to feel anything like Claude locally.
To my deep disappointment the economics are simply not there at the moment. Openrouter using only providers with zero data retention policies is probably the best option right now if you care about openness, privacy and vendor lock-in.
For local use and experimentation you don't need to match a top of the line model. In fact something that you train or rather fine-tune locally might be better for certain use cases.
If I was working with sensitive data I sure would only use on prem models.
> I'm a paid subscriber of open AI, but it's really just a matter of convenience. The app is really good, and I find it's really great for double checking some of my math.
That right there is why they are valuable. Most people are absolutely incompetent when it comes to IT. That's why no one you meet in the real world uses ad blockers. OpenAI secured its position in the mind share of the masses. All they had to do to become the next Google was find a way to force ads down the throats of their users. Instead they opted for the inflated-bubble-and-scam-investors strategy. Rookie mistake.
There is talk of 800 million weekly users or whatever. But real question to me is how much actual disposable income they have or willingness to spend it on expensive AI subscription.
And just because you have users doesn’t mean it’s easy to create a profitable ad business - ask Yahoo. Besides we still don’t know how much inference costs. But there is a real marginal costs that wouldn’t be covered by ads. They definitely couldn’t make enough on ads to cover their training costs and other costs.
And adding ads into the responses is _child's play_: find the ad with the most semantic similarity to the content in the context. Insert it at the end of the response, or every N responses, with a convincing message that, based on our discussion, you might be interested in xyz.
For a more subtle and slimier way of doing things, boost the relevance of brands and keywords, and when they are semantically similar to the most likely token, insert them into the response. Companies pay per impression.
When a guardrail blocks a response, play some political ad for a law and order candidate before delivering the rest of the message. I'm completely shocked nobody has offered free gpt use via an api supported by ad revenue yet.
This is such a techno-centric view, you're not even remotely aware of your own biases.
> but he completely misses the only real threat to either Open AI or Anthro.
Hard disagree. Economics matter, in fact more than tech.
Tech only gets to shine if the economics work out.
> Then again, open AI is obviously run by some of the smartest people on planet Earth, with other extremely smart people giving them tons of money, so I can be completely wrong here
Nope.
Money != Intelligence
OpenAI, SF, and the West Coast VC scene are run by very opinionated, incentivized people.
Yes, money can make things move, but all the money in the world doesn't matter if your unit economics don't work out.
And the startup graveyard is full of examples of this kind.
You know that this is literally not a concern, right? It is part of business life to navigate such situations.
When I was a SWE, I did migrations from bare metal to AWS, to GCP, then both, and then all of those plus Azure.
It is the cost of doing business. You pick what works best at a price that is optimized for your business needs now. You have a war chest so when they start becoming assholes, you have leverage to fight back or pivot.
I'd rather spend $200-400/mo to unblock myself NOW than do something dumb with 5 or even 100 tokens per second of output that isn't actually as good as what the current providers offer. I'm going through millions of tokens a day; I couldn't do that locally "RIGHT NOW" (<--- important).
Yes. But there is not only OpenAI. There is Gemini, Grok, whatever.
If it doesn't become a "the winner takes it all" but a commodity like web hosting, then the payoff breaks down.
You may be correct, but for my hobbyist projects I tend to use the cheaper models to get started, and then I'll switch to a more expensive one if the cheaper model gets stuck.
Unless the actual race is to create an AI Employee that operates and can deliver work without constant supervision. At that level, of course it would be cheaper to pay $2,000 a month straight to Open AI vs hiring a junior SWE
I feel like options for local inference are getting better. AMD has their Strix Halo. Intel's next CPU generation Arrow Lake will have better inference abilities as well.
> For actual work and not toying around the 10% gap is absolutely worth the cost.
I agree with this... for now. But the hosted commercial models aren't widening the gap as far as I can tell, if anything it appears to be narrowing.
And if the relative delta doesn't increase somehow I don't see any way in which the "AI race" doesn't end in a situation where locally run LLMs on relatively cheap hardware end up being good enough for virtually everyone.
Which in many ways is the best possible outcome, except for the likely severe economic effects when the bubble bursts on the commercial AI side.
I think you're wrong, in the same way that folks on HN were wrong about Dropbox. HN: why would I pay for something that provides so little value, it's just slightly more convenient file storage?
Just because open source models are almost as good, doesn't mean you can underestimate the convenience factor.
Both can be true: we're in an AI bubble, and the large incumbents will capture most of the value/be difficult to unseat.
On the other hand, no one has figured out how to make money providing AI yet, and everyone's operating at a loss. At some point they're going to need to monetize, and the cost/convenience compared to alternatives may not be worth it for a lot of people.
At one point you could get a Netflix subscription and it was convenient enough that people were pirating less. Now there's so many subscription services, we're basically back to cable packages, paying ever increasing amounts and potentially still seeing ads. I know I'm pirating a lot more again.
Uber vs cabs, Airbnb vs hotels - We've seen it time and time again, once the VC cashflow/infinite stonk growth dries up and they need to figure out how to monetize, the service becomes worse and people start looking for alternatives again.
Yeah, but not just that. I don't expect my mum to go find some high-end consumer GPU and install it on a home server in order to run her own local LLM. I expect that people will be throwing chat interfaces running remixed versions of open-weight models out on the internet so fast that it's impossible for anyone to monetise it in a reasonable way.
I also wonder whether, similar to bitcoin mining, these things end up on specialist ASICs, and before we know it a medium-tier mobile phone is running your own local models.
Well seeing how Dropbox is doing now, Steve Jobs was right - it isn’t a product, it’s a feature. For the same price of 2TB of storage on Dropbox you can get the same amount on Google or OneDrive with a full office suite.
People love to quote Dropbox ignoring all of the YC companies that are zombies or outright failed. Just looking at the ones that have gone public.
I am not saying the companies in aggregate don’t lead to successful outcomes for VCs. I am saying claiming Dropbox is a shining example of a “successful” company that HN was wrong about long term doesn’t jibe with reality.
They also didn’t have the massive fixed cost outlays nor did they have negative unit economics that OpenAI has.
I don't get this comparison. The non-Dropbox version was magnitudes less convenient for 99.99% of the population. A non-OpenAI chat interface is, at best, a fraction less convenient.
A good number of people used to pay for email. Now a tiny fraction does. It all hangs on whether OpenAI can figure out how to get ad revenue without people moving to a free competitor - and there will be plenty of those.
Of course, as that's where the money and power is, the only things SamA is in it for.
There will be equivalent models that are free. Likely, there will even be free ones without ads.
Free+ads can beat free without ads on pure incumbency, marketing and convenience. Most people don't use an adblocker, even though it's trivially easy to install.
Paid, however, can't beat free+ads. Too much friction.
> folks on HN were wrong about Dropbox–HN: why would I pay for something that provides so little value, it's just slightly more convenient file storage?
> Open source models like deepseek and llama 3 are rapidly catching up, if I can get 90% of the functionality for significantly less ( or free if I want to use my own GPU), what value does open AI really have
You have several providers who host both deep-seek and llama 3. They pay for the hardware and electricity, you pay for usage but it's significantly cheaper than using OpenAIs models.
Where are these providers, and do they offer batch processing? If they don't, how does their cost compare to Gemini and OpenAI batch processing? For the hobby project I'm working on, batch processing is a great fit. The only cost-comparison tool I've been able to find is OpenRouter, and it doesn't support batch processing for cost savings.
In the not too distant future, like in 47 days, ChatGPT 6 Recurd is going to knock everyone's socks off. Instead of a better model, it's a recycled model (there's some recursion for you!) but it auto purchases 10 more ChatGPT 6 plans to help it perform much much better.
Then another 38 days later each of those plans upgrades, and scores improve one percent again. And then 24 days later, those plans purchase a cluster of upgrades.
It is very easy to underestimate super-exponentials. But by early 2026, OpenAI is likely to be selling trillions of licenses to these models. At the dawn of agentic computing, the number of human customers is not the limit anymore.
And if you are thinking that the global money supply is going to economically choke off his plan, well, Altman has a coin for that. And an automated loan system extending credit to every human and all those models. And, of course, compute futures (until the real world can catch up) for everybody. But there won't be too much coin for its price to rocket with demand, so get in early. It's a whole new world.
Can someone explain why we measure these datacenters in Gigawatts rather than something that actually measures compute like flops or whatever the AI equivalent of flops is?
To put it another way, I don't know anything but I could probably make a '1 GW' datacenter with a single 6502 and a giant bank of resistors.
As a reference for anyone interested - the cost is estimated to be $10 billion for EACH 500MW data center - this includes the cost of the chips and the data center infra.
Yes! The varying precisions and maths feels like just the start!
Look at next-gen Rubin with its CPX co-processor chip to see things getting much weirder and more specialized. It's there for prefilling long contexts, which is compute intensive:
> Something has to give, and that something in the Nvidia product line is now called the "Rubin" CPX GPU accelerator, which is aimed specifically at parts of the inference workload that do not require high bandwidth memory but do need lots of compute and, increasingly, the ability to process video formats for both input and output as part of the AI workflow.
To confirm what you are saying, there is no coherent unifying way to measure what's getting built other than by power consumption. Some of that budget will go to memory, some to compute (some to interconnect, some to storage), and it's too early to say what ratio each may have, to even know what ratios of compute:memory we're heading towards (and one size won't fit all problems).
Perhaps we end up abandoning HBM and DRAM! Maybe the future belongs to high-bandwidth flash! Maybe with its own computational storage! Trying to use figures like flops or bandwidth is applying today's answers to a future that might get weirder on us. https://www.tomshardware.com/tech-industry/sandisk-and-sk-hy...
Mh, from my recently growing, but still tiny, experience with HW & DC ops:
You have a lot more things in a DC than just GPUs consuming power and producing heat. GPUs are the big ones, sure, but after a while, switches, firewalls, storage units, other servers and so on all contribute to the power footprint significantly. A big small-packet, high-throughput firewall packs a surprisingly high amount of compute capacity, eats a surprising amount of power, and generates a lot of heat. Oh, and it costs a couple of cars in total.
And that's the important abstraction / simplification you get when you start running hardware at scale. Your limitation is not necessarily TFlops, GHz or GB per cubic meter. It is easy to cram a crapton of those into a small place.
The main problem after a while is the ability to put enough power into the building and to move the heat out of it again. It sure would be easy to put a lot of resistors into a place to make a lot of power consumption. Hamburg Energy is currently building just that to bleed off excess solar power into the grid heating.
It's problematic to connect that to the 10 kV power grid safely and to move the heat away from the system quickly.
My understanding is that there is no universal measure of compute power that applies across different hardware and workloads. You can interpret the power number to mean something close to the maximum amount of compute you can get for that power at a given time (or at least at time of install). It also works across geographies, cooling methods, etc. It covers all that.
Think of it like refining electricity: a data center has a supply of raw electricity and a capacity for how much waste (heat) it can handle. The quality of the refining improving over time doesn't change the supply or waste capacity of the facility.
It simplifies marketing. They probably don't really know how many flops or anything else they will end up with anyway, so gigawatts is a nice way to look big.
Assuming a datacenter is more or less filled with $current_year chips, the number of flops is a kind of meaninglessly large number. It's big. How big? Big enough that it needs a nuclear power plant to run.
Not to mention it would assume that number wouldn't change...but of course it depends entirely on what type of compute is there as well as the fact that every few years truckloads of hardware gets replaced and the compute goes up.
Because, to us tech nerds, GPUs are the core thing. With a PM hat on, it's the datacenter in toto. Put another way: how can we measure in flops? By the time all this is built out we're on the next gen of cards.
His "$400B in next 12 months" claim treats OpenAI as paying construction costs upfront. But OpenAI is leasing capacity as operating expense - Oracle finances and builds the data centers [1]. This is like saying a tenant needs $5M cash because that's what the building cost to construct.
The Oracle deal structure: OpenAI pays ~$30B/year in rental fees starting fiscal 2027/2028 [2], ramping up over 5 years as capacity comes online. Not "$400B in 12 months."
The deals are structured as staged vendor financing:
- NVIDIA "invests" $10B per gigawatt milestone, gets paid back through chip purchases [3]
- AMD gives OpenAI warrants for 160M shares (~10% equity) that vest as chips deploy [4]
- As one analyst noted: "Nvidia invests $100 billion in OpenAI, which then OpenAI turns back and gives it back to Nvidia" [3]
This is circular vendor financing where suppliers extend credit betting on OpenAI's growth. It's unusual and potentially fragile, but it's not "OpenAI needs $400B cash they don't have."
Zitron asks: "Does OpenAI have $400B in cash?"
The actual question: "Can OpenAI grow revenue from $13B to $60B+ to cover lease payments by 2028-2029?"
The first question is nonsensical given deal structure. The second is the actual bet everyone's making.
His core thesis - "OpenAI literally cannot afford these deals therefore fraud" - fails because he fundamentally misunderstands how the deals work. The real questions are about execution timelines and revenue growth projections, not about OpenAI needing hundreds of billions in cash right now.
There's probably a good critical piece to write about whether these vendor financing bets will pay off, but this isn't it.
Suffice it to say this is not the first time Ed Zitron has been egregiously wrong on both analysis and basic facts. It's not even the first time this week.
Finding an audience that wants to believe something, and then creating something that looks like justification for that belief is a method to gain notoriety, which may or may not lead to income. Works doubly well for issues that are "hot" in the public sphere, as you can tap into the supporters and the outraged.
Solid post, thanks for sharing. Zitron occupies his own echo chamber. I've seen some people share links to his articles with a smirk as a "proof" of how "bullshit LLMs are" — and I know for a fact that they have no understanding of LLMs or how to evaluate limitations, saying nothing about unit economics. Sadly, I don't think it's possible to reason with them.
To be clear, I do expect that the bubble will burst at some point (my bet is 2028/2029) — but that's due to dynamics between markets and new tech. The tech itself is solid, even in the current form — but when there's a lot of money to make you tend to observe repeatable social patterns that often lead to overvaluing of the stuff in question.
OpenAI is currently growing WAUs at ~122.8% annualized growth (down from ~461.8% just 10 months ago).
Assuming their growth rate is getting close to stabilizing and will be at ~100% for 3 years to end of 2028 - that'd be $104B in revenue, on 6.4B WAUs.
I wouldn't bank on either of those numbers - but Oracle and Nvidia kind of need to bank on it to keep their stocks pumped.
Their growth decay is around 20% every two months - meaning that by this time next year they could be closer to 1.2B WAUs than to 1.6B WAUs, and the following year closer to 1.4B WAUs than to 3.2B WAUs.
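As a rough illustration of what that decay implies, here is a toy projection. All inputs are the comment's own assumptions (800M WAUs today, ~14.3% growth over the last two months, the growth rate shrinking ~20% per period), not OpenAI disclosures:

```python
# Toy WAU projection under decaying growth.
waus = 800e6
growth = 100e6 / 700e6  # ~14.3% over the last two-month period (700M -> 800M)
decay = 0.80            # growth rate itself shrinks ~20% each period

for period in range(6):  # six 2-month periods = one year ahead
    growth *= decay
    waus *= 1 + growth

print(f"{waus / 1e9:.2f}B WAUs after 12 months")  # ~1.20B, vs 1.6B if growth held
```

Under these assumptions the sigmoid bites quickly: a year out you land near 1.2B weekly users rather than the 1.6B that sustained doubling would give.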
Impressive, for sure, but still well below Google and Facebook - revenue much lower and growth probably even.
They don't need to grow users if their ACV (annual contract value) increases or they grow their enterprise or API businesses.
And of course I might pay $20/month for ChatGPT and another $20/month for Sora (or some hypothetical future B2C app).
Codex is my current favorite code reviewer (compared to Bugbot and others), although others have had pretty different experiences. Codex is also my current favorite programming model (although it's quite reasonable to prefer Claude Code with Sonnet 4.5). I would happily encourage my employer to spend even more on OpenAI tools, and this is ignoring our API spend (also currently increasing).
OpenAI don't monetize the vast majority of their users yet. But the unit costs are really low, and once they start monetizing the free tier with ads, they'll be wildly profitable.
"OpenAI cannot actually afford to pay $60 billion / year" the article states with confidence. But that's the level of revenue they'd be pulling in from their existing free users if monetized as effectively as Facebook or Google. No user growth needed.
And it seems this isn't far off, given the Walmart deal. Of course they'll start off with unobtrusive ad formats used only in situations where the user has definite purchase intent, to make the feature acceptable to the users, and then tighten the screws over time.
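A quick sanity check on the "$60 billion / year from free users" framing. Assumed figures: 800M weekly users (from the thread) and the article's $60B target; for comparison, Meta's worldwide ad ARPU has historically been on the order of $40-50/year, though its US figure is far higher:

```python
# Implied yearly ad revenue per user if OpenAI's free base had to cover $60B.
target_revenue = 60e9   # the "$60 billion / year" figure from the article
weekly_users = 800e6    # reported WAUs (an approximation)

arpu = target_revenue / weekly_users
print(f"${arpu:.0f} per user per year")  # $75 -- between Meta's global and US ARPU
```

So "monetized as effectively as Facebook" means roughly $75/user/year here, which is above Meta's global average but well below what it extracts from US users.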
Except Google and Facebook had locked-in numbers, at times of virtually no competition, before they started scaling up ads. If OpenAI starts scaling ads next year they will churn people at a rate that will not be offset by growth, and will either plateau or, more likely, lose users, as their product has no material edge over alternatives in the market.
I disagree with Zitron's analysis on many points, but I don't see OpenAI achieving the numbers it needs. Investors backing it must have seen something in private disclosures to be fronting this much money. Or more precisely, I need to believe they have seen something and are not fronting all this money just based on well wishes and marketing.
Most people don't choose by blind taste test. How intrusive do those ads have to get before it overwhelms habit and familiarity? OpenAI might be betting on enough of its 800m and growing weekly users sticking around long enough to moot churn until a liquidity event pays everyone off.
> His "$400B in next 12 months" claim treats OpenAI as paying construction costs upfront. But OpenAI is leasing capacity as operating expense - Oracle finances and builds the data centers [1].
It is bagholders all the way down[1]! The final bagholder will be the taxpayer/pension holder.
It's going to be 2008 bailouts again, but much worse.
These companies are doing all sorts of round-tripping on top of propping up the economy on a foundation of fake revenue, on purpose, so that when it all comes crumbling down they can go cry to the feds: "Help! We are far too big to fail, the fate of the nation depends on us getting bailed out at taxpayer expense."
I feel like writing that down somewhere because that's pretty close to how the bailout will be pitched. "If you don't bail us out then our adversaries will get to AGI first and it will be game over". Very clever of them.
The capital cost is even less insane than the fact that power utility companies are the real constraint on this industry.
North American grids are starving for electricity right now.
Someone ought to do a deep dive into how much actual total excess power capacity we have today (that could feasibly be used by data center megacampuses), and how much capacity is coming online and when.
Power plants are massively slow undertakings.
All these datacenters deals seem to be making an assumption that capacity will magically appear in time, and/or that competition for it doesn't exist.
This might be slightly off topic, but after the Sora 2/anime controversy I looked up how much it costs to make your average anime - it turns out that top-tier 26-episode anime shows like Chainsaw Man and Delicious in Dungeon, or box office movies like Demon Slayer, cost between $10-20M to make. Now I don't know how much they spent on Sora 2, but I'd imagine tens of billions. For that money, you could make a thousand such shows.
This is full of conjecture, and somewhat unrelated to LLMs but not to their economics: I wonder how the insane capex is going to be justified. Even if AI becomes fully capable of replacing salaried professionals, they'll still end up paying much, much more than it would have cost to just hire those armies of professionals for decades.
Can you even make "shows" with Sora 2? I haven't used it, but every time I hear about it it's in the context of making "shorts". Making shows would require a technological leap from that point.
A consequence of tools like Sora and Google Flow is that there will be an increase in amateurs creating professional quality content for comparatively cheap. So a thousand such shows (probably many more) isn't in the realm of the impossible!
I think that kind of thing is one of the main problems with the 'AI bubble'. It's probably a misallocation of capital, spending billions on energy sucking data centers people don't especially want which leaves less money for things like paying creatives to make anime shows people do want.
In properly functioning capitalism entrepreneurs would look at what they can make a profit at due to people paying for it because they want it and invest in that but the hype wave seems to be causing an over allocation to heavily loss making activities.
And I say that as a believer that AGI is on the way but that it will come from smart computer folk designing better systems, not from more gigawatts of data centers.
That seems like a lot of money. How quickly can sustainable capacity be built up in terms of building power plants, data center construction, silicon design and fabrication, etc.? Are these industries about to experience stratospheric growth, followed by a massive and painful adjustment, or does this represent a printing press or industrial revolution like inflection point?
Would anyone like to found a startup doing high-security embedded systems infrastructure? Peter at my username dot com if you’d like to connect.
I don't think it's AGI, but rather video production. OpenAI wants to build the next video social network / ads / tv / movie production system. The moat is the massive compute required.
Is there any indication they are actually working on this and Altman is any good at pursuing this goal? I'm seriously asking, please inform the uninformed.
My impression is that I hear a lot more about basic research from the competing high-profile labs, while OpenAI feels focused on their established stable of products. They also had high-profile researchers leave. Does OpenAI still have a culture looking for the next breakthroughs? How does their brain trust rank?
Huh, my read is exactly the opposite: Altman wants to be a trillionaire and isn't picky about how he gets there. If AGI accomplishes it, great, but if that's not possible, "just" making a megacorporation which permanently breaks the negotiating power of labor is fine too. Amodei is the one who I think actually wants to build AGI.
Then why start a company where you have no equity? (Yes I believe he financially benefits from OpenAI, but the more straightforward way would be OpenAI equity)
I think his initial belief was that OpenAI would be a research organization that would get acquihired or license its tech out, and then ChatGPT unexpectedly happened. Notice how ever since then he's been trying to get the nonprofit status cancelled or evaded.
I think your read is right. There are a few people who want to be trillionaires and aren't too picky about how to get there: Elon Musk, Sam Altman, Trump, Larry Ellison, Peter Thiel, Putin. Maybe Bezos and Zuckerberg.
Of course, there wouldn't be many people who don't want to be trillionaires. Rare exceptions[1]. But these are the people with means to get there.
I definitely would not want to be a trillionaire yeah. Having a million or so would be nice but more and you get roped into all kinds of power play and you have to get security goons with you all the time to avoid getting kidnapped. I'd much rather be anonymous.
> This also assumes that intelligence continues to scale with compute which is not a given.
Isn’t it? Evidence seems to suggest that the more compute you throw at a problem, the smarter the system behaves. Sure, it’s not a given, but it seems plausible.
But human brains are small and require far less energy to be very generally intelligent. So clearly, there must be a better way to achieve this AGI shit. Preferably something that runs locally in the palm of your hand.
Anthropic seems more comfortable using TPUs for overflow capacity. The recent Claude degradation was largely due to a bug from implementation differences with TPUs and from their writeup we got some idea of their mix between Nvidia and TPU for inference.
I'm not sure if OpenAI has been willing to deploy weights to Google infrastructure.
How much does the capex model of a datacenter change when the goal is 100% utilization, with no care for node uptime beyond capex efficiency/hardware value mainenance?
I wouldn't be surprised if the cost came down by at least one order of magnitude, two if NVidia and others adjust their margin expectations. If the bet is that OpenAI can ship crappy datacenters with crappy connectivity/latency characteristics in places with cheap/existing power - then that seems at least somewhat plausible.
OpenAI burning 40 billion dollars on datacenters in the next 1 year is almost guaranteed. Modern datacenter facilities are carefully engineered for uptime, I don't think OpenAI cares about rack uptime or even facility uptime at this scale.
People who have run these numbers still want tier IVish. I haven't seen evidence of crypto mining "tier zero" datacenters being converted to AI despite the obvious advantages.
He's not angry; it's an angsty way of writing. A lot of us used to write like that as teenagers. There was a time around 5 years ago when every best-selling book raced to have "fuck" or "vagina" in the title.
Why are you -not- angry at all of this insanity? I feel the same way as him: hype has blown the bubble bigger and bigger, and it's just a matter of time until it pops and causes huge amounts of pain.
The PC revolution in this analogy is local models? However good and fast they get locally, the same model will run 10-1000x faster on a dedicated device. I think cloud models will be in demand for a long time even before you factor in efficiencies of scale.
Also there will be tons of loads that are natively server side and don't make sense to use a local model for (or where using server side models retains more control).
It is mathematically certain that OpenAI is burning investor money. Even if all 800 million weekly users paid a $20 monthly subscription, it would take over two years to collect $400 billion. In reality it would take decades.
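As a back-of-envelope check on that claim, using the thread's figures (800M weekly users, a $20/month price, the $400B target) as assumptions:

```python
# Months of subscription revenue needed to reach $400B, ignoring costs and churn.
users = 800e6
price_per_month = 20
target = 400e9

monthly_revenue = users * price_per_month  # $16B/month
months = target / monthly_revenue
print(f"{months:.0f} months")  # 25 months -- and only at 100% paid conversion
```

Even in the fantasy case where every weekly user converts to a paid plan, it's roughly two years of gross revenue; at realistic single-digit conversion rates, the "decades" intuition holds.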
I was explaining the problem of lagging benefits for the huge expenditures for AI research and infrastructure this morning to my wife (my RC airplane flying club is an hour round trip drive so we really had time to get into it). She is not much interested in tech but she found the story of over investment and what it might do to our economy very interesting indeed.
There are many people in the USA who don’t overly care about technology but might care a lot about the economic risks of overly aggressively chasing strong AI capabilities.
I am forwarding this article to a few friends and family members.
>Why can't OpenAI keep projecting/promising massive data centre growth year after year, fail to deliver, and keep making Number Go Up?
Because eventually, Nvidia will run out of money, so the incestuous loop between Nvidia funding AI entities, who then use those funds to buy Nvidia chips, artificially propping up Nvidia's stock price, will eventually end and poof.
Competing forces are the market's insatiable need for growth every quarter, and other countries also chasing AIs and will not slow down if other countries, like the US, do slow down.
Let's assume that estimate is good. For some perspective and context, the last finalized DOD budget (2023) was $815B, and, with supplementals, turned into about $852 billion.
AGI is absolutely a national security concern. Despite it being an enormous number, it'll happen. It may not be earmarked for OpenAI, but the US is going to ensure that the energy capability is there.
This may well be the PR pivot that's to come once it becomes clear that taxpayer funding is needed to plug any financing shortfalls for the industry - it's "too big to let fail". It won't all go to OpenAI, but be distributed across a consortium of other politically connected corps: Oracle, Nvidia/Intel, Microsoft, Meta and whoever else.
The top six US tech companies are generating ~$620 billion per year in operating income (likely to be closer to $700 billion in another 12-18 months). They can afford to spend $2 trillion on this over the next decade without missing a beat. Their profit over that timeline will plausibly be $8 to $10 trillion (and of course something dramatic could change that). That's just six companies.
Fears of an AI bubble originate from the use of external financing needed to pay for infrastructure investments, which may or may not pay off for the lenders.
These 6 companies are using only a small portion of their own cash reserves to invest, and using private credit for the rest. Meta is getting a $30 billion loan from PIMCO and Blue Owl for a datacenter [0], which they could easily pay for out of their own pocket. There are also many datacenters that are being funded through asset-backed securities or commercial mortgage-backed securities [0], the market for which can quickly collapse if expected income doesn't materialize, leading to mortgage defaults, as in 2008.
> but the US is going to ensure that the energy capability is there.
We're doing a pretty shit job of ensuring that today. Capacity is already intensely strained, and the govt seems to be decelerating investment into power capacity growth, if anything
How the hell does one spend $2 Billion on sales and marketing!!!
I really don’t get it. If AI is hot and profitable, can’t they fund their own expansion?
Either the $400B will be a piece of cake or they are wildly overestimating the demand and have an impractical business model.
Sure, there are those who would like a car wash where the walls are made of crystal, the soaps are organic and artisanal, and only ultra-pure water is used to wash the car. But that wouldn't be a profitable business, no matter how much money is invested. No amount of marketing will change that.
At this point we all know this is just a massive bubble. I'm done paying attention to it really. I'm prepared for all my investments to go down in the next 1-5 years. If you're nearing retirement now is the time to cash out. Yes, investments could go up in a value a lot until the correction, but I don't really think that is worth the risk.
If you're reading this article and wondering "When is this house of cards going to collapse!?", a little advice, gained at a high price to myself: you can waste years waiting for it to collapse, 95% of the time, it never will. I never thought Uber or Tesla would survive COVID. I'd have $450K in bitcoin if I held onto the "joke" amount I bought for $200 in 2013.
Things that make me skip this specific narrative:
- There's some heavy-handed reaching to get to $400B next 12 months: guesstimate $50B = 1 GW of capacity, then list out 3.3 gigawatts across Broadcom chip purchases, Nvidia, and AMD
- OpenAI is far better positioned than any of the obvious failures I foresaw in my 37 years on this rock. It's very, very, hard to fuck up to the point you go out of business.
- Ed is repeating narratives instead of facts ("what did they spend that money on!? GPT-5 was a big let down!" -- i.e. he remembers the chatgpt.com router discourse, and missed that it was the first OpenAI release that could get the $30-50/day/engineer in spend we've been sending to Anthropic)
Yeah, I re-evaluated my misconceptions around the same time. As another datapoint: the infamous Tether/Bitfinex operation, which was blatantly cheating and insolvent (by any normal definition) for years, while reasonable people kept giving them less than a year until bankruptcy and jail. But here we are: Tether is still printing tokens like a madman, no one is in jail, and every analyst was proven both wrong and right at the same time. Apparently a company can be insolvent, cheating, and breaking the law for a looong time, with zero repercussions. And no one bats an eye.
I suspect OpenAI is deeply in the red today, and any normal small company would have gone out of business years ago. But they are too big to fail and will continue working this way. They may even transition to being profitable later on, and people will retcon that they always were.
At this point I just wonder what the exact mechanism will be by which Sam redirects his negatives onto the regular public like us here. Bailouts? Irresponsible government investments? Banks overreaching and then getting bailouts? Something new?
Couple points where our principled bear-ishness diverges and I'm bullish:
- I think it overstates the case to even get to the $150B number.
- $50B/GW sounds very wrong to me.
- They don't need anything, 2 of the 3 are GPUs-for-equity, and Broadcom isn't going to insist OpenAI make a $100B 100% full purchase commitment up front, I'd be surprised at $1B.
- All I see is undercapacity in the sector after 2-3 years of feverish buildouts, it's hard for me to foresee, in the short term, a place where market signals can't override deals in principle.
(random ramble: this reminds me somewhat of when Waymo "committed" to... 80K Jaguar and Pacifica purchases? In 2016? I thought that meant they were definitely going to scale in the short term, hell or high water. In retrospect, I bet they're at a quarter of that 9 years later, and yet entirely real.)
My search habits have evolved quite fast - when I search for something now I first ask ChatGPT for quick results, which gives me pointers I then drill down on. Google's revenue for 2024 was $350 billion. I know it's not all Google Ads, but a lot of it is. When you follow a link from ChatGPT, it always has a utm_source=Chatgpt in it, so companies are quickly learning how important getting linked there is.
I'm not saying there's no bubble, and I personally anticipate a lot of turmoil in the next year, but monetising that traffic would be the most primitive way of earning a lot of money. If anyone is a dead man walking, it's Google. For better or worse, ChatGPT has become to AI what Google was to search, even though I think Gemini is as good or even better. I also have my own doubts about the value of LLMs, because I've already experienced a lot of caveats with the stuff they give you. But at the same time, as long as you don't believe it blindly, getting started with something new has never been easier. If you don't see value in that, I don't know what to tell you.
> For better or worse, ChatGPT has become to AI what Google was to search, even though I think Gemini is as good or even better.
Google definitely has the better model right now, but I think ChatGPT is already well on its way to becoming to AI what Google was to search.
ChatGPT is a household name at this point. Any non tech person I ask or talk about AI with it's default to be assumed it's ChatGPT. "ChatGPT" has become synonymous with "AI" for the average population, much in the same way "Google it" meant to perform an internet search.
So ChatGPT already has the popular brand. I think people are sleeping on Google though. They have a hardware advantage and aren't reliant on Nvidia, and have way more experience than OpenAI in building out compute and with ML, Google has been an "AI Company" since forever. Google's problem if they lose won't be because of tech or an inferior model, it will be because they absolutely suck at making products. What Google puts out always feels like a research project made public because someone inside thought it was cool enough to share. There's not a whole lot of product strategy or cohesion across the Google ecosystem.
Vast sums of money are being invested in it (obviously in the hope of AGI), but I'm not sure the world would notice if OpenAI or current LLM products just disappeared tomorrow.
I don't think that AGI is necessary for LLMs to be revolutionary. I personally use various AI products more than I use Google search these days. Google became the biggest company in the world based on selling advertising on its search engine.
It's going to perhaps sound nuts, but I'm beginning to wonder if America is actually a giant Ponzi scheme.
I've been thinking about American exceptionalism - the way it is head and shoulders above Europe and the rest of the developed world in terms of GDP growth, market returns, startup successes, etc. - and what might be the root of this success. And I'm starting to think that, apart from various mild genuine effects, it is also a sequence of circular, self-fulfilling prophecies.
Let's say you're a sophisticated startup and you want some funding. Where do you go? The US, of course - it has the easiest access to capital. It does so presumably because US venture funds have an easier time raising money. And that's presumably because of their track record of making money for investors - real, or at least perceived. They invest in these startups and exit at a profit, because US companies have better valuations than elsewhere, so at IPO investors lap up the shares and the VCs make money. It's easy to find buyers for US stocks because they're always going up. In turn, they're going up because, well, there are lots of investors. It's much easier to raise billions for data centres and fairy dust because investors are in awe of what can be done with the money, and anyway the line always goes up. Stocks like TSLA have valuations you couldn't justify elsewhere. Maybe because they will build robot AI rocket taxis, or maybe because the collective American allure means valuations are just high.
The beauty of this arrangement is that the elements are entangled in a complex web of financial interdependency. If you think about these things in isolation, you wouldn't conclude there's anything unusual. US VC funding is so good because there's a lot of capital - lucky them. This thought of circularity only struck me when trying to think of the root cause - the nuclear set of elements that drive it. And I concluded any reason I can think of is eventually recursive.
I'm not saying America is just dumb luck kept together by spittle, of course there are structural advantages the US has. I'm just not sure it really is that much better an economic machine than other similar countries.
One difference to a Ponzi scheme is that you might actually hit a stable level and stay there rather than crash and burn. So it's more like a collective investment into a lottery. OpenAI might burn $400bn and achieve singularity, then proceed to own the rest of the world.
But I can't shake the feeling that a lot of recent US growth is smoke and mirrors. After adjusting for tech, US indices didn't outperform European ones post-GFC, IIRC. Much of this year's growth is AI, financed presumably by half the world and maintained by sky-high valuations. And no one says "check" because, well, it's the US and the line always goes up.
I don't think you're describing a Ponzi scheme. More like a merry-go-round. Seems pretty common, just turned up to 11.
Of course it can stop, and a little history reading will show that it has always stopped, but it can take a long time.
If anything, I fear the AI hype + the orange idiot you put in charge, can fuck you up much faster than otherwise. OTOH, Trump is also a symptom, showing that things were not going great.
Elon can give them 400B and he would still have a net worth of 100B and from that he can again get back to his former net worth in a few years. The world is not fair.
> Selling 100B worth of stocks for anything close to 100B is not possible. That volume would mini-crash the entire exchange
Nasdaq trades about half a trillion dollars a day [1]. Even if Musk were an idiot and dumped $100bn in one day, it would crash Tesla's stock price, not the Nasdaq.
If Musk wanted to give OpenAI $100bn, the best way to do it would be to (a) borrow against his shares or (b) give OpenAI his (non-voting) shares.
> Even if you think that OpenAI’s growth is impressive — it went from 700 million to 800 million weekly active users in the last two months — that is not the kind of growth that says “build capacity assuming that literally every single human being on Earth uses this all the time.”
I’d argue the other way around: 100M growth in two months suggests literally every single human being on Earth would benefit from using this all the time, and it’s just a matter of enabling them to.
Beware the sigmoidal curve, though. Growth is exponential till it’s not.
> 100M growth in two months suggests literally every single human being on Earth would benefit from using this all the time, and it’s just a matter of enabling them to.
This doesn’t make any sense. Popular is not the same as useful. You’d have a more compelling argument if you included data showing that all this increased LLM usage has had some kind of impact on productivity metrics.
Instead, some studies have shown that LLMs are making professionals less productive:
https://metr.org/blog/2025-07-10-early-2025-ai-experienced-o...
>This doesn’t make any sense. Popular is not the same as useful.
If you are using a service weekly for a long period, you find it useful.
>You’d have a more compelling argument if you included data showing that all this increased LLM usage has had some kind of impact on productivity metrics.
Why would you need to do that? Why is a vague (in this instance) notion of 'productivity' the only measure of usefulness? ChatGPT (not the API, just the app) processes over 2.6B messages every single day. Most of these (1.9B) are for non-work purposes. So what 'productivity' would you even be measuring here? Do you think everything that doesn't have to do with work is useless? I hope not, because you'd be wrong.
If something makes you laugh consistently, it's useful. If it makes you happy, it's useful. 'Productivity' is not even close to being the be-all and end-all of usefulness.
[0] https://cdn.openai.com/pdf/a253471f-8260-40c6-a2cc-aa93fe9f1...
This is free users though. The number of paid users is significantly less (like any other freemium product). Finding it useful enough to use weekly doesn't mean finding it useful enough to pay continuously for use.
Free users can still be monetized or else Google Search would be a money loser. Open AI's free users are at the moment not monetized in any way. It's clear they plan to change this, given recent hirings and the value they'd need to extract from their free active userbase to be profitable is pretty low, so it's not really a problem.
> If you are using a service weekly for a long period, you find it useful.
Like sugar?
Yes, sugar is useful.
> If you are using a service weekly for a long period, you find it useful.
Do alcoholics find their daily usage of alcohol really useful? You can of course make a case for this, but it's quite a stretch. I think people use stuff weekly for all sorts of reasons besides usefulness for the most common interpretation of the word.
> Do alcoholics find their daily usage of alcohol really useful?
Of course they do. They use it to get drunk and to avoid withdrawals. You're trying to confuse useful with productive. Being productive does make a difference, though, because if something isn't productive it doesn't generate enough cash to buy more of it - you have to pull the cash from somewhere else.
So I think your feeling is correct, although your argument is wrong. Buying gas for your car is productive, because it gets you to work which gets you money to pay for more gas and other things (like alcohol.) Buying alcohol is not productive, and that means that people can't pay too much for it.
Productivity isn't the barrier of whether people can 'pay too much' for something or not. Gaming is one of the biggest industries around.
The difference is that besides being useful, alcohol is actively harmful.
Exactly, for how many people is Instagram/TikTok and friends actually useful? Sure, they're popular and also used by billions, but would every human on earth benefit from using those services?
I certainly benefited from deleting them!
You have inspired me to install them and do that!
I finally used it for a couple of little things, but mostly as a fuzzier replacement for search, where it does do pretty well. Of course, nowadays classic search is shambolic, so it's kind of like a mediocre prime-aged boxer fighting a 70-year-old champion or something.
Anyway, I bet it will be really useful for cool stuff if it can ever run on my laptop!
> nowadays classic search is shambolic
Not sure how much one should expect or deserve switching from a free search engine to a free chatbot.
If you care about search, use Kagi [1].
[1] https://kagi.com
idk, "it will be really useful" is a bit too fuzzy and vague -- how do I infer anything about the numbers related to return-on-investment?
of course, it's better than "this is so crap no one would buy it" -- but investors want to know: "if I put X dollars in now, do I get 10*X dollars or X/10 dollars back?"
it's weird that all these comments on "usefulness" don't even attempt to explain whether the numbers add up
OpenAI's bottleneck first shifted from GPUs to energy. Next it will shift from energy to meatbags. I'm sure they will figure out some way to produce more of us to keep the growth rate going.
Eventually, we can replace human consumers with LLM agent consumers, and things can scale indefinitely.
You too can qualify as an "ugly bag of mostly water" just give us your CC number!
hmm I bag to differ
Not just meatbags, but meatbags with _money_.
> 100M growth in two months suggests literally every single human being on Earth would benefit from using this all the time, and it’s just a matter of enabling them to.
For OpenAI I think the problem is that if eventually browsers, operating systems, phones, word processors [some other system people already use and/or pay for] integrate some form of generative AI that is good enough - and an integrated AI can be a lot less capable than the cutting edge to win, what will be the market for a stand alone AI for the general public.
There will always be a market for professional products, cutting edge research and coding tools, but I don’t think that makes a trillion dollar company.
>100M growth in two months suggests literally every single human being on Earth would benefit from using this all the time
In what way does it suggest that? What level of growth is evidence that a product is universally useful?
About 10% of the total world population is using it on a weekly basis. Take out those too old, too young, or illiterate (technically or otherwise). Now subtract the people without reliable internet and a computer or phone. That 10% gets a whole lot bigger.
That seems like pretty strong evidence that it is generally, if not universally, useful to everyone given the opportunity.
My work is apparently paying for seats in multiple AI tools for everybody. There's a corporate mandate that you "have to use AI for your job". People seem to mostly be using it for (a) slide decks with cringe images and (b) making their PRs look more impressive by generating a bunch of ineffective boilerplate unit tests.
It’s interesting with the whole quote: “OpenAI has 800 million weekly active users, and putting aside the fact that OpenAI’s own research (see page 10, footnote 20) says it double-counts users who are logged out if they use different devices”
The number may not actually be too accurate - but I imagine it’s also paired with what another commentator has said - OpenAI is basically giving their product to companies and the companies are making the employees log in and use it in some way - it’s not natural growth in any sense of the word.
Like Microsoft 365 ... CoPilot ... something.
Only if you believe popularity is the same as usefulness.
idk, even that "usefulness" won't help with the core question:
"do the numbers add up?"
this article is about NUMBERS regarding return-on-investment / etc
"useful" is so vague that it's too 'useless' for the discussion here... I'm not sure why everyone here is parroting it like a gpt hallucination
Utility and destructive addictiveness are two very different things. You could argue this way about opium back when recreational consumption was widespread.
I’m sorry but I don’t see much logic in an argument that boils down to “A lot of people use it and that means it would also be useful to the people who don’t use it”. Maybe the people who don’t use it have an actual reason not to use it.
Seriously! These two things are laughably far apart. What on earth kind of leap of logic is this?
I just stopped paying for ChatGPT this month, once I found out that they made the projects available to free users too. The free version is just as good and I can shuffle between Grok, Mistral, Deepseek and Gemini when I run out of free quota.
So maybe giving away more and more free stuff is good for growth? The product is excellent; ChatGPT is still my favorite, but the competition isn't that far behind. In fact, I've started liking Grok better for most tasks.
> 100M growth in two months suggests literally every single human being on Earth would benefit from using this all the time, and it’s just a matter of enabling them to.
Only about 5% see enough value to drop $20 a month... It's like VR and AR, if people get a headset for free they'll use them every now and then, but virtually nobody wants to drop money on these.
LLMs have already been commodified
>Only about 5% see enough value to drop $20 a month...
Is that a real number? I'm shocked it's that high. I figured paying customers would be well under 1%.
>100M growth in two months suggests literally every single human being on Earth would benefit from using this all the time
I'm not sure I understand the reasoning. Lots of people use a thing, so everyone should?
I find myself using it less over time. It’s still useful but once you’ve been using it for a while you get to know best when not to use it.
Reality check. UNICEF and the WHO say there are 2 billion people without access to clean drinking water. They have slightly more pressing issues than trying to log into chatgpt. Only slightly.
The blockchain/bitcoin bros tried the same marketing spin. "Bitcoin will end poverty once we get it into everyone's hands." When that started slipping, NFTs will save us all.
Yeah. Sure. Been there. Done that. Just needs "more investment"... and then more... then more... all because of self reported "growth".
"You told me I could find water in the well 20km North, but there wasn't any."
"Ah, you're absolutely right! Have you tried looking in the shop?"
Yeah AI will put a lot of people out of a job, it will bring people into poverty not out.
Nearly six billion people are using mobile phones, and most of those are smartphones now. There's no reason to think extending that low-cost utility device to the next billion adults isn't a good idea (so long as the cost isn't coming from their pocket, i.e. it should be subsidized). These are not at all mutually exclusive goals.
The latest LLMs are extraordinarily useful life agents as is. Most people would benefit from using them.
It'd be like pretending it's either water or education (pick one). The answer is both, and you don't have to pick one or the other in reality at all. The entities trying to solve each aspect are typically different organizations anyway.
"Most people would benefit from using them"
hmm maybe that "would benefit" is a bit too vague?
Spare me the rehash of marketing hype rhetoric. It's either a white collar tool to avoid doing boring work or great to identify targets on a battlefield. That's it and both are still questionable. This techno-fetishism of "New technology good, ugga-ooga-booga." 99% of the blind evangelists just spew that same slop just because of fomo of making a few extra shekels by proving "I'm a true believer, and so should you by buying my AI course."
Someone who doesn't have access to clean water and stable food will not benefit from this, nor will the powers that be who "make it available" actually improve their lives. It's already apparent that the tech nerds of the late 90s and early 2000s were NOT the good guys. Being good at computers does not make you a good person. The business model for AI makes zero sense when you have real-world experience. Without massive, complete social and economic changes, it won't work out. And for those championing that change, what makes you think you'll be the special comrade brought up the ranks to benefit as the truest of believers?
Sorry, but this shit is really starting to rub me the wrong way, especially with the massive bubble of investment growing around all of it. This won't be good. The housing collapse sucked. The same pattern is emerging, and I'm getting a bad, bad feeling this will make the housing collapse look like nothing due to the long-term ramifications.
It's a wonderful way to find new tools and materials. Its knowledge of materials is as encyclopedic as it is for everything else.
I use it as a much more efficient version of Wikipedia for quickly finding the basics on many design options in software and physical artifacts. Also great at finding specific words in English or other languages. Unlike a thesaurus it pulls a lot of background on each word including subjective shades.
I could go on, and on.
I think the usefulness of these models is very different for different people. As a way to quickly start digging into novel problems, with aspects that require discovery, I don't know of anything that could possibly compare. No human being or web resource comes close.
As many people relate, it can save me time, but the greater value is it makes my time far more productive, and I tackle problems without hesitation I wouldn't have ever had the time to even think about before.
I could not imagine going back to Google for instance. Stone-age. I still use Kagi, but often for its quick AI question responses. And I still use Wikipedia, and more specific resources.
It's also useful if you're blind; I know this from personal experience. The ability to recognize objects, read package labels, read BIOS and boot menus, etc. has been very useful to me. Claiming that the only things it's good for are white-collar work or battlefield targeting isn't accurate. In spite of how useful I've found it, I'm not claiming it's going to be a net positive; I have no idea how this will all turn out.
I don't really think people understand there are all sorts of non-chatgpt users that pay OpenAI thousands of dollars PER DAY - (>100k customers like this). They're not going to publish the data, but agentic flows make ChatGPT look like a cereal box.
Individuals who personally spent hundreds of thousands of dollars a year running agents? I would love to see one example.
I have 3 separate accounts on max plans. One of my co-workers has 8. yesterday
orgs
The number of users is irrelevant. Revenue is only slightly relevant. The only thing that matters is profit. It would even be a decent thing if they could show marginal profit per user.
That is completely wrong for this stage of a company. The ability to make profit in the future is important. Making profit while growing is not.
So exactly how are they going to make a profit if each user causes a marginal loss?
Any idiot can sell a dollar’s worth of value for 90 cents.
This was an interesting article, but he completely misses the only real threat to either OpenAI or Anthropic.
Open source models like deepseek and llama 3 are rapidly catching up; if I can get 90% of the functionality for significantly less (or free, if I want to use my own GPU), what value does OpenAI really have?
I'm a paid subscriber of OpenAI, but it's really just a matter of convenience. The app is really good, and I find it's really great for double-checking some of my math. However, I don't know how they're ever going to become a corporate necessity at the prices they'll need to bill at to become profitable.
Then again, OpenAI is obviously run by some of the smartest people on planet Earth, with other extremely smart people giving them tons of money, so I could be completely wrong here.
> Open source models like deepseek and llama 3 are rapidly catching up
Catching up for how long though? Large models are very expensive to train, and the only reason any of them are open is because ostensibly for-profit companies are absorbing the enormous costs on behalf of the open source scene. What's the plan B when those unprofitable companies (or divisions within companies) pull the ladder up behind them in pursuit of profits?
> Catching up for how long though? Large models are expensive to train and the money has to come from somewhere, and for now that somewhere is the charity of ostensibly for-profit companies which release their models for free and just eat the losses. That doesn't seem very sustainable to me.
Catching up enough to execute a business successfully on the open source (and free as in free beer) alternatives to OpenAI. Once you have a single model that works, you can bootstrap your own internal ML infra for just that one use case and grow from there.
Martin Casado from a16z stated an opinion that 80% of US startups are likely using less expensive open models, usually from China. Chamath P. on the All In Podcast said that his company is using Chinese models, but hosted in the US and he cited his company as a huge inference user.
Important to distinguish that they're likely using them for internal workflows (agents, etc) where the scope is well defined and they can tune their prompts and evals to accommodate a lower-performance model.
Nobody is advocating switching their coding agents to open source (yet), but that's not the bulk of the tokens in companies that have automated workflows integrated into their business.
Listen to one of the Zuck interviews, he explains this everywhere.
To paraphrase, Meta is not releasing open models out of the goodness of their heart, they're doing it as part of their business strategy.
- They already have all the eyeballs.
- They want to be able to run the best model they can for those eyeballs.
- It doesn't make a difference to them whether others have a similar model or not. They don't care.
- Ergo, they release their weights, hoping someone can help them improve it. Which has happened: see quantization.
LLMs can be trained using a distributed approach [1]. So what’s stopping the OSS community from setting up a pool in the likes of seti@home and boinc?
[1] I asked Gemini and apparently it’s quite common
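For the data-parallel core of the idea, here's a toy sketch (illustrative only; every name and number below is made up, and a real volunteer pool would also have to handle stragglers, cheaters, and slow home uplinks):

```python
# Toy sketch of data-parallel training: each worker computes a gradient
# on its own data shard, the pool averages the gradients, and everyone
# applies the same update to their copy of the weights.
def average_gradients(worker_grads):
    n = len(worker_grads)
    dim = len(worker_grads[0])
    return [sum(g[i] for g in worker_grads) / n for i in range(dim)]

def sgd_step(weights, grad, lr=0.1):
    return [w - lr * g for w, g in zip(weights, grad)]

grads = [[0.2, -0.4], [0.4, 0.0], [0.0, -0.2]]  # one gradient per worker
weights = sgd_step([1.0, 1.0], average_gradients(grads))
print(weights)
```

The synchronization step is the hard part over consumer internet: every round means shipping gradients roughly the size of the model, which is why naive pooling doesn't scale the way SETI-style embarrassingly parallel work does.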
And then imagine the Chinese doing distributed training at home. What outcomes could they achieve!
I think what you're missing here is that China would be (is?) happy to fund the development, because it's in their national interest and necessary in order for their companies to stay competitive, so long as there are trade restrictions on chips. Another framing for this is that China and certain other entities (e.g. content distribution channels like Meta, Youtube) have a strong incentive to 'commoditize their [AI] complement' (https://gwern.net/complement).
I don’t think that’s the point he is making. The argument to me is looking at the numbers and grounding them in real life. It takes time to build data centers, it takes people to run them. The article makes the argument that the timelines are not feasible.
Meanwhile, the size of the money asks, the short time frames, and the unclear demand backing them are huge business problems. Self-service (llama on my own GPUs) is, I suppose, just another way to ask where the demand is coming from to justify billions in new money. Something smells ...
> free if I want to use my own GPU
The hardware required to run something like deepseek / kimi / glm locally at any speed fast enough for coding is probably around $50,000. You need hundreds of gigabytes of fast VRAM to run models that can come anywhere close to openai or anthropic.
$50k would be the cost to run it un-quantized; $10k could get you, for example, a 4x5090 system that would run the 671B Q4 model, which is 90% as good, and that was the OP's target.
which 671b quants can fit into 96GB VRAM? Everything I’m aware of needs hundreds at least (e.g. https://apxml.com/models/deepseek-r1-671b).
5090 is 32 GB so it's 128GB, not 96.
128 is still not 300. Something like 4x 6000 blackwell is the minimum to run any model that is going to feel anything like claude locally.
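The memory arithmetic behind these numbers is easy to napkin out (weights-only rule of thumb; the 20% overhead factor for KV cache and activations is an assumption, not a vendor spec):

```python
# Rough VRAM needed to host a model: parameter count times bytes per
# weight, plus an assumed ~20% overhead for KV cache and activations.
def vram_gb(params_b, bits_per_weight, overhead=1.2):
    return params_b * bits_per_weight / 8 * overhead

full = vram_gb(671, 16)  # FP16: ~1600 GB
q4 = vram_gb(671, 4)     # 4-bit quant: ~400 GB
print(round(full), round(q4))
```

So even at Q4, a 671B model wants roughly 400 GB, which is why 128 GB of 5090s doesn't get there and a multi-RTX-6000-class box is closer to the floor.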
To my deep disappointment the economics are simply not there at the moment. Openrouter using only providers with zero data retention policies is probably the best option right now if you care about openness, privacy and vendor lock-in.
For local use and experimentation you don't need to match a top of the line model. In fact something that you train or rather fine-tune locally might be better for certain use cases.
If I was working with sensitive data I sure would only use on prem models.
> I'm a paid subscriber of open AI, but it's really just a matter of convenience. The app is really good, and I find it's really great for double checking some of my math.
That right there is why they are valuable. Most people are absolutely incompetent when it comes to IT. That's why no one you meet in the real world uses ad blockers. OpenAI secured its position in the mind share of the masses. All they had to do to become the next Google was find a way to force ads down the throats of their users. Instead they opted for the inflated-bubble, scam-the-investors strategy. Rookie mistake.
The mind share openAI has is next to none.
The reality is they're a paid service, and even if they 10x their prices they're still in the red.
Consumers do actually care about price. They will easily, and quickly, move to a cheaper service. There's no lock in here.
There is talk of 800 million weekly users or whatever. But the real question to me is how much actual disposable income they have, or willingness to spend it on an expensive AI subscription.
Not true. For the non-tech crowd, ChatGPT is the AI. There are a few people using Grok or Gemini; few outside the coding crowd would know Anthropic.
This is just not true. They don't even know what an OpenAI is, they just know what chat is. It's a chat window.
You make another chat window and you're golden.
Ok, make one and show us your market share. I'm talking about mind share and ubiquity. Who's going to your talk.ai? No one.
I said everyone knows ChatGPT; you responded about OpenAI.
Almost a third of users use ad blockers:
https://backlinko.com/ad-blockers-users
And just because you have users doesn’t mean it’s easy to create a profitable ad business - ask Yahoo. Besides, we still don’t know how much inference costs. But there is a real marginal cost that wouldn’t be covered by ads. They definitely couldn’t make enough on ads to cover their training costs and other costs.
And adding ads into the responses is _child's play_: find the ad with the most semantic similarity to the content in the context. Insert it at the end of the response, or every N responses, with a convincing message that based on our discussion you might be interested in xyz.
For a more subtle and slimier way of doing things, boost the relevance of brands and keywords, and when they are semantically similar to the most likely token, insert them into the response. Companies pay per impression.
When a guardrail blocks a response, play some political ad for a law-and-order candidate before delivering the rest of the message. I'm completely shocked nobody has offered free GPT use via an ad-supported API yet.
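The "most semantic similarity" step really is trivial to prototype: embed the conversation and each ad, then rank by cosine similarity. (Toy hand-made vectors below; a real system would use a learned embedding model, and both ad names and embeddings here are invented.)

```python
import math

# Toy embedding-based ad selection: pick the ad whose (hypothetical)
# embedding is closest to the conversation embedding by cosine similarity.
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

ads = {
    "hiking boots": [0.9, 0.1, 0.0],  # made-up 3-d embeddings
    "tax software": [0.0, 0.2, 0.9],
}
chat_embedding = [0.8, 0.3, 0.1]      # pretend the chat was about trail running

best = max(ads, key=lambda name: cosine(ads[name], chat_embedding))
print(best)  # hiking boots
```

Per-impression billing then just means counting how often each ad gets appended to a response.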
This is such a techno-centric view, you're not even remotely aware of your own biases.
> but he completely misses the only real threat to either Open AI or Anthro.
Hard disagree. Economics matter, in fact more than tech.
Tech only gets to shine if the economics work out.
> Then again, open AI is obviously ran by some of the smartest people on planet Earth, with other extremely smart people giving them tons of money, so I can be completely wrong here
Nope.
Money != Intelligence
OpenAI, SF and the West Coast VC scene are run by very opinionated, incentivized people.
Yes, money can make things move, but all the money in the world doesn't matter if your unit economics don't work out.
And the startup graveyard is full of examples of this kind.
For actual work and not toying around the 10% gap is absolutely worth the cost.
Don’t underestimate the cost of getting locked into a tool that is 100% guaranteed to rugpull you on both cost and privacy.
You know this is literally not a concern, right? It is part of business life to navigate such a situation.
When I was a SWE, I did migrations from bare metal to AWS to GCP, then both, and then all of that plus Azure..
It is the cost of doing business. You pick what works best at a price that is optimized for your business needs now. You have a war chest so when they start becoming assholes, you have leverage to fight back or pivot.
I'd rather spend $200-400/mo to unblock myself NOW than do something dumb with 5 or even 100 tokens per second of output that isn't as good as what the current providers offer. I'm going through millions of tokens a day. I couldn't do that locally RIGHT NOW (<--- important).
Yes. But there is not only OpenAI. There is Gemini, Grok, whatever. If it doesn't become a "the winner takes it all" but a commodity like web hosting, then the payoff breaks down.
You may be correct, but for my hobbyist projects I tend to use the cheaper models to get started, and then I'll switch to a more expensive one if the cheaper model gets stuck.
Unless the actual race is to create an AI Employee that operates and can deliver work without constant supervision. At that level, of course it would be cheaper to pay $2,000 a month straight to Open AI vs hiring a junior SWE
I feel like options for local inference are getting better. AMD has their Strix Halo. Intel's next CPU generation Arrow Lake will have better inference abilities as well.
This highly depends on the price difference and the value you get from those 10%.
How clearly are you able to define “actual work” and “toying around?”
Or is this the case of every HN discussion where what you do is “actual work” and what other people do is “toying around?”
Are you making [actual] money? That's literally it.
I don’t know why you’re downvoted. It’s a good and fair answer. I disagree with it to some extent but at least it makes sense.
No worries. I stopped caring about points and validation years ago lol just being honest is enough for me.
> For actual work and not toying around the 10% gap is absolutely worth the cost.
I agree with this... for now. But the hosted commercial models aren't widening the gap as far as I can tell, if anything it appears to be narrowing.
And if the relative delta doesn't increase somehow I don't see any way in which the "AI race" doesn't end in a situation where locally run LLMs on relatively cheap hardware end up being good enough for virtually everyone.
Which in many ways is the best possible outcome, except for the likely severe economic effects when the bubble bursts on the commercial AI side.
Sure. I'm talking about now.
I don't know what will happen tomorrow, let alone 1 year or more down the line.
If it ever becomes economical to run and maintain bare metal GPU compute to run LLMs, then that's what will need to be done..
I think you're wrong, in the same way that folks on HN were wrong about Dropbox: why would I pay for something that provides so little value, it's just slightly more convenient file storage?
Just because open source models are almost as good, doesn't mean you can underestimate the convenience factor.
Both can be true: we're in an AI bubble, and the large incumbents will capture most of the value/be difficult to unseat.
On the other hand, no one has figured out how to make money providing AI yet, and everyone's operating at a loss. At some point they're going to need to monetize, and the cost/convenience compared to alternatives may not be worth it for a lot of people.
At one point you could get a Netflix subscription and it was convenient enough that people were pirating less. Now there's so many subscription services, we're basically back to cable packages, paying ever increasing amounts and potentially still seeing ads. I know I'm pirating a lot more again.
Uber vs cabs, Airbnb vs hotels - We've seen it time and time again, once the VC cashflow/infinite stonk growth dries up and they need to figure out how to monetize, the service becomes worse and people start looking for alternatives again.
Yeah, but not just that. I don't expect my mum to go find some high-end consumer GPU and install it on a home server in order to run her own local LLM. I expect that people will be throwing chat interfaces running remixed versions of open-weight models out on the internet so fast that it's impossible for anyone to monetise it in a reasonable way.
I also wonder whether, similar to bitcoin mining, these things end up on specialist ASICs, and before we know it a medium-tier mobile phone is running your own local models.
A top end iphone is capable of running smaller local LLMs today.
Well seeing how Dropbox is doing now, Steve Jobs was right - it isn’t a product, it’s a feature. For the same price of 2TB of storage on Dropbox you can get the same amount on Google or OneDrive with a full office suite.
People love to quote Dropbox ignoring all of the YC companies that are zombies or outright failed. Just looking at the ones that have gone public.
https://medium.com/@Arakunrin/the-post-ipo-performance-of-y-...
Public...? Oh, you mean the ones meant to be left holding the bag.
When there's real money to be made investing in YC is off limits to the public: https://jaredheyman.medium.com/on-the-176-annual-return-of-a...
I am not saying the companies in aggregate don’t lead to successful outcomes for VCs. I am saying claiming Dropbox is a shining example of a “successful” company that HN was wrong about long term doesn’t jibe with reality.
They also didn’t have the massive fixed cost outlays nor did they have negative unit economics that OpenAI has.
I don't get this comparison. The non-Dropbox version was magnitudes less convenient for 99.99% of the population. A non-OpenAI chat interface is, at best, a fraction less convenient.
A good number of people used to pay for email. Now a tiny fraction does. It all hangs on whether OpenAI can figure out how to get ad revenue without driving people to a free competitor - and there will be plenty of those.
Does it have to be ads? :/
Of course, as that's where the money and power is, the only things SamA is in it for.
There will be equivalent models that are free. Likely, there will even be free ones without ads.
Free+ads can beat free without ads on pure incumbency, marketing and convenience. Most people don't use an adblocker, even though it's trivially easy to install.
Paid, however, can't beat free+ads. Too much friction.
> folks on HN were wrong about Dropbox–HN: why would I pay for something that provides so little value, it's just slightly more convenient file storage?
https://news.ycombinator.com/item?id=42392302
(Before discussion of your comment devolves into nonsense about this.)
> Open source models like deepseek and llama 3 are rapidly catching up, if I can get 90% of the functionality for significantly less ( or free if I want to use my own GPU), what value does open AI really have
They pay for the hardware and electricity /s.
You have several providers who host both deepseek and llama 3. They pay for the hardware and electricity; you pay for usage, but it's significantly cheaper than using OpenAI's models.
Where are these providers, and do they offer batch processing? If they don't, how does their cost compare to Gemini and OpenAI batch processing? For the hobby project I'm working on, batch processing is a great fit. The only cost-comparison tool I've been able to find is OpenRouter, and it doesn't support batch processing for cost savings.
Aws?
> you pay for usage
Plenty of people don't. That's an enduring advantage of using GPT over anything locally hosted.
There is no threat, both are private public partnerships that greatly benefit the military/empire. Read Pynchon.
Everybody calm down.
Altman has this.
In the not too distant future, like in 47 days, ChatGPT 6 Recurd is going to knock everyone's socks off. Instead of a better model, it's a recycled model (there's some recursion for you!) but it auto purchases 10 more ChatGPT 6 plans to help it perform much much better.
Then another 38 days later each of those plans upgrades, and scores improve one percent again. And then 24 days later, those plans purchase a cluster of upgrades.
It is very easy to underestimate super-exponentials. But by early 2026, OpenAI is likely to be selling trillions of licenses to these models. At the dawn of agentic computing, the number of human customers is not the limit anymore.
And if you are thinking that the global money supply is going to economically choke off his plan, well, Altman has a coin for that. And an automated loan system extending credit to every human and all those models. And, of course, compute futures (until the real world can catch up) for everybody. But there won't be too much coin for its price to rocket with demand, so get in early. It's a whole new world.
Alt/World!
Back to your local station.
Can someone explain why we measure these datacenters in Gigawatts rather than something that actually measures compute like flops or whatever the AI equivalent of flops is?
To put it another way, I don't know anything but I could probably make a '1 GW' datacenter with a single 6502 and a giant bank of resistors.
Because that's the main constraint for building them - how much power can you get to the site, and the cooling involved.
Also the workloads completely change over time as racks get retired and replaced, so it doesn't mean much.
But you can basically assume with GB200s right now 1GW is ~5exaflops of compute depending on precision type and my maths being correct!
As a reference for anyone interested - the cost is estimated to be $10 billion for EACH 500MW data center - this includes the cost of the chips and the data center infra.
With such price tag the power plant should be included.
Yes! The varying precisions and maths feels like just the start!
Look at next-gen Rubin with its CPX co-processor chip to see things getting much weirder & more specialized. It's there for prefilling long contexts, which is compute-intensive:
> Something has to give, and that something in the Nvidia product line is now called the "Rubin" CPX GPU accelerator, which is aimed specifically at parts of the inference workload that do not require high bandwidth memory but do need lots of compute and, increasingly, the ability to process video formats for both input and output as part of the AI workflow.
https://www.nextplatform.com/2025/09/11/nvidia-disaggregates...
To confirm what you are saying, there is no coherent unifying way to measure what's getting built other than by power consumption. Some of that budget will go to memory, some to compute (some to interconnect, some to storage), and it's too early to say what ratio each may have, to even know what ratios of compute:memory we're heading towards (and one size won't fit all problems).
Perhaps we end up abandoning HBM & DRAM! Maybe the future belongs to high-bandwidth flash, maybe with its own computational storage! Trying to use figures like flops or bandwidth is applying today's answers to a future that might get weirder on us. https://www.tomshardware.com/tech-industry/sandisk-and-sk-hy...
[flagged]
Mh, in my recently slightly growing, but still tiny experience with HW&DC-Ops:
You have a lot more things in a DC than just GPUs consuming power and producing heat. GPUs are the big ones, sure, but after a while, switches, firewalls, storage units, other servers and so on all contribute to the power footprint significantly. A big small-packet, high-throughput firewall packs a surprisingly large amount of compute capacity, eats a surprising amount of power and generates a lot of heat. Oh, and it costs a couple of cars in total.
And that's the important abstraction / simplification you get when you start running hardware at scale. Your limitation is not necessarily TFlops, GHz or GB per cubic meter. It is easy to cram a crapton of those into a small place.
The main problem after a while is the ability to put enough power into the building and to move the heat out of it again. It sure would be easy to put a lot of resistors in one place to create a lot of power consumption; Hamburg Energy is currently building just that, to bleed off excess solar power into grid heating.
It's problematic to connect that to the 10 kV power grid safely and to move the heat away from the system quickly.
My understanding is that there is no universal measure of compute power that applies across different hardware and workloads. You can interpret the power number to mean something close to the maximum amount of compute you can get for that power at a given time (or at least at time of install). It also works across geographies, cooling methods, etc. It covers all that.
Back of the napkin: 1 gigawatt would power roughly 1.43 billion 6502s.
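A quick sanity check on that napkin math, assuming roughly 0.7 W per 6502 (the actual draw varies with clock speed and process, so this is an illustrative figure, not a datasheet value):

```python
# Back-of-the-napkin: how many MOS 6502s could 1 gigawatt power?
# Assumption: ~0.7 W average draw per chip (varies by clock and process).
GIGAWATT = 1e9          # watts
WATTS_PER_6502 = 0.7    # assumed average draw, in watts

chips = GIGAWATT / WATTS_PER_6502
print(f"{chips / 1e9:.2f} billion 6502s")  # ~1.43 billion
```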
I appreciate you
Measurement in unit of power because this is the ultimate use-cost, assuming scaling in compute efficiencies, capex costs, etc.
Think of it like refining electricity. A data center has a supply of raw electricity and a capacity for how much waste (heat) it can handle. The quality of the refining improving over time doesn't change the supply or waste capacity of the facility.
It simplifies marketing. They probably don't really know how many flops or anything else they will end up with anyway, so gigawatts is a nice way to look big.
Assuming a datacenter is more or less filled with $current_year chips, the number of flops is kind of a meaninglessly large number. It's big. How big? Big enough it needs a nuclear power plant to run.
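To put "meaninglessly large" in numbers, here's a rough sketch. Both inputs are loose assumptions on my part (≈1.5 kW all-in per H100-class accelerator including host and cooling overhead, ≈1 PFLOPS dense FP16 each), not precise vendor specs:

```python
# Rough sketch: aggregate compute of a 1 GW datacenter filled with
# H100-class accelerators. Both constants are assumed, not quoted specs.
DATACENTER_WATTS = 1e9
WATTS_PER_GPU_ALL_IN = 1500      # GPU + host + cooling overhead (assumed)
FLOPS_PER_GPU = 1e15             # ~1 PFLOPS dense FP16 (assumed)

gpus = DATACENTER_WATTS / WATTS_PER_GPU_ALL_IN
total_flops = gpus * FLOPS_PER_GPU
print(f"~{gpus:,.0f} GPUs, ~{total_flops:.1e} FLOPS")
```

That lands in the hundreds-of-exaflops range, which is exactly the kind of number nobody has intuition for, unlike "needs a nuclear power plant."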
Not to mention it would assume that number wouldn't change...but of course it depends entirely on what type of compute is there as well as the fact that every few years truckloads of hardware gets replaced and the compute goes up.
Because, to us tech nerds, GPUs are the core thing. With a PM hat on, it's the datacenter in toto. Put another way: how can we measure in flops? By the time all this is built out we're on the next gen of cards.
His "$400B in next 12 months" claim treats OpenAI as paying construction costs upfront. But OpenAI is leasing capacity as operating expense - Oracle finances and builds the data centers [1]. This is like saying a tenant needs $5M cash because that's what the building cost to construct.
The Oracle deal structure: OpenAI pays ~$30B/year in rental fees starting fiscal 2027/2028 [2], ramping up over 5 years as capacity comes online. Not "$400B in 12 months."
The deals are structured as staged vendor financing:
- NVIDIA "invests" $10B per gigawatt milestone, gets paid back through chip purchases [3]
- AMD gives OpenAI warrants for 160M shares (~10% equity) that vest as chips deploy [4]
- As one analyst noted: "Nvidia invests $100 billion in OpenAI, which then OpenAI turns back and gives it back to Nvidia" [3]
This is circular vendor financing where suppliers extend credit betting on OpenAI's growth. It's unusual and potentially fragile, but it's not "OpenAI needs $400B cash they don't have."
Zitron asks: "Does OpenAI have $400B in cash?"
The actual question: "Can OpenAI grow revenue from $13B to $60B+ to cover lease payments by 2028-2029?"
The first question is nonsensical given deal structure. The second is the actual bet everyone's making.
His core thesis - "OpenAI literally cannot afford these deals therefore fraud" - fails because he fundamentally misunderstands how the deals work. The real questions are about execution timelines and revenue growth projections, not about OpenAI needing hundreds of billions in cash right now.
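The second question can at least be quantified. Here's the implied growth rate (my arithmetic on the thread's figures, not numbers from the article):

```python
# Implied compound annual growth rate to take revenue from $13B to $60B
# over a 3-year (2028) or 4-year (2029) horizon.
start, target = 13e9, 60e9

for years in (3, 4):
    cagr = (target / start) ** (1 / years) - 1
    print(f"{years}-year horizon: {cagr:.1%}/year")
```

Roughly 66% per year over three years, or 47% per year over four: steep, but a growth bet rather than a cash-on-hand problem.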
There's probably a good critical piece to write about whether these vendor financing bets will pay off, but this isn't it.
[1] https://www.cnbc.com/2025/09/23/openai-first-data-center-in-...
[2] https://w.media/openai-to-rent-4-5-gw-of-data-center-power-f...
[3] https://www.cnbc.com/2025/09/22/nvidia-openai-data-center.ht...
[4] https://techcrunch.com/2025/10/06/amd-to-supply-6gw-of-compu...
Suffice it to say this is not the first time Ed Zitron has been egregiously wrong on both analysis and basic facts. It's not even the first time this week.
I wrote a post about his insistence that the "cost of inference" is going up. https://crespo.business/posts/cost-of-inference/
Finding an audience that wants to believe something, and then creating something that looks like justification for that belief is a method to gain notoriety, which may or may not lead to income. Works doubly well for issues that are "hot" in the public sphere, as you can tap into the supporters and the outraged.
It would be nice if your blog had an RSS feed :)
Thank you. I will add one soon.
Solid post, thanks for sharing. Zitron occupies his own echo chamber. I've seen some people share links to his articles with a smirk as a "proof" of how "bullshit LLMs are" — and I know for a fact that they have no understanding of LLMs or how to evaluate limitations, saying nothing about unit economics. Sadly, I don't think it's possible to reason with them.
To be clear, I do expect that the bubble will burst at some point (my bet is 2028/2029) — but that's due to dynamics between markets and new tech. The tech itself is solid, even in the current form — but when there's a lot of money to make you tend to observe repeatable social patterns that often lead to overvaluing of the stuff in question.
OpenAI is currently growing WAUs at ~122.8% annualized growth (down from ~461.8% just 10 months ago).
Assuming their growth rate is getting close to stabilizing and will be at ~100% for 3 years to end of 2028 - that'd be $104B in revenue, on 6.4B WAUs.
I wouldn't bank on either of those numbers - but Oracle and Nvidia kind of need to bank on it to keep their stocks pumped.
Their growth decay is around 20% every 2 months - meaning - by this time next year, they could be closer to 1.2B WAUs than to 1.6B WAUs, and the following year they could be closer to 1.4B WAUs than to 3.2B WAUs.
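A minimal sketch of that decay projection, using my own reading of the comment's numbers (~122.8% annualized growth today, with the growth rate itself shrinking 20% every two months):

```python
# Project WAUs assuming the per-2-month growth rate decays 20% each period.
# Starting point: 0.8B WAUs at ~122.8% annualized growth (assumed inputs).
wau = 0.8                       # billions of weekly active users
g = 2.228 ** (1 / 6) - 1        # implied per-2-month growth (~14.3%)

for period in range(1, 13):     # 12 two-month periods = 2 years
    g *= 0.8                    # growth rate decays 20% every 2 months
    wau *= 1 + g
    if period in (6, 12):
        print(f"after {period * 2} months: {wau:.2f}B WAUs")
# ~1.20B after 12 months, ~1.34B after 24 months
```

Which reproduces the comment's "closer to 1.2B than 1.6B" next year and "closer to 1.4B than 3.2B" the year after.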
Impressive, for sure, but still well below Google and Facebook; revenue much lower and growth probably even.
They don't need to grow users if their ACV increases or they grow their enterprise or API businesses.
And of course I might pay $20/month for ChatGPT and another $20/month for sora (or some hypothetical future b2c app)
Codex is my current favorite code reviewer (compare to bug bot and others), although others have had pretty different experiences. Codex is also my current favorite programming model (although it's quite reasonable to prefer Claude code with sonnet 4.5). I would happily encourage my employer to spend even more on OpenAI tools, and this is ignoring the API spend that we have (also currently increasing)
OpenAI don't monetize the vast majority of their users yet. But the unit costs are really low, and once they start monetizing the free tier with ads, they'll be wildly profitable.
"OpenAI cannot actually afford to pay $60 billion / year" the article states with confidence. But that's the level of revenue they'd be pulling in from their existing free users if monetized as effectively as Facebook or Google. No user growth needed.
And it seems this isn't far off, given the Walmart deal. Of course they'll start off with unobtrusive ad formats used only in situations where the user has definite purchase intent, to make the feature acceptable to the users, and then tighten the screws over time.
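A quick check on that $60B claim (the $60B and 800M figures come from the thread; the Meta comparison point is my assumption from their 2023 reporting):

```python
# What ad revenue per user would $60B/year from ~800M free users require?
target_revenue = 60e9
free_users = 800e6

required_arpu = target_revenue / free_users
print(f"${required_arpu:.0f}/user/year")  # $75/user/year

# For scale: Meta's global ARPU was roughly $44/user/year in 2023
# (assumed figure), though its US/Canada ARPU runs several times higher.
```

So "as effectively as Facebook" means beating Meta's blended global ARPU, though staying far below its US number.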
Except google and facebook have locked in numbers at times of virtually no competition before they started scaling up ads. If Open AI starts scaling ads next year they will churn people at a rate that will not be offset by growth and will either plateau or more likely lose user numbers, as their product has no material edge to alternatives in the market.
I disagree with Zitron’s analysis on many points, but I don’t see Open AI achieving the numbers it needs. Investors backing it must have seen something in private disclosure to be fronting this much money. Or more precisely, I need to believe they have seen something and are not fronting all this money just based on well wishes and marketing.
Most people don't choose by blind taste test. How intrusive do those ads have to get before it overwhelms habit and familiarity? OpenAI might be betting on enough of its 800m and growing weekly users sticking around long enough to moot churn until a liquidity event pays everyone off.
We should also consider the $2,000/month and $20,000/month plans rolling out in the future.
> His "$400B in next 12 months" claim treats OpenAI as paying construction costs upfront. But OpenAI is leasing capacity as operating expense - Oracle finances and builds the data centers [1].
It is bagholders all the way down[1]! The final bagholder will be the taxpayer/pension holder.
[1]https://en.wikipedia.org/wiki/Turtles_all_the_way_down
It's going to be 2008 bailouts again, but much worse.
These companies are doing all sorts of round tripping on top of propping up the economy on a foundation of fake revenue on purpose so that when it does some crumbling down they can go cry to the feds "help! we are far too big to fail, the fate of the nation depends on us getting bailed out at taxpayer expense."
I feel like writing that down somewhere because that's pretty close to how the bailout will be pitched. "If you don't bail us out then our adversaries will get to AGI first and it will be game over". Very clever of them.
The capital cost is even less insane than the fact that power utility companies are the real constraint on this industry.
North American grids are starving for electricity right now.
Someone ought to do a deep dive into how much actual total excess power capacity we have today (that could feasibly be used by data center megacampuses), and how much capacity is coming online and when.
Power plants are massively slow undertakings.
All these datacenters deals seem to be making an assumption that capacity will magically appear in time, and/or that competition for it doesn't exist.
How this is not more examined is beyond me….
This might be slightly off topic, but after the Sora 2/anime controversy I looked up how much it costs to make your average anime - it turns out that top-tier 26-episode shows like Chainsaw Man or Delicious in Dungeon, or box-office movies like Demon Slayer, cost between $10-20M to make. Now I don't know how much they spent on Sora 2, but I'd imagine tens of billions. For that money, you could make a thousand such shows.
This post is full of conjecture, and somewhat unrelated to LLMs (though not to their economics), but I wonder how the insane capex is going to be justified. Even if AI becomes fully capable of replacing salaried professionals, they'll still end up paying much, much more than it would have cost to just hire armies of those professionals for decades.
Can you even make "shows" with sora 2? I haven't used it but everytime I hear about it it's in the context of making "shorts". Making shows would require a technological leap from that point.
> Now I don't know how much they spend on Sora 2, but I'd imagine tens of billions
I think it's orders of magnitude less, actually.
A consequence of tools like Sora and Google Flow is that there will be an increase in amateurs creating professional quality content for comparatively cheap. So a thousand such shows (probably many more) isn't in the realm of the impossible!
Not off topic at all. That massive investment must be backed by something with a bigger ROI than just a chatbot.
I think that kind of thing is one of the main problems with the 'AI bubble'. It's probably a misallocation of capital, spending billions on energy sucking data centers people don't especially want which leaves less money for things like paying creatives to make anime shows people do want.
In properly functioning capitalism entrepreneurs would look at what they can make a profit at due to people paying for it because they want it and invest in that but the hype wave seems to be causing an over allocation to heavily loss making activities.
And I say that as a believer that AGI is on the way but that it will come from smart computer folk designing better systems, not from more gigawatts of data centers.
That seems like a lot of money. How quickly can sustainable capacity be built up in terms of building power plants, data center construction, silicon design and fabrication, etc.? Are these industries about to experience stratospheric growth, followed by a massive and painful adjustment, or does this represent a printing press or industrial revolution like inflection point?
Would anyone like to found a startup doing high-security embedded systems infrastructure? Peter at my username dot com if you’d like to connect.
Almost nothing in tech is sustainable outside of gold recycling.
Why doesn't Anthropic need similar levels of capital (or do they)?
Anthropic is more secretive about their costs, Ed Zitron is right now investigating their costs, specifically on GCP
Sure he is
because this is for building "AGI", this has little to nothing to do with their current offerings.
This also assumes that intelligence continues to scale with compute which is not a given.
> this is for building "AGI"
I’m increasingly convinced this is AI’s public relations strategy.
When it comes to talking to customers and investors, AGI doesn’t come up. At fireside chats, AGI doesn’t come up.
Then these guys go on CNBC or whatnot and it’s only about AGI.
I don't think it's AGI, but rather video production. OpenAI wants to build the next video social network / ads / tv / movie production system. The moat is the massive compute required.
I'm sure they're not against building this, and they definitely have competing priorities.
But my personal belief is Sam Altman has a singular goal: AGI. Everything else keeps the lights on.
Is there any indication they are actually working on this and Altman is any good at pursuing this goal? I'm seriously asking, please inform the uninformed.
My impression is that I hear a lot more about basic research from the competing high-profile labs, while OpenAI feels focused on their established stable of products. They also had high-profile researchers leave. Does OpenAI still have a culture looking for the next breakthroughs? How does their brain trust rank?
Huh, my read is exactly the opposite: Altman wants to be a trillionaire and isn't picky about how he gets there. If AGI accomplishes it, great, but if that's not possible, "just" making a megacorporation which permanently breaks the negotiating power of labor is fine too. Amodei is the one who I think actually wants to build AGI.
Then why start a company where you have no equity? (Yes I believe he financially benefits from OpenAI, but the more straightforward way would be OpenAI equity)
I think his initial belief was that OpenAI would to be a research organization which would get acquihired or license its tech out, and then Chat-GPT unexpectedly happened. Notice how ever since then he's been trying to get the nonprofit status cancelled or evaded.
I think your read is right. There are a few people who want to be trillionaires and aren't too picky about how to get there: Elon Musk, Sam Altman, Trump, Larry Ellison, Peter Thiel, Putin. Maybe Bezos and Zuckerberg.
Of course, there wouldn't be many people who don't want to be trillionaires. Rare exceptions[1]. But these are the people with means to get there.
[1]: No means NO - Do you want a one million dollar answer NO!: https://www.youtube.com/watch?v=GtWC4X628Ek
I definitely would not want to be a trillionaire yeah. Having a million or so would be nice but more and you get roped into all kinds of power play and you have to get security goons with you all the time to avoid getting kidnapped. I'd much rather be anonymous.
to be fair I couldn't recognize 99% of billionaires and crazy people are easy to deal with if you know what I mean.
To be fair, you're probably not an organized crime syndicate looking for targets either.
crime syndicates are just as afraid of people with that much money.
Isn't that just PR?
> This also assumes that intelligence continues to scale with compute which is not a given.
Isn’t it? Evidence seems to suggest that the more compute you throw at a problem, the smarter the system behaves. Sure, it’s not a given, but it seems plausible.
No, that's no longer the case: https://www.newyorker.com/culture/open-questions/what-if-ai-...
It also depends on the amount of training data, that isn't really growing much after they scraped all the internet.
it's not mathematically proven therefore it is not a given.
> it's not mathematically proven therefore it is not a given
Given doesn't mean proven, it means accepted as true. We can give variables fixed values, for example.
Entire classes of proofs, moreover, prove something cannot be true because if you assume it is you get a paradox or nonsense.
we had this problem in mathematics that's why all ambiguity was eliminated and why adding two numbers together is like 30 pages of proofs.
> a problem
That word is carrying a heavy load. There's no evidence that scaling works indefinitely on this particular sort of problem.
In fact there is no evidence that scaling solves computing problems generally.
In more narrow fields more compute gets better results but that niche is not so large.
In a brute force poorly architected way, perhaps.
But human brains are small and require far less energy to be very generally intelligent. So clearly, there must be a better way to achieve this AGI shit. Preferably something that runs locally in the palm of your hand.
I believe they do, but the author seems to focus on OpenAI since they're more of a household name.
Anthropic will need it if their growth continues.
Anthropic seems more comfortable using TPUs for overflow capacity. The recent Claude degradation was largely due to a bug from implementation differences with TPUs and from their writeup we got some idea of their mix between Nvidia and TPU for inference.
I'm not sure if OpenAI has been willing to deploy weights to Google infrastructure.
How much does the capex model of a datacenter change when the goal is 100% utilization, with no care for node uptime beyond capex efficiency/hardware value maintenance?
I wouldn't be surprised if the cost came down by at least one order of magnitude, two if NVidia and others adjust their margin expectations. If the bet is that OpenAI can ship crappy datacenters with crappy connectivity/latency characteristics in places with cheap/existing power - then that seems at least somewhat plausible.
OpenAI burning 40 billion dollars on datacenters in the next 1 year is almost guaranteed. Modern datacenter facilities are carefully engineered for uptime, I don't think OpenAI cares about rack uptime or even facility uptime at this scale.
People who have run these numbers still want tier IVish. I haven't seen evidence of crypto mining "tier zero" datacenters being converted to AI despite the obvious advantages.
Aye - given the margins involved, I'd imagine you could get quite favorable insurance policies from NVDA on tier 0 facilities.
Why is this guy so angry?
That aside, his math is wrong
He's not angry; it's an angsty way of writing. A lot of us used to write like that as teenagers. There was a time around 5 years ago when every best-selling book raced to have "fuck" or "vagina" in the title.
If you're emotionally on edge when reading it, it's easier to miss that his math is wrong and he's no expert. Writing that way benefits him.
dang.. that's an angle I haven't thought of
Us British have a unique relationship with profanity as a way to communicate.
edit: Aussies and kiwis too!
Yeah his writing is emotionally exhausting
How is the math wrong?
Why are you -not- angry at all of this insanity? I feel the same way as him, hype has blown the bubble bigger and bigger, and it's just a matter of time until it poops out and causes huge amounts of pain.
Yes, it's like watching the Titanic. But the question is whether the Titanic is 30% of the way there or just about to hit an iceberg.
It's like dumping all your cash into mainframes right before the PC revolution.
The PC revolution in this analogy is local models? However good and fast they get locally, the same model will run 10-1000x faster on a dedicated device. I think cloud models will be in demand for a long time even before you factor in efficiencies of scale.
Also there will be tons of loads that are natively server side and don't make sense to use a local model for (or where using server side models retains more control).
It is mathematically certain that OpenAI is burning investor money. Even if every one of its 800 million weekly users paid a $20 monthly subscription, it would take over two years to collect $400 billion. In reality it would take decades.
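Putting numbers on it, using the 800M weekly users figure from elsewhere in the thread and charitably assuming every one of them pays:

```python
# How long would $20/month from every weekly user take to reach $400B?
users = 800e6          # weekly active users (assumed, all paying)
monthly_fee = 20       # dollars per user per month
target = 400e9         # dollars

months = target / (users * monthly_fee)
print(f"{months:.0f} months (~{months / 12:.1f} years)")  # 25 months (~2.1 years)
```

And that's the absurd best case; at the actual paid-conversion rates of freemium products it stretches out by orders of magnitude.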
I was explaining the problem of lagging benefits for the huge expenditures for AI research and infrastructure this morning to my wife (my RC airplane flying club is an hour round trip drive so we really had time to get into it). She is not much interested in tech but she found the story of over investment and what it might do to our economy very interesting indeed.
There are many people in the USA who don’t overly care about technology but might care a lot about the economic risks of overly aggressively chasing strong AI capabilities.
I am forwarding this article to a few friends and family members.
The emperor wears no clothes
"a man best known for needing, at any given moment, another billion dollars" (c) I'm writing it down, this is peak AI :)
I'm not sure if I missed this in the article, but what's the cost of failure?
Why can't OpenAI keep projecting/promising massive data centre growth year after year, fail to deliver, and keep making Number Go Up?
If they keep missing the hype will burn out eventually
>Why can't OpenAI keep projecting/promising massive data centre growth year after year, fail to deliver, and keep making Number Go Up?
Because eventually, Nvidia will run out of money, so the incestuous loop between Nvidia funding AI entities, who then use those funds to buy Nvidia chips, artificially propping up Nvidia's stock price, will eventually end and poof.
Competing forces are the market's insatiable need for growth every quarter, and other countries also chasing AIs and will not slow down if other countries, like the US, do slow down.
Let's assume that estimate is good. For some perspective and context, the last finalized DOD budget (2023) was $815B and, plus supplementals, turned into about $852 billion.
AGI is absolutely a national security concern. Despite it being an enormous number, it'll happen. It may not be earmarked for OpenAI, but the US is going to ensure that the energy capability is there.
> AGI is absolutely a national security concern.
This may well be the PR pivot that's to come once it becomes clear that taxpayer funding is needed to plug any financing shortfalls for the industry - it's "too big to let fail". It won't all go to OpenAI, but be distributed across a consortium of other politically connected corps: Oracle, Nvidia/Intel, Microsoft, Meta and whoever else.
The top six US tech companies are generating ~$620 billion per year in operating income (likely to be closer to $700 billion in another 12-18 months). They can afford to spend $2 trillion on this over the next decade without missing a beat. Their profit over that timeline will plausibly be $8 to $10 trillion (and of course something dramatic could change that). That's just six companies.
Fears of an AI bubble originate from the use of external financing needed to pay for infrastructure investments, which may or may not pay off for the lenders.
These six companies are using only a small portion of their own cash reserves to invest, and using private credit for the rest. Meta is getting a $30 billion loan from PIMCO and Blue Owl for a datacenter [0], which they could easily pay for out of their own pocket. There are also many datacenters being funded through asset-backed securities or commercial mortgage-backed securities [1], the market for which can quickly collapse if expected income doesn't materialize, leading to mortgage defaults, as in 2008.
[0] https://www.reuters.com/legal/transactional/meta-set-clinch-...
[1] https://www.etftrends.com/etf-strategist-channel/securitizin...
The overextension is too complex for most people, same as the 2008 overextension…
> but the US is going to ensure that the energy capability is there.
We're doing a pretty shit job of ensuring that today. Capacity is already intensely strained, and the govt seems to be decelerating investment into power capacity growth, if anything
> AGI is absolutely a national security concern.
> Despite it being an enormous number, it'll happen.
Care to share your crystal ball ?
I'd be happy with $4000
What if they can't get it? What happens to companies that are built on their models like this Meeting Prep AI I just launched today https://news.ycombinator.com/item?id=45617686
They sure are writing cheques fast. Presumably Sam has a plan.
Ed Zitron the author of the article talked about this in an earlier podcast:
https://youtu.be/_wStScmT748
How the hell does one spend $2 Billion on sales and marketing!!!
I really don’t get it. If AI is hot and profitable, can’t they fund their own expansion?
Either the $400B will be a piece of cake or they are wildly overestimating the demand and have an impractical business model.
Sure, there are those who would like a car wash where the walls are made of crystal, with organic artisan soaps, and only ultra-pure water used to wash the car. But such a place wouldn't be a profitable business, no matter how much money is invested. No amount of marketing will change that.
Sorry, best I can do is $20
Related:
They Don't Have the Money: OpenAI Edition
https://news.ycombinator.com/item?id=45545236
At this point we all know this is just a massive bubble. I'm done paying attention to it really. I'm prepared for all my investments to go down in the next 1-5 years. If you're nearing retirement now is the time to cash out. Yes, investments could go up in a value a lot until the correction, but I don't really think that is worth the risk.
So cash out, and then what? Buy gold? Hang onto your cash while inflation takes off and dilutes it to nothing?
If you're reading this article and wondering "When is this house of cards going to collapse!?", a little advice, gained at a high price to myself: you can waste years waiting for it to collapse, 95% of the time, it never will. I never thought Uber or Tesla would survive COVID. I'd have $450K in bitcoin if I held onto the "joke" amount I bought for $200 in 2013.
Things that make me skip this specific narrative:
- There's some heavy-handed reaching to get to $400B next 12 months: guesstimate $50B = 1 GW of capacity, then list out 3.3 gigawatts across Broadcom chip purchases, Nvidia, and AMD
- OpenAI is far better positioned than any of the obvious failures I've foreseen in my 37 years on this rock. It's very, very hard to fuck up to the point you go out of business.
- Ed is repeating narratives instead of facts ("what did they spend that money on!? GPT-5 was a big let down!" -- i.e. he remembers the chatgpt.com router discourse, and missed that it was the first OpenAI release that could get the $30-50/day/engineer in spend we've been sending to Anthropic)
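For what it's worth, the reaching is easy to reproduce (the $50B/GW and 3.3 GW figures are the commenter's guesstimates, not audited numbers):

```python
# The commenter's guesstimate: ~$50B of capex per GW of capacity,
# across ~3.3 GW of announced Broadcom/Nvidia/AMD commitments.
cost_per_gw = 50e9
gigawatts = 3.3

total = cost_per_gw * gigawatts
print(f"${total / 1e9:.0f}B")  # $165B
```

Even taking those guesstimates at face value, the tally lands well short of the article's $400B-in-12-months headline.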
Yeah, I re-evaluated my misconceptions around the same time. As another datapoint: the infamous Tether/Bitfinex corporation, which was blatantly cheating and insolvent (by any normal definition) for years, while every time reasonable people gave them less than a year until bankruptcy and jail. But here we are: Tether is still printing tokens like a madman, no one is in jail, and every analyst was proven both wrong and right at the same time. Apparently a company can be insolvent, cheating, and breaking the law for a looong time, with zero repercussions. And no one bats an eye.
I suspect OpenAI is deeply in the red today and any normal small company would have gone out of business years ago. But they are too big to fail and will continue working this way. They may even transition to being profitable later on, and people will retcon that they always were.
At this point I just wonder what the exact mechanism will be by which Sam redirects his losses onto the regular public like us here. Bailouts? Irresponsible government investments? Banks overreaching and then getting bailed out? Something new?
Couple points where our principled bear-ishness diverges and I'm bullish:
- I think it overstates the case to even get to the $150B #.
- $50B/GW sounds very wrong to me.
- They don't need anything, 2 of the 3 are GPUs-for-equity, and Broadcom isn't going to insist OpenAI make a $100B 100% full purchase commitment up front, I'd be surprised at $1B.
- All I see is undercapacity in the sector after 2-3 years of feverish buildouts, it's hard for me to foresee, in the short term, a place where market signals can't override deals in principle.
(random ramble: this reminds me somewhat of when Waymo "committed" to...80K Jaguar & Pacifica purchases? In 2016? I thought that meant they were definitely going to scale in the short term, hell or high water. In retrospect, I bet they're at a 1/4 of that 9 years later, and yet they're entirely real.)
My search habits have evolved quite fast - when I search for something now I first ask for quick results from ChatGPT, which gives me pointers I then drill down on. Google's revenue for 2024 was $350 billion. I know it's not all Google ads, but a lot of it is. When you follow a link from ChatGPT, it always has a utm_source=Chatgpt in it, so companies are quickly learning how important getting linked there is.
I'm not saying there's no bubble, and I personally anticipate a lot of turmoil in the next year, but monetisation of that would be the most primitive way of earning a lot of money. If anyone is a dead man walking, it's Google. For better or worse, ChatGPT has become to AI what Google was to search, even though I think Gemini is as good or even better. I also have my own doubts about the value of LLMs because I've already experienced a lot of caveats with the stuff it gives you. But at the same time, as long as you don't believe it blindly, getting started with something new has never been easier. If you don't see value in that, I don't know what to tell you.
> For better or worse, Chatgpt has become to AI what Google was to search, even though I think Gemini is also good or even better.
Google definitely has the better model right now, but I think ChatGPT is already well on its way to becoming to AI what Google was to search.
ChatGPT is a household name at this point. Any non tech person I ask or talk about AI with it's default to be assumed it's ChatGPT. "ChatGPT" has become synonymous with "AI" for the average population, much in the same way "Google it" meant to perform an internet search.
So ChatGPT already has the popular brand. I think people are sleeping on Google though. They have a hardware advantage and aren't reliant on Nvidia, and have way more experience than OpenAI in building out compute and with ML, Google has been an "AI Company" since forever. Google's problem if they lose won't be because of tech or an inferior model, it will be because they absolutely suck at making products. What Google puts out always feels like a research project made public because someone inside thought it was cool enough to share. There's not a whole lot of product strategy or cohesion across the Google ecosystem.
BECAUSE OpenAI is a part of the military. This doesn't matter. Americans really should be reading Pynchon.
Vast sums of money are being invested in it (obviously in the hope of AGI), but I'm not sure the world would notice if OpenAI/current LLM products just disappeared tomorrow.
I don't think that AGI is necessary for LLMs to be revolutionary. I personally use various AI products more than I use Google search these days. Google became the biggest company in the world based on selling advertising on its search engine.
Women on dating apps would.
Have a search for "chatfishing".
I think the fix for that one is easy enough fortunately. Meet quickly. They can't 'chatfish' you in person.
> Meet quickly.
Always a good dating strategy.
It's going to perhaps sound nuts, but I'm beginning to wonder if America is actually a giant Ponzi scheme.
I've been thinking about American exceptionalism - the way it is head and shoulders above Europe and the rest of the developed world in terms of GDP growth, market returns, start-up successes, etc. - and what might be the root of this success. And I'm starting to think that, apart from various mild genuine effects, it is also a sequence of circular self-fulfilling prophecies.
Let's say you're a sophisticated startup and you want some funding. Where do you go? US of course - it has the easiest access to capital. It does so presumably because US venture funds have an easier time raising funds. And that's presumably because of their track record of making money for investors - real, or at least perceived. They invest in these startups and they exit at a profit, because US companies have better valuations than elsewhere, so at IPO investors lap up the shares and the VCs make money. It's easy to find buyers for US stocks because they're always going up. In turn, they're going up because, well, there's lots of investors. It's much easier to raise billions for data centres and fairy dust because investors are in awe of what can be done with the money and anyway line always go up. Stocks like TSLA have valuations you couldn't justify elsewhere. Maybe because they will build robot AI rocket taxis, or maybe because the collective American Allure means valuations are just high.
The beauty of this arrangement is that the elements are entangled in a complex web of financial interdependency. If you think about these things in isolation, you wouldn't conclude there's anything unusual. US VC funding is so good because there's a lot of capital - lucky them. This thought of circularity only struck me when trying to think of the root cause - the nuclear set of elements that drive it. And I concluded any reason I can think of is eventually recursive.
I'm not saying America is just dumb luck kept together by spittle, of course there are structural advantages the US has. I'm just not sure it really is that much better an economic machine than other similar countries.
One difference to a Ponzi scheme is that you might actually hit a stable level and stay there rather than crash and burn. So it's more like a collective investment into a lottery. OpenAI might burn $400bn and achieve singularity, then proceed to own the rest of the world.
But I can't shake the feeling that a lot of recent US growth is a bit of smoke and mirrors. After adjusting for tech, US indices didn't outperform European ones post GFC, IIRC. Much of its growth this year is AI, financed presumably by half the world and maintained by sky-high valuations. And no one says "check" because, well, it's the US and the line always go up.
I don't think you're describing a Ponzi scheme. More like a merry-go-round. Seems pretty common, just turned up to 11.
Of course it can stop, and a little history reading will show that it has always stopped, but it can take a long time.
If anything, I fear the AI hype + the orange idiot you put in charge, can fuck you up much faster than otherwise. OTOH, Trump is also a symptom, showing that things were not going great.
now do TSLA
Elon could give them 400B and still have a net worth of 100B, and from that he could get back to his former net worth in a few years. The world is not fair.
That's not how that works. Same reason we don't tax unrealized gains (unless you are Norway).
And The Netherlands
You do realize that a large part of his wealth is tied to the valuation of Tesla (and SpaceX and many other investments).
Selling 100B worth of stocks for anything close to 100B is not possible. That volume would mini-crash the entire exchange.
> Selling 100B worth of stocks for anything close to 100B is not possible. That volume would mini-crash the entire exchange
Nasdaq trades about half a trillion dollars a day [1]. Even if Musk were an idiot and dumped $100bn in one day, it would crash Tesla's stock price, not the Nasdaq.
If Musk wanted to give OpenAI $100bn, the best way to do it would be to (a) borrow against his shares or (b) give OpenAI his (non-voting) shares.
[1] https://www.nasdaqtrader.com/Trader.aspx?id=DailyMarketSumma...
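The back-of-envelope arithmetic above is easy to check. A quick sketch (the Nasdaq figure is from the footnote; the TSLA daily turnover figure is my own rough assumption, not live market data):

```python
# Rough figures, all assumptions rather than live data.
NASDAQ_DAILY_DOLLAR_VOLUME = 500e9  # ~half a trillion/day, per the footnote
TSLA_DAILY_DOLLAR_VOLUME = 30e9     # assumed typical daily TSLA turnover

sale = 100e9  # hypothetical one-day $100bn sale

# The sale is a modest slice of the whole exchange's daily flow...
share_of_nasdaq = sale / NASDAQ_DAILY_DOLLAR_VOLUME
# ...but several multiples of the single stock's daily flow.
multiple_of_tsla = sale / TSLA_DAILY_DOLLAR_VOLUME

print(f"{share_of_nasdaq:.0%} of Nasdaq's daily volume")  # 20%
print(f"{multiple_of_tsla:.1f}x TSLA's daily volume")     # 3.3x
```

Which is the point: $100bn is a fraction of one day's exchange-wide flow, but it dwarfs the daily turnover of any single ticker, so the crash would be local to the stock being dumped.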
what does this mean?