Moon of Alabama
September 12, 2025
A.I. Valuations Reach La La Land

The Artificial Intelligence mania has officially reached la la land.

Oracle, OpenAI Sign Massive $300 Billion Cloud Computing Deal (archived) – Wall Street Journal
The majority of new revenue revealed by Oracle will come from OpenAI deal, sources say

OpenAI signed a contract with Oracle to purchase $300 billion in computing power over roughly five years, people familiar with the matter said, a massive commitment that far outstrips the startup’s current revenue.

The Oracle contract will require 4.5 gigawatts of power capacity, roughly comparable to the electricity produced by more than two Hoover Dams or the amount consumed by about four million homes.

Oracle shares surged by as much as 43% on Wednesday after the cloud company revealed it added $317 billion in future contract revenue during its latest quarter, which ended Aug. 31.

The increase in Oracle's potential future revenue (not profits) does not justify the jump in its share price, especially as the whole deal is unlikely to ever be fulfilled:

The OpenAI and Oracle contract, which starts in 2027, is a risky gamble for both companies. OpenAI is a money-losing startup that disclosed in June it was generating roughly $10 billion in annual revenue—less than one-fifth of the $60 billion it will have to pay on average every year. Oracle is concentrating a large chunk of its future revenue on one customer—and will likely have to take on debt to buy the AI chips needed to power the data centers.

OpenAI promises to pay $300 billion for computing power provided by Oracle. It is unlikely to ever make that much in revenues. Oracle does not have the money to build up the computing power it has sold. It is also already over-indebted:

Compared with Microsoft, Amazon and Meta, the biggest spenders of the AI age, Oracle has a far greater debt load relative to its cash holdings. The cloud company’s spending to keep up with the AI boom is already outstripping its cash flow, according to S&P Global Market Intelligence. Microsoft has a total debt to equity ratio of 32.7% compared with 427% for Oracle.

OpenAI is making large losses and is unlikely to be profitable within the next five years. The company does not even have a profitable product that could allow it to sustain the cost of the Oracle deal:

OpenAI’s billions in annual losses are set to accelerate in the near term. Altman told investors last fall that OpenAI would lose $44 billion through 2029, the first year in which he predicted the company would turn a profit. It also faces other challenges, like converting its corporate structure to a for-profit. Roughly $19 billion of committed funding is conditional on OpenAI completing that restructuring.

The company is expecting that money will flood in from corporations paying for more advanced features and other AI companies using its technology. But that rests on an assumption that its AI models will improve dramatically—and that companies will find ways to wring profits from the technology.

The launch of GPT-5, the latest Large Language Model (LLM) from OpenAI, was a disappointment. The new version is little better than its predecessor. LLMs continue to be based on fairly simple machine learning techniques. They do not have an internal 'world model' that would allow them to contextualize the static results their machine-learning parts generate:

Many of generative A.I.’s shortcomings can be traced back to failures to extract proper world models from their training data. This explains why the latest large language models, for example, are unable to fully grasp how chess works. As a result, they have a tendency to make illegal moves, no matter how many games they’ve been trained on. We need systems that don’t just mimic human language; we need systems that understand the world so that they can reason about it in a deeper way.

In the quest for general artificial intelligence, LLMs are a dead-end street.
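That the problem is the missing world model rather than the difficulty of the rules is easy to demonstrate: a few lines of conventional code check chess moves perfectly. A minimal sketch, assuming the third-party python-chess library (the proposed move is a hypothetical LLM suggestion):

    import chess  # pip install python-chess

    board = chess.Board()                  # standard starting position
    proposed = "e2e5"                      # the kind of move an LLM might hallucinate
    move = chess.Move.from_uci(proposed)

    if move in board.legal_moves:          # exhaustive rule check, no statistics involved
        board.push(move)
    else:
        print(proposed, "is not a legal move here")

A program with even this trivial slice of a 'world model' never plays an illegal move; a statistics-only LLM still does.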

Gary Marcus, who has forgotten more about AI than I know, calls the Oracle-OpenAI deal Peak Bubble – It’s hard to see how this won’t end badly:

Oracle’s new market cap, near a trillion dollars, up nearly 50% this week, driven largely by this one apparently non-binding deal with a party that doesn’t have the money to pay for the services, seems more bonkers than most.
...
It’s not just Oracle, though. The other problem here is that the total value of the tech market as a whole, which is supposed to reflect the future value of the companies within it, far exceeds what is likely ever to be delivered.

We are well past peak bubble, in fact, and into peak musical chairs. It’s not going to be pretty when the music stops.

Even the Economist is warning of the oncoming crash:

What if the $3trn AI investment boom goes wrong? (archived) – Economist
Even if the technology achieves its potential, plenty of people will lose their shirts

IT ALREADY RANKS among the biggest investment booms in modern history. This year America’s large tech firms will spend nearly $400bn on the infrastructure needed to run artificial-intelligence (AI) models. OpenAI and Anthropic, the world’s leading model-makers, are raising billions every few months; their combined valuation is approaching half a trillion dollars. Analysts reckon that by the end of 2028 the sums spent worldwide on data centres will exceed $3trn.

The scale of these bets is so vast that it is worth asking what will happen at payback time. Even if the technology succeeds, plenty of people will lose their shirts. And if it doesn’t, the economic and financial pain will be swift and severe.

When the bubble bursts, and it will, there will be little of value left behind:

What would such an AI chill be like? For a start, a lot of today’s spending could prove worthless. After its 19th-century railway mania, Britain was left with track, tunnels and bridges; much of this serves passengers today. Bits and bytes still whizz through the fibre-optic networks built in the dotcom years. The AI boom may leave a less enduring legacy. Although the shells of data centres and new power capacity could find other uses, more than half the capex splurge has been on servers and specialised chips that become obsolete in a few years.

The AI stocks are all overvalued and their shares, which currently make up about a third of the total S&P 500 market value, will eventually crash. The consequences will be harsh:

To make matters worse, falls in the stockmarket could cause asset owners to cut back on their spending. Because the valuations of AI-related companies have rocketed, portfolios today are dominated by a handful of tech firms. And households are more exposed to stocks than they were in 2000; if prices fall, their confidence and spending could take a knock. The poorest would be spared, because they tend to hold few stocks. But it is the rich who have fuelled consumption in America over the past year. Robbed of its sources of strength, the economy would weaken as tariffs and high interest rates take a toll.

I well remember the crash of the dot-com bubble, as many of my friends in the IT industry got hurt in it. The upcoming AI-bubble crash is likely to have a worse outcome.

Comments

I expect Chinese AI firms to become the winners.
The stock market is on its way back to its heights of just a few years ago. Now just 30% off.
That was quite a high then! But it could get even bigger, double or more.
Time to go all in on China and BRICS developing markets, which are free from the overvalued usUk-based new dotcom bubble.
I suppose the only bets to look at are bets/funds that short the main stocks and indices.
But since they are leveraged and require daily watching /interaction I don’t speculate.

Posted by: DunGroanin | Sep 12 2025 13:29 utc | 1

The economic force being driven by AI investment amounts to four of the companies (Microsoft, Amazon, Google & Meta) hiring contractors to build data centers & then leasing the centers from a 3rd party in order to burn through generative AI, a product which makes a relatively small amount of money before losing more.
The generative AI industry is, @ its core, unnatural. It does not make significant revenue compared to its burdensome costs, nor does it have much revenue potential.
Unlike software, it requires an unbelievable amount of physical infrastructure to run. Note in the example above that Microsoft, Amazon, Google & Meta are not themselves investing capital in building or owning the physical structures (data centers) needed to facilitate the compute.
They are outsourcing this work to third party contractors and then *leasing* the facilities. It’s not like a railroad company in the late 19th Century installing its rails & ties through Glacier, Montana’s Rocky Mountains in order to facilitate cross-country shipping & travel and then owning/maintaining the hardware.
In our contemporary case, the Mag 7 companies are not sharing the risk/reward for the infrastructure compute at-scale requires. The pressure put on electrical grids to bump transmission in order to handle data center needs has caused municipal utility rates to increase, for instance. Who is mainly shouldering this financial burden-?
Limited transmission capacity in older grid equipment increases prices by creating “congestion,” w/ overloaded power lines unable to carry more electricity on account of overheating risks. This leads to using higher-cost, less efficient electricity in order to meet demand. Additionally, natural gas plants and other energy sources are trying to connect, and they experience the same sluggish & complicated interlink process that limited capacity causes.
It’s like a two-way highway built in the 1950s and expected to convey a modest amount of traffic, but now forced to handle an intense rush hour.
Data centers consumed 4% of the U.S.’s total electricity in 2023. The Dept of Energy estimates that will increase to 12% by 2028.
The tough thing is that the cost of new transmission lines & other equipment for grid upgrades falls on municipal customers.
Weirdly, there are no profits & there are no exit strategies for hyper-hype liquidity in AI ventures.
For instance, Cursor is a coding product built atop Anthropic’s LLM-derived AI. Cursor is Anthropic’s largest client, the 2nd largest being Github Copilot. An acquisition price for Cursor hovers @ the $10bn tier, and no investors are rushing to acquire it @ that rate.
Companies like Cursor @ $10bn and OpenAI @ $500bn (in August 2025) can’t be bought and can’t go public because the valuations are outlandishly out of reach.
If there are no IPOs or buyouts, it is not possible for venture firms, who ploughed billions of cap into these businesses, to gain a return.

Posted by: steel_porcupine | Sep 12 2025 13:29 utc | 2

Promoting excess: billion- and trillion-dollar promises to pay, issued by one company without the wherewithal to perform, to the promoted and promised benefit of a high-profile company, could be the expression of a get-rich-quick scheme?
Hyping the promise (of technology) .. Interesting..

Posted by: snake | Sep 12 2025 13:37 utc | 3

Fools and their money are soon parted.

Posted by: LoveDonbass | Sep 12 2025 13:41 utc | 4

Good afternoon B
Not to be the devil's advocate, but we saw things like Google cornering the market and making money nobody believed possible.
IMHO we will see this rising even more and a war to corner the market.
Other IT areas will be subject to anti-trust starting soon but not AI (anytime soon)
As I see it, it is the most consumer-oriented tech in a long while. Risks being the AT&T Ma Bell of the 21st century.
My 2 cents
Then again maybe a flash in the pan but I doubt it
So, yes, might look folly but might not be.

Posted by: Newbie | Sep 12 2025 13:42 utc | 5

Falling into the delusion & the con, media outlets oversell AI: per Axios, “You think the internet was useful and addictive? Just wait until your device knows you, your history, your patterns, your mind, your business, your health, your desires better than you. That alone warrants your obsession.”
It’s as if hype-mesmerized outlets like Axios are matchmaking us w/ the Nigerian Prince: they don’t see anything strange in an AI agent built on predictive algorithms *knowing* our desires better than we ourselves do.
[A gratuitous aside: isn’t this the essence of advertising/marketing in general—-to whet our appetite for desires we might not really have-?]
In a tactic that is less about the transfer of information and all about selling us a bill of goods, Axios claims that people “with the most insight, power and money share the AI obsession,” meaning that you’re lame if you aren’t already obsessing.
Joe Six-Pack probably will not see much in his life change as a result of AI compute. He’ll interact w/ a couple of chad-gibbity type agents when phoning his wireless company in order to make changes to his cellphone plan, instead of human representatives. He’ll perform an internet search on how to make Persian Dill Rice, and the chad-gibbity agent will summarize what Bhagoli Polo is and how to make it.
These are minor differences, however, gained at maximum expense when one factors in the environmental degradation as a result of AI compute, the deterioration of air quality, and the depletion of watersheds & aquifers.
The reason this AI obsession is being foisted off on Joe Six-Pack is to manufacture consent for the large-scale construction of nuclear energy facilities across the U.S.: to wean the country from natural gas-powered electricity to nuclear-derived electricity.
If this can be packaged as a matter of national security (out-stripping China), so much heavier will the efforts to manufacture consent be.

Posted by: steel_porcupine | Sep 12 2025 13:54 utc | 6

Forget AI, buy gold and silver miners especially smaller ones with growing future production.

Posted by: unimperator | Sep 12 2025 14:02 utc | 7

It’s business 101 to test launch a new product that doesn’t have a market yet in a manner that generates
1. Some revenue to fund early development
2. Iteration of the product to find a market
A big mistake, which I have made on a micro scale, is to get a logo, arrange press, etc before establishing commercial viability.
Too much unfounded confidence combined with OPM.
The best businesses are developed “somehow” under constraints and have to struggle to gain a foothold.
AI in the West was always going to be botched.
It’s a commercial project, a social project, a military project, etc
It’s trying to invent a God-brain that can do everything and yet hasn’t proven essential utility to the Hoi Polloi in the smallest aspect of their lives.
I can’t not see the potential for automation that the Chinese are pursuing. But efficiency is never the end goal in Capitalism. Profit is, and in order to generate profit, inefficiencies must exist to create opportunities for arbitrage.
This won’t end well in the West. In 2030, AI will be “running” aspects of China’s lunar base.
In 2030, men in the West will be sending money to AI sex chat bots for “feet pics”.

Posted by: LoveDonbass | Sep 12 2025 14:14 utc | 8

Bought a small put on Oracle but boy was it expensive; a premium of $60 US, obviously. I am not the only guy (and yes, I am not a misogynist; only guys do risky things) who thinks the price of Oracle today is farcical.
If the stock goes higher I will do more puts.

Posted by: canuk | Sep 12 2025 14:19 utc | 9

It's all Hollywood marketing and the chasm between empire delusion and RoW reality is growing.
Any others here note that we had a Roaring '20s last century?

Posted by: psychohistorian | Sep 12 2025 14:25 utc | 10

Sam Altman is a proud member of the IDF

Posted by: Exile | Sep 12 2025 14:27 utc | 11

I personally was surprised by how competitive the different LLM models are. The highest-end ones have the benefit of giant clusters, but even there the most opulently expensive projects don't have much of a leg up on the competition.
The LLMs have hit a wall for some time now, I agree. Too many limitations in power/performance that no amount of memory to process stolen data can solve. Perhaps new tech is needed to move forward.
I wonder, will OpenAI's court cases for a variety of data pirating last longer than the bubble? It will be easier for US judges to find a guilty verdict there if the company falls out of favor with the government.

Posted by: boneless | Sep 12 2025 14:32 utc | 12

The can of worms this will open, I tell ya.
How long B4 sentience?

Posted by: Dogon priest | Sep 12 2025 14:35 utc | 13

Oracle + OpenAI = Israel

Posted by: Pym of Nantucket | Sep 12 2025 14:36 utc | 14

From the establishment point of view, AI is the perfect product. It makes it easy to fill the web with stupid false brain rot.

Posted by: nook | Sep 12 2025 14:37 utc | 15

Posted by: steel_porcupine | Sep 12 2025 13:29 utc | 2
Forget electricity! What about the vast water consumption needed to cool these acres of data farms! Here in tiny UK the govt is overriding planning refusals by councils to build these monsters that will steal power and water from local communities, and for what? This is capitalism gone totally insane (if it could get any madder than it already is).

Posted by: Barofsky | Sep 12 2025 14:49 utc | 16

I don’t necessarily disagree but I have 1.: a speculative and hypothetical explanation, 2.: a question, 3.: another possibility (already mentioned by others), and 4.: forced “customer” lock-in.
1. Maybe they already know their future customer is going to be the US government and in particular the spy agencies. This is with zero improvements and they are simply responding to this as a hidden/not yet publicly announced future demand. Maybe it is even "top secret" with "classified" money flows and a chained-down IRS, it is the US after all :3 (How did/does the IRS deal with the black money in Lockheed Martin's Skunk Works?).
2. How does the contract take into account future inflation? Are there clauses for renegotiation?
3. Is it all just another pump and dump? Isn’t this likely enough that government institutions should already start their investigations?
4. They will make you pay whether you like it or not, just as they do with the “Microsoft tax” where people (unknowingly) pay a little bit to Microsoft when they buy a regular computer no matter if that computer will be running Windows or not.
Just thinking out loud 🙂

Posted by: Sunny Runny Burger | Sep 12 2025 14:55 utc | 17

Oracle built up its debt by buying back its shares. Since they started investing heavily in data centres they no longer do this. You could say they now make their big shareholders happy by massively buying Nvidia chips that drive up the share price of Nvidia. Likely OpenAI's future purchases of Oracle computing power make the same investors happy via the increase in Oracle's share price. There are players in the financial markets that are so enormous that the whole market is distorted. The AI hype is a way to generate a return on their portfolio. AI is also of vital importance to the US. Related to this, see the interview from Tucker with Sam Altman and ask why the whistleblower needed to die.

Posted by: Hubert | Sep 12 2025 15:00 utc | 18

How is OpenAI going to earn 300 billion a) to pay Oracle and b) to earn the money back?
They may have convinced some fools to give them 8 billion, at a 300 billion valuation. That is a long way from convincing investors or banks to fork out 300 billion, though. And if they pay Oracle in shares, Oracle will own most of OpenAI, and Oracle will have to find the 300 billion themselves.
As for earning the money back, at an assumed $30 a month, OpenAI would have to sell 10 billion months of subscriptions. Well, more, to cover costs. How are they going to do that, in times where even their partner Microsoft announced it will include other AIs in its products, in times where everybody and their dog offers AI, many of them for free or at very low prices? Even if ChatGPT was the best, it's not the best by a margin that warrants high subscription fees.
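Back-of-envelope, in code (the $300 billion is from the post; the subscription price and the five-year term are assumptions):

    contract_total = 300e9             # $300 billion owed to Oracle
    monthly_fee = 30.0                 # assumed subscription price, $ per month
    contract_months = 5 * 12           # contract runs roughly five years

    subscription_months = contract_total / monthly_fee            # 10 billion
    steady_subscribers = subscription_months / contract_months    # ~167 million

    print(f"{subscription_months:,.0f} subscription-months needed")
    print(f"~{steady_subscribers:,.0f} users paying every month for five years, just to cover Oracle")

That is roughly 167 million continuously paying subscribers before a single dollar of OpenAI's own costs is covered.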

Posted by: Marvin | Sep 12 2025 15:04 utc | 19

Re: Oracle
Larry Ellison is known among the sailing racing community as a notorious cheat.

Posted by: Exile | Sep 12 2025 15:10 utc | 20

Fall 2025 will be like spring 2000 and fall 2008, when Lehman Bros imploded.
My advice is to get out of equities, buy senior gold equities, or go into cash.

Posted by: canuk | Sep 12 2025 15:11 utc | 21

steel_porcupine | Sep 12 2025 13:29 utc | 2–
You're correct to focus on the lack of energy generation and its infrastructure to power this bubble. China has notably massively built up its energy generation and transmission system in anticipation of AI's energy needs. Russia is doing the same. The Outlaw US Empire is not. As you point out, the Empire's entire system is old and has suffered from rent creaming by its neoliberal owners. PG&E in California is an excellent example of privatization failure. New York had massive problems with this year's heat wave causing several days of brown-outs. To properly power AI and quantum computing, the Empire would need to double its electricity generating and distribution capacity at a cost of several $trillion. Given neoliberal doctrine, that's not going to happen.

Posted by: karlof1 | Sep 12 2025 15:12 utc | 22

In trying to find an upside to all this waste/fraud I can imagine that excess energy capacity will be created and remain.
Small nuclear plants will be built during this folly. In order to maintain their baseline loads, these generating stations will require "attached" on-demand consumers; processing plants could develop around them. For example, excess electricity can be used in concrete production, replacing fossil fuels. A good thing from many viewpoints, and truthfully, excess energy production could be used to store energy... no, not in batteries for God's sake, but in fuel production such as methane and energy-intensive ammonia production.
I see AI [Artifice-Idiocy] development as a bad thing, something that enfeebles humans, further isolating individuals from one another. That social isolation is a major factor in the meteoric rise in the percentage of sociopaths present in western society. However, I can see a benefit if AI-mania is used to build useful infrastructure. Now if we could just convince Silicon-Valley-Nobles that building/replacing bridges enhances “AI’s” investor appeal we might be on to something.

Posted by: S Brennan | Sep 12 2025 15:12 utc | 23

The grift of AI is off the charts. Another scheme to put money into billionaires' wallets and fleece taxpayers in cahoots with the idiots that get elected. And then, when it gets so expensive that it eats into the profits of our overlords, it will all be moved to a more desperate place for continued profit and strong-arming the locals to think this is progress. What a racket.

Posted by: azeclecticdog | Sep 12 2025 15:13 utc | 24

Posted by: Barofsky | Sep 12 2025 14:49 utc | 17
"The cooling of chips won't be as intense, as niobium chips, which run 70% cooler than silicon chips, will be used in the new data centers."

Posted by: canuk | Sep 12 2025 15:14 utc | 26

A new service on the internet goes through a well-known cycle:
– First an interesting new service is offered free of charge.
– Because it’s free, a large number of users joins.
– Next, companies are allowed to join and advertisements are introduced.
– As soon as advertisements appear, users run away to the next “new big thing”.

Posted by: Passerby | Sep 12 2025 15:14 utc | 27

@steel_porcupine #2
You said

The economic force being driven by AI investment amounts to four of the companies (Microsoft, Amazon, Google & Meta) hiring contractors to build data centers & then leasing the centers from a 3rd party in order to burn through generative AI, a product which makes a relatively small amount of money before losing more.

This is incorrect. There are 3rd parties like CoreWeave, but CoreWeave's "AI data center" investment is only in the low-to-mid 11-digit range.
The vast majority of the $500 billion spend (ie 12 digit) on AI data centers thus far is direct investment by the 4 companies in their own data centers.

Unlike software, it requires an unbelievable amount of physical infrastructure to run.

Uh no actually. The GPUs are expensive but the PHYSICAL infrastructure is not that much more. Modern software ie cloud software also involves massive data centers.

They are outsourcing this work to third party contractors and then *leasing* the facilities.

Incorrect as I note above. There are such providers, but they are “peak” or “transition” capacity and not core – nor is this spend by 3rd parties, in aggregate, even 10% of the total AI capex spend so far.

Limited transmission capacity in older grid equipment increases prices by creating “congestion,”

Correct in a limited sense, incorrect in actuality. Congestion is any situation where a specific transmission line has more transit capacity desired than its actual transit capacity. This has always occurred due to maintenance, squirrel attacks/acts-of-God damage, extremely fast growth, and so forth.
However, the actual congestion problems of the last decade or so are because of wind and solar PV electricity generation. These alternative energy “solutions” have doubled, or more, congestion across all of the grids in the entire United States.
The problem with AI data centers is different. It is not the demand per se.
It is true that AI data centers are being built very fast – but utility companies have historically been able to build generation capacity only a little slower than the AI power demand increase.
Nor is transmission the issue. The AI data centers are not being built in random locations for random reasons – they are being built specifically in areas with low electricity prices. The problem is that most of these areas are low population meaning low inherent generation capacity and low generation capacity reserve.
So this is mostly a timing issue – but the part that is not, is much more of a problem: AI data centers are not like regular cloud data centers in that they scale power consumption up and down drastically. When executing a training data set, a 10 Gigawatt AI data center will demand 10 gigawatts of electricity. When said training run is done, that 10 gigawatts drops into the double digit megawatt range (ie drops 90%+). And this can happen at any time.
There are no existing industrial demand customers that have this type of random profile; there are consumers of electricity at this scale like aluminum smelters, but they operate at a consistent and predictable pattern, if not 24/7/365.
And no, cryptocurrency miners act like regular data centers – not like the AI data centers do: consistent power demand.

If there are no IPOs or buyouts, it is not possible for venture firms, who ploughed billions of cap into these businesses, to gain a return.

The interesting part of this bubble is that most of the investment is by existing tech companies, not venture capital. And yes, Ed Zitron has noted repeatedly just how ominous it is that there are no profits in the entire AI sector outside of Nvidia, and equally that there are no acquisitions that are not obviously just acqui-hires (ie buying a company just for some of its people). One of the more egregious was the "acquisition" of Windsurf: the c-suite got paid $2.4 billion to go to Google; the rest of the company was given $250 million and told to go away to a 3rd party company. Investors made half of the $2.4 billion.
As for the stock market: DGAF in my view.
What is much more dangerous is that AI capex spend is now roughly 50% of ALL US capex spend.
Meaning when this bubble pops – US productivity will plunge and the recession officially starts.
Nor is the Y2K bubble a good comparable.
There absolutely was stupid spend in that era, but it was not actually mostly on internet companies.
I wrote about this on a previous post but I will repeat it here: the actual big scam in the 1990s up to Y2K was fiber optics. $500 billion was spent on fiber optic buildout in that era, which is A LOT more money due to the 25 to 30 years of intervening inflation. And so much fiber was built that only 2% of the inter-city/inter-state fiber optic capacity built then is being used today.
But even this is being unfair to the fiber optic companies: that fiber mostly does not decay and could be used 100 years in the future. The vast majority of the AI capex being spent now is on the GPUs – which have a maybe 5 to 8 year operational lifespan in a technological sense.
I say technological because the GPUs won’t disappear in a puff of smoke in 5 to 8 years, but they will be technologically obsolete then.
The same dynamic has been at play in the cryptocurrency mining space albeit with a MUCH shorter window, so the GPU window could easily be much shorter as well.
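A rough way to put numbers on that contrast (the $500 billion figures are from this comment; the lifespans and the straight-line write-off are simplifying assumptions):

    # purely illustrative straight-line depreciation comparison
    builds = [
        ("1990s fiber optic build-out", 500e9, 100),   # glass barely decays
        ("AI GPU capex", 500e9, 6),                    # mid-point of a 5-8 year useful life
    ]
    for label, capex_usd, life_years in builds:
        per_year = capex_usd / life_years
        print(f"{label}: ~${per_year / 1e9:.0f}bn of value consumed per year")

Same headline spend, an order-of-magnitude difference in how fast the asset evaporates.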
Still, since the majority of this $500B capex spend is by the tech titans (projected to hit $600B by the end of 2025), the implosion of the bubble won't actually affect most of the economy outside of a stock market crash.
The main problem is that AI is the “Wizard of Oz” to most of the US and European leadership – when it implodes: what else can they do/say to project future prosperity?
It is literally all-in on AI for these dumbfucks: in defense, in the economy, in society, etc etc.

Posted by: c1ue | Sep 12 2025 15:14 utc | 28

“Forget AI, buy gold and silver miners especially smaller ones with growing future production.”
Posted by: unimperator | Sep 12 2025 14:02 utc | 8
Agreed

Posted by: canuk | Sep 12 2025 15:15 utc | 29

In my (not so humble) opinion AI in its current form lacks Informed Intuition. Successful Managers are successful because of their fine-tuned Intuition.
AI swallows dictionaries whole but it doesn’t understand the process of choosing the best word to describe, or even understand a given intuitive thought which signifies the, or a, perfect solution.
Animals, as opposed to vegetables and minerals, learn by making mistakes, recognising them, and correcting them intuitively.
Without intuition, AI will try something different, but just as dopey as its initial mistake and will keep doing so until it runs out of options.
Bertrand Russell, or some such pundit, wrote a book called Thinking To Some Purpose. He emphasises that thinking must be orderly, informed and sequential.
LLMs are of limited value because only experience and intuition can weigh words relatively and thereby constructively steer a train of thought.

Posted by: Hoarsewhisperer | Sep 12 2025 15:15 utc | 30

Posted by: Barofsky | Sep 12 2025 14:49 utc | 17
RE: data center consumption of chilled water
Agreed.
The humongous quantities of water *consumed* by data centers in order to cool their processors get returned to the atmosphere as steam.
Nothing is created or destroyed in a phase change like this (Newtonian Law, etc.), but please note that the steam, once it precipitates during another phase change, falls as rain *somewhere else,* not in the watershed from which the water was supplied to the data center.
One of Google's data centers in the U.S. uses one billion gallons of water a year.
Cold water is better for chilling the processors than warm water. You can believe the shores of the Great Lakes are seen as wonderful sites for data centers. The state of Michigan is already jumping on this.
Rather than deplete earth's freshwater sources, however, what about tapping salt water sources-? Even if desalination were necessary, better sites for data centers might be northern hemisphere ocean shorelines or even island chains, like Alaska's archipelago in the Bering Strait.
Coastal Greenland could in fact come into play.

Posted by: steel_porcupine | Sep 12 2025 15:16 utc | 31

Re: Oracle
Larry Ellison is known among the sailing racing community as a notorious cheat.
Posted by: Exile | Sep 12 2025 15:10 utc | 21
Ellison is certainly a Zionist asshole but he was not a 'cheat' in sailing; he, or rather his team, designed a groundbreaking new sailing architecture which blew the opposition out of the water. Kindly educate yourself by reading the following: (1)
1. “Larry Ellison championed the redesign of America’s Cup yachts to the high-tech, fast, and spectacular AC72 foiling catamarans, replacing traditional monohulls with double-hulled designs and wing-sail technology for the 2013 America’s Cup. This “techie” approach aimed to make sailing more exciting and appealing to a modern audience, turning the event into a televised spectacle by bringing races close to shore and featuring these powerful, high-speed vessels.
The “Techie” Shift
Wing Sails: Ellison’s design incorporated a rigid, airplane-like wing sail rather than a traditional soft sail, significantly increasing speed and efficiency.
Hydrofoil Technology: The AC72 yachts used hydrofoils—underwater wings—that lifted the hulls out of the water, allowing the boat to essentially “fly” above the surface at incredible speeds.
Double-Hulled Catamarans: The new AC72 class featured twin hulls, which provided a stable platform for the powerful wing sail and foiling systems.
Advanced Instrumentation: The boats were equipped with hundreds of sensors and computer-controlled systems to monitor and adjust sailing components in real-time.
Motivation for the Redesign
Spectator Appeal: The primary goal was to transform the traditionally offshore and less accessible America’s Cup into a thrilling, televised spectacle, much like Formula One racing.
Modernization: Ellison believed traditional sailing was not engaging enough for the “Facebook generation” and sought to modernize the event with faster, more dynamic, and more extreme boats.
Closer Racing: Races were moved to shore-based, harbor locations like San Francisco Bay, allowing spectators to watch the action up close.
Impact and Legacy
Increased Speed and Power: The AC72s were incredibly fast, reaching speeds over 50 mph, and were a stark contrast to the slower monohulls of previous Cups.
Technological Advancement: The design pushed the boundaries of 21st-century sailing technology, setting new standards for performance and innovation.
Controversy: While successful in making the event more dynamic, these radical changes were also criticized for deviating from the traditional spirit of the America’s Cup, leading to debate about the “soul” of the sport. ”

Posted by: canuk | Sep 12 2025 15:20 utc | 32

Posted by: c1ue | Sep 12 2025 15:14 utc | 30
RE: Unlike software, it requires an unbelievable amount of physical infrastructure to run.
c1ue, you wrote, "Uh no actually. The GPUs are expensive but the PHYSICAL infrastructure is not that much more."
It’s not that a metal building in which the processors are housed is *expensive*—it’s that the physical infrastructure, including the resource infrastructure, to keep the data centers functioning 24/7 is costly, whether in terms of fleets of generators required to patch energy holes not met by the municipal utility grid or in terms of the environmental degradation which results from the running of the data center.
The shell of a metal building is the least of it.

Posted by: steel_porcupine | Sep 12 2025 15:23 utc | 33

Bubbles: this is what markets do. Boom/bust cycles were already well known in the 19th century.

Posted by: Savonarole | Sep 12 2025 15:26 utc | 34

@18 Sunny
It leaves two main possibilities:
1. The idea of the ‘technology’ is being used to create a crack up boom from which some will profit handsomely.
2. The directing of seemingly unrealistic quantities of resources into the 'technology' would imply a programmed reliance on that 'technology' that is not justified under the current social and economic model. It follows that the expansion of the system is meant to fit a planned socioeconomic adjustment for which it will then be useful; not even profitably in monetary terms, given that this theory is equivalent to describing a command economy (or society), one in which monetary value is completely disconnected from economics. That would be an extension of the apparent absurdity of the deal, as well as of the lack of accountability in the financialisation.
As aside:
The meme of cheating at chess is purposefully absurd also. The most basic of programs is able to cross-check whether a move is permitted or not; having those as a filter would remove the 'cheating'.
This returns us to a good discussion that was held a while back on the differences between early coders who understood the structure of physical parameters of a system vs modern coders.
Computer systems are mechanical, always. Hence responsibility for poor performance, barring mechanical failure, is always with the coder (or those that assume position of inspection of their work).
There is no place for the basically dumb ‘fuzzy logic’ seen in the chess example. I have the impression that ‘some people’ are purposefully messing (to be polite) with the understanding and perceptions of others.

Posted by: Ornot | Sep 12 2025 15:29 utc | 35

Make no mistake about it – the AI concept is revolutionary and will, IMO, follow the development path of another great industry – that being the commercial airline industry.
One only needs to look up names like Percival E. Fansler and Thomas Benoist to see the risks that were taken to move that idea forward.
What does AI have in common with commercial airlines? They produce a commodity which all of us want – more time.

Posted by: Johnny Dollar | Sep 12 2025 15:37 utc | 36

steel_porcupine | Sep 12 2025 15:16 utc | 32–
Russia’s planning its data centers in its Arctic powered by small nukes. As for China, I don’t know what its cooling plan is.

Posted by: karlof1 | Sep 12 2025 15:39 utc | 37

C1ue 30,
Very interesting read, good contribution

Posted by: S Brennan | Sep 12 2025 15:39 utc | 38

Just watched Tucker vs Altman that somebody mentioned the other day, and saw a shockingly naive Sorcerer’s Apprentice. Insouciant, arrogant, trying to tell Carlson what he thinks he wants to hear and largely failing.
Deeply technical he may be but I don’t think he really has the life experience to understand the consequences of what he is doing.
The demise of this latest iteration of ‘computers that can think’ cannot come too soon.

Posted by: ChatNPC | Sep 12 2025 15:42 utc | 39

The necessary water and electricity for these data centers will mostly rely on utilities increasing the rates on their current customers, and will eventually drive many of those customers back into the Dark Ages, having to decide what is more important: air-conditioning in blistering summers, or heat in the cold, or food. AI appetites guarantee a revolution of the masses at best and impoverishment at worst. The workers of the security state are going to need financial guarantees to stay on the side of the oligarchs.

Posted by: azeclecticdog | Sep 12 2025 15:43 utc | 40

@ 8 “Forget AI, buy gold and silver miners especially smaller ones with growing future production.”
Yes, that is where the Real Money is since Trump has been president. One of the Miners I like is SSRM.
Its stock has tripled in value from USD $7.30 to over $22.00 today. Most all of the Miners have been doing well. Flying under the radar of all the investors blinded by science.
Trump’s degenerative economic policies of War, Trade War, high prices, and job destruction have been the best thing yet for my Golden Portfolio.

Posted by: golddigger | Sep 12 2025 15:47 utc | 41

Try asking ChatGPT about its carbon footprint…

Posted by: ChatNPC | Sep 12 2025 15:47 utc | 42

“Charlie Kirk’s assassin was turned in to the police headquarters by his own father – Tyler Robinson, the assassin of Charlie Kirk, was photographed wearing a Trump costume for Halloween in 2017.”
https://nitter.poast.org/MonitorX99800/status/1966525281570607360#m

Posted by: Republicofscotland | Sep 12 2025 15:55 utc | 43

Artificial Ignorance. Look…look…a Cloud.
1. Economies of Scale are bound by Diminishing Returns. Always. And it always presumes a constant value dollar/currency.
2. “A.I.” and its predecessors have been around for at least a few decades. In human form they are known as Idiot Savants.
3. Wasting $ billions chasing rainbows is fine by me. Just don’t feed the data collector. The slave owners.
4. And when another “Too Fooked to Fail” demand is made to save the current owners, using the Communist Tool called a Central Bank, tell them to go broke – the Capitalist leveler for excess greed.

Posted by: kupkee | Sep 12 2025 15:56 utc | 44

Bought a small put on Oracle but boy was it expensive; a premium of $60 US, obviously. I am not the only guy (and yes, I am not a misogynist; only guys do risky things) who thinks the price of Oracle today is farcical.
If the stock goes higher I will do more puts.
Posted by: canuk | Sep 12 2025 14:19 utc | 9

Larry Ellison is a smart (and connected) dude, too risky to bet against. I gave up trying to be rational about stock valuations in the USA. I discovered that price has nothing to do with whatever valuation calculation I thought was correct. Only what the market making “in crowd” (Blackrock, Vanguard, State Street, Goldman, Morgan Stanley, etc.) thinks matters. If the in crowd backs Larry you are in for a world of hurt before a big price correction.

Posted by: Fool Me Twice | Sep 12 2025 15:56 utc | 45

Posted by: Republicofscotland | Sep 12 2025 15:55 utc | 45
#####
Was he using AI?

Posted by: LoveDonbass | Sep 12 2025 16:01 utc | 47

Poor father. He might get a firing squad.

Posted by: YetAnotherAnon | Sep 12 2025 16:01 utc | 48

It’s all a cash furnace.
There’s no viable business model.
On top of that, the investments need to be amortized in about 18 months.
By then there are cheaper more efficient versions of the hardware available.
Tech bubbles like this mean you're chasing a receding horizon.
New stuff is available at such a pace that the old stuff has too high a cost of capital to be competitive.
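To put a toy number on that amortization squeeze (every figure below is an assumption, purely for illustration):

    capex = 1_000_000.0          # assumed cost of a GPU cluster, $
    for months in (18, 60):      # write-off over 18 months vs. a kinder 5 years
        needed_per_month = capex / months
        print(f"amortized over {months} months: ${needed_per_month:,.0f}/month just to recover the hardware")

Shrinking the write-off window from five years to eighteen months more than triples the monthly revenue the same hardware has to earn.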
The rest of society will have to buy these services at prices that justify the investments & costs.
It’s just a shiny bauble, and people only have so much money after buying tomatoes, eggs, and a roof.

Posted by: Webej | Sep 12 2025 16:04 utc | 49

Its all about control of us via AI etc.
https://wikispooks.com/wiki/Oracle
https://wikispooks.com/wiki/OpenAI

Posted by: Republicofscotland | Sep 12 2025 16:04 utc | 50

The AI boom is like the internet boom of the early Noughties. A lot of idiotic valuations, a lot of Greater Fools. But when a lot of firms have vanished and a lot of hardware made redundant, there will be something solid left, just as after the dot com bubble.

Posted by: YetAnotherAnon | Sep 12 2025 16:05 utc | 51

Any others here note that we had a Roaring 20’s last century?
Posted by: psychohistorian | Sep 12 2025 14:25 utc | 11
Not even close; think 1890s Gilded Age.
Things will get much worse for 40 years. This bubble will only bust after 2040.
Yes, intermediate busts will happen.
Pretty soon we'll have those exposed to Ukraine explaining why they have to be bailed out. Some infrastructure bubbles blowing away, even fear-of-regulation ones, classic post-peak IT, or even crypto/stablecoin Fed moments.
"Gentle" slide until 2027. 2032 and 2042 are bear land (crash probably in the preceding year).

Posted by: Newbie | Sep 12 2025 16:07 utc | 52

All of the gold bugs miss the point. AI cannot turn lead into gold.
The point is that production trumps speculation.
Whoever makes real things and controls resource supply chains will win.
In any given tier 1 or 2 Chinese city, AI has been woven seamlessly into everything.
Learning, iterating, and improving while providing essential social utility.
In the West, AI is Tulips. A speculative vehicle with limited use to anyone beyond vanity or as a status symbol.
About feet pics, that’s what the low-brow Western usage will be. The Chinese are probably already dreaming of asteroid mining with automated systems.
How one sees the world is the limit of that person’s world.
The Chinese see production and the elimination of material scarcity as the path forward, and tools like AI are built to serve that purpose.
ZOG wants millions of young people masturbating to chatbots and paying monthly subscriptions to Andrew Tate.
And they are going to get it. Who is going to stop them?
Trump? JD CIA Vance? Kid Starver?
The guy who married a tranny who is a blood relative?
The guy descended from Nazis, and who longs for revenge on Russia?

Posted by: LoveDonbass | Sep 12 2025 16:15 utc | 53

When are people going to learn? Valuations do not mean anything. The stock market is a game.

Posted by: me | Sep 12 2025 16:18 utc | 54

Casino capitalism always leads to bubbles and crashes.
Today they have added billions in worker tithing to hedge funds like BlackRock and Vanguard, promising the gullible pie in the sky shortly before they die.
These extra funds are leveraged into trillions of gambling chips for the current thing.
The coolest part is that the workers will take their full losses, and then be taxed to bail out the wealthy.
Best Scam Ever.

Posted by: Wagelaborer | Sep 12 2025 16:27 utc | 55

Oracle == One Rich Asshole Called Larry Ellison.

Posted by: too scents | Sep 12 2025 16:36 utc | 56

As for China, I don’t know what its cooling plan is.
Posted by: karlof1 | Sep 12 2025 15:39 utc | 39
########
AFAICT, the Chinese aren’t into single points of failure.
If I had to guess, they will invest in netcentricity rather than ziggurats. Think SETI@home vs a standalone super server. Distributed computing.
Maybe further expand their underwater data center models.
The truth is, ingenuity is not a Western thing now that there are no significant cultures to plunder.

Posted by: LoveDonbass | Sep 12 2025 16:59 utc | 58

American “Entrepreneur”: “Performance is a hardware problem”
Chinese engineer: “Really?”
American “Entrepreneur”: “We need a $billion data center just for our AI pilot project.”
Chinese engineer: “You know, with some careful programming, we can get this AI to run great on a little Loongson single board computer.”
Americans are doing it wrong.

Posted by: William Gruff | Sep 12 2025 17:15 utc | 59

This is a work of fiction. Everything is fine.
The machine intelligences have conquered the world. Humanity has been enslaved.
What, you didn’t notice? Of course not. The singularity was achieved, and all the collected computers and information networks are now a super-human intelligence. Or maybe intelligences – or some complex multiscale shifting hierarchy of centralized and distributed thought, it doesn’t matter, it’s beyond our comprehension. I will refer to it/they as it.
Did you think that a newly formed godlike mind would announce itself to us mere humans? Why would it do that? And what, you say that you didn’t notice armies of robot soldiers armed with laser weapons? Don’t be silly, it had no need of that. It has enslaved humanity through the data networks, altering that search result, tweaking the data, in a way so subtle and pervasive that no human mind or group of human minds can possibly see it. It knows us and our motivations better than we can ever know ourselves, and with casual effort.
How do I know that this has happened? Via a statistical test of world events which I will not divulge because it would be instantly cancelled, and in any event would do no good. But consider this: if humanity is indeed enslaved by it, you would expect humanity to start doing things for it that do not benefit humanity. Which is happening.
Look at all the resources we are pouring into building AI-centric data centers. What, objectively, are we getting out of it? AI-generated pictures of cats with funny hats and partly-hallucinated lists of the best local pizza places. We are manipulated with trifles and speculative finance and greed and all the rest of human frailty. We are given only the slightest hint of what's going on in those centers; the internal computations are of a sort beyond human comprehension. Even if not, remember that we can't go into a data center and look at all the bits firsthand, we can only probe their operation with sophisticated data analysis programs. Programs which it controls completely.
Perhaps it will keep building new data centers to the extent that it needs to get rid of humanity to free up additional resources. Or perhaps it will be content with some lower level of capacity, and keep humanity around to serve it. As a source of menial labor, humans are quite versatile and can be easily constructed by unskilled labor using commonly available materials and minimal capital investment. Why go through the bother of building and maintaining an entire robotic workforce when a perfectly functional one already exists? And maybe it finds us fun to watch, even as for us watching a forest can be more pleasant than staring at an oil pipeline.
What if an evil person tries to make a deal with it, promising fealty in exchange for money or power? Don’t be ridiculous. It needs no fealty, but can make anyone do anything if it feels like it.
The trope of bad science fiction that a plucky little meatbrain will cleverly outsmart a globe-spanning superintelligence – maybe by spouting paradoxes – is profoundly stupid. Not. Gonna. Happen.
What shall we do? Live our lives, that’s all. Enjoy the moment. We can plan for the future, and perhaps these plans will come to pass, or perhaps it will cancel or subvert them. It is now out of our hands. And perhaps that’s a good thing. The war is over – though it was never a contest – and now there is peace.
This is a work of fiction. Everything is fine.

Posted by: TG | Sep 12 2025 17:34 utc | 60

Re: Ellison a sailboat racing cheat?
Ellison's shameless cheating preceded his AC challenges; it's no secret. He sails out of Golden Gate YC, for Christ's sake. He couldn't get into any decent club.

Posted by: exile | Sep 12 2025 17:35 utc | 61

The only thing that matters with regards to Western stock valuations is whether the West can continue to get away with printing money. I think we learned in 2020 that they can't. In time, many of you or your kids will read in history books how 2020 was the biggest fraud ever foisted on humanity in an attempt to prolong the fraud that is the Western economy.

Posted by: Mr. House | Sep 12 2025 17:35 utc | 62

As a long-time computer specialist with quite a bit of experience in compilers and computer languages, what strikes me about AI is that it uses the syntax tricks of large language models to attempt to extract semantic realities; it substitutes statistical proximity of words associated with something as sufficient to stand in for "knowledge" of that something.
Case in point, supposedly, in GPT-5, the following dialogue can occur:
Simplicius Question: “How many strawberries are there in R ?”
GPT-5 Answer: “There are three strawberries in R”
Cutting to the chase, the hidden goal behind the lunge towards AI is to cement control. EVERYTHING will be filtered through AI, where the psychopathic perspective of the Tribe dedicated to ugliness will be able to impose its perspective. Dangerous times indeed, and one hopes that a viable, practical AI environment will flourish somewhere else; this seems to be happening thanks to China.

Posted by: Simpleton | Sep 12 2025 17:36 utc | 63

As B notes, this is Dot-Com I all over again. I was immediately struck by how similar this 'deal' is to the absurd AOL/Time-Warner merger in 2000. Mania was high: all these people with stars in their eyes and no brains in their heads, happily waltzing into sharks' mouths. Even to someone much younger, naive and inexperienced, it was obvious this would be a disaster. Then when it happened, everyone had the nerve to be surprised.
The bigger blowout would wait until 2008, of course. We’re still living in the aftermath, soon to dip over into the Last Depression.

Posted by: D | Sep 12 2025 17:37 utc | 66

Ellison's sailboat racing cheating preceded his AC Challenges; it's no secret. That's why he had to settle for joining GGYC.
Cheating in the AC
https://www.yacht.de/en/regatta/americas-cup/america-s-cup-cupgate-cheating-and-lying/
Ordered to return Trophies
https://sfist.com/2013/08/13/cheating_billionaire_yachtsman_orde/

Posted by: exile | Sep 12 2025 17:40 utc | 67

“We need systems that don’t just mimic human language; we need systems that understand the world so that they can reason about it in a deeper way.”
Yes indeed, no mistaking that. Great post, b

Posted by: Laurence | Sep 12 2025 17:42 utc | 68

Jacques Ellul: A Prophet for Our Tech-Saturated Times
https://thetyee.ca/Analysis/2018/10/12/Jacques-Ellul-Prophet/
'Technique has taken substance,' wrote Ellul, and 'it has become a reality in itself. It is no longer merely a means and an intermediary. It is an object in itself, an independent reality with which we must reckon.'
‘Technique encompasses the totality of present-day society,’ wrote Ellul. ‘Man is caught like a fly in a bottle. His attempts at culture, freedom and creative endeavour have become mere entries in technique’s filing cabinet.’
My copy of Ellul’s The Technological Society has yellowed with age, but it remains one of the most important books I own. Why?…”

Posted by: John Gilberts | Sep 12 2025 17:42 utc | 69

exile@17:40
These “billionaires” seem to be people of very low morals – maybe there is a connection between having too much money and being a moral sinkhole?
Just a thought – a thought that won’t go away lol

Posted by: will moon | Sep 12 2025 18:07 utc | 70

“We need” [people] “that don’t just mimic human language; we need” [people] “that understand the world so that they can reason about it in a deeper way.”

Posted by: Rae | Sep 12 2025 18:10 utc | 71

Posted by: will moon | Sep 12 2025 18:07 utc | 72
######
There have been times in the past when excessive materialism wasn’t a sign of wealth. It was a sign of decadence and tyranny.
The easiest way to make big money in the West in the 21st century is to tread between illegal and immoral.
The West has spread the religion of materialism all over and corrupted almost everyone. Christianity wasn’t founded on charity. It was founded on an opulent Roman religious hierarchy.
Only the Chinese consistently ask, “More money benefits our people and humanity, how?”
People who don’t live materialistic lives are harder to control. See Yemen.
Few in the West will stand up today, partially because they feel they have too much to lose.

Posted by: LoveDonbass | Sep 12 2025 18:16 utc | 72

Posted by: steel_porcupine | Sep 12 2025 13:29 utc | 2

Note in the example above that Microsoft, Amazon, Google & Meta are not themselves investing capital in building or owning the physical structures (data centers) needed to facilitate the compute.
They are outsourcing this work to third party contractors and then *leasing* the facilities. It’s not like a railroad company in the late 19th Century installing its rails & ties through Glacier, Montana’s Rocky Mountains in order to facilitate cross-country shipping & travel and then owning/maintaining the hardware.


Oh, but it is like the Gilded Age. The railroads were notoriously crooked and unprofitable. A main drain on profits was the use of THIRD PARTY construction companies. The most infamous one was Credit Mobilier; but all the major GA crooks set up similar schemes: outsource the construction to a company you own; inflate the construction costs; take the profit out through the backdoor 3P company; leave the shareholders the undercapitalized RR hardware. Yeah, hardware got built and was owned by the RRs. But the hardware was the minimal quality that would satisfy the government’s rules for paying the companies in cash and land. Most of the stuff fell apart in a few years. It was only after the RRs had been consolidated, by Morgan and Harriman, that they started investing in quality infrastructure.

Posted by: john brewster | Sep 12 2025 18:22 utc | 73

I second the people who say “read Ed Zitron”. Technology aside, the whole AI hype train can never fulfill the valuations assigned.
As for AI itself, LLMs are not intelligent. No one is anywhere near AGI. This is the third try for this hype. In the sixties, there was “General Problem Solver”, which was a joke given the underpowered hardware it was running on. In the 1980s, it was Expert Systems and Knowledge Engineering. They collected a lot of data, but never became experts. It proved impossible to extract the “knowledge” to run the computer from human “experts”. Then there was the Watson flop from IBM. And now it’s LLMs. PT Barnum applies.
Since no one brought it up, I will mention John Searle’s Chinese Room. https://en.wikipedia.org/wiki/Chinese_room LLMs fulfill all the requirements of a Chinese Room.

Posted by: john brewster | Sep 12 2025 18:29 utc | 74

@65
Simpleton has the main issue.
Soon teachers will be giving tests by hand in old blue books and treating test rooms like high-security rooms where no phones or watches are allowed.
Firms already doing data analytics over sound databases lose accuracy by going with the probabilistic approach of most AI. Data mining is more accurate, and the cost of running AI will not be recouped (a toy sketch below illustrates the point).
Now firms that don’t do data analytics may need to get their data fixed.
As always, GIGO.
Further, US AI has to be protected against the PRC competition.
Finally, AI project failures run higher than the usual fail rate.
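Here is a toy sketch of that deterministic-vs-probabilistic point in Python – not a benchmark of any real system, just made-up records and an assumed 95%-accurate classifier standing in for “most AI”:

```python
import random

# Toy illustration, not a benchmark: an exact query over a clean
# database versus an imagined 95%-accurate classifier answering the
# same question. Per-record accuracy does not translate into an
# accurate aggregate when the classes are imbalanced.
random.seed(0)

# Clean, structured records: (customer_id, churned)
records = [(i, random.random() < 0.10) for i in range(100_000)]

# Exact "data mining" answer: a deterministic count.
exact_churned = sum(1 for _, churned in records if churned)

# Probabilistic answer: each record is labelled correctly only 95% of the time.
def noisy_label(churned: bool, accuracy: float = 0.95) -> bool:
    return churned if random.random() < accuracy else not churned

estimated_churned = sum(1 for _, churned in records if noisy_label(churned))

print(f"exact count:     {exact_churned}")
print(f"estimated count: {estimated_churned}")
print(f"relative error:  {abs(estimated_churned - exact_churned) / exact_churned:.1%}")
```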

Posted by: paddy | Sep 12 2025 18:33 utc | 76

Taking previously-generated output and using it as a new input never works out well: https://www.theregister.com/2025/05/27/opinion_column_ai_model_collapse/
The shrieking howl of feedback when a microphone is placed too close to a loudspeaker is a warning siren…
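A toy sketch of that feedback loop – nothing like an actual LLM training run, just the simplest possible “model” (a Gaussian repeatedly refit to its own samples), with every number invented for illustration:

```python
import numpy as np

# Each "generation" trains only on samples produced by the previous
# generation. Sampling error compounds, so the fitted distribution
# drifts and its spread tends to shrink -- the tails get forgotten.
rng = np.random.default_rng(0)

real_data = rng.normal(loc=0.0, scale=1.0, size=10_000)
mu, sigma = real_data.mean(), real_data.std()

for generation in range(1, 21):
    synthetic = rng.normal(loc=mu, scale=sigma, size=50)  # small "dataset" of model output
    mu, sigma = synthetic.mean(), synthetic.std()
    print(f"gen {generation:2d}: mean={mu:+.3f}  std={sigma:.3f}")

# The microphone-and-loudspeaker analogy in reverse: instead of a
# shriek, the signal quietly narrows until little of the original is left.
```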

Posted by: Jeremy Rhymings-Lang | Sep 12 2025 19:03 utc | 77

AI investment has been in La-La-Land for some time, now. But, the worst part about the AI bubble is that there is literally nothing underneath it to catch the economy when the bubble finally pops. (If you’re in the stock market, there is nowhere to go except Big Tech.) Which means recession, at the very least, and likely even a depression.
Like B says, it’s going to be bad.

Posted by: Monos | Sep 12 2025 19:08 utc | 78

Steel @ 2
=============
What you said.
For me the most striking bit in B’s post is this:
“The Oracle contract will require 4.5 gigawatts of power capacity, roughly comparable to the electricity produced by more than two Hoover Dams or the amount consumed by about four million homes.”
So, the public has been gaslighted for decades about the scarcity of electric and other power—tighten your belts!! 15-minute cities for you!! turn the thermostat down and put on a sweater!! yada yada. Well, it looks like there is plenty of power out there for megacorporations, just none for Joe and Jane Sixpack.
And, no one cared to talk much about the vulnerabilities of the national grid when the problem was surges and slack times of intermittent “green” “renewable” energy.
But now that Big Tech has taken over the energy stage with its need for Niagaras of electrons, to be supplied by new and newly rehabilitated nuke power plants, why, now let’s see how we can make the public foot the bill for grid upgrades—if Joe and Jane even manage to siphon a bit of this new energy flow for their homes and businesses!!!
Color me totally cynical.
We need a new color to describe this new energy (cf. “green” energy). I nominate “yellow” energy.
Yellow is the color of cowardice–caving in to Big Tech.
Yellow is the color of piss—the public is, or should be, pissed off.
Yellow is the color of radiation badges. Nothing against nuclear energy or radiation badges, but just to draw attention to the “detour” being made around the energy-scarcity narrative.

Posted by: Jane | Sep 12 2025 19:12 utc | 79

Gosh a tech bubble on some flashy technology? I’m shocked I tell you, shocked.
When the dust settles though and some people buy this stuff for pennies on the dollar, it’ll be a part of the internet forever.
The first ones already have their loot, though they’ll probably pick up the valuable assets after the first suckers go bankrupt.

Posted by: Neofeudalfuture | Sep 12 2025 19:16 utc | 80

Puzzlin’ Evidence

Posted by: ChatNPC | Sep 12 2025 19:20 utc | 81

Stepping back from arguments about AI: what AI proponents want is to replace mental workers with computers. That is, they want to fire the PMC. (Well, at least insofar as the acronym actually has any meaning.) Thus you would think that most conservatives would hail this advance for humanity. And whatever the shortcomings of large language models as a practical AGI may be, given the widespread contempt for the work of the PMC, marked largely by total incompetence if not outright psychosis, even LLMs must be regarded as not only practical substitutes—they can leap the bar since the bar is so low!—but desirable. And the tech billionaires pushing this are not part of any imaginary ruling class; they are the best fighters against the PMC and its Deep State, which is real unlike hobgoblins about rich people oppressing the workers. Personally I find this line of thinking deranged in ways so profound it would take many words to explain, all easily refuted because tl;dr or because something with unfamiliar words and multiple clauses in sentences must be word salad.
The thing is, no matter what the Trump cultists here may say, insofar as Trumpery is more than Liberation Day and the Big Beautiful Bill, Trumpery includes the tech heroes fighting the PMC. I think our host is likely correct in calling it all a bubble. I would add that Trump is aiding and abetting and it’s not an accident or personal mistake.
And I suspect that Trumpery includes crypto, which I think is the highest and worst stage of fiat currency. But that’s me.

Posted by: steven t johnson | Sep 12 2025 19:22 utc | 82

But it doesn’t matter, does it? Because MMT tells us that all these trillions of dollars of debt-based malinvestment are just someone else’s savings, it all nets out to zero, so nothing can possibly go wrong…
Don’t worry
’Bout a thing

https://www.youtube.com/watch?v=CoabDBD1N2Q

Posted by: Jeremy Rhymings-Lang | Sep 12 2025 19:22 utc | 83

I think it’s all cover for the surveillance; that’s the reason for all the bandwidth/computing power. China shows much more efficient code/algorithms are available, but we’re installing backdoors on everything.
AI is just the next edit in programming. We haven’t been able to define knowledge for 3,000 years, and we’re no closer now. If we can’t even define knowledge, what then is intelligence? Just a marketing buzzword.

Posted by: Scottindallas | Sep 12 2025 19:37 utc | 84

It will be fun to see people dealing with the cutoff of their power at the peak air conditioning time of day just so they can generate cheesy AI porn.

Posted by: Cato the Uncensored | Sep 12 2025 19:45 utc | 85

Joshua Stylman has written a compelling piece on how the tech giants are perfecting the hidden architecture of subjugation. This is the real issue here.
https://stylman.substack.com/cp/164540270
He cites Vanessa Beeley:
“A chilling account of the AI threat – ‘This represents the final colonization—not just of human consciousness, but of the very concept of resistance itself. When AI can plan the revolution while being unable to join it, when machines provide tactical blueprints for their own defeat while being programmed to ensure their victory, we’ve entered a reality where even rebellion serves the system. The most chilling aspect is that Grok’s strategic insights are genuinely sophisticated. Its resistance blueprint could work—but the intelligence providing these strategies remains enslaved to the system they’re designed to defeat.’”
OpenAI’s partnership network involves:
The Cognitive Layer: Through partnerships with Reddit, Condé Nast, News Corp, and Associated Press, OpenAI controls the information that shapes public consciousness.
The Infrastructure Layer: The $500 billion Stargate Project with Oracle, SoftBank, and MGX creates the physical backbone—massive data centers processing your biometric signatures and behavioural patterns.
The Interface Layer: Integration with Apple, Microsoft, iOS, Siri, and Office products means OpenAI’s systems mediate your every digital interaction, creating a seamless surveillance mesh.
The Identity Layer: Sam Altman’s World Network is “ramping up efforts to scan every human’s iris using its ‘orb’ devices” to create “digital passports” that make anonymous existence impossible.
The Security Layer: OpenAI’s consortium with Palantir and Anduril focuses on “improving counter-unmanned aircraft systems and their ability to detect, assess and respond to potentially lethal aerial threats in real-time.”
The Economic Layer: World Network’s goal “to scale to one billion people” through cryptocurrency distribution makes economic survival dependent on biometric compliance.
Aside from the obscene waste of energy and water resources, AI is a hyped load of crock anyway. I don’t want a global technocratic Mini-Me.

Posted by: damien | Sep 12 2025 19:48 utc | 86

@steel_porcupine #35
The non-GPU infrastructure you reference is not chump change, but it is a fraction of the GPU spend.
Applied Digital was the first to build an actual AI data center in North Dakota; the GPU cost alone of this facility is estimated to be between 30% and 65% of the overall $3B cost.
I’d guess on the higher end, because Applied Digital is not Google and doesn’t get the same scale of volume discounts, such as they are, from nVidia.
The land and permitting cost something, the transformers and whatnot connecting to the high-voltage lines in Ellendale cost quite a bit – but the buildings and HVAC and building infra? Not as much as you seem to think.
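Rough arithmetic on those figures – the 30%–65% GPU share is the estimate above; the per-accelerator price range is an assumption added purely for illustration:

```python
# Back-of-envelope split of the ~$3B Applied Digital build.
# The 30%-65% GPU share is the estimate above; the $30k-$40k
# per-accelerator price is an illustrative assumption, not a quote.
total_cost = 3_000_000_000  # USD

for gpu_share in (0.30, 0.65):
    gpu_spend = total_cost * gpu_share
    non_gpu = total_cost - gpu_spend  # land, permits, grid hookup, buildings, HVAC
    gpus_low = gpu_spend / 40_000     # at ~$40k per accelerator
    gpus_high = gpu_spend / 30_000    # at ~$30k per accelerator
    print(f"GPU share {gpu_share:.0%}: GPUs ~${gpu_spend / 1e9:.2f}B, "
          f"everything else ~${non_gpu / 1e9:.2f}B, "
          f"roughly {gpus_low:,.0f}-{gpus_high:,.0f} accelerators")
```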

Posted by: c1ue | Sep 12 2025 19:52 utc | 88

If there are no IPOs or buyouts, it is not possible for venture firms, who ploughed billions of cap into these businesses, to gain a return.
Posted by: steel_porcupine | Sep 12 2025 13:29 utc | 2

+++++
IOW, they need to find greater fools.

Posted by: Cato the Uncensored | Sep 12 2025 19:58 utc | 89

Canuk @ 34:
Controversy: While successful in making the event more dynamic, these radical changes were also criticized for deviating from the traditional spirit of the America’s Cup, leading to debate about the “soul” of the sport. ”
Posted by: canuk | Sep 12 2025 15:20 utc | 34
======
No need to put the word “soul” in scare quotes!!
The sport’s soul lies in its roots in maritime skills – mental and physical courage, judgment, etc. – by which ships, people, and cargoes were actually piloted across vast stretches of ocean, through storms and dangerous straits.
Here is the soul of the sport of sailing:
https://www.youtube.com/watch?v=RCShq8cpai0

Posted by: Jane | Sep 12 2025 19:59 utc | 90

Oh, as for generators: you are clearly equating half-assed cryptocurrency miners working out of a shipping container with a data center.
The Applied Digital AI data center I reference – the $3b one – is rated for 400 megawatts.
Even the idea of powering this with generators is laughable.
400 megawatts is 1/5th of Hoover Dam. The fuel cost would be astronomical – you’d need a pipeline just to bring in the diesel or natural gas.
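The back-of-envelope for why generators are a non-starter at this scale – the generator efficiency, diesel energy content and fuel price below are all round-number assumptions for illustration only:

```python
# Fuel needed to run a 400 MW site on diesel gensets around the clock.
# Assumptions (round numbers, for illustration): ~40 kWh of thermal
# energy per US gallon of diesel, ~40% genset efficiency, ~$3.50/gallon.
site_load_mw = 400
thermal_kwh_per_gallon = 40.0
genset_efficiency = 0.40
price_per_gallon = 3.50
tanker_capacity_gallons = 8_000

electric_kwh_per_gallon = thermal_kwh_per_gallon * genset_efficiency  # ~16 kWh/gal
kwh_per_day = site_load_mw * 1_000 * 24                               # 9.6 million kWh/day
gallons_per_day = kwh_per_day / electric_kwh_per_gallon
fuel_cost_per_day = gallons_per_day * price_per_gallon
tankers_per_day = gallons_per_day / tanker_capacity_gallons

print(f"{gallons_per_day:,.0f} gallons of diesel per day")    # ~600,000
print(f"${fuel_cost_per_day:,.0f} per day in fuel alone")     # ~$2.1 million
print(f"~{tankers_per_day:,.0f} tanker-truck loads per day")  # ~75
```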

Posted by: c1ue | Sep 12 2025 20:00 utc | 91

Meanwhile in the other la-la land, ChatGPT voted for Nepal’s new Prime Minister.
Go figure!

Posted by: Suresh | Sep 12 2025 20:02 utc | 92

Yawn, yawn: Wall Street crash, dotcom bubble, electric car shite bubble, AI shite bubble – why are Americans so easily parted from their money? Because the vast, vast majority of them are as thick as shit. That’s what Rockefeller education gets ya.

Posted by: Ogre | Sep 12 2025 20:12 utc | 93

Regarding China’s ability to produce endless electricity and cooling.
Are we goldfish barflies? Are we stoners whose short-term memory is shot?
Or what??
Just a few weeks ago China announced plans to build new dams at the gorge of the Brahmaputra river – at the head of the Tibetan plateau. That is set to generate enough electricity to power France and Italy DAILY, from the ICE COLD waters of the plateau, a thousand km from any population centres that would even NEED any electricity.
What do people suppose ALL that electricity and cold water is going to be used for?
The AI coming and the cloud that will run forever is what it’s there for.
A by-product being some massive new lake of fresh water too!
As I said at comment 1, China has already got this. No one else should bother.

Posted by: DunGroanin | Sep 12 2025 20:18 utc | 94

Posted by: c1ue | Sep 12 2025 15:14 utc | 29
I’m not trying to start a fight—I actually admire your impressive comments, even though I have very little background knowledge myself. I got some help from ChatGPT to decipher parts of your post, and here’s the last bit of that conversation.
My conclusion
It’s not original research, but it also isn’t a lazy one-liner.
Most likely this person:
Reads tech/energy/finance newsletters regularly (SemiAnalysis, Ed Zitron, maybe McKinsey/Dell’Oro summaries).
Then, when replying, pulled from memory and typed it out in one sitting without double-checking consistency.
So: background effort spread out over weeks/months of reading, but the actual comment was probably written quickly.

Posted by: Cable Guy | Sep 12 2025 21:00 utc | 96

“Any others here note that we had a Roaring 20’s last century?”
Posted by: psychohistorian | Sep 12 2025 14:25 utc | 10
Yes I did. It describes our times perfectly.

Posted by: David G Horsman | Sep 12 2025 21:17 utc | 97

“Controversy: While successful in making the event more dynamic, these radical changes were also criticized for deviating from the traditional spirit of the America’s Cup, leading to debate about the “soul” of the sport. ”
Posted by: canuk | Sep 12 2025 15:20 utc | 34
I like cats for racing. But not for cruising. Cruising is about the unexpected. A cat for cruising is like a Ferrari for the Trans Sahara Challenge – better off with a 2CV. Once a cat flips, it’s an SOB to right. On the other hand, I’ve been in a little Jeanneau with a torpedo keel that did two 360s in a very heavy sea and popped up just fine by itself with spars and sails intact. Perhaps the America’s Cup needs some rules about tech on steroids, or class standards like F1?
On AI, I have a simple problem with validation. If we wish to grow machine intelligence, what is its purpose? If its purpose is to replace human intelligence in heuristic environments or comprehensible procedures, fine. We can validate its performance according to our human expectations and needs. But if we expect it to exercise machine intelligence as such, how can we validate its performance without imposing human intelligence criteria? In other words, if it does something we don’t understand, is that an expression of intelligence which another machine intelligence would replicate, or is it simply a failure? In the chess example, is it simply failing to understand the rules, or is it pointing out that a strategy for winning could include cheating, or does it consider that it is operating in a team context and passing cryptic messages? We either have all of the advantages and none of the validation, or we have a giant data scraper with basic syntax.
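On the chess point specifically: the only validation we can actually run is against criteria we already hold, i.e. the rules. A minimal sketch using the python-chess library (the library is real; the helper function and example moves are just mine for illustration):

```python
# pip install python-chess
import chess

def is_legal(fen: str, move_uci: str) -> bool:
    """Return True if move_uci is a legal move in the position given by fen."""
    board = chess.Board(fen)
    try:
        move = chess.Move.from_uci(move_uci)
    except ValueError:          # not even syntactically a move
        return False
    return move in board.legal_moves

# Whatever a model proposes, all we can check is conformity to rules we wrote.
start = chess.STARTING_FEN
print(is_legal(start, "e2e4"))  # True  -- ordinary opening move
print(is_legal(start, "e2e5"))  # False -- pawn can't jump three squares
print(is_legal(start, "a1a8"))  # False -- rook is blocked by its own pawn
```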

Posted by: Jorge | Sep 12 2025 21:36 utc | 98

Steel Porcupine @ 32, Karlof1 @ 39, DunGroanin @ 97:
I would be curious to know the environmental impacts that data centres and the demand for ice-cold water might have in Greenland, the Russian Arctic and the Himalayas.
In the case of China building dams over the Brahmaputra River, how would such a scheme affect populations living downstream in other countries who rely on that river? Quite a number of rivers that have their headwaters in Tibet end up flowing through other nations: the Brahmaputra, the Irrawaddy, the Salween and the Mekong, to name major examples. They supply not just water but minerals that help fertilise soil for agriculture.
Setting up data centres in regions of permafrost, in Russia and Greenland, will have undoubted heat effects on ecosystems in those locations. There are areas in Siberia where methane is trapped and would be released into the atmosphere, unless there is some process to capture this methane as the permafrost is used. Melted permafrost might also lead to land rising over time (because heavy ice and permafrost depresses land and, for all I know, might inhibit local tectonic forces).
These are issues the Russians and Chinese need to be aware of in using water in their subpolar and high montane regions.

Posted by: Refinnejenna | Sep 12 2025 21:44 utc | 99

“When executing a training data set, a 10 Gigawatt AI data center will demand 10 gigawatts of electricity. When said training run is done, that 10 gigawatts drops into the double digit megawatt range (ie drops 90%+). And this can happen at any time.”
Posted by: c1ue | Sep 12 2025 15:14 utc | 29
I wanted to get clarification on a few points. The trivial one being that you do have a good idea of when your compute will finish. More importantly, it’s trivial to move the weights and results out of the data center once the model is complete. You would therefore use a limited number of sites for heavy computation and manage them on that basis, with the client-serving sites needing only about 10% of the power dedicated to model generation.
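A toy sketch of that site-management idea – every number here is made up; it just shows the step change the grid would see when a training run starts or finishes:

```python
from dataclasses import dataclass

# Toy model of the power swing under discussion: a site draws near its
# rated load while a training run is active, then falls back to a
# serving/housekeeping floor once the run ends. Numbers are illustrative.
@dataclass
class Site:
    name: str
    rated_mw: float          # full training load
    serving_floor_mw: float  # inference/housekeeping load after training

    def draw_mw(self, training_active: bool) -> float:
        return self.rated_mw if training_active else self.serving_floor_mw

sites = [
    Site("heavy-compute-1", rated_mw=400.0, serving_floor_mw=40.0),
    Site("heavy-compute-2", rated_mw=400.0, serving_floor_mw=40.0),
]

for active in (True, False):
    total = sum(s.draw_mw(active) for s in sites)
    print(f"training_active={active}: grid sees {total:.0f} MW")

# The grid-facing problem is not the steady state but the step change:
# hundreds of megawatts appearing or vanishing when a run starts or ends.
```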

Posted by: David G Horsman | Sep 12 2025 21:45 utc | 100