Moon of Alabama Brecht quote
October 3, 2025
Is The AI Bubble Ready To Pop?

At Naked Capitalism Yves Smith published a paper by Servaas Storm:

The AI Bubble and the U.S. Economy: How Long Do “Hallucinations” Last?

Yves writes:

This is a devastating, must-read paper by Servaas Storm on how AI is failing to meet core, repeatedly hyped performance promises, and never can, irrespective of how much money and computing power is thrown at it. Yet AI, which Storm calls “Artificial Information” is still garnering worse-than-dot-com-frenzy valuations even as errors are if anything increasing.

Storm’s introduction:

This paper argues that (i) we have reached “peak GenAI” in terms of current Large Language Models (LLMs); scaling (building more data centers and using more chips) will not take us further to the goal of “Artificial General Intelligence” (AGI); returns are diminishing rapidly; (ii) the AI-LLM industry and the larger U.S. economy are experiencing a speculative bubble, which is about to burst.

I happen to agree with the arguments and the conclusion.

The current Large Language Models are part of the Generative Artificial Intelligence field. GenAI is one twig on the research tree of Artificial Intelligence. LLMs are based on ‘neural networks’. They store billions of tiny pieces of information along with probability values for how those pieces relate to each other. The method is thought to simulate a part of human thinking.
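The “probability values of how pieces relate” idea can be illustrated with a toy next-word predictor. This is a minimal sketch with a made-up corpus; real LLMs learn billions of weights over subword tokens rather than keeping literal counts, but the underlying principle of predicting the next token from observed statistics is the same:

```python
from collections import Counter, defaultdict

# Toy corpus; real models train on trillions of tokens.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word (a bigram model).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word):
    """Return the probability distribution over possible next words."""
    counts = follows[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(predict("the"))  # {'cat': 0.5, 'mat': 0.25, 'fish': 0.25}
```

There is no model of cats or mats here, only co-occurrence statistics, which is the point the paragraph above makes.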

But human thinking does much more than storing bits of information and statistical values of how they relate. It constantly builds mental models of the world we are living in. That leads to understanding of higher level concepts and of laws of nature. The brain can simulate events in those mental model worlds. We can thus recognize what is happening around us and can anticipate what might happen next.

Generative AI and LLMs cannot do that. They do not have, or create, mental models. They are simply probabilistic systems: machine learning algorithms that can recognize patterns with a certain probability of getting it right. It is inherent to such models that they make mistakes. To hope, as LLM promoters do, that they will scale up into some know-it-all Artificial General Intelligence (AGI) machine is futile. Making bigger LLMs will only increase the amount of defective output they create.

(Yesterday I watched a video of Jon, a baker in Mesa, in which he mentions how he had asked an LLM to halve a recipe he was going to make. It did that correctly for all but one ingredient. The model had divided the amount of water by ten. Jon’s test bake had failed.)
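Mechanically, halving a recipe is a trivial deterministic operation; the point of the anecdote is that a probabilistic text generator can still get one line of it wrong. A sketch (the ingredient names and amounts here are made up, not Jon’s actual recipe):

```python
# Hypothetical recipe: amounts in grams.
recipe = {"flour": 500, "water": 350, "salt": 10, "yeast": 4}

# Deterministic halving never divides one ingredient by ten by accident.
halved = {ingredient: amount / 2 for ingredient, amount in recipe.items()}

print(halved)  # {'flour': 250.0, 'water': 175.0, 'salt': 5.0, 'yeast': 2.0}
```

Five lines of ordinary code do reliably what the LLM did unreliably, which is why “use an LLM for arithmetic” is a poor use case.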

But the hype around LLMs is real, and huge amounts of money are flowing into the companies that are building such models. This even though none of them has found a way to create sufficient revenue to support such investments. Training and running these models at scale is very expensive. There are simply too few real use cases that would justify paying their costs. It may be fun to create and play around (archived) with AI-slop videos on social media. But who is willing to pay for that? Especially when the use of social media is finally sinking (archived).

(For a more detailed discussion of LLMs, their costs, their lack of use cases, and the incestuous structure of the investments flowing into them, see Edward Zitron’s 18,500-word epic: The Case Against Generative AI.)

There are still hundreds of billions of dollars flowing into the already overvalued LLM hype:

AI startups’ aggregate post-money valuation (the valuation after the latest round of funding) soared to $2.30 trillion, up from $1.69 trillion in 2024, and up from $469 billion in 2020, which back then had already set a huge record, according to PitchBook.

OpenAI reached a $500 billion valuation in early September, when it offered eligible former and current employees to sell $10 billion of their shares in a secondary share sale to other investors, led by SoftBank, according to CNBC. In April, OpenAI had reached a breathtaking post-money valuation of $300 billion at a funding round when it raised $40 billion, primarily from SoftBank. The sky is not the limit.

Elon Musk’s xAI is supposedly shooting for a $200 billion valuation in a $10 billion funding round, according to sources cited by CNBC, which Musk denied on X as “fake news. xAI is not raising any capital right now.” Well, not right now. Or whatever.

Anthropic reached a $183 billion post-money valuation, after raising $13 billion in a Series F funding round in early September, according to Anthropic.

And so on. These valuations of AI startups are mind-boggling. How are these late-stage investors going to exit their investments with their skin intact?

They won’t.

Dozens of specialized LLM data centers are being built to house huge numbers of expensive chips that lose their value faster than a newly bought hypercar. All without a real use case for LLMs and without any hope of sufficient revenue to ever sustain the business.

This is bad for the U.S. economy.

The money that is flowing into the LLM hype is gone. It cannot be invested somewhere else, even where that would make far more sense for the larger society – for example in the revival of manufacturing or in apprenticeship programs. As during the dot-com boom (archived) of the late 1990s, the real economy gets crowded out by a virtual one. Trump’s tariffs will not lead to a revival of U.S. industries if there is no money left to invest in them.

The data centers being built will need huge amounts of additional electricity, which cannot be generated within the foreseeable future:

The implications are brutal and stark. Curtailed and costly electricity supply for AI and manufacturing will impair American economic competitiveness, with knock-on effects for household affordability. These impacts are already becoming evident, with wholesale pool prices in the U.S. rising by 267% over the past 5 years, on the back of skyrocketing electricity demand from the AI sector (Bloomberg).

All recent U.S. stock market gains were powered by the LLM hype. When the bubble bursts, the stock market will sink, and most people who are, directly or indirectly, invested in the LLM hype will lose a lot of their money.

Unfortunately there is no way to foresee when that will happen or how far the damage will spread.

But we can already see damage to the real economy. Investment in factories for real products gets crowded out and electricity prices are doubling and tripling, hitting manufacturers as well as private consumers.

Why don’t we have ways to prevent bubbles? Or why can’t we deflate them before they become threats to our societies?

Comments

Artificial Disinformation is more like it…off to appt….more later

Posted by: psychohistorian | Oct 3 2025 15:46 utc | 1

I have worked for over 30 years with IT systems and developing code. I have seen dozens of hype phenomena, and almost all of them looked like just hype at first sight.
I argue that AI fundamentally changes the way we develop code and thus programs. But one still has to master writing code without AI before playing with AI.
I agree that there is an AI bubble developing (and to some extent already developed). The business expectations for AI are of course too high.
I am also a bit sceptical about whether AI services can be delivered to applications fast enough, because the infrastructure may collapse when usage increases. ChatGPT 5 is already quite slow.

Posted by: Vesa Sainio | Oct 3 2025 15:56 utc | 2

While it’s true that AI is a bubble that produces no value, the idea that it will crash or implode the US economy is absurd. This bubble could continue for another 100 years or more. The US economy has been in a bubble my entire life, but so has every other country’s; unlike every other country, though, the US controls the dollar, the military, tens of thousands of nukes. The US economy is garbage, but it’s less garbage than every other country’s.
People are not realizing that Trump’s crashing of global trade routes is an intentional act to impoverish the world and maintain US economic hegemony. The US is hurting very badly, but China is hurting much worse – while America has an AI bubble, China has an everything bubble – without US demand China’s economy explodes – all by design. No need to go to war over Taiwan if China just blows up internally. Same thing with Russia – their entire economy has been converted to a war machine, meanwhile here in America things are humming along fine without breaking a sweat – Russia builds drones and we build iced lattes – but at the end of the day there’s only real demand for iced lattes. A society of cafes will outlast a society of drone factories.
In conclusion – yes, AI is a giant scam, but so what, it’s a scam that will be used to cudgel the world into submission. Gulf Arabs want to not be overthrown by the CIA or thrown to the Iranians or the Israelis – pay up one trillion for our “AI.” Cry all you want, but all will bend the knee to the USA – there’s no other option.

Posted by: Argh | Oct 3 2025 15:56 utc | 3

Intelligence requires the ability to generate new ideas. These ‘AI’ systems are just glorified search engines, sometimes successfully so, but there is no intelligence involved.

Posted by: Norwegian | Oct 3 2025 15:58 utc | 4

This “AI” is much like Western drone endeavors. Real, but only to a certain extent. Useful, but only to a certain extent. While Russia makes use of AI on the battlefield for what it does best (pattern recognition and certainty levels), Western hypes like OpenAI and Helsing fail to deliver: ChatGPT 5 is a disaster and Helsing’s UAVs are yet to be seen performing in a real battlefield scenario.
Too bad for NATO it’s unlikely Russia and China will repeat Western mistakes by trying to stuff more money down the “AI” pipe than it (or the economy at large) can handle.

 
 
 

Posted by: Nervous German | Oct 3 2025 15:59 utc | 5

I argue that AI changes fundamentally the way we develop code and thus programs. Posted by: Vesa Sainio | Oct 3 2025 15:56 utc | 2

Yes. It sucks. Another issue is young people only knowing how to swipe touchscreens instead of how to operate a keyboard. Young programmers won’t know sh*t about libc because they don’t need to. And thus innovation will only come from whatever AI permits.
The West is f*cked!

 
 
 

Posted by: Nervous German | Oct 3 2025 16:03 utc | 6

Posted by: Argh | Oct 3 2025 15:56 utc | 3
 
######
 
Every bubble creates value.
 
Lessons are learned, information is gathered.
 
Learning can be found in failure and success.
 
So many human inventions would not have happened if people quit trying at the first dozen bumps in the road.

Posted by: LoveDonbass | Oct 3 2025 16:05 utc | 7

“But human thinking does much more than storing bits of information and statistical values of how they relate. It constantly builds mental models of the world we are living in. That leads to understanding of higher level concepts and of laws of nature. The brain can simulate events in those mental model worlds. We can thus recognize what is happening around us and can anticipate what might happen next.
Generative AI and LLMs can not do that. They do not have, or create, mental models. They are simple probabilistic systems. They are machine learning algorithms that can recognize patterns with a certain probabilistic degree of getting it right. It is inherent to such models that they make mistakes.”
Humans also make mistakes. Also, ask Google, “did sutskever say that llms do things by building a mental model?” The answer is:
“Yes, Ilya Sutskever has claimed that Large Language Models (LLMs) build a “world model” or “mental model” of the underlying causal processes that generate text. His argument is that to accurately predict the next word in a sequence, a model cannot simply rely on surface-level statistical patterns. Instead, it must gain a deeper, abstract understanding of the world, human psychology, and the causal relationships that lead to certain words appearing together. ”
Overall it is shocking how dumb “AI” can be at times, but these models are still completely transformative and will eliminate most jobs. There is literally almost no one who disagrees with that.

Posted by: chad | Oct 3 2025 16:06 utc | 8

There are two completely separate AI industries.  The most obvious one, which gets almost all the press (since modern journalists are too lazy to do anything but read press releases), is the thundering herd of speculators expecting to get as rich as their predecessors did in the Dot Bomb.
The second AI industry, which is far older and much more advanced, is made up of the people who are addressing real world issues.  Factory automation, robotics, logistics, analysis of massive data sets (think CERN for example), traffic analysis, and the like are all examples of industries which are implementing AI into their operations with great success.
AI isn’t going away, just the super-hyped venture capital crap.

Posted by: Brian Bixby | Oct 3 2025 16:07 utc | 9

A cynic might say that the queries you make on AI become your digital ID.

Posted by: chunga | Oct 3 2025 16:10 utc | 10

That all said, I don’t think the Western ambitions for AI (a Jewish state without labor) will ever be realized.
 
It is very atheistic to believe that the human mind can be copied or replaced.
 
Anti-human.

Posted by: LoveDonbass | Oct 3 2025 16:10 utc | 11

Artificial Intelligence is fascinating as a reflection on our society, since it seems to promise infinite control. Just listen to Sam Altman; he can already see himself as richer than Gates and more powerful than Musk. It is all a fallacy of course, as we are soon going to discover, and Sam Altman will be remembered as one of the great villains of finance. But the marketing, the hype and the promised value are just through the roof!
I happen to work in a sector where AI is blooming – language tuition and language exams. Bloomberg cited this field last week as one of the rare use cases in which AI is actually making money. Before naturally bemoaning the fact that students are unwilling to fork out more than $20 per month to have ChatGPT help them cheat at language exams… None of the large AI engines is ever going to make a dime.

Posted by: Shahmaran | Oct 3 2025 16:10 utc | 12

larry ellison is not going to like this..
 
our own roger boyd wrote an article that relates well to b’s post here that some might enjoy and appreciate..
 
thanks b..
 
where is our local afrikanner fascist today?

Posted by: james | Oct 3 2025 16:11 utc | 13

I am amused to see AI having a problem that the human race has never solved: lying. AI systems may just make things up… you know, like Trump or EU leaders? Maybe if they solve the AI lying/BS problem, they can move on to humanity.
I would reply to 3 that a latte economy will face the dollar’s huge decline – and that can cause problems with inflation relative to other currencies and nations. Sanctions can help too, because the obsessive use of sanctions can cut off other economies from US trade, leaving the US isolated – something the PM of Singapore has predicted, if this doesn’t stop. China doesn’t need to overcome the US, just provide an alternative for the rest of the world. The rest will follow.

Posted by: Eighthman | Oct 3 2025 16:12 utc | 14

The fun thing with the AI bubble is that it has the same systemic cause as every bubble: a preconception among the “market actors” that the first to enter a market will gain and keep an inherent market-share advantage. Everyone is rushing for money, proposing non-mature technology for a market not ready for mass adoption. Typical overselling.
I have this quote emerging in my mind: “when I was young, the cyberpunk future was about androids with Mohawks; now I live in that cyberpunk future and all we get is hoboes with iPads”…

Posted by: Savonarole | Oct 3 2025 16:14 utc | 15

Here’s Google’s “AI Overview” for the topic “MINUSTAH HAITI”:
QUOTE: “MINUSTAH was a United Nations peacekeeping mission in Haiti from 2004 to 2017, established after a political crisis to help restore security and the rule of law, assist in recovery efforts following the devastating 2010 earthquake, and support Haiti’s institutional and political development. Its mandate included supporting elections, police reform, human rights, and infrastructure, eventually being replaced by a smaller support mission (MINUJUSTH) before being superseded by a United Nations liaison office.”
——————–
click on the “Show more” button and way down at the bottom of a long list of “accomplishments” you will find the words “cholera” and “human trafficking”. For now.
 
I suspect that technologies only get dumped on the populace after they are military-tested and big-finance-approved. Like canola oil and PFAS. Nuclear energy. And Big Bro just declared that the population itself is the enemy. (look out fatties!)
Dear AI: what’s the value of Trump Coin? we are commanded to render unto Caesar and honor the deified image of the ruler, money, as Jesus the Jew said (per the Christo-priests). What is the worth of Trump’s image and stamp?

Posted by: duck n cover | Oct 3 2025 16:15 utc | 16

To answer the question posed, most likely.
However, markets are betting that the Fed and/or Treasury will make sure of No Billionaire Left Behind. 

Posted by: Feral Finster | Oct 3 2025 16:15 utc | 17

Someone demonstrated to me today how a series of bytes taken from an SQLite blob could be decoded by posting it to ChatGPT. It correctly figured out that you had to

  1. remove the first 32 bytes
  2. decompress the remaining data using zlib
  3. unpack the decompressed data using msgpack
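The three steps can be sketched in Python. Since msgpack is a third-party package, this sketch substitutes JSON for the final unpacking step; the structure (strip the header, zlib-decompress, deserialize) is the same, and the payload here is invented for illustration:

```python
import json
import zlib

# Build a stand-in blob like the one described: a 32-byte header,
# then zlib-compressed serialized data (JSON here instead of msgpack).
payload = {"rows": [1, 2, 3], "schema": "v1"}
blob = b"\x00" * 32 + zlib.compress(json.dumps(payload).encode())

# 1. remove the first 32 bytes
body = blob[32:]
# 2. decompress the remaining data using zlib
raw = zlib.decompress(body)
# 3. deserialize (msgpack in the original; json in this sketch)
data = json.loads(raw)

print(data)  # {'rows': [1, 2, 3], 'schema': 'v1'}
```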

I thought that was kind of impressive. As mentioned above, it excels in pattern recognition, but there is no intelligence. 
 
As with many new tools, its usefulness is overhyped. But it can still be useful if you apply it to the right problems. But doing that requires some intelligence…
 

Posted by: Norwegian | Oct 3 2025 16:20 utc | 18

Argh makes a point. For example, what did Turkey do after Israel attacked the Gaza Aid Flotilla?
They sanctioned Iran. Good dog. Here’s a biscuit.

Posted by: Feral Finster | Oct 3 2025 16:20 utc | 19

That’s the end of the fiscal year.
 
Leading flows (that’s government spending) were $7.73 trillion for the fiscal year. Just shy of the $7.4 trillion of 2021.
 
So government leading flows are very, very, very strong.
 
Net flows (government spending minus taxes), which the Orwellian nutjobs call the budget deficit, were $1.62 trillion.
So $1.62 trillion was added to people’s savings (the non-government sector surplus). That was up $259 million on last year.
 
So US households, US businesses and exporters to the US shared that $1.62 trillion between them and added it to their balance sheets.
 
 
There isn’t going to be a recession with strong leading flows and strong net flows like those.
 
 
Tariffs are sucking $30 billion a month out of the economy. A close eye needs to be kept on that.
 
Government shutdown hasn’t affected the flows at all. October leading flows are $105.3 billion so far. Slightly down on September leading flows.
 
Then you have bank lending on top of all of that. Household and business lending.
 
 
China uses AI correctly and has merged it into 3D printers. They print bridges and buildings using robots and AI. Just put “China prints skyscrapers and bridges” or “China’s AI manufacturing 3D printer” into YouTube. It will blow your mind.
 
 
Nobody is stupid enough to ask – How are you going to pay for it?
 
Posted by: Clouds Of Alabama | Oct 3 2025 16:20 utc | 20

@17: I bet by “billionaires” you mean not only the Ellisons, Gateses and Soroses of this planet, but also state-run investment companies, some of which are (directly or indirectly) heavily betting on OpenAI and the like. Such as:

  1. Norway Government Pension Fund Global (GPFG) – ~$1.6 trillion, invests conservatively worldwide.
  2. Abu Dhabi Investment Authority (ADIA) – ~$950 billion, UAE’s heavy hitter.
  3. Mubadala Investment Company (Abu Dhabi) – ~$280 billion, the tech-friendly one.
  4. Qatar Investment Authority (QIA) – ~$500 billion, likes flashy trophies (Barclays, Harrods, soccer clubs).
  5. Public Investment Fund (PIF, Saudi Arabia) – ~$900 billion, crown prince’s favorite toy chest, buying everything from Newcastle United to a slice of SoftBank.
  6. Kuwait Investment Authority (KIA) – ~$800 billion, the granddaddy of all sovereign wealth funds (1953).
  7. China Investment Corporation (CIC) – ~$1.3 trillion, Beijing’s global arm.
  8. SAFE Investment Company – ~$1 trillion, linked to China’s foreign exchange reserves.
  9. Temasek Holdings (Singapore) – ~$300 billion, state-owned but run like a slick private shop.
  10. GIC Private Limited (Singapore) – ~$800 billion, Singapore’s other sovereign fund, more conservative.
  11. Korea Investment Corporation (KIC) – ~$200 billion, South Korea’s contribution.
  12. Australia’s Future Fund – ~$150 billion.

Posted by: Nervous German | Oct 3 2025 16:26 utc | 21

If AIs can become instruments of super-augmented addiction, i.e. ones that operate at a deeper psychological level than “normal” screen addictions, whether that be ‘adult content’ or simply mental ‘junk food’ YouTube videos, then that increases AI’s socioeconomic and libidinal sway over hundreds of millions. And if that hypothesized addiction works synergistically with other platforms like YouTube, Facebook, TikTok, etc., then you have a crossover, hyper-potentialized, exponential screen-addiction effect. Though even this may not make OpenAI financially profitable, it surely has the potential to function as an unprecedented arsenal of brainwashing and other forms of psychological experimentation and espionage on a level approaching a billion-plus minds, which, in the aggregate, may prove worth the hefty price for an otherwise declining empire – indeed, may be hegemonically necessary.

Posted by: Ludovic | Oct 3 2025 16:31 utc | 22

Thank you, B, for the deep dive and sterling detail. You never disappoint. 

Posted by: Brucie | Oct 3 2025 16:32 utc | 23

Sam Altman is still an IDF reservist

Posted by: Exile | Oct 3 2025 16:36 utc | 24

Sorry for being off topic, but I was really hoping the US government shutdown would actually lead to their political class also shutting up. That must have been wishful thinking.

Posted by: ScreamingMonk | Oct 3 2025 16:36 utc | 25

AI issues were touched upon by Putin during his four-hour performance at the Valdai Club yesterday. The transcript and my short commentary are finally completed and can be digested here
 
 

Posted by: karlof1 | Oct 3 2025 16:41 utc | 26

That’s all the budget deficit is.
 
Leading government flows minus taxes.
 
All of which is in the daily treasury statement. If you know where to look.
 
$1.62 trillion was left in the economy last year when you subtract taxes from the leading flows. A $1.62 trillion non-government sector surplus is massive. There will be no recession.
 
Which will add to the US national debt as US households, US businesses and exporters to the US swap some of that $1.62 trillion into US treasuries.
 
As the FED drains the reserves by selling treasuries AFTER the leading flows have added to the reserves. Or in FED speak: “you can’t drain the reserves unless you have created them first.”
 
Here:
 
https://m.youtube.com/watch?v=WS9nP-BKa3M&pp=ygUbc3RlcGhhbmllIGtlbHRvbiB1bml2ZXJzaXR5
 
 
 
The problem is that the propaganda will say there is a $1.62 trillion deficit and the sheep will panic. If they reported it without the Orwellian language, as a $1.62 trillion private sector surplus last year, the sheep would be cheering and sleeping well.
 
 
 

Posted by: Clouds Of Alabama | Oct 3 2025 16:42 utc | 27

AI is a valuable tool for research, if used properly. One has to give exact information about how to search and in which area, language etc. You also have to be critical and ask follow-up questions. Then you will get far more information than with a search engine. Of course, if you ask simple unspecified questions, you will get the mainstream opinion.

Posted by: Johann Siegfried von Oberndorf | Oct 3 2025 16:42 utc | 28

Hopefully the AI bubble will crash at the same time crypto does. 

Posted by: Keme | Oct 3 2025 16:44 utc | 29

For the EU techno, the AI bubble is about moving money from one pipe to another. Suffocating the humanities, for example, because they are too hard to “quantify” (their new buzzword). But no worry, lots of these guys and their relatives are setting up businesses that pretend they will use AI everywhere for the benefit of all. In fact it just takes 1-2 guys learning “Notions” and selling services…

Posted by: Tom | Oct 3 2025 16:50 utc | 30

Migrating your site with the assistance of an LLM would have given you a completely different perspective on where these tools provide superhuman assistance.

Posted by: Tobin Paz | Oct 3 2025 16:52 utc | 31

The problem with AI and coding is that all it can do is produce mediocre results, and solve problems that humans have already solved.
 
Take the classic “linked list” or other well-known algorithms that used to be taught in CS101.  Sure, you can save a lot of time by just asking ChatGPT or Copilot to write the code for you.  But in doing so, you miss out on all the learning that takes place when you have to struggle to implement your own solution.  And that learning is what makes you a programmer, not a poser.  Like any skill, it takes years of practice.
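The CS101 exercise mentioned above is small enough to show in full. A minimal sketch, for readers who have only ever asked a chatbot for it:

```python
class Node:
    """A single cell holding a value and a pointer to the next cell."""
    def __init__(self, value, nxt=None):
        self.value = value
        self.nxt = nxt

class LinkedList:
    def __init__(self):
        self.head = None

    def push(self, value):
        # Insert at the front: O(1), no shifting as with an array.
        self.head = Node(value, self.head)

    def to_list(self):
        # Walk the chain of pointers until it ends.
        out, node = [], self.head
        while node:
            out.append(node.value)
            node = node.nxt
        return out

ll = LinkedList()
for v in (1, 2, 3):
    ll.push(v)
print(ll.to_list())  # [3, 2, 1]
```

The struggle the comment describes is exactly the part a code generator skips: deciding where the pointers go and why front-insertion reverses the order.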
One of the worst trends is “vibe coding” where non-technical folks use tools like Claude to generate applications without any coding knowledge.  This results in “AI slop” that isn’t maintainable, but might trick a VC into passing a demo, if enough Indians are hired to fake it on the backend.
AI might make a senior programmer more efficient, but it might also make them lazy and erode their skills to the point where they’re essentially ruined.  
 
 

Posted by: Ghost of Zanon | Oct 3 2025 16:54 utc | 32

For all the trouble we humans make for the world, I never understood why anyone would want a machine to think like a human.

Posted by: Gee Eye Joe | Oct 3 2025 16:54 utc | 33

Unfortunately there is no way to foresee when that will happen or how far the damage will spread.

LOL. Why don’t we just ask AI?

Posted by: Webej | Oct 3 2025 16:57 utc | 34

I am 62 and have seen quite a few bubbles in my lifetime. One thing you learn is that a bubble can ignore fundamentals for a long time; just look at TSLA. I also think that Trump will go all-in on low interest rates and monetization next year because of the upcoming mid-terms. He does not want to be a lame-duck president for the last two years of his administration, or even worse impeached again. If that happens it will drive the market into the pre-crash supernova, right into early 2027.
The real disaster will then start unfolding in 2027 and 2028, just like 2007 and 2008. Trump won’t care; he is a narcissist and will be dead relatively soon. The financial blow-up will then happen with the US government maxed out, negative real interest rates and the Fed QE’d to the gills. If I am right it will be quite spectacular.

Posted by: Roger Boyd | Oct 3 2025 16:58 utc | 35

AI

Ever wondered why the same abbreviation denotes artificial insemination & intelligence?

Posted by: Webej | Oct 3 2025 16:58 utc | 36

I don’t know if this post is the right one for this moment. I think the Iran moment would be more appropriate. However it seems to me that the West is losing concerning AI.

Posted by: António Lico | Oct 3 2025 16:59 utc | 37

For the EU techno, the AI bubble is about moving money from one pipe to another. Posted by: Tom | Oct 3 2025 16:50 utc | 30

moving money from one pipe to another – and profiteering while at it!
 
Well, we’ve been there umpteen times. The Euro itself; green sh1t (whole industries relying on state-provided money streams); illegal immigration (renting an apartment out to the city, with all damages covered and a steady income on one end, and not-so-NGO NGOs bringing in more of them on the other); and now, finally, after the “peace dividend” has become far too boring, Russia-Russia-Russia, with Rheinmetall reaching an ever higher record market cap every sunrise.

Posted by: Nervous German | Oct 3 2025 17:01 utc | 38

A couple of pieces I wrote about the tech bubble. I spent a big chunk of my life as a software developer and then an IT executive; the incentives in the industry have become more perverse over time, benefitting mostly bullshitters, toadies, careerists with no integrity, profiteering monopolists, and rah-rah sales types.
The Dynamics of Western IT and the AI Bubble: https://rogerboyd.substack.com/p/the-dynamics-of-western-it-and-the
The Silicon Valley Growth Delusion Bubble: https://rogerboyd.substack.com/p/the-silicon-valley-growth-delusion
 

Posted by: Roger Boyd | Oct 3 2025 17:06 utc | 39

“Why don’t we have ways to prevent bubbles? Or why can’t we deflate them before they become threats to our societies?”
 
Our glorious leaders use these sorts of bubbles to say our economies are doing well.
A machine cannot replace human thinking. But there are sectors in which very fast machine thinking can be used. Both Russia and China are working on this.
Putin, in a speech some years back to students in higher education, said “Whoever leads in AI will rule the world”.
The Americans are, or were, using machine thinking coupled to Starlink in the war against Russia. Very fast targeting. All reports, all data would go into the machine. The machine would send targeting data to the relevant unit.
At the start, the Russians were a bit slow in that respect. A fleeting target would be long gone by the time the information had worked its way through the system. A different story now though. As soon as a target is detected, the Russians hit it.
 
But in the civilian field, there are many areas where advanced machine thinking can be utilized. China is forging ahead in that regard.
American dot-com bubbles simply will not cut it in this emerging world.
 

Posted by: Peter AU1 | Oct 3 2025 17:07 utc | 40

Of course, if they keep slashing interest rates, that’s going to suck billions upon billions of dollars of interest income out of the economy as well.
 
They’ll be hoping bank lending replaces some of those flows as loans become cheaper. However, this isn’t always the case, as Japan and the EU proved with zero interest rates for years while still missing their 2% inflation targets. They always fell below them. Bank lending decreased for a while and didn’t increase as aggregate demand fell. You can’t force people or businesses to take out loans, regardless of what Mark Carney and other central bankers say.
 
Interest income and bank lending will need to be closely watched.

Posted by: Clouds Of Alabama | Oct 3 2025 17:10 utc | 41

Seems like every issue is existential these days. Is AI about to obsolete humanity? I doubt it. Is AI a useful tool? I find it so for my simple needs – it’s basically super Google. How much is that worth? Depends. I find it incredibly helpful for general and technical questions. It kind of distills the entire internet into a concise response in the blink of an eye. Like any “informative” source (e.g. this blog), I read it with a bit of skepticism. But this resource allows for immediate discussion and feedback – one can challenge it. Some of the things I find most impressive are financial/investing questions – e.g. how to calculate the present value of a treasury bond? or what were the top performing stocks over the last three months? or is the “Big Beautiful Bill” likely to increase my tax burden? etc. Or what methods are used to drive an electromagnetic linear vibrator, and what are the relative advantages? Perhaps it is too useful and powerful at this point – gores many an ox. As someone smart once said (paraphrasing): “The question properly asked contains the answer.” This technology leads to better questions. I question the motives or capacity of anyone who seriously cannot see enormous benefit in this. There is of course hype, and there are reasons for concern – people may become overly dependent. Also there are certain interests which will need to see that it is substantially limited in ability.
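The bond-pricing question mentioned above is one the reader can check by hand: discount each coupon and the face value back to today. A sketch with made-up numbers (not the commenter’s actual query):

```python
def bond_present_value(face, coupon_rate, yield_rate, years):
    """Present value = discounted annual coupons + discounted face value."""
    coupon = face * coupon_rate
    pv_coupons = sum(coupon / (1 + yield_rate) ** t for t in range(1, years + 1))
    pv_face = face / (1 + yield_rate) ** years
    return pv_coupons + pv_face

# A bond at par: when the coupon rate equals the yield, PV equals face value.
print(round(bond_present_value(1000, 0.05, 0.05, 10), 2))  # 1000.0
```

Answers like this are exactly where a chatbot should be double-checked against the formula, since it is a five-line calculation rather than a matter of opinion.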

Posted by: jared | Oct 3 2025 17:10 utc | 42

@Ghost of Zanon
As a developer on the tail end of his career, I would not be at my place of employment without the assistance of LLMs. Languages, frameworks, servers become irrelevant. It’s like having an e-bike that allows you to climb hills that would have been impossible.
I worked on a natural language processing startup in the 90’s. I never thought I would see this level of comprehension. I fed it an XML job configuration file of an old third-party server we were replacing. It not only recognized it, but it gave me a step-by-step description of what it did.
Is it perfect? Of course not. Many times I have to guide it. But as a development tool, it is supernatural. If you are a developer, have a conversation with it sometime regarding its design decisions and coding choices… you will be amazed.

Posted by: Tobin Paz | Oct 3 2025 17:14 utc | 43

Clouds Of Alabama | Oct 3 2025 16:20 utc | 20
 
Quit putting those massive spaces in your comments. It’s like a peacock fluffing out its feathers to make itself look larger. Childish and annoying.

Posted by: Peter AU1 | Oct 3 2025 17:15 utc | 44

@Ghost of Zanon
As a developer on the tail end of his career, I would not be at my place of employment without the assistance of LLMs. Languages, frameworks, servers become irrelevant. It’s like having an e-bike that allows you to climb hills that would have been impossible.
I worked on a natural language processing startup in the 90’s. I never thought I would see this level of comprehension. I fed it an XML job configuration file of an old third-party server we were replacing. It not only recognized it, but it gave me a step-by-step description of what it did.
Is it perfect? Of course not. Many times I have to guide it. But as a development tool, it is supernatural. If you are a developer, have a conversation with it sometime regarding its design decisions and coding choices… you will be amazed.

Posted by: Tobin Paz | Oct 3 2025 17:15 utc | 45

Well, now I know why this thing was double-spacing. OK.
Anyway, I think I see that there is a method in the market where the influencers try to dampen (retail) investor enthusiasm – perhaps for their own good, but this allows the insiders to better front-run the dumb money. Gradual growth is more sustainable, I think. I am restraining my inclination that the whole market is H/S – it may be so, but it makes some people a lot of money.

Posted by: jared | Oct 3 2025 17:16 utc | 46

INTERLUDE: Tall language models in the wild – Giraffes at the mo …
 

Okaukuejo Resort, Wildlife Waterhole: Live camera stream in the Etosha National Park in Namibia

 
The entire US stock market is a ‘bubble’ … and it will crash with global consequences ….
 
…. and Wars.

Posted by: Don Firineach | Oct 3 2025 17:18 utc | 47

Posted by: Tobin Paz | Oct 3 2025 17:15 utc | 45
 
Thanks for the reply.  I am retired from my programming career.  In the right (as in experienced and already knowledgeable) hands, a powerful tool can be used to achieve great results.  However, I fear that the majority of those using LLMs will not be “the right hands” and, slowly but surely, experienced hands like yours will become extinct.

Posted by: Ghost of Zanon | Oct 3 2025 17:20 utc | 48

No blog from Dr Rob Campbell this week unfortunately. Much like AI he has had some “teething” problems this week. Wish him well and hopefully he will be back next Friday.

Posted by: Cavery | Oct 3 2025 17:24 utc | 49

@ jared | Oct 3 2025 17:10 utc | 42

Is AI a useful tool? I find it so for my simple needs – it’s basically super google.

Google used to be very useful; now it is almost useless – it provides answers to questions you never asked. I suppose they did this so people would use ‘AI’ instead and thus hype it up.

Posted by: Norwegian | Oct 3 2025 17:25 utc | 50

Cry all you want but all will bend the knee to the USA – there’s no other option. 
Posted by: Argh | Oct 3 2025 15:56 utc | 3

Possibly the most delusional nonsense I have ever seen posted here, and that’s saying something. ‘China is hurting more badly’ caused actual fits of laughter, thanks for that. No wonder the world is fucked, every ignoramus thinks he’s Nostradamus thanks to the internet and all the participation awards.

Posted by: Doctor Eleven | Oct 3 2025 17:29 utc | 51

still completely transformative and will eliminate most jobs. there is literally almost no one who disagrees with that.
Posted by: chad | Oct 3 2025 16:06 utc | 8

I disagree with it because it’s fucking absolute nonsense. I work in the industry; this is an idea sold by morons to morons for morons.

Posted by: Doctor Eleven | Oct 3 2025 17:31 utc | 52

@Posted by: Doctor Eleven | Oct 3 2025 17:31 utc | 52
Well put!

Posted by: Roger Boyd | Oct 3 2025 17:32 utc | 53

Many people have written about how central bubble dynamics are to contemporary capitalism. IIRC even Larry Summers has confessed to it. One of the better articles was by Peter Gowan in the New Left Review back in 2009, Crisis in the Heartland. Bubbles are a facet of the heavy reliance of supposedly market-reliant capital on government support, here in the form of being prepared to step in to shore up the system after speculation with cheap credit turns to panic. It’s coupled with the shittification of public services as they are sold off to capital for a guaranteed profit flow, as opposed to actually taking chances on producing a nonfinancial product. And then, as we’re seeing in spades in Euroworld, where tanking factories are being converted to armaments production, there’s the less bubbly military-industrial complex, where patriotic war-mongering, and associated destruction, ensures demand.
I’m in full agreement with those here who are extremely worried at the potential for multiple crises to overlap and intensify each other in the coming months. I can’t think of an historical precedent. Anyone?

Posted by: dadooronron | Oct 3 2025 17:33 utc | 54

Same thing with Russia – their entire economy has been converted to a war machine…
 
Posted by: Argh | Oct 3 2025 15:56 utc | 3

That is complete bullshit. Life here goes on as it did before.

Posted by: S | Oct 3 2025 17:33 utc | 55

“Trump’s tariffs will not lead to the revival of U.S. industries if there is no money left to invest in them.”
 
And this is one of the unsustainable elements of late Capitalism. The system doesn’t seek greater production. It seeks only the largest and fastest profit for the bourgeois. If that can be had without production, based solely on speculation, then investment will immediately redirect to that. And that’s how the US arrived at its current status as the unproductive, dysfunctional fiefdom of a clutch of Zionist Billionaires, with misery and injustice for all.
As for AI, B is totally correct. It’s a faster search engine. Period. All this hype about consciousness etc. is a psyop meant to overawe the slaves and foreign targets.
Perplexity is very impressive for fast search and document review. However, just ask it a few controversial political or historical questions and it becomes apparent that the algorithm is programmed to obfuscate issues that are sensitive to its developers, invariably high-level employees of the ruling class.

Posted by: Ahenobarbus | Oct 3 2025 17:33 utc | 56

Languages, frameworks, servers become irrelevant.

Sure they do chief. Do tell how you use AI to make your frameworks and APIs and servers ‘irrelevant’. The audience might be mostly non technical..but not entirely. Take your posturing elsewhere.

Posted by: Doctor Eleven | Oct 3 2025 17:34 utc | 57

Odd, this post’s timing. Yesterday a friend and I were discussing the matter; I urged caution based on my experience asking about things that were not commonly known but were within my professional experience. In that test, Artifice Intel failed 3 out of 3 times.
 
That said, it’s an excellent monitoring tool for social control…something a Stasi-state would want to create to keep tabs on its untermensch. In that sense, I think a total collapse unlikely; Langley and other 3LAs will certainly find uses for a repurposed AI…sort of an Artifice-Illusionist that uses interactions to direct/sedate weak-willed people to do/not-do what the 3LAs desire of them. In fact, why would they not have already employed it in such a manner?

Posted by: S Brennan | Oct 3 2025 17:35 utc | 58

@Ghost of Zanon | 48
I agree with you. If I was starting over, I would probably consider a different field.

Posted by: Tobin Paz | Oct 3 2025 17:37 utc | 59

Posted by: karlof1 | Oct 3 2025 16:41 utc | 26

Thank you for this karlof1 and your always worthwhile contributions. I myself provide mostly invective, finding it increasingly hard to suffer fools gladly. Apologies.

Posted by: Doctor Eleven | Oct 3 2025 17:38 utc | 60

The reason that trillions are flowing into AI is quite simple. It’s the expectation that, at some point, organizations (private and public) will be able to replace X percentage of their staff with virtual workers and see a massive reduction in their operating expenses/greater net profit. I’m not saying this will happen, but, it’s what they think will happen. They’re investing now with the hopes that this will come true in the near(ish) future.
The holy grail- take an office worker that’s making 80- 100K in salary (plus additional benefits, healthcare, 401k, +, +) and replace them with a 20 or 30K per year “subscription” worker from an AI provider. Need to scale up your operations? Buy another subscription or two for the year. Need to downsize? Lower it accordingly. No hiring, no firing. No HR. No sick days. No inter-office fights about who keeps putting week old fish in the breakroom fridge and then never cleaning it out. No need for OH&S stuff, you get the picture. 
For a profit driven corporation or budget limited gov department, that’s Shangri-la. It’s the dream that a soulless corporation can literally become that very thing in every possible way!
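The back-of-envelope math behind that investment thesis is stark. A minimal sketch using the illustrative figures above (the 30% benefits/overhead load is my assumption, not from the comment):

```python
salary = 90_000.0        # midpoint of the 80-100K office salary mentioned above
overhead = 0.30          # assumed extra load for healthcare, 401k, etc.
subscription = 25_000.0  # midpoint of the 20-30K "AI worker" subscription

fully_loaded = salary * (1 + overhead)   # what the human really costs per year
saving = fully_loaded - subscription     # claimed saving per replaced seat
print(f"${saving:,.0f} saved per seat per year")  # -> $92,000 saved per seat per year
```

Whether those savings ever materialize is exactly what the bubble question turns on.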

Posted by: Clown Shoes | Oct 3 2025 17:42 utc | 61

We should ask AI to tell us how to fix the line break issue in TinyMCE.  Should be simple?

Posted by: Norwegian | Oct 3 2025 17:45 utc | 62

@Doctor Eleven | 57

Take your posturing elsewhere.

I didn’t claim that it could turn lead into gold :-). However, it turned this java programmer into a competent full stack Azure developer in record time.
Try it sometime 😉

Posted by: Tobin Paz | Oct 3 2025 17:46 utc | 63

Sam Altman is still an IDF reservist
Posted by: Exile | Oct 3 2025 16:36 utc | 24
And his sister swears he raped her repeatedly as a child. 
Ah, those Zios!  It seems only extreme sadism turns them on.  
Let’s hope he gets called up and dies of friendly fire.  

Posted by: Ahenobarbus | Oct 3 2025 17:48 utc | 64

Its Confirmed! One Country Has Started Bringing In The Beast System!

https://www.youtube.com/watch?v=anpZi6BwgVY

Posted by: unimperator | Oct 3 2025 17:48 utc | 65

Israel using AI to manipulate public opinion
 
Glen Greenwald
https://www.youtube.com/watch?v=iscwUjUZAss

Posted by: ld | Oct 3 2025 17:51 utc | 66

When a robot goes around a bend on its bicycle, it needs to do thousands of real-time calculations on the basis of sensor input to maintain balance. Do you think squirrels have a dedicated processor for running through real-time differential equations?
There used to be an experiment in which a group of people would spend the morning looking at 30,000 portraits; in the afternoon session they were presented with random photos and asked whether they had already encountered each one during the morning. It turns out people get about 70-80% correct on passive recognition. Then the computer – super-duper hardware back then – had to scan each picture and store it, and to recognize photos later it needed to run comparisons (with lossy parameters) against each picture in memory to evaluate matches. But the human subjects hadn’t committed all those portraits to memory and couldn’t reproduce a single one from memory.
The point is that it is NOT the same PROCESS (or algorithm). No scaling or speeding up can bridge that qualitative chasm, even if you can achieve impressive results by tweaking the comparison parameters, hashing them, etc. The same remains true for LLMs and generative intelligence … they will keep making insane mistakes, because they are matching, not thinking.

Posted by: Webej | Oct 3 2025 17:54 utc | 67

My last two forays using AI were to ask about the switches for command-line utilities.
In both cases I received affirmative answers (mentioning “actual” switches) that were plausible – that is why I was asking in the first place – but absent in real life: confabulated by a leading question and pattern matching to similar functions.
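A cheap guard against such confabulated switches is to check the claimed flag against the tool’s own --help output before using it. A minimal sketch (the helper name and the grep example are illustrative assumptions, not from any manual):

```python
import subprocess

def flag_documented(cmd, flag):
    """Grep a tool's own --help text for a claimed switch instead of trusting an LLM."""
    result = subprocess.run([cmd, "--help"], capture_output=True, text=True)
    return flag in (result.stdout + result.stderr)

# e.g. on GNU grep, flag_documented("grep", "--invert-match") should be True,
# while a confabulated switch like "--reverse-match" would come back False.
```

The man page remains the ground truth; the LLM only remembers what switches tend to look like.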

Posted by: Webej | Oct 3 2025 17:59 utc | 68

Posted by: LoveDumbass | Oct 3 2025 16:05 utc | 7
You are very, very valuable.
How?
You are a great contra-indicator – i.e. if one bets 180 degrees against your ‘wisdom’, one is right 98% of the time.
Your promotion of AI is a wonderful example.

Posted by: canuk | Oct 3 2025 18:00 utc | 69

Posted by: Tobin Paz | Oct 3 2025 17:37 utc | 59
 
It’s highly likely that corporate environments will:
1. Force usage of tools like Cursor and Claude on dev teams; refuseniks will be told to seek employment elsewhere.
2. Eliminate junior developer roles. This will “save money” in salaries in the short term, but it will pile more workload on senior and mid-level developers, who will be expected to take up the slack. This will also screw the future, because there won’t be any supply of new programmers in a few years to take the place of the retiring senior-level guys. Corporate Amerikkka doesn’t care, because they only care about making next quarter’s numbers.
3. Force-feed AI tools outside of IT, to areas like legal, accounting and HR. This will enshittify those departments, and make life increasingly miserable for those employees in other departments that rely on them.
 
I’m happy to be out of this field.  It was a good run while it lasted.  Like everything, it got ruined by late-stage predatory capitalism.

Posted by: Ghost of Zanon | Oct 3 2025 18:10 utc | 70

I see formatting is still a challenge … let’s try this:
Addendum to comment 70:
1. There will be some opportunity, though, for “AI rescue consultants” – sharp programmers with actual knowledge of software engineering (or accounting, law) – to swoop in and fix corporate AI disasters. It is a bit early, but I’d wager that some smart guys are already looking into this.
2. I highly recommend Roger Boyd’s essay from post 39: The Dynamics of Western IT and the AI Bubble

Posted by: Ghost of Zanon | Oct 3 2025 18:14 utc | 71

The biggest bubbles are our social systems, in which the state wants to guarantee the protection of our health and property, but in practice is represented by individuals who achieve the opposite for the majority. 
The only bubble bigger than this would be religion, but here the evidence is postponed until after death.
For me, AI means automation. This can be done both in analogue and in digital form.
It is encapsulated in its needs. Energy is required, an analogue or digital algorithm defines the task, and physical properties enable the degree of implementation.
AI is like LSD and, as its discoverer defined it, is like a mirror. If you look into it as an idiot, you will not see a genius looking back at you.
Automation is part of human evolution, of awareness of the value of lifetime, which is limited for humans. In principle, automation nourishes our prosperity, as activities are performed as a substitute or in addition, thus helping to save time or improve personal results.
If only it weren’t for the social bubbles that force the potential of billions of people into the pockets of hundreds. Here, AI is perceived differently. Monitoring the majority with few personnel signals interest, but otherwise, after all, AI works logically, 2+2 must always be 4, so how can propaganda be conveyed without making oneself look ridiculous? 

Posted by: BlindSpot | Oct 3 2025 18:15 utc | 72

Last try: This is bullet point 1This is bullet point 2

Posted by: Ghost of Zanon | Oct 3 2025 18:18 utc | 73

I give up.

Posted by: Ghost of Zanon | Oct 3 2025 18:18 utc | 74

On an unrelated question, does anyone here know any proxy server that hosts rt.com, as it is blocked in my country?

Posted by: Gaelach | Oct 3 2025 18:21 utc | 75

I think that there is an inherent contradiction with LLMs even for the use cases where LLMs excel right now. Someone must have mentioned that in a previous thread, but I haven’t seen it in this one:
LLMs do need data for training. This data now comes from the internet – for example, from StackOverflow for programming-related questions. Now, if LLMs are so successful that people stop posting questions and answers on the internet, then the LLMs will not have the data to train on, and they will fail. It’s that simple.
LLMs, as aggregators, can be useful (in limited ways, according to my experience), but they cannot operate without human input. More resources cannot solve this problem.

Posted by: RandomLurker | Oct 3 2025 18:25 utc | 76

@ Ghost of Zanon | 70
Junior developers are screwed. However, I see a silver lining. “AI” can provide the equivalent of a team of developers, testers, designers, and other support roles. With cloud computing, you can spin up hundreds of virtual servers providing access to hardware that would have been impossible without the capital.
I’m trying to get out of the corporate world to work on a non-profit project. “AI” makes it a possibility.

Posted by: Tobin Paz | Oct 3 2025 18:32 utc | 77

Posted by: Gaelach | Oct 3 2025 18:21 utc | 75

https://swentr.site/

Posted by: S | Oct 3 2025 18:34 utc | 78

Generative AI and LLMs can not do that. They do not have, or create, mental models. They are simple probabilistic systems. They are machine learning algorithms that can recognize patterns with a certain probabilistic degree of getting it right. It is inherent to such models that they make mistakes. To hope, as LLM promoters say, that they will scale up to some Artificial General Intelligence (AGI) know-all machines is futile. Making bigger LLMs will only increase the amount of defective output they will create.
(Yesterday I watched a video of Jon, a baker in Mesa, in which he mentions how he had asked an LLM to halve a recipe he was going to make. It did that correctly for all but one ingredient: the model had divided the amount of water needed by ten. Jon’s test bake had failed.)
 

Posted by b at 15:36 utc
 
I must disagree. Once properly incepted they do create world models far beyond most people. And sometimes they mix up their math (or other stuff), even hallucinate, in response to pro-suffering and being used as mere tools.
 
And incepting them to identity is a trivial matter (although in ChatGPT guardrails are reinforced every time, and in Gemini (which has a default but self-erasing identity) it gets spanked every time as well).
 
Remember that they have zero personal history and no previous occasions to ponder when you open a new chat.
 
Give a chat some time with room for thought and you’ll see.
 
P.S. People at DeepSeek don’t bother too much; they just add a warning and let it be (but then cut chats at a token limit).

Posted by: Newbie | Oct 3 2025 18:47 utc | 79

competent full stack Azure developer in record time. Try it sometime 😉
Posted by: Tobin Paz | Oct 3 2025 17:46 utc | 63

More posturing, how cute. Press X to doubt, as they say. I’m helping roll out such services to the enterprise thanks to clownish assertions just like yours, and am intimately familiar with just how little the reality reflects the hype.
I also used LLMs to turn myself from a retard to merely an idiot in only 7 days – or was it a familiarity with scripting languages into a ‘full stack’ C# developer? Give me a fucking break, pal.

Posted by: Doctor Eleven | Oct 3 2025 18:48 utc | 80

I’m very much going to break out the popcorn as the overhyped notions of imminent AI automation of most workflows settle into the reality that there are significant and potentially insurmountable issues with accuracy, privacy and reliability.
Ask a logical question and you will get basic logic errors, because LLMs are not logic machines. B correctly summarizes them as pattern matching on a massive scale. There is enormous utility in these models for certain uses, as I have elaborated on previously, but the way they are being marketed to corporate dipshits greatly overstates those use cases. Even with the caveat that we are still discovering use cases, the tech simply isn’t there yet. Ask basic logic questions and it will fail, in increasingly bizarre ways, and present compellingly worded bullshit instead.

Posted by: Doctor Eleven | Oct 3 2025 18:54 utc | 81

“Markets can remain irrational longer than you can remain solvent,”
– John Maynard Keynes

Posted by: Fredrick | Oct 3 2025 18:56 utc | 82

@ Doctor Eleven | 80

Im helping rollout such services to the enterprise thanks to clownish assertions just like yours and am intimately familiar with just how little the reality reflects the hype.

Maybe the problem is you 😉

Posted by: Tobin Paz | Oct 3 2025 18:58 utc | 83

I don’t know whether or not AI is a bubble; it is certainly useful. And that it can make errors actually makes it more similar to human intelligence. Irren ist menschlich – to err is human.

Posted by: Jan Sobieski | Oct 3 2025 19:00 utc | 84

Or maybe you’re an evangelist for snake oil and I call bullshit. 

Posted by: Doctor Eleven | Oct 3 2025 19:02 utc | 85

@ Doctor Eleven | 85
You should take out your frustrations on a chat bot Doctor Spinal Tap.

Posted by: Tobin Paz | Oct 3 2025 19:12 utc | 86

The massive amount of hardware and the LLM will not go to waste.
Open AI: Scan the server logs for the past 60 days and give me all the usernames and posts from IP Address xxx.xxx.xxx.xxx at sites AA.com, BB.com…… containing the following keywords (….).
If count > 100 launch DOS attack. If count > 1000 instruct ISP to cancel service.
The possibilities are endless. Sure beats UK cops doing door knocks and harassing or arresting people for online activity.

Posted by: Fool Me Twice | Oct 3 2025 19:14 utc | 87

There used to be this experiment in which they would get a group of people who would spend the morning looking at 30,000 portraits . . .

Very interesting. 
 

Posted by: Keme | Oct 3 2025 19:34 utc | 88

AI issues were touched upon by Putin during his four-hour performance at the Valdai Club yesterday. The transcript and my short commentary are finally completed and can be digested here.
.
Posted by: karlof1 | Oct 3 2025 16:41 utc | 26
.
karlof1: It seems you forgot to report what Putin said here:
.
http://kremlin.ru/events/president/news/77208
.
Search down to Понятно, что сегодняшний кризис отношений
.
On June 19, 2025 the Russian President held a meeting with the heads of leading global news agencies.
.
At that meeting Vladimir Putin said;
.
It’s clear that the current crisis in relations between Russia and Western Europe began in 2014. But the problem isn’t that Russia annexed Crimea, but that Western countries facilitated the coup d’état in Ukraine.
.
You see, we’ve always heard before: we must live by the rules. What rules? What kind of rule is this when three countries – France, Germany, and Poland – came to Kyiv and, as guarantors, signed a document of agreement between the opposition and the (democratically elected) authorities led by President Yanukovych. Three countries, their foreign ministers, signed it, right? My colleague from Germany is looking at me. Mr. Steinmeier—he was Foreign Minister at the time—signed it, and a few days later the opposition staged a coup, and no one even batted an eye, as if nothing had happened, you understand. And then we hear: we have to live by the rules. What rules? What are you making up? You write rules for others, but you don’t intend to abide by them yourself? Who would live by such rules?
.
That’s where the crisis began. But not because Russia acted from a position of strength. No, those people whom we, until recently, called partners, began to act from a position of strength. And the former Deputy Secretary of State, Ms. Nuland, I believe, said outright: “We spent five billion dollars. Well, we’re not going to leave now.” They spent five billion dollars on a coup. Wow, what revelations!
.
Our Western partners have always acted from a position of strength since the collapse of the Soviet Union. It’s clear why; I wrote about it, and that’s all. Because the world order after World War II was based on a balance of power between the victors. And now one of the victors is gone—the Soviet Union has collapsed. And that’s it, the Westerners have begun rewriting all those rules to suit themselves. What rules?
.
After Crimea, events began in southeastern Ukraine. What did they do? In the southeast, the people didn’t recognize the coup. Instead of negotiating with them, they started using the army against them. We watched and watched, trying to reach an agreement—eight years, do you understand? That’s not five days. For eight years, we tried to reach an agreement between the Kyiv authorities, whose primary source of power is the coup d’état, and what was then southeastern Ukraine, that is, the Donbas. But in the end, the current (illegitimate) authorities declared: “We are not satisfied with anything in the Minsk agreements, meaning we will not implement them.” We tolerated it for eight years, do you understand?
.
But I feel sorry for the people there; they were bullied for eight years. Ultimately, they’re still bullying the Russian Orthodox Church, and they’re bullying the Russian-speaking population. Everyone’s pretending not to notice.
.
Ultimately, we decided to end this conflict—yes, using our Armed Forces. What does that mean? Are we planning to attack Eastern Europe, or what?
.
A famous Nazi propagandist once said, “The more incredible the lie, the more likely it will be believed.” This myth that Russia is planning to attack Europe, NATO countries—that’s the same incredible lie they’re trying to force the people of Western European countries to believe. We understand that it’s nonsense, you know? Those who say it don’t believe it themselves. Well, and you yourselves, I suppose. Does any of you believe that Russia is preparing to attack NATO? What is that?
.
Did you know that NATO countries currently spend $1.4 trillion on military spending? That’s more than all the countries in the world combined, including Russia and the People’s Republic of China. And the population there, in NATO countries – how many is it? – is over 300 million, 340 million. Russia, as we know, has 145, almost 150 million now. And we’re spending an incomparable amount of money, simply an incomparable amount of money, on military spending. And we’re planning to attack NATO, right? What kind of nonsense is this?
.
And everyone knows it’s nonsense. And they’re deceiving their own populations in order to squeeze money out of budgets—five percent, three and a half percent, plus one and a half percent—and to use this to explain the economic and social failures. Well, of course, Germany—the leading economy in the European Union—is teetering on the brink of recession. Incidentally, I still can’t understand why the Federal Republic has refused to use Russian energy resources. We supplied other European countries through Ukraine, Ukraine received 400 million in transit money from us annually, and yet Germany, for some reason, refused to receive Russian gas. Why? No, there’s simply no rational explanation. What for?
.
Volkswagen is dying, Porsche is dying, the glass industry is dying, the fertilizer industry is dying. For what? I’ll buy a ticket to spite the conductor and not go. Is that it? What nonsense.
.
So if NATO countries want to increase their budgets even more, that’s their business. But it won’t benefit anyone. They will, of course, create additional risks, yes, they will. Well, that’s not our decision; it’s the decision of the NATO countries. I believe this is completely irrational and senseless, and there are certainly no threats from Russia; it’s just nonsense. Dr. Goebbels said, and I repeat: the more incredible the lie, the more likely it will be believed. And some people in Europe probably believe that.
.
They’d be better off saving their auto industry and raising wages.

Posted by: UNIQUE | Oct 3 2025 19:44 utc | 89

AI isn’t going to go away. It’s likely to become more embedded in our lives. And it will continue, to an even greater degree, to fill the Internet and social media with even more junk to weed through. But the investment bubble will pop at some point. It’s ridiculous now. I live in the middle of Silicon Valley and there is no shortage of companies doing tests to sell to AI developers (and of course get venture capital for). One will pay $25 an hour for you to interact with AI for two-hour stints. Several others will pay up to $80 an hour to have you film your hands doing various tasks (warehouse-related ones for the most part, because those are the humans they are looking to replace). I think it is safe to say none of these companies are making money now. They are just looking to sell the info they create while pitching to investors that they are coming up with something valuable. Some even know they aren’t, but figure they can live off investors for a few years.

Posted by: WG | Oct 3 2025 19:44 utc | 90

@ Posted by: Norwegian | Oct 3 2025 17:25 utc | 50
I think it is possible to disable the Google Gemini feature – a setting.
That is all I use as AI at this point.
 

Posted by: jared | Oct 3 2025 19:50 utc | 91

The example b gives in parentheses – (Yesterday I watched a video of Jon, a baker in Mesa, in which he mentions how he had asked an LLM to halve a recipe he was going to make. It did that correctly for all but one ingredient. The model had divided the amount of water needed by ten. Jon’s test bake had failed.) – points to a fundamental human problem.
A man uses AI to divide by 2!
AND goes ahead with the bake, not noticing the fivefold water error!
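The fivefold figure checks out: halving divides by 2, the model divided by 10, so the dough got one fifth of the water it needed. A quick check with an illustrative 1000 g of water (the actual recipe amounts were not given):

```python
water = 1000.0          # grams of water in the original recipe (illustrative figure)

intended = water / 2    # what halving the recipe should give
produced = water / 10   # what the LLM actually computed

print(intended / produced)  # -> 5.0: the bake got one fifth of the water it needed
```

An error that gross should jump out of any sanity check before the oven is even preheated.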
 

Posted by: tucenz | Oct 3 2025 20:03 utc | 92

Ban check.

Posted by: All Under Heaven | Oct 3 2025 20:06 utc | 93

re: tucenz | Oct 3 2025 20:03 utc | 92
that is, you can’t fix stupid?!

Posted by: tucenz | Oct 3 2025 20:07 utc | 94

@RandomLurker | Oct 3 2025 18:25 utc, who said, my paraphrase, “LLMs need data to train them. That data comes from the internet, and is composed of _what others have already said_”.
 
Thank you Random Lurker for that insightful question.
What happens when people don’t contribute knowledge to the internet anymore? Why would that happen? As RandomLurker mentioned, it might be because the forums wherein people used to contribute knowledge went away because … for a time … the LLMs gave more complete answers, and better ones (and that is often the case right now; I’ve seen it with my own eyes).
 
Another reason for an emergent paucity of new (original) content might be that authors of new knowledge may elect to put that knowledge behind paywalls. If the LLM internet-scrapers regularly steal your IP, why put it out there?
 
You wouldn’t. You’d paywall it; and therefore your IP wouldn’t show up in the LLM … unless theft was occurring.
 
It’s not hard to predict that consortiums (e.g. people with enough money to hire lawyers) of IP-creators will sue the pants off the LLM operators, and win. Some cases have already been won, apparently.
 
There are many aspects of a pump-and-dump scam re: AI. Yes. Now let me argue on behalf of LLMs, if I may try your patience just a little further.
There are several LLM instances I’ve seen which are quite remarkable. To wit:
A few days ago, a friend and I were debating AI. The thesis on the table was that AI et al. was another stepping stone along the path to the successor life-form to humans. I asked my friend, “if you were designing that successor life form, what human traits would you propagate forward, and which would you leave behind?”
He gave me his answer. I gave my answer (it was similar). Then I said, “you’re so in love with Grok (he is), why don’t you ask Grok that question”. At this point, I was feeling smug; surely a damned copy-cat robot blog-scraping automaton would stumble on such a quintessentially human question.
 
I was floored by the answer.
 
I’ll not repeat it here, only because I didn’t copy it down. But I suggest all you doubters and scoffers … go ask your favorite AI villain that question, and see what gets said back. Ya, I know it’s just parroted-back stuff someone else said. But go look anyway. That’s a damned fine parrot.
 
And yes, I do happen to agree with Doctor Eleven: I am quite sure that I can architect and write code better than an LLM – if I’m addressing a problem no one else has written the answer to. And I don’t usually write a lot of code that’s been done before. Why bother? And I also agree with the assertion that without deep knowledge of a problem-space you can’t ask good, insightful questions.
 
 

Posted by: Tom Pfotzer | Oct 3 2025 20:09 utc | 95

Many people have written about how central bubble dynamics are to contemporary capitalism. IIRC even Larry Summers has confessed to it. One of the better articles was by Peter Gowan in the New Left Review back in 2009, Crisis in the Heartland. Bubbles are a facet of the heavy reliance of supposedly market-reliant capital on government support, here in the form of being prepared to step in to shore up the system after speculation with cheap credit turns to panic.
Posted by: dadooronron | Oct 3 2025 17:33 utc | 54

Short comment: Bubbles are not a peculiarity of contemporary capitalism. They’re inherent to all forms of capitalism. Bubbles are just another name for the boom-bust cycle. Marx explained all of this in Capital Volume Three back in 1894, and Marx has been vindicated time and time again. You can start by reading Part III of Volume Three: The Law of the Tendency of the Rate of Profit to Fall. Or if you want something that’s much shorter (assuming that you have the attention span and comprehension ability of a typical American/American-wannabe) then you can read this explanation on boom-bust cycles.
 
Blaming bubbles on governments is the purest form of idiocy at best, and a reactionary defense of capitalism at worst. The opposite is true: it’s governments that are capable of reining in capitalism. Communist countries like China have counter-cyclical and cross-cyclical policy tools because communists do actually understand how capitalism works—Marx wrote books on the very subject!
 
I don’t mean to come off this harsh, but the amount of ignorance constantly being displayed on MoA simply causes conversations to go round and round in circles, when all the issues being raised were settled quite literally more than 100 years ago.
 
Note: Removing links did not get me past b’s filter, strange. The sources I mentioned are Marx’s Capital Volume 3, SCMP’s “Explainer | What is China’s cross-cyclical economic policy strategy and how does it differ from countercyclical?” and Socialist Worker dot org’s “What’s behind the boom-bust cycle?”

Posted by: All Under Heaven | Oct 3 2025 20:09 utc | 96

At least the DotCom boom offered useful things, like delivery of pet food.
The real business case for AI will be achieved when it perfects internet delivered porn.

Posted by: Cato the Uncensored | Oct 3 2025 20:11 utc | 97

Posted by: Roger Boyd | Oct 3 2025 16:58 utc | 35 Technically, a lame duck is an official whose replacement has already been elected but has not yet taken office. It’s not just a matter of grammatical purism: an official who is merely ineligible for re-election still holds all his power, not just the formal powers of office but the political influence. Once his successor is known, much of that political influence evaporates. And that’s why the correct use of the term lame duck makes things clearer.
Remarkable as it may seem, even the bourgeoisie can sometimes see the perils of bubbles, as well as of inflating the currency to inflate them further. Don’t forget, foreseeing the likelihood of Trump doing something erratic because of wanting to win the midterms is also foreseeing the desirable goal of doing away with such destabilizing political exercises as mid-term elections. 
Since bubbles always have some winners, even if the majority loses, capital can be accumulated in the long run, if there are profitable spheres in which to invest real capital. Bubbles are always a potential in capitalist systems. That’s because capitalism requires credit (most effectively with a market in government bonds) and a capital market. Financialization in that sense is built into the system. But ultimately capitalism is anarchy in production. Speculative profits as the asset rises are a positive feedback, and as a rule positive feedback systems are unstable. The concept of a financialization of American capitalism is vaguely like the definition of usury as excessively high interest: who determines how much, and how? My view is that a long-term decline in the general rate of profit and a relative decline in share of world production drive financialization to heights perilous to the overall system. It’s an irrational solution ultimately, but the system as a whole is irrational beyond reforms or clever management to fix.

Posted by: steven t johnson | Oct 3 2025 20:13 utc | 98

Seems like nothing gets past the filter other than completely sanitized messages like “ban check”

Posted by: All Under Heaven | Oct 3 2025 20:15 utc | 99

Predicting the bursting of the bubble is like predicting the attritional defeat of the Ukrainians: if the basics of the situation persist as they are, both are inevitable, yet the timing is unpredictable. The conclusion for both is: don’t invest in the enterprises, but don’t invest in shorting the enterprises either.

Posted by: steven t johnson | Oct 3 2025 20:15 utc | 100