Is the AI Bubble Ready To Pop?

• https://www.lewrockwell.com, Moon of Alabama

The AI Bubble and the U.S. Economy: How Long Do "Hallucinations" Last?

Yves writes:

This is a devastating, must-read paper by Servaas Storm on how AI is failing to meet its core, repeatedly hyped performance promises, and never can, irrespective of how much money and computing power is thrown at it. Yet AI, which Storm calls "Artificial Information," is still commanding valuations worse than those of the dot-com frenzy, even as errors are, if anything, increasing.

Storm's introduction:

This paper argues that (i) we have reached "peak GenAI" in terms of current Large Language Models (LLMs); scaling (building more data centers and using more chips) will not take us further to the goal of "Artificial General Intelligence" (AGI); returns are diminishing rapidly; (ii) the AI-LLM industry and the larger U.S. economy are experiencing a speculative bubble, which is about to burst.

I happen to agree with the arguments and the conclusion.

The current Large Language Models are part of the field of Generative Artificial Intelligence. GenAI is one twig on the research tree of Artificial Intelligence. LLMs are based on 'neural networks': they store billions of tiny pieces of information along with probability values describing how those pieces relate to each other. The method is thought to simulate a part of human thinking.
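The core idea of predicting text from stored relation probabilities can be illustrated with a toy bigram model. This is only a sketch of the principle, not how production LLMs are built (they use neural networks with billions of parameters, not frequency tables), and the tiny corpus here is made up:

```python
# Toy "language model": count bigrams in a corpus, then predict the
# next token by relative frequency. Like an LLM, its output is a
# probability distribution over possible next tokens.
# (Illustrative sketch only; the corpus is hypothetical.)
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each token follows each other token.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_token_probs(token):
    """Return a dict mapping candidate next tokens to probabilities."""
    counts = follows[token]
    total = sum(counts.values())
    return {tok: n / total for tok, n in counts.items()}

print(next_token_probs("the"))
# "the" is followed by "cat" twice, "mat" once, "fish" once,
# so the model assigns cat=0.5, mat=0.25, fish=0.25
```

The model "knows" nothing about cats or mats; it only knows which tokens tend to follow which, which is the point of the distinction drawn below.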

But human thinking does much more than storing bits of information and statistical values of how they relate. It constantly builds mental models of the world we live in. That leads to an understanding of higher-level concepts and of the laws of nature. The brain can simulate events in those mental model worlds. We can thus recognize what is happening around us and anticipate what might happen next.

Generative AI and LLMs cannot do that. They do not have, or create, mental models. They are simple probabilistic systems: machine learning algorithms that recognize patterns with a certain probabilistic degree of getting it right. It is inherent to such models that they make mistakes. The hope, voiced by LLM promoters, that they will scale up into know-it-all Artificial General Intelligence (AGI) machines is futile. Making bigger LLMs will only increase the amount of defective output they create.
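The claim that probabilistic errors are inherent can be made concrete with simple arithmetic: if each generation step is correct with probability p, and a task requires n consecutive correct steps, the chance of a fully correct output is p^n, which decays quickly as outputs get longer. The accuracy figures below are assumed for illustration, not taken from Storm's paper:

```python
# Error compounding in a step-by-step probabilistic generator
# (illustrative arithmetic; per-step accuracies are assumed values,
# and real models' errors are not fully independent across steps).
def chance_all_correct(p: float, n: int) -> float:
    """Probability that n independent steps, each correct with
    probability p, are all correct."""
    return p ** n

for p in (0.99, 0.999):
    for n in (100, 1000):
        print(f"per-step accuracy {p}, {n} steps: "
              f"{chance_all_correct(p, n):.2%} chance of a fully correct output")
```

Even 99.9% per-step accuracy leaves only about a one-in-three chance of an error-free 1,000-step output, which is one way to read the argument that scaling alone does not eliminate defective output.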
