There’s Just One Problem: AI Isn’t Intelligent, and That’s a Systemic Risk

14-8-2024 | SGT Report | 621 words
 

by Charles Hugh Smith, Health Impact News:



Mimicry of intelligence isn’t intelligence, and so while AI mimicry is a powerful tool, it isn’t intelligent.


The mythology of Technology has a special altar for AI, artificial intelligence, which is reverently worshiped as the source of astonishing cost reductions (as human labor is replaced by AI) and the limitless expansion of consumption and profits. AI is the blissful perfection of technology’s natural advance to ever greater powers.


The consensus holds that the advance of AI will lead to a utopia of essentially limitless control of Nature and a cornucopia of leisure and abundance.




If we pull aside the mythology’s curtain, we find that AI mimics human intelligence, and this mimicry is so enthralling that we take it as evidence of actual intelligence. But mimicry of intelligence isn’t intelligence, and so while AI mimicry is a powerful tool, it isn’t intelligent.


The current iterations of Generative AI–large language models (LLMs) and machine learning–mimic our natural language ability by processing millions of examples of human writing and speech and extracting what algorithms select as the best answers to queries.
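The statistical mimicry described above can be illustrated with a deliberately tiny sketch. The bigram model below is not how production LLMs work (they use neural networks trained on vastly more data), but it makes the essay's point concrete: the program reproduces plausible word sequences purely from co-occurrence counts, with no representation of meaning at all.

```python
import random
from collections import defaultdict

# Toy illustration of statistical mimicry: a bigram "language model"
# trained on a few sentences. It emits plausible-looking word sequences
# purely from counts of which word followed which, with no notion of
# meaning, truth, or context.

corpus = (
    "the model predicts the next word "
    "the model mimics human text "
    "the next word follows the last word"
).split()

# Record every word that was ever observed to follow each word.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start, length, seed=0):
    """Emit up to `length` words by repeatedly sampling a recorded successor."""
    rng = random.Random(seed)
    word, out = start, [start]
    for _ in range(length - 1):
        options = follows.get(word)
        if not options:  # dead end: this word never had a successor
            break
        word = rng.choice(options)
        out.append(word)
    return " ".join(out)

print(generate("the", 6))
```

Every output is grammatical-looking English assembled from fragments of the training text, yet the program cannot distinguish a true sentence from a false one, which is the essay's distinction between mimicry and intelligence.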


These AI programs have no understanding of the context or the meaning of the subject; they mine human knowledge to distill an answer. This is potentially useful but not intelligence.


The AI programs have limited capacity to discern truth from falsehood, hence their propensity to hallucinate fictions as facts. They are incapable of discerning the difference between statistical variations and fatal errors, and layering on precautionary measures adds additional complexity that becomes another point of failure.


As for machine learning, AI can project plausible solutions to computationally demanding problems such as how proteins fold, but this brute-force computational black-box is opaque and therefore of limited value: the program doesn’t actually understand protein folding in the way humans understand it, and we don’t understand how the program arrived at its solution.


Since AI doesn’t actually understand the context, it is limited to the options embedded in its programming and algorithms. We discern these limits in AI-based apps and bots, which have no awareness of the actual problem. For example, our Internet connection is down due to a corrupted system update, but because this possibility wasn’t included in the app’s universe of problems to solve, the AI app/bot dutifully reports the system is functioning perfectly even though it is broken. (This is an example from real life.)


In essence, every layer of this mining/mimicry creates additional points of failure: the inability to identify the difference between fact and fiction or between allowable error rates and fatal errors, the added complexity of precautionary measures, and the black-box opacity all generate risks of normal accidents cascading into systems failure.


There is also the systemic risk generated by relying on black-box AI to operate systems to the point that humans lose the capacity to modify or rebuild those systems. This over-reliance on AI programs creates the risk of cascading failure not just of digital systems but of the real-world infrastructure that now depends on digital systems.


There is an even more pernicious result of depending on AI for solutions. Just as the addictive nature of mobile phones, social media and Internet content has disrupted our ability to concentrate, focus and learn difficult material–a devastating decline in learning for children and teens–AI offers up a cornucopia of snackable factoids, snippets of coding, computer-generated TV commercials, articles and entire books that no longer require us to have any deep knowledge of subjects and processes. Lacking this understanding, we’re no longer equipped to pursue skeptical inquiry or create content or coding from scratch.


Read More @ HealthImpactNews.com



