In an astonishing reaction to ChatGPT, the Wall Street Journal published a 4,324-word panicked paroxysm of purple prose on Feb. 25, 2023, declaring the popular new service an all-but-irresistible threat.
What makes the claims doubly astonishing is the group of authors signing this piece – Henry Kissinger, Eric Schmidt of Google/Alphabet and Daniel Huttenlocher, dean of the Schwarzman College of Computing at MIT. These three have written an ominously titled book on this subject, “The Age of AI: And Our Human Future.” Given that venue and those authors, I was expecting an informative and level-headed treatment of this new, verbose chatbot.
The whoppers start popping in the first paragraph, where they tell us ChatGPT “creates a gap between human knowledge and human understanding.” That concern sounds familiar; Plato raised it in the 4th century B.C., at the founding of rational philosophy in the West. How does ChatGPT cause that fundamental philosophical challenge?
This article would have been much more enlightening if written by a master in the field, such as AI scientist Ray Kurzweil, who has been creating new solutions in AI and machine learning for five decades. The current authors seem to think ChatGPT is sui generis, all new, something unthought of before now. They claim it “will redefine human knowledge, accelerate changes in the fabric of our reality, and reorganize politics and society.” In 2005, Kurzweil published his fascinating “The Singularity Is Near,” describing his well-grounded projection of trends in the development of computing power and the integration of machines with people in VR. That was a towering, sensible overview with none of the fear and trepidation of the WSJ piece. I get the feeling that Kissinger et al. are writing about a set of tools that will jump out of the toolbox and take over the world.
The authors then move into a comparison of ChatGPT with Enlightenment approaches, or simply the scientific method: hypothesis, experiment, analysis, conclusion. They say AI starts at the other end, producing answers to questions with no way to tell where the answers came from: “Sophisticated AI methods produce results without explaining why or how their process works.”
It seems we are in danger of ChatGPT becoming our one and only source of knowledge and understanding. How will that happen?
The intuitive way to test this answer-giver, or any other source of information, is to ask it about things you already know. We do not come to a ChatGPT session as a walking, talking tabula rasa; we possess learned knowledge and trained thinking skills.
On common subjects with widely accessible and reliable knowledge available on the web – the very same source, by the way, that ChatGPT was trained on – we can ask simple questions and compare the answers to our expectations. For example, I asked ChatGPT things like: “Where did Aeneas end up?” “What was the highest-flying manned airplane?” “When was omnifont OCR brought to market?” It got them all wrong. Aeneas went to Rome, NOT Athens. It was the X-15 that flew highest, NOT the SR-71. Ray Kurzweil’s omnifont OCR came to market in the 1970s, NOT the 1960s.
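That spot-check procedure – ask questions you already know the answers to, then compare – can be sketched as a simple comparison loop. This is purely illustrative: the `spot_check` helper and the `fake_chatbot` stand-in are hypothetical, not an actual ChatGPT client, and the canned wrong answers mirror the ones reported above.

```python
# Hypothetical spot-check: compare a chatbot's answers on questions
# whose correct answers we already know.

def spot_check(get_answer, reference):
    """Return (question, answer, expected) triples where the bot's answer
    does not contain the keyword we expect in a correct answer.

    get_answer: callable taking a question string, returning the bot's answer.
    reference:  dict mapping question -> keyword expected in a correct answer.
    """
    failures = []
    for question, expected in reference.items():
        answer = get_answer(question)
        if expected.lower() not in answer.lower():
            failures.append((question, answer, expected))
    return failures

# Stand-in for a real chatbot call, returning the wrong answers
# the article reports receiving.
def fake_chatbot(question):
    canned = {
        "Where did Aeneas end up?": "Aeneas ended up in Athens.",
        "What was the highest-flying manned airplane?": "The SR-71.",
    }
    return canned[question]

reference = {
    "Where did Aeneas end up?": "Rome",
    "What was the highest-flying manned airplane?": "X-15",
}

failures = spot_check(fake_chatbot, reference)
print(len(failures))  # prints 2: both canned answers miss the expected keyword
```

The keyword match is deliberately crude; the point is only that a reader with prior knowledge can mechanically catch confident wrong answers.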
It’s not a threat to mankind that ChatGPT gives wrong answers, but the wordy confidence with which it delivers those wrong answers is a bit annoying. If you don’t trust its answers, go look somewhere else. Best practice is to ALWAYS double-check web info; expecting ChatGPT to be right every time is as bad as relying on info from random wiki pages. The thought occurs: if all the accurate info is available on the web, and ChatGPT read those tens or hundreds of billions of words, why does it get anything wrong? That’s annoying, but it is not the existential threat the authors describe when they warn that “the mystery associated with machine learning will challenge human cognition for the indefinite future.” Is the web going away? Are common sense and reason dying out? Are reference books and databases going to disappear in some impending global burning of the Library of Alexandria?
The fearful hysteria is sustained throughout this long article; I can’t picture any of the authors being able to support this position in an intellectual argument. “The question remains: Can we learn, quickly enough, to challenge rather than obey? Or will we in the end be obliged to submit?” I honestly don’t get it: who, without coercion, would submit to accepting knowledge from a computer?
Toward the end, the paroxysm reaches sci-fi levels – “Using the new form of intelligence will entail some degree of acceptance of its effects on our self-perception, perception of reality and reality itself.” Yes, we are in The Matrix, that flashy metaphor for Plato’s Cave and the ancient questions of Knowledge and Understanding.
Again, the most astonishing thing about this jejune article is that it was produced by these authors in this venue. We would all have been much better served if Ray Kurzweil had walked us through ChatGPT; he could start with the currently insuperable challenges he describes in trying to match the power of a human brain in his 2012 book, “How to Create a Mind: The Secret of Human Thought Revealed.”
#ChatGPT #OpenAI #Bing #Microsoft #PlatoRepublic
PDF Expert – Master PDF and OCR
Copyright © 2023 Tony McKinley. All rights reserved.
Email: amckinley1@verizon.net