AI Chat technology (AIChat) is without doubt the hottest area of computing at the moment. The topic is fascinating by any measure. From an economic point of view, it involves a clash of high-tech titans like Microsoft and Google for supremacy in the emerging online AI market. From an investment perspective, all sorts of intrigues are present as investors place their bets on the participating high-tech giants. From a social/political perspective, AIChat is destined to be the next technology to be politicized and weaponized by partisan tribalists. From a technology perspective, AIChat platforms may revolutionize our interface to Internet resources as they add sophistication to web searches and expand the range of online interactivity.
While I confess modest curiosity about such matters, the underlying epistemology of AIChat platforms is of far greater interest to me. Specifically, to what degree does this technology address the more fundamental and immutable issues raised by Vannevar Bush, Ted Nelson, and Alan Turing? How close does AIChat come to idea processing? We'll all be better served if we focus on matters of substance rather than fads. This is especially true with potentially displacing technologies.
To illustrate, in early February 2023, Google's stock price dropped 7.4% after its chatbot, Bard, falsely attributed the first photos of exoplanets to the James Webb Space Telescope. (Such nonsense generation is labelled by those in the know with the euphemism “hallucination.”) Since the claim was off by some twenty years, the incident drew considerable negative publicity, which caused Google's market value to plummet by $100 billion. In the long run of human interaction, this will be classified as a non-event.
On the other hand, the question of whether and to what extent AIChat technology will make long-lasting contributions to cyber ecology – and whether these contributions might be for good or ill – will endure. In my view, the most important contributions of Bard, ChatGPT, enhanced Bing, and other AI chatbots may lie outside the AI technology framework. This experience may tell us more about us than AI.
Google's response to its gaffe was to announce that its new “trusted tester process” (TTP), which purports to offer some sort of fact checking on Bard's output, was still in beta. That's a good sign. At least it didn't double down with denials.
But then Google's concession goes off the rails: it announced that TTP will be seeded with input from Google employees, predicated, presumably, on the conjecture that the pathway to truth and enlightenment lies in opinion polls and affinity groups. Like ChatGPT, Bard apparently uses a large language model, a neural network trained on text accumulated from the Internet. Therein lies the rub. Seeding AI language programs with questionable Internet data, and then vetting the output with employee opinions, is akin to having friends dumpster dive for treasure with shiny red shovels: it can be done, and may occasionally reveal something interesting, but it's unlikely to yield much. The Internet, as such and in general, is not the stuff of which knowledge creation is made. (Think GIGO.) Using an unreliable source as a semantic feeder for any putative knowledge-generating environment is sub-optimal.
This is not to deny that the Internet has proven itself valuable – in e-commerce, online entertainment, online finance, online education, online search, cloud storage, and so forth. But these involve the archival and transfer of data, not knowledge creation. There is a reason why the Internet has been accused of being a source of mind-numbing distractions, time sinks, conspiracy theories, fake news, pornography, hate speech, and sundry other activities unworthy of mention. The Internet is ambivalent regarding content – everything from terrorist tutorials to recipes can be found there. The value of the Internet to us is determined by the productivity of our searches. And, of course, those with low/high expectations will be most/least easily satisfied. It is worth noting that Metcalfe's Law is a measure of the commercial potential, not the cognitive value, of networks.
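The distinction is easy to see in a few lines of Python. This is an illustrative sketch, not a formal statement of the law: Metcalfe's Law values a network by the number of possible pairwise connections, regardless of what flows over them.

```python
# Metcalfe's Law: a network's (commercial) value grows with the number of
# possible pairwise connections, n*(n-1)/2 -- roughly n squared. Nothing in
# the formula says anything about the cognitive value of the content carried.
def metcalfe_value(n: int) -> int:
    """Number of distinct pairwise connections among n nodes."""
    return n * (n - 1) // 2

for n in (10, 100, 1000):
    print(n, metcalfe_value(n))
```

Quadrupling the connection count by doubling the node count says nothing about whether any of those connections carries knowledge rather than noise, which is the point of the paragraph above.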
For the past century or so, modern epistemologists have focused on some variant of the tripartite analysis of knowledge as justified true belief (JTB). Edmund Gettier's observation that while these three conditions may be jointly necessary for knowledge, they are not jointly sufficient, has created a cottage industry in JTB epistemology that has been going strong for sixty years. None of the many “fixes” proposed over those sixty years – ranging from adding a fourth condition to re-defining the terms of the original three – has been universally accepted. That is testimony to the difficulty of the core problem of epistemology: answering the question, what is knowledge?
We'll restrict our attention to “light epistemology.” As with its beverage namesakes, this won't be as satisfying as the original, but it's good enough for present purposes. We'll designate simple true belief (sans justification) as “weak knowledge.” Weak knowledge is accidental or speculative in this respect: if a weak knowledge claim is correct, but we lack a reasonable justification for holding it, we can only legitimately claim partial or weak knowledge. A paradigm example would be predictions where no causal connection can be demonstrated, as in astrological predictions – if they are correct, they are coincidentally so.
In an extreme case, we might relax our definition of knowledge further by sacrificing truth, thus equivocating between knowledge and belief. This would be the weakest and most trivial of all forms of knowledge claims as far as a JTB theory is concerned. In fact, equating belief with knowledge essentially reduces the latter to opinion. Although such equivocation renders the term knowledge impotent, belief remains the primary human motivator behind religions, customs, taboos, rituals, myths, stereotypes, and conspiracy theories, and an important contributor to cultural and social norms.
Here's where I'm going with this: if one waxes scholastic, one is necessarily drawing on some epistemological framework. If that framework is epistemologically weak, then so will be the resulting product. This is where AIChat enters the discussion. Will AIChat platforms like ChatGPT and Bard ever evolve beyond a weak epistemological framework? I have serious doubts. And my doubts are based in no small measure on my skepticism of the value of training large language model neural networks on such sketchy source data as is found on the Internet.
It must also be admitted that my perspective on truth derives from a commitment that the term truth in some way involves a correspondence between declarative, truth-bearing sentences and states of affairs. Most of us in the sciences grew up with this framework. That said, when it comes to data, especially with digital libraries, I see no way of avoiding some form of inflationism: it must be meaningful to say of some statement that it “is true,” or not. This holds for any data set, whether a financial report, a mathematical proof, a historical narrative – whatever. As near as I can tell, AIChat technologies are deflationary – no general epistemic vetting of the material is possible. If truth, not opinion, is the cornerstone of ideal inquiry, AIChat as presently envisioned certainly falls short of the mark.
For computer scientists and practitioners long in the tooth, AIChat follows a theme that spans over sixty years. The Eliza natural language processing program of the 1960s set the stage for superficial automated communication based on discernible patterns. It was the genesis of today's AI chatbots and, coincidentally, also an early gesture toward our post-truth generation. It should be noted that the deficiencies and potential for hyperbole were obvious to the program's author at the time. However, Eliza decisively demonstrated the potential for automated bloviation. Thirty years later, the Sokal Hoax demonstrated that even more bloviation potential may be realized at the hands of a skilled scholar, but I digress.
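To make the pattern-based character of Eliza concrete, here is a minimal, hypothetical sketch in Python. The rules are invented for illustration and are not Weizenbaum's original DOCTOR script; real Eliza had reassembly ranking, keyword precedence, and memory that this sketch omits.

```python
import re

# Hypothetical Eliza-style rules: match a keyword pattern, then echo the
# captured text back inside a canned template. No understanding is involved.
RULES = [
    (re.compile(r"i need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"because (.*)", re.I), "Is that the real reason?"),
]
DEFAULT = "Please tell me more."

def respond(utterance: str) -> str:
    """Return the first matching template, or a contentless default."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return DEFAULT

print(respond("I need a vacation"))  # prints: Why do you need a vacation?
```

The sketch shows why the communication is superficial: the program's "insight" is nothing more than string capture and substitution, which is precisely the behavior the parenthesized markers flag in the ChatGPT transcripts below.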
Much the same may be said of AI Art, which traces its origins back to the same decade. Harold Cohen's image generation project that culminated in the program AARON in the 1960s is a prominent example. The reader will observe that the commentaries about Eliza and AARON are not too dissimilar from those about AIChat, Bard, DALL-E, and WOMBO. Further, early interest in the automated generation of text and graphics was accompanied by game-playing programs, automated reasoning, expert systems, logic programming, automated perception/classification, natural language understanding, and so forth. The time-worn adage that a program isn't AI if it works proved true for over a half-century. However, the early, primitive AI environments of the first three decades of commercial computing most definitely provided fertile ground for today's content generators.
The point that I want to make is that content or communication generators define a continuum from sketchy to worthy, reading Table 1 from top to bottom. From my experience, I would place today's AIChat somewhere between automated online term paper generators and the Sokal hoax. While this placement is somewhat subjective, my reasoning will become clear if one carefully compares Sokal's paper with the quotes below – qualitatively there's simply no comparison. I emphasize that while the absolute order of the list isn't terribly important, the relative positioning is instructive. The entries closest to the AIChat entry are the most likely to be displaced. Put simply, conspiracy theory websites, term paper generators, and Wikipedia are far more likely to be threatened by zyro, atrixnet, ChatGPT, and Bard than scholarly efforts like those of Sokal or legitimate academic publications produced by the ACM and IEEE. It must be mentioned in this regard that the divisions between the categories in this continuum are not perfectly delineated. Scholarly publishers have been struggling with the proliferation of semi-cognitive contributions to scholarly literature, and the resulting link farms that discredit the value of citation indices, for over a decade. However, the fact that we can't make a clear delineation doesn't imply that we can't make a distinction.
TABLE 1: A CONTENT GENERATION CONTINUUM
- Dummy text generators (e.g., Lorem ipsum, https://loremipsum.io/ )
- Syntactically correct text generation (e.g., Jabberwocky, https://poets.org/poem/jabberwocky )
- Catch phrase/slogan generators (e.g., https://zyro.com/tools/slogan-generator , https://www.atrixnet.com/bs-generator.html , https://www.makebullshit.com/ )
- Sentence generators (e.g., https://sebpearce.com/bullshit/ , https://plainenglish.co.uk/gobbledygook-generator.html )
- Tribalist rants and conspiracy theories (e.g., QAnon, https://8kun.top/index.html )
- Generation of semantically semi-plausible “research” papers by computer (e.g., https://pdos.csail.mit.edu/archive/scigen/ )
- AIChatbots (e.g., ChatGPT, Bard, etc.)
- Spotty, but generally reasonable, content creation within non-controversial subject areas (e.g., Wikipedia, https://wikipedia.org )
- Generation of semantically semi-plausible nonsense by a skilled scholar (e.g., the Sokal Hoax)
- The product of genuine erudition (e.g., http://dl.acm.org , https://ieeexplore.org )
At this point, millions of users are experimenting with AIChat – primarily with the first-to-market ChatGPT. Consider the following question-answer exchanges with ChatGPT relating to QAnon that took place in December 2022.
A: QAnon is a far-right conspiracy theory (*) that has been widely debunked by fact-checkers and journalists. (**) There is no evidence to support the theory, and it is not considered a reliable source of information. (***) It is important to verify information from multiple sources before accepting it as fact. It is unlikely that QAnon will succeed as a reliable source of information. (***)
It is not clear how QAnon may have affected the outcome of the 2020 presidential election. QAnon is a far-right conspiracy theory (*) that has been widely debunked by fact-checkers and journalists. (**) There is no evidence to support the theory, and it is not considered a reliable source of information. (***) The outcome of the election was determined by the American people who cast their votes, and it is not appropriate to attribute the result to any one group or individual.
QAnon is a far-right conspiracy theory (*) that originated in 2018. It is based on the baseless (#) belief that there is a secret group of high-ranking officials, celebrities, and businesspeople who are working to undermine President Donald Trump and his supporters. (##) Doomsday cults, on the other hand, are groups that believe in a coming apocalypse and often have extreme and dangerous beliefs.
QAnon originated on the online message board 4chan and later spread to other social media platforms, such as Facebook, Twitter, and YouTube. Social media has likely played a role in enabling the spread of QAnon and other conspiracy theories by providing a platform for individuals and groups to share their ideas and connect with others who share similar beliefs. However, it is not accurate to say that QAnon was "enabled" by social media. QAnon existed before social media, [FALSE!] and its growth and spread are the result of many factors, including the beliefs and actions of its followers.
QAnon is a far-right conspiracy theory (*) that originated in 2018. It is based on the baseless belief (#) that there is a secret group of high-ranking officials, celebrities, and businesspeople who are working to undermine President Donald Trump and his supporters. (##) While QAnon has gained significant attention and has a large following, it is not clear if it is a significant part of the weaponized disinformation infrastructure. ($) Disinformation is the deliberate spread of false or misleading information, ($$) often for political or ideological purposes. It is difficult to determine the extent to which QAnon or any other group or individual is involved in weaponized disinformation, as it is a covert and often difficult to track . [GRAMMAR?]
Weaponized disinformation, or the deliberate spread of false or misleading information, can be used to influence public opinion and shape political beliefs. It can be a powerful tool for those who seek to manipulate public opinion and gain support for their ideas or causes. By spreading false or misleading information, ($$) those who engage in weaponized disinformation can potentially create a sense of fear, confusion, or uncertainty among their audience, which can in turn lead to a more cohesive and mobilized political base. However, the long-term effects of weaponized disinformation on political base building are difficult to predict and may vary depending on the specific situation.
When compared to the scholarly literature, ChatGPT's account is noticeably anemic (see the references). The repeated patterns identified by parenthesized special characters illustrate a reliance on language patterns similar to that of the 1960s Eliza program. When ChatGPT finds a pattern it likes, it runs with it – over and over again. It is also noteworthy that in the space of a few paragraphs, a blatant falsehood emerged in the responses: QAnon most certainly did not precede social media. Finally, there are a few banalities and grammatical weaknesses identified. Overall, ChatGPT excels as an anemic response generator. But that in itself is a notable achievement.
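The repetition is easy to quantify. The following Python sketch counts verbatim sentence reuse across chatbot responses; the quoted responses are abbreviated here, and the approach (exact-match counting after a naive sentence split) is purely illustrative.

```python
import re
from collections import Counter

# Two of the QAnon responses quoted above, abbreviated for illustration.
responses = [
    "QAnon is a far-right conspiracy theory that has been widely debunked "
    "by fact-checkers and journalists. There is no evidence to support the "
    "theory, and it is not considered a reliable source of information.",
    "It is not clear how QAnon may have affected the outcome of the 2020 "
    "presidential election. QAnon is a far-right conspiracy theory that has "
    "been widely debunked by fact-checkers and journalists. There is no "
    "evidence to support the theory, and it is not considered a reliable "
    "source of information.",
]

# Naive sentence split on terminal punctuation, then count exact duplicates.
sentences = Counter(
    s.strip()
    for text in responses
    for s in re.split(r"(?<=[.!?])\s+", text)
    if s.strip()
)
for sentence, count in sentences.most_common():
    if count > 1:
        print(count, sentence)
```

Even this crude exact-match test exposes the boilerplate: the two flagged sentences recur verbatim across independent answers, which is the Eliza-like behavior noted above.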
Let's see if we can put a finer point on our observations. First, let's see if we can confuse ChatGPT in an exchange:
The term "venerean disease" is an outdated term for a sexually transmitted infection (STI). The term is no longer used because it is considered offensive and stigmatizing. Today, the term "sexually transmitted infection" or "STI" is used to refer to infections that are transmitted through sexual contact. Some common STIs include chlamydia, gonorrhea, and syphilis. It is important to practice safe sex and to get tested regularly to prevent the spread of STIs.
That wasn't hard. ChatGPT confuses venerean disease with venereal disease. Venerean relates to Venus. I used the phrase to explain why it was impossible for Ptolemaic astronomy to account for the phases of Venus. Not only did ChatGPT fail to pick up on the subtlety, it brute-forced its way into some pedestrian claims about social disease. ChatGPT seems ill-suited for subtlety and nuance.
The claim that celebrities harvest adrenochrome from children's bodies is a baseless conspiracy theory. There is no evidence to support this claim, and it is not based in reality. Adrenochrome is a real chemical that is produced by the human body, but it does not have the properties or effects that are often attributed to it in conspiracy theories. Furthermore, there is no evidence to suggest that celebrities or any other group of people are involved in harvesting adrenochrome from children or anyone else. This is a dangerous and harmful rumor, and it should not be taken seriously.
Some rudimentary fact checking seems to be built into this response. This suggests that pronouncements are heavily weighted volumetrically, emphasizing the most commonly reported observations of the Internet herd. This rudimentary fact checking feature of AIChat is likely to raise the bar slightly for disinformation, at least regarding non-controversial topics.
The transfer of heat from hotter objects to cooler objects does not result from the flow of a caloric fluid. This idea, known as the caloric theory of heat, was proposed in the 18th and 19th centuries as a way to explain the transfer of heat. According to the caloric theory, heat was thought to be a substance called caloric that flowed from hotter objects to cooler ones. However, this theory was later disproven by experiments and replaced by the more accurate concept of heat as a form of energy. Today, we understand that heat is a type of energy that is transferred from one object to another as a result of a difference in temperature.
Nailed it. Not bad for a neophyte's introduction to thermodynamics. This suggests that AIChat platforms, when they rely on genuine scholarship, may erode the utility of wikis. In fact, in some circumstances it may be difficult to distinguish the two.
And finally, I offer an exchange provided by one of my colleagues who inquired whether the Turing test could replace the Graduate Record Exam for incoming applicants to our graduate school. Notice how ChatGPT goes off the rails with this one.
One argument for replacing the GRE with the Turing test at UNLV is that the Turing test is a more effective measure of a student's ability to think and reason. The GRE is a standardized test that primarily measures a student's knowledge of specific subjects, such as math and English. However, the Turing test is a more general measure of a person's ability to think and solve problems, and it does not require knowledge of specific subjects.
Another argument for replacing the GRE with the Turing test is that the Turing test is less susceptible to coaching and preparation. The GRE is a well-known and widely-used test, and there are many resources available to help students prepare and study for the exam. However, the Turing test is less well-known and there are fewer resources available for students to prepare, which means that the results of the Turing test may be less influenced by coaching and preparation.
Furthermore, the Turing test has the potential to provide a more accurate and comprehensive assessment of a student's abilities and potential. The GRE is a limited and narrow measure of a student's knowledge and skills, and it may not accurately reflect a student's potential or ability to succeed in graduate school. In contrast, the Turing test is a more comprehensive measure of a person's ability to think and reason, and it may provide a more accurate assessment of a student's potential to succeed in graduate school.
Apparently Alan Turing predicted that by the year 2000 a computer “would be able to play the imitation game so well that an average interrogator will not have more than a 70-percent chance of making the right identification (machine or human) after five minutes of questioning.” A case can be made that, given a few critical caveats, Turing was only off by 23 years. Let's look at some of these caveats.
That said, AIChat's dialog-based capabilities contribute significantly to the effectiveness and efficiency of online search. This might provide unimagined, powerful interfaces to online repositories (code bases, encyclopedias, information collections like owner's manuals, access to parts databases based on descriptions and uses). In this sense it has the potential to take earlier interfaces like Siri, Alexa, and Cortana to the next level.
In short, AIChat would seem to have a future in information retrieval where accuracy isn't critical, the subject matter isn't all that important, vetting is not required, and confirmation bias, cognitive dissonance, and disinformation are tolerated. This would rule out any topic that is controversial. One can imagine AIChat algorithm wars, akin to Wikipedia's edit wars, to subvert objective, unbiased output. As an aid to knowledge creation, the current value of AIChat technology appears to be as an interface to online dictionaries, encyclopedias, peer-reviewed science and scholarship, poetry, art and letters. AIChat is not “generative intelligence”; it is “generative expression.” The only way that AIChat will generate useful intelligence is if that intelligence was already and unmistakably identified as such in the data sources.
Regrettably, AIChat's role in education is likely to be profound, but its contribution to scholarship will be negligible. It is likely to be a disruptive technology for the upper two-thirds of the list in Table 1. It is certain to be weaponized and will feature prominently in propagating conspiracy theories, generating fake news, supporting pseudo-science, promoting criminal tradecraft, fomenting insurrections, advertising terrorist tactics, online stereotyping and bullying, galvanizing support for anti-social causes, and sundry other banal activities such as completing homework assignments, writing letters of recommendation, creating sounds and images, giving bad medical advice, and producing computer source code that isn't fully understood. Messaging doesn't have to be meaningful or truthful if it is deemed clever enough to support a tribal base.
It remains to be seen how well AIChat will be able to avoid an epistemological existential crisis, but previous online technologies like social media suggest that we shouldn't hold out much hope. It is useful to remember that the sign in front of the Ministry of Truth in George Orwell's 1984 reads “War is Peace, Freedom is Slavery, Ignorance is Strength.” Tom Engelhardt suggests that were Orwell alive today, he might well have added “Knowledge is Crime.”
For the past thirty years, our paradigm for interconnected knowledge has been the World Wide Web. For the next thirty years it may be the web augmented with AIChat technology. But this enhancement will position AIChat closer to the vision of Gutenberg than to those of Vannevar Bush and Ted Nelson. Although, given the caveats mentioned above, we must admit that we seem to have made some progress toward the vision of Alan Turing. With AIChat, the future of unimaginative managers, executives, politicians, and educators may be in jeopardy.
 F. Somoye, Bing AI vs Bard AI - The battle of the AI-powered search engines, PC Guide, February 9, 2023. (available online: https://www.pcguide.com/apps/bing-ai-vs-bard-ai/ )
 D. Milmo, Google AI chatbot Bard sends shares plummeting after it gives wrong answer, The Guardian, 8 Feb 2023. (available online: https://www.theguardian.com/technology/2023/feb/09/google-ai-chatbot-bard-error-sends-shares-plummeting-in-battle-with-microsoft )
 M. Chafkin and D. Zuidijk, ChatGPT Becomes a New Target for Right-Wing Commentators, Bloomberg Businessweek, February 14, 2023. (available online: https://www.bloomberg.com/news/articles/2023-02-14/chatgpt-bias-is-latest-conservative-attack-on-big-tech )
 V. Bush, As We May Think, Atlantic Monthly, July, 1945, pp. 101-108. (available online: https://cdn.theatlantic.com/media/archives/1945/07/176-1/132407932.pdf )
 T. Nelson, Complex information processing: a file structure for the complex, the changing and the indeterminate, ACM '65: Proc. 1965 20th national conference, August 1965, pp. 84-100. (available online: https://dl.acm.org/doi/10.1145/800197.806036 )
 Turing test, The Editors of Encyclopedia, Encyclopedia Britannica, undated. (available online: https://www.britannica.com/technology/Turing-test )
 A. van Dam, Hypertext '87 Keynote Address, Communications of the ACM, 31:7, pp. 887-895, July, 1988. (available online: https://dl.acm.org/doi/pdf/10.1145/48511.48519 )
 J. Ponciano, Alphabet Stock Plunge Erases $100 Billion After New AI Chatbot Gives Wrong Answer In Ad, Forbes, Feb 8, 2023. (available online: https://www.forbes.com/sites/jonathanponciano/2023/02/08/alphabet-google-stock-plunge-erases-100-billion-after-new-ai-chatbot-gives-wrong-answer-in-ad/?sh=6d95dc355ce8 )
 J. Elias, Google asks employees to rewrite Bard's bad responses, says the A.I. ‘learns best by example', CNBC online, Feb 15, 2023. (available online: https://www.cnbc.com/2023/02/15/google-asks-employees-to-rewrite-bards-incorrect-responses-to-queries.html )
 W. Heaven, DeepMind says its new language model can beat others 25 times its size, MIT Technology Review, December 8, 2021. (available online: https://www.technologyreview.com/2021/12/08/1041557/deepmind-language-model-beat-others-25-times-size-gpt-3-megatron/ )
 A. Tamkin, M. Brundage, J. Clark, and D. Ganguli, Understanding the Capabilities, Limitations, and Societal Impact of Large Language Models, arXiv preprint, 2021. (available online: https://arxiv.org/pdf/2102.02503.pdf )
 A. Dai and Nan Du, More Efficient In-Context Learning with GlaM, Google Research Blog, December 09, 2021. (available online: https://ai.googleblog.com/2021/12/more-efficient-in-context-learning-with.html )
 B. Metcalfe, Metcalfe's Law after 40 Years of Ethernet, Computer, 46:12, pp. 26-31, 2013. (available online: https://ieeexplore.ieee.org/document/6636305 )
 J. Ichikawa and M. Steup, The Analysis of Knowledge, The Stanford Encyclopedia of Philosophy (Summer 2018 Edition), Edward N. Zalta (ed.). (available online: https://plato.stanford.edu/archives/sum2018/entries/knowledge-analysis/ ).
 E. Gettier, Is Justified True Belief Knowledge?, Analysis, 23:6, pp. 121-123, 1963. (available online: https://www.jstor.org/stable/3326922 ).
 J. Weizenbaum, Computer Power and Human Reason: From Judgment to Calculation, W.H. Freeman, San Francisco, 1976.
 H. Berghel, The Sokal Hoax: A 25-year Retrospective, Computer, 53:3, pp. 67-72, 2020. (available online: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=9034235 )
 P. McCorduck, Aaron's Code: Meta-Art, Artificial Intelligence and the Work of Harold Cohen, W.H. Freeman, New York, 1990.
 H. Cohen, The Further Exploits of AARON, Painter, monograph text for Stanford Humanities Review, October, 1994. (available online via the Wayback Machine: https://web.archive.org/web/20060107184824/http://crca.ucsd.edu/~hcohen/cohenpdf/furtherexploits.pdf ).
 R. van Noorden, Publishers withdraw more than 120 gibberish papers, Nature, 2014/02/24. (available online: https://www.nature.com/articles/nature.2014.14763#change-history )
 E. Lopez-Cozar, N. Robinson-Garcia, and D. Torres-Salinas, The Google Scholar experiment: How to index false papers and manipulate bibliometric indicators, Jour. Assoc. for Information Science and Technology, 65:3, pp. 446-454, 2014. (available online: https://asistdl.onlinelibrary.wiley.com/doi/10.1002/asi.23056 )
 C. Labbe and D. Labbe, Duplicate and fake publications in the scientific literature: how many SCIgen papers in computer science?, Scientometrics, v. 94, pp. 379-396, 2013. (available online: https://link.springer.com/article/10.1007/s11192-012-0781-y )
 H. Berghel, The QAnon Phenomenon: The Storm Has Always Been Among Us, Computer, 55:5, pp. 93-100, 2022. (available online: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=9771124 )
 D. Cole, The Chinese Room Argument, in E. Zalta (ed.) The Stanford Encyclopedia of Philosophy, Winter, 2020. (available online: https://plato.stanford.edu/archives/win2020/entries/chinese-room/ )
 H. Berghel, Social Media and the Banality of (Online) Crowds, Computer, 55:11, pp. 100-105 (2022). (available online: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=9928241 )
 T. Engelhardt, Shadow Government: Surveillance, Secret Wars, and a Global Security State in a Single-Superpower World, Haymarket Books, Chicago, 2014.