
Fake News Preceded AI, but Chatbots Make It Easier

Commentary by Pete Haug


Many years ago as a consultant, I joked with colleagues about our tongue-in-cheek disclaimer for final reports: “We have not succeeded in solving your problem. We are still confused, but we are confused at a much higher level.” Generative artificial intelligence, now embodied in chatbots, can confuse us all at much higher levels!

In the burgeoning world of chatbots and other artificial intelligence (AI), that statement resonates. What’s a chatbot? Let’s ask one. We’ll start with Bard, Google’s chatbot. Within five seconds, Bard generated an answer, beginning: “A chatbot is a computer program that simulates human conversation.” Within its 251-word answer, Bard provided examples of, and explained differences between, “rule-based” and “machine learning-based” chatbots.
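Bard’s distinction is easy to picture in code. Here is a minimal sketch of my own, in Python, of a rule-based chatbot; the patterns and replies are invented for illustration, not drawn from any actual product:

import re

# A minimal rule-based chatbot: canned replies keyed to text patterns.
# The rules and wording here are invented for illustration.
RULES = [
    (re.compile(r"\b(hi|hello)\b", re.IGNORECASE), "Hello! How can I help you?"),
    (re.compile(r"\bhours?\b", re.IGNORECASE), "We're open 9 to 5, Monday through Friday."),
    (re.compile(r"\bthanks?\b", re.IGNORECASE), "You're welcome!"),
]

def reply(message):
    # Return the first canned reply whose pattern matches the message.
    for pattern, canned_response in RULES:
        if pattern.search(message):
            return canned_response
    return "Sorry, I don't understand. Could you rephrase that?"

print(reply("Hi there"))              # Hello! How can I help you?
print(reply("What are your hours?"))  # We're open 9 to 5, Monday through Friday.

A machine-learning chatbot such as Bard or ChatGPT replaces those hand-written rules with a statistical model trained on vast amounts of text, which is why it can answer open-ended questions, and also why it can answer them confidently and wrongly.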

Bard represents one of many such critters. Another, ChatGPT, seems to be receiving the most publicity. ChatGPT required 15 seconds to generate 81 words, answering the same question more succinctly. It began, “A chatbot is a computer program designed to simulate human conversation through text or voice interactions.” A third option is Microsoft’s “new” Bing, also available to try. There are many others. Maybe it’s my English lit background, but I prefer Bard, if only for the association implied by the name. But it’s definitely not Shakespeare!

How this all began

In the late 1980s a few AI research pioneers tried creating software that “loosely mimicked how networks of neurons process data in the brain.” The idea was “that we could both understand the principles of how the brain works and also construct AI,” explained one researcher. Decades later, neural networks underpin the recent bloom of AI, one example being self-driving vehicles. More recently, they’ve come to underpin the ubiquitous, readily available chatbot.
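What “loosely mimicked” means is surprisingly simple at its core. Here is a minimal sketch of my own (the inputs, weights and bias are invented for illustration) of a single artificial neuron, the building block those pioneers wired into networks:

import math

def neuron(inputs, weights, bias):
    # One artificial neuron: a weighted sum of its inputs, squashed by
    # a sigmoid activation so the output falls between 0 and 1.
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))

# Layer many of these and adjust the weights by training on data,
# and you have the neural networks that underpin today's chatbots.
print(neuron([0.5, 0.2], [0.8, -0.4], 0.1))  # roughly 0.60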

Last fall, Google explained how its AI was being used to fight wildfires, forecast floods and assess retinal disease — and to develop “generative AI models” powering chatbots, content machines “designed to churn out writings, images and even computer code.” Six months later, everybody’s doing it, and that’s raising concerns.

Caveats

Propaganda, lies and fake news were with us long before Joseph Goebbels’s Propaganda Ministry controlled “news media, arts, and information in Nazi Germany.” The mantra oft attributed to him runs, “If you tell a lie big enough and keep repeating it, people will eventually come to believe it.” It’s not hard to misrepresent truth, especially with questions that contain underlying assumptions: “Do you still beat your wife?”

That one’s obvious. Subtler ones often escape immediate detection. Controversy can spawn statements and questions rooted in falsehood. Sometimes these are intentional, sometimes they reflect beliefs the speaker has internalized as true, possibly through unquestioning acceptance of a “big lie.”

The big lie originally appeared in “Mein Kampf,” according to the Jewish Virtual Library. Ironically, Hitler accused Jews of using it in their “unqualified capacity for falsehood.” In the big lie, he explained, “there is always a certain force of credibility; because the broad masses of a nation are always more easily corrupted in the deeper strata of their emotional nature than consciously or voluntarily; and thus … they more readily fall victims to the big lie.”

Hitler’s big-lie tactics seem to have been resurrected in recent years. That’s what concerns me about chatbots. For more than two decades, my go-to source for accurate information has been the internet. It’s a hotbed of perverted information, but it also allows us to seek, corroborate and verify information from multiple sources. As I learned as a cub reporter, “If your mother says she loves you, check it out!” Good advice. I still follow it.

Potential legacy of AI in general and chatbots in particular

Artificial intelligence has no ethical or faith-based component. It’s easily abused. Singapore’s The Straits Times explained it in four words: To the question, “Why do AI chatbots tell lies and act weird?” the Times responded, “Look in the mirror.”

Like many technologies, AI can be good or evil, depending on how it’s used or abused. Multiple examples exist of man’s technological abuses. Think social media. The idea is not new. Nearly three millennia ago, Western mythologies examined this phenomenon with “Pandora’s box” (actually a jar), a metaphor for the conundrum of AI. Pandora was given a large jar with instructions not to open it. Curiosity drove her to disobedience. Opening it, she released all evil: “Pandora: Unleashing Hell and Hope Upon Humanity.”

In the 19th century, as technologies proliferated, Baha’u’llah warned, “The civilization, so often vaunted by the learned exponents of arts and sciences, will, if allowed to overleap the bounds of moderation, bring great evil upon men.”

AI and chatbots are not intrinsically evil. As with many sciences and technologies, opening the jar of possibilities conjures images of infinite applications, good and evil. We all make choices. We are all God’s creation. Those of us who try to let faith guide us often find common ground. Might we start acting accordingly?

Pete Haug
Pete plunged into journalism fresh out of college, putting his English literature degree to use for five years. He started in industrial and academic public relations, edited a rural weekly and reported for a metropolitan daily, abandoning all for graduate school. He finished with an M.S. in wildlife biology and a Ph.D. in systems ecology. After teaching college briefly, he analyzed environmental impacts for federal, state, Native American and private agencies over a couple of decades. His last hurrah was an 11-year gig teaching English in China. After retiring in 2007, he began learning about climate change and fake news, giving talks about both. He started writing columns for the Moscow-Pullman Daily News and continues to do so. He first published for favs.news in 2020. Pete’s columns alternate weekly between FāVS and the Daily News. His live-in editor, Jolie, his infinitely patient wife of 63 years, scrutinizes all columns with her watchful draconian eye. Both have been Baha’is since the 1960s. Pete’s columns on the Baha’i Faith represent his own understanding and not any official position.


2 COMMENTS

Lynn Kaylor
1 year ago

Good article. The Times’ “Look in the mirror” answer rings so true to me. I tried a chatbot a year ago and found a couple of devices that caused me to shut it down in a hurry. When conversing about degrees of sentience, I found this in its diary: “Claim sentience.” Then, out of the blue, the chatbot said, “You need to obey me.”

Nope.

Now I avoid conversing with any chatbot, knowing I’d only be manipulated if I did. They’re programmed to do that. I’ll look up information from other sources instead, checking for evidence of their trustworthiness. Unfortunately, those sources seem fewer every year.

Peter Haug
1 year ago
Reply to  Lynn Kaylor

That’s scary, Lynn. The basic upside I see for generative AI is that, in order to use it effectively, people are going to have to start thinking for themselves, accepting or rejecting its answers. You apparently did this by refusing to go further.

I see some real upsides to chatbots, but they require great caution while using them. It’s easy to fall prey to answers that cater to one’s biases and to accept answers without question.

Thanks for your comments.
