The Looming Artificial Intelligence Crisis

We may well be at the stage where we will need to advertise “100% human-made” in a “post-human,” twisted version of “non-GMO.”

From a student essay on the topic of artificial intelligence:

Artificial intelligence (AI) and ChatGPT are two technologies that have been rapidly advancing in recent years and have garnered much attention in both academic and popular spheres. While both have the potential to bring about significant benefits and improvements in various fields, they also come with their own set of dangers and limitations. In this essay, we will explore the benefits and dangers of AI and ChatGPT from a Catholic worldview, examining the potential impacts and implications of these technologies through the lens of Catholic teaching and values.

One of the main benefits of AI from a Catholic perspective is its potential to alleviate suffering and improve the lives of people. AI algorithms can analyze large amounts of data and make decisions faster than humans, making it particularly useful in fields such as healthcare. For instance, AI algorithms can analyze patient data and help doctors make more accurate diagnoses, leading to better patient outcomes and reducing suffering. Similarly, AI can be used in agriculture and other industries to improve efficiency and increase the availability of resources, which can help to reduce poverty and improve the lives of people.

Another benefit of AI from a Catholic perspective is its ability to handle repetitive tasks, freeing up humans to focus on more creative and complex work. This can lead to increased productivity and efficiency in the workplace, as well as a better work-life balance for employees. In the Catholic tradition, work is seen as a way to participate in God’s creative activity and to contribute to the common good, and so technologies that enable humans to work more efficiently and effectively can be seen as a positive development.

However, there are also several dangers to consider. One of the main concerns is the potential for job displacement. As AI algorithms become more advanced, there is a risk that they may replace human workers in various industries. This could lead to significant job losses and exacerbate economic inequality. In the Catholic tradition, work is seen as a fundamental aspect of human dignity and a means of providing for oneself and one’s family, and so technologies that threaten the availability of work can be seen as a danger. It is important to address this issue by developing strategies to mitigate any negative effects on employment and to ensure that the benefits of AI are shared fairly.

Another is the potential for biased outcomes. If the data used to train an AI algorithm is biased, the algorithm may also be biased in its decision-making. This can have serious consequences, particularly in fields such as criminal justice, where biased AI algorithms could result in unfair sentences or other forms of discrimination. In the Catholic tradition, the principle of justice is central, and so technologies that perpetuate or exacerbate injustice can be seen as a danger. It is important to address this issue by carefully selecting the data used to train AI algorithms and by implementing measures to ensure that the algorithms are not biased.

ChatGPT is a chatbot developed by OpenAI that uses natural language processing (NLP) to generate human-like responses to user input…

Full disclosure: This “student essay” (which continues for five more paragraphs) was “written” by AI in less than a minute (OpenAI and AI Paraphrasers). I had asked OpenAI to “write a paper about the benefits and dangers of AI and ChatGPT from a Catholic worldview.” When further asked to “provide feedback on the content and organization of the essay” it had generated, it responded in seconds:

Overall, the essay provides a comprehensive overview of the benefits and dangers of AI and ChatGPT from a Catholic perspective. The benefits and dangers of each technology are clearly explained and supported with examples, and the essay provides a balanced view of the potential impacts and implications of these technologies. The organization of the essay is clear and logical, with each point flowing smoothly into the next.

One potential area for improvement could be to provide more in-depth analysis and discussion of the implications of the benefits and dangers of AI and ChatGPT from a Catholic perspective. For example, the essay could delve more deeply into how the Catholic tradition approaches issues such as work, justice, and communication, and how these values can inform the use and development of AI and ChatGPT. Additionally, the essay could consider potential ethical dilemmas and challenges that may arise in the use of these technologies, and how the Catholic tradition can provide guidance in addressing these issues.

Make no mistake: AI sophistication has exponentially increased in the last year and is accelerating faster than ethicists can ponder the ramifications (Google has been firing its ethicists of late). 

When I worked on an “ART-ificial Intelligence” project for my honors research thesis some 30 years ago, I was training an AI to create art pieces. My reflection back then was how surreal it was to “teach” a machine to create art that was to my taste. I was creating a virtual me, as it were, at least in terms of this narrow segment of art. 

The difference between then and now? Back then, I would iteratively train the AI manually, spending long nights and early mornings generating art with the AI. Now, the AI can simply be unleashed on the Internet, left to train itself with terabytes of information, human interactions, and content in seconds. 

And if speculations are true, the AIs released to the public are a mere fraction of the power of those the big conglomerates are playing with, which is possibly another factor in the Taiwan/China tensions: supporting a powerful AI brain requires vast amounts of computing hardware, much of which is fabricated in Taiwan.

The academic disciplines affected most directly are the quantitative fields, the sciences, and the technology programs. I described tasks I wanted performed and asked the AI to write programs for them; it produced the necessary code in seconds, without error. Security researchers are sounding the alarm that ChatGPT was successfully prompted to write malicious code that could compromise computers, run malware, phish for credentials, or encrypt an entire system in a ransomware scheme.
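
To give a sense of how little effort this takes, here is a minimal sketch of the kind of request involved, using the openai Python package (pre-1.0 interface). The model name, placeholder API key, and the example task are illustrative assumptions, not prompts taken from the sessions described above.

import openai  # pip install "openai<1.0"

openai.api_key = "YOUR_API_KEY"  # placeholder; supply your own key

# One plain-English sentence is the entire "programming" effort.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{
        "role": "user",
        "content": "Write a Python function that reads a CSV file and prints the total of each numeric column.",
    }],
)

# The reply is a complete, ready-to-paste program.
print(response.choices[0].message.content)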

The arts are no safer. I had it write poems, songs, and stories, even going so far as to specify the style and genre, telling it, for example, to write me stories in the style of Rod Serling’s Twilight Zone, replete with twist endings. It also generated an essay about the Battle of Midway in seconds.

Someone won an art competition in which the winning piece was entirely AI-generated. Microsoft’s AI can build a robo-clone of your voice from just three seconds of audio.

It’s no longer a distant possibility to tell the AI to create a movie, for example. The key is in connecting the AI components. Connect the AI that writes the screenplay to the AI that creates images, use deepfake technology to create realistic human actors replete with AI voices, use Unreal Engine and the like to create environments, and you have a blockbuster movie in minutes.

Yes, all this is already seriously impacting institutions of learning, but this isn’t merely about the potential demise of academia as we know it. One YouTube user described asking the AI how to rob a specific bank at a specific location at a specific time; it allegedly provided him with options for the best way to execute this.

Combine the AI with a facsimile of a human form to play games on the one hand, or to analyze military strategy and control attack drones in worldwide conflicts on the other. Financial analysts contend that AI predictive capabilities have crossed a threshold, now reaching up to 87.4% accuracy.

Or, more insidious still, AI can easily be used to obfuscate and manipulate agenda-driven politics and policies, appearing in your online conversations, for example. Disturbingly, millions of AI-driven voices could drown out the human minority.

It’s not far-fetched to say that from this point on, there’s going to be an even greater crisis of credibility online. You may not be conversing with a flesh-and-blood human being. When you read an op-ed, it may have been AI-generated. We can’t put the genie back in this bottle. We may well be at the stage where we will need to advertise “100% human-made” in a “post-human,” twisted version of “non-GMO.”

The WEF’s Yuval Noah Harari, a pro-transhumanist and lead advisor to Klaus Schwab, declared that humans are hackable: “Organisms are algorithms [to decipher]…once we have algorithms that can understand me better than I understand myself, they could predict my desires, manipulate my emotions, and even make decisions on my behalf.” “AI & bio-technology are giving us divine powers of creation and destruction,” Harari asserted. We may soon need to defend not just the pro-life principle but the pro-human one as well.
