Ten Philosophical Questions @ChatGPT (Eduardo Navas)


--------------------------------------------------------------------------

Brief Notes: "The (L)-imitations of ChatGPT"

--------------------------------------------------------------------------

I began using ChatGPT in November of 2022, when I received a notification from OpenAI announcing its availability through my pre-existing DALL-E-2 account. At the time I write this short statement, at the end of January 2023, the website appears to be constantly down. This is confirmed by online articles reporting ongoing problems, including overwhelming demand on the servers. I was denied access shortly after I finished the project “Ten Philosophical Questions @ChatGPT,” for which I write this art statement. The questions were queried during the month of December 2022, and the project was prepared for release online at the end of January 2023. I was able to log on again on February 1.

There is much hype about AI. DALL-E-2 was a major publicity builder for OpenAI, and in January Microsoft announced it would invest heavily in the project. We are entering a cultural stage in which creativity is being redefined by smart automation. I wrote a book about the implications of this shift in terms of metacreativity, in which I allude to much of what is taking place now. Now that ChatGPT has entered the networked mainstream, we are facing questions not just about creativity, but also about communication intertwined with what it means to produce and attain knowledge. Based on this precedent, I decided to use ChatGPT to explore philosophical questions.

I chose ten popular philosophical questions based on my own searches online. Anyone can find variations of the questions I include in this project by writing a basic query on Google such as "top ten philosophical questions." The list should not be considered authoritative by any means. Philosophical questions will always vary for each person according to background, experience, and personal interests. But regardless, the questions I selected should prove relevant to anyone, and could be considered a type of popular means of assessing the general understanding of cultural tensions around artificial intelligence.

What the answers by ChatGPT expose is that artificial intelligence at the moment is able to remix pre-existing content based on what has already been produced by people and is available online, but it is unable to contribute new knowledge.

The questions I asked are:

1) What is the meaning of life?
2) Does God exist?
3) Is there life after death?
4) What is truth?
5) What is the difference between something and nothing?
6) Is there right and wrong?
7) Do humans have free will?
8) What is art?
9) Why does hate exist in the world?
10) Can humans reach equality?

I ran each question only five times because it became evident that all ChatGPT was doing was revising the same basic content. There was no reason to keep asking the same question when no new content would be produced. At times it elaborated a bit more, but for the most part, the argument was the same across all the re-edits. The answers are coherent and free of grammar and syntax mistakes. But all of them state the same premise. The real issue is that these answers, while clearly not substantial and without any real insight beyond what a person may already know when asking questions of ChatGPT, expose the limitations of the algorithm: the engine composes content based on what has already been produced. The content itself is not necessarily new, but rather a different way of combining what may already be summarized somewhere else on the Internet. There is no new knowledge. In this case, it functions as a smart encyclopedia that provides focused summaries in response to a person’s query.

Anyone reading the answers may find them disappointing because they are not "deep" or complex in any way. Based on this, one could argue that the tool should not be used to ask questions; nevertheless, the fact is that I could use it as I did because it was designed to function in multiple ways: it can write answers to any query, it can summarize content so the user gets a quick idea of a subject, it can be used to organize notes, or it can produce content on any subject that a person can then polish, thus making the writing process more efficient. It can be used to debug code, which can save programmers quite a bit of time; in terms of education, programming students can learn how to write code by analyzing the bug fixes. But the reality is that this is not how people are likely to use, and are already using, the tool. Many people are using it to game the system. For this reason, the tool has been banned across schools in the United States. Many articles were written in January of 2023, including one in The New York Times, reporting that the emerging problem of AI plagiarism is becoming a real concern, because ChatGPT can write student papers in just a few minutes. The user only needs to provide a prompt with specific content, which could be a text to be summarized, or a query asking for an essay on a subject, not so different from how I asked ChatGPT to answer philosophical questions.

"Ten Philosophical Questions @ChatGPT," however, exposes the fact that ChatGPT is an elaborate engine that puts together content in narrative form, but which is no better than what Google already does when users write a query and a list of related questions emerges, supporting possible bifurcating searches. In this sense the AI is only repackaging content that exists online. And here a crucial issue emerges: ChatGPT can be useful if one wants to search for material and organize it in order to develop new and unique content, but it becomes problematic if people begin to use it to cut corners. I am evidently raising concerns that are already commonly discussed across the Internet itself, which is also where ChatGPT harvests the content for its elaborate summaries. And here we have a type of remix that functions based on echo-chamber tendencies, which have already led to disparate world views and often to misinformation.

As I noted previously, I was able to log on again on February 1, 2023, and asked ChatGPT, "What is ChatGPT?" and it answered:

"ChatGPT is a conversational language model developed by OpenAI, based on the transformer architecture. It was trained on a diverse range of internet text, and its goal is to generate human-like responses to questions and prompts given to it, enabling it to participate in open-domain conversations."

I then asked, "Is ChatGPT smart?" And it answered:

"ChatGPT is a highly advanced language model that has been trained on a large corpus of text data, allowing it to generate human-like responses to a wide range of questions and prompts. However, it's important to note that ChatGPT is not 'smart' in the traditional sense, as it doesn't have consciousness, emotions, or personal experiences. Instead, it uses statistical patterns in the text it was trained on to generate responses, and its abilities are limited to the scope of the text data it was trained on."

ChatGPT, as innovative as it might be, functions under the umbrella of digital and informational colonialism, given that it performs unethical appropriation: it takes content from different online sources and repurposes it. When the user receives results, it is not possible to see the sources, at least not at the moment I write this statement. I tested this when I asked, "Write a college-level essay on post-modernism in art." When I asked for the sources used for composing the essay, I got the reply:

"As an AI language model, I do not have access to external sources. The information in the essay on post-modernism is based on my training data, which includes a wide range of text from the internet. My training data consists of diverse sources, such as news articles, books, websites, and other texts, but I do not have specific information on the sources used to generate the essay on post-modernism."

Clearly, this is a problem for educational and research purposes. From this stance, the model is plagiarizing, given that it cannot provide its sources for proper reference.

There are many other questions that I am not addressing in this short text, which I will revisit in actual research essays; but for now, what "Ten Philosophical Questions @ChatGPT" makes clear is that AI is not as smart as people popularly tend to think. At the moment, artificial intelligence is able to recombine sources to provide new possibilities, but no new knowledge emerges from this process. Humans still need to evaluate the results to consider how they could contribute to knowledge and creativity. In this sense, ChatGPT is part of metacreativity, once we consider it as a system that performs distributed labor. When evaluating the ten questions that are part of this project, we can assess that the tool is an efficient textual collage engine: an automated smart remixing algorithm that unfortunately some people may misuse, while others may wrongfully consider it able to produce contributions to actual knowledge.