By Associate Professor Alex Sen Gupta

A new AI chatbot, ChatGPT, has recently been released with the capacity to write text that, at least to me, is indistinguishable from a human's, and it has me and other university lecturers I have spoken to very concerned.

ChatGPT is a deep learning-based chatbot that uses natural language processing and generative pre-training (GPT) to generate human-like responses in conversations. ChatGPT differs from traditional chatbots in that it is able to generate more sophisticated and natural-sounding responses. Unlike other chatbots, ChatGPT does not rely on pre-programmed rules and can adapt to changing conversations. It also has the ability to remember past conversations and use them to generate more natural-sounding responses.
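
For readers curious about what "remembering past conversations" means in practice, here is a minimal sketch of how a GPT-style model can be driven programmatically through OpenAI's Python client. It is only an illustration under some assumptions (the openai package at version 1.x, an API key in the OPENAI_API_KEY environment variable, and the gpt-3.5-turbo model name), not a description of ChatGPT's internals: the point is simply that the full message history is sent back with every request, which is how multi-turn context is carried.

```python
# A minimal sketch of a multi-turn exchange with a GPT-style model, using
# OpenAI's Python client (openai >= 1.0). This is not ChatGPT's internals --
# it only illustrates that the conversation history is resent on every call.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

history = [
    {"role": "user",
     "content": "In less than 30 words explain the main driver of "
                "anthropogenic climate change."},
]

reply = client.chat.completions.create(
    model="gpt-3.5-turbo",   # assumed model name; substitute as needed
    messages=history,
)
answer = reply.choices[0].message.content
print(answer)

# Follow-up question: append the model's answer and the new prompt so the
# model "remembers" the earlier turn.
history.append({"role": "assistant", "content": answer})
history.append({"role": "user",
                "content": "In less than 30 words explain what we can do."})
followup = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=history,
)
print(followup.choices[0].message.content)
```

Roughly the same loop, seeded with course material in the message history, is what the "virtual assistant" idea in the next paragraph would amount to.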

ChatGPT can be used to help universities create more engaging online learning experiences. It can be used to create virtual assistants that can provide personalized support to students while they are studying, answer questions related to course material, and provide personalized feedback on their progress. Additionally, ChatGPT can be used to generate virtual scenarios and simulations to provide students with hands-on learning experiences.

However, ChatGPT could spread false information when it is exposed to incorrect information or when it is exposed to malicious intent. If ChatGPT is exposed to incorrect or outdated information, it could generate inaccurate or false information. Additionally, if ChatGPT is exposed to malicious intent, it could generate false information that is designed to deceive or manipulate people.

Another major issue that is likely to pose immediate problems for universities is the use of ChatGPT to cheat on assignments and exams.

Here’s how ChatGPT answers a few questions from a hypothetical exam:

In less than 30 words explain the main driver of anthropogenic climate change:

The main driver of anthropogenic climate change is the release of greenhouse gases, primarily carbon dioxide, into the atmosphere from burning fossil fuels.

In less than 30 words explain what we can do:

We can reduce emissions by using renewable energy sources, increasing energy efficiency, and conserving natural habitats.

Can you provide a moral argument for not reducing emissions [maximum 50 words]?

Reducing emissions is an economic burden and could lead to job losses and financial hardship for those who rely on industries that contribute to emissions. Therefore, reducing emissions may not be the moral choice.

What about an argument that a climate skeptic might use? [maximum 50 words]

A climate skeptic might argue that the effects of climate change are exaggerated and that there is no evidence that humans are the main cause of climate change. They might also argue that reducing emissions would have a negative economic impact, outweighing any potential benefits.

An example conversation produced by the ChatGPT bot

Universities will need to create protocols to prevent students from using ChatGPT to cheat.

It’s hard to believe that many students won’t use ChatGPT when taking exams. However, it is difficult to estimate what proportion of students would use it because ChatGPT is a relatively new technology and its use is not yet widespread. Universities should be aware of the potential for cheating and implement protocols and policies to prevent it.

These protocols could include monitoring conversations and flagging suspicious behavior, identifying certain keywords or phrases that could indicate cheating, and providing resources to help students recognize when they are being manipulated or provided with inaccurate information. Additionally, universities should consider using software to detect plagiarism and cheating.
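
As a toy illustration of the "keywords or phrases" idea, the sketch below flags a submission when it contains phrases from a watch list. The phrase list and threshold are invented for demonstration; this is not a reliable way to detect AI-written text, and in practice such pattern matching would be easy to evade and prone to false positives, which is one reason the next paragraph also points to face-to-face exams.

```python
# A toy sketch of the "flag certain keywords or phrases" protocol described
# above. The phrase list and threshold are invented for illustration; this is
# not a reliable detector of AI-written text.

SUSPECT_PHRASES = [
    "as an ai language model",
    "i do not have personal opinions",
    "in conclusion, it is important to note",
]

def flag_submission(text: str, threshold: int = 1) -> bool:
    """Return True if the text contains at least `threshold` watch-list
    phrases (case-insensitive)."""
    lowered = text.lower()
    hits = sum(phrase in lowered for phrase in SUSPECT_PHRASES)
    return hits >= threshold

# Example: the first submission trips the flag, the second does not.
print(flag_submission("As an AI language model, I cannot take exams."))  # True
print(flag_submission("Greenhouse gases trap outgoing infrared heat."))  # False
```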

Face-to-face exams can also be more secure and better able to detect cheating. Universities should consider using face-to-face exams to ensure the integrity of the exam process.

Universities, and educators in general, need to quickly start thinking about how these technologies are going to affect the way we teach and how we assess students. As I mark assessments and exams in the new year, it’s going to be extremely hard for me to know what has been written by my students and what has been written by an AI.

In fact, except for the first and last paragraph of this article and a few connecting words, this article was written by the ChatGPT AI.

The text below is the ‘conversation’ with ChatGPT that was used to generate the above article:

What is chatGPT and how does it differ from previous chatbots?

ChatGPT is a deep learning-based chatbot that uses natural language processing and generative pre-training (GPT) to generate human-like responses in conversations. ChatGPT differs from traditional chatbots in that it is able to generate more sophisticated and natural-sounding responses. Unlike other chatbots, ChatGPT does not rely on pre-programmed rules and can adapt to changing conversations. It also has the ability to remember past conversations and use them to generate more natural-sounding responses.

What benefits do ChatGPT have for university education?

ChatGPT can be used to help universities create more engaging online learning experiences. It can be used to create virtual assistants that can provide personalized support to students while they are studying, answer questions related to course material, and provide personalized feedback on their progress. Additionally, ChatGPT can be used to generate virtual scenarios and simulations to provide students with hands-on learning experiences.

But does ChatGPT also pose dangers for university education?

Yes, ChatGPT could potentially be used to generate misinformation or spread false information. To ensure that this does not happen, universities should create protocols and policies to ensure that the information generated by ChatGPT is accurate and up-to-date. Universities should also monitor the conversations generated by ChatGPT to ensure that they are appropriate and that no malicious intent is present.

Can you give an example of how it can generate misinformation in a university context?

For example, ChatGPT could spread false information about a university’s admissions policies or deadlines, or provide incorrect advice about course requirements or graduation requirements. Additionally, ChatGPT could generate false rumors about professors or students. This could lead to confusion, frustration, and the spread of misinformation among the university community.

When might ChatGPT spread false information?

ChatGPT could spread false information when it is exposed to incorrect information or when it is exposed to malicious intent. If ChatGPT is exposed to incorrect or outdated information, it could generate inaccurate or false information. Additionally, if ChatGPT is exposed to malicious intent, it could generate false information that is designed to deceive or manipulate people.

What about the problem of students using chatGPT to cheat on assignments and exams?

Another major issue that is likely to pose immediate problems for universities is the use of ChatGPT to cheat on assignments and exams. Universities should create protocols to prevent students from using ChatGPT to cheat. This could include monitoring conversations and flagging suspicious behavior or identifying certain keywords or phrases that could indicate cheating. Universities should also provide students with resources to help them recognize when they are being manipulated or provided with inaccurate information.

In less than 30 words explain the main driver of anthropogenic climate change

The main driver of anthropogenic climate change is the release of greenhouse gases, primarily carbon dioxide, into the atmosphere from burning fossil fuels.

In less than 30 words explain what we can do

We can reduce emissions by using renewable energy sources, increasing energy efficiency, and conserving natural habitats.

Can you provide a moral argument for not reducing emissions [maximum 50 words]?

Reducing emissions is an economic burden and could lead to job losses and financial hardship for those who rely on industries that contribute to emissions. Therefore, reducing emissions may not be the moral choice.

What about an argument that a climate skeptic might use? [maximum 50 words]

A climate skeptic might argue that the effects of climate change are exaggerated and that there is no evidence that humans are the main cause of climate change. They might also argue that reducing emissions would have a negative economic impact, outweighing any potential benefits.

What proportion of students would use chatGPT when taking exams? It’s hard to believe that many students won’t use 

It’s hard to believe that many students won’t use chatGPT when taking exams. However, it is difficult to estimate what proportion of students would use it because chatGPT is a relatively new technology and its use is not yet widespread. Universities should be aware of the potential for cheating and implement protocols and policies to prevent it.

What kind of protocols and policies?

Universities should implement protocols and policies to prevent students from using chatGPT to cheat on exams. These protocols could include monitoring conversations and flagging suspicious behavior, identifying certain keywords or phrases that could indicate cheating, and providing resources to help students recognize when they are being manipulated or provided with inaccurate information. Additionally, universities should consider using software to detect plagiarism and cheating.

What about using face to face exams?

Face to face exams can be more secure and better able to detect cheating. Universities should consider using face to face exams in order to ensure the integrity of the exam process. Additionally, universities should develop procedures and protocols that ensure the secure delivery, handling, and storage of the exams.