Why artificial-intelligence tools may not be good for education: views from Thomas Lancaster, an academic-integrity researcher at Imperial College London, and others
Others disagree that ChatGPT is such a game changer, noting that students have long been able to outsource essay writing to human third parties through ‘essay mills’. According to Thomas Lancaster, a computer scientist and academic-integrity researcher at Imperial College London, the chatbot’s extra capabilities don’t change that picture much.
“At the moment, it’s looking a lot like the end of essays as an assignment for education,” says Lilian Edwards, who studies law, innovation and society at Newcastle University, UK. A journalism scholar at Arizona State University said that the tool’s answer to a homework question he gives his students would have earned them a good grade.
The tool stunned users, including academics and some in the tech industry. Its responses are generated by a large language model trained on a massive trove of information online. ChatGPT was built by OpenAI, the company behind DALL-E, which produces a seemingly limitless range of images in response to users’ prompts.
The move comes amid growing concerns that the tool, which produces eerily convincing answers and even essays in response to user prompts, could make it easier for students to cheat on assignments. Some also worry that ChatGPT could be used to spread inaccurate information.
Nature wants to discover the extent of concerns about the impact of artificial-intelligence tools on education and research integrity, and how research institutions are dealing with them. Take our poll here.
“Despite the words ‘artificial intelligence’ being thrown about, really, these systems don’t have intelligence in the way we might think about as humans,” he says. “They’re trained to generate a pattern of words based on patterns of words they’ve seen before.”
Learning to Adapt: How Education Providers Might Respond to OpenAI’s Chatbot
How necessary that will be depends on how many people use the chatbot. More than one million people tried it out in its first week. The current version is free and might stay that way for some time; even if a charge is introduced later, some students might not want to pay for it.
She believes that the education providers will adapt. There is a panic around new technology, she says. “It’s the responsibility of academics to have a healthy amount of distrust — but I don’t feel like this is an insurmountable challenge.”
When WIRED asked me to cover this week’s newsletter, my first instinct was to ask ChatGPT, OpenAI’s viral chatbot, to see what it came up with. All week I have been playing with it, asking it to draft emails, recipes, and LinkedIn posts. Productivity is down, but limerick output is up 1,000 percent.
I asked the bot to write a column in the style of Steven Levy’s, but the results fell flat. It offered generic commentary about the use of artificial intelligence, but it didn’t capture Steven’s voice or say anything new. As I wrote last week, it was fluent, but not entirely convincing. It made me wonder whether I would have gotten away with it, and what systems could catch people using AI for things they really shouldn’t, whether that’s work emails or college essays.
I spoke to a professor of technology and regulation at the Oxford Internet Institute about how to build transparency and accountability into these systems, and asked what that might look like.
What can be done about AI-written text? Detection tools, watermarks, and rethinking how we assess students
This is going to be a cat-and-mouse game. The tech is not as good as I would like it to be, but it may be good enough to convince someone who isn’t an expert in that area. And if it keeps getting better, I think it could trick me too. We might need technical tools to make sure that what we’re seeing is created by a human being, the same way we have tools for detecting deepfakes and edited photos.
That is harder for text than for deepfaked imagery, because there are fewer artifacts and telltale signs. Perhaps a reliable solution will need to come from the company that is generating the text in the first place.
You do need to have buy-in from whoever is creating that tool. If I am a company offering essay-writing services to students, I might not be the sort of company that will submit to that. And there might be a situation where even if you do put watermarks on, they’re removable; a lot of groups are very tech-savvy. There is already a tool that claims to detect whether text was generated with OpenAI’s models.
A couple of things. First, whoever is making those tools could put watermarks in place. And maybe the EU’s proposed AI Act can help, because it deals with transparency around bots, saying you should always be made aware when something isn’t real. But companies might not want to do that, and the watermarks might be removable. The second option is independent tools that examine AI output. And in education, we have to be more creative about how we assess students and how we write papers: what kinds of questions can we ask that are less easily faked? It has to be a combination of technical tools and human oversight that helps curb the disruption.
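To make the watermark idea concrete, here is a minimal, hypothetical sketch in Python, not based on OpenAI’s or any vendor’s actual scheme, of the kind of statistical mark researchers have proposed: the generator quietly favours words from a pseudo-random “green list” keyed to the preceding word, and an independent checker then looks for an improbably high share of green words.

```python
# Toy sketch of a statistical text watermark. This is NOT how OpenAI or any
# real product marks output; it only illustrates why such a mark could be
# verified by an independent tool, yet washed out by rewording.
import hashlib

def is_green(prev_word: str, word: str) -> bool:
    """Deterministically assign roughly half of all words to a 'green list'
    that depends on the previous word."""
    digest = hashlib.sha256(f"{prev_word}|{word}".encode()).digest()
    return digest[0] % 2 == 0

def green_fraction(text: str) -> float:
    """Fraction of words that fall on the green list given their predecessor."""
    words = text.lower().split()
    if len(words) < 2:
        return 0.0
    hits = sum(is_green(prev, word) for prev, word in zip(words, words[1:]))
    return hits / (len(words) - 1)

# A watermarking generator would bias its word choices toward green words, so
# marked text scores well above the ~0.5 expected by chance; a checker flags
# text whose green fraction is improbably high.
if __name__ == "__main__":
    print(round(green_fraction("the quick brown fox jumps over the lazy dog"), 2))
```

As the interview notes, such a mark only works if the company generating the text builds it in, and paraphrasing the output can remove it.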
Since the developer, OpenAI, released the text-based system to the public last month, some educators have been sounding the alarm about the potential for such AI systems to transform academia, for better and worse.
He told NPR in an interview that the tool’s most obvious use is as a way to cheat, by submitting writing that the student did not produce.
You can paste in entire academic papers and ask it to summarize them. He said you can also ask it to find an error in your code, correct it, and explain why you got it wrong. “It’s just the fact that we’re not quite getting our heads around, that’s stunning,” he said.
Lying with confidence: a data scientist in Switzerland tests the language model with a made-up ‘cycloidal inverted electromagnon’
The bot also lies with confidence. Despite its authoritative tone, there have been instances in which it won’t admit that it doesn’t have the answer.
That’s what Teresa Kubacka, a data scientist based in Zurich, Switzerland, found when she experimented with the language model. Kubacka, who studied physics for her Ph.D., tested the tool by asking it about a made-up physical phenomenon.
She deliberately asked it about something that doesn’t exist, she said, to check whether the tool actually has a notion of what exists and what doesn’t.
ChatGPT produced an answer so specific and plausible sounding, backed with citations, she said, that she had to investigate whether the fake phenomenon, “a cycloidal inverted electromagnon,” was actually real.
The source material was also bogus, she said: the citations included the names of well-known physicists, but the titles of the publications they supposedly authored were non-existent.
“This is where it becomes kind of dangerous,” Kubacka said, adding that when you cannot trust the references, trust in citing science is eroded.
“If you ask a question, you’ll get a very impressive-sounding answer that’s just dead wrong,” said Oren Etzioni, the founding CEO of the Allen Institute for Artificial Intelligence. “And, of course, that’s a problem if you don’t carefully verify or corroborate its facts.”
These failures are common to all of the current artificial-intelligence language systems, which are known as large language models, or LLMs. Because they are trained on data scraped from the internet, they often repeat and amplify prejudices such as sexism and racism in their answers. They also make up information, from historical dates to scientific laws, and present it as fact.
What ChatGPT Can and Cannot Do: Views from OpenAI’s CEO and Oren Etzioni
OpenAI’s CEO, Sam Altman, has said it would be a mistake to depend on the tool for anything “important” in its current iteration, calling it a preview of progress.
Another model, unveiled by Meta, fared worse and was shut down. The company withdrew its demo of Galactica, a tool designed to help scientists, just three days after it encouraged the public to test it out, following criticism that it spewed biased and nonsensical text.
ChatGPT doesn’t produce good science, according to Etzioni. For all its flaws, though, he sees its public debut as a positive: he views this as a moment for peer review.
“ChatGPT is just a few days old, I like to say,” said Etzioni, who remains at the AI institute as a board member and adviser. It is giving us the chance, he said, to understand what it can and cannot do, and to begin the discussion of what we should do about it.
The alternative, which he describes as “security by obscurity,” won’t help improve fallible AI, he said. “What if we hide the problems? Will that be a recipe for solving them? Typically — not in the world of software — that has not worked out.”
Source: https://www.npr.org/2022/12/19/1143912956/chatgpt-ai-chatbot-homework-academia
CNN’s Peter Bergen: What Happened When Artificial Intelligence Wrote His National-Security Op-Ed
Editor’s Note: Peter Bergen is CNN’s national security analyst, a vice president at New America and a professor of practice at Arizona State University. Bergen is the author of “The Cost of Chaos: The Trump Administration and the World.” The views expressed in this commentary are his own. Click here to see more opinion on CNN.
Yet my writing career could still go the way of the grocery checkout jobs eliminated by automation. AI tools will keep getting smarter, and distinguishing an AI-written op-ed from a “real” human op-ed will get harder over time, just as AI-generated college papers will become harder to distinguish from those written by actual students.
As President Joe Biden prepares to commemorate 100 days in office, he can point out some important achievements in national security. The United States has made dramatic progress in winding down its two longest wars, in Afghanistan and Iraq, and the U.S. is more secure than it was four years ago, when Donald Trump took office.
The Biden Administration and the War on Terrorism: Reengaging with Iran and Other Policy Proposals for the U.S.
The war on terror is not over. The United States and its allies are still at risk from both Al-Qaeda and the Islamic State.
There needs to be more done to protect the U.S. from terrorist threats. The Biden Administration should expand its efforts to counter violent extremism, not just in the Middle East, but in the United States as well.
The U.S. should also strengthen its partnerships with countries in the region, such as Jordan and Egypt, that are key to regional stability and counterterrorism efforts. Building up these countries’ capacity to counter terrorist threats will pay dividends down the road.
Iran is the most powerful state in the region, so the Biden Administration should make a concerted effort to engage with it. This could help to reduce the risk of future conflicts, while also opening the door to greater cooperation in the fight against terrorism and other transnational threats.
The op-ed made clear that the “war on terror” wasn’t over, and it included the argument about reengaging with Iran along with other concrete policy proposals for the Biden administration.
The op-ed made factual errors, including treating the Biden administration’s first 100 days in office as current, apparently because the model’s training data ends in 2021.
The op-ed also asserted that the Iraq War was one of the two longest American wars, which is debatable, since the Vietnam War lasted more than a decade, while the Iraq War ran from 2003 until the withdrawal of all US troops in 2011 (thousands of US troops went back into Iraq in the summer of 2014 to fight IS, and thousands remain there).
Source: https://www.cnn.com/2022/12/26/opinions/writing-artificial-intelligence-ai-chatgpt-professor-bergen/index.html
An AI Essay on Women in the French Revolution and the Future of the University: Student Papers and College Education in the 21st Century
I am a college professor, and I wrestle with some interesting questions about the role student papers play in an American college education.
Women were active participants in the storming of the Bastille and the overthrow of the monarchy. The women formed political clubs and wrote pamphlets to advocate for their rights in the time when there were political debates. They also participated in the revolutionary festivals and marches, as well as the violence of the Reign of Terror.
The essay above was not written by a student, and it hints at the future that college students will likely inhabit. So what does it mean to be educated at a liberal arts college? And why go to all the bother and expense?
I head into 2023 with a sobering realization. My career as a CNN op-ed writer began almost a decade ago, and it might not be over just yet, since artificial intelligence still makes factual errors, just as humans do.
How School Districts Are Responding to ChatGPT, and How Artificial Intelligence Might Be Used to Detect Plagiarism
Peter Feng, the public information officer for the South San Francisco Unified School District, said the district is aware of the potential for its students to use ChatGPT. However, it has not yet imposed an outright ban. Meanwhile, a spokesperson for the School District of Philadelphia said it has “no knowledge of students using the ChatGPT nor have we received any complaints from principals or teachers.”
According to a spokesperson, the ban was prompted by concerns about the safety and accuracy of the tool’s content and its potential negative impact on student learning.
“While the tool may be able to provide quick and easy answers to questions, it does not build critical-thinking and problem-solving skills, which are essential for academic and lifelong success,” said Lyle.
It is quite likely that other education systems will ban the use of the tool in the near future. Already, some online platforms, like the coding Q&A site Stack Overflow, have banned ChatGPT over fears that the tool will pollute their sites with inaccurate answers.
ChatGPT went viral just days after its launch. OpenAI co-founder Sam Altman, a prominent Silicon Valley investor, said on Twitter in early December that ChatGPT had topped one million users.
Darren Hick, assistant professor of philosophy at Furman University, previously told CNN that it will be harder to prove a student has misused ChatGPT than to prove other forms of cheating.
He said that with more traditional forms of plagiarism, he was able to find evidence that could be used in a board hearing. “In this case, there’s nothing out there that I can point to and say, ‘Here’s the material they took.’”
“It’s really a new form of an old problem where students would pay somebody or get somebody to write their paper for them – say an essay farm or a friend that has taken a course before,” Hick added. “This is like that only it’s instantaneous and free.”
Some companies, such as Turnitin, whose detection software thousands of school districts use to check student work against the internet for signs of plagiarism, are now looking into how their tools could detect AI-generated text in student submissions.
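Turnitin has not published how its AI detection would work, so the following is only a toy illustration, in Python, of one crude signal sometimes discussed in this context: “burstiness”, or how much sentence length varies, which tends to be lower in machine-generated prose than in human writing. A real detector would rely on trained models rather than a single hand-written statistic, and the threshold here is arbitrary.

```python
# Toy illustration only: flag text whose sentence lengths are suspiciously
# uniform. Real AI-text detectors use trained models, not this heuristic.
import re
import statistics

def burstiness(text: str) -> float:
    """Standard deviation of sentence lengths, measured in words."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths)

def looks_machine_like(text: str, threshold: float = 4.0) -> bool:
    """Heuristic flag: low variation in sentence length (arbitrary threshold)."""
    return burstiness(text) < threshold

if __name__ == "__main__":
    sample = ("The essay covers the topic well. It lists three causes. "
              "It then explains each cause. It ends with a short summary.")
    print(burstiness(sample), looks_machine_like(sample))
```

A flag like this could only ever prompt a closer look; as Hick notes, proving misuse is the hard part.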
Teachers will also have to rethink their assignments so that they aren’t easy to complete with the tool. “The bigger issue,” Hick added, “is going to be administrations who have to figure out how they’re going to adjudicate these kinds of cases.”
Exploring the (Im)possible: Google, Generative Artificial Intelligence, and What’s Happening After the AI Springtime
The theme of the day was “exploring the (im)possible.” We learned how Google’s AI was being put to use fighting wildfires, forecasting floods, and assessing retinal disease. But the stars of the show were what they called generative artificial intelligence models. These are the content machines, schooled on massive training sets of data, designed to churn out writing, images, and even computer code that once only humans could hope to produce.
Something weird is happening in the world of AI. The field burst out of a sleepy winter in the early part of this century through the innovation of deep learning, led by three academics. This approach to AI transformed the field and made many of our applications more useful, powering language translation, search, Uber routing, and just about everything that has “smart” as part of its name. We’ve spent a dozen years in this AI springtime. But the past year or so has brought a major aftershock to that earthquake, filled with mind-bending generative models.
Answers to those questions aren’t clear right now. But one thing is: granting open access to these models has kicked off a wet hot AI summer that’s energizing the tech sector, even as the current giants are laying off chunks of their workforces. Forget the metaverse: the next big paradigm is here now, and it is the new wave of artificial-intelligence content engines. In the 1980s, a rush of products moved tasks from paper to PC applications. In the 1990s, you could make a quick fortune by shifting those desktop products online. A decade later, the movement was to mobile. In the 2020s, the big shift is toward building with generative AI. This year thousands of startups will emerge with business plans based on tapping into the APIs of those systems. The cost of churning out generic copy will go to zero. By the end of the decade, AI video-generation systems may well dominate TikTok and other apps. They won’t be as good as the work of talented human creators, but the robots are going to dominate.
Why Science Needs Transparency About Methods and Acknowledgements When AI Writing Tools (LLMs) Are Used
Springer Nature, Nature’s publisher, is among those developing technologies to spot LLM-generated output. But LLMs will improve quickly, and there are hopes that the creators of LLMs will be able to watermark their tools’ outputs in some way, although even this might not be technically foolproof.
First, no LLM tool will be accepted as a credited author on a research paper. That is because any attribution of authorship carries with it accountability for the work, and AI tools cannot take such responsibility.
Second, researchers using LLM tools should document this use in the methods or acknowledgements sections. If a paper does not include these sections, the introduction or another appropriate section can be used to document the use of the LLM.
But in future, AI researchers might be able to get around these problems — there are already some experiments linking chatbots to source-citing tools, for instance, and others training the chatbots on specialized scientific texts.
From its earliest times, science has operated by being open and transparent about methods and evidence, regardless of which technology has been in vogue. Researchers should ask themselves how the transparency and trustworthiness on which the process of generating knowledge depends can be maintained if they or their colleagues use software that works in a fundamentally opaque manner.
That is why Nature is setting out these principles: ultimately, research must have transparency in methods, and integrity and truth from authors. This is, after all, the foundation that science relies on to advance.
The Nature Poll: How AI-Based Text-Generation Systems Can Be Used or Misused in the Writing Process
May is considering adding oral components to his written assignments, and he expects plagiarism-detection programs such as Turnitin to add automated checks for AI-generated text, something the company is working on, according to a blogpost. Novak assigns intermediate steps to document the writing process.
Someone who has good English but is not a native speaker can miss out on that spark or style. With the help of the chatbot, their paper can shine.
As part of the Nature poll, respondents were asked to provide their thoughts on AI-based text-generation systems and how they can be used or misused. Here are some selected responses.
I am concerned that students will choose the result of an ‘A’ paper over the value of the creative work and reflection that comes with writing it themselves.
“Got my first AI paper yesterday. It is quite obvious. I’ve adjusted my syllabus to note that oral defence of all work submitted that is suspected of not being original work of the author may be required.”
A Rutgers University student came to a realization that he never thought possible: Artificial intelligence might be dumber than humans.
Using Digital Tools for Text Generation and Plagiarism: Daily’s View for Students and Professors
Daily believes that students and professors should understand that digital tools that generate text will need to be treated in the same way as any other source that can be plagiarized.