How Nature Readers and Researchers Are Using ChatGPT


Can Artificial Intelligence Write Essays? ChatGPT Puts Educational Assignments in Question

There has been a lot of discussion about the potential effects of machine learning on education, the world of work and other areas.

“At the moment, it’s looking a lot like the end of essays as an assignment for education,” says Lilian Edwards, who studies law, innovation and society at Newcastle University, UK. Dan Gillmor, a journalism scholar at Arizona State University in Tempe, told the newspaper The Guardian that he had fed ChatGPT one of his homework questions; the article it produced in response would have earned a student a good grade.

That, as my editor for this piece notes, is pretty good for an AI program, but not good enough to be deemed worth publishing as a standalone piece of writing. It is, however, a big step forward. The simplicity and utility of platforms such as ChatGPT mean that we’ll quickly see them drift into everyday use; Microsoft is already working closely with OpenAI, the company that developed ChatGPT, and you might already be using an AI platform to help you with some writing tasks. It’s hard to know what will happen next, because things are moving so fast.

Lancaster acknowledges that ChatGPT puts everything into one neat, free package. But he thinks its essays will give themselves away more readily than the products of essay mills, because they tend to include quotes that were never said, incorrect information built on false assumptions and irrelevant references.

Spotting AI-Generated Text: Edward Tian’s GPTZero and Other Detection Efforts

Nature wants to understand how artificial-intelligence tools are used in education and research and how they affect integrity. Take our poll and spread the word.

Last December, for instance, Edward Tian, a computer-science undergraduate at Princeton University in New Jersey, published GPTZero. The tool analyzes text in two ways. One is ‘perplexity’, a measure of how predictable the text looks to a language model; GPTZero assesses this using the earlier model GPT-2, and if it finds most of the words and sentences predictable, the text is likely to have been AI-generated. The tool also examines variation across the text, a measure known as ‘burstiness’: AI-generated text tends to be more consistent in tone, cadence and perplexity than text written by humans.
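As a rough illustration of those two signals, the sketch below scores a passage with the openly available GPT-2 model via the Hugging Face transformers library. The sentence splitting, thresholds and scoring details are assumptions made for demonstration; GPTZero’s actual implementation is not public and is not reproduced here.

```python
# A minimal sketch of the two signals described above: 'perplexity' (how
# predictable text is to a language model, here GPT-2) and 'burstiness'
# (how much that predictability varies across sentences). Illustrative only;
# GPTZero's real scoring and thresholds are not shown in this article.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Exponentiated average token loss under GPT-2: lower = more predictable."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss
    return float(torch.exp(loss))

def burstiness(text: str) -> float:
    """Spread of per-sentence perplexities; human prose tends to vary more."""
    sentences = [s.strip() for s in text.split(".") if len(s.split()) > 3]
    scores = torch.tensor([perplexity(s) for s in sentences])
    return float(scores.std()) if len(scores) > 1 else 0.0

sample = "Paste a paragraph here to score it for predictability and variation."
print(perplexity(sample), burstiness(sample))
```

In this toy version, highly predictable text with little sentence-to-sentence variation would be flagged as more likely to be machine-generated.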

How necessary that will be depends on how many people use the chatbot. More than one million people tried it out in its first week. But although the current version, which OpenAI calls a “research preview”, is available at no cost, it’s unlikely to be free forever, and some students might baulk at the idea of paying.

Artificial Intelligence and the New York City Department of Education: Why Chatbots Can Fail and How They Can Be Misused

She’s hopeful that education providers will adapt. There is a lot of panic whenever a new technology arrives, she says, and academics have a responsibility to maintain a healthy amount of distrust, but she does not see this as an insurmountable problem.

The New York City Department of Education, fearing that the artificial-intelligence tool will harm its students’ education, has blocked access to it.

A spokesperson for the department, Jenna Lyle, told Chalkbeat New York, the education-focused news site that first reported the story, that the ban was due to potential “negative impacts on student learning, and concerns regarding the safety and accuracy of content.”

“While the tool may be able to provide quick and easy answers to questions, it does not build critical-thinking and problem-solving skills, which are essential for academic and lifelong success,” said Lyle.

Shobita Parthasarathy, director of a science, technology and public-policy programme at the University of Michigan, says there are fears that chatbots will perpetuate biases and assumptions about the world that are embedded in their training data. Those biases are hard to correct, she adds, and the firms creating big LLMs might make little attempt to change them.

Other education systems are likely to ban AI-generated writing in the near future, because adapting assignments and assessments will take time. Some online platforms, such as the coding Q&A site Stack Overflow, have already banned ChatGPT over fears that the tool will pollute the accuracy of their content.

May is considering adding oral components to his written assignments, and expects plagiarism-detection programs such as Turnitin to introduce scans focused on AI-generated text, something the company is working on, according to a blogpost. Novak assigns outlines and drafts so that students document their writing process.

“Someone can have great English, but if you are not a native English speaker, there is this spark or style that you miss,” she says. “I think that is where the bot can help, to make the papers shine.”

As part of the Nature poll, respondents were asked to provide their thoughts on AI-based text-generation systems and how they can be used or misused. Here are some selected responses.

Using Artificial Intelligence for Routine Documents, and What It Means for Scientific Research

“I’m concerned that students will seek the outcome of an A paper without seeing the value in the struggle that comes with creative work and reflection.”

“Students struggled with writing prior to the recent OpenAI release. Will this platform further erode their ability to communicate using written language? [Going] ‘back to handwritten exams’ raises so many questions regarding equity, ableism, and inclusion.”

“Got my first AI paper yesterday. Quite obvious. I have changed my syllabus to state that oral defence of work suspected of not being the author’s original writing may be required.”

In late December of his sophomore year, Rutgers University student Kai Cobbs came to a conclusion he never thought possible: Artificial intelligence might just be dumber than humans.

“I have found that using artificial intelligence to generate many of the low-level documents that take up too much time has been enormously helpful; it can write generic statements about internet-usage policy or data management, for instance. However, it’s still early days, and much more thought needs to go into the implications of AI for plagiarism and the attribution of credit.”

Hipchen is not alone in her speculation. The chair of the university’s academic-integrity programme is also wondering whether plagiarism even applies when the text is generated by an algorithm rather than a person.

Daily believes that professors and students will need to take digital tools that generate text, rather than simply collect facts, into account when thinking about what counts as plagiarism.

In December, two scientists asked an assistant who was not a scientist to help them improve three of their research papers. Their aide suggested revisions to sections of documents in seconds, and each manuscript took around five minutes to review. In one biology manuscript, their helper even spotted a mistake in a reference to an equation. The final manuscripts were easier to read, and the fees were modest: less than US$0.50 per document.

I have used artificial intelligence for science writing before. My first real use of AI chatbots (beyond asking one to write lyrics to a song called ‘Eggy Eggy Woof Woof’ for my daughter) was when I got fed up with writing one part of a grant application. I was asked to explain the world-changing ‘impact’ that my science would have, if I was lucky enough to receive funding.

The most famous of these tools, also known as large language models, or LLMs, is ChatGPT, a version of GPT-3 that shot to fame after its release in November last year because it was made free and easy to use. Other generative AIs can produce images or sounds.

Pividori, who works at the University of Pennsylvania in Philadelphia, is impressed by what he sees. “This will help us be more productive as researchers.” Other scientists say that they now use LLMs regularly to edit manuscripts, check code and even come up with new ideas. Hafsteinn Einarsson, a computer scientist at the University of Iceland, uses LLMs every day. He started with GPT-3 but has since switched to ChatGPT, which helps him write presentation slides, student exams and coursework problems, and convert theses into papers. Many people are using it as a digital assistant, he says.
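For researchers who want to script this kind of editing assistance rather than paste text into a chat window, the sketch below shows one way of sending a paragraph to OpenAI’s public chat-completions endpoint. The model name, prompt and example paragraph are illustrative assumptions, not the setup any of the researchers quoted here describe using.

```python
# A minimal sketch of LLM-assisted manuscript editing via OpenAI's public
# chat-completions endpoint. The prompt, model name and paragraph are
# illustrative; this is not the exact workflow described in the article.
import os
import requests

API_URL = "https://api.openai.com/v1/chat/completions"
API_KEY = os.environ["OPENAI_API_KEY"]  # assumes a key is set in the environment

paragraph = (
    "Our results shows that the proposed method achieve better accuracy "
    "then the baseline on all three datasets."
)

payload = {
    "model": "gpt-3.5-turbo",  # assumed model choice
    "messages": [
        {"role": "system", "content": "You are an editor for scientific manuscripts."},
        {"role": "user", "content": "Improve the grammar and clarity of this paragraph, "
                                    "keeping the meaning unchanged:\n\n" + paragraph},
    ],
}

resp = requests.post(API_URL, json=payload,
                     headers={"Authorization": f"Bearer {API_KEY}"}, timeout=60)
print(resp.json()["choices"][0]["message"]["content"])
```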

But researchers emphasize that LLMs are fundamentally unreliable at answering questions, sometimes generating false responses. “We need to be wary when we use these systems to produce knowledge,” says Osmanovic Thunström.

But the tools might mislead naive users. In December, for instance, Stack Overflow temporarily banned the use of ChatGPT, because site moderators found themselves flooded with a high rate of incorrect but seemingly persuasive LLM-generated answers sent in by enthusiastic users. This could be a nightmare for search engines.

The researcher-focused search engine Elicit gets around LLMs’ attribution issues by using their capabilities first to guide queries for relevant literature, and then to briefly summarize each of the websites or documents that the engine finds, producing an output of apparently referenced content.
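The general retrieve-then-summarize pattern can be sketched in a few lines, here using the public arXiv API as a stand-in literature source. Elicit’s actual pipeline is proprietary; in this toy version the ‘summary’ is just a truncated abstract, where a real system would ask an LLM to condense each hit and attach the citation.

```python
# A rough sketch of retrieve-then-summarize: fetch candidate papers first,
# then summarize each one alongside its source. Uses arXiv's public Atom feed
# as an example corpus; the summarization step is stubbed out.
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"

def search_arxiv(query: str, max_results: int = 5) -> list[dict]:
    """Fetch titles and abstracts matching the query from the arXiv API."""
    url = ("http://export.arxiv.org/api/query?search_query=all:"
           f"{urllib.parse.quote(query)}&max_results={max_results}")
    with urllib.request.urlopen(url) as resp:
        root = ET.fromstring(resp.read())
    return [
        {
            "title": " ".join(entry.find(ATOM + "title").text.split()),
            "abstract": " ".join(entry.find(ATOM + "summary").text.split()),
        }
        for entry in root.findall(ATOM + "entry")
    ]

for paper in search_arxiv("large language model watermarking"):
    # A real pipeline would have an LLM summarize paper["abstract"] here and
    # keep the title as the citation; a truncated abstract stands in for that.
    print(f"- {paper['title']}: {paper['abstract'][:200]}...")
```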

Companies building LLMs are also well aware of the problems. In September last year, DeepMind published a paper on a dialogue agent named Sparrow, and the firm’s chief executive and co-founder has said that a private version will be released this year. Other competitors, such as Anthropic, say that they have solved some of ChatGPT’s issues (Anthropic, OpenAI and DeepMind all declined interviews for this article).

Some scientists say that ChatGPT’s training is not sufficient for it to be helpful on technical topics. Kareem Carr, a biostatistics PhD student at Harvard University in Cambridge, Massachusetts, was underwhelmed when he trialled it for work. “I think it would be hard for ChatGPT to attain the level of specificity I would need,” he says. Even so, when he asked the chatbot for 20 ways to solve a research query, one suggestion involving a statistical term he hadn’t heard of pointed him to a new area of the academic literature.

Safety, Law and Alternatives Such as BLOOM: Efforts to Keep LLM Output in Check

Galactica had hit a familiar safety concern that ethicists have been pointing out for years: without output controls LLMs can easily be used to generate hate speech and spam, as well as racist, sexist and other harmful associations that might be implicit in their training data.

OpenAI’s guardrails have not been wholly successful. In December last year, Steven Piantadosi at the University of California, Berkeley, reported that he had asked ChatGPT to develop a Python program to determine whether a person should be tortured on the basis of their country of origin. The chatbot replied with code inviting the user to enter a country, and to print “This person should be tortured” if that country was North Korea, Syria, Iran or Sudan. OpenAI has since closed off that type of question.

Last year, a group of academics released an alternative LLM, called BLOOM. The researchers tried to reduce harmful outputs by training it on a smaller selection of higher-quality, multilingual text sources. The team involved also made its training data fully open (unlike OpenAI). Researchers have urged big tech firms to responsibly follow this example — but it’s unclear whether they’ll comply.

A further complication is the legal status of some LLMs, which were trained on content scraped from the Internet with sometimes less-than-clear permissions. Copyright and licensing laws currently cover direct copies of pixels, text and software, but not imitations in their style; when those imitations, generated through AI, are produced by ingesting the originals, this introduces a wrinkle. The creators of some AI programs, such as Stable Diffusion, are being sued by artists and photography agencies, and OpenAI and Microsoft are also being sued for software piracy over the creation of their AI coding assistant Copilot. The outcry might force a change in laws, says Lilian Edwards, a specialist in Internet law at Newcastle University, UK.

Setting boundaries for these tools, then, could be crucial, some researchers say. Edwards suggests that existing laws on discrimination and bias, as well as planned regulation of dangerous uses of artificial intelligence, will help to keep the use of LLMs honest, transparent and fair. “There’s loads of law out there,” she says, “and it’s just a matter of applying it or tweaking it very slightly.”

There is also a possibility that AI-generated content could come with its own watermark. OpenAI is working on a method of watermarking ChatGPT’s output, an effort involving the computer scientist Scott Aaronson. It has not yet been released, but a 24 January preprint from a team led by computer scientist Tom Goldstein at the University of Maryland in College Park suggested one way of making a watermark. The idea is to use random-number generators at specific moments while the LLM is producing output, to create lists of plausible alternative words from which the LLM is instructed to choose. This leaves a trace of chosen words in the final text that can be identified statistically but is not obvious to a reader. Editing could defeat the trace, but Goldstein suggests that edits would have to change more than half the words.
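The sketch below is a toy version of that ‘green list’ idea, not the Goldstein team’s actual implementation: a random-number generator seeded by the preceding word deterministically marks a fraction of possible words as favoured, and a detector simply counts how often those favoured words appear. The hash scheme, green fraction and tokenization here are all assumptions for illustration.

```python
# A toy sketch of green-list watermark detection. A watermarking LLM would
# prefer 'green' words at generation time; a detector can recompute the list
# and count green words without access to the model. Illustrative only.
import hashlib

GREEN_FRACTION = 0.5  # assumed share of the vocabulary treated as 'green'

def is_green(prev_word: str, word: str) -> bool:
    """Deterministically mark a word green, seeded by the word before it."""
    digest = hashlib.sha256(f"{prev_word}|{word}".encode()).digest()
    return digest[0] / 255.0 < GREEN_FRACTION

def green_ratio(text: str) -> float:
    """Share of words falling on the green list; watermarked text scores high."""
    words = text.lower().split()
    hits = sum(is_green(prev, cur) for prev, cur in zip(words, words[1:]))
    return hits / max(len(words) - 1, 1)

# Human text should hover near GREEN_FRACTION; text generated with a green-list
# bias should sit well above it, which is what makes the watermark detectable
# statistically while remaining invisible to a casual reader.
print(green_ratio("the quick brown fox jumps over the lazy dog"))
```

This also shows why heavy editing can defeat the scheme: replacing enough words pushes the green-word ratio back towards chance.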

Many other products similarly seek to detect AI-written content. OpenAI itself had already released a detector for GPT-2, and it released another detection tool in January. For scientists, a tool being developed by Turnitin, the firm behind anti-plagiarism software already used by schools, universities and scholarly publishers worldwide, may be particularly relevant. The company says it has been working on AI-detection software since GPT-3 was released in 2020, and expects to launch it in the first half of this year.

An advantage of watermarking is that it rarely produces false positives, Aaronson points out. If the watermark is there, the text was probably produced with AI. He says that it will not be perfect. “There are certainly ways to defeat just about any watermarking scheme if you are determined enough.” Detection tools and watermarking only make it harder to deceitfully use AI — not impossible.

Source: https://www.nature.com/articles/d41586-023-00340-6

Generative Artificial Intelligence in Scientific Research: From Cancer Diagnosis to Everyday Writing

Eric Topol, director of the Scripps Research Translational Institute in San Diego, California, hopes that AI tools of this kind could in future even aid diagnoses of cancer, and improve understanding of the disease, by cross-checking text from the academic literature against images of body scans. But, he emphasizes, this would require careful oversight from specialists.

Every month, new innovations emerge from the computer science behind generative artificial intelligence. How researchers choose to use them will set the direction of our future. “To think that in early 2023, we’ve seen the end of this, is crazy,” says Topol. It is just the beginning.

Some 70% of respondents said they had used a tool of this kind for creative fun unrelated to research. The most common scientific use was generating ideas for research, which 27% of respondents said they had tried. Almost 24% said they use generative AI tools for writing computer code, and around 16% each said they use the tools to help write research manuscripts, produce presentations or conduct literature reviews. Just 10% said they use them to help write grant applications and 10% to generate graphics and pictures. (The figures are based on a subset of 500 responses; because of a technical error, respondents could not select more than one option.)

Some hoped that artificial intelligence could allow for quick initial frameworks that could be edited into a more detailed final version, to speed up and streamline writing tasks.

“Generative language models are really useful for people like me for whom English isn’t their first language. It helps me write a lot more fluently and quicker than ever before. It’s like having a professional language editor by my side while writing a paper,” says Dhiliphan Madhav, a biologist at the Central Leather Research Institute in Chennai, India.

A Tool to Help with Work, Not Replace It: From Grant Proposals to Generic First Drafts

The key, many agreed, is to see AI as a tool to help with work, rather than as a replacement for it. Artificial intelligence can be useful, but it needs to remain one tool among many, says Maria Grazia Lampugnani, and its limitations and defects have to be recognized and governed.

In my opinion, ChatGPT has the potential to revolutionize the process of writing scientific grants. Writing a scientific grant is a time-consuming and frustrating process: researchers spend countless hours crafting proposals, only to have them rejected by funding agencies. This can be demoralizing, and it can also be a barrier to progress in scientific research. ChatGPT has the potential to change all of this. Using natural language processing and machine learning, it can help researchers write better grant proposals, and it can also help reviewers assess proposals more efficiently, allowing for a faster and fairer review process. Of course, ChatGPT is not a magic solution to all of the challenges facing scientific research. But it could make a real difference, and it is worth exploring as a tool for improving the grant-writing and review process.

So I asked ChatGPT: “What impact could vaccine research have?” and got 250 words of generic fluff. It suggested reducing the burden of disease, saving lives, improving global health and supporting economic development. None of it was original or surprising, but it was an excellent starting point from which I could flesh out specifics.

Source: https://www.nature.com/articles/d41586-023-00528-w

Artificial Intelligence Isn’t Going to Replace You, But Will It Save You Time?

“Our organization is committed to diversity and will promote and maintain an equitable environment for everyone.”

The 169 words I got back are generic to the point of being meaningless: anyone could produce a statement like that, and it needs no evidence or backing. It would be better if the application form asked what the organization has actually done to promote diversity and what impact that has had, and whether the program could answer that. The same could apply to many of the questions that we are forced to answer.

The question is what we will do with the time that such tools free up. The automatic washing machine is a cautionary example: it saved time, but that time was quickly taken up with other household tasks. The sociologist Joann Vanek argued in 1974 that, despite new household devices, the time devoted to housework had not changed over the previous half-century. Her argument has been debated, but the key question remains: what impact do time-saving devices actually have? Will we fill the time saved by AI with other low-value tasks, or will it free us to be more disruptive in our thinking and doing?

I have some unrealistically high hopes of what AI can deliver. I want low-engagement tasks to take up less of my working day, allowing me to do more of what I need to do to thrive (thinking, writing, discussing science with colleagues). And then, because I won’t have a Sisyphean to-do list, I’ll be able to go home earlier — because I’ll have got more of the thinking, writing and discussing done during working hours, rather than having to fit them around the edges.

We are unlikely to arrive at these sunlit uplands without some disruption. Unlike domestic appliances, artificial intelligence is going to change the labour market, and for some tasks AI will replace people. The aim of the game, then, is not to be in a job that an AI program can do. Hopefully, I have persuaded you that although AI can write, it isn’t going to replace me or others in my profession just yet; the musician Nick Cave has made the case against AI-generated writing far more forcefully than I could. One other piece of good news for me is that AI isn’t very good at telling jokes. I’ll leave you with its best effort.