Wet Hot Artificial Intelligence is here this summer


Exploring the (Im)possible: Generative AI and Stability AI in San Francisco

The theme of the day was “exploring the (im)possible.” We learned how Google’s AI is being put to use fighting wildfires, forecasting floods, and assessing retinal disease. But the stars of the show were generative AI models: content machines designed to create writing, images, and even computer code.

Interested in learning more about the technology behind the boom? Check out WIRED’s extensive (human-written) coverage of the topic, including how teachers are using it at school, how fact-checkers are addressing potential disinformation, and how it could change customer service forever.

Stability AI, which offers tools for generating images with few restrictions, held a party of its own in San Francisco last week. A new funding announcement valued the company at $1 billion, and the gathering attracted tech celebrities including Google cofounder Sergey Brin.

Song works at Everyprompt, a startup that makes it easier for companies to use text generation. He says that testing generative AI tools that make images, text, or code left him with a sense of wonder at the possibilities, and that it has been a long time since a website or technology felt so helpful or magical. “Using generative AI makes me feel like I’m using magic,” he says.

Unless you have been in outer space for the past few months, you know that people are losing their minds over ChatGPT’s ability to answer questions in strikingly coherent and seemingly insightful ways. Want to understand quantum computing? Need a recipe for whatever’s in the fridge? Can’t be bothered to write that high school essay? ChatGPT has your back.

Christopher Potts, a professor at Stanford University, says the method used to help ChatGPT answer questions, which OpenAI has shown off previously, seems like a significant step forward in helping AI handle language in a way that is more relatable. “It’s extremely impressive,” Potts says of the technique, even though he thinks it may make his own job more complicated: it has him wondering what to do about his courses, which often require short written answers on assignments.

The company did not give a full description of how it developed the new interface, but shared some information in a post: the team fed human-written answers to GPT-3.5 as training data and then used a method called reinforcement learning to push the model to provide better answers to questions.
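OpenAI has published few details, but the reinforcement-learning step described above can be caricatured in a few lines. The sketch below is purely illustrative: the candidate answers, the keyword-based “reward model,” and all parameters are invented, and a simple REINFORCE-style policy-gradient update stands in for the far more elaborate method OpenAI actually uses.

```python
import math
import random

# Toy illustration (hypothetical data and names): a "reward model" stands
# in for human preference judgments, and policy-gradient updates push the
# policy toward answers that the reward model scores highly.

CANDIDATE_ANSWERS = [
    "I don't know.",                           # unhelpful
    "The capital of France is Paris.",         # helpful
    "France's capital is definitely Berlin.",  # fluent but wrong
]

def reward_model(answer: str) -> float:
    """Stand-in for a model trained on human-written reference answers."""
    return 1.0 if "Paris" in answer else 0.0

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def train(steps=2000, lr=0.1, seed=0):
    rng = random.Random(seed)
    logits = [0.0] * len(CANDIDATE_ANSWERS)  # start with a uniform policy
    for _ in range(steps):
        probs = softmax(logits)
        i = rng.choices(range(len(logits)), weights=probs)[0]
        r = reward_model(CANDIDATE_ANSWERS[i])
        # REINFORCE: raise the log-probability of the sampled answer in
        # proportion to its reward (baseline omitted for brevity).
        for j in range(len(logits)):
            grad = (1.0 if j == i else 0.0) - probs[j]
            logits[j] += lr * r * grad
    return softmax(logits)

probs = train()
best = CANDIDATE_ANSWERS[probs.index(max(probs))]
```

After training, the policy concentrates almost all of its probability on the answer the reward model prefers, which is the basic mechanism at work, if nothing like the scale.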

“We are absolutely looking to get these things out into real products and into things that are more prominently featuring the language model rather than under the covers, which is where we’ve been using them to date,” said Dean, adding that it is important to get this right. Pichai said that the company has a lot planned for artificial intelligence, and that it needs to be both bold and responsible.

There are ways to mitigate these problems, of course, and rival tech companies will no doubt be calculating whether launching an AI-powered search engine, even a dangerous one, is worth it just to steal a march on Google. After all, if you’re new on the scene, “reputational damage” isn’t much of an issue.

Never Talk to the Cable Company Again: DoNotPay Puts GPT-3 to Work

OpenAI itself seems to be trying to tamp down expectations. As CEO Sam Altman recently tweeted: “ChatGPT is incredibly limited, but good enough at some things to create a misleading impression of greatness. It is a mistake to rely on it for anything important right now. We have a long way to go on robustness and truthfulness; it is a preview of progress.”

DoNotPay used GPT-3, the language model behind ChatGPT, which OpenAI makes available to programmers as a commercial service. The company trains GPT-3 on examples of successful negotiations and relevant legal information, and its CEO, Joshua Browder, wants to automate a lot more than just talking to the cable company. “If we can save the consumer $5,000 on their medical bill, that’s real value,” Browder says.

A new breed of AI program is built using huge quantities of text gathered from the web, books, and other sources. That training material allows the programs to imitate human writing and answer questions, but because they operate on text using statistical pattern matching rather than an understanding of the world, they are prone to generating fluent untruths.
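To see what “statistical pattern matching” means in miniature, consider a toy bigram model, a drastic simplification with a made-up two-sentence corpus (real models are vastly larger and more sophisticated). It learns only which word tends to follow which, so it can produce fluent-looking text with no notion of whether that text is true.

```python
import random
from collections import defaultdict

# A drastically simplified stand-in for a large language model: a bigram
# model that learns, purely from word-adjacency statistics, which word
# tends to follow which. It has no understanding of the world, only of
# patterns in its (tiny, invented) training text.
CORPUS = (
    "the telescope took the first picture of a planet . "
    "the telescope took the first picture of a galaxy ."
).split()

follows = defaultdict(list)
for prev, nxt in zip(CORPUS, CORPUS[1:]):
    follows[prev].append(nxt)

def generate(start: str, length: int, seed: int = 0) -> str:
    """Chain together statistically plausible next words."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length):
        options = follows.get(words[-1])
        if not options:
            break
        words.append(rng.choice(options))
    return " ".join(words)

sentence = generate("the", 10)
```

Every adjacent word pair in the output is one the model has seen before, so the result reads smoothly, yet nothing stops it from asserting that the telescope photographed something it never did.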

The Next Big Paradigm: AI-Enhanced Search and the Battle Between Google and Bing

Answers to those questions are not at all clear right now. One thing is certain, though: granting open access to these models has kicked off a wet hot AI summer that’s energizing the tech sector, even as the current giants lay off chunks of their workforces. Contrary to Mark Zuckerberg’s belief, the next big paradigm is not the metaverse but a new wave of AI content engines. In the 1980s, we saw a gold rush of products moving tasks from paper to PC applications. In the 1990s, you could make a quick fortune by shifting those desktop products online. A decade later, the movement was to mobile. The big shift of the 2020s will be toward generative AI. This year thousands of startups will emerge with business plans based on tapping into the APIs of those systems. The cost of churning out generic copy will go to zero. By the end of the decade, video-generation systems may well dominate TikTok and other apps. They may be nowhere near as good at truly innovative creations, but the robots will be overwhelmingly dominant.

Google is expected to announce artificial intelligence integrations for its search engine on February 8 at 8:30 am Eastern. The event will stream free online.

Google commanded the online search business for years, while Microsoft’s Bing remained a distant competitor. Microsoft, an OpenAI investor, plans to weave generative AI into its search engine in an effort to differentiate the experience from Google and attract more users. Will this be a big year for Bing? Who knows, but users can expect to soon see more text crafted by AI as they navigate through their search engine of choice.

Microsoft executives said that a limited version of the AI-enhanced Bing would roll out today, with some early testers getting access to a more powerful version so the company can gather feedback. People are being asked to sign up for a broader launch in the coming weeks.

The response also included a disclaimer: “However, this is not a definitive answer and you should always measure the actual items before attempting to transport them.” A “feedback box” at the top of each response will allow users to respond with a thumbs-up or a thumbs-down, helping Microsoft train its algorithms. Google, too, is turning to text generation to enhance its search results.

Google unveiled Bard earlier this week as part of an apparent bid to compete with the viral success of ChatGPT, which has been used to generate essays, song lyrics and responses to questions that one might previously have searched for on Google. ChatGPT’s meteoric rise in popularity has reportedly prompted Google’s management to declare a “code red” situation for its search product.

In the demo, which was posted by Google on Twitter, a user asks Bard: “What new discoveries from the James Webb Space Telescope can I tell my 9-year-old about?” Bard responds with a series of bullet points, including one that reads: “JWST took the very first pictures of a planet outside of our own solar system.”

In fact, according to NASA, the first image of an exoplanet was taken by the European Southern Observatory’s Very Large Telescope in 2004, making Bard’s claim inaccurate.

Bard’s Blunder, and the Race to Detect AI-Written Text

The stock price of Alphabet fell as much as 8% in midday trading on Wednesday after the inaccurate response from Bard was first reported.

In the presentation Wednesday, a Google executive teased plans to use this technology to offer more complex and conversational responses to queries, including providing bullet points ticking off the best times of year to see various constellations and also offering pros and cons for buying an electric vehicle.

With generative AI tools now publicly accessible, you’ll likely encounter more synthetic content while surfing the web. Some instances might be benign, like an auto-generated BuzzFeed quiz about which deep-fried dessert matches your political beliefs. (Are you a Democratic beignet or a Republican zeppole?) Others could be more sinister, like a propaganda campaign from a foreign government.

Edward Tian, a student at Princeton, went viral earlier this year with a similar, experimental tool, called GPTZero, targeted at educators. It gauges the likelihood that a piece of content was generated by ChatGPT based on its “perplexity” (aka randomness) and “burstiness” (aka variance). OpenAI, the company behind ChatGPT, has released a detection tool of its own, designed to scan text that’s over 1,000 characters long and make a judgment call. The company is upfront about the tool’s limitations, including false positives and limited effectiveness. And just as English-language data is often the highest priority for those building AI text generators, most tools for AI-text detection are currently best suited to English speakers.
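The two signals GPTZero relies on can be sketched in a simplified form. In the toy code below, a plain word-frequency model stands in for the large language model a real detector would use to score text, and the example sentences are invented:

```python
import math
from collections import Counter

# Toy illustration of "perplexity" and "burstiness". A real detector
# scores each word with a large language model; here a simple
# word-frequency model over the text itself stands in for it.

def perplexity(words):
    """Exponentiated average negative log-probability of each word."""
    counts = Counter(words)
    total = len(words)
    nll = -sum(math.log(counts[w] / total) for w in words) / total
    return math.exp(nll)

def burstiness(sentences):
    """Variance of per-sentence perplexity: human text tends to vary more."""
    scores = [perplexity(s.split()) for s in sentences]
    mean = sum(scores) / len(scores)
    return sum((x - mean) ** 2 for x in scores) / len(scores)

uniform = ["the cat sat", "the cat sat", "the cat sat"]
varied = ["the cat sat", "suddenly a zeppole exploded dramatically", "ok"]
```

Under this toy scoring, the repetitive text has zero burstiness while the varied text scores much higher, which is the intuition a detector exploits: machine text tends to be uniformly predictable, human text less so.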

Tools for spotting machine-mimicked writing predate ChatGPT by a few years. In 2019, Harvard and the MIT-IBM Watson AI Lab released an experimental tool that scans text and highlights words based on their level of randomness.

Why would this be helpful? An AI text generator is fundamentally a statistical pattern machine: superb at mimicry, weak at throwing curveballs. When you type an email to your boss or send a group text to some friends, your tone and cadence may be predictable, but there is still an underlying erratic quality to our human style of communication.

While these detection tools are helpful for now, Tom Goldstein, a computer science professor at the University of Maryland, sees a future where they become less effective, as natural language processing grows more sophisticated. “These kinds of detectors rely on the fact that there are systematic differences between human text and machine text,” says Goldstein. “But the goal of these companies is to make machine text that is as close as possible to human text.” Does this mean that the hope for synthetic media detection is lost? Absolutely not.

Goldstein worked on a recent paper exploring watermarking methods that could be built into the large language models powering AI text generators. The idea is fascinating, though its reliability isn’t yet proven. Remember, ChatGPT works by predicting the next likely word in a sentence, comparing multiple options along the way. A watermark could designate certain word patterns as off-limits for the AI text generator; then, if scanned text breaks the watermark rules often enough, that indicates a human being probably produced the masterpiece.
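A heavily simplified sketch of that idea, with a made-up six-word vocabulary (real schemes operate over a model’s full token distribution), might look like this: the generator derives a “green list” from the previous word and only picks green words, and a detector counts how often that rule was followed.

```python
import random
import zlib

# Heavily simplified watermarking sketch (invented vocabulary and
# thresholds). The generator pseudo-randomly splits the vocabulary based
# on the previous word and always picks from the "green" half; a detector
# that knows the seeding rule can then measure how often it was obeyed.

VOCAB = ["alpha", "bravo", "charlie", "delta", "echo", "foxtrot"]

def green_list(prev_word: str) -> set:
    """Deterministically split the vocabulary, seeded by the previous word."""
    rng = random.Random(zlib.crc32(prev_word.encode()))
    return set(rng.sample(VOCAB, len(VOCAB) // 2))

def generate_watermarked(length: int, seed: int = 0) -> list:
    """Always pick the next word from the green list (a 'hard' watermark)."""
    rng = random.Random(seed)
    words = ["alpha"]
    for _ in range(length):
        words.append(rng.choice(sorted(green_list(words[-1]))))
    return words

def green_fraction(words: list) -> float:
    """Detector: what fraction of words came from their green list?"""
    hits = sum(w in green_list(prev) for prev, w in zip(words, words[1:]))
    return hits / (len(words) - 1)

marked = generate_watermarked(50)
unmarked = ["alpha"] + random.Random(1).choices(VOCAB, k=50)
```

Watermarked output follows the rule every time, while text written without knowledge of the green lists lands on green words only about half the time, which is the statistical gap a detector looks for.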

Bing, Baidu, Ernie Bot: The New AI Search Wars Reach China’s Biggest Search Company

The all-new Bing is full of information. Demos at the company’s headquarters, and a quick test drive by WIRED’s Aarian Marshall, showed that it can answer tricky questions and generate a vacation itinerary. It’s a long way from Microsoft’s hapless and hopeless Office assistant Clippy, which some readers may recall bothering them every time they created a new document.

Last but by no means least in the new AI search wars is Baidu, China’s biggest search company. It joined the fray by announcing its own chatbot, known in English as “Ernie Bot,” which will be released in March after internal testing.