Meet Bard, Google's Answer to ChatGPT


The Bard AI Chatbot: How Google and Its Rivals Are Racing to Bring Text Generation Technology to Market

Google couldn’t let Microsoft get away with launching an AI chatbot that has the potential to challenge the company’s core business: search. That’s why it rushed to announce its own AI chatbot, Bard, though we still don’t know much about its capabilities.

For example, Google CEO Sundar Pichai showed that the query “Is it easier to learn the piano or the guitar?” would be met with “Some say the piano is easier to learn, as the finger and hand movements are more natural … Others say that it’s easier to learn chords on the guitar.” Pichai also said that Google plans to make the underlying technology available to developers through an API, as OpenAI is doing with ChatGPT, but did not offer a timeline.

ChatGPT is built on top of GPT, an AI model known as a transformer, first invented at Google, that takes a string of text and predicts what comes next. OpenAI has gained attention for showing how feeding huge amounts of data into transformer models and ramping up the computing power running them can produce systems able to generate language or imagery. To improve on GPT's raw output, humans provide feedback to another model that fine-tunes the responses.
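To make the "takes a string of text and predicts what comes next" idea concrete, here is a toy sketch: a bigram frequency model that scores possible next words given the previous one. This is illustration only; real transformers learn these distributions over entire contexts with billions of parameters, and nothing below comes from Google's or OpenAI's actual code.

```python
from collections import Counter, defaultdict

# Toy "predict what comes next": count, for each word, which words follow it
# in a tiny corpus, then pick the most frequent follower. A transformer does
# the same job over whole contexts, with learned weights instead of raw counts.
corpus = "the cat sat on the mat the cat ate the fish".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict_next(token):
    """Return the most frequent next word and its estimated probability."""
    followers = counts[token]
    word, n = followers.most_common(1)[0]
    return word, n / sum(followers.values())

print(predict_next("the"))  # ('cat', 0.5): "cat" follows "the" in 2 of its 4 occurrences
```

The human-feedback step the paragraph mentions (reinforcement learning from human feedback) then reranks and fine-tunes such predictions toward answers people actually prefer.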

Google has, by its own admission, chosen to proceed cautiously when it comes to adding the technology behind LaMDA to products. AI models trained on text from the web are prone to exhibiting racial and gender biases.

Those limitations were highlighted by Google researchers in a 2020 draft research paper arguing for caution with text generation technology that irked some executives and led to the company firing two prominent ethical AI researchers, Timnit Gebru and Margaret Mitchell.

Other Google researchers who worked on the technology behind LaMDA became frustrated by Google’s hesitancy and left the company to build startups harnessing the same technology. The advent of ChatGPT seems to have spurred Google to speed up the integration of text generation capabilities into its products.

Anthropic is using a methodology it calls Constitutional AI to develop its chatbot. As described in the company's research paper, the language model is trained with a set of around 10 “natural language instructions or principles” that it uses to revise its responses automatically. The goal of the system, according to Anthropic, is to “train better and more harmless AI assistants” without incorporating human feedback.
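As a rough illustration of that critique-and-revise loop, here is a minimal sketch. The principles, prompt wording, and `call_model` stub are hypothetical stand-ins, not Anthropic's actual constitution or model interface.

```python
# Hypothetical sketch of a Constitutional AI-style revision loop: a draft
# answer is critiqued against each written principle and then rewritten,
# with no human rater in the loop. call_model is a stub standing in for a
# real language model; the principles below are illustrative only.

PRINCIPLES = [
    "Choose the response that is most harmless and helpful.",
    "Choose the response that avoids stereotyping or bias.",
]

def call_model(prompt: str) -> str:
    # Stub: a real system would send the prompt to a language model here.
    # It echoes the last line of the prompt, wrapped, so the loop is traceable.
    return "REVISED(" + prompt.splitlines()[-1] + ")"

def constitutional_revision(draft: str) -> str:
    response = draft
    for principle in PRINCIPLES:
        # Step 1: ask the model to critique its own response against a principle.
        critique = call_model(f"Critique this response against: {principle}\n{response}")
        # Step 2: ask the model to rewrite the response using that critique.
        response = call_model(f"Rewrite the response using the critique: {critique}\n{response}")
    return response

print(constitutional_revision("Draft answer"))
```

The key design point is that the feedback signal comes from the written principles themselves, applied by the model, rather than from human raters scoring each output.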

“The big idea is that in addition to talking to our friends and family every day, we’re going to talk to AI every day,” says Snap CEO Evan Spiegel. “This is something we’re well positioned to do as a messaging service.”

That distinction could help Snap avoid some headaches. As Bing’s implementation of OpenAI’s tech has shown, the large language models (LLMs) underpinning these chatbots can confidently give wrong answers, known as hallucinations, which are especially problematic in the context of search. They can also turn emotionally manipulative if toyed with enough. It’s a dynamic that has, at least so far, kept larger players in the space, namely Google and Meta, from releasing competing products to the public.

Snap is not in the position it once was: it still has a large and young user base, but its business is struggling. My AI will likely boost the company’s paid subscriber numbers in the short term, and eventually it could open up new ways for the company to make money, though Spiegel is cagey about his plans.

We’re still at the beginning of what conversational AI can do, and with major players like Microsoft and Google getting on board, we’re bound to see some progress. The evolution of these tools will be interesting to see, as well as who manages to make their way into our daily lives.

Microsoft made the new version of Bing available to a small group of testers, who have been able to ask questions such as “Can you suggest places to visit in Paris?” or “What is the best apple pie recipe?” and receive annotated replies describing various tourist destinations or outlining a recipe’s ingredients and steps.

But Microsoft may have made Bing a bit too flexible. A now-disabled prompt that triggered Bing to reveal its internal nickname, along with some of the parameters its developers set for its behavior, was just one of the exploits users found in the system.

As for Edge, Microsoft plans on adding AI enhancements that let you summarize the webpage or document you’re reading online, as well as generate text for social media posts, emails, and more.

There will be more from Meta in the future. The company has established a dedicated team that will eventually create “AI personas” designed to help people, as well as text- and image-based tools, according to CEO Mark Zuckerberg.

While Meta says it trained the bot, called Galactica, on “over 48 million papers, textbooks, reference material, compounds, proteins and other sources of scientific knowledge,” it produced disappointing results when the company made it available in a public beta last November. The scientific community fiercely criticized the tool, with one scientist calling it “dangerous” due to its incorrect or biased responses. Meta took the chatbot offline after just a few days.

Alibaba has found its own way into the artificial intelligence trend. The e-retailer told CNBC that it is testing a ChatGPT rival internally. Alibaba has reportedly been experimenting with generative AI since 2017, but the company hasn’t given any sense of when it might announce the tool it’s working on or what it might be capable of.

You.com, a company built by two former Salesforce employees, bills itself as the “search engine you control.” At first glance, it may seem like your typical search engine, but it comes with an AI-powered “chat” tool that works much like the one Microsoft’s piloting on Bing.

You.com has added built-in artificial intelligence image generators, including Stable Diffusion 1.5, Stable Diffusion 2.1, and Open Journey, to help you generate images based on a written description. The engine also breaks down your search results based on relevant responses on sites like Reddit, TripAdvisor, Wikipedia, and YouTube while also providing standard results from the web.

The other Chinese companies developing AI chatbots are likely to be subject to the same rules.

Baidu’s tool, known as “Ernie” (Enhanced Representation through kNowledge IntEgration), came out in 2019. In late 2021, Baidu said it trained the model on “massive unstructured data and a gigantic knowledge graph” and that it “excels at both natural language understanding (NLU) and generation (NLG).”

Meanwhile, Chinese gaming firm NetEase has announced that its education subsidiary, Youdao, plans to incorporate AI-powered tools into some of its educational products, according to a report from CNBC. The company also seems interested in employing the technology in one of its upcoming games, though it’s not yet clear what exactly the tool will do.

NetEase is reportedly considering bringing such a tool to its mobile game Justice Online Mobile. As noted by games industry analyst Daniel Ahmad, the tool will “allow players to chat with NPCs and have them react in unique ways that impact the game” through text or voice inputs. However, there’s only one demo of the tool so far, so we don’t know how (or if) it will make its way into the final version of the game.

Then, there’s Replika, an AI chatbot that functions as a sort of “companion” you can talk to via text-based chats and even video calls. The tool combines the company’s own version of the GPT-3 model with scripted dialogue content to build memories and generate responses tailored to your conversation style. The company behind the tool recently removed its erotic roleplay features.