OpenAI Meets Meta's Chatbots: A New Era of AI-Enabled Social Bots on the Consumer Internet
Today, let’s consider the implications of a truly profound week in the development of artificial intelligence and discuss whether we may be witnessing the rise of a new era in the consumer internet.
On Monday, OpenAI announced the latest updates to ChatGPT. One feature lets you interact with its large language model by voice; another lets you ask questions about images you have uploaded. The result is that a tool that was already useful for lots of things suddenly became useful for much more. You can chat with it hands-free, snap a picture of a tree, and ask the app what you're looking at, which makes for a far more natural experience than typing into a mobile app.
Adding a voice to ChatGPT also gives it a hint of personality. I don't want to overstate the case here; the app typically generates dry, sterile text unadorned by any hint of style. But something changes when you begin speaking with the app in one of its five native voices, which are much livelier and more dynamic than what we are used to from Alexa or Google Assistant. The voices are earnest, upbeat, and, because they are powered by an LLM, tireless.
You can imagine the next steps here. A bot that gets to know your quirks can offer you coaching, or therapy, or both; it can entertain you in whichever way you prefer. A synthetic companion not unlike the real people you encounter during the day, only smarter, more patient, more empathetic, more available.
People who have a lot of friends and family around them tend to look down on things like this in comparison to the human experience. I think it might feel different for those who are lonely, isolated, or on the margins. On an early episode of Hard Fork, a trans teenager sent in a voice memo to tell us about using ChatGPT to get daily affirmations about identity issues. The power of giving what were then text messages a warm and kindly voice should not be underestimated.
Source: The synthetic social network is coming
How will Meta use AI in social feeds? Two new avenues for thinking about artificial intelligence in social feeds
OpenAI often presents its products as simple utilities for getting things done. Meta, on the other hand, is in the entertainment business. The company revealed on Wednesday that it has found uses for generative AI and celebrity voices in its own products.
All of this feels like an intermediate step to me. People don't want to talk to an AI character voiced by MrBeast because they want him to be their big brother; they want to talk to MrBeast himself. I haven't been able to chat with any of these character bots yet, but I struggle to understand how they will have more than passing novelty value.
My guess is that the technology is still too new for celebrities to be willing to entrust their entire personas to Meta for safekeeping. Better to give people a taste of what it's like to talk to AI Snoop Dogg and iron out any kinks before delivering the man himself. When that happens, the potential is very real. How much time would Taylor Swift fans spend talking to her digitally this year? How much would they pay for the privilege?
While we wait to learn the answers, a new chapter of social networking may be beginning. Machine learning has long been used in consumer apps to create more engaging and personalized feeds for billions of users, though until now it has mostly been about ranking.
This week we got at least two new ways to think about AI in social feeds. One is the new AI-generated stickers coming to the company's messaging apps. It's unclear to me how much time people want to spend creating images while texting, but the demonstrations seem nice enough.
More significant, I think, is that Meta plans to place its AI characters on every major surface of its products. You can message them directly in the same inbox where you message your friends and family. Eventually, they will be making Reels.
When that happens, feeds that were once defined by connections between humans will have become a partially synthetic social network.
Will it feel more personalized, engaging, and entertaining? Or will it feel strange, hollow, and junky? There will be a lot of opinions on this. But either way, I think, something new is coming into focus.
My daily email chore: Google Assistant on a Pixel 7 Pro wordlessly escorted me to my inbox
Every morning when I wake up, I see a one-word reminder on my calendar: emails. Thus begins my formal workday, as I speed-delete nearly every email that landed in my inbox overnight. Most are useless and clog up the space between the legitimate emails I need to read and respond to.
Imagine being able to tell an assistant to show you your most important emails and take care of the rest. I asked Google Assistant on a Pixel 7 Pro to do this, and it just wordlessly escorted me to my inbox so I could deal with it myself. Thanks a lot.
Google, like almost every other tech company in the world, is all about AI right now. The company showcased a lot of new ideas for generative AI at I/O earlier this year — tools to help you compose a new message in Gmail, write a job description in Google Docs, and build a template for your dog walking business in Google Sheets.
The results are hit or miss. Sometimes they're useful: I asked it to expand a bullet-point list of notes into care instructions for my houseplants, and it added helpful context about how often to water them. But it often gives obvious answers, like the weekly meal plan I asked Google Sheets for. Prompted for healthy meal and snack ideas, it offered a few good suggestions but filled nearly every cell with some variation of fruit, vegetables, nuts, or yogurt.
Source: The Pixel 8 is Google’s best opportunity to bring its AI ideas together under one roof
Could the Pixel be the phone that shows us what AI can do? Getting acquainted with Google's latest features and the Tensor G3 in the Pixel 8
Realistically, a full Bard-on-your-phone assistant is probably still cooking. The features announced at I/O are mostly still labeled experimental, and they remain uneven in quality. There was a report earlier this year that Google was shaking up the Assistant team and aiming to make the product more Bard-like, and my guess is that we'll see plenty of flavors of this future in next week's announcement.
Apple won't even say the words "artificial intelligence," and Microsoft doesn't really make phones. If AI is truly going to take the pain out of our daily chores, as every tech company wants us to believe, then the Pixel ought to be the device that shows us how that works. I'm not all that interested in having generative AI write emails for me when I'm sitting at my computer, but I can think of a bunch of things I'd like an AI assistant to be able to do for me on my phone.
Realistically, these kinds of features are still a ways off. Processing power is one of the main barriers to letting artificial intelligence run wild on your phone. AI needs a lot of it, and Google, like other companies, offloads the heavy lifting to the cloud when you ask Bard to summarize a document or write up a meal plan. A fully on-device assistant would simply be too slow today.
Google's custom Tensor chips are supposedly designed to do more of this processing locally, but is its third-generation chipset up to the task? Given the overheating complaints about the Pixel 7 phones, it seems unlikely that the Tensor G3 can handle much complicated processing on-device. Either way, the Pixel 8 should give us a look at what artificial intelligence can do for us.