Memory is needed to remember who you are and what you like.


Is OpenAI Really Just a Hypervigilant Virtual Assistant? What do we need to know about GPTs, and why should we care about them?

Fedus and Jang say that ChatGPT’s memory is nowhere near the capacity of the human brain. Fedus says that you’re limited to “a few thousand tokens.” If only.

But that approach, of course, makes a lot of people uncomfortable. Many users are wary of having their questions and missives hoovered up by OpenAI and fed back into the system as training data to help personalize the bot even further.

Each custom GPT has its own memory. OpenAI uses the Books GPT as an example, where it can automatically remember which books you’ve already read and which genres you like best. For that matter, there are lots of places in the GPT Store where you can imagine memory being useful. Once it knows what you know, the Tutor Me GPT could offer a much better long-term course load; Kayak could go straight to your favorite airlines and hotels; GymStreak could track your progress over time.

“We think there are a lot of useful cases for that example, but for now we have trained the model to steer away from proactively remembering that information,” Jang says.

“You can think of these as just a number of tokens that are getting prepended to your conversations,” says Liam Fedus, an OpenAI research scientist. “The bot has some intelligence, and behind the scenes it’s looking at the memories and saying, ‘These look like they’re related; let me merge them.’ That goes on your budget.”
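Fedus's description can be pictured as a simple loop: stored memories become extra tokens stitched onto the front of each conversation, related ones get merged, and the whole thing has to fit in a budget. Here's a minimal sketch of that idea; every name, the merge rule, and the characters-per-token estimate are illustrative guesses, not OpenAI's actual implementation.

```python
TOKEN_BUDGET = 2000  # Fedus's "a few thousand tokens" limit (hypothetical number)

def estimate_tokens(text: str) -> int:
    # Rough heuristic: roughly 4 characters per token for English text.
    return max(1, len(text) // 4)

def merge_related(memories: list[str]) -> list[str]:
    # Crude stand-in for "these look like they're related; let me merge them":
    # combine memories that begin with the same word.
    groups: dict[str, list[str]] = {}
    for memory in memories:
        key = memory.split()[0].lower()
        groups.setdefault(key, []).append(memory)
    return ["; ".join(group) for group in groups.values()]

def build_prompt(memories: list[str], user_message: str) -> str:
    # Merge related memories, then drop the oldest ones until they fit the budget,
    # and prepend what's left to the user's message.
    memories = merge_related(memories)
    while memories and sum(estimate_tokens(m) for m in memories) > TOKEN_BUDGET:
        memories.pop(0)
    preamble = "\n".join(f"[memory] {m}" for m in memories)
    return f"{preamble}\n[user] {user_message}"

print(build_prompt(["Likes sci-fi novels", "Likes hiking"], "Recommend a book"))
```

The point isn't the details, it's the shape: personalization here is just context management, which is why the budget Fedus mentions matters at all.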

Is this the hypervigilant virtual assistant that tech consumers have been promised for the past decade, or just another data-capture scheme that uses your likes, preferences, and personal data to better serve a tech company than its users? Possibly both, even though OpenAI might not say it that way. “I think the assistants of the past just didn’t have the intelligence,” Fedus said, “and now we’re getting there.”