
Personalized A.I. Agents Are Here. Is the World Ready for Them?



    The Shift

    The age of autonomous A.I. assistants could have huge implications.

    Image: Sam Altman gestures during a presentation, with “GPT-4 Turbo Pricing” displayed behind him.
    Sam Altman is the chief executive of OpenAI, which is introducing personalized chatbots. Credit: Justin Sullivan/Getty Images

    You could think of the recent history of A.I. chatbots as having two distinct phases.

    The first, which kicked off last year with the release of ChatGPT and continues to this day, consists mainly of chatbots capable of talking about things. Greek mythology, vegan recipes, Python scripts — you name the topic and ChatGPT and its ilk can generate some convincing (if occasionally generic or inaccurate) text about it.

    That ability is impressive, and frequently useful, but it is really just a prelude to the second phase: artificial intelligence that can actually do things. Very soon, tech companies tell us, A.I. “agents” will be able to send emails and schedule meetings for us, book restaurant reservations and plane tickets, and handle complex tasks like “negotiate a raise with my boss” or “buy Christmas presents for all my family members.”

    That phase, though still remote, came a little closer on Monday when OpenAI, the maker of ChatGPT, announced that users could now create their own, personalized chatbots.

    I got an early taste of these chatbots, which the company calls GPTs — and which will be available to paying ChatGPT Plus users and enterprise customers. They differ from the regular ChatGPT in a few important ways.

    First, they are programmed for specific tasks. (Examples that OpenAI created include “Creative Writing Coach” and “Mocktail Mixologist,” a bot that suggests nonalcoholic drink recipes.) Second, the bots can pull from private data, such as a company’s internal H.R. documents or a database of real estate listings, and incorporate that data into their responses. Third, if you let them, the bots can plug into other parts of your online life — your calendar, your to-do list, your Slack account — and take actions using your credentials.

    Sound scary? It is, if you ask some A.I. safety researchers, who fear that giving bots more autonomy could lead to disaster. The Center for AI Safety, a nonprofit research organization, listed autonomous agents as one of its “catastrophic A.I. risks” this year, saying that “malicious actors could intentionally create rogue A.I.s with dangerous goals.”

    But there is money to be made in A.I. assistants that can do useful tasks for people, and corporate customers have been itching to train chatbots on their own data. There is also an argument that A.I. won’t truly be useful until it really understands its users — their communication styles, their likes and dislikes, what they look at and shop for online.

    So here we are, speeding into the age of the autonomous A.I. agent — doomers be damned.

    To be fair, OpenAI’s bots aren’t particularly dangerous. I got a demo of several GPTs during the company’s developer conference in San Francisco on Monday, and they mostly automated harmless tasks like creating coloring pages for children, or explaining the rules of card games.

    Custom GPTs also can’t really do much yet, beyond searching through documents and plugging into common apps. One demo I saw on Monday involved an OpenAI employee asking a GPT to look up conflicting meetings on her Google calendar and send a Slack message to her boss. Another happened onstage when Sam Altman, OpenAI’s chief executive, built a “start-up mentor” chatbot to give advice to aspiring founders, based on an uploaded file of a speech he had given years earlier.

    These might seem like gimmicks. But the idea of customizing our chatbots, and allowing them to take actions on our behalf, represents an important step in what Mr. Altman calls OpenAI’s strategy of “gradual iterative deployment” — releasing small improvements to A.I. at a fast pace, rather than big leaps over extended periods.

    For now, OpenAI’s bots are limited to simple, well-defined tasks, and can’t handle complex planning or long sequences of actions. Eventually, Mr. Altman said on Monday, users will be able to offer their GPTs to the public through OpenAI’s version of an app store. (He was light on details, but said the company planned to share some of its revenue with the makers of popular GPTs.)

    OpenAI has made it very easy to build a custom GPT, even if you don’t know a line of code. Just answer a few simple questions about your bot — its name, its purpose, what tone it should use when responding to users — and the bot builds itself in just a few seconds. Users can edit its instructions, connect it to other apps or upload files they want it to use as reference material. (And to answer the question going through every corporate lawyer’s head right now: OpenAI says it doesn’t use uploaded files to train its A.I. models.)
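    For developers, OpenAI announced a programmatic counterpart to this no-code builder, the Assistants API, at the same conference. As a rough sketch of the same idea, a bot defined by a name, a purpose and a tone, here is what creating one might look like with OpenAI’s Python library; the model name and the exact instructions are illustrative assumptions, not details from the demos.

    from openai import OpenAI

    # Minimal sketch: the Assistants API analogue of answering the builder's
    # three questions (name, purpose, tone). Model name and wording are assumptions.
    client = OpenAI()  # reads the OPENAI_API_KEY environment variable

    assistant = client.beta.assistants.create(
        name="Mocktail Mixologist",
        instructions=(
            "You suggest nonalcoholic drink recipes. Keep a cheerful tone, "
            "list ingredients before steps, and never recommend alcohol."
        ),
        model="gpt-4-1106-preview",
    )
    print(assistant.id)

    In the API version, conversations then run through separate thread and run calls rather than the ChatGPT interface.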


    After OpenAI gave me early access to its custom GPT creator, I spent several days playing around with it.

    The first custom bot I made was “Day Care Helper,” a tool for responding to questions about my son’s day care. As the sleep-deprived parent of a toddler, I’m always forgetting details — whether we can send a snack with peanuts or not, whether day care is open or closed for certain holidays — and looking everything up in the parent handbook is a pain.

    So I uploaded the parent handbook to OpenAI’s GPT creator tool, and in a matter of seconds, I had a chatbot that I could use to easily look up the answers to my questions. It worked impressively well, especially after I changed its instructions to clarify that it was supposed to respond using only information from the handbook, and not make up an answer to questions the handbook didn’t address.

    Image

    After a day care’s handbook was uploaded to OpenAI’s GPT creator tool, a chatbot could easily look up answers to questions about it.

    Image

    A screenshot of a GPT “Day Care Helper” conversation between the author and the chatbot about circle time.
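    The grounding step described above, uploading a document and then tightening the instructions so the bot answers only from it, has a rough equivalent in the Assistants API. The sketch below is illustrative only: the file name, model and retrieval tool reflect the developer API as announced that week, not how the consumer GPT builder works internally.

    from openai import OpenAI

    client = OpenAI()

    # Upload the reference document so the assistant can search it when answering.
    handbook = client.files.create(
        file=open("parent_handbook.pdf", "rb"),  # hypothetical file name
        purpose="assistants",
    )

    # Create the assistant and restrict it to the uploaded handbook.
    assistant = client.beta.assistants.create(
        name="Day Care Helper",
        instructions=(
            "Answer questions about the day care using only the uploaded parent "
            "handbook. If the handbook does not address a question, say so "
            "instead of guessing."
        ),
        model="gpt-4-1106-preview",
        tools=[{"type": "retrieval"}],  # built-in file retrieval tool
        file_ids=[handbook.id],
    )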

    My second chatbot was called “Grandpa Roose’s Financial Advice.” It was based on another document: a 23-page advice booklet written by my late grandfather, an economist and passionate stock picker, containing the financial-planning wisdom he collected over the years. It’s a document I’ve consulted occasionally, and I wondered if, by putting it inside a chatbot, I might be inspired to use it more frequently.

    I uploaded the text of the booklet, and less than five minutes later I had a chatbot that could parrot my grandfather’s financial advice and answer questions in something vaguely resembling his voice. (Although, amusingly, it often slipped from folksy grandpa-speak back to ChatGPT boilerplate in the middle of an answer, such as when it answered a question about Tesla stock by saying, “It’s like I always say, before making any specific investment decision, such as buying Tesla stock, it’s crucial to evaluate your own investment goals, risk tolerance and investment horizon.”)

    None of my custom chatbots worked perfectly, and there was plenty they couldn’t do. But if you squint, you can see hints of the kinds of jobs that autonomous A.I. agents could replace, if they actually work.

    Imagine the benefits department at a big company. On a regular day, benefits administrators might spend 20 percent of their time answering employee questions like, “Does our dental insurance cover orthodontics?” or “What forms do I need to fill out to apply for parental leave?” Build a chatbot to answer those questions and you might free those administrators to do higher-value tasks — or you might just need 20 percent fewer of them.

    Or imagine a company that trains A.I. agents to respond to the vast majority of customer service requests by feeding them information and hooking them up to a database of past customer problems. Would that company’s customer service department shrink? Could it eventually be as good as, or superior to, a customer service department staffed by humans?

    Don’t get me wrong — I’m not saying millions of jobs will disappear tomorrow, or that OpenAI’s customized GPTs are the harbingers of doom. They seem fairly harmless and limited in their scope, and nothing I saw this week worried me on an existential level.

    But making A.I. agents more autonomous, giving them access to our personal data and embedding them inside every app we use has profound, head-spinning implications. Soon, if the predictions are right, A.I.s could get to know us on a deep level — perhaps, in some cases, better than we know ourselves — and will be able to perform complex actions with or without our oversight.

    If OpenAI is right, we may be transitioning to a world in which A.I.s are less our creative partners than silicon-based extensions of us — artificial satellite brains that can move throughout the world, gathering information and taking actions on our behalf. I’m not fully prepared for that world yet, but by the looks of things, I’d better start getting ready.

    A version of this article appears in print in Section B, Page 1 of the New York edition, with the headline: “World Might Not Be Ready, But A.I. ‘Agents’ Are Here.”
