The expert in the machine: AI as partner in supply chain

Discover how combining human judgment with AI-driven expertise leads to smarter, faster decisions in supply chain and retail planning. Learn how planners stay in control, using AI to automate analysis and model scenarios—not replace human judgment.

Unknown Speaker 0:00:02.6: 

Good afternoon, everyone. We’re going to get started with our first breakout session in this room. If everyone could help welcome Ben Dussault to the stage. He’s going to be speaking on The Expert in the Machine: AI as a True Partner in Supply Chain, and if you want to save your questions for the end, Ben will take them. We’ll have a microphone, and with that we’ll get started. 

Ben Dussault 0:00:38.0: 

All right. So hello, everyone. I’d like to open with a question. We all know we’re somewhere on the hype cycle of AI, and I’m curious where you in the audience think we are. I’ll start from the left-hand side and work my way over to the right, and when I reach the point you think we’re at, raise your hand. If I go too far, put your hand back down. So we start at the innovation trigger, when this thing launched, and then there’s the peak of inflated expectations, where everybody thinks that this thing is going to solve everything. Then we get to the trough of disillusionment, where we actually try to solve everything and nothing seems to work, and then we start going up the slope of enlightenment. I think that we’ve gone through the first three, and I think that’s meaningful, in the sense that a lot of us are probably tired of the peak of inflated expectations, but we’re also probably a little bit tired of the trough of disillusionment, because a lot of learning has happened. So the purpose of what I’d like to talk about today is the enlightenment part of the slope of enlightenment. We actually have learned something. There is a way to do various things. There are things that we will be able to do, and we have line of sight to how we’re going to do them, as opposed to the hand-waving, ‘AI is going to solve everything.’ 

Ben Dussault 0:02:03.5: 

So I’m Ben Dussault. I’m Director of Product Management of Supply Chain and Analytics, and I’ve been doing AI and analytics for most of my career. I served briefly as a professor of practice at Georgetown University teaching business analytics before coming here to Anaplan. Starting with the problem with AI. Probably most of you have been using AI in your personal lives, and it is really tempting, especially if you started using it early, to treat AI as an answer-generating machine. You would ask it a question, it would provide an answer, and you would evaluate whether you liked that answer or not, and it’s easy to get into this cycle because it’s an easy way to evaluate something. Is it right or is it wrong? Is it useful or is it not? If we treat AI in this particular kind of way, then we get this: AI spits out answers, we say yay or nay, and that’s not actually terribly useful. It’s not the way, for example, that we work with our colleagues. It’s not the way that we work with members of our team. We don’t say, ‘Please give me responses and I will grade them as either correct or incorrect, and then we will move on with our lives.’ 

Ben Dussault 0:03:23.8: 

What’s instead much more useful is to have a conversation with an expert. It actually lowers the bar, right? When you’re talking with your colleagues, they are allowed to be wrong, or they’re allowed to disagree, and you’re allowed to talk back and forth, but that is still a meaningful, useful interaction. A conversation with an AI expert is useful, and I’ll just give the example of what I’ve been doing. I told you that I’ve been doing analytics for ages, and I’ve been formally trained in certain branches of analytics, and I know a whole heck of a lot about them. Over my career, I’ve had to do all the analytics, and I’ll tell you, in the past three years when I’ve been able to use AI, I’ve learned more than in all the other years combined. Why? Because I have access to this expert, frankly, where I can ask, ‘What analytics should I use to do this? What analytics should I use to do that?’ Keep in mind, it’s not like the left-hand side, because if I was trying to solve a particular problem, let’s say pricing, and I ask it, it provides an answer. That first answer is almost never right, but that’s okay, because it’s extremely useful to talk with it. The ultimate answer that we arrived at, we only got there because we had an AI to chat back and forth with and to investigate different ideas. So the conversation is what we’re driving towards. That’s the actual useful piece of the interaction. 

Ben Dussault 0:04:54.8: 

I want to separate the notion of LLMs for a moment. So the LLMs are what we typically think of when we think of generative AI. There’s Google, there’s OpenAI, there’s Anthropic, some of the big players. They are the verbal interaction components of AI, and these things are basically commoditized at this point. All of them have consumed the entirety of the internet. They’ve consumed every single written word that has ever been written, ever. They are all trained in roughly similar kinds of ways. Now, in this world of agentic reasoning, the reasoning approaches are roughly similar, and so these are not the things that are going to provide a competitive advantage, because any company can get any one of these three things and do reasonably well. You ask the same question of all three of them, you’ll receive roughly the same answer. So what’s the difference, then? If it’s not your choice of AI in particular, then where does the expertise come from, the thing that was useful for me when I was asking, ‘What kinds of analytics should I be using?’ When we’ve been disappointed with AI in the past, it’s because we’re making a comparison that really shouldn’t be made. 

Ben Dussault 0:06:16.3: 

On the left-hand side you ask, ‘Write me a limerick about AI expertise,’ and it comes back and it writes you a limerick. It’s got the right number of syllables, it’s got rhyming. It actually makes sense. It does all of these things and you think, ‘Wow, that’s pretty neat.’ Sadly, it’s not very useful. So when I go and I ask a useful question, ‘What should I do about this delayed shipment?’ I get a very generic response, something that is grammatically correct but otherwise not terribly useful. What’s the difference between these two scenarios? Well, the difference is that if you ask it to write a limerick, it has literally read every single limerick that’s ever been written, and so it’s not too far of a stretch of the imagination to say, ‘Well, maybe it can write a limerick,’ whereas if you ask it anything about your own particular business situation, it doesn’t know anything about any of it. When you say shipment, it doesn’t know what shipment means. It doesn’t know where it’s going, where it’s coming from. Why is it important? What is it going to? It doesn’t know any of that information, so of course it’s not going to be able to tell you anything about it. 

Ben Dussault 0:07:23.7: 

So if the LLM, that part of the AI, is commoditized, then the thing that’s going to make the difference is this information: the context that is going to make it actually be an expert in whatever you want it to be an expert in. When you are evaluating AI use cases and thinking like that, you want to ask yourself, ‘Where is this expertise going to come from?’ If you can’t identify where that expertise is coming from, ask yourself, ‘Is this actually going to be useful?’ It can’t conjure something out of nothing. So what does an AI expert look like? There’s a bunch of different ways to think about it, but this is the layer cake of how we are thinking about it. It starts with a living business model. So this is just understanding, ‘What is the problem at hand, and what are the levers that I can pull to try to solve a particular kind of problem?’ There’s the built-in domain expertise of, ‘I can pull these levers. What is a definition of good for pulling these levers?’ There’s cross-functional synthesis, so when we’re talking about the connected planning of all the different applications and stuff like that, ‘How can I start to branch out beyond my own specific area of expertise?’ There’s forward-looking simulation, so, ‘I can read now. Can I write and try to make recommendations and test hypotheses automatically?’ and then corporate DNA, which is your company in particular. What information is special about your company in particular? What difference does that make? 

Ben Dussault 0:09:01.1: 

So two years ago, this kind of framework didn’t exist. It didn’t exist because people generally thought the AI is just going to get better and better. As it gets better and better, it will be able to solve more and more questions, and eventually the question that you’re interested in is going to be solved by the AI. That would basically be saying something like, ‘Google is going to make Gemini sufficiently good that it can answer, “What should you do about your late shipment?”‘ which I think, in retrospect, we can all imagine was probably not going to happen. One year ago, there was a framework that said, ‘Hey, we do need to provide expertise, but the way that we’re going to provide expertise is we’re just going to give it all the data. Let’s give it access to your SharePoint. Let’s give it access to your ERP. Let’s give it access to all the data that you have, and the LLM is going to just go through it all and figure it all out,’ and sadly, that hasn’t worked either. So now we’re in this area where we need to curate expertise. We need to define what an expert actually is, and what the content of information is that makes an expert, and we are going to provide that limited amount of data to the AI to make it good. 

Ben Dussault 0:10:15.4: 

So let’s go layer by layer. The living business model. So, putting this in the context of Anaplan applications, we need to teach the AI what the application is. We need to be able to say, ‘This is a trade promotion management application. The person who’s going to be using it probably looks something like this. When they type in what a promotion is, these are the kinds of things that they are thinking about,’ and this is real. What do I mean by real? These are actual screens of us teaching the AI how to do stuff. On the bottom-left, we give the AI every single formula. When you’re building stuff in Anaplan, you have to write down formulas. We give those formulas to the AI so that it can infer, ‘Hey, how is this number calculated? Oh wait, I can go check.’ If you didn’t give that context and you asked it, ‘What should I do about my trade promotion plan?’ it’s not going to have any idea. So we’re starting to give it context in the form of the formulas. We give it context in terms of, ‘What are the things that we are actually calculating? When we say revenue, what do we mean by revenue? When we say ROI, what do we mean by ROI? What do users care about? Do they care about ROI? Do they care about volume? Do they care about the trade-off between the two?’ 
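To make ‘giving the AI the formulas’ concrete, here is a minimal sketch of the pattern being described: render an application’s line items, formulas, and definitions into plain text that can be handed to the model as context. The line items and formulas below are invented for illustration and are not Anaplan’s actual metadata format.

```python
# Hypothetical sketch: flatten an application's line-item formulas and
# definitions into plain-text context for an LLM prompt.
# All names and formulas here are invented for illustration.
MODEL_METADATA = {
    "Revenue": {
        "formula": "Price * Volume",
        "definition": "Net revenue after trade discounts, by product and week.",
    },
    "ROI": {
        "formula": "(Incremental Revenue - Promo Spend) / Promo Spend",
        "definition": "Return on a single trade promotion event.",
    },
}

def render_context(metadata: dict) -> str:
    """Render model metadata as prose the LLM can consume as context."""
    lines = []
    for name, info in metadata.items():
        lines.append(f"Line item '{name}' is calculated as: {info['formula']}.")
        lines.append(f"It means: {info['definition']}")
    return "\n".join(lines)

print(render_context(MODEL_METADATA))
```

The point is that the ‘expertise’ is just text with a traceable source: you can read exactly what the AI was told about how each number is calculated.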

Ben Dussault 0:11:25.0: 

All of that information is being written down. It’s being taken out of experts’ heads and put onto paper to give to the AI, and hopefully you can start to imagine that if you don’t have this information and you’re asking an AI to be meaningfully useful in an application, it’s not going to be able to be. All right. So then, what about domain intelligence? We can write down the mechanics of what the application is. We can write down, ‘These are the numbers that a user might type in. These are the numbers that would come out, and this is how the numbers are calculated,’ but there is the industry-specific or domain-specific intelligence for a particular type of activity or application. So just taking demand planning as an example: ‘Hey, something went wrong. What should I do?’ That information is not written down. So I want you to think about it for a moment. Whatever specialty you’re in within supply chain or whatever, think of the last decision that you made: something happened, you responded to it, you did something about it, and then you either solved or did not solve the problem. Did any of you write it down? Probably not. So if you didn’t write it down and then give it to an AI, how is it ever going to know what good looks like? It can’t. It won’t. It’s not there. It’s not in all the limericks that were ever written on the internet. Those won’t help you with your problem. 

Ben Dussault 0:13:02.3: 

So what we are doing is writing those things down. ‘Hey, if I am a demand planner and I run into this type of problem, here are the types of things that I might want to do,’ and the document on the left is from our piloting and prototyping of, ‘What kinds of contexts do we need to give in the context of demand planning to actually help somebody do demand planning in a useful kind of way?’ That document was written by demand planners, because they’re the experts, and we’re taking the information that they know and putting it down onto paper so that the AI can actually consume it. This is where expertise can come from. You can actually identify the source of it. It’s a document. It’s text. That’s it. It’s right there. You can decide where it’s coming from as opposed to the nebulous, ‘Well, it’s got stuff. It’s got information, it’s got data. It’s going to take it all in and be really smart.’ No, it has to come from somewhere, and the place where it came from was people that were experts in their particular field. 

Ben Dussault 0:14:01.6: 

What’s the next area? Well, the next area is the threshold, or the intersection, between each of these different domains. So taking demand planning and trade promotion management as an example: on the previous slide we were talking about how demand planning has a demand planning expert, or curated expertise about demand planning. We have curated expertise about trade promotion management and what you should do in those particular areas. Well, I can get the product manager for demand planning and I can get the product manager for trade promotion management. I can put them in a room and I say, ‘Write down 200 pages of what the heck you would do about how demand planning and trade promotion management should interact. What are the things that somebody would actually want to understand? What should your response be if, in demand planning, I’m not hitting my revenue target? What should you be thinking about? Am I hitting my price targets? Am I hitting my volume targets?’ If you don’t tell the AI that, ‘Hey, revenue decomposes into price and volume. You might want to check one or the other or both and see which one is the bigger issue,’ the AI is not going to figure that out, because it doesn’t know. 

Ben Dussault 0:15:10.5: 

So we had to get information out of people’s heads, and the demand planner and the trade promotion management product managers know the most about their respective fields, and they know the most about how they should interact. So every pairwise combination of every single discipline, those experts get in a room and they talk about, ‘How should we as experts interact with each other?’ and so now we have another source of expertise, not just in each of the domains individually, but in the connection between each of the different domains. I think Anaplan in particular has a pretty strong competitive advantage here, because we’re the only ones that make an integrated financial planning application, and a demand planning application, and a trade promotion management application, and an assortment planning application, and, and, and, right? So we have the capability of having these experts in the room to be able to actually define, ‘How should IFP and TPM actually interact?’ because we own both applications. You can imagine in a more generic setting where, if I have point solutions for each one of these different things and I want some AI to rule them all, that seems hard. 

Ben Dussault 0:16:26.0: 

All right. So in terms of forward-looking simulation, here are two Anaplan applications, and they both have scenario capabilities. They have simulation capabilities. I want to try a scenario where it looks like this. I want to try a scenario that looks like that. From an application perspective, whether it’s a demand planning application or otherwise, that’s not revolutionary. Other applications have the ability to do scenario planning. The difference is if you have multiple applications. I mentioned that if you have point solutions for different things, they’re each going to have their own AI to be able to figure it out, because if you have a demand planning application and you have a TPM application and they are coded differently, they have their own individual code base, you have to go and teach one AI, ‘Hey, what are the things that I can change in demand planning and what are the outputs that you’re expecting? What can you change in TPM and what are the outputs that they’re expecting?’ and the way that it interacts with both of those applications is kind of bespoke. So I have multiple versions of an AI interacting with each one of the different applications. Because we’re Anaplan, Anaplan itself is a simulation platform. It is not a set of applications that can do scenario planning. It is a scenario planning platform, so we only have to teach the AI once. 

Ben Dussault 0:18:03.3: 

All we have to do is we have to teach the AI, ‘How does Anaplan the platform work?’ and if it can understand how Anaplan the platform works, then any model that is built on top of Anaplan, it can understand how that works. Now you might need to give it additional domain-specific context, but how the application is working, it comes for free. So in the keynote, if you hear custom analysts, you can get a custom analyst to do whatever you want for your own bespoke things. This is how. We can train it to understand Anaplan, and therefore have it be trained to understand anything that is built upon Anaplan, regardless of what type of application it is. So this unlocks a lot of different things, like the things I just talked about before. I have multiple applications. I have the intersection between multiple applications. How is it going to understand the context of all of it? Well, it understands the platform that they were all built on, and it understands the domain of every single combination between them. We have to provide all that stuff. That’s hundreds to thousands of pages of just pure context. 

Ben Dussault 0:19:08.1: 

The final link is corporate DNA. A pretty tangible example that I like to give is cold supply chains. So vaccine transportation from one place to another: the vaccines need to stay at a particular temperature. We went to conferences, we saw examples along the lines of, ‘This package is in St Louis. Its temperature is getting too high. If it continues on this path through St Louis, it’s not going to get there in time, and it’s going to spoil. You can either change its packaging in St Louis, or you can transfer it off of rail and put it onto a plane, in which case if it gets on the plane, it will get there in time.’ That sounds like a useful use case, and it’s by and large real. So where did it get the ability to say all of those extremely specific things? It’s the thing in the bottom left, which is the standard operating procedures for St Louis. Hundreds and hundreds of pages of, ‘What are your different transportation modes in St Louis? What are the schedules for those? What are your options for transferring from one transportation mode to another? What are the things that you should take into account?’ Things that an actual human being would usually end up reading to figure out, ‘What the heck do I do in this situation?’ were already written down, and this is specific to this one particular company. 

Ben Dussault 0:20:33.9: 

So Anaplan has the ability to say, ‘We understand our applications. We understand best practices for different domains and specific areas, but the last mile, the, “How does it make it extremely specific for your application in particular?” is going to have to come from you. We don’t know.’ So there’s line of sight to be able to see that this is the kind of thing that you would want to be able to add, and I think our hope is that by being experts in our own applications and providing context for applications working together, that we have a really good idea of what kind of context is useful to add, so that we would be able to provide guidance. ‘Hey, you should add these kinds of standard operating procedures. You should add this kind of information. It should look like metadata that looks like this, as opposed to something else.’ You can start to see that, hey, that kind of information comes from somewhere. It is no longer relying on the vendor to make a magical AI that is sufficiently sophisticated. You are pointing out, for the questions that I wanted to answer, ‘What is the expertise that it comes from?’ or ‘Where is the expertise that it comes from?’ 

Ben Dussault 0:21:45.7: 

So what does the future look like? So here is a chat, and many of you are probably rolling your eyes. ‘Oh no. It’s another chat that’s been generated for our vision of what the future might look like,’ and my response to that is, ‘Yes, I completely understand.’ I have been in that room, the back of that room, in previous lives where people have been dreaming up, ‘These are the kinds of interactions that we think we can have. Let’s just write it down and then hope that the AI gets sufficiently sophisticated to be able to actually do it.’ The difference between that and this is that we’ve been testing to be able to do these kinds of things, to understand, ‘Is the expertise that I just talked about, that we’re writing down, able to provide enough context to be able to answer this?’ and we feel pretty strongly that, yes, it’s not an easy problem to solve, but it is a solvable problem. So in this particular case, you probably can’t read it, but it’s just talking about, ‘What is driving the largest gap to plan?’ It says, ‘Your American region is the largest gap to plan.’ The response is, ‘I’m in product. I don’t care about regions,’ and then it comes back and says, ‘Widgets are the problem.’ 

Ben Dussault 0:22:58.5: 

How is it able to do that? Well, it needs to know what a product is. It needs to know what a region is. It needs to know what a plan is. It needs to know what a gap to plan is. Where did it get that information? It got it from the application model that we have. There are line items called gap and there are lists called product, and we defined all of that stuff and told the AI, ‘What do all of these things mean and how do they interact?’ so it is therefore able to answer questions like this. Next, ‘Any ideas?’ ‘Well, it looks like it’s a pricing problem. Your volume targets are on track, but the prices are lower than what you expected. Your average sale prices are lower than what you expected, so I think that it’s a pricing problem.’ So, ‘Is it a discount problem, or is it a list price problem?’ That’s the user responding. ‘Is it a discount problem or is it a list price problem?’ ‘Well, it looks like the average discount increased in September, and the price also increased in December.’ So there’s a back and forth here, but where is the context or the expertise that says, ‘Hey, if you’re missing your revenue plan, look at price, look at volume and see where it’s going.’ 
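The curated rule behind that exchange - revenue decomposes into price and volume, so check both - can be sketched as a simple gap decomposition. Splitting a revenue gap into a price effect, a volume effect, and an interaction term is a standard variance-analysis technique; the numbers below are invented for illustration.

```python
# Sketch of the curated rule "revenue decomposes into price and volume":
# attribute a revenue gap to a price effect and a volume effect.
# All numbers are invented for illustration.
def revenue_gap_decomposition(plan_price, plan_volume, actual_price, actual_volume):
    price_effect = (actual_price - plan_price) * plan_volume
    volume_effect = (actual_volume - plan_volume) * plan_price
    interaction = (actual_price - plan_price) * (actual_volume - plan_volume)
    total_gap = actual_price * actual_volume - plan_price * plan_volume
    # The three terms always sum exactly to the total gap.
    assert abs(total_gap - (price_effect + volume_effect + interaction)) < 1e-9
    return {"price": price_effect, "volume": volume_effect, "interaction": interaction}

# Volume on track, average selling price below plan:
# the decomposition points at pricing as the problem.
effects = revenue_gap_decomposition(plan_price=10.0, plan_volume=1000,
                                    actual_price=9.0, actual_volume=1000)
print(effects)  # price effect -1000.0, volume effect 0.0
```

This is exactly the kind of ‘what to check first’ knowledge that has to be written down for the AI, because it is not in any training corpus about your business.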

Ben Dussault 0:24:22.0: 

It identifies it’s a pricing thing because we told it that those are the two options. ‘Check both, see which one is the source of the problem,’ and then the user had to go and say, ‘Well, okay, the average sale price is lower than you expected. Is it a discounting thing, or is it a list price thing, or is it both?’ That’s the interaction. It didn’t get it right on the first try. It didn’t identify the problem on the first try. The user is going back and forth and providing more context and investigating things. So, ‘What did finance plan for?’ That’s the user asking. We identified that the list price went up and the discount went up, and finance planned that the list price was going to go up but the discount was going to remain the same. So basically sales is giving away the farm, right? We raised our list price, they increased their discounts to match. That’s bad. Again, to be able to go see what finance planned for, the finance application has to plan, ‘What was the average discount?’ and all that kind of stuff, and the AI needs to know, ‘What does finance mean when the discount plan is whatever?’ It has to understand all this. We had to give it that information, and it has to look something like this. 

Ben Dussault 0:25:37.2: 

This is actual text. We got the product manager for one of these applications, sat them down, and told them, ‘Write down 50 pages’ worth of things that you think it should be able to answer, and then let’s see if it can actually answer them.’ All right. So, ‘What kind of volume increase would I need to offset it from a margin perspective?’ and it gives a number. How would we do this in Anaplan today? You’d type in different numbers until you got something that offset the decrease, so you would just type in numbers. Well, let the AI type in numbers, right? So if we teach the AI how to interact with Anaplan, then it can do that stuff, and we have line of sight. It knows which cell to pick. It knows how to type a number into a cell. Those are things that we are confident we can teach it, and then finally, you can ask, ‘Okay, what are my options?’ and it can give you some options, right? So there’s price leakage, or getting rid of price leakage. You can change your portfolio to include a mix of higher-margin products such as Project Alpha. You can use value engineering, but oh no, you have a commodity so that might be difficult. How does it come up with all that kind of context? How does it know that it’s a commodity? How does it know that the American region is price-sensitive, and so you have to be careful about locking down price leakage? You had to give it to it. 
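The ‘let the AI type in numbers’ idea is essentially a goal-seek: repeatedly try values until the target is hit. A minimal sketch, assuming a simple margin model with invented prices and costs, might look like this:

```python
# Sketch of "let the AI type in numbers": a bisection goal-seek that
# searches for the volume restoring the original margin after a price
# drop. The margin model, prices, and costs are invented for illustration.
def margin(price, cost, volume):
    return (price - cost) * volume

def goal_seek_volume(target_margin, price, cost, lo=0.0, hi=1e7, tol=1e-6):
    """Bisection: find volume where margin(price, cost, volume) hits target.
    Margin is increasing in volume, so the search brackets cleanly."""
    for _ in range(200):
        mid = (lo + hi) / 2
        if margin(price, cost, mid) < target_margin:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return (lo + hi) / 2

baseline = margin(price=10.0, cost=6.0, volume=1000)   # 4000.0
# Price drops from 10 to 9; what volume keeps margin at 4000?
needed = goal_seek_volume(baseline, price=9.0, cost=6.0)
print(round(needed, 2))  # ~1333.33, i.e. about a 33% volume increase
```

The AI doing this against a planning model would be typing candidate values into a cell and reading the recalculated margin back, rather than calling a local function, but the search loop is the same idea.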

Ben Dussault 0:27:07.9: 

It had to come from you. There’s no way on the planet that AI all by itself is going to infer those kinds of things, and so the combination of the two of us has to really get good at deciding, ‘Okay, well, what kind of information are we going to be able to give it such that it can make these kinds of calls? Is this a good idea, that the AI is making these kinds of recommendations, or not?’ This is the kind of thing where we know what would have to be there for it to be able to do this, and it’s stuff that exists. It just needs to be committed to paper and put in the right format. So then finally, you could pick a particular option and it could go act on that. All right. So to close, I mentioned where I thought we were on the hype curve: at the bottom of the slope of enlightenment. We have line of sight to how we’re going to get there, and the argument that I have is that the height of the slope of enlightenment, or the plateau of productivity, is not going to match the inflated expectations that we had before, but now we have line of sight to how we are going to get there. The five layers that we just went through are the ways that we are going to walk up this slope, and it’s going to be a journey. It’s not going to be, ‘In the next month there’s going to be a flip of a switch, and then the AI is going to become all-knowing.’ We have to get better and better at providing more and more context in a better way, and each layer is going to make it more and more useful. Okay. Thank you very much. 

Unknown Speaker 0:28:52.2: 

All right. We will take any questions. 

Ben Dussault 0:29:05.1: 

It’s okay. I was a professor. I’m used to it. 

Unknown Speaker 0:29:10.5: 

I’ve got one. 

Audience 0:29:13.2 

Hi. My name is Samiksha. I had a question. When you said you have to give the information to the AI, where do you mean we have to give the information? Are you saying that we type into the… You mentioned the custom analyst. So what component are you talking about? Is it multiple components or just one component? If you could share some information or thoughts on that. 

Ben Dussault 0:29:34.2: 

Yes, so I can… I’m going to give a slightly roundabout answer to that. Back in the day, Google had an image generation AI, and they tried really, really hard to make it not racist. So if you typed in, ‘Make me a picture of a man,’ it was always a white man. So how did they solve this problem? They politely asked the AI. It’s called prompt engineering, so basically there’s a hidden prompt that the user didn’t see which said, ‘Make sure that you respect racial diversity,’ in an attempt to get the AI to do the thing that they wanted it to do. So then when people said, ‘Please make me a picture of a Nazi,’ then they got Asian Nazis because it’s an unintended byproduct of what people do. The point of that story is, it’s really, really hard to get an AI to do what you want. You’re literally asking it what to do, and the insight into that is, ‘What are the ways that we can interact with an AI by giving information or giving instructions or all that kind of stuff?’ and the answer is, ‘Not very much.’ The AI is a black box. You shove in some text, it gives you out some text or an image, and you have two choices. You can either handle things on the incoming side through prompt engineering to modify the user’s prompt, or to otherwise add context or whatever, or you can modify the output, which is basically like, print the output or don’t print the output. 
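The two levers described here - modify the incoming prompt, or filter the outgoing response - can be sketched roughly as follows. The hidden instruction text and the blocked-terms rule are invented for illustration; real systems compose these prompts in vendor-specific ways.

```python
# Sketch of the two levers: prompt engineering on the incoming side,
# and print-or-don't-print filtering on the outgoing side.
# The instruction text and filter rule are invented for illustration.
HIDDEN_INSTRUCTIONS = (
    "You are a demand-planning assistant. Only answer using the supplied "
    "context, and say you don't know when the context is insufficient."
)

def engineer_prompt(user_prompt: str, context: str = "") -> str:
    """Wrap the user's prompt with hidden instructions and optional context."""
    parts = [HIDDEN_INSTRUCTIONS]
    if context:
        parts.append(f"Context:\n{context}")
    parts.append(f"User question:\n{user_prompt}")
    return "\n\n".join(parts)

def filter_output(response: str, blocked_terms=("internal-only",)) -> str:
    """Output-side lever: either show the response or withhold it."""
    if any(term in response.lower() for term in blocked_terms):
        return "[response withheld]"
    return response

prompt = engineer_prompt("What should I do about my delayed shipment?")
print(prompt.startswith(HIDDEN_INSTRUCTIONS))  # True
```

The user never sees `HIDDEN_INSTRUCTIONS`; from their side, they typed one question and got one answer, which is why these hidden prompts can produce surprising side effects like the image-generation example above.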

Ben Dussault 0:31:04.0: 

Those are really the only choices that you have, and so it’s really hard to do the thing that you’re talking about. Where do we put this information in? So the primary third option is a technology called RAG, retrieval-augmented generation. The idea here, for those of you that have enterprise AI and connect it to SharePoint or whatever, is that it vectorizes - and don’t worry about what that means - it takes your information and basically makes it searchable by the AI. Therefore, when you give it a prompt and you say, ‘I am thinking about ponies,’ it will go and search through all the stuff that you might have and say, ‘Is any of this stuff related to ponies? Oh, yes it is. Okay, well, let me go get it, and I’m going to go put it as context,’ in the form of prompt engineering. I’m going to tell the AI, ‘By the way, here’s some context that might be useful,’ and then your question plus the context that it went and searched for generates the response. So that retrieval-augmented generation, what you put in it and the format that it comes in, is the primary lever that we have today to be able to add that kind of context. 
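A toy version of that retrieval loop, using a bag-of-words stand-in for real vector embeddings (production RAG uses learned embeddings and a vector store; the documents here are invented for illustration):

```python
# Toy RAG sketch: "vectorize" documents with a bag-of-words, retrieve
# the most relevant one for a question, and prepend it as context.
# Real systems use learned embeddings; these documents are invented.
from collections import Counter
import math

DOCS = [
    "Ponies are small horses; our stable ships ponies from the Denver depot.",
    "Trade promotions are planned quarterly with roi and volume targets.",
]

def vectorize(text: str) -> Counter:
    """Crude stand-in for an embedding: word-count vector."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) \
         * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(question: str, docs=DOCS) -> str:
    """Return the document most similar to the question."""
    q = vectorize(question)
    return max(docs, key=lambda d: cosine(q, vectorize(d)))

def build_prompt(question: str) -> str:
    """Prompt engineering step: retrieved text becomes hidden context."""
    context = retrieve(question)
    return (f"By the way, here's some context that might be useful:\n"
            f"{context}\n\nQuestion: {question}")

print(build_prompt("I am thinking about ponies"))
```

The quality of the answer is then bounded by what you put into `DOCS` and how it is formatted, which is the ‘primary lever’ being described.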

Ben Dussault 0:32:18.8: 

So there are things like knowledge graphs, which are metadata, because that’s an easily interpretable way for the AI to deal with things. They tell the AI the things that it should know: the structure of how things might be. What your supply chain looks like, for example, might be something that you put in that form. There’s that, and then there’s prose. In our tests, when we went and asked the product manager, ‘Hey, write down everything you know about MFP,’ it was prose, just text, and that worked remarkably well. So did that answer your question? Cool. Any other questions? 

Audience 0:33:04.3: 

Thank you. Hi. I have a question on going through the cycle and being in the slope of enlightenment. I feel like something similar has happened with data privacy law over the past 15 years, and how social media has evolved. Do you think this is something that we’re seeing and will continue? We have Claude, and ChatGPT popped off a few years ago. I feel like right now, even brainstorming in the sessions we’ve already been through today, like, ‘How can we use Claude outside of these platforms that we’re investing in?’ and I’m wondering how long you think that cycle will continue, if that makes sense? 

Ben Dussault 0:33:43.4: 

How long will that cycle continue from a privacy perspective? 

Audience 0:33:47.2: 

Not really a privacy. That’s just an example. I don’t know if I think that we’re in a slope of enlightenment right now because of all the buzz with Claude and agents and building. I feel like I just can’t keep up. I’m wondering your opinion on being in the peak of inflated expectations still, if that makes sense. 

Ben Dussault 0:34:08.2: 

Yes. Okay. So I’ll phrase it just to make sure I understand. What does the plateau of productivity look like? 

Audience 0:34:14.1: 

Yes, exactly. 

Ben Dussault 0:34:15.1: 

Okay. So I don’t think it’s going to be that dissimilar from the internet, for example. For those of you that are old like me and remember when the internet was new: you had the modem and it went [imitates dial-up modem] or whatever, and we had no concept of what the internet was going to unlock. The notion that the primary use case today is that I pull a rectangle out of my pocket, I find a bathroom toy that my wife asked me to get on a website - Amazon, let’s be honest - and I tap it and it already has all my information on hand and it shows up later that day, I don’t think any of us imagined that was going to be the particular use case. So in that sense, I think there’s a lot that we don’t understand in terms of how we use it. How it’s going to work, I think we do have a fairly decent understanding. When you started out with the internet, like, ‘Oh, this computer connects to this computer. Imagine a world where all the computers are connected to all the other computers. Wouldn’t that be neat? Here’s how you would do it and here’s how it would scale, and all that kind of stuff,’ I think those were answered pretty early on, but the actual use cases on the day to day, like, ‘How is this going to impact my life?’ was a bit of an open question. 

Ben Dussault 0:35:39.3: 

I think that that’s similar to how it is here. Five years from now, ten years from now, I think the way that our day operates will look fundamentally different. How? I have no idea, but the way that it’s going to do that, I think, is going to be a mix of the things that we just talked about plus one other thing, which is agents talking to each other, and those are the ingredients. What dishes we make, I have no idea. 

Unknown Speaker 0:36:06.1: 

All right, and with that, I think we’ll wrap up. If everyone could join me in thanking Ben for today’s session. 

SPEAKER

Ben Dussault, Director Product Management, Anaplan