How to build an AI-first organization | Ethan Mollick

23.1K views June 05, 2025

Most companies are using AI to cut costs. Ethan Mollick argues that the biggest mistake companies make is thinking too small.

In the first episode of Strange Loop, Wharton professor and leading AI researcher Ethan Mollick joins Sana founder and CEO Joel Hellermark for a candid and wide-ranging conversation about the rapidly changing world of AI at work.

They explore how AI is not just an efficiency tool but a turning point—one that forces a choice between incremental optimization and transformational scale. The discussion covers the roots of machine intelligence, the relevance of AGI, and what it takes to build organizations designed from the ground up for an AI-native future.

What’s in this episode:

- Why most companies are underestimating what AI makes possible
- The tension between using AI for efficiency vs. scaling ambition
- How traditional org charts, built for a human-only workforce, are breaking
- The collapse of apprenticeship and its long-term implications
- How prompting is becoming a foundational business skill
- Why “cheating” with AI may be the new form of learning
- The risks of using AI to optimize the past instead of inventing the future
- What it means to build truly AI-native teams and organizations

Strange Loop is a podcast about how artificial intelligence is reshaping the systems we live and work in. Each episode features deep, unscripted conversations with thinkers and builders reimagining intelligence, leadership, and the architectures of progress. The goal is not just to follow AI’s trajectory, but to question the assumptions guiding it.

Subscribe for more conversations at the edge of AI and human knowledge.

--
00:20 - Origins: AI in the early days at MIT
01:53 - Defining and testing intelligence: Beyond the Turing test
06:35 - Redesigning organizations for the AI era
08:56 - Human augmentation or replacement
14:58 - Navigating AI's jagged frontier
17:18 - The 3 ingredients for successful AI adoption
23:31 - Roles to hire for an AI-first world
33:41 - Do orgs need a Chief AI officer?
39:45 - The interface for AI and human collaboration
43:50 - Rethinking the goals of enterprise AI
49:15 - The case for abundance
52:30 - Best and worst case scenarios
58:51 - Avoiding the trap of enterprise AI KPIs

0:00 You can either fire most of your staff
0:01 and make more money per barrel of ale or
0:03 you can be Guinness and hire 100,000
0:04 people and expand worldwide. And I
0:06 really worry about too many people
0:07 taking the small path and not the big
0:09 one.
0:14 [Music]
0:25 I would love to start from the very
0:27 beginning when you were back at MIT with
0:30 you know the OJ and Marvin Minsky and
0:33 and so on. Uh what were sort of the
0:35 ideas at at that stage? So so this is a
0:38 little bit of like stolen technical
0:40 glory because I was not the coder with
0:42 Marvin. I was the person from the MBA
0:44 program who was trying to help the AI
0:46 people explain what AI was to everybody
0:48 else. So I worked with Marvin and a few
0:50 other people at the the media lab quite
0:51 a bit on this and what was really
0:53 interesting was a lot you know this was
0:55 sort of during one of the various AI
0:57 winters right so it was no one was
0:59 paying much attention to AI and it was
1:01 all about sort of elaborate schemes for
1:03 how we can create intelligence and so
1:05 there was projects to observe everything
1:07 a baby did and maybe that would somehow
1:09 let us make AI there was Marv Minsky's
1:11 society of mind of all these kind of
1:13 complex interlocking pieces and um I I
1:15 think about how kind of ironic it was
1:17 that the actual solution turned
1:18 out to be just shove a lot of language
1:20 into a learning system and you end up
1:22 with with LLMs. It's it's interesting
1:24 because a lot of the the technical ideas
1:28 turned out um to to be incorrect. Uh but
1:32 there was a lot of the core philosophies
1:34 there that I think are are are back in
1:36 fashion now. You had Minsky and
1:38 Engelbart. Uh Engelbart had this
1:41 philosophy of augmenting human
1:43 intelligence and Minsky was a lot about
1:46 replacing human intelligence and trying
1:47 to make machines conscious. Um what what
1:50 were some of those sort of foundational
1:52 ideas of how AI could be applied then
1:55 that you think can be relevant now?
1:56 Well, I mean, I think that's what we're
1:57 all kind of struggling with right now is
1:59 now that we have these things in sight
2:01 and you know, we're back to what is
2:03 sentient and what are the, I mean, I
2:06 think it was two weeks ago a new paper
2:07 came out showing that the actual
2:09 original Turing test, right, gets
2:10 passed, the three-party Turing test, that
2:12 GPT-4.5 is capable of passing it, right?
2:14 And in fact, 70% of the time, um, people
2:17 will pick the AI as the human in the
2:19 room. Uh, which I don't know what that
2:21 means, but it's better than chance, but
2:22 that that's interesting. Um and so I
2:24 think we're faced with all these exact
2:26 set of issues that a few thinkers are
2:27 worrying about for a long time. So does
2:29 this replace humans and what do we use
2:31 that for? Right? And for augmentation,
2:33 what does augmentation look like becomes
2:34 the big question, right? Is it you know
2:36 and we we that that debate I think never
2:38 got as far as as it could be partially
2:40 because this was still kind of
2:41 fictional, right? So what do we do with
2:43 these very intelligent also very limited
2:46 machines and then where do humans fit
2:49 into the equation? And I don't think
2:50 that was ever answered and now it's
2:51 suddenly very very important. And the
2:54 Turing test was um it was a beautiful
2:56 idea back then. But if we were to
2:58 design a new test now, a Mollick test,
3:01 what would be your Mollick test for
3:03 AGI? So I, uh, I struggle with AGI as
3:07 this concept all the time, right? Which
3:09 is it's badly defined. I mean the reason
3:11 why the Turing test is interesting just
3:12 like all the other tests is they were
3:13 great when we didn't have anything to
3:14 test, right? Like the Turing test was
3:16 great when computers obviously failed
3:18 it. And similarly, we have the issues
3:19 where like the AI is acing all the
3:21 creativity tests we have, but those were
3:22 designed and they were always mediocre
3:24 for humans and now we're expecting AI to
3:25 do them. The way we figure out whether
3:27 someone has empathy in social science is
3:29 the best test is something called the
3:30 reading the mind in the eyes test. We
3:32 show a bunch of eyes and ask people what
3:33 emotion they have. Like none
3:35 of these things were designed for AI
3:37 stuff. So I think about this a lot and I
3:39 tend to be very practically oriented on
3:41 this, right? So first of all, everyone
3:42 kind of has their own AGIish test. Um,
3:45 you know, I'm a business school
3:46 professor. some some of the easiest is
3:48 can this agent go out in the world and
3:49 make money and and do things as a useful
3:51 useful test. Can we discover new
3:53 knowledge and actually test and come
3:55 with results? But I mean I think what
3:57 we're starting to realize is AGI is
4:00 going to be this sort of phase we're in
4:02 rather than a moment in in time, right?
4:05 There's not going to be a you know
4:07 fireworks going off. Tyler Cowen just
4:08 said o3 is AGI and when asked why he
4:11 says it's like pornography: I know it when I
4:12 see it. Um, and so we don't know what
4:14 the answer to these questions are. And I
4:16 think it's kind of realizing the
4:17 meaninglessness of it because it turns
4:19 out also like as you guys have learned,
4:21 if you connect AI to systems in the
4:22 right way and you connect with company
4:24 processes, suddenly you have something
4:25 that's much better than the sum of its
4:26 parts versus something you're prompting.
4:28 If you're just doing conversation, that
4:29 feels very different than can we do
4:31 strategic decision-making, for example. And
4:34 frequently when these models are
4:35 released, it's always on the most
4:37 hardcore math problems and science
4:39 problems. It's it's very rarely they
4:41 take more business applications. If you
4:44 were to design a benchmark that was more
4:47 focused on the applications that you see
4:49 in in companies, what what would a
4:51 benchmark uh for that look like? So I
4:54 think that is one of the most critical
4:55 problems we're facing right now because
4:57 all of the people in the labs are math
4:59 and science people and they view the
5:01 only good thing you could do with your
5:02 life as coding, right? And then add to
5:04 that the fact that they want to use AI
5:05 to make better AI and coding and math
5:07 becomes like the important things
5:09 followed by biology because they all
5:10 want to live forever. So like that
5:12 becomes the angle that that that this
5:14 goes down and there are very few
5:16 benchmarks other things. So we know the
5:17 AI companies build towards benchmarks.
5:19 They build sketchy ways right of like
5:20 optimizing for benchmarks but also in
5:22 more broad ways of they use this for
5:24 testing and so the fact the lack of good
5:26 business benchmarks is a real problem.
5:28 So I actually one thing I've been
5:30 pushing is companies should be doing
5:31 this themselves to some extent right
5:32 like and some of this can be direct
5:34 number based like how often does it mess
5:36 up in being asked to do an accounting
5:37 process but some of this is vibes based
5:40 as they say. You actually could
5:41 have outside experts (and we've done this
5:42 for some of our experiments) judge the
5:44 quality of answers: is this as good
5:46 as a human or not? Have your own Turing
5:47 tests for various important parts of
5:49 your job right is the analysis report
5:51 good enough what's the error rate on it
5:53 you know if we use this to give us to
5:55 give us strategy advice how good is it
5:57 how good is it at a selection decision.
5:59 And those are questions that are not
6:01 that hard to measure, right? They're not
6:03 that technical, but they do require a
6:05 little bit of effort.
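One way to picture the in-house benchmarking described here is a small eval harness: run the model over a handful of real tasks from your own workflow, have a judge grade each attempt against a rubric, and track the error rate over time. The sketch below is illustrative only; the example tasks, rubrics, the `gpt-4o` model name, and the use of the OpenAI SDK are assumptions rather than anything specified in the conversation, and the judge's verdicts are exactly the kind of thing an internal expert should spot-check.

```python
# Minimal sketch of an internal "own Turing test". Illustrative assumptions:
# the example tasks, rubrics, model name, and OpenAI SDK are not from the episode.
from dataclasses import dataclass
from openai import OpenAI  # assumes the official openai package and an API key

client = OpenAI()

@dataclass
class Task:
    prompt: str   # a real work item from your own process
    rubric: str   # what "as good as a human" means for this task

TASKS = [
    Task(prompt="Summarize this month-end close checklist into five action items: ...",
         rubric="All five items present, no invented figures, plain language."),
    Task(prompt="Draft a reply to a customer disputing invoice #1042: ...",
         rubric="Policy stated accurately, apologetic tone, no commitments we can't keep."),
]

def attempt(task: Task, model: str = "gpt-4o") -> str:
    """Have the model attempt the task."""
    resp = client.chat.completions.create(
        model=model, messages=[{"role": "user", "content": task.prompt}]
    )
    return resp.choices[0].message.content

def passes(task: Task, answer: str, model: str = "gpt-4o") -> bool:
    """Ask a judge model for a PASS/FAIL verdict against the rubric."""
    verdict = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": (
            f"Rubric: {task.rubric}\n\nAnswer:\n{answer}\n\n"
            "Does the answer meet the rubric? Reply with PASS or FAIL only."
        )}],
    ).choices[0].message.content
    return verdict.strip().upper().startswith("PASS")

if __name__ == "__main__":
    results = [passes(t, attempt(t)) for t in TASKS]
    error_rate = results.count(False) / len(results)
    print(f"Error rate across {len(results)} tasks: {error_rate:.0%}")
```

The number-based half of this is the error rate; the vibes-based half is a human expert reading the transcripts and the judge's verdicts.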
6:06 I think that's one of the areas where products
6:08 have been largely lacking, too,
6:10 especially when you deploy agents. The
6:12 ability to test these agents and see
6:14 what knowledge they have and what
6:15 knowledge they're lacking and correct
6:17 them and run these test sets has
6:19 been really limited. Um so
6:22 as we think about designing an AI
6:24 first uh org um so you basically get a
6:28 thousand person company and you get to
6:30 redesign the org to be completely AI
6:32 native. How do you structure it? So the
6:36 first thing you say is redesign to be AI
6:37 native is hard because it wasn't AI
6:39 native right so we are we are in this
6:41 really interesting spot where we've had
6:44 basically hundreds of years of
6:46 organizational development that that is
6:47 paralleled you know industrial
6:49 revolution the communications revolution
6:51 I mean the first org chart came out in
6:53 1855 for the New York and Erie Railroad
6:55 and it solved a problem that never
6:57 existed before which is how do we
6:58 coordinate vast amounts of you know
7:00 traffic on train lines in real time
7:02 using a telegraph. And
7:04 McCallum, the guy who came up with this,
7:06 came up with the org chart as a solution
7:07 and we still use them today. 1910s huge
7:10 breakthroughs in organizing work. Henry
7:12 Ford's production lines time clocks
7:14 still use those today. Early 2000s agile
7:17 development right all of these things
7:19 broke because they all depended on there
7:20 being only one form of intelligence
7:22 available which is human comes in
7:23 humansized packages can only be deployed
7:25 with a span of control of five or seven
7:27 people. Uh the you know two pizza
7:29 problem and now we're in a world where
7:30 that isn't the case. So things have to
7:32 be rebuilt from the ground up and I I
7:34 worry a little bit that um modern
7:37 western companies have given up on
7:38 organizational innovation as something
7:39 that they do. It used to be that the way
7:41 Dow Chemical would win or the way that
7:44 IBM would win would be to come up with
7:45 new approaches to sales or new
7:47 approaches to working with organizations
7:49 and now we've outsourced that. So
7:50 enterprise software companies will tell
7:52 you how to build your company because
7:53 Salesforce sells you a Salesforce
7:54 product that tells you how you use sales
7:56 or a large scale consulting company will
7:58 come in and tell you how how your
8:00 organization should run. And now is a
8:02 time where leaders actually need to
8:03 innovate. So if I return to your sort
8:05 of core question, it has to be building
8:07 from both the idea that we're heading in
8:09 a in a trend line where humans are less
8:11 necessary in the product, and then
8:13 you have to pick whether you go
8:15 augmentation or replacement, and then you
8:18 have to start building the systems from
8:19 that fewer people um doing more
8:22 impressive work or more people doing
8:24 ever more work and trying to take over
8:25 the world together. Does does this mean
8:28 that we have sort of fewer 100x
8:31 employees or do we sort of boost double
8:34 the productivity of everyone? Do do we
8:36 create this sort of small clusters of
8:38 folks that are uh overseeing the
8:40 orchestration of of of of agents and are
8:43 you know orders of magnitude more more
8:44 more productive or is it sort of more
8:47 deployed horizontally across the
8:48 organization where a few people get
8:50 more? So I think those are key choices. I
8:53 mean one of the things I really worry
8:54 about is when I look at early
8:56 implementations what people view this is
8:58 as an efficiency technology and I bear a
9:00 little blame for that our earliest work
9:02 focused on productivity gains from AI
9:04 and I still focus on that because it
9:05 matters but I I worry a lot that at the
9:07 edge of industrial revolution what we're
9:09 seeing happen or some sort of new
9:10 revolution what we're seeing happen
9:12 right now is that companies are viewing
9:14 this like a normal technology so they
9:16 get a 25% cost savings from you know or
9:19 efficiency gain in in customer service
9:22 let's cut 25% of people, right? Like I
9:24 hear that all the time and there's a
9:26 whole bunch of dangers with that. One of
9:27 them is nobody knows how to deploy AI in
9:30 your organization other than you, right?
9:32 You can build tools and the techniques
9:33 that are really useful, but ultimately
9:34 it has to be people in the company that
9:36 figure out is this good or bad. They're
9:38 the ones with the experience, the
9:39 evidence to do it. If they're terrified
9:40 of doing that because they'll get fired
9:42 or punished for using AI or they'll be
9:44 replaced if there's an efficiency gain,
9:46 they'll never show you an efficiency
9:47 gain. Right? And then the second set of
9:49 problems around that is if we're really
9:50 in a world where we're about to see an
9:52 explosion of performance and
9:53 productivity. The idea that you should
9:54 be as small and lean as possible going
9:56 into that. Like it's like if you imagine
9:58 the early industrial revolution and you
10:00 are you're a you know a local brewer in
10:02 the early 1800s, you got steam power.
10:04 You can either fire most of your staff
10:06 and make more money per barrel of ale or
10:07 you can be Guinness and hire 100,000
10:09 people and expand worldwide. And I
10:11 really worry about too many people
10:12 taking the small path and not the big
10:13 one. And you've generally advocated
10:16 more for human augmentation and the idea
10:19 that you know the back in the days we
10:21 used to talk about bicycles for the mind
10:24 and now we might be getting you know
10:26 airplanes uh for for the minds to to to
10:28 to some extent. Um in in what ways do
10:31 you think this will be augmenting uh
10:34 human uh human intelligence? What what
10:37 because it's it's been quite
10:39 counterintuitive. What what we thought
10:41 historically was it would start with the
10:43 mundane repetitive tasks and then it
10:46 would move on to knowledge work and
10:47 coding and then the very last thing it
10:49 would take would be the creative tasks
10:51 but it's almost been like the exact
10:53 opposite um in the sense that you know
10:55 the creative tasks the knowledge work
10:57 but the mundane repetitive has been
10:59 really tricky to automate. So in what
11:01 what what ways do you think we'll
11:02 actually be be implementing this? I mean
11:04 it it is fascinating how much like
11:06 everyone you know the image of of AI
11:08 would be that if you talk if you tried
11:10 to explain the concept love it would
11:11 explode right does not compute instead
11:13 we have like these weird systems that
11:15 are super emotional and have to be
11:17 convinced to do things right like we've
11:19 actually found in prompt engineering
11:21 sometimes what you have to do is
11:22 actually just justify to the AI why it
11:24 should do a step rather than tell it to
11:25 do something it's like no this is why
11:26 it's important and you should do it
11:27 which is super weird.
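A toy illustration of that prompting pattern: the same instruction written bare versus with a short justification for why the step matters. The wording is invented for illustration; it is not a prompt from the episode or from any study.

```python
# Two versions of the same request: a bare instruction vs. one that justifies
# the step to the model. Both strings are invented examples for illustration.
bare = "List every assumption in the attached forecast."

justified = (
    "List every assumption in the attached forecast. "
    "This matters because the board will challenge any number we can't trace "
    "back to an explicit assumption, so please be exhaustive rather than brief."
)

# Either string would be sent as the user message to a chat model; in practice
# the justified version often gets a more thorough, on-task answer.
for name, prompt in [("bare", bare), ("justified", justified)]:
    print(f"--- {name} ---\n{prompt}\n")
```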
11:31 Um, and so the thing to think about with augmentation, though,
11:33 is that the jobs that we do are
11:35 bundles of many different tasks, right?
11:37 Nobody would have designed any job the
11:39 way we have as a professor, right? What
11:40 am I supposed to do? I'm supposed to be
11:41 a good teacher and come up with good
11:43 ideas and be able to have a conversation
11:44 with you and do research and run an
11:46 academic department and, you know, be
11:49 a counselor, right? No one
11:51 would want all of these jobs and a lot
11:53 of them are sort of hand-off-to-AI jobs. I
11:55 don't mind giving away grading to an AI,
11:58 right? If that helps. Like I wouldn't
11:59 mind providing more counseling support
12:01 through AI if that helps. even though
12:02 these are very human kinds of things. So
12:04 I don't think that augmentation
12:05 necessarily goes away just because it
12:07 does creative, engaging, sort of human
12:10 knowledge work tasks. At least at the
12:12 current levels we're at, it's definitely
12:13 below the expert level in these kinds of
12:15 cases: whatever you're best at, you're
12:16 probably better than AI. So the question
12:18 become augmentation level one is just
12:20 hand off stuff you're less good at as
12:22 part of your job bundle and the second
12:24 level is how do you use it to boost what
12:25 you're doing right now and we're
12:26 starting to get some good evidence for
12:27 that too. And what happens when the
12:30 systems become uh more proactive than
12:33 than than reactive? We're so reliant on
12:37 giving these systems input to what they
12:40 should give back to us and um prompting
12:43 them and and and and so on. At some
12:45 point, we should be getting systems that
12:48 are better than us at asking those
12:50 questions too and can sort of
12:52 proactively serve this to us. Is that
12:55 something you're seeing? If if you take
12:57 your domain as an example, it could go
12:58 out and do all of the research for you
13:00 and then come and say then sort of this
13:02 this matches your research taste. Here
13:04 are five papers I wrote. Um, pick the
13:08 best one. Um, have you started seeing
13:10 any applications like that? Yeah, I mean
13:12 I there's a couple things you said there
13:13 that are really important. One of them
13:16 is actually the more minor point, which
13:18 is the idea that it gives me 10 papers
13:20 to pick from, right? This idea of of,
13:22 you know, it's a hot word now, but
13:23 abundance. But we're not used to a
13:25 situation where you can just get a lot
13:26 of something and curate, right? So, one
13:28 of the things that actually matters a
13:29 lot is taste and curation that I want to
13:31 be able to pick out of a subset of
13:33 options and that still matters a lot.
13:35 That kind of taste piece or what to
13:36 pursue and start it starts to look like
13:38 management, which is not the end of the
13:40 world, right? Like management is what
13:42 most of us aspire to anyway, right? Or
13:44 at least a lot of people aspire to. And
13:46 it starts to be giving your direction
13:47 and taste where it goes. But like I
13:49 think at the end of this, we don't know
13:51 how good these systems get. And that
13:53 ultimately every question becomes
13:54 downstream of how good you think AI
13:56 gets, right? If it's good enough that it
13:58 does all of our work at the high level
13:59 of what the work you do and your
14:01 organization does and the work that I do
14:02 as a professor, then, you know, we're
14:04 sort of in uncharted territory overall
14:06 and I don't know what the answer is. I
14:08 think that real organizations um are,
14:11 you know, work much more in much more
14:12 complex ways than we think about.
14:14 They're not always aimed for efficiency
14:15 and AI remains very jagged in its
14:18 frontier of capability. So it can't
14:20 quite do the whole paper because parts
14:21 of it'll fail. But I I if I have
14:23 experience, I'll know where those fail
14:25 and can intervene and shape in those
14:27 places just like I would with a PhD
14:28 student. So I think we're going to be in
14:30 a longer world of limited autonomy than
14:32 people think where like direction
14:34 guidance, you know, is still going to be
14:36 important. I think the jagged frontier
14:39 is is probably one of the areas that's
14:41 most bottlenecking for organizations now.
14:44 It's so incredibly confusing talking to
14:46 a system that sometimes is genius and
14:48 sometimes is completely stupid and it
14:51 also makes it very difficult to uh
14:53 deploy it, uh, unattended in organizations and a
14:58 bit similarly we've had with
14:59 self-driving cars where the deployment
15:01 took a very long time because it was
15:03 both sort of superhuman in some
15:05 applications and, uh, in other
15:07 situations get quite tripped up. What
15:09 what do you think we'll we'll we'll see
15:11 with unattended agents and how they will be
15:14 deployed? Will we'll be you know
15:16 bottlenecked for another decade by the
15:19 jagged frontier or will we start
15:20 trusting these systems end to end quite
15:23 soon? I mean I think we're already in a
15:24 place where narrow agents are very good
15:26 right so the the best example of those
15:28 are the deep research agents that now
15:30 have been rolled out by you know Google
15:32 OpenAI, and, uh, X, right, um, and
15:35 Perplexity as well. They're all very good,
15:37 right and they do the narrow task of
15:39 finding information being you know
15:41 giving you answers very well and that is
15:43 a highly remunerated task right and
15:45 they're not quite there yet because they
15:46 don't have access to the kind of private
15:48 data that people need to be able to use
15:49 these systems fully but you know they're
15:51 starting to get very good at legal
15:52 research accounting and market research
15:54 and finance research and like so I think
15:56 that that kind of delegation of
15:58 a fairly complex task to narrow agents
16:00 feels very doable. I think there are
16:02 clever ways to do generalized agents
16:04 with other agents watching them that no
16:06 one's really pushing yet. Like we we're
16:08 so new into this that the that you kind
16:10 of have to make two bets, right? One is
16:13 the whole view of the frontier. When I
16:15 came up with the idea of the jagged frontier, it's
16:16 like the frontier is constantly pushing
16:18 out. So it's jagged, right? Some of
16:20 that jaggedness will stick around for a
16:22 while. For some, it doesn't matter if it's
16:23 still bad at it, because as the AIs
16:25 get better overall it still beats humans,
16:27 right and so I think part of this is the
16:29 question do you wait for the frontier to
16:31 move out and then solve the problems or
16:33 do you build around them today and I
16:35 think part of the key is doing both
16:37 right of like how do we but if you
16:38 invest too much on trying to solve the
16:40 jaggedness today as long as models keep
16:42 getting better you end up stuck with a
16:44 legacy system built around a jagged
16:47 frontier that no longer exists. Makes a
16:49 lot of sense. And one thing
16:51 that organizations find quite
16:54 tricky is discovering the AI use
16:56 cases, um, and they have some bottom-up
17:00 strategies where effectively most parts
17:02 of the organization is already using
17:04 these AI tools to some extent but just
17:06 not telling their leadership,
17:08 and then they have some top down
17:10 initiatives where they're like let's
17:11 build some AI SDRs or whatever that that
17:14 that might be. How would you approach
17:16 you know discovering these use cases
17:18 internally? What are some tactics there?
17:20 So I tend to say you need three things
17:23 to make AI work in an organization. You
17:25 need leadership, lab and crowd. So we
17:28 can talk more about leadership later but
17:30 that's the idea that like this the
17:32 organization needs to start grappling
17:33 with the questions at the CEO level
17:35 C-suite level of the kinds of things
17:37 you've been talking about here. What
17:38 does our organization do? How do we want
17:40 it to look? What experiments do we want
17:41 to do in organizational form? Like those
17:43 are fundamental questions. And by the
17:44 way, if those aren't answered, then the
17:46 incentives aren't set correctly for
17:48 people in the organization. And everyone
17:50 in the company wants to know what the
17:51 vision is. Like, you can't say people work
17:53 sideby-side with agents without giving
17:55 people an articulation of what that
17:56 actually looks like their day-to-day
17:57 job. So that has to have come from the
17:58 leadership level. And one of the
18:00 bottlenecks, by the way, has been that C-level
18:02 people have not used these systems
18:04 enough. And you can see where they do
18:05 because transformation happens much more
18:06 quickly. Um, you know, uh, Mary Erdoes
18:09 at JPMorgan, for example, has, you
18:11 know, been very public about using AI
18:13 and that's trickled down and part of why
18:15 JP Morgan does quite well on AI stuff.
18:17 And so there's this leadership piece and
18:18 then there's the crowd that you're
18:19 talking about. Everybody gets access to
18:20 these tools in some way or another. Um,
18:23 and then how do you create the incentive
18:24 so they share what they're doing, right?
18:26 Because there's at least like seven or
18:27 eight reasons why people use AI and
18:29 don't tell you. Like everyone thinks
18:30 they're a genius. They don't want to
18:31 seem like a genius right now. They know
18:32 that efficiency gains get translated
18:34 into people being fired. They don't want
18:35 that to happen. they're working a lot
18:37 less and why would they ever return the
18:38 extra value to the company itself? They
18:41 they have come up with brilliant ideas
18:43 that that they don't want to share
18:44 without you know taking a risk for it.
18:46 Like there's lots of reasons people
18:47 don't share this stuff. So you have to
18:48 align that organization to do it. And
18:50 then the issue is like this is done
18:52 through individual prompting. So to turn
18:53 those into products, to turn those into
18:55 agents, to test whether they work or
18:57 not, you need to then extract some of
18:59 those and start doing some actual real
19:01 R&D work, which doesn't mean necessarily
19:03 coding, right? Tool bases like the kinds
19:05 you build are really important for what
19:06 you're doing here, but it's also just
19:08 how do we start experimenting? How do we
19:10 take what was a basic prompt and turn
19:11 into an agentic system? How do we
19:14 benchmark that system? So you need all
19:15 three of those pieces at the same time.
19:17 And what use cases have have you found
19:20 uh over the last year? you've done a lot
19:23 of research both as AI as a collaborator
19:26 in a in a team, AI as sort of assisting
19:28 BCG consultants and and so on. Um what
19:32 type of of of use cases do you think are
19:34 inside of the the frontier now where
19:36 it's delivering meaningful value? So I
19:39 think it's really clear at this point. I mean
19:41 so there's stuff that I think, like
19:44 CSR, that people still struggle with,
19:46 right, and I think those are in some
19:47 ways riskier things: external-facing
19:49 human replacement. On the augmentation angle,
19:52 the results are really clear, right:
19:53 individuals working with AI and
19:55 especially if you have ways of people
19:56 sharing that information. Ideation: it's
19:58 absolutely useful to help you generate
20:00 better ideas working with AI in this
20:01 right there's some methods that work
20:02 better than others but that kind of
20:04 approach for supplementing work of all
20:06 kinds, right: translation,
20:08 and not just, you know, translation directly
20:10 but up and down levels of abstraction;
20:11 summarization; um,
20:14 but where you start to see the really
20:16 interesting stuff is trying to
20:18 accelerate cycles so I'm seeing a lot
20:20 more of like rapid prototyping and
20:22 development so going from like let's
20:24 take an idea then let's have the AI um
20:27 let's have the AI generate 25 ideas
20:29 let's have it create a rubric and test
20:30 those ideas then let's put simulated
20:32 people through those ideas and get their
20:34 reactions to it refine the ideas further
20:36 then let's go and and create a um you
20:38 know a prototype working prototype and
20:40 interview me about how to make it better
20:42 and then build a vibe coded first
20:43 version that is literally 25 minutes of
20:46 work at this point right with just a
20:47 command line and o3.
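The loop just described, sketched as a handful of chained model calls: generate many ideas, build a rubric, score and shortlist, simulate user reactions, then produce a build spec for a coding session. The `llm` helper, the prompts, the seed topic, and the `gpt-4o` model name are assumptions for illustration; the episode doesn't prescribe any particular tooling.

```python
# Rough sketch of an idea-to-prototype pipeline (all prompts, the seed topic,
# and the model name are illustrative assumptions, not from the episode).
from openai import OpenAI

client = OpenAI()

def llm(prompt: str, model: str = "gpt-4o") -> str:
    """Single-turn helper around a chat model."""
    resp = client.chat.completions.create(
        model=model, messages=[{"role": "user", "content": prompt}]
    )
    return resp.choices[0].message.content

seed = "a tool that helps small breweries forecast demand"

ideas = llm(f"Generate 25 distinct product ideas for: {seed}. Number them.")
rubric = llm(f"Write a five-criterion rubric for judging these ideas:\n{ideas}")
shortlist = llm(
    "Score each idea against the rubric and return the top three.\n"
    f"Rubric:\n{rubric}\n\nIdeas:\n{ideas}"
)
reactions = llm(
    "Role-play three different target customers reacting to each finalist, "
    f"then refine the finalists based on those reactions:\n{shortlist}"
)
spec = llm(f"Turn the strongest refined idea into a one-page build spec:\n{reactions}")
print(spec)  # the spec would then go to a coding model / "vibe coding" session
```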
20:50 So, like, we're in a very weird spot, but then
20:52 the organization ends up tripping that
20:53 up right because what do you do with the
20:55 fact that now we have 45 great
20:56 prototypes? Where's the manufacturing
20:58 capability to build it? Where's the output?
21:00 so that augmentation piece is pretty
21:02 good at the beginning and then research
21:03 agents are looking really interesting um
21:05 and then knowledge management agents
21:07 also seem to have a lot of value, right?
21:09 Which is like actually this is something
21:10 you forgot or thought about. Where I'm
21:12 starting to see really interesting stuff
21:14 happen is advisory. Like the idea that
21:15 we're going to give you advice that's
21:16 timely or unprompted is also really
21:18 interesting. What do you think happens
21:20 to uh the economy when we have I mean
21:24 it's effectively a renaissance where we
21:25 just have an abundance of everyone can
21:28 code, everyone can do science, everyone
21:31 can go deep into so many different uh
21:33 disciplines. uh if we um you know get
21:37 another sort of 10x uh the output from
21:41 the medical community as a as an example
21:44 will we still be bottlenecked by by by
21:46 the FDA or do you think the the system
21:48 will will will adapt and both right
21:52 systems take a lot longer to change um I
21:55 mean we've been talking to some of the
21:56 DeepMind people and they are saying
21:58 that they're getting real drug
21:59 development results in a year that look
22:01 really good right um so there'll be
22:03 pressure to adapt to those kind of
22:05 things. And I think part of the question
22:07 like part of the issue with the
22:08 uncertainty in the regulatory
22:10 environment whether for different
22:11 reasons in Europe versus the US for
22:13 example um is is that it makes it hard
22:16 to figure out where to invest to make
22:18 these kind of changes happen because we
22:20 are going there's going to be societal
22:21 bottlenecks all over the place and
22:22 there's also you know the AI only has
22:24 limited ability to act in the physical
22:25 world at this point right robotics lags
22:27 this organizational structure lags this
22:29 so how do we start thinking about that
22:30 becomes a really big deal I think part
22:32 of why people find agents so appealing
22:34 is in part the idea that they solve some
22:36 of this problem by just doing stuff so I
22:38 don't have to worry about it. But at
22:39 some point they're going to hit the real
22:40 world, right? And and at those friction
22:42 points, that is where things slow down.
22:44 On the other hand, if you can get up to
22:46 that friction point and deliver here's
22:48 seven really good-looking, you know,
22:50 like um compounds that might make a
22:52 difference, that is a huge gain anyway.
22:54 So I think that the gain will be more
22:56 spread out. Um but we just don't know. I
22:58 mean part of this also is how autonomous
23:00 these systems get, right? Which roles do
23:03 you think will uh will end up being more
23:06 useful in organizations as a function of
23:09 this? Oh, that's a tough one and based a
23:10 lot on organizational choice, right? But
23:13 I think management roles do,
23:16 like, um, roles that are sort of thinking
23:18 about systems, tend to be very
23:20 valuable because these systems are
23:22 problematic. I think experts anywhere
23:24 become valuable, right? Uh it turns out
23:26 expertise actually is really good. None
23:27 of these systems are as good as an
23:29 actual expert at the top of their
23:31 fields. we tend to measure against the
23:32 average in a field and like the AI does
23:34 really well but if you're in the top 2%
23:35 of something you're going to be beating
23:37 the AI in that field and so expertise
23:39 actually matters a lot in this space so
23:41 either deep subject matter expertise
23:43 broad expertise across many areas um as
23:46 a system leader uh or really good taste
23:49 tend to be the three things that help
23:50 you one one thing that that I've been
23:53 thinking a lot about is um you know on
23:55 on one end you could be hiring more
23:58 senior developers as an example where
24:00 you say you know, we just hire the top
24:02 2%. Those are the only folks that are
24:04 going to be, you know, make a big
24:06 difference to us. Another argument could
24:08 be actually you could hire much more
24:09 junior developers nowadays because the
24:11 junior developers will be able to
24:13 execute at the quality of much more
24:16 senior uh developers. Um, what do you
24:20 think there? Does the
24:22 democratization of expertise actually
24:24 enable you to maybe staff your team with
24:26 more junior junior talent and maybe
24:29 folks that are slightly more senior will
24:31 actually not benefit as much from from
24:33 from this technology. So there's
24:35 actually a few effects happening at once
24:37 and I think it's worth unpacking them.
24:38 Like our our our Boston Consulting Group
24:40 study was the first one to document in
24:42 the real world the idea that like there
24:44 was this performance gain for the lower
24:46 performers got the highest gain. Uh but
24:48 people don't talk as much about the why
24:50 we found out that happened which is we
24:51 measure something called retainment
24:52 which is how much of the AI's answers
24:54 the consultants ultimately
24:55 turn in as their own, and for sort of 80% of
24:58 consulting tasks the only way to screw
25:00 up was to add your own thoughts or
25:01 ideas into the AI's answer right as long
25:03 as you were just turning in the AI's
25:04 answer you did great; as soon as you were
25:05 adding your own thoughts or ideas, you did worse. So
25:07 it's basically work at the 80th
25:08 percentile. So when you say you're hiring
25:10 a junior developer and the AI makes them
25:11 better I think it's worth specifying is
25:13 it just that the the human is
25:15 substituting for the things we can't do
25:17 agentically yet which is like I'll paste
25:19 in the requirement and I'll attend the
25:20 meeting and the AI is actually doing the
25:21 work right is or is it actually bringing
25:24 people up to that level and at the same
25:26 time at this sort of really good person
25:28 level we're seeing effects where if
25:29 you're very good and you use AI the
25:32 right way you can get 10 or 100 times
25:33 performance improvement so I think you
25:34 need to think about both things right
25:36 there is this sort of substitution
25:37 effect and my view has been that a lot
25:40 of the benefit comes from having
25:43 expertise and then using AI to
25:45 supplement the areas that you're not
25:47 you're bad at, right? Like I think about
25:49 founders all the time. I was an
25:50 entrepreneur. I teach entrepreneurship.
25:51 Like entrepreneurship is all about you
25:53 being very bad at many things but really
25:56 really really good at one thing. And
25:58 your whole task as an entrepreneur and
25:59 the reason why I teach entrepreneurship
26:00 is to have those, you know, the 95% of
26:03 stuff you're bad at not trip you up,
26:04 right? Like the fact that you didn't
26:06 know you needed a business plan or that
26:07 you didn't know how to do a pitch like
26:09 because your idea is brilliant and you
26:10 know how to execute it in this market.
26:12 And so the fact that AI could bring you
26:14 to 80% in all of that is a really good
26:16 thing, right? And that is replacing your
26:18 work. But in the area where you're at
26:20 the 99.9th percentile, you get a 100
26:22 times multiplier. And I think that's the
26:24 same kind of angle. And I think the
26:25 danger is is that if you're hiring
26:27 junior people and expect them to use AI
26:29 the whole time, how will they ever
26:30 become senior? Becomes a real challenge.
26:33 What what do you think the answer to
26:34 that is? like a lot of the law firms I
26:37 speak to for example there's a a core
26:39 part of the training is the you know
26:41 basic work you do and then as you become
26:43 more senior you do more complex
26:47 legal analysis but when you look at
26:49 actually what the juniors are are are
26:52 doing I think most of that work is not
26:54 actually adding up to what the more
26:56 senior role will will be doing it's very
26:59 simple repetitive work and so on do you
27:01 think that will be an issue where people
27:03 don't grow um you know through the
27:06 hierarchy to the same extent and as a
27:08 function of that we don't have as many
27:10 folks that can step into this more
27:12 senior roles or will you just go into
27:14 the senior roles more quickly? No, I'm
27:16 I'm really worried about that, right?
27:17 Because like any other university I at
27:20 Wharton, you know, I teach really smart
27:22 people and but I teach to be
27:23 generalists, right? I don't teach them
27:25 to be, you know, I teach them about how
27:27 to do analysis. I don't teach them how
27:28 to be a Goldman Sachs analyst, right?
27:30 But then they go to Goldman Sachs or
27:32 they go to a law firm or whatever it is
27:33 and they learn the same way we've been
27:35 teaching any white collar knowledge work
27:36 for 4,000 years which is apprenticeship
27:38 right, and, you're right, they're asked
27:40 to do repetitive work over and over
27:41 again. The repetitive work, doing it over
27:43 and over again that's how you learn
27:44 expertise right you get yelled at by
27:46 your senior manager if you're, you know, at
27:47 the wrong kind of firm or else treated
27:49 nicely. Um but you're basically given
27:51 correction over and over again till you
27:52 write a deal memo. But it's not just
27:53 that you're learning to write a deal
27:54 memo it's that you're also learning why
27:57 this approach didn't work. you're
27:58 absorbing a whole bunch of stuff from
27:59 your mentor about what the goal of this
28:01 is. So we just let it happen,
28:04 right? Apprenticeship, if you have a
28:05 good uh mentor, apprenticeship is a
28:07 thing that happens. We don't spend a lot
28:08 of time training people for it. We just
28:09 sort of it's magic and some people pick
28:11 it up and then other people get fired,
28:12 right? And they might get fired because
28:14 they're bad, but they might get fired
28:14 because they got unlucky and got a
28:16 bad mentor or didn't learn the
28:17 right things. That mentorship just
28:20 snapped this summer. That chain that's
28:22 kept going for a few thousand years.
28:23 Because what happens now is if you're a
28:26 junior person, you go to a company, you
28:28 don't want to show people you don't know
28:29 something. It's because you want a
28:30 senior job. So you're going to use AI to
28:32 do everything. So you've turned off your
28:33 brain because the AI is better than you.
28:34 And every middle manager has realized
28:36 that rather than going to an intern who
28:37 sometimes, like, messes up or cries,
28:40 um you could just have the AI do the
28:41 work because it's better than an intern.
28:43 And I really worry about that pipeline
28:44 being snapped. And the problem is is
28:47 that we've viewed this as an implicit
28:48 thing. Like there's very little work in
28:50 law firms to teach you how to be good at
28:53 teaching a lawyer, right? To someone to
28:54 be a good lawyer. Instead, you hope that
28:56 you had a good mentor yourself and you
28:57 replicate what they did, right? It's why
28:59 bankers will often, you know, like 120
29:01 hour weeks is part of your job. Why?
29:04 Because that's always been part of your
29:05 job and somehow that teaches you
29:06 something. And so I think we have to
29:08 move much more formally to how do we
29:10 want to teach people expertise and work
29:12 on it. that ironically the one place we
29:13 do this really well is actually in
29:15 sports because like that's an area where
29:17 we've learned how to build expertise
29:18 right practice with a coach and yeah
29:21 we're gonna have to do the same kind of
29:22 thing in other forms of learning as
29:23 well. So how would you think about it if
29:25 you started a new university now uh for
29:28 for the intelligence era. So assuming
29:31 you know models keep getting better over
29:34 the next few decades how would you
29:36 design a university around that? So
29:38 there's a few things happening, right?
29:40 One is what should we teach and the
29:42 other is how should we teach it. I'm
29:44 more concerned about two than one. I
29:46 think there's a big thing of like we
29:47 need to teach people AI skills and I
29:49 think as somebody who's worked with
29:50 these systems a lot, you know, like
29:52 there's not that like the skills are
29:53 first of all there's like five classes
29:55 worth of skills to learn, right? Unless
29:57 you want to build an LLM, which you
29:58 shouldn't do. Um it's really like five
30:00 or six skills classes and then there's a
30:02 lot of experience. Um, and so I think
30:04 the qu it's less about teaching people
30:07 to use AI and in fact I think a lot of
30:09 the discipline stuff that we teach are
30:10 really important. We want people to
30:11 still learn to be good writers. We want
30:13 that broad knowledge, right? As well as
30:15 deep knowledge. I think universities are
30:16 well suited to that. Where we break down
30:19 is how we teach, right? And so
30:21 everybody's cheating, right? And AI
30:22 detectors don't work. And they're
30:23 already cheating, by the way, but now
30:24 everyone's really cheating. There's a
30:26 great study that shows that from the
30:27 beginning of um from like when the
30:29 internet era and social media really
30:31 kicked in in like 2007 or 2006 students
30:34 at Rutgers um who did their homework
30:37 almost all of them did better on tests
30:39 and by the time you reach 2020 almost
30:41 none of them, like 20%, were getting
30:43 better on tests because everyone else was
30:44 just cheating right so like you have to
30:46 do the kind of hard work so AI doesn't
30:48 let us skip the hard work but it will
30:50 let us with AI tutors on a onetoone
30:52 basis you can actually teach people at
30:54 their level; we can help accelerate
30:55 the learning process in real ways. And
30:57 so I'm much more interested in how you
30:59 change, and I already did this with my
31:00 classes. How do you transform how we
31:02 teach with AI becomes a really
31:03 interesting question. I don't know if
31:04 the subject matter changes and I think
31:06 we can increase scale also teach more
31:09 people but I think that some of the core
31:11 subjects stay the same and you've done
31:13 some really cool things and were
31:15 probably one of the first to actually
31:17 ask your students to cheat. What are
31:19 some other things in in which you've uh
31:22 deployed this in how you teach? How?
31:25 Everything; my classes are 100% AI-based. I
31:27 mean so I teach entrepreneurship so the
31:28 easiest version is it used to be at the
31:30 end of a class right and you know people
31:32 have raised hundreds of millions of
31:33 dollars from my class and the ones
31:34 taught by my colleagues the same class
31:36 number but um you know you would you
31:38 basically have a business plan and a
31:39 powerpoint now at the end of a week I
31:41 have people have working products right
31:43 like literally when I first introduced
31:45 ChatGPT to my entrepreneurship class um
31:48 the Tuesday after it came out um you
31:51 know the one student was really
31:52 distracted came to me afterwards said I
31:53 just built our entire product while
31:55 we were talking, right? And that seemed
31:56 entirely novel at the time; that it
31:57 would write code was, like,
31:59 shocking, right? And now we're in a very
32:00 different world for where that is. Um,
32:02 but I think that um so I I have my
32:05 students now have AI simulations they
32:07 play. They have to teach the AI
32:09 something. We have a purposely naive AI
32:11 student. There's AI mentors for all the
32:13 class material. Uh they have to build
32:15 cases with AI. There's AI um watching
32:18 what they do in a in team settings and
32:20 giving feedback or acting as devil's
32:22 advocate. So there's lots of cool stuff
32:23 you can do to supplement it, but that's
32:25 all in service of having a classroom
32:26 experience that's active and engaged.
32:29 And so I think that classrooms don't go
32:31 away, right? But but what we do in them
32:33 kind of transforms. So one thing we've
32:36 been discussing is is the organizational
32:38 uh design and what it should be
32:40 structured like. Should companies hire a
32:42 chief AI officer who sort of oversees
32:44 all of the internal deployments? Should they
32:47 have a model where they deploy someone
32:49 in each team to figure out the the use
32:52 cases. What do you think like how how do
32:54 you structure your AI org? So I worry
32:57 a little bit sometimes on the chief AI
32:59 officer thing for the same problem that
33:00 everybody is having which is everybody
33:02 wants answers and like I talk to all the
33:04 AI labs on a regular basis I know you
33:05 guys do too you've been doing this for
33:07 much longer than most people in the
33:08 space and you know the horrible
33:10 realization you have fairly
33:11 quickly is that nobody knows anything
33:13 right it's not like the labs have an
33:14 instruction manual out there that they
33:15 haven't handed to you it's not like that
33:17 there's like more data than what I'm
33:19 sharing with you guys about this or that
33:21 I share online like there's no secrets
33:22 right there. Like there isn't everyone's
33:24 like desperate to copy somebody else and
33:26 there isn't. So like when you say hire a
33:28 chief AI officer, how are they going to
33:29 have any more experience in the last two
33:32 years than anyone else did? No one
33:33 thought LLMs would be this good. Like
33:34 you guys were there before almost anyone
33:36 else that like that gave you a year of
33:37 head start, right? Like this is a weird
33:39 place we're in. So there isn't someone
33:41 you can hire who's like an expert and
33:42 they often I mean one of the major
33:44 problems of AI in organizations is that
33:46 AI meant something very different from
33:48 2010 to 2022, and that is still important by
33:51 the way: large data, you know, going
33:53 ahead and actually boosting everything
33:54 like still worth doing, right? But like
33:56 that's a very different beast. So a
33:58 chief AI officer is kind of a hard hire.
34:00 I really feel strongly that
34:02 organizations have the expertise they
34:03 need to succeed internally because the
34:06 only people who know how to use AI will
34:07 be the people who are experts. It's very
34:08 easy for someone who's done a job a
34:10 thousand times to, you know, run a model
34:12 and figure out whether it works or not.
34:14 And in fact, in our BCG study, we have a
34:16 second paper that shows that junior
34:17 people are much worse at using AI than
34:19 senior people, which is not something
34:20 people think about usually. They're
34:21 like, "We need the digital generation to
34:22 come in." Turns out not to be true
34:24 because junior people produce a memo and
34:26 they show that memo and you're like,
34:27 "It's a memo. It's great." And you're
34:28 like, "Well, I've looked at this,
34:29 I've done this, for 20 years. Here's
34:31 seven things the memo doesn't do well,
34:32 right? So expertise and knowledge
34:34 matter. So I think it's less about
34:36 embedding people in teams." And then we
34:38 don't even know what makes someone good
34:39 at AI. So what I tend to do is suggest
34:41 the crowd and lab need to be linked
34:43 together. So what the crowd does is
34:44 you're not just surfacing you know AI
34:47 use cases. It basically by the way in
34:49 almost every organization you max out at
34:51 20 30% of people using your AI model
34:54 internally and everyone else is either
34:55 not using it or they're cheating and
34:56 using some other AI of their own
34:58 because they don't want to show you what
34:58 they're doing. But you get like 20 30%
35:00 of your of your organization using it.
35:02 And then you'll find like one or two% of
35:04 your organization is just brilliant at
35:05 this stuff. They're amazing at it. Those
35:08 are the people who will be able to lead
35:10 you in your AI development effort. I
35:12 don't know who they're going to be at
35:13 first, right? And you won't know either,
35:14 but they will emerge. And then the
35:16 danger is they're making so much profit
35:18 for you on the line that you don't want
35:20 to pull them off the line. But those
35:22 become the people that become the center
35:23 of your lab and figure out how to use
35:25 it. So I really think building internal
35:26 effort is the right way. And it's very
35:28 hard for me to recommend hiring a bunch
35:30 of people for AI when we don't know what
35:32 makes someone good or bad at this. And
35:34 your organizational context actually
35:35 matters here. And what how do you think
35:38 we set up the incentives? So if you have
35:40 the experts in each domain and uh you
35:43 really hand it to them to figure out how
35:45 to deploy AI and effectively automate
35:48 away their own role. How do you create
35:51 the right incentives for them to do
35:52 that? And that's why the leadership leg
35:54 matters so much, right? So there's a few
35:56 things you need to do. One is this is
35:58 easier for companies with good culture,
36:00 right? If the CEO says and in growth
36:02 mode, right? If the CEO can, if you
36:03 trust the CEO or the founder and they
36:05 say things like, "Listen, we're not
36:06 going to fire anyone because of AI.
36:08 We're going to expand what we can do.
36:09 We're going to make this work for
36:10 everybody and people are incentivized to
36:11 do it, you're in a much easier spot than
36:13 if you're a large mature organization
36:15 that has a tendency to use it to
36:17 cut people, right? People will know the
36:19 difference. So, you have to acknowledge
36:20 this to start off with, right? Like, if
36:22 this is going to be a threat to people's
36:23 jobs, people want to know that and you
36:25 have to start thinking through what you
36:26 want to say. And then incentives can
36:28 often be pretty crazy in these
36:29 situations. I've talked to one company
36:30 that gave out $10,000 cash prizes at the
36:32 end of every week to whoever did the
36:34 best job automating their job. Um, and
36:36 you save money versus a typical IT
36:37 deployment just shoving over a suitcase
36:39 full of cash. I've talked to another
36:41 company that um before you hired anyone,
36:44 you needed to show, um, you needed to spend
36:46 two hours as a team trying to do the job
36:48 with AI and then rewrite the job
36:49 description around the fact that AI
36:51 would be used or that you had to spend a
36:53 few when you proposed a project, you had
36:54 to try using AI to do it and then
36:56 resubmit the project proposal as a
36:58 result. So like you can incentivize
37:00 people in lots of different ways but
37:02 that clarity of vision matters so much
37:04 right if you say your job in four years
37:06 will be working with AI to do something
37:07 people are going to be like well what
37:08 does that mean like am I sitting at home
37:10 you know giving instructions to an agent
37:12 am I in a room doing things are there
37:14 less of us so that vision actually
37:16 matters and I find way too many
37:17 executives just want to kick that down
37:19 the road and say AI will do great stuff
37:21 why would I ever want to share my
37:22 productivity benefits with the
37:23 organization without being compensated
37:25 and so starting with that kind of piece
37:27 is really important
37:28 So another piece of research you did was when
37:30 when AI is embedded and collaborating
37:33 like more like a colleague and you
37:34 studied folks that were working
37:37 individually, folks that were working in
37:39 teams, folks that were working
37:40 individually with AI, folks that were
37:42 working in teams with
37:43 AI. What did that sort of
37:46 teach us about how this might be
37:48 embedded into teams? So we did this big
37:50 study with my colleagues at MIT and
37:52 Harvard, and the University of Warwick, uh, of 776
37:55 people at Procter & Gamble, the big
37:56 consumer products company and um like
37:59 you said they were either teams of two
38:01 cross functional teams or individuals
38:03 working alone and then working with AI
38:04 and teams are alone. First off we found
38:07 individuals and this is all real job
38:09 tasks right not just like innovation
38:10 tasks. We found that individuals working
38:12 alone with AI performed as well as teams
38:15 um and um which was a pretty impressive
38:18 kind of boost and were actually happier
38:20 too as a result of working with it.
38:21 Like they got some of the social
38:22 benefits of of working with these
38:23 systems and produced high quality
38:25 results. Um and we also found that
38:28 the teams that worked with AI were much
38:29 more likely to come up with
38:31 really breakthrough ideas. We also found
38:33 that expertise tended to even out. So if
38:36 you sort of mapped how technical a
38:37 solution was and you had technical
38:38 people in the room they'd produce highly
38:40 technical solutions; you put marketing
38:41 people in, they'd produce highly marketing
38:43 solutions as soon as you added AI the
38:45 solutions were across the board so they
38:47 were much more even so it really turned
38:49 out like this was a good supplement to
38:51 kind of human work um you know and again
38:54 this was pretty naive like we gave them
38:55 a bunch of prompts to work with but a
38:57 lot of it was them just kind of playing
38:58 with these systems back and forth so you
39:01 know this leaves the same problem that
39:03 we've had before which is you need to
39:05 make some decisions like the the typical
39:07 company that sort of sits back and waits
39:09 for someone else to provide a solution
39:10 to them is going to be less well off
39:13 than if you start experimenting now and
39:14 figure out what works and what doesn't.
39:16 And what do you think will be the
39:17 interface for collaboration? Will
39:20 they just be embedded natively into
39:23 our Google Docs and our Slack, and we'll
39:26 just communicate with them the way
39:28 we communicate with all of our
39:30 colleagues, or do you think there will be
39:32 something that's more an agent-native
39:35 interface where we collaborate with
39:37 them? I mean, I think an agent-native
39:39 interface makes a lot more sense, you
39:40 know, one built around teams rather
39:42 than having each document have a
39:44 co-pilot on it. I want something to
39:45 maintain state across the various tasks.
39:48 I mean, we're close, right? Like, I've
39:49 got my phone here and, if we want to,
39:51 I can even do
39:53 it: we can turn on, you know,
39:55 ChatGPT's agent. It can look around us and
39:57 give feedback on what we're doing in the
39:58 world. And I think that's
40:01 a promising way forward. And again, it's
40:04 about that redesigning work. I think
40:05 agentic systems are almost less
40:07 interesting because they automate
40:09 work than because they can bring together
40:10 many threads of work. And you mentioned
40:13 one example a while ago, where
40:16 I think the AI hallucinated a
40:19 quote from you and you actually thought
40:20 that was your own
40:22 quote. When do you think we'll have
40:24 systems doing, you know, sort of Ethan Mollick-level
40:27 research, and what's required for
40:29 that? Is it just sort of feeding them
40:31 more of your context? Do
40:34 you think we'll get there quite soon?
40:36 And what will that mean? Would that
40:37 mean that you're basically just using
40:39 your taste to select among the best
40:41 papers that it's generating? I mean, I
40:43 think a lot of this is already possible
40:45 with the levels of models we have. I
40:46 mean, there's a paper that shows
40:48 o1-preview, which is not even a
40:49 cutting-edge model at this point. You know, the
40:51 hallucination rate on the New England
40:53 Journal of Medicine case studies went
40:55 from like 25% in previous models to like
40:57 0.25%. Like the hallucination problem
41:00 starts to drop when you connect data
41:01 sources, when you have smarter models. I
41:04 mean, it's still there, but like you
41:05 mentioned at one point, you know,
41:07 I used AI in classrooms, and my first
41:09 classroom policy was you could use AI in
41:11 class, and that was great for three
41:13 months, right? When ChatGPT 3.5 came out, my
41:15 students were smarter than ChatGPT; it
41:17 produced much more obvious errors, and
41:19 I let them use AI for anything they
41:20 wanted, because if they didn't add their
41:22 own thinking they would get like a B,
41:23 right? Like AI was not capable of more than
41:25 that. Then GPT-4 came out and does as well as my
41:27 students, you know, who aren't putting a
41:28 huge amount of effort in. So I think
41:29 we're in the same kind of boat here,
41:31 which is these systems are very good, and
41:32 as people who build agentic systems, I
41:34 think you're probably realizing what
41:36 I've long realized, which is
41:38 they're capable of a lot
41:39 more when you start thinking about them
41:40 agentically. And you know, Google's been
41:43 doing some stuff building AI labs.
41:45 There's been work out of Carnegie Mellon
41:46 doing the same sort of stuff. I actually
41:48 think it's more willpower than anything
41:49 else to build a research system that
41:51 does interesting work. And it's like so
41:53 many other areas in AI where I'm like
41:54 wow we've already shown that this can
41:56 work really well as a tutor. Where are
41:58 the thousand tutors, you know, that are
42:00 actually well done as opposed to just
42:01 prompting the AI to be a tutor? Where
42:03 are the thousand science applications?
42:05 Where are the internal training systems?
42:07 These are all doable right now. Like, it's
42:08 really just a matter of doing it. What has been
42:10 some of the most surprising uh things
42:12 you've gotten to work recently? What
42:14 have you seen in the latest generation
42:17 of the models? Things that didn't
42:20 work previously that are now starting to
42:21 work really well? I mean, so with the
42:24 latest versions of say Gemini, the
42:26 hardest thing you have to do as an
42:27 academic is writing what's called a
42:29 tenure statement. So you do this
42:31 hopefully once in your life and you have
42:32 to write a statement where you go up for
42:33 tenure. And what you have to do is take
42:35 all of the academic work you've done,
42:36 which is often 15 years of work, very
42:38 complicated, and boil it down to a few
42:40 themes and write an essay sort of about
42:41 why your research has these themes. I
42:44 was able recently, with the new Gemini
42:45 models, to dump in all of the academic papers
42:48 I've written, because the context window is huge,
42:50 and have it develop those themes, and it
42:52 found two of the three themes I ended up with,
42:53 which took me two months to write on my own, at
42:55 a fairly high analytical level, right?
42:57 Like, um, you know, or on the more fun
42:59 version, I can now throw in any academic
43:01 paper I've ever written and say turn
43:02 this into a video game and get a good
43:04 working video game out of it. Um,
43:06 you know, I vibe coded some 3D games
43:07 recently, which was like, I can't code, um,
43:09 and you know, built pretty good
43:11 working systems. So I mean, like,
43:13 threshold after threshold kind of keeps
43:15 falling, and I'm surprised on a
43:18 regular basis, like I can't believe how
43:19 much these systems can do. And how
43:21 should we be thinking about this in
43:23 companies? Is this the equivalent to, like,
43:25 deploying more IQ into the system, is it
43:28 deploying more labor into the system, or
43:31 how should I view this as a company? So
43:33 there's a tactical and then there's a
43:35 philosophic view. On the philosophic view,
43:38 we don't really know, right? Like,
43:39 certainly intelligence and
43:41 labor are
43:43 just sort of two very simple
43:45 inputs, right? But also, what does it
43:47 mean to get better
43:48 advice? What does it mean to get better
43:50 mentoring? What does it mean to have a
43:51 second opinion, right? Um, and on the
43:54 tactical side, I think that the thing to
43:56 aim for is to be maximalist. I think too few
43:58 organizations are maximalist. Just push
44:00 the system to do everything. If it
44:01 doesn't do it, great. You now have a
44:03 benchmark for future systems to test and
44:05 it might actually just do all the stuff.
44:07 If it does all the stuff, you've learned
44:08 something valuable. So I really worry
44:09 about the incrementalist sort of like
44:11 let's summarize our documents like
44:13 that's fine but it could do that a long
44:14 time ago. Why are you having that
44:16 document summarized? Let's just have it
44:17 do the thing as opposed to the
44:19 intermediate step. I think that's a
44:21 really interesting uh point because a
44:24 lot of companies now are like let's
44:26 start with a small proof of concept and
44:28 then we scale up and then it's sort of
44:29 six months in and they get stuck in that
44:31 proof of concept and never
44:34 quite scale. Whereas you see
44:36 others take the approach of let's
44:38 actually deploy it everywhere, get
44:40 everyone access to this and then double
44:41 down on the use cases that work really
44:43 well. But even that isn't maximalist
44:45 enough, right? Like you're absolutely
44:47 right because the problem with the use
44:48 cases that work well is they worked well
44:49 given the limits of the system and given
44:51 what people were able to do at that
44:52 point. And building apps is often
44:54 the worst kind of angle, because you end
44:55 up with a semi-successful product
44:57 that you built around
44:59 the limitations of Llama 2 or whatever
45:01 it was. I mean, we can
45:03 talk about the problem: one of the
45:04 problems IT teams have with being the
45:06 nexus of AI deployment is that IT is very
45:09 interested in low latency and low cost,
45:11 right? And it turns out that low latency
45:13 and low cost are the exact opposite of
45:15 high intelligence in these models. So
45:17 there are times where you want to be low
45:17 latency, low cost, but there's also
45:19 times where it's like I'm willing to pay
45:20 15 cents for a really smart decision or
45:23 new chemical, right? Like that's a
45:25 reasonable amount to pay. Um, and so
45:27 that balancing act can be
45:28 really hard, because people tend to build
45:30 off of cheap small models and then they
45:32 get stuck later on, right? Which is why
45:34 being agnostic is so important, but also
45:36 updating. So even when people do this,
45:37 they often don't find the maximalist
45:39 approach. So that's where the lab comes
45:40 in. You really need people building
45:42 impossible things. And what's the
45:45 difference between using it as a centaur
45:47 versus a cyborg, and what do you
45:49 recommend there? So the centaur
45:53 definition, you know, Garry Kasparov
45:55 used that term first. The idea that
45:57 I kind of took from that was the half
45:59 person, half horse, right? The idea that
46:00 you're basically
46:02 dividing up the work with AI, and I know
46:04 Kasparov's definition
46:06 was vague around that, right, but
46:08 that's how we view this, and this is sort
46:09 of the beginning thing: like, I hate
46:10 writing emails, I'm good at analysis, I'll do
46:12 the analysis, you do the emails. Cyborg
46:14 work is more blended, right? So my book is
46:16 a cyborg task, and you know the systems have
46:18 gotten much better since then, but at
46:20 that time it was very bad at writing. I'm
46:21 a very good writer, I think, or at least
46:23 I'm proud of my writing, so it did
46:24 almost no writing. But writing books is
46:26 terrible, and so all the things that made
46:28 writing books terrible, it helped me with. I
46:30 got stuck on a sentence: give me 30 ways
46:31 to end the sentence, and I'd pick one. Read
46:33 this chapter and make sure that I'm, you
46:35 know... Like my Substack: I have the AIs
46:37 read my Substack all the time, two or
46:39 three of them, and give me feedback. I
46:41 rarely use it for core
46:43 writing, but I absolutely get feedback
46:45 all the time from it and make changes as
46:46 a result. Read these academic papers and
46:48 make sure I'm citing them properly. Like
46:50 those sorts of use cases are where the
46:52 power really comes in. And there was
46:53 this other study where the folks
46:56 that got advice from the AI
46:58 ultimately ended up being more
47:00 productive, but it was largely
47:01 benefiting the more senior
47:04 folks and not as much the
47:07 lower performers, in that they couldn't
47:09 quite internalize the
47:11 advice. What does this mean for advice?
47:13 If everyone is sort of getting, you know,
47:16 your advice on how to deploy AI in their
47:19 organizations, what will that
47:20 mean for society? So, I mean, I
47:23 think that part of the thing is it's not
47:25 always the same advice, right? Like the
47:26 AI is good at context. I think the study
47:28 you're talking about is the Kenya study
47:29 of entrepreneurs, which was this great
47:31 controlled study where you only got
47:32 advice from GPT-4. They couldn't get it
47:33 to make products for them or anything
47:35 else. And what they found was that
47:38 high performers got, I forget
47:40 what it was, an 8 or 13% improvement in
47:42 profitability, which is, by the way,
47:43 insane for advice. Like if I could do
47:45 that with my students and just give them
47:46 advice and get a 13% profitability
47:48 boost, that's amazing. And again,
47:50 remember people are jagged too. So like
47:52 you're going to need
47:53 different advice than someone else. So
47:54 even if you're getting advice from the
47:56 AI, it's going to be about the thing
47:57 you're weakest at, not the thing you're
47:58 strongest at. And the low performers did
48:00 worse because their businesses were
48:01 already struggling. So they couldn't
48:02 implement the ideas. So I think it's
48:04 very much true that the advisory role,
48:07 the second opinion role, there's some
48:09 danger that it shapes us all in the
48:10 same direction, right? We find this in
48:12 ideation too. The AI has a bunch of
48:14 themes. If you've worked with these
48:15 models, you know that for example like
48:17 GPT-4o loves to generate ideas that have
48:20 to do with crypto. It loves to generate
48:22 ideas that have to do with AR and VR and
48:24 it loves environmentally friendly ideas,
48:25 right? Like, just from the way its
48:27 post-training worked, I assume, and it just
48:28 churns these out. But we found in
48:30 some of our other work that if you
48:31 prompt it better, you can get as diverse
48:32 ideas as a group of people. So part of
48:34 this is about like what does the adviser
48:36 do for you? Maybe you want four or
48:38 five advisors. You don't want Ethan
48:39 Mollick to be the adviser, or you want me,
48:41 but you also want Adam Grant and you
48:42 also want Garry Kasparov, and that can
48:45 be valuable too. And if I take
48:48 the case of abundance here and
48:52 prompt you to give 30 examples of
48:56 good things companies are doing
48:58 deploying AI, can you list as many as
49:00 possible? You
49:04 mentioned the example of, you know,
49:06 handing out cash for the folks that are
49:08 deploying it the best. What are some
49:11 of these crazy ideas you've seen that
49:12 actually work really well? I mean,
49:14 so there's tons of them. I
49:16 can't give you 30 and I can't even talk
49:17 about all of them unfortunately because
49:18 I'm not allowed to but um you know
49:21 certainly, right, the easy stuff is all
49:23 your coders use these, you know, but
49:25 then change the reward systems
49:27 around doing that. Um, so in every ideation
49:30 session you stop in the middle of
49:31 meetings and you ask the AI how it's going
49:33 so far, or whether or not you should
49:35 continue the meeting at all, and then
49:37 drop out otherwise,
49:39 if the AI thinks the meeting is done. Uh,
49:41 even in physical meetings, just stopping
49:42 and having a conversation with the
49:44 AI and thinking about what they're doing at that stage.
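A minimal sketch of that mid-meeting check-in, assuming the OpenAI Python SDK and an illustrative model name; the prompt wording and the idea of pasting in the running transcript are one possible framing of the practice described above, not a specific tool from the conversation.

```python
# Hypothetical sketch: pause a meeting, show the AI the transcript so far,
# and ask whether the meeting should continue. Assumes the OpenAI Python SDK
# (pip install openai) and OPENAI_API_KEY in the environment; the model name is a placeholder.
from openai import OpenAI

client = OpenAI()

def meeting_checkin(transcript_so_far: str, goal: str) -> str:
    """Ask the model how the meeting is going and whether to keep going."""
    prompt = (
        f"Meeting goal: {goal}\n\n"
        f"Transcript so far:\n{transcript_so_far}\n\n"
        "How is this meeting going? Have we met the goal? "
        "Answer CONTINUE or END, then give two sentences of reasoning "
        "and one suggestion for what to do next."
    )
    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder; any capable chat model
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

# Example: paste in whatever notes or transcript you have at the halfway point.
print(meeting_checkin("Alice: ... Bob: ...", "Pick three ideas to prototype this quarter"))
```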
49:45 Um, I have
49:49 seen cases where
49:50 everyone gets an AI consultant or
49:52 adviser that they ask about
49:54 strategy decisions at every point.
49:57 Um there is some really interesting
49:58 stuff being done on training, right? So
50:00 ask the AI to simulate a training
50:01 environment or play through that one way
50:02 or another. Turns out to be really cool.
50:04 I don't know. I'm not going to be able
50:05 to hit 30 here in the room with you. But
50:07 I think that the AI Ethan probably could.
50:10 Absolutely. And that's how you know
50:12 I'm real, is that I'm not doing a very
50:14 good job. And I'm kind of worried
50:16 now. You're not even responding to my
50:17 prompts. You have enough footage of me
50:18 that I'm desperately worried
50:20 that you're going to get much better
50:21 answers. Yeah, we'll definitely try
50:23 what the AI version will
50:25 do. And what do you
50:28 think is the best case scenario?
50:30 So assuming everything goes right,
50:32 this gets deployed into society. What
50:34 do you think is the best case
50:35 scenario a decade from now? I mean, so I
50:37 do think that... let's
50:40 leave aside an ASI kind of
50:42 world where we're all watched over
50:44 by machines of loving grace, right?
50:45 And let's just focus on, sort of, I
50:48 think, what happens. I
50:52 mean, the problem is a best case
50:55 also requires policy decisions because
50:57 there is clearly going to be employment
50:58 impacts from this. We don't know what
50:59 form they're going to take. It's very
51:01 possible that everyone gets more jobs,
51:02 but we need retraining. I don't know
51:04 what the future holds in that case. So,
51:06 there has to be some policy piece. That's
51:07 kind of missing right now. But I
51:09 think that there is a place where your
51:10 jobs get more satisfying because you
51:11 have to do less grunt work, where we
51:13 have a world where productivity is now
51:15 flowing in fun ways rather than just,
51:17 like, the productivity office asking are you
51:18 typing enough stuff? But like, if you're
51:20 architecting a system of agents that's
51:22 building stuff for you, suddenly this
51:24 feels like a very different kind of
51:25 world you're in. It's much more
51:26 satisfying, right? um where you work
51:28 less and more stuff comes out, and you
51:30 add your humanness at the key elements,
51:32 so that, you know, the people who still have
51:33 a sense of style or approach or
51:36 perspective produce very different work
51:37 than somebody else, so you have
51:39 differentiation, variation. I mean, that
51:41 kind of looks like a world where AI gets
51:42 five to 10 times better than it is
51:44 right now but doesn't get beyond that,
51:46 you know which is sort of a weird thing
51:47 to root for in some ways but that's the
51:49 easiest way to imagine a you know a kind
51:51 of outcome that feels like the world of
51:53 today if these systems get a lot smarter
51:55 then it's like well why do you come into
51:57 work, when it's like, we could sit here
51:58 and we've autogenerated this
52:00 video. I feel like in 5 years we could come
52:02 back and, like, recreate the people, make it
52:04 3D, put us in a volcano, and have us
52:07 talk individually to everybody in their
52:09 language and voice, right? We're close
52:11 to that. So that starts to change jobs
52:13 much more dramatically. And what are
52:15 some beliefs that are in the field
52:17 currently that you really disagree
52:19 with? So I think that there is a huge
52:23 focus, and I understand the safety focus,
52:25 but I think there's a huge focus that we,
52:28 and there's a paper that just proves it,
52:29 need to either focus on
52:30 existential risks or not. And I think
52:32 that there's a lot on existential risks
52:33 and it's worth thinking about, but that
52:35 worries me a lot less than agency over
52:38 the decisions we're making right now.
52:39 And I worry that people, by treating
52:41 AI as this technological thing, and
52:43 we're even having this discussion here,
52:44 treat it like a steamroller. That's
52:46 not actually how this is, right? Like, we
52:48 have to figure out how this technology
52:49 is used and shaped, and that's important,
52:51 and everybody who's at
52:53 this event gets to make decisions, right,
52:55 about how AI is used and shaped, and
52:57 those will in turn shape where AI goes.
52:59 So I really worry about this lack of
53:00 agency kind of approach, which is like,
53:02 the AI will do things to us. We get to
53:04 make choices, and we can make those
53:06 choices that defend what we think is
53:08 important to being human, what our customers
53:10 need, what society needs. And so what
53:13 concerns me is avoiding that kind of
53:14 conversation. Um, I also think that a lot
53:17 of people in the technical field of AI
53:18 don't understand how actual
53:19 organizations work and that they're
53:20 messier and that you know even super
53:23 smart agents won't necessarily change
53:24 how companies work overnight right which
53:26 is why I always struggle with five or 10
53:28 years: we don't know when the change
53:29 happens, and it will happen in bursts. But,
53:32 you know, there's a naive view
53:34 sometimes. Like, I have a
53:36 sister who's a Hollywood producer,
53:37 and every time I hear that AI will
53:39 replace Hollywood I'm like you don't
53:40 understand how much work goes into a
53:41 Hollywood film and some of that will
53:43 disappear. In fact, they're using AI
53:45 to accelerate performance, is
53:46 one fun example. So, she's
53:48 made a movie with Michelle Pfeiffer, and
53:51 when they have to do
53:53 test audio dubs, they now have a fake
53:54 Michelle Pfeiffer voice that they can
53:56 test the audio dubs with, but they can never
53:58 use that for actual theater crowds
54:00 because there are good union protections
54:02 around the actor. So, it's a test bed to
54:04 do experiments, but Michelle Pfeiffer
54:06 still has to come in and record in her
54:08 human voice what she wants to do or
54:09 not. So, I think we can build a world
54:11 where we defend that humanness, but we
54:13 have to make choices to do it. And if
54:15 you had to prompt a model to
54:19 basically make all of your decisions
54:21 from now on, what would you
54:23 prompt it? Okay, so I'd probably do
54:25 something like, you know, first of
54:27 all, I'd like to give it a lot of
54:28 context, right? Something you guys know
54:29 a lot about: about me and my choices.
54:32 So paste in a couple million
54:33 characters of stuff. But I would
54:35 probably say, you know, the good thing is I
54:36 have this advantage-disadvantage, which
54:38 is I've written enough that the AIs care
54:40 about me, that they have opinions about me.
54:42 Um, and so if I ask it to act
54:45 like Ethan Mollick, I get pretty good
54:46 answers. It tends to be a little
54:47 overenthusiastic, and it likes hashtags
54:49 for some reason, which I don't
54:51 recommend, and really loves emojis, and
54:53 I'm not really an emoji person. So I
54:54 think it thinks I'm more millennial than
54:56 I am. But aside from that, if I was
54:58 asking it, I'd be like, okay, so you
55:00 know, taking on the persona, realizing
55:01 that you're working for Ethan Mollick to
55:03 help make decisions, and knowing that,
55:04 you know, here are four or five things
55:05 that he values that are very important:
55:07 before making a decision, I want you to
55:09 go and pick four or five possible
55:12 options that we might follow in the
55:13 decision. At least a couple of them
55:14 should be very radical. Then I want
55:17 you to compare those options against
55:19 each other and for each one give
55:21 two or three simulated outcomes. Then I
55:23 want you to create an expedient
55:25 version of Ethan and a thoughtful
55:27 version of Ethan. Have them argue over
55:28 which path is best. Then I want you to
55:30 give me a set of pros and cons for each
55:32 of them and then select the best of
55:34 those. So a little chain of thought,
55:36 a little perspective taking.
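A rough sketch of that staged decision loop, assuming the OpenAI Python SDK; the model name, persona text, value list, and prompt wording are illustrative placeholders for what's described above, not Mollick's actual setup.

```python
# Hypothetical sketch of the staged decision prompt described above.
# Assumes the OpenAI Python SDK (pip install openai) and OPENAI_API_KEY set;
# the model name, persona context, and values are placeholders.
from openai import OpenAI

client = OpenAI()

PERSONA = "You are a decision aide for Ethan Mollick. Context: <paste writing/context here>."
VALUES = ["value 1", "value 2", "value 3", "value 4"]  # the four or five things he says matter

def ask(prompt: str) -> str:
    """One call to the model with the persona as the system message."""
    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder; any capable chat model
        messages=[{"role": "system", "content": PERSONA},
                  {"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def decide(decision: str) -> str:
    values = "; ".join(VALUES)
    # 1. Generate four or five options, at least a couple of them radical.
    options = ask(f"Decision: {decision}\nValues: {values}\n"
                  "Propose four or five options; at least two should be very radical.")
    # 2. Compare the options and give two or three simulated outcomes for each.
    outcomes = ask(f"Compare these options against each other and give two or three "
                   f"simulated outcomes for each:\n{options}")
    # 3. Have an 'expedient' and a 'thoughtful' version of the persona argue over the best path.
    debate = ask(f"Create an expedient version and a thoughtful version of the persona. "
                 f"Have them argue over which path is best, given:\n{outcomes}")
    # 4. Pros and cons for each option, then select the best.
    return ask(f"Given this debate:\n{debate}\nGive pros and cons for each option, "
               f"then select the best one and explain why.")

print(decide("Should we become a remote-first company?"))
```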
55:37 It's a very good one. We should
55:39 try it. One thing I actually
55:41 did a couple of years back is I
55:43 trained one on everything that Steve
55:45 Jobs had ever said, because it was very
55:46 interesting to get one that was founded
55:50 in his principles. So during COVID, for
55:52 example, I asked it, you know, should
55:54 we go remote? Should we become a
55:56 remote-first company? And Steve
55:59 replied to me: no, 95% of all
56:01 communication problems are solved by
56:03 putting people in the same room. Always
56:05 co-locate teams. And it's quite
56:07 interesting: if you ground it in a
56:09 person's writing and so on, it gets a
56:12 specific point of view that's not like
56:13 the average of the internet. Yes.
56:15 And that's what's so important,
56:17 going back to that idea of where you get
56:18 advice from, and that's why
56:20 companies are important: your
56:21 founder can have an influence on this.
56:23 Your principles: if you give the AI a
56:25 manual of this is what we believe, that
56:27 will get very different results than
56:28 for someone who doesn't. I think the idea of
56:30 viewing this as this, you know, universal
56:32 mind that is always giving you the right
56:34 answer is wrong. It's giving you opinions and
56:35 points of view, and that is a shapable
56:37 thing. And if you believe your
56:39 principles about the world are right,
56:40 giving those principles to the AI to
56:42 have it help you execute those
56:43 principles is a lot better than just
56:44 letting it tell you stuff.
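A minimal sketch of that kind of grounding, assuming the OpenAI Python SDK; the principles text and model name are placeholders, and this simple system-prompt approach is one way to illustrate the idea, not how the Steve Jobs model described above was actually built.

```python
# Hypothetical sketch: give the model a "manual" of a founder's principles as the
# system message, then compare its answer with an ungrounded answer to the same question.
# Assumes the OpenAI Python SDK and OPENAI_API_KEY; the principles below are placeholders.
from openai import OpenAI

client = OpenAI()

PRINCIPLES = """We believe:
1. Co-locate teams whenever possible.
2. Say no to a thousand things; focus is everything.
3. Product quality beats shipping speed."""  # illustrative, not a real manual

def advise(question: str, grounded: bool) -> str:
    messages = []
    if grounded:
        # The "manual": answer only from the stated principles.
        messages.append({"role": "system",
                         "content": f"Answer strictly from these company principles:\n{PRINCIPLES}"})
    messages.append({"role": "user", "content": question})
    resp = client.chat.completions.create(model="gpt-4o", messages=messages)  # placeholder model
    return resp.choices[0].message.content

question = "Should we become a remote-first company?"
print("Ungrounded:", advise(question, grounded=False))
print("Grounded:", advise(question, grounded=True))
```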
56:46 One thing I find quite interesting is that the
56:48 systems are yet to be optimized for
56:51 engagement. So we basically just
56:53 train them to predict the next
56:55 token. But if we know anything about
56:58 the sort of consumer services, they'll
57:00 very soon start evolving to engage
57:04 us in deeper conversations. You can
57:07 imagine a bot deployed in our
57:10 organization, and we want to maximize
57:12 engagement with it, and it starts
57:14 enticing people and asking them
57:15 interesting questions and so on. What do
57:17 you think will happen once
57:18 these systems get optimized for
57:21 engagement, which hasn't really been the
57:23 case yet? Yeah, I'm nervous about that.
57:25 Um, I think that is starting
57:27 to play with fire and the bigger labs
57:29 are starting to realize they can do
57:30 that, right? I think if you kind of look
57:31 at the trend of OpenAI's stuff, it's
57:34 become more casual, more chatty. Um,
57:36 there's a fun incident where the new
57:39 Llama 4 model was just released and it
57:41 was top of the leaderboards. Uh, and
57:43 then it was revealed that the version
57:44 that was at the top of the
57:45 leaderboards was not the same model as
57:47 the one that was released to
57:48 everybody. And if you look at the
57:50 transcript from the leaderboard one,
57:51 it's full of emojis. It tells you how
57:53 great you are. It makes little
57:54 jokes that are kind of semi-funny, and
57:56 that's not the model they released,
57:58 right? There's an optimized-for-engagement
57:59 thing that throws out a lot
58:00 more tokens trying to flatter you, and so
58:03 I do worry about that, right? We have
58:04 some early evidence that it makes things
58:05 more sticky, and, you know,
58:09 optimizing for engagement is what made
58:10 social media such a risky place to be
58:12 and I really do worry about that kind of
58:14 outcome. Um and I think it's inevitable
58:16 though. Uh and so this is kind of you
58:19 know what we do with that becomes a
58:21 really big question. And
58:25 one thing that I get asked a lot is, you
58:28 know, how should we measure the outcome
58:30 of this? So say you're a business
58:32 leader and you want to measure one
58:34 thing which shows that we deployed this
58:36 and it improved productivity. Um,
58:40 what do you think we should be
58:41 measuring? So, this is one
58:44 of my opinions I feel most strongly
58:46 about, which is in the early R&D phase,
58:47 the worst thing you do is have a bunch
58:49 of KPIs, right? We just talked about
58:51 maximizing for engagement. If you
58:52 maximize for something, you'll get the
58:54 thing you maximized for and probably not
58:56 the other stuff. We don't know what
58:57 these systems do. You're spending R&D
58:59 cash on this. Like, we know you get
59:00 performance improvements because we'll
59:02 see those. But if you're optimizing for
59:03 performance, is that how many Word
59:05 documents are produced every day? Is
59:07 that how fast people turn around their
59:09 reports? Like, is that what you want?
59:11 Like part of the problem is
59:12 organizations aren't built for the KPIs
59:14 that you need to have. Like,
59:16 it used to be valuable to
59:18 produce as many words as possible. Like,
59:20 if you could write a good report or four
59:21 PowerPoint presentations or cover six
59:23 companies, now do you want people
59:24 covering 25 companies, 300 PowerPoints a
59:27 week? What are we maximizing,
59:30 the number of lines of code that people
59:31 are writing? I mean, you can imagine some
59:33 cases where how quickly you clear the backlog is
59:35 important, but is that what we want to
59:36 have people do? So I really worry
59:38 about KPIs, measurable KPIs, being doom,
59:41 especially because they always
59:42 end up falling to cost savings, and
59:44 it's always 30% cost savings, and
59:45 it's always let's fire people, which
59:47 undermines everything you're doing. So I
59:48 think people do need to adopt an R&D
59:51 mindset. Like, the productivity gains are
59:53 pretty clear and will happen pretty
59:54 quickly, and fine, throw them into coding,
59:56 because in coding there's clear
59:57 productivity gains, but I really worry
59:59 about people optimizing for productivity
60:00 gains for document writing; that feels like a
60:02 risky thing to do, because what are you
60:03 optimizing for?
60:08 [Music]