Scenario Planning

I recently came across a very interesting article by Anton Korinek of the IMF and UVA (Wahoowa!): Scenario Planning for an A(G)I Future. I want to avoid being alarmist, and I believe the argument about AGI itself is a red herring.

What I do believe is important in the article is the (simple) framework it outlines. Given 1) the frontier of automation, the edge of task complexity we are able to automate, which is primarily dictated by 2) your beliefs about the rate of AI advancement (does AI plateau, continue at a steady pace, or accelerate?), and 3) the limits on humans' capacity to adapt to ever more complex tasks, how does this all play out?

The article explores three scenarios: one where we're able to stay ahead of the frontier of automation and everything continues as is, with great productivity and prosperity, and two others, on time frames of 5 and 20 years, where we don't keep up and society, wages, and employment change rapidly. There are of course infinitely many scenarios; there are many possible curve shapes and slopes. Maybe AI advancement will take 200 years, maybe society will adapt and change our governing methods in the meantime, maybe another issue will become critical and render this all moot.
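To make the framework concrete, here's a toy sketch of my own (not from Korinek's paper): treat the frontier of automation and human adaptive capacity as curves over time, plug in some assumed growth shapes, and check when, if ever, the frontier overtakes us. Every number and curve below is a made-up assumption, purely for illustration.

```python
# Toy model of the framework: automation frontier vs. human adaptive capacity.
# All starting points, growth rates, and caps are invented for illustration only.

HUMAN_START = 50.0          # assumed task complexity humans handle today
HUMAN_GROWTH = 1.5          # assumed yearly gain in human adaptive capacity
HUMAN_CAPACITY_CAP = 100.0  # assumed ceiling on how far humans can adapt

# Three assumed shapes for the frontier of automation over time (years).
SCENARIOS = {
    "plateau":    lambda year: 30 + 15 * (1 - 0.8 ** year),  # levels off
    "steady":     lambda year: 30 + 2.5 * year,              # linear progress
    "accelerate": lambda year: 30 * 1.15 ** year,            # compounding progress
}

def crossover_year(frontier, horizon=50):
    """Return the first year the frontier overtakes human capacity, or None."""
    for year in range(horizon + 1):
        human = min(HUMAN_CAPACITY_CAP, HUMAN_START + HUMAN_GROWTH * year)
        if frontier(year) > human:
            return year
    return None

for name, frontier in SCENARIOS.items():
    year = crossover_year(frontier)
    if year is None:
        print(f"{name:>10}: humans stay ahead over the whole horizon")
    else:
        print(f"{name:>10}: frontier overtakes human adaptation around year {year}")
```

With these made-up parameters, the accelerating curve crosses in roughly five years and the steady one in roughly twenty, echoing the paper's two "we don't keep up" timelines, while the plateau curve never crosses at all.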

Regardless, "Frontier of Automation" is a great (catchy) way to describe the concept of which tasks can (and will) be automated. And if you think of your job as a collection of tasks it can help you start thinking about how your job will change or be automated away. On the one end of the spectrum we have jobs that have a large component of tasks that can not be or we will not want to be automated such as nursing and care giving and where we want a human (face) to trust and be accountable. On the other end are jobs that are primarily digital, tolerant to error, and we're making progress on automating anyway ... like programming.

As a programmer though, I'm not terrified (yet). I know my job will change, but I believe we'll still need people to work with the AI/automations to create the software products we need, and there will be an ever-increasing need for new applications. Just as programmers today don't work with punch cards, programmers of the future will use completely different tools. Working backwards through the model above, that means I believe AI advancements will accelerate in the near term but won't surpass our ability to adapt to using them. We may not understand them completely and will need to come to terms with that. That's uncomfortable, but few of us (if any) really understand how our cars or governments or any complex system truly works.

And finally, for every job that will be affected, I think it's critical that we educate ourselves (and our coworkers) to best handle these coming changes. What scenarios are you planning for?
