Why Not Wait On AI Risk? – Robin Hanson

Years ago, when the AI risk conversation was just starting, I was a relative skeptic, but I was part of the conversation. Since then the conversation has become much larger, but I seem to be no longer part of it; it has been years since others in this convo engaged me on it.

Clearly most who write on this do not sit close to my views, though I may sit closer to the views of most who have considered getting into this topic but instead found better things to do. So yes, I may be missing something that they all get. Furthermore, I’ve admittedly only read a small fraction of the huge amount that has since been written in this area. Even so, I feel I should periodically try again to explain my reasoning, and ask others to please help show me what I’m missing.

The future AI scenario that makes AI seem most like prior wide tech categories goes as follows. AI systems are available from many competing suppliers at similar prices, and their similar abilities increase gradually over time. Abilities don’t increase faster than customers can usefully apply them. Such systems slowly displace humans on specific tasks, and are on average roughly as task specialized as humans are now. AI firms distinguish themselves via the different tasks their systems do.

The places and groups that adopt such systems first are those flexible and rich enough to afford them, and that have other complementary capital. Those who invest in AI capital on average gain from their investments. Those who invested in displaced capital may lose, though over the last two decades workers at more automated jobs have seen no average effect on their wages or numbers. AI today makes only a rather minor contribution to our economy (<5%), and it has quite a long way to go before it can make a substantial one. And we have only vague ideas of what AIs that made a much larger contribution would look like.

Today most of the ways that humans help and harm each other are via our relations, such as: customer-supplier, employer-employee, citizen-politician, defendant-plaintiff, friend-friend, parent-child, lover-lover, victim-criminal-police-prosecutor-judge, army-army, slave-owner, and competitors. So as AIs replace humans in these roles, the main ways that AIs help and hurt humans are likely to also be via these roles.

Our usual story is that such hurt is limited by competition. For example, each army is limited by all the other armies that might oppose it. And your employer and landlord are limited in exploiting you by your option to switch to other employers and landlords. So unless AI makes such competition much less effective at limiting harms, it is hard to see how AI makes role-mediated harms worse. Sure, smart AIs might be smarter than humans, but they will have other smart AI competitors, and humans will have AI advisors. Humans don’t seem much worse off recently as firms and governments, which are far more intelligent than individual humans, have taken over many roles.

AI risk folks are especially concerned with losing control over AIs. But consider, for example, an AI hired by a taxi firm to do its scheduling. If such an AI stopped scheduling passengers to be picked up where they waited and delivered to where they wanted to go, the firm would notice quickly, and could then fire and replace this AI. But what if an AI that ran such a firm became unresponsive to its investors? Or an AI that ran an army became unresponsive to its overseeing government? In both cases, while such investors or governments might be able to cut off some outside supplies of resources, the AI might do substantial damage before such cutoffs bled it dry.

However, our world today is well acquainted with the prospect of “coups” wherein firm or army management becomes unresponsive to its relevant owners. Not only do our usual methods usually seem sufficient to the task, but we also don’t see much of an externality in these problems. You try to keep your firm under control, and I try to keep mine, but I’m not especially threatened by your losing control of yours. We care a bit more about others losing control of their cars, planes, or nuclear power plants, as those might hurt bystanders. But we care much less once others show us sufficient liability, and liability insurance, to cover our losses in such cases.

I don’t see why I should be much more worried about your losing control of your firm to an AI than to a human or group of humans. And liability insurance also seems a sufficient answer to your possibly losing control of an AI driving your car or plane. Furthermore, I don’t see why it’s worth putting much effort into planning how to control AIs far in advance of seeing much detail about how AIs actually do concrete tasks where loss of control matters. Knowing such detail has usually been key to controlling past systems, and money invested now, instead of being spent on analysis now, gives us more to spend on analysis later.

All of the above has been based on assuming that AI will be similar to past techs in how it diffuses and advances. Some say that AI might be different, just because hey, anything might be different. Others, like my ex-co-blogger Eliezer Yudkowsky, and Nick Bostrom in his book Superintelligence, say more about why they expect AI advances to be far more lumpy than we’ve seen for most techs.

Yudkowsky paints a picture of a world full of familiar, weak, stupid, slowly improving computers, until suddenly and unexpectedly a single super-smart, un-controlled AGI appears and is able to decisively overwhelm all other powers on Earth. Alternatively, he claims (quite implausibly, I think) that AGIs naturally coordinate to merge into a single system.

These folks seem to envision a few key discrete breakthrough insights that allow the first team that finds them to suddenly catapult their AI into abilities far beyond all other then-current systems. As I’ve previously explained at length, that seems to me to postulate an extremely unusual lumpiness relative to the history of innovation, tools, computers, AI, and even machine learning. And to postulate more of a conceptual essence behind the keys to “betterness” than I find plausible. Machine learning systems today seem relatively close to each other in their abilities, and none seem remotely inclined to mount a coup.

I don’t mind groups with small relative budgets exploring scenarios with proportionally small chances, but I lament so many people taking this as a default AI scenario. And while I get why people like Yudkowsky focus on scenarios in which they fervently believe, I am honestly puzzled why so many AI risk experts seem to repudiate his extreme scenarios, and yet still see AI risk as a terribly important project to pursue right now. If AI isn’t unusually lumpy, then why are early efforts at AI control design especially valuable?

So far I’ve mentioned two widely expressed AI concerns. First, AIs may hurt human workers by displacing them, and second, AIs may start coups wherein they wrest control of some resources from their owners. A third widely expressed concern is that the world today may be stable, and contain value, only due to somewhat random and fragile configurations of culture, habits, beliefs, attitudes, institutions, values, etc. If so, our world may break if this stuff drifts out of a safe and stable range of such configurations. AI might itself be such a change, or facilitate one, and by helping to accelerate change, AI might accelerate the rate of configuration drift.

Similar concerns have often been expressed about allowing too many foreigners to immigrate into a society, or allowing the next youthful generation too much freedom to question and change inherited traditions. Or about allowing many other specific transformative techs, like genetic engineering, fusion energy, social media, or space. And other big social changes, like gay marriage. Many have deep and reasonable fears regarding big changes. And some seek to design AI so that it won’t allow excessive change. But this issue seems to me more about change in general than about AI in particular.

So what am I missing? Why would AI advances be so vastly more lumpy than prior tech advances as to justify very early control efforts? Or if not, why are AI risk efforts a priority now?

 
