9 Comments

"Does human compatibility mean compatible with the priorities of its employer (or owner, or master — I really don’t know what word fits here) or with those of “all citizens” as mentioned on the above image from the Booz website? What does “human compatible” mean when not all human agendas are mutually compatible? " This seems to me the key question. I hope you'll continue this discussion

PRISM is legal. The ACLU may not like it, but it couldn't point to a single court decision supporting its accusations, even though the ACLU has first-rate lawyers and is well funded. Snowden is no different from the useful idiots of yesteryear, and criticizing the US government while under Putin's protection isn't very convincing.

Except that none of the government activities Snowden betrayed were/are illegal.

As an aside, I'm very skeptical of the strategy in which the people most concerned about a technology or a company's behavior avoid being associated with, or helping, those companies and products. If you can't convince enough people to mount a successful boycott, I fear the result is simply that those with the most concerns have the least influence.

For instance, look at what happened with face recognition. There were valid concerns about making sure it wasn't racially biased, that it wouldn't be used in ways likely to cause unfair outcomes, and so on. Had Amazon and its employees continued to develop the technology for law enforcement (or had Google joined in), those concerns might not have been 100% satisfied, but the very concerned employees who pressured them to drop the project would have ensured the issues received considerable attention and attempts at mitigation.

Instead, the push for disassociation didn't stop law enforcement from using face recognition; it led them to use face recognition from Clearview, a company that seems to have ignored every concern in the book, including who it gets sold to.

I fear a similar problem here. You'll never convince enough mathematicians to disassociate themselves from these projects, but maybe you can convince more people with concerns to join them, and persuade more of the people already associated with them to take those concerns seriously.

But obviously this depends on the particular details, so maybe it doesn't apply here.

You raise valid points, but my aims are more modest. Two attitudes toward the new technology predominate among mathematicians: a fatalistic acceptance, which overlaps with the belief (mistaken, I think) that these developments do not concern us; and an enthusiastic embrace of the prospects the technology appears to offer, without regard for the motivations of the corporations, investors, and government agencies that promote the technology and the discourse surrounding it. I find neither of these options satisfying, and I try to articulate one possible alternative attitude.

Skeptical attitudes alone will not prevent law enforcement or the military from getting what they want, but they can contribute to creating the circumstances in which effective political action is possible.

That's a fair and reasonable response and a good goal.

It's interesting that fairly similar concerns lead us to different worries, though. Mine tend to be about situations where corporations and governments gain access to tools and technology that the average citizen can reach only through a few gatekeeping monopolies, fundamentally shifting the balance of power. That makes me a bit more skeptical of various kinds of limitations on these technologies, as I fear they will create barriers that only the biggest companies or governments can scale, without stopping the tech itself.

I think you raise an important point about the danger that advanced AI makes it much easier to hide illegal government (or corporate) activity. The primary limitation has always been the need to keep everyone involved from ratting you out. Efficiency always reduces the number of people you need, but AI specifically lets you put a firewall in place so that the developers don't know the AI will be put to illegal use. That suggests we want rules requiring that AIs built for the government include reporting systems and training that allow them to report potential misuse (to the limits of their ability).

The interesting part is that this danger should be more concerning to people like me, who don't see AI alignment as a particularly serious problem (beyond the fact that more complex programs have more complex unexpected behavior).

Apart from natural disasters the main existential threat to humanity is what it’s always been —

#COUPITHOI

Concentrations Of Unchecked Power In The Hands Of Idiots
