Technology decouples economies. Airbnb owns no rooms, but provides accommodations; Uber owns (essentially) no vehicles, but provides transport; Stripe is not a bank, but provides bank accounts; a vast panoply of corporate services runs on Amazon-owned servers. There are many excellent things about this decoupling; it improves efficiency, aids focus, and spurs innovation.
But technology also has an increasingly nasty habit of decoupling authority from responsibility. Is there a problem with your gig-economy service? Oh, no, don't take it up with the company. You know, the entity that actually charges you the money, whose name will appear on your credit-card statement. They're very sorry for your problem, and will provide a full refund, but don't expect them to take any further responsibility whatsoever for the situation. They're just a matchmaker, suddenly, after all; your problem is with the independent contractor providing the service; the company washes its corporate hands of it.
Or, alternately: did something go wrong? Oh, well, it's not really the company's fault. The company is made up of human beings, you see, and no human being did anything wrong, so you can't really fault the company. It was just the algorithm. Your negative experience will be noted as a valuable data point to train the algorithm in future. But responsible? Good heavens, no. Like we just said, no human being was at fault; therefore, no one is responsible. Right?
Am I being too abstract? OK, I'll name a name: Facebook. In particular, Facebook's attitude towards advertising, and its refusal to take any a priori responsibility for the content of its ads. Consider:
[Embedded tweet] — Sam Thielman (@samthielman) September 21, 2017
That is a remarkable claim. It is also deeply disingenuous. If advertisers started showing Facebook users hardcore pornography, you can be absolutely certain that they would very quickly find a way to stop this and prevent it from happening in the future, without defending the porn on the basis that "we don't check what people say and I don't think people should want us to." Especially in light of this "horseshit":
"you think the person who wrote this copy actually believes this horseshit" pic.twitter.com/IPjdAlwVEa

— ಠ_ಠ (@MikeIsaac) September 30, 2017
Now, in fairness, I agree with Zuckerberg that the malevolent-disinformation problem is relatively small. If you think $100,000 in Facebook ads and $270,000 in Twitter ads meaningfully swayed the multibillion-dollar U.S. election, you are delusional.
And of course you can make a very good case that political advertising should not be censored. But proudly trumpeting "we don't check what people say" is not that case; rather, it implicitly passes the responsibility for Facebook's ethics from its employees to its algorithms. Just because something was done by a corporate algorithm, rather than a corporate employee, doesn't mean the corporation is not responsible for it.
This decoupling of authority and responsibility is not unique to new technology, of course. Consider Equifax, who help to dictate the all-important credit ratings of the American public ... but have zero incentive to treat the American public well, since the people it rates are not its customers. Again, it's authority without responsibility, while individuals are left with responsibility without authority.
This disconnect isn't inherently endemic to technology. And there's a lot tech can do to improve the correlation between authority and responsibility. But we can't expect major companies to do so; almost by definition, the more authority-without-responsibility they have, the more money they can make. We'll have to find other ways to balance those scales.