One of the dumbest things I’ve ever published

While I was reading through some of my older essays the other day, I came across a piece called Privacy vs. User Experience, published in 2014. In the article, I argued that Apple’s then-nascent philosophical stance on the supremacy of user privacy was going to slow down its product development while competitors fully embraced deep data mining techniques to build better user experiences.

It is, I believe, one of the dumbest, most wrong things I have ever published.

The essay’s structure isn’t actually that bad. It has a strong thesis. The whole thing seems relatively innocuous, whether or not you agree with the premise. What makes Privacy vs. User Experience so dangerous as an essay is that the thesis is undeniably correct in the abstract and yet completely wrong in practice. There is no valid counterargument to the abstract idea: if a company has better data and analyzes it more completely, it can obviously produce better experiences for its users. I wrote:

The truth is that collecting information about people allows you to make significantly better products, and the more information you collect, the better products you can build. Apple can barely sync iMessage across devices because it uses an encryption system that prevents it from being able to read the actual messages. Google knows where I am right now, where I need to be for my meeting in an hour, what the traffic is like, and whether I usually take public transportation, a taxi, or drive myself. Using that information, it can tell me exactly when to leave. This isn’t science fiction; it’s actually happening.

Ah yes, that was the dream. If it were a benevolent system, created and run in a vacuum – in the land of butterflies, lollipops, and pure intellectual theory – what I posited could have been correct. But we are not in that place, and I was wrong. In fact, as the theory described above has been put into actual practice, it has caused at least two things to happen:

  1. A new enemy of mankind, called the algorithm, has arisen. With all of the private information collected about you and your network, this black-box set of neural networks has been tasked with deciding what you will be exposed to in feeds of information on sites like Twitter and Facebook. I purposefully do not call these “social feeds” or “social sites”, because they are not social. News Feed and Twitter’s Timeline are artificial intelligence-powered aggregators that watch behaviors within communities of people and then serve content to maximize engagement (a toy sketch of that objective follows this list). There is nothing social about these services; they are built around observation, collection, and profiling in the pursuit of conditioning people to behave in certain ways. This system does not create a better user experience, and it certainly is not a good reason to sacrifice privacy.

  2. I argued in my piece that Tim Cook had conflated privacy with security. He may have. But in the five years since 2014, the following fact has become absurdly clear to me: there is no difference between privacy and security. Security is an illusion, just like the lock on your front door. Advanced cryptography can prevent immediate threats, but in the long run, it is impossible to keep things private at scale. Humans can only build flawed software. There will always be bugs. And thus your “private” information is not now and will never be safe in the hands of a third party, no matter how competent. The only solution is to keep the information solely within your control, and that is how Apple has attempted to architect its systems.
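
Point 1 is easy to make concrete. Below is a deliberately crude sketch of an engagement-maximizing ranker. None of this is any platform’s real algorithm; the features, weights, and names are invented for illustration. The shape of the objective is the point.

    # A toy engagement-maximizing feed ranker. Not any platform's real
    # algorithm; the features and weights are invented for illustration.
    from dataclasses import dataclass

    @dataclass
    class Post:
        author_affinity: float    # how often this viewer engages with the author
        predicted_arousal: float  # how provocative the model predicts the post is
        recency: float            # 0.0 (old) to 1.0 (just posted)

    def engagement_score(post: Post) -> float:
        # The objective is predicted engagement: not accuracy, not civility,
        # and not the viewer's wellbeing.
        return (0.50 * post.author_affinity
                + 0.35 * post.predicted_arousal
                + 0.15 * post.recency)

    def rank_feed(posts: list[Post]) -> list[Post]:
        # Whatever scores highest is what you see first.
        return sorted(posts, key=engagement_score, reverse=True)

In production systems those weights are not hand-picked; they are learned from surveillance of what billions of people click, pause on, and share, which is precisely where the privacy trade is made.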

Thus: (1) The building of tools to aggregate private information in order to ostensibly improve user experience has in fact, at scale, caused strange and negative things to happen. Some of these effects threaten things entirely unrelated to user experience: democracy, mental health (through addiction), and basic human decency. Even more insane, the mechanisms driving these hyper-trained algorithms are not well understood. Machine learning models are trained on huge amounts of data, and while you can feed in information (a person’s private interests, likes, browsing history, etc.) and see clear output (a customized feed), you can’t really know how or why that output was derived. All you can be confident of is that the output will follow mysterious heuristics, often decided by the neural network itself.

And: (2) If, after Snowden, Experian, Starwood, Yahoo!, and countless other leaks, you think security is going to protect your privacy, you are either ignorant or insane.
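
The architecture I credit Apple with in point 2 is also simple to sketch, at least in principle: if encryption and decryption happen only on the device, and only the device holds the key, a breach of the third party leaks nothing readable. The few lines below are a minimal illustration using the third-party Python cryptography package, not Apple’s actual implementation.

    # A minimal sketch of keeping information solely within your control:
    # encrypt on the device, hand the third party only ciphertext.
    # Uses the third-party `cryptography` package (pip install cryptography).
    from cryptography.fernet import Fernet

    device_key = Fernet.generate_key()  # generated and kept on the device
    device = Fernet(device_key)

    # The server only ever receives ciphertext.
    ciphertext = device.encrypt(b"meet at 6, usual place")
    server_storage = {"blob": ciphertext}  # all a breach could ever leak

    # Only the key holder can recover the message.
    assert device.decrypt(server_storage["blob"]) == b"meet at 6, usual place"

The cost is exactly the one I complained about in 2014: a server that cannot read your messages also cannot mine them for features, which is why iMessage sync was clumsy. I now think that is the right trade.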

I ended Privacy vs. User Experience with this discussion about compromise:

As long as people understand the potential risks, the answer to the [question of whether to sacrifice a little privacy to improve user experience] is almost always, “Yes.” And with the emergence of artificial intelligence, the answer to that question will become increasingly more clear. The vast improvements in user experience far, far outweigh the potential security risks to private information.

Wrong. In fact, my initial thesis was so wrong that the exact opposite turned out to be true. AI made user experiences worse. Giving algorithms access to private data made them unpredictable, extreme, and potentially damaging to society.

As it turns out, a bit of mystery, even when dealing with advanced artificial intelligence, appears to be way more valuable for user experience than a full profile of the private information inside someone’s mind and life.

Humans are complex, private creatures. Without privacy, they become drones… to the countless algorithms ready to guide their way.


 