There is a type of humility, an admission of limitations, that is as rare as it's ever been on the internet. But for some reason it's been standing out to me more and more. It sounds something like, "Here's what I think I know, and here's how I know it, but if that turns out to be wrong, I stand corrected."
Here's a real-world example: InstaHide Disappointingly Wins Bell Labs Prize, 2nd Place. The core argument of the article is this:
InstaHide (a recent method that claims to give a way to train neural networks while preserving training data privacy) was just awarded the 2nd place Bell Labs Prize (an award for “finding solutions to some of the greatest challenges facing the information and telecommunications industry.”). This is a grave error.
InstaHide is a recent proposal to train a neural network while preserving training data privacy. It (ostensibly) allows someone to train a machine learning model on a bunch of sensitive training data, and then publish that model, without fear that the model leaks anything about the training data itself.
Unfortunately, it turns out that InstaHide offers no privacy. It is not private for any reasonable definition of privacy, and given the output of InstaHide it is possible to completely recover the inputs that went in to it.
It is a grave error that InstaHide was awarded this prize because of how fundamentally misguided InstaHide is---both the idea itself and the methodology of the paper. Drawn to the right is what we're able to do: given a set of encoded images that try to preserve some notion of privacy, we recover extremely high fidelity reconstructions.
But watch how the authors break down the claims of InstaHide while admitting the limits of their own knowledge and methodology:
As a result, it's impossible to ever write a paper that claims to break [InstaHide's privacy guarantees], because defining an attack necessarily requires a definition to break. The best one can do (and indeed, what we do in our paper) is to define potential definitions of what InstaHide may mean by privacy, and show that it doesn't satisfy those definitions. But it's always possible that there exists some definition of privacy that InstaHide does satisfy.
Reading a paragraph like that is a breath of fresh air these days.
Maybe it's my imagination, but it does feel like it's rarer and rarer.
Social media and clickbait reward certainty. How do people find articles to read and videos to watch? Through social media (Facebook, Twitter, Reddit) or recommendation algorithms like YouTube's video suggestions. There's some overlap there too, in the form of suggested posts or "We think you'd like..." recommendations on the regular social media platforms. But neither method of discovery rewards measured, careful pronouncements.
You have no loyalty to the blog. You're not subscribed to the RSS feed to see every post. Odds are most of the links you follow or recommendations you're served will be the only piece of content you ever consume from that author. In the interest of having an infinite feed of content that engages your limbic system so you keep coming back to pull the handle of the slot machine, we've atomized authors to links.
The tide may be turning. Patreon and Substack may be reconfiguring the business model of internet publishing to make carefully composed, researched, thought-out work viable again. (Compare with the Gawker model of making a certain number of posts a day regardless of whether anything newsworthy has happened. That ad inventory isn't going to sell itself. But you get a bonus if you have an article that gets a lot of clicks!)
Perhaps people are growing tired of the outrage-go-round and want off. I know I did, and I significantly limited my social media use back in the spring. I use Feedly and Instapaper daily now. I always have things to read (more on that in a coming post), but they are the things I choose to give time to, not just whatever pops up in my face that I can't resist.