Tuesday, Oct. 22, 2019: I’m at the Mayo Clinic Social Media Network’s #MCSMN annual conference, where at 4 pm CT I’ll participate in this debate:
Resolved: Facebook’s disregard for user privacy should compel responsible hospitals to abandon the platform.
As host @ChrisBoyer, my fellow speakers, and I held preparatory discussions, it became clear that there's a lot more information the audience really needs to understand. (The audience is social media professionals in healthcare, people who work hard at being competent.)
Regular readers know that I've become convinced that FB is seriously unscrupulous – in particular, they're not even careful about keeping the data they collect reasonably confidential.
Here, for convenient access, is a pile of links we’ll refer to.
- Facebook, I’m out. Your irresponsibility with patient groups has gone too far. My January post on why I stopped participating there. Please note the key phrase: irresponsibility with patient groups, particularly its users’ data.
- SicGRL vulnerability is still not fixed (September post. SicGRL is the name of the leak, explained in the post, discovered by breast cancer “previvor” Andrea Downing, leader of a breast cancer patient group that got invaded by screen-scrapers.)
- Three profound podcast episodes from Sam Harris’s “Making Sense”:
- The Age of Surveillance Capitalism by Shoshana Zuboff. It’s way too big and academic for most people, but it reveals an essential truth that social media professionals must understand: the social media companies have a compelling motivation that didn’t exist 6-7 years ago and has changed everything – the name of the game now is to monopolize your attention (keep you from leaving) so it can be sold to others.
- The more they know about you, the better able they are to do this, so the snooping and collecting of data has become a fundamental economic need.
- Weapons of Math Destruction by Cathy O’Neil, an amazing and somewhat terrifying book about analytics software that misuses data sloppily gathered from all kinds of sources, including Facebook profiles … which commonly include everything you’ve clicked and everything you’ve liked.
- Analytics engines can be great when used correctly, because the scientists who run them check the results and improve the logic. But the ones described here can’t be audited or fact-checked, so they’re never improved … they can become a death spiral for citizens who get low ratings and thus never get a chance to contest or improve them.
- These engines are used for employment pre-screening (so a bad rating means you never get an interview), for credit ratings, by police departments deciding which neighborhoods to patrol, and even by judges for sentencing guidelines. And, in general, they’re used by data brokers who simply sell whatever they get their hands on, and may or may not care about data quality or responsible downstream use of the data.
For this debate, my feeling is:
- Any hospital (at least in the US) has to be on Facebook, because that’s where their public is.
- But they should not host any patient groups, because all the articles above make clear that data posted on Facebook may get into the wrong hands, making harm a predictable possibility.
Additions after the session
The first of these is relevant to misuse by Facebook (and its partners) of data combed from observing everyone’s behavior. The second is about the broader issue of the software learning to keep your eyes glued. Both are about manipulating us through data we may not have realized was being harvested.
1. Facebook’s role in Brexit and the 2016 US election
To understand how insidious Facebook’s irresponsibility is, you must spend 15 minutes watching this TED Talk from April by Carole Cadwalladr, the Welsh journalist who busted the whole Cambridge Analytica scheme that first corrupted the Brexit vote (with lying ads placed incredibly cleverly), then (as Zuck has since admitted) did similar things for Russia in the US presidential election. It was no amazing coincidence that only 80,000 votes, strategically distributed, tipped the entire US election. Not an accident, as you’ll see.
How the scammers used detailed FB data about specific individual users was truly brilliant! (Really, watch this.) And Facebook had embedded employees inside both campaigns, teaching them how to use the data to suit their objectives.
I’m not saying FB intentionally threw either election in any particular direction. (The Clinton campaign declined a similar offer.) I’m saying FB is seriously negligent / irresponsible about what happens when its powerful data gets used.
Added 10/25: Another example: in the Making Sense podcast above, McNamee says a company used FB profile data (which includes everything you click) to identify people who had clicked on Black Lives Matter topics, and sold the list to police departments.
2. Yuval Noah Harari’s Sapiens, Homo Deus, and 21 Lessons for the 21st Century
This is not specific to social media, but it’s absolutely relevant to how we experience and trust the web platforms that are trying to monopolize our attention. If this author is anywhere near right in his vision, it most certainly affects your grandchildren, and perhaps your children.
Harari has burst onto the global scene in the past few years with his sharp observations about what makes sapiens different from other species, and how recent changes in the cognitive world we live in (particularly online manipulations) may mean sapiens has reached the end of its ability to cope with what it’s created. That’s a serious charge, but I’ve read all three books (two of them twice), and I think it’s worth considering what he says.
To make a long story short, here’s one question for you:
What if these artificial intelligence-driven websites get so good at knowing what you’ll like (cat videos, political things you like, or ones you don’t, football scores, anything) that they know what you’ll like better than you do yourself?
What if they get so good at keeping you (or your kids) happy that you don’t even know they’re doing it? Would you be happy being made happy that way (or having your kids made happy that way)?
This is very much the world portrayed in Aldous Huxley’s Brave New World, published 87 years ago. Go have a look at the Wikipedia article. Astoundingly relevant. Are you happy with the thought? Nobody can say for sure whether it’s happening, but the smart authors and podcast guests above suggest more may be happening, faster, than we think.
Whew. I hope you can see why I’m passionate about this.