In 2014, Google bought an AI company that would go on to set some pretty golden standards for "cool stuff you can do with AI" around the world (beating up StarCraft players, mastering Go). DeepMind Health, an arm of that company, then won a contract with the NHS to use citizens' data to solve technology challenges with machine learning. They created cool things like the "Streams" app, which helps hospitals spot acute kidney injury.
In July 2017, the Information Commissioner's Office examined the NHS–DeepMind deal, identifying questionable treatment of patient data and ultimately causing it to be revised. DeepMind helpfully acknowledged that "mistakes had been made and lessons needed to be learned."
In November 2018 it was announced that DeepMind Health would be brought closer in to the Google hivemind.
As this move (like the WhatsApp → Facebook one) gets closer to bein' real, it's worth thinking a little bit about the burden of consent. NHS-meets-DeepMind isn't illegal anymore (that we know of), but the patients whose information is now just that little bit closer to living on Google's advertising servers never individually consented to it being there, and they never will. It feels wrong that a multi-million-dollar merger means your kidney status is available to the highest bidder.
As data gets passed on, becoming second- and then third-hand information, it gets dirty, and we stop caring about it. My data may be precious, but data about me sourced from someone who got it from someone else doesn't feel super valuable or important.
That's where we lose power and autonomy - in the cracks. AI and algorithms, like apps and governments, should get their information straight from the source, where the permission and transaction for it can clearly be seen.
Data rights protected. There should be a little badge over the good data somewhere, like there is for tuna that's ethically sourced. You could even sell it to advertisers (she says, with a smirk).
Yours in HAT,