Don’t want to read privacy policies? This AI tool will do it for you.

Let’s be real: When you download a new app, you probably don’t bother to read its privacy policy first. I write about privacy as a journalist and even I rarely bother to read those policies. They’re written in eye-glazing legalese perfectly calibrated to make any normal human being want to stop reading as soon as possible.

Who can blame us for rushing to check that little box that says we agree to the terms of service?

Now, a new tool called Guard promises to read the privacy policies of various apps for us. It harnesses AI to analyze reams of text, scoring each sentence for the level of risk it poses to our privacy.
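Guard hasn’t published how its model works, but the pipeline described here (split a policy into sentences, then score each sentence for risk) is easy to picture. Below is a minimal sketch in Python; the naive sentence splitter and the keyword scorer are made-up stand-ins for illustration, where a real system would use a trained NLP classifier.

```python
# Sketch of sentence-level risk scoring. NOT Guard's actual model:
# the keyword weights below are invented placeholders for a trained classifier.
import re

RISK_TERMS = {"third parties": 2, "sell": 2, "advertising": 1, "location": 1}

def split_sentences(policy_text: str) -> list[str]:
    # Naive split on sentence-ending punctuation; real systems use an NLP library.
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", policy_text) if s.strip()]

def risk_score(sentence: str) -> int:
    # Sum the weights of any risky phrases the sentence contains.
    lowered = sentence.lower()
    return sum(weight for term, weight in RISK_TERMS.items() if term in lowered)

policy = ("We may share your location with third parties. "
          "You can delete your account at any time.")
for sentence in split_sentences(policy):
    print(risk_score(sentence), "|", sentence)
```

The first sentence scores 3 (location plus sharing with third parties) and the second scores 0, which is the kind of per-sentence breakdown the tool surfaces, just with a learned model doing the judging instead of a keyword list.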

Guard currently takes the form of a free website featuring analysis of certain popular apps like Twitter, Instagram, Tinder, WhatsApp, Netflix, Spotify, Reddit, and Duolingo. It hasn’t yet analyzed every app out there, not even close, but you can suggest new apps for it to check out. (Facebook comes to mind.) Unfortunately, Guard can’t inspect a given app immediately on demand — at least not yet.

The plan is to release a downloadable app that will scan all the other apps you use and alert you to privacy threats embedded in them. Guard’s app is in the beta-testing phase. It’s not yet available to the public, but you can sign up to join the beta here.

For now, the website version already offers useful information. It tells you how many threats it’s detected in an app and whether they’re mild or worrisome. It also tells you how many privacy scandals the app has been involved in. And it gives each app an overall score and a letter grade.

Twitter, for example, gets an overall score of 15 percent and a “D” grade. Ouch. Here’s how Guard explains some privacy scandals in which Twitter has been embroiled:

[Screenshot: Guard’s summaries of privacy scandals involving Twitter]
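Guard doesn’t publish the thresholds behind its letter grades, but a score-to-grade mapping like this is presumably simple banding. Here’s a hypothetical version, with cutoffs invented so that Twitter’s reported 15 percent lands on a “D”:

```python
# Hypothetical score-to-grade bands; Guard's real cutoffs aren't public.
def letter_grade(score_pct: float) -> str:
    bands = [(80, "A"), (60, "B"), (40, "C"), (15, "D")]
    for cutoff, grade in bands:
        if score_pct >= cutoff:
            return grade
    return "F"

print(letter_grade(15))  # -> "D", matching Twitter's reported grade
```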

Guard isn’t guaranteed to catch each and every privacy threat, so I wouldn’t rely too heavily on it just yet. But what’s especially interesting is that it invites all of us to help train the AI so that, over time, it’ll get better and better at sniffing out the sorts of privacy issues that would worry real human beings. You don’t need any specialized knowledge to do this — you just need a couple of minutes to spare.

How do you teach a machine the meaning of privacy?

Javi Rameerez, the Madrid-based developer who created Guard’s software, is interested in AI systems dedicated to natural language processing (NLP). He’s also clearly interested in AI ethics.

Rameerez describes Guard as an academic experiment — it’s actually his thesis in progress on AI and NLP. He says the aim is to teach machines how humans think about privacy. To do that, Guard needs input from lots and lots of humans.

You don’t need to help train the AI in order to use Guard, but if you do agree to devote a couple of minutes to it, Guard will direct you to a quiz on its website. It shows you two snippets from different policies and asks you to judge which one you think is more privacy-friendly. That data can help build an AI whose perspective on privacy aligns with that of real humans. (However, a potential pitfall of this approach is that we humans don’t always know what’s best for us and can’t necessarily anticipate the risks certain policies might pose for us down the line.)

“Each data point helps the AI understand what’s acceptable and what’s not in regard to human privacy,” the website says. “The end goal is to teach machines so they can keep us safe in an increasingly dangerous internet.”
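Guard hasn’t said how it turns those quiz answers into a model, but pairwise “which of these two is better?” judgments are a classic fit for preference learning. Here’s a toy Bradley-Terry-style sketch in Python, with invented data, that learns a privacy-friendliness score for each snippet from comparisons alone:

```python
# Toy Bradley-Terry-style preference learning. NOT Guard's published method;
# the comparisons below are invented quiz outcomes for illustration.
import math

# Each pair records (winner, loser): the snippet judged more privacy-friendly.
comparisons = [("A", "B"), ("A", "C"), ("B", "C"), ("A", "B")]
scores = {snippet: 0.0 for snippet in "ABC"}

LEARNING_RATE = 0.1
for _ in range(200):
    for winner, loser in comparisons:
        # Probability the model currently assigns to the observed judgment.
        p_win = 1 / (1 + math.exp(scores[loser] - scores[winner]))
        # Gradient ascent on the log-likelihood of that judgment.
        scores[winner] += LEARNING_RATE * (1 - p_win)
        scores[loser] -= LEARNING_RATE * (1 - p_win)

# Snippets ranked from most to least privacy-friendly: A, then B, then C.
print(sorted(scores.items(), key=lambda kv: -kv[1]))
```

One appeal of pairwise data is that people tend to be more reliable at ranking two concrete snippets than at assigning an absolute privacy score, which may be why the quiz is framed as a dilemma rather than a rating.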

Here’s an example of a dilemma:

[Screenshot: a sample dilemma showing two policy snippets, labeled Option A and Option B]

This is pretty clear, right? Option A looks more privacy-friendly. But some of the dilemmas are much harder.

Once you’ve solved a few dilemmas as best you can, you can check your results. Guard lets you know whether your ability to select the most ethical option is above or below the average of other participants. (I was relieved to learn that I’m in the top 10 percent; given my line of work, any other result would’ve been embarrassing!)

Other than the fun of taking a quiz and seeing how you stack up against other people, what’s in it for you? Guard’s website says that by training its AI you’ll “become a pioneer” by being “one of the first humans to ever teach machines privacy.” Plus, “you will discover your cognitive biases when making privacy-related decisions.”

It also asks you for two additional favors — but for a good purpose. First, it requests some demographic information about you, like age, ethnicity, and education level. Then it says, “We’re trying to ethically avoid any kind of bias in the data we gather, so if you know of anyone part of underrepresented minorities (particularly in tech, like women, or minority ethnic groups) please share this with them so we can teach computers in the most ethical way as humanly possible.”

Providing your information and inviting your friends are both totally optional, and Guard promises all data will be completely confidential, unidentifiable, and anonymized.

Guard’s explicit goal of diversifying its dataset is laudable: too often, AI systems are trained on data from a narrow subset of the population (usually white men), which leads to the well-documented problem of AI bias (for example, facial recognition technology’s high error rate when it comes to identifying women and people of color).

And too often, AI systems that specialize in NLP come with scary risks, like the possibility that they’ll be used to generate fake news and sow misinformation.

In Guard’s case, however, we’ve got an example of an NLP system that stands to help us — both by alerting us to our apps’ problematic privacy policies and by giving us a chance to help build an AI from the ground up.

Source: vox.com
