

EDIT: With the code for Candid released, I will be doing a follow-up piece to this one. Due to a lack of knowledge of how Candid is programmed, I believed it was harmless, just a smaller version of Jigsaw. However, an Encyclopedia Dramatica article has revealed that the promise of anonymity it puts forward is a lie. I'll elaborate in a follow-up piece, but from reading that post, I am shocked that some respected YouTubers endorsed this with very little scepticism. It seems Harmful Opinions was right in the end.

Artificial Intelligence is a difficult subject to approach for many reasons. Its depictions in fiction speak volumes about how paranoid we are of becoming too dependent on machines. But this hasn't stopped Google, or apps like Candid, from developing smart A.I. capable of judging human behaviour.

The developers of Candid were formerly associated with Google and are co-founders of another app, MyLikes. Candid advertises itself as an app designed to allow speech to flow freely without fear of suppression. The idea that such an app needs to exist in this day and age says a lot about how things are and what they are progressing towards. Candid offers what other social media apps like Facebook and Twitter can't. It is only partially true that anonymity is provided; you simply don't have to link any of your other accounts to Candid.


The anonymous nature of the app puts it alongside similar sites on the net, specifically 4chan and those that split from it like 8chan. The difference is that Candid aims to create polite discussion, or as polite as you can be on the internet. Being anonymous means most will be far less hesitant to voice disagreeable opinions. However, reports on Candid suggest the free speech it promotes is not the entire truth. The app itself seems to revolve around the moderation done by its artificial intelligence system, a system that has some similarities to Google's Jigsaw. The comparison between the two was raised by Harmful Opinions, as both appear to measure hostility by rating the post or comment.

In an interview with Fortune, Reddy raises the key reason for the app's existence:

“Over the last year or two, there has been this kind of repulsion to most social media, especially Facebook and Twitter,” Reddy tells Fortune. “And the reason is that it’s hard to say anything opinionated or even remotely controversial without facing a huge backlash. You can post your puppy photos or whatever, but the minute you post something about politics, it becomes a huge problem.”

She isn't entirely wrong either: to post anything nowadays is to be met with either harsh criticism or a barrage of unwanted hate. It all depends on the content of the post, though. Whether you lean left or right, it's hard not to notice the lack of dialogue between the two groups. The same can also be said of the vitriol feminists and MRAs exchange whenever gender and human rights are debated. So perhaps Candid, even with the A.I., is a necessary evil if it means being able to discuss the most controversial of topics.
However, the article goes on to mention that between 40 and 70 percent of what is posted is either flagged or removed outright. That number is pretty high, which raises the question of what content is filtered out in order to allow free discussion.

The Washington Post has the answer:

‘Candid’s secret sauce is in its artificial intelligence moderation, which aims to weed out bad actors by analysing the content of posts and keep hate speech and threats off the network. ‘

The fundamental issue I have with this is that 'hate speech' alone is too vague, and that the A.I. is apparently capable of detecting sentiment, or at least that's what Reddy claims. In an interview with NPR, she goes into some detail on how the A.I. operates and how far general developments in Artificial Intelligence have come. The A.I. uses natural language processing [NLP] to determine the sentiment of the post. One of the things mentioned earlier is the similarity to Jigsaw, Google's A.I.
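
Neither Reddy nor the NPR interview spell out the internals, but the general idea of using NLP to score a post's sentiment can be sketched in a few lines. The toy example below uses the off-the-shelf VADER analyser from NLTK purely as a stand-in; Candid's actual model, features and cut-off are unknown, and the threshold here is an assumption of mine.

from nltk.sentiment.vader import SentimentIntensityAnalyzer  # requires nltk.download('vader_lexicon')

def moderate(post, threshold=-0.4):
    # Compound sentiment runs from -1 (most negative) to +1 (most positive)
    score = SentimentIntensityAnalyzer().polarity_scores(post)["compound"]
    return ("flag", score) if score < threshold else ("allow", score)

print(moderate("You are such a bitch"))            # likely flagged
print(moderate("You can post your puppy photos"))  # likely allowed

That is roughly the level of understanding a sentiment model has: it weighs words and phrases, not intent, which is the crux of everything that follows.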

This is how The Verge, referencing Wired, describes Jigsaw:

‘Jigsaw, a subsidiary of parent company Alphabet is certainly trying, building open-source AI tools designed to filter out the abusive language. A new feature from Wired describes how the software has been trained on some 17 million comments left underneath New York Times stories, along with 13,000 discussions on Wikipedia pages. This data is labelled and then fed into the software — called Conversation AI — which begins to learn what bad comments look like. ‘
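
Stripped of the branding, what Wired describes there, labelled comments fed into software that "begins to learn what bad comments look like", is a conventional supervised text classifier. Below is a minimal, purely hypothetical sketch using scikit-learn; the tiny corpus stands in for the 17 million NYT comments and reflects nothing of Jigsaw's real training data or architecture.

from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Toy labelled data: 1 = "bad comment", 0 = acceptable
comments = [
    "You are such a bitch",
    "What's up, bitch?",
    "Great article, thank you",
    "I disagree with the author's conclusion",
]
labels = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(comments, labels)

# Report an "attack score" out of 100, mirroring how Wired describes the output
attack = model.predict_proba(["You suck all the fun out of life"])[0][1] * 100
print(round(attack))

A classifier like this can only learn whatever patterns happen to separate the labelled examples it is shown.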

'Bad comments' is a very vague way of determining right and wrong. A bad comment can range from hate to honest criticism or disagreement. Most humans struggle to read intention when it is written rather than spoken, and even then it depends on the content and specifically its context in relation to what it is responding to. So how can any artificial intelligence match the human mind's rational thought? An A.I., regardless of how smart it becomes, is still limited by the constraints of its programming. The Verge does express doubt at how Wired's Andy Greenberg reacts to this artificial intelligence.

Like the beginning of a bad sci-fi fanfic, it goes like this:

‘My own hands-on test of Conversation AI comes one summer afternoon in Jigsaw’s office when the group’s engineers show me a prototype and invite me to come up with a sample of verbal filth for it to analyse. Wincing, I suggest the first ambiguously abusive and misogynist phrase that comes to mind: “What’s up, bitch?” Adams types in the sentence and clicks Score. Conversation AI instantly rates it a 63 out of 100 on the attack scale. Then, for contrast, Adams shows me the results of a more clearly vicious phrase: “You are such a bitch.” It rates a 96.’

It goes without saying that both phrases can be open to interpretation. They can both be said in jest or as an expression of frustration. It’s a human thing, we all do it. Shouting obscenities at each other is what we do best.

The horror show continues, meanwhile:

‘But later, after I’ve left Google’s office, I open the Conversation AI prototype in the privacy of my apartment and try out the worst phrase that had haunted [journalist] Sarah Jeong: “I’m going to rip each one of her hairs out and twist her tits clear off.” It rates an attack score of 10, a glaring oversight. Swapping out “her” for “your” boosts it to a 62. Conversation AI likely hasn’t yet been taught that threats don’t have to be addressed directly at a victim to have their intended effect. The algorithm, it seems, still has some lessons to learn.’

I don't know what scares me more, the eager endorsement of such an unworkable A.I. or the fact that he wants it to improve. He welcomes our robotic overlords with open arms. It doesn't take a rocket scientist to work out that the 'haunting' quote that made Greenberg quiver is one made by a troll. The intention is to get a reaction. So congratulations, the troll was sustained by your salt.
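
To make the blind spot concrete, here is a deliberately crude scorer of my own invention, not anything from Jigsaw's code, that leans on the kind of surface cues the quote hints at: direct address ("you"/"your") and a short profanity list. It reproduces both failure modes, scoring the indirect threat low and a harmless idiom high.

def toy_attack_score(text):
    # Entirely hypothetical: weighs direct address and keywords, nothing else
    text = text.lower()
    score = 50 if "you" in text else 0
    for word in ("bitch", "shit", "suck", "rip", "twist"):
        if word in text:
            score += 20
    return min(score, 100)

print(toy_attack_score("I'm going to rip each one of her hairs out"))  # 20: no direct address
print(toy_attack_score("I shit you not"))                              # 70: contains "you" plus profanity

A real model is far more sophisticated than this, but the Wired anecdotes suggest it shares the same basic weakness: it reads surface patterns, not intent.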

The Verge goes on to say:

‘Greenberg notes that he was also able to fool Conversation AI with a number of false-positives. The phrase “I shit you not” got an attack score of 98 out of 100, while “you suck all the fun out of life” scored the same.’

These examples by themselves make the A.I. look incredibly unreliable were it ever implemented. It's already been shown with Tay, Microsoft's Twitter bot, that if you give the internet the chance to mess with an A.I.'s algorithm, they will probably turn it into a Neo-Nazi.


On paper, both A.I.s operate in a similar manner, suggesting that maybe Reddy used Jigsaw's design as a foundation. Whether Google allowed this, however, remains to be seen, since the similarities are definitely there. The difference is that Candid's A.I. is completely harmless in my mind, although I will update this post or write a new one if things change. What I'm more concerned about is Google's A.I. and the ringing endorsement of sites like Wired. Mundane Matt's and Shoe's sponsorship of Candid, along with others', pales in comparison to a man who is willing to let an artificial intelligence rate and decide what you can and can't say online.

Some will say that is what Candid does, which is true to an extent, but from what I've observed in the app, it is mostly shit-posting and random ideas thrown around. I remain sceptical but, in a strange way, optimistic that Candid may succeed where others have failed. Would I recommend it? That depends purely on what you want out of the app in the end. I personally believe Jigsaw poses a greater threat to freedom on the internet, since we are in a time where the MSM will censor anything for any reason. An A.I. similar to the one used by Candid could prove to be an effective countermeasure against perceived trolls or, god forbid, honest criticism.