Posts Tagged ‘Artificial Intelligence’

candid-logo

I have previously written a piece on Candid and its A.I., as well as mentioning Google’s Jigsaw and the possible dangers it poses to freedom of expression on the internet. Initially, I believed Candid’s A.I. to be flawed but relatively harmless in the grand scheme of things. I should never have been so naive.

In the short time I used it, I quickly found the app to be an unorganised mess. There was no real discussion; people were being offensive for the sake of it. There wasn’t a lot of productive conversation in any of the groups, and unlike Minds, you’re limited in how much reach you have. You also risk being damned to the random group if your post is considered offensive, or worse, having the post removed without so much as a notification.

It has already been mentioned that Harmful Opinions’ video criticising Candid could not be posted, but as time has gone by, CEO Bindu Reddy and those under her employment have engaged in a witch-hunt against anyone critical of the app, claiming that there will be legal consequences and that she has a case for libel.

This all appears to come off the back of an Encyclopaedia Dramatica article that goes into some depth on what lies behind Candid’s code. If you are interested in reading it, here is the archived page. However, I suggest reading it outside of work, since ED has a lot of NSFW content dotted around its pages.

The first thing to observe is that Candid is, to some extent, recording the details of those who choose to connect their Facebook accounts (rather defeating the point of anonymity). You can skip this step, although the button to do so is relatively obscure. The real concern is that Candid is data-mining its users via a tracker called Kochava. A quote from MobyAffiliates in the ED article describes Kochava as:

‘Kochava is a mobile app marketing tracker with a unique approach, it looks at all device identifiers as equal and as such is able to match the identifiers of different publishers to provide effective analysis and reporting to advertisers. In addition to this, Kochava also automatically engages a device fingerprinting system, using a number of algorithms incorporating carrier and geo-location to match clicks to installs with an accuracy rate of 85%. Offering deep level integration support, Kochava supports server-to-server integration as well as an SDK for Android and iOS. Match reporting for each attribution includes how (device, hash types etc) and Cohort analysis is offered for ROI overlay as well as optimisation according to various campaign metrics (clicks, installs, post-install revenue etc).’

The ultimate point is that Kochava is using your information to feed you ads. For a service built on allowing users to be anonymous, Candid certainly seems to be doing the exact opposite. The ED article elaborates on how connecting Facebook gives Candid access to your feed, your likes, your app invites and your messages. This data-mining extends to knowing the model of your phone, and even your cellphone number if it’s present on Facebook. But even without connecting to Facebook, further code reveals that your location is being tracked, and that your installed apps are being sorted into lists based on quality.

The ED article is ultimately damning of what has gone into this app’s programming. However, it gets much worse. After Mark Kern spoke negatively about the app, he was forced into silence by those at Candid, who had begun digging into his past. And when Harmful wanted to talk to Reddy on stream, she made the request that ‘comments be disabled’. For someone who has created an app centred on free speech, this is immensely hypocritical.

In all fairness, this situation has been overblown, and Reddy fanning the flames has merely caused the Streisand effect. I honestly wouldn’t be surprised if it escalates further into legal action. Reddy and those working for her seem not to realise that criticism is allowed in today’s society. For those under her to actively seek to ruin the lives of their critics is abhorrent by itself, but in the age of the social media mob, I’m just disappointed. I will note that she has since apologised for these actions, but they never should have been undertaken in the first place. If you want to hear more about all this, Reddy did go on a stream with Bearing, and Harmful posted his own response to that.

If any more developments occur, I will probably write a part 3. If you haven’t read part one, you can find it here. It mentions Jigsaw, Google’s A.I., which has already policed a few comment sections and is going to save humanity from itself.
God help us all…


EDIT: With the code for Candid released, I will be doing a follow-up piece to this one. Due to a lack of knowledge of how Candid is programmed, I believed it was harmless, just a smaller version of Jigsaw. However, an Encyclopedia Dramatica article has revealed that the promise of anonymity it puts forward is a lie. I’ll elaborate in a follow-up piece, but from reading that post, I am shocked that some respected YouTubers endorsed this with very little scepticism. It seems Harmful Opinions was right in the end.

Artificial intelligence is a difficult subject to approach for many reasons. Its depictions in fiction speak volumes about how paranoid we are of becoming too dependent on machines. But this hasn’t stopped Google, and apps like Candid, from developing A.I. capable of judging human behaviour.

The developers of Candid were formerly associated with Google and are co-founders of another app, MyLikes. Candid advertises itself as an app designed to allow speech to flow freely without fear of suppression. The idea that such an app needs to exist in this day and age says a lot about how things are and where they are heading. Candid offers what other social media apps like Facebook and Twitter can’t: anonymity, at least partially, since you don’t have to link any of your accounts to Candid.


The anonymous nature of the app puts it alongside similar sites on the net, specifically 4chan and those that split from it, like 8chan. The difference is that Candid aims to create polite discussion, or as polite as you can be on the internet. Being anonymous means most will be far less hesitant to voice disagreeable opinions. However, reports on Candid suggest the free speech it promotes is not the entire truth. The app itself seems to revolve around the moderation done by its artificial intelligence system, a system with some similarities to Google’s Jigsaw. The comparison between the two was raised by Harmful Opinions, as both appear to measure hostility by rating the post or comment.

In an interview with Fortune, Reddy gives the key reason for the app’s existence:

“Over the last year or two, there has been this kind of repulsion to most social media, especially Facebook and Twitter,” Reddy tells Fortune. “And the reason is that it’s hard to say anything opinionated or even remotely controversial without facing a huge backlash. You can post your puppy photos or whatever, but the minute you post something about politics, it becomes a huge problem.”

She isn’t entirely wrong either: to post anything nowadays is to be met with either harsh criticism or a barrage of unwanted hate, depending on the content of the post. Whether you’re left- or right-leaning, it’s hard not to notice the lack of dialogue between the two groups. The same can be said of the vitriol feminists and MRAs exchange whenever gender and human rights are debated. So perhaps Candid, even with the A.I., is a necessary evil if it means being able to discuss the most controversial of topics.
However, the article goes on to mention that between 40 and 70 percent of what is posted is either flagged or removed outright. That number is pretty high, so you may ask what content is filtered out in order to allow free discussion.

The Washington Post has the answer:

‘Candid’s secret sauce is in its artificial intelligence moderation, which aims to weed out bad actors by analysing the content of posts and keep hate speech and threats off the network. ‘

The fundamental issue I have with this is that ‘hate speech’ alone is too vague, and that the A.I. is apparently capable of detecting sentiment, or at least that’s what Reddy claims. In an interview with NPR, she goes into some detail on how the A.I. operates and how far general developments in artificial intelligence have come. The A.I. uses natural language processing [NLP] to determine the sentiment of a post. As mentioned earlier, this is similar to Jigsaw, Google’s A.I.
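To make that concrete, here is a deliberately simple sketch of what ‘rating the sentiment of a post’ looks like in code. This is a toy keyword scorer, not Candid’s or Jigsaw’s actual NLP model; the word list and weights are invented purely for illustration.

```python
# Toy hostility scorer: NOT Candid's or Jigsaw's real model.
# Real NLP systems learn weights from labelled data; here the
# word list and weights are invented, just to show the shape
# of the interface: a comment in, a 0-100 "attack" score out.
HOSTILE_TERMS = {"bitch": 40, "idiot": 30, "hate": 25, "stupid": 20}

def attack_score(comment: str) -> int:
    """Return a crude 0-100 hostility score for a comment."""
    words = comment.lower().split()
    score = sum(HOSTILE_TERMS.get(w.strip("?,.!'\""), 0) for w in words)
    return min(score, 100)

print(attack_score("What's up, bitch?"))  # 40: one flagged word
print(attack_score("Have a nice day"))    # 0: no flagged words
```

A scorer this naive obviously can’t tell a joke from a threat; a trained model is more sophisticated, but as the excerpts further on show, it inherits the same blind spots.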

This is how The Verge, referencing Wired, describes Jigsaw:

‘Jigsaw, a subsidiary of parent company Alphabet is certainly trying, building open-source AI tools designed to filter out the abusive language. A new feature from Wired describes how the software has been trained on some 17 million comments left underneath New York Times stories, along with 13,000 discussions on Wikipedia pages. This data is labelled and then fed into the software — called Conversation AI — which begins to learn what bad comments look like. ‘

‘Bad comments’ is a very vague standard for determining right and wrong. A bad comment can range from hate to honest criticism or disagreement. Even humans can struggle to read intention in text rather than speech, and doing so depends on the content and specifically its context in relation to what it is responding to. So how can any artificial intelligence match the human mind’s rational thought? An A.I., regardless of how smart it becomes, is still limited by the constraints of its programming. The Verge does express doubt when faced with how Wired’s writer Andy Greenberg reacts to this artificial intelligence.

Like the beginning of a bad sci-fi fanfic, it goes like this:

‘My own hands-on test of Conversation AI comes one summer afternoon in Jigsaw’s office when the group’s engineers show me a prototype and invite me to come up with a sample of verbal filth for it to analyse. Wincing, I suggest the first ambiguously abusive and misogynist phrase that comes to mind: “What’s up, bitch?” Adams types in the sentence and clicks Score. Conversation AI instantly rates it a 63 out of 100 on the attack scale. Then, for contrast, Adams shows me the results of a more clearly vicious phrase: “You are such a bitch.” It rates a 96.’

It goes without saying that both phrases can be open to interpretation. They can both be said in jest or as an expression of frustration. It’s a human thing, we all do it. Shouting obscenities at each other is what we do best.

Meanwhile, the horror show continues:

‘But later, after I’ve left Google’s office, I open the Conversation AI prototype in the privacy of my apartment and try out the worst phrase that had haunted [journalist] Sarah Jeong: “I’m going to rip each one of her hairs out and twist her tits clear off.” It rates an attack score of 10, a glaring oversight. Swapping out “her” for “your” boosts it to a 62. Conversation AI likely hasn’t yet been taught that threats don’t have to be addressed directly at a victim to have their intended effect. The algorithm, it seems, still has some lessons to learn.’

I don’t know what scares me more: the eager endorsement of such an unworkable A.I. or the fact that he wants it to improve. He welcomes our robotic overlords with open arms. It doesn’t take a rocket scientist to work out that the ‘haunting’ quote that made Greenberg quiver is one made by a troll. The intention is to get a reaction. So congratulations: the troll was sustained by your salt.

The Verge goes on to say:

‘Greenberg notes that he was also able to fool Conversation AI with a number of false-positives. The phrase “I shit you not” got an attack score of 98 out of 100, while “you suck all the fun out of life” scored the same.’

These examples by themselves make the A.I. incredibly unreliable if it were ever implemented. It has already been shown with Tay, Microsoft’s Twitter bot, that if you give the internet the chance to mess with an A.I.’s algorithm, they will probably turn it into a neo-Nazi.
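The context-blindness behind false positives like ‘I shit you not’ is easy to reproduce. Below is a deliberately naive bag-of-words scorer, ‘trained’ on a handful of invented, hand-labelled comments (none of this is Jigsaw’s real training data or architecture): it flags the harmless idiom simply because its words overlap with the abusive examples.

```python
from collections import Counter

# Invented training data -- stand-ins for the human-labelled
# comments that systems like Conversation AI learn from.
abusive = ["you are shit", "shit comment you idiot", "you suck"]
civil = ["great article thanks", "i politely disagree", "nice point"]

def word_probs(docs):
    """Relative frequency of each word across a set of documents."""
    counts = Counter(w for d in docs for w in d.split())
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

p_abuse, p_civil = word_probs(abusive), word_probs(civil)

def abuse_score(comment: str) -> float:
    """Fraction of words seen more often in abusive training data."""
    words = comment.lower().split()
    flagged = [w for w in words if p_abuse.get(w, 0) > p_civil.get(w, 0)]
    return len(flagged) / max(len(words), 1)

# "shit" and "you" dominate the abusive examples, so half the words
# of a harmless idiom get flagged -- a false positive by construction.
print(abuse_score("i shit you not"))        # 0.5
print(abuse_score("great article thanks"))  # 0.0
```

Word counts alone carry no notion of idiom, irony, or who a phrase is addressed to, which is the same gap the Wired excerpts expose.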


On paper, both A.I.s operate in a similar manner, suggesting that maybe Reddy used Jigsaw’s design as a foundation. Whether Google allowed this remains to be seen, but the similarities are definitely there. The difference is that Candid’s A.I. is, in my mind, comparatively harmless, although I will update this post or write a new one if things change. What I’m more concerned about is Google’s A.I. and the ringing endorsement of sites like Wired. Mundane Matt and Shoe’s sponsorship of Candid, along with others’, pales in comparison to a man who is willing to let an artificial intelligence rate and decide what you can and can’t say online.

Some will say that is what Candid does, which is true to an extent, but from what I’ve observed in the app, it is mostly shit-posting and random ideas thrown around. I remain sceptical but, in a strange way, optimistic that Candid may succeed where others have failed. Would I recommend it? That depends purely on what you want out of the app. I personally believe Jigsaw poses the greater threat to freedom on the internet, since we are in a time when the MSM will censor anything for any reason. An A.I. similar to the one used by Candid could prove to be an effective countermeasure against perceived trolls or, god forbid, honest criticism.