

I have previously written a piece on Candid and its A.I., as well as mentioning Google’s Jigsaw and the possible dangers it poses to freedom of expression on the internet. Initially, I believed Candid’s A.I. to be flawed but relatively harmless in the grand scheme of things. I should never have been so naive.

In the short time I used it, I quickly found the app to be an unorganised mess. There was no real discussion; people were being offensive for the sake of it. There wasn’t a lot of productive discussion in any of the groups, and unlike Minds, you’re limited in how much reach you have. You also risk being damned to the random group if your post is deemed offensive, or worse, having the post removed without so much as a notification.

It’s already been mentioned how Harmful Opinions’ video criticising Candid could not be posted, but as time has gone by, CEO Bindu Reddy and those under her employment have engaged in a witch-hunt against anyone critical of the app, claiming that there will be legal consequences and that she has a case for libel.

This all appears to be coming off the back of an Encyclopaedia Dramatica article that goes into some depth on what lies behind the code of Candid. If you are interested in reading it, then here is the archived page. However, I do suggest reading it outside of work since ED has a lot of NSFW content dotted around its pages.

The first thing to observe is that Candid is, to some extent, recording the details of those who choose to connect their Facebook accounts (rather defeating the point of anonymity). You can skip this step, although the button to do so is relatively obscure. The real concern is that Candid is data-mining its users using an app called Kochava. A quote from MobyAffiliates in the ED article describes Kochava as:

‘Kochava is a mobile app marketing tracker with a unique approach, it looks at all device identifiers as equal and as such is able to match the identifiers of different publishers to provide effective analysis and reporting to advertisers. In addition to this, Kochava also automatically engages a device fingerprinting system, using a number of algorithms incorporating carrier and geo-location to match clicks to installs with an accuracy rate of 85%. Offering deep level integration support, Kochava supports server-to-server integration as well as an SDK for Android and iOS. Match reporting for each attribution includes how (device, hash types etc) and Cohort analysis is offered for ROI overlay as well as optimisation according to various campaign metrics (clicks, installs, post-install revenue etc).’

The ultimate point is that Kochava is using your information to feed you ads. For a service built on allowing users to be anonymous, it certainly seems to be doing the exact opposite. The ED article elaborates on how connecting Facebook allows Candid access to your feed, your likes, your app invites and your messages. This data-mining extends to knowing the model of your phone, and even your phone number if it’s present on Facebook. But even without connecting to Facebook, the code reveals that your location is being tracked. It’s also sorting your apps into lists based on quality.
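To make concrete what ‘device fingerprinting’ of this sort involves, here is a minimal sketch in Python. Every field name, the choice of SHA-256 and the exact-match rule are my own illustrative assumptions; Kochava’s real system is probabilistic and far more involved than this.

```python
import hashlib

# Illustrative sketch only: hash a handful of weak identifiers
# (the quote mentions carrier and geo-location) into one fingerprint.
# Field names are invented, not Kochava's actual schema.
def fingerprint(device: dict) -> str:
    parts = [device.get(k, "") for k in ("model", "os_version", "carrier", "geo")]
    return hashlib.sha256("|".join(parts).encode()).hexdigest()

def match_click_to_install(click: dict, install: dict) -> bool:
    """Attribute a click and an install to the same device when their
    fingerprints collide -- no login or explicit device ID required."""
    return fingerprint(click) == fingerprint(install)

click = {"model": "Pixel", "os_version": "7.0", "carrier": "EE", "geo": "51.5,-0.1"}
install = dict(click)  # the same device, seen twice
print(match_click_to_install(click, install))  # True
```

The point of the sketch is that none of these attributes is personally identifying on its own, yet combined they can single a device out, which is why the quoted 85% match rate is plausible without any ‘account’ ever being created.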

The ED article is ultimately damning of what has gone into this app’s programming. However, it gets much worse. After Mark Kern spoke negatively about the app, he was forced into silence by those at Candid, who had begun digging into his past. And when Harmful wanted to talk to Reddy on stream, she requested that ‘comments be disabled’. For someone who has created an app centred on free speech, this is immensely hypocritical.

In all fairness, this situation has been overblown, and Reddy fanning the flames has merely caused the Streisand Effect. If this escalates any further into legal action, I honestly wouldn’t be surprised. Reddy and those working for her seem not to realise that criticism is allowed in today’s society. For those under her to actively seek to ruin the lives of their critics is abhorrent by itself, but in the age of the social media mob, I’m just disappointed. I will note that she has since apologised for these actions, but they never should have been undertaken in the first place. If you want to hear more about all this, Reddy did go on a stream with Bearing, and Harmful posted his own response to that.

If any more developments occur, I will probably write a part 3. If you haven’t read part one, you can find it here. It mentions Jigsaw, Google’s A.I., which has already policed a few comment sections and is going to save humanity from itself.
God help us all…


EDIT: With the code for Candid released, I will be doing a follow-up piece to this one. Due to a lack of knowledge about how Candid is programmed, I believed it was harmless, just a smaller version of Jigsaw. However, an Encyclopedia Dramatica article has revealed that the promise of anonymity put forward is a lie. I’ll elaborate in a follow-up piece, but from reading that post, I am shocked that some respected YouTubers endorsed this with very little scepticism. It seems Harmful Opinions was right in the end.

Artificial Intelligence is a difficult subject to approach for many reasons. Its depictions in fiction speak volumes of how paranoid we are of becoming too dependent on machines. But this hasn’t stopped Google, and apps like Candid, from developing smart A.I. capable of judging human behaviour.

The developers of Candid were formerly associated with Google and are co-founders of another app, MyLikes. Candid advertises itself as an app designed to allow speech to flow freely without fear of suppression. The idea that such an app needs to exist in this day and age says a lot about how things are and what they are progressing towards. Candid offers what other social media apps like Facebook and Twitter can’t. Anonymity is at least partially provided, in that you don’t have to link any of your accounts to Candid.


The anonymous nature of the app puts it alongside similar sites on the net, specifically 4chan and those that split from it, like 8chan. The difference is that Candid aims to create polite discussion, or as polite as you can be on the internet. Being anonymous means most will be far less hesitant to voice disagreeable opinions. However, reports on Candid suggest the free speech it promotes is not the entire truth. The app itself seems to revolve around the moderation done by its artificial intelligence system, one that has some similarities to Google’s Jigsaw. The comparison between the two was raised by Harmful Opinions, as both appear to measure hostility by rating the post or comment.

In an interview with Fortune, Reddy raises the key reason for the app’s existence:

“Over the last year or two, there has been this kind of repulsion to most social media, especially Facebook and Twitter,” Reddy tells Fortune. “And the reason is that it’s hard to say anything opinionated or even remotely controversial without facing a huge backlash. You can post your puppy photos or whatever, but the minute you post something about politics, it becomes a huge problem.”

She isn’t entirely wrong either; to post anything nowadays is to be met with either harsh criticism or a barrage of unwanted hate. It all depends on the content of the post, though. Whether you’re left or right leaning, it’s hard not to notice a lack of dialogue between the two groups. The same can also be said of the vitriol feminists and MRAs exchange whenever gender and human rights are debated. So perhaps Candid, even with the A.I., is a necessary evil if it means being able to discuss the most controversial of topics.
However, the article carries on to mention that between 40 and 70 percent of what is posted is either flagged or removed outright. That number is pretty high, but you may ask what content is filtered out in order to allow free discussion.

The Washington Post has the answer:

‘Candid’s secret sauce is in its artificial intelligence moderation, which aims to weed out bad actors by analysing the content of posts and keep hate speech and threats off the network. ‘

The fundamental issue I have with this is that ‘hate speech’ alone is too vague, and that the A.I. is apparently capable of detecting sentiment, or at least that’s what Reddy claims. In an interview with NPR, she goes into some detail on how the A.I. operates and how far general developments in Artificial Intelligence have come. The A.I. uses natural language processing (NLP) to determine the sentiment of a post. One of the things mentioned earlier is the similarity to Jigsaw, Google’s A.I.

This is how the Verge, referencing Wired, describes Jigsaw:

‘Jigsaw, a subsidiary of parent company Alphabet is certainly trying, building open-source AI tools designed to filter out the abusive language. A new feature from Wired describes how the software has been trained on some 17 million comments left underneath New York Times stories, along with 13,000 discussions on Wikipedia pages. This data is labelled and then fed into the software — called Conversation AI — which begins to learn what bad comments look like. ‘
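The pipeline described above, labelled comments in, a model of what ‘bad’ looks like out, can be sketched in a few lines. This toy bag-of-words scorer is purely illustrative of the supervised-learning idea; the sample comments, labels and scoring rule are my own inventions, and Conversation AI’s actual model is far more sophisticated.

```python
from collections import Counter

# Toy training data: each comment carries a human-applied label,
# just as the NYT and Wikipedia comments described above did.
labelled = [
    ("you are an idiot", 1),        # 1 = flagged as abusive
    ("this article is wrong", 0),   # 0 = acceptable
    ("go away idiot", 1),
    ("interesting point, thanks", 0),
]

# 'Training' here is just counting which words appear in which class.
bad_words, good_words = Counter(), Counter()
for text, label in labelled:
    (bad_words if label else good_words).update(text.split())

def attack_score(comment: str) -> float:
    """Score 0-100: fraction of words seen more often in flagged comments."""
    words = comment.split()
    hits = sum(1 for w in words if bad_words[w] > good_words[w])
    return 100 * hits / max(len(words), 1)

print(attack_score("you idiot"))  # 100.0
```

Even this crude version shows why such systems are brittle: the model only knows word statistics, not meaning, which sets up the failures discussed next.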

‘Bad comments’ is a very vague way of determining right and wrong. A bad comment can range from hate to honest criticism or disagreement. Most humans struggle to read intention when it is written rather than spoken, and even then it depends purely on the content and specifically its context in relation to what it is responding to. So how can any artificial intelligence match the human mind’s rational thought? An A.I., regardless of how smart it becomes, is still limited by the constraints of its programming. The Verge does express doubt when faced with how Wired’s Andy Greenberg reacts to this artificial intelligence.

Like the beginning of a bad sci-fi fanfic, it goes like this:

‘My own hands-on test of Conversation AI comes one summer afternoon in Jigsaw’s office when the group’s engineers show me a prototype and invite me to come up with a sample of verbal filth for it to analyse. Wincing, I suggest the first ambiguously abusive and misogynist phrase that comes to mind: “What’s up, bitch?” Adams types in the sentence and clicks Score. Conversation AI instantly rates it a 63 out of 100 on the attack scale. Then, for contrast, Adams shows me the results of a more clearly vicious phrase: “You are such a bitch.” It rates a 96.’

It goes without saying that both phrases can be open to interpretation. They can both be said in jest or as an expression of frustration. It’s a human thing, we all do it. Shouting obscenities at each other is what we do best.

Meanwhile, the horror show continues:

‘But later, after I’ve left Google’s office, I open the Conversation AI prototype in the privacy of my apartment and try out the worst phrase that had haunted [journalist] Sarah Jeong: “I’m going to rip each one of her hairs out and twist her tits clear off.” It rates an attack score of 10, a glaring oversight. Swapping out “her” for “your” boosts it to a 62. Conversation AI likely hasn’t yet been taught that threats don’t have to be addressed directly at a victim to have their intended effect. The algorithm, it seems, still has some lessons to learn.’

I don’t know what scares me more: the eager endorsement of such an unworkable A.I., or the fact that he wants it to improve. He welcomes our robotic overlords with open arms. It doesn’t take a rocket scientist to work out that the ‘haunting’ quote that made Greenberg quiver is one made by a troll. The intention is to get a reaction. So congratulations: the troll was sustained by your salt.

The Verge goes on to say:

‘Greenberg notes that he was also able to fool Conversation AI with a number of false-positives. The phrase “I shit you not” got an attack score of 98 out of 100, while “you suck all the fun out of life” scored the same.’
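These failures are easy to reproduce with any scorer that keys off surface tokens rather than intent. A minimal sketch, with a word list and weights I have invented for illustration (not Conversation AI’s real rules), shows the same pattern: profanity without malice scores high, while a genuine threat phrased without profanity scores zero.

```python
# Invented weights for illustration only.
PROFANITY = {"shit": 90, "bitch": 60, "suck": 70}

def naive_attack_score(comment: str) -> int:
    """Return the highest weight of any flagged token in the comment."""
    words = comment.lower().replace(",", "").split()
    return max((PROFANITY.get(w, 0) for w in words), default=0)

print(naive_attack_score("I shit you not"))             # 90: a false positive
print(naive_attack_score("I will find where you live")) # 0: a missed threat
```

No amount of tuning the word list fixes this, because the problem is structural: the signal being measured (vocabulary) is not the thing being judged (intent).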

These examples by themselves make the A.I. incredibly unreliable if it were ever implemented. It’s already been shown with Tay, Microsoft’s Twitter bot, that if you give the internet the chance to mess with an A.I.’s algorithm, they will probably turn it into a Neo-Nazi.


On paper, both A.I. systems operate in a similar manner, suggesting that maybe Reddy used Jigsaw’s design as a foundation. Whether Google allowed this, however, remains to be seen, since the similarities are definitely there. The difference is that Candid’s A.I. is, in my mind, completely harmless, although I will update this post or write a new one if things change. What I’m more concerned about is Google’s A.I. and the ringing endorsement from sites like Wired. Mundane Matt’s and Shoe’s sponsorship of Candid, along with others’, pales in comparison to a man who is willing to let an artificial intelligence rate and decide what you can and can’t say online.

Some will say that is what Candid does, which is true to an extent, but from what I’ve observed in the app, it is mostly shit-posting and random ideas thrown around. I remain sceptical, but in a strange way optimistic, that Candid may succeed where others have failed. Would I recommend it? That depends purely on what you want out of the app in the end. I personally believe Jigsaw poses a greater threat to freedom on the internet, since we are in a time where the MSM will censor anything for any reason. An A.I. similar to the one used by Candid could prove to be an effective countermeasure against perceived trolls or, god forbid, honest criticism.

No Platforming

No Platform is a policy of the National Union of Students (NUS) of the United Kingdom. Like other no platform policies, it asserts that no proscribed person or organisation should be given a platform to speak, nor should a union officer share a platform with them. – NUS No Platform Policy

The no platform policy has existed for many years, and its singular intent is to selectively limit who can and cannot speak at university campuses across the U.K., with protesters actively disrupting speakers or attempting to prevent students from attending an event. This can be for a variety of reasons, but the policy mainly applies to those who may hold racist or fascist views, though it does extend to other views that the union may deem offensive, such as transphobia. This piece aims to show just how many people of varying backgrounds have been deemed unsafe for holding a disagreeable perspective.

Firstly, radical feminist Julie Bindel was No Platformed after the NUS concluded she held transphobic views. Bindel responded to this with her own article in the Guardian. In the article, she claims that her exclusion proves this is an anti-feminist crusade, and refers to how a male student leader accused student supporters of hers of being ‘transphobes’ and ‘whorephobes’. This accusation stems from statements made in Bindel’s 2004 article ‘Gender benders, beware‘. However, it should be noted that Bindel recently apologised for her comments in that particular article. There is a point in her article on No Platforming that I agree with: that ‘the current climate in universities of creating “safe spaces” in which no evil must enter is pathetic’. She is right about this, especially if we consider that academia is supposed to be the place where controversial and otherwise offensive views are challenged and countered rather than suppressed. However, she also directly contradicts her ‘safe spaces’ statement by later stating:

“Initially, the University of Manchester decided to no platform me and not my opponent, Milo Yiannopoulos, a vocal anti-feminist, (though he too was later dis-invited, after protests over the hypocrisy). In doing so, they handed me a gift. Here is proof that this is an anti-feminist crusade, and nothing at all about so called safe spaces.”

So which is it, Bindel? Is the NUS creating safe spaces, or are the two unrelated? She is also wrong that this is an anti-feminist crusade; No Platforming can and has affected both sides of the political spectrum. But this isn’t the first time the NUS has coddled students or acted in their interests. It’s been happening for many years, and is only becoming more prominent.

For instance, in 2010, two members of the NUS forced the University of Durham’s student union to cancel a proposed debate on multiculturalism. An article written by Mark Tallentire mentions how the ‘student debate featuring two BNP politicians’ was cancelled due to fears of violence. Furthermore, the article states that Anna Birley from the NUS said she was ‘confident the debate would have been intelligent, responsible and an opportunity for students to challenge offensive views; and was disappointed the focus had become the threatened confrontation outside.’ On the other hand, Simon Assas from the Unite Against Fascism (UAF) group ‘called it a victory for common sense and for people who wanted to stand up against racism and fascism.’ The article goes on to state how the NUS’s president at the time, Wes Streeting, ‘believed there was no place on university campuses for the BNP; and that the idea the NUS, rather than the BNP, had caused a welfare and public order issue was preposterous.’ Paul Nicholls co-founded the Facebook group Durham University Students for Freedom of Speech in response to the cancellation of the debate. The group gained a lot of support from students at the university, and Nicholls claimed that this was an ‘NUS betrayal of students outside the chamber where the debate would have been held.’ This happened six years ago, but even so, despite what Bindel claims, No Platforming is something that can and may at some point affect us all. It also suggests that this growing act of censorship is being initiated only by a select group within the NUS, and that the NUS itself is not entirely to blame.

In addition, more recently, Nick Lowles, the head of a campaign which seeks to counter racism and fascism in the UK, was allegedly “no-platformed” by the NUS on the grounds that he is “Islamophobic”. The Independent reported that this may have been tied to a Facebook post by Lowles, shared by a user on Twitter, in which he claims black students opposed his appearance on an anti-racism platform. Lowles, in his interview with the Independent, believed that the students took issue with his position of condemning on-street grooming by gangs:

“My crime, it seems, has been to repeatedly call on the anti-racist movement to do more to condemn on-street grooming by gangs and campaigning against Islamist extremist groups in the UK and abroad. I make no apology for either position. We need to be consistent in our opposition to extremism – from whatever quarter it comes.”

He also adds that he has no issue with the NUS as a whole, only with those who are ultra-leftists. Another man who has been No Platformed is veteran activist Peter Tatchell, who was No Platformed for allegedly being racist, as claimed by Fran Cowling. She had refused to go on stage at Canterbury Christ Church University unless he was not in attendance. The reason purportedly being:

‘Ms Cowling stated in emails to event organizers that she could not share the stage with Mr Tatchell, because he signed an open letter in the Observer last year supporting free speech and against no-platforming, the practice by some universities to ban speakers because of their views.’

This statement alone suggests what many have come to fear about some students on university campuses: they are literally afraid to confront someone in an open debate or discussion because they simply can’t handle someone else’s perspective. It’s unsurprising, then, that Tatchell would also come to Lowles’ defence; he said to the Independent that Lowles’ campaign has a ‘trail-blazing’ record of fighting fascism and racism. He suggests that ‘the idea that there should be any attempt to prevent them speaking is profoundly disturbing. It smacks of political sectarianism of the worst kind.’ The most important part of Tatchell’s statement is that he feels some within the NUS are more concerned with fighting their fellow activists than actually contesting real racists or fascists. It should be noted that very recently Mr Tatchell came to the defence of the person who attempted to censor him, stating that whilst he was disappointed she wouldn’t debate him, he respects her right to choose. He was also asked if he would debate Manny Pacquiao over his controversial comments that gay people are “worse than animals.” His reply was:

“I’d be prepared to share a platform with any bigot in order to challenge and expose them. Not just Manny, but Vladimir Putin, Robert Mugabe. I would, and have, shared platforms with lots of bigoted people in the past, and I think successfully have exposed their prejudice.”

Tatchell shows that the best way to overcome offensive speech is to challenge it head-on, instead of conjuring up petitions to get a speaker cancelled.

Brendan O’Neill, a journalist, has also been critical of No Platforming and the rising opposition to freedom of speech. He wrote in the Telegraph how these student leaders think controversial ideas should be crushed rather than contested. He also aptly describes how students at Cardiff University have tried to erect a Greer-deflecting forcefield around their campus. Greer was No Platformed because she also believes transitioning men are not women. This led to the women’s officer at Cardiff University, Rachael Melhuish, petitioning specifically for a lecture Greer was booked to give to be cancelled, stating in the petition:

‘While debate in a University should be encouraged, hosting a speaker with such problematic and hateful views towards marginalised and vulnerable groups is dangerous. Allowing Greer a platform endorses her views, and by extension, the transmisogyny which she continues to perpetuate.’

However, despite the university stating the debate would go on, Greer cancelled the event. Zoe Williams’ piece in the Guardian seems to show a mixed perspective: on the one hand, she suggests as an example that nothing is gained by allowing those who are openly racist to express views on multiculturalism, but she does conclude that the best way to counter transphobia is not to silence it, but to take the person’s argument to task through argument and persuasion. The truth of the matter is that you win nothing by turning your back on those who hate; it merely stirs them to action. If you give them a platform, you can show how absurd their ideas are, and any support they would have had vanishes in that instant. Another example of censorship at Cardiff University is that Dapper Laughs was also banned from performing there. A petition was also used, declaring that ‘Misogynistic humour should not be supported by an organisation that stands for equality.’ This is also one of many reasons why comedians refuse to go on stage at universities. Offensive humour is simply not allowed due to the sensitive nature of some students.

Furthermore, O’Neill’s article goes on to add that ‘it’s now commonplace to hear students describe certain ways of thinking as a threat to their “mental safety”. Where once students might have raged and blasphemed against The Man, now they set up “safe spaces” where no offensive word may be uttered or saucy image displayed.’ He uses many examples to show how the expression of students on campuses is being stamped on, such as how ‘the student union at University College London banned a Nietzsche reading group, claiming it was encouraging students to dip into “fascist ideology”. Even philosophers aren’t safe from campus bans.’ He also perfectly sums up this rising epidemic of ever more restrictive policies:

“Feminists, thinkers, songs, magazines, Israelis — nothing is safe from controversy-allergic student officials. Whenever they crush or hush things they deem offensive, they use the same justification: that it’s important to protect students’ self-esteem and “mental safety”.”

Another example of No Platforming relates to controversial journalist Milo Yiannopoulos, a writer for the site Breitbart. Milo has seen controversy follow him wherever he walks, especially since he was one of the few journalists who didn’t outright condemn the GamerGate consumer revolt. In a Skype call with the Independent Journal, Milo states:

‘They are not banning just conservative voices – but anyone who exists outside progressive liberal bubble. The loss to well-rounded education is incalculable. They can’t get exposure to opposing views. At university, you ought to be challenged. You ought to be uncomfortable, you should be confronted with disagreement. It’s a place of education – challenging your beliefs.’

The whole point of going to university, as he states, is to challenge yourself, and that may mean you will meet people who see the world in a completely different light; you gain no victory by protesting them. Because as the cases above show, feminists, comedians, conservatives and liberals who don’t follow progressive dogma may be met with a wall of pitchforks and torches from either the NUS or their fellow students.

Finally, Sean Faye at the Independent recently came out in defence of No Platforming, attempting to point out the hypocrisy of the likes of Julie Bindel. Faye goes on to state:

‘Perhaps it’s true that this is merely “special snowflake” behaviour. However, to my mind, the coddled and sanctimonious voices here are not the students engaging in “no-platforming” or withdrawing from debates but an elder generation of activists petulantly claiming they are being silenced from their “right” to be heard in the national press.’

In the article, he misrepresents what free speech is. It is a fundamental right of expression that goes beyond law and our human rights, and has nothing to do with the ‘eye of the beholder’. He also claims that it is ‘the frequent defence of the oppressor who knows that minorities lack the same power to exercise their own free speech in approved ways.’ The issue with this statement is that it assumes a person lacks power based on skin colour, which is simply untrue, especially on campus. Thus Faye concludes that the censorious nature of No Platforming is a part of free speech. And in that instant, he has condemned himself, and everyone around him, to the possibility of being silenced by student activists who themselves feel entitled to control your right of expression.