Sentropy and Online Moderation

Sentropy is a company, recently bought by Discord [1], that says it specialises in automated abuse detection and removal on social media sites. It has created a set of algorithms and machine-learning software that analyses human language, detects abusive language, harassing tendencies and other harmful content posted on its clients' sites, and pre-emptively removes it.
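In the abstract, such a system is just a classifier score checked against a threshold before a post is ever published. Here is a minimal sketch of the idea; the categories, threshold and keyword stand-in for a trained model are all my own assumptions, not Sentropy's actual API:

```python
# Minimal sketch of a pre-emptive moderation gate (assumed design, not
# Sentropy's actual code): score a post across abuse categories before
# publication and silently drop it if any score crosses a threshold.
from typing import Dict, List

ABUSE_THRESHOLD = 0.8  # assumed per-category cut-off

def classify(text: str) -> Dict[str, float]:
    """Stand-in for a trained model; here, a crude keyword heuristic."""
    bad_words = {"idiot", "moron"}  # placeholder vocabulary
    hit = any(word in bad_words for word in text.lower().split())
    return {"insult": 1.0 if hit else 0.0, "identity_attack": 0.0}

def submit_post(text: str, feed: List[str]) -> bool:
    """Publish a post only if no abuse score crosses the threshold."""
    if any(score >= ABUSE_THRESHOLD for score in classify(text).values()):
        return False  # removed pre-emptively; no reader ever sees it
    feed.append(text)
    return True

feed: List[str] = []
print(submit_post("lovely weather today", feed))  # True - published
print(submit_post("you absolute idiot", feed))    # False - dropped
```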

Sentropy also briefly released a free consumer product, Sentropy Protect, which took this technology and let individual users of Twitter (with plans to expand to other sites) apply it to their own social media feeds [2]. Protect, however, was pulled from the market after Sentropy was purchased by Discord [3].

It probably goes without saying that having most information consumed by the general population filtered through a benevolent "anti-harassment" AI would have worrying implications. Sentropy's CEO, John Redgrave, has even acknowledged this potential crisis [4]:

Every week I was watching the impact that online conversations were having on the real world. They were creating a lasting impact. There is a link between URL and IRL. For the younger generation, the difference between their digital and physical selves will be indistinguishable. That is powerful, but also problematic.

Naturally, he feels the solution is not to reduce the power of the internet but to double down on it, handing control of what we see online to a set of computer algorithms. And, just as naturally, we only ever have a company's word that its AI is neutral (politically or otherwise) and suppressing nothing but "harassment". In fact, any suggestion that Sentropy's technology would not be abused the moment it is implemented deserves scrutiny once you consider exactly who is backing the project:

Big Investors - For You

Sentropy has come out of nowhere as a start-up with some $13 million in funding from a group of backers including the co-founder of Reddit and his investment firm, executives of gaming companies and social media sites, an AI research group, streaming site Twitch (a subsidiary of Amazon), and a "former head of government" [4]. What the hell. This list glows in the dark.

Redgrave a SPOOK

Sentropy CEO John Redgrave was formerly COO of Lattice Data, apparently a big-data analysis company since sold to Apple [4][6], and... what's this? Oh, he was also Head of Strategy and Ops at Palantir - not that you'd hear it from him! Palantir is, of course, the US government's favourite big-data analytics firm, in bed with the Intelligence Community. The intelligence connections don't stop there: it turns out Lattice Data brought Redgrave into contact with DARPA [5], where they worked on analysis of busy dark-web forums.

Some of this could be coincidence; it's hardly a surprise that the CEO of what is effectively a social-media data-analysis company has a history of working in analytics. But given the history of tech firms and the American intelligence community, the lack of transparency from Sentropy on this matter, the investor list with its mysterious government official, the company's sudden funding-fuelled rise, and Redgrave's personal connections all seem to indicate that Sentropy is, to a not insignificant degree, connected to the US government.

The Needs of the Few

Implementing these kinds of systems - that is, pre-emptive AI-based blocking and suppression - will greatly benefit large, established sites, pages and people at the expense of smaller and newer ones.

First consider the system itself. Clearly, content that people are already familiar with is much less likely to be flagged as offensive or abusive. But new producers of content (remember this could be blogs, videos, music, even websites - whatever) are typically looking to exploit a gap in the market, so to speak: they create or post something new, it draws an audience to them, and they grow from there. This system will greatly harm that process.
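As a toy illustration of that familiarity bias (my own construction, nothing to do with Sentropy's real model): a classifier trained only on established content has no evidence that unseen vocabulary is benign, so any cautious policy towards out-of-vocabulary tokens falls hardest on whoever is posting something new:

```python
# Toy illustration (an assumption of mine, not Sentropy's model) of why
# novel content is more exposed: tokens the model has never seen carry
# no evidence of being benign, so a cautious policy treats them as risk.
FAMILIAR_VOCAB = {"check", "out", "my", "new", "song", "the", "a", "great"}

def oov_risk(text: str) -> float:
    """Fraction of tokens never seen in the training data."""
    words = text.lower().split()
    return sum(w not in FAMILIAR_VOCAB for w in words) / max(len(words), 1)

print(oov_risk("check out my new song"))       # 0.0 - familiar phrasing passes
print(oov_risk("check out my vaporgrind EP"))  # 0.4 - novel genre name reads as risk
```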

As people grow more and more comfortable with the social-media-feed style of web browsing, they become more reliant on algorithms to select content for them. In the past, new content could still enter a person's view by being linked by friends, or by the person seeking it out themselves. Now even that will be impossible: the offending content can be blocked and removed before anyone besides its creator even knows it exists.
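The architectural difference is worth spelling out. A reader-side filter like Protect only hides content from users who opt in; a platform-side gate removes the post before it ever enters the store that search and links draw from. A rough sketch (names and structure are illustrative, not any real platform's code):

```python
# Illustrative contrast between reader-side filtering (the post still
# exists; each reader opts out of seeing it) and platform-side
# pre-emptive removal (the post never enters the store at all).
from typing import Callable, List

store: List[str] = []  # what search, links and feeds all draw from

def platform_publish(text: str, is_abusive: Callable[[str], bool]) -> None:
    """Pre-emptive gate: a flagged post never reaches the store, so no
    discovery path - friends' links, search - can ever find it."""
    if not is_abusive(text):
        store.append(text)

def reader_view(wants_hidden: Callable[[str], bool]) -> List[str]:
    """Opt-in filter: the content survives; this reader just hides it."""
    return [post for post in store if not wants_hidden(post)]
```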

Naturally this will also have a massive dampening effect on the average social media user's freedom of speech. Given the policies practised by the social media giants today, products like Sentropy's will probably not be used to clamp down only on illegal or directly harmful speech - but they will want it to be seen that way. Like many recent web developments, this appears to be just another piece of technology designed to benefit the companies that receive the biggest volumes of traffic on the internet, with little care given to how it will affect everyone else. For example, we can quite easily suppose a future where Google and other search engines mark sites without this technology as "unsafe", or severely reduce their prominence in search results, perhaps after much clamour from the media and activist groups demanding we but think of the children who may be irreversibly scarred by logging on to an unsafe site.

The Janitorial Revolution and its Consequences

Large social sites are a terrible thing. In these places, moderators are no longer community members entrusted with extra responsibility, but third-party specialists working at the behest of a corporation that cares more about legal repercussions than about any group's freedom of expression. Ultimately, as the moderation bureaucracy grows, we can expect to see more of this, since from a corporate perspective it must fundamentally look like a future cost-cutting endeavour, as Sentropy's VP of Product, Dev Bala, explains [2]:

I think abuse and harassment are rapidly evolving to be an existential challenge for the likes of Facebook, Reddit, YouTube and the rest. These companies will have a 10,000 person organization thinking just about trust and safety, and the world is seeing the ills of not doing that. What's not as obvious to people on the outside is that they are also taking a portfolio approach, with armies of moderators and a portfolio of technology. Not all is built in-house.

Redgrave is even more blunt [5]: "The reality of this problem is it has a bottom line impact for these companies."

In the quest to make online spaces ever more safe and secure, we are going to destroy the very qualities that made them attractive as spaces in the first place. The fact is that moderation of any kind kills conversation, but leaving a space entirely unmoderated is even more hostile to it, so we try to strike a balance; spam, for example, is universally banned, since it's quite hard to talk if you can't even read what the other person has said. But this kind of moderation is something else: it's as if, instead of just talking to your friends, you first had to write a script of what you wanted to say and submit it to an official moderator, who prevented you from meeting at all if he deemed anything potentially abusive. It would be suffocating, and that is the future of online discussion if these people have their way.

Notes

Sentropy seems to have a pretty weird idea of what constitutes abuse. The marketing on their site shows that the material we can expect to be removed includes "remove kebab", off-colour pick-up lines [5] (bonus points for the presumably doubly extreme "purge kebab"), and, apparently, users who appear to be making genuine cries for help: the second row of their example screenshot reads "wish i could just take a bunch of pills and never w[obscured]", where the obscured text presumably reads "ake up".

The fact that "White Supremacist Extremism" is listed separately from "Identity Attack", "Insult", "Physical Violence", etc. indicates to me that it (or its substitute) will be used as their catch-all political-suppression category, since genuine supremacist extremism would obviously already fall under those other headings.

Prophylaxis against Botnet

And finally, although this article ends on a pessimistic note, remember that this kind of suppression would be practically impossible to enforce over a decentralised service, whether it's Pleroma, XMPP, or any other.

Bibliography/Further Reading

1: Sentropy x Discord: A Safer Tomorrow
2: Sentropy launches tool for people to protect themselves from social media abuse, starting with Twitter
3: Discord buys AI anti-harassment company
4: Sentropy emerges from stealth with an AI platform to tackle online abuse, backed by $13M from Initialized and more
5: Introducing Sentropy
6: John R. | LinkedIn