Be scared: Internet companies with the power to erase you from the internet

Jordan Carter, Group CEO of InternetNZ & Brent Carey, Domain Name Commissioner

A movie trailer I once watched explored the fictional life of a man whose online life had been wiped from the internet. He could no longer access his bank accounts, all his online accounts had been deleted, and evidence of his existence, such as online records of his birth certificate, was gone. He had been erased from the internet, which made him an offline non-person.

We have discussed before the right of private companies to censor and deplatform whomever they like. Tommy Robinson, for example, has been deplatformed by so many payment providers and social media companies that he has lost more than 70% of his income. His fans want to support him, but private companies have taken that free choice away.

If you think that is scary, consider what happens when private companies are pressured or controlled by socialist governments, who have no respect for free speech, to remove other companies from the internet.

One example is domain names. A company can purchase or rent a domain name, but if that domain name ends in .nz, its access to the internet is no longer guaranteed. The managers of the .nz internet domain are working on a policy that will see them taking orders from domain name commissioner Brent Carey. Carey has the power to order InternetNZ to temporarily transfer, suspend or lock a domain name registration. He, in turn, takes his orders from the Department of Internal Affairs, the government cyber security agency CERT NZ, or a court.

[…] It applies to cases where the use of the .nz domain space is causing, or may cause, irreparable harm to any person or the operation of the .nz domain space.

[…] “Some people think the domain name registry should take an interest in copyright enforcement, or hate speech, or scams,” Carter said. “Any issue raised by anyone will get considered.”


As always, this is a slippery slope. Once private companies start expanding the list of reasons for taking a website down, socialist governments will ask them to do it more and more often. Note the language that already gives plenty of wiggle room: “may cause irreparable harm to any person or the operation of the .nz domain space”. I could say that a young man with a new driver’s licence “may cause” an accident, but no one would think it acceptable to take his licence off him on that basis. This is punishing people for what might happen rather than for actually committing a crime. That is a dangerous line to cross.

Worryingly, hate speech is already being mentioned, and you know where that conversation is going to go.

It is not just InternetNZ, however, that we have to worry about. Other platforms want centralised censorship.


IN THE IMMEDIATE aftermath of the horrific attacks at the Al Noor Mosque and Linwood Islamic Centre in Christchurch, New Zealand, internet companies faced intense scrutiny over their efforts to control the proliferation of the shooter’s propaganda.

[…] some of these responses have also included ideas that point in a disturbing direction: toward increasingly centralized and opaque censorship of the global internet.

Facebook, for example, describes plans for an expanded role for the Global Internet Forum to Counter Terrorism, or GIFCT. The GIFCT is an industry-led self-regulatory effort launched in 2017 by Facebook, Microsoft, Twitter, and YouTube.

[…] the GIFCT is “experimenting with […] creating a centralized (black)list of URLs that would facilitate widespread blocking of videos, accounts, and potentially entire websites or forums.”

Microsoft president Brad Smith […] suggests a “joint virtual command center” that would enable tech companies to coordinate during major events and decide what content to block and what content is in “the public interest.” […]

One major problem with expanding the hash database is that the initiative has long-standing transparency and accountability deficits. No one outside of the consortium of companies knows what is in the database. There are no established mechanisms for an independent audit of the content, or an appeal process for removing content from the database.

People whose posts are removed or accounts disabled on participating sites aren’t even notified if the hash database was involved. So there’s no way to know, from the outside, whether content has been added inappropriately and no way to remedy the situation if it has.

The risk of overbroad censorship from automated filtering tools has been clear since the earliest days of the internet, and the hash database is undoubtedly vulnerable to the same risks. […]
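To make the mechanism being criticised concrete, here is a minimal sketch of how a shared hash database works in principle: one member company hashes a flagged file and adds the hash to a shared list, and every other member then silently blocks uploads that match. Everything here (the function names, the use of plain SHA-256) is illustrative only; real systems such as the GIFCT database reportedly use perceptual hashes that survive re-encoding, whereas an exact hash only matches byte-identical copies.

```python
import hashlib

# Illustrative only: the industry-shared blocklist is a set of hashes,
# not the files themselves. Its contents are opaque to outsiders.
shared_hash_db = set()

def add_to_database(content: bytes) -> str:
    """One member company flags content; every member now blocks matches."""
    digest = hashlib.sha256(content).hexdigest()
    shared_hash_db.add(digest)
    return digest

def should_block(upload: bytes) -> bool:
    """Each platform checks uploads against the shared list.
    The uploader is never told this check happened, which is the
    transparency problem described above."""
    return hashlib.sha256(upload).hexdigest() in shared_hash_db

add_to_database(b"flagged video bytes")
print(should_block(b"flagged video bytes"))   # exact copy: blocked
print(should_block(b"slightly re-encoded"))   # exact hashing misses variants
```

Note how the sketch exposes both sides of the argument: blocking is instant and coordinated across platforms, but there is no notification, no audit trail, and no appeal path anywhere in the flow.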

The post-Christchurch push for centralizing censorship goes well beyond the GIFCT hash database. Smith raises the specter of browser-based filters that would prohibit users from accessing or downloading forbidden content; if these in-browser filters are mandatory or turned on by default, this pushes content control a level deeper into the web.

Three ISPs in Australia took the blunt step of blocking websites that hosted the shooting video until those sites removed the copies. While the ISPs acknowledged that this was an extraordinary circumstance, this decision was a stark reminder of the power of internet providers to exercise ultimate control over what users can access and post.

When policymakers and industry leaders talk about how to manage insidious content that takes advantage of virality for horrific aims, their focus typically falls on how to ensure that content removal is swift and comprehensive. But proposals for quick and widespread takedown, with no safeguards or even discussion of the risks of overbroad censorship, are incomplete and irresponsible.

[…] We’ve already seen governments, including the European Union, look to co-opt the hash database and transform it from a voluntary initiative into a legislative mandate, without meaningful safeguards for protected speech.

[…] there’s a fundamental threat posed by solutions that rely on centralizing content control: The strength of the internet for fostering free expression lies in its decentralized nature, which can support a diversity of platforms. This decentralization allows some sites to focus on providing an experience that feels safe, or entertaining, or suitable for kids, while others aim to foster debate, or create an objective encyclopedia, or maintain an archive of videos documenting war crimes […].