The long-standing debate around the control of social media in Nigeria took a new turn last week with the release by NITDA of the Draft Code of Practice for Interactive Computer Service Platforms/Internet Intermediaries. Predictably, the Code has launched a new controversy around the motives of the government in coming up with it at this time in our history. One ground of suspicion is that the Code is coming just as we are entering an electioneering campaign period. Mischief-makers say that this government, which benefited greatly from the use of social media in the run-up to the 2015 general elections when it was in opposition, does not want to be hurt the same way it had used that same social media to hurt the campaign aspirations of the former ruling party.
But there have also been many initiatives in the last seven or so years by this government to control the use of social media. They include the anti-social media bill, the hate speech bills and many other efforts, including the suspension of the operations of Twitter in the country for six months. These have fueled suspicions on the part of the public that this government is only too happy to make it difficult for citizens to use social media.
The expressed aim of the Code is to make social media safe for citizens, which is a noble objective. But it is important to ensure that in achieving a safer social media space, we do not make it impossible to use. There is no doubt that social media, like any other technology, is being misused in the country. This misuse manifests in various forms: the spread of misinformation and disinformation, the proliferation of hate and dangerous speech, the commodification of nudity, child pornography, sexual exploitation and human trafficking, as well as the recruitment of young people into violent groups such as terrorist cells and bandit gangs. There are also other crimes such as scamming, impersonation and identity theft. All of these make cyberspace a site into which many fear to venture.
These are, however, not peculiar to social media or even to Nigeria. Every technology is capable of being used and misused, and people are socialised into the beneficial uses of a technology from their earliest contact with it, so that they grow to know how to use it for the good of society. These crimes are not the products or consequences of social media; they predate it. They are, in fact, projections of the offline versions of these crimes. That for centuries we have not been able to stamp them out means it would be naive to think they can simply be eradicated by a Code. Codes do help, but not everything can be cured by Codes, and many of the ills of social media are of that nature. They require an entirely different approach.
Admittedly, the ills of social media have been counterproductive to its very essence. But it will not do to throw the baby out with the bathwater. Moreover, not all users of social media indulge in these anti-social uses; in reality, very few do. This is not to say that what the small minority does is not worrisome. At the place where I work, we have spent a considerable length of time fighting many of these ills. For instance, since 2014, we have been running an observatory for monitoring and countering hate speech in the country. We have also run sensitization programmes to enlighten and alert Nigerians to the dangers of hate speech and what we could collectively do to rid cyberspace of it. We are also identifying and countering fake news, misinformation and disinformation, as well as combating gender violence online. In all these, we have sought the partnership of all stakeholders, including the government, to develop national strategies for dealing with these issues, drawing on global best practices.
However, government discourse on the problem tends to focus more on control than on educating and empowering citizens to know the limits of their freedom, which would be most helpful; and it is from this perspective that I see the weaknesses of the Code as a solution. The Code is sweeping in many of its assumptions and prescriptions.
Take, for example, its attempt to criminalize platforms for providing space for the crimes of their users. Had that road been taken, the internet as we know it today would not have existed. Following this logic of pushing the burden of users’ misuse onto the platform providers, one of the provisions of the Code says that “A Platform must acknowledge the receipt of the complaint and take down the content within 24 hours.” This gives the government the unchallenged power to make unilateral determinations, to classify items, and to be both judge and prosecutor. Platform providers operate on a multi-layered architecture that requires escalation before a decision is reached. Many of the issues that go to the top are questions of interpretation, and most cannot be resolved within 24 hours, unless the intention is that whatever the government says it does not want becomes law that can neither be contested nor subjected to independent and neutral interpretation. This can easily lead to abuse. Even on the seemingly settled matter of deleting nudity, the Code makes no exceptions. For instance, certain levels of nudity are needed for educational purposes, and certain nudity could be used to mobilize against crimes and to raise awareness. By making a no-exception case, the government simply makes it difficult to use relevant images for these purposes.
There are also provisions that seek to outsource the functions of government to the platform providers. One of these says that they should “Exercise due diligence to ensure that no unlawful content is uploaded to their Platform.” Such a task is the responsibility of the police and other law enforcement agencies. Platform providers are not content providers or owners, and they do not have the capacity to carry out due diligence to ensure that billions of users do not upload “unlawful” content. Another says that platform providers should “Make provision for verifying official government accounts and authorised government agencies.” This is the responsibility of the government through its relevant agencies. If the government is unable to come up with an enforceable guideline for the use of social media by its agents and officers, it should not push that burden onto third parties.
The elements of control-thinking can also be seen in the use of vague terms. For instance, we all know that certain content could cause psychological harm. But there is no scale for psychological pain, and people react differently, with different thresholds of being affected. Without clear rules establishing levels of harm, this can lead to arbitrariness. If I write that a minister has been involved in corrupt deals, he or she can plead “psychological harm”, and both I and the provider are in trouble.
The Code also deploys a stacking technique, loading offences of a different nature into a single line. Take, for instance, Article 2 of Part II, which requires platforms to inform users not to create, publish, promote, modify, transmit, store or share any content or information that “is defamatory, libellous, pornographic, revenge porn, bullying, harassing, obscene, encouraging money laundering, exploiting a child, fraud, violence, or inconsistent with Nigeria’s laws and public order.” Clearly, libel and defamation are offences already governed by clear laws, whether committed offline or online. So why add them here? Their inclusion can only serve to frighten users of social media.
Even seemingly innocuous terms such as “false or misleading” are difficult to define. Is an item misleading because of intent, or because of its effect? If I put up content and someone feels misled, is it misleading simply because that person thinks it is, perhaps due to his or her (mis)interpretation? Or because he or she draws the wrong conclusion? Or because of the materiality of the item? Being misled does not follow a straight cause-and-effect logic. As for falsity, it can turn on counterfactual information that comes to light only after publication. In other words, material could be true at the point of publication and become false afterwards. In such a case, there was no intent to publish a false item, and no false information was published, even if by the time it is read, it is no longer true.
Finally, the Code seeks to order platforms to preserve any information concerning a person who is no longer a user of a platform, whether due to withdrawal, termination of registration or any other reason. This has the effect of preempting efforts to ensure the right to be forgotten. When information about people who are no longer users of a platform is forcibly retained, it will be used somehow, breaching not only their right to be forgotten but also their privacy.
One way to think about the Code is to recognize that the digital space is an extension of our civic space. The civic space is what materialises our humanity and citizenship; it is the embodiment of our human rights. This being so, the digital space is also a concretization of these rights, their projection online. The notion of a digital civic space presupposes a regime of digital rights that are the projection of our offline human rights. They include the right to freedom of expression, the right to organisation and, most importantly, the right to privacy. Many of the crimes we see are derogations of these rights. Many commit the infractions the Code lists because they do not see these rights codified to protect citizens from digital abuse. The government itself has been guilty of abusing the digital rights of citizens, through intrusive digital surveillance and through its failure to ensure that all citizens have access to the digital space.
In this sense, the government is best advised to get the Digital Rights Bill passed and signed. That Bill contains a wholesome provision of rights and responsibilities, along with measures for enforcement, rather than limiting its gaze to criminalization, which seems to be the tone of the Code. This will help the government, citizens and platform providers alike. In the end, it will cost less and achieve more for the government to focus on educating users rather than on prosecuting them. There are useful parts to the Code, but its underlying assumptions and prescriptions are suspect and open to abuse. By all means, let the providers be corporate citizens of this country with clear responsibilities, but we as citizens also want our freedom to be respected.