Google has become one of the first major tech companies to weigh in on Canada’s planned approach to dealing with harmful online content.
According to a blog post published by Google Canada, some components of the government’s proposal “may be open to manipulation and result in the removal of excessive amounts of valid information.”
The government first proposed a new Digital Safety Commission (DSC) in July 2021, which would have the authority to police harmful content on major platforms such as Facebook, Twitter, Instagram, YouTube, and Pornhub.
The government identified five categories of harmful content that the platforms would be required to remove within 24 hours of receiving complaints:
- Hate speech
- Child sexual exploitation content
- Non-consensual sharing of intimate images
- Incitement to violence
- Terrorist content
Google warns, however, that bad actors could exploit the proposed requirement for platforms to remove user-reported content within 24 hours in order to harass others or restrict free expression online.
In the company’s words, “it is critical to strike the appropriate balance between speed and precision.” To be most effective, user flags should be used as “signals” of possibly violative content rather than as “definitive assertions of infractions.”
Google stated that approximately 300,000 videos were removed from its YouTube platform in the second quarter of 2021 out of 17.2 million videos flagged by users. In total, however, Google removed 6.2 million videos for breaching its community guidelines, suggesting that user flagging alone is not a comprehensive strategy for combating harmful content.
On top of that, Google explicitly cautioned against the practice of proactively monitoring content, which involves scanning content for anything that potentially falls into one of the five harmful content categories before it is posted.
“The imposition of proactive monitoring responsibilities could result in the repression of lawful expression… and would be inconsistent with international democratic values.”
According to the proposal, platforms would be required to notify the RCMP or other law enforcement agencies when they become aware of potentially hateful content. Additionally, the new DSC regulator could seek court orders requiring telecommunications companies to block access to platforms that refuse to remove content promoting child sexual exploitation or terrorist activity.
When the proposal was first announced, the government cited the violent attack on a mosque in Quebec City in 2017 and the mosque attacks in Christchurch, New Zealand, in 2019. In both cases, the attackers had been radicalised by online content, and social media companies failed to remove content related to the attacks.
According to Michael Geist, the Canada Research Chair in Internet and E-Commerce Law at the University of Ottawa, the plan is “seriously flawed.” It has also been roundly opposed by anti-hate and civil rights organisations, which share many of Google’s concerns.
Geist believes platforms would likely use artificial intelligence (AI) to proactively monitor content, which might then be reported to the authorities. Given the potential for bias within these AI systems, “[that] creates significant issues, particularly for disadvantaged communities,” he stated.
It is also possible that hate groups will target anti-hate groups to have their content removed if there is not adequate due process, which Geist believes the 24-hour response requirement restricts, especially if sanctions are imposed on companies that fail to respond in time.
In his opinion, “Google suggests that it will lead to an increase in over-blocking and over-removal of information,” posing a danger to freedom of expression that “extends to the organisations that we are attempting to safeguard.”
Over the summer, Geist was one of the hundreds of people who shared their thoughts on the proposal during a consultation process that ran through the 2021 Canadian election and ended just four days after it concluded.
His criticism is that the consultation has not been transparent. The government has not made the feedback it received public because, according to Heritage Canada, it “may contain confidential business information.”
Nonetheless, the Liberal government has committed to introducing legislation to protect consumers online within the first 100 days following the election.
According to Geist, “trying to expedite a severely defective, widely-decried policy based on non-transparent consultation is not a step in the right direction.” Moreover, “It is possible that constitutional challenges will be filed in the future.”