Kenya: Meta Sued for $1.6 Billion USD for Fuelling Ethiopia Ethnic Violence

By Amnesty International 

Meta must reform its business practices to ensure Facebook’s algorithms do not amplify hatred and fuel ethnic conflict, Amnesty International said today in the wake of a landmark legal action against Meta submitted in Kenya’s High Court.

The legal action claims that Meta promoted speech that led to ethnic violence and killings in Ethiopia by utilizing an algorithm that prioritizes and recommends hateful and violent content on Facebook. The petitioners seek to stop Facebook’s algorithms from recommending such content to Facebook users and compel Meta to create a 200 billion Kenyan shilling ($1.6 billion USD) victims’ fund. Amnesty International joins six other human rights and legal organizations as interested parties in the case.

“The spread of dangerous content on Facebook lies at the heart of Meta’s pursuit of profit, as its systems are designed to keep people engaged. This legal action is a significant step in holding Meta to account for its harmful business model,” said Flavia Mwangovya, Amnesty International’s Deputy Regional Director of East Africa, Horn, and Great Lakes Region.

One of Amnesty’s staff members in the region was targeted as a result of posts on the social media platform.

“In Ethiopia, the people rely on social media for news and information. Because of the hate and disinformation on Facebook, human rights defenders have also become targets of threats and vitriol. I saw first-hand how the dynamics on Facebook harmed my own human rights work and hope this case will redress the imbalance,” said Fisseha Tekle, legal advisor at Amnesty International.

Fisseha Tekle is one of the petitioners bringing the case, after being subjected to a stream of hateful posts on Facebook for his work exposing human rights violations in Ethiopia. An Ethiopian national, he now lives in Kenya, fears for his life, and dares not return to Ethiopia to see his family because of the vitriol directed at him on Facebook.

Fatal failings

The legal action is also being brought by Abraham Meareg, the son of Meareg Amare, a professor at Bahir Dar University in northern Ethiopia, who was hunted down and killed in November 2021, weeks after posts inciting hatred and violence against him spread on Facebook. The case claims that Facebook only removed the hateful posts eight days after Professor Meareg’s killing, more than three weeks after his family had first alerted the company.

The Court has been informed that Abraham Meareg fears for his safety and is seeking asylum in the United States. His mother, who fled to Addis Ababa, is severely traumatized and screams every night in her sleep after witnessing her husband’s killing. The family had their home in Bahir Dar seized by regional police.

The harmful posts targeting Meareg Amare and Fisseha Tekle were not isolated cases. The legal action alleges Facebook is awash with hateful, inciteful and dangerous posts in the context of the Ethiopia conflict.

Meta uses engagement-based algorithmic systems to power Facebook’s news feed, ranking, recommendations and groups features, shaping what is seen on the platform. Meta profits from keeping Facebook users on the platform as long as possible, because longer engagement allows it to sell more targeted advertising.

The display of inflammatory content – including content that advocates hatred and constitutes incitement to violence, hostility and discrimination – is an effective way of keeping people on the platform longer. As such, the promotion and amplification of this type of content is central to Facebook’s surveillance-based business model.

Internal studies dating back to 2012 indicated that Meta knew its algorithms could result in serious real-world harms. In 2016, Meta’s own research clearly acknowledged that “our recommendation systems grow the problem” of extremism.

In September 2022, Amnesty International documented how Meta’s algorithms proactively amplified and promoted content which incited violence, hatred, and discrimination against the Rohingya in Myanmar, substantially increasing the risk of an outbreak of mass violence.

“From Ethiopia to Myanmar, Meta knew or should have known that its algorithmic systems were fuelling the spread of harmful content leading to serious real-world harms,” said Flavia Mwangovya.

“Meta has shown itself incapable of acting to stem this tsunami of hate. Governments need to step up and enforce effective legislation to rein in the surveillance-based business models of tech companies.”

Deadly double standards

The legal action also claims that there is a disparity in Meta’s approach to crisis situations in Africa compared to elsewhere in the world, particularly North America. The company has the capability to make special adjustments to its algorithms to quickly remove inflammatory content during a crisis. But although such adjustments have been deployed elsewhere in the world, according to the petitioners none were made during the conflict in Ethiopia, allowing harmful content to continue to proliferate.

Internal Meta documents disclosed by whistle-blower Frances Haugen, known as the Facebook Papers, showed that the US $300 billion company also did not have sufficient content moderators who speak local languages. A report by Meta’s Oversight Board also raised concerns that Meta had not invested sufficient resources in moderating content in languages other than English.

“Meta has failed to adequately invest in content moderation in the Global South, meaning that the spread of hate, violence, and discrimination disproportionally impacts the most marginalized and oppressed communities across the world, and particularly in the Global South,” said Flavia Mwangovya.


Meta Shuts Down 63,000 Nigerian Accounts in Sextortion Crackdown

In a significant move to combat online crime, Meta Platforms Inc., the parent company of Facebook, Instagram, and WhatsApp, has removed 63,000 accounts in Nigeria linked to sextortion scams.

This sweeping action is part of Meta’s ongoing effort to address the growing threat of digital extortion on its platforms.

Unmasking the Scammers

The crackdown, which took place at the end of May, targeted accounts engaged in blackmail schemes.

These scammers posed as young women to coerce individuals into sharing intimate photos, which were then used to extort money from the victims.

The removal follows a Bloomberg Businessweek exposé highlighting the rise of such crimes, particularly affecting teenagers in the United States.

The Global Impact

The U.S. Federal Bureau of Investigation (FBI) has identified sextortion as one of the fastest-growing crimes targeting minors.

The schemes often have severe consequences and have been linked to the suicides of more than two dozen teenagers.

In one high-profile case, the death of 17-year-old Jordan DeMay in Michigan led to the arrest of suspects traced back to Lagos, Nigeria.

The Role of the Yahoo Boys

Many of the dismantled accounts were linked to the “Yahoo Boys,” a notorious group known for orchestrating various online scams.

These individuals have been using social media to recruit and train new scammers, sharing blackmail scripts and fake account guides.

Meta’s Response

Meta’s spokesperson emphasized the company’s commitment to user safety, stating, “Financial sextortion is a horrific crime that can have devastating consequences.”

The company is continually improving its defenses and has reported offenders targeting minors to the National Center for Missing & Exploited Children.

To enhance protection, Meta has implemented stricter messaging settings for teen accounts and safety notices regarding sextortion.

They are also employing technology to blur potentially harmful images shared with minors.

Ongoing Efforts

Meta’s actions highlight the complex and evolving nature of online crime. The company has pledged to remain vigilant, adapting its strategies to counter new threats as they emerge.

“This is an adversarial space where criminals evolve to evade our defenses,” Meta noted.

Looking Forward

As digital platforms continue to grapple with issues of privacy and security, Meta’s recent actions demonstrate a proactive stance in safeguarding users.

By dismantling these networks, the company aims to reduce the prevalence of sextortion and foster a safer online environment for all.

The crackdown serves as a reminder of the need for continued vigilance and collaboration between tech companies and law enforcement to protect individuals from the harmful effects of digital exploitation.

Meta Expands Monetization Options for Nigerian Creators with In-Stream Ads

Meta has launched in-stream advertisements for creators on its platforms in Nigeria, providing a significant new revenue stream for content creators.

This development allows creators to incorporate advertisements into their new or existing videos, including live content.

Meta’s automated system identifies natural breaks in videos to place ads, or creators can manually choose their ad placements.

In-stream and live ads encompass various formats, including pre-roll ads that play before a video starts, mid-roll ads that break into the video, and image ads that appear below the video.

There are also after-roll ads that play following the video content. Creators must meet certain eligibility requirements, such as having a minimum of 5,000 followers, to utilize in-stream ads.

This feature is part of Meta’s broader effort to enhance monetization opportunities on its platforms. According to a report by NapoleonCat, Nigeria has over 50 million Facebook users.

With the introduction of in-stream ads, Nigerian content creators can now monetize their content more effectively, having previously been excluded because Nigeria was not among the eligible countries.

Nick Clegg, Meta’s president of global affairs, announced the feature would go live in June during a visit to Nigeria in March.

“Monetization won’t be limited to just Instagram. Nigerian creators eligible to use our monetization products will be able to also monetize on Facebook as well,” Clegg stated.

Meta confirmed this development in a statement on Monday, saying in-stream ads on Facebook and ads on Facebook Reels are the two new monetization features for eligible creators in Nigeria and Ghana.

These features will enable creators to earn money by crafting original videos and cultivating a community.

Moon Baz, Global Partnerships Lead for Africa, the Middle East, and Turkey at Meta, said “Every day, we’re inspired by the incredible African creators who use Facebook to tell their stories, connect with others, and bring people together.

“This expansion will empower eligible creators in the vibrant creative industry across Nigeria and Ghana to earn money while setting the bar high for creativity across the world and making Meta’s family of apps the one-stop-shop for all creators.”

In-stream ads can be played before, during, or after on-demand videos, whether pre-recorded content or a recording of a previous live stream.

Meta also introduced ads on Facebook Reels, which integrate seamlessly into original Reels and enable creators to get paid based on how their reels perform while entertaining fans.

This move by Meta is set to revolutionize content creation in Nigeria, allowing creators to harness the power of their platforms more effectively for financial gain.

The introduction of these monetization features marks a significant step forward in supporting the creative economy in Nigeria and beyond.

Nigerian Blogger VeryDarkMan Arrested Again by Police

Martins Vincent Otse, widely known as VeryDarkMan, a popular social media influencer and blogger, was rearrested by the Nigerian police on Sunday.

The arrest, orchestrated under the direction of Commissioner of Police Benneth Igwe, has sparked significant controversy and debate across social media platforms.

According to Otse’s lawyer, Deji Adeyanju, the arrest occurred after Otse exposed an individual allegedly involved in defrauding a Nigerian living abroad.

Adeyanju took to his X account to voice his concerns, stating, “Our client, @thatverydarkman, has just been arrested by the police on the instruction of CP Igwe for exposing someone who allegedly duped a Nigerian abroad. Instead of the police to arrest the person alleged to have duped someone, they arrested VDM on allegation of defamation.”

The influencer’s arrest follows his recent release from police custody. On June 10, 2024, a Federal High Court in Abuja granted Otse bail, which he satisfied, leading to his release.

This latest arrest marks a continuation of his legal troubles, stemming from his online activities and the content he shares with his followers.

Previously, on May 22, 2024, SaharaReporters revealed that VeryDarkMan had been arraigned on five counts related to cyberstalking.

The police prosecution team requested additional time to respond to the application, a request granted by Justice Mobolaji Olajuwon of the Federal High Court.

During the court proceedings, the police sought to have Otse remanded in prison.

However, his legal team successfully argued for him to remain in police custody instead, leading to his detention at the National Cybercrime Centre.

This arrest raises important questions about the balance between freedom of speech and legal boundaries in Nigeria.

Many supporters argue that VeryDarkMan is being unjustly targeted for his efforts to highlight and expose fraudulent activities that affect Nigerians, both at home and abroad.

Critics, however, believe that the manner in which he conducts his exposés can sometimes verge on defamation, warranting legal scrutiny.

The controversy surrounding VeryDarkMan’s arrest highlights broader concerns about the use of legal mechanisms to silence voices of dissent and those who seek to hold others accountable.

As this case continues to unfold, it will be closely watched by both legal experts and the general public, keen to see how it impacts the landscape of social media influence and accountability in Nigeria.

For now, VeryDarkMan remains in police custody, and his legal team is expected to file for another bail application while preparing to defend against the allegations of defamation.

The outcome of this case could set a significant precedent for how digital activism and online speech are treated under Nigerian law.
