Meta is still working on changes recommended during last year’s civil rights audit

More than a year after failing its first civil rights audit, Meta says it is still working on a number of the changes recommended by the auditors. The company released an update detailing its progress on many of the auditors' recommendations.

According to the company, it has implemented 65 of 117 recommendations, with 42 others listed as "in progress or ongoing." However, there are six areas where the company says it is still determining the "feasibility" of making changes, and two recommendations on which the company has "declined" to take further action. Notably, some of these involve the most controversial issues called out in the original 2020 audit.

The original report, released in July 2020, found the company needed to do more to stop "pushing users toward extremist echo chambers." It also said the company needed to address issues related to algorithmic bias, and it criticized the company's handling of posts from Donald Trump. In its latest update, Meta said it still has not committed to all of the changes the auditors called for around algorithmic bias. The company has made several changes, such as engaging with outside experts and increasing the diversity of its AI team, but said other changes are still "under evaluation."

In particular, the auditors called for a mandatory company-wide process to "avoid, identify, and address potential sources of bias and discriminatory outcomes when developing or deploying AI and machine learning models" and to "regularly test existing algorithms and machine learning models." Meta said that recommendation was "under evaluation." Likewise, the audit also recommended "mandatory training on understanding and mitigating sources of bias and discrimination in AI for all teams building algorithms and machine learning models." That suggestion was also listed as "under evaluation," according to Meta.

The company also said some updates related to content moderation were "under evaluation." These include recommendations to improve the "transparency and consistency" of decisions related to moderation strikes, as well as recommendations that the company study more aspects of how hate speech spreads and how it can use that data to act against targeted hate more quickly. The auditors also recommended that Meta "disclose additional data" about which users are targeted with voter suppression on its platform. That recommendation is also "under evaluation."

The only two recommendations Meta declined outright are also related to its elections and census policies. "The auditors recommended that all user-generated reports of voter interference be routed to content reviewers to determine whether the content violates our policies, and that an appeals option be added for reported voter interference content," Meta wrote. But the company said it chose not to make those changes because they would slow down the review process, and because "most of the content reported as voter interference does not violate the company's policies."

Separately, Meta also said it is working on a "framework to study our platforms and identify opportunities to improve fairness when it comes to race in the United States." To do this, the company will conduct "off-platform surveys" and analyze its own data using surnames and zip codes.
