Meta will reportedly soon use AI for most product risk assessments instead of human reviewers

According to a report from NPR, internal documents seen by the publication indicate that most product risk assessments will soon fall to AI. NPR also reports that AI reviews are being used in areas such as violent content, misinformation, and, increasingly, youth safety. Unnamed current and former Meta employees who spoke to NPR warned that AI could miss serious risks that a human review team would catch.

Updates and new features for Meta's platforms, including Instagram and WhatsApp, have long been subject to human review before reaching users, but over the last two months Meta has reportedly doubled down on its use of AI. Now, according to NPR, product teams must fill out a questionnaire about their product and submit it to an AI system for review. The system usually delivers an "instant decision" that identifies any risks, and teams must resolve those issues before the product can be released.

A former Meta executive told NPR that reducing this scrutiny "means you're creating higher risks. Negative externalities of product changes are less likely to be prevented before they start causing problems in the world." In a statement to NPR, Meta said it still relies on "human expertise" to assess "novel and complex issues," leaving "low-risk decisions" to AI. Read the full report at NPR.

The news comes just days after Meta released its first integrity report since changing its content moderation policies earlier this year. According to that report, the amount of content removed has decreased in the wake of the changes, but there has been a small rise in bullying and harassment, as well as violent and graphic content.
