An AI-powered system could soon take responsibility for evaluating the potential harms and privacy risks of up to 90% of updates made to Meta apps like Instagram and WhatsApp, according to internal documents reportedly viewed by NPR.
NPR says a 2012 agreement between Facebook (now Meta) and the Federal Trade Commission requires the company to conduct privacy reviews of its products, evaluating the risks of any potential updates. Until now, those reviews have been largely conducted by human evaluators.
Under the new system, Meta reportedly said, product teams will be asked to fill out a questionnaire about their work and will usually receive an “instant decision” with AI-identified risks, along with requirements an update or feature must meet before it launches.
This AI-centric approach would allow Meta to update its products more quickly, but one former executive told NPR it also creates “higher risks,” as “negative externalities of product changes are less likely to be prevented before they start causing problems in the world.”
In a statement, Meta seemed to confirm that it’s changing its review system, but it insisted that only “low-risk decisions” will be automated, while “human expertise” will still be used to examine “novel and complex issues.”