An AI-powered system could soon take responsibility for evaluating the potential harms and privacy risks of up to 90% of updates made to Meta apps like Instagram and WhatsApp, according to internal documents reportedly viewed by NPR.
NPR says a 2012 agreement between Facebook (now Meta) and the Federal Trade Commission requires the company to conduct privacy reviews of its products, evaluating the risks of any potential updates. Until now, those reviews have been largely conducted by human evaluators.
Under the new system, Meta reportedly said, product teams will be asked to fill out a questionnaire about their work and will usually receive an “instant decision” with AI-identified risks, along with requirements that an update or feature must meet before it launches.
This AI-centric approach would allow Meta to update its products more quickly, but one former executive told NPR it also creates “higher risks,” as “negative externalities of product changes are less likely to be prevented before they start causing problems in the world.”
In a statement, a Meta spokesperson said the company has “invested over $8 billion in our privacy program” and is committed to “deliver innovative products for people while meeting regulatory obligations.”
“As risks evolve and our program matures, we enhance our processes to better identify risks, streamline decision-making, and improve people’s experience,” the spokesperson said. “We leverage technology to add consistency and predictability to low-risk decisions and rely on human expertise for rigorous assessments and oversight of novel or complex issues.”
This post has been updated with additional quotes from Meta’s statement.