AI Weekly: Palantir, Twitter, and building public trust into the AI design process

The news cycle this week seemed to grab people by the collar and shake them violently. On Wednesday, Palantir went public. The secretive company, with ties to the military, spy agencies, and ICE, depends heavily on government work and is intent on racking up more sensitive data and contracts in the U.S. and overseas.

Following a surveillance-as-a-service blitz last week, Amazon introduced Amazon One, which allows Amazon and third-party businesses to identify customers with touchless biometric scans of their palms. The company claims palm scans are less invasive than other biometric identifiers, such as facial recognition.

On Thursday afternoon, in the short break between an out-of-control presidential debate and the revelation that the president and his wife had contracted COVID-19, Twitter shared more details about how it created AI that appears to prefer white faces over Black faces. In a blog post, Twitter chief technology officer Parag Agrawal and chief design officer Dantley Davis called the failure to publish the bias analysis at the same time as the algorithm’s rollout years ago “an oversight.” The executives shared additional details about a bias assessment that took place in 2017, and Twitter says it’s working on moving away from saliency algorithms, which crop images around the regions predicted to draw a viewer’s attention. When the problem initially received attention, Davis said Twitter would consider getting rid of image cropping altogether.
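For readers unfamiliar with the mechanics, a saliency-based cropper typically scores every pixel for how likely it is to attract a viewer’s eye, then centers the crop on the highest-scoring region. The sketch below is a minimal illustration of that general approach, not Twitter’s actual implementation; the `saliency_crop` function and its arguments are hypothetical, and the saliency map is assumed to come from some upstream model.

```python
import numpy as np

def saliency_crop(image: np.ndarray, saliency: np.ndarray,
                  crop_h: int, crop_w: int) -> np.ndarray:
    """Crop `image` to (crop_h, crop_w), centered on the most salient pixel.

    `saliency` is a 2-D map with the same height and width as `image`,
    assumed to come from some upstream saliency model; higher values mean
    "more likely to draw a viewer's eye."
    """
    h, w = saliency.shape
    # Find the single most salient point in the map.
    y, x = np.unravel_index(np.argmax(saliency), saliency.shape)
    # Center the crop window on that point, clamped to the image bounds.
    top = min(max(y - crop_h // 2, 0), h - crop_h)
    left = min(max(x - crop_w // 2, 0), w - crop_w)
    return image[top:top + crop_h, left:left + crop_w]
```

Because every crop decision flows through the saliency map, any demographic skew in the model’s scores is reproduced in which faces survive the crop, which is why auditing the underlying model, not just the cropping logic, matters.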

There are still unanswered questions about how Twitter used its saliency algorithm, and in some ways the blog post shared late Thursday raises more questions than it answers. The post simultaneously states that no AI can be completely free of bias and that Twitter’s analysis of its saliency algorithm showed no racial or gender bias, even though a Twitter engineer said some evidence of bias was found during the initial assessment.

Twitter also continues to share none of the results from its 2017 bias assessment.