Then there’s Eric Chong, a 37-year-old who has a background in dentistry and previously cofounded a startup that simplifies medical billing for dentists. He was placed on the “machine” team.
“I’m gonna be honest and say I’m extremely relieved to be on the machine team,” Chong says.
At the hackathon, Chong was building software that uses voice and face recognition to detect autism. Of course, my first question was: Wouldn’t there be a wealth of issues with this, like biased data leading to false positives?
“Short answer, yes,” Chong says. “I think that there are some false positives that may come out, but I think that with voice and with facial expression, I think we could actually improve the accuracy of early detection.”
The AGI ‘Tacover’
The coworking space, like many AI-related things in San Francisco, has ties to effective altruism.
If you know the movement only from the bombshell fraud headlines: it seeks to maximize the good that can be done with participants’ time, money, and resources. The day after this event, the space hosted a discussion on leveraging YouTube “to communicate important ideas like why people should eat less meat.”
On the fourth floor of the building, flyers covered the walls. One, “AI 2027: Will AGI Tacover,” advertised a taco party that had recently passed; another, titled “Pro-Animal Coworking,” provided no other context.
A half hour before the submission deadline, coders munched vegan meatball subs from Ike’s and rushed to finish their projects. One floor down, the judges started to arrive: Brian Fioca and Shyamal Hitesh Anadkat from OpenAI’s Applied AI team, Marius Buleandra from Anthropic’s Applied AI team, and Varin Nair, an engineer from the AI startup Factory (which also cohosted the event).
As the judging kicked off, a member of the METR team, Nate Rush, showed me an Excel table that tracked contestant scores, with AI-powered groups colored green and human projects colored red. Each group moved up and down the list as the judges entered their decisions. “Do you see it?” he asked me. I didn’t—the mishmash of colors showed no clear winner even half an hour into the judging. That was his point. Much to everyone’s surprise, man versus machine was a close race.
Show Time
In the end, the finalists were evenly split: three from the “man” side and three from the “machine.” After each demo, the crowd was asked to raise their hands and guess whether the team had used AI.
First up was ViewSense, a tool designed to help visually impaired people navigate their surroundings by transcribing live video feeds into text for a screen reader to read aloud. Given the short build time, it was technically impressive, and 60 percent of the room (by the emcee’s count) believed it used AI. It didn’t.
Next was a team that built a platform for designing websites with pen and paper, using a camera to track sketches in real time—no AI involved in the coding process. On the machine side, one finalist let users upload piano sessions for AI-generated feedback. Another team showcased a tool that generates heat maps of code changes: critical security issues show up in red, while routine edits appear in green. This one did use AI.