The Dark Side of Competition in AI
The nature of the game demands that you adopt tactics that no longer give you any competitive advantage, simply because everyone else is already using them.
These tactics become mandatory through an unhealthy conception of competition. They are side effects of poorly designed games, which push their players, whether individuals, companies, or governments, to adopt strategies that defer costs and damage into the future.
Example: it’s not that packaging companies want to fill the oceans with plastic, or that farmers want to worsen antibiotic resistance. Everyone is stuck in the same dilemma: “If I don’t use this tactic, I’ll be outcompeted by everyone who does. So I have to use it too.”
No, this is not capitalism, which can cause problems but can also solve them. It is something much deeper: a force of misaligned incentives arising from game theory itself. Sometimes we get so lost in winning the game in front of us that we lose sight of the bigger picture and sacrifice too much in our quest for victory. The short-term incentives of the games themselves push and entice their players to sacrifice ever more of their future, trapping them in a death spiral where everyone loses in the end. This is the mechanism of unhealthy competition.
The extreme risks of rushed AI stem from the fact that almost all AI companies are focused on satisfying their investors, and those short-term incentives will, over time, inevitably begin to conflict with any benevolent mission. Navigating this requires the right balance between acceleration and safety. Intelligent regulation can help with AI too, but ultimately it is the players within the game who have the most influence on it. We therefore need AI leaders to show that they are aware not only of the risks posed by their technologies, but also of the destructive nature of the incentives they are currently subject to. The three leading labs are showing some signs of doing this:
- Anthropic recently announced its responsible scaling policy, which commits it to increasing capabilities only once certain safety criteria have been met.
- OpenAI recently committed to dedicating 20 percent of its compute exclusively to alignment research.
- DeepMind has for a decade shown an approach that puts science ahead of commerce, as with its development of AlphaFold, which it gave away to the scientific community.
Maybe companies can start competing over who best meets these standards of responsibility and ethics, over who can develop the strongest safety criteria: a race to see who can devote the most compute to alignment.
Competition can be an incredible tool, as long as we wield it wisely. Don’t hate the players; change the game.