In May 1896, New York City witnessed its first automobile accident when motorist Henry Wells, driving his new Duryea gas-powered motor wagon, collided with a bicyclist. The incident, which left the cyclist with a broken leg and Wells with a night in jail, marked the dawn of a new transportation era and, with it, novel challenges in managing risk and assigning responsibility for mishaps.
The solution for managing liability emerged swiftly. In 1897, Gilbert J. Loomis became the first person in the United States to purchase automotive liability insurance, paying $7.50 for $1,000 in liability coverage and introducing a new approach to dealing with the risks of nascent technology. This rapid adaptation—focusing on the actions of the users rather than the creators of the technology—set a precedent for how society could embrace innovation while mitigating its potential downsides.
Fast forward to today, and we find ourselves at a similar crossroads with emerging technologies such as artificial intelligence and decentralized networks. But unlike the early days of the automobile, when liability fell squarely on the shoulders of operators, a troubling trend is emerging: authorities are increasingly targeting the developers of new technologies for the unrelated actions of those who use them.
This shift in liability focus raises critical questions about the future of the internet and software development, which form the backbone of much of modern life. An estimated 80% of the code in most software today can be traced to open source projects, highlighting the critical role of this collaborative ecosystem in driving innovation.
And it’s not just open source protocols that would be affected by such novel, punitive standards: any software provider would likely be caught in this expanding web of liability. The debate over safety standards for AI models in California’s SB1047 bill, recently vetoed by Governor Gavin Newsom, centers on this very question: who should be held liable when something goes wrong?
More broadly, as governments grapple with the need to secure digital infrastructure, a pressing question emerges: who bears responsibility when open source software is misused? In that context, it’s worth asking: if Duryea had been held liable for the cyclist’s injury, would we still have been able to build an innovative automotive industry in the US?
Contrasting Approaches to Open Source Software
To date, the United States and the European Union appear to be charting divergent courses in their approach to software regulation, especially when it comes to open source software (OSS). Their decisions will likely shape global policy on this issue for years to come.
For context, the open source model offers advantages that go far beyond mere cost savings, speed of development, or code transparency. The collaborative nature of OSS leads to more robust, secure, and innovative software. By leveraging the collective intelligence of a global community, open source projects can identify and fix vulnerabilities faster than any single organization could.
The open source model also fosters innovation at an unprecedented scale. By providing a common foundation of code that anyone can build upon, OSS accelerates technological progress. It allows developers to focus on solving new problems rather than reinventing the wheel, giving birth to some of the most important technologies of our time.
So far, the European Union has taken a more heavy-handed approach to open source regulation with its proposed Cyber Resilience Act (CRA). The act’s early drafts in 2022 would have made open source projects liable for uses of their software, a stance that drew sharp criticism from the OSS community. While the EU has since engaged more closely with developers to refine the CRA, its approach remains hostile; the Act is set to enter into force in the second half of 2024, with manufacturers required to place compliant products on the market by 2027.
The EU’s strategy reflects a broader trend in European digital policy, which often prioritizes control over innovation. However, critics argue that this approach could stifle the very ecosystem it aims to secure.
By contrast, the US government seems, at least on paper, to be open to the idea of letting the open source ecosystem develop without inappropriate regulation. The Cybersecurity and Infrastructure Security Agency (CISA), the operational lead for federal cybersecurity as well as the national coordinator for “critical infrastructure security and resilience,” recently unveiled its “Open Source Software Security Roadmap,” pledging to work hand-in-hand with the OSS community to enhance software security.
This approach emphasizes support and assessment rather than imposing liability on developers. If implemented in good faith, this strategy aligns with America’s broader ethos of fostering innovation and protecting free speech. As novel software-based innovations emerge, we must ensure the US government practices what it preaches.
The Serious Costs of Getting Regulation Wrong
Imposing stringent liability requirements on developers risks undermining domestic software development across the board. It could discourage contributions and push innovation behind closed doors, paradoxically producing less secure software as we lose the benefits of widespread peer review and rapid patching. Instead, we should support and strengthen the software development ecosystem through better education about secure coding practices and the development of tools that help identify and mitigate vulnerabilities.
Indeed, software is increasingly involved in assessing risks and taking actions in the physical world. This trend extends beyond traditional computer applications, with autonomous and semi-autonomous programs now making decisions that have real-world consequences. The ongoing debates about liability for self-driving car accidents highlight a broader reconsideration of responsibility in our increasingly complex technological landscape.
In California, SB1047 further muddies the waters. While primarily focused on regulating AI models and their derivatives, it raises questions about developer liability that could have far-reaching implications for the software developer and OSS communities. Fortunately, as noted above, Gov. Newsom vetoed the bill in favor of a risk analysis “rooted in science and fact.” However, he also expressed concern that the bill did not cast a wide enough net and should have encompassed smaller AI models. Future legislation on the matter should be watched closely, given the high-profile debate around the bill’s aims and scope and its potential to set a precedent.
A Path Forward
As policymakers navigate this complex landscape, they would do well to remember the massive value that software developers create when they are not afraid to innovate. Rather than imposing blanket liability on developers, a more nuanced approach is needed: one that holds bad actors accountable rather than the tools they use, and that promotes security practices, such as robust code auditing protocols, within the OSS community.
And while it warrants a longer discussion, the tension inherent in holding software developers liable for the actions of third-party bad actors is currently playing out in the criminal cases against the developers of Tornado Cash; the outcomes of those cases could have significant implications for the entire ecosystem.
Ultimately, the goal should be to enhance security without stifling innovation or unjustly imposing liability on developers for others’ bad actions. As the US and EU continue to refine their approaches, striking this balance will be crucial not just for their digital economies, but for the future of global software development.
Just as we did not hold Duryea liable for the first automobile accident, we must be careful not to put the brakes on digital innovation by overzealously and inappropriately pursuing its creators.
Miller Whitehouse-Levine is the CEO of the DeFi Education Fund (DEF), with overall strategic and operational responsibility for the execution of its mission and goals. Prior to joining DEF, Whitehouse-Levine led the Blockchain Association’s policy operation and worked at Goldstein Policy Solutions on a range of public policy issues, including crypto.
Lizandro Pieper serves as the DEF’s Policy Associate and is responsible for policy research and advocacy. Prior to joining DEF, Pieper worked on and led teams for five political campaigns, including U.S. Senate and U.S. House races. He was first introduced to DeFi in spring 2020 while conducting foreign policy research during an internship at a Washington-based think tank.