The report was written by 26 authors from 14 institutions, spanning academia, civil society, and industry. While the technical ramifications of AI have been speculated about for years, its legal implications are only just emerging, and could leave us vulnerable to new types of crime that have never been anticipated, forcing legal experts to navigate uncharted waters.
Within five years we could be seeing entirely new types of cybercrime, the hacking of complex or vital systems, or even political disruption through fake broadcasts known as ‘deep fakes.’ Politicians’ voices could be faked or manipulated in ways that would be hard to disprove. These crimes would require entirely new laws that can only be developed and implemented by forward-thinking groups, and close relationships between those developing the technology and those within the legal community will help to keep us safe.
Elon Musk, founder of Tesla and SpaceX and co-founder of the OpenAI research group, has described AI as ‘our biggest existential threat.’ He added that ‘there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish.’
In light of this, the report calls for policymakers and technical experts to work together to create new laws that can proactively protect us from a technology capable of learning to overcome our current defences. Lawyers who specialise in technology are already thinking about solutions to the AI problem, such as Clive Halperin at GSC Solicitors, who advises businesses on how best to keep up with ever-changing technology.