The Law Bytes Podcast, Episode 85: Céline Castets-Renard on Europe’s Plan to Regulate Artificial Intelligence
Last week, the European Commission launched what promises to be a global, multi-year debate over the regulation of artificial intelligence. Several years in development, the proposed rules would ban some uses of AI, regulate others, and establish significant penalties for those who fail to abide by them. European leaders believe the initiative will place them at the forefront of AI regulation, borrowing from their data protection playbook by seeking to export EU solutions to the rest of the world. Céline Castets-Renard is a colleague at the University of Ottawa, where she holds the University Research Chair on Accountable Artificial Intelligence in a Global World. She joins the Law Bytes podcast to discuss the EU plans, their implications for Canadian AI policy, and the road ahead for the regulation of artificial intelligence.
The world has been focused for the past several weeks on racial justice and the Black Lives Matter movement, with millions around the world taking to the streets to speak out against inequality and racism. Technology and concerns about racism and bias have been part of the discussion, with some of the world’s leading technology companies changing longstanding policies and practices. IBM has put an end to all research, development, and production of facial recognition technologies, while both Amazon and Microsoft said they would no longer sell the technology to local police departments.
Mutale Nkonde is an artificial intelligence policy analyst and a fellow at both the Berkman Klein Center for Internet & Society at Harvard University and Stanford University’s Digital Civil Society Lab. She joins me on the podcast this week from a busy home in Brooklyn, NY to talk about this moment in racial justice and technology, racial literacy, and the concerns about bias in artificial intelligence.
Earlier this week, I traveled to Paris to attend the Global Forum on Artificial Intelligence for Humanity (GFIAH). The by-invitation event featured one day of workshops addressing issues such as AI and culture, followed by two days of panels on developing trustworthy AI, data governance, the future of work, delegating decisions to machines, bias and AI, and future challenges. The event was part of the French government’s effort to take the lead on developing a new AI regulatory framework that it describes as a “third way”, distinct from the approaches to AI in China and the United States. The French initiative, named the Global Partnership on AI, is particularly notable from a Canadian perspective since Canada is an active participant in the initiative and will host the next global forum in 2020.
I appeared earlier this week before the House of Commons Standing Committee on Finance as part of its review of Bill C-86, the Budget Implementation Act. The bill features extensive intellectual property provisions arising out of the IP strategy referenced in Budget 2018. My comments were consistent with previous posts on the changes to notice-and-notice, patents, and the Copyright Board. My opening remarks are posted below.
The federal government placed a big bet in this year’s budget on Canada becoming a world leader in artificial intelligence (AI), investing millions of dollars on a national strategy to support research and commercialization. The hope is that by attracting high-profile talent and significant corporate support, the government can turn a strong AI research record into an economic powerhouse.
Funding and personnel have been the top policy priorities, yet other barriers to success remain. For example, Canada’s restrictive copyright rules may hamper the ability of companies and researchers to test and ultimately bring new AI services to market.
What does copyright have to do with AI?
My Globe and Mail column notes that making machines smart – whether engaging in automated translation, big data analytics, or new search capabilities – depends on the data being fed into the system. Machines learn by scanning, reading, listening to, or viewing human-created works. The better the inputs, the better the outputs and the lower the likelihood that results will be biased or inaccurate.