Post Tagged with: "ai"
The Law Bytes Podcast, Episode 163: Cohere AI CEO Aidan Gomez on the Emerging Legal and Regulatory Challenges for Artificial Intelligence
ChatGPT burst onto the public scene late last year, giving artificial intelligence its “aha moment” for many people. AI is now seemingly everywhere, attracting enormous attention and excitement alongside concerns, legal threats and talk of regulation. The potential of AI is evident to just about everyone, but the challenges associated with bias, copyright, privacy, misinformation and more can’t be ignored. Cohere AI is a Canadian AI firm widely viewed as one of Canada’s AI stars for its large language models, which enable companies of all sizes to integrate AI technologies. Aidan Gomez, who worked on the “T” in ChatGPT, is the co-founder and CEO of Cohere AI. He joins the Law Bytes podcast to talk about AI and his views on the myriad emerging legal and regulatory issues.
The Law Bytes Podcast, Episode 148: Christelle Tessono on Bringing a Human Rights Lens to AI Regulation in Bill C-27
Bill C-27, the government’s privacy and artificial intelligence bill, is slowly making its way through the Parliamentary process. One of the emerging issues has been the mounting opposition to the AI portion of the bill, including a recent NDP motion to divide the bill for voting purposes, separating the privacy and AI portions. In fact, several studies have been released that place a spotlight on the concerns with the government’s plan for AI regulation, which is widely viewed as vague and ineffective. Christelle Tessono is a tech policy researcher based at Princeton University’s Center for Information Technology Policy (CITP). She was one of several authors of a joint report on the AI bill, which brought together researchers from the Cybersecure Policy Exchange at Toronto Metropolitan University, McGill University’s Centre for Media, Technology and Democracy, and the Center for Information Technology Policy at Princeton University. Christelle joins the Law Bytes podcast to talk about the report and what she thinks needs to change in Bill C-27.
The Law Bytes Podcast, Episode 139: Florian Martin-Bariteau on the Artificial Intelligence and Data Act
Bill C-27, Canada’s privacy reform bill introduced in June by Innovation, Science and Industry Minister François-Philippe Champagne, was about more than just privacy. The bill also contains the Artificial Intelligence and Data Act (AIDA), the government’s attempt to begin to scope a regulatory environment around the use of AI technologies. Critics argue that regulations are long overdue, but have expressed concern about how much of the substance is left for regulations that are still to be developed. Florian Martin-Bariteau is a friend and colleague at the University of Ottawa, where he holds the University Research Chair in Technology and Society and serves as director of the Centre for Law, Technology and Society. He is currently a fellow at Harvard’s Berkman Klein Center for Internet and Society, and he joins the Law Bytes podcast to break down the AIDA.
The Law Bytes Podcast, Episode 85: Céline Castets-Renard on Europe’s Plan to Regulate Artificial Intelligence
Last week, the European Commission launched what promises to be a global, multi-year debate on the regulation of artificial intelligence. Several years in development, the proposed rules would ban some uses of AI, regulate others, and establish significant penalties for those that fail to abide by the rules. European leaders believe the initiative will place them at the forefront of AI, borrowing from the data protection playbook of seeking to export EU solutions to the rest of the world. Céline Castets-Renard is a colleague at the University of Ottawa, where she holds the University Research Chair on Accountable Artificial Intelligence in a Global World. She joins the Law Bytes podcast to discuss the EU plans, their implications for Canadian AI policy, and the road ahead for the regulation of artificial intelligence.
The world has been focused for the past several weeks on racial justice and the Black Lives Matter movement, with millions around the world taking to the streets to speak out against inequality and racism. Technology and concerns about racism and bias have been part of the discussion, with some of the world’s leading technology companies changing longstanding policies and practices. IBM has put an end to all research, development and production of facial recognition technologies, while both Amazon and Microsoft said they would no longer sell the technology to local police departments.
Mutale Nkonde is an artificial intelligence policy analyst and a fellow at both the Berkman Klein Center for Internet & Society at Harvard University and Stanford University’s Digital Civil Society Lab. She joins me on the podcast this week from a busy home in Brooklyn, NY to talk about this moment in racial justice and technology, racial literacy, and the concerns about bias in artificial intelligence.