It’s no surprise that digital technologies, led by artificial intelligence, have seeped into our everyday lives. The way we communicate, learn, and advance in society is increasingly governed by technology. Tech’s siblings, artificial intelligence and social media, not only guide our lives but create the playground in which we play.
Luckily, thanks to the Canadian government, there is an opportunity to help clean up this playground, or at least mitigate the risks we face while in it.
On October 1, 2025, the federal government issued an open call for recommendations on how to manage artificial intelligence in civil society, asking for actions to take when it comes to building trust, creating standards of practice, and ensuring AI is held accountable. How this framework looks will be discussed come November, when the government-appointed AI Strategy Task Force shares its responses.
News Media Canada has taken part in this open consultation, providing recommendations for a “nation-building AI and data infrastructure” that serves all Canadians.
The Carney government’s AI plan, with feedback from the task force and the public, will position Canada with a well-rounded approach to AI: neither taking a back seat on its control nor overregulating, but instead investing in a hands-on approach to its use.
Below is a summary of ideas and recommendations for Canada’s AI strategy, written by News Media Canada’s president and CEO, Paul Deegan.
In the face of threats to our economy and sovereignty, Canadians need to work together to help expand opportunities for individuals and businesses to compete and succeed. This includes protecting the business models of our news media and creative industries, which rely upon their ability to protect and monetize their intellectual property. At the same time, we need to safeguard our digital sovereignty, national identity, and culture.
We fully support building safe, trusted, and fair artificial intelligence systems and strengthening public trust in artificial intelligence, so long as those systems are not built and maintained at the expense of those who create the underlying intellectual property that feeds them. Canada’s creative industries, of which news media are a part, accounted for nearly 670,000 jobs and $63.2 billion in GDP in 2023. Rather than undermining the economic foundation of these strategically important industries, Canada must seek a ‘win-win’ development path that strengthens the news media, creative and AI industries together.
We agree wholeheartedly with professor Taylor Owen, Beaverbrook Chair in Media, Ethics and Communications and the founding director of The Centre for Media, Technology and Democracy at McGill University, who posted the following on LinkedIn after he was appointed to Canada’s AI Task Force:
“Sovereignty requires governance. Digital sovereignty cannot simply mean scaling infrastructure, companies, or computers. True sovereignty means ensuring our technologies serve the interests of our citizens, and this demands the ability to govern. We failed to govern social media, and the costs have been clear,” said Owen, adding that, “This not only caused harm, it also weakened our sovereignty. We cannot afford to repeat that mistake with AI… The ways citizens access, understand, and trust information are changing quickly, and so far, the effects on democratic discourse have been troubling. AI has not improved our information environment; it has often made it worse.”
News Media Canada’s thoughts on building public trust in AI technologies:
Canada can build public trust in artificial intelligence by acknowledging and addressing the risks and actual harm being caused. In the case of news publishing, the behaviour of some large language models (LLMs) is causing harm to both publishers and Canadians who rely upon news content.
AI companies are flagrantly scraping and summarizing content directly from published news articles via retrieval-augmented generation. News media is by far the most frequently cited source of current information for LLMs, which are using news content without authorization or fair compensation. With the user staying within Big Tech’s ever-taller walled garden, publishers are deprived of an audience, and their ability to sell advertising and subscriptions is significantly diminished.
More recently, LLMs themselves are charging developers who use their web search features via API platforms on a “pay per scrape” basis (see, for example, details for OpenAI, Mistral, Perplexity, and Anthropic). Ironically, this revenue is not flowing through to those whose websites are actually scraped.
While there is an increasing move towards licensing and bot monetization in the U.S., the concern is that the same non-Canadian monopolistic players will capture the market, control the monetary exchange, and take an unfair portion of the revenue (which news publishers in the ad tech space already see with “middle players” popping up to monetize bot traffic). As a result, Canadian publishers will be forced into a race to the bottom.
Federal policy should not increase the risk of market failure but rather create the conditions for a market that works for both creators and AI providers. This can be achieved by ensuring IP rights remain protected – the basis for a value exchange to exist. Publishers must have control over access, pricing, and distribution, as well as the choice of whether to participate at all.
Canadians, too, are being harmed under the status quo. Large language models do not adhere to journalistic standards; they cannot perceive reality, truth, or facts without (mis-)using the work of human journalists and content creators.
As noted by the BBC, “AI assistants have significant issues with basic factual accuracy…The range of errors introduced by AI assistants is wider than just factual inaccuracies. The AI assistants we tested struggled to differentiate between opinion and fact, editorialised, and often failed to include essential context. Even when each statement in a response is accurate, these types of issues can result in responses which are misleading or biased.”
Let us be very clear: the ability for publishers to monetize their content is critical for the maintenance of a free, fair, and strong media ecosystem in Canada.
If publishers cannot monetize content, they cannot reinvest in the accurate and authoritative journalism readers rely upon to make informed decisions that empower them to participate effectively in democratic processes.
AI companies are stealing content on an industrial scale against copyright laws and using it to undermine the media in Canada, selling it for themselves as repackaged and less reliable content. They are effectively strip-mining proprietary content, freeriding on the backs of news publishers while unlawfully enriching themselves. That’s unfair. That’s anti-competitive. That’s illegal. It runs counter to the interests of the news media and the wider public, while undermining government’s drive to encourage AI development and adoption, as this depends on access to high quality data and information created by humans.
News Media Canada’s thoughts on how to create frameworks and regulate AI responsibly:
We need a balanced framework to scale Canadian innovation leaders, while ensuring the ethical, positive, and responsible use of AI through reasonable guardrails. First, intellectual property should be protected. Second, platforms should provide fair compensation to publishers. Third, platforms should provide clear attribution to source content. Fourth, publishers should be allowed to opt out of AI overviews without their websites being removed from search engines. Fifth, platforms should not discriminate in the ranking of search results.
We recommend the Government of Canada carefully consider the Global Principles for Artificial Intelligence, which were developed in 2023 by twenty-six organizations around the world, including News Media Canada. Those principles include:
- Respecting intellectual property rights, protecting the organizations’ investments in original content.
- Leveraging efficient licensing models that can facilitate innovation through training of trustworthy and high-quality AI systems.
- Providing granular transparency to allow publishers to enforce their rights where their content is included in training datasets.
- Clearly attributing content to the original publishers of the content.
Copyright protections must be properly enforced.
In a world of harmful misinformation and disinformation amplified by Big Tech platforms, we need fact-based, fact-checked journalism. Canadians require a strong and free media to access information and to make informed decisions that empower them to participate effectively in democratic processes. To ensure our free and plural press remains commercially viable, AI providers, whether foreign or domestic, should not use publishers’ content to build and run their products without consent, credit, and compensation.
Canada should align with other Western democracies through regulatory cooperation to support homegrown, decentralized, fair, and responsible tech development, while protecting intellectual property, so that news publishers can continue to invest in fact-based, fact-checked original high-quality news content produced by real journalists.
Trustworthy news is an antidote to the proliferation of misinformation online. With a framework backed up by the teeth of enforcement, it contributes to the sustainability of reliable, innovative AI models themselves.
News Media Canada represents 550 trusted news titles – from small local independents to large national news publishers – across Canada.