The British government has now made clear that it will not compel technology firms to disclose the copyrighted material they use to train their AI systems. The House of Lords last month voted 221 to 116 for an amendment to the Data (Use and Access) Bill demanding openness from AI developers. Peers argued that writers, artists, and creators should be protected against unlicensed use of their work. But ministers rejected the amendment, promising instead to publish economic and technical reports on the issue at a later date.
The Central Conflict: Transparency vs. Trade Secrets
At the heart of this row is a simple question: should AI companies be compelled to disclose the books, articles, music, and other creative works they feed into their algorithms?
The Case for Transparency: Protecting Creators
Supporters of the amendment say yes. They warn that without transparency, dominant AI companies can devour huge quantities of copyrighted text, images, and sound with impunity. Creators fear their work being exploited to generate new content or to train language models, effectively allowing corporations to profit from someone else's labour without ever paying for a licence.
Crossbench peer and film director Beeban Kidron, who has campaigned relentlessly on behalf of the creative sector, made a powerful speech in the Lords. She pointed out that musicians, writers, and visual artists still struggle to earn a living; for them, AI training is not an abstract legal argument but a very real threat to their income and livelihoods. Kidron's amendment would have required AI developers, at a minimum, to disclose what copyrighted material they used for training. Her argument was that transparency would let creators find out whether their material had been used, and seek payment if it had.
Several campaigners and trade associations responded indignantly. Publishers and newspaper groups, such as the News Media Association, accused ministers of ignoring legitimate concerns. Celebrities weighed in. Paul McCartney and the National Theatre warned that the future of the creative sector was at stake. Elton John called the issue "existential," arguing that without proper protections, entire industries could collapse. Others pointed out that AI developers currently enjoy free access to digitized books, films, and music. If training materials remain hidden from public view, creators have little way of knowing whether their work has been used, or what to ask for in licensing negotiations.
The Government’s Position: Protecting Competitiveness and Innovation
Ministers did not see it that way. The government's digital minister in the Lords argued that forcing tech firms to publish the entirety of their training data could threaten national competitiveness. Ministers claimed that forced disclosure risked exposing trade secrets, stifling innovation, and driving business away. Instead of mandating openness now, the government will formally consult on AI and copyright, weighing four policy options: full licensing, an opt-out system, no change, or a "copyright waiver plus opt-out" system. The consultation is due to conclude later this year.
Those who back the government's stance, by contrast, want further research and consultation before settling on a solution. They add that the AI industry is still immature, and that imposing a mandatory disclosure requirement now could work against it. In their view, carefully considered regulation emerging from the consultation could create a licensing system that safeguards creators while still allowing AI developers to innovate. For example, an opt-in or opt-out licensing platform might serve as an intermediate solution: with millions of creators consenting to its terms, AI companies could draw on more content without fear of lawsuits, while those who object could withdraw their work from the pool.
What Happens Next?
The government's rejection of the amendment means the Data Bill now returns to the Commons, where MPs will decide what happens next. If they accept the Lords' amendment, the bill goes back to the House of Lords for formal approval. If MPs insist on the government's wording, the bill enters the "ping-pong" stage, in which concessions and compromises are hammered out between the two Houses. Either way, the issue remains unresolved. Activists are counting on public outcry, amplified by condemnation from high-profile performers, to force a change of heart among ministers before the bill passes into law.
For now, creators are left in suspense. Most are too small to engage in protracted negotiations, and few have the legal resources to take on big tech companies. The promise of future reports means little if the bill is ratified without any transparency requirement. Meanwhile, AI developers are coming under growing pressure globally. Other countries are experimenting with their own laws: some mandate open disclosure of training data, others impose rigorous licensing regimes. The UK's decision will become part of a global patchwork of regulation that will shape where companies choose to site their AI research and development.
As the summer progresses, attention will turn to the outcome of the government's planned consultation. Those four policy options, from full licensing to an opt-out system, will determine the future of copyright and AI in the UK. If a licensing framework can be devised that balances creators' rights with developers' needs, it could serve as a model for other nations grappling with the same concerns. But if negotiations stall or produce only weak guidelines, the creative economy could be left in limbo, with AI firms free to train models on copyrighted content behind closed doors.
Industry commentators, politicians, and creatives will meanwhile keep a close watch on the Data Bill as it makes its way through Parliament. The failure of the amendment is far from the final word. Whether through future amendments, legal action, or industry-wide collaboration, the fight for transparency in AI copyright has a long road ahead. Until then, creators can only wait and hope that future consultations deliver the meaningful safeguards they need, so that the next generation of AI tools respects the rights and accomplishments of the artists whose work helps train them.