UK and US to partner on safety testing AI models
The UK and the US have embarked on a landmark partnership for AI safety testing, with Technology Secretary Michelle Donelan formalising the collaboration. It will align the efforts of both nations' AI safety institutes to test and evaluate emerging AI models. Key elements include sharing scientific strategies, exchanging experts, and conducting joint AI model testing exercises. The move follows commitments made at last November's AI Safety Summit at Bletchley Park, where major firms such as OpenAI and Google DeepMind agreed to voluntary testing of new models by safety institutes. The Department for Science, Innovation and Technology, confirming that the partnership takes effect immediately, has stressed its role in addressing the rapid development and potential risks of AI. The Government has also announced a £100 million investment in AI regulation and safe usage, opting to use existing regulators for AI monitoring rather than creating a new central body.
MPs to take action on missing clinical trial results
The Science and Technology Committee has decided to monitor the reporting of clinical trials by universities, and will question those that do not improve. Clinical trials are the best way to test whether a medicine is safe and effective. They can involve thousands of people, both patients and healthy volunteers, and take years to complete. Yet results from around half of all clinical trials remain hidden, and trials with negative results are twice as likely to go unreported as those with positive results. This means that people who make decisions about medicines do not have full information about the benefits and risks of treatments we use every day. Missing results can dramatically alter how a drug is perceived, leading to unnecessary spending in the NHS.