UK competition watchdog is investigating the AI market over safety concerns

Britain’s competition watchdog has launched an assessment of the artificial intelligence market, in an effort to weigh up the potential opportunities and risks of a technology Bill Gates describes as “as revolutionary as mobile phones and the internet.”

The Competition and Markets Authority (CMA) said it would examine the systems underpinning tools like ChatGPT to determine what competition rules and consumer protections may be required. This, according to the CMA, is to ensure that AI tools are developed and deployed in a safe, secure, and responsible manner.

“It is critical that the potential benefits of this transformative technology are easily accessible to UK businesses and consumers, while protecting people from things like false or misleading information,” said CMA chief executive Sarah Cardell.

The CMA has set a June 2 deadline for submitting views and evidence, with plans to report its findings in September.


The announcement comes as regulators around the world tighten their grip on the development of generative AI – a technology that can generate text, images and audio that are virtually indistinguishable from human output. The hype around this type of AI was quickly followed by fears about its impact on jobs, industry, education, privacy – virtually all aspects of everyday life.

In late March, more than 2,000 industry experts and executives in North America — including DeepMind researchers, computer scientist Yoshua Bengio, and Elon Musk — signed an open letter calling for a six-month pause on training systems more powerful than GPT-4, the latest model behind ChatGPT. The signatories warned that “powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.”

Meanwhile, Dr. Geoffrey Hinton, commonly referred to as the “godfather” of AI, resigned from Google this week so he could speak freely about the dangers of the technology he helped develop. Hinton fears that generative AI tools could flood the internet with fake photos, videos, and text to the extent that an average person will no longer be able to “tell what is true.”

And yesterday, the former Chief Scientific Adviser to the British government, Sir Patrick Vallance, told MPs on the Science, Innovation and Technology Committee that AI could have as big an impact on jobs as the Industrial Revolution.

Anita Schjoll Abildgaard, CEO and co-founder of Norwegian startup Iris.ai, is optimistic the review will allay some of these fears and “enforce consumer protections and safely advance AI development,” she told TNW. Abildgaard also hopes the review will help address the “competitive imbalance” and “lack of disclosure” present in Big Tech’s proprietary data and training models.

While the CMA and many others are clearly concerned about the impact of AI tools developed by companies like OpenAI, Microsoft, and Google, Cardell is adamant that the assessment will not target specific companies. Rather, she said the “fact-finding mission” would engage “a range of different interested stakeholders, [including] companies, academics and others, to collect a rich and broad set of information.”

Cardell is also clear that the CMA wants to promote, not suppress, the growth of the rapidly emerging AI industry, albeit with a few safeguards. “It is a technology that has the potential to transform the way businesses compete and drive substantial economic growth,” she said.

A UK government white paper published in March follows a similar trend, indicating that ministers prefer not to set bespoke rules (or oversight bodies) to regulate the use of AI at this stage. This differs from the EU, which is currently in the final stages of completing its groundbreaking AI Act – the world’s first comprehensive AI law from a major regulatory body.

While the EU is first out of the gate, a new report from the Center for Data Innovation argues that politicians should avoid being swept up in the “hysteria” and should not rush to regulate AI simply to beat others to it, as doing so is likely to produce poorly conceived rules and missed opportunities for society.

Be that as it may, the meteoric rise of generative AI has clearly left governments scrambling to figure out if and how to regulate it.