Italy's competition and consumer protection authority, the AGCM, announced on Thursday that it has concluded inquiries into three companies working with generative artificial intelligence following acceptance of binding undertakings from the firms.
The investigations had focused on alleged unfair commercial practices connected to the possibility that AI systems produce so-called hallucinations, meaning inaccurate or misleading content. The AGCM named the three companies as China-based DeepSeek, France's Mistral AI SAS and Turkey's Scaleup Yazilim Hizmetleri Anonim Şirketi.
According to the regulator, each company committed to improving how it informs users about the risk that its AI systems can generate incorrect or deceptive information. Those measures include adding permanent disclaimers about hallucination risk to the companies' chatbot services and enhancing the information available on their websites and within their applications.
The AGCM said that DeepSeek additionally pledged to invest in technology aimed at lowering the likelihood of hallucinations, while acknowledging that current technological solutions cannot remove the risk entirely. In other words, mitigation efforts will continue even though complete prevention is not feasible with present systems.
As part of Scaleup's commitments, the regulator said, NOVA AI - the company's cross-platform chatbot service - will explicitly inform consumers that the service functions as a single interface for accessing multiple chatbots and does not aggregate or further process the individual chatbots' responses. That clarification addresses how the service presents its functionality to end users.
The AGCM has regulatory responsibility for both competition issues and consumer protection, and its announcement indicates the authority used binding commitments to resolve concerns without pursuing further formal measures. The company-specific undertakings are intended to increase transparency for users of chatbot services and to reduce the practical risks associated with incorrect AI-generated content.