As artificial intelligence becomes an integral part of daily life across Europe, Microsoft is committed to ensuring that AI technologies serve the public good while protecting citizens from potential abuses. With the start of a new EU mandate, there is a critical opportunity to reflect on how to leverage AI for innovation while taking proportionate steps to protect vulnerable populations.
The Promise and Challenge of AI
AI is no longer a distant prospect but a present reality, reshaping business, revolutionizing healthcare, and accelerating scientific discovery across the EU. Yet, as with any transformative technology, AI brings potentially significant challenges alongside immense opportunities. As a technology company providing AI services, Microsoft bears a responsibility to ensure that its solutions are deserving of public trust.
As European Commission President Ursula von der Leyen stated, "Europe is leading the way in making AI safer and more trustworthy, and in tackling the risks stemming from its misuse." At the same time, the EU must not lose sight of AI's central role in driving digital transformation and economic growth as it works to "focus on becoming a global leader in AI innovation."
A Balanced Approach to AI Governance
Advancing innovation and safety requires a balanced, whole-of-society approach that recognizes the respective roles of government, civil society, and industry. The EU has already positioned itself at the forefront of creating robust legal and regulatory frameworks that make industry players accountable for developing safe online products, including AI.
Microsoft recognizes the legislative developments undertaken during the 2019-2024 mandate and stands ready to engage in dialogue with EU stakeholders on implementing these frameworks effectively and proportionately. The company also sees a need for modernized criminal and other laws to help address AI misuse as the technology continues to evolve.
Protecting Vulnerable Groups
Microsoft's annual safety research reveals that certain societal groups are disproportionately at risk from deliberate misuse of AI technology. The company advocates for practical steps to protect people—most notably children, women, and older adults—from the harms that arise from abusive AI-generated content.
In a comprehensive white paper, Microsoft outlines steps the company is taking to address these harms, along with policy recommendations to build on existing efforts. Central to these recommendations is the need to establish clear and proportionate rules that protect individuals while enabling Europe to continue innovating.
Three Key Risk Areas
Microsoft's recommendations focus on strengthening the response to AI misuse in three key risk areas:
1. Protecting Children from Online Exploitation
Children are particularly vulnerable to AI-generated content that could be used for exploitation or grooming. Strong protections must be in place to prevent AI from facilitating harm to minors.
2. Safeguarding Women from Non-Consensual Intimate Imagery
AI-generated deepfakes and synthetic media pose serious risks to women, who are disproportionately targeted by non-consensual intimate imagery. Clear legal frameworks and technological safeguards are essential.
3. Safeguarding Older Adults from AI-Enabled Fraud
Older adults face heightened risks from AI-enabled fraud schemes that exploit trust and unfamiliarity with emerging technologies. Enhanced protections and education are crucial.
Microsoft's Safety Architecture
As a company, Microsoft has built a strong safety architecture for its services, grounded in safety by design and incorporating durable media provenance and watermarking. The company continues to safeguard its services from abusive content through robust collaboration across industry and with governments and civil society, supported by ongoing education and public awareness efforts.
Key recommendations include:
- Integrating provenance tools to trace content origin
- Strengthening appropriate existing legal frameworks
- Enhancing measures that put victim-centered decision-making at the forefront
- Building trust in AI across society so that its benefits can be fully realized
A Call to Collective Action
The challenges are significant, but so is the opportunity. By proactively addressing these issues, Europe can build a future where AI enhances human creativity, protects individual privacy, and strengthens the foundations of democracy.
"At Microsoft, we are committed to playing our part, but we recognize that we cannot do it alone," the company states. "We welcome engagement and feedback from stakeholders across the EU's digital ecosystem. It is essential that we get this right, and that means working together."
Microsoft stands for technology that is a positive force in society and people's lives, aligned with its mission to empower every person and organization on the planet to achieve more. The time for action is now.
Source: Microsoft EU Policy Blog