Written by Annastiina Papunen.
Enhancing the EU’s competitiveness is a key priority for the European Council in the current legislative cycle. In a complex geopolitical environment, in which the international rules-based order is increasingly undermined and core alliances are questioned, it is essential for Europe to be able to stand firmly on its own feet. Strengthening the single market and the EU economic base is ‘an urgent strategic imperative’, in the words of European Council President António Costa, if the EU is to improve its competitiveness and develop its strategic autonomy.
On 12 February 2026, EU leaders will meet for an informal leaders’ retreat – ‘a strategic brainstorming session’, according to President Costa – in Alden Biesen, Belgium, to discuss EU competitiveness. This meeting, which 19 EU leaders requested in a letter in October 2025, builds on previous discussions on the topic, notably 1) the informal meeting of 22 January 2026 on transatlantic relations and trade, 2) the strategic discussion on geoeconomy and competitiveness at the December 2025 European Council meeting, and 3) the October 2025 regular meeting on simplification and twin transition. Mario Draghi and Enrico Letta have been invited to join the retreat to share their visions and highlight developments since their groundbreaking reports. European Parliament President Roberta Metsola will also address the meeting; President Costa has met Parliament’s Conference of Presidents ahead of the retreat. No formal conclusions are expected from the strategic debate, but the reflections are likely to feed into the March 2026 European Council conclusions.
Read the complete briefing on ‘Outlook for the 12 February 2026 retreat: Work on competitiveness in the European Council’ on the Think Tank pages of the European Parliament.
Millions of children are at risk of exploitation and abuse as their images are manipulated through generative AI tools. Credit: Ludovic Toinel/Unsplash
By Oritro Karim
UNITED NATIONS, Feb 10 2026 (IPS)
New findings from the United Nations Children’s Fund (UNICEF) reveal that millions of children are having their images manipulated into sexualized content through the use of generative artificial intelligence (AI), fueling a fast-growing and deeply harmful form of online abuse. The agency warns that without strong regulatory frameworks and meaningful cooperation between governments and tech platforms, this escalating threat could have devastating consequences for the next generation.
A 2025 report from The Childlight Global Child Safety Institute—an independent organization that tracks child sexual exploitation and abuse—shows a staggering rise in technology-facilitated child abuse in recent years, growing from 4,700 cases in the United States in 2023 to over 67,000 in 2024. A significant share of these incidents involved deepfakes: AI-generated images, videos, and audio engineered to appear realistic and often used to create sexualized content. This includes widespread “nudification”, where AI tools strip or alter clothing in photos to produce fabricated nude images.
A joint study from UNICEF, Interpol, and End Child Prostitution in Asian Tourism (ECPAT) International, which examined the rates of child sexual abuse material (CSAM) circulated online across 11 countries, found that at least 1.2 million children had their images manipulated into sexually explicit deepfakes in the past year alone. This means roughly one in every 25 children—or one child in every classroom—has already been victimized by this emerging form of digital abuse.
“When a child’s image or identity is used, that child is directly victimised,” a UNICEF representative said. “Even without an identifiable victim, AI-generated child sexual abuse material normalises the sexual exploitation of children, fuels demand for abusive content and presents significant challenges for law enforcement in identifying and protecting children that need help. Deepfake abuse is abuse, and there is nothing fake about the harm it causes.”
A 2025 study from the National Police Chiefs’ Council (NPCC) examined the public’s attitudes toward deepfake abuse and found that such abuse had surged by 1,780 percent between 2019 and 2024. In a UK-wide representative survey conducted by Crest Advisory, nearly three in five respondents reported feeling worried about becoming victims of deepfake abuse.
Additionally, 34 percent admitted to creating a sexual or intimate deepfake of someone they knew, while 14 percent had created deepfakes of someone they did not know. The research also found that women and girls are disproportionately targeted, with social media identified as the most common place where these deepfakes are spread.
The study also presented respondents with a scenario in which a person creates an intimate deepfake of their partner, discloses it to them, and later distributes it to others following an argument. Alarmingly, 13 percent of respondents considered this behavior both morally and legally acceptable, while a further 9 percent were neutral. The NPCC also reported that those who found the behavior acceptable were more likely to be younger men who actively consume pornography and hold beliefs that would “commonly be regarded as misogynistic”.
“We live in very worrying times, the futures of our daughters (and sons) are at stake if we don’t start to take decisive action in the digital space soon,” award-winning activist and internet personality Cally-Jane Beech told NPCC. “We are looking at a whole generation of kids who grew up with no safeguards, laws or rules in place about this, and now seeing the dark ripple effect of that freedom.”
Deepfake abuse can have severe and lasting psychological and social consequences for children, often triggering intense shame, anxiety, depression, and fear. In a new report, UNICEF notes that a child’s “body, identity, and reputation can be violated remotely, invisibly, and permanently” through deepfake abuse, alongside risks of threats, blackmail, and extortion from perpetrators. Feelings of violation – paired with the permanence and viral spread of digital content – can leave victims with long-term trauma, mistrust, and disrupted social development.
“Many experience acute distress and fear upon discovering that their image has been manipulated into sexualised content,” Afrooz Kaviani Johnson, a Child Protection Specialist at UNICEF, told IPS. “Children report feelings of shame and stigma, compounded by the loss of control over their own identity. These harms are real and lasting: being depicted in sexualised deepfakes can severely impact a child’s wellbeing, erode their trust in digital spaces, and leave them feeling unsafe even in their everyday ‘offline’ lives.”
Cosmas Zavazava, Director of the Telecommunication Development Bureau at the International Telecommunication Union (ITU), added that online abuse can also translate into physical harm.
In a joint statement on Artificial Intelligence and the Rights of the Child, key UN entities, including UNICEF, the ITU, the Office of the UN High Commissioner for Human Rights (OHCHR) and the UN Committee on the Rights of the Child (CRC), warned of a widespread lack of AI literacy among children, parents, caregivers and teachers. AI literacy refers to the basic ability to understand how AI systems work and how to engage with them critically and effectively. This knowledge gap leaves young people especially vulnerable, making it harder for victims and their support systems to recognize when a child is being targeted, to report abuse, or to access adequate protections and support services.
The UN also emphasized that a substantial share of responsibility lies with tech platforms, noting that most generative AI tools lack meaningful safeguards to prevent digital child exploitation.
“From UNICEF’s perspective, deepfake abuse thrives in part because legal and regulatory frameworks have not kept pace with technology. In many countries, laws do not explicitly recognise AI‑generated sexualised images of children as child sexual abuse material (CSAM),” Johnson said.
UNICEF is urging governments to ensure that definitions of CSAM are updated to include AI-generated content and “explicitly criminalise both its creation and distribution”. According to Johnson, technology companies should be required to adopt what she called “safety-by-design measures” and “child-rights impact assessments”.
She stressed however that while essential, laws and regulations alone would not be enough. “Social norms that tolerate or minimise sexual abuse and exploitation must also change. Protecting children effectively will require not only better laws, but real shifts in attitudes, enforcement, and support for those who are harmed.”
Commercial incentives further compound the problem, with platforms benefitting from increased user engagement, subscriptions, and publicity generated by AI image tools, creating little motivation to adopt stricter protection measures.
As a result, tech companies often introduce guardrails only after major public controversies — long after children have already been affected. One such example is Grok, the AI chatbot for X (formerly Twitter), which was found generating large volumes of nonconsensual, sexualized deepfake images in response to user prompts. Facing widespread international backlash, X announced in January that Grok’s image generation tool would be limited to X’s paid subscribers.
Investigations into Grok are ongoing, however. The United Kingdom and the European Union have opened inquiries since January, and on February 3, prosecutors in France raided X’s offices as part of an investigation into the platform’s alleged role in circulating CSAM and deepfakes. X’s owner, Elon Musk, was summoned for questioning.
UN officials have stressed the need for regulatory frameworks that protect children online while still allowing AI systems to grow and generate revenue. “Initially, we got the feeling that they were concerned about stifling innovation, but our message is very clear: with responsible deployment of AI, you can still make a profit, you can still do business, you can still get market share,” said a senior UN official. “The private sector is a partner, but we have to raise a red flag when we see something that is going to lead to unwanted outcomes.”
IPS UN Bureau
Follow @IPSNewsUNBureau
Turkey’s Kurds are mobilising as the Syrian army has retaken almost all of the territories administered by Kurdish forces in north-eastern Syria. Demonstrations, humanitarian appeals and political statements are being met with heightened repression, which also targets journalists.
IMF Managing Director Kristalina Georgieva at the World Government Summit, Dubai, UAE, 3-5 February 2026. Credit: International Monetary Fund (IMF)
By Kristalina Georgieva
DUBAI, United Arab Emirates, Feb 10 2026 (IPS)
It is a pleasure for me to join His Excellency, Minister Al Hussaini in welcoming you to this important dialogue here in the United Arab Emirates—a fast-growing global AI hub. A recent Microsoft study reports that 64 percent of the UAE’s working-age population uses AI, the highest rate globally.
This illustrates the dynamism we see in the region—and the major investments and partnerships that some of the world’s biggest tech companies are making here.
Why such a huge commitment to this region? Because the UAE and the members of the GCC all understand just how transformative AI can be. They have made systemically significant investments in human capital over the last decades. IMF estimates show that, with the right measures in place, AI could fuel a boost to global productivity of up to 0.8 percentage points per year. This could raise global growth to levels exceeding those of the pre-pandemic period.
Here in the Gulf region, AI could boost non-oil GDP in Gulf countries by up to 2.8 percent. For economies that have long been dependent on hydrocarbon exports, this presents an enormous opportunity to diversify and build new sources of growth.
Now, major technology changes often bring disruption. And sure enough, we can expect disruption from AI. Especially to labor markets. On average, 40 percent of jobs globally will be impacted by AI—either upgraded, transformed, or eliminated. For advanced economies, 60 percent of jobs will be affected. This is like a tsunami hitting the labor market.
We are already seeing the evidence: about one in 10 job postings in advanced economies now requires at least one new skill. Workers with in-demand skills will likely see productivity and wage gains. This will create more demand for services and increase employment and wages among low-skilled workers. But middle-skilled jobs will be squeezed.
That means that young people and the middle class will be hit hardest.
We can expect to see a similar divergence between countries. Those with an economic structure conducive to AI adoption—that is, strong digital infrastructure, more skilled labor forces, and robust regulatory frameworks—are likely to experience the largest and fastest benefits. Countries that don’t may get left behind. This is why we gathered here today. AI looks unstoppable.
But whether or not countries can successfully capitalize on AI’s enormous promise is yet to be determined. And this will largely depend on the policy regimes they put in place. So then, what must be done to ensure AI translates into broad-based prosperity for this region?
First, macro policies. Investment and innovation in AI will boost growth. Fiscal policies can support this by strengthening tax systems and by funding research, reskilling, or sector-based training programs. However, tax systems should not encourage automation at the expense of people. Likewise, effective financial regulation will be essential to ensure financial market efficiency and improved risk management.
Second, guardrails. AI needs to be regulated to ensure it’s safe, fair, and trustworthy—but without stifling innovation. Different countries are taking different approaches, ranging from risk-based frameworks to high-level principles. Whatever approach they take, it’s critical that countries coordinate.
That brings me to my third point: cooperation and partnerships. Scale is a big advantage in AI. But you can’t get scale without cooperation among governments, AI researchers and developers, including when it comes to data sharing and knowledge transfer.
Let me conclude. AI will transform our economies. It will present immense opportunities and pose significant risks. And it falls to you, the world’s policymakers, to ensure that the opportunities are maximized for your countries and the risks controlled.
IPS UN Bureau
Follow @IPSNewsUNBureau