Of the 2,401 'self-generated' images and videos of 3–6-year-olds that we hashed this year, 91% were of girls and most (62%) were assessed as Category C by our analysts. There were 356 Category A images or videos. Where Category B material was seen, the children were typically rubbing genitals (categorised as masturbation) using their hands/fingers or, less often, another object, such as a pen or hairbrush. We are now seeing much younger children appearing in this type of abuse imagery.
UK and US authorities investigating a “dark web” child pornography site run from South Korea on Wednesday announced the arrest of 337 suspects in 38 countries. Europol said the operation had been the largest ever handled by its experts in fighting child sexual exploitation and one of the biggest cases the agency had supported in recent years: police busted one of the largest child abuse networks in the world, operating in nearly 35 countries. In Washington, the US Department of Justice said separately that the site had operated “the largest child sexual exploitation market by volume of content” when it was taken down. The suspects allegedly acted as administrators for the website and gave advice to its users on how to evade law enforcement while using the platform. In a separate case, Matthew Falder was sentenced to 25 years in jail in 2017 after admitting 137 counts of online abuse, including the encouragement of child rape and even the abuse of a baby.
More than nine in ten people in the UK say they are concerned about how images and videos of children being sexually abused are shared through end-to-end encrypted (E2EE) messaging services. Facebook announced in March plans to encrypt Messenger, which last year was responsible for nearly 12 million of the 18.4 million worldwide reports of child sexual abuse material, according to people familiar with the reports. Meanwhile, there are currently no laws that make it an offence to possess the technology to create AI child sexual abuse images. The Utah group expects to arrest nearly twice as many people this year as last year for crimes related to child sexual abuse material, but federal funding has not kept pace with the surge. The anonymity offered by the sites emboldens members to post images of very young children being sexually abused, in increasingly extreme and violent forms.
- The laws will also make it illegal for anyone to possess so-called “paedophile manuals” which teach people how to use AI to sexually abuse children.
- Viewers could use a search engine to pick videos by age, including "preteen hardcore" and "pedophile."
- He "assisted with managing and maintaining" four separate websites on the dark web between at least December of 2021 and November of 2022, when he was arrested, the DOJ said in a written statement.
- The Hidden Wiki and its mirrors and forks hold some of the largest directories of content at any given time.
Outcome Of The International Cooperative Investigation
Tlhako said young adults need to be more aware and responsible, as innocently taken videos and pictures could end up in the wrong hands and later appear on websites. Some of this material is self-generated, but even images kept privately can be exposed when a device is sent away for repairs. He warned that many children unknowingly expose themselves to danger simply by sharing explicit pictures with a partner or friend. Before these children realise it, they are trapped in a world they could never have imagined.

Data obtained through a public records request suggests Facebook’s plans to encrypt Messenger in the coming years will lead to vast numbers of images of child abuse going undetected. The groups use encrypted technologies and the dark web, the vast underbelly of the internet, to teach pedophiles how to carry out the crimes and how to record and share images of the abuse worldwide. A private section of one forum was available only to members who shared imagery of children they abused themselves. “Historically, you would never have gone to a black market shop and asked, ‘I want real hard-core with 3-year-olds,’” said Yolanda Lippert, a prosecutor in Cook County, Ill., who leads a team investigating online child abuse. Now an advocate for laws preventing crimes against children, she had testified in support of the 2008 legislation. With so many reports of the abuse coming their way, law enforcement agencies across the country said they were often besieged.
October 2023 Report Summary
Twenty years ago, the online images were a problem; 10 years ago, an epidemic. Online predators create and share the illegal material, which is increasingly cloaked by technology. The Internet Watch Foundation has documented a rapid increase in the number of AI-generated images its analysts encounter in their work. Investigators viewed the hundreds of arrests as a success in the authorities’ battle against illegal sites, which use cryptocurrencies for their presumed anonymity and ability to render purchasers untraceable. He is already serving an 18-month sentence in South Korea on convictions related to child pornography.

He added that agents who raided the men’s homes found tens of thousands of images of child pornography in some. One man told investigators he didn't think exchanging the images was harmful "because the harmful acts have already taken place"; websites like theirs, he said, were "essential for people to express themselves." In his 30s, and with the internet still in its infancy, he had traded floppy discs of child pornography in the mail. Many children in the videos have not yet been identified. Law enforcement officials from the US, Britain and South Korea described the network as one of the largest child pornography operations they have encountered to date. FBI Director Kash Patel said, “This operation represents one of the most significant strikes ever made against online child exploitation networks.”
Update: Understanding The Rapid Evolution Of AI-Generated Child Abuse Imagery
- "Garrell engaged in an extremely complex and technologically sophisticated conspiracy that far exceeds the typical child-exploitation offenses," prosecutors said.
- In March, the police conducted coordinated raids across 31 nations in what Europol’s Guido Limmer described as the “largest operation ever” of its kind.
- These markets have attracted significant media coverage, starting with the popularity of Silk Road and its subsequent seizure by legal authorities.
- "Since AI-generated images became possible, there has been this huge flood… it's not just very young girls, they're paedophiles talking about toddlers," she said.
The Internet Watch Foundation (IWF) has identified a significant and growing threat where AI technology is being exploited to produce child sexual abuse material (CSAM). Our first report in October 2023 revealed the presence of over 20,000 AI-generated images on a dark web forum in a single month, of which more than 3,000 depicted criminal child sexual abuse activities. When it was shut down in March 2015, the site had over 215,000 users and hosted 23,000 sexually explicit images and videos of children as young as toddlers. Unlike other websites hosting images of child sex abuse, ‘Kidflix’ allowed users to stream videos as well as download files.

One Warrant Used To Target Thousands Of Child Porn Suspects In 120 Countries
AI image generator Stable Diffusion was created as a global collaboration between academics and a number of companies, led by UK company Stability AI. Several versions have been released, with restrictions written into the code that control the kind of content that can be made. But last year, an earlier "open source" version was released to the public which allowed users to remove any filters and train it to produce any image – including illegal ones. As AI continues developing rapidly, questions have been raised about the future risks it could pose to people's privacy, their human rights or their safety. "Within those groups, which will have 100 members, people will be sharing, 'Oh here's a link to real stuff,'" she says.

“Without proper controls, generative AI tools provide a playground for online predators to realise their most perverse and sickening fantasies. Victims and survivors have a right not to live in fear of revictimisation by technology which should be safe by design.” They are revictimised every time these images are viewed, and this is no different with AI images. Most of these (90%) were so convincing that they could be assessed under the same law as real CSAM. It’s vital that tech companies and politicians do more to address these dangers as a matter of urgency.

Children, some just 3 or 4 years old, being sexually abused and in some cases tortured. And parents of the abused, struggling to cope with the guilt of not having prevented it and their powerlessness to stop its online spread. And the group tasked with serving as a federal clearinghouse for the imagery — the go-between for the tech companies and the authorities — was ill equipped for the expanding demands.
The project and its operations are not fully declassified, but there are documents that show it was running at least from 2018 to 2021. “I think that people were always there, but the access is so easy,” said Lt. John Pizzuro, a task force commander in New Jersey. While any child at imminent risk remains a priority, the volume of work has also forced the task forces to make difficult choices.
Boystown (stylized in logo as BOYS TOWN) was a child pornography website run through the Tor network as an onion service. The illegal images and videos were posted on forums and in chats where members were able to communicate with one another. The content was organised into various categories, including "Art" (which included shotacon among other subcategories), "hardcore", "kindergarten", and "toddler". It was discovered during an investigation into paedophile Matthew Falder from England, who was jailed for 25 years for sharing abuse tips and images on the dark web.
Comments by users on individual images in Pixiv make it clear they have a sexual interest in children, with some users even offering to provide images and videos of abuse that were not AI-generated. The site's vast library – nearly half of it consisting of images never seen before by law enforcement – is an illustration of what authorities say is an explosion of sexual abuse content online. All 'self-generated' child sexual abuse imagery is horrific, and our analysts sadly see it every day, but seeing so many very young children in these images and videos is particularly distressing.