Assessing the Potential of AI in Protecting Younger Internet Users in the UK
Artificial intelligence has been in the crosshairs of governments worried about its possible misuse for fraud, disinformation, and other harmful online activity. In the U.K., the regulator is now preparing to investigate how AI can be deployed against some of those problems, specifically against content that is harmful to children.
Ofcom, the regulator responsible for enforcing the U.K.'s Online Safety Act, said it plans to launch a consultation on how AI and other automated tools are used today, and how they could be used in the future, to detect and remove illegal content online. The focus is on protecting children from harmful content and on identifying child sexual abuse material that was previously hard to detect.
The tools would be included in a broader set of proposals from Ofcom focused on online child safety. The consultations for these comprehensive proposals will start within a few weeks, with the AI consultation scheduled for later this year, Ofcom revealed.
Mark Bunting, a director in Ofcom's Online Safety Group, says Ofcom's interest in AI begins with examining how effectively it is being used as a screening tool today.
“Certain services currently employ these tools to detect and shield children from inappropriate content,” he told TechCrunch. “However, there is a scarcity of data on the accuracy and effectiveness of such tools. We aim to explore methods to guarantee that the industry is evaluating these aspects when utilising these tools, and ensuring that threats to freedom of speech and confidentiality are being managed correctly.”
One likely outcome is that Ofcom will recommend how, and against what criteria, platforms should assess these tools. That could push platforms to adopt more advanced tooling, or expose them to penalties if they fail to improve either at blocking harmful content or at shielding younger audiences from it.
“Similar to numerous online safety regulations, the responsibility is on the companies to ensure that they’re taking appropriate measures and using suitable tools for user protection,” he added.
The move will have its critics and advocates. AI researchers are finding increasingly sophisticated ways to use AI to detect, for instance, deepfakes and to verify users online. Yet there are just as many skeptics who argue that AI detection is far from a foolproof solution.
Ofcom announced its planned consultation on AI tools at the same time it published its latest research on children's online engagement in the U.K. That research found internet usage is rising among younger children, so much so that Ofcom is now tracking online activity in ever-younger age groups.
Almost a quarter (24%) of all children aged 5 to 7 own a smartphone, and the figure rises to 76% when tablets are included, according to a survey of U.K. parents. Children in this age group are also consuming more media on these devices: 65% have made voice and video calls (versus 59% a year earlier), and half (versus 39% last year) watch streamed content.
Whatever the age restrictions on mainstream social media apps, they appear to be largely ignored in the U.K. Ofcom found that 38% of children aged 5 to 7 use social media. Among this group, Meta-owned WhatsApp is the most-used app, at 37%. Instagram, at 22%, is less popular than the viral sensation TikTok, used by 30% of children in this age bracket. Discord is the least used, at just 4%.
About one-third (32%) of children this age go online independently, and 30% of parents said they were comfortable with their children having social media profiles despite being under the minimum age. YouTube Kids remains the most popular service among younger users, at 48%.
Gaming remains popular with children: 41% of 5-7 year-olds play games, and about 15% of this age group play shooter games.
While 76% of parents said they had talked with their young children about staying safe online, Ofcom flagged a disconnect between what a child encounters and what they report. In research on older children aged 8 to 17, Ofcom surveyed the children directly and found that while 32% reported having seen worrying content online, only 20% of their parents said they had been told about it.
The findings point to a gap between the potentially harmful content older children encounter online and how much of their online experience they share with their parents, according to Ofcom. Disturbing content is not the only challenge: there is also the issue of deepfakes. In an Ofcom survey of 16-17 year-olds, 25% said they were not confident in their ability to distinguish real content from fake online.