Author: Online Safety 4 Schools
Online Safety: KCSiE 2024 vs 2025
Author: Jonathan Taylor MSc – Social Media & Online Safety Consultant
Purpose: Understanding the Changes in Keeping Children Safe in Education 2025 compared to KCSiE 2024 – Specific to ‘Online Safety’
Online Safety: KCSiE 2024 vs 2025 Comparison (At a Glance)
| Online Safety Topic | KCSiE 2024 | KCSiE 2025 | Notes |
| --- | --- | --- | --- |
| Misinformation / Disinformation / Conspiracy Theories | ❌ Not mentioned | ✅ Explicitly included. Online Fakeness and its impact on Online Exploitation (Deep Fake) (para 135) | New online harm category |
| Filtering & Monitoring – Self-assessment support | ❌ Not included | ✅ ‘Plan technology for your school’ should include discussion of School Internet Access & Data, plus Website & App restriction. (para 142) | Aids technical evaluation |
| Generative AI Guidance | ❌ Not included | ✅ Now includes importance of AI training & guidance. (para 143) | Emphasis on emerging tech |
| Cybersecurity – Emphasis on Resilience | ⚠️ Mentioned broadly | ✅ Highlights ‘cyber resilience’. Schools should also include understanding the importance of Online Competence. (para 144) | Aligns with DfE standards |
| Annual Review of Online Safety Approach Recommended | ❌ Not mentioned | ✅ Online Safety Audit Encouraged (para 145) | Structured evaluation approach |
| Governor Oversight Tools – UKCIS ‘Questions’ Resource | ❌ Not included | ✅ Recommended. Schools should look to include Governor / Trustee training. (para 146) | Helps governors assess policies |
2025-2026 Online Safety Workshops ‘At A Glance’
Online Safety for Schools – Staff
🔐 Key Online Safety–Related Changes from 2024 to 2025
1. Greater Emphasis on Concurrent Online and Offline Abuse
- The 2025 update reinforces that online abuse often occurs simultaneously with offline abuse, highlighting how these are not isolated risks.
- It notes that children may experience abuse via chat groups, including the non-consensual sharing of indecent or abusive content, which was previously less emphasized.
2. Clarity on Filtering and Monitoring Responsibilities
- While both versions state that training must include online safety, the 2025 version puts clearer emphasis on understanding roles and responsibilities related to filtering and monitoring technologies in schools.
- References to paragraph 140 are maintained from 2024 to 2025, but with clearer instruction around ensuring this understanding during induction and refresher training.
3. Terminology and Links Updates
- The 2025 version standardizes references to “nude and semi-nude images and videos” (previously varied between “sexting” or “youth-produced sexual imagery”) in line with UKCIS guidance.
- More direct links are provided to updated government and UKCIS guidance documents, making it easier for staff to access practical help.
4. Online Safety in Child-on-Child Abuse
- The 2025 version more explicitly discusses online elements in child-on-child abuse, e.g.:
- Online bullying
- Harassment and misogynistic/misandrist messages
- Distribution of indecent imagery without consent
5. Online Misuse Highlighted as a Standalone Risk
- Online safety is referenced more frequently in the context of signs of abuse, reinforcing it as a core safeguarding concern rather than just an add-on.
| Category | KCSiE 2024 Guidance | KCSiE 2025 Guidance |
| --- | --- | --- |
| Concurrent Online & Offline Abuse | Mentions online risks separately from offline abuse. | Emphasises abuse often occurs both online and offline simultaneously. |
| Filtering & Monitoring Responsibilities | Mentions filtering/monitoring in training but with less emphasis. | Clearer training requirements around filtering & monitoring responsibilities. |
| Terminology & Links | Terminology varies; ‘sexting’ and ‘youth-produced imagery’ used. | Consistent terminology: ‘nude & semi-nude images/videos’; links to UKCIS. |
| Child-on-Child Online Abuse | General mention of online bullying and harassment. | Expanded detail on online elements in peer abuse and chat groups. |
| Online Misuse as Safeguarding Risk | Online misuse is addressed but not always explicitly. | Explicitly framed as a core safeguarding issue. |
🌐 Presented by: Jonathan Taylor MSc
www.onlinesafety4schools.co.uk
📧 onlinesafety4schools@ymail.com
Self-Perpetual Radicalisation (Misogyny) in Relation to Social Media Influencers
Introduction:
Self-perpetual radicalisation refers to a process whereby individuals adopt increasingly extreme beliefs through self-directed online consumption, often without real-world contact with extremist groups. One of the most alarming areas of this phenomenon is the rise of misogynistic ideologies fuelled by certain social media influencers, particularly in the realms of male self-help, fitness, dating advice, and “alpha male” content.
The Influencer Ecosystem:
Influencers who operate in the male self-improvement space can subtly—or overtly—promote harmful gender stereotypes and misogynistic narratives. Framed as “truth-telling” or “breaking societal taboos,” their content often criticises feminism, portrays women as manipulative or inferior, and blames women for perceived male disempowerment.
Radicalisation Pathways:
- Echo Chambers: Followers are drawn in by surface-level advice (e.g., fitness or financial success) and gradually exposed to more extreme content, creating an ideological funnel.
- Algorithmic Exposure: Social media platforms recommend increasingly provocative content, reinforcing biased worldviews and reducing exposure to dissenting perspectives.
- Us vs Them Narratives: Misogynistic influencers frame gender dynamics as a ‘zero-sum game’, cultivating resentment toward women and championing male dominance.
- Parasocial Loyalty: Young, impressionable males may form one-sided emotional bonds with influencers, accepting their views without scrutiny.
Mechanisms of Radicalisation:
- Algorithmic Exposure & Reinforcement: Platforms such as Snapchat, YouTube, TikTok, and Instagram reward engagement, promoting controversial or emotionally charged content that often leans toward extremism.
- Community Formation: Followers of influencers may form tightly knit online communities, reinforcing in-group/out-group dynamics and insulating members from counterarguments.
- Narrative Control: Influencers create compelling, oversimplified narratives that blame complex problems on scapegoats or conspiracies.
- Identity Fusion: Followers often develop parasocial (online ‘hero’) relationships with influencers, aligning their identity and values with them. This fosters loyalty and receptiveness to radical views.

Impacts:
- Normalisation of Misogyny: Repeated exposure to demeaning or hostile views toward women desensitises audiences and makes sexism socially acceptable in some circles.
- Real-World Harm: Online misogyny is linked to increased harassment, abusive behaviour, and in extreme cases, gender-based violence and hate crimes.
- Incel and Red Pill Subcultures: These online movements thrive on influencer rhetoric that vilifies women and idealises patriarchal dominance, often wrapped in pseudoscientific or self-help language.
Case Examples:
- Influencers promoting the “Red Pill” ideology, which asserts women are inherently deceitful and that men must reclaim power through control and emotional detachment.
- Lifestyle gurus attacking feminism and encouraging followers to pursue “high-value man” status by subordinating women.
- Content creators minimising consent or mocking female autonomy under the guise of humour or “free speech.”
Conclusion:
The intersection of influencer culture and online misogyny is a potent driver of self-perpetual radicalisation. Addressing it requires a multi-pronged approach: stronger platform accountability, early media literacy education, and the promotion of healthy, inclusive models of masculinity. Left unchecked, influencer-driven misogyny will continue to breed resentment, polarisation, and violence under the guise of entertainment or empowerment.
QR code: details of current Online Safety workshops
Social Media Influencers Are Fuelling Online Prejudice
Jonathan Taylor MSc (onlinesafety4schools@gmail.com)

In a world where scrolling has become second nature, the influence of social media personalities goes far beyond product endorsements and viral dances. These influencers shape opinions, define norms, and—worryingly—can perpetuate deeply harmful ideas. One of the most pressing concerns today is how influencers, often unknowingly or under the guise of humour, are fuelling online prejudice.
From Influence to Intolerance

Influencers—especially those on platforms like Snapchat, TikTok, Instagram, Telegram, Discord and Twitch—command immense reach. They shape how young people think about relationships, gender, and identity. Many present misogynistic, prejudiced, or extreme content wrapped in humour or irony. This makes such content appear harmless or “edgy,” when in reality it is laying the groundwork for a culture of online intolerance.
The algorithms don’t help. Designed to maximise engagement, they feed users increasingly polarised content. What starts as a casual interest can quickly spiral into dangerous echo chambers filled with sexism, hate, or misinformation.
The Real-World Impact in Schools
In classrooms and school corridors, this influence is showing its teeth. Group chats filled with sexist jokes, “rating” female students, or sharing inappropriate images have become increasingly common. What’s worse, these behaviours are often dismissed as “just banter”—until the emotional or psychological harm becomes too severe to ignore.
Victims of online harassment may suffer in silence out of fear: fear of being labelled a snitch, fear of retaliation, or simply fear of not being believed. Meanwhile, those participating often fail to realise the legal and moral weight of their actions.
It’s Not Just About Gender

Online prejudice extends to race, religion, disability, sexual orientation, and more. Under the UK Equality Act 2010 and international human rights laws, these are protected characteristics. But in the wild west of digital spaces, discrimination and hate speech are frequently unmoderated—and sometimes even encouraged.
When young people engage in or witness online discrimination, it sets a precedent. Silence can look like approval. Participation becomes normalised.
Technology Isn’t Neutral
Apps and platforms play a big role in this ecosystem. From deepfake software to anonymous messaging and virtual reality spaces, the technology often enables—and even accelerates—harmful behaviour. AI-driven face swaps, “undress” apps, and sextortion scams are just a few examples of how innovation can be misused.
Even the tools meant to protect—like reporting features—are underused or distrusted by the very people they’re meant to help.
What Can Be Done?
- Think Before You Share: Ask—is it kind? Is it respectful? Is it necessary?
- Challenge Harmful Norms: Speak up in group chats. Don’t stay silent.
- Use Reporting Tools: Most platforms have them—use them.
- Support One Another: Be the friend who says, “That’s not okay.”
- Educate Continuously: Schools, parents, and communities must work together to build digital resilience.

To use the brilliant Gareth Southgate quote, “If sexist hate starts with us—it must end with boys and men.” It’s a reminder that silence can be complicity, and action—even a small one—can make a difference.
Final Thoughts
Social media can be a tool for empowerment and connection, but only if we use it with awareness and integrity. Influencers aren’t going away, nor should they—but we must hold them, and ourselves, accountable for the messages we amplify.
The digital world reflects the real one—and both deserve our respect.
Sextortion 2000 – 2025
The vile, illegal and hurtful online crime of ‘sextortion’ has been used to bully, harass and extort many young people. It would often start with teenagers acting silly in front of a camera, sending each other flirty, sexy texts and pictures; but should the friendship or relationship break down, the same images could be used to exploit, upset and humiliate.
Definition

Sextortion, a combination of “sex” and “extortion,” refers to a form of blackmail where sexual information or images are used to extort money or other benefits from the victim. Whilst sextortion is seen as a modern crime of the digital age, it has roots that can be traced back to earlier forms of blackmail.
Sextortion has transformed from traditional forms of blackmail to a complex and pervasive issue in the digital age. The increasing sophistication of technology continues to pose challenges in combating this form of exploitation, necessitating ongoing legal, social, and technological efforts to protect potential victims and hold perpetrators accountable.
Teenagers and Young Adults: Studies have shown that teenagers and young adults are particularly vulnerable to sextortion, often due to their high engagement with social media and dating apps.
Is Sextortion New?
Before the advent of the internet, sextortion primarily involved physical letters, photographs, or coercive sexual acts. During the Cold War, intelligence agencies would use compromising photographs to blackmail individuals into providing information or conducting espionage activities. Known as “honey traps,” these tactics often involved seduction and subsequent blackmail. Hollywood in the mid-20th century saw various forms of sextortion where private photographs or information about illicit affairs were used to manipulate celebrities or extort money from them.
Emergence of Social Media and Online Games: The rise of the internet, and more importantly the use of social media and online games, in the late 20th century significantly altered the landscape of sextortion:
Email and Online Messaging: As early as the 1990s, with the widespread use of email and instant messaging, perpetrators began exploiting these platforms to threaten victims with the release of compromising photos or videos.
Webcams and Cybersex: The early 2000s saw the explosion of webcams, which facilitated a new form of sextortion where victims were coerced into performing sexual acts on camera, only to be blackmailed with recordings of those acts.
Technological Advancements
With the advancement of technology, sextortion has evolved into a more sophisticated and widespread threat:
Social Media and Dating Apps: Platforms like Facebook, Instagram, Snapchat, TikTok and various dating apps have become common venues for sextortion. Perpetrators create fake profiles to lure victims into sharing intimate images or videos.
Online Games: Fortnite, Roblox and Minecraft have also become venues for meeting victims of sextortion. Perpetrators can contact and message victims through the platforms’ messaging services, spreading viruses that infect devices and retrieve personal information.
Ransomware and Malware: Cybercriminals deploy malware to gain access to victims’ personal devices, obtaining private data and images to use for extortion.
Deep Fakes: The advent of Deep Fake technology, which allows for the creation of realistic but fake videos, has introduced new dangers in the realm of sextortion. Perpetrators can fabricate compromising videos, images and headshots and use them to extort victims.
Legal and Social Responses: In response to the rise of sextortion, legal systems and advocacy groups have taken various steps:
Legislation: Many countries have introduced specific laws to combat sextortion, recognising it as a distinct criminal offence. Penalties for perpetrators have become increasingly severe.
Awareness Campaigns: NGOs and law enforcement agencies have launched awareness campaigns to educate the public about the risks of sextortion and how to protect oneself.
Support Systems: Victim support systems, including hotlines and counselling services, have been established to help those affected by sextortion.
Sextortion Update 2024 – 2025
The US National Center for Missing and Exploited Children reported a more than 100% increase in sextortion cases from 2022 to 2023, rising from 10,731 to 26,718. While all age groups and genders are targeted, a large proportion of cases involve male victims aged between 14 and 18 (91% of victims in the UK in 2023 were male).
Sextortion is now perpetrated by Organised Crime Groups (OCGs) based overseas, predominantly in some West African countries, though some are also known to be located in Southeast Asia.
Victimology & Methodology – typically, boys are extorted for MONEY; girls are extorted for MORE IMAGES.
Perpetrator known to Victim – Spurned or rejected boyfriends who, in their bitterness or desperation, used threats to punish former lovers or coerce them back into relationships. These men stalked, harassed, and badgered their former partners in ways that could be terrifying and overwhelming. Furthermore, malicious seducers used friendship, deception and promises of romance to acquire compromising pictures from targets they met online, then used these pictures to extract more images and sex. The personal and psychological toll on victims was intense, with almost one-quarter seeking medical or mental health assistance and 12 per cent moving from their homes as a result.
Unknown Online Perpetrator – Callous sextorters, usually disguising themselves as ‘young and attractive’ women, will often follow a young person’s social media accounts or send them friend requests. They will usually try to identify with the victim through private messages, for example mentioning that they are studying at a particular university or school. The conversation can last for days before it takes a dark turn.
Sextorters will gain their trust and attempt to progress things to the next level, often sending intimate pictures of the person they think they are talking to, encouraging them to do the same. However, once they have an intimate picture or video of the victim, they will immediately use this material against them. A typical message will say: ‘Stay calm, don’t panic, I have recorded that video / saved that picture of you and I will ruin your life if you don’t follow my instructions.’ They will then send a screen grab of all the victim’s social media contacts such as friends, family and colleagues and threaten to send it to all of them unless they are paid. They will even attempt to call the victim to heighten the threat. On most occasions, the voice is that of a male and not a female.
OCGs (Organised Crime Groups) using AI (Artificial Intelligence) – Deepfakes involve videos, images, or audio recordings that look or sound completely realistic but have been altered using AI. Faces can be superimposed, expressions can be manipulated, and separate elements can be combined to produce something entirely new. These hoaxes are commonly used to show someone doing or saying something they did not do or say. Sexually explicit deepfakes are used to trick children into sending nudes or livestreaming sexual acts.
Methodology – Online games like Fortnite, Roblox and Minecraft are mostly played by young boys, which may explain the complete change in the victims of sextortion. Of course, this type of sextortion involves sending money to the sextorter, but how?
V-Bucks are an in-game currency that can be spent in Fortnite. In-game purchases that can be made with V-Bucks include customization items like New Outfits, Gliders, Pickaxes, Emotes, and Wraps, or even the latest Battle pass!
Robux is the virtual currency used in Roblox gaming, players can earn or purchase Robux to buy in-game items, accessories, and other virtual assets.
Minecoin is the digital currency for Minecraft, which can be spent on avatar ‘skins’, new ‘textures’ for blocks or entire virtual worlds created by the game developers or other members of the Minecraft community.
Users / gamers of these games are predominantly boys, and 9–16 year old boys find it necessary to have this online currency for more game enjoyment and online credibility, hence requiring a ‘Bank Card’ to facilitate payment.
Online Payments: Approximately 50% of all the online gaming students (aged 9–16) I have spoken to admit to having a parent’s payment card readily available to use and are not quizzed as to what they are buying; many simply have weekly budgets of £50. This allows extorted money to be paid. Furthermore, if the amount requested is £50, there is absolutely no reason for parents to question the payment.
Malware Viruses: OCGs use software and ‘bots’ to send thousands of notifications or requests to young gamers via Fortnite, Roblox and Minecraft. These contain malware that allows access to contact lists and personal information on the device being used (their smartphone). This gives the OCG’s bot the information needed to pressurise and extort money from victims.
Bots: Computer bots can be used for large-scale exploitation and demands (in the thousands). A percentage of victims will simply ignore the threat, but a large number will make the small financial payment. It makes sense for the OCG to make smaller demands (£50): if many less resilient and vulnerable young children make easy payments out of fear, then hundreds pay, and the financial rewards are huge.
Images – Sextortion originally relied on the existence of real images or videos. However, deepfake sexually explicit images or videos that never existed, but are created, are now being used by OCGs, drawing on personal data retrieved through viruses. There are three techniques.
In the 1st Technique, it is possible to replace the face of a naked person with that of another person, with the aim of creating a deepnude. This technique is called Face Swap.
The 2nd Technique is ‘undress’ technology. This uses artificial intelligence to predict how a depicted person would look naked based on just one photo. No additional image is used for the manipulation; the single image of the person is the source material on which the artificial intelligence operates.
The 3rd Technique no longer starts from a specific image as source material; instead, a model trained on a huge quantity of images creates a completely virtual naked person.
Conclusion
Sextortion is not new; it has been used to exploit financially or sexually, to shame, hurt, harass, or extort for many years. What has changed over time is the victimology and the methodology. As already stated, typically boys are extorted for money and girls are extorted for more images. Whilst girls have been victims of sextortion since the use of the internet and social media began, young boys are now being targeted because of technological advancements, namely Artificial Intelligence, malware and ‘bots.’
Sextortion will continue to evolve. Schools, parents and trusted adults must be aware of the different ways sextortion can occur and how their students or children can fall victim to bitter friends, online strangers, or Organised Crime Groups.
Advice and Tips
I provide workshops, advice and tips for professionals, parents, and trusted adults. If anyone would like to receive training and advice, please message or email.
onlinesafety4schools@gmail.com
Online Safety 2024-2025

‘Achieving Best Practice in Online Safety in Schools’

February 6th marked Safer Internet Day (2024 theme: ‘Inspiring change? Making a difference, managing influence, and navigating change online.’), a day on which we are all encouraged to pay particular attention to the importance of keeping children and young people safe online. This year Safer Internet Day focused on technological changes, covering:
- Young people’s perspective on new and emerging technology
I thought I would embrace ‘new technology’, use Artificial Intelligence, and ask ChatGPT how it would define Online Safety for Schools. ChatGPT defined it as follows:
However, with far more emphasis being put on Online Safety within ‘Keeping Children Safe in Education’, the new requirement for specific Online Safety education adds to the importance of receiving the most up-to-date, relevant, and appropriate Online Safety training. Whilst KCSiE 2024 will see no changes to ‘Online Safety’ within the document, this does not take away from the legal and moral expectations of all schools – hence Online Safety ‘Best Practice’.

Regardless of any school’s location around the world, students grow up with access to computers, gaming devices and smartphones, all with access to the internet. The internet became so important for remote learning during the pandemic that usage and screen time have increased. Furthermore, most students love to explore and experiment with social media, apps, games, and gaming communities. British and international schools therefore now require specific bespoke Online Safety workshops for students, staff, and parents, along with guidance on creating and maintaining policies to keep students, staff, and the school safe.
So, Online Safety for Schools is far more involved and needs far more clarification. Online Safety within safeguarding is now very much ‘standalone’ and requires specific training / workshops for the whole school: staff, students and parents. Keeping Children Safe in Education (UK) prioritises Online Safety training for all schools, and robust filtering and monitoring in schools, so schools should now consider Online Safety a statutory requirement as well as ‘Best Practice’. Online Safety workshops are of paramount importance irrespective of country or location. All children, teenagers, and young adults use devices, social media / apps / online games, which can have a massive impact on their safety, future goals, and targets (cyber vetting for employment and universities). Bespoke Online Safety training / workshops should be considered a safeguarding necessity as well as a legal priority pending any inspection by the schools inspectorate.
Achieving Best Practice in Online Safety in Schools
ChatGPT is quite correct: Online Safety is about creating a secure digital environment for students and staff, educating and enforcing measures to prevent exploitation, and promoting responsible online behaviour. But how?
Whilst the responsibility for Online Safety falls on the Designated Safeguarding Lead (DSL) or the Online Safety Coordinator (OSC), achieving best practice must involve Governors / Executives / Directors, the Senior Leadership Team, the DSL / OSC, staff, IT, parents and students: ‘The Whole School Approach.’ Creating and implementing an effective Whole School Approach to Online Safety is not easy, and requires dedication, commitment, and understanding of the current Online Safety guidelines, statutory requirements, and recommendations.

The Whole School Approach can be described as creating a culture that incorporates the principles of online safety across all facets of school life. Schools must seek to achieve best practice in delivering online safety, from workshops for students, staff, and parents to robust filtering, monitoring and reporting practices. Applying the online safety principles consistently will allow best practice to be exhibited. This includes expecting the same standards of behaviour whenever a pupil is online at school – be it in class or using their own device.
The promotion of ‘Best Practice’ in schools must include Online Safety by incorporating:
1. Online Safety Audit – an annual Online Safety audit is expected and vital; it allows schools to identify areas of strength and those that require development.
2. Whole Staff & Governor / Executive Training – Online Safety CPD training must be available for all staff and school governors / executives. Student & Parent Workshops – these must be age-relevant and updated to be of any value to students and parents.

3. Online Safety Policy – a standalone Online Safety policy must be created and made available to the whole school community.
4. Acceptable use policies, for all (staff, students, parents, visitors), must be clear, concise, robust, and enforceable
5. Robust & Integrated Reporting Routines – students and staff must have a clear reporting process; the Designated Safeguarding Lead and Online Safety Lead details must be published and accessible.
6. Monitoring & Filtering – schools now have a statutory requirement to monitor and filter students’ internet usage to prevent inappropriate access. This can be through local authorities or private companies offering monitoring software.
7. Involvement in Themed Days – schools should use Safer Internet Day, Anti-Bullying Week and Mental Health Week as ways of prioritising Online Safety and keeping the whole school community involved.
8. Online Safety Coordinator – to work in parallel with the DSL and monitor staff online safety and the implementation of the ‘Whole School Approach.’
9. Online Safety Governor / Trustee / Executive – to be appointed and to oversee the Whole School Approach to Online Safety and authorise policies.
10. Online Safety Committee – governors, students, and staff to meet and discuss the school’s Online Safety approach, from policies to the use of devices within school.
………………………………………………………………..
Like the internet, Online Safety is here to stay. Schools should not wait for the next school inspection or the next Keeping Children Safe in Education document; they should be seeking the most accurate and relevant Online Safety training to ensure they are keeping the whole school community safe.

Jonathan Taylor MSc (OnlineSafety4Schools)
www.onlinesafety4schools.co.uk
email : onlinesafety4schools@gmail.com

Online Safety 4 Schools Current Popular App List
Identify Good and Bad Apps:

http://onlinesafety4schools.co.uk/wp-content/uploads/2022/02/Apps-List.pdf

Whilst all these challenges are physical, not technological, without the use of technology (mobile phones / tablets etc.) the message could not be spread; therefore the online validation sought, the online badge of honour received, and the online motivation and justification for behaving this way would not be warranted, nor would the behaviour ‘go viral’.