Why Online Safety Matters Now More Than Ever

The online environment is no longer a separate part of our everyday lives. Whether we are working, learning, or socialising, the lines between “real” and “online” life are often blurred.

According to Ofcom’s Online Nation 2024 report, the average UK adult spends 4 hours and 20 minutes online each day, spread across smartphones, tablets, and computers. Young adults (18–24) spend even more: an average of 6 hours and 1 minute daily.

With so much of our lives taking place in digital spaces, we all share a responsibility to protect, educate, and safeguard those navigating the online world. The experiences people have online can affect their mental health, self-esteem, safety, and development just as much as in-person interactions.

The Online Safety Act, passed into law in 2023 and implemented in phases through 2024 and 2025, represents the UK government’s commitment to addressing this need. But what does it mean for us, as educators, as a tech-first organisation, and as people who care about wellbeing?

The Rise of Digital Harm

Online safety is not a new conversation, but it has become more urgent and complex.

Ofcom research from 2023 found that “almost three-quarters of teenagers between 13 and 17 have encountered one or more potential harms online, and three in five secondary school-aged children have been contacted online in a way that potentially made them feel uncomfortable.”

Tragic cases of online harm have brought national attention to the dangers that exist in the digital space, pushing campaigners and families to call for greater accountability from tech companies. These calls were answered with the introduction of the Online Safety Act – an ambitious but essential step forward.

What is the Online Safety Act?

First and foremost, the Act places a legal duty on online services to prevent and remove illegal and harmful content. However, it goes further than that – it challenges platforms to rethink the way they design their systems, requiring them to proactively reduce the risk of harm rather than simply react when something goes wrong. That includes:

  • Preventing criminal content (such as child sexual abuse material or terrorist content) from appearing in the first place
  • Tackling both illegal activity and content that is legal but harmful to children, such as self-harm promotion or cyberbullying
  • Making user experiences safer by design, particularly for children and vulnerable users
  • Providing age-appropriate controls, content filters, and clear reporting mechanisms for users

Importantly, the Act acknowledges that harm is subjective; what feels harmful to one person might not to another, and this nuance makes the role of education, ethics, and safeguarding even more vital.

Our Approach to Safeguarding: Foresight, Not Hindsight

At Baltic, we’re strong advocates for the power of technology and the positive impact it can have. We don’t believe digital spaces are inherently dangerous, but we do recognise the responsibility that comes with building and supporting them.

We partner with employers across the country to deliver high-quality apprenticeship training, combining our fully online learning model with on-site, real-world experience in the apprentice’s place of work. This dual setting means our safeguarding responsibilities span both physical and digital environments.

While apprentices are physically located within their employer’s workplace, all their learning and coaching support is delivered remotely. This makes it essential for robust digital safeguarding to be embedded not only in our systems but in our culture and day-to-day practice.

Safeguarding in online environments is integral to how we work, teach, and connect. From filter systems and secure platforms to learner education and staff training, we are continuously developing our approach to meet the evolving needs of online safety.

How We Keep Learners Safe

To ensure our learners are protected from online harm, we uphold a strong and proactive safeguarding framework that promotes safe digital behaviour, prevents exposure to inappropriate content, and supports early intervention when concerns arise.

This includes a combination of technical safeguards, learner education, and ongoing staff training to respond to the evolving nature of online risks.

Ongoing Staff Safeguarding Training

Our Safeguarding Team is continually developing their knowledge through regular training, external updates, and sector insights. They stay informed about emerging trends and risks in online harm, and just as importantly, they proactively educate the wider Baltic team.

Smoothwall Monitoring

Our internal systems are protected by Smoothwall filtering technology. This monitors and flags potentially harmful content, ensuring that both learners and staff are protected during their time online with us.

Personal & Social Responsibility (PSR) Modules

Our PSR modules include training on online behaviour, digital etiquette, and how to identify and report harmful content. It’s about equipping learners with the tools to navigate the online world confidently and safely.

The Baltic Community

Our learners have access to the Baltic Community – a digital platform designed to promote peer-to-peer support, collaboration, and knowledge sharing. It’s a moderated space that operates under a clear code of conduct to ensure respectful, safe, and inclusive interactions.

Useful Resources

If you want to learn more about online safety or report harmful content, please explore the resources below.

If you’re a learner, employer, or educator and have questions about online safety, the support we offer learners, or our safeguarding approach, don’t hesitate to get in touch with our Safeguarding Team by telephone on 01325 638142 or by email at safeguarding@balticapprenticeships.com 😊