Hi, we’re Illuminate tech

We exist to help foster a safer, more trusted internet by making sure tech does what it says on the tin. The idea is simple: conduct research into the novel technologies shaping our lives online, and draw on this expertise to advise organisations on how to keep their users safe effectively and responsibly. All the while, we'll endeavour to work in language that non-expert audiences can understand. Read on to learn what drives us.

The stakes are high 

The first generation of children to grow up with the internet has had access to social connection, educational resources, and entertainment on a scale never before seen in human history. But at times, this has come at a high price. From early exposure to inappropriate content to new avenues for contact-based harm, the risks of life online are now common knowledge. And it's not just children who are at risk. As we gear up for a year in which over 1.5 billion people will head to the polls, the potential for misinformation (unintentional) and disinformation (intentional) supercharged by AI underlines the important role digital regulation will play in maintaining trust in the democratic process.

From playing catch-up to setting standards 

The recent news that Omegle has shut down for good feels like the end of an era in which online services could operate with little regard for the safety of their users. Around the world, actors across the public sector, civil society, industry and academia are working hard to ensure the internet of the future works better for citizens.

Legislation is playing a key role. The UK's digital regulation landscape is getting busier, with the Online Safety Act followed by other key pieces of legislation that will influence the way online services operate, including the Digital Markets, Competition and Consumers Bill and the Data Protection and Digital Information Bill.

Across the Channel, the EU's Digital Services Act is set to come into force in February 2024, and Australia, Singapore, and Ireland have all taken steps to protect citizens online. Add to the mix important legislation safeguarding user privacy, alongside the recent impetus to develop regulation fit for AI, and you have an increasingly complex regulatory landscape.

“Safety tech”: silver bullet or false friend? 

A number of new technologies ("safety tech") have emerged in response to this new regulatory landscape. For example, as policy interventions designed to protect children online require services to understand at least something about the age of their users, age assurance technologies have proliferated. These unlock the ability to provide users with age-appropriate experiences.

But the infrastructure surrounding these technologies is still developing, and questions have been raised about their impact on privacy. How effective are they? What new risks might they introduce, and how can these be mitigated? How can we meaningfully compare different age assurance technologies across a range of criteria – from technical metrics, to impact on privacy, to user experience?

Important efforts are being made to give citizens confidence that children can be empowered to live safer lives online while the fundamental rights of adults are protected. International standards are being developed, regulators are publishing policy documents, and industry is stepping up too (see the Digital Trust & Safety Partnership's Guiding Principles and Best Practices). But building trust among citizens that safety tech can achieve its intended purpose is an ongoing task, one that requires further research and analysis of the impact of these technologies in real-world deployment scenarios.

In walks AI 

These questions apply to any novel technology deployed online. AI is already being used to improve user safety – from its incorporation into age assurance technologies powered by computer vision, to automated content moderation systems that leverage large language models (LLMs). The hype around AI is inescapable, and it undoubtedly holds great potential to boost productivity. But it often operates as a "black box", in which even leading engineers do not fully understand how decisions are made. Can we really trust the outputs of AI systems? Do they perpetuate biases that we should be working collectively to break down? And what policy interventions might mitigate these risks?

Bridging the gap 

These are difficult questions, and no single organisation or individual has the answers. But we want to help. Our ambition is to build an organisation that cultivates deep expertise at the intersection of online safety tech, AI, and digital regulation. And we won't stop there: we want our expertise to have real-world impact, working with organisations to build policies and frameworks that ensure new technologies are deployed in a way that is responsible, effective, and rights-respecting.

What does this look like in practice? Our work will range from deep-dive research into one aspect of a novel technology, to creating bespoke audit and evaluation frameworks that analyse the impact of novel technologies in real-world scenarios. Throughout, we'll keep our expertise explainable, breaking down the barriers that often exist between technical experts and policy professionals.

We hope to contribute towards a rich ecosystem of organisations from the private sector, the public sector, civil society, and academia, working together to build a safer, more trusted internet. 

First steps 

We are just getting started. If you’re interested in finding out more about illuminate tech, exploring avenues for collaboration, or joining us in our mission to foster a safer, more trusted internet, please get in touch today.
