AI for good: Imperial graduates explore path to sustainable tech innovation


Alumni panellists with Professor Alessandra Russo (left) and Jo Gardner (right)

How can we ensure that artificial intelligence (AI) doesn’t just disrupt the world, but improves it?

Imperial College London recently hosted an alumni panel event exploring how AI can be developed and deployed responsibly. 'Alumni Insights: Shaping a Responsible Future with AI for Good' brought together alumni leaders working across AI governance, healthcare and digital innovation.

Chaired by Professor Alessandra Russo, Head of the Department of Computing and Co-Director of Imperial’s new School for Convergence Science in Human and Artificial Intelligence, the evening was part of the university's broader Science for Humanity strategy, which aims to maximise Imperial’s potential as a force for good in the world. 

“AI will undoubtedly reshape our life, our world, our future,” Professor Russo said in her opening remarks. “It's this convergence between artificial intelligence and human intelligence that can unlock the potentials and the power of AI technology.”

Defining responsible AI

The panel agreed that responsible AI must go beyond technical performance, embedding ethical principles into every stage of development, from data collection and model design to deployment and oversight.

For Sachin Jogia (Physics 2008), Non-Executive Director at DEFRA and former CTO at Ofcom, it starts with human-centric design: “It’s really important that technology is built in such a way that it’s centred around humans and […] that it has the right type of governance, the right type of ethics included right from the outset.”

Arjun Panesar (MEng Computing with AI 2006), CEO of DDM Health, outlined three core pillars of responsible AI in healthcare: “Accountability: you want to be able to hold someone accountable if things go wrong. Transparency: we need to know what’s actually happening in these black boxes. And equity: it needs to work for everyone.”

Raj Bharat Patel (MSci Physics 2009), VP of AI Transformation at Holistic AI, framed responsible AI as a shift “from intent to engineered accountability.” He explained: “Understanding what you are trying to do versus what the risks are is a key component of responsible AI. For every metric you're trying to optimise you will de-optimise another one, so fairness and accuracy go hand in hand. You need to find the balance that is right for your company and for society.”

“It’s not a checkbox exercise. It’s not something that you do after the fact, it’s something you build into the core, the DNA of your data science pipeline.”

Potential and pitfalls 

Healthcare emerged as a key area where AI holds significant promise, offering the potential for earlier diagnoses, personalised treatments and more efficient care – if implemented responsibly.

Dr Wareed Alenaini (MRes Chemistry 2014), Founder of Twinn Health, described a future where AI could help individuals make daily health decisions based on real-time data from their own bodies: “Your fridge is going to tell you, ‘This is to make your liver better. This is to make your heart today better,’” she said. “That will be all picked up from things that are in our actual bodies, not on averages of numbers.”

Sachin envisioned smart home systems that support independent living for the elderly, while Arjun spoke about using AI to reduce health inequalities by tailoring interventions to underserved populations.

Despite many positive use cases, the panel warned that risks must not be overlooked. Without proper oversight, AI could deepen existing inequalities, especially if access to tools and data remains limited to privileged groups. “We don’t want AI to be something that deepens the inequalities that already exist,” said Arjun.

Raj also warned of a “brain drain,” as growing reliance on AI may erode human skills, particularly among younger generations.

There was also concern about the use of AI in high-stakes environments without sufficient safeguards, with examples like biased policing algorithms and flawed predictive models illustrating the risks of untested and poorly governed systems.

Trust as the cornerstone

Trust emerged as a central theme throughout the discussion. Without it, panellists agreed, AI adoption will falter.

“The currency of AI in the future [...] is going to be trust,” said Raj. “If we lose trust in this environment, then we’re dead in the water with all of our programmes.”

Arjun echoed this, citing user feedback: “People were more than happy to use [AI tools], but they wanted to know how they were built and what they were actually tasked to do.”

Balancing innovation and ethics

The discussion also tackled the tension between rapid innovation and the need for regulation. While there was recognition that strict rules could stifle progress, the panellists argued that clear, enforceable standards are essential for long-term trust and sustainability.

“[Businesses] are putting the guardrails in themselves at the moment, and it's not directed by regulation. It's directed by business incentive. We've seen this happen already once in our lifetimes with social media, where business controlled the use of this new technology, and it's taken humanity to an interesting place,” said Raj.

“AI has that same potential roadmap. If you let the market drive itself through monetary gain or through business lenses without effective oversight, without responsible AI across jurisdiction, you're going to see problems.”

They highlighted the EU AI Act as a promising step, emphasising that regulation should serve not as a barrier but as a foundation for experimentation, while also calling for greater global coordination to ensure consistent standards. 

A multidisciplinary approach

Solving the challenges of AI, the panellists agreed, requires collaboration across disciplines. Technologists must work alongside ethicists, social scientists, clinicians and policymakers to ensure that AI systems reflect a broad range of perspectives and values.

Universities like Imperial play a vital role in enabling this convergence, bringing together academic and industry experts to tackle global challenges through meaningful cross-sector partnerships.

Listen in on the event recording here.

Connecting minds: introducing Imperial’s AI Alumni Network

The event coincided with the launch of the Imperial AI Alumni Network, which aims to cultivate collaboration between alumni, industry leaders and Imperial academics in the field of artificial intelligence.

Led by a committee of alumni volunteers interested or working in artificial intelligence, the network will support professional development, lifelong learning and networking opportunities for alumni.

Panellist and network Co-Chair Sachin Jogia (Physics 2008) encourages graduates interested in expanding their AI expertise to join the network: "This is a truly unique chance to steer like-minded leaders towards an outcome that will step-change what Imperial is able to do for its entire membership - staff, alumni and students. This network will help advocate for Imperial’s mission and values, and demonstrate new opportunities to the broader alumni community, both inspiring them and rallying them to join the cause."

The new network launches alongside Imperial's School for Convergence Science in Human and Artificial Intelligence, which will bring together academics from across all departments at Imperial to share expertise, engage stakeholders and convene ambitious new projects.

University startups showcase AI innovation

The sold-out event also offered a first glimpse at some of the latest innovations emerging from Imperial. Three startups took to the stage to pitch their AI-driven ventures, each focused on tackling real-world challenges.

Aleta Index

Founded by Anna Tsiganchuk and Xiaoqin Zhao (MSc Innovation Design Engineering 2024), Aleta Index is the first bias-aware intelligence and data provider. Their mission is to enhance responsible, transparent decision-making by identifying and labelling bias in news, social media and other data sources.

“We’re imagining a future where the hidden biases of AI outputs are transparent to decision-makers,” said Anna. Their platform supports real-time misinformation risk mitigation, bias-aware machine learning and more equitable AI training. By empowering financial institutions with behaviour-analysed datasets, Aleta Index is helping to build smarter, fairer and more accountable investment systems.

DoQcheck

DoQcheck, founded by Miguel Castillo (MSc Physics 2018), is transforming food manufacturing with AI-powered operational precision. Their platform reduces costly errors, cuts food waste and improves sustainability by streamlining how factory workers access and act on complex documentation.

“Imagine an operator at 2am needing to solve a problem. Our AI retrieves the right information instantly,” said Miguel. DoQcheck also promotes inclusivity by removing language barriers for non-native speakers, making critical knowledge accessible in multilingual environments, resulting in safer, more efficient production lines.

OptIX

OptIX, founded by Austin Mroz (Imperial Eric and Wendy Schmidt AI in Science Postdoctoral Fellow), is using AI to accelerate the discovery of new materials and chemicals. Their platform combines interpretable AI with intuitive interfaces to guide scientists through complex lab decisions, reducing the number of experiments needed by up to 80%.

“We’re revealing the scientific rules that govern performance and making them accessible to researchers,” said Austin. With potential applications in climate tech, energy storage and drug development, OptIX aims to unlock the complex chemistry behind new materials.

Reporter

Tina Schmechel
Advancement

Contact details

Tel: +44 (0)20 7594 0979
Email: [email protected]

Tags:

Events, Artificial-intelligence, Alumni, Strategy-alumni, Healthcare