Podcast: Your LLM stack is a security nightmare 🤖

Lessons from Hacken’s Stephen Ajayi on blockchain and AI security, LLM threats, prompt injection, and how to build safer AI systems.

Introduction

In this episode, we sit down with Stephen Ajayi, Technical Lead for dApp & AI Audits at Hacken, to talk about the dark side of large language models. From prompt injection attacks to red teaming, Stephen breaks down why your shiny new AI tools might be wide open to exploitation. If you’re building with LLMs and not thinking about security, this one’s your wake-up call.

Timestamps

01:07 Stephen Ajayi's Background and Career Journey

05:09 Indirect Prompt Injection

08:31 Defending Against Prompt Injection

14:43 Building Resilient LLM Systems

17:50 Guardian AIs and Verifier AIs

20:27 AI Guardrails

23:46 Red Teaming for LLMs

29:59 Future of AI Security

38:13 Conclusion and Final Thoughts