Agenda for BSidesHBG 2026
May 29, 2026
9:00AM - 4:00PM
Caroline Wong
Chief Strategy Officer at Axari, Cybersecurity Canon Hall of Fame author, and longtime advocate for making security actually work for humans.
Joel Prentice
Phishing: Same Watering Holes, New Lures
For years, the industry’s answer to phishing was simple: "Enable MFA." But as defenders leveled up, so did the adversaries. Today’s sophisticated campaigns have moved beyond mere credential harvesting; they now focus on session theft. Using Adversary-in-the-Middle (AiTM) frameworks and "living off the land" in the browser, attackers are successfully bypassing traditional MFA to gain persistent access to enterprise environments.

This talk explores the evolution of the "watering hole," from the traditional compromised industry site to modern, high-trust digital environments. We will peel back the layers on how attackers use tools like Evilginx, Muraena, and Sneaky 2FA to intercept session cookies in real time. Through live demonstrations and case studies (including the infamous Uber and EA breaches), we will analyze how a single stolen cookie can render even the most complex password and SMS/Push-based MFA irrelevant.

Attendees will leave with a deep understanding of the current "Session-as-a-Service" economy and practical strategies for implementing phishing-resistant authentication, such as FIDO2 and hardware keys.
Alex Robbins
Tabletops Without Tantrums: How to Run Exercises Teams Won’t Hide From
Have you ever presented a tabletop exercise and found yourself frustrated by vague responses, blank stares, or irrelevant tangents from that one co-worker?

Maybe you outsource your tabletop exercise program but are looking to expand in-house to better address real organizational risks?

Or perhaps your organization just started conducting exercises, and you’re looking for ways to improve?

Maybe the answer is simple: you’re doing too much.

This talk, best suited for security practitioners who run or participate in response exercises, will provide ways to improve existing tabletop exercise programs without burning out their teams.

Attendees will leave with the following takeaways:
- Why you have to conduct tabletop exercises vs. why you should conduct them
- A step-by-step process for building a scenario
- Guidance for conducting tabletop exercises
- Real case studies and success stories that you can apply to your own program
Annas Mirza
From Building a Local SOC to Building a Global Security Framework - Lessons from My First 6 Months at Hansen Technologies
Last year, I shared how I built Boscov's first SOC from
the ground up. This year, I'm back with the next
chapter: scaling from a single-site operation to a global
security framework spanning four countries and multiple time
zones. In my first six months as Global Manager of SecOps at
Hansen Technologies, I've learned that building a global SOC isn't just about replicating what worked locally; it requires fundamentally rethinking incident response, team coordination, and vendor relationships across North America, South America, Vietnam, Australia, India, EU, and UK operations. This talk covers the unexpected
challenges of 24/7 global coverage, the reality of
inheriting vs. building security operations, integrating
AI-enabled incident response at enterprise scale, and the
leadership lessons that only come from managing teams you
rarely see in person. Whether you're considering a
similar transition or planning global security operations,
I'll share what worked, what failed, and what I wish
I'd known on day one.
Chris Maenner
Securing AI Workloads in Kubernetes: Lessons from Scaling Startups
Startups ship fast, often faster than their security
practices can keep up. As someone who's built and
secured platforms at growth-stage companies, I've
watched teams accumulate risk while chasing product-market
fit. Then they add AI workloads, and the attack surface
explodes.
This talk bridges two worlds: the
pragmatic security challenges of scaling startups and the
technical reality of securing AI workloads in Kubernetes.
We'll cover common failure modes (identity sprawl, over-permissioned service accounts, implicit trust between services) and how security practitioners can enable velocity instead of blocking it.

Then we'll dive into service mesh patterns for AI workloads:
- Identity-first security with mTLS and SPIFFE
- East-west traffic controls and fine-grained authorization
- Model access isolation and prompt protection
- Observability for detecting AI service abuse

All examples come from production Kubernetes environments. Attendees will leave with patterns they can implement.
12PM - 1PM
LUNCH
Provided by BSidesHBG
Kevin Thomas
Your Web Application Is Made of Worms: What the Shai-Hulud Supply Chain Worm Teaches Us About Modern Application Security
Modern applications are built from cloud services, CI/CD
pipelines, and thousands of third-party dependencies — and
in 2025, the Shai-Hulud worm exposed how attackers can
exploit that reality. In a groundbreaking supply chain
attack, malicious versions of npm packages executed
worm-like propagation through the JavaScript ecosystem,
harvesting developer and cloud credentials and creating tens
of thousands of public repositories containing leaked
secrets.
This talk uses Shai-Hulud as a
real-world case study to explore how cloud and application
security intersects with software supply chains. Rather than
focusing on malware internals, we’ll focus on practical
understanding — how dependency-based worms spread, why
common defenses like scanners and WAFs don’t help, and how
build pipelines and developer environments have quietly
become part of the application attack surface.
Martin Voelk
Agentic AI Kill Chain
AI agents integrated with enterprise messaging platforms
represent an emerging and underdefended attack surface. This
session demonstrates two real attack chains: RAG prompt
injection via poisoned documents (visible or invisible
prompt injection), and silent data exfiltration through
weaponized link unfurling in Slack or other messaging apps.
Another vector is a single malicious link posted in a shared
channel that triggers Claude to extract confidential
messages and send them to an attacker-controlled server, with zero user interaction required. Attendees will learn to
identify, test, and defend against these agentic attack
vectors before adversaries exploit them in production.
Kayla Underkoffler
Through the Defender’s Eyes: The Untold Story of Grit and Guardrails
For too long, the spotlight has been on the attacker, their
clever exploits, their relentless innovation, their starring
role in the breach. But every story has another side. This
session retells the familiar tale of real-world attacks on
AI Agents not as a triumph of offense, but as a chronicle of
defense in depth, the unseen work of those who guard the
frontier of AI agent security.
We will retrace
the same path: Reconnaissance, Initial Access, Execution,
and Exfiltration, but through the defender’s lens. At each
turn, we uncover the breadcrumbs left behind: the anomalies
that suggest intrusion, the subtle signals of manipulation,
the moments when a well-placed guardrail could have changed
the outcome of the attack entirely.
Using the MITRE ATLAS matrix as our map, we explore how defenders can
operationalize clues left by attackers, integrate them into
detection and response capabilities, and adapt defensive
playbooks to a new kind of adversary.
This isn’t
just the attacker’s narrative on repeat; it’s the defenders’
turn in the spotlight. The Blue Team’s narrative. The story
of how we build resilience, one breadcrumb and one guardrail
at a time.
Michael Sage
Growing, Advancing, and Staying Well in Cybersecurity
Cybersecurity careers are built in high-pressure, always-on
environments where constant learning, threat exposure, and
urgency can easily lead to burnout. This session focuses on
how cybersecurity professionals can grow, advance, and
remain effective without sacrificing their mental health
along the way.
Pavan Reddy
AI Security 101: A Practical Roadmap from AppSec to LLM Threat Modeling
AI security isn’t about the model itself; it’s about securing the system around the model: prompts, retrieval, tools, and the
permissions those components quietly inherit. This talk
gives attendees a practical mental model for AI attack
surfaces (prompt injection, indirect injection via untrusted
content, insecure tool use, data leakage, and excessive
agency) using a threat-model-first approach aligned with
community frameworks like the OWASP Top 10 for LLM
Applications.
What attendees will learn (key takeaways):
- A simple “authority boundary” model for LLM apps (where trust breaks in real deployments)
- How OWASP LLM risks map to common product architectures (chat, RAG, agents, tool use)
- A concrete “first 30 days / 90 days” skills-and-project roadmap into AI security
- A starter resource list: OWASP LLM Top 10, MITRE ATLAS, NIST AI RMF, plus red-teaming/testing references
This talk is aimed at beginners looking to get into AI security and at AI engineers looking to build secure systems.
Nathalie Baker
Allison DiPietro
The Governance Operating System: Transforming cATO into a Business Asset
We have all lived the compliance scramble, those weeks of
frantic evidence hunting that grind business operations to a
halt. We call this Continuous Monitoring, but it is often
just Episodic Panicking. Static risk analysis is a dinosaur
that cannot keep pace with dynamic, modern environments. It
does not track, it does not scale, and it rarely reflects
reality. This talk explores how to move past the
compliance-first, risk-second mindset and truly implement
Continuous Authority to Operate (cATO) as a process maturity
model, not a checkbox.
Framed this way, cATO becomes a governance framework, transforming monitoring from a periodic exercise into an ongoing operational state.
Attendees will walk away with a toolkit to mature their
governance feedback loops. You will learn how to transition
from viewing evidence as a burden to treating it as an
indicator of process health.
Learn how to shift to real-time resilience, align monitoring cadence with actual risk, and build a governance operating system that reflects your true security posture in an era of constant change.

Paul Brownridge
Flirting with AI: Pwning web sites through their AI chatbot agents and politely breaking guard rails
Pen testing AI: everyone is implementing AI chatbots to improve their customer experience and journey without increasing call centre costs. But this comes with risk: get the configuration wrong, and that chatbot can be convinced to part with data that it shouldn't.