Resilient X Design

November 2025

When Machine Learning Models Stop Seeing Clearly
Drift Happens!
Nov 11 • Brain Aboze

October 2025

Concepts of Design Assurance for Neural Networks (CoDANN)
Building Trust in AI for Flight Systems
Oct 21 • Brain Aboze
AI Engineer Paris 2025 Recap
Through the Lens of Trustworthy AI
Oct 1 • Brain Aboze

September 2025

VNN-COMP: Benchmarking the Verification of Neural Networks
A competition that's building trust in AI
Sep 16 • Brain Aboze

August 2025

Deploying Machine Learning Models
From Validation to Production
Aug 12 • Brain Aboze
Formal Verification of ML
Towards Provable Guarantees
Aug 1 • Brain Aboze

July 2025

ML Benchmarking Primer
Measure, Compare, and Improve Your ML Systems
Jul 25 • Brain Aboze
Validation Begins with Test Design
Why Your Test Set Is More Than Just a Data Split
Jul 17 • Brain Aboze
Safety at Scale: High-Reliability ML Round-up, Jan–Jun 2025
A round-up of key developments in AI regulation, aviation, and finance
Jul 14 • Brain Aboze
ML Testing Refresher
Why Testing ML Feels Like Geology, Not Geometry
Jul 3 • Brain Aboze