AI in Financial Services
Cybersecurity and AI Risk in Mid-Sized Banks: Why Operating Discipline Matters Now
March 18, 2026 · 7 min read
The cybersecurity and AI risk conversations in mid-sized banks are converging. Both depend on the same operating discipline: clear ownership, evidenced controls, and honest documentation of what is actually deployed.
Adapted from a piece originally published on LinkedIn.
Mid-sized banks are running two parallel risk conversations that are increasingly the same conversation. Cybersecurity teams are tightening identity, access, and detection. Risk and compliance teams are scoping how AI tools enter the institution. Both depend on the same underlying capability: knowing what is deployed, who owns it, what controls govern it, and how those controls are evidenced.
The convergence in plain terms
An AI tool that touches customer data is a third-party technology dependency with elevated risk. The cyber team needs visibility into it. The risk team needs governance around it. The compliance team needs policy alignment. Without operating discipline, each function builds its own incomplete view.
Where governance gaps become exam findings
Examiners are starting to ask AI-specific questions: model inventory, use case approval, vendor due diligence on AI features, monitoring of model output. Institutions with a clean, current cyber control environment have a foundation to extend. Institutions without one are building the cyber foundation and the AI governance layer at the same time.
Operating discipline as the common substrate
The mid-sized banks moving fastest on AI risk are the ones that already had disciplined cybersecurity operations. Same control taxonomy. Same ownership model. Same evidence cadence. The work is incremental, not net new.
Where to start
Start with inventory. Know what AI is actually in the environment, sanctioned and unsanctioned. Then map ownership. Then define the control expectations. Then evidence them. The order matters.
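The sequence above can be sketched as a simple data model. This is an illustrative sketch, not a prescribed schema: the record fields (owner, controls, evidence) and control IDs are assumptions chosen to show how inventory-first ordering surfaces gaps.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class AIInventoryEntry:
    """One AI tool or model in the environment. Illustrative fields only."""
    name: str                     # tool or model name
    sanctioned: bool              # went through use case approval?
    owner: Optional[str] = None   # accountable business owner, if assigned
    controls: list = field(default_factory=list)   # mapped control IDs
    evidence: dict = field(default_factory=dict)   # control ID -> last evidence date

def governance_gaps(inventory):
    """Flag entries that break the ordering: deployed but unowned,
    owned but uncontrolled, or controlled but unevidenced."""
    gaps = []
    for e in inventory:
        if e.owner is None:
            gaps.append((e.name, "no owner"))
        elif not e.controls:
            gaps.append((e.name, "no controls mapped"))
        else:
            missing = [c for c in e.controls if c not in e.evidence]
            if missing:
                gaps.append((e.name, f"unevidenced controls: {missing}"))
    return gaps

inventory = [
    AIInventoryEntry("vendor-chat-assist", sanctioned=False),
    AIInventoryEntry("fraud-score-v2", sanctioned=True, owner="Fraud Ops",
                     controls=["AC-2", "SI-4"], evidence={"AC-2": "2026-02-28"}),
]
print(governance_gaps(inventory))
# [('vendor-chat-assist', 'no owner'), ('fraud-score-v2', "unevidenced controls: ['SI-4']")]
```

The check encodes why the order matters: an ownership gap is reported before a control gap, and a control gap before an evidence gap, because each later step is meaningless without the one before it.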