A black box doesn't hold up before a CFO.
You can't ask your leadership to allocate an additional $500,000 based on a score you can't explain. Platforms that produce closed, proprietary scores leave you with no ammunition before a board that asks the right question: "Why this one, and not that one?" The AI Risk Engine answers for you. Each score is decomposable. Each recommendation is traceable. Each trade-off is defensible.
Three capabilities. One continuous, multi-framework posture.
The method, in five visible contributions.
Each pillar (Posture, CTI, EASM, TPRM) emits structured signals. The AI Risk Engine ingests them through a 5-level relationship graph: asset → service → vulnerability → exploit → actor. Correlation runs continuously, not at fixed intervals.
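The 5-level chain above can be sketched as a small data model. This is an illustrative assumption, not the platform's actual schema: every class name, field, and the `active_threat_paths` helper are hypothetical, chosen only to show what "walking the graph from asset to actor" looks like.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the 5-level relationship chain:
# asset -> service -> vulnerability -> exploit -> actor.
# Names and fields are illustrative, not the platform's real schema.

@dataclass
class Actor:
    name: str
    targets_sector: bool  # does this actor target the client's sector?

@dataclass
class Exploit:
    exploit_id: str
    actors: list[Actor] = field(default_factory=list)

@dataclass
class Vulnerability:
    cve_id: str
    cvss: float
    exploits: list[Exploit] = field(default_factory=list)

@dataclass
class Service:
    name: str
    internet_exposed: bool
    vulnerabilities: list[Vulnerability] = field(default_factory=list)

@dataclass
class Asset:
    hostname: str
    services: list[Service] = field(default_factory=list)

def active_threat_paths(asset: Asset):
    """Yield every complete asset -> service -> vuln -> exploit -> actor chain.

    A complete chain is what turns a raw CVE into a prioritized risk:
    the vulnerability is reachable AND exploited AND claimed by an actor.
    """
    for svc in asset.services:
        for vuln in svc.vulnerabilities:
            for exp in vuln.exploits:
                for actor in exp.actors:
                    yield (asset.hostname, svc.name, vuln.cve_id,
                           exp.exploit_id, actor.name)
```

Continuous correlation then amounts to re-running this walk whenever any level of the chain changes, rather than on a fixed schedule.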
Three moments. Three different uses of the engine.
Case 1
The CISO's morning (8:00 AM)
You arrive at the office. You open the Action Feed. You see 7 prioritized actions for today. First one: "Patch CVE-2025-XXXX on frontend-prod-01.acme.ca. Actor: BlackBasta targeting your sector. Estimated effort: 3 hours. Risk reduction: 12 points." You assign it to your team. You move to the second. In 15 minutes, your day is set.
Case 2
The day before the board (monthly meeting)
You ask the copilot: "Generate this month's board briefing." The copilot produces an 8-page PDF in 30 seconds: global risk score decomposed per pillar, 90-day trajectory, top 5 executed actions, top 3 upcoming, estimated avoided cost. Each figure sourced in the platform. You arrive at the board with a document you can defend line by line.
Case 3
Threat pivot (CISA alert overnight)
CISA publishes a new KEV at 2:00 AM. At 2:15, the AI Risk Engine automatically recalculates scores for affected assets. At 2:20, your Slack webhook receives the alert with the list of vulnerable assets, their exposure, their owners. At 8:00 AM when you arrive, the day's Action Feed already integrates the new priority. You didn't wait for the weekly report.
The AI Risk Engine is useless without the other 4 pillars.
That's what differentiates it from a standalone scoring engine. Prioritization quality depends directly on the quality, freshness, and correlation of the signals that feed it. Here's what each pillar brings to the engine.
CTI → AI Risk Engine
The engine receives the 30M+ signals/day, the 1,500+ tracked actors, the enriched CVEs (CVSS + EPSS + KEV). Without this signal, the AI doesn't know if a CVE is being actively exploited. It would rank an unexploited CVSS 9.8 ahead of a CVSS 6.5 listed in the CISA KEV.
EASM → AI Risk Engine
The engine receives the 100+ finding types, the OT/ICS scanner, the external exposure mapping. Without this signal, the AI doesn't know if the vulnerable asset is exposed on the Internet. It can't distinguish a theoretical risk from a risk exploitable tomorrow morning.
TPRM → AI Risk Engine
The engine receives the continuous score of each critical third party, their drift, their alerts. Without this signal, the AI only sees your direct assets. It misses the 30-60% of cyber risk that comes through your supply chain.
Posture → AI Risk Engine
The engine receives the CMM maturity of each control, alignment to 16+ frameworks, coverage per asset category. Without this signal, the AI doesn't know whether the asset is defended. It would put a critical CVE on an already well-protected asset at the top of the list instead of the bottom.
FAQs
1/ Concretely, how does the AI Risk Engine correlate the other 4 pillars?
Each pillar produces structured signals. The engine ingests them through a 5-level relationship graph: asset → exposed service → vulnerability → available exploit → active threat actor. For each signal, it computes a combined score CVSS + EPSS + KEV + actual exposure + sector targeting. Scores are decomposable: you see exactly which contribution comes from which pillar, with which weight.
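The decomposable scoring described above can be illustrated with a minimal sketch. The weights below are hypothetical placeholders (the source states real weights are client-configurable), and the normalization to [0, 1] per factor is an assumption; the point is that every factor's contribution stays visible alongside the total.

```python
# Illustrative decomposable score: each factor's weighted contribution is
# kept separately so the total can be explained factor by factor.
# These default weights are hypothetical, NOT the platform's real values.
WEIGHTS = {
    "cvss": 0.25,              # base severity
    "epss": 0.25,              # exploitation probability
    "kev": 0.20,               # known exploited (CISA KEV)
    "exposure": 0.15,          # actual Internet exposure (EASM)
    "sector_targeting": 0.15,  # active actors targeting your sector (CTI)
}

def decomposed_score(signals: dict[str, float]) -> dict[str, float]:
    """signals: each factor normalized to [0, 1]; missing factors count as 0.

    Returns each factor's contribution on a 0-100 scale, plus the total,
    so the breakdown can be exported and defended line by line.
    """
    contributions = {f: w * signals.get(f, 0.0) * 100 for f, w in WEIGHTS.items()}
    contributions["total"] = round(sum(contributions.values()), 1)
    return contributions
```

With all five factors maxed out the total is 100; drop the exposure signal and you can see exactly which 15 points disappear, which is the "which contribution comes from which pillar, with which weight" property in code form.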
2/ Is the scoring methodology proprietary?
No. All factors entering the score are published in our technical documentation. Pillar contribution weights are client-configurable. Each calculation is exportable to CSV with full decomposition. The AI algorithms used (actor clustering, TTP inference, SSVC scoring) are documented. No black box.
3/ What AI model do you use for the conversational copilot?
The conversational copilot relies on a state-of-the-art LLM (exact model disclosed under NDA in demo) for query understanding. Responses are generated through a RAG (Retrieval-Augmented Generation) on your risk graph, asset catalog, framework assessments. The LLM never sees your raw data: it accesses only the results of structured queries on your graph. Your data stays in Canada.
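The "LLM never sees your raw data" pattern described above can be sketched as follows. Both `query_risk_graph` and the prompt shape are hypothetical stand-ins, not the platform's actual API; the sketch only shows the ordering: the structured query runs first, and only its result rows are serialized into the model's context.

```python
# Sketch of the structured-queries-first RAG pattern: the LLM receives only
# the result rows of a graph query, never the underlying raw data.
# query_risk_graph and the prompt format are illustrative assumptions.

def query_risk_graph(question: str) -> list[dict]:
    # Placeholder: a real implementation would translate the question into
    # a structured query on the risk graph and return only result rows.
    return [{"asset": "frontend-prod-01", "score": 87, "top_cve": "CVE-2025-XXXX"}]

def build_prompt(question: str, rows: list[dict]) -> str:
    # Only the query results enter the model's context window.
    context = "\n".join(str(r) for r in rows)
    return (
        "Answer using only this context:\n"
        f"{context}\n\n"
        f"Question: {question}"
    )
```

The design consequence is that data residency is enforced at the query layer: whatever the LLM is, it can only ever reason over rows the graph already agreed to return.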
4/ How often is the score recalculated?
Incremental recalculation on each incoming event: newly published CVE (every 2-6 h), new EASM finding (every 24 h), TPRM drift (continuous), Posture maturity change (on user action). A global daily recalculation runs at 2:00 UTC. For critical alerts (CISA KEV addition, TPRM drift of one grade or more), recalculation is triggered immediately and notified within 15 minutes.
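The two-speed trigger logic above can be sketched as a small dispatcher. The event names and the critical/routine split are illustrative assumptions; `recalc` and `notify` are injected callbacks standing in for whatever the real scoring and alerting hooks are.

```python
# Hedged sketch of the two-speed recalculation described above:
# critical events force an immediate recalculation and notification,
# routine events wait for the next incremental pass.
# Event names and the criticality rule are illustrative assumptions.

CRITICAL_EVENTS = {"cisa_kev_added", "tprm_drift_major"}

def handle_event(event_type: str, payload: dict, recalc, notify) -> str:
    """Dispatch one incoming event.

    recalc(assets): recompute scores for the affected assets.
    notify(payload): push the alert (e.g. to a Slack webhook).
    Returns which path the event took.
    """
    if event_type in CRITICAL_EVENTS:
        recalc(payload.get("affected_assets", []))
        notify(payload)          # alert goes out right away, per the SLA above
        return "immediate"
    return "incremental"         # picked up by the next scheduled pass
```

This mirrors the overnight-KEV scenario earlier in the piece: the 2:00 AM event takes the immediate path, while an ordinary EASM finding waits for its daily cycle.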
