CASTOR’s key objective is to identify the trustworthy path in the constantly shifting Computing Continuum. In a previous article, we established that a secure route is no longer defined by network performance alone, but by a demonstrably trustworthy path from start to finish. However, achieving this is nearly impossible with traditional security models that rely on static properties. As computational workloads spread across the Computing Continuum – where drones, sensors, and edge devices are constantly shifting – the limitations of existing protocols such as the IETF’s Trusted Path Routing (TPR) have become a significant liability.
The fundamental problem with current industry standards is their reliance on binary, boot-time trust. In a typical IETF TPR model, the security claims revolve around static properties that describe the trustworthiness of a network element during its initial enrolment or boot phase. If the network device passes remote attestation with respect to these static properties, it is issued a “trust passport” and admitted into the domain. Using this passport, the network element can exchange trustworthiness evidence with its adjacencies, allowing the formation of “Trusted Topologies”. Although the system model discusses the concept of maintaining trust, it does not address the challenge of continuously evaluating the runtime behaviour of network devices. This creates a dangerous blind spot: if a router is misconfigured at runtime, compromised by a zero-day vulnerability, or begins exhibiting suspicious behavioural anomalies ten minutes after booting, the static model has no mechanism to detect or respond to that change in real time.
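To make the blind spot concrete, here is a minimal sketch of the static, boot-time-only model. It assumes a hypothetical “trust passport” object; the class and field names are illustrative and not part of any IETF specification.

```python
import time

class StaticTrustPassport:
    """Illustrative boot-time trust: attestation happens once, at enrolment."""

    def __init__(self, device_id: str, attested_ok: bool):
        self.device_id = device_id
        self.issued = attested_ok      # result of one-time remote attestation
        self.issued_at = time.time()   # never re-evaluated afterwards

    def is_trusted(self) -> bool:
        # The blind spot: the answer never changes after boot,
        # no matter how the device behaves at runtime.
        return self.issued

router = StaticTrustPassport("edge-router-7", attested_ok=True)
assert router.is_trusted()
# ...ten minutes later the router is compromised by a zero-day,
# yet the passport still vouches for it:
assert router.is_trusted()
```

Nothing in this model ever flips `is_trusted()` back to `False`, which is precisely the gap CASTOR targets.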
CASTOR’s core innovation is the elimination of this blind spot by moving from binary, point-in-time evaluations to a continuous, runtime, and quantifiable trust assessment. Rather than treating trust as a static “yes or no” label, CASTOR introduces a dual-layer Trust Assessment Framework (TAF) that never stops verifying. This system differentiates between the Actual Trust Level (ATL) – what a network device is proving through its current behaviour – and the Required Trust Level (RTL) – the threshold a device must meet to carry highly sensitive workload traffic. By monitoring granular behavioural metrics and configuration integrity every second the device is operational, the framework can detect trust degradation instantly.
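The ATL/RTL comparison can be sketched in a few lines. This is a simplified illustration under assumed inputs – the metric names, the 0.0–1.0 scale, and the equal weighting are hypothetical, not CASTOR’s actual scoring model.

```python
from dataclasses import dataclass

@dataclass
class TrustAssessment:
    config_integrity: float   # 1.0 = configuration matches attested baseline
    behavioural_score: float  # 1.0 = no anomalies in the current window

    def actual_trust_level(self) -> float:
        # ATL: what the device is proving right now.
        # Equal weighting is illustrative only.
        return 0.5 * self.config_integrity + 0.5 * self.behavioural_score

def admits_workload(atl: float, rtl: float) -> bool:
    """A path element may carry a workload only while ATL >= RTL."""
    return atl >= rtl

# A healthy router comfortably exceeds a sensitive workload's RTL...
healthy = TrustAssessment(config_integrity=1.0, behavioural_score=0.95)
assert admits_workload(healthy.actual_trust_level(), rtl=0.9)

# ...but a runtime misconfiguration drags its ATL below that threshold.
drifted = TrustAssessment(config_integrity=0.4, behavioural_score=0.95)
assert not admits_workload(drifted.actual_trust_level(), rtl=0.9)
```

Because the assessment is recomputed continuously, the second case is caught the moment the configuration drifts, not at the next boot.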
This shift to dynamic assessment redefines network resilience. In the CASTOR architecture, a drop in a router’s trust score is treated with the same priority as a physical link failure. If a node deemed “trustworthy” at 9:00 AM becomes “untrustworthy” at 9:05 AM due to a detected anomaly, the system automatically triggers a reroute, steering mission-critical traffic away from the compromised node before data integrity is lost.
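The “trust drop as link failure” idea can be sketched as path selection that simply refuses hops whose ATL has fallen below the RTL. The topology, node names, and scores below are invented for illustration; CASTOR’s real routing integration is considerably richer than this breadth-first search.

```python
from collections import deque

# Hypothetical three-hop topology with two candidate routers, r1 and r2.
topology = {
    "src": ["r1", "r2"],
    "r1": ["dst"],
    "r2": ["dst"],
    "dst": [],
}
atl = {"src": 1.0, "r1": 0.95, "r2": 0.97, "dst": 1.0}

def trusted_path(topology, atl, src, dst, rtl):
    """BFS for a path in which every hop satisfies ATL >= RTL."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == dst:
            return path
        for nxt in topology[node]:
            if nxt not in seen and atl[nxt] >= rtl:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no sufficiently trustworthy path exists

# 9:00 AM: r1 is trustworthy, so traffic flows via r1.
print(trusted_path(topology, atl, "src", "dst", rtl=0.9))

# 9:05 AM: an anomaly degrades r1's ATL; the same query now reroutes via r2,
# exactly as it would after a physical failure of the r1 link.
atl["r1"] = 0.4
print(trusted_path(topology, atl, "src", "dst", rtl=0.9))
```

The key design point is that no separate failure-handling code path is needed: a trust degradation and a dead link both disappear from the candidate set.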
By closing these trust blind spots, CASTOR provides the necessary foundation for the next generation of collaborative, cognitive computing. It transforms network security from a static perimeter check into a living, breathing process that mirrors the high-stakes nature of the services it carries. By treating trust as a dynamic metric that evolves with fresh evidence, CASTOR ensures that the “Computing Continuum” remains a secure environment for even the most sensitive autonomous operations.
In an era where a single compromised router could undermine the end-to-end service continuity of critical workloads – be it an entire fleet of emergency vehicles or autonomous drones – the move to continuous, quantifiable verification isn’t just an upgrade; it is a mandatory evolution for a secure digital future.