Thales
General Presentation
Thales aims to lead in AI-dependent critical systems, with an emphasis on international collaboration, particularly on formal methods for trustworthy AI and autonomy and their implications for safety regulations. The verification techniques anticipated from RobustifAI will bolster Thales' GenAI developments, speeding their integration into engineering processes and enhancing products such as satellites, UAVs, UGVs, and UUVs; this strengthens cybersecurity, reduces analysts' cognitive overload, and extends coverage to critical assets that are currently unmonitored. Moreover, once validated, the GenAI security engine from one of the project's use cases will be industrialized to support Thales' operations, benefiting 2,500 security professionals, helping Thales maintain a competitive edge, and contributing to a secure European AI supply chain.
Role in RobustifAI
Thales aims to improve retrieval robustness through multi-source aggregation, combining a meta-search engine with RAG-fusion techniques and using an LLM-as-a-judge for post-validation. To ensure LLM robustness, standard operating procedures will allow an LLM-augmented agent to answer user queries by executing trusted procedures whenever they apply. Within RobustifAI, Thales implements functionalities such as threat-information synthesis, intrusion-detection rule generation, incident-analysis reporting, and response-plan generation. Thales also specifies how LLMs can enrich adversarial scenarios derived from STPA risk analyses, focusing on human-centric robotics such as autonomous vehicles and service robots.
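The multi-source aggregation step can be illustrated with reciprocal rank fusion, the scoring scheme at the core of RAG-fusion: ranked result lists from several retrievers are merged by summing 1/(k + rank) per document, so documents ranked highly by multiple sources rise to the top. The sketch below is a minimal illustration under that assumption; the retriever names and document identifiers are hypothetical, not the project's actual implementation.

```python
from collections import defaultdict

def reciprocal_rank_fusion(result_lists, k=60):
    """Fuse ranked result lists from multiple retrievers.

    Each list is ordered best-first. A document's fused score is the
    sum over sources of 1 / (k + rank); k=60 is the commonly used
    smoothing constant from the original RRF formulation.
    """
    scores = defaultdict(float)
    for results in result_lists:
        for rank, doc_id in enumerate(results, start=1):
            scores[doc_id] += 1.0 / (k + rank)
    # Highest fused score first
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical ranked outputs from three retrieval sources
web_search = ["d1", "d3", "d2"]
vector_store = ["d3", "d1", "d4"]
keyword_index = ["d3", "d2", "d5"]

fused = reciprocal_rank_fusion([web_search, vector_store, keyword_index])
# "d3" ranks first: it appears near the top of all three lists
```

In a full pipeline, the fused list would then be passed to the LLM-as-a-judge stage for post-validation before generation.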
Core expertise for RobustifAI implementation
- Formal Methods and Model-Based System Theoretic Process Analysis (STPA)
- Machine Learning (ML) Development Assurance Standards
- Operational Design Domain (ODD) and Scenario-Based Verification
- Safety-Critical Systems and Topological Data Analysis
- AI and Cybersecurity