Enhancing Space Launcher Safety with Telemetry Anomaly Detection
(in collaboration with the European Space Agency)
Anomaly detection is critical in space applications, and in launcher systems in particular. Space missions are high-stakes endeavors where even minor anomalies can lead to mission failure, causing significant financial losses and scientific setbacks; anomalies within launcher systems also pose substantial safety hazards, potentially leading to catastrophic events. Within the launcher domain, telemetry data is a cornerstone, providing real-time insight into the vehicle's operational status. The Avio telemetry avionics, designed to capture this crucial data, transmit it to ground stations by monitoring the communication bus or by acquiring I/O signals. This setup ensures comprehensive data retrieval, covering both the software and the hardware elements of the vehicle's systems. Anomalies in this data stream can indicate potential issues ranging from equipment malfunctions to trajectory deviations or unforeseen complications; identifying them in a timely manner allows for proactive intervention, mitigating the risk of mission failure or catastrophic incidents.

Integrating Artificial Intelligence (AI) into anomaly detection is a promising avenue for addressing these challenges. AI algorithms can analyze vast quantities of telemetry data in real time, identifying patterns and anomalies that may elude human operators. This capability not only enhances the safety and success rate of space missions but also propels advancements in space exploration technology.

This proposal outlines a two-part project to detect launcher telemetry anomalies using AI. The first part involves identifying the most suitable AI model and then designing and developing an AI-based engine for anomaly detection. The second part focuses on the hardware implementation of this engine on an embedded architecture, e.g., a RISC-V platform.
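As an illustration of what such an AI-based engine might look like, the sketch below trains a small autoencoder on nominal telemetry windows and flags windows whose reconstruction error exceeds a threshold calibrated on nominal data. The channel count, window length, network size and 3-sigma threshold are illustrative assumptions, not Avio or ESA specifics.

```python
# Minimal sketch of a reconstruction-based telemetry anomaly detector:
# an autoencoder trained on nominal telemetry windows only.
import torch
import torch.nn as nn

N_CHANNELS, WINDOW = 8, 64           # hypothetical telemetry channels / samples per window
FLAT = N_CHANNELS * WINDOW

class TelemetryAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(FLAT, 128), nn.ReLU(), nn.Linear(128, 16))
        self.dec = nn.Sequential(nn.Linear(16, 128), nn.ReLU(), nn.Linear(128, FLAT))

    def forward(self, x):             # x: (batch, FLAT)
        return self.dec(self.enc(x))

def train(model, nominal, epochs=20):
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(nominal), nominal)
        loss.backward()
        opt.step()
    return model

def anomaly_scores(model, windows):
    with torch.no_grad():
        return (model(windows) - windows).pow(2).mean(dim=1)   # one score per window

if __name__ == "__main__":
    nominal = torch.randn(512, FLAT)              # placeholder for nominal flight telemetry
    model = train(TelemetryAE(), nominal)
    scores = anomaly_scores(model, nominal)
    threshold = scores.mean() + 3 * scores.std()  # simple 3-sigma rule on nominal data
    test = torch.randn(4, FLAT) * 5               # placeholder "anomalous" windows
    print(anomaly_scores(model, test) > threshold)
```

A reconstruction-based detector of this kind is attractive for launcher telemetry because it can be trained on nominal flight data alone, without requiring labeled anomalies.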
Deep Learning-based attack detection in RISC-V microprocessors
Modern microprocessors have advanced features such as cache hierarchies, acceleration units, and out-of-order and speculative execution. On the one hand, these features dramatically increase system performance; on the other hand, they expose the system to a new class of threats: the so-called Microarchitectural Side-Channel Attacks (MSCAs), such as Spectre and Meltdown. Protecting a system from these attacks is extremely challenging, and it becomes even harder in the embedded scenario, where Operating System support and multiple cores may be unavailable.
This thesis aims at exploring the feasibility of combining hardware performance counter (HPC) monitoring with deep learning (DL)-based anomaly detection (e.g., Recurrent Neural Networks) to identify the execution of MSCAs on embedded microprocessors. The basic idea is to add a Security Checking module between the microprocessor and the main memory to observe the fetching activity and the HPCs. The introduced checker shall neither interfere with the nominal activity of the microprocessor nor require any modification of the microprocessor itself. Existing RISC-V microprocessors could be considered as the target hardware platform.
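As a rough sketch of the detection side of this idea, the snippet below trains a small recurrent network (a GRU) to classify short sequences of HPC samples as nominal or attack-like. The counter set, sequence length, labels and training data are illustrative placeholders; in the envisioned design the trained model would run inside the Security Checking module rather than in Python.

```python
# Minimal sketch of HPC-based MSCA detection: a recurrent network classifies
# short sequences of hardware-performance-counter samples.
import torch
import torch.nn as nn

N_COUNTERS, SEQ_LEN = 6, 50   # e.g., cache misses, branch mispredictions, ... per time step

class HPCDetector(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.rnn = nn.GRU(N_COUNTERS, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)          # 0 = nominal, 1 = suspected MSCA

    def forward(self, x):                         # x: (batch, SEQ_LEN, N_COUNTERS)
        _, h = self.rnn(x)
        return self.head(h[-1])

if __name__ == "__main__":
    model = HPCDetector()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    x = torch.rand(64, SEQ_LEN, N_COUNTERS)       # placeholder HPC traces
    y = torch.randint(0, 2, (64,))                # placeholder labels (nominal / attack)
    for _ in range(5):
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
    print(model(x[:2]).argmax(dim=1))
```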
Designing reliable DL applications against hardware faults
Hardware faults, either permanent or transient, affect the hardware platform running an application (here, Deep Learning applications are the case study) so that its output differs from the expected, fault-free one. The effects of a fault depend on i) the hardware architecture, ii) the kind of fault, and iii) the running application. The specific nature of DL applications makes it possible to leverage their inherent error tolerance to tailor both analysis and hardening solutions. The goal of this thesis is to develop methods and tools to make the overall system resilient to hardware faults; the techniques may work at either the software or the hardware level and may target different hardware platforms.
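One possible software-level hardening direction, sketched below under the assumption of a simple feed-forward PyTorch model, is activation range restriction: each ReLU output is clipped to a bound profiled on fault-free runs, so that fault-corrupted values far outside the nominal range cannot propagate through the network. The layer sizes, calibration data and clipping rule are illustrative, not the thesis's prescribed method.

```python
# Minimal sketch of activation range restriction as a software-level hardening.
import torch
import torch.nn as nn

class RangeRestrictedReLU(nn.Module):
    """ReLU clipped to an upper bound profiled on fault-free inference."""
    def __init__(self, upper):
        super().__init__()
        self.upper = upper

    def forward(self, x):
        return torch.clamp(torch.relu(x), max=self.upper)

def profile_bounds(model, data):
    """Record the maximum post-ReLU activation of each layer on fault-free data."""
    bounds, x = [], data
    with torch.no_grad():
        for layer in model:
            x = layer(x)
            if isinstance(layer, nn.ReLU):
                bounds.append(x.max().item())
    return bounds

def harden(model, bounds):
    """Swap each ReLU for its range-restricted counterpart."""
    it = iter(bounds)
    layers = [RangeRestrictedReLU(next(it)) if isinstance(l, nn.ReLU) else l for l in model]
    return nn.Sequential(*layers)

if __name__ == "__main__":
    model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 10))
    data = torch.randn(256, 16)                   # placeholder calibration inputs
    hardened = harden(model, profile_bounds(model, data))
    print(hardened)
```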
Analysis of the Effects of Single Event Upset (SEU) Faults in Deep Neural Networks Accelerated on RISC-V Cores
The push towards adopting Deep Learning-based computations in safety-/mission-critical applications motivates methods for assessing the robustness of an application not only with respect to its training/tuning, but also with respect to errors caused by faults, in particular soft errors, affecting the underlying hardware. The RISC-V open-source Instruction Set Architecture is nowadays gaining more and more interest thanks to its openness, extendability and flexibility; indeed, there is great interest in employing RISC-V accelerators to run Deep Learning applications in embedded systems. The thesis will focus on analysing how SEU faults occurring in the hardware architecture of the RISC-V microprocessor affect the accuracy of the Deep Learning application under execution.
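A typical way to carry out such an analysis at the application level is a statistical fault-injection campaign; the sketch below flips one random bit of a randomly chosen float32 weight and measures the resulting accuracy drop. The model, test data and number of injections are placeholders, and a real campaign would inject faults into a model of the RISC-V accelerator's registers and memories rather than into PyTorch tensors.

```python
# Minimal sketch of a statistical SEU fault-injection campaign on DNN weights.
import random
import struct
import torch
import torch.nn as nn

def flip_bit(value, bit):
    """Flip one bit of a float32 value via its IEEE-754 representation."""
    (as_int,) = struct.unpack("I", struct.pack("f", value))
    (flipped,) = struct.unpack("f", struct.pack("I", as_int ^ (1 << bit)))
    return flipped

def inject_seu(model):
    """Flip a random bit of a random weight; return (tensor, index, old value) to undo."""
    p = random.choice(list(model.parameters()))
    idx = tuple(random.randrange(s) for s in p.shape)
    old = p.data[idx].item()
    p.data[idx] = flip_bit(old, random.randrange(32))
    return p, idx, old

def accuracy(model, x, y):
    with torch.no_grad():
        return (model(x).argmax(dim=1) == y).float().mean().item()

if __name__ == "__main__":
    model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 10))
    x, y = torch.randn(512, 16), torch.randint(0, 10, (512,))   # placeholder test set
    baseline = accuracy(model, x, y)
    drops = []
    for _ in range(100):                      # 100 independent injections
        p, idx, old = inject_seu(model)
        drops.append(baseline - accuracy(model, x, y))
        p.data[idx] = old                     # restore the fault-free weight
    print(f"baseline={baseline:.3f}, mean accuracy drop={sum(drops)/len(drops):.3f}")
```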
Designing Secure RISC-V Microprocessors
(in collaboration with the European Space Agency)
Modern integrated circuits are produced following a distributed design flow where modules designed in-house are integrated with modules coming from third-party entities, either in the form of Third-Party IP cores (3PIPs) or of Commercial Off-the-Shelf (COTS) components; moreover, the final fabrication of the silicon device relies on outsourced foundries. While ensuring high performance and reduced cost, such a globalized design process exposes the resulting system to several security threats, both at design time and at runtime: integrated circuits (ICs) may be overproduced by the foundry and sold on the black market, defective or dismissed ICs may be delivered as good ones, IP core licenses may be violated and IP cores overused, and designs may be maliciously modified to insert stealthy unwanted functionalities into the final product, the so-called Hardware Trojan Horses (HTHs).

We envision integrating Intelligent Security Checkers (ISCs), based on embedded machine learning and probabilistic data structures, within a SoC where several microprocessors and HW accelerators interact. The goal of such ISCs is to monitor the activity carried out at runtime by the components of the SoC in order to prevent the activation of HTHs and to limit their impact once they have been activated. The overall goal of this thesis is to enable trusted execution on a system composed of both trusted and untrusted components. As a beneficial side effect, such ISCs would also allow detecting anomalous behaviors due to random faults (e.g., Soft Errors in memories, SEUs in registers) rather than malicious attacks. Of course, the introduced security checker must not interfere with the nominal functioning of the system, i.e., it must not slow down the working frequency, and it must bring the smallest possible silicon area and power consumption overhead.
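As a toy illustration of the probabilistic-data-structure side of an ISC, the sketch below uses a Bloom filter to hold the set of fetch addresses (or bus transactions) observed during a trusted characterization phase; at runtime, an access that the filter reports as definitely unseen raises an alert. The filter size, hash choice and address source are assumptions, and a hardware ISC would implement the filter in logic rather than in Python.

```python
# Minimal sketch of a Bloom-filter whitelist check, as an ISC building block.
import hashlib

class BloomFilter:
    def __init__(self, n_bits=1 << 16, n_hashes=4):
        self.n_bits, self.n_hashes = n_bits, n_hashes
        self.bits = bytearray(n_bits // 8)

    def _positions(self, item):
        for i in range(self.n_hashes):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.n_bits

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def probably_contains(self, item):
        # False positives are possible; false negatives are not.
        return all(self.bits[pos // 8] & (1 << (pos % 8)) for pos in self._positions(item))

if __name__ == "__main__":
    whitelist = BloomFilter()
    for addr in (0x8000_0000 + 4 * i for i in range(1024)):   # trusted fetch addresses
        whitelist.add(addr)
    for addr in (0x8000_0040, 0xDEAD_BEEF):                   # nominal vs. suspicious access
        status = "ok" if whitelist.probably_contains(addr) else "ALERT: unexpected access"
        print(hex(addr), status)
```

A Bloom filter fits this setting because membership checks are constant-time and the structure is compact enough to sit next to the monitored bus, at the cost of a tunable false-positive rate.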
Unveiling Security Vulnerabilities of RISC-V Microprocessors
(in collaboration with the European Space Agency)
RISC-V microprocessors are becoming more and more popular thanks to their openness and extendability. On the other hand, no extensive security assessment of such computing platforms has been carried out yet. In particular, we aim at studying whether popular Microarchitectural Side-Channel Attacks (MSCAs), such as Spectre, Meltdown and similar attacks, represent potential threats for RISC-V processors. Indeed, MSCAs have been demonstrated to be very effective against x86 and ARM processors, while few studies have evaluated their effectiveness against RISC-V.