
A Randomized Filtering Strategy Against Inference Attacks on Active Steering Control Systems


In this paper, we develop a framework against inference attacks aimed at inferring the values of the controller gains of an active steering control system (ASCS). We first show that an adversary with access to the information shared by a vehicle, via a vehicular ad hoc network (VANET), can reliably infer the values of the controller gains of an ASCS. This vulnerability may expose the driver as well as the manufacturer of the ASCS to severe financial and safety risks. To protect the controller gains of an ASCS against inference attacks, we propose a randomized filtering framework wherein the lateral velocity and yaw rate states of a vehicle are processed by a filter consisting of two components: a nonlinear mapping and a randomizer. The randomizer randomly generates a pair of pseudo gains which are different from the true gains of the ASCS. The nonlinear mapping performs a nonlinear transformation on the lateral velocity and yaw rate states. The nonlinear transformation takes the form of a dynamical system with a feedforward-feedback structure, which allows real-time and causal implementation of the proposed privacy filter. The output of the filter is then shared via the VANET. The optimal design of the randomizer is studied under a privacy constraint, expressed in terms of mutual information, that determines the protection level of the controller gains against inference attacks. It is shown that the optimal randomizer is the solution of a convex optimization problem. By characterizing the distribution of the filter's output, we show that it depends on the pseudo gains rather than the true gains. Using information-theoretic inequalities, we analyze the ability of an adversary to estimate the controller gains from the output of the filter. Our analysis shows that the performance of any estimator in recovering the controller gains of an ASCS based on the output of the filter is limited by the privacy constraint.
The performance of the proposed privacy filter is compared with that of an additive noise privacy mechanism. Our numerical results show that the proposed privacy filter significantly outperforms the additive noise mechanism, especially in the low distortion regime.
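The mutual-information privacy constraint on the randomizer can be illustrated with a minimal discrete sketch. Here the randomizer is modeled as a conditional distribution mapping each true gain to a pseudo gain, and its leakage is the mutual information between the two; the gain alphabets and distributions below are hypothetical and chosen only for illustration, not taken from the paper.

```python
import numpy as np

def mutual_information(p_true, randomizer):
    """I(true gain; pseudo gain) in bits for a discrete randomizer.

    p_true     : prior over true gain values, shape (n,)
    randomizer : conditional distribution P(pseudo | true), shape (n, m)
    """
    joint = p_true[:, None] * randomizer          # P(true, pseudo)
    p_pseudo = joint.sum(axis=0)                  # marginal over pseudo gains
    indep = p_true[:, None] * p_pseudo[None, :]   # product of marginals
    mask = joint > 0                              # skip zero-probability cells
    return float(np.sum(joint[mask] * np.log2(joint[mask] / indep[mask])))

# Hypothetical example: two possible true gain vectors, three pseudo gains.
p_true = np.array([0.5, 0.5])

# A randomizer that ignores the true gain leaks nothing...
independent = np.array([[0.2, 0.5, 0.3],
                        [0.2, 0.5, 0.3]])
# ...while a deterministic one leaks the full entropy of the gains.
deterministic = np.array([[1.0, 0.0, 0.0],
                          [0.0, 1.0, 0.0]])

print(mutual_information(p_true, independent))     # 0.0 bits
print(mutual_information(p_true, deterministic))   # 1.0 bit
```

In this reading, the convex program described in the abstract would search over such conditional distributions, trading distortion of the released states against a cap on this leakage quantity.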
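The additive-noise baseline's weakness in the low-distortion regime can be seen in a scalar Gaussian surrogate (an illustrative model, not the paper's vehicle dynamics): for a unit-variance gain-dependent signal released with additive Gaussian noise, the leakage equals the Gaussian channel capacity 0.5·log2(1 + SNR), so low distortion (small noise variance) forces high leakage.

```python
import numpy as np

# Illustrative scalar surrogate: a gain-dependent signal with unit variance
# is released through an additive Gaussian noise mechanism.  The leakage is
# I(signal; signal + noise) = 0.5 * log2(1 + signal_var / noise_var),
# while the mean-squared distortion of the release equals noise_var.
signal_var = 1.0
for noise_var in [0.1, 1.0, 10.0]:
    leakage_bits = 0.5 * np.log2(1.0 + signal_var / noise_var)
    print(f"distortion={noise_var:5.1f}  leakage={leakage_bits:.3f} bits")
```

The printed trade-off shows leakage growing sharply as distortion shrinks, which is consistent with the abstract's claim that the proposed filter's advantage over additive noise is most pronounced in the low-distortion regime.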

Keywords: inference attacks; controller gains; control; privacy

Journal Title: IEEE Transactions on Information Forensics and Security
Year Published: 2022
