Previous work on ethical machine reasoning has largely been theoretical, and where such systems have been implemented, they have generally been only initial proofs of principle. Here, we address the question of which attributes such systems need in order to improve their real-world utility, and how controllers with these attributes might be implemented. We propose that ethically critical machine reasoning should be proactive, transparent, and verifiable. We describe an architecture in which the ethical reasoning is handled by a separate layer that augments a typical layered control architecture and ethically moderates the robot's actions. This layer makes use of a simulation-based internal model and supports proactive, transparent, and verifiable ethical reasoning. Its reasoning component uses our Python-based belief–desire–intention (BDI) implementation. The declarative logic structure of BDI facilitates both transparency, through logging of the reasoning cycle, and formal verification methods. To prove the principles of our approach, we use a case-study implementation to demonstrate its operation experimentally. Importantly, this is the first such robot controller in which the ethical machine reasoning has been formally verified.
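To illustrate how a BDI-style reasoning cycle can expose its decisions through logging, the sketch below implements a minimal belief–desire–intention loop in Python. It is not the authors' implementation: the class names (`BDIAgent`, `Plan`), the goal `ensure_human_safety`, and the percept `human_near_hazard` are hypothetical placeholders, and the cycle is simplified to belief revision, plan selection, and action execution.

```python
import logging
from dataclasses import dataclass, field

logging.basicConfig(level=logging.INFO, format="%(levelname)s: %(message)s")
log = logging.getLogger("ethical_layer")


@dataclass
class Plan:
    """A plan is triggered by a goal, guarded by a belief context, and yields actions."""
    goal: str
    context: frozenset  # beliefs that must hold for the plan to be applicable
    actions: tuple      # sequence of primitive robot actions


@dataclass
class BDIAgent:
    beliefs: set = field(default_factory=set)
    desires: list = field(default_factory=list)     # pending goals
    plans: list = field(default_factory=list)       # plan library
    intentions: list = field(default_factory=list)  # adopted plans awaiting execution

    def reasoning_cycle(self, percepts):
        # 1. Belief revision from new percepts (e.g. predictions from an internal model).
        self.beliefs |= set(percepts)
        log.info("beliefs revised: %s", sorted(self.beliefs))

        # 2. Option generation: adopt an applicable plan for each pending goal.
        for goal in list(self.desires):
            for plan in self.plans:
                if plan.goal == goal and plan.context <= self.beliefs:
                    self.intentions.append(plan)
                    self.desires.remove(goal)
                    log.info("adopted plan for %r under context %s",
                             goal, sorted(plan.context))
                    break

        # 3. Execute the next intention, logging each action that is selected.
        if self.intentions:
            plan = self.intentions.pop(0)
            for action in plan.actions:
                log.info("action selected: %s", action)
            return plan.actions
        return ()


# Hypothetical usage: the ethical layer interrupts a task when a hazard to a human is predicted.
agent = BDIAgent(
    desires=["ensure_human_safety"],
    plans=[Plan(goal="ensure_human_safety",
                context=frozenset({"human_near_hazard"}),
                actions=("interrupt_task", "warn_human"))],
)
agent.reasoning_cycle(percepts={"human_near_hazard"})
```

Because every step of the cycle (belief revision, plan adoption, action selection) emits a log record, the trace itself documents why the layer intervened, which is one plausible reading of the transparency property the abstract claims for the declarative BDI structure.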