In this paper, I argue that there is no theoretical bar to the development of autonomous weapon systems, and that their practical benefits must be weighed. Further, I argue that meaningful human control, as a guiding principle for the development of smart weapons, rests on fundamentally flawed arguments, and that human control is not, in itself, an adequate check on such weapons. Finally, I argue that it is possible to hold an autonomous weapon system responsible for its actions, and that punishment and reward schemas for such systems can be enforced.