Morality seems to be irrational. Moral agents spread co-operation, which is good for all but even better for the amoral. If 'the virtuous' finish last, morality cannot be defended as rational. Artificial Morality addresses and answers this objection by showing how to build moral agents that succeed in competition with amoral agents. Professor Danielson's agents deviate from the received theory of rational choice: they are bound by moral principles and communicate those principles to others. The central thesis of the book is that these moral agents are more successful in crucial tests, and therefore rational.

Why design agents? Human agents and the situations they create are too complex for an investigation of the most elementary aspects of rationality and morality. Danielson instead uses robots paired in abstract games that model social problems, such as environmental pollution, which reward co-operators but reward even more those who benefit from others' constraint. It is shown that virtuous, not vicious, robots do better in these virtual games.

Artificial Morality is inspired by artificial intelligence. The solution it presents to the problem of rationality and morality is constructive: the building of better moral robots. Artificial intelligence furnishes the means as well: the high-level language Prolog (for Programming in Logic) makes it possible to construct engaging robots in a few lines.
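The kind of game at issue can be sketched as a one-shot Prisoner's Dilemma between "transparent" agents, each able to inspect the other's principle before moving. The payoffs and agent names below are illustrative assumptions, not taken from the book:

```python
# Payoffs to (row, col) for moves C (co-operate) and D (defect),
# a standard Prisoner's Dilemma ordering: temptation > reward >
# punishment > sucker's payoff.
PAYOFF = {
    ("C", "C"): (2, 2),   # mutual co-operation
    ("C", "D"): (0, 3),   # sucker's payoff vs. temptation
    ("D", "C"): (3, 0),
    ("D", "D"): (1, 1),   # mutual defection
}

def unconditional_defector(other):
    """An amoral straightforward maximizer: defects no matter what."""
    return "D"

def conditional_cooperator(other):
    """A morally constrained agent: co-operates only with agents
    that share its co-operative principle."""
    return "C" if other is conditional_cooperator else "D"

def play(a, b):
    """Each agent reads the other's principle (transparency), then moves."""
    move_a, move_b = a(b), b(a)
    return PAYOFF[(move_a, move_b)]

# Paired with its own kind, the constrained agent earns the
# co-operative reward; paired with a defector it defects in
# self-defence, so the amoral agent never exploits it.
print(play(conditional_cooperator, conditional_cooperator))  # (2, 2)
print(play(conditional_cooperator, unconditional_defector))  # (1, 1)
print(play(unconditional_defector, unconditional_defector))  # (1, 1)
```

In a population of such agents the conditional co-operators collect the mutual-co-operation payoff among themselves while conceding nothing to defectors, which is the sense in which the virtuous, not the vicious, come out ahead.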