Agentic AI systems—AI systems that can pursue complex goals with limited direct oversight—could be broadly useful if integrated responsibly into society. Such systems have substantial potential to help people achieve their goals more efficiently and effectively, but they also create risks of harm. This paper proposes a definition of agentic AI systems and of the parties in the agentic AI system lifecycle, and emphasizes the importance of agreeing on baseline responsibilities and safety best practices for each of these parties. Our main contribution is an initial set of practices for keeping agents' operations safe and responsible, which we hope will serve as building blocks in the development of agreed-upon baseline best practices. We enumerate the questions and uncertainties around operationalizing each of these practices that must be addressed before they can be codified into law. Finally, we highlight categories of indirect impacts from the large-scale deployment of agentic AI systems that are likely to require additional governance frameworks.