As artificial intelligence (AI), autonomous agents, and other computational technologies develop rapidly, they have begun to change the way industries and organizations operate. In safety-critical sectors such as healthcare, ensuring the safety of both the process and the outcome when such technologies are introduced remains a significant challenge. Multiple stakeholders are involved in creating, implementing, operating, and managing these systems -- including developers, healthcare professionals, managers, and decision-makers -- all of whom share responsibility for ensuring the safety of the system. In this setting, safety is tied directly to social responsibility: ensuring that a healthcare system treats patients fairly and equitably is itself a safety consideration. In this work, we explore applications of AI technologies and autonomous agents in healthcare in which humans and machines work collaboratively toward a shared goal. We review approaches for assuring safety in such systems using complexity science and identify further directions for research on the topic.