Artificial intelligence is advancing, enabling the automation of tasks that were previously the preserve of humans. This brings many benefits but also entails challenges, in particular with respect to ‘black box’ machine learning algorithms. Questions of transparency and explainability in such systems therefore receive much attention. However, most organizations do not build their software from scratch but procure it from others. It thus becomes imperative to consider not only requirements for explainable algorithms and decision support systems but also their procurement. This article offers a first systematic literature review of this area. Following the construction of appropriate search queries, 503 unique items retrieved from Scopus, the ACM Digital Library, and IEEE Xplore were screened for relevance, of which 37 remained in the final analysis. An overview and synthesis of the literature are offered, and it is concluded that more research is needed, in particular on procurement, human-computer interaction aspects, and the different purposes of explainability.