In a study of users' interactions with Siri, the iPhone personal assistant application, we noticed the emergence of overlaps and blurrings between explanatory categories such as "human" and "machine." We found that users work to purify these categories, thereby resolving the tensions these overlaps create. This "purification work" shows how such categories are always in flux and are redrawn even as they are kept separate. Drawing on STS analytic techniques, we demonstrate the mechanisms of this purification work. We also describe how such category work remained invisible to us during initial data analysis, owing to our own forms of latent purification, and outline the particular analytic techniques that led to this discovery. We thus provide an illustrative case of how categories come to matter in HCI research and design.