This paper examines the structural effects of algorithmic censorship by social platforms in order to develop a fuller understanding of the risks posed by such approaches to content moderation. In attempting to moderate at scale, social platforms are increasingly adopting automated approaches to suppressing communications that they deem undesirable. Effective content moderation by social platforms is both important and difficult: numerous issues arise from the volume of information, the culturally sensitive and contextual nature of that information, and the nuances of human communication.