In recent years, Machine Learning has spread widely into an increasing number of areas, including sensitive domains such as criminal justice and healthcare. Prominent cases of algorithmic bias illustrate the potential of Machine Learning to reproduce and reinforce biases present in the analogue world and thus lead to discrimination. The realisation of this potential has led to the emergence of a research stream on fair, accountable and transparent Machine Learning. One aspect of this field is the development of fairness tools: algorithmic toolkits that aim to assist Machine Learning developers in identifying and eliminating bias in their models and thus ensuring fairness. A review of the literature on fairness tools revealed a research gap concerning their impact on the understanding of fairness and on the processes within a development team. The aim of this research was therefore to investigate the impact that fairness tools can have on the notion of fairness and on the processes in a development team. To this end, a case study was conducted with a development team of a large, globally operating corporation. Applying Kallinikos' theory of technology as a regulative regime and Oudshoorn and Pinch's idea of the co-construction of users and technologies to the empirical findings led to two important conclusions. First, fairness tools act as regulative regimes by shaping the understanding of fairness and the processes within a development team. Second, this character of fairness tools as regulative regimes needs to be understood as part of the co-construction process between the technology and the developer.