Abstract
It has been argued that our currently most satisfactory social epistemology of science can't account for science based on artificial intelligence (AI), because that social epistemology requires trust between scientists who can take full responsibility for the research tools they use, and scientists can't take full responsibility for the AI tools they use since these systems are epistemically opaque. I think this argument overlooks two points: much AI-based science can be done without opaque models, and agents can take full responsibility for the systems they use even when those systems are opaque. Requiring that an agent fully understand how a system works is an untenably strong condition on that agent's taking full responsibility for the system, and it risks absolving AI developers of responsibility for their products. AI-based science need not create trust-related social epistemological problems if we keep in mind that what makes both individual scientists and their use of AI systems trustworthy isn't full transparency of their internal processing but their adherence to the social and institutional norms that ensure that scientific claims can be trusted.