Although ChatGPT-like tools hold significant promise for libraries, the risks of algorithmic discrimination they introduce cannot be overlooked. Algorithmic bias can arise at every stage of the lifecycle, from data collection and model training to service provision, user interaction, and result presentation, and it often exhibits new characteristics such as greater concealment and intersectionality. The main causes of algorithmic discrimination are designer bias, user bias, and technological barriers. Designer bias can lead algorithmic models to inherit and amplify societal prejudices; user bias, stemming from differences in users' digital literacy, may exacerbate unequal access to services; and technological barriers may prevent disadvantaged groups from accessing and using intelligent technologies. The adverse effects of algorithmic discrimination are far-reaching: it can transform information inequality into inequality of social status, skew the distribution of discourse power in knowledge production, and diminish individual agency. To mitigate these risks, libraries should first identify the sources of discrimination and then respond to each in turn: implement ethical algorithm interventions to counteract designer bias, cultivate users' digital literacy to reduce user bias, and build an inclusive digital environment to remove technological barriers to service. Together, these measures can ensure the responsible use of ChatGPT-like tools in libraries.