The Shadows That Follow AI Generative Models

Generative AI models have been creating a buzz in recent years; from Stable Diffusion to ChatGPT, our social media feeds are flooded with people trying them out. However, these models come with issues we have to be aware of, including bias, plagiarism, and false information.

In this talk, we will go through the most popular generative AI models of recent years, just so we can be on the same page. Then, for each of them, we will explore some of the issues that arise, including biases within the models that could further reinforce stereotypes, copyright questions around articles or images generated by the models, and the potential spread of false information.

We cannot offer definitive solutions to these problems, but we will conclude the talk with some of the efforts being made to address them. Hopefully, by spreading awareness, we can use these powerful models ethically and get the most benefit from them while avoiding their potential harms.

Cheuk Ting Ho

OpenSSF, UK
A Data Scientist and Developer Advocate, Cheuk has dedicated her work to the open-source community. She co-founded Humble Data, a global beginner Python workshop, and served on the EuroPython Society board for two years. She is now a fellow and director of the Python Software Foundation.
