FAQs

Why focus only on consent, rather than compensation and credit?

Fairly Trained AI certification focuses on consent from training data providers because we believe other improvements for rights-holders flow from consent: fair compensation, credit for inclusion in datasets, and more.

We’re mindful that different rights-holders will likely have different demands of AI companies. If there is a consent step for rights-holders, there is an opportunity for rights-holders to secure any other requirements they have.

Is consent from content companies really enough? Shouldn’t you get consent from individual creators?

We’re conscious that there are different views in the creator community regarding what consent generative AI companies should be required to seek. Can content aggregators who work with creators license creators’ content to AI companies, or is individual creator consent required for each license? Is giving creators an opt-out enough, or is an opt-in needed?

We’ve launched the Licensed Model certification as our first certification because we believe that AI companies that obtain licenses for training data, rather than claiming fair use, should be applauded. We hope it will help reinforce the principle that rights-holder consent is needed for generative AI training. We don’t propose this as an end to the debate over what creator consent should look like. But we do think there is a key difference between AI companies that scrape data and claim fair use, and AI companies that license training data. The Licensed Model certification is intended to highlight this difference.

What if my training data practices change after getting certified?

If your training data practices change such that certification would no longer be granted, we expect you to let us know, and certification will then be rescinded. We reserve the right to withdraw certification without reimbursement if new information comes to light regarding your AI practices that would change the outcome of your certification.