{"id":124,"date":"2025-12-18T18:42:45","date_gmt":"2025-12-18T23:42:45","guid":{"rendered":"https:\/\/pressbooks.bccampus.ca\/openscholarship\/chapter\/ai-and-creative-commons\/"},"modified":"2026-02-12T15:16:13","modified_gmt":"2026-02-12T20:16:13","slug":"ai-and-creative-commons","status":"publish","type":"chapter","link":"https:\/\/pressbooks.bccampus.ca\/openscholarship\/chapter\/ai-and-creative-commons\/","title":{"raw":"AI and Creative Commons","rendered":"AI and Creative Commons"},"content":{"raw":"As discussed in <a href=\"https:\/\/pressbooks.bccampus.ca\/openscholarship\/chapter\/can-a-machine-be-an-author\/\">Can a Machine be an Author?<\/a>, there is an ongoing debate about whether AI can be considered the author of a work. Since AI is a relatively new technology and there are still many grey areas, the guidelines are continuously evolving. Below is how various communities, including Creative Commons, educational institutions, and copyright organizations, are addressing AI-related challenges:\r\n<h2>Creative Commons and AI<\/h2>\r\n<blockquote>\u201cWe recognize that there is a perceived tension between openness and creator choice. Namely, if we give creators choice over how to manage their works in the face of generative AI, we may run the risk of shrinking the commons. To potentially overcome, or at least better understand the effect of generative AI on the commons, we believe that finding a way for creators to indicate \u201cno, unless\u2026\u201d would be positive for the commons.\u201d\r\n<p class=\"has-text-align-right\">-Anna Tumad\u00f3ttir, <a href=\"https:\/\/creativecommons.org\/2024\/07\/24\/preferencesignals\/\">Questions for Consideration on AI &amp; the Commons<\/a><\/p>\r\n<\/blockquote>\r\nAs we covered in the chapter, \u00a0<a href=\"https:\/\/pressbooks.bccampus.ca\/openscholarship\/chapter\/what-is-creative-commons\/\">What is Creative Commons?<\/a>, Creative Commons operates on top of copyright law. 
<a href=\"https:\/\/creativecommons.org\/2023\/02\/17\/fair-use-training-generative-ai\/\">In the United States, there are several strong cases where using copyrighted works to train generative AI models could be considered Fair Use<\/a>, although this also depends on the specific use case. However, the use of openly available content in GenAI models may not always align with the original creator\u2019s intention for sharing it. This is especially relevant since much of this content was likely shared before the development of GenAI, meaning the creators may not have anticipated its use in such a context (Ross, 2024). As of August 2024, Creative Commons is exploring the development of <a href=\"https:\/\/creativecommons.org\/2025\/06\/25\/introducing-cc-signals-a-new-social-contract-for-the-age-of-ai\/\">preference signals<\/a> to enable stewards of content collections to indicate their criteria regarding the terms of use of their content in AI training. This initiative aims to empower creators with more nuanced control over how their content is utilized in the context of generative AI.\r\n<h2>Mitigating the Use of AI<\/h2>\r\nThere are multiple initiatives developing licenses, preference frameworks, or software tools that mitigate uses of AI or machine learning. 
Below are a few examples:\r\n<h3>Preference Signals<\/h3>\r\nCreative Commons is developing a <a href=\"https:\/\/creativecommons.org\/2025\/06\/25\/introducing-cc-signals-a-new-social-contract-for-the-age-of-ai\/\">preference signal<\/a> for collection stewards to indicate their preferences on how AI systems should contribute back to the collection when reusing and benefiting from the content.\r\n<h3>RAIL License<\/h3>\r\n<a href=\"https:\/\/www.licenses.ai\/\" data-type=\"link\" data-id=\"https:\/\/www.licenses.ai\/\">RAIL (Responsible AI License)<\/a> is a license that allows software developers to restrict the use of their AI technology in order to prevent irresponsible and harmful applications, such as the use of AI software for surveillance or other malicious purposes.\r\n<h3>Nightshade<\/h3>\r\n<a href=\"https:\/\/nightshade.cs.uchicago.edu\/index.html\">Nightshade<\/a> is a tool developed by researchers at the University of Chicago to protect artists\u2019 work from unauthorized use in training AI models. It uses a data poisoning approach by subtly altering digital images in ways that are imperceptible to humans but cause AI systems to misinterpret the content, thereby disrupting attempts to replicate the artist\u2019s style.\r\n<h3>Have I Been Trained<\/h3>\r\n<a href=\"https:\/\/haveibeentrained.com\/\">Have I been trained?<\/a> is a search engine developed by <a href=\"https:\/\/spawning.ai\/\">Spawning<\/a> that allows users to check if their images have been used in AI training datasets. 
Spawning also created a tool called<a href=\"https:\/\/spawning.ai\/ai-txt\"> ai.txt<\/a>, which enables website owners to create a text file specifying rules to prevent AI from scraping their data.\r\n<div class=\"textbox shaded\">\r\n<table style=\"border-collapse: collapse;width: 100%\" border=\"0\">\r\n<tbody>\r\n<tr>\r\n<td style=\"width: 85%\">\r\n<h5>Dig Deeper<\/h5>\r\nTo learn more about Creative Commons and AI:\r\n<ul>\r\n \t<li><a href=\"https:\/\/creativecommons.org\/2023\/02\/17\/fair-use-training-generative-ai\/\">Blog post: Fair Use: Training Generative AI by Creative Commons<\/a><\/li>\r\n \t<li><a href=\"https:\/\/creativecommons.org\/wp-content\/uploads\/2025\/06\/Human-Content-to-Machine-Data_Final.pdf\"><strong>Creative Commons.<\/strong> (2025 June). From Human Content to Machine Data \u2013 Introducing CC Signals<\/a><\/li>\r\n<\/ul>\r\n<\/td>\r\n<td style=\"width: 15%\"><img class=\"aligncenter wp-image-33 size-thumbnail\" src=\"https:\/\/pressbooks.bccampus.ca\/wp-content\/uploads\/sites\/2593\/2025\/11\/Dig-Deeper-2-150x150.png\" alt=\"\" width=\"150\" height=\"150\" \/><\/td>\r\n<\/tr>\r\n<\/tbody>\r\n<\/table>\r\n<\/div>\r\n<h1>Adaptation Statement<\/h1>\r\n<small>Adapted from Ross, R. (2024, August 23).<a href=\"https:\/\/creativecommons.org\/2024\/08\/23\/six-insights-on-preference-signals-for-ai-training\/\"> <em>Six Insights on Preference Signals for AI Training<\/em><\/a>. Creative Commons.<\/small>","rendered":"<p>As discussed in <a href=\"https:\/\/pressbooks.bccampus.ca\/openscholarship\/chapter\/can-a-machine-be-an-author\/\">Can a Machine be an Author?<\/a>, there is an ongoing debate about whether AI can be considered the author of a work. Since AI is a relatively new technology and there are still many grey areas, the guidelines are continuously evolving. 
Below is how various communities, including Creative Commons, educational institutions, and copyright organizations, are addressing AI-related challenges:<\/p>\n<h2>Creative Commons and AI<\/h2>\n<blockquote><p>\u201cWe recognize that there is a perceived tension between openness and creator choice. Namely, if we give creators choice over how to manage their works in the face of generative AI, we may run the risk of shrinking the commons. To potentially overcome, or at least better understand the effect of generative AI on the commons, we believe that finding a way for creators to indicate \u201cno, unless\u2026\u201d would be positive for the commons.\u201d<\/p>\n<p class=\"has-text-align-right\">-Anna Tumad\u00f3ttir, <a href=\"https:\/\/creativecommons.org\/2024\/07\/24\/preferencesignals\/\">Questions for Consideration on AI &amp; the Commons<\/a><\/p>\n<\/blockquote>\n<p>As we covered in the chapter, <a href=\"https:\/\/pressbooks.bccampus.ca\/openscholarship\/chapter\/what-is-creative-commons\/\">What is Creative Commons?<\/a>, Creative Commons operates on top of copyright law. <a href=\"https:\/\/creativecommons.org\/2023\/02\/17\/fair-use-training-generative-ai\/\">In the United States, there are several strong cases where using copyrighted works to train generative AI models could be considered Fair Use<\/a>, although this also depends on the specific use case. However, the use of openly available content in GenAI models may not always align with the original creator\u2019s intention for sharing it. 
This is especially relevant since much of this content was likely shared before the development of GenAI, meaning the creators may not have anticipated its use in such a context (Ross, 2024). As of August 2024, Creative Commons is exploring the development of <a href=\"https:\/\/creativecommons.org\/2025\/06\/25\/introducing-cc-signals-a-new-social-contract-for-the-age-of-ai\/\">preference signals<\/a> to enable stewards of content collections to indicate their criteria regarding the terms of use of their content in AI training. This initiative aims to empower creators with more nuanced control over how their content is utilized in the context of generative AI.<\/p>\n<h2>Mitigating the Use of AI<\/h2>\n<p>There are multiple initiatives developing licenses, preference frameworks, or software tools that mitigate uses of AI or machine learning. Below are a few examples:<\/p>\n<h3>Preference Signals<\/h3>\n<p>Creative Commons is developing a <a href=\"https:\/\/creativecommons.org\/2025\/06\/25\/introducing-cc-signals-a-new-social-contract-for-the-age-of-ai\/\">preference signal<\/a> for collection stewards to indicate their preferences on how AI systems should contribute back to the collection when reusing and benefiting from the content.<\/p>\n<h3>RAIL License<\/h3>\n<p><a href=\"https:\/\/www.licenses.ai\/\" data-type=\"link\" data-id=\"https:\/\/www.licenses.ai\/\">RAIL (Responsible AI License)<\/a> is a license that allows software developers to restrict the use of their AI technology in order to prevent irresponsible and harmful applications, such as the use of AI software for surveillance or other malicious purposes.<\/p>\n<h3>Nightshade<\/h3>\n<p><a href=\"https:\/\/nightshade.cs.uchicago.edu\/index.html\">Nightshade<\/a> is a tool developed by researchers at the University of Chicago to protect artists\u2019 work from unauthorized use in training AI models. 
It uses a data poisoning approach by subtly altering digital images in ways that are imperceptible to humans but cause AI systems to misinterpret the content, thereby disrupting attempts to replicate the artist\u2019s style.<\/p>\n<h3>Have I Been Trained<\/h3>\n<p><a href=\"https:\/\/haveibeentrained.com\/\">Have I been trained?<\/a> is a search engine developed by <a href=\"https:\/\/spawning.ai\/\">Spawning<\/a> that allows users to check if their images have been used in AI training datasets. Spawning also created a tool called<a href=\"https:\/\/spawning.ai\/ai-txt\"> ai.txt<\/a>, which enables website owners to create a text file specifying rules to prevent AI from scraping their data.<\/p>\n<div class=\"textbox shaded\">\n<table style=\"border-collapse: collapse;width: 100%\">\n<tbody>\n<tr>\n<td style=\"width: 85%\">\n<h5>Dig Deeper<\/h5>\n<p>To learn more about Creative Commons and AI:<\/p>\n<ul>\n<li><a href=\"https:\/\/creativecommons.org\/2023\/02\/17\/fair-use-training-generative-ai\/\">Blog post: Fair Use: Training Generative AI by Creative Commons<\/a><\/li>\n<li><a href=\"https:\/\/creativecommons.org\/wp-content\/uploads\/2025\/06\/Human-Content-to-Machine-Data_Final.pdf\"><strong>Creative Commons.<\/strong> (2025 June). 
From Human Content to Machine Data \u2013 Introducing CC Signals<\/a><\/li>\n<\/ul>\n<\/td>\n<td style=\"width: 15%\"><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter wp-image-33 size-thumbnail\" src=\"https:\/\/pressbooks.bccampus.ca\/wp-content\/uploads\/sites\/2593\/2025\/11\/Dig-Deeper-2-150x150.png\" alt=\"\" width=\"150\" height=\"150\" srcset=\"https:\/\/pressbooks.bccampus.ca\/openscholarship\/wp-content\/uploads\/sites\/2593\/2025\/11\/Dig-Deeper-2-150x150.png 150w, https:\/\/pressbooks.bccampus.ca\/openscholarship\/wp-content\/uploads\/sites\/2593\/2025\/11\/Dig-Deeper-2-65x64.png 65w\" sizes=\"auto, (max-width: 150px) 100vw, 150px\" \/><\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<\/div>\n<h1>Adaptation Statement<\/h1>\n<p><small>Adapted from Ross, R. (2024, August 23).<a href=\"https:\/\/creativecommons.org\/2024\/08\/23\/six-insights-on-preference-signals-for-ai-training\/\"> <em>Six Insights on Preference Signals for AI Training<\/em><\/a>. Creative Commons.<\/small><\/p>\n","protected":false},"author":1076,"menu_order":6,"template":"","meta":{"pb_show_title":"on","pb_short_title":"","pb_subtitle":"","pb_authors":[],"pb_section_license":""},"chapter-type":[],"contributor":[],"license":[],"class_list":["post-124","chapter","type-chapter","status-publish","hentry"],"part":101,"_links":{"self":[{"href":"https:\/\/pressbooks.bccampus.ca\/openscholarship\/wp-json\/pressbooks\/v2\/chapters\/124","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/pressbooks.bccampus.ca\/openscholarship\/wp-json\/pressbooks\/v2\/chapters"}],"about":[{"href":"https:\/\/pressbooks.bccampus.ca\/openscholarship\/wp-json\/wp\/v2\/types\/chapter"}],"author":[{"embeddable":true,"href":"https:\/\/pressbooks.bccampus.ca\/openscholarship\/wp-json\/wp\/v2\/users\/1076"}],"version-history":[{"count":3,"href":"https:\/\/pressbooks.bccampus.ca\/openscholarship\/wp-json\/pressbooks\/v2\/chapters\/124\/revisions"}],"predecessor-version":[{"id":451,"href":"https:\/\/pressb
ooks.bccampus.ca\/openscholarship\/wp-json\/pressbooks\/v2\/chapters\/124\/revisions\/451"}],"part":[{"href":"https:\/\/pressbooks.bccampus.ca\/openscholarship\/wp-json\/pressbooks\/v2\/parts\/101"}],"metadata":[{"href":"https:\/\/pressbooks.bccampus.ca\/openscholarship\/wp-json\/pressbooks\/v2\/chapters\/124\/metadata\/"}],"wp:attachment":[{"href":"https:\/\/pressbooks.bccampus.ca\/openscholarship\/wp-json\/wp\/v2\/media?parent=124"}],"wp:term":[{"taxonomy":"chapter-type","embeddable":true,"href":"https:\/\/pressbooks.bccampus.ca\/openscholarship\/wp-json\/pressbooks\/v2\/chapter-type?post=124"},{"taxonomy":"contributor","embeddable":true,"href":"https:\/\/pressbooks.bccampus.ca\/openscholarship\/wp-json\/wp\/v2\/contributor?post=124"},{"taxonomy":"license","embeddable":true,"href":"https:\/\/pressbooks.bccampus.ca\/openscholarship\/wp-json\/wp\/v2\/license?post=124"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}