22949.90.1.
(a) A generative AI provider whose GenAI system is capable of producing digital content that would falsely appear to a reasonable person to depict real-life persons, objects, places, entities, or events shall do all of the following:

(1) (A) Apply provenance data, either directly or through the use of third-party technology, to synthetic content produced or significantly modified by a generative AI system that the GenAI provider makes available. The GenAI provider shall make the provenance data difficult to remove or disassociate, taking into account the accuracy of the provenance data, the quality of the content produced or significantly modified by the generative AI system, and widely accepted industry standards on provenance data.
(B) The application of provenance data to synthetic content, as required by subparagraph (A), shall, at minimum, be difficult to remove or disassociate, identify the digital content as synthetic, and communicate the following provenance data in order of priority, with clause (i) being the most important, and clause (iv) being the least important:
(i) The synthetic nature of the content.
(ii) The name of the generative AI provider.
(iii) If feasible for the provenance technique used, the time and date the provenance data was applied.
(iv) If applicable and feasible for the provenance technique used, the specific portions of the content that are synthetic.
(2) (A) A generative AI provider shall create and make available to the public a provenance detection tool or permit users to use a provenance detection tool provided by a third party. The provenance detection tool shall be based on broadly adopted industry standards and, if technically feasible, meet the following criteria:
(i) The tool allows a user to assess whether digital content was created or altered by a generative AI system.
(ii) The tool allows a user to determine how digital content was created or altered by a generative AI system.
(iii) The tool outputs any provenance data that is detected in the content.
(iv) The tool is publicly accessible through the generative AI provider’s or the third party’s internet website, its mobile application, or an application programming interface, as applicable.
(v) The tool allows a user to upload content or provide a uniform resource locator (URL) linking to online content.
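A hedged sketch of how a provenance detection tool meeting criteria (i) through (v) might behave: it accepts content and outputs any provenance data it detects. The plain-text marker used here is purely hypothetical; real tools would look for an embedded manifest or watermark per an adopted industry standard.

```python
import json

# Hypothetical marker; a real detection tool would decode a watermark or
# embedded manifest defined by an industry standard, not a plain-text tag.
MARKER = b"PROVENANCE:"

def detect_provenance(content: bytes) -> dict:
    """Illustrative check: report whether content carries provenance data
    and, if so, output that data (criteria (i)-(iii))."""
    idx = content.find(MARKER)
    if idx == -1:
        return {"provenance_found": False, "provenance_data": None}
    payload = content[idx + len(MARKER):].split(b"\n", 1)[0]
    return {"provenance_found": True,
            "provenance_data": json.loads(payload.decode("utf-8"))}

# Simulated uploaded content (criterion (v)) carrying the hypothetical marker.
sample = (b"...image bytes...PROVENANCE:"
          + json.dumps({"synthetic": True,
                        "provider": "Example GenAI Co."}).encode()
          + b"\n...")
result = detect_provenance(sample)
```

A URL-based interface (the second half of criterion (v)) would simply fetch the linked content and pass the bytes to the same detection routine.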
(B) A generative AI provider or third party shall put in place a process to collect user feedback related to the efficacy of the provenance detection tool described in subparagraph (A) and incorporate that feedback into efforts to improve the efficacy of the tool.
(C) A generative AI provider that creates or makes available a provenance detection tool pursuant to subparagraph (A) may limit access to the decoder to ensure the robustness and security of its provenance data techniques.
(3) (A) Conduct adversarial testing exercises following relevant guidelines from the National Institute of Standards and Technology. The adversarial testing exercises shall assess both of the following:
(i) The robustness of provenance data methods.
(ii) Whether the generative AI provider’s GenAI systems can be used to add false provenance data to content generated outside of the system.
(B) Adversarial testing exercises required by this paragraph shall be conducted before the general audience release of any new tool or method used to apply provenance data to synthetic content produced or significantly modified by a generative AI system that the GenAI provider makes available.
(C) In the event that a generative AI provider utilizes a third-party tool or method to apply provenance data, the generative AI provider may rely on the testing conducted by the provider of the third-party tool or method pursuant to this paragraph.
(D) A generative AI provider shall submit full reports of its adversarial testing exercises to the Department of Technology within 90 days of conducting an adversarial testing exercise pursuant to this paragraph. The report shall address any material, systemic failures in a generative AI system related to the erroneous or malicious inclusion or removal of provenance data.
(E) (i) Upon the request of an accredited academic institution, a generative AI provider shall make available a summary or report of its adversarial testing exercises.
(ii) The provider may deny a request if providing a summary or report to the relevant institution would undermine the robustness or security of its provenance data techniques.
(F) This paragraph does not require the disclosure of trade secrets, as defined in Section 3426.1 of the Civil Code.
(b) Providers and distributors of software and online services shall not make available a system, application, tool, or service that is designed for the primary purpose of removing provenance data from synthetic content in a manner that would be reasonably likely to deceive a consumer as to the origin or history of the content.
(c) Generative AI hosting platforms shall not make available a generative AI system that does not, to the greatest extent possible, allow a GenAI provider to apply provenance data to content created or substantially modified by the system, whether by directly providing that functionality or by making available the technology of a third-party vendor, in a manner consistent with the specifications set forth in paragraph (1) of subdivision (a).