Replies: 2 comments
-
Eric: I'd like to explore the case of collecting biodiversity data, which is gathered from multiple sources, devices, etc. and combined to generate global biodiversity health scores that governments and other organizations worldwide use to inform policy and business decisions.
-
I'd like to add "transparency" to this use case - which may be the same as or similar to auditability. Transparency is a common requirement in regulatory compliance - the EU AI Act, the US Executive Order on AI, etc. all call for transparency about what the training data is and where it comes from. This can apply to RAG applications too.
-
Proposer: @eric Drury
Modern AI models are built using large amounts of quality training data. In the past, large-scale data collection was often mandated by service providers as a precondition of providing the service (e.g. through terms of use), where users typically had no choice (other than boycotting the service). We are interested in a mechanism by which users can contribute quality data to improve future AI while preserving users' rights and privacy.
With a C2PA+TSP solution, users can have significantly more control over how this data is shared, for instance:
Use of C2PA to claim authorship, label the use of AI or other tools, and document provenance and ownership. Such claims can be the basis of data rights.
Use of TSP to execute a consent agreement, or to choose among variants of such agreements.
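To make the first point concrete, the sketch below shows what an (unsigned) C2PA-style manifest for a contributed biodiversity observation might look like, expressed as plain JSON. The assertion labels `stds.schema-org.CreativeWork` and `c2pa.actions` follow the C2PA specification, but the claim-generator name and contributor details are hypothetical placeholders, and a real manifest would be hashed, signed, and embedded via a C2PA SDK rather than hand-built like this:

```python
import json

# Illustrative sketch only: a C2PA-style manifest as plain JSON.
# In practice this structure would be produced and cryptographically
# signed by a C2PA toolkit, not assembled by hand.
manifest = {
    # Hypothetical name of the data-collection tool
    "claim_generator": "biodiversity-collector/0.1",
    "assertions": [
        {
            # Authorship/ownership claim, which can anchor data rights
            "label": "stds.schema-org.CreativeWork",
            "data": {
                "@context": "https://schema.org",
                "@type": "CreativeWork",
                "author": [
                    {"@type": "Person", "name": "Example Contributor"}
                ],
            },
        },
        {
            # Labels how the asset was produced, e.g. direct capture
            # vs. AI-assisted generation
            "label": "c2pa.actions",
            "data": {
                "actions": [
                    {
                        "action": "c2pa.created",
                        "digitalSourceType": (
                            "http://cv.iptc.org/newscodes/"
                            "digitalsourcetype/digitalCapture"
                        ),
                    }
                ]
            },
        },
    ],
}

print(json.dumps(manifest, indent=2))
```

The authorship assertion gives downstream dataset builders a machine-readable basis for attributing and compensating contributors, while the actions assertion supports the transparency requirement discussed above.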
Auditability
Focus is on how to make the dataset as authentic and transparent as possible.