# Adobe Aims to Give Creators Control Over Image Use in AI Training with “Robots.txt” Style Indicator

Adobe is taking steps to empower creators in the age of AI with a new tool designed to control how their images are used in the training of artificial intelligence models. Drawing inspiration from the familiar `robots.txt` file used by websites to manage crawler access, Adobe is introducing a similar indicator for images, integrated into its content credentials system.
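
For comparison, the `robots.txt` convention works on the honor system: a site publishes a plain-text policy, and a well-behaved crawler consults it before fetching anything. The short Python sketch below, using the standard-library `urllib.robotparser` module, illustrates that flow; the site URL and crawler name are placeholders, not real services.

```python
from urllib import robotparser

# Hypothetical site and crawler names, used only for illustration.
robots_url = "https://example.com/robots.txt"
crawler_user_agent = "ExampleAIBot"

# A well-behaved crawler fetches and parses the site's robots.txt ...
parser = robotparser.RobotFileParser()
parser.set_url(robots_url)
parser.read()

# ... and only requests a resource if the published policy allows it.
# Nothing technically prevents a crawler from skipping this check,
# which is the same enforcement gap Adobe's image-level indicator faces.
page_url = "https://example.com/gallery/photo.jpg"
if parser.can_fetch(crawler_user_agent, page_url):
    print("Policy allows fetching", page_url)
else:
    print("Policy disallows fetching", page_url)
```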

The core idea is to give artists and photographers more say over whether their work contributes to the vast datasets used to train AI. This initiative comes at a time when concerns about copyright, attribution, and the ethical use of creative content in AI training are increasingly prominent.

The challenge, however, lies in convincing AI companies to respect this new standard. As the article points out, AI crawlers have a history of ignoring the `robots.txt` protocol, raising questions about the enforceability of Adobe’s approach.

Content credentials, the foundation of this new tool, are essentially metadata embedded within a media file to verify its authenticity and ownership. Adobe is leveraging the Coalition for Content Provenance and Authenticity (C2PA), a broader industry standard, to implement these credentials.

The company is releasing a new web application, the Adobe Content Authenticity App, which allows creators to attach these credentials to their image files, regardless of whether the images were created or edited using Adobe software. This app supports batch processing, allowing users to apply credentials to up to 50 JPG or PNG files at once. Critically, the app also includes a checkbox that signals to AI companies that the creator *does not* want the image used for training AI models.
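
As a rough illustration of what that checkbox can translate to, the C2PA specification defines a "training and data mining" assertion that carries exactly this kind of opt-out signal. The Python sketch below models that assertion as plain data and checks it; the field names follow the published C2PA assertion labels, but the exact manifest the Adobe Content Authenticity App writes is not documented here and may differ.

```python
# Illustrative sketch only: models the C2PA "training and data mining"
# assertion from the C2PA specification. The manifest produced by the
# Adobe Content Authenticity App may be structured differently.

# A simplified manifest fragment carrying the opt-out signal.
manifest_fragment = {
    "assertions": [
        {
            "label": "c2pa.training-mining",
            "data": {
                "entries": {
                    "c2pa.ai_generative_training": {"use": "notAllowed"},
                    "c2pa.ai_training": {"use": "notAllowed"},
                    "c2pa.data_mining": {"use": "notAllowed"},
                }
            },
        }
    ]
}


def generative_training_allowed(manifest: dict) -> bool:
    """Return True unless the manifest explicitly opts out of gen-AI training."""
    for assertion in manifest.get("assertions", []):
        if assertion.get("label") == "c2pa.training-mining":
            entries = assertion.get("data", {}).get("entries", {})
            use = entries.get("c2pa.ai_generative_training", {}).get("use")
            if use == "notAllowed":
                return False
    return True


print(generative_training_allowed(manifest_fragment))  # False
```

A compliant data pipeline would run a check like this before adding an image to a training set; as with `robots.txt`, honoring the signal remains voluntary.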

To further enhance the credibility of these credentials, Adobe is partnering with LinkedIn, leveraging the platform’s verification program. This integration allows creators to verify their identity and link their credentials to a confirmed name on LinkedIn. Support for linking Instagram and X (formerly Twitter) profiles is also included, although without direct platform verification.

While the technical infrastructure is being built, the ultimate success of Adobe’s initiative hinges on widespread adoption by AI developers. Currently, Adobe is engaged in discussions with leading AI model creators, urging them to integrate and respect the new standard.

“Content creators want a simple way to indicate that they don’t want their content to be used for gen AI training,” Andy Parsons, Senior Director of the Content Authenticity Initiative at Adobe, told TechCrunch. “We have heard from small creators and agencies that they want more control over their creations [in terms of AI training on their content].”

Adding to its suite of tools, Adobe is also releasing a Chrome extension that allows users to easily identify images with embedded content credentials while browsing the web. This extension displays a “CR” symbol on images that carry the credentials, even on platforms that don’t natively support the standard.

The road ahead presents challenges, as evidenced by past controversies. For example, Meta’s labeling of images as “Made with AI” sparked backlash from photographers whose heavily edited images were incorrectly tagged. This incident underscored the complexities of implementing content authentication across different platforms.

Looking ahead, Adobe plans to extend its content credential system to support video and audio files, further solidifying its commitment to empowering creators in the age of AI. The success of this initiative will depend on collaboration and a shared understanding between creators and AI developers, paving the way for a more transparent and ethical ecosystem.
