PabloDiscobar, @PabloDiscobar@kbin.social

What’s so bad about giving AI models something to learn on?

From a user's point of view? A lot. So far, AI has made itself the champion of producing fakes: fake news, fake pictures, fake videos, fake history, fake identities. Do you think AI will be used for your own good? Do you think your private data is being farmed for your own good? I don't.

I posted an example about fake identities and fake posters on Twitter. This is the end goal. This is where the money generated by the AI will come from.

That way you could detect and address rogue scrapers while still working with LLM creators who are open to an honest training integration. And if your company can't really detect the difference between users and LLM crawlers after implementing something like this, well, then those crawlers don't really affect the company as much as the CEOs would like to pretend.

Twitter and Reddit probably want to be their own LLM creators. They don't want to leave this market to other LLM companies. Also, it doesn't take many API calls to generate the content that will astroturf your product.
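For what it's worth, the "detect rogue scrapers while working with honest LLM creators" idea quoted above could start out as crude as the sketch below: anonymous clients pulling pages far faster than any human reads them get flagged, while crawlers that identify themselves with a token from an agreed training deal are let through. The header name, thresholds, and tokens here are invented for illustration, not any real platform's API.

```python
# Minimal sketch, assuming a hypothetical "X-Crawler-Token" header and made-up thresholds.
from dataclasses import dataclass

# Tokens issued to LLM creators who agreed to an honest training integration (hypothetical).
REGISTERED_CRAWLER_TOKENS = {"example-llm-partner-token"}

# Hypothetical threshold: far more requests per hour than a human reader makes.
REQUESTS_PER_HOUR_LIMIT = 500

@dataclass
class ClientStats:
    client_id: str
    requests_last_hour: int
    crawler_token: str | None = None  # value of the X-Crawler-Token header, if sent

def classify_client(stats: ClientStats) -> str:
    """Return 'user', 'partner_crawler', or 'rogue_scraper' for one client."""
    if stats.crawler_token in REGISTERED_CRAWLER_TOKENS:
        # Declared crawler from a training partner: allowed, and can be metered or billed.
        return "partner_crawler"
    if stats.requests_last_hour > REQUESTS_PER_HOUR_LIMIT:
        # Anonymous client pulling content far faster than any person reads it.
        return "rogue_scraper"
    return "user"

if __name__ == "__main__":
    print(classify_client(ClientStats("203.0.113.7", 12)))        # user
    print(classify_client(ClientStats("198.51.100.2", 9000)))     # rogue_scraper
    print(classify_client(ClientStats("192.0.2.9", 9000, "example-llm-partner-token")))  # partner_crawler
```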

Anyway, the cat is out of the bag and this data will be harvested. Brands will astroturf their products using AI. People are not stupid and will eventually see through the trick being played on them. We are probably heading toward platforms requiring fully authenticated access.
