AI and privacy: Navigating the intersection of privacy rights and AI advancements while safeguarding businesses from liability

  • 25 Jul 2023
  • 2 Mins Read
  • By Annette Muindi

Earlier this month, Google, a subsidiary of Alphabet, was accused of misusing substantial quantities of personal information and copyrighted material in the training of its artificial intelligence systems. The complaint, filed in San Francisco federal court by eight individuals seeking to represent millions of internet users and copyright holders, said Google’s unauthorised scraping of data from websites violated their privacy and property rights.

The plaintiffs said Google misused content they posted to social media and information shared on Google platforms to train its chatbot Bard and other generative AI systems. The content identified in the lawsuit ranged from photos on dating websites to Spotify playlists and TikTok videos. One of the plaintiffs also claimed that Google copied her book in full to train Bard. The lawsuit asked the court to order Google to let internet users opt out of Google’s “illicit data collection” and to delete the existing data or pay its owners “fair compensation.”

Similarly, a Georgia man has sued ChatGPT maker OpenAI, alleging that the popular chatbot generated a fake legal summary accusing him of fraud and embezzlement, a fabrication of the kind AI experts call a “hallucination.” The case marks the first defamation suit against the creator of a generative AI tool.

The fake legal summary is likely the result of a frequent problem with generative AI known as hallucination: the language model generates completely false information without warning, sometimes in the middle of otherwise accurate text. Hallucinated content can look convincing because it superficially resembles real information, and it may include bogus citations and made-up sources.
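Because hallucinated output often includes plausible-looking but nonexistent citations, one common mitigation is to verify every citation a model produces against an authoritative source before trusting it. The sketch below is a minimal illustration of that idea in Python; the `KNOWN_CASES` set and the citation pattern are hypothetical stand-ins for the purpose of the example, not any real legal database or API.

```python
import re

# Hypothetical stand-in for an authoritative index of real case citations.
KNOWN_CASES = {
    "Brown v. Board of Education, 347 U.S. 483 (1954)",
    "Miranda v. Arizona, 384 U.S. 436 (1966)",
}

# Naive pattern for citations of the form "Name v. Name, <vol> U.S. <page> (<year>)".
CITATION_RE = re.compile(r"[A-Z][\w.]* v\. [A-Z][\w.]*, \d+ U\.S\. \d+ \(\d{4}\)")

def flag_unverified_citations(model_output: str) -> list[str]:
    """Return citations in the model's output that cannot be verified."""
    citations = CITATION_RE.findall(model_output)
    return [c for c in citations if c not in KNOWN_CASES]

summary = (
    "As held in Miranda v. Arizona, 384 U.S. 436 (1966), and reaffirmed in "
    "Walters v. Acme, 512 U.S. 901 (1994), the defendant embezzled funds."
)
print(flag_unverified_citations(summary))
# ['Walters v. Acme, 512 U.S. 901 (1994)'] -- a fabricated citation
```

The point of the sketch is that hallucinations cannot be ruled out from the text alone; only a check against an independent, trusted source separates a real citation from a convincing fake.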

The main problem

AI companies have been criticised for violating privacy laws in how they collect and share data for advertising, including targeting minors and vulnerable people with predatory advertising, engaging in algorithmic discrimination, and other unethical and harmful practices.

The legal challenges come as the AI industry faces increased scrutiny. Late last week, the US Federal Trade Commission published a blog post suggesting generative AI raises “competition concerns” related to data, talent, computing resources and other areas. Similarly, the European Union is moving forward with a proposal to regulate AI through the “AI Act,” prompting executives from more than 150 companies to send an open letter to the European Commission warning that the regulations could prove ineffective and harm competition. Lawmakers in the US are also exploring the possibility of regulating the technology.

Despite the uncertain and evolving legal and regulatory landscape, more marketers are treating AI as more than a passing trend, seeing it as something that could meaningfully impact many areas of business. Even so, many still advise companies to experiment while exercising caution.

Possible solution

Instead of scraping data without permission, some AI startups are taking an alternative approach. For example, Israel-based Bria trains its visual AI tools only on content it has licensed. The approach is pricier but less risky, and one the company hopes will pay off. Bria’s partners include Getty Images, which sued Stability AI earlier this year for allegedly stealing 12 million images and using them to train its open-source AI art generator without permission.
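In practice, a licensed-only training pipeline like the one Bria describes comes down to filtering the corpus on provenance metadata before training ever starts. The Python sketch below is a hypothetical illustration of that gate; the `Asset` record and the `ALLOWED_LICENSES` set are assumptions made for the example, not Bria’s actual pipeline.

```python
from dataclasses import dataclass

@dataclass
class Asset:
    """One training item with its provenance metadata (hypothetical schema)."""
    uri: str
    license: str          # e.g. "getty-commercial", "cc-by", "unknown"
    license_expiry: int   # Unix timestamp; 0 means perpetual

# Only assets under these licence terms may enter the training set (assumed list).
ALLOWED_LICENSES = {"getty-commercial", "partner-exclusive", "cc-by"}

def licensed_only(corpus: list[Asset], now: int) -> list[Asset]:
    """Keep assets with an allowed, unexpired licence; drop everything else."""
    return [
        a for a in corpus
        if a.license in ALLOWED_LICENSES
        and (a.license_expiry == 0 or a.license_expiry > now)
    ]

corpus = [
    Asset("s3://bucket/img1.jpg", "getty-commercial", 0),
    Asset("s3://bucket/img2.jpg", "unknown", 0),            # scraped, no licence
    Asset("s3://bucket/img3.jpg", "cc-by", 1_600_000_000),  # licence lapsed
]
train_set = licensed_only(corpus, now=1_700_000_000)
print([a.uri for a in train_set])  # only img1 survives the gate
```

The design point is that the filter sits upstream of training: anything without a verifiable licence never reaches the model, so there is no improperly collected data to delete or compensate for later, which is precisely what the lawsuits above are demanding.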