At Loeb’s AI Summit in Los Angeles, I had the pleasure of hosting two lively roundtable discussions focused on the intersection of artificial intelligence and privacy law. Participants represented a cross-section of industries, all grappling with how rapid AI adoption is challenging traditional privacy frameworks and compliance practices. The discussions highlighted several practical concerns and emerging trends across industries:
Service Provider Status and AI Training: Roundtable participants debated whether AI tool vendors could be classified as “service providers” under California privacy laws, including the CCPA. Central to this question is whether a particular vendor uses personal information to train its own or third-party models, uses it solely to train models for the business’s benefit, or refrains from using personal information for training at all. The answer has major implications for privacy risk allocation and for negotiating contractual data processing provisions.
AI Note-Taker Applications: There was strong interest among participants in the legal and policy questions surrounding AI-powered note-taking tools. Attendees weighed whether to permit the use of such tools and, if so, how to craft policies that account for different risk profiles, such as distinguishing between internal and external meetings, consumer-facing scenarios (such as use in call centers) and contexts implicating attorney-client privilege.
AI Agents and Expanded Privacy Risks: Participants also highlighted new privacy challenges introduced by AI agents, which can access multiple internal and external systems and data repositories that may contain personal or sensitive information. These agents create complex data flows and require fresh risk assessments and controls, as well as tailored contracting approaches for AI vendors with agentic capabilities built into their services.
The Data Minimization Challenge: Participants agreed that organizations are feeling pressure to adopt AI tools at a rapid pace in order to drive business value and efficiency. In the privacy context, this push often results in more personal information being processed than ever before, and for novel uses. AI processing can even generate new categories of personal information that an organization would not otherwise have collected or maintained. Balancing these business imperatives with privacy-driven data minimization efforts is a growing challenge, as companies strive to innovate while ensuring they do not collect or process more personal information than necessary.
Overall, the roundtable sessions underscored that as AI capabilities evolve, so do the privacy risks and compliance questions organizations must navigate. Clear policies, careful vendor management and ongoing risk assessments will be essential as companies work to responsibly integrate AI tools into their operations.