Navigating the Challenges of AI Contract Negotiations: Insights from Loeb's AI Summit in LA

At Loeb & Loeb’s recent AI Summit in Los Angeles, legal professionals gathered to share their experiences and strategies for negotiating contracts for artificial intelligence (AI) tools. The conversations highlighted both emerging best practices and persistent challenges as vendors and customers grapple with the rapid evolution of AI technologies.

  • Diligence and Use Prohibitions. Participants noted that identifying AI components within software tools can be challenging when AI is not the tool’s primary purpose. Some have found that introducing contract language prohibiting the use of AI can prompt more transparent discussions and help organizations avoid unintended AI use or the exposure of sensitive data.
  • Ownership of Inputs and Outputs. Lawyers representing both vendors and customers of AI tools agreed that vendors are now often willing to grant customers ownership of their prompts and other inputs, as well as the outputs generated by AI systems. However, vendors often seek to retain rights to use derivative data for model training and improvement, prompting more negotiation over the definition of derivative data to narrow what vendors may use.
  • Legacy Agreements and Unintended Data Use. Lawyers representing customers were concerned that legacy contracts predating the current wave of AI development had granted vendors rights to usage data that is now being used to train AI models. The group emphasized the importance of revisiting and, where possible, amending older agreements to clarify permissible uses of data.
  • Third-Party Tools and Data Leakage. The proliferation of third-party tools and platforms used to process customer data introduces new risks, as there may be multiple layers of background technology using AI to process data. Vendors may also be required to flow down third-party terms from foundational AI models to their customers. Roundtable participants advised heightened diligence on third-party technology to manage this risk. Vendors, for their part, clarified that certain tools require the use of customer data and that customers may not have access to full functionality if such use cannot be agreed upon.

    Participants further expressed concern that, because of the nature of training, certain customer data might be inadvertently used to improve systems and affect weighting in AI models (i.e., leakage). This has led vendors to introduce liability caps on breaches of confidentiality and on certain indemnities that traditionally would be uncapped.

  • Pacing and Governance. The pace of AI innovation continues to outstrip the typical contracting process, and all participants found it difficult to “future-proof” contracts to account for new advancements in generative, agentic and causal AI. Many customers impose restrictions on AI usage beyond what is agreed to at the outset of an agreement and require vendor participation in internal AI steering committees. Customers also agreed that a key concern is requiring vendors to implement robust governance and to take steps to ensure accuracy, comply with applicable law, and prevent model drift, bias and discrimination. Although vendors have often resisted having their AI governance spelled out in a contract, many have begun adopting emerging frameworks such as ISO 42001, which addresses many customer concerns and under which most foundational model providers have now been certified.