As organizations across industries and sectors—from advertising, media and entertainment to financial services, retail, consumer brands and healthcare—integrate artificial intelligence (AI) solutions into all levels of their operations, legal, business and compliance teams are increasingly focused on a fundamental question: How can we maximize the value of AI tools and solutions while safeguarding the data that fuels them?
Balancing innovation with privacy protections and responsible data stewardship was a recurring theme at Loeb & Loeb’s 2026 AI Summit last month in New York. In presentations and across roundtable discussions, Loeb lawyers, in-house counsel and AI professionals discussed the legal, regulatory, business and technology drivers shaping AI innovation and implementation.
For organizations adopting AI tools, managing privacy concerns around personal and sometimes sensitive information, protecting their institutional data and the data of their clients, and structuring responsible governance frameworks are top priorities.
Here are some of the insights from the summit’s roundtables on privacy, AI governance, advertising and contracting that highlight how organizations are approaching AI implementation and managing legal, business and reputational risks.
Data as the Foundation of AI’s Value
A consistent theme across the roundtable discussions was that AI systems derive their value from data (both as training data and as model inputs and outputs), and that the quality and protection of that data are central to business strategy. AI models and AI-enabled solutions thrive on large and diverse datasets, but privacy concerns, legal and regulatory compliance, and contractual obligations necessarily limit how personal data and proprietary information can be collected, shared and used.
This tension requires organizations across industries and sectors to examine their internal data practices and ensure that data was obtained lawfully—with proper notices to consumers or under appropriate licenses—and that it is used in compliance with applicable laws, notices and contracts. Those datasets also need to be appropriately protected, both internally and externally. AI governance is no longer a matter of piecemeal risk assessments and compliance analyses; it also requires operationalizing data management practices, education and training, and ongoing awareness and risk management across the organization.
Privacy: Integrating Data Protection Into AI Development
The privacy roundtable focused on the growing importance of embedding privacy protections directly into AI design and deployment.
AI applications often involve the aggregation and analysis of large volumes of personal data. As a result, organizations must evaluate how their AI initiatives interact with an increasingly complex landscape of privacy laws and consumer expectations.
Participants stressed the importance of involving privacy counsel early in AI initiatives to identify potential compliance issues before systems are deployed, helping organizations avoid costly redesigns or regulatory scrutiny later.
Security risks were another central focus of the discussion. While AI technologies can enhance data analysis and operational efficiency, they may also introduce new vulnerabilities. AI systems often process sensitive data at scale, making them attractive targets for cyberattacks or misuse. Organizations are responding by strengthening their data security practices around AI systems, including:
- Implementing stricter access controls and monitoring for AI tools
- Conducting security assessments of AI vendors
- Incorporating AI systems into existing incident response frameworks
- Developing internal policies governing employee input of sensitive data into generative AI platforms
These measures reflect a growing recognition that privacy and cybersecurity risks cannot be treated separately from AI adoption—they must be addressed as part of a unified governance strategy.
AI Governance: Operationalizing Responsible Data Use
The AI governance roundtable explored how companies are building organizational structures to oversee AI deployment and ensure responsible data use.
Participants described a variety of governance models. Some organizations have created centralized AI oversight committees responsible for reviewing and approving AI use cases. Others rely on distributed governance models in which business units maintain primary responsibility while coordinating with legal, privacy and security teams.
Regardless of the structure, organizations are confronting similar operational challenges.
One major challenge is speed to implementation. Business teams often want to deploy AI tools quickly to improve efficiency or gain competitive advantages. Lengthy review processes can slow adoption and create pressure on legal and compliance teams to streamline oversight mechanisms.
Another challenge is employee awareness. Even when organizations provide approved AI tools, employees may not fully understand how to use them responsibly or securely. This can lead to inconsistent practices—or to employees turning to unapproved tools that may not meet organizational privacy or security standards.
Roundtable participants also highlighted the importance of lifecycle governance. While many companies conduct risk assessments before deploying AI tools, fewer have robust processes for monitoring systems after launch. As models evolve and new features are introduced, ongoing oversight becomes essential to ensure that AI systems continue to operate within acceptable risk parameters.
Organizations with mature privacy and data governance frameworks appear to have an advantage in this area. Existing processes for data classification, vendor management and risk assessment can often be extended to address AI-specific challenges.
AI Contracting: Controlling Data Use in Vendor Relationships
The contracting roundtable highlighted the distinct data protection and data use risks that AI technologies present, along with the challenge of balancing speed of implementation against legal and operational protections.
Unlike traditional software products, AI-enabled tools often rely on complex ecosystems of models, datasets, application programming interfaces (APIs) and third-party solutions. As a result, companies must carefully evaluate and understand the underlying technologies that power the tool and the contractual relationships between the various providers of those technologies.
Vendor use of customer data is a primary concern. Providers of AI-enabled solutions frequently seek broad rights to use client data to train or refine their models; most organizations severely restrict or outright refuse to grant this kind of unfettered data access and use. Roundtable participants cited skepticism about data segregation, security and adherence to contractual restrictions on data usage. Many organizations are negotiating tighter contractual restrictions on:
- The use of customer data for model training
- The retention and deletion of input data
- Secondary or downstream uses of data
- Vendor security obligations and breach notification procedures
Participants also emphasized that contractual protections alone are not sufficient. Legal teams must work closely with IT and security stakeholders to ensure that vendors implement technical controls capable of enforcing these restrictions.
Another key concern is the accuracy and reliability of AI outputs. “Hallucinations” and other incorrect results can create operational or legal risks, particularly when AI tools process sensitive data or inform business decisions. As a result, companies are exploring contractual provisions that address model performance, retraining obligations and remediation procedures when AI systems produce problematic results.
Advertising: Protecting Consumer Trust in AI-Driven Marketing
The advertising roundtable addressed how AI is transforming marketing and content creation and the implications for privacy, consumer trust, and legal and reputational risk.
AI tools are increasingly used to generate advertising copy, personalize marketing messages and optimize campaigns based on consumer data. While these capabilities can improve efficiency and targeting, they also raise questions about transparency and data use.
Participants discussed the growing number of legal limitations and the increasing regulatory scrutiny of AI-generated advertising content, including disclosure requirements under existing state and federal consumer protection and advertising laws, and new AI-specific laws such as the one recently enacted by New York requiring disclosures when synthetic performers or digitally generated personas are used.
Beyond legal compliance, companies are considering the broader issue of consumer trust. Consumers may react differently to marketing content if they learn that it was created or heavily influenced by AI. As a result, organizations must weigh the benefits of AI-generated content against potential reputational risks.
Data governance also plays a role in advertising applications of AI. Personalized marketing strategies often rely on extensive consumer data, making compliance with privacy laws and internal data-use policies critical. Organizations are exploring governance approaches that allow responsible use of AI in advertising while maintaining strong oversight of the underlying data practices.
The Convergence of AI Strategy and Data Governance
Across all four roundtables, one theme emerged: Data governance is essential to AI governance.
The companies best positioned to succeed with AI are those that treat privacy, security and data management not as obstacles to innovation but as enabling frameworks. Strong data governance practices can help organizations deploy AI tools more confidently, knowing that the underlying data is appropriately protected and responsibly used. Privacy, data protection and security, and data optimization will only become more significant as AI technology evolves and new AI-enabled tools (like agentic AI) are developed.
Deputy Chair, Privacy, Security & Data Innovations
Chief Privacy & Security Partner; Chair, Privacy, Security & Data Innovations