The AI Governance roundtable at Loeb's AI Summit in Los Angeles on April 21 generated a focused discussion that underscored a critical and recurring theme: organizations are working to improve training around their AI tools. Governance structures and frameworks are evolving quickly, making it difficult to keep workforce training and enablement up to speed.
Participants described two primary governance models in use across their organizations. Some have adopted oversight structures concentrated in a core AI governance team, while others allow their individual business units to manage AI adoption with guidance from designated AI leads.
Across both approaches, a consistent concern emerged around the capacity of staff—particularly junior or less experienced employees—to recognize when AI-generated output is inaccurate, incomplete or otherwise unreliable. Supervision by more experienced staff is critical, and employees must be transparent with supervisors about their use of AI in work product so that flawed output can be caught before it causes problems. Without adequate training and supervision, less experienced staff may accept flawed outputs at face value, creating downstream compliance and operational risk that governance structures alone cannot address.
This training deficit compounds broader challenges with framework adoption. Most organizations are layering AI oversight onto their existing data governance frameworks. Organizations still building baseline governance, however, find it harder to account for AI-specific risks, and building internal literacy is critical to putting appropriate AI governance policies in place.
Where organizations have more developed AI governance teams, those teams are generally tasked with reviewing and approving AI tools and use cases, implementing guardrails and assessing risk prior to launch. Because AI is evolving so quickly, however, communication of updates and changes across the organization tends to be inconsistent, which can leave employees unsure of when and how to use AI tools properly.
Post-deployment monitoring, version updates and ongoing risk reassessment tend to be particular challenges. This gap is expected to widen with the rise of agentic AI, which may significantly alter governance models and make centralized visibility more difficult. Strong human oversight policies will be essential to managing these evolving risks, particularly as AI systems become more autonomous and less transparent in their decision-making.
In summary, the discussion reflected a shift from theoretical AI governance to operational execution challenges. The most pressing of these is not structural but human: ensuring that employees at every level have the training, literacy and oversight frameworks necessary to use AI tools responsibly and to identify the limitations of AI-generated work product.