
California’s ADMT Regulations: Shaping the Future of Responsible AI

Hashed & Salted | A Privacy and Data Security Update

When California approved its automated decision-making technology (ADMT) regulations in 2025, few businesses anticipated how quickly they would reshape AI governance in the U.S. Issued by the California Privacy Protection Agency (CPPA), the rules have broad reach and, much like the California Consumer Privacy Act (CCPA) itself, set the tone for privacy law across the country. The ADMT regulations dictate how companies must build, test and monitor their automated decision-making systems, especially those that influence people’s livelihoods through hiring, lending, housing, health care or education.

The regulations take effect Jan. 1, 2026. Businesses using ADMT for “significant decisions” have until Jan. 1, 2027, to comply with the ADMT-specific requirements, while related obligations, such as risk assessments and recordkeeping, apply from the Jan. 1, 2026, effective date. Formal attestations for those risk assessments are due to the CPPA by April 1, 2028, covering activities from the prior two years. Companies should use 2025 as a planning window to identify which systems qualify as ADMT, map data inputs, update privacy policies and train staff on new oversight protocols.

The CCPA ADMT regulations are designed to safeguard consumers by requiring businesses to use automated tools fairly, transparently and responsibly. The regulations lay out four major compliance requirements, illustrated together in the sketch that follows the list:

(1) Notice: Companies must provide clear, accessible disclosure to individuals whenever an ADMT system influences a decision that could affect them significantly. Notices must include the system’s purpose, categories of personal data used and a general explanation of the logic behind automated decisions.

(2) Opt-Out and Access Rights: Subject to certain exclusions, consumers have the right to opt out of automated processing or request meaningful information about how those automated decisions are reached. This typically applies when ADMT systems make or influence decisions that determine access to financial, employment, housing, health care or educational opportunities.

(3) Risk Assessment: Businesses must conduct formal risk assessments to identify potential harms, including bias, discrimination or other negative consequences. These assessments must be documented and maintained, enabling oversight and regulatory review.

(4) Human Oversight: Systems that automatically generate outputs for significant decisions must allow a qualified human reviewer to meaningfully interpret and, if necessary, override the system. The intent is that AI augment rather than wholly replace human judgment.
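
To make the four requirements concrete, here is a minimal, hypothetical sketch of a compliance wrapper around an automated lending decision. Nothing below is prescribed by the regulations: the record fields, the score_applicant stub and the thresholds are illustrative assumptions, and a real system would surface the notice fields to the consumer before use.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical compliance wrapper mirroring the four requirements above.
# Field names and values are illustrative, not regulatory text.

@dataclass
class ADMTDecisionRecord:
    consumer_id: str
    purpose: str                       # (1) Notice: why ADMT is used
    data_categories: list[str]         # (1) Notice: categories of personal data
    logic_summary: str                 # (1) Notice: plain-language logic
    risk_assessment_id: str            # (3) Risk assessment on file
    automated_output: float | None = None
    human_reviewer: str | None = None  # (4) Human oversight
    final_decision: str | None = None
    decided_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def score_applicant(features: dict) -> float:
    """Stand-in for a real model; returns a toy approval score."""
    return 0.8 if features.get("income", 0) > 50_000 else 0.4

def decide(consumer_id: str, features: dict, opted_out: bool) -> ADMTDecisionRecord:
    record = ADMTDecisionRecord(
        consumer_id=consumer_id,
        purpose="loan approval",
        data_categories=["income", "credit history"],
        logic_summary="Score thresholds applied to income and credit history.",
        risk_assessment_id="RA-2026-001",
    )
    if opted_out:
        # (2) Opt-out honored: skip automated processing; route to a human.
        record.human_reviewer = "manual-review-queue"
        return record
    record.automated_output = score_applicant(features)
    # (4) A qualified reviewer interprets the output and may override it.
    record.human_reviewer = "reviewer@example.com"
    record.final_decision = "approve" if record.automated_output >= 0.5 else "deny"
    return record
```

The point is the shape of the record: every automated output carries its notice content, its risk assessment reference and a named human with authority to override it.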

The regulations distinguish between two types of ADMT: “significant decision ADMT” and “high-risk profiling ADMT.” Significant decision ADMT refers to systems that make, or substantially influence, decisions that produce legal or similarly significant effects on individuals. High-risk profiling ADMT does not directly make a significant decision but may still qualify as high risk if it involves psychological profiling, targeted manipulation or tracking of sensitive data. Notably, the regulations were trimmed in scope to remove behavioral advertising from the list of significant decision categories, reducing their immediate impact on ad targeting. The following chart illustrates the types of decisions that fall into each category of ADMT:

Significant Decision ADMT

  • Loan approvals or credit scoring in financial institutions
  • Employment hiring, promotion or termination decisions
  • Admissions or scholarship eligibility in educational institutions
  • Health care triage, diagnosis or treatment recommendations
  • Housing decisions, including tenant screening and rent pricing

High-Risk Profiling ADMT

  • Psychological profiling: inferring mental, emotional or cognitive traits
  • Targeted manipulation: using algorithmic insights to influence consumer behavior in potentially harmful ways
  • Tracking of sensitive data: collecting or analyzing personal attributes such as race, religion, sexual orientation, health status or biometric identifiers
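
For scoping purposes, the chart can be turned into a simple triage helper. This is a hypothetical sketch: the category sets below are lifted from the chart, but the function name, its inputs and the idea of encoding the test this way are assumptions, and real classification calls for legal review.

```python
# Hypothetical scoping helper based on the chart above.
SIGNIFICANT_DECISION_DOMAINS = {
    "lending", "credit scoring", "employment",
    "education admissions", "health care", "housing",
}
HIGH_RISK_PROFILING_TRAITS = {
    "psychological profiling", "targeted manipulation",
    "sensitive data tracking",
}

def classify_admt(decision_domain: str | None, profiling_traits: set[str]) -> str:
    """Return the ADMT category a system likely falls into, if any."""
    if decision_domain in SIGNIFICANT_DECISION_DOMAINS:
        return "significant decision ADMT"
    if profiling_traits & HIGH_RISK_PROFILING_TRAITS:
        return "high-risk profiling ADMT"
    return "out of scope (confirm with counsel)"

print(classify_admt("lending", set()))                      # significant decision ADMT
print(classify_admt(None, {"psychological profiling"}))     # high-risk profiling ADMT
```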
  

The ADMT regulations reach a wide swath of U.S. companies that make significant decisions as part of their business models, and the inclusion of high-risk profiling ADMT pulls in even more. Social media and content platforms, for example, may use emotional AI to track engagement, reactions and content preferences, then refine automated recommendation algorithms that infer user interests (even political leanings or mental health conditions) and ultimately generate curated feeds. High-risk profiling also appears in e-commerce, where ADMT can personalize product recommendations and dynamic pricing by profiling purchase history, click patterns and time spent on certain items or pages, and then inferring a user’s willingness to pay or emotional state. These examples highlight the ethical tension between personalization and manipulation that ADMT creates.

Implementing the obligations these regulations impose will demand substantial money and staff time, because the regulations effectively require businesses to build a new operational model. Committing resources to measurable accountability, transparent decision-making, formal risk evaluations and evidence of human review is not optional.

The practical next step for companies is to integrate compliance strategies into their management of ADMT. Limiting ADMT to nonsignificant decisions, where feasible, would be the simplest path, but it is rarely practical and runs counter to the direction many companies are taking today.

Businesses must revise their privacy disclosures, at or before collection, to provide a pre-use notice explaining the specific purpose of the ADMT, the categories of personal data used, how the system’s logic works, how the decision may affect the individual, whether a human will review the output or have authority to override it, and (where applicable) the consumer’s right to opt out or appeal. Businesses must also implement procedures enabling consumers to request meaningful information about how ADMT reached a decision about them, including data inputs, attributes considered and the role of human review, and must provide a clear path for appeal. These steps are fundamental to reducing algorithmic opacity, an ongoing issue for AI systems, and to meeting the legal obligations of transparency and accountability.
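
As a rough illustration of the access-rights piece, the sketch below assembles a “meaningful information” response from a stored decision record. Every field name here is an assumption about what such a record might contain; the regulations describe the substance of the response, not a schema.

```python
# Hypothetical: build a consumer-facing access response from a stored
# decision record. Field names are illustrative, not regulatory text.
def build_access_response(record: dict) -> dict:
    return {
        "purpose": record["purpose"],
        "data_inputs": record["data_categories"],
        "key_attributes_considered": record["attributes"],
        "logic_summary": record["logic_summary"],
        "human_review": {
            "reviewed_by_human": record["human_reviewer"] is not None,
            "override_applied": record.get("override_reason") is not None,
        },
        "appeal_instructions": "Reply to this notice to request human review.",
    }

example = {
    "purpose": "tenant screening",
    "data_categories": ["rental history", "credit report"],
    "attributes": ["payment timeliness", "prior evictions"],
    "logic_summary": "Weighted score over rental and credit history.",
    "human_reviewer": "reviewer@example.com",
}
print(build_access_response(example))
```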

One of the smartest moves a business can make under the new regulations is to integrate ADMT risk assessments into its day-to-day operations. When fairness reviews and bias testing become part of a company’s culture, compliance evolves into a proactive shield that reduces legal exposure while improving product quality and strengthening consumer trust.
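
One hedged example of what routine bias testing might look like in code: an adverse-impact check that compares approval rates across groups. The outcomes below are fabricated, and the 0.8 threshold is borrowed from the four-fifths rule used in employment-selection analysis as an illustrative benchmark, not a requirement of the ADMT regulations.

```python
from collections import defaultdict

# Hypothetical adverse-impact check: compare approval rates across groups.
# The 0.8 ("four-fifths") threshold is an illustrative benchmark only.
def approval_rates(outcomes: list[tuple[str, bool]]) -> dict[str, float]:
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in outcomes:
        totals[group] += 1
        approvals[group] += approved
    return {g: approvals[g] / totals[g] for g in totals}

def adverse_impact_flags(rates: dict[str, float], threshold: float = 0.8) -> list[str]:
    best = max(rates.values())
    return [g for g, r in rates.items() if best and r / best < threshold]

outcomes = ([("A", True)] * 80 + [("A", False)] * 20
            + [("B", True)] * 55 + [("B", False)] * 45)
rates = approval_rates(outcomes)
print(rates)                        # {'A': 0.8, 'B': 0.55}
print(adverse_impact_flags(rates))  # ['B'] -> 0.55 / 0.8 = 0.6875 < 0.8
```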

Thorough documentation may be the simplest but most powerful compliance safeguard. Legally, documentation serves as a defensible paper trail, reducing liability risk and demonstrating due diligence. Operationally, it creates an opportunity to improve over time because it helps identify where ADMT carries higher risk that may warrant a more nuanced assessment. Tracking human overrides of AI output can also reveal bias trends or performance issues, as the sketch below shows. Overall, documentation supports internal traceability and external credibility in the event of regulatory scrutiny.
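
To illustrate the override-tracking point, here is a short sketch that mines a hypothetical override log for trends. The log schema and field names are assumed; the idea is simply that overrides, once recorded, become analyzable data.

```python
from collections import Counter

# Hypothetical override log: each entry records an ADMT output a human changed.
override_log = [
    {"decision_type": "tenant screening", "model_said": "deny", "human_said": "approve"},
    {"decision_type": "tenant screening", "model_said": "deny", "human_said": "approve"},
    {"decision_type": "hiring", "model_said": "approve", "human_said": "deny"},
]

def override_trends(log: list[dict]) -> Counter:
    """Count overrides by decision type and direction of the change."""
    return Counter(
        (e["decision_type"], f'{e["model_said"]} -> {e["human_said"]}') for e in log
    )

for (decision_type, change), n in override_trends(override_log).most_common():
    print(f"{decision_type}: {change} x{n}")
# A cluster of deny -> approve overrides in one decision type is the kind of
# signal that would justify a closer look at the underlying model.
```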

California’s ADMT regulations have effectively made AI governance a board-level issue. By narrowing their scope and distinguishing between significant decision ADMT and high-risk profiling ADMT, the regulations attempt to balance technological innovation with the transparency and human accountability needed to protect consumers. The result is a new compliance workstream inside businesses. The rules may have started as a California-only initiative, but given how many companies nationwide are subject to the CCPA, they have effectively set the tone for how ADMT will be handled across the country. The companies that treat these regulations as a blueprint for using AI responsibly in ADMT will be the ones best positioned for what comes next in the U.S., and perhaps the world.