
New York Sets AI Safety Benchmark with SB6953B

  • Writer: Katarzyna Celińska
  • Jun 24
  • 2 min read

On June 12, 2025, New York took a significant step in AI governance by passing the Responsible AI Safety and Education (RAISE) Act—SB6953B. The legislation targets "large developers" of frontier AI models, defined as entities that spend over $100M training high-compute models exceeding 10²⁶ operations. Academic institutions are excluded when acting in a research capacity.

 

Key Requirements:

✅ Written Safety & Security Protocols: Must be developed, maintained, and published.

✅ Retention: All safety protocols and testing documentation must be retained for the model’s lifecycle + 5 years.

✅ Transparency: Testing details must be recorded with enough depth to allow replication by third parties.

✅ Critical Harm Prohibition: Deployment is banned if it risks large-scale harm (100+ fatalities or $1B+ damage), including CBRN threats or autonomous criminal conduct.

✅ Annual Reviews & Third-Party Audits: Developers must review safety practices yearly and engage independent auditors to validate compliance.

✅ Incident Disclosure: Any safety-related incident must be reported within 72 hours to regulatory bodies.

✅ Penalties for Non-Compliance:

  • Up to $10M for a first violation
  • Up to $30M for subsequent violations

 

AI regulation is no longer just an EU initiative. Other countries and US states are actively building their own frameworks. What I find particularly important is the requirement for external third-party audits, which reflects a broader market trend: ISO certifications, SOC 1/SOC 2 attestations, PCI DSS, ESG reporting, FedRAMP, and more. The need for trusted independent assurance is stronger than ever, especially where internal audit lacks perceived independence.

I also strongly believe in the potential of the Cloud Security Alliance and its evolving STAR for AI assurance program. CSA has already led the industry with the STAR program in cloud security, and it is now extending that framework to AI.

The STAR for AI will help organizations demonstrate transparency, accountability, and due diligence in a way that’s both verifiable and certifiable.


