New badging system, expanded membership tiers, and early access offerings support organizations in addressing agentic AI risks and regulatory demands
Austin, TX – April 22, 2025 – The Responsible AI Institute (RAI Institute) today announced a major expansion of its offerings, launching the new RAISE Pathways Program, powered by more than 1,100 curated AI controls and 17 global standards and guidelines, including NIST, ISO, OWASP, and the EU AI Act. This move marks a strategic shift from policy to practice and operational implementation—giving organizations the structure, tools, and community they need to move from responsible AI intent to verifiable practice.
With global AI regulations tightening and the stakes for safety, fairness, and control rising, especially around agentic AI, the RAI Institute is redefining what it means to lead in responsible innovation. Organizations now face growing pressure to demonstrate—not just declare—their AI accountability. RAISE Pathways delivers the tools, structure, and community to make that shift possible.
“The rapid rise of agentic AI demands more than principles—it demands proof,” said Manoj Saxena, Founder and Chairman of the Responsible AI Institute. “With RAISE Pathways, leaders now have a clear way to operationalize responsible AI, validated against the world’s most comprehensive library of Gen AI and Agentic AI controls and standards.”
RAISE Pathways: A Structured Approach to Responsible AI Implementation
RAISE Pathways is designed to help organizations move from understanding responsible AI principles to operationalizing them in a measurable, verifiable way. It offers a five-level progression model that integrates educational resources, strategic frameworks, community collaboration, and the opportunity to earn RAI Verification Badges.
Participation in RAISE Pathways gives members access to:
- A structured, milestone-based path to responsible AI maturity
- A growing library of implementation tools aligned with global frameworks
- Opportunities to co-develop best practices and engage in peer collaboration
- Exclusive access to expert roundtables, webinars, and working groups
- Eligibility for digital credentials that verify policy implementation
Interested in joining? Organizations and individuals can express interest in the program by completing the RAISE Pathways Interest Form.
Badging System and Expanding Membership: Independent Verification of Responsible AI Practices
To support this shift from policy to practice, the Institute is rolling out a new Responsible AI Verification and Badging System, now in limited beta. The system provides organizations with a path to earn digital credentials based on AI system type—Machine Learning, Generative AI, and Agentic AI—and risk dimensions such as:
- AI Security, Risk & Trust Management
- AI Sustainability, Cost & Performance
- AI Governance & Regulatory Compliance
- Workforce & Vulnerable Communities
The badge structure is powered by over 1,100 curated AI controls and 17 global standards and guidelines, including ISO/IEC 42001:2023 (AI Management Systems), NIST AI RMF, OWASP standards, ISO 23001, and GSF’s SCI specification. Verification is based on both system-level and organizational-level reviews.
RAI badges offer members practical benefits such as:
- External verification of responsible AI practices
- Streamlined internal audits
- Enhanced stakeholder and public trust
- Demonstrated market leadership
The Institute also previewed a forthcoming expansion of its membership tiers designed to better serve innovative startups and small-to-midsize businesses (SMBs). These new tiers will provide access to essential tools, community resources, and foundational badge opportunities—creating an on-ramp for organizations building early-stage AI governance capabilities.
Responding to an Urgent Challenge
According to RAI Institute, more than 80% of AI initiatives still fail to transition from pilot to production—a figure expected to rise with the complexity introduced by agentic AI. Simultaneously, a “verification vacuum” persists, with most organizations lacking reliable, standards-based mechanisms to validate responsible AI practices and communicate them externally.
The RAI Institute’s new offerings are designed to address three core challenges:
- The implementation gap between policy commitments and deployed systems
- The regulatory burden transfer placing compliance responsibility on businesses
- The absence of trusted external validation to support audits, stakeholder trust, and market differentiation
Welcoming Our Newest Q1 Members: HCLTech and Cotiviti
The RAI Institute is also proud to welcome HCLTech and Cotiviti as its newest members in Q1, further diversifying the expertise within our community.
HCLTech is a global technology company headquartered in Noida, India. With a presence in 60 countries and over 220,000 employees, HCLTech delivers industry-leading capabilities in digital, engineering, cloud, and AI services. The company is recognized for its commitment to innovation and has been ranked among Forbes’ “World’s Best Employers.”
Cotiviti, based in South Jordan, Utah, is a leading solutions and analytics company focusing on healthcare. Leveraging extensive clinical and financial datasets, Cotiviti provides insights into healthcare system performance, offering services in payment accuracy, quality improvement, risk adjustment, and network performance management.
“As enterprises worldwide accelerate AI adoption, responsible AI practices and partnerships are critical to the success of these initiatives. We are honored to join the Responsible AI Institute and its mission to be at the forefront of the responsible AI evolution,” stated Dr. Heather Domin, Vice President and Head of Office of Responsible AI and Governance at HCLTech. “The time for responsible AI deployment is now. Together, we can help enterprises embrace responsible AI while maintaining the highest standards of ethics and sustainable growth.”
“Cotiviti is committed to deploying AI responsibly to enable our human specialists to improve performance and the client experience, with rigorous standards focused on accuracy, transparency, security, and accountability,” said Suvajit Gupta, Chief Technology Officer of Cotiviti. “We are proud to partner with the Responsible AI Institute to develop new ways to leverage AI to enable a high-quality and viable healthcare system.”
Why Organizations Join the RAI Institute
Across industries and levels of AI maturity, members join RAI Institute to gain access to practical tools, strategic insights, and a trusted community of experts. Based on extensive member feedback, RAI Institute has identified 12 key drivers behind membership engagement:
- Independent, third-party validation of responsible AI governance practices
- Demonstrable due diligence for boards, investors, customers, and regulators
- Robust documentation to support board-level and executive reporting
- Early intelligence on evolving AI regulations and policy shifts
- Proactive compliance strategies to get ahead of regulatory requirements
- Reduced exposure to legal, financial, and reputational risks
- Structured frameworks for AI risk, trust, and performance management
- Alignment with leading standards such as NIST AI RMF, ISO/IEC 42001, and the EU AI Act
- Enterprise-grade visibility into AI compliance and risk posture
- Comprehensive maturity assessments across people, processes, and systems
- Streamlined AI governance workflows to reduce friction and improve efficiency
- Optimized AI cost management through better risk controls and resource allocation
To learn more about joining the Responsible AI Institute and participating in RAISE Pathways, visit https://www.responsible.ai/join/.
Media Contact
Rhea Saxena / rhea@responsible.ai
Connect with RAI Institute