AI Governance
19 articles about AI Governance
Netomi's approach to scaling enterprise AI agents with GPT-4.1 and GPT-5.2
Netomi explains how it scales enterprise AI agents with GPT-4.1 and GPT-5.2 using concurrency, governance, and multi-step reasoning to run reliable production workflows.
Public input on AI behavior informs OpenAI's model specifications
OpenAI surveyed more than 1,000 people worldwide, comparing their views on how AI should behave against its Model Spec and using the results to shape AI defaults around diverse human values.
OpenAI launches initiative to support countries building democratic AI
OpenAI for Countries is a new initiative to help nations build AI on democratic principles.
OpenAI participates in the Paris AI Action Summit
OpenAI will join global leaders at the Paris AI Action Summit to discuss how AI can drive innovation and economic growth.
OpenAI and leading labs advance AI governance through voluntary safety commitments
OpenAI and other top AI labs are advancing AI governance by making voluntary commitments to improve AI safety, security, and trust.
Defining AI system behavior and the role of public input in decision-making
How ChatGPT’s behavior is shaped today, and how OpenAI plans to improve it with more user customization and public input.
Cooperation strategies for improving safety in AI development
A policy paper outlines four strategies for boosting industry cooperation on AI safety despite competitive pressures: risk communication, technical collaboration, transparency, and incentives.
AI progress and recommendations for safe and beneficial development
AI is advancing quickly, and we should guide it toward safe breakthroughs that benefit everyone.
OpenAI urges Governor Newsom to support harmonized AI regulation in California
OpenAI urged Governor Newsom to have California align its AI rules with national and emerging global standards.
Updated framework for measuring and mitigating risks from frontier AI capabilities
We’re sharing an updated framework to measure and reduce the risk of severe harm from frontier AI.
Research analyzes current misuse of multimodal generative AI
New research maps how multimodal generative AI is being misused today to guide safer, more responsible tech.
OpenAI launches grant program to explore democratic AI governance
OpenAI is offering ten $100,000 grants to test democratic ways to decide what rules AI should follow within the law.
Best practices for deploying large language models
Cohere, OpenAI, and AI21 Labs outline early best practices for organizations building or deploying large language models.
OpenAI partners with Japan’s Digital Agency to advance generative AI in public services
OpenAI is partnering with Japan’s Digital Agency to bring generative AI into public services while promoting safe, trustworthy use and global AI governance.
OpenAI joins EU code of practice to support responsible AI and innovation in Europe
OpenAI has joined the EU AI Code of Practice to support responsible AI and work with European governments on innovation, infrastructure, and economic growth.
Addressing malicious uses of AI to promote democratic and safe applications
How OpenAI works to keep AI beneficial by promoting democratic use, stopping malicious misuse, and defending against authoritarian threats.
Frontier Model Forum established to promote safe development of advanced AI systems
Frontier Model Forum is a new industry group promoting safe, responsible frontier AI by advancing safety research, setting best practices, and sharing information with policymakers and industry.
Governance considerations for future superintelligent AI systems
A case for starting now to plan the governance of superintelligent AI systems far more capable than AGI.
Mechanisms to improve verifiability in AI system development
A report by 58 authors across 30 organizations outlines 10 mechanisms for verifying claims about AI systems, helping developers substantiate safety claims and helping others assess AI development.