Our Mission
Accelerating progress in AI derisking
Our Primary Objective
The primary objective of Factor Omega is to accelerate progress in AI alignment, reducing the likelihood that Advanced AI results in catastrophic outcomes.
The Challenge
Many institutions and individuals want to help make AI safer, yet the field lacks an easy front door. This creates a critical bottleneck in our collective ability to address one of the most pressing challenges of our time.
The Problem
Global AI spending is projected to exceed $300B by 2026, while annual funding for AI safety and alignment is under $300M, a gap of more than a thousandfold.
Our Solution
We identify, capture, and channel resources to early-stage organizations working on AI derisking: GPU compute, expert services (legal, operations, engineering), social capital, and non-dilutive funding.
Our Approach
Factor Omega consists of four founders with deep connections across AI and adjacent fields in academia and industry, bolstered by a network of close advisors. This combination gives us unprecedented access to both 'hard' resources like GPU compute and 'soft' resources like follow-on funding and model access.
“Outside In”
We make deployment efficient by taking on all overhead ourselves, so donors simply have to give. We deploy resources to both early-stage non-profit and for-profit organizations, channeling previously untapped resources into underfunded areas of AI derisking.
- Handle intake, triage, and vetting
- Ensure compliance and delivery
- Provide ongoing support
“Inside Out”
As we scale, we'll add communication efforts that improve public understanding of AI safety through clear, accessible information.
- Political advocacy
- Policymaker education
- Public awareness campaigns
Why This Matters
AI capabilities are likely to continue their exponential growth over the coming years, plausibly reaching AGI-level performance within the next two decades.
The Upside
In the optimistic scenario, AGI ushers in an era of unprecedented abundance. When intelligence becomes abundant and inexpensive, scarcity itself may disappear, opening the door to genuinely utopian outcomes.
The Risk
However, more powerful AI also raises the stakes of a profoundly dystopian outcome. The risks include loss of control over rapidly self-improving AI systems and malicious use.
Critical Timeline
In 2022, 48% of surveyed AI researchers said there is at least a 10% chance that AI causes an existential catastrophe. Given the enormous stakes and short timelines, AI safety is the most urgent priority we face.
Our Commitment
We are optimistic about the long term and cautious about the near term. Our role is to accelerate alignment without advancing capabilities.
Careful Assessment
We assess second-order effects and support only work where risk reduction is clearly established.
Mission Clarity
We maintain independence and avoid funding that would compromise our safety-first mission.
Transparency
Every resource is tracked and reported to ensure alignment with our mission of reducing AI risk.
Join Our Mission
Whether you have resources to contribute or expertise to share, there's a place for you in the effort to make AI safe for humanity.