OpenAI announced that it has closed its latest funding round with $122 billion in committed capital at a post-money valuation of $852 billion. For the first time, OpenAI extended participation to individual investors through bank channels, raising more than $3 billion from them. "This funding gives us the resources to continue to lead at the scale this moment demands," the company said in a blog post.
"We have also expanded our existing revolving credit facility to approximately $4.7 billion, which gives us added flexibility as we continue to invest at scale. The facility is supported by a global syndicate including JPMorgan Chase, Citi, Goldman Sachs, Morgan Stanley, Wells Fargo, Mizuho, Royal Bank of Canada, SMBC, UBS, HSBC, and Santander. The facility remains undrawn at close," OpenAI added.
OpenAI is becoming core infrastructure for AI, making it possible for people around the world, and for businesses big and small, to build. The broad consumer reach of ChatGPT creates a powerful distribution channel into the workplace, where demand is rapidly shifting from basic model access to intelligent systems that reshape how businesses operate.
"OpenAI was the fastest technology platform to reach 10 million users, the fastest to 100 million users, and soon the fastest to 1 billion weekly active users. Within a year of launching ChatGPT, we reached $1B in revenue. By the end of 2024 we were generating $1B per quarter. We are now generating $2B in revenue per month. At this stage, we are growing revenue four times faster than the companies that defined the Internet and mobile eras, including Alphabet and Meta," the company said.
Leadership across consumer and enterprise
We are continually shipping advances across ChatGPT, the API, and our enterprise products. We recently launched GPT‑5.4, our most capable model yet, with meaningful gains in intelligence and workflow performance. We expanded Codex into a flagship coding agent. We pushed forward on memory, search, personalization, and multimodal interaction. We also expanded into areas like health, scientific discovery, and commerce.
That product momentum shows up in the numbers. ChatGPT is the overwhelming leader in consumer AI with more than 900 million weekly active users and over 50 million subscribers. ChatGPT gets 6x as many monthly web visits and mobile sessions as the next largest AI app, and accounts for 4x the total AI time spent of the next largest app and 4x that of all others combined. Search usage has nearly tripled in a year, and our ads pilot reached more than $100 million in ARR in under six weeks. These are not just growth milestones—they show that frontier AI is becoming part of everyday life for people around the world.
Momentum is just as strong on the enterprise side, which now makes up more than 40% of our revenue and is on track to reach parity with consumer revenue by the end of 2026. GPT‑5.4 is driving record engagement across agentic workflows. Our APIs now process more than 15 billion tokens per minute. Codex now serves over 2 million weekly users, up 5x in the past three months, with usage growing more than 70% month over month.
Compute is a strategic advantage
Compute powers every layer of AI: frontier research and models, products, deployment, and revenue. Since ChatGPT launched, both our revenue and our available compute have scaled rapidly as demand for intelligent systems has accelerated.
With each new generation of infrastructure, we train more capable models, making each token more intelligent than before. At the same time, algorithmic and hardware improvements reduce the cost to serve each token, lowering the cost per unit of intelligence. That added intelligence makes AI useful for more complex workflows, which increases usage, drives compute demand, and accelerates the next turn of the flywheel.
This creates a compounding effect: better infrastructure and better models lower the cost of delivery, while improved products and deeper enterprise deployment increase revenue per unit of compute. As utilization increases and the platform matures, this drives meaningful operating leverage over time.
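The flywheel described above can be made concrete with a toy arithmetic model. All numbers below are illustrative assumptions chosen only to show the mechanism (falling cost per token, prices declining more slowly than costs, cheaper intelligence unlocking more demand); they are not OpenAI figures.

```python
# Toy model of the compute flywheel: each infrastructure generation cuts
# serving cost per token, prices fall more slowly, and cheaper intelligence
# expands usage. Revenue then grows faster than cost (operating leverage).
# All starting values and rates are illustrative assumptions.

def flywheel(generations=4,
             cost_per_token=1.0,     # serving cost, arbitrary units
             tokens_served=1.0,      # volume, arbitrary units
             revenue_per_token=1.5,  # price, arbitrary units
             cost_decline=0.5,       # each generation halves cost per token
             price_decline=0.7,      # prices fall, but more slowly than costs
             demand_growth=3.0):     # cheaper intelligence triples usage
    """Return (generation, revenue, cost, revenue/cost) per generation."""
    rows = []
    for gen in range(generations):
        revenue = tokens_served * revenue_per_token
        cost = tokens_served * cost_per_token
        rows.append((gen, revenue, cost, revenue / cost))
        cost_per_token *= cost_decline       # hardware + algorithmic gains
        revenue_per_token *= price_decline   # passed-on savings
        tokens_served *= demand_growth       # usage response to cheaper tokens
    return rows

for gen, rev, cost, margin in flywheel():
    print(f"gen {gen}: revenue={rev:6.2f}  cost={cost:6.2f}  rev/cost={margin:4.2f}")
```

Under these assumed rates, the revenue-to-cost ratio improves every generation even though per-token prices fall, which is the "operating leverage" the paragraph above refers to.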
Over the past 15 months, we have expanded our infrastructure strategy beyond a small number of core providers to meet the scale and reliability requirements of global AI deployment.
Nvidia remains the foundation of our infrastructure. Our training fleet and the majority of our inference stack continue to run on Nvidia GPUs, and with this round we are deepening that partnership as we scale.
Demand for AI systems is growing faster and becoming more diverse. No single architecture can efficiently meet the needs of the entire AI frontier. To meet that demand and stay flexible, we are building a broader infrastructure portfolio across multiple cloud partners, multiple chip platforms, and deeper co-design across the stack.
This strategy now spans: cloud through Microsoft, Oracle, AWS, CoreWeave, and Google Cloud; silicon through Nvidia, AMD, AWS Trainium, Cerebras, and our own chip in partnership with Broadcom; and data centers through partnerships with Oracle, SBE, and SoftBank.
The OpenAI flywheel is simple. More compute drives more intelligent models. More intelligent models drive better products. Better products drive faster adoption, more revenue and more cashflow. That gives us the ability to reinvest and deliver intelligence more efficiently to consumers, enterprises, and builders around the world.
Building an AI superapp
That is why we are building a unified AI superapp. As models become more capable, the limiting factor shifts from intelligence to usability. Users do not want disconnected tools. They want a single system that can understand intent, take action, and operate across applications, data, and workflows. Our superapp will bring together ChatGPT, Codex, browsing, and our broader agentic capabilities into one agent-first experience.
This is not just product simplification. It is a distribution and deployment strategy. By unifying our surfaces, we can translate advances in model capability directly into user adoption and engagement. Our consumer scale becomes the front door for enterprise usage, as familiarity in daily life drives adoption at work. At the same time, a single product surface allows us to improve faster, ship more coherently, and capture more of the value created by agentic workflows.
The result is a tightly integrated system: infrastructure that enables intelligence, intelligence that powers agents, and products that make those agents useful at global scale.