Worldwide revenue for the public cloud services market reached $545.8 billion in 2022, up 22.9% from 2021, according to new data from the IDC Worldwide Public Cloud Services Tracker.
SaaS applications are the largest source of public cloud services revenue, accounting for more than 45% of the total in 2022. Infrastructure as a Service (IaaS) is the second largest revenue category at 21.2% of total revenue, ahead of Platform as a Service (PaaS) and SaaS system infrastructure software at 17.0% and 16.7%, respectively.
IDC data also shows that the top five public cloud providers – Microsoft, Amazon Web Services, Salesforce Inc., Google and Oracle – accounted for more than 41% of total revenue worldwide, with 27.3% growth compared with the same period last year. Microsoft has the largest share of the public cloud services market at 16.8%, followed by Amazon Web Services at 13.5%.
“Given the economic challenges of the past year, it is easy to conclude that we are in a phase where a focus on limiting new spending and optimizing the use of existing cloud assets will dominate CIOs’ priorities and shape the destiny of IT providers in the coming years. That would be a wrong conclusion,” said Rick Villars, vice president of the Global Research group at IDC. “The assessment and use of AI, enabled by generative AI, is beginning to dominate the long-term investment and planning programs of enterprises, and cloud providers will play an important role in evaluating and adopting AI-enabled services.”
Generative AI is driving investment in the cloud
While some budgets may be tight due to lingering economic uncertainty, generative AI is driving long-term investment strategies for enterprises and cloud providers.
“Cloud providers are making significant investments in high-performance infrastructure,” said Dave McCarthy, research vice president, Cloud and Edge Infrastructure Services, IDC. “This serves two purposes. First, it ushers in the next wave of migration for enterprise applications that previously remained on-premises. Second, it creates a foundation for new AI software that can be rapidly deployed at scale. In both cases, these investments provide market growth opportunities.”
Venture capital firm Andreessen Horowitz says that generative AI warrants significant investment in the cloud due to its resource-intensive nature.
“Nearly everything in generative AI goes through a GPU (or TPU) hosted in the cloud at some point. Whether for model providers and research labs running training workloads, hosting companies running inference and fine-tuning, or application companies doing some combination of both – FLOPS are the lifeblood of generative AI,” the firm wrote. “For the first time in a very long time, the most groundbreaking advances in computing are bottlenecked by massive compute requirements.”
Andreessen Horowitz says that access to computing resources at the lowest total cost has become a determining factor in the success of AI companies. The venture capital firm expects the majority of startups to use cloud computing for generative AI, as it offers lower upfront costs and greater scalability in many cases.
Cloud providers look to keep up with AI demand
A recent Wall Street Journal article notes that traditional cloud infrastructure is not designed to support large-scale AI, and cloud providers are rushing to keep up with demand.
“There’s a pretty big imbalance between supply and demand right now,” Chetan Kapoor, director of product management at Amazon Web Services’ Elastic Compute Cloud, said in the WSJ article.
Only a small portion of cloud services are optimized for AI. The cloud is largely composed of servers that rely on general-purpose CPUs, while the GPU clusters better suited to running AI workloads make up only a small fraction of the infrastructure.
Kapoor told the WSJ that AWS plans to deploy multiple AI-optimized server clusters over the next 12 months. The article also notes that Microsoft Azure and Google Cloud are working to enhance their AI infrastructure.
Hewlett Packard Enterprise is also entering the AI cloud market. The company recently announced HPE GreenLake for Large Language Models, an on-demand, multi-tenant supercomputing cloud service that enables businesses to privately train, tune, and deploy AI at scale.
“Unlike general-purpose cloud offerings that run multiple workloads in parallel, HPE GreenLake for LLMs runs on an AI-native architecture uniquely designed to run a single large-scale AI training and simulation workload at full computing capacity,” the company said in a release. “The offering will support AI and HPC jobs on hundreds or thousands of CPUs or GPUs at once. This capability is significantly more effective, reliable, and efficient for training AI and creating more accurate models, allowing enterprises to accelerate the journey from proof of concept to production and solve problems faster.”
“We have reached a generational market shift in AI that will be as transformative as previous technology breakthroughs like the web, mobile, and cloud,” commented HPE Chairman and CEO Antonio Neri.
“HPE is making AI, once the domain of well-funded government labs and global cloud giants, accessible to all by offering a range of AI applications, starting with large language models, that run on HPE’s proven supercomputers. Organizations can now leverage AI to drive innovation, disrupt markets, and achieve breakthroughs with an on-demand cloud service that trains, tunes, and deploys models at scale and responsibly,” he said.
This story originally appeared on sister site Enterprise AI.