NVIDIA’s AI technology, used by Google DeepMind and Google research teams, is now optimized and available to Google Cloud customers worldwide
Google Cloud Next – Google Cloud and NVIDIA today announced new AI infrastructure and software for customers to build and deploy massive models for generative AI and to speed data science workloads.
In a lively conversation at Google Cloud Next, Google Cloud CEO Thomas Kurian and NVIDIA founder and CEO Jensen Huang discussed how the collaboration brings comprehensive machine learning services to some of the largest AI customers in the world – including making it easy to run AI supercomputers with Google Cloud offerings built on NVIDIA technology. The new hardware and software integrations use the same NVIDIA technologies employed by Google DeepMind and Google research teams over the past two years.
“We are at an inflection point where accelerated computing and generative AI have come together to speed innovation at an unprecedented pace,” said Huang. “Our expanded collaboration with Google Cloud will help developers accelerate their work with infrastructure, software and services that supercharge energy efficiency and reduce costs.”
“Google Cloud has a long history of AI innovation to foster and speed innovation for our customers,” said Kurian. “Many of Google’s products are built and served on NVIDIA GPUs, and many of our customers are seeking out NVIDIA accelerated computing to power efficient development of LLMs to advance generative AI.”
NVIDIA integrations to accelerate data science and AI development
Google’s framework for building large language models (LLMs), PaxML, is now optimized for NVIDIA accelerated computing.
Originally built to span multiple Google TPU accelerator slices, PaxML now enables developers to use NVIDIA® H100 and A100 Tensor Core GPUs for advanced and fully configurable experimentation and scale. A GPU-optimized PaxML container is available immediately in the NVIDIA NGC™ software catalog. Additionally, PaxML runs on JAX, which has been optimized for GPUs leveraging the OpenXLA compiler.
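The JAX-on-GPU pipeline described above is transparent to user code: `jax.jit` traces a function and hands it to the XLA (OpenXLA) compiler, which targets whatever backend is present, such as an NVIDIA GPU if one is available, otherwise the CPU. A minimal, generic JAX sketch (not PaxML-specific; the toy function and shapes are illustrative assumptions):

```python
import jax
import jax.numpy as jnp

# jax.jit compiles this function with the XLA (OpenXLA) compiler
# for the default backend (NVIDIA GPU when available, else CPU).
@jax.jit
def scaled_dot(q, k):
    # Toy attention-style score: Q @ K^T scaled by sqrt(d).
    return jnp.dot(q, k.T) / jnp.sqrt(jnp.float32(q.shape[-1]))

q = jnp.ones((4, 8))
k = jnp.ones((4, 8))
scores = scaled_dot(q, k)

print(jax.devices())   # shows which accelerators JAX found
print(scores.shape)    # (4, 4)
```

The same code runs unchanged on TPU, GPU, or CPU, which is the portability point PaxML leans on.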
Google DeepMind and other Google researchers are among the first to use PaxML with NVIDIA GPUs for exploratory research.
NVIDIA-optimized containers for PaxML are available immediately in the NVIDIA NGC container registry to researchers, startups and enterprises worldwide that are building the next generation of AI-powered applications.
Additionally, the companies announced the integration of serverless Spark with NVIDIA GPUs through Google’s Dataproc service. This will help data scientists speed up Apache Spark workloads to prepare data for AI development.
These new integrations are the latest in NVIDIA and Google’s extensive history of collaboration. They include the following hardware and software announcements:
- Google Cloud A3 virtual machines powered by NVIDIA H100 — Google Cloud today announced that its purpose-built A3 virtual machines, powered by NVIDIA H100 GPUs, will be generally available next month, making NVIDIA’s AI platform more accessible for a broad set of workloads. Compared to the previous generation, A3 virtual machines deliver 3x faster training and significantly improved networking bandwidth.
- NVIDIA H100 GPUs to power Google Cloud’s Vertex AI platform — H100 GPUs are expected to be generally available on Vertex AI in the coming weeks, enabling customers to quickly develop generative AI LLMs.
- Google Cloud to gain access to NVIDIA DGX™ GH200 — Google Cloud will be one of the first companies in the world with access to the NVIDIA DGX GH200 AI supercomputer — powered by the NVIDIA Grace Hopper™ Superchip — to explore its capabilities for generative AI workloads.
- NVIDIA DGX Cloud coming to Google Cloud — NVIDIA DGX Cloud AI supercomputing and software will be available to customers directly from their web browser to provide speed and scale for advanced training workloads.
- NVIDIA AI Enterprise on Google Cloud Marketplace — Users can access NVIDIA AI Enterprise, a secure, cloud-native software platform that simplifies the development and deployment of enterprise-ready applications including generative AI, speech AI, computer vision and more.
- Google Cloud first to offer NVIDIA L4 GPUs — Earlier this year, Google Cloud became the first cloud provider to offer NVIDIA L4 Tensor Core GPUs with the launch of its G2 VMs. Customers moving from CPUs to L4 GPUs for AI video workloads can realize up to 120x better performance with 99% improved efficiency. L4 GPUs are widely used for image and text generation, as well as VDI and AI-accelerated audio/video transcoding.