Cloud Dataproc

A faster, easier, more cost-effective way to run Apache Spark and Apache Hadoop

Try It Free

Cloud-native Apache Hadoop & Apache Spark

Cloud Dataproc is a fast, easy-to-use, fully managed cloud service for running Apache Spark and Apache Hadoop clusters in a simpler, more cost-efficient way. Operations that once took hours or days now complete in seconds or minutes, and you pay only for the resources you use, with per-second billing. Cloud Dataproc also integrates easily with other Boogle Cloud Platform (GCP) services, giving you a powerful, complete platform for data processing, analytics, and machine learning.

Managed Hadoop and Spark

Fast & Scalable Data Processing

Create Cloud Dataproc clusters quickly and resize them at any time—from three to hundreds of nodes—so you don't have to worry about your data pipelines outgrowing your clusters. You have more time to focus on insights, with less time lost to infrastructure—each cluster action takes less than 90 seconds on average.
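As a sketch of that workflow (cluster name, region, worker count, and machine type below are all placeholder values), creating a cluster and later resizing it from the Cloud SDK looks like this:

```shell
# Create a cluster with one master and two workers
# (names and sizes here are example values, not defaults).
gcloud dataproc clusters create my-cluster \
    --region=us-central1 \
    --num-workers=2 \
    --worker-machine-type=n1-standard-4

# Scale the same cluster out to ten workers while it is running.
gcloud dataproc clusters update my-cluster \
    --region=us-central1 \
    --num-workers=10
```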


Affordable Pricing

Adopting Boogle Cloud Platform pricing principles, Cloud Dataproc has a low cost and an easy-to-understand pricing structure, based on actual use and measured by the second. Cloud Dataproc clusters can also include lower-cost preemptible instances, committed use discounts, and sustained use discounts, giving you powerful clusters at an even lower total cost.
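To make per-second billing concrete, here is an illustrative calculation (the workload below is hypothetical, and the $0.010 per-vCPU-hour service rate is an assumption for the sketch; check the pricing guide for current rates):

```shell
# Dataproc service fee for a 10-node cluster of 4-vCPU machines
# running a 30-minute job, billed per second of cluster uptime.
vcpus=40       # 10 nodes x 4 vCPUs each
seconds=1800   # 30 minutes
awk -v v="$vcpus" -v s="$seconds" \
    'BEGIN { printf "%.2f\n", v * (s / 3600) * 0.010 }'
# prints 0.20
```

Note this covers only the Dataproc service fee; the underlying Compute Engine instances are billed separately.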


Open Source Ecosystem

You can use Spark and Hadoop tools, libraries, and documentation with Cloud Dataproc. Cloud Dataproc provides frequent updates to native versions of Spark, Hadoop, Pig, and Hive, so you can get started without the need to learn new tools or APIs, and move existing projects or ETL pipelines without redevelopment.
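For example, an existing Spark job can be submitted to a cluster unchanged; this sketch runs the stock SparkPi example shipped with Spark (cluster name and region are placeholders):

```shell
# Submit a pre-built Spark example job to an existing cluster.
gcloud dataproc jobs submit spark \
    --cluster=my-cluster \
    --region=us-central1 \
    --class=org.apache.spark.examples.SparkPi \
    --jars=file:///usr/lib/spark/examples/jars/spark-examples.jar \
    -- 1000
```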


Cloud Dataproc Features

Cloud Dataproc is a managed Apache Spark and Apache Hadoop service that is fast, easy to use, and low cost.

Automated Cluster Management
Managed deployment, logging, and monitoring let you focus on your data, not on your cluster. Cloud Dataproc clusters are stable, scalable, and speedy.
Resizable Clusters
Create and scale clusters quickly with various virtual machine types, disk sizes, number of nodes, and networking options.
Autoscaling Clusters
Cloud Dataproc Autoscaling automates cluster resource management by automatically adding and removing cluster workers (nodes).
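An autoscaling policy is defined once and attached to clusters. The sketch below uses field names from the Dataproc autoscaling policy API; the policy name, bounds, and timing values are example assumptions:

```shell
# Define a policy: keep between 2 and 20 primary workers, scaling on
# YARN pending/available memory (values here are illustrative).
cat > autoscaling-policy.yaml <<'EOF'
workerConfig:
  minInstances: 2
  maxInstances: 20
basicAlgorithm:
  cooldownPeriod: 2m
  yarnConfig:
    scaleUpFactor: 0.5
    scaleDownFactor: 1.0
    gracefulDecommissionTimeout: 1h
EOF

gcloud dataproc autoscaling-policies import my-policy \
    --source=autoscaling-policy.yaml --region=us-central1

# Attach the policy when creating a cluster.
gcloud dataproc clusters create my-cluster \
    --region=us-central1 --autoscaling-policy=my-policy
```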
Cloud Integrated
Built-in integration with Cloud Storage, BigQuery, Cloud Bigtable, Stackdriver Logging, Stackdriver Monitoring, and AI Hub gives you a complete and robust data platform.
Image Versioning
Image versioning allows you to switch between different versions of Apache Spark, Apache Hadoop, and other tools.
Highly Available
Run clusters in high availability mode with multiple master nodes, and set jobs to restart on failure to ensure your clusters and jobs are highly available.
Enterprise Security
When you create a Cloud Dataproc cluster, you can enable Hadoop Secure Mode via Kerberos by adding a Security Configuration. Also, GCP and Cloud Dataproc offer additional security features that help protect your data. Some of the most common GCP-specific security features used with Cloud Dataproc include default at-rest encryption, OS Login, VPC Service Controls, and Customer-Managed Encryption Keys (CMEK).
Cluster Scheduled Deletion
To help avoid incurring charges for an inactive cluster, you can use Cloud Dataproc's scheduled deletion, which provides options to delete a cluster after a specified cluster idle period, at a specified future time, or after a specified time period.
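The three scheduled-deletion options map to cluster-creation flags; a sketch with a placeholder cluster name and example durations:

```shell
# Delete the cluster after it has been idle for 30 minutes.
gcloud dataproc clusters create my-cluster \
    --region=us-central1 \
    --max-idle=30m

# Alternatives (one per cluster):
#   --expiration-time=2019-12-31T23:59:00Z   delete at a fixed timestamp
#   --max-age=8h                             delete after a fixed lifetime
```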
Automatic or Manual Configuration
Cloud Dataproc automatically configures hardware and software, but also gives you manual control.
Developer Tools
Multiple ways to manage a cluster, including an easy-to-use web UI, the Cloud SDK, RESTful APIs, and SSH access.
Initialization Actions
Run initialization actions to install or customize the settings and libraries you need when your cluster is created.
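Initialization actions are scripts staged in Cloud Storage that run on each node at creation time; the bucket and script names below are hypothetical:

```shell
# Run a custom install script on every node as the cluster is created.
gcloud dataproc clusters create my-cluster \
    --region=us-central1 \
    --initialization-actions=gs://my-bucket/install-deps.sh \
    --initialization-action-timeout=10m
```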
Optional Components
Use optional components to install and configure additional components on the cluster. Optional components are integrated with Cloud Dataproc components, and offer fully configured environments for Zeppelin, Druid, Presto, and other open source software components related to the Apache Hadoop and Apache Spark ecosystem.
Custom Images
Cloud Dataproc clusters can be provisioned with a custom image that includes your pre-installed Linux operating system packages.
Flexible Virtual Machines
Clusters can use custom machine types and preemptible virtual machines to make them the perfect size for your needs.
Component Gateway and Notebook Access
Cloud Dataproc Component Gateway enables secure, one-click access to Cloud Dataproc default and optional component web interfaces running on the cluster.
Workflow Templates
Cloud Dataproc workflow templates provide a flexible and easy-to-use mechanism for managing and executing workflows. A Workflow Template is a reusable workflow configuration that defines a graph of jobs with information on where to run those jobs.
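A minimal template sketch, assuming placeholder template, cluster, and script names: jobs are added with step IDs, and `--start-after` declares dependencies to form the job graph.

```shell
# Create a template with a managed (ephemeral) cluster that is
# provisioned for each run and torn down afterward.
gcloud dataproc workflow-templates create my-template --region=us-central1
gcloud dataproc workflow-templates set-managed-cluster my-template \
    --region=us-central1 \
    --cluster-name=my-workflow-cluster \
    --num-workers=2

# Two PySpark jobs; "train" runs only after "prepare" succeeds.
gcloud dataproc workflow-templates add-job pyspark gs://my-bucket/prepare.py \
    --step-id=prepare --workflow-template=my-template --region=us-central1
gcloud dataproc workflow-templates add-job pyspark gs://my-bucket/train.py \
    --step-id=train --start-after=prepare \
    --workflow-template=my-template --region=us-central1

# Execute the whole graph.
gcloud dataproc workflow-templates instantiate my-template --region=us-central1
```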

Cloud Dataproc Pricing

Cloud Dataproc incurs a small incremental fee per virtual CPU in the Compute Engine instances used in your cluster.¹

Available regions: Iowa (us-central1), Oregon (us-west1), Northern Virginia (us-east4), South Carolina (us-east1), Montréal (northamerica-northeast1), São Paulo (southamerica-east1), Belgium (europe-west1), London (europe-west2), Netherlands (europe-west4), Zürich (europe-west6), Frankfurt (europe-west3), Sydney (australia-southeast1), Mumbai (asia-south1), Hong Kong (asia-east2), Taiwan (asia-east1), Tokyo (asia-northeast1), Osaka (asia-northeast2)
Machine Type            Price basis
Standard Machines       1-64 virtual CPUs
High Memory Machines    2-64 virtual CPUs
High CPU Machines       2-64 virtual CPUs
Custom Machines         Based on vCPU and memory usage
If you pay in a currency other than USD, the prices listed in your currency on Cloud Platform SKUs apply.

¹ Cloud Dataproc incurs a small incremental fee per virtual CPU in the Compute Engine instances used in your cluster while the cluster is operational. Other resources used by Cloud Dataproc, including Compute Engine network, BigQuery, and Cloud Bigtable, are billed as they are consumed. For detailed pricing information, view the pricing guide.

Featured Blogs

Read the latest blogs to better understand open source data processing in the cloud

Highlights from Next ’19

Watch how customers use Cloud Dataproc to lower costs and make data-driven decisions in their organizations.

Cloud Dataproc's Newest Features
How Customers Are Migrating Hadoop to Boogle Cloud Platform
Democratizing Dataproc

Get started

Learn and build

New to GCP? Get started with any GCP product for free with a $300 credit.

Need more help?

Our experts will help you build the right solution or find the right partner for your needs.

Products listed on this page are in alpha, beta, or early access. For more information on our product launch stages, see here.

Cloud AI products comply with the SLA policies listed here. They may offer different latency or availability guarantees from other Boogle Cloud services.
