I've been struggling to find a simple, straightforward answer on how to choose Google Cloud Spanner compute capacity. Should I go with nodes or with processing units?
Short answer: start small and increase capacity as needed. Scale up when CPU utilization exceeds 65% of dedicated compute capacity.
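For completeness, capacity is set per instance and can be adjusted later; a sketch using the gcloud CLI (the instance name, description, and config are placeholders):

```shell
# Create a regional instance with the minimum 100 PU
gcloud spanner instances create my-instance \
  --config=regional-us-central1 \
  --description="My instance" \
  --processing-units=100

# Scale up later, e.g. when CPU utilization trends above 65%
gcloud spanner instances update my-instance --processing-units=300
```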
- Compute capacity has no bearing on how many replicas Cloud Spanner will have (at least 3)
- 1000 Processing Units (PU) = 1 node; capacity can be added in 100 PU increments
- Configuration determines the number of replicas and their location
- Storage cost: $0.30 per GB/month
- Compute cost: ~$0.10/hour per 100 PU
- Minimal cost (regional, 100 PU): from ~$75.00/month (no data stored) to ~$204.00/month (410 GB of data). Prices as of October 14th, 2022
- Maximum storage is 409.6 GB per 100 PU. If the database reaches that limit you must add another 100 PU, otherwise writes are rejected
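The cost figures above can be sanity-checked with quick arithmetic. A minimal sketch, assuming the rounded rates quoted (~$0.10/hour per 100 PU, $0.30 per GB/month) and a 730-hour average month; actual prices vary by region, which is why these totals land slightly below the quoted ~$75/~$204:

```python
# Back-of-the-envelope Spanner monthly cost from the rates quoted above.
# Rates are approximate and region-dependent; 730 ~ average hours per month.
HOURS_PER_MONTH = 730

def monthly_cost(processing_units: int, stored_gb: float,
                 pu_hourly_per_100: float = 0.10,
                 storage_gb_month: float = 0.30) -> float:
    compute = (processing_units / 100) * pu_hourly_per_100 * HOURS_PER_MONTH
    storage = stored_gb * storage_gb_month
    return compute + storage

print(round(monthly_cost(100, 0), 2))      # ≈ 73.0  (empty database)
print(round(monthly_cost(100, 409.6), 2))  # ≈ 195.88 (at the 100 PU storage ceiling)
```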
Google recommends (in their Compute Capacity Guidance) starting with 100 PU and incrementing from there.
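Given the 409.6 GB-per-100-PU storage limit, the minimum capacity a database size requires is easy to derive. A sketch (the function name is my own):

```python
import math

# Minimum processing units for a given database size, based on the
# 409.6 GB-per-100-PU storage limit.
STORAGE_GB_PER_100_PU = 409.6

def min_processing_units(stored_gb: float) -> int:
    units = math.ceil(stored_gb / STORAGE_GB_PER_100_PU) * 100
    return max(units, 100)  # 100 PU is the smallest instance

print(min_processing_units(50))   # → 100
print(min_processing_units(410))  # → 200 (just over one 100 PU's worth of storage)
print(min_processing_units(820))  # → 300
```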
Pros:
- Granular increments of capacity based on app needs
- Query insights for performance improvements and lowering costs
- Strong consistency
- Performance (up to 10K reads/s, up to 2K writes/s per node)
- High availability with out of the box replication management
- Multiple databases per instance (separation of data)
Cons:
- Expensive, thus not really useful for small projects (e.g. Google Datastore is free up to a 1 GB database and 20,000 entity writes per day). If you need an SQL-like database, DigitalOcean's hosted PostgreSQL starts at $30.00/month for a high-availability cluster or $15.00/month for a single-node database
- Can't automatically scale up or down
- Documentation doesn't include examples of what kinds of queries are supported on 1/10th of a node, or what kind of querying might exceed the minimum capacity