Google Cloud Platform

At work I’ve been instructed to get MySQL working in the cloud. I scoffed, whined and asked “Do we really need SQL?”. The Peanuts Gang teacher explained why and I didn’t really follow but I think I heard the words “business needs” and “blah blah blah”. If I had my druthers I’d ditch SQL for a key/value store but I digress.

Google Cloud SQL

I tend to prefer GCP over AWS because, imo, it’s the better infrastructure for containers. My understanding is that the AWS MySQL solution (RDS/Aurora) is probably better, but I’m not familiar with it, so this is not a compare/contrast with AWS. I took the Google Cloud SQL 2nd generation beta for a test drive with a focus on minimizing Ops, maximizing reliability, and flexible replication; here’s what I found.

TL;DR Pros and Cons

GCP Hosted MySQL 2nd Generation beta has some great features but there are important MySQL incompatibilities to be aware of.

Pros


  • Easy setup
  • Automatic backups
  • Fast and easy read replication within region
  • Easy failover* within region
  • Online SSD or HDD storage size increase (decrease not available)
  • IPv6

failover*: server only fails over if the entire GCP zone is down. Failures of smaller magnitude can still occur without triggering a failover. This could be a pro or con depending on the failure type.

Cons


  • Read replicas not possible between regions
  • Replicas with non-google-hosted masters or slaves not possible
  • No private IP (so no Google Load Balancer)
  • Downtime required for occasional maintenance
  • Downtime for changing cpu or memory
  • No custom server TLS cert
  • No way to force TLS and not force client certs
  • No Roadmap

Cons (in depth)

The GCP Cloud SQL documentation does a good job of explaining the pros, so you can read about those there. It does not do a good job of explaining the cons, so I’m sure their marketing people are patting themselves on the back, oblivious to the engineer spittle.

Impossible External Slave/Master

I tried two approaches to this: all inside of GCP and hybrid in/out.

Impossible Inter-Regional Replication, aka: hosted slave incompatible with MySQL

First, the Web UI does not allow you to configure a replica outside the master’s region. I’d assume the API call would barf if you tried the same, though I didn’t try.

So I created a SQL instance in the US and another in Europe, both as hosted instances. After some simple configuration, I logged into the “slave” instance and issued the CHANGE MASTER TO MySQL command, which failed:

Access denied; you need (at least one of) the SUPER privilege(s) for this operation

And as is documented in the FAQ, the Google fork of MySQL has the SUPER privilege stripped out. So you just can’t manually configure a hosted instance to be a MySQL-compatible slave, period.
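
For reference, wiring up a replica by hand normally looks something like the following (the hosts and credentials here are placeholders, and MASTER_AUTO_POSITION assumes GTID is enabled on both ends); this is exactly the kind of statement the missing SUPER privilege blocks:

```shell
# Run against the would-be slave. All names and passwords are placeholders.
# On a hosted instance this fails with the SUPER privilege error above.
mysql -h SLAVE_HOST -u admin -p -e "
  CHANGE MASTER TO
    MASTER_HOST='MASTER_HOST',
    MASTER_PORT=3306,
    MASTER_USER='repl',
    MASTER_PASSWORD='repl-password',
    MASTER_AUTO_POSITION=1;  -- requires GTID replication
  START SLAVE;"
```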

Impossible Hybrid Hosted/Non-Hosted Replication, aka: hosted master incompatible with MySQL

So what about an external MySQL slave and a GCP hosted master? I created a test MySQL server like so:

docker run -p 1234:3306 --name mysql-slave --rm -e MYSQL_ROOT_PASSWORD=password mysql --server-id=1234

And then ran the MySQL commands:

DROP DATABASE performance_schema;

START SLAVE USER='user' PASSWORD='password';

And we get error:

2016-06-09T23:18:35.741239Z 4 [ERROR] Slave SQL for channel '': Error 'You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'CERTIFICATES' at line 1' on query. Default database: ''. Query: 'FLUSH CERTIFICATES', Error_code: 1064

2016-06-09T23:18:35.741360Z 4 [Warning] Slave: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'CERTIFICATES' at line 1 Error_code: 1064

FLUSH CERTIFICATES is not a MySQL command, so we’ve clearly run into another incompatibility between Google’s fork of MySQL and MySQL proper.

No Private IP

GCP has a really nice SDN (software-defined network). By default your entire project shares a private IPv4 network no matter which part of the world your VMs are located in, so normally I can communicate between the US and Europe within the RFC 1918 network. Each host normally gets a /24 private IP range, which is really useful for e.g. Kubernetes.

So it’s confounding to discover that Google hosted SQL instances only have a WAN IP. This means you can’t use a Google load balancer in front of your hosted read replicas, because those will only point at private IPs (or something like this; I don’t know how LB routing works behind the scenes).

This also means that for application connections you have to use one or more of:

  • IP Whitelisting (hint: don’t pick only this one)
  • Credentials
  • Client Certificates

My recommendation is that if you don’t force client certificates, you at least ensure that your clients are enforcing TLS. With strong passwords you might be able to get away with whitelisting. Note that you can require TLS for users with:

CREATE USER foobazzer IDENTIFIED BY 'password here' REQUIRE SSL;
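
On the client side, how you enforce TLS depends on the client version; with a stock mysql 5.7.11+ client you can refuse plaintext outright (the host below is a placeholder):

```shell
# --ssl-mode=REQUIRED (mysql 5.7.11+) aborts if TLS cannot be negotiated.
# Older clients only had --ssl, which merely *permits* TLS rather than requiring it.
mysql --host=INSTANCE_IP --user=foobazzer -p --ssl-mode=REQUIRED
```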

The workaround GCP offers is to help you set up a MySQL proxy within your private network. This is an okay workaround, but it has two major drawbacks:

  • More Ops (more operations work)
  • Less performance (an extra network hop)
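
For the record, the proxy setup looks roughly like this (the project, region, and instance names are made up; check the Cloud SQL docs for the exact invocation):

```shell
# Hypothetical project/region/instance; substitute your own.
# The proxy listens locally and tunnels traffic to the instance's WAN IP.
./cloud_sql_proxy -instances=my-project:us-central1:my-instance=tcp:3306 &

# Applications then connect to localhost instead of the instance's public IP.
mysql --host=127.0.0.1 --port=3306 -u appuser -p
```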

Hidden Roadmap

Since secrecy seems to be a cloud business strategy, we customers have no idea which of these cons might be fixed or mitigated in the GA (general availability) release. We don’t even know if there is going to be a GA release. Being unable to plan, to know which constraints are permanent and which you only need to mitigate for a short time, is really obnoxious.

TLS Oddities

No Custom TLS Certs

I did not see any way to upload your own TLS cert. This means that you cannot use the mysql client option --ssl-verify-server-cert.
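
For comparison, this is roughly what server verification would look like if a custom cert were possible; with only the generated CA there is nothing meaningful for the option to check (the host is a placeholder):

```shell
# What you'd want to run, but can't meaningfully do against Cloud SQL's
# auto-generated CA: verify the server cert matches the host you dialed.
mysql --host=INSTANCE_IP --user=foobazzer -p \
  --ssl-ca=server-ca.pem --ssl-verify-server-cert
```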

Enforcing TLS or Client Certs?

The Web UI below confuses the TLS options.

(Screenshot: forcing TLS and client certs are different; “Allow only SSL Connections” should say “Allow only connections with client certificates”.)

There is no way via the UI to require TLS connections, but not require client certificates.

As mentioned above, to require TLS without client certs you can require SSL when you create the user:

CREATE USER foobazzer IDENTIFIED BY 'password here' REQUIRE SSL;


This is the CA cert generated automatically when the instance is created:

        Version: 3 (0x2)
        Serial Number: 0 (0x0)
    Signature Algorithm: sha1WithRSAEncryption
        Issuer: CN=Google Cloud SQL Server CA, O=Google, Inc, C=US
            Not Before: Jun  3 18:49:47 2016 GMT
            Not After : Jun  3 18:50:47 2018 GMT
        Subject: CN=Google Cloud SQL Server CA, O=Google, Inc, C=US
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                Public-Key: (2048 bit)
                Exponent: 65537 (0x10001)
        X509v3 extensions:
            X509v3 Basic Constraints: critical
                CA:TRUE, pathlen:0
    Signature Algorithm: sha1WithRSAEncryption

Notice anything strange? GCP is signing these certs with old SHA-1. AFAIK this is not actually important in this case, because these TLS certificates can only be used for encryption and cannot be used for server verification anyway. Still, it’s the sort of thing that might make you do a double take.
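
You can check the signature algorithm yourself with openssl; the snippet below generates a throwaway self-signed cert to stand in for server-ca.pem, since the extraction command is the same either way:

```shell
# Generate a throwaway self-signed cert standing in for server-ca.pem,
# then pull out its signature algorithm the same way you would for the real one.
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/key.pem \
  -out /tmp/ca.pem -days 1 -subj "/CN=example" 2>/dev/null
openssl x509 -in /tmp/ca.pem -noout -text | grep -m1 'Signature Algorithm'
```

Run against the real server-ca.pem this prints sha1WithRSAEncryption, as shown in the dump above.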

Downtime for Changing CPU/Memory

On a smaller instance without much data on it, I changed the machine size while querying the instance. It stayed up for about a minute while, I assume, it brought up a newly sized VM. Then the machine did not return queries for about 2.5 minutes, I assume while syncing, before coming back up.

I might just be wanting a pony here, but it would be outstanding if this didn’t require any downtime. The time you’re most likely to care about scaling vertically is when you’re handling a larger volume of queries than usual, which is exactly when downtime is most painful.

Downtime for Required Maintenance

You cannot control whether your instance will go down for maintenance. This means that your application(s) must tolerate some downtime.

At the moment, you may select the day of week and hour of day to restrict maintenance to, e.g. Sunday between 2 and 3 AM local time.
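
The same window can be set from the CLI; the instance name below is a placeholder, and the flag names are from gcloud’s sql surface, so verify them against your SDK version:

```shell
# Restrict maintenance to Sundays in the 2 AM hour (instance name is made up).
gcloud sql instances patch my-instance \
  --maintenance-window-day=SUN --maintenance-window-hour=2
```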

Downtime Tolerance

For many reasons you should write your applications to be tolerant of downtime. Queueing writes while fanning out reads is a really good scaling/tolerance strategy.

But if your application was not constructed this way, GCP hosted SQL may not be a good fit for you.

Slightly Broken Web UI

A couple bits of their Web UI were broken when I tested.

The “Download server-ca.pem” link would download an empty file. The only way to download the cert was to first create a client key and download it from that popup.

The menu item “Access Control -> Users” would yield “Failed to load”. It’s not possible to see the users on the SQL box via the Web UI.
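
Both gaps can be worked around from the CLI (the instance name is a placeholder; exact flags may vary by SDK version):

```shell
# List users when "Access Control -> Users" fails to load.
gcloud sql users list --instance=my-instance

# Fetch the server CA when the download link yields an empty file.
gcloud sql instances describe my-instance \
  --format='value(serverCaCert.cert)' > server-ca.pem
```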

This is the sort of thing that might be obnoxious if you’re not skilled at using their API or CLI.

Conclusion


The pros of GCP Hosted SQL are pretty impressive and the cons can hurt if you’re not prepared.

If you do not have sophisticated replication needs, this 2nd generation beta makes managing SQL easy. But this just won’t work for non-trivial topologies.

