Not really, though. That Google demo looks like a single-pass feed-forward neural network trained to perform style transfer: the textures and colors get replaced while the overall content stays the same.
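Roughly what I mean by "single-pass": the stylizer is just a trained conv net and inference is one forward call, no per-image optimization. A toy sketch (the architecture and weights here are made up, the demo's actual network isn't public):

```python
import torch
import torch.nn as nn

class StyleNet(nn.Module):
    """Toy feed-forward stylizer: one encoder/decoder pass over the photo."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 9, padding=4), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 9, padding=4), nn.Sigmoid(),
        )
    def forward(self, x):
        return self.net(x)

model = StyleNet().eval()            # weights would come from offline training
image = torch.rand(1, 3, 256, 256)   # stand-in for the user's photo
with torch.no_grad():
    stylized = model(image)          # one forward pass: textures/colors change
```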
This project seems to work by finding your image in the latent space of a GAN model, and then re-synthesizing a new image from that vector.
It's more like generating a whole new image that targets the overall look of the existing one, while jointly optimizing the generated image to look like it comes from a set of Renaissance art.
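The latent-space search I'm describing looks something like this, assuming some pretrained generator (the tiny `generator` below is a stand-in, not this project's model): you hold the GAN's weights fixed and run gradient descent on the latent vector until the synthesized image matches your photo.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Stand-in generator; a real system would load a GAN pretrained on
# Renaissance portraits (hypothetical -- the project's model isn't specified).
generator = nn.Sequential(
    nn.Linear(64, 3 * 32 * 32), nn.Tanh(),
    nn.Unflatten(1, (3, 32, 32)),
).eval()
for p in generator.parameters():
    p.requires_grad_(False)         # weights stay frozen; only z moves

target = torch.rand(1, 3, 32, 32) * 2 - 1     # stand-in photo in the generator's [-1, 1] range
z = torch.randn(1, 64, requires_grad=True)    # latent vector to optimize
opt = torch.optim.Adam([z], lr=0.05)

for step in range(200):             # gradient descent in latent space
    opt.zero_grad()
    recon = generator(z)
    loss = F.mse_loss(recon, target)  # pixel loss; real systems add perceptual terms
    loss.backward()
    opt.step()

result = generator(z.detach())      # re-synthesized image "nearest" the photo
```

That inner loop of hundreds of generator passes is exactly why this approach tends to be slow, which is also why I second-guess it below.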
Edit: on second thought, this tool might be running too quickly to be doing an optimization loop to find an image in latent space. It might just be vanilla style transfer done nicely. Hard to tell.