Yes, they have their own query engine, but the idea is the same: separate the storage and query layers, scale the query layer on demand, and charge for it. I like their approach, but not everyone is willing to depend on a third-party company for their company data.
The motivation is rather different. If you ask Redshift users, the motivation is mostly that they're already using AWS and depend on its services. That's the case for GCP as well. I would love to hear the Google side of the story, but AWS customers don't usually use BigQuery for their analytics stack because, as the author explained, it's not that easy. That's why BigQuery isn't getting much traction from outside, even though it's a cool technology.
We all agree that Redshift stops scaling at some point and that people switch to other technologies when they need to, but that's not the point. Creating a Redshift cluster is dead easy, and getting data into it is also not complex compared to other solutions if you're already an AWS user.
The case for Snowflake is different: they also use AWS, but they manage the services for you. They made a smart move by storing the data in their customers' S3 buckets so that customers feel they "own" the data, but that's only possible because AWS allows it to work that way.
I believe AWS doesn't try to make Redshift cost-efficient for large volumes of data because they already have the advantage of vendor lock-in and make money from large enterprise customers processing billions of data points in Redshift. That's why there are so many Redshift monitoring startups out there promising to cut your costs.
On the other hand, AWS is smart enough to build a Snowflake-like solution when its Redshift users start switching to BigQuery and Snowflake, and companies such as Snowflake need to be prepared for when that day comes. The cloud is killing everyone, including us.
Snowflake has its own storage unless you opt for the enterprise version, which lets you store the data in your own account. They also use their own table format (like BigQuery), but you can always export your data.
It's true that AWS has Redshift Spectrum (and Athena) for more scalable querying across S3, but I don't think that poses a big risk to a company that provides a focused offering on top. Snowflake is very well capitalized, with close to $500M in investment and plenty of customers, so I wouldn't worry about them going out of business.
I especially like them because they have the elastic compute style of BigQuery but charge for compute time rather than data scanned, which is a much more effective billing model than anything else out there.
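To make the difference between the two billing models concrete, here's a rough back-of-the-envelope sketch. All prices below are invented placeholders for illustration, not actual vendor rates; the point is only that a query which scans a lot of data but finishes quickly can cost very different amounts under the two models.

```python
# Hypothetical comparison of scan-based vs compute-based billing.
# The $/TB and $/node-hour figures are made up, not real vendor prices.

def cost_per_scan(tb_scanned: float, price_per_tb: float = 5.0) -> float:
    """Scan-based model (BigQuery-style): pay per byte read, regardless of runtime."""
    return tb_scanned * price_per_tb

def cost_per_compute(node_hours: float, price_per_node_hour: float = 2.0) -> float:
    """Compute-based model (Snowflake-style): pay per unit of warehouse time."""
    return node_hours * price_per_node_hour

# Example: a query that scans 10 TB but finishes in 6 minutes (0.1 h) on 4 nodes.
scan_cost = cost_per_scan(10)              # 10 TB * $5/TB = $50.00
compute_cost = cost_per_compute(4 * 0.1)   # 0.4 node-hours * $2 = $0.80
print(f"scan-based: ${scan_cost:.2f}, compute-based: ${compute_cost:.2f}")
```

With these (made-up) numbers the compute-based bill is a tiny fraction of the scan-based one for that query, while a slow query over a small table would flip the comparison the other way.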
What's the focused offering they provide? Is their system more scalable, more efficient, or cheaper, or are there technical limitations in the AWS products? Personally, I don't think the $500M matters if their customers (I'm assuming most of their revenue comes from enterprise customers, because that's usually how it works) churn within a few years.
A more focused data warehouse product that is priced on a better model and is more scalable, more efficient, and cheaper than any of the existing options. $500M means they'll be around longer than most of the startup customers that try them.
Anyway, at this point I'm just repeating myself, so I suggest you actually try them if you care about a better model. It works for us at 600B rows of data, which was too expensive with BigQuery and too slow and complicated with Redshift/Spectrum.
Sounds fair. I'm convinced to try out their service; it's good to see they've also added pricing to their website. Let's see how it compares to Presto, since they address a similar use case.