
PostgreSQL and Ruby on Rails. https://andyatkinson.com/


Any posts on this? Are there bulk data loads that make table stats more stale and affect plans? I’m wondering what could suddenly cause a large change in plan selection that might be a contributing factor.


> bulk data loads that make table stats more stale

This is the usual culprit (cure: "ANALYZE tablename").
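A minimal sketch of that cure, assuming a hypothetical "orders" table (the table name and the stats lookup are just for illustration):

```sql
-- Refresh the planner's statistics for one table
ANALYZE orders;

-- Check when the table's stats were last refreshed,
-- manually or by autovacuum
SELECT relname, last_analyze, last_autoanalyze
FROM pg_stat_user_tables
WHERE relname = 'orders';
```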

Collecting more samples ("ALTER TABLE ... ALTER COLUMN ... SET STATISTICS ...") may also be useful.
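For example, raising the per-column sample target above the default of 100 (again with hypothetical table/column names):

```sql
-- Collect more samples for a skewed column, then refresh stats
ALTER TABLE orders ALTER COLUMN customer_id SET STATISTICS 1000;
ANALYZE orders;
```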

"Extended Statistics" covers most(?) other cases: https://www.postgresql.org/docs/current/planner-stats.html#P... https://www.postgresql.org/docs/current/multivariate-statist...
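Extended statistics look roughly like this — a sketch assuming a hypothetical "addresses" table where city and zip_code are correlated:

```sql
-- Tell the planner these two columns are functionally dependent,
-- so it stops multiplying their selectivities independently
CREATE STATISTICS city_zip_stats (dependencies)
  ON city, zip_code FROM addresses;
ANALYZE addresses;
```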


If you’re talking about the built-in command line client, it’s psql. If you can’t remember the command name to launch it, or regularly type those other commands when you meant to type psql, you could add shell aliases that point to psql. :)
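Something like this in your shell rc file — the alias names here are just examples of common misspellings:

```shell
# In ~/.bashrc or ~/.zshrc: point habitual misspellings at psql
alias pgsql='psql'
alias postgre='psql'
```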

Learning any new CLI client is a bit daunting at first. With repetition and intention, I think the commands become very memorable, e.g. “describe tables” is “\dt”.



There is a way to provide some type of planner hints https://pghintplan.osdn.jp/pg_hint_plan.html
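With pg_hint_plan installed, a hint goes in a special comment before the query — a sketch with hypothetical table and index names:

```sql
/*+ IndexScan(orders orders_created_at_idx) */
SELECT *
FROM orders
WHERE created_at > now() - interval '1 day';
```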


Each of the bullets you listed has a very straightforward and memorable meta command that I use on a regular basis with psql. It may be worth learning them just for when you use Postgres. There is also built-in help. These can also be saved into your dot files so you don’t need to memorize them. Happy to show you if you’re interested!
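For reference, the most common ones (all run inside a psql session):

```
\?            -- help on all meta commands
\l            -- list databases
\dt           -- list tables ("describe tables")
\d tablename  -- describe one table: columns, indexes, constraints
\df           -- list functions
\x            -- toggle expanded (vertical) output
```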


You can find the SQL behind the meta commands. Example: https://dba.stackexchange.com/a/131031


That is not equivalent to "show create table" at all.


You can likely get the SQL for a meta command, and you could run the SQL from your preferred client if you don’t use psql. Here is one example: https://dba.stackexchange.com/a/131031
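psql can also show you that SQL itself: start it with -E (echo hidden queries), and every meta command prints the query it runs under the hood. The database name here is a placeholder:

```shell
# Print the SQL behind each meta command you run
psql -E -d mydb

# or toggle it inside an existing session:
#   \set ECHO_HIDDEN on
```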

I also highly recommend investing in psql skills though if you are a Postgres user.


Yes, that's the "100 lines of SQL to do the same" the previous poster mentioned (obviously they were exaggerating a bit and it's less than literally 100 lines, but it's pretty complex).


What’s the reason for Vitess, though? Postgres supports tables up to 32 TB, but hopefully you’re splitting them up using declarative partitioning in one or more ways well before that. If you have tables smaller than a TB and a large-memory server (>1 TB RAM), Postgres should run OK, right? I’d also imagine you’re splitting your database into multiple databases and multiple instances (the writers) well before that point, right?


Thanks for calling out table partitioning. Besides implementing it at one level, multiple levels can be used simultaneously (e.g. range and list). Tables can also be grouped and split out into their own database (aka functional or vertical sharding) and partitioned again there. This all takes more effort and investment, but keeps you on PostgreSQL. As you said, fillfactor can be tuned for more HOT updates. It can even be worth analyzing whether the updates could be turned into inserts written at a high rate (which don’t incur bloat), leaving fewer updates, made at a rate that doesn’t outrun vacuum.
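Multi-level declarative partitioning looks roughly like this — a sketch with a hypothetical "events" table, range-partitioned by month and sub-partitioned by list on region:

```sql
-- Top level: range partition on time
CREATE TABLE events (
    id         bigint GENERATED ALWAYS AS IDENTITY,
    region     text NOT NULL,
    created_at timestamptz NOT NULL
) PARTITION BY RANGE (created_at);

-- Second level: one month, itself list-partitioned by region
CREATE TABLE events_2024_06 PARTITION OF events
    FOR VALUES FROM ('2024-06-01') TO ('2024-07-01')
    PARTITION BY LIST (region);

CREATE TABLE events_2024_06_us PARTITION OF events_2024_06
    FOR VALUES IN ('us');
```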

