Pull Requests originate from a low-trust environment (open source) and bring very costly context switching when forced into high-trust environments (PRs reviewed by team members).
The low-level API could simply not allow SQL statements as strings, and instead provide separate functions to build the queries and statements.
It would provide entry points which could be used to ensure proper escaping and such, and would still allow for easily generating queries dynamically in the cases where that is needed.
Of course, it doesn't completely guard against Bobby Tables [1]; one could imagine someone including a run-time code generator and feeding it unprotected SQL as input.
But it should make it a lot more difficult, as injecting unprotected user data would be much more "unnatural", requiring one to go against the grain. Also, the "query_execute" function could raise an error if there's more than one statement, requiring a different function for batch execution.
Pseudo-codish example off the top of my head, for the sake of illustration:
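Something along these lines, using Python as the pseudo-code; every name here (`Query`, `select`, `where`, `build`, the `?` placeholder style) is invented for illustration, not an existing library:

```python
# Hypothetical sketch of a string-free query API: queries are assembled
# from structured parts, and user data only ever travels as bound
# parameters, never via string interpolation.

class Query:
    """Builds a single SELECT statement; values are never inlined."""

    def __init__(self, table):
        self.table = table
        self.columns = []
        self.conditions = []   # list of (column, operator) pairs
        self.params = []       # values kept separate, for binding

    def select(self, *columns):
        self.columns.extend(columns)
        return self

    def where(self, column, op, value):
        # Only a fixed set of operators is allowed; anything else errors out.
        if op not in ("=", "<", ">", "<=", ">=", "<>"):
            raise ValueError(f"unsupported operator: {op}")
        self.conditions.append((column, op))
        self.params.append(value)
        return self

    def build(self):
        sql = f"SELECT {', '.join(self.columns)} FROM {self.table}"
        if self.conditions:
            sql += " WHERE " + " AND ".join(
                f"{col} {op} ?" for col, op in self.conditions
            )
        return sql, self.params


q = Query("users").select("id", "name").where("age", ">=", 18)
sql, params = q.build()
# sql    == "SELECT id, name FROM users WHERE age >= ?"
# params == [18]
```

A real implementation would also validate the table and column identifiers (they are still interpolated above), and the execute step would refuse multi-statement input as described.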
This is part of what many people find so confusing. In most systems, a “role” is a group (or something closely resembling a group), not a user. The weird terminology confuses beginners.
Oh, it's really worrying that you post a link to an SO page as the recommended 'cookbook' for something that's referred to as a powerful feature. And both of the answers only have a single upvote each. Seems like there's a serious gap here.
e.g. in the following example the user `user` has been granted all permissions by the `postgres` role:
`user=arwdDxt/postgres`
If the “grantee” part is empty for a given object, it can mean the object has default owner privileges (all privileges), or it can mean privileges granted to the PUBLIC role (every role that exists):
`=r/postgres`
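The letter soup in those aclitem strings maps to specific privileges per the PostgreSQL GRANT documentation (a = INSERT, r = SELECT, w = UPDATE, d = DELETE, D = TRUNCATE, x = REFERENCES, t = TRIGGER, and a trailing `*` means WITH GRANT OPTION). A small Python decoder, purely for illustration:

```python
# Decode a Postgres aclitem string like "user=arwdDxt/postgres".
# Letter meanings come from the PostgreSQL GRANT documentation.
ACL_LETTERS = {
    "a": "INSERT", "r": "SELECT", "w": "UPDATE", "d": "DELETE",
    "D": "TRUNCATE", "x": "REFERENCES", "t": "TRIGGER",
    "X": "EXECUTE", "U": "USAGE", "C": "CREATE",
    "c": "CONNECT", "T": "TEMPORARY",
}

def decode_aclitem(item):
    grantee_part, grantor = item.split("/")
    grantee, letters = grantee_part.split("=")
    privs = []
    for ch in letters:
        if ch == "*":  # '*' marks WITH GRANT OPTION on the preceding privilege
            privs[-1] += " (with grant option)"
        else:
            privs.append(ACL_LETTERS[ch])
    # An empty grantee before '=' means PUBLIC.
    return (grantee or "PUBLIC", grantor, privs)

print(decode_aclitem("user=arwdDxt/postgres"))
# ('user', 'postgres', ['INSERT', 'SELECT', 'UPDATE', 'DELETE',
#                       'TRUNCATE', 'REFERENCES', 'TRIGGER'])
print(decode_aclitem("=r/postgres"))
# ('PUBLIC', 'postgres', ['SELECT'])
```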
Also, it's confusing when the public schema is used. You have CREATE permission on the schema, so when the tables are created by the same user you select data with, you have owner permissions out of the box.
> What makes it complex is that there are 3 layers of objects (Database, Schema, Tables) and also implicit grants given to DB object owners
The main privilege system includes Columns, as well as Databases/Schemas/Tables.
You can SELECT from a table if you have been granted SELECT on the table, or if you have been granted it on the specific columns used in your query. ("A user may perform SELECT, INSERT, etc. on a column if they hold that privilege for either the specific column or its whole table. Granting the privilege at the table level and then revoking it for one column will not do what one might wish: the table-level grant is unaffected by a column-level operation." [1])
There's also a system of Row Security Policies [2].
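The table-vs-column caveat quoted above can be made concrete (role names `alice` and `bob` and the table are hypothetical):

```sql
-- Column-level grant: alice can read only these two columns.
GRANT SELECT (id, email) ON users TO alice;

-- This does NOT do what one might wish: the table-level grant
-- is unaffected by the column-level revoke.
GRANT SELECT ON users TO bob;
REVOKE SELECT (salary) ON users FROM bob;  -- bob can still read salary
```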
I've seen funny things happen when developers think their GRANT is doing what ALTER DEFAULT PRIVILEGES does, and a day later break production because their user doesn't have access to the new tables.
Remove any "if" from your comment and it seems like you understood what would fail! There are more ways in which misunderstanding GRANT vs default privileges will create issues as well.
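The distinction, roughly (schema and role names are made up for the example):

```sql
-- GRANT affects only objects that exist right now:
GRANT SELECT ON ALL TABLES IN SCHEMA app TO readonly;
-- A table created tomorrow is NOT covered by the grant above.

-- For objects created in the future you need default privileges,
-- which apply to objects subsequently created by the role running this
-- (or by the role named with FOR ROLE):
ALTER DEFAULT PRIVILEGES IN SCHEMA app
    GRANT SELECT ON TABLES TO readonly;
```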
I think you probably don’t want the application user to inherit from the superuser you may use for migrations since some migrations require a superuser.
IMO, the permission management of MS SQL Server is the best one around. It gets all the basics right, without any big issues. Other DBMSs should copy it, or use it as a starting point if they want to go further (and there is plenty of stuff to improve on it).
TL;DR: the container objects and the contained ones all share the same kinds of permissions. Permissions of the container are applied to the contained unless explicitly changed.
So, if you grant SELECT on the schema dbo to a user, that user will get SELECT on all tables there. If you want to exclude some table, you revoke (or deny) SELECT on that specific table. And there is both metadata to discover where a specific privilege comes from and specific commands that edit the privileges at a specific level.
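In T-SQL that looks roughly like this (the role name and table are hypothetical):

```sql
-- Permission on the container flows to the contained objects:
GRANT SELECT ON SCHEMA::dbo TO app_reader;   -- covers every table in dbo

-- Carve out an exception at the contained level; DENY at the
-- object level overrides the schema-level grant:
DENY SELECT ON dbo.Payroll TO app_reader;

-- Discover the effective permissions on a specific object:
SELECT * FROM fn_my_permissions('dbo.Payroll', 'OBJECT');
```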
You can probably search around for references on distributed data management. But basically, when your data is split across different instances, you can no longer do joins or transactions or rely on the database for ACID guarantees. All these things must instead be handled in code, which is exactly how you would work with data using a NoSQL database as well.
Varies by actual DB but: with this architecture, if the DBs are NoSQL then you didn't lose as much by splitting up the databases, but if they were SQL then among other things you lose transactions across those DBs (or now need costly distributed transactions), effectively making the collective SQL system behave more like a NoSQL DB.
If I remember correctly, for ZFS it was some stuff about cache size and RAM usage, since we were in a VM and couldn't be too greedy. Something about a NVME drive needing slightly different settings. I didn't spend much time on this. For Postgres it was so so so many things, I played with it on and off for months, benchmarking certain key queries and such. And I read a LOT of blog posts about how to make Postgres and ZFS work together ideally, one thing I remember in particular was waffling back and forth about logbias=throughput vs logbias=latency. Just google "postgres zfs logbias" and you'll get MANY conflicting opinions. (edit: when I google it now in incognito, my reddit post is at the top lol)
But for what I actually changed, off the top of my head, I set the recordsizes in ZFS large enough that Postgres could safely (because of ZFS CoW) have full_page_writes off, and combined with synchronous_commit off, that really sped up the overall system and made the WAL logs much smaller. After looking just now at postgresql.conf, various other things were tweaked, such as seq_page_cost, random_page_cost, effective_cache_size, effective_io_concurrency, max_worker_processes, default_statistics_target, dynamic_shared_memory_type, work_mem, maintenance_work_mem, shared_buffers, but those were not quite as important. (plus some uninteresting tweaks to WAL behavior, since we had replicas that got WAL logs shipped every few minutes with rsync)
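The shape of those settings, as a heavily hedged illustration (the dataset name and every value here are placeholders; benchmark on your own hardware, as the conflicting logbias opinions suggest):

```
# ZFS dataset holding the Postgres data directory (values illustrative):
zfs set recordsize=32K tank/pgdata      # larger than Postgres's 8K page
zfs set logbias=throughput tank/pgdata  # the contested setting; try both

# postgresql.conf — full_page_writes=off is only reasonable because
# ZFS copy-on-write never leaves a torn page on disk:
full_page_writes = off
synchronous_commit = off   # trades durability of the last few commits for speed
```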