The question is far too broad and context-dependent. You're never going to get a single answer to it.
Sometimes, the rules add more optimization potential: `restrict` technically exists in C, but in Rust the equivalent guarantee applies to (almost) every reference. Sometimes, the rules let you be more confident that a trickier, faster design will stay maintainable over time, so even if such a design is possible without these rules, you may not be able to pull it off in practice (Stylo being an example).
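A minimal sketch of the "`restrict` on every reference" point, assuming a hypothetical helper function: because one parameter is an exclusive `&mut` and the other is a shared `&`, the compiler knows they can't alias, so it's free to keep `*y` in a register across the write to `*x`. The equivalent C code would need `restrict` to make the same promise.

```rust
// Because `x` is exclusive and `y` is shared, they cannot alias, so the
// compiler may assume the write through `x` cannot change `*y`.
fn write_then_read(x: &mut i32, y: &i32) -> i32 {
    *x = 42;
    *y // no reload required: *y can't have been touched by the store above
}

fn main() {
    let mut a = 1;
    let b = 7;
    assert_eq!(write_then_read(&mut a, &b), 7);
    assert_eq!(a, 42);
}
```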
Sometimes, they may result in slower code. Maybe Rust's type system could help you express a design, but it's too tough for you, or simply not worth the effort, so you make a copy instead of using a reference. Maybe the compiler isn't great at compiling away an abstraction, and you end up with slower code than you otherwise would have.
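A sketch of that "copy instead of reference" escape hatch, with a made-up `Config` type: when the lifetimes of a borrow-based design get awkward, cloning is the easy way out, at the cost of an extra allocation.

```rust
// Hypothetical example: rather than thread a `&Config` through the
// program and satisfy the borrow checker, we take a clone.
#[derive(Clone)]
struct Config {
    name: String,
}

fn main() {
    let cfg = Config { name: "prod".to_string() };
    let snapshot = cfg.clone(); // extra allocation, but simpler lifetimes
    assert_eq!(snapshot.name, "prod");
}
```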
And that's before you get into complexities like one person saying "I see Rc&lt;RefCell&lt;T&gt;&gt; all the time in Rust code" and another replying "that doesn't make sense, I never see that pattern in code".
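For readers who haven't met the pattern being argued about: `Rc<RefCell<T>>` moves both the sharing (reference counting) and the mutability check (borrow flags) to runtime, which is exactly the kind of cost the two camps disagree about.

```rust
use std::cell::RefCell;
use std::rc::Rc;

fn main() {
    // Shared ownership of mutable data, checked at runtime instead of
    // compile time.
    let shared = Rc::new(RefCell::new(vec![1, 2]));
    let alias = Rc::clone(&shared);     // bumps a refcount
    alias.borrow_mut().push(3);         // runtime-checked exclusive borrow
    assert_eq!(*shared.borrow(), vec![1, 2, 3]);
}
```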
I'd say it mostly applies to manual optimization, when we're restructuring our program.
If the situation calls for a B-tree, the borrow checker loves that. If the situation calls for some sort of intrusive or self-referential data structure (like in https://lwn.net/Articles/907876/), then you might have to retreat to a different data structure which could incur more bounds checking, hasher costs, or expansion costs.
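One common retreat from a pointer-based self-referential structure, sketched with a made-up `Node` type: store the nodes in a `Vec` and link them by index. This sidesteps the borrow checker entirely, but every index lookup may carry a bounds check, which is the kind of cost the comment above is pointing at.

```rust
// Index-based links instead of references: no lifetimes to fight,
// but `arena[i]` is a bounds-checked lookup.
struct Node {
    value: i32,
    next: Option<usize>, // index into the arena, not a pointer
}

fn main() {
    let mut arena: Vec<Node> = Vec::new();
    arena.push(Node { value: 10, next: None });    // index 0
    arena.push(Node { value: 20, next: Some(0) }); // index 1 -> index 0
    let head = &arena[1];
    let next_value = head.next.map(|i| arena[i].value);
    assert_eq!(next_value, Some(10));
}
```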
It's probably not worth worrying about most of the time, unless you're in a very performance-sensitive situation.
There can be no answer. Research is ongoing, and smart people are actively trying to make optimizers better, so even if I gave a 100% correct answer now (which would be pages long), a new commit a minute later would change the rules. Sometimes someone discovers that what we thought was safe isn't safe in some obscure case, and we are forced to stop applying some optimization. Sometimes an optimization is a compromise, and we decide that spending a couple of extra CPU cycles is worth it because of some other gain (a single CPU cycle is often impossible to measure in the real world, since things like caches tend to dominate benchmarks, so you can make this compromise many times before the total adds up to something you can measure).
The short answer for those who don't want details: it is unlikely you can measure a difference in real-world code, assuming good clean code with the right algorithm.
Without directly answering your question, it's worth noting that there are also additional optimizations made available by Rust that are not easily accessible in C/C++ (mostly around stronger guarantees the Rust compiler is able to make about aliasing).
However, what you can say is that the borrow checker works like a straitjacket for the programmer, leaving them less able to focus on other things like performance issues, high-level data leaks (e.g. a map that keeps accumulating values that are never removed), or high-level safety issues.
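The "high-level leak" mentioned above can be made concrete with a toy cache: the borrow checker is perfectly happy with this code, because nothing here is memory-unsafe, yet the map grows without bound.

```rust
use std::collections::HashMap;

fn main() {
    // A cache with no eviction policy: memory-safe, borrow-checker-clean,
    // and still an unbounded "leak" in the high-level sense.
    let mut cache: HashMap<u64, String> = HashMap::new();
    for id in 0..1000 {
        cache.insert(id, format!("session-{id}"));
        // entries are never removed
    }
    assert_eq!(cache.len(), 1000);
}
```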
You can also say that the borrow checker works like a helpful editor, double-checking your work, so that you can focus on the important details of performance issues, safety issues, and such, without needing to waste brain power on the low-level details.
The point is that the compiler helps you “read” it. This takes mental effort off of you.
I agree that not everyone thinks this is true, but this is my experience. I do not relate to the compiler as a straitjacket. I relate to it as a helpful assistant.
I think it’s generally accepted that writing code is nearly universally easier than reading code, in any language. That aside, getting a mechanical check on memory safety for the price of some extra language verbosity is obviously worth it IMO.
By the same token, it is common to see criticisms of the complexity of templates in C++, but templates are the cornerstone of “Modern C++” and many libraries could not exist without them.
GC has little to do with it. The borrow checker as a developer tool has much more to do with preventing concurrency bugs and unexpected mutation than it does with memory management.
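A sketch of that "developer tool" claim about concurrency: sharing mutable state across threads simply won't compile unless it goes through a synchronized type, so the data-race version of this program is rejected before it ever runs. `Arc<Mutex<_>>` is one standard way to satisfy the checker.

```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // Without the Mutex, concurrent `+= 1` on shared state is a compile
    // error, not a latent data race.
    let counter = Arc::new(Mutex::new(0));
    let handles: Vec<_> = (0..4)
        .map(|_| {
            let c = Arc::clone(&counter);
            thread::spawn(move || {
                *c.lock().unwrap() += 1;
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    assert_eq!(*counter.lock().unwrap(), 4);
}
```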
"As a developer tool" is doing some work in that sentence though. As a language implementation characteristic, the checker can help inform (or, more accurately, ensures that code is written in a way that informs) memory management decisions.
What performance? That’s not a single thing. Do you pay in throughput or latency?
It certainly has a price, but it is waaay too overblown in many discussions. What it mostly entails is a slightly larger p99 latency. Where that actually matters is another question entirely.
It seems like I see this opinion often and every time there are tons of people on both sides who seem sure they are correct.
What are the limitations for optimization? Does unsafe Rust really force those?