Sure, when we solve AI alignment (and I mean x-risk / Eliezer-style AI alignment, not the outrage-minimizing political correctness that's being called "alignment" by OpenAI and the others).