Indeed, but here "re-identification" generally means a membership-inference style attack: you have an aggregated genomic dataset, you already have full genomic data for a target individual, and you use the aggregate to infer something you didn't know about that target, such as whether or not they participated in that study. Not to entirely minimize this sort of attack, but the NIH decided it was a sufficiently low risk that most of the sorts of datasets it applies to (like GWAS) are routinely shared with no access controls.
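For the curious, here's a toy sketch of that kind of membership-inference attack, loosely following the Homer et al. idea: compare the target's genotype to the study's allele frequencies versus a reference population's. Everything here is simulated (haploid genotypes, made-up frequencies), just to show why a member scores higher than a non-member:

```python
import random

random.seed(0)
N_SNPS, N_STUDY = 5000, 50

# reference population allele frequencies (made up for this simulation)
pop_freqs = [random.uniform(0.1, 0.9) for _ in range(N_SNPS)]

def sample_genome(freqs):
    # haploid for simplicity: allele 1 at SNP j with probability freqs[j]
    return [1 if random.random() < f else 0 for f in freqs]

# the "aggregated dataset": per-SNP allele frequencies within the study
study = [sample_genome(pop_freqs) for _ in range(N_STUDY)]
study_freqs = [sum(g[j] for g in study) / N_STUDY for j in range(N_SNPS)]

def membership_score(target):
    # positive when the target sits closer to the study frequencies
    # than to the reference population frequencies
    return sum(abs(t - p) - abs(t - s)
               for t, p, s in zip(target, pop_freqs, study_freqs))

member = study[0]                      # someone who was in the study
non_member = sample_genome(pop_freqs)  # someone who was not

print(membership_score(member) > membership_score(non_member))
```

With enough SNPs the member's score is reliably higher, which is exactly the "were you in this study?" inference the attack buys you.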
Are you asking about methods to improve the privacy of aggregated datasets? They don't seem to be very popular with people in the field, I think because they sharply curtail how the data can be used compared to having access to datasets with no strong privacy guarantees. The arguably more impactful recent shift is toward "trusted research environments", where you get to work with a particular dataset only in a controlled setting with actively monitored egress.
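The textbook example of such a method is the Laplace mechanism from differential privacy: release aggregate counts plus calibrated noise, trading accuracy for a privacy guarantee (which is part of why people grumble about it). A minimal sketch, assuming an allele-count query where one diploid participant can change the count by at most 2:

```python
import math
import random

random.seed(1)

def dp_allele_count(true_count, epsilon, sensitivity=2.0):
    # Laplace mechanism: add Laplace(0, sensitivity/epsilon) noise.
    # sensitivity=2 because adding or removing one diploid participant
    # changes an allele count by at most 2.
    u = random.random() - 0.5
    # inverse-CDF sample from Laplace(0, b) with b = sensitivity / epsilon
    b = sensitivity / epsilon
    noise = -b * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_count + noise

# smaller epsilon = stronger privacy guarantee = noisier release
print(dp_allele_count(137, epsilon=1.0))
print(dp_allele_count(137, epsilon=0.1))
```

The epsilon knob makes the tradeoff explicit: at epsilon=0.1 the released count can be off by tens of alleles, which is why utility-minded researchers often balk.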
Homomorphic encryption enables standard GWAS workflows (not just summary stats) while "sharing" all genotypes and phenotypes. Richard Mott and colleagues have a paper on this method.
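To illustrate the underlying idea (not the scheme that paper actually uses; practical GWAS systems typically rely on lattice-based schemes like BGV or CKKS), here's a toy additively homomorphic Paillier example: a server can total encrypted genotype dosages without ever decrypting them. Parameters are tiny and insecure, purely for demonstration:

```python
import random
from math import gcd

# toy Paillier keypair -- tiny, insecure parameters, illustration only
p, q = 1789, 1889
n, n2 = p * q, (p * q) ** 2
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p-1, q-1)
mu = pow(lam, -1, n)                           # valid since g = n + 1

def encrypt(m):
    r = random.randrange(2, n)
    while gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    x = pow(c, lam, n2)
    return ((x - 1) // n) * mu % n

# additive homomorphism: multiplying ciphertexts adds plaintexts,
# so an untrusted server can sum dosages it cannot read
dosages = [0, 1, 2, 1, 0, 2]   # hypothetical per-sample allele counts
total_ct = 1
for ct in (encrypt(d) for d in dosages):
    total_ct = (total_ct * ct) % n2

print(decrypt(total_ct))   # equals sum(dosages) = 6
```

The key holder decrypts only the aggregate, which is what lets a full workflow (allele counts, regressions built from such sums) run over data that is never shared in the clear.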
Just a note that re-identifying aggregate data is a whole field of study that is decently successful.