Years ago, I published a piece noting some unfortunate implications of philosophical dissensus. ("Dissensus" = widespread disagreement in philosophy about almost everything.) By logical necessity, most published papers contain unsound arguments for false conclusions. Further, even if studying philosophy makes people more likely to discover the truth about philosophical questions than they were beforehand, philosophical methods as actually practiced do not generally lead people to learn and believe the truth about the questions they study.
Look at the survey. Each "answer" is broad and contains multiple incompatible doctrines. Even if, fortuitously, one of those doctrines turns out to be correct, no single precise doctrine commands anything close to majority assent, so for any issue X, something like 85% of philosophers or more believe the wrong thing for, at best, clever but misleading reasons. Yay.
This applies to any field in which there is widespread disagreement. If there are only three positions and each wins the assent of a third of people, then by necessity two thirds are mistaken. At best, they believe the wrong thing on the basis of clever but ultimately misleading arguments, evidence, or reasons. Ouch.
We can have fun discussing what bearing any of this has on personal justification, one's assessment of one's own reliability, whether methods should be revised, or whether one is entitled to hold beliefs in light of peer disagreement, but those questions are, in a way, beside the point. If a field has this kind of dissensus, then by necessity, most people are wrong. By necessity, then, most people did not discover the truth when using that field's methods. (It's tempting to say, ah, but that just means they are using the methods incorrectly, since the proper use of the methods would have led them to the truth. I agree, since I personally always get the right answer on the basis of sound arguments in my own papers.)
But what about when we see consensus? Let's stipulate that consensus is achieved when 50% + 1 of the people in a field reach the same conclusion about some issue the field investigates.
The presence of consensus indicates it's at least possible that most people are correct, which means it's at least possible the field's methods are good enough to lead people to the truth. (Of course, consensus alone is not sufficient to show this. It could be that, say, 90% of laypeople believe the truth but only 52% of the researchers do, in which case the field might be making its practitioners worse off.)
Still, another looming and realistic problem is that the field instead suffers from selection effects, groupthink, or publication bias.
For instance, in philosophy in general, only a small minority of philosophers accept theism. But among specialists in philosophy of religion, the overwhelming majority believe in God. Looking through the editors of the major philosophy of religion journals, we see they are also disproportionately evangelical Protestants. It's possible that if one specializes in philosophy of religion, one gets exposed to lots of good arguments demonstrating that evangelical Protestantism is correct, while the rest of those with PhDs in philosophy somehow miss or misunderstand these arguments. But it seems more likely, in this case, that part of the explanation is that evangelical Protestants are more likely to self-select into the specialization, which in turn means they publish more papers in it, are more likely to become editors of its journals, and so on. I'm not criticizing evangelical Protestants here or saying they are doing anything wrong. I'm simply suggesting that their disproportionate presence in the field is more likely to be a selection effect than a treatment effect.
Further, in some cases, groupthink and bias take over a field. A certain number of motivated people believe in some doctrine. They manage to become the editors of the good journals. As a result, they find papers on that doctrine more interesting than other papers, and so those papers are more likely to get published. The editors are more likely to desk-reject papers defending rival views. They read papers defending their own position more sympathetically. They are more likely to send papers they agree with, or papers on issues they care about, to referees they suspect will be sympathetic. They end up holding papers they disagree with, or papers on issues they don't personally care about, to a higher standard. Referees, in turn, are more sympathetic to papers they care about or agree with. We don't need to presume explicit bias here (though we have very strong evidence such bias exists); even people who mean well and try to act well will act this way.
When we see consensus, we have to wonder: Is the best explanation for this consensus that good researchers, using proper methods, have arrived at what is likely to be the most justifiable view or theory to hold given current levels of evidence? Or is the best explanation that there are selection effects and/or perverse groupthink and bias in who publishes and who gets a job?