A friend of mine from way, way back recently emailed me questions about the nice experts study and whether it provides any insight into findings from the Milgram study. I've pasted my response below.
The Milgram study (and similar ones) is much more focused on (a) diffusion of responsibility and (b) obedience--not necessarily on perceptions of expertise and how they relate to the trustworthiness of informants in social learning situations. Milgram's study, which of course didn't have enough participants to run inferential statistics (due to having to be shut down), mostly reported exploratory and descriptive stats. What Milgram proposed, and others who have followed up, is that people are more likely to engage in immoral tasks if they are told to do so by someone who seems to be an authority figure. This could be because (a) people have a tendency to be obedient to authority figures (i.e., Yale researchers in white lab coats), (b) they feel compelled to complete the task because they have already agreed to do so and are being paid to participate (even though they were told--during the informed consent process--that they would still be paid even if they chose to end the task early), or (c) in combination with the prior two reasons, participants may feel that they are not personally responsible if someone else has ordered the immoral behavior, even if they are the ones "pulling the trigger," so to speak--this last explanation has been used to account for some of the behavior of Nazi soldiers (which you referenced).
These types of research questions are slightly different from the ones that we were investigating, but they could be related from a broader perspective--at least in terms of what characteristics make someone seem trustworthy (e.g., white lab coats? Ivy League affiliations? high-ranking positions?). However, we are more concerned with two components of a learner's cognition:
(1) what features do children use to determine whether someone has knowledge; and
(2) are children more inclined to trust someone who has knowledge or someone who does not have knowledge but is socially positive (i.e., nice)?
Regarding the first, I've conducted research to examine what children understand about expertise. What does it mean to have expert knowledge, and how much knowledge do you have (and about what) when you are labeled an expert? Then, I've looked at what cues children use to determine whether someone has knowledge--are they labeled "smart", are they labeled "experts", have they demonstrated a prior history of accuracy, have they exhibited nice behaviors? We've found that when children are asked whether someone has knowledge, they use BOTH information about that person's expertise and information about that person's niceness/meanness. Specifically, a nice person who is described as having no knowledge whatsoever about a topic is usually evaluated to have average knowledge (the halfway point on a scale), whereas a mean person without knowledge is usually evaluated to have zero knowledge (the bottom of the scale). Then, when both are also described as having expertise, children move the nice expert up to the top of the scale and the mean expert to the midpoint. So a mean expert is seen as having a similar amount of knowledge as a nice non-expert. Thus, children are using niceness and meanness as well as expertise to evaluate whether someone has knowledge.

In an interesting twist (or maybe not so interesting), children use ONLY niceness and meanness to predict someone's social behavior. So, a mean non-expert is seen as just as mean as a mean expert. Similarly, a nice expert is seen as just as nice as a nice non-expert. Informants don't get the same bump from expertise on niceness evaluations as they get from niceness on expertise evaluations.
Then, when it comes to trusting someone for information, children prefer to ask the nice non-expert over the mean expert. This is not totally surprising, considering children seem to rate the two as having similar amounts of knowledge (even though they don't); children are, instead, preferring the informant they have evaluated as nice (the nice non-expert) over the one who is mean (the mean expert). This is not a bad strategy on the children's part. After all, when trusting information that has been presented, it is important to believe that the informant does not intend to deceive. If children predict that a mean informant would be more likely to deceive than a nice one (while still believing that the two have equal knowledge), their best bet is to go with the nice non-expert.
To what extent does non-acceptance of a belief, concept, idea, or theory strongly supported by science lead to lessened trust in science and the scientific process among those who reject it? We could say that we are only concerned with making sure that individuals in the general populace understand a concept, and would thus be able to employ core principles from that concept if need be--leading to situations like the Pakistani doctor and the Kentucky farmer using such principles (i.e., cognitive dualism). However, I'm also concerned with how non-acceptance of such scientific information may lead to increasing distrust in science and scientists.
Dan Kahan and his colleagues have shown that people will accept or reject information based on whether that information is in line with their cultural worldviews--and they do the same when the speaker is perceived to be affiliated with one or another cultural worldview (a pattern that is also supported by developmental research on epistemic trust). Are academia and science perceived to be mostly aligned with egalitarianism/communitarianism? If so, does this lead those with hierarchist/individualist views to mistrust science more generally? Or are people more domain-specific with trust--only distrusting information from scientists who are individually perceived to be outgroup members?
Distrust in science and scientists could lead to a whole host of problems. Currently, those who distrust medical doctors are turning to "holistic" treatments that are less effective (if effective at all, *cough*SteveJobs*cough*). Similarly, individuals may choose not to vaccinate. Now, Dan's cultural cognition premise may say that it doesn't matter what science says on this issue--those whose cultural identities are served by seeking treatments sold as "holistic" or "natural" would do so whether the treatments were supported by doctors or not. However, research on conspiracy theories might suggest that the very fact that certain treatments are recommended by doctors--who are authorities--makes those less inclined to trust authorities (i.e., conspiracy theorists) suspicious of such treatments and more prone to trust treatments that doctors do not recommend.
So what do you think? To what extent are people evaluating the source of the information and to what extent are they evaluating the information itself?
My dissertation topic focused on the boundaries of expertise--demonstrating that people tend to overestimate what others know based on the labels used to describe their respective domains of expertise. For instance, friends of mine often think--because I'm a psychologist--that I can answer questions they have about psychological disorders. Similarly, friends have asked me questions about their children's educational standing. The thing is, my opinion is probably not much more informed than theirs. While my research is broadly related to education, I have no experience with curriculum development or placement in gifted and talented programs. My grad school advisor experienced a similar problem when she received a phone call from a journalist who wanted her "expert opinion" on peer tutoring programs, which is similarly outside her expertise.
Now, it is understandable that people who have no familiarity with psychology (as in, they don't know all of the different subspecialties, or aren't familiar with our training programs) would assume that we do have this knowledge. Thus, it is unsurprising that we get questions from people who know us and are seeking information that seems relevant to our expert labels. And even though we may not be able to answer their questions, we likely have enough knowledge to direct them to someone who can.
Now let's talk about LinkedIn Endorsements. Given the above, it makes no sense to me that the same people who are unclear on exactly what it is that I do can endorse me for skills that I do not have. I often receive notifications that I've been endorsed by family members or friends for all sorts of skills that I do not have, from research experience in peer relationships to familiarity with software programming languages. According to LinkedIn:
But how are these meaningful? Now, if my research idols were to endorse me for skills that are actually within my domain of expertise (e.g., child development research, trust in testimony research), I would be over the moon. Those are the types of endorsements that would actually mean something.
Here are a few tips from a 2012 Forbes article about making the most of endorsements:
1. List your own skills - If we are going to have people who have no idea what we do endorse us, the endorsements might as well be for skills that we actually have.
2. Hide endorsements that are not reflective of your real skill set - If potential employers, collaborators, or recruiters ever do start to use endorsements, it would be deceptive to let them think you have skills that you do not have.
3. Seek endorsements from people you know well--particularly people in your field of study - If we want endorsements to mean something, it is better that they come from people who can actually speak to our work, just as with recommendations. It is common knowledge that we shouldn't have our parents or close family friends write recommendation letters. We should apply the same principle to endorsements (at least in my opinion).
4. (And this one was not in the Forbes article.) We should be just as careful about whom we choose to endorse as about whom we seek endorsements from.
In sum, while I appreciate that social networking sites are seeking ways to increase engagement with one another's profiles, if we really want these endorsements to mean something, we need to change how we use them.
So, I've decided to try my hand at blogging. I haven't been much of a writer--other than much more formal academic papers, which go through iterations of edits before being put anywhere public. Let's see how it goes!