Most professionals, such as medical doctors, lawyers, or government bureaucrats, are singularly ill-trained to think logically. I say most, not all, and this seems to be true regardless of where the professionals live, based on my own limited sample.
I pick on doctors more in this instance because sometimes their ability to reason can have life-or-death consequences.
All my immediate relations (like my parents) died (in India), with the root cause being a masterful combination of negligence and poor judgement based on faulty logic. I do not want to bring the Karma model into this discussion.
The doctors I come across in the USA, including those trained in top schools, exhibit a singular inability to think. I had to do a lot of searching to find my primary care physician, whom I see once a year for a regular checkup. He can think, and so can my ophthalmologist, whom I see once every two years. I discovered that my eye doctor had undergraduate degrees in physics and engineering, which explained his analytical bent.
In my view, certain professionals like doctors should be required to undergo periodic training in logical thinking, for the betterment of society. I have many examples, but let me instead share the following.
Here is a funny story that appeared in the NY Times in 2010.
Many doctors were asked a simple problem that can arise in their day-to-day lives. Though the sample was relatively small, nearly every one of them got it wrong, regardless of the country they lived in.
Journalists are no better in this regard. There was once a weather newscaster who said, "It is predicted that there will be rain with a 50% chance on both Saturday and Sunday. This means there is a 100% (50% + 50%) chance of rain this weekend."
All one can mutter on hearing this is 'what a dumb ass', with apologies to the ass...
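For the record, here is how the two forecasts actually combine. This is a minimal sketch that assumes rain on Saturday and rain on Sunday are independent events, which the forecast itself does not say; the 50% figures are the only numbers taken from the quote.

```python
# Chance of rain on each day, as quoted by the newscaster.
p_sat = 0.5
p_sun = 0.5

# Assuming independence (an assumption, not stated in the forecast),
# "rain this weekend" means rain on at least one of the two days,
# which is the complement of staying dry on both days.
p_dry_both = (1 - p_sat) * (1 - p_sun)   # 0.25
p_rain_weekend = 1 - p_dry_both          # 0.75

print(f"Chance of rain this weekend: {p_rain_weekend:.0%}")  # 75%, not 100%
```

Adding probabilities is only valid for mutually exclusive events; rain on Saturday does not rule out rain on Sunday, so 50% + 50% overstates the answer.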
Here is the NY Times article.
Excerpt:
In one study, Gigerenzer and his colleagues asked doctors in Germany and the United States to estimate the probability that a woman with a positive mammogram actually has breast cancer, even though she’s in a low-risk group: 40 to 50 years old, with no symptoms or family history of breast cancer. To make the question specific, the doctors were told to assume the following statistics — couched in terms of percentages and probabilities — about the prevalence of breast cancer among women in this cohort, and also about the mammogram’s sensitivity and rate of false positives:
The probability that one of these women has breast cancer is 0.8 percent. If a woman has breast cancer, the probability is 90 percent that she will have a positive mammogram. If a woman does not have breast cancer, the probability is 7 percent that she will still have a positive mammogram. Imagine a woman who has a positive mammogram. What is the probability that she actually has breast cancer?
Gigerenzer describes the reaction of the first doctor he tested, a department chief at a university teaching hospital with more than 30 years of professional experience:
“[He] was visibly nervous while trying to figure out what he would tell the woman. After mulling the numbers over, he finally estimated the woman’s probability of having breast cancer, given that she has a positive mammogram, to be 90 percent. Nervously, he added, ‘Oh, what nonsense. I can’t do this. You should test my daughter; she is studying medicine.’ He knew that his estimate was wrong, but he did not know how to reason better. Despite the fact that he had spent 10 minutes wringing his mind for an answer, he could not figure out how to draw a sound inference from the probabilities.”
When Gigerenzer asked 24 other German doctors the same question, their estimates whipsawed from 1 percent to 90 percent. Eight of them thought the chances were 10 percent or less, 8 more said 90 percent, and the remaining 8 guessed somewhere between 50 and 80 percent. Imagine how upsetting it would be as a patient to hear such divergent opinions.
As for the American doctors, 95 out of 100 estimated the woman’s probability of having breast cancer to be somewhere around 75 percent.
The right answer is 9 percent.
How can it be so low? Gigerenzer’s point is that the analysis becomes almost transparent if we translate the original information from percentages and probabilities into natural frequencies:
Eight out of every 1,000 women have breast cancer. Of these 8 women with breast cancer, 7 will have a positive mammogram. Of the remaining 992 women who don’t have breast cancer, some 70 will still have a positive mammogram. Imagine a sample of women who have positive mammograms in screening. How many of these women actually have breast cancer?
Since a total of 7 + 70 = 77 women have positive mammograms, and only 7 of them truly have breast cancer, the probability of having breast cancer given a positive mammogram is 7 out of 77, which is 1 in 11, or about 9 percent.
Notice two simplifications in the calculation above. First, we rounded off decimals to whole numbers. That happened in a few places, like when we said, “Of these 8 women with breast cancer, 7 will have a positive mammogram.” Really we should have said 90 percent of 8 women, or 7.2 women, will have a positive mammogram. So we sacrificed a little precision for a lot of clarity.
Second, we assumed that everything happens exactly as frequently as its probability suggests. For instance, since the probability of breast cancer is 0.8 percent, exactly 8 women out of 1,000 in our hypothetical sample were assumed to have it. In reality, this wouldn’t necessarily be true. Things don’t have to follow their probabilities; a coin flipped 1,000 times doesn’t always come up heads 500 times. But pretending that it does gives the right answer in problems like this.
Admittedly the logic is a little shaky — that’s why the textbooks look down their noses at this approach, compared to the more rigorous but hard-to-use Bayes’s theorem — but the gains in clarity are justification enough.
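For anyone who wants the exact figure behind the "about 9 percent", here is the same calculation done with Bayes' theorem rather than rounded natural frequencies. It is a minimal sketch using only the three numbers quoted in the excerpt: 0.8% prevalence, 90% sensitivity, and a 7% false-positive rate.

```python
# The three probabilities given in the excerpt.
p_cancer = 0.008             # P(cancer): prevalence in this age group
p_pos_given_cancer = 0.90    # P(positive | cancer): mammogram sensitivity
p_pos_given_healthy = 0.07   # P(positive | no cancer): false-positive rate

# Total probability of a positive mammogram (law of total probability).
p_pos = (p_cancer * p_pos_given_cancer
         + (1 - p_cancer) * p_pos_given_healthy)

# Bayes' theorem: P(cancer | positive).
p_cancer_given_pos = p_cancer * p_pos_given_cancer / p_pos

print(f"P(cancer | positive mammogram) = {p_cancer_given_pos:.1%}")  # about 9.4%
```

The exact answer, roughly 9.4%, lines up with the 7-out-of-77 natural-frequency argument; the two are the same arithmetic presented in different clothes.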