There is a common anecdote that states the likelihood of dying from a falling coconut, sometimes comparing it to, say, the likelihood of dying from a shark attack. These probabilities are usually inferred by taking the annual number of such bizarre deaths and dividing it by the number of people on Earth, or something like that.
If I point my finger at you and state: "You have an X% probability of dying from a falling coconut this year", is that accurate? No, because you surely have more information about yourself than the mere fact that you inhabit Earth. If, for example, you live in Paris and don't go surfing on islands with coconut trees, then the more accurate probability for you is way, way closer to zero.
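The gap between the two estimates is easy to make concrete. Here is a minimal sketch: the 150 deaths-per-year figure is the oft-quoted, poorly sourced number from the anecdote, and the size of the "exposed" population is entirely made up for illustration.

```python
# Marginal vs. conditional probability, with illustrative numbers.
world_population = 8_000_000_000
coconut_deaths_per_year = 150  # the commonly quoted, unverified figure

# The anecdote's probability: deaths divided by everyone on Earth.
p_marginal = coconut_deaths_per_year / world_population

# But essentially all such deaths occur among people who actually spend
# time under coconut palms. Suppose (purely for illustration) that group
# is ~500 million people and accounts for all of the deaths.
exposed_population = 500_000_000
p_given_exposed = coconut_deaths_per_year / exposed_population
p_given_parisian = 0 / (world_population - exposed_population)

print(f"P(death)           ~ {p_marginal:.2e}")
print(f"P(death | exposed) ~ {p_given_exposed:.2e}")
print(f"P(death | Paris)   ~ {p_given_parisian:.2e}")
```

The marginal figure is not wrong; it just answers a question about a randomly drawn Earthling, which you are not.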
The more I think about this observation, the more I notice examples where it is relevant.
So-called "rationalists" often compare society-wide probabilities without nuance. They say you shouldn't fear terrorism compared to car accidents or ordinary crime. But what if I barely ever ride in a car? What if I have no enemies, a healthy relationship, no ties whatsoever to criminal affairs, but I do often go to events in a big city? Do I really have a higher probability of dying from ordinary crime than in a terrorist attack? As soon as there is more information than merely "living in France", all your likelihood estimates about what I should fear are potentially worthless.
When interacting with a group of people, racists will be prone to single out the person they're prejudiced against as the one "statistically most likely" to commit some offense, citing statistical records that "prove" their point. Except they have conveniently ignored all the other available information. Gender, profession, place of residence, socio-professional category, and so on are surely much stronger predictors than whatever single dimension they tunnel-visioned into, and, if taken into account, could very well direct their attention to someone else in the crowd (maybe themselves).
When I shared this observation with ChatGPT, it pointed me to the notion of Bayesian inference, a statistical technique for updating a probability as more information becomes available, and to the Principle of Total Evidence, which apparently states that one should take all available information into account when stating a probability.
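The Bayesian update has a simple mechanical form: P(H | E) = P(E | H) · P(H) / P(E). A minimal sketch of that mechanics, with hypothetical numbers chosen only to show how a base rate moves once evidence is conditioned on:

```python
# Bayes' rule: revise P(hypothesis) once a piece of evidence arrives.

def bayes_update(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Return P(H | E) via P(E|H)*P(H) / [P(E|H)*P(H) + P(E|~H)*P(~H)]."""
    evidence = (p_evidence_given_h * prior
                + p_evidence_given_not_h * (1 - prior))
    return p_evidence_given_h * prior / evidence

# Prior: base rate of some risk in the general population (made up).
prior = 0.001
# Evidence: a trait far more common among those at risk (made up).
posterior = bayes_update(prior,
                         p_evidence_given_h=0.9,
                         p_evidence_given_not_h=0.05)
print(posterior)  # larger than the prior, yet still far from certain
```

Each new piece of information (where you live, how you travel, whom you know) is one more such update, which is exactly why the raw base rate alone says so little about any particular person.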
I don't have any grand conclusion about this matter, apart from the fact that stating likelihoods is tricky, and that you should always try to take as much information as possible into account when doing so.