There are only two types of "unvaccinated". Both have not had any antibody treatments, and are:
- type 1, who have never been previously infected
- type 2, who were infected and naturally developed an immune response

Those who adhere to the rigour of the scientific method will therefore gather the following hospitalisation totals:
- T1, the number of type 1s
- T2, the number of type 2s

Those who have nominal nous will also take the opportunity to study the antibody profiles of the type 2s (to use a battery analogy: what are the "idle" levels, the time "decay" functions, etc.).
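A minimal sketch of what that tally and the battery analogy might look like; the field names and the exponential decay form are my own assumptions, purely for illustration:

```python
# Sketch of the T1/T2 tally and the "battery" analogy for antibody levels.
# Field names and the exponential decay form are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class UnvaccinatedCase:
    previously_infected: bool   # type 2 if True, type 1 if False
    hospitalised: bool

def tally(cases):
    """Return (T1, T2): hospitalisation counts for type 1s and type 2s."""
    t1 = sum(c.hospitalised for c in cases if not c.previously_infected)
    t2 = sum(c.hospitalised for c in cases if c.previously_infected)
    return t1, t2

def antibody_level(idle_level, boost, half_life_days, days_since_infection):
    """Battery analogy: an 'idle' baseline plus a boost that decays over time."""
    decay = 0.5 ** (days_since_infection / half_life_days)
    return idle_level + boost * decay

# Example: a type 2 case 90 days after infection, assuming a 60-day half-life.
print(antibody_level(idle_level=10.0, boost=100.0,
                     half_life_days=60, days_since_infection=90))
```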
Then they are NOT presenting the stats correctly, which is incompetence at best and dishonesty at worst (in either case a 'clear your desk' matter IMHO, given what is at stake).
The Government statisticians have all this data, but the Government's communication of it is uniformly terrible. And it sometimes gets turned into absurd policy. Case in point: if you have a positive Lateral Flow test you are required to 'confirm' it with a PCR test. But that confirmation doesn't give you much extra information, because at current levels of incidence the chance of you having Covid given a positive LFT followed by a negative PCR is around 0.6. So logically you should still isolate whatever the PCR result.
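To sketch roughly where a number like that can come from (the prevalence and test characteristics below are illustrative assumptions, not the official figures, and the two tests are crudely treated as independent given infection status):

```python
# Rough Bayes sketch: P(Covid | LFT positive, PCR negative).
# All numbers below are illustrative assumptions, not official figures.

prevalence = 0.015   # assumed prior P(Covid) among people taking an LFT
lft_sens   = 0.70    # assumed P(LFT+ | Covid)
lft_fpr    = 0.001   # assumed P(LFT+ | no Covid)
pcr_sens   = 0.85    # assumed P(PCR+ | Covid), so P(PCR- | Covid) = 0.15
pcr_spec   = 0.999   # assumed P(PCR- | no Covid)

# Joint probabilities of the observed evidence with and without Covid.
joint_covid    = prevalence * lft_sens * (1 - pcr_sens)
joint_no_covid = (1 - prevalence) * lft_fpr * pcr_spec

posterior = joint_covid / (joint_covid + joint_no_covid)
print(f"P(Covid | LFT+, PCR-) ~ {posterior:.2f}")   # ~0.62 with these inputs
```

With inputs in this ballpark the posterior lands near 0.6; it is dominated by the ratio of the PCR false-negative rate to the LFT false-positive rate, which is why the headline figure moves around so much with the assumptions.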
Agreed, but everyone would need to have been tested for that to work as a statistic. I don't believe I have ever had the virus, but I may have had it and not been symptomatic. If I were unvaccinated (I'm not), no one - including me - would know whether I am part of that group. They only check for antibodies in a minority of cases.
You are saying P(corona | LFT+, PCR-) ~ 0.6. Have you built your own Bayesian network from public domain src data/stats in order to arrive at that value? Or seen a network that somebody else has produced?
The latter. The false negatives from the PCR tests are the key number... not actually sure how they are worked out.
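One plausible way such a false-negative rate could be estimated (this is an assumption about the methodology, not a description of what is actually done, and the counts are made up) is from a validation cohort of people known to be infected by some independent means:

```python
# Sketch of estimating a PCR false-negative rate from a validation cohort
# of people known to be infected by independent means (counts are made up).
import math

tested_known_positive = 500   # assumed cohort size of confirmed infections
pcr_negative          = 60    # assumed number the PCR missed

fnr = pcr_negative / tested_known_positive
print(f"Estimated PCR false-negative rate: {fnr:.1%}")  # 12.0%

# Rough 95% interval via the normal approximation, just to show how wide
# the uncertainty is on a cohort of this size.
se = math.sqrt(fnr * (1 - fnr) / tested_known_positive)
print(f"Approx 95% CI: {fnr - 1.96*se:.1%} to {fnr + 1.96*se:.1%}")
```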
Can you send a link to that network? It should be possible to work out the probabilities of each node on the graph from first principles (assuming they give their src data).
Sorry, can't find the one I quoted, but this link gives estimates of the probabilities which bracket the ones used to get 0.6. https://unherd.com/thepost/pcrs-are-not-as-reliable-as-you-might-think/
This typifies the bad presentation of what is actually going on (the conditions under which the tests are performed, the nature of the probability events). It would have cost the authors nothing to put in a couple of sentences simply describing the above. Fortunately I am in the trade, so I am able to infer (sic) what they have not explicitly stated. Anyway, I will go through the values they have given and see whether I can get your "0.6" (or backward compute the changes to their values needed to achieve it)...
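For instance, the "backward compute" step can be done as a crude parameter sweep; the ranges below are my guesses at the sort of values the article brackets, not its actual figures:

```python
# Crude sweep to see which (prevalence, PCR sensitivity, LFT false-positive
# rate) combinations reproduce a posterior near 0.6. Ranges are illustrative
# guesses, not the article's published values.
from itertools import product

prevalences = [0.005, 0.01, 0.02, 0.04]      # assumed P(Covid) before testing
pcr_senss   = [0.80, 0.85, 0.90, 0.95]       # assumed P(PCR+ | Covid)
lft_fprs    = [0.0005, 0.001, 0.002, 0.003]  # assumed P(LFT+ | no Covid)
lft_sens    = 0.70                           # assumed P(LFT+ | Covid), held fixed

def posterior(prev, pcr_sens, lft_fpr):
    """P(Covid | LFT+, PCR-), assuming the tests are independent given status."""
    joint_covid    = prev * lft_sens * (1 - pcr_sens)
    joint_no_covid = (1 - prev) * lft_fpr * 0.999   # PCR specificity assumed 0.999
    return joint_covid / (joint_covid + joint_no_covid)

for prev, ps, fpr in product(prevalences, pcr_senss, lft_fprs):
    p = posterior(prev, ps, fpr)
    if 0.55 <= p <= 0.65:
        print(f"prev={prev:.3f}  pcr_sens={ps:.2f}  lft_fpr={fpr:.4f}  ->  {p:.2f}")
```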
Interesting discussion. No idea what you said. For a while I thought I was reading the transcript of that Star Trek: The Next Generation episode "Darmok", where the aliens seemed to be speaking English but made no sense because they were speaking in metaphors. Don't know if that's going on here, but if so, may the metaphors be with you.

Actually it reminds me of my most interesting (so not that interesting) brush with stats, which is completely off topic but I will relay it anyway. When I was working on the Sizewell B Public Inquiry, there was a question which asked the CEGB something to the effect of what was the chance of the power station blowing up. The response was along the lines of 'so small it cannot be calculated'. So a paper was later submitted to the Inquiry with a title like "The chance of a fully loaded jumbo jet crashing on a fully occupied Wembley Stadium", just to prove that things with a really small chance of happening could be calculated.
Lies, damn lies and statistics. If I jump off a bridge over the M25 during rush hour, there’s no statistical chance of me dying in Brazil. So what?