A short statistical reasoning test
The first question seems a little unfair because it does not say how much more expensive overestimation is compared to underestimation. It implicitly assumes 19:1, given that it's ordering by the 0.05 quantile of the posterior distribution, but that information is not contained in the question.
The general framework would be to sort by `U(dist(ratio))`. The choice of `U` (the utility function) is a separate question from the estimation of `dist(ratio)`.
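As a minimal sketch of that framework (the counts and the 0.05 quantile are illustrative choices, not taken from the post): pick `dist(ratio)` as a Beta posterior and `U` as a low quantile, then sort.

```python
from scipy.stats import beta

# Illustrative (successes, failures) counts for three items.
items = {"a": (7, 2), "b": (50, 20), "c": (1, 0)}

def pessimistic_fraction(successes, failures, q=0.05):
    # U(dist(ratio)): the q-quantile of a Jeffreys Beta posterior,
    # i.e. a lower bound that rarely over-estimates the true fraction.
    return beta.ppf(q, successes + 0.5, failures + 0.5)

# Smallest pessimistic fraction first, as the question asks.
print(sorted(items, key=lambda k: pessimistic_fraction(*items[k])))
```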
Bingo.
Is there a better principled approach to #1 than Monte Carlo sampling from beta distributions?
The Jeffreys posterior in the answer is closed form and Bayesian. The other answer is a profile likelihood.
Neither involve Monte Carlo sampling. Both are general and principled.
The answer looks a bit simplistic compared to the question (as I interpreted it at least). In order to estimate the risk of incorrect ordering you have to calculate P(p2 > p1) where p1 and p2 are drawn from different beta distributions. AFAIK there’s no closed form expression for that probability (so Monte Carlo is one possible approach).
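For what it's worth, the Monte Carlo estimate of P(p2 > p1) is only a few lines; the counts below are the 7/9 and 50/70 trials discussed further down, under a uniform Beta(1, 1) prior:

```python
import numpy as np

rng = np.random.default_rng(0)

# Posterior draws under a uniform Beta(1, 1) prior:
# item 1 saw 7 successes / 2 failures, item 2 saw 50 / 20.
p1 = rng.beta(7 + 1, 2 + 1, size=100_000)
p2 = rng.beta(50 + 1, 20 + 1, size=100_000)

# Estimated probability that item 2's true fraction exceeds item 1's,
# i.e. the chance that ranking by point estimate mis-orders them.
print((p2 > p1).mean())
```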
The question doesn’t ask for that; it explicitly asks us to control for over-estimation of the fraction, although I rather like your interpretation as an extension.
Ok. I may have read a bit too much into this paragraph:
> Order the items above, smallest fraction first, whilst taking into account the uncertainty in the number of trials to bound the probability of over-estimating each fraction.
But why mention ordering if you’re not looking for statistical reasoning around the ordering in particular?
(replying to a now-deleted post)
>> the uncertainty in the number of trials

> Has no meaning to me.
What the author is trying to get at in the admittedly poorly worded question is that the trials are noisy measures of an underlying effect. Your job is to sort by effect size, while accounting for the random chance that a low sample size trial just got unlucky.
You might argue that the question is much harder than the author assumes, since your best guess at the actual effect size seems like it should still just be the success rate, even if the low sample size trials have wider error bars. You'd need to come up with some sort of heuristic that says why 7/9 deserves a lower rank than 50/70 using binomial confidence intervals.
Probably that heuristic is intended to be a Bayesian approach? Like, if you add just two successes and two failures to each scenario as a prior, that's enough to put the 50/70 option ahead.
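That checks out arithmetically: on raw rates, 7/9 ≈ 0.778 beats 50/70 ≈ 0.714, but after adding two pseudo-successes and two pseudo-failures the small trial drops to (7+2)/(9+4) = 9/13 ≈ 0.692 while the large one barely moves, to (50+2)/(70+4) = 52/74 ≈ 0.703.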
I wrote the deleted comment you are replying to.
The essence of my comment was that this text/test is not for me (one person of the general public) but more like a few leetcode-style questions for statisticians.
Your attempt to explain what I didn't understand just proves my point as I don't really understand what you are saying either.
And that's ok: this is just not for me! (And that's why I deleted my original comment)
The problems are really underspecified for statisticians, too. Leetcode is normally very clear on the requirements.
> it is very important that the uncertainty in the number of trials is taken into account because over-estimating a fraction is a costly mistake.
This is not some precise jargon that is meaningless to the layman but completely clearly specified to a professional statistician. It's more like the specification written by your non-technical product manager for how some technical feature should work. A skilled data scientist will have the experience and the context to figure out what it's probably asking for, but he might write down a few more clarifying details before giving it to a junior on his team to implement.
If testing these kinds of guess-what-the-stakeholder-probably-means skills is the point of this test, it's quite good at it. But that's not what leetcode is for.
No worries, I wasn't really trying to explain it anyways, as much as seeking confirmation from the rest of HN that this question is ill-specified. Judging from the responses, yes, it is.
If "binomial distribution" and "confidence interval" are unfamiliar terms then you probably are not prepared to pass OP's "statistical reasoning test" regardless. I think most engineers wouldn't, and I only understood the intent of question 1 because my pandemic lockdown project was reading a stats textbook cover to cover.
Put more simply: suppose I have a coin that might be biased. I decide to assess this by repeatedly flipping it: if it's biased, it will disproportionately land on heads or tails. If I flip 10 times and get 4 heads/6 tails, I don't have the power to make a confident assessment of any bias. On the other hand, if I flip it 100 times and get 40 heads/60 tails, I am a bit more certain. At 1000 flips, with 400 heads/600 tails, I am extremely confident the coin is biased. Even though the fraction of heads is identically 40% across all three sets of flips, the underlying counts yield very different amounts of confidence in how close to 40% the coin's bias is. The first question is a way of rigorously quantifying this confidence.
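To put numbers on it, here is a quick sketch using the Jeffreys interval mentioned elsewhere in this thread (the 95% level is my choice):

```python
from scipy.stats import beta

# The same 40% heads rate at three sample sizes.
for heads, flips in [(4, 10), (40, 100), (400, 1000)]:
    tails = flips - heads
    # Central 95% interval of the Jeffreys posterior
    # Beta(heads + 0.5, tails + 0.5); it narrows as flips grow.
    lo, hi = beta.ppf([0.025, 0.975], heads + 0.5, tails + 0.5)
    print(f"{heads}/{flips}: [{lo:.3f}, {hi:.3f}]")
```

The point estimate is 0.4 in every case; only the interval width changes, and at 1000 flips it excludes 0.5 entirely.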
I don’t think this is “leetcode for statisticians.” This question (and the other two) are all examples of concrete, real-world problems that people across a variety of quantitative disciplines frequently encounter.
In fact, the first question is directly relevant to voting on this site. When sorting replies by fraction of upvotes, how should the forum software rank a new reply with 1 upvote/0 downvotes, versus an older reply with 4 upvotes/1 downvote? What about an older, more controversial reply with 20 upvotes/7 downvotes? 15 upvotes/2 downvotes?
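One standard answer, popularised by Evan Miller's "How Not to Sort by Average Rating", is to rank by the lower bound of the Wilson score interval; a minimal sketch with the vote counts above:

```python
import math

def wilson_lower(ups, downs, z=1.96):
    # Lower bound of the 95% Wilson score interval for the
    # true upvote fraction, given ups out of ups + downs votes.
    n = ups + downs
    if n == 0:
        return 0.0
    p = ups / n
    centre = p + z * z / (2 * n)
    spread = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return (centre - spread) / (1 + z * z / n)

for ups, downs in [(1, 0), (4, 1), (20, 7), (15, 2)]:
    print(f"{ups} up / {downs} down -> {wilson_lower(ups, downs):.3f}")
```

The brand-new 1/0 reply ranks cautiously low until it accumulates votes.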
> This question (and the other two) are all examples of concrete, real-world problems that people across a variety of quantitative disciplines frequently encounter.
Indeed, I use this technique to sort search results in Splunk, as an extension of TF-IDF. Consider a scenario where us-east-2 is broken but us-east-1 is fine (clearly just a hypothetical!). Split the logs along that good/bad dimension, and then break down by some other pattern; log class, punct, etc. Usually I use a prior of 50:50 to help sort out the "happened once in bad cluster" events.
And I guess, since they answer the questions at the bottom, their intent was indeed the simplistic approach.
> The lower bound of which can be used to order the fractions, and so control the risk of over-estimation.
It's not clear to me from the question whether the cost of a mistake lies in over-estimating the underlying effect or in misranking the effects, and that seems like it would drive your heuristic selection.
From the question:
> However, it is very important that the uncertainty in the number of trials is taken into account because over-estimating a fraction is a costly mistake.
Seems fairly clear to me that you’re supposed to use a lower-bound estimate that takes into account the variance in the fraction due to the number of trials, in a way that bounds the chance of over-estimation.
Further, there is no need for a heuristic when there are several statistical models for this exact problem with clear properties. Some are given in the answer.
Out of context, the expression "the uncertainty in the number of trials" would refer to missing knowledge in terms of how many trials actually ran.
In the context of the post this doesn't make sense, so the reader is left to hypothesize what the writer actually meant.
I agree it could be clearer but as a general rule, if you find an interpretation under which the question doesn’t make sense, try considering another interpretation.
I think "uncertainty due to the number of trials" would be clearer.
Uncertainty in the number of successes due to the number of trials would be even better.
That would also be an incorrect phrasing. This entire thread is a good illustration of the difficulty of speaking precisely about probabilistic concepts.
(The number of successes has zero uncertainty. If you flip a coin 10 times and get 5 heads, there is no uncertainty on the number of heads. In general, for any statistical model the uncertainty is only with respect to an underlying model parameter - in this example, while your number of successes is perfectly known, it can be used to infer a probability of success, p, of 0.5, and there is uncertainty associated with that inferred probability.)
I guess I struggle to understand why over-estimation is worse than under-estimation when the final result is a ranking. It seems like they're equally likely to produce an incorrect ranking!
I think the formula should be (n+1)/(n+m+1) which should correspond to the mean of a binomial distribution with a uniform prior. So it's adding 1 to each count of observations.
This is probably the formula to memorise and check against.
You mean (n+1)/(n+m+2) and yes, this is Laplace's rule of succession. It won't give you a confidence interval, but it gives you a posterior point estimate.
If you want a rough 95% confidence interval without complicated maths, the Agresti–Coull interval is useful. It's computed as if the distribution were normal, but pretending there were two more successes and two more failures.
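A sketch in code, assuming x successes out of n trials; at the 95% level z²/2 ≈ 1.92, which is where the "add two successes and two failures" rule of thumb comes from:

```python
import math

def agresti_coull(x, n, z=1.96):
    # Adjusted counts: roughly x + 2 successes out of n + 4 trials
    # at the 95% level, then an ordinary normal (Wald) interval.
    n_adj = n + z * z
    p_adj = (x + z * z / 2) / n_adj
    half = z * math.sqrt(p_adj * (1 - p_adj) / n_adj)
    return p_adj - half, p_adj + half

print(agresti_coull(7, 9))  # e.g. 7 successes in 9 trials
```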
Yep, you're correct. It should be (n+1)/(n+m+2).
If you have access to a machine or lookup tables, you might as well plug in the values for the distribution Beta(1+n, 1+m), which should correspond to the posterior density.
(The formula above corresponds to the mean of this distribution, so it's probably right, but I haven't worked it through myself just now ...)
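A quick sanity check of that mean, for anyone equally unwilling to work it through by hand:

```python
from scipy.stats import beta

n, m = 7, 2  # successes, failures (illustrative)
print(beta.mean(1 + n, 1 + m))  # mean of Beta(1 + n, 1 + m): 8/11 ≈ 0.727
print((n + 1) / (n + m + 2))    # Laplace's rule of succession: same value
```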
See the Jeffreys posterior section here:
https://en.m.wikipedia.org/wiki/Binomial_proportion_confiden...
The blog post uses a non-informative Jeffreys prior.