I'm very glad you're writing this because I just happened to be, casually, trying (and not succeeding) to understand Blackwell's results on experiments. You say you're going to come back to the significance in the future and I look forward to it.
I'm really puzzled by this whole thing. "For any two experiments f and g, f is more valuable than g if and only if it is more informative than g." I assume (again, not quite understanding the formalism yet) "informative" is in the information-theory sense and "valuable" is in the betting sense (doesn't have to be money but has to be a linearly-ordered utility, which arguably may not exist). And now "experiments" are functions from worlds to signals. All of this is so incredibly foreign to my (and I suspect most people's) conception of experiment. At the same time the result (or at least the informal gloss of it) sounds so obvious that it's almost tautological. I'm really puzzled about what's accomplished here. Usually in logic we start with natural assumptions that lead to a surprising result. Here it seems we start with very surprising assumptions that lead to a natural result. I'm very confused by this whole exercise, especially so given how apparently important it is. What am I missing?
There are three surprising/interesting things here, related to the notion of 'informativeness'.
1. The thing that got the most mathematical interest is the converse direction to what philosophers are most interested in; i.e., that if an experiment is guaranteed to be more valuable, then it is more informative.
2. We can define a notion of more informative for experiments with uncertain outcomes. The short version is that f is more informative than g iff, given f's signal and a randomising device, you can recreate an experiment that's equivalent to g. One simple implication of that is that more accurate experiments are more informative. And given some assumptions, that gives us back the standard information-theoretic account of information as a special case.
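To make the "signal plus randomising device" idea concrete, here's a minimal sketch in Python. The matrices are illustrative numbers I've made up, not anything from the post: each experiment is a row-stochastic matrix whose rows are world states and whose columns are signals, and f counts as more informative than g when some "garbling" matrix M (the randomising device) turns f's signal distribution into g's.

```python
import numpy as np

# An experiment as a row-stochastic matrix: rows are world states,
# columns are signals, entries are P(signal | state).

# f: a hypothetical binary test that reports the true state 90% of the time
f = np.array([[0.9, 0.1],
              [0.1, 0.9]])

# g: a less accurate test, right only 70% of the time
g = np.array([[0.7, 0.3],
              [0.3, 0.7]])

# The garbling condition: f is more informative than g iff there is a
# row-stochastic matrix M with f @ M = g. Here M is the "randomising
# device": keep f's signal with probability 0.75, flip it otherwise.
M = np.array([[0.75, 0.25],
              [0.25, 0.75]])

recreated = f @ M  # run f, then post-process its signal through M

print(np.allclose(recreated, g))  # the garbled f reproduces g exactly
```

This also shows why more accurate experiments come out more informative in this framework: degrading the 90%-accurate test with noise lands you exactly on the 70%-accurate one, but no post-processing of g's signal can recover f.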
3. What's most interesting about all this to me is that so much of it falls apart the minute you weaken any of the big assumptions. Some of the results around here don't generalise to the infinite case. Many others don't generalise if you drop negative introspection.
My plan is to try to write some more stuff about 3.
Some of it is pure logic stuff, like which versions of Blackwell-type results various epistemic logics do and don't support. (I'm a philosopher not a mathematician, so I'm mostly doing the more informative -> more valuable direction, not the converse, which I really don't have the chops for.)
Some of it is even figuring out conceptual stuff, like how to generalise the notion of 'more informative' to cases where (a) we don't have a partitional structure on the space of signals and (b) experiments are indeterministic. I have ideas for each of these on their own, but they don't combine well. And that feels like the kind of conceptual space-clearing where I might actually be useful.
One last thing on experiments. It could be because I'm not really a scientist, but I thought that model of experiments was somewhat natural. When I want to know how much coffee is in the pot, I do a little experiment - I put the pot on the kitchen scales. That's a (noisy) function from states of the world - how much coffee is in the pot - to signals - what the scale reads. That's an incredibly simplified case, but I thought that's what a lot of experiments consist in: we take some measurements, and we have a background belief about how various hypotheses about the world make different measurements more and less likely. It's a stylised model, but it didn't feel as foreign to me.
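The coffee-pot picture can be sketched in a few lines. All the numbers here are hypothetical (a made-up true mass and a made-up Gaussian scale error): the experiment is a noisy function from the state of the world to a reading, and the background belief is a likelihood function saying how probable each reading is under each hypothesis.

```python
import numpy as np

rng = np.random.default_rng(0)
NOISE_SD = 5.0  # assumed scale error in grams

def weigh(true_mass_g, rng):
    # The experiment: a noisy function from the state of the world
    # (how much coffee is in the pot) to a signal (the scale reading).
    return true_mass_g + rng.normal(0, NOISE_SD)

def likelihood(reading, hypothesis_mass):
    # Background belief: how likely each hypothesis makes this reading
    # (Gaussian error model, unnormalised).
    return np.exp(-(reading - hypothesis_mass) ** 2 / (2 * NOISE_SD ** 2))

reading = weigh(300.0, rng)               # world: 300 g of coffee
hypotheses = np.array([250.0, 300.0, 350.0])
posterior = likelihood(reading, hypotheses)  # times a uniform prior
posterior /= posterior.sum()

print(hypotheses[posterior.argmax()])  # the hypothesis the signal favours
```

Nothing deep is going on: the point is just that "take a measurement, update on it via a likelihood" is exactly the states-to-signals model, with the noise making it a stochastic rather than deterministic function.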