Simfish/InquilineKea's Thoughts

post I made at LessWrong
January 7, 2011, 2:23 am
Filed under: Uncategorized

The "map" and "territory" analogy as it pertains to potentially novel territories that people may not anticipate


So in terms of the "map" and "territory" analogy, the goal of rationality is to make our map correspond more closely with the territory. This correspondence comes in two forms: (a) area and (b) accuracy. Person A could have a larger map than person B, even if A's map is less accurate than B's. There are ways to increase the area of your map, often by testing things at the boundary conditions of the territory. I often like asking boundary-value/possibility-space questions like "well, what might happen to the atmosphere of a rogue planet as time approaches infinity?", since such questions might give us additional insight into the robustness of planetary-atmosphere models across different environments (and the possibility that I might be wrong makes me more motivated to actually spend the extra effort to test and calibrate my model). My intense curiosity about these highly theoretical questions often puzzles the experts in the field, though, since they feel these questions aren't empirically verifiable (and so are considered less "interesting"). I also like to study things that many academics aren't necessarily comfortable studying (perhaps since it is harder to be empirically rigorous there), such as the possible social outcomes that could spring out of a radical social experiment. The trade-off is that maintaining the accuracy of your map may come at the sacrifice of dA/dt, where A is area: your map's area then grows more slowly with time.

I also feel that social breaching experiments are another interesting way of increasing the area of my "map", since they help me test the robustness of my social models in situations that people are unaccustomed to. Hackers often perform these sorts of experiments to test the robustness of security systems. In fact, a low level of potentially embarrassing hacking is probably optimal for keeping a security system robust, although even then, defenders may pay too much attention to certain known models of hacking, prompting potentially malicious hackers to dream up new ones.

With possibility space, you could encode the conditions of an environment as a point in a k-dimensional space such as (1,0,0,1,0,…), where 1 indicates the presence of some variable in a particular environment and 0 indicates its absence. We can then apply Huffman coding, weighting each combination of conditions by how frequently it occurs among the environments we encounter: less probable environments get longer Huffman codes, i.e. higher values of entropy/information.
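As a concrete sketch (the environment bit-strings and frequencies below are invented purely for illustration), Huffman coding assigns short codes to common condition-combinations and long codes to rare ones:

```python
import heapq

def huffman_codes(freqs):
    """Build Huffman codes from a {symbol: frequency} dict.
    Rare symbols end up with longer codes (more bits of information)."""
    # Heap entries: (frequency, tiebreak, tree); a tree is either a
    # symbol string (leaf) or a (left, right) tuple (internal node).
    heap = [(f, i, sym) for i, (sym, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        f1, _, t1 = heapq.heappop(heap)
        f2, _, t2 = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, tiebreak, (t1, t2)))
        tiebreak += 1
    codes = {}
    def walk(tree, prefix):
        if isinstance(tree, tuple):          # internal node
            walk(tree[0], prefix + "0")
            walk(tree[1], prefix + "1")
        else:                                # leaf symbol
            codes[tree] = prefix or "0"
    walk(heap[0][2], "")
    return codes

# Hypothetical environments written as presence/absence bit-strings,
# with made-up frequencies: "100" is mundane, "001" is a rare one.
envs = {"100": 0.55, "110": 0.25, "011": 0.15, "001": 0.05}
codes = huffman_codes(envs)
# The rare environment "001" receives a longer code than the common "100".
```

The rare environment ends up with a 3-bit code while the common one gets a single bit, matching the intuition that improbable corners of possibility space carry more information.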

As we know from Taleb's book "The Black Swan", many people underestimate the prevalence of "long tail" events (which are often part of the unrealized portion of possibility space, and have longer Huffman codes). This causes them to over-rely on Gaussian distributions even in situations where those distributions are inappropriate, and this over-reliance is often cited as one of the factors behind the recent financial crisis.
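A rough numeric illustration of how large the underestimate can be (the distribution choices and parameters here are invented, not drawn from Taleb): for an event ten "typical units" out, a Gaussian model and a power-law (Pareto) model disagree by more than twenty orders of magnitude:

```python
import math

def gaussian_tail(k):
    """P(Z > k) for a standard normal variable, via the
    complementary error function."""
    return 0.5 * math.erfc(k / math.sqrt(2))

def pareto_tail(x, alpha=2.0, xm=1.0):
    """P(X > x) for a Pareto (power-law) distribution with minimum xm."""
    return (xm / x) ** alpha if x >= xm else 1.0

# Probability of an event at 10 "typical units":
thin = gaussian_tail(10.0)  # effectively impossible under the Gaussian map
fat = pareto_tail(10.0)     # a plain 1-in-100 event under the power law
# A modeler who assumes the Gaussian underestimates this tail event
# by a factor of more than 10^20.
```

The point is not these particular numbers but the structure of the error: the Gaussian map declares a region of possibility space "practically empty" that a fat-tailed territory visits routinely.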

Now, what does this investigation of possibility space allow us to do? It allows us to re-examine the robustness of our formal system: how sensitive or flexible the system is in continuing its duties in the face of perturbations of the environment we believe it applies to. We often have a tendency to overestimate the consistency of the environment. But if we consistently try to test the boundary conditions, we might better estimate the "map" that corresponds to the "territory" of different (or potentially novel) environments that exist in possibility space, but not yet in realized possibility space.

The thing is, though, that many people have a habitual tendency to avoid exploring boundary conditions. The space of realized events is always far smaller than the entirety of possibility space, and it is usually impractical to explore all of possibility space. Since our time is limited, and the payoffs of exploring the unrealized portions of possibility space are uncertain (often time-delayed, and subject to hyperbolic discounting, especially when the payoffs may only come after a single person's lifetime), people often don't explore these portions of possibility space (although life extension, combined with various creative approaches to lowering people's time preference, might change the incentives). Furthermore, we cannot empirically verify unrealized portions of possibility space using the traditional scientific method. Bayesian methods may be more appropriate, but even then, people may plug the wrong values into Bayes' formula (again, perhaps by over-assuming continuity in environmental conditions). As in my earlier example about hacking, it is all too easy for the designers of security systems to use the wrong Bayesian priors when they are being observed by potential hackers, who may have an idea of how to exploit those priors.
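To make that last point concrete (all the rates below are invented for illustration), a defender's posterior confidence that an alert signals an attack collapses once attackers, having observed the system, learn to evade detection, i.e. once the defender's assumed likelihoods no longer describe the territory:

```python
def posterior(prior, tpr, fpr):
    """P(attack | alert) by Bayes' rule, given the base rate of
    attacks (prior), the detection rate (tpr), and the
    false-alarm rate (fpr)."""
    evidence = tpr * prior + fpr * (1 - prior)
    return tpr * prior / evidence

base_rate = 0.001  # assumed base rate: 1 in 1000 requests is an attack

# The defender calibrates assuming a 90% detection rate...
assumed = posterior(base_rate, tpr=0.90, fpr=0.01)

# ...but attackers who have studied the system evade most detection.
actual = posterior(base_rate, tpr=0.20, fpr=0.01)

# 'assumed' exceeds 'actual' several-fold: the defender trusts
# alerts far more than the adapted environment warrants.
```

The formula itself is fine; the failure is in feeding it likelihoods calibrated on an environment the adversary has already moved past.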

