Simfish/InquilineKea's Thoughts

24 May, 2011 20:36
Filed under: Uncategorized

Well, truth is, a lot of things cost money because you have to motivate people to do them.

But once you bring in robots (or cyborg animals – yes, they *do* exist, and may be FAR cheaper because they can reproduce and sustain themselves), you don’t have to keep paying them to do work, and the cost of extracting resources is limited only to the cost of building them, keeping them alive, and fueling them. In other words, you can get them to pick up all sorts of random scraps (or to dig through garbage heaps).


2 May, 2011 21:29
Filed under: Uncategorized

we need a better way to measure grit

grit is persistence and determination.

and yet, what if one is uneven? one can have grit that fails for a SINGLE week. that can result in the COMPLETE failure of an ENTIRE quarter. but in the grand scheme of things, if you fail for a single week, you haven’t lost much at all.

i don’t think that anyone can deny that i have grit, if they look at my transcript. rather, my unevenness is what’s obvious in my transcript. the thing is, i don’t think any researcher has managed to disentangle the evenness of someone’s grit from its overall strength. I can be determined to do something over a timescale of years, and ultimately reach it (whereas others would give up – yes, including those who can retain an amazing consistency in “grit” for a quarter without ever wavering). But I am more prone to failing than others are, even though I would eventually succeed if given more time.

In other words… It’s ENTIRELY possible that PersonA has the grit to do something for 6 years, even if he may not have the grit to carry it CONSISTENTLY for 3 months without breaks. Meanwhile, personB may have the grit to CONSISTENTLY do it for 2 years (WITHOUT breaks), but be without the grit to carry it on for 6 entire years.

And of course, the question is, which matters more for success? Truth is, personB’s style makes the person more vulnerable to burnout. It’s also a common style among those who feel pressured, and that pressure often hurts creativity.

the thing is, people often DO plan out breaks. but it really is often difficult to anticipate when you need a break. You generally need breaks when you hit dead-ends, after all, and it’s impossible to project when you’ll hit a dead-end.

an interesting “paradox”
April 30, 2011, 7:01 pm
Filed under: Uncategorized

smart people are better at distinguishing the "gems" from the "bullshit" in less credible sources. In less credible sources, you can find real gems that you can’t find anywhere else (because there will be ideas that haven’t been well-researched or confirmed). But at the same time, smart people *look* stupid whenever they read the less credible sources, because it’s usually only the less intelligent who even believe most of the less credible claims. of course, "less credible" is open to interpretation. a place like or autoadmit looks "uncredible" to the unguided observer. but both sites also contain true gems that you really can’t find on any more "legit" site.

16 April, 2011 13:06
Filed under: Uncategorized

Wow, I’m so amazed.

I entertained an alternative hypothesis about something, and everything fell into place.

The sad thing is that it took me THIS long to entertain it. and it was NOT a difficult hypothesis to entertain at all. it just conflicted with some WEIRD belief of mine, a belief that wasn’t EVEN strong. The truth is that I eventually assumed the STRONGER version of the belief – the version I was more reluctant to accept (and ALSO the version that was more absurd to begin with – I was EMBARRASSED that I could think or even BELIEVE such thoughts). this stronger version later turned into a WEAKER version of the same hypothesis (by WEAKER, I mean that its basic ASSUMPTIONS were weaker). And everything then really DID fall into place. I need to note this. Damn it, this could explain one reason why crazier people CAN be more creative. you know, crazy things like Dyson spheres and all that. EVEN the discovery of benzene’s chemical structure may have started out with a brief attempt to entertain an extremely crazy hypothesis.

29 January, 2011 20:43
Filed under: Uncategorized

How do you operationalize a measurement anyways?
by X on Saturday, January 29, 2011 at 2:15pm
First of all, you have to define your output. It can be a scalar, a vector, or a tensor.

Of course, you can simply determine which criteria are relevant and output these criteria as yet another vector of relevant criteria. That preserves information, and is the most useful metric for people who are attuned to comparing the specific facets of these criteria.

Or you can apply a metric (a distance metric such as the 2-norm – but it can also be a p-norm or a Hamming Distance or whatever) and output a score as a scalar value.

And you can assign different weights to each criterion (there are many ways to weight, not all as simple as the case where the weights all sum to 1). This, of course, presupposes that the criteria take the form of scalar quantities. Sometimes the relevant criteria are more complex than that.
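
As a toy sketch of the two ideas above (a p-norm metric plus criterion weights), here’s some hypothetical Python – the function name and the simple sum-to-1 weighting are my own illustrative choices, not anything standard:

```python
def weighted_score(criteria, weights, p=2):
    """Collapse a vector of criterion scores to a scalar via a weighted p-norm.

    criteria: list of scalar criterion values
    weights:  list of non-negative weights (renormalized here to sum to 1,
              the simple case mentioned in the text)
    p:        which p-norm to use (p=2 is the 2-norm / Euclidean case)
    """
    total = sum(weights)
    normed = [w / total for w in weights]  # simple case: weights sum to 1
    return sum(w * abs(x) ** p for w, x in zip(normed, criteria)) ** (1.0 / p)

# Hypothetical example: three criteria scored 0-10, second weighted most heavily.
print(weighted_score([7.0, 9.0, 4.0], [1.0, 2.0, 1.0]))
```

Swapping in a different p (or a Hamming-style count of mismatches) changes the metric without changing the overall shape of the operationalization.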

And like what the NRC did with its grad program rankings, you can produce multiple outputs (the R and S criteria). Actually, the NRC did something more complicated than that. For the R criterion, it asked professors which schools are most prestigious. Then it looked at the criteria characteristic of more prestigious schools, and ranked schools according to which ones had the “highest amount” of the criteria most prevalent in the more prestigious schools. For the S criterion, it simply asked professors which criteria were most important, and ranked schools according to how much they contained the desirable criteria. And of course, users can rank schools in any order they want simply by assigning weights to each value (although I think this is the simple case where all coefficients sum to 1). In terms of validity, I’d trust the S metric over the R metric, since it is less susceptible to cognitive biases. [1]

This, of course, is what’s already done and well-known. There are other ways of operationalizing output too. What’s often neglected are the effects of 2nd-order interactions. Maybe there’s a way to use 2nd-order interactions to operationalize output (maybe there are ways in which this is already done).

Of course, even 2nd-order interactions are not enough. 2nd-order interactions tend to be commutative (in other words, order does not matter). However, there are situations where the order of the 2nd (and higher) order interactions does matter.

And then, of course, you also have geometric relationships to consider. Geometric relationships may be as simple as taking the “distance” between two criteria (in other words, a scalar value that’s the output of some metric). Or they could be more complex.

And another relationship too: probabilities. Every measurement is uncertain and these uncertainties may also have to be included in our operationalization.

Also, weights are not simply scalars. Weights can also be functions. They can be functions of time, or functions of the particular person, or whatever. These multidimensional weights must still sum to 1, so when the weight of one goes up, the weight of the others goes down.
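
A minimal sketch of the weights-as-functions point (names are hypothetical; I’m just re-evaluating the weight functions at each time and renormalizing so they always sum to 1, which makes one weight’s rise another’s fall):

```python
def time_varying_weights(raw_weight_fns, t):
    """Evaluate weight *functions* at time t and renormalize to sum to 1.

    raw_weight_fns: list of functions t -> non-negative raw weight.
    Because of the renormalization, when one raw weight rises,
    the normalized shares of the others necessarily fall.
    """
    raw = [f(t) for f in raw_weight_fns]
    total = sum(raw)
    return [r / total for r in raw]

# Hypothetical example: one criterion grows in importance over time,
# the other stays constant.
fns = [lambda t: 1.0 + t, lambda t: 1.0]
print(time_varying_weights(fns, 0.0))  # equal shares at t = 0
print(time_varying_weights(fns, 3.0))  # first criterion dominates at t = 3
```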

Also, even “scalar” outputs are not necessarily single point values. Rather, they can be draws from probability distributions (again, I was quite impressed with the uncertainty range/confidence interval outputs of the NRC rankings).

Maybe studying ways to *operationalize* things is already a domain of intense interdisciplinary interest.


Anyways, some examples:

The DSM-IV is fundamentally flawed. Of course, every operationalization is flawed – some are still good enough to do what they do despite being flawed. So why single out the DSM-IV? Because in order to get diagnosed with some condition, you must only fulfill at least X of Y different criteria. There’s absolutely nothing about the magnitude of each symptom, or the environmental dependence of each symptom, or the interactions the symptoms can have with each other.
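
To make the complaint concrete, the DSM-style rule reduces to something like this sketch (hypothetical names; an “at least X of Y” count that ignores magnitude, environment, and interactions):

```python
def meets_diagnosis(symptoms_present, required_count):
    """DSM-style rule: diagnosed iff at least X of Y criteria are present.

    symptoms_present: list of booleans, one per criterion.
    Note everything this throws away: symptom magnitude, environmental
    dependence, and interactions between symptoms all vanish into a count.
    """
    return sum(symptoms_present) >= required_count

# A 5-of-9 style rule: two patients with wildly different symptom
# severities can receive the identical diagnosis.
print(meets_diagnosis([True] * 5 + [False] * 4, 5))  # True
print(meets_diagnosis([True] * 4 + [False] * 5, 5))  # False
```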

Furthermore, each operationalization (or definition, really) should take in parameters to specify potential environmental-dependence (the operationalization may have VERY LITTLE change when you vary the parameters, OR it could change SIGNIFICANTLY when you vary the parameters). I believe that people are currently systematically underestimating the environmental-dependence of many of their definitions/operationalizations. You can also call these parameters “relevance parameters”, since they depend on the person who’s processing them.

[1] This dichotomy is actually VERY useful in other fields too. For example, rating which forumers are the funniest in a forum.


Skepticism regarding coarse mechanisms
by X on Saturday, January 29, 2011 at 1:46pm
I like trying to elucidate neurobiological/cognitive/behavioral mechanisms at a finer structure. Coarse mechanisms are often less robust (we don’t know how they really interact, and in a DIFFERENT environment the preconditions of these mechanisms may uncouple – we may not even know what the preconditions are). The reason is this: necessary conditions include BOTH the preconditions and the GEOMETRIC+combinatorial arrangements of those preconditions – where the preconditions are with respect to each other. This is a good reason to be skeptical of the utility of measuring things that we can only measure coarsely, such as IQ, since we still don’t know the GEOMETRIC+combinatorial arrangements of the preconditions of high IQ (that being said, I still do believe most of the correlations in this particular environment).

In other words, having all the preconditions is not sufficient to explain the mechanism. You have to have the preconditions *arranged* in the *right* way – both geometrically and combinatorially (inclusive of order effects, where the preconditions have to be temporally/spatially arranged in the right way – not just with each other, but also with the rest of the environment).


If the mechanism preserves itself even *after* we have updated information about its finer structure, then yes, we can raise our posterior probability that the mechanism applies.
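
That update is just Bayes’ rule; here’s a toy sketch with made-up numbers (nothing here comes from actual data – it only shows the direction of the update):

```python
def posterior(prior, p_evidence_given_mech, p_evidence_given_not):
    """Bayes' rule: updated probability that a mechanism applies,
    after observing that it survived a finer-structure test."""
    p_evidence = (prior * p_evidence_given_mech
                  + (1 - prior) * p_evidence_given_not)
    return prior * p_evidence_given_mech / p_evidence

# Hypothetical numbers: the mechanism survives a finer-grained test that
# a wrong mechanism would only pass 20% of the time, so confidence rises.
print(posterior(0.5, 0.9, 0.2))
```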


Now, how is research in the finer mechanisms done? Is it more likely than others to be Kuhnian normal science or Kuhnian revolutionary science? Studying combinatorial/geometric interactions can be *very* analytically (and computationally) intensive.

What’s also interesting: finer mechanisms tend to be more general, even though finer means smaller scale. But it often takes large numbers of measurements before someone has enough data to elucidate the finer mechanism.

this is a sticky page
January 16, 2011, 7:54 pm
Filed under: Uncategorized

Hey, I’m just an intensely curious kid in astrophysics
Quora Profile (this is where I spend all my time now)
Less Wrong:

Some of my best threads on Physics Forums


books I’ve read:

journal articles i’ve read:

google profile:

Google reader shared items:

Science news: (URLs don’t really matter anymore, just google the titles)

Favorite blogs @

8 January, 2011 18:16
Filed under: Uncategorized

Also posted here:

Okay, so maybe you could say this.

Suppose you have an index I. I could be a list of items in belief-space (or a person’s map). So I could have these items (believes in evolution, believes in free will, believes that he will get energy from eating food, etc..) Of course, in order to make this argument more rigorous, we must make the beliefs finer.

For now, we can assume away a priori knowledge – that is, facts a person may not explicitly know, but could deduce simply by using the knowledge they already have.

Now, maybe Person1 has a map in j-space with values of (0,0,0.2,0.5,0,1,…), corresponding to the degree of his belief in items in index I. So the first value of 0 corresponds to his total disbelief in evolution, the second corresponds to total disbelief in free will, and so on.

Person2 has a map in k-space with values of (0,0,0.2,0.5,0,0.8, NaN, 0, 1, …), corresponding to the degree of his belief in everything in the world. Now, I include a value of NaN in his map, because the NaN could correspond to an item in index I that he has never encountered. Maybe there’s a way to quantify NaN, which might make it possible for Person1 and Person2 to both have maps in the same n-space (which might make it more possible to compare their mutual information using traditional math methods).

Furthermore, Person1’s map is a function of time, as is Person2’s map. Their maps evolve over time as they learn new information, change their beliefs, and forget information. Person1’s map can expand from j-space to (j+n)-space as he forms new beliefs on new items. Once you apply a distance metric to their beliefs, you might be able to map them on a grid, to compare their beliefs with each other. A distance metric with a scalar value, for example, would map their beliefs to a 1D axis (this is what political tests often do). A distance metric can also output a vector value (much like what an MBTI personality test does). If you simply took the difference between the two maps, you could also output a vector value that could be mapped to a space whose dimension is equal to the dimension of the original map (assuming that the two maps have the same dimension, of course).
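
One naive way to handle the NaN issue above: compare only the coordinates both people have actually encountered. This is just an illustrative sketch (the masking rule is my own assumption, and it does throw away information):

```python
import math

def belief_distance(map_a, map_b, p=2):
    """p-norm distance between two belief maps over the same index I.

    NaN marks an item a person has never encountered; those coordinates
    are skipped, so the comparison uses only shared beliefs.
    """
    diffs = [abs(a - b) for a, b in zip(map_a, map_b)
             if not (math.isnan(a) or math.isnan(b))]
    return sum(d ** p for d in diffs) ** (1.0 / p)

nan = float("nan")
person1 = [0.0, 0.0, 0.2, 0.5, 0.0, 1.0]
person2 = [0.0, 0.0, 0.2, 0.5, 0.0, 0.8]
person3 = [0.0, 0.0, 0.2, 0.5, nan, 0.8]  # never encountered item 5
print(belief_distance(person1, person2))  # 0.2 on the single differing item
print(belief_distance(person1, person3))
```

A subtler scheme might impute the NaN from the person’s other beliefs instead of dropping it, which would let both maps live in the same n-space.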

Anyways, here is my question: Is there a better way to quantify this? Has anyone else thought of this? Of course, we could use a distance metric to compare their maps with respect to each other (a Euclidean metric could be used if they have maps in the same n-space).


As an alternative question: are there metrics that could compare the distance between a map in j-space and a map in k-space (even if j is not equal to k)? I know that p-norms yield an absolute scalar value when you apply them to a matrix. But this is a somewhat different problem. And could mutual information be considered a metric?