Simfish/InquilineKea's Thoughts


January 29, 2011, 8:43 pm
Filed under: Uncategorized

How do you operationalize a measurement anyways?
by X on Saturday, January 29, 2011 at 2:15pm
First of all, you have to define your output. It can be a scalar, a vector, or a tensor.
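
A minimal sketch of what those three output shapes might look like (in Python/NumPy; the values are made up for illustration):

```python
import numpy as np

# Three possible shapes for an operationalized output (the values are purely illustrative).
scalar_output = 0.72                        # a single score
vector_output = np.array([0.9, 0.4, 0.7])   # one value per criterion
tensor_output = np.zeros((3, 3, 2))         # e.g. criterion x criterion x context
```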

Of course, you can simply determine which criteria are relevant and output their values as a vector. That preserves information, and it is the most useful format for people who are attuned to comparing the specific facets of those criteria.

Or you can apply a metric (a distance metric such as the 2-norm – but it can also be a p-norm or a Hamming Distance or whatever) and output a score as a scalar value.
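
For instance, a sketch of collapsing the difference between two criterion vectors into a scalar score with a few of these metrics (the vectors are hypothetical):

```python
import numpy as np

a = np.array([3.0, 1.0, 4.0])   # criterion vector for item A (hypothetical values)
b = np.array([2.0, 1.0, 0.0])   # criterion vector for item B

score_2norm = np.linalg.norm(a - b)            # Euclidean (2-norm) distance
score_pnorm = np.linalg.norm(a - b, ord=3)     # a p-norm, here with p = 3
score_hamming = int(np.sum(a != b))            # Hamming distance: count of differing entries
```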

And you can assign different weights to each criterion (there are many different ways to weight, not all as simple as the case where the coefficients of the weights all sum to 1). This, of course, presupposes that the criteria take the form of scalar quantities. Sometimes the relevant criteria are more complex than that.
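
A sketch of that simple scalar case, where the weights sum to 1 (the weights and criterion values are invented):

```python
import numpy as np

criteria = np.array([7.0, 3.5, 9.0])   # scalar values for three criteria (hypothetical)
weights = np.array([0.5, 0.2, 0.3])    # one simple scheme: the coefficients sum to 1
assert np.isclose(weights.sum(), 1.0)

weighted_score = float(np.dot(weights, criteria))   # collapses the criteria to one scalar
```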

And like what the NRC did with its grad program rankings, you can produce multiple outputs (the R and S rankings). Actually, the NRC did something more complicated than that. For the R ranking, it asked professors which schools were most prestigious. Then it looked at the criteria characteristic of the more prestigious schools, and ranked schools according to which ones had the “highest amount” of the criteria most prevalent in those prestigious schools. For the S ranking, it simply asked professors which criteria were most important, and ranked schools according to how much of the desirable criteria they contained. And of course, users can rank schools in any order they want simply by assigning their own weights to each criterion (although I think this is the simple case where all the coefficients sum to 1). In terms of validity, I’d trust the S metric over the R metric since it is less susceptible to cognitive biases. [1]
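
A toy version of that last step, where a user supplies their own weights and gets back a ranking. The schools and numbers are invented, and this is only the simple weights-sum-to-1 case, not the NRC’s actual procedure:

```python
import numpy as np

# Hypothetical scores for three schools (rows) on three criteria (columns).
schools = ["School A", "School B", "School C"]
scores = np.array([
    [8.0, 6.0, 9.0],
    [7.0, 9.0, 5.0],
    [6.0, 7.0, 8.0],
])
user_weights = np.array([0.6, 0.3, 0.1])   # how much this user cares about each criterion

totals = scores @ user_weights                        # one weighted score per school
ranking = [schools[i] for i in np.argsort(-totals)]   # best school first
```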

This, of course, is what’s already done and well known. There are other ways of operationalizing output too. What’s often neglected are the effects of 2nd-order interactions. Maybe there’s a way to use 2nd-order interactions to operationalize output (maybe there are ways in which this is already done).

Of course, even 2nd-order interactions are not enough. 2nd-order interactions tend to be commutative (in other words, order does not matter). However, there are situations where the order of the 2nd- (and higher-) order interactions does matter.
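
One way to picture the difference: a commutative pairwise interaction versus one where the order of the pair matters (both interaction functions are invented for illustration):

```python
import itertools

criteria = {"a": 2.0, "b": 3.0, "c": 5.0}   # hypothetical criterion values

# Commutative 2nd-order interaction: the product ignores order, f(i, j) == f(j, i).
symmetric = {(i, j): criteria[i] * criteria[j]
             for i, j in itertools.combinations(criteria, 2)}

# Order-dependent interaction: in general f(i, j) != f(j, i).
ordered = {(i, j): criteria[i] ** 2 / criteria[j]
           for i, j in itertools.permutations(criteria, 2)}
```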

And then, of course, you also have geometric relationships to consider. Geometric relationships may be as simple as taking the “distance” between two criteria (in other words, a scalar value that’s the output of some metric). Or they could be more complex.

And another relationship too: probabilities. Every measurement is uncertain and these uncertainties may also have to be included in our operationalization.

Also, weights are not simply scalars. Weights can also be functions. They can be functions of time, or functions of the particular person, or whatever. These multidimensional weights must still sum to 1, so when the weight of one goes up, the weight of the others goes down.
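
A sketch of weights as functions of time that are renormalized so they always sum to 1; the particular functions are arbitrary stand-ins:

```python
import numpy as np

def weights_at(t):
    """Time-varying weights for three criteria, renormalized so they always sum to 1."""
    raw = np.array([1.0 + t,        # this criterion matters more as t grows
                    2.0,            # this one stays constant
                    np.exp(-t)])    # this one decays away
    return raw / raw.sum()          # so when one weight goes up, the others go down

w_now, w_later = weights_at(0.0), weights_at(5.0)
```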

Also, even scalar outputs are not necessarily scalars. Rather, they can be reported as probability distributions (again, I was quite impressed with the uncertainty range/confidence interval outputs of the NRC rankings).
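
A sketch of reporting the output as a distribution rather than a point value: resample the measurements and the weights, recompute the score each time, and report an uncertainty range. This is only loosely in the spirit of the NRC’s interval outputs, not their actual method:

```python
import numpy as np

rng = np.random.default_rng(0)
criteria = np.array([7.0, 3.5, 9.0])              # hypothetical measured values

scores = []
for _ in range(10_000):
    noisy = criteria + rng.normal(0.0, 0.5, 3)    # assume some measurement noise
    w = rng.dirichlet([1.0, 1.0, 1.0])            # random weights that still sum to 1
    scores.append(np.dot(w, noisy))

low, high = np.percentile(scores, [5, 95])        # a 90% uncertainty range for the score
```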

Maybe studying ways to *operationalize* things is already a domain of intense interdisciplinary interest.

==

Anyways, some examples:

The DSM-IV is fundamentally flawed. Of course, every operationalization is flawed – some are still good enough to do what they do despite being flawed. And why is that? Because in order to get diagnosed with some condition, you only have to fulfill at least X of Y different criteria. There’s absolutely nothing about the magnitude of each symptom, or the environmental-dependence of each symptom, or the interactions the symptoms can have with each other.
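
A toy version of an “at least X of Y criteria” rule, which makes visible what gets thrown away: each symptom is flattened to present/absent, so magnitude, environment, and symptom-to-symptom interactions never enter. The threshold and symptom names are invented:

```python
# Hypothetical checklist: symptom -> present? (magnitude, context, and interactions are discarded)
symptoms = {
    "symptom_1": True,
    "symptom_2": False,
    "symptom_3": True,
    "symptom_4": True,
    "symptom_5": False,
}

X = 3  # need at least X of the Y = len(symptoms) criteria
diagnosed = sum(symptoms.values()) >= X
```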

Furthermore, each operationalization (or definition, really) should take in parameters that specify its potential environmental-dependence (the operationalization may change VERY LITTLE when you vary the parameters, OR it could change SIGNIFICANTLY when you vary them). I believe that people are currently systematically underestimating the environmental-dependence of many of their definitions/operationalizations. You could also call these parameters “relevance parameters”, since they depend on the person who’s processing them.
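
One way to make that dependence explicit is to have the operationalization take the environment (or the observer’s relevance weights) as an argument, so you can see how much the output moves when the parameters move. Everything here is a made-up illustration:

```python
import numpy as np

def operationalize(criteria, relevance):
    """Score a criterion vector; the 'relevance' parameters stand in for the environment/observer."""
    relevance = np.asarray(relevance, dtype=float)
    weights = relevance / relevance.sum()       # the parameters reweight the criteria
    return float(np.dot(weights, criteria))

criteria = np.array([7.0, 3.5, 9.0])                 # hypothetical measurements
score_env_1 = operationalize(criteria, [1, 1, 1])    # one environment / observer
score_env_2 = operationalize(criteria, [5, 1, 1])    # another; compare how much the score moves
```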

[1] This dichotomy is actually VERY useful for other fields too. For example – rating which forumers are the funniest in a forum.

==

Skepticism regarding coarse mechanisms
by X on Saturday, January 29, 2011 at 1:46pm
I like trying to elucidate neurobiological/cognitive/behavioral mechanisms at a finer structure. Coarse mechanisms are often less robust (we don’t know how they really interact, and in the presence of a DIFFERENT environment the preconditions of these mechanisms may uncouple – we may not even know the preconditions of these mechanisms). The reason is this: necessary conditions include BOTH the preconditions AND the GEOMETRY + combinatorial arrangement of these preconditions – where these preconditions are with respect to each other. This is a good reason to be skeptical of the utility of measuring things that we only measure coarsely, such as IQ, since we still don’t know the GEOMETRY + combinatorial arrangement of the preconditions of high IQ (that being said, I still believe most of the correlations hold in this particular environment).

In other words, having all the preconditions is not sufficient to explain the mechanism. You have to have the preconditions *arranged* in the *right* way – both geometrically and combinatorially (inclusive of order effects, where the preconditions have to be temporally/spatially arranged in the right way – not just with each other, but also with the rest of the environment).

==

If the mechanism preserves itself even *after* we have updated information about its finer structure, then yes, we can update our posterior probability of the mechanism applying.

==

Now, how is research in the finer mechanisms done? Is it more likely than others to be Kuhnian normal science or Kuhnian revolutionary science? Studying combinatorial/geometric interactions can be *very* analytically (and computationally) intensive.

What’s also interesting: finer mechanisms tend to be more general, even though finer means smaller scale. But it often takes large numbers of measurements before someone has enough data to elucidate the finer mechanism.
