Simfish/InquilineKea's Thoughts


January 31, 2010, 9:58 pm
Filed under: Uncategorized

edited out, post did not pass threshold of significance to be included here



January 31, 2010, 9:54 pm
Filed under: Uncategorized

There are two fundamental ways of learning:

(a) general to specific
(b) specific to general (e.g. case studies)

Scientific hypotheses (general) are motivated by experiments (specific). With data, one can hypothesize a trend and from that trend arrive at the general hypothesis.

One can also run this process in mathematics: specific results can motivate a hypothesis about the general structure, which can then be proved.
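As a toy sketch of this specific-to-general route (the example, sums of odd numbers, is just one I picked for illustration, not anything canonical): compute a few specific cases, notice the pattern, conjecture the general rule, then prove it.

```python
# A toy illustration of route (b), specific to general: compute specific
# cases, notice a pattern, and conjecture the general rule. The example
# (sums of the first n odd numbers) is an illustrative choice.

def sum_of_first_n_odds(n):
    """Sum the first n odd numbers: 1 + 3 + 5 + ..."""
    return sum(2 * k + 1 for k in range(n))

# Specific cases first...
for n in range(1, 8):
    print(n, sum_of_first_n_odds(n))   # prints 1 1, 2 4, 3 9, 4 16, ...

# ...which motivate the general conjecture sum = n**2, provable by induction.
assert all(sum_of_first_n_odds(n) == n**2 for n in range(1, 1000))
```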

Which way is faster? It depends on the person. It might be plausible that learning styles are “bunk” and that smarter students learn more efficiently through type (a), but it’s also quite plausible that this is not true (for one thing, learning depends on both intelligence and motivation/interest, and the motivation/interest component can make type (b) learning more efficient even for geniuses). I, for one, learn best through the “specific to general” method. As such, I believe I learn math better when it’s motivated by physical phenomena (in other words, learning math “along the way of doing science”) than when pursuing math first and then learning science (which is what I did, and which didn’t work as well as I hoped, especially since it killed my motivation). As I’m quite familiar with the climate trends of specific localities, I also learn the generalities of climate best through case studies.

And then, after learning the applications of a math field, one is more motivated to learn the specifics of the math behind the math, and one even gains more physical intuition through this learning route. The material actually means something when one learns through the second route.

It is also true, however, that route (b) can be taken too far, as is evident in the “discovery-based” math curricula, which generally produce poor results. When one is self-motivated, route (b) can be especially rewarding, but the selection of case studies matters: an improper selection can leave the general structure only minutely explored (it is also true that very few textbooks are written in a way that makes route (b) exciting to learn from). Textbooks generally present their material as ends, not as means to an end (except the crappy discovery-based math textbooks). Still, one can most certainly learn calculus through physics (especially div/grad/curl) and linear algebra through its applications, and a very smart (or lucky) person could design such a curriculum that would work for many people (though it is much easier to design such curricula for oneself than for a wide variety of personalities).

Nonetheless, route (b) is often stultifying. In fact, I sometimes feel impatient and would rather learn the math first. A person’s temperament may vary from time to time, finding type (a) rewarding at some times and type (b) rewarding at others.

===

learning how it’s done vs. why it’s done: what’s the optimal order of learning these things?

in grade school, you’re taught how it’s done; why it’s done comes later.
in college, the opposite happens. but sometimes it’s a lot more confusing that way, and it requires absorbing more details and multistep processes.
what is optimal? it obviously varies from person to person. it’s more “natural” to learn how it’s done first, but that’s only “natural” for phenomena discovered through hypothesis -> observation or through derivation. it’s not “natural” for phenomena discovered through serendipity, where one learns the result (and how to get it) before one learns why the result is the way it is. and in some cases, like quantum mechanics, one may never learn why it is the way it is.

of course, it feels more “natural” and “satisfying” to learn how it’s done, and it promotes habits that are helpful to further discovery. but learning the process AFTER learning the result is ALSO curiosity-satisfying, and it does not necessarily lead to the sense of “helplessness” that could allegedly come from learning the result first, time after time. that “helplessness” could come, but if one has internalized both approaches, it is far from inevitable, and learning the result before the explanation can then be faster and more efficient.
But in the end, it depends on person and context. Sometimes I feel more stifled when I learn the result before the explanation; sometimes I feel more stifled when I learn the explanation before the result. It is much easier to trick oneself into thinking one has learned the material if one has only learned the result (without the explanation); it is also easier to forget the material if one has only learned the result (though learning the explanation along with the result shouldn’t take much more time); and learning only the result is less challenging (so familiarity with the process carries better “signalling” value and makes one more absorbed in the process, so that one internalizes it better).

But again, once one has learned BOTH the process and the result, the signalling/internalization value is irrelevant. The only point of relevance is when one has learned one but not the other (which can happen, especially when people are lazy, slow, or time-constrained), or when one has partially learned one and learned the other more fully (which is very common). So in an environment where one has partially learned one and learned the other more, learning the process first may be more optimal, especially when people forget easily and quickly. But when one learns things completely, the order should not matter much (or it should depend on how much more time one spends doing it one way vs. the other, or on how rewarding the two orders are relative to each other).


January 6, 2010, 8:52 am
Filed under: Uncategorized

so one thing i’ve always wondered: in pharmacology, is dosing per unit of body weight (mg/kg) really valid? this assumes that the extra cells (fat or muscle cells) take up the drug at the same rate as all other cells, AND that blood vessel growth is proportional to weight growth. but i don’t think this assumption is totally valid. more fat might spring up new blood vessels, but what is the extra volume of all these blood vessels? well, fat tends to grow in “layers”: it distributes itself throughout the periphery of the body, so the blood vessel networks on the outer periphery have a high surface-area-to-volume ratio. that would make the blood vessels from fat cells overrepresent blood vessel growth relative to fat growth. of course there is an opposite trend too: the blood vessels serving new fat cells are new, but they’re not major new blood vessels; there’s a roughly fixed amount of major blood vessel mass in every person of a given height.

there’s another assumption too: that new blood vessel growth will trigger production of plasma + blood cells in proportion to the increased blood vessel volume.
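As a rough sketch of the contrast in question (the 70 kg reference, 100 mg dose, and 0.75 exponent below are illustrative assumptions, not data): linear mg/kg dosing versus the sublinear allometric scaling that pharmacologists sometimes use precisely because vasculature and metabolism don’t grow in proportion to weight.

```python
# A toy comparison of two dosing rules, illustrating why "dose proportional
# to body weight" is an assumption rather than a law. The 70 kg reference,
# 100 mg reference dose, and 0.75 allometric exponent are illustrative
# assumptions, not values from any spec sheet.

REF_WEIGHT_KG = 70.0
REF_DOSE_MG = 100.0

def linear_dose(weight_kg):
    """Dose scales linearly with body weight (the mg/kg assumption)."""
    return REF_DOSE_MG * (weight_kg / REF_WEIGHT_KG)

def allometric_dose(weight_kg, exponent=0.75):
    """Dose scales with weight**0.75 (allometric scaling), reflecting the
    idea that vasculature/metabolism grow sublinearly with weight."""
    return REF_DOSE_MG * (weight_kg / REF_WEIGHT_KG) ** exponent

for w in (50, 70, 100, 150):
    print(f"{w:>4} kg: linear {linear_dose(w):6.1f} mg, "
          f"allometric {allometric_dose(w):6.1f} mg")
```

The two rules agree at the reference weight and diverge as weight departs from it, which is exactly where the assumptions above start to matter.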



why is trend data so important?
January 3, 2010, 12:11 am
Filed under: Uncategorized

because the majority of the literature is in the past, and of that past, most of it is not in the recent past. trends are the best way to gauge the continuing applicability of the past literature.



model of scientific research
January 2, 2010, 10:47 pm
Filed under: Uncategorized

normal science: adding entries to the database of scientific knowledge (entries are categorized and analyzed according to current rules)

revolutionary science: change the rules, or design a totally new system of rules that makes the entries in the database more consistent with each other.

there are several levels of normal science. a researcher/principal investigator can allocate most of the “normal science” work to graduate students/others, since he can usually expect them to follow the procedures of normal science. occasionally the researcher chooses to examine the data more closely, as specific instances can provide “schemas/prototypes” that help clarify theory (and make the theory more salient and comprehensible).

Also, the “4 paradigms”: theory, experiment, simulation, and data mining/pattern recognition. technically the latter two can be considered subsets of experiment, but at the same time they also share some characteristics with theory. there is now a computer program that can derive physical laws from massive datasets; how does one categorize that? it’s clearly the fourth paradigm, but at the same time it creates theory (and theory is inspired by the desire to find a model/equation that makes the data consistent). simulation output can be analyzed too (simulations are pretty much experiments; the only difference is that simulations do not have to conform to the real world). in fact, a lot of “normal science work” consists of analyzing simulation output, as many simulations are ad hoc and haven’t been independently investigated by many people.
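A toy version of that fourth-paradigm idea (the hidden law and the candidate forms are my own illustrative choices, not any actual program’s method): generate noisy data from an unknown law, fit several candidate model forms, and keep whichever fits best.

```python
# A toy version of "deriving a law from data": fit several candidate model
# forms to noisy observations and keep whichever fits best. The hidden
# power law used to generate the data is an illustrative assumption.

import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(1.0, 10.0, 50)
y = 2.0 * np.sqrt(x) + rng.normal(0.0, 0.05, x.size)  # hidden law: y = 2*x**0.5

# Candidate "laws", each fit by least squares for its coefficient a.
candidates = {
    "y = a*x":       x,
    "y = a*x**2":    x**2,
    "y = a*sqrt(x)": np.sqrt(x),
    "y = a*log(x)":  np.log(x),
}

for name, basis in candidates.items():
    a = float(basis @ y) / float(basis @ basis)        # least-squares coefficient
    rmse = float(np.sqrt(np.mean((y - a * basis) ** 2)))
    print(f"{name:15s} a = {a:5.2f}, rmse = {rmse:.3f}")
# The sqrt candidate wins, "rediscovering" the form of the hidden law.
```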

Also, we now know that crowdsourcing can inspire research (usually through categorization/data mining/etc.). some sorts of crowdsourcing were implicitly used in empirical research (e.g. many fossils are not discovered by people actively looking for them; they are accidentally dug up). now crowdsourcing is more active (galaxy identification, etc.).

Technically, there are other important steps to furthering research too: developing the infrastructure of research (through engineers and programmers). a lot of that infrastructure is general-purpose and must be adapted to scientific uses.

===

Methods of scientific research:

– using an alternative paradigm to investigate data. sometimes the alternative paradigm describes the data better; sometimes the data is described equally well by multiple paradigms/interpretations

– development of infrastructure (and integration thereof). matlab, python modules, toolkits, etc.

– run simulations, see if simulation outputs confirm paradigm

– lit reviews (they get the most citations!)

– structure of object, structure of object’s interactions

Personality Psychology:

– Comparing self-assessments with external assessments, and assessing both with respect to accuracy

Psychopharmacology:

– Compare one drug’s effects in adolescence to its effects in adulthood

– bioavailability/pharmacokinetics/toxicology/etc. anything on its spec sheet

– effects of metabolites

– effects on other disorders

– possible chronic effects

CS:

– two systems are developed independently of each other. often the research (the lowly kind assigned to undergrads) is to link the systems up, by making them able to communicate with each other (directly or through output files), or by converting the code of one system into the code of the other.
– rewriting code so that it can be run by parallel/distributed processes (see the sketch below)
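A minimal sketch of that second item, using Python’s standard multiprocessing module (the prime-counting workload is an invented placeholder): the same embarrassingly parallel loop, written serially and then split across worker processes.

```python
# A toy example of the "rewrite code to run in parallel" chore: the same
# loop, serial vs. multiprocess. The workload (checking primality over a
# range) is an invented placeholder.

from multiprocessing import Pool

def is_prime(n):
    """Trial-division primality check (slow on purpose, to give real work)."""
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

NUMBERS = range(2, 200_000)

def count_primes_serial():
    return sum(is_prime(n) for n in NUMBERS)

def count_primes_parallel(workers=4):
    # Each worker gets chunks of NUMBERS; results are summed at the end.
    with Pool(workers) as pool:
        return sum(pool.map(is_prime, NUMBERS, chunksize=10_000))

if __name__ == "__main__":        # guard required on platforms that spawn processes
    assert count_primes_serial() == count_primes_parallel()
    print(count_primes_parallel(), "primes found")
```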



working memory
January 2, 2010, 10:02 pm
Filed under: Uncategorized

studies generally show that most g-related measurements (like working memory) increase up to age 14 and then decline. Also, brain size grows until age 14 and then starts shrinking (I would guess that it shrinks slowly at first and then at faster rates as one grows older).

What I find interesting is the digit span test. Here, we see that young chimpanzees outperform college students. This is one of the tests where ability increases up to around age 5 and then declines.

Other g-related measurements also increase to a certain age and then decrease. The age of peak measurement might vary according to test.

So this brings up a lot of questions. First of all, why do some of them peak at very young ages (i.e., why do they decline so early)? Surely the decline starts before aging-related effects kick in.

==

Anyways, there are g-factors for a lot of things. You can make them as specific or as general as you want: g-factors for RTS games, FPS games, all games, sports, etc. Some of these g-factors are formed of more malleable components than the g-factors of IQ tests; other g-factors are formed of more heritable components. Many also show a peak at a certain age, and then a decline.
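A toy sketch of what extracting a g-factor amounts to in practice (all data below are simulated, nothing real): generate test scores that share one latent ability, then recover that shared factor as the first principal component of the correlation matrix.

```python
# A toy illustration of a "g-factor": simulate scores on several tests that
# all load on one shared ability, then recover that shared factor as the
# first principal component. All data here are simulated.

import numpy as np

rng = np.random.default_rng(1)
n_people, n_tests = 500, 6

g = rng.normal(size=n_people)                      # latent general ability
loadings = rng.uniform(0.5, 0.9, size=n_tests)     # how strongly each test taps g
noise = rng.normal(size=(n_people, n_tests))
scores = g[:, None] * loadings + 0.6 * noise       # observed test scores

# The first principal component of the correlation matrix plays the role
# of the g-factor: one axis that all tests load on.
corr = np.corrcoef(scores, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)            # ascending eigenvalues
share = eigvals[-1] / eigvals.sum()
print(f"first component explains {share:.0%} of the variance")
print("test loadings on it:", np.round(np.abs(eigvecs[:, -1]), 2))
```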