Interest rates and taxes
A thought occurred to me today. The redistribution of wealth due to direct taxation and spending really doesn't matter as much as the redistributive factors built into the rules of society themselves.

Really, one of the most powerful observations is just how effective we can be at maintaining things a certain way. I think it's probably unsurprising that a certain group can be systematically held in poverty indefinitely. You just need a set of rules that can accomplish that. Rules are cheap.

I wanted to do the depreciation and effective tax rate thing. Assume that you're paying capital gains tax. You start off with a certain value, and in the extreme just assume that you're paying a certain fraction of the final value regardless of principal. You have two rates: inflation and the real return on your investment. Your final value before tax will then be:

P*(1+r1+r2)^n

You will pay a fixed tax rate on this, so you keep a fraction e of the value. So you'll get:

P*e*(1+r1+r2)^n

But the nominal value is pointless to look at. We really want the real return as opposed to the nominal return. Deflating the final value by inflation gives:

P*e*(1+r1+r2)^n/(1+r1)^n = P*e*(1+r2/(1+r1))^n

The first question is if your future value is greater than your present value. What set of assumptions could cause this?

F=P

P*e*(1+r2/(1+r1))^n = P

1+r2/(1+r1) = e^(-1/n)

r2 > (e^(-1/n) - 1) * (1+r1)

This is the breakeven rate of return that you need. If n goes to infinity, then you need only slightly more than a 0% real return on your capital. If the time frame is 0, then you need an infinite rate of return.
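Just to sanity-check the formula numerically, here's a minimal Python sketch (the 25% tax rate, 3% inflation, and holding periods are made-up numbers, purely for illustration):

# Breakeven real return when a tax is taken on the final nominal value.
# e = fraction you keep after tax, r1 = inflation, n = years held.
def breakeven_return(e, r1, n):
    return (e ** (-1.0 / n) - 1.0) * (1.0 + r1)

e = 0.75   # keep 75% after a hypothetical 25% tax
r1 = 0.03  # hypothetical 3% inflation
for n in (1, 5, 10, 30, 100):
    print(n, "years:", round(100 * breakeven_return(e, r1, n), 2), "% real return needed")

As expected, the required real return blows up for short holding periods and drifts toward zero as the holding period gets long.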

The concept of "capital" - why we always have both too much and too little
Today I was in a downtown area, parked in an old rickety parking deck, walked past several old rickety buildings, and then arrived at my destination - one of the oldest buildings around there, with a dilapidated interior filled with progressive urbanites.

Staring at the cracks in those buildings I thought to myself "this creates wealth". Of course, not the cracks themselves, which may be a liability, but the existence and the history of the building creates wealth. As I was thinking this, it occurred to me that a large reason that cities, and downtowns, are such wealth creators is not just the density and the social and economic consequences of that density; the history is a wealth creator too. More specifically, the long history of capital investment is a wealth creator. The reason those old buildings are so economically powerful (even though one wonders how they stay standing) is because not once in their lifetimes did someone have to rebuild them.

Not only did those buildings not have to be rebuilt, but they generated revenue as they stood there. Not only did these buildings have revenue generating potential ever since they were built, but their potential to generate revenue has increased dramatically as the value of the location increased. The value of the location increased, in no small part, due to the fact that wealth creators such as those buildings existed. The profit that comes from an amortized asset is the best kind of money flow in existence - almost tantamount to free money.

I thought to myself that "capital" is a very abstract concept, but yet one of the most definably useful abstract concepts in existence. Those buildings are capital. Developed economies are held up by capital. In fact, that's an understatement. Developed economies are held up by a continent of capital. Capital keeps the lights on and even checks out my groceries these days. We have more capital than we ever had at any previous time in history.

The Power of Capital

Capital is fundamentally a means to weather bad times. Yet capital creates recessions.

Capital is the difference between hunter-gatherers and early agrarian societies. Hunter-gatherer tribes had very little capital to speak of. Alternatively, the land is a renewable resource that is the capital that supports hunter-gatherers; however, this resource never grows over time. Farms are actually a form of human-created capital. Yes, farms may harm the natural balance of the Earth, but they provide something definably useful to humans, and they have the basic characteristics of capital. Unlike hunter-gatherers, farmers gained productivity as they invested more of their labor. This capital caused wars, since humans in early history realized that ownership is never a guaranteed property of an asset - a lesson we seem to forget in today's societies.

Capital causes recessions. And depressions. If there were no capital inputs to the economy there would only be labor. Everyone would always buy and produce the same amount, since there would be no way to save excess productivity and there would be no capital assets to fall back on to weather stormy times.

Capital is lent and borrowed

To accountants the world is filled with debits and credits. A creditor gives their capital (usually in money form) to a debtor. Like forces in physics, there are two sides to every transaction. Because capital changes hands in the form of debt, capital has different faces, and different costs, to different people.

Capital is both Expensive and Cheap

The Federal Reserve can make short term loans to banks at rates in the neighborhood of 0.25% annual interest. This is the pinnacle of the cheap capital that the rich have access to. Short term loans to the poor, in the form of payday advances and, recently, micro-credit, are typically in the neighborhood of 70% annual interest. I have heard numbers going as high as 160%.

If I borrow $5 from a friend and agree to pay back $6 next week, then on an annualized, compounded basis I will be paying a yield of roughly 1,300,000%. Loan sharks get very high yields.
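For what it's worth, the arithmetic behind that figure, as a one-off Python check (assuming the 20% weekly rate compounds over a full 52-week year):

# Borrow $5, repay $6 one week later: a 20% weekly rate.
weekly = 6.0 / 5.0                 # growth factor per week
annual = weekly ** 52 - 1.0        # compounded over 52 weeks
print(round(100 * annual), "percent per year")   # roughly 1,310,000 percent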

So-called T-bills - short-term US Treasury issued debt - have yields of 0.01%. No really, I didn't make this up. The US was just downgraded from a AAA to a AA+ credit rating. It probably won't change the value of the capital (or lack thereof) represented in the form of Treasury debt. To be fair, the long term rates are closer to 4%. Whether that is above or below inflation is up for debate.

http://www.treasury.gov/resource-center/data-chart-center/interest-rates/Pages/TextView.aspx?data=yield

You may have wondered if the price of capital is related to inequality in the world. More precisely, if the difference in price of different forms of capital is related to inequality. Personally, I wonder if anything could be more obvious.

The World's power struggles are over capital

The United States is thought, by many, to have too little capital to its name compared to its debt. The US also used several federal programs to encourage people to invest in homes - a physically definable form of capital. It did this by insuring debt to drive interest rates down. This made homes a more attractive form of capital.

China, like the US, has a problem. It doesn't have a good place to put its capital. The citizens of China have poured money into real estate because they don't have a good alternative. This is the fundamental nature of a housing boom, but it's really a problem of "too much" capital in the economy; you can read about it here:

http://chovanec.wordpress.com/

China has transitioned to a more developed economy. Capital is the fundamental item needed to do so. But people in China don't get a good return on capital. Households find themselves with few options and put money into inferior investments, like real estate developments, because of this fact.

Capital has many, many prices. There is the cost to borrow as well as the payment to lend, and these values exist for both the poor and the rich. My estimation is that the poor get a return on capital of around a money market account, which is 2-3% these days, and even that might be generous. The poor are also charged to receive the benefits of their capital, in the form of fees and difficulty of entry. The price the poor pay for capital is almost certainly 10% and greater, as in the case of credit cards. The rich get returns of 8%, and quite often more. The returns on accounts with $100 million or more are almost consistently more than 10%, which is an impressive figure. A very impressive figure. The middle class gets returns close to the general market performance, which has been good in recent decades and may be greater than 5% over the lives of most baby boomers. The price the rich pay for capital, on the other hand, is quite low. Corporate America has only been limited in the amount of capital it can borrow by how much people were willing to lend it.
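To put rough numbers on that gap, here's a small Python sketch compounding a dollar at the kinds of rates mentioned above (the specific rates and the 30-year horizon are just illustrative assumptions, not data):

# Compound $1 for 30 years at the rough rates cited above.
rates = {"poor (money market, ~2.5%)": 0.025,
         "middle class (market, ~5%)": 0.05,
         "rich (~8%)": 0.08,
         "very large accounts (~10%)": 0.10}
years = 30
for label, r in rates.items():
    print(label, "->", round((1 + r) ** years, 1), "times the starting capital")

Roughly 2x versus 17x over a working lifetime, which is the whole point about the price of capital.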

Capital is needed to solve global warming

One of the biggest problems, if not the biggest problem, that faces us today is the fact that we use hydrocarbon resources from the ground for energy. Extracting these resources requires capital. We already have that capital in place, although we will probably need more. However, natural gas plants and automobiles have not paid for the capital needed to produce the fuel they run on.

Devices that use hydrocarbons are "pay as you go" because they are sold absent the capital investment needed to produce the hydrocarbons. The "peakist" movement, which evangelizes the demise of society due to reaching the limits of hydrocarbon production, is predicated on a belief that the capital to produce the hydrocarbons to power the cars sold today is a myth.

Those hydrocarbons are a myth because the capital that companies like Exxon hold today will be insufficient to keep producing the energy we need. Alternatively, the price of the capital we must invest to run our cars and our economy is expensive beyond anything ever yet comprehended by society.

Unlike a car or a coal plant or a natural gas plant, carbon free power plants do not require expensive fuel. What do they require?

Capital.

The fundamental thing needed to stop global warming is a capital investment. This investment is in renewable energy sources, nuclear power plants, energy efficiency, and much, much more. The thing that is indisputable is the fact that this is an investment. We have to invest in order to produce energy in a way that doesn't use oil or coal or natural gas.

The fuel for Carbon-free energy is capital.

Environmentalists that urge action to stop global warming are imploring the world to invest more now, and if we do so, we will have that value later.

A fully-constructed wind turbine produces value and uses very little labor.

Why are we not building nuclear power plants faster? I'll give you a hint - the answer starts with a "C".

Imbalances in capital have remained unsolved and caused problems

Let me tell you a story of de-leveraging. A fictional story.

It was the fall of 2008 and the media was abuzz with the threat of financial collapse. While some companies collapsed, the majority of the corporate world was threatened by the situation, but still intact. Seeing the problem, Earthlings had a meeting.

In the meeting, those who held bonds and those who held equity were represented. A process of "de-leveraging" was necessary, so it was determined that the companies of Earth would convert some amount of their rigid debt - bonds - into risk-bearing equity - stocks. The representatives of the bondholders were asked to order themselves in terms of who was most willing to trade their bonds for stocks. The amount that needed to be converted was assessed and the right number of bondholders were turned into stockholders.

As such, the economy returned to its normal state and a major recession was averted.


Now, this is a silly story, I know. Unfortunately, reality is sillier. The difference in the yields on capital is generally attributed to risk. A large part of the financial problems we have is caused by people buying assurances of the security of their capital from others. The problem is that the security is not represented by a building, or a power plant, or a ship. In fact, those are thought to be fairly insecure. The reason is largely that they produce a product whose price fluctuates.

I often wonder why markets seem less efficient than ever today. The price of almost any commodity bounces all over the place. Electricity will routinely trade in a price range that varies by a factor of 4, and then there are spikes that go up by a factor of 16 and more. That's not quite so puzzling to me; the more puzzling thing is the fact that consumers pay a single price that changes very little from month to month.

Banks are not kept in business by capital. They are kept in business by discrepancies, or differentials in price. The world today is full of discrepancies.

typing in Dvorak
So recently I started trying to learn Dvorak. It's quite fun once you get over the initial little hump. I've been amazed at both how quickly I've been picking this up and how mind-bending the activity has been.

The worst is most definitely the "s". The fact that the key made so little sense in the first place makes the movements really hard to unlearn. Basically my right pinky finger has been that annoying group member who picks an absurdly easy task so he can coast. Now, after 15 years of that, I'm putting him to work and he does nothing but complain. I constantly mistype the s.

Another strange thing is the shaking. I haven't had to actually wait on myself to type in years. Now when I'm slowly recalling where a letter is I'm terribly impatient and I can't help but do an annoying shaking of the leg.

Well I hope you liked that story, I'm done with my practice for today.

The fuzzy future of education and skill
Recent deliberations in my department regarding the difference between 'education' and 'training' have caused me to reflect on the almost certainly changing premium on the ability to effectively accomplish tedious tasks and to manage complexity.

You have theory.
Then there is the application of theory to problems.
Then there is the real world.

How much time should we be spending with equations? To make a much harder question, how much time should we be spending on the translation of calculus to computers? I think the key question is how we should go about doing that translation.

I had a professor once who harped on the idea that "computers can't do calculus". I understand that. Computers are dense circuit boards manufactured to behave as a useful formal system. On top of that, through programming, and by sacrificing clock cycles compared to the machine's native assembly language, computers can accurately perform as any other kind of formal system, in the enigmatic "Turing complete" sense. While they can't do calculus, I think the more fundamental distinction is that they can't do real numbers. Amusingly, the coveted metric for computational power is most often FLOPS, or FLoating-point Operations Per Second, which is the rate at which "pretend" real number operations are performed.

In order to compare computing power we compare the rate at which a computer does something close to the very thing that it's faulted for not being able to do. But it does that very well.
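A tiny Python illustration of the distinction (just a sketch, nothing deep): the machine never "does" the derivative, it grinds out floating-point arithmetic that merely approaches it.

import math

# Approximate d/dx sin(x) at x = 1 with a finite difference,
# then compare against the exact answer, cos(1).
x = 1.0
for h in (1e-1, 1e-3, 1e-5, 1e-7):
    approx = (math.sin(x + h) - math.sin(x)) / h
    print(h, approx, "error:", abs(approx - math.cos(x)))

The answer gets close, but it is always an approximation of calculus, never calculus itself.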

I'm sure that human minds can't do calculus in the theoretical sense either, but we do something profoundly different. We consider calculus. But this might be a little overcomplicated. The universe is often said to be describable with calculus. At least to the extent that we have probed so far, this seems to be accurate. But you know what, I think that our minds are designed to consider the behavior of the universe, not the behavior of calculus. As a side consequence of our ability to consider the behavior of the universe, we have the ability to also consider the behavior of systems described with calculus (some easier than others, and some only after significant effort and experience).

We have a TED talk for this kind of thing, about math education and how we should change it:



In the video, Conrad Wolfram advocates changing the kind of interface we put between students and the mathematics, IMO. I think it's admirable to advocate this stuff for young students getting interested in whatever it is they're getting interested in, but I think that the HIGHEST level of education is visibly starving for it and about to get vocal in its dissatisfaction with theory as it's been presented so far.

The problem with theory is that we are limited in using, say, an equation with 7 independent variables to meaningfully analyze a real-world situation. I think the role of approximations in the entire suite of tools we have is a huge element of debate here. Scientific computing just makes approximations of real systems, but the fact of the matter is that many of those can be quite good even for very complicated systems. If not, what are all of the researchers doing right now?!

When we're taught theory, we often have to incorporate numerous ways of breaking the fully-described systems down into something that can be used to output a definable answer. This is the point where I think we start going wrong. I think that, to an extent, we should be able to teach a higher-level view of a physical system with the most descriptive mathematical model and, to an extent, just leave it there. And should a solution be desired, learn how to describe the system fully and completely to a computer program that can then accurately predict the behavior. All in all, the point should be to teach young engineers how to analyze the behavior of physical systems and get them on track to a useful application of that understanding.

--
The other part of the entire discussion is that problems get solved, and after so many times of being solved, teaching more people to solve them isn't doing much more good unless it gives a head start on solving a NEW problem. The critical issue that we have not correctly responded to is the increasing complexity of the problems that need to be solved, and correspondingly, the "fuzziness" of the methods that need to be applied to meaningfully address them.

We need to train a new generation of engineers to address large and complicated systems, and a basis for learning coming from a systems view is the most critical. We need engineers who are eager to paint a picture of very complicated systems by knowing simple behavior laws of the constituents, and THEN be able to make useful observations about the large system.

We are overburdened to go about solving such problems without a nearly free-flowing virtual research world. And we need to devote substantial resources to paving the way for such a thing.

Why we came from Mars (rationally)
I developed some comments watching this video here:

http://www.youtube.com/watch?v=9RExQFZzHXQ

These are two well-established scientists and media personalities (yes this works). I thought it was interesting that Tyson seriously entertained the idea that life could have come from Mars.

The argument is that there are 100,000s of Martian rocks (100s of tons) that should have fallen to Earth over the years, during the period of heavy bombardment. He makes the point that Mars would have been fertile for life before Earth was, which I understand very, very well. Then he says that it would be a disappointment to find fossils of life on Mars made of the same DNA, because that would not constitute a separate development of life.

I want to take this concept and run with it. Notably, I'm much more interested in the specialness of the idea of life on Earth coming from Mars. One would say, "but that's fantastically improbable". I agree. That's why it's a good theory.

The reality of our universe contains a number of peculiarities. Notably, let's look into these ideas.
1. Our Universe is fine-tuned for life, which is reasonable by the anthropic principle
2. Space is, as far as we can tell, dark with respect to life around us - we're alone as far as we know
3. We had more time than we needed to evolve
4. The time scales needed for humans to develop from language to the LHC were virtually nothing

These 4 comprise the evidence I will need to make my claim. Most of them need some more explanation so I will be returning to all of them. Firstly, Tyson made some interesting points in the video about the elements of life being roughly the same elements that are the most common to the universe. This resulted in the observation that Silicon based life is possible, but completely unnecessary. Generally, the existence of Carbon is just one of the many reasons our universe is fertile for life, very much so in fact. One would expect that given a fertile universe for life, the universe would be teeming with life, but it gets more complicated, continuing...

My 2nd point is certainly one of contention. We have the question of the observability of life at hand. I think that we should be able to see life if 1, 3, and 4 hold strongly true, but unfortunately I will need all 3 of those to make the argument effectively. We can only theoretically detect radio waves from civilizations within a local bubble, as the Search for Extra Terrestrial Intelligence (SETI) has been attempting, but we need to dig more into the argument here. How would we expect to detect life? I would propose:
- radio waves from industrial communications
- observational evidence from ordinary life/presence (2 kinds)
- they just come here and visit us
It's getting complicated here, but I'll note that a certain number of observable stars exists for each one of these interaction methods. SETI looks at the first one, and the number of stars is in the 100,000s. By most reasonable constructions of the Drake Equation, it is more probable than not that SETI will fail. So this observation does not preclude the idea of the universe teeming with life. Observation from ordinary presence will be possible for an advanced civilization, and even a non-technological civilization should be within our reach in 100 years or so, but there's a more important part to this. The concept of a Dyson Sphere is rather nonsense in itself, but the idea of a mega-scale society is not. This goes back to my previous point 4. We may predict that advanced civilizations are basically explosions: if intelligent life comes to be, it will, almost instantaneously on universal scales, colonize all the local stars and expand indefinitely.

My point 3 is something I don't hear other people argue very often. The Cambrian explosion happened about 500 million years ago, and it is conceivable that advanced life like us could have evolved somewhere in the universe anytime after that. It could have evolved before this! Think about this: the light bubble for 500 million years ago is absolutely monstrous. I put forth the observation that no civilization within 500 million light-years of us has come to visit us. That has huge implications for the Drake Equation. It is premature to say that our civilization is on the verge of a technological singularity, but that is not important. What is important is that a civilization like us could hit a technological singularity. There is a valid observation that probably no civilization within a billion light-year radius of us has hit a singularity.

My conclusion from that observation is that life, period, is highly unlikely to evolve. That is to say, abiogenesis leading to intelligent life is improbable. I admit, I'm applying my personal experience as a human in human society to say that the probability of hitting a technological singularity given intelligent life is not much less than 1. People could very easily disagree with my point, which would paint a picture that many intelligent but non-prolific civilizations are out there.

Let's revisit the Drake Equation. I will write it twice, representing two different forms.

The Drake Equation:

( # of intelligent life instances in the Milky Way ) = ( # of stars ) * prod( ( probability of success of i_th stage of life ), i=1, ..., N )

( # of civilizations who've visited us ) = ( # of stars within 1 Gly ) * prod( ( probability of success of i_th stage of life ), i=1, ..., N ) * ( probability of singularity, stage N+1 )

nu_l = prod( ( probability of success of i_th stage of life ), i=1, ..., N )
nu_s = ( probability of singularity, stage N+1 )

The reality is that a coherent physical theory of the universe NEEDS the product in these equations to be impossibly low. Our evolution needs to look infinitely unlikely when viewed from afar. There is a balance between the two terms nu_l and nu_s: the probability of a star hosting intelligent life and the probability of intelligent beings explosively expanding to new celestial bodies, respectively.

The point where I disagree with most commentators on this matter is this: thinking that nu_l is anything other than almost zero - low enough to mostly prohibit life in the universe - is folly. Practically, the product nu_s * nu_l < ( about 5 ) / ( 1.2x10^19 ).

3 to 7 × 10^22 stars in the observable universe - I'll say 3 x 10^22
then
(3 x 10^22) * (1 / 13.75)^3 = 1.2 × 10^19 --> number of stars in a 1 Gly radius (with fuzzy math ;)

Milky Way has 1-4 x 10^11 stars.

For a technological singularity to be that unlikely, the probability of a civilization reaching one would have to be, generally, less than the number of the Milky Way's stars divided by the number of stars within 1 Gly. That comes out to be around 10^-8. I'm going out on a limb to say that the probability of humans killing ourselves before reaching one is not that overwhelming (although I could be wrong). A technological singularity is simply inevitable in a universe that is abundant in intelligent life.
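The back-of-envelope numbers above, written out as a small Python sketch with the same fuzzy assumptions (~3x10^22 stars in the observable universe, a naive 13.75 Gly radius, and about 2.5x10^11 Milky Way stars, picked from the ranges quoted here):

# Fuzzy star-count arithmetic from the argument above.
stars_observable = 3e22      # low end of the 3-7 x 10^22 estimate
radius_gly = 13.75           # naive radius of the observable universe in Gly
stars_1gly = stars_observable * (1.0 / radius_gly) ** 3
print("stars within 1 Gly:", stars_1gly)               # roughly 1.2e19

milky_way_stars = 2.5e11     # somewhere in the 1-4 x 10^11 range
print("bound on nu_s:", milky_way_stars / stars_1gly)  # roughly 2e-8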

Our universe is simply not abundant in intelligent life. How do we exist then?

Because life could not have evolved on Earth unless the more fertile conditions of primitive Mars had first evolved cellular life. Why?

Tectonics.

In Earth's early days it was a magma soup that nothing could have survived on. Mars also started out like this, but it cooled off much faster and the Goldilocks period of its tectonic evolution came much earlier. Furthermore, a change in the environment in which life is living will very commonly spur an explosion of more advanced and new kinds of life.

This model would allow basic life to be common, but life advanced enough to attain intelligence to be vanishingly rare. That explains well how large and complex life could have existed on Earth for 100s of millions of years without being disturbed by visitors who developed more quickly. So this solves what I might call the "Dinosaur problem" in our model of the universe.

I've explained why we need some transition in the evolution of life to be very unlikely. To wrap this up, I need to explain why it WAS very unlikely that life flew on a space journey from Mars to Earth (as if you needed that explained). Solar systems with 8 nicely arranged planets are probably rare. Just the relative proximity of Earth and Mars is not a normal thing, and the movement of material from Mars (the perfect nursery) to Earth (the perfect home) requires the evolution of TWO 'perfects' in the same solar system, as well as a meteor bombardment history that allowed a large transfer of Martian mass to Earth. ADDITIONALLY it requires that spore-like life evolved on Mars and survived the journey. That's pretty obscenely unlikely. But this actually makes sense.

While it may seem self-defeating for this model to require a low probability of intelligent life evolving, it's actually not. Looking backward from our perspective, we've only observed ONE case of life like us, and therefore, the probability of such life evolving may be ARBITRARILY SMALL. As far as we can tell, a theory that places that probability lower than another theory is to be preferred rationally. In addition to the massive (known) expanses of space, the multi-verse theories dictate that fertile conditions (to the degree that they are) for life could have existed for an arbitrarily large amount of time. There could have been INFINITE trials and there is no logical fallacy if we are the only intelligent life in the universe. There is also no logical fallacy if rapid industrialization and evolution to a space-faring race is also near inevitable.

Is this a more 'dismal' view of the universe than Tyson prefers? Absolutely so, but it's highly credible. Indeed, if we find that life developed on Mars SEPARATE from life on Earth, we may as well prepare to join our galactic community, for advanced life is certainly common. We have every reason to prefer that view from a humanist perspective. We would like to find that we're not alone.

But we probably are alone. And I have described a model consistent with that.

Existentialist integers and the like
So, obviously I haven't had a life lately and my journal reflects this. This entry will continue the trend.

Python

I'm ready to go back on several of the points I was trying to make on the last entry. To be honest, it seems as though Python(x,y) is only good for a scant few things. Actually, it seems that it's overwhelmingly unhelpful in the area of Python modules, and mostly has utility in including applications that aid in program development.

There's that, and most of what Python(x,y) includes isn't really relevant to me. The Qt GUI stuff looks very well made, but I don't have the time or the [care] to do anything with it. And I'm only growing fonder of Crimson Editor as real IDEs continue to fail to keep my attention.

I've had a little bit more trouble in the area of compatibility. Things didn't start blowing up until I got into the domain of fancy stuff, but at one point I even tried GTK and then found out just how buggy this stuff can be. I even found ad-hoc fixes for some of the compatibility issues.

Anyway, I was getting SO frustrated with all the junk I had been installing on my computer that I got to the point where I was thinking "I give up, I need a new computer!". And on that computer (if I had it), I would install from v2.5 and I've been getting together a list of files I would use in that case. Here's what I have so far.

File:PyQt-Py2.5-gpl-4.4.3-1.exe

gtk
File:pycairo-1.8.2.win32-py25.exe
File:pygobject-2.20.0.win32-py25.exe
File:pygtk-2.16.0.win32-py25.exe


File:matplotlib-0.99.1.win32-py2.5.exe
File:numpy-1.3.0-win32-superpack-python2.5.exe
File:pywin32-214.win32-py2.5.exe
File:scipy-0.7.1-win32-superpack-python2.5.exe
File:xlutils-1.4.1.win32.exe

I haven't really been able to work on this lately, but once I get around to it, I do want to get a new computer and then do crazy fun stuff on it. Getting all the left-field python modules to work on it is top of my list :D

----
Moving on, I've managed to do quite a number of useful things, and while it might not have saved me time yet, every graph that I produce with Python is perfectly traceable in terms of how I arrived at it, meaning that I seem to actually be developing *gasp* logically configured source that complements my research writings.

Fortran

I'm also gaining some new abilities in Fortran. This has been prompted completely by a need I have had to do a particular thing, and in no way by research for the point of learning more programming methods. I can genuinely say that the only way I've been getting into Fortran objects is because it has become exceedingly difficult to do what I'm working on without them.

Mainly, up until now I've kept everything fairly well organized in a module/interface basis. If a set of variables are associated with a particular routine or set of routines, then fine, then I'll just attach the "use" statement for all the relevant data and pass things that way. The problem I ran into was.... I needed to have a set of routines that applied for a large set of data, with this data set replicated several times.

Do you see what I just did there? I effectively defined a class by my problem statement. I cast a type that contains all the associated data and then have routines that operate on that data. There are an awful lot of quirks associated with Fortran, it would seem, though. For instance, it appears that while variables can be encapsulated in a user defined type, functions and subroutines only exist in modules.

Anyway, that's not even what I want to talk about here. Another neat thing that has just suddenly become invaluable to me is optional arguments. So I found an example and noticed some odd behavior. Of course, credit for the foo example goes to wherever I found it.

The following program apparently works:

program footest

   write(*,*) ' im a foo ', foo(1.2)
   write(*,*) ' im a foo too ', foo(1.2, 3)

contains

   function foo(a, b)
      real :: foo
      real, intent(in) :: a
      integer, intent(in), optional :: b

      if (present(b)) then
         foo = a**b
      else
         foo = a
      end if
   end function foo

end program footest


But this program does not.

program footest

   write(*,*) ' im a foo ', foo(1.2)
   write(*,*) ' im a foo too ', foo(1.2, 3)

end program footest

function foo(a, b)
   real :: foo
   real, intent(in) :: a
   integer, intent(in), optional :: b

   if (present(b)) then
      foo = a**b
   else
      foo = a
   end if
end function foo


The optional argument stuff apparently works only when you have the routine in a module or after a contains statement. I was really stretched to think of why it should not work as a standalone function; apparently optional arguments need an explicit interface, which the caller only gets automatically when the routine is a module procedure or a contained procedure. But that's how it is, it would seem.

I think the reason I don't strongly like the Fortran modules and interfaces is because I really just don't understand them. At least with Java I knew that I didn't understand them and there was ample documentation detailing exactly why I never will.

Python experiences
Foreword

I'm a fairly experienced programmer. However, I'm a fairly novice computer guy. Most of what I do has never dealt with the crazy details of how software works; I just have what works and I know how to use it.

Nonetheless, I want to expand my horizons, and Python has long since attracted me as an alternative to many *cough* Microsoft applications and the like, which are fast going obsolete.

Fortran too. The fact that you used to be programmed on punch cards does not make you timeless. Fortran will be obsolete someday, and my preferred method would be to use another (better) language to write my code in Fortran, until it actually goes obsolete. And yes, it will.

The Story So Far

First I downloaded the most recent version of Python, Python 3.0.

So, first thing: I go through the official tutorial, and I get to the point of being able to do Hello World in the Python interpreter. Well hooray and hoo ha.

But truth be told, I don't want to use the Python interpreter. Silly me, I wasted a lot of time trying to figure out the basic command to run on the command line, or to set up tools for it in Crimson Editor. The interpreter is good for a novelty, but if I can't get a command to compile and run programs then it doesn't fulfill my needs. Eventually, I found c:\{Python directory}\python.exe, and that it can take a ".py" file as an input. Now we're getting closer.

Next, I need to graph. I can do all the tutorial stuff with other tools; I need to be shown something with novelty. I was trying to replicate this tutorial. They had just the kinds of graphs that I want to see.

I was having a lot of trouble, and finally - lo and behold, I need the packages. Okay, not a big deal, I've done object oriented programming before and I know that we need the wheels to spin. That led me to downloading NumPy and SciPy. Of course, I already had gnuplot manually installed before.

But wait! None of these work!!

Why, oh why? Well, I had downloaded 3.0, the latest, remember. Well, after just about getting NumPy and SciPy installed, they tell me they want 2.6 and not 3.0 :( So where's the 3.0 version of the tools? They don't exist. Ok, fine. So I'm back to the drawing board and reinstalling the correct version for the widgets I need.

Do the codes run? Of course not! I got the SciPy and NumPy, but the codes were still trying to import PyLab. So this is just like the other two I installed right? No. PyLab is confusingly very very different. I can't even tell you how to specifically install PyLab, I still don't know how. But, lucky for me, I do find something that is purported to do what I need - the behemoth called Python(x,y). I know it has PyLab, and I find that it's free, good. Start download. It's over 500 Mb big :( So I go get my other errands done and come back after it's finished downloading. Right, so install and...

Apparently Python(x,y) contains the entirety of Python, and I'm forced to uninstall the version I have already installed on my computer (for the 2nd time). Fine. I uninstall Python and reinstall it using the Python(x,y) package, getting along with it three different editors, gnuplot (again), and a huge host of tools that I'll doubtfully ever use but am nonetheless appreciative to have. In retrospect, before this adventure I was fooled by Python's home page saying:

"batteries included"


They're not. At least not for me.

So, now I'm humming. I've got all the modules I need to run this and just about anything else that I should ever need. I run. Doesn't work. The message I get is that gplt is missing. What? How? I have everything I'm supposed to have.

Turns out, the problem was that I didn't have something that I wasn't supposed to have.

I discover... that those examples are for an obsolete SciPy. Alas, this is the exact thing that puts people off of open source stuff. The answer to the question I asked long ago was "just don't do it". My packages were never incorrect, but the examples were obsolete.

Thankfully, I have found other examples that do get me where I'm going and I am so far very happy with what I'm seeing. But I would like to point out that no matter how easy Python claims it is - nothing that does something that awesome is easy.
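For anyone following in my footsteps, the kind of working example I eventually landed on looked roughly like this (a minimal sketch using matplotlib's pyplot interface rather than the old gplt-based SciPy examples; not the literal code from the tutorial):

import numpy as np
import matplotlib.pyplot as plt

# A basic line plot: the sort of graph the old tutorial promised.
x = np.linspace(0, 10, 200)
plt.plot(x, np.sin(x), label="sin(x)")
plt.plot(x, np.cos(x), label="cos(x)")
plt.xlabel("x")
plt.ylabel("y")
plt.legend()
plt.show()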

Future Work

A friend had noted to me that he's used VPython and I was like "V what??". I was then told that it does slick 3D stuff. Well, I check, and apparently Python(x,y) already put VTK on my computer. Do I know how it works? No. But it looks like one of the most bad-A programming toolkits I've ever had my hands on.

Anyway, yes, I DO plan to post with neat programs whenever I manage to make them (did I hear someone say Torus World?).

Advice

  • If you want to do more than "Hello World", go straight to Python(x,y).
  • If you don't want to do much more than "Hello World", don't learn Python, use Fortran.
  • If the example you're trying to replicate doesn't work, the website might just suck. Find better examples. They exist.
  • Look for python.exe in the folder you installed it to when setting up run commands, Windows batch files, or that kind of thing.
  • Don't use Python 3.0 unless you're reading this in like 2011. You know what, just use whatever Python(x,y) uses.

The circle of fortune cookies
Those of us who frequent Chinese restaurants have the pleasure of indulging in the (American) cultural phenomenon of unwrapping a piece of paper containing infinite wisdom every time. Who knows what kind of message they would convey to you collectively if all strung together and decoded by Conan? Now, out of this group, the more organized or more pack-rat-ish of us will develop methods to hang onto them in an organized fashion.

Somewhere around a year (or two) ago I joined this sub-group. Thus, it is only appropriate to make one entry sharing the fruits of my labor (infinite wisdom):



One look is worth
ten thousand words.

All the news you receive will
be positive and uplifting.

Face facts
with dignity.

Do not give a man a fish,
but teach him how to fish.

You will become a great
philanthropist in your later years.

Your happiness is intertwined
with your outlook on life.

You will be fortunate in the
opportunities presented to you.

The path of life shall lead
upwards for you.

Not even a school teacher notices
bad grammar in a compliment.

The world belongs to the
enthusiast who keeps cool.

Everybody is ignorant,
only on different subjects.

When winter comes
heaven will rain success on you.

To go too far is as bad
as to fall short

Practice an attitude
of gratitude.

Take that first
step today.

Where there is no vision
the people perish.

Stubbornness is not
a good virtue.

Only a life lived for others
is a life worthwhile.

Now is the time to be candid
and aboveboard in all things.

Laziness is nothing more than the
habit of resting before you get tired.

Electric cars: 100 years and no progress.
Exhibit A:

9 Electric Cars 100 Years Old or More
http://gas2.org/2009/04/19/9-electric-cars-100-years-old-or-more/


Exhibit B:

5 electric cars you can buy now
With gas prices soaring, plugging in has its appeal. But there are trade-offs: high costs and low speed.
http://money.cnn.com/galleries/2008/autos/0806/gallery.electric_cars_now/index.html


Ever since I was a child, I've wondered to myself while learning about cars, fossil fuels, and society "did I miss something?". Yes, we are now dependent on fossil fuels and it seems to be a world-wide consensus that we have an imperative to change that... well... it has seemed this way for the last 50 years. And yet nothing happened. In fact, we seemed to go the other way.

But this doesn't even capture the entirety of the nonsense world we live in. It is an undersold fact that electric cars were prevalent way back when... before the personal use of cars even took off. In fact, they happened to be quieter, easier to maintain, and were visibly less polluting than their gasoline competitors, which are facts that were apparent to our great grandparents.

People say that electric cars are making a comeback now. Well... ... they will make a comeback. Supposedly. But we have some on the market now? Right?? Like that link I posted above. Well, read through the list in detail... if you want something to make you pull your hair out. I thought I would make a comparison in a livejournal post of the merits of the options available to us now, in the 2000s, versus those in the 1900s.

Today vs. 100 years ago:

GEM Car (today): Range 30-40 miles; Top speed 25 mph; Seats 4 people
1891 Morrison: Range 50 miles; Top speed 20 mph; Seats 6-12 people

Dynasty iT Sedan (today): Range 30 miles; Top speed 25 mph; Seats 4 people
Electrobat (1894-1899): Range 25 miles; Top speed 20 mph; Seats 2 people

ZENN (today): Range 30-50 miles; Top speed 35 mph; Seats 2 people
1909 Babcock: Range 100 miles; Top speed 14-17 mph (average); Seats 2 people?

Zap Xebra (today): Range 25 miles; Top speed 40 mph; Seats 2 people
1901 Riker Torpedo: Range ??; Top speed 57 mph; Seats 1 person?

Now, I'll be the first to admit that this list is pretty cruddy. I don't have all the metrics, and I haven't included all the cars by any stretch of the imagination. But this is a cross section of what's available and what was available that illustrates my point (I'm not claiming very much). Our options are not exactly better, that's the point.

I'm sure the apologists are ready to disprove my claim, but really, think about the entire situation. The old cars are slower in some cases, but the average speed of cars at the time was lower. The old cars are crazy unsafe. Well, driving then didn't expose you to other drivers whizzing by you at 70 mph, and horses are pretty good at yielding.

The sheer magnitude of our failure in regards to electric cars makes me numb to excuses. So they're required to meet safety standards, keeping the speeds low. Well, that's still a circumstance society co-created that prevents people from getting more useful electric cars. And really, we should have an advantage or two over car manufacturers from 100 years ago. You know, ideally we should have a thing or two going for us, added onto the fact that we already use cars and know that we need to be switching to electric.

Just back up a little bit, and I just want you to think to yourself next time you start up your car: "Why are we still driving these?"

Heterotrophs and autotrophs are symbiotes
I can clearly remember from middle school physical science (and high school biology) the definitions for what a heterotrophic (animal, essentially) and an autotrophic (plant) life form is. I won't get this perfect, but it went something like:

Autotrophs make their own energy.

Heterotrophs don't make their own energy and must get it from other sources.


Stemming from this definition, a profound thought occurred to me lately. So I thought I should leave a short blog entry on it.

To begin with, the above definition has holes. Not in an academic biology sense, but a glaring logical flaw. That flaw is: why? Why did the Earth evolve to have half of the organisms harness energy while the other half merely spends the energy? An instructive extension of that thought would be: why wouldn't the energy making organisms just spend their own energy?

In 7th grade or so, it seemed that my teacher was telling me that we, as animals, have just been leeching off plant life for our livelihood for the past 500 million years. But looking at this with a more adult outlook, it's already obvious that such an agreement is patently unstable. The reason is easy to understand. Imagine that a plant species evolves that doesn't dump huge amounts of energy back into the atmosphere and instead uses that energy to enhance its own chances of survival. That species would be very successful and its descendants would soon crowd out all other plant life on Earth (standard evolution thinking). But this doesn't happen for some reason. The reason is that all plant life explicitly gains something from the existence of animals.

The description of what plants gain from us is queer. I won't blame you if you don't believe it. I didn't entirely swallow it at first. Well, the number one thing that plants gain is... Carbon. We, as people, need oxygen for energy, and plants need Carbon to build more plant mass. Plants take in CO2 from the atmosphere and, in turn, do not breathe out any Carbon. Mass is not conserved. We do not conserve mass either; unless we are eating or drinking, we are leaking mass into the air.

One of the most insightful experiments I can remember doing is growing a spider plant in a bottle of water. Why? Because new matter is apparently created out of nowhere. You start with: water, air, and a small plant. You end with: less water, air (more or less), and a large plant. Where did the large plant come from? Did it get its building blocks from the water? No. Water does not have enough materials. You can't build a plant from Hydrogen and Oxygen, and the minerals are (I assure you) completely insufficient to build a plant of the size observed. In my experience people often give 'the minerals in the water' as an answer. Don't listen to them, that's wrong! The large plant, in fact, got its mass from the air.
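The bookkeeping behind that "mass from the air" claim is just the overall photosynthesis reaction (standard chemistry, not anything specific to the spider plant experiment):

6 CO2 + 6 H2O + light energy --> C6H12O6 + 6 O2

The carbon, and with it most of the mass, in that sugar - and eventually in the cellulose and the rest of the plant - traces back to the CO2 pulled out of the air, with the water mainly supplying hydrogen.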

This is actually an experiment that we have all observed over and over again. Ever notice that your houseplants seem to grow to a size larger than the mass of dirt that you put in the pot? They do. They use the junk that you breathe out of your mouth to make themselves.

Why is this important?

Well, one reason is that plants can't move. It is an energetically losing process to capture Carbon from the atmosphere. Thus it is a price the plant pays for being, well... stationary. Why are we so much better at gathering organic substrates? Well, that's pretty much our job - to go scavenge for stuff. Needless to say, heterotrophs come in all shapes and sizes, not all of which have freedom of motion, but suffice it to say that placement is important. You can't be heterotrophic and not have access to food. Plants, on the other hand, commonly thrive in environments where there is little matter that could be thought to constitute 'food'. Their right to live in such environments is granted by the toil of heterotrophs.

Thus, we (as heterotrophs) don't 'leech' off of plants in any way, and I would revise the previous definitions for the sake of a middle or high school class. I would do this to avoid confusing students about the universal questions that they may not yet understand are bothering them (I was sure confused). But this task is not easy; I was perplexed as to how to revise this definition. I will allow myself more discussion after a first attempt:

Autotrophs make their own energy, but rely on other life for the distribution of matter.

Heterotrophs gather their own matter, but rely on other life for energy.


The part I am troubled by lies in the phrases "distribution of matter" and "gather their own matter" seen here. Surely there is some more eloquent phrase to use for this, right? School children will sure have a hard time with this one. Heck, physicists will have a hard time with this one. I've described what plants get from us, but what, exactly, is it that plants get from us? We should put it into some hard physical quantity, right? Entropy? Oh heck no! It's the opposite way around: heterotrophs create more entropy by scattering Carbon into the atmosphere. Mobility? Eh... not quite, though it does play a role in this. Perhaps organization? My brain contorts around the applicability of this word, as it is both the answer and the anti-answer. Heterotrophs travel to the furthest corners of forests and livable spaces and find organic substrates to burn. The matter they burn can then be used by plants to create more plant matter.

Imagine this applied to industrial processes. Think about a world where burning fuels is actually doing a favor to someone by redistributing our Earth's raw materials. New skyscrapers are built without a single truck moving material; instead, Carbon sequestering units located at the pinnacle of the structure churn out Carbon nanotubes. When the building has reached the end of its life, you don't have to pay anyone to take it away, but are instead paid by someone to let them take it away and burn it for energy. The energy is used and the Carbon atoms once again float to the top of our highest buildings.

This elegant and strange process, my friends, isn't just the miracle of life. This is the miracle of gases. This is the miracle of liquids. The ability for things to disperse under certain conditions, while not dispersing under other conditions, is the basis of our existence.

While we, as humans, use fluid processes to efficiently accomplish many isolated processes, a world-wide balance that contributes to our productivity is something that human society is far far away from. But someday we will live in such a balance. For now, I would encourage the reader to prep themselves for the kind of mentality that such a society will require. Think long and hard about what plants gain from our chemistry.
