Free State Project Forum


Author Topic: Re-Examination of the Spreadsheet  (Read 31726 times)

JasonPSorens

  • Administrator
  • *****
  • Offline
  • Posts: 5724
  • Neohantonum liberissimum erit.
    • My Homepage
Re-Examination of the Spreadsheet
« on: January 07, 2003, 01:12:48 pm »

The spreadsheet has been updated with new figures from the State Data page:
http://www.freestateproject.org/files/statecomparisons.xls
http://www.freestateproject.org/files/statecomparisons2.xls (Mac friendly version)

I've also written an essay using the new data to analyze the prospects for each state.  I don't argue for a single state as "best"; rather, I show what you would need to argue in order to argue that each state is best.
http://freestateproject.org/stateanalysis2.htm
Logged
"Educate your children, educate yourselves, in the love for the freedom of others, for only in this way will your own freedom not be a gratuitous gift from fate. You will be aware of its worth and will have the courage to defend it." --Joaquim Nabuco (1883), Abolitionism

Zxcv

  • ****
  • Offline
  • Posts: 1229
Re:Re-Examination of the Spreadsheet
« Reply #1 on: January 08, 2003, 12:51:38 am »

Thanks for the refresh on this matrix, Jason. It looks like you put a lot of work into it and into your analysis. I will now shoot holes through all your arguments.   :)

Quote
If this analysis succeeds in its purpose, henceforth debates about the state the FSP should choose will be driven by certain regular arguments and considerations.

Quite the optimist, aren't you?  :D

My problems with the analysis and spreadsheet follow:
 
1) As Ted and I have discussed, the normalization function is, in our opinion, wrong. Therefore he and I (and others who agree with us) will have to rework the spreadsheet for our own normalization function, and we may come to different conclusions than you have, in the bottom section of your paper.

2) Many rows of data are still missing: it doesn't even include all of the items on the state data page, let alone the others that have subsequently been discovered (things like the number of NEA members, a big item).

3) The Urb row uses gross urbanization rather than the better measure we discussed on that other thread. A state with nothing but farms would be rated best, even though that is questionable given the high cost of the campaigns that would result. Urbanization needs to be weighted heavily, too.

4) Some possible variables are quite important, yet hardly quantifiable. For example, land owned by the federal government: a useful irritation factor, yet it can reach the point where there is little private land available, which makes it too much of a good thing.

5) FYI, the jobs criterion has a pretty good correlation with voting population (the only large deviation is Maine, which has poor job creation considering its large population). Therefore, if you want a small voting population you must accept low job creation. But with a low population you don't need as many FSPers to affect things, so the tradeoff is acceptable. Moreover, with the changes we hope to make to the state, we should drive job creation higher than the projections, which will help us support more FSPers. If we went into a high-population/high-job-creation state, however, our positive economic effect would not help us (there already being enough jobs to support all FSPers) and might hurt us (driving the population even higher, including immigration of statists). So for the FSP, a low-population/low-job-creation state seems better than a high-population/high-job-creation state. One other point about job creation: we may have a significant percentage of retirees, or people who can live with no visible source of income.  ;)  So maybe we don't need 20,000 jobs. We may also have two FSPers in a one-income family; has that been considered?

Another problem with Maine is its large population of NEA members (and probably a leftist-leaning teacher population as well): 18,288 of them, quite a large number of activist opponents (over three times as many as Wyoming's 5,713). To me, Maine really seems to be out.
Logged

JasonPSorens

  • Administrator
  • *****
  • Offline
  • Posts: 5724
  • Neohantonum liberissimum erit.
    • My Homepage
Re:Re-Examination of the Spreadsheet
« Reply #2 on: January 08, 2003, 09:39:13 am »


Quote
1) As Ted and I have discussed, the normalization function is, in our opinion, wrong. Therefore he and I (and others who agree with us) will have to rework the spreadsheet for our own normalization function, and we may come to different conclusions than you have, in the bottom section of your paper.

Well, the normalization function is right ;), but even if it weren't, your solution should yield exactly the same results, assuming that you've transformed the weightings correctly.

Quote
2) Many rows of data are still missing: it doesn't even include all of the items on the state data page, let alone the others that have subsequently been discovered (things like the number of NEA members, a big item).

I think it contains the most important items, but of course we could always add new ones.  Just think of those new variables as "non-quantifiable" factors in my analysis.  You can take them all roughly into account while looking at what we get with the numbers already there.

Quote
3) The Urb row uses gross urbanization rather than the better measure we discussed on that other thread. A state with nothing but farms would be rated best, even though that is questionable given the high cost of the campaigns that would result. Urbanization needs to be weighted heavily, too.

I'll try out "urbanized areas" too, though I'm not convinced by your argument that urban clusters are not equally bad, politically.  I'm leery of expanding the # of variables to the point that the spreadsheet becomes unwieldy to the average user.  But the fact is that a couple more variables here or there make little difference in the final results. ;)

Quote
4) Some possible variables are quite important, yet hardly quantifiable. For example, land owned by the federal government: a useful irritation factor, yet it can reach the point where there is little private land available, which makes it too much of a good thing.

Both amount of private land and federal land ownership are quantifiable.

Quote
5) FYI, the jobs criterion has a pretty good correlation with voting population (the only large deviation is Maine, which has poor job creation considering its large population). Therefore, if you want a small voting population you must accept low job creation. But with a low population you don't need as many FSPers to affect things, so the tradeoff is acceptable. Moreover, with the changes we hope to make to the state, we should drive job creation higher than the projections, which will help us support more FSPers. If we went into a high-population/high-job-creation state, however, our positive economic effect would not help us (there already being enough jobs to support all FSPers) and might hurt us (driving the population even higher, including immigration of statists). So for the FSP, a low-population/low-job-creation state seems better than a high-population/high-job-creation state. One other point about job creation: we may have a significant percentage of retirees, or people who can live with no visible source of income.  ;)  So maybe we don't need 20,000 jobs. We may also have two FSPers in a one-income family; has that been considered?

Well, and there may be one FSPer in a 2-income family in a lot of cases... Job creation does correlate with voting population, but when you run a regression line between the two, some states fall above the line and some below.  Idaho is way above the line, and WY, ND, and VT are a little below the line, meaning they get less job creation than they should for their size.  However, I'm not convinced this is a deal-breaker for those states, since fewer than 20,000 may end up moving.  The purpose of my analysis was simply to show both sides: what would be the case if you thought that was a deal-breaker, and what would be the case if you thought that was not a deal-breaker.
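For anyone who wants to run that kind of check themselves, here is a minimal sketch of the regression I mean.  Python rather than spreadsheet formulas, and the population and job figures below are placeholders, not the actual State Data numbers.

Code:
# Minimal sketch of the regression check described above.  The voting-population
# and job-creation figures are placeholders, NOT the actual State Data numbers.
import numpy as np

states     = ["ID", "WY", "ND", "VT", "ME"]
voting_pop = np.array([590.0, 220.0, 290.0, 310.0, 650.0])  # thousands (hypothetical)
jobs       = np.array([ 40.0,   9.0,  11.0,  13.0,  20.0])  # projected jobs, thousands (hypothetical)

slope, intercept = np.polyfit(voting_pop, jobs, 1)
residuals = jobs - (slope * voting_pop + intercept)

# A positive residual means more job creation than the state's size predicts;
# a negative residual means less than it "should" have for its size.
for state, resid in zip(states, residuals):
    print(state, round(float(resid), 1))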
Logged
"Educate your children, educate yourselves, in the love for the freedom of others, for only in this way will your own freedom not be a gratuitous gift from fate. You will be aware of its worth and will have the courage to defend it." --Joaquim Nabuco (1883), Abolitionism

TedApelt

  • FSP Participant
  • ***
  • Offline
  • Posts: 117
  • Free 50 states - one at a time.
Re:Re-Examination of the Spreadsheet
« Reply #3 on: January 08, 2003, 12:27:27 pm »

Quote
Well, the normalization function is right ;), but even if it weren't, your solution should yield exactly the same results, assuming that you've transformed the weightings correctly.

Where exactly did you get the "normalization function"?  I have never heard of that before.  How is this normally used?

Quote
Quote
2) Many rows of data are still missing: it doesn't even include all of the items on the state data page, let alone the others that have subsequently been discovered (things like the number of NEA members, a big item).

I think it contains the most important items, but of course we could always add new ones.  Just think of those new variables as "non-quantifiable" factors in my analysis.  You can take them all roughly into account while looking at what we get with the numbers already there.


I would like to see:

1.  Climate (done somewhat the way geography is done)
2.  The distance that must be driven to reach half the population.  This data can be gotten from another thread.
Logged
How much political experience do you have?  Probably not enough.  Get some!  DO THIS NOW!!!

JasonPSorens

  • Administrator
  • *****
  • Offline
  • Posts: 5724
  • Neohantonum liberissimum erit.
    • My Homepage
Re:Re-Examination of the Spreadsheet
« Reply #4 on: January 08, 2003, 01:17:34 pm »

Quote
Where exactly did you get the "normalization function"?  I have never heard of that before.  How is this normally used?

It's frequently used in statistics whenever you need to rebase variables so that they can be compared to each other.  It isn't the only normalization function you can use, but it has some advantages, such as being unaffected by scale.
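If it helps to see the arithmetic, here is a rough sketch of the two functions we have been discussing, written in Python rather than as spreadsheet formulas (the raw numbers are just placeholders):

Code:
# Rough sketch of the two normalizations under discussion (placeholder numbers).

def minmax_0_to_10(values):
    # The normalization in my spreadsheet: the lowest value maps to 0, the highest to 10.
    lo, hi = min(values), max(values)
    return [10 * (v - lo) / (hi - lo) for v in values]

def proportional_0_to_10(values):
    # Ted's normalization: each value is scaled against the highest, so the top
    # state gets 10 and everything else is proportional (referenced to 0).
    hi = max(values)
    return [10 * v / hi for v in values]

raw = [7.0, 8.5, 10.5]              # hypothetical raw scores for three states
print(minmax_0_to_10(raw))          # [0.0, ~4.29, 10.0]
print(proportional_0_to_10(raw))    # [~6.67, ~8.10, 10.0]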

Quote
I would like to see:

1.  Climate (done somewhat the way geography is done)
2.  The distance that must be driven to reach half the population.  This data can be gotten from another thread.


OK.  The wish list is getting long... ;)

Maybe the appropriate solution is to create 1 small spreadsheet with only the 'most important' variables and then 1 big one with everything that might be useful.
Logged
"Educate your children, educate yourselves, in the love for the freedom of others, for only in this way will your own freedom not be a gratuitous gift from fate. You will be aware of its worth and will have the courage to defend it." --Joaquim Nabuco (1883), Abolitionism

Zxcv

  • ****
  • Offline
  • Posts: 1229
Re:Re-Examination of the Spreadsheet
« Reply #5 on: January 08, 2003, 04:47:12 pm »

I am working on an "everything but the kitchen sink" spreadsheet, so you don't have to add these requested rows to the old one. I have already taken every item on the state data page and put that into it (I realize some of those items are either derived or not so interesting, but I got it all for the sake of completeness). I will now start looking through the threads for other criteria. In general most of this stuff will be turned off via weight=0, so it doesn't add too much to the confusion, but it will be there for use if people want it.

I am also doing the normalization within the sheet, using Ted's normalization. You can replace that with your own easily enough, Jason, with a cut and paste. The advantage of doing the normalization in the sheet (besides avoiding errors) is that with the raw data in the sheet, rows can be replaced with updated raw data as that becomes available, and adding new rows is straightforward.

BTW my updated version of Ted's normalization allows for middle values as most desirable. An example might be state area, where DE is too small for various reasons, and AK way too large. I personally picked a size between ME and ND. This column will be available for modification if others (Ted, for instance, likes really small states  ;)  ) have different criteria for goodness.
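Here is roughly how I am doing the middle-is-best scoring, sketched in Python rather than as a spreadsheet formula (the target and raw figures are placeholders, not the actual column):

Code:
# Sketch of a "middle value is most desirable" normalization.  The values are
# placeholders; in the sheet the target cell will be editable.

def middle_is_best(values, target):
    # 10 at the target, 0 for whichever value is farthest from it,
    # falling off linearly in between.
    worst = max(abs(v - target) for v in values)
    return [10 * (1 - abs(v - target) / worst) for v in values]

raw = [12, 45, 50, 200]          # hypothetical sizes; the target is 50
print(middle_is_best(raw, 50))   # [~7.5, ~9.7, 10.0, 0.0]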

If people would point me at the various threads with their favorite additional criteria, it would be a help.
Logged

TedApelt

  • FSP Participant
  • ***
  • Offline
  • Posts: 117
  • Free 50 states - one at a time.
Re:Re-Examination of the Spreadsheet
« Reply #6 on: January 08, 2003, 05:14:04 pm »

Quote
Quote
Where exactly did you get the "normalization function"?  I have never heard of that before.  How is this normally used?

It's frequently used in statistics whenever you need to rebase variables so that they can be compared to each other.  It isn't the only normalization function you can use, but it has some advantages, such as being unaffected by scale.

Isn't there a problem with small differences being magnified?

Quote

Maybe the appropriate solution is to create 1 small spreadsheet with only the 'most important' variables and then 1 big one with everything that might be useful.

I like that idea.  Another thing you could do is have a spreadsheet that only includes states that have enough jobs, or that meet some other very critical requirement.

For example, right now, I have eliminated WY, ND, and VT from further consideration, because if we can't get enough jobs in those states the project will fail.  I also think that we will never get 20,000 people if ND or AK is chosen.  (I'm also unsure how many jobs we can get in AK, because of the kind of jobs you get there.  Also, many AK jobs are highly seasonal.)  This leaves six states - DE, SD, MT, ID, NH, ME.  Since three are western and three are eastern, it seems to be a pretty fair mix.

Of course, once the state is chosen, you might still have those who opted out of it start an FSP in another state.  In that case, they might pick WY if an eastern state is chosen and they don't want to go east, or VT if a western state is chosen and they don't want to go west.  (In each case they might concentrate on a county.)  If that happened, there would be fewer people, and the lack of jobs might not be such a problem.

Logged
How much political experience do you have?  Probably not enough.  Get some!  DO THIS NOW!!!

Zxcv

  • ****
  • Offline
  • Posts: 1229
Re:Re-Examination of the Spreadsheet
« Reply #8 on: January 08, 2003, 06:50:42 pm »

Quote
Well, the normalization function is right, but even if it weren't, your solution should yield exactly the same results, assuming that you've transformed the weightings correctly.

Well, they don't yield the same results; that is my problem with the 10-0 system.

Take this example. We are down to two states, A and B, and 3 equally-important criteria (in an abstract sense), a, b and c. Let's further state that A is (very) slightly less good on criteria a and b, and B is very much less good on criterion c.

The correct answer is state A, of course; we know that intuitively. And that is what Ted's normalization gives, with no fudging of weights (which we assume here are all 1):

  Criteria ->         a       b       c         Total
State A              9.9   9.9      10        29.8
State B              10     10       2         22


The 10-0 normalization gives the wrong answer, State B:

  Criteria ->         a       b       c         Total
State A               0       0      10         10
State B              10     10        0         20
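For anyone who wants to check the arithmetic, here is the same toy example run through both normalizations (a Python sketch; the raw figures are invented so that they reproduce the two tables above):

Code:
# The two-state toy example above, worked out both ways.  The raw figures are
# invented so that Ted's normalization gives the first table and the 10-0
# normalization gives the second.

def teds_norm(scores):
    hi = max(scores.values())
    return {s: 10 * v / hi for s, v in scores.items()}

def ten_zero_norm(scores):
    lo, hi = min(scores.values()), max(scores.values())
    return {s: 10 * (v - lo) / (hi - lo) for s, v in scores.items()}

raw = {"a": {"A": 99, "B": 100},   # A barely behind on a and b
       "b": {"A": 99, "B": 100},
       "c": {"A": 50, "B": 10}}    # B far behind on c

for norm in (teds_norm, ten_zero_norm):
    totals = {"A": 0.0, "B": 0.0}
    for criterion in raw:
        for state, score in norm(raw[criterion]).items():
            totals[state] += score        # all weights equal to 1
    print(norm.__name__, totals)
# teds_norm     -> A 29.8, B 22.0  (A wins)
# ten_zero_norm -> A 10.0, B 20.0  (B wins)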

Now you will say that we need to boost the weight for criterion c (or depress a and b) to give the right answer, and of course we can do that. But this is a simple example where we already know the right answer intuitively, so we just pick the appropriate criterion and keep boosting away until the spreadsheet yields it.

But then, why bother with the spreadsheet? The whole point of the thing is for it to tell us the right answer, not the other way around. We don't know the right answer, and we don't know which criterion weight to fudge or by how much. And when we fudge to get one state in line, we inadvertently knock another state out of line, so the whole thing becomes an unwieldy mess.

Sorry, Jason, I just don't believe the function of weights is to compensate for unhelpful normalizations. It is to decide what is important to you, and that's it.

You wrote in that other thread:
Quote
I don't think it's possible to consider how to weight variables without some knowledge of the base numbers & what they mean.
OK, I'll buy that, now that I think about it. You can get some idea what your weight should be by knowing what the data are saying. But that is a far cry from fudging them to compensate for inadvertently having knocked a viable state out of contention!

You also wrote this:
Quote
But even if you think you should just consider variables abstractly to determine their weightings, the current method of interpolation is better than Ted's suggested method, which requires not just a consideration of the variables' importance "abstractly" or "in themselves" but also how the scale of the base variable is affecting the transformation of the data.
I don't know what that means. The normalization is supposed to deal with the scale.

You wrote this, which I didn't look at too closely at the time:
Quote
But Ted's solution can create paradoxes if you're not very sophisticated about the way in which you do the ranking.  For example, imagine the ranking is "government ownership above 48%."  Then state A scores 2% and state B scores 1% even though the fundamental, underlying concept is the same.  Ted's solution would give B a 5 and A a 10.  But any good normalization should not change if the scale of the fundamental variable changes.
Examining this again, I see that your implementation of Ted's method in fact does give a more reasonable result. The 10-0 algorithm would assign a 10 to A and a 0 to B. A and B are really near equivalent, and Ted's algorithm more closely approximates that reality than the 10-0 algorithm does. But of course the real way this would be normalized would be proportionately (referenced to 0, not 48), with a simple extra provision that zero is returned if the value is 48 or under.

I will provide, in my cut of the spreadsheet, a cell with the 10-0 algorithm so you can cut and paste it as you like.
Logged

JasonPSorens

  • Administrator
  • *****
  • Offline
  • Posts: 5724
  • Neohantonum liberissimum erit.
    • My Homepage
Re:Re-Examination of the Spreadsheet
« Reply #9 on: January 08, 2003, 07:09:41 pm »

Quote
Isn't there a problem with small differences being magnified?

Well, it depends on the context.  Since we're adding together variables, the appropriate way to deal with that is to give a very small weighting to variables where the substantive differences are very small.  There's no non-arbitrary way to deal with the substantiveness of differences.  Creating a scale based on 0 to highest value is just as arbitrary as creating a scale based on lowest to highest value.  Neither method is wrong; you just have to be clever about how you assign the weightings.

Quote
For example, right now, I have eliminated WY, ND, and VT from further consideration, because if we can't get enough jobs in those states the project will fail.

Well, that's a legitimate view, but a very controversial one of course!  The counterargument is that if fewer than 20,000 people move, we can still have big political influence in these states & jobs may be available for all those who do move.
Logged
"Educate your children, educate yourselves, in the love for the freedom of others, for only in this way will your own freedom not be a gratuitous gift from fate. You will be aware of its worth and will have the courage to defend it." --Joaquim Nabuco (1883), Abolitionism

JasonPSorens

  • Administrator
  • *****
  • Offline
  • Posts: 5724
  • Neohantonum liberissimum erit.
    • My Homepage
Re:Re-Examination of the Spreadsheet
« Reply #10 on: January 08, 2003, 07:30:03 pm »


Quote
Well, they don't yield the same results; that is my problem with the 10-0 system.

They do if you weight criteria a and b in your example very small. ;)  Neither method is wrong, but if we use Ted's method, then we have to be equally clever about the weightings.  For example, if states are scored 0-10 on a very important variable under my normalization, but 3.5-10 under Ted's normalization, then on Ted's spreadsheet we should scale that variable's weighting up by 10/6.5 (to roughly 1.5 times what we gave it on my spreadsheet) so that it has the same influence on the rankings.  More fundamentally, Ted's normalization doesn't take substantive differences into account any more than mine does.  You've constructed your example so that the difference between 9.9 and 10, or 9 and 10, means the same thing no matter what raw variable you're looking at.  But maybe the difference between 9 and 10 on variable a is more important or relevant than the difference between 5 and 10 on variable b.
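Put concretely, here is a quick sketch of that adjustment (Python, using the hypothetical 3.5-10 range from above):

Code:
# Sketch of the weight adjustment described above.  A variable's pull on the
# rankings is (weight x spread of its normalized scores), so to keep that pull
# the same across the two spreadsheets:

spread_mine = 10.0 - 0.0    # the variable runs 0-10 under my normalization
spread_teds = 10.0 - 3.5    # the same data runs 3.5-10 under Ted's

weight_mine = 2.0                                      # whatever you chose on my sheet
weight_teds = weight_mine * spread_mine / spread_teds  # ~3.1 on Ted's sheet

print(round(weight_teds, 2))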

Quote
Sorry, Jason, I just don't believe the function of weights is to compensate for unhelpful normalizations. It is to decide what is important to you, and that's it.

But this is impossible to do in the abstract.  You have to look at what the raw variables mean substantively.  Under any normalization system, weightings cannot simply reflect how important you think a variable is; you also have to know something about what the data look like.

Quote
Quote
I don't think it's possible to consider how to weight variables without some knowledge of the base numbers & what they mean.
OK, I'll buy that, now that I think about it. You can get some idea what your weight should be by knowing what the data are saying. But that is a far cry from fudging them to compensate for inadvertently having knocked a viable state out of contention!

But it's not fudging.  You're basing the weighting on how important you think the differences among the states are.  Maybe states vary from 5 to 5 billion on some raw measure, but maybe that kind of difference is not very important, so you give it a low weighting.  That's how it works: you weight the variables on the basis of how important you perceive the observed variation to be.  No fudging involved.

Let me use an example directly from the State Data page.  How important is state & local government spending?  Well, moderately important, I guess; it's a cultural factor.  So we look at the data: 6.2% of state GDP to 10.8%.  Those are moderate but probably meaningful differences, so we give the variable a medium-to-low weighting.  But imagine now that there are some pure communist states under consideration, with spending up around 70% of GDP.  Suddenly this variable becomes quite important: we don't want to pick a communist state!  So we give the variable a higher weighting, after normalization of course.  Now imagine the reverse: state government spending ranges from just 6.2% to 6.3% of GDP.  We decide that's not meaningful variation, and we give the variable a very low weighting.

That's exactly what I'm suggesting we do with the spreadsheet, and we will have to do it no matter which normalization method is used, Ted's or mine.  You automatically have to consider the range of the data before making a weighting decision: that's not fudging, and there's no way around it.
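As a sketch of that decision process (Python purely for illustration; the cut-offs and weights are arbitrary judgment calls, not a rule):

Code:
# Sketch of weighting a variable by how much its observed variation matters.
# The spending ranges come from the examples above; the cut-offs and weights
# are arbitrary judgment calls, purely for illustration.

def spending_weight(values_pct_gdp):
    spread = max(values_pct_gdp) - min(values_pct_gdp)
    if spread < 0.5:     # e.g. 6.2% vs 6.3% of GDP: not meaningful variation
        return 0.25
    elif spread < 20:    # e.g. 6.2% to 10.8%: moderate but meaningful differences
        return 2
    else:                # e.g. a 70%-of-GDP state in the mix: very important
        return 6

print(spending_weight([6.2, 6.3]))          # 0.25
print(spending_weight([6.2, 8.0, 10.8]))    # 2
print(spending_weight([6.2, 10.8, 70.0]))   # 6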

Quote
You also wrote this:
Quote
But even if you think you should just consider variables abstractly to determine their weightings, the current method of interpolation is better than Ted's suggested method, which requires not just a consideration of the variables' importance "abstractly" or "in themselves" but also how the scale of the base variable is affecting the transformation of the data.
I don't know what that means. The normalization is supposed to deal with the scale.

Ted's method assumes that 0 is the natural minimum, which makes it sensitive to where the scale's zero point happens to sit.  But some variables can take negative values, and others have floors well above zero.  Population density is an example of the latter: a population density of zero is impossible.  Hence the example below...

Quote
You wrote this, which I didn't look at too closely at the time:
Quote
But Ted's solution can create paradoxes if you're not very sophisticated about the way in which you do the ranking.  For example, imagine the ranking is "government ownership above 48%."  Then state A scores 2% and state B scores 1% even though the fundamental, underlying concept is the same.  Ted's solution would give B a 5 and A a 10.  But any good normalization should not change if the scale of the fundamental variable changes.
Examining this again, I see that your implementation of Ted's method in fact does give a more reasonable result. The 10-0 algorithm would assign a 10 to A and a 0 to B. A and B are really near equivalent, and Ted's algorithm more closely approximates that reality than the 10-0 algorithm does. But of course the real way this would be normalized would be proportionately (referenced to 0, not 48), with a simple extra provision that zero is returned if the value is 48 or under.

Well, I was working with an example you gave of states that are very close together.  But why should 0 be the reference point?  In my view, the appropriate reference point is whatever the lowest value is.
Logged
"Educate your children, educate yourselves, in the love for the freedom of others, for only in this way will your own freedom not be a gratuitous gift from fate. You will be aware of its worth and will have the courage to defend it." --Joaquim Nabuco (1883), Abolitionism

JasonPSorens

  • Administrator
  • *****
  • Offline
  • Posts: 5724
  • Neohantonum liberissimum erit.
    • My Homepage
Re:Re-Examination of the Spreadsheet
« Reply #11 on: January 08, 2003, 07:42:27 pm »

Here's an example of how Ted's normalization method can trap the unwary, using actual State Data numbers.

Let's suppose that, considered in a very abstract sense, you regard gun freedom and # of jobs as equally important variables, and on that basis you give them both a weighting of 2 (assuming the spreadsheet allows you to make this judgement on an abstract consideration of the variables alone).

The spreadsheet you're creating will yield the following values for the high and low states on the gun freedom and jobs variables:

Top on gun freedom: New Hampshire, 10
Bottom on gun freedom: Delaware, 6.7
Top on jobs: Idaho, 10
Bottom on jobs: Wyoming, 1.7

If you give an equal weighting to both, then Wyoming gets hurt badly, and Delaware doesn't get hurt much.  But then you look at the raw data.  Even though the differences in the gun ratings are small, from Delaware at 7 to New Hampshire at 10.5, they represent pretty substantial departures in policy on the most important gun issue, concealed carry.  So those differences are important.  By contrast, you look at the jobs variable and (let's suppose) realize that all the states have enough jobs, so Idaho's advantage in this area is superfluous.  If that's your view, then you want to weight gun freedom a good bit higher than jobs; but if you hadn't looked at the raw data and had assumed there were real, meaningful differences among the states in jobs, your spreadsheet would yield exactly the wrong result.  The fatal assumption is that the difference between 6.7 and 10 on one variable must be substantively smaller than the difference between 1.7 and 10 on another.
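To make the trap concrete, here is a sketch using just the endpoint scores above (Python; the reweighting at the end is a judgment call, not a prescription):

Code:
# Sketch of the trap described above, using the endpoint scores quoted there.
# With equal weights of 2, the points a state loses against the leader on a
# variable are weight * (10 - its normalized score):

weight_guns = weight_jobs = 2.0

delaware_gun_penalty = weight_guns * (10 - 6.7)   # 6.6 points
wyoming_job_penalty  = weight_jobs * (10 - 1.7)   # 16.6 points

# Wyoming gets hit far harder than Delaware, even though (on the view above)
# the gun differences are substantive and the jobs differences are not.
# After looking at the raw data you might reweight, say:

weight_guns, weight_jobs = 3.0, 0.5               # judgment call, not a rule

delaware_gun_penalty = weight_guns * (10 - 6.7)   # 9.9 points
wyoming_job_penalty  = weight_jobs * (10 - 1.7)   # ~4.2 points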

So you always have to look at the raw data to determine a weighting, whether you use Ted's normalization procedure or mine to create the spreadsheet.
Logged
"Educate your children, educate yourselves, in the love for the freedom of others, for only in this way will your own freedom not be a gratuitous gift from fate. You will be aware of its worth and will have the courage to defend it." --Joaquim Nabuco (1883), Abolitionism

TedApelt

  • FSP Participant
  • ***
  • Offline
  • Posts: 117
  • Free 50 states - one at a time.
Re:Re-Examination of the Spreadsheet
« Reply #12 on: January 08, 2003, 10:43:29 pm »

Before I say anything else, I would like to say that I have always considered myself pretty good with number logic, ways of measuring things, and the statistical fallacies that trip people up.

Having said that, I must admit to being in over my head now.  I am befuddled.

The only thing I can think is that bright mathematicians must have studied this problem for many years, probably even centuries, and come up with the answer.  Can someone quote it to me from a statistics book, or some other book that deals with this very thing?
Logged
How much political experience do you have?  Probably not enough.  Get some!  DO THIS NOW!!!

Zxcv

  • ****
  • Offline
  • Posts: 1229
Re:Re-Examination of the Spreadsheet
« Reply #13 on: January 09, 2003, 02:20:26 am »

Ted, I'm beginning to get the impression you have a "hard science" background, like me. This is not hard science; it is pretty mooshy, with lots of guesses and judgement calls. So we are having difficulty with this point (I don't think it's math, but some soft science like social studies, or whatever the terminology is these days). But I'm getting a faint glimmer of what Jason is talking about.

Well, since, as Jason pointed out, the two normalizations will yield the same results provided you compensate appropriately with the weightings, and since you have to fiddle with those either way, I might just stick with yours anyway, as it is easier to put into the spreadsheet!

Unless Jason slaps me around again.  ;)
Logged

JasonPSorens

  • Administrator
  • *****
  • Offline
  • Posts: 5724
  • Neohantonum liberissimum erit.
    • My Homepage
Re:Re-Examination of the Spreadsheet
« Reply #14 on: January 09, 2003, 09:23:17 am »

Haha, I don't mean to slap you around. ;)  I guess this is more 'soft science', since the hard sciences don't often rely on tallying up variables to make predictions; there, if the initial conditions hold, the result is a given.  If there's demand for it, I can ask around in statistical circles to see what they say.
Logged
"Educate your children, educate yourselves, in the love for the freedom of others, for only in this way will your own freedom not be a gratuitous gift from fate. You will be aware of its worth and will have the courage to defend it." --Joaquim Nabuco (1883), Abolitionism