Free State Project Forum


Author Topic: Wireless high speed ISP  (Read 23803 times)


  • ****
  • Offline
  • Posts: 14
  • Testing whether that nation can long endure...
Re: Wireless high speed ISP
« Reply #60 on: June 02, 2004, 05:36:23 pm »

Please, not open-source, guys.

Before you get your flame-throwers warmed up, consider the following: the school I used to work at had a network built entirely of 'on the cheap' Linux routers running BGP, mostly with static routes.  Latency was horrific (latency is the time it takes a packet to cross a network).  Throughput wasn't stellar, partly because the latency reduced client efficiency and partly because of a low backbone speed.  In three of the worst days of my adult work life, we ripped out that entire network and replaced it with HP switches and dedicated routers.  The cost wasn't all that much greater, given that the hardware was considerably faster, employed a mesh-style network, and ran multiple redundant gigabit backbones.  The result was astounding.  On campus, the network was faster than greased lightning.  We still couldn't get off campus, though, because we'd put in an open-source bandwidth limiter, and that thing never worked right.  We bought a Packeteer to replace it and, presto, all the problems went away.

What I'm trying to say is that open source is often a good idea, but it is no silver bullet.  I, for one, am prepared to pay for quality service, which is why I use a Macintosh.

The network infrastructure is the most critical part.  Bad quality of service (QoS) will result in slow adoption and stagnation.  Good QoS will result in rapid adoption, even if that adoption then degrades QoS.  This is how cable modems got such a lead over DSL.  For this to work, the company running it must dedicate itself to the best service possible, meaning, as much as possible, a dedicated backbone combining fiber (run by the company) and microwave (802.16a is a good candidate for outlying areas).

The reason for this is simple: one 802.11b connection is not much, at 11 Mbps.  Two are 22 Mbps, and so on.  However, if they are strung together in a chain, the node at the end has 11 Mbps, the next guy has 11 Mbps minus whatever the end guy is using, and so on, all the way back to the one connected to the wire, which can be starved by everyone.  The only way to prevent this is to employ some sort of QoS hardware.  The best way to do it is with a separate mesh-and-edge design, where the edge sits on the other side of the QoS from the mesh, meaning that we control bandwidth where it connects to the network.  Cisco makes hardware that can do this.  Cisco also makes the switchgear and routers.  They all work together.
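The daisy-chain starvation I'm describing is easy to work out on paper.  Here's a rough sketch (my own toy model, not anything from a vendor) where each hop shares one 11 Mbps channel and has to relay everything from the nodes farther out:

```python
# Toy model of bandwidth starvation in a daisy-chained 802.11b network.
# Assumes each uplink hop is a fixed 11 Mbps channel; all names are illustrative.

LINK_CAPACITY_MBPS = 11.0

def available_bandwidth(demands_from_end):
    """Given each node's demand, starting from the node farthest from the wire,
    return how much capacity each node actually has left on its uplink hop.
    Each hop must carry its own traffic plus everything relayed from farther out."""
    relayed = 0.0
    available = []
    for demand in demands_from_end:
        # Capacity left after carrying traffic relayed from nodes beyond this one.
        available.append(max(LINK_CAPACITY_MBPS - relayed, 0.0))
        relayed += min(demand, available[-1])
    return available

# Three chained nodes, each wanting 5 Mbps: the last node in the chain
# (closest to the wire) is nearly starved.
print(available_bandwidth([5.0, 5.0, 5.0]))  # [11.0, 6.0, 1.0]
```

The guy at the end of the chain sees the full 11 Mbps; everyone closer to the wire sees less and less, which is exactly why you need QoS enforcement at the edge rather than hoping the mesh shares fairly.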

Cisco also charges license fees.  These are recurring charges.  Bandwidth to the internet is a recurring charge, too.  Upgrading such a network is a recurring charge as well.  If we're actually fine with 802.11b as the edge of the network, and never want anything more, then capital investment is most of the issue.  Otherwise, we're looking at replacement costs for failed equipment, as a percentage of the capital cost, as an ongoing cost too.  There's no way to get away from those recurring costs.
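To make the point concrete, here's a back-of-the-envelope yearly cost sketch.  All the figures are made-up placeholders, not quotes from Cisco or anyone else:

```python
# Back-of-the-envelope model of the recurring charges described above.
# Every number here is a placeholder assumption, not a real vendor quote.

def annual_recurring(capital_cost, replacement_rate, license_fees, bandwidth_fees):
    """Yearly cost of keeping the network running after the initial build-out.
    replacement_rate is the fraction of capital equipment that fails per year."""
    return capital_cost * replacement_rate + license_fees + bandwidth_fees

# Example: $100k build-out, 5% of equipment failing per year,
# $8k/yr in license fees, $24k/yr for upstream bandwidth.
print(annual_recurring(100_000, 0.05, 8_000, 24_000))  # 37000.0
```

Even with modest assumptions, the recurring side quickly adds up to a meaningful fraction of the original capital cost every year, which is why "we'll just pay for it once" doesn't hold.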

Wi-Fi would be great in a city with dense coverage, or along a road with relatively few obstructions and easy access to power (trees also make solar power difficult).

However, why not deploy EDGE or 3GSM?  The faster 3GSM is supposed to do 2.5 Mbps, if I remember correctly, and it solves all the problems with mobiles using Wi-Fi as well.  Build out the backbone using WiMAX, but put the edge of the network on 3GSM (or WCDMA, or whatever your pleasure is).

Another thought: powerline transmission doesn't have adequate bandwidth for more than point-to-point.  For the foreseeable future, we're going to need fiber as a backbone, but that fiber can easily be run at gigabit speeds...