Net Neutrality in the News Posted by: Dale Franks
on Tuesday, May 02, 2006
There were two Op/Eds on Net Neutrality today, one in the New York Times, and another in the Washington Post.
The NYT version is a fairly uninspired pro-neutrality editorial. The WaPo version, however, requires some comment.
Yet perhaps without realizing it, those who are now advocating "net neutrality"—the notion that those who shell out the big bucks to build new much higher speed networks can't ask the websites that will use the networks intensively to help pay for them—could keep this new world from becoming a reality. Further, they could deprive the websites themselves of the benefits of being able to use the networks to deliver their data-heavy content.
That is not, in fact, what we are advocating. Moreover, I'd bet that when the guys at Google or Yahoo get their bandwidth bills every month, they think they are already paying for the network. In fact, I bet they think that's the whole purpose of those bandwidth charges.
I run a web hosting company myself. With every plan, you get a fixed amount of bandwidth included. If you exceed that bandwidth, even by one byte, you get charged for an additional 5GB of bandwidth. You can have all the bandwidth you want...you just have to pay for it.
That's the standard for practically every hosting company in the world. Bandwidth isn't free, and never has been. Everybody gets charged for it. And by everybody, I don't just mean content providers. Individual users pay for the bandwidth they use, too. Everybody pays for bandwidth, coming and going.
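The metered billing scheme described above can be sketched in a few lines. The 5GB overage increment is from the post; the dollar figures and plan sizes are made up purely for illustration:

```python
# Hypothetical hosting bill: a fixed bandwidth allowance, with any overage
# billed in whole 5 GB increments. Prices here are invented for illustration.

OVERAGE_BLOCK_GB = 5

def monthly_bill(base_price, included_gb, overage_block_price, used_gb):
    """Return the total charge for one month of hosting."""
    if used_gb <= included_gb:
        return base_price
    overage_gb = used_gb - included_gb
    # Going even one byte over the allowance buys a whole 5 GB block.
    blocks = -(-overage_gb // OVERAGE_BLOCK_GB)  # ceiling division
    return base_price + blocks * overage_block_price

# A site that uses more transfer simply pays more:
print(monthly_bill(20.0, 50, 10.0, 48))    # within allowance -> 20.0
print(monthly_bill(20.0, 50, 10.0, 51))    # 1 GB over -> 30.0
print(monthly_bill(20.0, 50, 10.0, 120))   # 70 GB over -> 160.0
```

The point of the sketch is simply that usage and cost are already linked: heavy users of the network already pay more, with no tiering needed.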
No one on the Net Neutrality side is, as far as I'm aware, arguing that network providers can't charge for bandwidth. Nor are any of the network providers giving away any of that bandwidth for free.
What the network providers are after is not the ability to charge for the bandwidth that content providers use. What they are seeking is to obtain rent from content providers, by charging more than the bandwidth is worth, and restricting the bandwidth of content providers who are unwilling to pay that rent. This is supposed to be done under the guise of rationing our precious, precious bandwidth.
But, there's already a method of rationing bandwidth. It's called "price". That's already an adequate rationing mechanism without resorting to rent-seeking.
Economists are fond of saying that there are no "free lunches," which is to say that new products and services don't magically appear. Those who benefit from them pay for them. A corollary of this simple principle is that markets will not work efficiently— that is, they will not generate the maximum output at the least cost— unless prices fully reflect all of the costs of products sold or services delivered.
And your point would be...?
Is it that bandwidth prices are too low? OK, then raise the price of bandwidth from $X per GB of transfer to $X+ per GB.
Are you arguing that Yahoo! is getting some free benefit from all the bandwidth they use? If so, then I'd bet the Yahoo! bean counters would disagree when they write out those checks every month to pay for their bandwidth.
Oh, and by the way, without content providers, who would be using any of that bandwidth in the first place? If the content didn't exist, the network would be pretty darned useless.
There are well known externalities associated with the Internet. One positive externality is that the more users there are, the more beneficial it is to be plugged in, and the more profitable it is to write software for Net applications. But increasingly, as content like movies, real-time games, and other data-heavy services like remote disease monitoring are made available, some data imposes negative externalities— traffic congestion, if you will— that adversely affect the ability of others to use the Net reliably.
Until recently, traffic congestion on the Net was not a problem. There was so much excess capacity in the fiber optic cables and other parts of the complex telecommunications network that additional data heavy traffic delivered from one site did not threaten the reliability of traffic delivered from other sites and routed through the Net. But that blissful world is gone now. The existing networks are rapidly running out of excess capacity. We need new cyber-highways if the brave new world of movies, fast Google searches, and telemedicine— to take a few examples— is to become at all viable.
The question, then, is: who should pay for these much higher speed networks? Asking all users to pay the same amount, regardless of how much data they download, hardly seems fair.
And, who precisely, is charging for network access without reference to the bandwidth used? I know I'm not. Nor do I know of anyone who is.
So, again, the point would be...?
Why should telecoms companies that want to build the next-generation cyber-highways be treated any differently? Shouldn't they at least be allowed to charge data heavy sites more than others so that the many of us who don't download lots of data don't get socked?
Again, they already do charge data heavy sites more than little web sites. They do so through charging for every GB of transfer, which means that sites that use more bandwidth pay more in bandwidth charges.
If it helps, look at it this way:
Let's say you're an SBC telephone customer. You pick up the phone to call a local business. After you dial the number, you get a message that says, "You have chosen to call a number that uses GTE as its local phone service. We are unable to connect this call between the peak hours of nine AM and five PM. If you wish to connect to this business during non-peak hours, you must pay an additional service charge."
So, let's say you're lucky, and the business remains open until 5:30 PM. Now, when you call, you get this message: "To connect to this number with a low-quality connection at the rate of twenty-five cents a minute, please press 'one' now. To connect to this number with a regular, high quality connection at fifty cents per minute, please press 'two' now."
Do you think you'd like a phone service that operates like that? If not, then why would you want a tiered Internet that works like that?
Treating phone companies as common carriers has some advantages that I doubt you really want to give up. Treating the Internet that way carries the same advantages.
There may be some good arguments against Net Neutrality, but they won't be found in this op/ed piece.
How does such an editorial get published? Do they bother to research anything? It’s unbelievable. It’s so utterly devoid of even the most basic understanding. Apparently in journalism/punditry school, no one is required to take any courses in math, econ, or basic logical thinking.
Imagine if calling a different phone company’s sales office, or for that matter any service that competed in any way with your phone company or one of its business partners, were impossible. Imagine if calling a customer of a different phone company gave you a good connection if they paid some money to your phone company, and a static-filled and unreliable connection otherwise. Because that’s pretty much exactly what the phone and cable companies want to do to your internet connection.
Jeff, I must say that is a typical socialist red herring, that the for-profit corporations are out to get us.
Maybe. But the fact that ISPs have, in the recent past, intentionally blocked sites for business reasons hints that the suspicion is warranted.
This isn’t just speculation — we’ve already seen what happens elsewhere when the Internet’s gatekeepers get too much control. Last year, Telus — Canada’s version of AT&T — blocked their Internet customers from visiting a Web site sympathetic to workers with whom the company was having a labor dispute. And Madison River, a North Carolina ISP, blocked its customers from using any competing Internet phone service.
We don’t have to make up bogeyman stories about the network providers. They are considerate enough to provide us with them.
What regular costs do telecoms/internet access providers have that justify their charges? Isn’t the whole bandwidth thing a concoction? I mean, it makes no difference to the cables/servers/routers how much data travels across them, right?
I fear that eventually this market will compete itself into the ground and we will be left with two or three major players who will then proceed to hold the rest of us to ransom for access (and we will see a lot more of this conditional access nonsense, and more interference with our comms).
Isn’t the whole bandwidth thing a concoction - I mean, it makes no difference to the cables/servers/routers how much data travels across them, right?
The equipment doesn’t care. The people using it do.
Since both my neighbor and I are on the same subnet, if there wasn’t some bandwidth restriction, my neighbor’s desire for internet porn would have him oinking up all the bandwidth on the subnet. Then I couldn’t access the Net while he was doing so and my ISP would then lose a customer.
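The per-subscriber restriction described above is commonly implemented with a token bucket. Here is a minimal sketch; the rates, burst sizes, and the scenario itself are invented for illustration, not any real ISP's configuration:

```python
# Sketch of per-subscriber rate limiting with a token bucket -- one way a
# shared segment can keep a heavy user from starving the neighbors.
# All numbers here are illustrative.

class TokenBucket:
    def __init__(self, rate_bytes_per_sec, burst_bytes):
        self.rate = rate_bytes_per_sec   # sustained rate allowed
        self.capacity = burst_bytes      # maximum short-term burst
        self.tokens = burst_bytes
        self.last = 0.0

    def allow(self, packet_bytes, now):
        # Refill tokens for the elapsed time, capped at the burst size.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bytes:
            self.tokens -= packet_bytes
            return True   # forward the packet
        return False      # drop (or queue) it

# Two neighbors on the same segment, each capped at 1 MB/s:
me = TokenBucket(1_000_000, 100_000)
neighbor = TokenBucket(1_000_000, 100_000)

# However many packets the neighbor sends, only *his* bucket drains:
for t in range(1, 100):
    neighbor.allow(1500, t * 0.001)
print(me.allow(1500, 0.1))  # True: my traffic still goes through
```

Each subscriber's bucket is independent, so one user exhausting his own allowance cannot consume his neighbor's share of the link.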
Yes, the Telcos care. For their "important" clients they supply SLA contracts guaranteeing a specific level of service, for lots of money. There are huge, huge specs in the TMN and AIN spaces for call control and bandwidth utilization negotiation. So someone cares, because they pay a fortune for it.
I work in the P2P space in distributed computing, and I think the real reason is the explosion of P2P traffic. P2P traffic consumes something like 60% of all internet traffic worldwide, and BitTorrent and eDonkey make up the lion's share of it. Our measurements indicated that most of it is video traffic, with people downloading 500MB VCD-format files and MP3s (a lot less MP3 content than you think; video is the killer).
The telcos see this eating up their usable bandwidth, and their attempts to eliminate it have failed utterly. I think that if they can set up multiple tiers, they can reduce its effect on their networks. However, as Vint Cerf mentioned (I think), the internet would see this as damage and route around it.
First off, "deregulation" only works with healthy competition, and bandwidth currently has very few players. First, you have the duopoly for ISDN/cable high speed access: the local telco and the cable company. Second, remember that all of your ISPs must themselves pay for bandwidth to the "backbone" (a somewhat inaccurate, but useful, term). This means that even if you use a local ISP, they may be buying their bandwidth from AT&T anyway.
"I mean, it makes no difference to the cables/servers/routers how much data travels across them, right?"
Actually, it does. Each packet must be processed, and that processing takes time and resources. Not much, but enough that there are limits. The current "pay per max throughput" model is roughly like "pay per axle" on toll roads.
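A quick back-of-the-envelope calculation shows why those per-packet limits matter: at a fixed link rate, smaller packets mean many more packets per second for a router to process. The frame sizes below are standard Ethernet figures, not measurements from any particular network:

```python
# At a given link rate, packets-per-second depends on packet size.
# 1500 bytes is the standard Ethernet maximum payload; 64 bytes is the
# minimum frame size.

def packets_per_second(link_bits_per_sec, packet_bytes):
    return link_bits_per_sec / (packet_bytes * 8)

GBPS = 1_000_000_000  # a 1 Gbps link

print(round(packets_per_second(GBPS, 1500)))  # 83333 pps with full-size frames
print(round(packets_per_second(GBPS, 64)))    # 1953125 pps with minimum-size frames
```

Same link, roughly 23x the per-packet work: the cables may not "care" how much data crosses them, but the routers certainly do.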
What AT&T and others want to start doing is being able to charge $2 for a Ford and $4 for a Saturn, because Ford gave them some extra payment (bribe) on the side. (Apologies to Saturn and Ford for using their names in the analogies.)
My guess is the Internaps, NTTs, and L3s of the world want to bill more to the ISPs, such as Cogent, that are undercutting their pricing and using them for content delivery. If the ISPs could charge more to excessive users of fixed-price cable/ADSL (the usual story: 10% of users use 90% of the bandwidth), I'm sure they would be delighted to do so. I guarantee every ISP would drool at Telstra's revenue model for retail home customers. It's a similar story to the utilities subsidizing users who use electricity during peak hours in summer months at the expense of customers who use electricity off peak (which is also true). It's just a matter of aligning revenue models to the traffic patterns of your network. Are more peering issues, like L3's delink of Cogent, in the future?