Netli, Linux Take Web to Warp Speeds
LinuxPlanet, By Brian Proffitt: July 7, 2003

The Joys of Sublight
In physics, distance in space often translates into longer durations of time. The farther you go, the longer it will take you to get there. Unless you change the way you're going.
If a new video store opens up a mile farther from your home than the one you usually go to, you would have to walk the extra 20 minutes or so to get there--or you could simply jump in your car or a cab and take only another minute or so to peruse this new store's video collection.
The Internet, too, is afflicted by the distance/time problem. Even though data moves at very close to the speed of light, we all constantly run into examples of distance-induced delays. If I were to pull up a Web page from a server in Chicago, it would be in my Mozilla browser in less than an eyeblink, because Chicago is right up the pipe that services my hometown. If I tried to pull the same size page from a similarly powered computer in Malaysia, however, I could reasonably expect delays of four, six, even nine seconds before the page content even hit my browser.
Nine seconds is not a lot of time in the grand scheme of our lives, and occasional delays of this nature seem acceptable to us as we surf around the world. But if you are running a site where commerce or connectivity is absolutely paramount, nine seconds is an eternity in which customers can get dropped, packets can be lost, and all the other problems of lengthy HTTP requests can crop up. Multiply that nine seconds by millions of page views per day, and pretty soon you are talking real money lost to the ether.
Distance equals time, and time, as always, equals money.
To change the way the Internet handles data would be like changing the laws of physics around us; you can build more pipes, you can build better software, but the fundamental infrastructure for getting data from point A to points B through ZZ is still the same.
But one Silicon Valley company is achieving the impossible, using that same infrastructure to get global round trips for Web page delivery consistently down to less than one second. Time after time.
Their secret? A new way of delivering data over the same pipes, and the customizability of Linux.
The company's name is Netli, a three-year-old start-up from Palo Alto, California. Like a lot of Silicon Valley companies, it's founded on big dreams and high technology. But unlike some SV firms, Netli is already delivering its product to its customers--big names like Nielsen, HP, and Millipore--customers who need lightning-fast delivery of their Web pages to browsers anywhere on the planet.
Adam Grove, Netli's CTO and co-founder, is one of the people with the big dream, and when you talk to him you get the sense that what he and the company are doing draws on a lot of technical know-how and more than a little common sense.
Netli addresses a universal problem for those whose revenue and lifeblood are tied to the Internet: the problem of delays in delivering Web content.
According to Grove, there are essentially three things that cause delivery delays on the Internet. First, there are the server delays, inherent in the software and hardware of the Web server itself. You can juice up a Web server to the nth degree, but there will always be a tiny, sub-second delay in getting pages out the virtual door when a request comes in.
On the other end of the connection are the last-mile delays. These are the delays caused by the type of connection closest to the end-user's computer. Dial-up is still very prevalent around the world, and for some Web pages and applications even DSL and cable connections can slow delivery down.
And between these two ends are what Grove refers to as the middle-mile delays. The middle mile is where the data does most of its traveling as it crosses the planet; it is roughly the distance between the Web server's ISP and the end-user's ISP. It is here, in the middle mile, that the third category of delays appears: the distance-induced delays.
Distance-induced delays can be caused by many things: router delays, traffic congestion, packet loss. Any number of things can cause lost or misrouted data packets. Now, the nature of the TCP/IP protocol that most of the Internet uses for communication is such that whenever a packet is lost, it is retransmitted by the Web server at the request of the end-user. This is the reason the Internet works so well. Unless there is some sort of overload or mechanical problem with the server, you can be reasonably sure a request to that Web server will eventually get a page to your browser.
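That guarantee is easy to sketch in code. The following Python fragment is purely illustrative--it is not Netli's code, and the 30 percent loss rate is an assumption--but it shows how a receiver keeps asking for whatever is still missing, so every dropped packet costs at least one more round trip:

import random

def round_trips_to_deliver(num_packets, loss_rate=0.3):
    """Count the round trips needed before every packet has arrived."""
    missing = set(range(num_packets))   # packets the end-user has not yet received
    trips = 0
    while missing:
        trips += 1
        # The server resends whatever the end-user reports missing;
        # some of those retransmitted packets are lost again.
        missing = {p for p in missing if random.random() < loss_rate}
    return trips

print(round_trips_to_deliver(25))   # typically 3 to 5 trips instead of 1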
But this compulsive completeness is also a big reason why the Internet can get so slow, even over backbone connections, Grove explained. He described an example of how this happens: delivering a 70-KB Web page with 25 objects from a server in Atlanta to an end-user in Tokyo. In the first mile (Web server to Web server's ISP), the round-trip delivery time would be 0.25 seconds. In the last mile, from the Tokyo ISP to the Tokyo end-user, the round trip is 0.1 seconds. In the middle mile, from ISP to ISP, the round-trip time should ideally be 0.2 seconds. But because of packet drops and congestion, the middle-mile trip is actually made 31 times--which jacks up the duration of the middle-mile leg to 6.2 seconds. What should be a 0.55-second trip now takes 6.55 seconds.
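The arithmetic is simple to reproduce. Here is a minimal sketch using only the figures from Grove's example (again in Python, and again illustrative rather than anything Netli ships):

first_mile_rtt = 0.25    # Web server to its ISP, one round trip (seconds)
middle_mile_rtt = 0.20   # ISP to ISP, one ideal round trip (seconds)
last_mile_rtt = 0.10     # Tokyo ISP to the end-user, one round trip (seconds)

ideal = first_mile_rtt + middle_mile_rtt + last_mile_rtt
print("Ideal delivery time: %.2f seconds" % ideal)      # 0.55 seconds

middle_mile_trips = 31   # packet drops and congestion force 31 crossings
actual = first_mile_rtt + middle_mile_rtt * middle_mile_trips + last_mile_rtt
print("Actual delivery time: %.2f seconds" % actual)    # 6.55 seconds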
This is, of course, only one example, but Grove maintains that it is indicative of a problem that happens with each passing moment on the Internet. The traffic delays inherent in the middle mile are what drag down transmission times for transoceanic or transcontinental server requests and deliveries.
Traditionally, distance-induced delays are handled either with better bandwidth or through replication of a Web site closer to where the end-users are.
The problems with the first solution are obvious. Even if you build huge backbones all the way around the world (a phenomenal task, to be sure), there are still the congestion problems that kick in when packets get to their destination ISP or leave their departure ISP. Build bigger pipes, and the ensuing flood of data would just make the problem worse.
Web-site replication is an elegant solution, but it only really works if the site's content is static in nature. LinuxPlanet could be replicated in Europe, for example, because the site's content does not change very often. But if there is a database involved in the site's dynamics, and if that database is hit often, then replication quickly becomes a huge nightmare, if not an impossibility.
Netli approaches the problem of these distance-induced delays in the middle mile not head on, as you might expect, but sideways. In other words, it does not go through the problem, but around it.