ETech – Swarmcasting

Justin F. Chapweske, The Swarming Web
HTTP hasn’t changed much since the early days of the Internet, but it has its warts. You need load balancers, routers, caches, fault-tolerant servers, etc., just to scale up to enterprise applications.

How about remixing HTTP? What happens when you want to send really large amounts of data over HTTP? To send a 1 GB file, your chance of failure is about 60%. You should be able to combine publicly available bandwidth into new bandwidth when you need it. Kind of like RAID for IP: swarming is RAID, but for web content.
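A back-of-the-envelope way to see where a failure rate like that comes from (the per-megabyte rate below is an assumption, not from the talk): even a tiny chance of failure per megabyte compounds over a whole-file transfer that must succeed end to end.

```python
# If each megabyte of a single end-to-end transfer independently fails
# with probability q, a 1 GB (1024 MB) transfer succeeds only with
# probability (1 - q)^1024.  q here is a hypothetical illustration.
q = 0.0009          # assumed per-MB failure probability
mb = 1024           # 1 GB transfer
success = (1 - q) ** mb
print(f"whole-file success: {success:.2f}, failure: {1 - success:.2f}")
# → whole-file success: 0.40, failure: 0.60
```

Swarming sidesteps this by transferring the file in independently retryable chunks, so one bad chunk doesn’t sink the whole download.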

Today – fault-tolerant servers, plus load balancers to make sure your system is always up. Expensive. Another option is Akamai (a content delivery network). Even more expensive. The poor man’s version of Akamai is the mirror network (like mirrored downloads on SourceForge). What we need is self-healing data transfer: if data gets corrupted in transit, the transfer should heal itself immediately.
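One plausible sketch of what “self-healing” could mean in practice (the manifest-of-hashes scheme below is an assumption for illustration, not the talk’s design): verify each chunk against a known hash and re-fetch only the chunks that fail, instead of restarting the whole download.

```python
import hashlib

# Hypothetical self-healing transfer: each chunk's hash is known up
# front (e.g. from a manifest).  Corrupted chunks are detected and
# re-fetched from another source; good chunks are kept.
def heal(chunks, expected_hashes, refetch):
    """Replace any chunk whose SHA-1 doesn't match its expected hash."""
    healed = []
    for i, chunk in enumerate(chunks):
        if hashlib.sha1(chunk).hexdigest() != expected_hashes[i]:
            chunk = refetch(i)          # pull a clean copy elsewhere
        healed.append(chunk)
    return healed

# Usage: chunk 1 arrived corrupted and gets repaired.
good = [b"aaa", b"bbb"]
hashes = [hashlib.sha1(c).hexdigest() for c in good]
repaired = heal([b"aaa", b"xxx"], hashes, lambda i: good[i])
```

Because corruption is localized to a chunk, the cost of healing is one small re-fetch rather than a full retransmission.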

What is swarming content delivery? Popularized by BitTorrent. We need a new transport: an ad hoc, self-scaling content delivery system, built as an extension to HTTP via headers. Prediction: this concept will be ubiquitously deployed over the next several years. A good current trend – generating static files that systems can store easily on disk (CSS, RSS, Google Maps), which can then be swarmed. Info on SwarmStream here.
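A minimal sketch of the idea using only standard HTTP machinery (the mirror URLs and chunk size are placeholders, and this is not SwarmStream’s actual protocol): split a static file into byte ranges and pull each range from a different mirror in parallel via the standard `Range` header.

```python
import concurrent.futures
import urllib.request

def byte_ranges(size, chunk):
    """Split a file of `size` bytes into (lo, hi) inclusive byte ranges."""
    return [(lo, min(lo + chunk, size) - 1) for lo in range(0, size, chunk)]

def swarm_fetch(mirrors, size, chunk=1 << 20):
    """Download one file by fetching its ranges round-robin across mirrors."""
    ranges = byte_ranges(size, chunk)

    def fetch(i):
        lo, hi = ranges[i]
        req = urllib.request.Request(
            mirrors[i % len(mirrors)],                 # round-robin over mirrors
            headers={"Range": f"bytes={lo}-{hi}"})     # standard HTTP/1.1 header
        with urllib.request.urlopen(req) as resp:
            return resp.read()

    with concurrent.futures.ThreadPoolExecutor() as pool:
        return b"".join(pool.map(fetch, range(len(ranges))))
```

Since range requests work against any plain HTTP server, the static files mentioned above (CSS, RSS, map tiles) are exactly the kind of content this can swarm with no server-side changes.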