Parallel Internet

The Internet is all about transferring files/data from one place to another.

FTP (File Transfer Protocol) and HTTP (HyperText Transfer Protocol) are among the oldest and most widely used protocols. There is an effort under way to update HTTP to 2.0; what I'd like to see is a similar effort for FTP.

One thing that bugs me is that I have to decide, on a server-by-server basis, what the maximum number of simultaneous connections should be. Clearly the server itself is the best judge of how many resources it has available for me to download or upload at once.

SPDY (pronounced "speedy") is Google's effort to improve HTTP along these lines, but FTP also needs to become more adaptive.
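To make the idea concrete, here is a minimal sketch of the kind of policy a server could use to advertise its own connection limit instead of leaving the number to each client. Everything here is hypothetical: the function name, the `max_per_core` and `floor` parameters, and the load-average heuristic are all my own assumptions, not part of any existing FTP or HTTP mechanism.

```python
import os

def dynamic_connection_limit(load_avg, cores, max_per_core=50, floor=4):
    """Hypothetical policy: scale the advertised connection cap down
    as the 1-minute load average approaches the core count."""
    # Fraction of capacity still free, clamped to [0, 1].
    headroom = max(0.0, 1.0 - load_avg / cores)
    # Never advertise fewer than `floor` connections.
    return max(floor, int(cores * max_per_core * headroom))

if __name__ == "__main__":
    load, _, _ = os.getloadavg()   # 1-, 5- and 15-minute averages (Unix only)
    cores = os.cpu_count() or 1
    print(dynamic_connection_limit(load, cores))
```

An idle 4-core machine would advertise 200 connections under these assumed parameters, while a fully loaded one would drop to the floor of 4.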


Bob H said…
I suppose it comes down to the fact that the server application is probably not best placed to know how much resource it can take from the host machine, or, in the case of multiple instances, that the server might struggle to self-limit its resource use.

It would be an interesting thing to write a plug-in for Apache which dynamically adjusted for server load. Perhaps a new FTP server suited to ARM micro-servers, but which scaled to thousands of threads on multi-core systems?
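The dynamic adjustment Bob describes could be sketched as an admission-control layer in front of the accept loop: new connections are admitted only while the active count stays under a load-derived cap, re-evaluated on every attempt. This is a sketch under assumed parameters, not an Apache or FTP server implementation; the class name and thresholds are invented for illustration.

```python
import os
import threading

class AdaptiveLimiter:
    """Hypothetical admission control: admit a new connection only while
    the active count stays under a cap derived from current load."""

    def __init__(self, max_per_core=50, floor=4):
        self.max_per_core = max_per_core
        self.floor = floor
        self.active = 0
        self.lock = threading.Lock()

    def current_cap(self):
        # Re-read the 1-minute load average on each admission attempt,
        # so the cap tracks the host's actual condition.
        load, _, _ = os.getloadavg()
        cores = os.cpu_count() or 1
        headroom = max(0.0, 1.0 - load / cores)
        return max(self.floor, int(cores * self.max_per_core * headroom))

    def try_admit(self):
        with self.lock:
            if self.active < self.current_cap():
                self.active += 1
                return True
            return False  # caller should reject or queue the connection

    def release(self):
        with self.lock:
            self.active -= 1
```

A server's accept loop would call `try_admit()` per incoming connection and `release()` when it closes, so the effective limit shrinks under load without any client-side configuration.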
