I have observed this exact behavior before. Fun story time:
In 2015 I was living in North Korea and teaching computer science there. Part of my job was to download YouTube videos, Linux distros, and other big files to give to the students. (I basically had full discretion about what to share and never experienced censorship... but that would surely have changed if I had been downloading transgressive material.) I discovered that a single TCP connection could get only about 100 kbps, but if I multiplexed the download across many connections I could get >1 Gbps. The school was internally on a 10 Gbps network, and I was effectively maxing out the local network infrastructure.

I eventually diagnosed the problem: an upstream firewall was rate limiting my connections. Despite what you might think, the firewall wasn't doing any meaningful filtering of the content (these were HTTPS connections, so there was no way to do that beyond blocking an IP, and basically no IPs were blocked; all content filtering at the time was done via "social" mechanisms). But the firewall did rate limit the connections. It was configured to rate limit on a per-connection basis rather than a per-user basis, so by multiplexing my downloads over many connections, I was able to max out the local network hardware.

At the time, there was only a single wire connecting all of North Korea to the Chinese internet, and the purpose of the firewall rule was to prevent one user from bringing down the North Korean internet... which I may or may not have done... Eventually I started doing my downloads over a Wi-Fi connection, which provided natural rate limiting that didn't overwhelm the wired link.
I suspect you are observing a similar situation: something between your source and destination is throttling throughput on a per-connection basis rather than a per-user basis. My best guess about how this happens is that a device somewhere along the path allocates a fixed share of its resources to each individual connection, so by using multiple connections you are inadvertently claiming more of that device's resources.
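If you want to test this hypothesis directly, a quick experiment is to pull the same amount of data over 1, 4, and 16 parallel connections and compare the aggregate rates. Below is a minimal sketch using only the Python standard library; the URL is a hypothetical placeholder, and it assumes the server honors HTTP Range requests. If aggregate throughput scales roughly linearly with connection count, a per-connection cap somewhere on the path is the likely culprit.

```python
# Minimal sketch, assuming the target server supports HTTP Range requests
# (i.e. responds with 206 Partial Content). URL is a hypothetical placeholder.
import threading
import time
import urllib.request

URL = "https://example.com/big-file.iso"  # placeholder: any large, range-capable file
TEST_BYTES = 50 * 1024 * 1024             # probe with the first 50 MB

def fetch_range(url, start, end, results, i):
    """Download bytes [start, end] inclusive and record how many bytes arrived."""
    req = urllib.request.Request(url, headers={"Range": f"bytes={start}-{end}"})
    with urllib.request.urlopen(req) as resp:
        results[i] = len(resp.read())

def aggregate_rate(url, total_bytes, n_conns):
    """Fetch total_bytes split across n_conns parallel connections; return bytes/sec."""
    chunk = total_bytes // n_conns
    results = [0] * n_conns
    threads = [
        threading.Thread(
            target=fetch_range,
            args=(url, i * chunk, (i + 1) * chunk - 1, results, i),
        )
        for i in range(n_conns)
    ]
    t0 = time.time()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sum(results) / (time.time() - t0)

if __name__ == "__main__":
    for n in (1, 4, 16):
        rate = aggregate_rate(URL, TEST_BYTES, n)
        print(f"{n:2d} connection(s): {rate * 8 / 1e6:.1f} Mbps aggregate")
```

Plain threads are fine here since the work is entirely network-bound. One caveat: if the server ignores the Range header, each thread will pull the whole file and skew the numbers, so check for a 206 status if the results look implausible.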
Aside: I am an avid user of LLMs (and do research on them professionally). Non-trivial networking is an area where I would be shocked to find LLMs providing good answers. Stack Overflow is full of basic networking setups, but it doesn't have much genuinely good debugging of non-trivial problems, so these types of problems just aren't in the training data. The solutions usually require relatively simple debugging steps that build on basic foundational knowledge, but LLMs can't reason through that foundational knowledge well, and I don't expect the transformer architecture to ever gain that reasoning ability.