
Small-Scale Question Sunday for October 12, 2025

Do you have a dumb question that you're kind of embarrassed to ask in the main thread? Is there something you're just not sure about?

This is your opportunity to ask questions. No question too simple or too silly.

Culture war topics are accepted, and proposals for a better intro post are appreciated.


Is there a tactful way to ask your boss to lay off something? My boss, a smart guy whom I respect, has become obsessed with LLMs. Literally every conversation with him about work topics has become one where he says "I asked (insert model) and it said...", which adds no value to the conversation. Worse, he responds to questions with "have you tried asking AI?". For example the other day I asked him if he knows why multiple TCP streams are faster than one (when you would naively think they would be slower due to TCP overhead), and he asked if I asked AI. Which of course I didn't, because I actually wanted to know the answer, not get something plausible which may or may not be correct. And he's like that with every question posed lately; even when we had legal documents we had questions on, he was like "did you try feeding it to Gemini and asking?"

It's frankly gotten incredibly annoying and I wish he would stop. Like I said, I actually have a lot of respect for the man but it's like he's chosen to outsource his brain to Grok et al lately. I suspect that my options are to live with it or get a new job, but figured I'd ask if people think there's a way I can tactfully address the situation.

For example the other day I asked him if he knows why multiple TCP streams are faster than one (when you would naively think they would be slower due to TCP overhead),

I would think there'd be no difference, ideally.

If there is a difference, I would expect it's because the flow control heuristic on a single stream is a bit wrong and not properly saturating your link. That, or by opening multiple streams you are recruiting more resources on the remote end to satisfy you (e.g. it's a distributed system and each stream hits a different data center).
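If it's the first thing, the ceiling is easy to estimate: a TCP sender can only keep about one window of data in flight per round trip, so a single stream tops out around window / RTT. A rough sketch of that arithmetic (the window and RTT values are made up for illustration, not measurements from anyone's network):

```python
# Back-of-the-envelope: a TCP sender can have at most ~one window of data in
# flight per round trip, so a single stream's throughput is capped at roughly
# window_bytes / rtt. The window and RTT below are made-up illustrative numbers.

def max_throughput_mbps(window_bytes: float, rtt_ms: float) -> float:
    """Upper bound on one stream's throughput given its window and round-trip time."""
    return (window_bytes * 8) / (rtt_ms / 1000) / 1e6

window = 64 * 1024   # 64 KiB effective window (e.g. an under-grown congestion window)
rtt = 20.0           # 20 ms round-trip time

single = max_throughput_mbps(window, rtt)
print(f" 1 stream : ~{single:.0f} Mbps")        # ~26 Mbps
print(f"10 streams: ~{10 * single:.0f} Mbps")   # each stream keeps its own window in flight
```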

Mostly I would ~~Google it~~ ask ChatGPT to Google it.

It also might depend on what you mean by 'faster' or what you are doing. If you are multiplexing streams inside of TCP, like HTTP/2 does, then this can be slower than separate HTTP/1.1 streams, because a single missing packet on the HTTP/2 TCP stream will block all the substreams, whereas a single missing packet on an HTTP/1.1 TCP stream will only affect that one HTTP/1.1 stream. By 'block' I mean the data can't be delivered to the application until the missing packet arrives. The data can still be buffered in the OS, so you can imagine that if you were just looking at a very large transfer with a very small number of missing packets, and you were only worried about the overall transfer time, then this is not really 'slower'. But if you are very worried about the time it takes for small amounts of data to reach the other side, then this can be 'slower'. A good example of this would be some kind of request-response protocol.
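A toy model of that head-of-line blocking, just to make 'block' concrete. The response names and the 200 ms retransmission delay are invented; the only assumption is that the lost packet sits in front of data belonging to the other responses on the shared stream:

```python
# Toy model of head-of-line blocking: four responses share the path, one packet
# belonging to response "B" is lost, and the retransmission takes 200 ms to
# arrive. All numbers are invented; transfer time itself is ignored.

RETRANSMIT_MS = 200.0
responses = ["A", "B", "C", "D"]
lost_in = "B"

# Separate HTTP/1.1 connections: only the stream with the hole has to wait.
separate = {r: (RETRANSMIT_MS if r == lost_in else 0.0) for r in responses}

# One multiplexed HTTP/2 connection: TCP delivers bytes in order, so any data
# queued behind the hole is held back too, even if it belongs to A, C, or D.
shared = {r: RETRANSMIT_MS for r in responses}

print("separate connections:", separate)  # {'A': 0.0, 'B': 200.0, 'C': 0.0, 'D': 0.0}
print("one shared connection:", shared)   # everything waits for B's retransmission
```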

cc @dr_analog

The thing which motivated the question was that we were doing iperf tests from one location on our network to others, and observed a significant difference in speed between one stream and 10. With one stream we might see around 200 Mbps, but with 10 we might see 400 Mbps. That seemed odd because, like I said, you would think a single stream would be faster due to lower overhead.
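For anyone curious, the comparison was basically a single-stream run versus one with 10 parallel streams (-P 10). A rough sketch of it in script form, assuming iperf3 is installed and a server ("iperf3 -s") is already listening on the far end; the address below is a placeholder, not one of our hosts:

```python
# Minimal sketch of the 1-stream vs 10-stream comparison using iperf3's JSON output.
import json
import subprocess

SERVER = "192.0.2.1"  # placeholder address

def run_iperf(parallel: int) -> float:
    """Run a 10-second iperf3 test with `parallel` streams and return the
    aggregate receive rate in Mbps (parsed from iperf3's JSON report)."""
    out = subprocess.run(
        ["iperf3", "-c", SERVER, "-P", str(parallel), "-t", "10", "-J"],
        capture_output=True, text=True, check=True,
    )
    report = json.loads(out.stdout)
    return report["end"]["sum_received"]["bits_per_second"] / 1e6

print(f" 1 stream : {run_iperf(1):.0f} Mbps")
print(f"10 streams: {run_iperf(10):.0f} Mbps")
```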

I have observed this exact behavior before. Fun story time:

In 2015 I was living in North Korea and teaching computer science over there. Part of my job was to download YouTube videos, Linux distros, and other big files to give to the students. (I basically had full discretion about what to give and never experienced censorship... but that would surely have changed if I had been downloading transgressive material.) I discovered that a single TCP connection could get only about 100 kbps, but if I multiplexed the download over many connections I could get >1 Gbps. The school was internally on a 10 Gbps network, and I was effectively maxing out the local network infrastructure.

I eventually diagnosed the problem: there was an upstream firewall that was rate limiting my connections. Despite what you might think, the firewall wasn't doing any meaningful filtering of the content (these were HTTPS connections, so there wasn't a way to do that beyond just blocking an IP, and basically no IPs were blocked; all content filtering at the time was done via "social" mechanisms). But the firewall did rate limit the connections, and it was configured to do so on a per-connection basis and not on a per-user basis, so by multiplexing my downloads over many connections I was able to max out the local network hardware.

At the time, there was only a single wire that connected all of North Korea to the Chinese internet, and the purpose of the firewall rule was to prevent one user from bringing down the North Korean internet... which I may or may not have done... eventually I started doing my downloads over a wifi connection, which provided a natural rate limit that didn't overwhelm the wired connections.

I suspect that you are observing a similar situation, where something in between your source and destination is throttling the network speed on a per-connection basis instead of a per-user basis. My best guess about how this happens is that a device somewhere is allocating a certain amount of resources to each individual connection, and by using multiple connections you are accidentally getting more of the device's resources.
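A toy model of that kind of per-connection throttling, with made-up numbers: each flow is capped independently, so one user opening N connections gets roughly N times the rate until the link itself becomes the bottleneck.

```python
# Toy model of a per-connection rate limiter. Each flow gets its own cap, so a
# user who opens N connections gets roughly N times the aggregate rate until the
# physical link becomes the bottleneck. All numbers are invented.

PER_CONNECTION_CAP_MBPS = 100.0   # what the middlebox allows each flow
LINK_CAPACITY_MBPS = 1000.0       # what the path can actually carry

def aggregate_rate_mbps(num_connections: int) -> float:
    """Aggregate throughput for one user opening num_connections flows."""
    return min(num_connections * PER_CONNECTION_CAP_MBPS, LINK_CAPACITY_MBPS)

for n in (1, 2, 10, 50):
    print(f"{n:>2} connections -> {aggregate_rate_mbps(n):.0f} Mbps")
# 1 -> 100, 2 -> 200, 10 -> 1000, 50 -> 1000 (now the link itself is the limit)
```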

Aside: I am an avid user of LLMs (and do research on them professionally). Non-trivial networking is an area where I would be shocked to find LLMs providing good answers. Stack Overflow is full of basic networking setups, but it doesn't have a lot of really good debugging of non-trivial problems, so these types of problems just aren't in the training data. The solutions usually require relatively simple debugging steps that build off of basic foundational knowledge, but the LLMs don't have the ability to reason through this foundational knowledge well, and I don't expect the transformer architecture to ever get that reasoning ability.

The solutions usually require relatively simple debugging steps that build off of basic foundational knowledge, but the LLMs don't have the ability to reason through this foundational knowledge well, and I don't expect the transformer architecture to ever get that reasoning ability.

That is one of my big skeptic points with LLMs. They don't (and can't) reason; they produce what is likely to be correct based on their training data. When I had this discussion with my boss, he argued that "they know everything about networking", and I don't see how they can accurately be said to know anything at all. They can't even be counted on to reliably reproduce the training data (source: have witnessed many such failures), let alone stuff that follows from the training data but isn't in it. Maybe we will get there (after all, cutting-edge research is improving almost by definition), but we aren't there yet.

Thanks for the story, as well. I hadn't considered an explanation like that, so I'll have to look into it if we ever want to dig deeper and find the root cause.