
Small-Scale Question Sunday for October 12, 2025

Do you have a dumb question that you're kind of embarrassed to ask in the main thread? Is there something you're just not sure about?

This is your opportunity to ask questions. No question too simple or too silly.

Culture war topics are accepted, and proposals for a better intro post are appreciated.


Is there a tactful way to ask your boss to lay off something? My boss, a smart guy whom I respect, has become obsessed with LLMs. Literally every conversation with him about work topics has become one where he says "I asked (insert model) and it said..." which adds no value to the conversation. Worse, he responds to questions with "have you tried asking AI?". For example the other day I asked him if he knows why multiple TCP streams are faster than one (when you would naively think they would be slower due to TCP overhead), and he asked if I asked AI. Which of course I didn't, because I actually wanted to know the answer, not get something plausible which may or may not be correct. And he's like that with every question posed lately, even when we had legal documents we had questions on he was like "did you try feeding it to Gemini and asking?"

It's frankly gotten incredibly annoying and I wish he would stop. Like I said, I actually have a lot of respect for the man but it's like he's chosen to outsource his brain to Grok et al lately. I suspect that my options are to live with it or get a new job, but figured I'd ask if people think there's a way I can tactfully address the situation.

For example the other day I asked him if he knows why multiple TCP streams are faster than one (when you would naively think they would be slower due to TCP overhead),

I would think there'd be no difference, ideally.

If there is a difference I would expect it's because the flow control heuristic on a single stream is a bit wrong and not properly saturating your link. That, or by opening multiple streams you are recruiting more resources on the remote end to satisfy you (e.g. it's a distributed system and each stream hits a different data center).

Mostly I would Google it or ask ChatGPT to Google it.

it also might depend on what you mean by 'faster' or what you are doing. but if you are multiplexing streams inside of TCP like HTTP2 then this can be slower than separate HTTP/1.1 streams, because a single missing packet on the HTTP2 TCP stream will block all the substreams whereas a single missing packet on an HTTP/1.1 TCP stream will only affect that one HTTP/1.1 TCP stream. by 'block' i mean the data can't be delivered to the application until the missing packet arrives. the data can still be buffered in the OS, so you can imagine if you were just looking at a very large transfer with a very small number of missing packets and you were only worried about the overall transfer time then this is not really 'slower'. but if you are very worried about the time it takes for small amounts of data to reach the other side then this can be 'slower'. a good example of this would be some kind of request-response protocol.
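The blocking effect described above can be sketched with a toy delivery-time model (this is an illustration with made-up arrival times, not a protocol implementation): four small responses share one TCP stream in the multiplexed case, and the first one loses a packet that is retransmitted late.

```python
# Toy model of head-of-line blocking. Assumed numbers: four substream
# responses; the one on substream 0 loses a packet and its retransmit
# arrives at t=300ms, while undamaged responses arrive at t=10ms.
RETRANSMIT_AT = 300  # ms, arrival of the retransmitted packet (assumption)
NORMAL_AT = 10       # ms, arrival of an undamaged response (assumption)

arrivals = [RETRANSMIT_AT, NORMAL_AT, NORMAL_AT, NORMAL_AT]

# Multiplexed (HTTP/2-style): all substreams share one TCP byte stream,
# so TCP's in-order delivery holds back every later byte until the hole
# at the front is filled.
multiplexed = [max(arrivals[: i + 1]) for i in range(len(arrivals))]

# Separate streams (HTTP/1.1-style): each response has its own TCP
# connection, so only the damaged one waits for its retransmit.
separate = arrivals[:]

print(multiplexed)  # [300, 300, 300, 300]
print(separate)     # [300, 10, 10, 10]
```

In the multiplexed case every response is delayed to 300ms by one lost packet; with separate connections only the unlucky one is, which is the "slower for small request-response exchanges" point above.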

cc @dr_analog

The thing which motivated the question was that we were doing iperf tests from one location on our network to others, and observed that there was a significant difference in speed between one stream and 10. With one stream we might see a 200 Mbps speed, but with 10 we might see 400 Mbps. That seemed odd because like I said, you would think a single stream would be faster due to less overhead.
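One common explanation for this kind of iperf result is that each stream is individually capped, by loss-triggered congestion control or by buffer limits, so N streams get roughly N times the per-stream ceiling. As a back-of-the-envelope sketch, the Mathis et al. approximation for loss-limited TCP throughput (throughput ≈ MSS × √1.5 / (RTT × √p)) with entirely made-up illustrative numbers, not measurements from this thread:

```python
from math import sqrt

# Assumed illustrative values, not measured:
MSS = 1460    # bytes, typical Ethernet-sized segment
RTT = 0.05    # seconds, assumed round-trip time
LOSS = 1e-4   # assumed packet loss probability

# Mathis approximation for one stream's steady-state throughput, in bits/s.
per_stream_bps = (MSS * sqrt(1.5) / (RTT * sqrt(LOSS))) * 8

print(f"1 stream:   {per_stream_bps / 1e6:.1f} Mbps")
print(f"10 streams: {10 * per_stream_bps / 1e6:.1f} Mbps")
```

With these toy numbers one stream lands around 29 Mbps while ten streams approach ten times that; the real gain is smaller once the streams start competing for the same bottleneck, which would be consistent with seeing 2x rather than 10x.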

it could also be you have a bunch of bad options for the TCP connection. tho, i suspect iperf should have good defaults. a common problem with TCP applications is not setting TCP_NODELAY, which can be a cause of extra latency. the golang language automatically sets this option but i'm sure a lot of languages/libraries do not set it. you can also have problems between userspace and kernelspace (but maybe not at this speed?). like if you can only shift 200 Mbps between the kernel and userspace because of syscall overhead on a single thread, and in the multiple stream case you are using multiple threads, then maybe that is why the performance improves. also, if you are using multiple streams you are going to have a much larger max receive window. there is some kind of receive buffer configuration (tcp_rmem?) that controls how large the receive buffer is and thus the receive window. it's possible this is not large enough and so using 10x connections means you effectively now have 10x the max receive window. also, there is tcp_wmem configuration that controls the write buffer in a similar way. cloudflare has an article on optimizing tcp_rmem https://blog.cloudflare.com/optimizing-tcp-for-high-throughput-and-low-latency/ which shows their production configuration.
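For what it's worth, checking the TCP_NODELAY option mentioned above is a one-liner with the standard socket API; a minimal sketch in Python (the socket here is never connected, it just shows setting and reading back the option):

```python
import socket

# Create a TCP socket and disable Nagle's algorithm via TCP_NODELAY,
# so small writes are sent immediately instead of being coalesced.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

# Read the option back; a nonzero value means Nagle is disabled.
print(s.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY))
s.close()
```

On Linux the tcp_rmem/tcp_wmem limits the comment refers to are the sysctls `net.ipv4.tcp_rmem` and `net.ipv4.tcp_wmem` (min/default/max buffer sizes in bytes), which you can inspect with `sysctl` before reaching for 10 parallel streams as a workaround.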