Do you have a dumb question that you're kind of embarrassed to ask in the main thread? Is there something you're just not sure about?
This is your opportunity to ask questions. No question too simple or too silly.
Culture war topics are accepted, and proposals for a better intro post are appreciated.

Is there a tactful way to ask your boss to lay off something? My boss, a smart guy whom I respect, has become obsessed with LLMs. Literally every conversation with him about work topics now includes "I asked (insert model) and it said...", which adds no value to the conversation. Worse, he responds to questions with "have you tried asking AI?" For example, the other day I asked him if he knows why multiple TCP streams are faster than one (when you would naively think they would be slower due to TCP overhead), and he asked if I'd asked AI. Which of course I hadn't, because I actually wanted to know the answer, not get something plausible which may or may not be correct. And he's like that with every question posed lately; even when we had questions about legal documents, his response was "did you try feeding it to Gemini and asking?"
It's frankly gotten incredibly annoying and I wish he would stop. Like I said, I actually have a lot of respect for the man but it's like he's chosen to outsource his brain to Grok et al lately. I suspect that my options are to live with it or get a new job, but figured I'd ask if people think there's a way I can tactfully address the situation.
I would think there'd be no difference, ideally.
If there is a difference I would expect it's because the flow control heuristic on a single stream is a bit wrong and not properly saturating your link. That, or by opening multiple streams you are recruiting more resources on the remote end to satisfying you (e.g. it's a distributed system and each stream hits a different data center)
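To make the flow-control point concrete: a single stream's throughput is capped at roughly window size divided by RTT, and each additional stream brings its own independent window. A back-of-the-envelope sketch in Python (the window and RTT numbers here are invented for illustration, not taken from anyone's actual network):

```python
# Back-of-the-envelope: a single TCP stream can't exceed roughly
# window / RTT, no matter how fat the pipe is. Numbers are invented.

def max_throughput_mbps(window_bytes: int, rtt_s: float) -> float:
    """Throughput ceiling for one stream: one window drained per RTT."""
    return window_bytes * 8 / rtt_s / 1e6

# One stream with a 256 KiB window over a 10 ms RTT path:
single = max_throughput_mbps(256 * 1024, 0.010)   # ~210 Mbps
# Ten streams each get their own window (and their own slow-start and
# backoff), so the aggregate ceiling is ten times higher, up to link capacity:
ten = 10 * single
print(f"{single:.0f} Mbps single, {ten:.0f} Mbps aggregate ceiling")
```

If the windows never grow large enough to fill the pipe on one connection, adding streams raises the aggregate ceiling roughly linearly.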
Mostly I would ~~Google it~~ ask ChatGPT to Google it.

It also might depend on what you mean by 'faster' or what you are doing. If you are multiplexing streams inside of TCP, like HTTP/2 does, then this can be slower than separate HTTP/1.1 streams, because a single missing packet on the HTTP/2 TCP stream will block all the substreams, whereas a single missing packet on an HTTP/1.1 TCP stream will only affect that one stream. By 'block' I mean the data can't be delivered to the application until the missing packet arrives; the data can still be buffered in the OS. So if you were just looking at a very large transfer with a very small number of missing packets, and you only cared about the overall transfer time, then this is not really 'slower'. But if you care about the time it takes for small amounts of data to reach the other side, then it can be. A good example of this would be some kind of request-response protocol.
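A toy model of that head-of-line blocking effect (purely illustrative, not a real protocol simulation; `rto` stands in for the retransmission delay, and all the numbers are made up):

```python
# Toy model of head-of-line blocking (illustrative only). `arrivals` are
# the times segments reach the receiver; one segment is lost and only
# shows up after an extra `rto` seconds for retransmission.

def delivery_times(arrivals, lost_idx, rto, multiplexed):
    """When each segment becomes deliverable to the application."""
    out, blocked_until = [], 0.0
    for i, t in enumerate(arrivals):
        if i == lost_idx:
            t += rto  # the lost segment arrives late, after retransmission
        if multiplexed:
            # HTTP/2-style: one TCP stream, strict in-order delivery,
            # so every later segment waits behind the hole.
            blocked_until = max(blocked_until, t)
            out.append(blocked_until)
        else:
            # HTTP/1.1-style: each segment rides its own connection,
            # so only the stream that lost a packet is delayed.
            out.append(t)
    return out

print(delivery_times([1, 2, 3], 0, 10, multiplexed=True))   # [11, 11, 11]
print(delivery_times([1, 2, 3], 0, 10, multiplexed=False))  # [11, 2, 3]
```

One lost packet delays everything on the multiplexed connection, but only its own stream in the separate-connections case.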
cc @dr_analog
The thing which motivated the question was that we were doing iperf tests from one location on our network to others, and observed that there was a significant difference in speed between one stream and 10. With one stream we might see a 200 Mbps speed, but with 10 we might see 400 Mbps. That seemed odd because like I said, you would think a single stream would be faster due to less overhead.
If you're doing TCP, even small amounts of latency can have a bizarre impact when the bandwidth is large relative to the underlying MTU size, window size, and buffer size (and, if going past the local broadcast domain, packet size, though getting any nontrivial IPv6 layout to support >65k packets is basically impossible for anyone not FAANG-sized). I can't say with much confidence without knowing a lot about the specific systems, and might not be able to say even with, but I've absolutely seen this sort of behavior caused by the receiving device taking 'too long' (e.g., 10 ms) to tell the sender that it was ready for more data; increasing the MTU and the sliding window size drastically reduced the gap.
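When loss rather than window size is the limiter, one standard back-of-the-envelope model is the Mathis et al. approximation: throughput ≈ (MSS / RTT) × 1.22 / √p, where p is the packet loss rate. A sketch with hypothetical numbers (none of these are measurements from the network under discussion):

```python
import math

# Mathis et al. model: long-run TCP throughput ~ (MSS / RTT) * 1.22 / sqrt(p),
# where p is the packet loss rate. All numbers below are hypothetical.

def mathis_throughput_mbps(mss_bytes: int, rtt_s: float, loss_rate: float) -> float:
    """Loss-limited throughput estimate for a single TCP stream."""
    return mss_bytes * 8 * 1.22 / (rtt_s * math.sqrt(loss_rate)) / 1e6

# A 1460-byte MSS, 10 ms RTT, and 0.01% loss caps a single stream around:
one_stream = mathis_throughput_mbps(1460, 0.010, 1e-4)  # ~142 Mbps
print(f"{one_stream:.0f} Mbps per stream")
```

Since each stream experiences its losses independently, ten streams under this model scale the aggregate roughly linearly until the pipe fills, which is consistent with the kind of 1-stream vs. 10-stream gap seen in the iperf tests above.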