
Friday Fun Thread for January 10, 2025

Be advised: this thread is not for serious in-depth discussion of weighty topics (we have a link for that), and it is not for anything Culture War related. This thread is for Fun. You got jokes? Share 'em. You got silly questions? Ask 'em.


Question for the software engineers:

Is there anything uniquely innovative or difficult to reproduce about the software/codebase for any of the big social media platforms (particularly Twitter/Facebook/Instagram/Reddit/TikTok/Youtube) or is their hold on the market mostly a result of network effects and their established large user bases?

Edit: Having clarified my thoughts after the early responses, I think the core of what I want to understand is this: I know there are many very intelligent people being paid handsomely as software engineers for these sites. Given the apparent simplicity of, and minimal improvement in, the basic functions of many of these sites (from a user's perspective), what is it that these engineers are actually being paid to work on? Aside from server reliability, what else do they need all these bigbrains for?

The hardest parts to replicate are probably server reliability, because that takes lots of intricate work, and the AI-driven systems (mostly recommendation/advertising), because you need data.

How much this matters varies from company to company, but I would say network effects are a far bigger hurdle; the above is just sauce.
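
To illustrate why the data matters more than the code: the scoring core of a recommender fits in a few lines, but the embeddings it relies on only mean anything after training on enormous interaction logs. A minimal sketch (the dimensions and random vectors are placeholders, not anything a real platform uses):

```python
import numpy as np

# Toy recommendation scoring: rank items by the dot product between a user
# embedding and item embeddings. The code is trivial; the hard-to-replicate
# part is learning good embeddings, which takes enormous interaction logs.
rng = np.random.default_rng(0)
user_embedding = rng.normal(size=64)            # stand-in for a learned vector
item_embeddings = rng.normal(size=(1000, 64))   # stand-in for learned vectors

scores = item_embeddings @ user_embedding       # one score per item
top_items = np.argsort(scores)[::-1][:10]       # indices of the 10 best items
print(top_items)
```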

So in comparing, say, this site to Reddit, there's probably some complex code for managing the orders of magnitude greater traffic that themotte just doesn't worry about? Or are you mainly referring to baseline server reliability?

> So in comparing, say, this site to Reddit, there's probably some complex code for managing the orders of magnitude greater traffic that themotte just doesn't worry about?

Right. Zorba pays for the site out of pocket, but that is not scalable. The site occasionally goes down - we even lost most of a day of posts not too long ago. That's no big deal at our scale - just ssh in, figure out the bug, deploy something manually, etc.

But at e.g. Google scale, it's $500k/minute of gross revenue on the line in an outage, to say nothing of customers leaving you permanently over the headache. Fractions of a percent of optimization are worth big bucks. Compliance headaches are real. Hardware failures are a continual certainty.
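
As a rough sanity check on that figure (the $300B/year revenue number is my assumption, roughly in line with Alphabet's recent annual reports):

```python
# Back-of-envelope check on the $500k/minute figure, assuming roughly
# $300B/year of gross revenue (an assumption, in the right ballpark for
# Alphabet in recent years).
annual_revenue = 300e9               # dollars per year (assumed)
minutes_per_year = 365 * 24 * 60     # 525,600
print(f"${annual_revenue / minutes_per_year:,.0f} per minute")
# -> $570,776 per minute
```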

Read about the brilliance behind Spanner, the database a huge amount of Google is built on. Google runs its own network cables, because in CAP-theorem terms, why choose between C[onsistency] and A[vailability] when you can just not have network P[artitions]?

You need an incredible degree of fault tolerance in large systems. If n pieces in a pipeline each work independently with probability p, the whole system works with probability p^n, which decays exponentially as n grows.
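
To make the decay concrete, a quick sketch (the component counts and the 99.9% figure are illustrative assumptions):

```python
# Exponential reliability decay: a pipeline of n independent pieces, each
# working with probability p, works end-to-end with probability p**n.
def pipeline_reliability(p: float, n: int) -> float:
    return p ** n

# Even 99.9%-reliable pieces compound badly as the pipeline grows.
for n in (10, 50, 200):
    print(f"n = {n:>3}: {pipeline_reliability(0.999, n):.1%} end-to-end")
# n =  10: 99.0% end-to-end
# n =  50: 95.1% end-to-end
# n = 200: 81.9% end-to-end
```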

Plenty of it is feature bloat, that said. You really can serve an astonishing amount of traffic from a single cheap server.
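
For a sense of what that claim means in numbers, a hedged back-of-envelope (the per-request cost and core count are assumptions, not measurements):

```python
# Back-of-envelope throughput for one modest server. The 10ms/request and
# 4-core figures are assumptions, not measurements.
ms_per_request = 10                              # assumed CPU time per request
cores = 4                                        # assumed core count
req_per_sec = cores * (1000 / ms_per_request)    # 400 requests/second
print(f"{req_per_sec:.0f} req/s, ~{req_per_sec * 86400 / 1e6:.0f}M requests/day")
# -> 400 req/s, ~35M requests/day
```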

I don't have a good sense of scale: how much would you expect running this site to cost per month?

Probably something on the order of $20-40 a month. It depends on how fancy the setup is and how much traffic we get.