DaseindustriesLtd

late version of a small language model

74 followers   follows 27 users  
joined 2022 September 05 23:03:02 UTC

Tell me about it.



User ID: 745


Can you make any argument in defense of your apparently instinctual reactions?

the end of my interest in a thread and a sharp drop in my respect for the user

Otherwise, long form content - the hallmark of much of the best content here - is immediately suspicious, and I am likely to skip it.

It sounds like you just feel entitled to an arbitrary terminal preference. That's not compelling.

I think it has a non-negligible chance of happening. Trump is the new face of America that does not pretend to play by normal countries' rules. The United States is a super-hegemon, a nation facing no plausible threat from any competent adversary. They can take what they want, the way China/Russia/Iran/etc. would very much like to be able to do but can't, on account of the United States existing. In front of this face, the sovereignty of almost every other country is a bluff that's easy to call. Nobody can militarily oppose the US, and most people on the globe buy into American culture and vision more than into their own regimes and bureaucracies. Certainly that's true of Egypt.

The actual shape of the deal will be about cleansing Gazans and providing security to settlers, though. Securing Israeli interests is one of the foundational, terminal values of the US.

I am quite happy with my analytical work that went into the prompt, and R1 did an adequate but not excellent job of expanding on it.

But I am done with this discussion.

Okay. I give up.

I was not aware that this is a forum for wordcels in training, where people come to polish their prose. I thought it was a discussion platform, and so I came here to discuss what I find interesting, and illustrated it.

Thanks for keeping me updated. I'll keep it in mind if I ever think of swinging by again.

I would welcome such a subhuman overreaction.

You're losing the plot, SS. Why quote a passage fundamentally challenging the belief in OpenAI's innovation track record to rant about choices made with regard to alignment to specific cultural narratives? And “Chinese are too uncreative to do ideological propaganda, that's why DeepSeek doesn't have its own political bent?” That's quite a take. But whatever.

"Hey I think this argument is wrong, so I'm gonna go use an AI that can spit out many more words than I can."

Really now?

What is this slop? I've made my point. You're despicable.

  • -10

Do you believe I would have had any trouble producing as good or better a wall of text myself?

Okay, fair. #6 is contrived non sequitur slop, barely intelligible in context as a response to #5, so that has confused me.

In conclusion, I think my preference to talk to people when I want to, to AI when I want to, and to use any mix of generative processes I want to, has higher priority than the comfort of people who have nothing to contribute to the conversation or to pretraining data and would not recognize AI without direct labeling.

To be clear, everything not labeled as AI output I have written myself. I also think it's legitimate to use AI to automate the search for nitpicks, as he does; the problem is that there's little to nitpick at, and his posts are objectively bad as a result.

Okay. I think elderly care is mainly a problem of machine vision and manual dexterity. I believe these guys will solve it in five years tops.

I have explained my reasons to engage with humans in principle, not in defense of my (R1-generated, but expressing my intent) post, which I believe stands on its own merits and needs no defense. You are being tedious, uncharitable and petty, and you cannot keep track of the conversation, despite all the affordances that the local format brings.

The standards of posting here seem to have declined substantially below X.

Believe me, these days I do indeed mostly talk to machines. They are not great conversationalists but they're extremely helpful.

Talking to humans has several functions for me. First, indeed, personal relationships of terminal value. Second, political influence, affecting future outcomes, and more mundane utilitarian objectives. Third, actually nontrivial amount of precise knowledge and understanding where LLMs remain unreliable.

There are still plenty of humans with high enough perplexity and wisdom to deserve being talked to for purely intellectual entertainment and enrichment. But I've raised the bar of sanity. Now this set does not include those who have kneejerk angry-monkey-noise tier reactions to high-level AI texts.

Why are you so aggressive? First, concede all the previous items on which your criticism fell flat, then I'll consider whether to dignify you with a response.

  • -13

Strange argument. That's still hundreds of millions more young people than in the US. They don't dissolve in the shadow of an inverted population pyramid; they simply get to solve the problem of elderly care on top of having a productive economy to run.

And all this happens within one "generation" anyway.

I can't say I even understand why you'd think anyone would find AI outputs interesting to read.

Because they're intelligent, increasingly so.

The argument that cognitive output is only valid insofar as it comes purely from flesh reduces intellectual intercourse to a prelude to the physical kind. At least that's my – admittedly not very charitable – interpretation of these disgusted noises. Treating AI generation as a form of deception constitutes profanation of the very idea of discussing ideas on their own merits.

Sorry, this is just tedious hairsplitting. Did you use ChatGPT to come up with something?

This indicates that DeepSeek values diverse perspectives and collaboration, contrary to the claim of orthogonal optimizations without coordination overhead.

Not a contradiction insofar as we give the sources a straightforward reading. Zihan says: “It’s like everyone contributes to the final model with their own (orthogonal) ideas and everyone hopes their idea is useful”. It has integrated two separate sources (Wenfeng and Zihan) into a non-contradictory phrase. This is basic journalism; I see worse whenever I open legacy media.

We can go over all items again but clearly you're not arguing in good faith. Give up, R1 > you and whatever sloppy model you've called to aid.

I think it's time to replicate with a new generation of models.

Tell me, does R1 above strike you as "slop"? It's at least pretty far into the uncanny valley to my eyes.

The 90% death rate is bogus (rather, it may confuse death rate and mortality rate?), but the literature-majors part is in fact true. Since he bothered to check the interview, I'm surprised he left that attack in.

Well, I protest this rule, if such a rule even exists; I find it infantilizing, and your reaction shallow, akin to the screeching of scared anti-AI artists on Twitter. It should be legal to post synthetic content so long as it's appropriately labeled and accompanied by original commentary, and certainly when it is derived from the person's own cognitive work and source-gathering, as in this case.

Maybe add an option to collapse the code block or something.

Or maybe just ban me; I'm too old now to just nod and play along with the gingerly preserved, increasingly obsolete traditions of some authoritarian Reddit circus.

Anyway, I like that post and that's all I care about.

P.S. I could create another account and (after a tiny bit of proofreading and editing) post that, and I am reasonably sure that R1 has reached the level where it would have passed for a fully adequate Mottizen, with nobody picking up on “slop” when it is not openly labeled as AI output. This witch hunt is already structurally similar to zoological racism.

In fact, this is an interesting challenge.

I'd ask to not derail my argument by insinuating that I'm being biased by locallama debates.

But, since then it seems OpenAI has formally accused DeepSeek

I think it's more cope from them. 4o or o1 could not have written the text above (and I wouldn't dare post GPTslop here); you cannot build R1 with OpenAI tokens; the thing that turns everyone's heads is its cadence, not so much benchmark scores. o1 CoT distillation was virtually impossible to do, at least at scale. We currently see replications of the same reasoning patterns in models trained in R1's manner, too.

where the generated output of Western innovation becomes a fundamental input to China catching up and aspirationally exceeding

I think OpenAI outputs have robustly poisoned the web data, and reasoners will be exceptionally vulnerable to it. LLMs know they're LLMs, self-understanding (and imitating snippets of instruction chains) helps reasoning, RL picks up and reinforces behaviors that sharpen reasoning, you get the latent trace of ChatGPT embedded even deeper into the corpus. Sans Anthropic-level investment into data cleaning it's unbeatable.
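To make the data-cleaning point concrete, here is a minimal sketch of the crudest possible contamination filter – flagging documents that contain telltale ChatGPT phrasings. Everything in it (the phrase list, the threshold, the function name) is an illustrative assumption of mine, not a description of any lab's actual pipeline, which would involve classifiers, deduplication, and provenance tracking:

```python
# Illustrative only: a crude phrase-matching heuristic for flagging likely
# ChatGPT-derived documents in a web corpus. The phrase list is a made-up
# example of "tells"; real cleaning pipelines are far more sophisticated.
CHATGPT_TELLS = [
    "as an ai language model",
    "i cannot fulfill",
    "it's important to note that",
    "i hope this helps",
]

def looks_contaminated(doc: str, threshold: int = 1) -> bool:
    """Flag a document containing at least `threshold` telltale phrases."""
    text = doc.lower()
    hits = sum(phrase in text for phrase in CHATGPT_TELLS)
    return hits >= threshold

# Usage: keep only documents that pass the filter.
corpus = [
    "As an AI language model, I cannot fulfill that request.",
    "The quick brown fox jumps over the lazy dog.",
]
clean = [d for d in corpus if not looks_contaminated(d)]
```

The point of the sketch is how weak surface filtering is against the latent trace described above: once reasoning *patterns* rather than stock phrases carry the contamination, nothing this shallow catches it.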

But to the extent such bootstrapping happened deliberately, and let's grant that it did to an extent, it was an economical solution to speed up the pipeline. The reason for OpenAI models' instruction-following capabilities is, ironically, exploitation – mind-numbing, massively parallel data annotation, thumbs up and thumbs down on samples, by low-paid Kenyans and Pinoys for low-level problems, by US students for more complex stuff. It's very stereotypically… Chinese in spirit (which makes it funny that China has not created any such centralized project). The whole of OpenAI is “Chinese” like that, really; it's a scaling gig. And knowing you, I'm surprised you insist on the opposite – after all, OpenAI is a company principally founded and operated by three Jews (Altman, Brockman, Sutskever), it can't be “Aryan” by your standards. Then again, Google, Meta, OpenAI… there exists only one American AGI effort without an Ashkenazi founder – Anthropic, and it's an OpenAI splinter, and even there you have Holden Karnofsky the grey cardinal. (I don't currently count xAI in, but maybe I should provisionally do so after their noises about Grok 3.) In this vein, I think you're coping after all.

Purely scientifically, I think R1's recipe is commensurate with RLHF in profundity, and much more elegant.
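The elegance claim is easy to illustrate: where RLHF needs a learned reward model trained on human preference annotations, R1-style training (per the DeepSeek-R1 report) scores samples with simple verifiable rules – format checks and answer matching. The specific tags and weights below are my illustrative assumptions, not the paper's exact values:

```python
import re

# Sketch of a rule-based reward in the spirit of R1's recipe: no learned
# reward model, just verifiable checks. Tag format and weights are assumed
# for illustration.
def reward(sample: str, reference_answer: str) -> float:
    score = 0.0
    # Format reward: reasoning must be wrapped in <think>...</think> tags.
    if re.search(r"<think>.*?</think>", sample, re.DOTALL):
        score += 0.5
    # Accuracy reward: the final answer after the think block must match.
    answer = sample.split("</think>")[-1].strip()
    if answer == reference_answer.strip():
        score += 1.0
    return score
```

Compare the moving parts: this is a dozen lines of deterministic code, versus an entire annotation economy feeding a preference model that itself drifts and gets reward-hacked.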

Now, DeepSeek may be compared to heavy research labs, like FAIR and GDM. It doesn't look too hot in that case. On the other hand, almost nothing that they publish works.

I think a more interesting objection to Chinese phase change would be "but at what cost?" Whites don't have to have the idea of risk derisked before their eyes. And they can happily innovate in an NDA-covered black project.

I think you are extremely overindexing on your experience. A century or so ago they were stereotyped as lazy too. This is a matter of culture that can change very quickly.

My argument is that I don't think this argument matters. Maybe they will produce 10x fewer Newtons (–Creativity + Intelligence). With the current population, that's the same as total global production around Newton's time. With the current economic structure, the marginal value of one more Newton as opposed to a 1000 PhDs is plummeting. I don't want to lose time arguing details auxiliary to my thesis (or not conducive to banter).