
Small-Scale Question Sunday for June 23, 2024

Do you have a dumb question that you're kind of embarrassed to ask in the main thread? Is there something you're just not sure about?

This is your opportunity to ask questions. No question too simple or too silly.

Culture war topics are accepted, and proposals for a better intro post are appreciated.


Preface: I'm not the most technically knowledgeable AI person in the world.

Does the recent release of Deepseek v2 mean that China is at parity with the US on AI models?

https://github.com/deepseek-ai/DeepSeek-Coder-V2?tab=readme-ov-file#2-model-downloads

According to the stats they give, it's comparable to GPT-4o on most things (slightly behind) but ahead on some coding questions. I know benchmarks can be gamed and/or deceptive, but it's an open-source project; I don't know why you'd go to the effort of lying when anyone can download it and check. They also charge extremely low API prices, which suggests either that it's quite cheap to run or that they somehow have more money to burn than the US tech juggernauts.
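For what it's worth, the price claim is easy to sanity-check yourself: their docs say the API is OpenAI-compatible, so the standard client works against their endpoint. A minimal sketch, where the endpoint URL, model name, and key handling are my assumptions from their published docs rather than anything in the repo linked above:

```python
# Minimal sketch of hitting the DeepSeek API with the standard OpenAI client.
# Assumptions (from their public docs, not the linked repo): the base_url,
# the "deepseek-coder" model name, and that you have a key from their platform.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_DEEPSEEK_API_KEY",      # assumption: issued by their platform
    base_url="https://api.deepseek.com",  # assumption: OpenAI-compatible endpoint
)

response = client.chat.completions.create(
    model="deepseek-coder",  # assumption: the coder model's API identifier
    messages=[{"role": "user", "content": "Write a binary search in Python."}],
    max_tokens=512,
)

print(response.choices[0].message.content)
print(response.usage)  # token counts, useful for comparing per-token cost vs GPT-4o
```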

I know that US labs might not have released everything they have to the public, and the new Claude Sonnet is also getting a lot of attention. But the new Claude seems roughly on par with GPT-4o too, maybe a little ahead. And why would Deepseek have the best AI model in China? Isn't the general rule that open source lags closed source? I get the sense that China is quite secretive, and their biggest tech companies aren't exactly eager to have another volley of sanctions hitting them; wouldn't they stay out of the limelight? "This isn't even my final model" should roughly apply to both sides.

Theories:

  1. US labs are still well ahead because Deepseek v2 is gaming metrics or otherwise bad in various ways compared to top models
  2. US labs are ahead because they're sitting on GPT-5 (which would be dangerous, since it would put millions of people out of work overnight and start a giga-arms race) or racing for superintelligence
  3. China has closed the gap

Related: https://x.com/teortaxesTex/status/1804571746550366264

Maybe the Chinese dumped political 'alignment' and are pulling ahead? I've heard various praise for pre-RLHF GPT-4's cognitive abilities.

OK, this is the last straw: I'll write up the state of open source in AI in detail, as I promised. Mods, would that be best for the roundup or a separate post? I don't have a preference.

I happen to know a bit about this specific issue.

For now, in short:

Deepseek-Coder is, as far as anyone can tell, for real, and a bigger deal than Meta's Llama-3-70B. Claims to the contrary are mostly red-faced nationalistic sputtering and cope, in the vein of "Unitree robots are CGI, Choyna fakes and steals everything". (Indeed, we're at the stage where Stanford students, admittedly of Indian extraction, steal from Chinese labs.) It even caused Zvi to update. Aran, the main librarian of the whole field, says that "It has the potential to solve Olympiad, PhD and maybe even research level problems, like the internal model a Microsoft exec said to be able to solve PhD qualifying exam questions."

It arguably, but pretty credibly, reaches parity with SoTA models like GPT-4 in the most utilitarian application of LLMs so far: code completion. It's comparably good at math and reasoning (even on benchmarks released after it was uploaded to Hugging Face, from Gaokao questions to open-ended coding workloads). It's substantially more innovative than any big Western open-source release (small ones like SigLIP, Florence-2 etc. can compete), more open, and more useful; it's so innovative that we haven't figured out how to run it properly yet, despite very helpful papers. Design-wise, I'd say it's a year ahead of Western open source (not in raw capabilities, though). It's been trained on maybe 60% more compute than Llama-3-8B, while being 30 times bigger and significantly more capable, and it may well be only 2x more expensive to run.
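If you want to poke at it yourself, here's a minimal sketch of running it locally through Hugging Face transformers. The model ID is from the repo's download table; the 16B "Lite" variant is the one that fits on a single GPU, since the full 236B model is exactly the "we haven't figured out how to run it properly" problem. Treat the generation settings as reasonable defaults, not an officially blessed recipe:

```python
# Sketch: local inference with the Lite variant (16B total / ~2.4B active params).
# Model ID is taken from the repo's download table.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    trust_remote_code=True,     # the custom MoE architecture ships with the checkpoint
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [{"role": "user", "content": "Write quicksort in Python."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0][inputs.shape[1]:], skip_special_tokens=True))
```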

The inference economics are unclear, but if their papers don't lie (and they don't seem to: the math makes sense, the model fits the description, and at least one respected scientist took part in developing this part and confirms everything), they can serve at those market-demolishing prices with a healthy margin, around 50% actually (ignoring R&D costs at least). Their star developers seem very young. A well-connected account that leaked Google's Gemini project and the Google Brain/DeepMind merger months before they were announced joked (in the haha-kidding-not-kidding way) that "deepseek's rate of progress is how US intelligence estimates the number of foreign spies embedded in the top labs".
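The "2x more expensive" and margin claims follow from MoE arithmetic, which is easy to redo on the back of an envelope. The rule of thumb is inference FLOPs per token ≈ 2 × active parameters; the parameter counts below are from the papers, and everything else is a deliberate simplification that ignores attention cost, KV-cache traffic (which their MLA attention compresses hard), and batching effects:

```python
# Back-of-envelope check on the inference-cost claim. Parameter counts are
# from the DeepSeek-V2 paper and Meta's Llama 3 release; the 2 * params
# FLOPs-per-token rule of thumb is a standard simplification.
ACTIVE_PARAMS_DSV2 = 21e9  # DeepSeek-V2: 236B total params, ~21B active per token (MoE)
DENSE_PARAMS_L3_8B = 8e9   # Llama-3-8B: dense, so all parameters are active

flops_dsv2 = 2 * ACTIVE_PARAMS_DSV2  # ~42 GFLOPs per token
flops_l3   = 2 * DENSE_PARAMS_L3_8B  # ~16 GFLOPs per token

print(f"DeepSeek-V2: {flops_dsv2 / 1e9:.0f} GFLOPs/token")
print(f"Llama-3-8B:  {flops_l3 / 1e9:.0f} GFLOPs/token")
print(f"Ratio: ~{flops_dsv2 / flops_l3:.1f}x, despite ~30x more total parameters")
```

Compute alone comes out around 2.6x; the effective gap can plausibly land closer to 2x because inference at scale is often memory-bound, and the compressed KV cache cuts that side of the bill.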

We don't understand the motivations of Deepseek and High-Flyer, the quant fund sponsoring them, but one popular hypothesis is that they're competing with better-connected big-tech labs for government support, given American efforts to cut off China's chip supply. After all, the Chinese themselves share the Western skepticism about Chinese labs' trustworthiness, so you have to be maximally open to Western evaluators to win the Mandate of Heaven.

Interested to see your thoughts. I also saw the Unitree robots, thought they were real but couldn't really tell. On reflection, Chinese CGI has a certain artificial look to it that was missing.