Another week, another humiliation for Britain.
https://thecritic.co.uk/exclusive-osborne-to-give-elgin-marbles-to-greece/
The Critic understands that George Osborne, Chairman of the British Museum, has agreed to give the Elgin Marbles to Greece.
The move is unlikely to be blocked by the Government since the Prime Minister has expressed several times his commitment “not to stand in the way” of a deal between the Greek government and the British Museum.
In order to give the Marbles to Athens permanently, the government would need to amend the British Museum Act 1963 which prevents the deaccession of items. But it is thought that Osborne’s plan to give them away on loan would side-step this requirement.
Since the Greek government claims legal ownership of the sculptures, it is extremely unlikely that they would ever return to Britain.
Spain has to be salivating at this point, not to mention Argentina. There's oil in the Falklands.
they will come up with ways to automate away research or engineering tasks
This is already happening. Papers have been published on it! This is partly why the AI safety people have started to sound so deranged: people are confusing reality with science fiction, not the other way around.
Research and engineering are being automated, piece by piece. R1 can already write useful attention kernels: https://developer.nvidia.com/blog/automating-gpu-kernel-generation-with-deepseek-r1-and-inference-time-scaling/
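For concreteness, the kernels discussed in that post compute attention. Here is a minimal NumPy reference for scaled dot-product attention (my own sketch of the operation being optimized, not R1's generated CUDA):

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                  # (n_q, n_k) similarities
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True) # softmax over keys
    return weights @ V                             # convex combination of values

# Tiny example: 2 queries, 3 key/value pairs, dimension 4
rng = np.random.default_rng(0)
Q = rng.normal(size=(2, 4))
K = rng.normal(size=(3, 4))
V = rng.normal(size=(3, 4))
out = attention(Q, K, V)
print(out.shape)  # prints (2, 4)
```

The optimization work in the blog post is about making exactly this computation fast on GPU hardware (tiling, memory layout, fused softmax), which is why it is a natural target for automated kernel generation.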
Also consider this paper:
Many promising-looking ideas in AI research fail to deliver, but their validation takes substantial human labor and compute. Predicting an idea's chance of success is thus crucial for accelerating empirical AI research, a skill that even expert researchers can only acquire through substantial experience. We build the first benchmark for this task and compare LMs with human experts. Concretely, given two research ideas (e.g., two jailbreaking methods), we aim to predict which will perform better on a set of benchmarks. We scrape ideas and experimental results from conference papers, yielding 1,585 human-verified idea pairs published after our base model's cut-off date for testing, and 6,000 pairs for training. We then develop a system that combines a fine-tuned GPT-4.1 with a paper retrieval agent, and we recruit 25 human experts to compare with. In the NLP domain, our system beats human experts by a large margin (64.4% v.s. 48.9%). On the full test set, our system achieves 77% accuracy, while off-the-shelf frontier LMs like o3 perform no better than random guessing, even with the same retrieval augmentation. We verify that our system does not exploit superficial features like idea complexity through extensive human-written and LM-designed robustness tests. Finally, we evaluate our system on unpublished novel ideas, including ideas generated by an AI ideation agent. Our system achieves 63.6% accuracy, demonstrating its potential as a reward model for improving idea generation models. Altogether, our results outline a promising new direction for LMs to accelerate empirical AI research.
Are there caveats on this? Yes. But are AIs running AI research hilarious? No. Nothing about this is funny or deserving of casual dismissal.
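The paper's core task is pairwise prediction: given two ideas, pick the one that will score higher on the benchmarks. A toy version of that evaluation loop (names and the coin-flip baseline are my illustration; the real system is a fine-tuned GPT-4.1 with a retrieval agent):

```python
import random

def evaluate_pairwise(predictor, pairs):
    """pairs: list of ((idea_a, score_a), (idea_b, score_b)) tuples.
    Returns the predictor's accuracy at picking the higher-scoring idea."""
    correct = 0
    for (idea_a, score_a), (idea_b, score_b) in pairs:
        pick = predictor(idea_a, idea_b)            # returns "a" or "b"
        truth = "a" if score_a >= score_b else "b"
        correct += (pick == truth)
    return correct / len(pairs)

# Synthetic idea pairs with random benchmark scores
random.seed(0)
pairs = [((f"idea{i}a", random.random()), (f"idea{i}b", random.random()))
         for i in range(1000)]

# A coin-flip baseline lands near 50% -- roughly where the abstract says
# off-the-shelf frontier models perform, versus 77% for their system.
coin = lambda a, b: random.choice(["a", "b"])
print(evaluate_pairwise(coin, pairs))
```

The interesting claim is the gap: random guessing and raw frontier models sit near chance on this loop, while the fine-tuned system with retrieval clears it by a wide margin.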
Oh I don't agree with it, I just like the surreal nature of the video. Like the commenter says, it's like you're strapped down as a prisoner watching these guys looming over you.
Mildly interesting autopsy report related in a court opinion:
The murder weapon was a Ruger revolver of a caliber not specified in the opinion.