
Small-Scale Question Sunday for September 04, 2022

Do you have a dumb question that you're kind of embarrassed to ask in the main thread? Is there something you're just not sure about?

This is your opportunity to ask questions. No question too simple or too silly.

Culture war topics are accepted, and proposals for a better intro post are appreciated.


I think this is misguided because it's the opposite of what currently happens, with theorem provers doing so much of the evaluation work relative to the 'creative' work. I can definitely see AI expanding the search space, though, with mathematicians working with the machine to find more novel or interesting results as a consequence. Much like in art, I think AI is at present a job-destroyer at the bottom end of the market, such as commissions (want your fursona fucking a famous politician? No longer do you have to pay $50, you can just get the machine to do it!), but will ultimately enable people who understand art (colour, composition, etc., and consequently how to more reliably get the machine to do the work they want) to create more interesting and varied things at the top end.

Academic mathematicians are towards the top end of what you'd consider 'STEM jobs' IQ-wise, so I'd anticipate a similar effect there.

Theorem provers do all of the evaluation work ... for those specific results which have been painstakingly translated into the theorem prover's language, which to a first-order approximation is zero percent of the new and interesting results.
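To make the translation cost concrete: even a trivial fact has to be restated in the prover's own language before the machine can check anything. A minimal sketch in Lean 4 syntax (using Lean here rather than MetaMath or Coq purely for readability):

```lean
-- A formally verified but thoroughly uninteresting theorem.
-- The kernel checks that both sides reduce to the same numeral,
-- so the proof is just reflexivity.
theorem two_add_two : 2 + 2 = 4 := rfl
```

Scaling that up from arithmetic to a research paper's worth of definitions and lemmas is the "painstaking" part.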

Training a transformer AI on MetaMath (or on Coq results, or whatever large database has both complete proofs and adequately verbose comments), combined with the verifier itself, might be enough at this point to create a "math-paper to formal-proof" translator. Skimming through the comments, I see a lot of links back to the papers which originally published each theorem, which certainly ought to qualify as "adequately verbose" even if the database comments themselves are fairly terse.
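A minimal sketch of what assembling training data for such a translator might look like, assuming a MetaMath-style database where each entry carries an informal comment plus a formal statement and proof. The record format, field names, and the "verbose enough" filter below are all invented for illustration; a real pipeline would parse the actual set.mm or Coq sources:

```python
# Sketch: turn a MetaMath-style database into (informal, formal) pairs
# for a seq-to-seq "math-paper to formal-proof" translator.
# The DbEntry fields are invented for illustration.

from dataclasses import dataclass

@dataclass
class DbEntry:
    label: str      # theorem name in the database
    comment: str    # informal description, often citing the source paper
    statement: str  # formal statement in the prover's language
    proof: str      # complete, machine-checkable proof

def to_training_pairs(entries):
    """Pair each informal comment with its formal statement and proof.
    Entries with terse comments are kept only if they link to a paper,
    on the theory that the paper text could be pulled in later."""
    pairs = []
    for e in entries:
        verbose = len(e.comment.split()) >= 5 or "http" in e.comment
        if verbose:
            pairs.append((e.comment, f"{e.statement} := {e.proof}"))
    return pairs

db = [
    DbEntry("2p2e4", "Two plus two equals four; see also set.mm.",
            "( 2 + 2 ) = 4", "..."),
    DbEntry("foo", "trivial", "...", "..."),
]
print(to_training_pairs(db))
```

The filter is doing the same triage the comment describes by eye: keep the entries whose informal side is rich enough to learn a translation from.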

Doing creative work would of course be more interesting ... if we could only define what's "interesting". 378+135=513 is a theorem among infinitely many others, but nobody cares about it. We tend to like math if it eventually has endpoints with real-world applications, and it's a bit hard to put that into an evaluable loss function. We also tend to like theorems if they're more general, if they're short to state but long to prove, and if they're on the shortest path to proving other theorems, and maybe there's something in those criteria that could be quantified well enough to point a neural-net-guided Monte Carlo tree search in the right direction?
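At least the last two criteria can be written down as a toy scoring function: statement length versus proof length, plus how many other theorems depend on the result (centrality in the dependency graph). The weights and the whole shape of the heuristic here are made up for illustration, and the genuinely hard part described above, tying a score to eventual real-world application, is exactly what this sketch does not capture:

```python
# Toy "interestingness" heuristic for a theorem in a proof database.
# All weights and inputs are invented for illustration.

def interestingness(statement_len: int, proof_len: int, dependents: int) -> float:
    """Score a theorem higher if it is short to state, long to prove,
    and lies on the path to proving many other results."""
    if statement_len == 0:
        return 0.0
    effort_ratio = proof_len / statement_len  # short statement, long proof
    centrality = dependents                   # how many theorems build on it
    return effort_ratio + 2.0 * centrality

# A deep structural lemma beats a one-off arithmetic fact like 378+135=513:
deep = interestingness(statement_len=10, proof_len=500, dependents=40)
arith = interestingness(statement_len=10, proof_len=1, dependents=0)
print(deep, arith)
```

A score like this could in principle serve as the value signal for a tree search over candidate theorems, though whether it points at anything a mathematician would call interesting is an open question.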

AI is more likely to replace technical jobs than manual jobs, though. It's easier to get an AI to fit a trillion parameters to data than to teach it to walk. Welcome to the upside down, where the safest job will be factory worker.

Said tongue in cheek.

To a degree, yes. People who have a high-level understanding of their field, however, are those best placed to use new AI tools. Likewise, statisticians didn't disappear because we built better tools for statistical analysis; rather, demand for statistical education has never been higher. The tools are still used by someone, and we tend to see the lowest rung automated away while smaller numbers of usually better-educated employees get productivity increases. In practice, a lot of the lowest-skill employees (or those with a very narrow skillset) lose their jobs: the invention of the mechanical (and later electronic) calculator removed the need for human calculators, but engineers and mathematicians are still a thing.