aqouta
User ID: 75

This is silliness. Maybe you'd have a point if @TracingWoodgrains had used his credibility to push the story, but he didn't. LOTT ate bait posted by an anonymous source with zero attempt at verification. He did not pimp out his name. There is no reason to believe anything he writes is a hoax. The only lesson one can reasonably draw from the whole thing is that you shouldn't take the word of random anonymous people, or of those who do.
If conservatism is when you refuse to address entitlements, blow up the budget deficit, tariff our allies because you don't understand trade policy, behave like a petulant child in every possible situation, and fall for lowest-common-denominator X slop posts, then what even remains of conservatism? What is Trump conserving, exactly?
Simply scaling existing methods, while potentially achieving impressive results, cannot achieve AGI.
Why do you believe this? Is it an article of faith?
It seems like we absolutely do know what lies ahead on the path to AGI, and it's incrementally getting better at accomplishing cognitive tasks. We have proof that it's possible, too, because humans have general intelligence and accomplish this with far fewer units of energy. You can, at this very moment, if you're willing to pay for the extremely premium version, go on ChatGPT and have it produce a better research paper on most topics than, being extremely generous to humanity here, 50% of Americans could given three months, and it'll do it before you're back from getting coffee. A few years ago it could barely maintain a conversation, and a few years before that it was little better than text completion.
This is rather like having that LW conversation after we'd already put men into orbit. You understand that we did actually, eventually, land on the moon, right? I know it's taking the metaphor perhaps too seriously, but that story ends with Alfonso being right in the end. We can, in fact, build spaceships that land on the moon and even return. We in fact did so.
Now we have some of the greatest minds on earth dedicated to building AGI, many of them seem to think we're actually going to be able to accomplish it and people with skin in the game are putting world historical amounts of wealth behind accomplishing this goal.
I concur that this is a pretty bad look from a moderator, and would really like the mods to look past the +44 upvotes and fawning u-go-girl responses and consider that this sort of thing is enabling/deepening bad tendencies in the community.
Obligatory low heat take drop that moderators not using the mod hat are allowed to make low quality posts. We are moderated by men, not gods.
Maybe you do, but I consistently find that the sorts of people who resist thought experiments tend to have deeply conflicted world views that they never examine. As I said, if you're being accosted by some rude stranger, feel free to dodge out and stick to small talk. But with people you know well who are curious about how you think? On a discussion forum where the whole purpose is battling out ideas? What's the point? You could just go do something else with your time.
I understand the catharsis in cheating to win the Kobayashi Maru challenge, but it really is the cop-out answer. Oh, so you're guarded and cynical and don't want to discuss sacred values? That's fine, you can use this maneuver to get out of it when it's an inappropriate time to have the discussion, but are you genuinely just committed to never exploring which of your values plays master to the others? Too afraid of judgement for making a call?
Fighting the hypothetical is small talk; it's a dodge. It trades substance for a kind of low-grade cleverness.
There wasn't ever a mechanism by which cars would start improving themselves recursively if they were able to break 100 mph. There were very good laws-of-physics reasons in the '60s to assume we couldn't, even in theory, get to FTL. No such reasons exist today. You're not fighting the prediction that cars will be able to go FTL; you're fighting the prediction that maglev trains would ever be built.
If there were lots of natural creatures casually traveling around at light speed through mere evolution, those predictions would have been much better founded. It seems like quite the unfounded prediction to have witnessed LLMs rapidly surpass all predictions, with the pace not appearing to slow down at all, and assume it's going to stop right now. Which kind of must be your assumption if you think we aren't going to hit AGI. It rather seems like you're declaring those automobiles will never compete with horses because of how clunky they are. We're at the horse-vs-car stage, where cars aren't quite as maneuverable or fast as horses and maybe will just be a fad.
Thanks for your thorough reply!
Yes and no. Clearly, things are better than even three years ago with the original release of ChatGPT. But, the economic and practical impact is unimpressive. If you subtract out the speculative investment parts, it's almost certainly negative economically.
And look - I love all things tech. I have been a raving enthusiastic nutjob about self-driving cars and VR and - yes - AI for a long time. But, for that very reason, I try to see soberly what actual impact it has. How am I living differently? Am I outsourcing much code or personal email or technical design work to AI? No. Are some friends writing nontrivial code with AI? They say so, and I bet it's somewhat true, but they're not earning more, or having more free time off, or learning more, or getting promoted.
I think you're a little blinkered here. It takes more than a couple of years to retool the whole economy with new tech. It was arguably a decade or more after ARPANET before the internet started transforming life as we know it. LLMs are actually moving at a breakneck pace in comparison. I work at a mega bank and just attended a town hall where every topic of discussion was about how important it is to implement LLMs in every process. I'm personally working to integrate it into our department's workflow, and every single person I work with now uses it every day. Even at this level of engagement it's going to be months to years of cutting through the red tape and setting up pipelines before our analyst workflows can use the tech directly. There is definitely value in it, and it's going to be integrated into everything people do going forward, even if you can't have it all rolled out instantly. We have dozens of people whose whole job is to go through huge documents and extract information related to risk/taxes/legal/etc., key it in, and then do analysis on whether these factors are in line with our other investments. LLMs, even if they don't progress one tiny bit further, will be transformative for this role, and there are millions of roles like this throughout the economy.
I think that is the crux of our disagreement: I hear you saying "AI does amazing things people thought it would not be able to do," which I agree with. This is not orthogonal from, but also not super related to my point: claims that AI progress will continue to drastically greater heights (AGI, ASI) are largely (but not entirely) baseless optimism.
Along with these amazing things comes a ripple of it getting steadily better at everything else. There's a real sense in which it's just getting better at everything. It started out decent at some areas of code; maybe it could write SQL scripts OK, but you'd need to double-check it. Now it can handle any code snippet you throw at it and reliably solve bugs one-shot on files with fewer than a thousand lines. The trajectory is quick, and the tooling around it is improving at such a rate that soon I expect to be able to just write a Jira ticket and reasonably expect the code agent to solve the problem.
Nothing has ever surpassed human level abilities. That gives me a strong prior against anything surpassing human level abilities. Granted, AI is better at SAT problems than many people, but that's not super shocking (Moravec's Paradox).
Certainly this is untrue. Calculators trivially surpass human capabilities in some ways. Nothing has surpassed humans in every single aspect. There is a box of things that AI can currently do better than most humans, and a smaller box within that of things it can do better than all humans. These boxes are both steadily growing. Once something is inside that box it's inside it forever; humans will never retake the ground of best PDF scraper per unit of energy. Soon, if it's not already the case, humanity will never retake the ground of best SQL script writer. If the scaffolding can be built and the problems made legible, this box will expand and expand and expand. And as it expands you get further agglomeration effects. If it can just write SQL scripts, then it can just write SQL scripts. If it's able to manage a server and can write SQL scripts, now it can create a SQL server instance and actually build something. If it gains other capabilities, these all complement each other and bring out other emergent capabilities.
The number of people, in my technophilic and affluent social circle, willing to pay even $1 to use AI remains very low.
If people around you aren't paying for it then they're not getting the really cutting edge impressive features. The free models are way behind the paid versions.
It has been at a level I describe as "cool and impressive, but useless" forever.
AGI maybe not, but useless? You're absolutely wrong here. With zero advancement at all in capabilities or inference cost reductions what we have now, today, is going to change the world as much as the internet and smart phones. Unquestionably.
No, and that's exactly my point! AI 2027 says surely it will plateau many doublings past where it is today. I say that's baseless speculation. Not impossible, just not a sober, well-founded prediction. I'll freely admit p > 0.1% that within a decade I'm saying "wow, I sure was super wrong about the big picture. All hail our AI overlords." But at even odds, I'd love to take some bets.
Come up with something testable and I am game.
Absolutely not. Deep research is a useful tool for specific tasks, but it cannot produce an actual research paper. Its results are likely worthless to anyone except the person asking the question who has the correct context.
This clears the bar of most Americans.
If you build a bigger rocket and point it at the moon, it will get incrementally closer to the moon. But you will never reach it.
If you have some of the smartest people in the world and a functionally unlimited budget, you can actually use the information you gain from launching those rockets to learn what you need to do to get to the moon. That is what actually happened, after all, so I really don't see how this metaphor is working for you. The AI labs are not just training bigger and bigger models without adjusting their process. We've only even had chain-of-thought models for six months, and there is surely more juice to squeeze out of optimizing that kind of scaffolding.
This is like claiming Moore's law can't get us to the next generation of chips because we don't yet know exactly how to build them. OK, great, but we've been making these advancements at a breakneck pace for a while now, and the doubters have been proven wrong at basically whatever rate they were willing to lay down claims.
Speaking of claims, you've decided not to answer my questions. That's fine, continue with whatever discussion format you like, but I'd be really interested in you actually making a prediction about where exactly you think AI progress will stall out. What is the capability level you think it will get to and then not surpass?
I dunno, I think my gay friends would stand up for me.
I don't know why this claim keeps coming up. Bernie's path to the nomination required the rest of the pool to cooperate with him to split the vote of the majority position. When it came to having to win one on one, he didn't. Period. End of story. It's not ratfucking to notice you're splitting a position and stop doing that, so that someone with minority support who you don't agree with doesn't take the nomination.
when he was the beneficiary of an elite attack on Sanders and RFK and Dean Phillips
Do the elites now include random middle-class people in the Midwest who don't like the anti-vaxx guy who had a worm starve to death on his brain, or an avowed socialist? If the elite defended him against some of these people, it's because of how incredibly embarrassing they are.
I don't really get what the problem here is. The effort required is basically just to actually put together the currently publicly available information and describe why people would be interested in discussing it. It's the kind of thing a college bound high schooler should be expected to be able to do in 20 minutes. And for this effort bar we filter out a lot of fluff. The cost is that we will have to wait 20 minutes for someone to do this before we have a discussion about breaking news, but we're not aiming to be a breaking news platform so this is a very low cost.
It aids discussion a lot to have a rough draft of the facts that can then be directly disputed; it channels discussion in a less free-form way.
You write like you're an AI bull, but your actual case seems bearish (at least compared to the AI 2027 or the Situational Awareness crowd).
I was responding to a particularly bearish comment and didn't need to prove anything so speculative. If someone thinks current-level AI is cool but useless, I don't need to prove that it's going to hit AGI in 2027 to show that they don't have an accurate view of things.
I think this is true too, in a decade. The white-collar job market will look quite different and the way we interact with software will be meaningfully different, but like the internet and the smartphone I think the world will still look recognizably similar. I don't think we'll be sipping cocktails on our own personal planet or all dead from unaligned super intelligence any time soon.
Well, yes; that world is predicated on what I think is a very unlikely complete halt in progress.
Yeah, LSD is too iconic to actually change the name. It's just a trap now for out-of-towners using voice GPS navigation, when it suddenly interrupts their song for like 10 seconds while it says the fake name.
Love the audio version, a good break from the monotone tts player nearly everything else goes through for me.
If you're a high-school student or literature major with zero background in computer science looking to build a website or develop baby's first mobile app, LLM-generated code is a complete game changer. Literally the best thing since sliced bread.
You have to contend with the fact that like 95+% of employed programmers are at this level for this whole thing to click into place. It can write full-stack CRUD code easily and consistently. Five years ago you could have walked into any bank in any of the top 20 major cities in the United States with the coding ability of o3 and some basic soft skills and be earning six figures within 5 years. I know this to be the case; I've trained and hired these people.
If you are a decently competent programmer working in an industry where things like accuracy, precision, and security are core concerns, LLMs start to look anti-productive: in the time you spent messing around with prompts, checking the LLM's work, and correcting its errors, you could've easily done the work yourself.
I did allude to there being a level of programming that one needs to see through the matrix to do, but in SF's post, and in most situations where I've heard the critique, that's not really the case. They're just using it for writing config files that are annoying because they pull together a bunch of confusing contexts and interface with proprietary systems that you basically need to learn from institutional knowledge, which is the thing LLMs are worst at. Infrastructure and configuration are the two things most programmers hate the most because they're not really the more fulfilling code parts. But AI is good at the fulfilling code parts for the same reason people like doing them.
In time LLMs will be baked into the infrastructure parts too, because it really is just a matter of context and standardization. It's not a capabilities problem, just a situation where context is scattered across different systems.
Finally, if you're one of those dark wizards working in FORTRAN or some proprietary machine language, because this is Sparta IBM/Nvidia/TSMC and the compute must flow, you're skeptical of the claim that an LLM can write code that would compile at all.
If anything this is reversed: it can write FORTRAN fine. It probably can't do it in the proprietary hacked-together nonsense installations put together in the 80s by people working in a time when patterns came on printed paper and who might collaborate on standards once a year at a conference if they were all-stars. But that's not the bot's fault. This is the kind of thinking that is unimpressed by calculators because it doesn't properly understand what's hard about some things.
I feel like I'm taking crazy pills here. No one's examples about how it can't write code are about it writing code. It's all config files and vague evals. No one is talking about its ability to write code. It's all devops stuff.
The point with the pipes is that obviously Hamas is trying to make rockets and bombs using any material they can get their hands on. We're in agreement that they've dug up water pipes and made rockets out of them. This is the justification for blockading and inspecting things going into Gaza.
It’s not unhinged to know, as a fact, the facts of the day: that many houses were destroyed by tanks, that a survivor testified to tanks firing at their house and killing their family, that many cars were destroyed which were on their way into Gaza, that there were instances of friendly fire. It’s unhinged to have any opinion on the conflict without knowing this, unhinged to hide from it because you find it uncomfortable to acknowledge.
Your claim is that you wouldn't be surprised if only military-aged men were killed by Hamas. We have videos of the indiscriminate killing. Not that killing military-aged festival-goers is somehow justified. You're out of your mind here.
And while it’s not unhinged to distrust “footage” that came out weeks after the attack, I find it inadvisable, because every developed nation has the ability to fabricate footage
You're trusting hearsay about tanks but not video footage? For real? There is plenty of testimony of indiscriminate killing as well; do you trust that testimony, or only the rumors spread by pro-Hamas accounts?
Have you actually used the latest tooling? What tasks have you actually had it try? This seems incredibly unlikely to me.
Woke feminists want ugly, disabled women in the top tier media, and anti-woke coomers want sexy eye candy. Those desires are mutually exclusive, and so one or the other of them will be disappointed.
Not totally related to the thrust of your point, but this isn't even true. Skin packs already exist. Very little of a game's cost is actually making a few extra models. There really could be a woke and a non-woke edition of any AAA game. Hell, this is already done in practice for some international copies that remove LGBT flags, or, less radioactively, the Chinese version of WoW that gave a bone dragon flesh because of Chinese sensibilities around exposed skeletons.
Right, I have in the past argued that it is actually not too much to ask for young people to not have sex in high school. I just didn't want to make this a post about that argument so I gave theoretical ground.
This was debunked, I’m pretty sure.
There's video of it happening, and Hamas claims it happens. The only thing really up for debate is whether they use specifically EU-funded pipes.
But more to the broader point, you understand they still regularly lobbed rockets at Israel, right? You can't just let a neighbor that's doing that get easy access to more serious weapons.
https://www.telegraph.co.uk/world-news/2023/10/10/eu-funded-water-pipelines-hamas-rockets/
Personally, I wouldn’t be surprised if every non military age male was killed in the Hannibal Directive rather than by Hamas. Because I don’t think Hamas went in with the RPGs required to
Are you under the impression that most Israelis on October 7th died in cars on the way back to Gaza? This is a totally unhinged thing to believe. There is footage; you can watch it. Should go without saying, but very NSFW: https://www.hamas-massacre.net/
I think so, yes
I genuinely don't understand how you could convince yourself of this. Hamas leadership had been very consistent in denying this.
Moore's law was originally a doubling every two years and has basically kept pace, although in just the last few years it may have slowed to every three years. If the speed of AI progress continues at this trend for half a century, and then at merely two-thirds this speed, then we're hitting AGI for sure.
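To make the compounding concrete, here is a minimal back-of-envelope sketch. The specific time spans and doubling periods below are illustrative assumptions chosen only to mirror the comparison above (doubling every two years, then slowing to every three), not measured figures for AI progress:

```python
# Back-of-envelope compounding of periodic doublings.
# All periods and spans are illustrative assumptions, not data.

def growth_factor(years: float, doubling_period: float) -> float:
    """Total multiplicative growth over `years`, one doubling per period."""
    return 2 ** (years / doubling_period)

# Phase 1: doubling every 2 years for 50 years -> 25 doublings.
fast_phase = growth_factor(50, 2)   # 2**25 = 33,554,432x
# Phase 2: the pace slows to 2/3 (one doubling per 3 years) for 30 more years.
slow_phase = growth_factor(30, 3)   # 2**10 = 1,024x

print(int(fast_phase), int(slow_phase))  # prints: 33554432 1024
```

Even the slowed phase multiplies capability by three orders of magnitude, which is the point of the comment: a modest deceleration still leaves enormous cumulative growth.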
I can't find this analog SF graph you're talking about and don't see how it's related to this prediction. We knew about brute facts of physics that would prevent FTL travel back then, and we know of no fact of physics that would stop AI from gaining human-level general intelligence. You think we're at the looking-at-cars-and-predicting-FTL stage, when we seem much more to be at the looking-at-horses-and-predicting-cars-given-knowledge-of-engines stage. Was it possible that engines were just too heavy and a functional car wasn't really possible? Sure, that could have happened. And horses still outclass cars at maneuvering in some cases. But we did get the car, and cars do dominate horses in most ways.
He is for sure not innocent. You can certainly argue that it was a politically motivated prosecution of the ten-felonies-a-day type, but Trump really did commit a crime.