domain:themotte.org
the grid is getting worse and is going to keep getting worse due to Green energy mandates.
I'm pretty optimistic that much of that is going to resolve itself in the short/mid-term. They're just a little behind on the battery front, but those are getting so absurdly cheap, they just have to pull their heads out of their asses and connect them. But it's Germany we're talking about here, so this will take time. Getting permission to connect a boatload of cheap Chinese batteries to the grid will take them a couple of years. Still, I'm optimistic they'll manage by 2030.
Because once you add serious battery capacity to a renewable grid, it gets more stable very, very quickly. It also gets cheaper. Texas and California have been doing that, and the results are immediate: "In 2023, Texas’ ERCOT issued 11 conservation calls (requests for consumers to reduce their use of electricity), [...] to avoid reliability problems amidst high summer temperatures. But in 2024 it issued no conservation calls during the summer." They achieved that by adding just 4 GW (+50%) of batteries to their (highly renewable in summer) grid.
Not to mention that it's the automated town crier that's doing it.
I personally hate the Tiktok/Vine style short video-algo-doomscroll shit with a passion and would rather the whole concept and its copycats get the axe, complete with youtube shorts and facebook's whatever the fuck they have going on. But I'm not sure it's doable with the legal framework we have right now.
I meant things such as not being aware that combatants in a war release constant lies and assuming their press releases are not almost straight bullshit.
No doubt this piece of information is somewhere in there but unless reminded to it's happily oblivious.
Wan 2.1 What's that?
Will have to look it up.
Yes, I made the bot do a programming task.
I ALSO observed it write long-form fiction. This is not an advanced reading comprehension task. It should be obvious that programming and creative writing are two different things.
I think I've explained myself adequately?
You said this:
I call them nonsense because I think that sense requires some sort of relationship to both fact and context. To be sensible is to be aware of your surroundings.
Normal people would think that 'fact' and 'context' would be adequately achieved by writing code that runs and fiction that isn't obviously derpy 'Harry Potter and the cup of ashes that looked like Hermione's parents'. But you have some special, strange definition of intelligence that you never make clear, except to repeat that LLMs do not possess it because they don't have apprehension of fact and context. Yet they do have these qualities, because we can see that they do creative writing and coding tasks and as a result they are intelligent.
I believe a lot of the lack of institutional pushback was down to the election of Trump, which made plenty of liberals go insane and abandon their principles. There was both this radicalising force and a desire to close ranks.
Wokism wouldn't have disappeared without Trump but I believe his election supercharged an existing movement that wouldn't have had the same legs without such a convenient and radicalising enemy. For any narrative to really catch on you need the right villain and Trump was just that.
I can't actually tell what you asked a bot to do. You asked a bot to 'create a feature'? What the heck is that? A feature of what? At first I assumed you meant a coding task of some kind, but then you described it as writing 'thousands of words of fiction', which sounds like something else entirely. I have no idea what you had a bot do that you thought was so impressive.
At any rate, I think I've explained myself adequately? To repeat myself:
But I think that written verbal acuity is, at best, a very restricted kind of 'intelligence'. In human beings we use it as a reasonable proxy for intelligence and make estimations based off it because, in most cases, written expression does correlate well with other measures of intelligence. But those correlations don't apply with machines, and it seems to me that a common mistake today is for people to just apply them. This is the error of the Turing test, isn't it? In humans, yes, expression seems to correlate with intelligence, at least in broad terms. But we made expression machines and because we are so used to expression meaning intelligence, personality, feeling, etc., we fantasise all those things into being, even when the only thing we have is an expression machine.
Yes, a bot can generate 'thousands of words of fiction'. But I already explained why I don't think that's equivalent to intelligence. Generating English sentences is not intelligence. It is one thing that you can do with intelligence, and in humans it correlates sufficiently well with other signs of intelligence that we often safely make assumptions based on it. But an LLM isn't a human, and its ability to generate sentences in no way implies any other ability that we commonly associate with intelligence, much less any general factor of intelligence.
The problems of LLMs and prompt injection when the LLM has access to sensitive data seem quite serious. This blog post illustrates the problem when hooking up the LLM to a production database which does seem a bit crazy: https://www.generalanalysis.com/blog/supabase-mcp-blog
There are some good comments on hackernews about the problem especially from saurik: https://news.ycombinator.com/item?id=44503862
The problem seems to be if you give the LLM readonly access to some data and there is untrusted input in this data then the LLM can be tricked into exfiltrating the data. If the LLM has write access to the data then it can also be tricked into modifying the data as well.
More options
Context Copy link