Only in a very exotic sense that I just can't fathom.
You can't fathom people believing that there is no such thing as a third gender? I don't know what to tell you; AMA I guess.
I mean, I don't really object to other people making whatever honorary or even dubious claims make them happy, including you -- the current state of trans affairs is more like your anarchist going around getting people fired for considering themselves subjects of the King. Or monarchists forcing the anarchists to pledge fealty every morning before work; also bad.
Nonbinary people are still either men or women -- he or she. Asking for ze is asking for a lie.
Are you seriously saying you're fine with a man getting bottom surgery, breast implants and estrogen shots, renaming himself 'Alice', and wearing dresses - but once he demands to be addressed as 'she', that's where you draw the line?
Yes?
None of the other stuff impacts me in the slightest; it's (aspirationally) a free country. "Demands to be addressed as she" is maybe the least sticky of the demands that are being made IRL, but it's still sticky enough.
I was talking more about the lady in the pub -- now the tiller is steering away from the greedy landlord!
The problem with that behavior was the lying.
Many people find this to be their main sticking point with the pronoun stuff. Not only is somebody lying, they want everyone else to lie too.
My solution would be to use the preferred pronouns but somehow mark them as being specifically-requested pronouns. That way when you say She it can be read as sarcastic.
Chess notation has a lot of useful nuance here -- She(?), She(!), She(!?), and so on. Hard to verbalize though; maybe some crossover with the African tongue clicks that are similarly notated?
Now I'm a bit miffed about the exorbitant rent.
The Invisible Hand doesn't sleep.
I also argued that implementing this feature wouldn't require Grok developers to do anything special
This part is wrong -- Grok is designed to take text input, and the developers would definitely need to 'do something' for it to ingest youtube audio instead. (and further work would be required for the model to make any judgement as to how Cash was pronouncing "gambler")
My purpose here is to clear your misconceptions as to how the technology works, and what is possible.
I never said that it was impossible to build a near-instantaneous transcription service -- I said that Grok has no reason to do so, and therefore almost certainly didn't.
youtube will serve all necessary files at the same time
I don't think it will -- have you interacted with the youtube API at all?
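To make that concrete, here's roughly what grabbing just the audio looks like with yt-dlp (my sketch -- the URL is a placeholder, and this isn't anything Grok is known to run). YouTube exposes audio and video as separate adaptive streams that have to be requested and downloaded individually; there's no single "here are all the files at once" payload.

```python
# Hedged sketch: pulling just the audio track with yt-dlp.
# The URL is a placeholder; options are illustrative, not Grok's actual setup.
import yt_dlp

URL = "https://www.youtube.com/watch?v=..."  # placeholder video ID

opts = {
    "format": "bestaudio/best",   # ask for an audio-only stream if available
    "outtmpl": "audio.%(ext)s",   # where to write the download
}

with yt_dlp.YoutubeDL(opts) as ydl:
    info = ydl.extract_info(URL, download=True)
    print(info["duration"], "seconds of audio downloaded")
```

Even on a decent connection, that download step alone typically eats a couple of seconds before any transcription can start.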
I'm guessing you never had to deal with business decisions before.
That's a really bad guess -- I've even had to deal with people who don't want to (and/or seem unlikely to be able to!) pay my business for services rendered.
I'm not asking about whether you think it was a good idea to deliver the coal or not -- I'm saying that, legally speaking, the (alleged) buyer was seeking to enforce a contract for ongoing delivery that they appear to have already violated by not paying the late fees that were laid out in the contract. My understanding of these situations is that once a party fails to fulfill some part of their responsibilities under a contract, the counterparty is no longer bound by the rest of it.
Now they have to find someone else to buy 12,000 tons of metallurgical coal per week
I thought the point of the case was that the coal company didn't want to deliver the coal, and the buyer wanted it -- the business decision seems to have been made?
Sounds are not text though -- nothing is free, and nothing is instant.
Why don't you try it? Ask Grok to transcribe a song from a youtube link and see what it does -- preferably a song that differs from the published lyrics somehow, maybe a live version or something.
I don't think so -- you are wrong on this one.
OK, but will Grok? I guess it would be pretty easy to try, but it might refuse on copyright grounds or something.
whisper_print_timings: total time = 67538.11 ms
OK? 67 seconds is not instant -- like, at all. Even 6.7s (assuming the resources assigned to this task were as you suggest) is not instant.
I'm not arguing that Grok 3.0 does in fact do all of this with the Johnny Cash song. All I'm saying is that it could.
Of course it could! But it doesn't, and the fact that it responded instantly is evidence of that. Do you really think Grok is spending resources (mostly dev time, really) to add features allowing the model to answer weird questions about song lyrics?
LLMs lie man -- we should get used to it I guess.
Twitter's not technically FAANG, but I think they need to compete with those salaries -- for which (especially in the Bay Area) $300K is nowhere near top-end.
Stock grant of that much again would also be nothing special for somebody at all in demand -- so $0.5-1M TC sounds about right.
LLMs were developed as tools to automatically generate transcripts and sub-titles
Interesting assertion, but it doesn't really have any bearing on whether or not Grok can do this -- it takes text input from the user, and generates a text response. What makes you think it even has an interface to bring in audio inputs? (On the training end, they might -- given the hunger for data -- but it seems like an odd thing to include in a chatbot. Even for training, it would probably be better to do something like, oh, IDK -- run a transcription algo on as much YouTube content as you can grab and then feed the text from that into your training set. You might even include some timestamps!)
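If anyone wants to see what that offline approach looks like, here's a rough sketch using the open-source whisper package (file name and model size are made up; this is just to show that the timestamps come along for free, not a claim about what xAI actually does):

```python
# Hedged sketch of the "offline" approach: transcribe audio you've already
# grabbed and keep the timestamps, so text (not audio) ends up in the
# training set. The audio file name is hypothetical.
import whisper

model = whisper.load_model("base")            # small model, CPU-friendly
result = model.transcribe("cash_live.mp3")    # hypothetical audio file

for seg in result["segments"]:
    # each segment carries start/end times (in seconds) plus the text
    print(f"[{seg['start']:7.2f} -> {seg['end']:7.2f}] {seg['text']}")
```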
Yes, but serving and parsing videos from youtube is not one of those things.
Warren had never once paid "on time" but had waited until the last minute and withheld the late fee.
How come they hadn't repudiated the contract if they didn't pay the late fees?
IDK, some of those Joe Biden "get in, loser" memes were pretty funny.
LLMs != AI.
Agreed!
(That means that there is no AI at all though -- and the sheer effort/$ being devoted to LLMs is, if anything, making it less likely that there will be any anytime soon.)
Yes, and for the LLM to parse these bits, first youtube needs to locate them and then serve them to the LLM. Even if the LLM can convince youtube to serve the bits as fast as bandwidth will allow, it still needs to run those bits through some transcription algo -- and those are typically borderline on keeping up with 1x playback speed.
In the instant case, it would also need that algo to make some sort of judgement on the accent with which some of the words are being pronounced -- which is not a thing that I've seen. The fact that it goes ahead and gets this wrong (Cash pretty clearly says gam-bel-er in the video) makes it much more likely that the LLM is looking at some timestamped transcript to pick up the word "gambler" in the context of country songs, and hallucinating a pronunciation.
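If you want to sanity-check the "borderline at 1x" claim on your own machine, something like this will do it (again just a sketch -- model size, file name, and your hardware will swing the numbers a lot):

```python
# Rough way to measure the realtime factor of a transcription pass.
# AUDIO is assumed to be whatever you downloaded earlier; results vary wildly
# with hardware and model size.
import time
import whisper

AUDIO = "audio.mp3"
model = whisper.load_model("base")

t0 = time.perf_counter()
result = model.transcribe(AUDIO)
elapsed = time.perf_counter() - t0

# duration of the audio = end timestamp of the last segment
audio_len = result["segments"][-1]["end"] if result["segments"] else 0.0
print(f"{elapsed:.1f}s of compute for {audio_len:.1f}s of audio "
      f"(~{audio_len / elapsed:.2f}x realtime)")
```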
This would be more convincing if humanoid robots existed -- or LLMs were able to control them. If you ask an LLM "how do you break down a chicken?" it will probably give you a pretty good description that a human could follow -- this sort of thing is well represented in its training set. If you ask it for a program to activate the servos of a hypothetical knife-wielding humanoid robot such that a chicken in front of it will be disassembled, it will give you utter trash. (If it doesn't demur.)
It's a pretty good example of the difference between an intelligence and a language model, actually -- a language model can describe things; an AI can do things.
All that to say, if you want your chicken factory automated, waiting for a humanoid robot so you can drop it into place is not a very effective approach. Buying some machines from the Dutch would work much better.
True (and interesting about the Chrome extension; what is the use case for 10x browser playback of youtube videos, I wonder?), but I'm quite sure Grok is not currently programmed with anything like this.
I don't care what Alice identifies as -- I identify Alice as a woman or a man, and referring to him/her otherwise would be untrue.