P-Necromancer
> 1 million tokens is a lot! (Gemini 2.0 had 2 million, but good luck getting it to function properly when it's that full). That is 750k words. All of Harry Potter is just over a million.

You know, I hadn't really internalized just how big this is. You got me curious about it. I uploaded something I'm working on -- 240k words, which, with Gemini 2.5 Pro, came out to about 400k tokens.
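If you want to check how big a file is before committing to an upload, the API exposes a countTokens method. A rough sketch -- jq and the filenames here are my own choices, draft.md stands in for whatever you're measuring, and you'd need an API key in GEMINI_API_KEY:

jq -Rs '{contents: [{parts: [{text: .}]}]}' draft.md > payload.json
curl -s "https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-pro:countTokens?key=$GEMINI_API_KEY" \
  -H 'Content-Type: application/json' -d @payload.json
# the response is a small JSON object with a totalTokens field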

Honestly, I'm impressed that it works at all and very impressed how fast it works. Thought I'd at least have time to get up and get a drink, but it was already responding to my question inside 30 seconds. Just being able to throw compute at (essentially) reading a book feels magical, like nine women making a baby in a month.

Unfortunately, that's where my praise ends. It... has a general idea what happened in the text, certainly. I wouldn't give it much more than that. I'm used to 2.5 being impressively cogent, but this was pretty bad -- stupider than initial-release GPT-4, I want to say, though it's been long enough that I might be misremembering. If you ask it concrete questions it can generally give you something resembling the answer, complete with quotes, which are only ~30% hallucinations. Kind of like talking to someone who read the book a few months ago and whose memory is getting a bit hazy. But if you ask it to do any sort of analysis or synthesis or speculation, I think it'd lose out to the average 10-year-old (who'd need OOMs longer to read it, to be fair).

(Also, the web front end was super laggy; I think it might have been recounting all the tokens as I typed a response? That feels like too stupid an oversight for Google, but I'm not sure what else it could be.)

Not sure where the disconnect is with the medical textbooks you say you tried. Maybe the model has more trained knowledge to fall back on when its grasp on the context falls short? Or you kept to more concrete questions? As of now I think @Amadan's semantic compression approach is a better bet -- whatever you lose in summarization you make up in preserving the model's intelligence at low context.

> (Royal Road makes it so you can't export an epub of your own fic without paying, and without that option, I'd be doing a lot of copying and pasting)

FanFicFare can do this for free. It's also available as a calibre plugin, if you want a GUI.
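From the command line, grabbing your own fic should be as simple as this (the URL is just a placeholder for the fic's Royal Road page):

pip install FanFicFare
fanficfare "https://www.royalroad.com/fiction/12345/your-fic-here"
# writes the story out as an epub in the current directory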

Though, bizarrely, Gemini (at least via Google AI Studio) doesn't support epub uploads. Concerns about appearing to facilitate the upload of copyrighted material? Kind of dumb considering epub is an open format and they allow PDF, but I could see how it might be spun in a lawsuit. Anyway, RTF should work, but didn't for me. Eventually got something workable out of pandoc:

pandoc -f epub -t markdown_strict-smart-all_symbols_escapable --wrap=none
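(Those are just the format options; a full invocation, with placeholder filenames, would look like:

pandoc my-fic.epub -f epub -t markdown_strict-smart-all_symbols_escapable --wrap=none -o my-fic.md

where my-fic.epub is whatever you exported and my-fic.md is wherever you want the output.)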

> Your so-and-so’s heir is the most important thing about you no matter what you do.

This is an odd framing: that heir has a (great-great-...) grandfather as well as a grandmother. There is only one of you and (potentially) many of your progeny, so it's overwhelmingly likely that the most important thing about any given man will be his children too. And a woman (or a man) can trivially escape this 'shadow' by not having children, which is in the modern day very much an option.

I suppose the distinction is meant to be that women invest more in their children? Or that that investment has more of an impact? Or that they're less likely to be important otherwise?