Scimitar

0 followers   follows 0 users
joined 2022 September 05 21:17:20 UTC

User ID: 716


No bio...

No but inversion tables have been shown to be helpful

The OpenAI team uses it internally, so while it can't do it all by itself, at least it can help!

Yeah, just like mobile dev exploded and was lucrative over the last decade, the next few years might be the age of the "applied AI engineer".

I get good summaries by just pasting the content along with the word "Summarize" or "Summarize the above" at the end.

If you use a more imperative rather than conversational tone, it is more likely to do what you ask, rather than pad out the response with empty patter.
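The approach above can be sketched as a tiny helper. This is a minimal sketch, not an official API: the function name is mine, and the instruction wording ("Summarize the above") is just the phrasing suggested in the comment.

```python
def build_summary_prompt(content: str) -> str:
    """Append a terse, imperative instruction after the pasted content.

    An imperative closing line ("Summarize the above") tends to get a
    direct answer, where a conversational request ("Could you maybe
    summarize this for me?") invites padding and empty patter.
    """
    return f"{content}\n\nSummarize the above"
```

The resulting string is what you would paste into the chat box (or send as the user message via an API client).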

What are the current usage limits? Does the history work? What is the maximum input length for a single comment (presume 8k tokens)?

Yes, last I heard it was reduced to 25 messages per 3 hours, and they were possibly reducing it further.

What's your definition of AGI? The label feels more like a vibe than anything else.

For me, the multi-modal capabilities of GPT-4 and others [1][2] start to push it over the edge.

One possible threshold is Bongard problems[3]. A year ago I thought that, while GPT-3 was very impressive, we were still a long way from AI solving a puzzle like this (what rule defines the two groups?) [4]. But now it seems GPT-4 has a good shot, and if not 4, then perhaps 4.5. As far as I know, no one has actually tried this yet.

So what other vibe checks are there? Wikipedia offers some ideas[5]:

  • Turing test - GPT-3 passes this IMO

  • Coffee test - can it enter an unknown house and make a coffee? Palm-E[1] is getting there

  • Student test - can it pass classes and get a degree? Yes, if the GPT-4 paper is to be believed

Yes, current models can't really 'learn' after training, they can't see outside their context window, they have no memory... but these issues don't seem to be holding them back.

Maybe you want your AGIs to have 'agency' or 'consciousness'? I'd prefer mine didn't, for safety reasons, but would guess you could simulate it by continuously/recursively prompting GPT to generate a train of thought.
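The "continuously/recursively prompting" idea might look something like the loop below. A hedged sketch: `ask` stands in for any callable that maps a prompt string to a completion string (e.g. a wrapper around a chat API); the function name and prompt wording are mine.

```python
def simulate_train_of_thought(ask, seed: str, steps: int = 3) -> list[str]:
    """Feed the model's last output back in as the next prompt,
    producing a running 'train of thought' without any real agency.

    `ask` is a stand-in for a model call: prompt string in,
    completion string out.
    """
    thoughts = [seed]
    for _ in range(steps):
        prompt = "Continue this train of thought:\n" + thoughts[-1]
        thoughts.append(ask(prompt))
    return thoughts
```

Each iteration only sees the previous thought, which also illustrates the context-window limitation mentioned above: nothing persists unless you explicitly feed it back in.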

[1] https://ai.googleblog.com/2023/03/palm-e-embodied-multimodal-language.html

[2] https://arxiv.org/pdf/2302.14045.pdf

[3] https://metarationality.com/bongard-meta-rationality

[4] https://metarationality.com/images/metarationality/bp199.gif

[5] https://en.wikipedia.org/wiki/Artificial_general_intelligence#Tests_for_testing_human-level_AGI

This is commonly used technology IMO - the same thing as the Pledge of Allegiance, or daily prayers. Many cultures have regular rituals designed to align the individual's psychology towards the group's values. There's no reason why this couldn't work to align the individual to their own values.

My gut feeling is that using "I am ..." statements would work better than just reciting the factual benefits of exercise. E.g. "Exercising is a priority of mine. I enjoy exercise. I am someone who exercises regularly. I am often in the mood to exercise, and even if I'm not, I will do so anyway." and so on. These target your identity, how you think of yourself, so are more likely to influence your actions (e.g. I think you want to compile a list of the scientific benefits because you have as part of your identity "I am someone who updates their behaviour based on scientific evidence").

You can kinda do this in chatGPT - ask a question as a chain-of-thought prompt, then a follow up asking it to extract the answer from the above.
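That two-step pattern (reason first, extract second) can be sketched as below. This is an assumption-laden illustration, not an official recipe: `chat` is a hypothetical callable taking a list of role/content messages and returning the assistant's reply, and the prompt phrasings are mine.

```python
def answer_with_cot(chat, question: str) -> str:
    """Two-turn chain-of-thought: first elicit step-by-step reasoning,
    then ask the model to pull out just the final answer.

    `chat` is a stand-in for a chat API call: it takes a message list
    and returns the assistant's reply as a string.
    """
    history = [{"role": "user",
                "content": question + "\nLet's think step by step."}]
    reasoning = chat(history)                       # turn 1: reasoning
    history += [{"role": "assistant", "content": reasoning},
                {"role": "user",
                 "content": "Extract just the final answer from the above."}]
    return chat(history)                            # turn 2: extraction
```

In the chat UI this is just two messages: the chain-of-thought question, then "extract the answer from the above" as a follow-up.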

I say okay, and ask how I am supposed to close the account and transfer the remaining balance. He said I can close the account and withdraw the remaining balance only in cash. Cash? At this point, I literally asked: "like, green paper money cash?" He says yes. The balance in the account is somewhere around $1M.

[...]

This manager is very helpful, if not a bit gruff. He explains to me that each local branch has some sort of performance metric based on inflows and outflows at the given branch. Therefore, funding a $1M cash withdrawal was not attractive to them. I'm learning a lot in a really condensed period of time at this point. I don't even know if what he's telling me is true, or legal, all I hear is "this is going to be hard to do if you want it all at once."

But we do want it all at once. And we want to close the account. Now. He is not happy, but he says he'll call me back in 24 to 48 hours. True to his word, he calls me back the next day. He says that he had to coordinate to ensure his branch had the proper funding to satisfy this transaction, and that the funding would be available at a specific date a few days hence. He said I have to do the withdrawal that day because his branch will not hold that amount in cash for any longer.

He also subtly suggested I hire personal security or otherwise deposit those funds somewhere with haste. I believe his exact words were "if you lose that check, I can't help you." Again, this was a one time event, and I don't know how true that all is, but it was said to me.

A few days later, I walk into the branch (I did not hire personal security). I tell the teller my name and there is a flicker of immediate recognition. The teller guides me to a cubicle, the account is successfully closed, I'm issued a $1M cashier's check, and I walk out the door.

https://mitchellh.com/writing/my-startup-banking-story - Interesting story about a startup founder's interaction with the world of banking.

See also https://www.lesswrong.com/posts/nmxzr2zsjNtjaHh7x/actually-othello-gpt-has-a-linear-emergent-world

The headline result is that Othello-GPT learns an emergent world representation - despite never being explicitly given the state of the board, and just being tasked to predict the next move, it learns to compute the state of the board at each move.

IMO, LLMs are "just" trying to predict the next token, the same way humans are "just" trying to pass on our genes. It does not preclude LLMs having an internal world model, and I suspect they actually do.

It sometimes helps me to do something I've never done before. Like travel or psychedelics, doing something novel can help shake up your brain and get you out of old thought habits.

Here are some ideas, it can be very simple:

  • cook a meal you've never tried before (or get takeout that you haven't tried before)

  • travel to the gym via a different mode or route. Do a routine you've not done before.

  • go for a walk, if you don't normally.

  • rearrange your room/house. tidy up. buy some new art or a new rug

  • open netflix and watch a highly-acclaimed film/series that you haven't seen before

  • visit a local place you've never been to before, a park, an art gallery, then have lunch somewhere new to you

  • go to the cinema and watch something you wouldn't normally watch. (If you normally buy popcorn, get a hotdog instead)

  • get a cheap room in the next town over, and spend a day or two there

Really the challenge is only "something new".

Also, you should try to do things even if you don't feel like it (perhaps you know this already). Do things as an experiment and see what happens. You should make a schedule for the week, perhaps just pick one activity per day. Look up Behavioural Activation - when you feel low, you do less, which makes you feel low. But if you do more, you might feel better. So you should do things even if you don't completely feel like doing them, because the act of doing is itself the medicine.

This is a Sybil attack https://en.wikipedia.org/wiki/Sybil_attack

I don't know how, or if, it relates to the Collatz conjecture

I play piano/keyboard casually. Mostly pop songs to entertain myself. They are simple and satisfying to play, and easy to adapt and improv on. And on the rare occasion I have an audience, a pop song will get more of a reaction than say Rachmaninoff, and takes 1% of the effort to learn and play.

Yes true. My sense of magnitude is not fully calibrated

I guess either you have product-market fit, or you don't. PMF = customers knocking down your door, can't keep up with demand, servers on fire, bottlenecked by scaling, etc. The company struck gold and you must dig it out as fast as possible. No PMF = nobody cares, no users, no problems.

Is it? I personally found the human aspect awkward and embarrassing, and could have done without it. Admittedly I never found therapy useful.

In the VICE article, Dan stays up till 4am talking to it, while Gillian says the words are empty. The Discord users that Koko fooled presumably skew male, so it may be a gender thing. I know that my very male approach was "I have a problem that needs to be fixed", not "I need to spend an hour talking to an empathetic human".

It sounds like an anxiety disorder. Has no one offered to treat it as such (diazepam, CBT, etc)? Almost all your symptoms can be manifestations of anxiety.

Here are some examples

https://80000hours.org/articles/what-could-an-ai-caused-existential-catastrophe-actually-look-like/#actually-take-power

You can do a lot with intelligence. By inventing Bitcoin, Satoshi is worth billions, all while remaining anonymous and never leaving his bedroom. What could a super human intelligence do?

This guy is great. I was just reading his very long "Notes on Nigeria" published yesterday

Kamala will just implement policies that give current big players a regulatory moat

I'm not sure it matters because the Copilot code that has been committed has been filtered by a developer, so it's a bit like RLHF. The human is still in the loop, so the only qualities that get amplified are the ones the humans want.

How do you keep track of what you've read online? How do you manage your personal research? Do you have a system? Do you have collections, and keep notes? Is this an actual problem you have?

Personally, I often read something and then struggle to find it again months later, or open something in a new tab to read later only to lose the tab. Or I might write down my thoughts but then lose the context of what I was looking at when I had them. It just feels like things here could be much better.

Have you seen Four Lions? https://youtube.com/watch?v=xR5rKr-p6lc

I got top 0.22%, better than I was expecting since I have relatively weak verbal skills. But hey, apparently vocab tests are highly g loaded, so I'll take my suggested 142(?) IQ!

I'm not much of a reader. I only knew avarice because of Dark Souls (ring of avarice), and alacrity because of Dota (an Invoker spell).