
Friday Fun Thread for March 8, 2024

Be advised: this thread is not for serious in-depth discussion of weighty topics (we have a link for that), this thread is not for anything Culture War related. This thread is for Fun. You got jokes? Share 'em. You got silly questions? Ask 'em.


What's the Stack Overflow for LLM prompts? Large parts of my work are unironically "prompt engineering", and I plan to offload significant portions of my thinking to GPT.

There doesn't seem to be any forum focused on good prompt engineering.

I second the excellent question. Chatbot threads on imageboards have some insights into prompt engineering, but they're not exactly technical, because their goal is not automating some abstract task. They still have some useful info, though, and roleplay is honestly underrated as a medium for interacting with LLMs; wearing masks seems to come very naturally to a shoggoth. There's a reason many simple prompts for e.g. coding tell the shoggoth "you are a very smart coding assistant" and things to that effect, and likewise why many Stable Diffusion prompts begin with "masterpiece", "high quality", etc. Funny how that works, but hey, as long as it works.
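The "wearing masks" technique amounts to prepending a role-setting system message before the actual task. A toy sketch (the persona and task strings here are illustrative, not from any particular guide):

```python
# Persona/roleplay prompting: lead the conversation with a system message
# that assigns the model a role before giving it the real task.
def with_persona(persona: str, task: str) -> list[dict]:
    """Build a chat-style message list that opens with a persona prompt."""
    return [
        {"role": "system", "content": f"You are {persona}."},
        {"role": "user", "content": task},
    ]

messages = with_persona(
    "a very smart coding assistant",
    "Write a function that reverses a string.",
)
```

The resulting list can be passed as the `messages` argument to most chat-completion APIs; the only thing the persona trick changes is that first system entry.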

If you have access to Claude, Anthropic's documentation on prompting is fairly solid and grounded in reality: people have put it to use, and the described methods have real effects.

Anthropic has a great prompt library.

What is currently the best LLM implementation on which to use these prompts?

Given these are Anthropic's prompts, shouldn't the prior be the Claude models?

I guess they would indeed be best suited for Claude(s), but they're quite generic, so they should work all right on any decently large model.
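The portability claim is easy to see in practice: the same prompt text slots into different providers' request shapes with only structural changes. A hedged sketch, where the model names are placeholders and the shapes mirror the publicly documented OpenAI chat and Anthropic Messages formats:

```python
# The same generic system prompt dropped into two common request shapes.
SYSTEM = "You are an expert editor. Rewrite the user's text to be concise."
TASK = "Please tighten this paragraph."

# OpenAI-style chat payload: the system prompt is the first message.
openai_style = {
    "model": "gpt-4",  # placeholder model name
    "messages": [
        {"role": "system", "content": SYSTEM},
        {"role": "user", "content": TASK},
    ],
}

# Anthropic Messages-style payload: the system prompt is a top-level field.
anthropic_style = {
    "model": "claude-3-opus-20240229",  # placeholder model name
    "max_tokens": 1024,
    "system": SYSTEM,
    "messages": [
        {"role": "user", "content": TASK},
    ],
}
```

Only the placement of the system prompt differs; the prompt content itself carries over unchanged, which is why a well-written library prompt tends to transfer across decently large models.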

OP, I'd be interested to know if you found other useful resources.

Why not just ask ChatGPT for a good prompt?

Edit: Just realized why this would fail terribly.

  1. ChatGPT isn't going to help you hack itself

  2. ChatGPT doesn't know much about ChatGPT because it's not inside its own training data

So, either prompt engineering will be the last area of human expertise or, rather, we'll simply ask bot A how to prompt bot B.

Why not just ask ChatGPT for a good prompt?

I'm past that stage.