Wellness Wednesday for July 5, 2023

The Wednesday Wellness threads are meant to encourage users to ask for and provide advice and motivation to improve their lives. It isn't intended as a 'containment thread' and any content which could go here could instead be posted in its own thread. You could post:

  • Requests for advice and/or encouragement, on basically any topic and for any scale of problem.

  • Updates to let us know how you are doing. This provides valuable feedback on past advice / encouragement and will hopefully make people feel a little more motivated to follow through. If you want to be reminded to post your update, see the post titled 'update reminders', below.

  • Advice. This can be in response to a request for advice or just something that you think could be generally useful for many people here.

  • Encouragement. Probably best directed at specific users, but if you feel like just encouraging people in general I don't think anyone is going to object. I don't think I really need to say this, but just to be clear: encouragement should have a generally positive tone and not shame people (if you feel that shame might be an effective motivational tool, please discuss it so we can form a group consensus on how to use it rather than just trying it).


I coded a langchain layer for ChatGPT-3.5 to turn it into a service dominant! It doms me to fulfill all of my self care needs on a daily basis now!

The mind commands itself, and the mind resists, but by externalizing and automating the reification loop, I can relax and give in completely to the psychic metamorphosis! Self modification has never been easier or more pleasant.

I was experimenting with something similar: an AI accountability buddy/nagbot.

Is it mostly steered via the system prompt? How do you interact with it?

Ok so, right now I have a Vue GUI with just the most basic functionality: one chat log that saves to disk on the backend, and I can't split the conversation at the moment.

It's powered by APIs, which are really easy to throw together with GPT-4's help.

Now... as for the prompting... let's see... I'll probably release this publicly eventually, so it's not too secret...

# Rebuild the system prompt JSON, dropping the oldest chat messages one at
# a time until the whole thing fits under the token limit.
len_ret_val = EFFECTIVE_TOKEN_LIMIT + 1
while len_ret_val > EFFECTIVE_TOKEN_LIMIT:
    ret_val = json.dumps({
        "chat_history": self.chat_history[self.chat_history_start:],
        "END_CHAT_HISTORY": "Everything above here is chat history.",
        "unfinished_tasks": [t.dict() for t in tasks],
        "time_now": time_now.dict(),
        # "memories": [],
        "prompt_info": {k: dict(v) for k, v in prompts.items()},
    })
    len_ret_val = count_tokens(ret_val)
    if len_ret_val > EFFECTIVE_TOKEN_LIMIT:
        # Still too long: drop the oldest chat message and try again.
        self.chat_history_start += 1

So, that's the code for configuring the system prompt, which is rebuilt every single chat cycle, so it changes from cycle to cycle.

Unfinished tasks is powered by a tasktimer class, which is powered by, uh... well, it automatically checks once per minute whether any new tasks are scheduled and updates a listing, and it also triggers a chat cycle when one is added. Prompts are... well, right now there's assistant_prompt (the system's behavior), user_prompt (information about the user, good for letting it know the user consents to domming), master_goal, current_goal, emotion, and interaction_style... these last two are intended to be more hotswappable than the personality prompt, but that isn't automated yet.
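The tasktimer boils down to something like this (a simplified sketch rather than the real thing: TaskTimer, schedule, and the on_new_task callback are placeholder names, and the callback is assumed to be an async function that kicks off a chat cycle):

import asyncio
from datetime import datetime

class TaskTimer:
    """Sketch: keep a listing of unfinished tasks, check once per minute
    whether a scheduled task has come due, and trigger a chat cycle
    (via an async callback) when one becomes active."""

    def __init__(self, on_new_task, poll_seconds=60):
        self._active = []        # unfinished tasks, read via get_tasks()
        self._scheduled = []     # (due_at: datetime, task) pairs, not yet active
        self._on_new_task = on_new_task
        self._poll_seconds = poll_seconds

    async def get_tasks(self):
        return list(self._active)

    def schedule(self, due_at, task):
        self._scheduled.append((due_at, task))

    async def run(self):
        while True:
            now = datetime.now()
            due = [(d, t) for d, t in self._scheduled if d <= now]
            self._scheduled = [(d, t) for d, t in self._scheduled if d > now]
            for _, task in due:
                self._active.append(task)
                await self._on_new_task(task)   # kick off a chat cycle
            await asyncio.sleep(self._poll_seconds)

Usage is just: construct it with whatever triggers a chat cycle, call schedule() when a task is created, and leave run() going in the background.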

Speaking of automated...

We have two other major systems right now...

The alert system: we inject the highest-priority alerts directly into the most recent user message. Otherwise gpt-3.5 sort of ignores the system prompt in favor of the user prompt. You can force it to focus on specific parts of the system prompt this way. Right now I have alerts that tell it what the subsystems did last step, and to focus either on the unfinished_tasks or the current_goal, depending on whether there are any unfinished tasks.

messages = [
    {"role": "system", "content": f"{context_window}"},
    {"role": "user",
     "content": f"Priority Information : {json.dumps(alerts)}\nUser Comment : {input_text}"},
]
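That messages list then goes straight to the chat completion endpoint each cycle, roughly like so (this is the pre-1.0 openai Python client; the model and temperature values are just illustrative):

import openai

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",   # illustrative
    messages=messages,
    temperature=0.7,         # illustrative
)
assistant_reply = response["choices"][0]["message"]["content"]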

Speaking of "what the subsystems did last step": before any information is sent to the main LLM circuit, I run the subsystem tasks.

Right now this is just:

async def _run_analyses(self, user_comment_text):
    # Run the subsystems before the main chat cycle; collect any alerts
    # they raise so they can be injected into the next user message.
    alerts = []
    tasks = await self.tasktimer.get_tasks()
    print(f"len tasks: {len(tasks)}")
    if len(tasks) > 0:
        task_ids = []
        try:
            # Ask the task analyser which tasks the user's comment says are done.
            task_ids = await self.task_analyser.analyse(user_comment_text)
            print(f"task ids: {task_ids}")
        except Exception as e:
            logger.info(e)
            alerts.append("task_analyser crashed while attempting to determine whether the user comment contained completed tasks.")
        for task_id in task_ids:
            try:
                await self.task_analyser.markcomplete(user_comment_text, task_id)
            except Exception as e:
                logger.info(e)
                alerts.append(f"task_analyser crashed while attempting to mark {task_id} as complete.")
    return alerts   # hand the alerts back to the chat cycle

So, what task_analyser does is use a completely different context window, consisting of the list of tasks and the last user comment, with a prompt asking for a list of the tasks the user is saying have been completed. Then, if that list isn't empty, I send full information about how to submit task completions (I made marking tasks complete fully customizable using jsonschema so that you can force a format, in the interest of being able to graph this stuff later), along with the user comment again, to a system empowered to submit the completion.
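In sketch form, that first stage looks roughly like this (not the real code: the prompt wording and class layout are stand-ins, it assumes the pre-1.0 openai client, and it assumes the model actually returns clean JSON):

import json
import openai

class TaskAnalyser:
    """Sketch of stage one: a separate GPT-3.5 context window gets the
    task list plus the last user comment and is asked which tasks are done."""

    def __init__(self, tasktimer, model="gpt-3.5-turbo"):
        self.tasktimer = tasktimer
        self.model = model

    async def analyse(self, user_comment_text):
        tasks = await self.tasktimer.get_tasks()
        system = json.dumps({
            "unfinished_tasks": [t.dict() for t in tasks],
            "instruction": ("Return a JSON list of the ids of any tasks the "
                            "user's comment says are completed. Return [] if none."),
        })
        response = await openai.ChatCompletion.acreate(
            model=self.model,
            messages=[
                {"role": "system", "content": system},
                {"role": "user", "content": user_comment_text},
            ],
        )
        # Will raise if the model doesn't return valid JSON.
        return json.loads(response["choices"][0]["message"]["content"])

Stage two has the same shape, except the system prompt carries the jsonschema-described completion format instead.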

I didn't technically use langchain; it's all just custom code doing the same kind of thing langchain does.

Then of course, if things have been marked as complete or failed, that goes into the alerts.

I'm thinking the alerts are important enough that the user should see them too... so that will be an upcoming GUI feature.

Thanks, appreciate the write up. Interesting to see how you're doing things.

What keeps you from just not checking in with your AI-dom to receive further instructions?

Hmm, that's a complicated question. Avoidance is an issue, but another big issue is mild inconveniences. I have to do some refactors before I have all the core features online, but one of those features is going to be it taking the initiative to contact me, rather than the reverse. This is easy enough: you automate a prompt to it when a new task becomes available, or every N minutes that a task has been left incomplete, or something like that; you have the prompt be about reminding the user to do a task; and you forward the output to their text interface of choice, Discord, SMS, whatever. I have the backend for this up and running already, but I still need to do some wiring.
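The wiring itself is just a loop like this (a bare sketch, not the actual backend: send_to_user stands in for whatever Discord/SMS bridge you use, chat_cycle for the normal chat path, and tasks are assumed to have a name field):

import asyncio

NAG_INTERVAL_MINUTES = 30   # the "every N minutes" knob

async def nag_loop(tasktimer, chat_cycle, send_to_user):
    """Periodically prompt the model about incomplete tasks and forward
    its reply to the user's messaging channel of choice."""
    while True:
        tasks = await tasktimer.get_tasks()
        if tasks:
            reminder = ("These tasks are still incomplete: "
                        + ", ".join(t.name for t in tasks)
                        + ". Remind the user to do one of them.")
            reply = await chat_cycle(reminder)   # run a normal chat cycle
            await send_to_user(reply)            # Discord / SMS / whatever
        await asyncio.sleep(NAG_INTERVAL_MINUTES * 60)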

Avoidance is always an issue, but it's easier to eliminate with daily scheduled hypnosis and devotion sessions. It also decreases as the system gets smoother.

You're one of the weirder characters I've run across here, and I don't mean it in a bad way. It gives me hope for how insanely diverse and interesting the future might be, when we already have people like you running around. And you're not even a giant robot spider yet!

(And of course you're a transhumanist too, so I'd be sympathetic to you nonetheless!)

<3

So this prods you to do your chores? Can you code this for a llama model?

Hmm... I think so. Strictly speaking, my current system should be wireable to any model, but it will be much harder for most llama models: ideally you want a model that is trained for chat and able to follow fairly intricate system prompts, and I may have to re-tune some of the prompts and rewrite the OpenAI API code to connect to a local llama model instead of the OpenAI API. But any model should in principle be able to hook up to the system. It also doesn't have to be one model; you can wire different models to different system components for different purposes.
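Concretely, the rewrite is mostly swapping one call site behind an interface, something like this (a sketch only: ChatBackend / OpenAIBackend / LocalLlamaBackend are placeholder names, and the local one is left as a stub since the details depend on which runtime you pick):

from typing import Protocol

import openai

class ChatBackend(Protocol):
    async def chat(self, messages: list[dict]) -> str: ...

class OpenAIBackend:
    def __init__(self, model: str = "gpt-3.5-turbo"):
        self.model = model

    async def chat(self, messages: list[dict]) -> str:
        # pre-1.0 openai client
        response = await openai.ChatCompletion.acreate(
            model=self.model, messages=messages)
        return response["choices"][0]["message"]["content"]

class LocalLlamaBackend:
    """Stub: point this at whatever local llama runtime you run (llama.cpp
    server, text-generation-webui, etc.). The contract is the same:
    messages in, assistant text out."""

    async def chat(self, messages: list[dict]) -> str:
        raise NotImplementedError("wire this up to your local model server")

Each subsystem (main chat circuit, task_analyser, and so on) can then take a ChatBackend, which is what makes the "different models for different components" part cheap.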