Drawing assistant
Here is an empty canvas and a prompt. What do you want to be drawn on the canvas?
Hint: use incremental steps to bring your ideas to life.
Enter an OpenAI API key. It will only be used by this page to call the OpenAI API.
Enter instructions for the drawing assistant. Results are generally better when you add instructions in several steps. Example:

Fill a small section towards the bottom with a dark green color
Assistant working…
Add a red house with a black roof standing on the ground
…
While the assistant is working, the canvas operations are printed in the log area.
Have fun!
Tech notes
Assistant creation, thread state management, and calling the tool implementations. A version hash makes it possible to reuse an existing assistant without storing any state.
import OpenAI, { BadRequestError } from "openai";
import type { RunSubmitToolOutputsParams } from "openai/resources/beta/threads/runs/runs.mjs";
import { assert, sha1Hash, timeout } from "./util";
import { CanvasTools, getAssistantTools } from "./canvas-tools";
const models = ["gpt-4-1106-preview", "gpt-3.5-turbo-1106"];
export const getAssistant = async (openai: OpenAI) => {
  for (const model of models) {
    try {
      return await getAssistantWithModel(openai, model);
    } catch (e) {
      if (e instanceof BadRequestError && e.code === "model_not_found") {
        console.debug(`[assistant] model not found: ${model}`);
      } else {
        throw e;
      }
    }
  }
  throw new Error(`getAssistant: no assistant found or created`);
};
const createBody = {
  instructions: `You are a drawing assistant. You have access to the specified CanvasRenderingContext2D methods and properties as function tools. You interpret drawing instructions from the user and translate them into actions using the provided tools. You can ask for clarification if instructions are ambiguous. Your message should not include code. Your message should not include links. Your message should not include images.`,
  name: "Drawing Assistant",
  tools: getAssistantTools(),
};
const getAssistantWithModel = async (openai: OpenAI, model: string) => {
  const versionHash = await sha1Hash(createBody);
  const name = `${createBody.name} v-${versionHash} ${model}`;
  return (
    (await findAssistant(openai, name)) ??
    (await createAssistant(openai, name, model))
  );
};
const createAssistant = async (openai: OpenAI, name: string, model: string) => {
  const assistant = await openai.beta.assistants.create({
    ...createBody,
    name,
    model,
  });
  console.debug(`[assistant] created assistant ${assistant.id} / '${name}'`);
  return assistant;
};
const findAssistant = async (openai: OpenAI, name: string) => {
  for await (const assistant of openai.beta.assistants.list()) {
    if (assistant.name === name) {
      console.debug(
        `[assistant] using existing assistant ${assistant.id} / '${name}'`
      );
      return assistant;
    }
  }
  return null;
};
export const createThread = async (openai: OpenAI) => {
  return await openai.beta.threads.create();
};
export const runAssistant = async (
  openai: OpenAI,
  assistant: OpenAI.Beta.Assistants.Assistant,
  thread: OpenAI.Beta.Threads.Thread,
  tools: CanvasTools,
  userMessage: string
) => {
  await openai.beta.threads.messages.create(thread.id, {
    role: "user",
    content: userMessage,
  });
  let run: OpenAI.Beta.Threads.Run | null =
    await openai.beta.threads.runs.create(thread.id, {
      assistant_id: assistant.id,
      instructions: `The canvas dimensions are ${tools.width}x${tools.height}.`,
    });
  while ((run = await handleRun(openai, thread.id, run, tools)) !== null) {
    await timeout(500);
  }
  const messages = await openai.beta.threads.messages.list(thread.id);
  const assistantMessage = messages.data.at(0);
  assert(assistantMessage, "runAssistant: unexpected missing last message");
  return assistantMessage.content.reduce(
    (acc, content) =>
      acc + ((content.type === "text" && content.text.value) || ""),
    ""
  );
};
const handleRun = async (
  openai: OpenAI,
  threadId: string,
  run: OpenAI.Beta.Threads.Run,
  tools: CanvasTools
) => {
  switch (run.status) {
    case "queued":
    case "in_progress":
      return await openai.beta.threads.runs.retrieve(threadId, run.id);
    case "requires_action":
      assert(
        run.required_action,
        "handleRun: unexpected missing run.required_action"
      );
      return await handleRequiredAction(
        openai,
        threadId,
        run.id,
        run.required_action,
        tools
      );
    case "completed":
      return null;
  }
  const message = `unexpected run status ${run.status}`;
  console.error(`[assistant] ${message}`, run);
  throw new Error(message);
};
const handleRequiredAction = async (
  openai: OpenAI,
  threadId: string,
  runId: string,
  action: OpenAI.Beta.Threads.Run.RequiredAction,
  tools: CanvasTools
) => {
  assert(
    // eslint-disable-next-line @typescript-eslint/no-unnecessary-condition
    action.type === "submit_tool_outputs",
    `unknown action type: ${action.type}`
  );
  const tool_outputs: RunSubmitToolOutputsParams.ToolOutput[] = [];
  for (const toolCall of action.submit_tool_outputs.tool_calls) {
    assert(
      // eslint-disable-next-line @typescript-eslint/no-unnecessary-condition
      toolCall.type === "function",
      `unknown tool call type: ${toolCall.type}`
    );
    tools.call(toolCall.function);
    // what _should_ the result be?
    tool_outputs.push({ tool_call_id: toolCall.id, output: "OK" });
  }
  return await openai.beta.threads.runs.submitToolOutputs(threadId, runId, {
    tool_outputs,
  });
};
Defining assistant function tools for a subset of the Web Canvas API. For example:
{
  name: "fillRect",
  description:
    "The CanvasRenderingContext2D.fillRect() method draws a filled rectangle at the specified coordinates.",
  parameters: {
    type: "object",
    properties: {
      x: {
        type: "number",
        description: "The x-axis coordinate of the rectangle's starting point.",
      },
      y: {
        type: "number",
        description: "The y-axis coordinate of the rectangle's starting point.",
      },
      width: { type: "number", description: "The rectangle's width." },
      height: { type: "number", description: "The rectangle's height." },
    },
    required: ["x", "y", "width", "height"],
  },
},
{
  name: "fillStyle",
  description:
    "The CanvasRenderingContext2D.fillStyle property of the Canvas 2D API specifies the color, gradient, or pattern to use inside shapes. The default style is #000 (black).",
  parameters: {
    type: "object",
    properties: {
      fillStyle: {
        type: "string",
        description:
          "A DOMString parsed as CSS <color> value, a CanvasGradient object, or a CanvasPattern object. It defines the color, gradient, or pattern to use inside shapes.",
      },
    },
    required: ["fillStyle"],
  },
},
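The getAssistantTools function from canvas-tools is not shown; presumably it wraps each raw definition in the { type: "function", function } envelope the Assistants API expects. A hedged sketch (functionDefinitions here is a stand-in, not the real module):

```typescript
// Hypothetical sketch of getAssistantTools from "./canvas-tools":
// wrap each raw definition in the { type: "function", function } shape
// that the Assistants API expects.
const functionDefinitions = [
  {
    name: "fillRect",
    description: "Draws a filled rectangle at the specified coordinates.",
    parameters: {
      type: "object",
      properties: {
        x: { type: "number" },
        y: { type: "number" },
        width: { type: "number" },
        height: { type: "number" },
      },
      required: ["x", "y", "width", "height"],
    },
  },
  // ...fillStyle and the rest of the definitions
];

export const getAssistantTools = () =>
  functionDefinitions.map(fn => ({ type: "function" as const, function: fn }));
```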
ChatGPT generated all the tool definitions 😁
Describe the following CanvasRenderingContext2D instance methods: fillStyle, (...)
They should be described in json the following way:
```
{
  "name": "roundRect",
  "description": "The CanvasRenderingContext2D.roundRect() method of the Canvas 2D API adds a rounded rectangle to the current path.",
  "parameters": {
    "type": "object",
    "properties": {
      "x": {"type": "number", "description": "The x-axis coordinate of the rectangle's starting point, in pixels."},
      "y": {"type": "number", "description": "The y-axis coordinate of the rectangle's starting point, in pixels."},
      "width": {"type": "number", "description": "The rectangle's width. Positive values are to the right, and negative to the left."},
      "height": {"type": "number", "description": "The rectangle's height. Positive values are down, and negative are up."}
    },
    "required": ["x", "y", "width", "height"]
  }
}
```
Mapping of tool calls to the actual canvas context methods/properties.
Parameters needed some patching, in particular because of Math.PI arithmetic occurring 😬
private parseParams(
  fn: RequiredActionFunctionToolCall.Function
): Record<string, unknown> {
  if (fn.arguments === "") {
    return {};
  }
  try {
    const args = fn.arguments
      .replace(/[\d\s.*/+-]*Math\.PI[\d\s.*/+-]*/g, match => `${eval(match)}`)
      .replace(/[\d\s.*/+-]*[*/+-]+[\d\s.*/+-]*/g, match => `${eval(match)}`);
    const data = JSON.parse(args);
    assert(
      typeof data === "object",
      `parseParams: unexpected non-object ${args}`
    );
    return data;
  } catch (error) {
    // eval or JSON.parse may throw here
    console.error(`[tools] failed to parse arguments`, fn);
    throw error;
  }
}
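To illustrate the patching, here is a small standalone demo of the same two regex replacements applied to arguments containing Math.PI arithmetic (the arc-style argument string is a made-up example):

```typescript
// Standalone demo of the argument patching used in parseParams: evaluate
// Math.PI expressions and plain arithmetic so the result is valid JSON.
const patchArguments = (raw: string): string =>
  raw
    .replace(/[\d\s.*/+-]*Math\.PI[\d\s.*/+-]*/g, match => `${eval(match)}`)
    .replace(/[\d\s.*/+-]*[*/+-]+[\d\s.*/+-]*/g, match => `${eval(match)}`);

const raw = '{"x":100,"y":100,"radius":50,"startAngle":0,"endAngle":2*Math.PI}';
const patched = patchArguments(raw);
console.log(JSON.parse(patched).endAngle); // 6.283185307179586
```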