Sam Breed

Product Developer, Investor

AI Coding-ish

Thoughts on new tools


I’ve been using LLMs for general coding tasks since late 2020. This is where I’ve started to corral together thoughts and techniques for “doing computers” with LLMs in the passenger seat.

LLMs as Search Engines

Plain language questions can be easier to write than good search queries.

Sometimes describing a situation is necessary to come up with good search terms anyway. Here’s a prompt about finding empty markdown files compared with equivalent search terms. I went to ChatGPT first, then followed up with searches on DuckDuckGo and Google.
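For illustration, the two formulations went roughly like this (paraphrased, not the exact text I used):

Prompt: “I have a directory full of markdown files and some of them are empty. How do I find all of the empty ones from the terminal?”
Search query: “find empty markdown files command line”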

The results were roughly equal.

The prompt quickly produced a bash script with a single call to find with appropriate arguments, plus detailed instructions for saving and running the file. It jogged my memory: yes, of course this is a one-liner with find that I have never bothered committing to memory. It was a bit wordy for the one-line answer my question really needed, but I was satisfied nonetheless.
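For the record, the one-liner in question is something like this (my reconstruction, not the chat’s verbatim output):

find . -type f -name '*.md' -empty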

The searches led to the same StackOverflow question and incorporated the meat of the content directly into the results listing, effectively one-shotting the question by printing the find call from the answer. That answer isn’t tailored to markdown files, though, so if I were missing some basics I might have struggled to substitute the correct glob for my markdown files and had a bad time. Overall more of a hit than a miss, but it lacked context specific to my question.
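In other words, the result surfaces something like the generic form, and tailoring it to markdown is left to the reader:

find . -type f -empty                 # the generic form from the answer
find . -type f -name '*.md' -empty    # with the markdown glob substituted in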

Two notable observations:

  1. The LLM returns passable results for both the prompt and the search query.
    • In contrast, the search engine only performs well with the search query and delivers lower quality results with the prompt. I actually prefer the LLM’s response to the search query over its response to the prompt, as it leaves out the supplementary details about saving and running the bash script.
  2. The search query is a compressed expression of the problem statement in the prompt.
    • I probably made the prompt a bit more explicit and redundant for the model’s sake, so it reads as nakedly “prompt language” rather than a “true” expression of the problem statement, but it came out in one fluid motion and I didn’t linger on any of the details.

Recipes

Here are a few examples of novel or interesting workflows I’ve come across and found helpful. I’ve included some embeds of chats using one of the tools I maintain for work.

Tools

First up, a small shell helper that turns a plain-language request into a shell command via the OpenAI chat completions API, prints the suggested command, and asks for confirmation before running it.

#!/bin/bash
# Replace with your own key; consider reading it from an environment variable instead.
TOKEN="<your OpenAI token goes here>"
PROMPT="You are the best at writing shell commands. Assume the OS is Ubuntu. I want you to respond with only the shell commands separated by semicolons and no commentary. Here is what I want to do: $*"
# Build the request body with jq so any quotes in the prompt are escaped safely.
BODY=$(jq -n --arg content "$PROMPT" \
  '{model: "gpt-3.5-turbo", messages: [{role: "user", content: $content}]}')
RESULT=$(curl -s https://api.openai.com/v1/chat/completions \
  -H 'Content-Type: application/json' \
  -H "Authorization: Bearer $TOKEN" \
  -d "$BODY" | jq -r '.choices[0].message.content')
echo "$RESULT"
# Default to "n" so nothing runs unless you explicitly opt in.
read -rp "Execute? [n]: " input_var
input_var=${input_var:-n}
[ "$input_var" = "y" ] && bash -c "$RESULT"
