A lot of tools are what I would call intention multipliers: they help you achieve a particular goal more efficiently or easily. My definition of “tool” here is an expansive one – beyond power drills and screwdrivers, some things I would include are:
- Everything described in my post on useful tech products
- Delaware C-corps, chargebacks, judicial review
- Freedom of Information requests, the ability to escalate complaints to the ICO
- Social technologies like date-me docs
- Formal languages for maths / programming / logic
Handy as these tools all are, you might very well not realise they exist without someone telling you about them, even if they would solve a problem you’re struggling with. One of the reasons people hire lawyers is that they are much better-acquainted than laypeople with the legal structures and tools available, and so can get you on the right track quickly. In general, domain knowledge is an extremely valuable kind of meta-tool.
LLMs can perform this role much more cheaply and conveniently than traditionally-hired experts, and my impression is that people massively under-use them on a day-to-day basis. While search engines generally aren’t helpful for answering questions like “How do I set up a new UK AI safety university group so that we can receive grant funding and open a bank account?”, LLMs are – and this can often unstick your projects.
Which model?
- Currently I prefer Claude 3.5 Sonnet to GPT-4o for most tasks.
- I basically never use o1 because I don’t subscribe to ChatGPT Plus; possibly I should pay for that too for the optionality
- If you’re on the free version and run out of credits, you can use the Poe wrapper for more messages
- (But you should probably just pay for the Pro/Plus version!)
- Pay-as-you-go with an API key would work out cheaper, but it’s sufficiently more hassle that I expect it’s not worth it for most people (there’s a short sketch of what this looks like after this list).
- A while ago I installed LWE to use through the terminal; now I never use it because the web chat interfaces are much cleaner. It was pretty fiddly to set up too.
- ChatGPT has more nice features, e.g. sharing chats, making images (if you subscribe), voice mode, web browsing, Python code execution, and reliably tidy markdown & LaTeX rendering
- Claude is notably better at working with PDFs, however
- For Claude, ensure Artifacts are enabled, as this makes copying across code files a lot easier
- Perplexity is great for quick answers to numeric questions (e.g. “how much was Airtable valued at in its last fundraising round?”) and finding films/books from a very vague description
- See also Elizabeth’s writeup
- There is also Elicit, focussed on scientific papers, but I haven’t used it much
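If you do want to try the pay-as-you-go route mentioned above, a minimal sketch with the OpenAI Python client looks something like the following (the model name and question are just placeholders, and the Anthropic client works similarly):

```python
# pip install openai
# Assumes your key is in the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",  # swap in whichever model you want to pay for
    messages=[
        {
            "role": "user",
            "content": "How do I set up a UK university group so it can receive grant funding?",
        },
    ],
)
print(response.choices[0].message.content)
```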
Things you can do
- Proofreading long documents
- Beware that in my experience it will hallucinate even when heavily prompted not to, and will miss some points, but it has also surfaced issues that I wouldn’t have spotted otherwise
- Transcription of images, cleaning up text copied from PDFs
- Answering factual legal / tax questions, e.g. Roth IRA vs Traditional IRA; how corporate redundancy packages typically work
- Doing basic due diligence on contracts you’re signing, e.g. “summarise the provisions in this NDA”; “identify any differences between this document and the standard YC SAFE attached”
- Getting recommendations for potential frameworks / libraries / APIs to use in a particular software project
- You’re much less likely to get a decisive answer than when asking a friend, though – so if you’re prone to analysis paralysis this can be counterproductive
- Making English worksheets for tutees (I’ve found that for maths the problems are often too hard or too easy, and Dr Frost works great anyway)
- Figuring out keyboard shortcuts / how to do things in online software like GDocs
- Cowriting boring documents, e.g. a participants’ agreement for a weekend retreat I’m running
- The key here is to get it to redraft and redraft until it looks how you want. Or just make the changes yourself, if you know what to change it to – cf. Cunningham’s Law: maybe the best way to write a doc well is to have an LLM write it badly first.
- Coming up with recipe ideas given time & ingredient constraints
- Doing interview preparation e.g. by uploading some articles I was meant to read in advance, explaining them, and suggesting potential questions I’d be asked
- Reviewing my code and suggesting improvements to it (ChatGPT voice mode is very good for practising explaining your thinking out loud as you’re doing it and getting feedback)
- Reconstructing names of people/companies/concepts I misheard but know the context of
- Responding to customer service email threads where the other person really just isn’t understanding my point – just paste in the whole thread and tell it to draft a reply[^1]
- You can then open a new window and pretend you’re the customer service agent asking for what to write back with! This is great for pre-empting their objections and also making sure you’re being reasonable.
- Might also be good for times when you’re asking LLMs for social advice (although in those cases you’re usually describing the situation rather than pasting in the objective facts from a single email chain, and so you’re probably at risk of distorting reality in addition to getting a sycophantic answer). Sometimes prompting with “are you totally sure? isn’t the opposite true?” will surface considerations you hadn’t thought of, though obviously it’s not that helpful for coming to a judgement.
- Summarising and collating natural-language feedback from a CSV dump of form responses (there’s a rough sketch of this at the end of this list)
- Replying to emails
- Use sparingly! I think a lot of the time it is faster/better to do this yourself, because you have way more context on the recipient & conversation than it’s feasible to type out for the LLM.
- If the email is important, you should at the very least edit it to be in your own tone after starting from an initial LLM draft.
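For the CSV-summarisation idea above, here’s a rough sketch of one way to script it, assuming the same OpenAI client as before and a made-up column name (“comments”):

```python
# Collate free-text form responses and ask an LLM to summarise the themes.
import csv

from openai import OpenAI

client = OpenAI()

with open("feedback.csv", newline="") as f:
    # Assumes the free-text answers live in a column called "comments".
    comments = [row["comments"] for row in csv.DictReader(f) if row["comments"].strip()]

prompt = (
    "Below are free-text feedback responses from a form, one per line. "
    "Group them into themes and summarise each theme, quoting a couple of "
    "representative responses.\n\n" + "\n".join(comments)
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```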
I don’t really use LLMs for academic work because they seem to be pretty bad at doing philosophy & explaining economic intuitions. They’re OK as rubber ducks for explaining things to and zooming in on confusions, though, and they’re sometimes helpful for formulating thought experiments.
See also
The classic “Things you’re allowed to do”, and Saul’s university edition (which has more links at the bottom).
[^1]: I hadn’t heard of it until recently, but Patrick McKenzie describes this as adopting the tone of a “dangerous professional”. I think I arrived at it through the route of “eleven-year-old who’s spent too much time reading about the Consumer Contract Regulations in Which? & enjoys citing legal clauses with fancy section names”, but in any case it often works. Resolver is sometimes useful here if you can’t find an easy way to contact the company.