Natural language is familiar and flexible — it’s always been the interface between people and people. Now, with the recent developments in large language models (LLMs), we are finally entering a world where it can be the interface between people and machines.
When LLMs first launched, their primary UI was the chatbot. Many third-party apps still provide their own customized version of this, but developer-friendly LLM APIs have given rise to new interaction patterns. You may have noticed a new “✨Generate with AI” button (sparkle emoji prefix and all) appearing in many of your go-to apps.
UIs aside, when companies say they’re using generative AI, what are they actually using it for? In this post, we highlight the five main ways we’ve seen LLMs integrated into products, illustrated with ten real-world examples.
A Brief Overview of NLP Tasks
Many of the products we’ve seen integrate some variation or combination of classic natural language processing (NLP) tasks. Having been trained on so much data, LLMs capture many of the patterns inherent in language, and consequently excel at these tasks.
When it comes to NLP, all tasks can be boiled down to some sort of “transformation”, where information is changed from one form to another. Each of these transformations lies somewhere on the spectrum of “reducing the input” to “enriching the input”:
Most approaches we’ve seen fall into the following five categories, which we’ll cover in this post:
| Task | Typical implementation |
| --- | --- |
| 1. Information Retrieval/Search | LLM + vector DB |
| 2. Summarization | LLM only (with some prompt wrapping) |
| 3. Structured translation (for machine APIs) | LLM only (with some prompt wrapping) |
| 4. Unstructured translation (for human users) | LLM only (with some prompt wrapping) |
| 5. Text Generation | LLM only (with some prompt wrapping) |
1. Information Retrieval/Search
If you’ve used Google (or any other search engine), you’re likely familiar with information retrieval and search. It can refer to:
- Finding information in a single given document
- Finding the correct document (or excerpt) across a larger corpus of documents
LLMs change the picture like so:
|  | Traditional search engines | LLMs like OpenAI’s ChatGPT |
| --- | --- | --- |
| Ranking based on | Keyword matching | Semantic matching, i.e. does the content seem conceptually related to the query |
| Example query | “cafe open late nomad wifi” | “What are some cafes or similar places, open late near Nomad and suitable for working on a laptop?” |
📌 On supplementing ChatGPT’s knowledge base
By itself, ChatGPT can’t answer questions about content it hasn’t been trained on. However, it has a context window of 16,000 tokens, which works out to roughly 12,000 words of “memory”. This memory can include:
- Previous conversation history (if any)
- Additional info that ChatGPT hasn’t been trained on
- Your actual query
Say you want to use ChatGPT to do smarter search over your own internal documentation. The total length of your docs exceeds 12,000 words by a long shot. However, if you have another means of figuring out which doc (and for longer docs, which “chunk” of a particular doc) likely contains what you’re looking for, you can provide that chunk as context for your question to ChatGPT.
Luckily, vector DBs do just that — assuming that you’ve already split your docs into LLM-friendly chunks and stored these in the DB in vector form, you can subsequently:
- Encode your search query as a vector
- Query the DB to find the top X chunks most similar to the query vector from (1)
- Provide each of these to the LLM along with the original search query, to see if it can determine the answer with that extra context
(See an example of how someone did this for their internal work documentation.) This pattern is commonly known as Retrieval Augmented Generation (RAG).
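To make those three steps concrete, here’s a minimal sketch in Python. The `embed()` and `ask_llm()` functions are hypothetical stand-ins for whichever embedding and chat-completion APIs you use, and `chunk_vectors` is assumed to hold one unit-length embedding per pre-split chunk:

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Hypothetical call to an embedding API; returns a unit-length vector."""
    raise NotImplementedError

def ask_llm(prompt: str) -> str:
    """Hypothetical call to a chat-completion API."""
    raise NotImplementedError

def answer(query: str, chunks: list[str], chunk_vectors: np.ndarray, top_k: int = 3) -> str:
    # (1) Encode the search query as a vector
    q = embed(query)
    # (2) Find the top X chunks most similar to the query
    # (for unit-length vectors, dot product == cosine similarity)
    scores = chunk_vectors @ q
    best = np.argsort(scores)[-top_k:][::-1]
    context = "\n\n".join(chunks[i] for i in best)
    # (3) Hand the retrieved chunks to the LLM alongside the original query
    return ask_llm(
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )
```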
Dropbox is a file storage product, which puts it in a perfect strategic position to let users search/query their own private docs (PDFs, PowerPoints, text files, etc.).
The AI-enhanced version of their Dropbox Dash product provides natural-language search, and specifically cites which files each answer is derived from. For instance, if your company stores all of its artifacts in Dropbox, it becomes very easy to ask company-specific questions:
CommandBar’s HelpHub product allows companies with public-facing websites to easily create a chatbot interface to answer their customers’ questions.
To set up this workflow, you first specify which webpages should be used as input sources for your particular HelpHub bot:
You can then either (1) issue a natural language query to get a list of relevant search results or (2) interact with a chatbot that automatically synthesizes the sources into an answer.
2. Summarization
Summarization is another widespread application of LLMs. You can think of “summarization” as a task where a larger chunk of text is condensed into shorter form, while preserving the most important information.
The following applications are all some variation of this:
- Outlining a document
- Distilling a meeting transcript into notes
- Splitting a detailed tech spec into tickets
- Classification – bucketing text snippets (e.g. customer reviews) as “positive” or “negative”, or otherwise applying some relevant tag (see the sketch below)
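The classification case, for instance, can be as thin as a prompt wrapper. A minimal sketch, with `ask_llm()` again as a hypothetical wrapper around your chat-completion API:

```python
def ask_llm(prompt: str) -> str:
    """Hypothetical call to a chat-completion API."""
    raise NotImplementedError

def classify_review(review: str) -> str:
    # Constrain the model to a fixed label set so downstream code can rely on it
    prompt = (
        "Classify the customer review below as exactly one of: "
        "positive, negative, neutral. Reply with the label only.\n\n"
        f"Review: {review}"
    )
    return ask_llm(prompt).strip().lower()

# e.g. classify_review("Shipping was fast, but the box arrived crushed.")
```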
Notion is a productivity/note-taking app that has integrated AI capabilities into its editor. Among the other capabilities it exposes via shortcut buttons, it lets you “summarize” a chunk of text.
For example, here is some source text:
When summarizing, NotionAI provides the following:
When generating an outline, NotionAI provides the following:
Superpowered is an AI notetaker for meetings — in addition to using a speech-to-text model to transcribe what was said during a meeting, it also passes this text through an LLM to extract (1) a summary and (2) any potential action items.
3. Structured Translation (for Machine APIs)
One particularly effective application of LLMs is being able to easily transform natural language into structured form. In the past, user-machine interactions were limited by the fact that machine interfaces expect structured input like JSON or SQL. LLMs give your app’s users the flexibility to ask for what they want in plain English (or any other natural language).
This significantly reduces the learning curve! Your users are much more likely to get up and running with your app’s core functionality, rather than getting stuck reading docs. You can now provide:
- More intuitive query bars. Without having to change their existing structured query interface, any given app can put a layer of “natural language” on top. Users can now type a single question instead of filling out a series of input fields, applying filters, or having to learn app-specific query syntax.
- Cross-API orchestration. There are also quite a few new apps where the LLM serves as the core “workflow glue” between various third-party APIs. As an analogy, a secretary might receive a high-level goal from their boss (“organize a client dinner”) and accomplish it by breaking it down into discrete steps, some of which involve reaching out to other folks (e.g. messaging clients, calling a caterer, etc). Similarly, LLMs can break a natural-language directive into smaller parts, calling out to APIs to either fetch information they need or execute actions in some external system.
Booking.com is a travel search engine that allows people to find the best deals on flights, hotels, and rental cars. Normally, users have to enter their itinerary into a series of input fields:
With an LLM-integrated chatbot, users can instead describe their itinerary in plain language, which is likely handled as follows:
- Raw user input is wrapped in a more sophisticated prompt, where ChatGPT’s native language abilities can restructure it into a known format (e.g. JSON with fields `origin`, `destination`, etc).
- This postprocessed, structured input is then issued to the existing Booking.com API
- A prompt wrapper also instructs ChatGPT how to handle the resulting response (which follows some known format).
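A rough sketch of that three-step pipeline, where `ask_llm()` is a hypothetical chat-completion wrapper and `search_trips()` is a made-up stand-in for the real Booking.com API:

```python
import json

def ask_llm(prompt: str) -> str:
    """Hypothetical call to a chat-completion API."""
    raise NotImplementedError

def search_trips(origin: str, destination: str, date: str) -> dict:
    """Made-up stand-in for the existing structured search API."""
    raise NotImplementedError

def handle_message(message: str) -> str:
    # (1) Prompt-wrap the raw input and ask for a known structure
    raw = ask_llm(
        "Extract the trip details from the message below as JSON with keys "
        '"origin", "destination", and "date" (YYYY-MM-DD). Reply with JSON only.\n\n'
        f"Message: {message}"
    )
    params = json.loads(raw)
    # (2) Issue the structured query to the existing API
    results = search_trips(**params)
    # (3) Prompt-wrap the structured response into a conversational reply
    return ask_llm(
        "Summarize these trip options for the user in a friendly, concise way:\n"
        f"{json.dumps(results)}"
    )
```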
See the exchange below:
Linear is a work-management app that allows users to create and track tasks. To find specific subsets of tasks, users often apply a series of filters. However, these filters are not always obvious to new users, and can become tedious to reapply.
One way around this is saved filters – but why not improve the querying experience from the get-go? Linear has implemented AI filters that translate an English query like “open bugs with SLAs” into the filters their original API expects:
What the user types in:
Which the LLM will restructure into the following:
4. Unstructured Translation (for Human Users)
Translation is typically thought of as going from one language to another (e.g. English to Spanish) – and LLMs are indeed capable of doing this.
However, LLMs are also capable of “translating” in the following sense:
- Rewriting a passage with a different tone, or for a different target audience
- Explaining code as English (or vice versa)
- Refactoring code
Mutiny is a no-code app that enables non-technical users to update their company’s marketing websites (and to observe the effects via A/B testing).
They’ve integrated AI into their website editor, so that users can select specific text and request that it be rewritten to be “friendlier” or “more aimed towards clients in financial services”.
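Under the hood, a feature like this can be a thin prompt wrapper over the selected text. A minimal sketch (with a hypothetical `ask_llm()`, not Mutiny’s actual implementation):

```python
def ask_llm(prompt: str) -> str:
    """Hypothetical call to a chat-completion API."""
    raise NotImplementedError

def rewrite(text: str, instruction: str) -> str:
    return ask_llm(
        f"Rewrite the passage below to be {instruction}. "
        "Preserve the meaning and keep roughly the same length.\n\n"
        f"Passage: {text}"
    )

# e.g. rewrite(selected_text, "friendlier")
# e.g. rewrite(selected_text, "more aimed towards clients in financial services")
```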
Replit is a collaborative browser-based IDE, with a recent AI feature that allows users to ask questions about their code, refactor it, debug it, and more. It’s based on OpenAI’s Codex model, which was specifically trained on code (as opposed to ChatGPT, which was trained on natural language).
5. Text Generation
Pre-LLM algorithms could still achieve the previous NLP tasks (“reduction” and “translation”) to some extent, even if the results weren’t as high quality.
LLMs truly unlocked text generation: producing large amounts of text where there was none before. You can give the LLM a high-level seed and ask it to brainstorm on your behalf (e.g. “Give me five different titles for this blog post”).
The possibility space is quite wide, especially if you take into account that:
- “Output text” can represent more than just natural language: it can also be HTML, UML diagram syntax, etc., meaning the output can be visual
- Generation doesn’t have to be one-shot: it can happen in an interactive, recursive way (e.g. “Give me five options”, “Now for option X, break it down into main sections”, “Now for each of these sections, write a short paragraph”), as sketched below
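Here’s a minimal sketch of that interactive, recursive style of generation, again with a hypothetical `ask_llm()` wrapper:

```python
def ask_llm(prompt: str) -> str:
    """Hypothetical call to a chat-completion API."""
    raise NotImplementedError

def draft_post(topic: str) -> str:
    # Round 1: brainstorm a title
    title = ask_llm(f"Give me one punchy blog post title about: {topic}")
    # Round 2: break the title down into main sections
    outline = ask_llm(
        f'List four section headings for a post titled "{title}", one per line.'
    )
    sections = [line.strip("- ").strip() for line in outline.splitlines() if line.strip()]
    # Round 3: expand each section into a short paragraph
    paragraphs = [
        ask_llm(f'Write a short paragraph for the section "{s}" of "{title}".')
        for s in sections
    ]
    return "\n\n".join([title] + paragraphs)
```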
Jasper AI is a copywriting tool targeted towards marketers, which uses AI to easily generate blog posts, product descriptions, social media captions, etc.
Giving the user an entirely blank text box is overwhelming, and more likely to require several rounds of trial and error before they get what they want. Instead, Jasper’s UI “bounds” the space of what the LLM can generate by having the user specify things up front (e.g. intended use case, product name, tone, etc), wrapping this all up in a separate prompt to the LLM behind the scenes. This gives Jasper a lot more control, quality, and consistency in the auto-generated text.
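One plausible shape for that behind-the-scenes wrapping: each field the user fills in becomes a constraint slotted into a prompt template. (The template and field names below are illustrative, not Jasper’s actual prompt.)

```python
# Illustrative template; the real product's prompt is not public.
PROMPT_TEMPLATE = (
    "Write a {use_case} for the product '{product_name}'. "
    "Use a {tone} tone and keep it under {max_words} words."
)

def build_prompt(use_case: str, product_name: str, tone: str, max_words: int = 150) -> str:
    # Every user-specified field narrows the space of text the LLM can generate
    return PROMPT_TEMPLATE.format(
        use_case=use_case,
        product_name=product_name,
        tone=tone,
        max_words=max_words,
    )

# e.g. build_prompt("product description", "Acme Widget", "playful")
```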
Tome is an app that lets you use AI to generate and refine full presentation decks from a single input (e.g. “create a presentation about X”). Their straightforward prompt likely abstracts away a bunch of separate calls on the user’s behalf (“Make an outline about X”, “Write a paragraph for each section detailed in the outline”), as well as a potential DALL-E (text-to-image) integration.
For instance, you can input:
Which results in the following outline:
Each section of which gets a slide:
The user can tweak this initial scaffolding in subsequent iteration loops, using other AI features like “rewrite”.
Hopefully, these examples have given you some sense of how to use LLMs in your own product. If you haven’t already, we encourage you to experiment with integrating LLMs into yours. We’re excited to see what you build!