Photo by @AXP_Photography on https://unsplash.com/photos/h_hc9_RMwd0

Tokens, the secret language of AI

by Omar Kamali / June 26, 2023 / in AI, LLM, Tokens

Artificial intelligence has become an inseparable part of our lives, quietly revolutionizing the way we interact with technology. But have you ever wondered how AI systems make sense of the vast amount of information they encounter? It turns out that AI has its own secret language, composed of tiny building blocks called "tokens." In this article, we'll pull back the curtain on AI tokens, shedding light on their essential role in powering the remarkable capabilities of artificial intelligence.

The Lego Bricks of AI

Think of tokens as the Lego bricks of the AI world. They are small, discrete units of information that AI systems use to represent different elements, such as words, images, or sounds. By breaking down complex data into these manageable chunks, AI algorithms can tackle the gargantuan task of understanding and processing diverse information.

No matter which kind of information you feed them, AI systems only ever see a bunch of numbers, also called vectors. Let's take a simple example to illustrate the mapping between a piece of text and its corresponding tokens and numerical vectors. When chatting with ChatGPT, you might write "What is the capital of Moldova?". First, your sentence is broken into small pieces: sometimes a whole word, sometimes just a fragment of one:

  1. What
  2. is
  3. the
  4. capital
  5. of
  6. Mold
  7. ova
  8. ?
You can reproduce this yourself on https://platform.openai.com/tokenizer

Each of these tokens is then converted to a numerical representation, which could look like this:

  1. What: 2061
  2. is: 318
  3. the: 262
  4. capital: 3139
  5. etc...
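
If you'd like to reproduce this in code rather than in the browser, here is a minimal sketch using OpenAI's open-source tiktoken library. The exact pieces and IDs depend on which encoding you pick; the GPT-3-era r50k_base encoding is assumed here, and other models will produce different numbers.

```python
import tiktoken

# Assumed encoding: r50k_base (GPT-3 era). Other models use other encodings
# and will yield different IDs for the same text.
enc = tiktoken.get_encoding("r50k_base")

text = "What is the capital of Moldova?"
ids = enc.encode(text)                    # token IDs, e.g. [2061, 318, 262, ...]
pieces = [enc.decode([i]) for i in ids]   # the text each ID maps back to

for piece, token_id in zip(pieces, ids):
    print(repr(piece), token_id)
```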

The same numbers can have different meanings depending on the AI model and the kind of information at play. In general, there are four different kinds of tokens. This numerical representation is what the AI system uses for the further calculations that produce your answer, but that happens after the tokenization stage and is outside the scope of this article.

Decoding the Written Word - Text Tokens

These tokens represent words, phrases, or characters in written or spoken language. The process of tokenization involves slicing and dicing text: identifying word and sentence boundaries and maintaining consistency through token normalization. Textual tokens give AI models the power to grasp language, decipher sentiments, translate languages, and perform an array of linguistic feats. Typically, text tokens represent short, common words, while longer or uncommon words tend to be broken into smaller pieces, as the sketch below shows.
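
To see this splitting behaviour yourself, you can tokenize a few words of different lengths. A minimal sketch, again assuming the r50k_base encoding (the exact splits vary by tokenizer):

```python
import tiktoken

enc = tiktoken.get_encoding("r50k_base")  # assumed encoding, as above

# Short, common words usually map to a single token, while long or rare
# words are broken into several sub-word pieces.
for word in ["capital", "Moldova", "antidisestablishmentarianism"]:
    pieces = [enc.decode([i]) for i in enc.encode(word)]
    print(word, "->", pieces)
```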

Peering through AI's Digital Eyes - Visual Tokens

Just like we rely on our eyes to see and interpret the world around us, AI systems depend on visual tokens to make sense of images. Visual tokens are generated through ingenious techniques such as image segmentation, object detection, and feature extraction. By breaking down images into these meaningful visual elements, AI models gain the ability to identify objects, track their movements, and even understand the context of entire scenes. From image recognition to self-driving cars, visual tokens play a vital role in transforming pixels into meaningful insights.

Visual tokens corresponding to the recognized objects (credits to YOLACT++)
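
Segmentation and object detection are one route; another common approach, popularized by Vision Transformers, is to cut an image into fixed-size patches and treat each patch as a token. A minimal NumPy sketch of that patching step (real models then project each flattened patch into an embedding vector):

```python
import numpy as np

image = np.random.rand(224, 224, 3)  # stand-in for a real 224x224 RGB image
patch = 16                           # 16x16-pixel patches, as in ViT

h, w, c = image.shape
patches = (
    image.reshape(h // patch, patch, w // patch, patch, c)
         .transpose(0, 2, 1, 3, 4)
         .reshape(-1, patch * patch * c)
)
print(patches.shape)  # (196, 768): 196 visual "tokens" of 768 numbers each
```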

The Language of Sounds - Audio Tokens

But what about sound? That's where auditory tokens come into play. These tokens represent sounds, speech, or audio signals that AI systems encounter. Transforming audio data into tokens involves creating visual representations called spectrograms, deciphering phonemes (distinct speech sounds), and training acoustic models. With auditory tokens at their disposal, AI systems are able to perform tasks such as speech recognition, voice synthesis, music analysis, and audio classification. They're the backbone behind our "beloved" voice assistants, transcription services, and audio experiences.
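
As a concrete illustration of the spectrogram step, here is a minimal sketch using the librosa library. The generated sine wave stands in for real audio, and each spectrogram column can be thought of as one audio "token":

```python
import numpy as np
import librosa

sr = 16000                                 # sample rate in Hz
t = np.linspace(0, 1, sr, endpoint=False)
waveform = np.sin(2 * np.pi * 440 * t)     # 440 Hz tone, one second long

# Mel spectrogram: 80 frequency bands per frame; each column is one frame.
mel = librosa.feature.melspectrogram(y=waveform, sr=sr, n_mels=80)
print(mel.shape)  # (80, number_of_frames)
```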

A Symphony of Information - Multimodal Tokens

The real magic happens when AI systems bring multiple modalities together through multimodal tokens. Multimodal AI systems seamlessly integrate text, images, and audio, and these tokens enable a comprehensive understanding of complex information. Picture this: an AI system analyzing a video and its soundtrack at the same time, gaining further context by combining the visual and audio aspects. In the world of multimedia content analysis, multimodal tokens unlock the ability to unravel the relationships between text, visuals, and audio. They're the catalysts behind automated image captioning, video summarization, and immersive virtual experiences.
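
How the modalities are actually combined varies from model to model, but a common pattern is to project tokens from each modality into vectors of the same size and feed them to the model as one sequence. A toy illustration, not any specific system's API:

```python
import numpy as np

dim = 768                                # shared embedding size (assumed)
text_tokens  = np.random.rand(8, dim)    # e.g. 8 embedded text tokens
image_tokens = np.random.rand(196, dim)  # e.g. 196 visual patch tokens
audio_tokens = np.random.rand(50, dim)   # e.g. 50 audio frame tokens

# One combined sequence mixing all three modalities.
sequence = np.concatenate([text_tokens, image_tokens, audio_tokens])
print(sequence.shape)  # (254, 768)
```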

Why should I care?

As artificial intelligence (AI) continues to shape our world, understanding the concept of tokens becomes increasingly important. Tokens not only play a crucial role in how AI systems process and comprehend information, but they also have practical implications that directly impact user experience and operational costs. Let's explore why you should care about tokens and their significance in this realm.

Information limits: Context window

One key aspect of tokens is their association with the context window, which refers to the maximum number of tokens an AI system can effectively handle at once. Imagine the context window as a frame through which AI systems view and understand text. For example, your whole conversation with ChatGPT has to fit within the context window. Different models have varying context window sizes, typically ranging from a few thousand tokens to hundreds of thousands.

To put it into perspective, let's consider a few examples. A context window of 4,000 tokens, such as the one offered by an early version of ChatGPT, may encompass a medium-sized article or a substantial portion of a book chapter. On the other hand, a context window of 16,000 to 32,000 tokens, as currently provided by OpenAI, could cover an entire novel or a lengthy research paper. And for truly large-scale analyses, a context window of 100,000 tokens, as recently introduced by Anthropic (makers of Claude), might encompass multiple books or an extensive collection of articles.

Understanding these limitations is crucial when working with AI systems. If you feed an AI model a text longer than its context window, the system may not be able to capture the full context and may lose essential information. This underscores the importance of carefully considering the amount of text used and optimizing it to fit within the model's limitations.
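
A practical way to stay within these limits is to count tokens before sending anything to the model. Here is a minimal sketch using tiktoken, assuming a 4,000-token window and the cl100k_base encoding used by ChatGPT-era models:

```python
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # encoding used by ChatGPT-era models
CONTEXT_WINDOW = 4000                       # assumed limit, in tokens

def fit_to_window(text: str, limit: int = CONTEXT_WINDOW) -> str:
    """Return the text unchanged if it fits, otherwise keep only the first tokens."""
    ids = enc.encode(text)
    if len(ids) <= limit:
        return text
    return enc.decode(ids[:limit])  # naive truncation; smarter strategies exist

document = "Some very long document..."
print(len(enc.encode(document)), "tokens before truncation")
trimmed = fit_to_window(document)
```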

Operating costs

Another reason to pay attention to tokens is their impact on the operating costs associated with AI. The cost of utilizing AI services is often tied to the number of tokens processed. In the case of OpenAI, when you make a request to GPT-3.5 (the model behind ChatGPT and many AI services operating behind the scenes), the cost is $0.0015 for every 1K tokens. Using the full context window of 4K tokens would cost $0.006 in this example. GPT-4 costs 20 times as much, at $0.03 for every 1K tokens.
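
Since pricing is per token, estimating a request's cost is simple arithmetic. A quick sketch using the list prices quoted above (mid-2023 figures; always check current pricing before relying on them):

```python
def estimate_cost(num_tokens: int, price_per_1k_tokens: float) -> float:
    """Rough cost estimate: tokens divided by 1,000, times the per-1K price."""
    return num_tokens / 1000 * price_per_1k_tokens

print(estimate_cost(4_000, 0.0015))  # GPT-3.5, full 4K window -> $0.006
print(estimate_cost(4_000, 0.03))    # GPT-4 at 20x the price  -> $0.12
```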

In natural language processing tasks, different languages may have varying numbers of tokens per word. For instance, languages like English and Spanish generally have a one-to-one correspondence between words and tokens. However, languages like Korean, Arabic or German tend to have more tokens per word due to their grammar and morphology. The same request might be 50% to 100% more expensive than its English equivalent, depending on the language you're using.
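
You can observe this disparity directly by tokenizing the same sentence in different languages. A minimal sketch with tiktoken (the translations are approximate, and the exact counts depend on the encoding):

```python
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # assumed encoding

sentences = {
    "English": "What is the capital of Moldova?",
    "German":  "Was ist die Hauptstadt der Republik Moldau?",
    "Korean":  "몰도바의 수도는 어디인가요?",
}
for lang, sentence in sentences.items():
    print(lang, len(enc.encode(sentence)), "tokens")
```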

I discussed this language disparity in more detail on LinkedIn.

Optimizing the prompt or input length becomes critical in managing costs. By reducing the number of tokens required to convey the desired information, you can effectively minimize the expenses associated with processing AI tasks. Carefully crafting concise and precise prompts ensures that you receive the desired outputs without unnecessary token wastage.

In addition, optimizing prompts helps in streamlining AI operations, reduces computational resource requirements, and improves the overall efficiency of AI systems. By being mindful of token usage, you can harness the power of AI while keeping costs under control.

Understanding the significance of tokens in terms of information limits and operating costs allows you to make informed decisions when working with AI systems. It empowers you to optimize inputs, improve efficiency, and manage expenses effectively. So, the next time you interact with AI, keep in mind the importance of tokens and the role they play in shaping the AI landscape.

To conclude

Next time you marvel at the capabilities of AI, remember that significant contributors behind the curtain are the humble tokens. High-quality tokens are necessary for a useful, value-producing AI. These unassuming building blocks enable machines to process and comprehend the vast ocean of information surrounding us. They are also limiting factors and can affect how you approach solving a problem using AI, sometimes forcing you to prune information or translate it first to optimize your usage.

You can follow me on Twitter (@omarkamali) and subscribe to my newsletter to stay on top of new articles. I will be writing more on the topics of AI and Digital Infrastructure in the future, so make sure to stay tuned!


Written by Omar Kamali, Founder, CEO @ Monitoro, & Strategic Technology Advisor.