How GPT3 Works - Visualizations and Animations
This is a blog post by Jay Alammar about how the wildly popular GPT3 works; I ran it through machine translation to share with everyone.
Original article:
https://jalammar.github.io/how-gpt3-works-visualizations-animations/
The tech world is abuzz with GPT3 hype. Massive language models (like GPT3) are starting to surprise us with their abilities. While not yet completely reliable for most businesses to put in front of their customers, these models are showing sparks of cleverness that are sure to accelerate the march of automation and the possibilities of intelligent computer systems. Let’s remove the aura of mystery around GPT3 and learn how it’s trained and how it works.
A trained language model generates text.
We can optionally pass it some text as input, which influences its output.
The output is generated from what the model “learned” during its training period where it scanned vast amounts of text.
Training is the process of exposing the model to lots of text. That process has been completed. All the experiments you see now are from that one trained model. The training was estimated to take 355 GPU-years and cost $4.6 million.
The dataset of 300 billion tokens of text is used to generate training examples for the model. For example, these are three training examples generated from the one sentence at the top.
You can see how you can slide a window across all the text and make lots of examples.
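As a minimal sketch of that sliding-window idea (assuming a whitespace "tokenizer" and a tiny window of 4; GPT-3's real pipeline uses byte-pair-encoded tokens and a far longer context), generating (features, next-word) pairs might look like this:

```python
# Slide a window over a token sequence to produce (features, next-word) examples.
# The whitespace tokenizer and window size of 4 are simplifications.

def make_examples(text, window_size=4):
    tokens = text.split()
    examples = []
    for i in range(1, len(tokens)):
        features = tokens[max(0, i - window_size):i]  # what the model gets to see
        label = tokens[i]                             # the word it must predict
        examples.append((features, label))
    return examples

for features, label in make_examples("a robot must obey the orders given it"):
    print(features, "->", label)
```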
The model is presented with an example. We only show it the features and ask it to predict the next word.
The model’s prediction will be wrong. We calculate the error in its prediction and update the model so next time it makes a better prediction.
Repeat millions of times.
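As a toy version of this loop (predict, measure the error, update the parameters, repeat), here is a tiny next-word model in numpy; the single weight matrix, eight-word vocabulary, and learning rate are all illustrative stand-ins, nothing like GPT-3's actual architecture:

```python
import numpy as np

# Toy next-word predictor: one weight matrix W maps the current word id to
# scores over the vocabulary, standing in for GPT-3's billions of parameters.
vocab = ["a", "robot", "must", "obey", "the", "orders", "given", "it"]
tok = {w: i for i, w in enumerate(vocab)}
V = len(vocab)

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(V, V))   # untrained model: random parameters

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

pairs = [(tok[a], tok[b]) for a, b in zip(vocab, vocab[1:])]  # (current word, next word)

lr = 0.5
for step in range(1000):                 # "repeat millions of times" in the real thing
    for cur, nxt in pairs:
        probs = softmax(W[cur])          # the model's prediction
        grad = probs.copy()
        grad[nxt] -= 1.0                 # error between the prediction and the true next word
        W[cur] -= lr * grad              # update so the next prediction is better

print(vocab[int(np.argmax(W[tok["robot"]]))])   # "must" after training
```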
Now let’s look at these same steps with a bit more detail.
GPT3 actually generates output one token at a time (let’s assume a token is a word for now).
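A hedged sketch of that token-by-token loop; the `ToyModel` lookup table below is only a stand-in for GPT-3's forward pass, included so the loop is runnable:

```python
# One token at a time: each new token comes from a fresh pass of the model
# over everything generated so far.

class ToyModel:
    # A tiny lookup table standing in for GPT-3's 175B parameters.
    table = {"Robotics:": "A", "A": "robot", "robot": "may", "may": "not", "not": "injure"}

    def predict_next(self, tokens):
        # A real forward pass would attend over the whole window, not just the last token.
        return self.table.get(tokens[-1], "<end>")

def generate(model, prompt_tokens, max_new_tokens=5):
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        tokens.append(model.predict_next(tokens))  # one pass of the model -> one new token
    return tokens

print(generate(ToyModel(), ["First", "Law", "of", "Robotics:"]))
# ['First', 'Law', 'of', 'Robotics:', 'A', 'robot', 'may', 'not', 'injure']
```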
Please note: This is a description of how GPT-3 works and not a discussion of what is novel about it (which is mainly the ridiculously large scale). The architecture is a transformer decoder model based on this paper https://arxiv.org/pdf/1801.10198.pdf
GPT3 is MASSIVE. It encodes what it learns from training in 175 billion numbers (called parameters). These numbers are used to calculate which token to generate at each run.
The untrained model starts with random parameters. Training finds values that lead to better predictions.
These numbers are part of hundreds of matrices inside the model. Prediction is mostly a lot of matrix multiplication.
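To make "hundreds of matrices" and "175 billion numbers" concrete, here is some rough bookkeeping assuming GPT-3's published shape (96 layers, model width 12288) and counting only the main weight matrices; embeddings, biases, and layer norms are ignored:

```python
# Rough parameter bookkeeping for a 96-layer decoder with model width 12288.
d_model, n_layers = 12288, 96

per_layer = (
    4 * d_model * d_model          # attention: query, key, value, output projections
    + 2 * d_model * (4 * d_model)  # feed-forward: expand to 4*d_model and project back
)
n_matrices = n_layers * 6          # six main matrices per layer -> hundreds of matrices
total = n_layers * per_layer

print(f"{n_matrices} matrices, ~{per_layer/1e9:.1f}B parameters per layer, ~{total/1e9:.0f}B total")
# -> 576 matrices, ~1.8B parameters per layer, ~174B total
```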
In my Intro to AI on YouTube, I showed a simple ML model with one parameter. A good start to unpack this 175B monstrosity.
To shed light on how these parameters are distributed and used, we’ll need to open the model and look inside.
GPT3 is 2048 tokens wide. That is its “context window”. That means it has 2048 tracks along which tokens are processed.
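One practical consequence, sketched under the assumption that the input is a plain list of token ids: anything beyond the most recent 2048 tokens falls outside the model's view, so longer inputs have to be truncated.

```python
CONTEXT_WINDOW = 2048  # the number of token positions GPT-3 processes at once

def fit_to_context(token_ids, context_window=CONTEXT_WINDOW):
    # Keep only the most recent tokens; earlier ones simply fall out of view.
    return token_ids[-context_window:]

print(len(fit_to_context(list(range(5000)))))  # -> 2048
```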
Let’s follow the purple track. How does a system process the word “robotics” and produce “A”?
High-level steps (sketched in code below):
- Convert the word to a vector (list of numbers) representing the word
- Compute the prediction
- Convert the resulting vector back to a word
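The promised sketch of those three steps, with toy numpy matrices; step 2 is reduced to a placeholder here, since the real "compute" is the 96-layer decoder stack described next:

```python
import numpy as np

vocab = ["robotics", "A", "robot", "may"]
d_model = 6
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(len(vocab), d_model))   # step 1: one vector per word
unembed = rng.normal(size=(d_model, len(vocab)))      # step 3: vectors back to word scores

def predict_next(word):
    x = embeddings[vocab.index(word)]   # 1) convert the word to a vector
    h = np.tanh(x)                      # 2) compute (placeholder for the decoder stack)
    logits = h @ unembed                # 3) convert the resulting vector to word scores
    return vocab[int(np.argmax(logits))]

print(predict_next("robotics"))   # arbitrary output here, since these toy weights are untrained
```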
The important calculations of GPT3 occur inside its stack of 96 transformer decoder layers.
See all these layers? This is the “depth” in “deep learning”.
Each of these layers has its own 1.8B parameters to make its calculations. That is where the “magic” happens. This is a high-level view of that process:
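A minimal sketch of that depth, assuming nothing about the real layers beyond "one block applied after another"; each `np.tanh(x @ W)` below stands in for a full attention-plus-feed-forward layer holding ~1.8B of the parameters:

```python
import numpy as np

# "Depth" as a literal loop: the same kind of block applied 96 times in sequence.
n_layers, d_model = 96, 16   # tiny width so this toy runs instantly
rng = np.random.default_rng(0)
layers = [rng.normal(scale=0.1, size=(d_model, d_model)) for _ in range(n_layers)]

x = rng.normal(size=d_model)      # vector for the current token entering the stack
for W in layers:
    x = np.tanh(x @ W)            # placeholder for one decoder layer's computation
# x is now the vector that gets converted back into a predicted token
```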
You can see a detailed explanation of everything inside the decoder in my blog post The Illustrated GPT2.
The difference with GPT3 is the alternating dense and sparse self-attention layers.
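A guess at what that alternation could look like as a configuration list; the exact ordering of dense and sparse layers isn't spelled out in this post, so the even/odd pattern here is only illustrative:

```python
# The 96 layers as a configuration list that alternates attention types.
n_layers = 96
attention_types = ["dense" if i % 2 == 0 else "sparse" for i in range(n_layers)]
print(attention_types[:6])  # ['dense', 'sparse', 'dense', 'sparse', 'dense', 'sparse']
```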
This is an X-ray of an input and response (“Okay human”) within GPT3. Notice how every token flows through the entire layer stack. We don’t care about the output of the first words. When the input is done, we start caring about the output. We feed every word back into the model.
In the React code generation example, the description would be the input prompt (in green), along with a couple of description=>code examples, I believe. The React code would then be generated token after token, like the pink tokens here.
My assumption is that the priming examples and the description are appended as input, with specific tokens separating the examples from the results, and then fed into the model.
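A sketch of that assumption; the separator text and the `build_prompt` helper are invented for illustration and are not OpenAI's actual format:

```python
# Hypothetical few-shot prompt assembly: priming examples plus the new
# description, joined with invented separator text, become one long input.

def build_prompt(examples, description):
    parts = []
    for desc, code in examples:
        parts.append(f"description: {desc}\ncode: {code}")
    parts.append(f"description: {description}\ncode:")  # the model continues from here
    return "\n\n".join(parts)

prompt = build_prompt(
    examples=[("a button that says hello", "<button>hello</button>")],
    description="a heading that says welcome",
)
print(prompt)
```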
It’s impressive that it works like this. And just wait until fine-tuning is rolled out for GPT3; the possibilities will be even more amazing.
Fine-tuning actually updates the model’s weights to make the model better at a certain task.
Written on July 27, 2020