Intended as an article (rather than a diary entry) by fanta_orange_grape

The tastiest thing I have ever had in my life = tea and toast (back in high school).

English at this level is rough, because I can't extract the meaning at all.

My English ability is only good enough for the meanings of individual words to register here and there, so...

this passage simply does not sink in.

The terms transfer learning and fine-tuning refer to two concepts that are very similar in many ways, and the two terms are being widely used almost interchangeably. The two terms don’t imply the same goal or motivation, but they still refer to a similar concept. What I mean by “similar concept” is this: Fine-tuning means taking some machine learning model that has already learned something before (i.e. been trained on some data) and then training that model (i.e. training it some more, possibly on different data). That’s all fine-tuning means. Some other answers put arbitrary, incorrect limitations on the term, for example claiming that it’s only called fine-tuning if it refers to the final stages of the training. None of these limitations bear any substance.

Now, transfer learning means to apply the knowledge that some machine learning model holds (represented by its learned parameters) to a new (but in some way related) task. This should already look quite familiar to you to the concept of fine-tuning defined above, but in case it doesn’t yet, let’s put more abstractly what you actually do when you perform transfer learning: You take a model that has been trained on something and you use (part or all of) this model as (part or all of) a new model and train it on new data (i.e. train it some more). The last part of the previous sentence is what is meant by “applying” the trained model to a new task.

In summary, it would be wrong to say that fine-tuning and transfer learning are the exact same thing, but as explained above they both refer to the concept of taking an existing, trained model and training it further, either as is or as part of a new model. The main reason why people use the term fine-tuning at all is simply to indicate the training of a machine learning model that is not being trained from scratch, but has already been trained before on some data (not necessarily to convergence). It’s just a convenient way to express that you are not training from scratch. Whether you train on the same or new data, and for the same or a new task, is a different story, the term fine-tuning just in itself contains no implications on any of that.

The English above is a quotation of a quotation, so...

I will only give the source I quoted it from (a rough code sketch of what the passage seems to be saying follows after the link).

https://qiita.com/enoughspacefor/items/8c677947c57620eecd54
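To pin the idea down for myself, here is a minimal sketch of what the quoted passage seems to describe, written as PyTorch code rather than English. This is my own illustration, not anything from the quoted answer or the Qiita article: the ImageNet-pretrained ResNet-18, the 10-class task, and new_dataloader are all assumptions made up for the example.

```python
import torch
import torch.nn as nn
from torchvision import models

# Start from a model that "has already learned something before":
# a ResNet-18 with ImageNet-pretrained weights.
model = models.resnet18(weights="IMAGENET1K_V1")

# Transfer learning: reuse (all of) this model as part of a new model for a new,
# related task. Here the reused layers are frozen and only a new output layer is added.
for param in model.parameters():
    param.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 10)  # new head for a hypothetical 10-class task

# "Fine-tuning" in the passage's broad sense: simply train the model some more.
optimizer = torch.optim.SGD(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3, momentum=0.9
)
criterion = nn.CrossEntropyLoss()

# new_dataloader is a placeholder for whatever new data you train on.
# for images, labels in new_dataloader:
#     optimizer.zero_grad()
#     loss = criterion(model(images), labels)
#     loss.backward()
#     optimizer.step()
```

Unfreezing some or all of the reused layers and continuing to train would still be "fine-tuning" in the quoted sense; the term itself only says that you are not training from scratch.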