Teacher forcing is an algorithm for training the weights of recurrent neural networks (RNNs).[1] It involves feeding observed sequence values (i.e. ground-truth samples) back into the RNN after each step, thus forcing the RNN to stay close to the ground-truth sequence.[2]
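The feedback loop described above can be sketched in a few lines. The following is a minimal, illustrative NumPy example (not from the cited sources; all sizes, weight names, and the quadratic loss are assumptions chosen for brevity): at each step the ground-truth value from the previous step, rather than the network's own prediction, is fed back in as input.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny vanilla RNN; dimensions and weight names are illustrative.
n_in, n_hid = 3, 4
W_xh = rng.normal(scale=0.1, size=(n_hid, n_in))   # input-to-hidden
W_hh = rng.normal(scale=0.1, size=(n_hid, n_hid))  # hidden-to-hidden
W_hy = rng.normal(scale=0.1, size=(n_in, n_hid))   # hidden-to-output

def step(x, h):
    """One RNN step: update the hidden state, predict the next value."""
    h = np.tanh(W_xh @ x + W_hh @ h)
    y_pred = W_hy @ h
    return y_pred, h

targets = rng.normal(size=(5, n_in))  # observed (ground-truth) sequence

h = np.zeros(n_hid)
x = np.zeros(n_in)  # start-of-sequence input
loss = 0.0
for t in range(len(targets)):
    y_pred, h = step(x, h)
    loss += np.mean((y_pred - targets[t]) ** 2)
    # Teacher forcing: feed the observed value back in as the next
    # input, instead of the network's own prediction y_pred.
    x = targets[t]

loss /= len(targets)
```

Without teacher forcing (a "free-running" network), the last line of the loop would instead be `x = y_pred`, so early prediction errors would compound over the sequence.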

The term "teacher forcing" can be motivated by comparing the RNN to a human student taking a multi-part exam in which the answer to each part (for example, a mathematical calculation) depends on the answer to the preceding part.[3] In this analogy, rather than grading all answers only at the end (with the risk that the student fails every part because of a single mistake in the first one), a teacher records the score for each part and then tells the student the correct answer, which the student uses in the next part.[3]

The use of an external teacher signal contrasts with real-time recurrent learning (RTRL).[4] Teacher signals are known from oscillator networks.[5] The promise is that teacher forcing reduces training time.[6]

The term "teacher forcing" was introduced in 1989 by Ronald J. Williams and David Zipser, who reported that the technique was already being "frequently used in dynamical supervised learning tasks" around that time.[7][2]

A NeurIPS 2016 paper introduced the related method of "professor forcing".[2]

References

  1. John F. Kolen; Stefan C. Kremer (15 January 2001). A Field Guide to Dynamical Recurrent Networks. John Wiley & Sons. pp. 202–. ISBN 978-0-7803-5369-5.
  2. Lamb, Alex M; Goyal, Anirudh; Zhang, Ying; Zhang, Saizheng; Courville, Aaron C; Bengio, Yoshua (2016). "Professor Forcing: A New Algorithm for Training Recurrent Networks". Advances in Neural Information Processing Systems. 29. Curran Associates, Inc.
  3. Wong, Wanshun (2019-10-15). "What is Teacher Forcing?". Towards Data Science. Retrieved 2022-03-25.
  4. Zhang, Ming (31 July 2008). Artificial Higher Order Neural Networks for Economics and Business. IGI Global. pp. 195–. ISBN 978-1-59904-898-7.
  5. Yves Chauvin; David E. Rumelhart (1 February 2013). Backpropagation: Theory, Architectures, and Applications. Psychology Press. pp. 473–. ISBN 978-1-134-77581-1.
  6. George Bekey; Kenneth Y. Goldberg (30 November 1992). Neural Networks in Robotics. Springer Science & Business Media. pp. 247–. ISBN 978-0-7923-9268-2.
  7. Williams, Ronald J.; Zipser, David (June 1989). "A Learning Algorithm for Continually Running Fully Recurrent Neural Networks". Neural Computation. 1 (2): 270–280. CiteSeerX 10.1.1.52.9724. doi:10.1162/neco.1989.1.2.270. ISSN 0899-7667. S2CID 14711886.