We could possibly write what `ham2pose` suggests here.
We may also need a note about back-translation: people do use it for evaluation (Progressive Transformers, SignLLM), but the outputs are incoherent. This is because the back-translation models are trained on the translation model's outputs, rather than independently, as they should be.
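To make the distinction concrete, here is a minimal sketch of the two training setups. All names are hypothetical (this is not the actual `ham2pose`, Progressive Transformers, or SignLLM code); the point is only where the back-translation training pairs come from.

```python
# Ground-truth corpus of (text, pose) pairs; poses are dummy vectors here.
ground_truth = [("hello", [0.1, 0.2]), ("thanks", [0.3, 0.4])]

def translation_model(text):
    # Hypothetical text -> pose model under evaluation (a stand-in).
    return [0.0, 0.0]

# Flawed setup: the back-translation (pose -> text) model is trained on the
# translation model's OWN outputs. It learns to decode that model's artifacts,
# so back-translation scores are inflated even when the poses are incoherent.
flawed_training_set = [(translation_model(text), text)
                       for text, _pose in ground_truth]

# Independent setup: the back-translation model is trained on ground-truth
# poses only, so it can serve as an unbiased evaluator.
independent_training_set = [(pose, text) for text, pose in ground_truth]
```

The flawed set pairs every reference text with the model's (possibly degenerate) output poses, while the independent set never sees the model under evaluation at all.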
Originally posted by @AmitMY in #77 (comment)