Some open-source LFMs are trained only on prompt-response pairs. That's good for pattern matching, but if a question deviates even slightly from the patterns they've seen, their ability to work out the answer becomes very limited. A student who fundamentally and deeply understands a topic, by contrast, won't be thrown off by a variation of the question; they can reason step by step and get to the answer.

Orca uses teacher assistance from ChatGPT (GPT-3.5) and GPT-4. The teacher model is prompted with system instructions to show how it arrived at its response (justify your response, explain step by step, explain like I'm five), and those explanation-rich responses are used to train the student.
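
To make the idea concrete, here's a minimal sketch of how explanation-style training data could be assembled. The `query_teacher` callable, the specific system-instruction strings, and the output file name are illustrative assumptions, not Orca's actual pipeline; the point is just that each query is paired with a reasoning-eliciting system instruction and the teacher's explained answer.

```python
import json
from typing import Callable

# System instructions in the spirit of Orca's explanation tuning:
# each one nudges the teacher to expose its reasoning, not just the answer.
SYSTEM_INSTRUCTIONS = [
    "You are a helpful assistant. Justify your response.",
    "Think step by step and explain your reasoning before giving the final answer.",
    "Explain like I'm five.",
]

def build_explanation_dataset(
    user_queries: list[str],
    query_teacher: Callable[[str, str], str],  # hypothetical: (system, user) -> teacher response
    out_path: str = "explanation_tuning_data.jsonl",
) -> None:
    """Pair each query with a reasoning-eliciting system instruction,
    collect the teacher's explained response, and write
    (system, user, assistant) triples for supervised fine-tuning of the student."""
    with open(out_path, "w", encoding="utf-8") as f:
        for i, query in enumerate(user_queries):
            system = SYSTEM_INSTRUCTIONS[i % len(SYSTEM_INSTRUCTIONS)]
            response = query_teacher(system, query)  # teacher explains how it got there
            record = {"system": system, "user": query, "assistant": response}
            f.write(json.dumps(record) + "\n")
```

The student is then fine-tuned on these triples, so it learns to imitate the teacher's reasoning traces rather than just its final answers.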