I see two approaches for explaining the outcome:
1. Reasoning back from the result and justifying it after the fact.
2. Explainability - justifying the decision by inspecting which neurons were activated.
The first can lead to lying. E.g. think of a high schooler explaining copied homework: the justification sounds plausible but has nothing to do with how the answer was actually produced.
The second does access the paths that actually influenced the decision, but it is a hard task because of how neural networks work: the computation is distributed across many neurons and weights rather than localized in a few interpretable units.
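As a rough illustration of the second approach, here is a minimal PyTorch sketch (the toy model is made up for illustration): forward hooks record what each layer produced during a prediction, so afterwards you can at least see which hidden units were active. Turning those raw activations into a human-understandable justification is the genuinely hard part.

```python
import torch
import torch.nn as nn

# Toy model, purely for illustration.
model = nn.Sequential(
    nn.Linear(4, 8),
    nn.ReLU(),
    nn.Linear(8, 2),
)

activations = {}

def make_hook(name):
    def hook(module, inputs, output):
        # Store this layer's output for later inspection.
        activations[name] = output.detach()
    return hook

# Register a forward hook on each layer, so every call to the model
# records what the intermediate units produced.
for name, module in model.named_modules():
    if isinstance(module, (nn.Linear, nn.ReLU)):
        module.register_forward_hook(make_hook(name))

x = torch.randn(1, 4)
y = model(x)

# Which hidden units "fired" (were non-zero after the ReLU)?
fired = (activations["1"] > 0).squeeze()
print("active hidden units:", fired.nonzero().flatten().tolist())
```

This gives an honest record of the forward pass, unlike a post-hoc rationalization, but it only tells you *which* units were involved, not *why* that pattern of activity amounts to the decision.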