I have noticed on several occasions that the AI-generated session feedback does not take into account the comments entered in the "How the session went" field.
It appears that the generated feedback relies on generic text modules rather than on the athlete’s actual qualitative input. For example, I wrote “medical recovery, load intentionally reduced,” yet the generated feedback stated “…this could be due to a few factors, such as nutrition or fueling,” which is clearly not relevant in this context.
I would encourage the development team to consider deeper integration of a true AI agent, for example a modern LLM-based system, that can properly interpret and contextualize athlete comments. This would significantly improve the perceived quality of the feedback, make athletes feel more accurately understood, and strengthen the platform's positioning as genuinely AI-driven rather than template-based.
I would be very interested in hearing your thoughts on this, and I wish everyone a great end-of-year training block.