Ask The Expert

Identifying and evaluating AI use cases

Hi Kirsty,

Can you suggest a way to capture details of how different teams across an organisation are experimenting with AI, and evaluate the potential effectiveness of all these use cases?

Thanks!

Joe Sharp

Thanks for the question, Joe. I’m seeing this come up a lot at the moment.

Many bid teams are actively experimenting with AI but aren’t always sure which tasks it can practically help with, or whether free tools will do the job or specialist bid tools are needed. A common challenge is that this experimentation happens in pockets across teams, without a shared way to capture or assess what’s being tried.

One simple but effective approach is to create a shared space where all AI ideas and experiments can be logged. I’d recommend an Excel-based AI use case tracker that captures details such as the task being tested, the tool used, expected impact, risks and learnings. To demonstrate this, I’ve provided an example tracker populated with sample use cases, training notes and example AI prompts. You can download it here.
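For teams who later want to move the tracker out of a spreadsheet, the same columns can be sketched as a small data structure. This is an illustrative sketch only; the field names and status values below are my assumptions, not a fixed standard:

```python
from dataclasses import dataclass

@dataclass
class UseCase:
    # Fields mirror the tracker columns: what was tried, with what, and what was learned.
    task: str             # e.g. "First-draft executive summary"
    tool: str             # free general-purpose tool vs specialist bid tool
    expected_impact: str  # "high" / "medium" / "low"
    risks: str            # e.g. confidentiality, accuracy
    learnings: str = ""   # filled in after the experiment
    status: str = "idea"  # idea -> testing -> adopted / parked

# The shared log is just a list of entries the whole team can review.
log = [
    UseCase(
        task="Summarise a client RFP",
        tool="generic chatbot",
        expected_impact="high",
        risks="confidentiality of client data",
    ),
]
```

Each team member appends their experiments to the shared log, and the review meeting walks through entries by status.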

Assessing use cases also becomes much easier if you categorise them by impact and likelihood of success. In practice, only a small proportion of ideas are worth progressing (in my experience, around 3 out of 10), so people should feel empowered to try and fail in order to uncover the gold nuggets: the great ideas worth progressing. By reviewing the tracker regularly as a team, you can agree who should take promising ideas forward and which ones to park for now.
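The impact-and-likelihood categorisation above can be made concrete with a simple triage rule. The 1–5 scoring scale and the threshold of 3 below are illustrative choices of mine, not a fixed methodology; the point is that each idea lands in one of a few agreed buckets:

```python
def triage(impact: int, likelihood: int, threshold: int = 3) -> str:
    """Score a use case 1-5 on impact and likelihood of success, then bucket it.

    Thresholds and bucket names are illustrative assumptions.
    """
    if impact >= threshold and likelihood >= threshold:
        return "progress"    # the gold nuggets: high impact, high confidence
    if impact >= threshold:
        return "experiment"  # high impact but unproven; worth a cheap trial
    return "park"            # low impact for now; revisit later

# A team review might look like:
ideas = {
    "First-draft executive summary": (5, 4),
    "Auto-generate full tender response": (5, 1),
    "Reformat meeting notes": (2, 5),
}
decisions = {name: triage(i, l) for name, (i, l) in ideas.items()}
```

Reviewing the buckets together as a team keeps the decision about who takes an idea forward, and which ideas to park, explicit rather than ad hoc.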

This approach will help your team focus their time on high-impact, high-confidence use cases, while also making it easier to communicate wins, inspire confidence, and build momentum around AI adoption.

I hope that helps, and it would be great to keep in touch on how your team gets on.
Kirsty