
04/22/2025

Teach AI to Work Like a Member of Your Team

The tool only works well if it works for you

At a Fortune 500 retail company, leadership gave a team responsible for drafting supplier negotiation contracts an artificial intelligence (AI) tool meant to streamline their work. Powered by a widely used large language model (LLM), the tool was expected to speed up the team's work by summarizing documents, answering content questions, comparing contracts and more.

However, despite high expectations, the team's output remained the same. Although the tool could generate generic text—for example, a rough draft of a contract—the team then had to customize that text for each supplier, manually incorporating critical details such as supplier information, terms, order history and other nuances. As a result, the tool had minimal impact on reducing the team's workload.

This story reflects a common pattern of AI tools failing to live up to their promise. In a recent HBR survey of 30 companies across industries (including the contracts team above), respondents reported that generic AI tools often fail to help users complete the specific tasks their unique workflows require, precisely because the tools are not tailored to those workflows.

Please select this link to read the complete article from Harvard Business Review.
