AI detection tools are not foolproof

BVWire Issue #269-1
February 5, 2025


For a variety of reasons, valuation experts will want to determine whether content was generated by AI or by an actual human being. In litigation, for example, if an opposing expert's report is found to include AI-generated content, attorneys can question how much personal expertise went into the analysis, weakening the expert's testimony. A recent article discussed this, and the author tested three tools designed to detect AI-generated content. Each tool was put through its paces on an AI-generated paragraph and a human-written paragraph. Here are the results:

  • Grammarly’s AI detector was the most accurate among the three, successfully distinguishing between AI and human content;
  • Phrasly.AI performed poorly, mistakenly classifying AI content as human and struggling with short text; and
  • ZeroGPT showed inconsistent results, correctly identifying human content but failing to recognize AI-generated text.

The article, written by Miranda Kishel (Development Theory), appears in NACVA's QuickRead publication.

Please let us know if you have any comments about this article or enhancements you would like to see.