The Hidden Causes of AI Workslop—and How to Fix Them

A recent episode of the HBR IdeaCast, titled "The Hidden Causes of AI Workslop—and How to Fix Them," offers a helpful framework for thinking about the hidden risks of integrating AI tools into our professional workflows.

Jeff Hancock and Kate Niederhoffer explain that the pressure to innovate has given rise to "AI workslop," which they define as low-quality, AI-generated content that masquerades as a completed task but lacks the substance or context needed to actually advance the work.

The guests define workslop as a phenomenon where the signal of effort is decoupled from the actual quality of the output (@ 02:35). For legal professionals, this creates a dangerous "burden shift." Instead of the author performing the critical thinking, the recipient must now act as an editor to catch logical gaps or "hallucinations."

The "Recipe" for Slop

The podcast emphasizes that workslop is rarely the result of simple laziness. Rather, it is driven by structural pressures:

  • Vague Mandates: Broad directives to "use AI" without specific guardrails.

  • Overburdened Staff: In environments where everything is "urgent," the frictionless nature of GenAI makes it a tempting shortcut.

Beyond the immediate frustration, workslop imposes significant professional and financial costs on an organization. Research highlighted in the episode indicates that recipients of such content consistently judge the sender as less competent and less trustworthy (@ 06:29). This is a devastating reputational blow for legal professionals whose influence depends on strategic trust. The productivity drain is also substantial: it takes an average of two hours to detect and remediate a single instance of workslop, costing large firms millions annually (@ 09:17).

To move from "slop" to strategy, the guests suggest that leadership must offer clear paths forward:

  • Pilot Mindset: Moving beyond passive literacy to "agency," where the human remains the pilot in command, using AI as the engine while retaining final authority over judgment and voice (@ 20:32). (See Human in the Loop)

  • The "J-Curve": Recognizing that new technology often causes an initial dip in productivity. Leaders must provide the psychological safety for teams to critique AI outputs and redesign workflows properly (@ 23:36).

Ultimately, transitioning to an AI-integrated legal department is a leadership challenge that requires managing the "J-curve" of adoption and accepting that productivity may initially dip as workflows are redesigned. To avoid the trap of workslop, we must shift from broad mandates to a culture of "pilot agency," where teams have the psychological safety to critique, and even reject, AI outputs.

Our goal is to ensure that innovation actually serves our internal clients. Passing along unvetted, "masquerade" work only shifts the cognitive burden back onto the business, undermining the very reason we exist as counsel. In a market increasingly saturated with AI-generated filler, human credibility commands the highest premium. By being intentional about where we embed these tools, we protect our reputation as trusted strategic partners, using AI to augment our output without diluting our professional voice.


To hear more, check out the podcast here.
