Document Type
Article
Publication Date
8-23-2023
Abstract
This somewhat tongue-in-cheek post discusses biases in AI systems. Noting that AI bots need to be “trained,” it suggests that untrained mediator bots may spew out unwanted interventions, such as providing undesired evaluations of BATNA values, or may fail to provide desired evaluations. So mediators probably will need to co-mediate with their bots for a while to observe and correct their biases. Ironically, bots may produce language that normal humans understand much better than the confusing jargon we habitually use. So the mediator bots may need to train the human mediators.
Recommended Citation
John Lande, Training Your Mediator Bot (2023).
Available at: https://scholarship.law.missouri.edu/fac_blogs/72