When my previous company started using machine learning to automate tasks like curation, Rich Jaroslovsky, an experienced newsman who pioneered the use of web technology to build the online version of The Wall Street Journal, circulated a memo with three simple guidelines applicable to anyone thinking of using AI to automate their newsroom.
SmartNews was at the forefront of using technology to process, curate, and rank large volumes of news stories, so many of the hiccups we’re seeing in the application of AI to publishing today were front of mind for the company years ago.
Rich’s memo was a riff on Isaac Asimov’s Three Laws of Robotics, reworked for today’s world, where AI is being applied to any number of tasks in pursuit of scale and efficiency. This simple set of rules is a useful checklist for thinking through the responsible application of autonomous technology.
I’d encourage anyone who builds products that use AI to link to these rules from their product requirements template. I can say from experience that building features with these three simple tenets in mind will save your organization a lot of headaches.
Rich Jaroslovsky’s Three Laws of Automation
- It has to be highly automated. Our technology is what makes us scalable and allows us to accomplish so much with so few people. I realize there is often a manually intensive phase when a new feature is being tested. But even in the testing phase, the question of how the task can be automated should be front of mind, and the automation should be implemented when the feature moves into full production, not as a “we’ll get to it” enhancement at some point in the distant future.
- It has to provide visibility. That is, we have to know what the system is actually doing — what content it is sending out — at any given time. It’s not enough to learn after the fact and then have to grapple with unintended consequences. For us non-engineers, at least, it’s much less important that we have visibility into the why or the how; visibility into the what is critical.
- It has to allow for intervention when we spot problems — the ability to stop something bad from happening when we see that it is happening, or is about to happen. This is much different from the concept of “human control,” where actions only take place if they are approved; such a model flies in the face of Rule #1. But it isn’t good enough to say we’ll just depend on the technology, wash our hands of the consequences, and figure we’ll fix it later if it is doing bad things.
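To make the three rules concrete, here is a minimal Python sketch of how they might show up in an automated publishing pipeline. Everything in it is hypothetical (the Story class, rank_stories, and PublishingPipeline are invented for illustration, not taken from SmartNews or any real product): the ranking step runs without per-item approval (Rule 1), every published item is logged as it goes out (Rule 2), and a kill switch plus pluggable checks allow intervention the moment a problem is spotted (Rule 3).

```python
# Hypothetical sketch of the three rules in an automated publishing pipeline.
# All names here are illustrative, not drawn from any real system.
import logging
from dataclasses import dataclass, field
from typing import Callable, List

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")
log = logging.getLogger("pipeline")


@dataclass
class Story:
    headline: str
    source: str
    score: float = 0.0


def rank_stories(stories: List[Story]) -> List[Story]:
    # Rule 1: ranking is fully automated; no per-item human approval.
    # (A placeholder sort stands in for whatever model a real system uses.)
    return sorted(stories, key=lambda s: s.score, reverse=True)


@dataclass
class PublishingPipeline:
    # Rule 3: a kill switch and pluggable checks let humans intervene the
    # moment a problem is spotted, without approving every item up front.
    halted: bool = False
    checks: List[Callable[[Story], bool]] = field(default_factory=list)

    def publish(self, stories: List[Story]) -> List[Story]:
        published = []
        for story in rank_stories(stories):
            if self.halted:
                log.info("HALTED: skipping %r", story.headline)
                continue
            if not all(check(story) for check in self.checks):
                log.info("BLOCKED by check: %r", story.headline)
                continue
            # Rule 2: visibility into *what* is going out, as it goes out,
            # rather than discovering it after the fact.
            log.info("PUBLISHED: %r (source=%s, score=%.2f)",
                     story.headline, story.source, story.score)
            published.append(story)
        return published


if __name__ == "__main__":
    pipeline = PublishingPipeline(checks=[lambda s: s.score > 0.2])
    pipeline.publish([
        Story("Local election results", "wire", 0.9),
        Story("Unverified rumor", "blog", 0.1),  # blocked by the check
    ])
```

The point of the sketch is less the specifics than the shape: the loop runs on its own, the log tells you exactly what went out and when, and setting `halted` or adding a check is how a human steps in without becoming an approval bottleneck.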
What are your thoughts? Are there examples you’d care to share that are instructive on what can go wrong if you don’t heed these rules? I’m building my own list of ways unsupervised AI has caused problems in publishing, but if you’ve got other stories, share them in the comments so we can all learn together.