jonoropeza.com

Making software and software development teams. Mostly the people parts.


Models of Automation

With Bitcoin price weakness and the novelty and/or wonder of the latest ChatGPT iteration pulling our collective heads out of the Web 3 space, I've been thinking about models of automation: the variables involved and how they're chosen.

To illustrate the models, let's use an example of making a service that provides answers to questions such as "what are the approved treatments available for this or that disease" based on the latest biomedical literature.

There are at least five models we might use to enable this service:

  • No automation at the core
  • Automation as Sentry
  • Automation as Helper
  • Autonomation
  • Full automation

No automation at the core - The human does the thing. You're dealing with a small local company, you call a phone number or send an email, a human answers the phone or replies. I used "at the core" deliberately because a modern phone call or email relies on a lot of automation to move information around. Here I'm referring specifically to how the service creates and ships the answer to its queries.

If we don't use automation at the core of our service for looking up treatments, human operators answer each message without anything automatic happening to help them, guide them, or guard them against bad or wrong answers. This is the default state of any operations-driven company that hasn't set up its service tech-first. There's nothing wrong with it. And nothing particularly exciting, either.

The sentry - The human does the thing, with warnings provided by machinery. Think collision detection in a car. Or a data entry form where if you put in an outlier, it warns "hey, that number you just entered is 10x the other thousand and one numbers in that column, you might want to check it."

With this model applied to our service, human operators would be responsible for answering questions. The interface they type into might provide some guardrails around length of reply, and might prevent them from including offensive language or committing a HIPAA violation.
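A minimal sketch of a sentry check, using the outlier example from above. The function name and the median-based heuristic are my own illustrative choices, not anything prescribed: the point is that the machinery only warns, and the human's entry still stands.

```python
# Sentry sketch: the human does the thing; the machine only warns.
# The 10x-the-typical-value heuristic is illustrative, not canonical.

def outlier_warning(new_value, existing_values, factor=10):
    """Return a warning string if new_value looks like an outlier, else None."""
    if not existing_values:
        return None  # nothing to compare against yet
    typical = sorted(existing_values)[len(existing_values) // 2]  # median
    if typical > 0 and new_value >= factor * typical:
        return (f"Warning: {new_value} is at least {factor}x the typical "
                f"value ({typical}) in this column. You might want to check it.")
    return None  # no objection; the human's entry goes through as-is
```

Note that the sentry never blocks or rewrites the entry; it surfaces a warning and leaves the decision with the operator.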

The helper - The human does the thing, aided by the machinery. A classic example is narrowing down 100 choices to the likeliest 10 - helpful convergence. Or, a canonical example in healthcare: a radiologist assisted by AI that says "hey, I think you should look again at that scan, it looks like these other tumors I've been trained on" - in this case, helpful re-divergence after a human might have converged too quickly.

Here, in our service, human operators are still answering questions. Their interface shows them likely answers, previously accepted answers, or otherwise points them towards papers where they might find the answers. Ideally they would usually use the automated help but in rare cases would use their judgment to go outside the suggested responses.
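The convergence half of the helper model can be sketched in a few lines. Everything here is hypothetical scaffolding - `score_answer` stands in for whatever relevance model the service would actually use, and `choose` stands in for the operator's decision, which may go outside the suggestions entirely.

```python
# Helper sketch: the machine narrows 100 candidates to the likeliest 10,
# but the human operator still makes the final call.

def shortlist(question, candidates, score_answer, k=10):
    """Converge: return the k likeliest candidate answers, best first."""
    ranked = sorted(candidates,
                    key=lambda c: score_answer(question, c),
                    reverse=True)
    return ranked[:k]

def operator_reply(question, candidates, score_answer, choose):
    """The human does the thing: choose() may pick from the shortlist,
    or in rare cases use judgment and answer outside the suggestions."""
    suggestions = shortlist(question, candidates, score_answer)
    return choose(question, suggestions)
```

The design point is where authority sits: the ranking only reorders and trims the operator's view, it never ships an answer on its own.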

Autonomation - https://en.wikipedia.org/wiki/Autonomation - Here the machine does the thing, with a human there to detect edge cases, take over from the machine when needed, and ultimately improve the system.

In this case, our service operators would watch questions come in and see machine-proposed answers on their screen. There would be a pause on a human-friendly timescale, let's say twenty seconds, during which the operator would have an opportunity to "stop the line" and prevent an incorrect, offensive or otherwise undesirable answer from being sent.
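The stop-the-line pause might look like the sketch below. The hooks are assumptions: `propose_answer` stands in for the machine's answer generator, and `operator_vetoed` stands in for the operator's veto, which in a real system would arrive from a UI event rather than a polling callback.

```python
import time

# Autonomation sketch: the machine does the thing, but a human can
# stop the line during a pause on a human-friendly timescale.

def answer_with_stop_the_line(question, propose_answer, operator_vetoed,
                              pause_seconds=20, poll_interval=0.5):
    """Machine proposes an answer; the human has pause_seconds to veto it."""
    proposed = propose_answer(question)
    deadline = time.monotonic() + pause_seconds
    while time.monotonic() < deadline:
        if operator_vetoed(question, proposed):
            return None  # line stopped: nothing is sent to the user
        time.sleep(poll_interval)
    return proposed  # no objection within the window; ship it
```

The pause is exactly why this model is hard to scale: every answer costs up to twenty seconds of wall-clock time and a slice of an operator's attention.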

Full automation - Here the machinery does the thing unsupervised, usually (hopefully!) with heavy post-hoc observability. This sounds frightening, but it's actually extremely common. Go to Google and type a search. Your results are returned fully automated. Google has observability over failures, but there are no humans involved in each result set.

This last model would look a lot like Google, or frankly most popular sites where you're searching for a thing (Amazon, Airbnb, DoorDash, etc.). Answers are shipped automatically to the user, with operators reviewing logs, aggregates, complaint reports, etc. This is really hard. This is why search is so hard. All the above models take longer than our expectations of a web application. Autonomation might be the most frustrating one because it's possibly the best, but it can't operate at the speed we expect and it's impossible to scale wide.
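For contrast, a fully automated version of the same answer path is trivially short - which is part of its appeal. The `audit_log` list is a stand-in for real observability (metrics, log pipelines, complaint reports); everything here is illustrative.

```python
import time

audit_log = []  # stand-in for real post-hoc observability tooling

def fully_automated_answer(question, propose_answer):
    """Ship the machine's answer immediately; humans review only after the fact."""
    answer = propose_answer(question)
    audit_log.append({"t": time.time(), "q": question, "a": answer})
    return answer  # no pause, no veto, no human in the loop
```

Compared to the autonomation sketch, the pause and the veto hook are simply gone - all of the quality work moves into what happens before deployment and into reviewing the log afterward.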

There are probably a lot of markets where the answers are valuable enough, and the application narrow enough, that autonomation is not just possible but the preferred model.

posted in Artificial Intelligence