The Problem with Sending Robots to Troubleshoot

Now if a robot is given an order, a precise order, he can follow it. If the order is not precise, he cannot correct his own mistake without further orders. Isn't that what you reported concerning the robot on the ship? How then can we send a robot to find a flaw in a mechanism when we cannot possibly give precise orders, since we know nothing about the flaw ourselves? 'Find out what's wrong' is not an order you can give to a robot; only to a man. The human brain, so far at least, is beyond calculation.

Notes:

In his 1955 story "Risk!", Isaac Asimov has Susan Calvin explain the problem with sending robots in to troubleshoot a situation. Fifty-seven years later, I find that her words accurately describe why it is hard to hand off certain support tasks to new developers.

Folksonomies: troubleshooting, software support
