How science represents the real world can be cute to the point of frustration. In 7th grade mathematics you get problems like:
“If six men do a piece of work in 19 days, how many days will it take for 4 men to do the same work when two men are off on paternity leave for four months?”
Well, of course, there was no such thing back then as men taking paternity leave. But you can’t help wondering about the universe of such a problem. What was the work? Were all the men the same? Did they do the work in the same way? Wasn’t one of them better than the rest, and therefore the leader of the pack who got to decide what they would do on their day off?
Here is the definition of machine learning according to one of the field’s pioneers, Tom M. Mitchell:
“A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P if its performance at tasks in T, as measured by P, improves with experience E”
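To make the moving parts of that definition concrete, here is a minimal sketch of what an engineer might mean by E, T and P. The task, the data and the names below are all hypothetical, chosen only for illustration: the “program” is nothing more than a running mean, its experience E is a stream of measurements, and its performance P is the error of its prediction.

```python
def learn_mean(samples):
    # The "program": its hypothesis is simply the mean of what it has seen.
    return sum(samples) / len(samples)

def error(prediction, target):
    # Performance measure P: absolute error of the prediction.
    return abs(prediction - target)

# Hypothetical stream of measurements (experience E); its true mean is 5.0.
true_mean = 5.0
experience = [6.2, 5.8, 3.9, 4.1, 5.3, 4.7, 5.6, 4.5]

# Task T: predict the true mean.
# With little experience E (two observations), the error is large.
p_small = error(learn_mean(experience[:2]), true_mean)
# With all the experience E, the error shrinks.
p_large = error(learn_mean(experience), true_mean)

print(p_small, p_large)  # performance at T, measured by P, improves with E
```

In Mitchell’s terms, the program “learns” because P improves as E grows; nothing in the definition asks who gathered the measurements, under what conditions, or why this particular P was chosen as the standard.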
This can be difficult for a social scientist to parse, because you’re curious about the experience E and the experiencer of E. What is this E? Who or what is experiencing E? What are the conditions that make E possible? How can we study E? And who set the standard for performance P? For a social scientist, experience E itself is not the important part; rather, how E is achieved, sustained and improved upon is. How science develops these problem-stories becomes an indicator of its narrativising of the world: a world that needs to be fixed.
This definition is the beginning of the framing of ethics in an autonomous vehicle. Ethics becomes an engineering problem to be solved by logical probabilities executed and analysed by machine learning algorithms. (TBC)