AI and human values

Next week, I will have the pleasure of giving a keynote speech at National Taipei Normal University as part of a conference on Artificial and Human Intelligence. As I prepare my speech, I would like to share some of my thoughts on the subject with you.

First, we must separate the two aspects of Artificial Intelligence (A.I.) as used in education. On the one hand, we have A.I. that directly supports learning: for example, software that analyses student responses and proposes a personalised learning programme based on the results – what we call ‘adaptive learning’. On the other hand, there are concepts like ‘deep learning’ and ‘machine learning’, which, in short, involve building abstract representations of data. To avoid confusion, let’s be clear: in the latter case we teach the machines to learn, not the students, even if advances in that area benefit students too.
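To make the ‘adaptive learning’ idea concrete, here is a deliberately toy sketch: given a student’s per-topic results, pick the topic where they are weakest as the next thing to practise. The function name, data shape and selection rule are all illustrative assumptions of mine, not how any real adaptive-learning product works.

```python
def next_topic(scores: dict[str, tuple[int, int]]) -> str:
    """Toy 'adaptive learning' rule: scores maps topic -> (correct, attempted);
    return the topic with the lowest accuracy so far."""
    def accuracy(topic: str) -> float:
        correct, attempted = scores[topic]
        return correct / attempted if attempted else 0.0
    return min(scores, key=accuracy)

# Illustrative student data: 3/10 on fractions, 8/10 on geometry, 5/10 on algebra.
progress = {"fractions": (3, 10), "geometry": (8, 10), "algebra": (5, 10)}
print(next_topic(progress))  # prints "fractions", the weakest topic
```

Real systems are of course far more sophisticated, but the principle is the same: the programme adapts to the learner’s results rather than following a fixed sequence.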

Discussions about Artificial Intelligence in education encompass both aspects of A.I.: providing students with the most effective tools to help them learn better (which, by the way, represents a gigantic market); and imagining the place reserved for humans in a world where machines can carry out colossal calculations and therefore predict the future (a place that does not make much money). We are familiar with the concept of A.I. as a tool to help students learn thanks to the expression “skills for the 21st century” – an endlessly updated catalogue of educational priorities. Discussions concerning both aspects are of vital importance.

Let’s take a look at a possible scenario detailing the contribution of A.I. (I concede, it is a bit exaggerated): students take tests at their school – an institution that prides itself on its benevolence. The results are analysed and compared with other data (the parents’ level of education, their professions, place of residence…). The end result is a very detailed profile for each student, outlining their chances of success, their future academic results, their ideal profession and so on. The school goes to great lengths to support each child every step of the way – creating ever more impressive indicators and statistics, all made possible by advances in A.I. research – and in doing so the benevolent school does exactly what it swore it never would: it establishes norms. Children’s academic performances are then measured against these norms. The norms help the school judge each student. If the gap between a child’s performance and that of the “average” student grows too large, the school invites the parents in for a meeting and explains that the child no longer matches the school’s criteria (they won’t put it this way, but the precise term is that he or she is deviant – deviating from the norms). The place of the student, their value – at school and beyond – is up for discussion (he or she is re-evaluated).
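The mechanics of that scenario are uncomfortably simple to write down. The hypothetical sketch below reduces a cohort to a statistical norm and flags any student whose score falls too far from the mean – exactly the kind of ‘deviance’ detection described above. The data, names and threshold are invented for illustration; no real school system is being quoted here.

```python
import statistics

def flag_deviants(scores: dict[str, float], max_z: float = 1.5) -> list[str]:
    """Flag students whose score lies more than max_z standard deviations
    from the cohort mean. The threshold is an arbitrary, invented 'norm'."""
    mean = statistics.mean(scores.values())
    stdev = statistics.pstdev(scores.values())
    return [name for name, score in scores.items()
            if stdev and abs(score - mean) / stdev > max_z]

# Illustrative cohort: four students near the mean, one far below it.
cohort = {"Ana": 72, "Ben": 68, "Chloé": 75, "Dan": 70, "Eli": 20}
print(flag_deviants(cohort))  # prints ['Eli']
```

The point of the exercise is how little the code knows about Eli: a single number, a mean and a threshold are enough to label a child – which is precisely why the ethical questions below matter.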

The scenario raises all kinds of ethical issues. Tools and institutions are not intrinsically benevolent. We need a great deal of human intelligence to defend ourselves against such a scenario and to limit the potential damage A.I. could cause. The question we should be asking ourselves is how to ensure that we explicitly teach skills worthy of human beings: empathy, compassion, care for others, and an appreciation of diversity as an invaluable asset that cannot be reduced to a statistical norm. There are plenty of other human-intelligence skills to be taught and learnt, and you probably all have something to say on the matter. I would therefore like to invite you to share your ideas with me – please don’t hesitate to write and let me know what you think.

Have an excellent weekend!
