relating to or characteristic of humankind: the human body | the complex nature of the human mind.
• of or characteristic of people as opposed to God or animals or machines, especially in being susceptible to weaknesses: they are only human and therefore mistakes do occur | the risk of human error.
• showing the better qualities of humankind, such as kindness: the human side of politics is getting stronger.
As for the suggestion that educators will never be replaced: I think this is a bigger area of debate. If we simply use technology to deliver the same curriculum of content-based knowledge, I believe we could very easily be replaced. Google does this for us already - a Google Curriculum App could easily replace a teacher who just delivers facts and content. The point about technology is that it should shift our practices; it is not about delivering the same stuff but using a computer. It means a paradigm mind-shift towards skills that enable learners to be successful; it enables personalised learning based on spontaneous teachable moments; it means a strong relationship between learner and educator - which is what keeps the 'human' necessary. The fact that elearning courses strive for more 'human experiences' and more face-to-face meetings means we still have a desire to communicate with a human being. In fact, as I explore in my post about digital citizenship for the Flat Classroom Teacher Certification course, digital citizenship is actually about how to communicate with other people through technology; it is a whole new way of communicating that technology has allowed us. But this week I have questioned what it is we really want: is this desire to communicate with a human being actually a desire to communicate with someone who cares and understands? If that could be a non-human, would it still work? What if, in the future, robots have that capability; that 'human' capacity to respond to the needs of learners, to empathise and understand, to personalise... This brings into question what we actually mean by 'the human'.
Developing from the notion of the integration of technology into the human, the short film "Sight" (see my blog post: The Machine is Us) goes beyond interacting with and using tech for communication; it becomes our actual world, blended into our very view and experience of it. I question how far off this is (Google Glass); the technology might not actually be in place yet, but the experience already is. Think about the concepts presented in the short film "Avatar Days", which begins to explore the blurred line between player and avatar, human and simulation, and offers a glimpse into a world already in motion, where humans spend time as 'other' and operate as 'other', giving personality and humanistic principles to pixels. "True Skin" takes this further and explores the robotic in the human; in a world where enhancement is normal, the boundary between the human body and the robotic body has been erased.
If these two films look at the robot in the human, "Robbie" and "Gumdrop" look at the human in the robot. They raise questions about what it means to be human in terms of advanced artificial intelligence. Both address the cybercultural possibility of "machinic sentience", but in very different ways. "Robbie" is a moving documentary, set in a far-distant future world, that charts the existential reflections of an ageing robot drifting alone through space on the last of his battery life. "Gumdrop" is a sweet robot actress who offers a lighter, less dystopian view of the future of artificial intelligence, but both make me think about what it is to be 'human'. As a vegan, I entrust sentient thought to animals - not a world-wide belief, I know, but mine. I believe they have thoughts, feelings, memories... These films suggest that in the future, "humanistic principles of autonomy, rationality, self-awareness, responsibility, resilience and so on can be held by an artificial intelligence within a mechanical form". If so, what does that say about the extent to which we rely on human cognition and the flesh of a human body to give 'human' meaning to the experience of the world? Do we NEED actual humans for a 'human' experience?
The whole concept of 'artificial intelligence' is an odd one too. If we are not born digital natives but have to learn how to operate successfully in a digital world, how is this different to robots who 'learn' to be 'human'? Can there even be 'artificial' intelligence? I struggle with the actual word 'artificial'; if artificial is defined as "made or produced by human beings rather than occurring naturally, especially as a copy of something natural", think about this in terms of the 2001 film "A.I. Artificial Intelligence", where a robotic boy longs to become 'real' so that he can regain the love of his human mother. Is this artificial - once it is made, can it be anything other than made? Once it is created, is it not real? This brings into question the nature of mind, memory and learning, and the ways in which technological mediation is positioned in relation to them. Does what we are physically made of define us? If we can feel, if we have emotions, are we not 'human'? Surely being human is not really about being made of flesh and blood; there are many people out there made of 'meat' who are not 'human', who operate outside the realm of what most consider humanistic principles, who show no rationality or responsibility. I am not religious and don't hold that we are defined by a 'soul' per se, but I am spiritual and believe we are more than the vessel that holds us - therefore, does it matter what material that vessel is? If we can think, be responsible for others, show kindness, reason and compassion, rationality and self-awareness - are we, whatever the 'we' is, 'human'? Trans-humanism extends the humanistic principles of rationality, scientific progress and individual freedom: 'humanity' is a temporary condition, and the future of human evolution is in the direction of a post-human state in which technological progress will free us from the inconveniences of limited lifespan, sickness, misery and intellectual limitation.
It is these broken-down lines of limitation that we need to explore as educators; 'technology-enhanced learning' appears to have become the new acceptable global term - and the emphasis should be on 'enhanced'. Presently, technology should be one of our tools, allowing us to deliver more effective, personalised, interactive, relevant learning. As an educator - and a learner - responding to the idea that technology, and the Internet in particular, damages our capacity to think, I suggest that 'artificial intelligence' is content-based learning; it is facts learned by rote that offer no guidance or support for life in the real world; it is just information that involves no deep thinking and does not foster skills that allow more learning to happen. Carr (2008) suggests that our "media environments develop their own logic, to which we adapt socially and physiologically", which may be the thin end of the trans- or even post-human future state wedge. We are already out there; we exist virtually in social media and respond accordingly. I have relationships with people I have never and will never meet; I know what they look like, how they think, what they feel and what they believe in; I have virtual relationships that enhance my learning - are they 'artificial'? More interestingly, what happens to our social media when we die? Will we still exist, forever in stasis, in binary, in code? Is that artificial intelligence because it has no flesh? Could we be recreated in a future world from all this information that exists forever, out there? Is that who we are?
----------------------------------
Coursera, Week 4: Redefining the Human. https://class.coursera.org/edc-001/wiki/view?page=Redefiningthehuman
Carr, N. (2008). "Is Google Making Us Stupid?" The Atlantic. http://www.theatlantic.com/magazine/archive/2008/07/is-google-making-us-stupid/306868/ Accessed 21 Feb. 2013.