
The AI Imperative 

Business

Nov 4, 2020 - 8 minute read


AI is quietly transforming humanity. Can we vote about it? 

We want AI to make our lives better. For that, we need to understand it. Otherwise we may disrupt our societies on many levels. 

AI's ability to acquire knowledge through experience and observation is already widely deployed. Its ability to learn, to predict, and to act in various scenarios is increasing rapidly.

As we pass the year 2030, Artificial Intelligence (AI) will influence us by being woven into everything we do. By then, the rapid AI growth we are seeing in this nascent 2020 will have opened each of us to other identities from the cloud. The interactions between people, companies, and other entities are set to change. Imagine all your conversations annotated by AI emojis, filtered and prioritised by AI emphasis: how would your perception change?

Future AI Trends

AI is very good at performing tasks quickly and efficiently to achieve a set of objectives, irrespective of what those objectives are. When AI is allowed to control the real world, it becomes imperative to educate ourselves in machine intelligence, so that human and machine objectives are aligned. This is how we will be able to influence responsible, ethical, and beneficial AI development. How we learn and teach can itself be improved by using AI to understand Human Intelligence (HI) and learning interactions.

We humans increasingly communicate through networks. These networks are evolving towards decentralised models in which local processing agents manage our communications. AI is now being deployed to run those localised communications and is anticipated to be the main driver of future mobile technologies. The promise of 6G is to wrap all our communications in AI.

When AI provides the context for our communications, that context may conflict with the context we usually give to our interactions. High-context societies, where a raised eyebrow can carry much meaning, will be the most affected.

Mary Yoko Brannen's article about how Mickey Mouse was received differently by Japanese audiences than by US ones is another wonderful example of how subjective a message can be. Any household with parents from different societies will recognise how much context is inherent in the way we communicate. Edward T. Hall's book Beyond Culture discussed these ideas as early as 1976.

As AI increasingly controls communications and the label switches from 5G to 6G, we will mostly not be aware of how AI controls the way we communicate. We should still understand it. 

Professor Rose Luckin is a European thought leader in the areas of human intelligence and artificial intelligence. Dr Paul Matthews is at the coalface of lecturing in computer science and creative technologies. Both will discuss these subjects at the free webinar “AI in Education”, taking place on 5th November 2020.

Within Europe, the similarities are more striking than the differences. Germany intends to provide around €3bn of funding for its federal government's Artificial Intelligence strategy over the 2019-2025 period. This is part of a broader AI push that includes new German AI healthcare legislation similar to Finland's: pharmacies (from September 2020) and hospitals (from January 2021) are obliged to connect to the German government's cloud-based health telematics infrastructure (TI).

But there are differences between countries such as the UK and, say, Finland. The approach taken by the Finnish Centre for Artificial Intelligence (FCAI) is in many ways similar to that of the UK's AI Council. Both countries encourage specific industries and projects by targeting financial support through Public-Private Partnerships (PPPs).

The difference lies in the broader population. Around 1.5% of Finland's population has now completed the free online AI course provided by the University of Helsinki. If the UK achieved the same result, around one million Brits would have a basic AI education.
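As a rough sanity check on that scaling, here is a minimal sketch in Python, assuming a UK population of roughly 67 million (the 2020 estimate) and the same 1.5% completion rate Finland achieved:

```python
# Rough scaling check for the figure above.
# Assumptions: UK population of ~67 million (2020 estimate) and the same
# ~1.5% completion rate that Finland achieved for the free online AI course.
uk_population = 67_000_000
completion_rate = 0.015

uk_equivalent = uk_population * completion_rate
print(f"{uk_equivalent:,.0f} people")  # ~1,005,000, i.e. roughly one million
```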

This would not mean a million data scientists; it would mean one million people who are less susceptible to populism and to misleading information about how our society will work in the near future. It might stabilise democracy and enable meaningful votes on subjects such as the ownership of personal data.

The course provides a high-level understanding of how 6G and mixed reality will shape our perception of our environment, and of how network heterarchy might lead countries such as Sweden to ban some suppliers of 5G networks from their country. It also shows how objectivity will bring subjectivity into our lives, because the cognitive models of the future will be subjective.

In the future, the starting point for companies will be to compete not only on production and sales, but also on systematic data acquisition (“DataOps”) from pockets of information, in order to build the smartest maps of the markets they serve. Yves Doz has written extensively about examples such as PolyGram, which by 1998 had built an empire by connecting specific musical talent to its detailed knowledge of regional music markets. He calls it a shift from the global to the metanational.

Better and earlier are two core AI themes. For healthcare, this means providing information, diagnosis, and treatment earlier and better, owing to techniques such as computer vision and deep learning. AI is critical for earlier patient encounters in healthcare because the demographic shift between now and 2030 would otherwise place unreasonable burdens on our healthcare systems. And companies such as Telefónica have measurably improved their customers’ lives by using AI to entertain and assist communications in all their markets worldwide. 

So far, so good. But when the shift to the metanational really embraces us, it will be linked to changes in how we perceive ourselves, as Francis Fukuyama argues in his book “Identity: The Demand for Dignity and the Politics of Resentment”. In it, Fukuyama discusses how our need for self-worth is challenged when the identity our community has given us is replaced by new identities.

Jean-Paul Sartre said in 1946 that a man is what he wills himself to be. If that willpower is a finite resource, then our ability to resist how our environment changes us is also limited. 

AI really helps, and AI is set to change us. 

Should we understand it before it changes us? 

AI in Education

In terms of education and AI, there is a significant challenge that extends beyond, but is related to, ensuring that as many members of the population as possible understand enough about AI to recognise that they are already using it all the time and will increasingly do so. This challenge lies in the need for us to understand our own human intelligence. As Yuval Noah Harari wisely notes, we had better fully understand ourselves, because the AI surely will. At the moment, our education systems are built on an impoverished view of what people need to know and understand about the world. This view is largely built on old-fashioned thinking about human intelligence that fails to recognise the vast majority of what our human intelligence involves. In particular, we fail to focus on the aspects of our intelligence that cannot be automated.

“When I was a PhD student,” says Professor Rose Luckin, “I remember the celebrations when IBM’s Deep Blue beat Garry Kasparov at chess. The feeling in the department was that we had passed an important milestone on the way to what we would now call Artificial General Intelligence (AGI), because intelligent people played chess and we had a computer that could play chess better than anyone. In fact, we were at a crossroads, where the way ahead did not take us to broader, richer AI; it merely took us along the same road of rule-based AI that had enormous limitations. In the AI winter that followed, we started to recognise that much of what we took for granted about our human intelligence, such as vision, was actually far harder to achieve with AI than playing world-champion-level chess. We finally recognised that we needed to take a different route if we were to build AI that could ‘see’.”

We are at that same crossroads now. We stand faced with a false belief that our AI can be, and is becoming, as smart as we are. We fail to recognise that AI is only scratching the surface of intelligent behaviour when we compare it to our Human Intelligence (HI). The reason we are so blind to the truth is that we have focussed for centuries on a very narrow perception of HI. This narrow view of HI has influenced the education systems that we have created and is damaging the prospects of the vast majority of humanity, by continually educating people to be good at the things that AI can also be good at. The only way for us as humans to progress is to recognise the importance of the aspects of our human intelligence that cannot be automated: the aspects that are far more sophisticated than anything AI can achieve.

For example, we can learn to probe, reflect on, regulate, and understand our own intelligence. We are capable of judging our ability to deal with complex, novel situations. These abilities need to be taught and they need to be mastered, but nevertheless we have the capability to do this; AI does not. As humans, we can empathise, sympathise, and be compassionate. We can build the strong and productive social relationships that are the foundation of civil society. Our capacity for self-awareness of a very sophisticated sort sets us apart from AI. This self-awareness, or meta-intelligence, extends beyond our physical and psychological selves to the world around us, both real and virtual, physical and digital. We move seamlessly between different locations, interact with different people, and use different tools and technologies with incredible nonchalance, and we take it all for granted most of the time. This is our superpower, and we need to learn to cherish and nurture it in our education systems if we are to flourish in the AI-driven world.

Summary

How we represent ourselves will be augmented by AI. How we interact will be assisted by systems that analyse each participant. We are changing our worlds and may, like Juliet on Shakespeare's balcony, exclaim “Too early seen unknown, and known too late”.

Microsoft has called for a broad public debate as well as government regulation of AI technologies. For that, we need to be better educated about the future roles of AI and HI. Join Objectivity's upcoming free webinar “AI in Education” on 5th November 2020 to learn more about the significant impact Artificial Intelligence can have on education.

About the Authors

Peter Karsten, MSc Applied Physics

Peter Karsten is an experienced group executive, sales and management consultant with 20 years of management level experience in high tech and online media with companies such as Nokia, Citibank, TeliaSonera, Bessemer Venture Partners and Turner Broadcasting. He has filed 13 patents and speaks 5 languages.

Rose Luckin BA, PhD Artificial and Cognitive Science

Rose Luckin is Professor of Learner Centred Design at UCL Knowledge Lab and the founder and CEO of EDUCATE Ventures. Rose brings together educators, EdTech creators, and researchers to ensure the effective and innovative use of technology throughout learning. She is an internationally recognised authority on the respective future roles of artificial and human intelligence.

Paul Matthews PhD Information Science

Paul Matthews is Senior Lecturer, Department of Computer Science and Creative Technologies, University of the West of England. He has a Santa Fe Institute certificate in the Introduction to Complexity and a BSc in experimental psychology.
