Reliance acsn’s Alex Miller interviews Professor Nick Jennings CB, FREng (also a Reliance acsn Advisory Board Member) about his recent Lovelace Medal for his groundbreaking work on multi-agent systems for AI, and about the past, present and future of this much-misunderstood technology.
Recently, I had the pleasure of talking to Professor Nick Jennings CB, FREng, Vice-Provost for Research and Enterprise and Professor of AI at Imperial College London. Professor Jennings’ research in the fields of AI, autonomous systems and cyber security is internationally renowned and has most recently been recognised through the joint award of the Lovelace Medal, given in recognition of an outstanding contribution to the understanding and advancement of computing.
Professor Jennings and I begin by discussing this incredible achievement.
Alex: Professor Jennings, firstly, congratulations on behalf of Reliance acsn for your recent joint award of the Lovelace Medal.
Nick: Thank you, it has been a great journey. I’ve been an AI researcher for 30-odd years now and I was really pleased to be awarded the Lovelace Medal, particularly as it was awarded jointly with a colleague with whom I’ve collaborated for a long time. For our field of multi-agent systems (MAS), it’s really good to see it come into the fold of respectability and be recognised as an important discipline that can make valuable contributions.
Alex: I wonder, when you first entered the field of multi-agent systems, it must have been a small research community. How did you become involved in AI and MAS?
Nick: Yes, it was a very small community. I went to do a PhD at Queen Mary’s College on an EU-funded project in distributed artificial intelligence, and I got really interested in AI just from the thoughts and possibilities of trying to design and build a machine that would behave in a smart way; I found that intrinsically interesting.
Alex: Absolutely. I think when most people think of AI and Computer Science, multi-agent systems are not a term that springs to mind. What are multi-agent systems and why are they an important feature of artificial intelligence?
Nick: An agent is an alternative word for an AI system that can act with a degree of autonomy: it can act within its environment and take decisions by itself. The ‘multi’ part comes when you have a collection of many agents, and I have developed algorithms that allow agents to cooperate, coordinate and compete. For me, this is the next evolution of AI.
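To make the ‘multi’ part concrete, here is a minimal, illustrative sketch of agents coordinating over shared work. It is a toy allocation rule invented for this article, not one of Professor Jennings’ algorithms; the agent names and bidding scheme are assumptions for the example.

```python
# Toy multi-agent coordination: each agent autonomously values tasks,
# and tasks go to the highest bidder (an invented, illustrative rule).

class Agent:
    def __init__(self, name, skill):
        self.name = name
        self.skill = skill  # the kind of task this agent handles best

    def bid(self, task):
        # The agent's private valuation: higher when the task matches its skill.
        return 2.0 if task == self.skill else 1.0

def allocate(tasks, agents):
    """Assign each task to the agent that bids highest for it."""
    assignment = {}
    for task in tasks:
        winner = max(agents, key=lambda a: a.bid(task))
        assignment[task] = winner.name
    return assignment

agents = [Agent("scanner", "scan"), Agent("patcher", "patch")]
print(allocate(["scan", "patch", "report"], agents))
# {'scan': 'scanner', 'patch': 'patcher', 'report': 'scanner'}
```

Even in this toy, the key property Nick describes is visible: no central controller tells an agent what it is worth doing; each agent decides for itself, and the allocation emerges from their interaction.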
Alex: When we look at AI and multi-agent systems in the context of cyber security, one of your recent publications outlines methods for detecting dictionary-based Domain Generation Algorithm (DGA) domains in network traffic using deep learning. Improvements to the detection of malicious and command-and-control (C2) traffic are a welcome achievement and of great use to cyber security blue teams. How do you see such research being integrated into existing infrastructure?
Nick: I really think cyber is an important domain for AI. Modern computer systems and networks are so complex and interconnected that people and low-level tools alone do not scale. Humans need more support from the machinery that is helping to predict and protect those systems: what is normal, what is abnormal, what to worry about and what not to worry about.
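As a toy illustration of the dictionary-DGA idea raised above: dictionary DGAs build domains by concatenating real words, so one crude signal is whether a domain label segments cleanly into several dictionary words. The published research uses deep learning; this heuristic, and its tiny wordlist, are invented purely for illustration.

```python
# Crude dictionary-DGA signal (NOT the deep-learning method from the paper):
# flag labels that split into two or more known dictionary words.

WORDS = {"secure", "cloud", "data", "net", "mail", "update", "server", "login"}

def segments_into_words(label, words=WORDS):
    """Return True if `label` is a concatenation of 2+ wordlist words."""
    n = len(label)
    reachable = [False] * (n + 1)  # reachable[i]: label[:i] splits into words
    reachable[0] = True
    pieces = [0] * (n + 1)        # most words used in a valid split of label[:i]
    for i in range(1, n + 1):
        for j in range(i):
            if reachable[j] and label[j:i] in words:
                reachable[i] = True
                pieces[i] = max(pieces[i], pieces[j] + 1)
    return reachable[n] and pieces[n] >= 2

print(segments_into_words("secureupdate"))  # True  -> DGA-like pattern
print(segments_into_words("imperial"))      # False
```

A real detector would need a large lexicon and, as in the research Alex mentions, a trained model to separate benign compound names from algorithmically generated ones.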
Alex: Interesting, so we are perhaps looking at the next generation of aid for cyber security analysts. Are there any other prime candidates for AI integration with cyber security?
Nick: The testing of networks, so that they can be probed continuously for weaknesses. Also verifying the users of a system: biometrics, face recognition and AI will give us other forms of authentication.
Alex: Perhaps now we can move on to some of the issues thought to affect AI systems. What do you feel are the greatest obstacles still to overcome in the development of AI?
Nick: For me, there are a few of these:
- An AI’s ability to understand what it knows and, just as importantly, what it doesn’t know.
- AI systems being able to explain why they have made particular choices. It’s important for the computer to arrive at a rationale for a particular decision, and a lot of machine learning algorithms are currently quite opaque in that respect.
- How AI can work with us as humans and form part of a partnership. Machines are currently very poor collaborators and, in the partnership model, must improve.
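The first point above can be sketched as a predictor that abstains when its confidence is low rather than guessing. The labels, scores and threshold here are invented for the example; it simply shows the behaviour of a system that flags when it doesn’t know.

```python
# Illustrative sketch: a classifier that declines to answer below a
# confidence threshold, instead of silently returning its best guess.

def predict_with_abstention(scores, threshold=0.8):
    """Return the top label if its score clears the threshold, else 'unsure'."""
    label, score = max(scores.items(), key=lambda kv: kv[1])
    return label if score >= threshold else "unsure"

print(predict_with_abstention({"benign": 0.95, "malicious": 0.05}))  # benign
print(predict_with_abstention({"benign": 0.55, "malicious": 0.45}))  # unsure
```

Surfacing an explicit "unsure" is exactly what lets the human partner in Nick’s third point know when to step in.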
Alex: It’s strange to think that machines are not naturally risk averse and that such systems need to be taught this skill. I think more people are aware, though, of the lack of accountability and the complexity of AI systems. Is this more of an issue in multi-agent systems, where multiple AI systems are interacting?
Nick: A multi-agent system in which agents represent different organisations or individuals is inherently more unpredictable than a single system. Having said that, even one AI system has enormous complexity around what it is doing. So multi-agent systems do make this harder, but that is in their nature and you have to accept it. We will continue to develop tools that add guarantees about what a system will do; it’s still very much an open research area.
Alex: Continuing to look to the future of AI, many AI products are becoming more widespread, with GUIs that allow non-experts to operate them. In many cases, tools such as NLP systems are presented as a ‘black box’ with no insight into their accuracy, precision, or false and true positive rates. Do you think these tools are useful or do they provide a false sense of security?
Nick: There is a burgeoning industry in AI tools emerging, and a number of these are marketing claims that have more to do with driving up stock prices than with any technology behind them. Having said that, there are many good examples of systems using AI. Frameworks are starting to emerge around the ethical deployment and algorithmic accountability of AI, which act as a checklist when deploying it. When you are stuck with a black box and don’t know how it does something or what it has been trained on, you should be very cautious about deploying it broadly.
Alex: Following on from that, do you have any views on whether and how the world of AI can be demystified for those outside the industry? For example, since the pandemic, many people have been using the term exponential to mean ‘growing quickly’ rather than to refer to the exponential function. Is this a problem and, if so, how can we combat it?
Nick: One of the most frustrating things for me as an AI researcher is the press that it can get. It is important that we rebalance some of the dialogue in the public domain. On the UK government’s AI Council, we are working on positive messaging around AI. I think a general campaign is required around AI and data literacy. The pandemic is a really interesting example of people looking at data far more than they have done previously. People should be able to understand the value of data, understand what you can and can’t figure out from data, and know how to probe it. As that improves, discussing AI becomes an easier job. Some form of guidance around the ethics and checks that have gone into AI systems is also needed, to help people navigate the purchase or use of them.
Watch the whole interview here
PROFESSOR NICK JENNINGS
ADVISORY BOARD MEMBER
Professor Nick Jennings CB, FREng is Vice-Provost (Research) at Imperial College. He is responsible for promoting, supporting and facilitating the College’s research performance and for leading on the delivery of the Research Strategy. Nick also holds a chair in Artificial Intelligence in the Departments of Computing and Electrical and Electronic Engineering.
Before joining Imperial, Nick was the Regius Professor of Computer Science at the University of Southampton (where he is still a Visiting Professor) and the UK Government’s Chief Scientific Advisor for National Security.
Professor Jennings is an internationally recognised authority in the areas of artificial intelligence, autonomous systems, cyber security and agent-based computing.
Alex has a background in mathematics which lends itself to the analytical and critical thinking skills required in penetration testing. As a CREST Registered Tester, Alex has experience delivering a wide range of penetration tests.