Howard University professor talks about his research in emerging technologies.
TechRepublic’s Karen Roby spoke with Dr. Danda Rawat, professor of electrical engineering and computer science at Howard University, about artificial intelligence (AI), machine learning (ML) and how they can help cybersecurity. The following is an edited transcript of their conversation.
Karen Roby: I know that a lot of your focus is on AI, machine learning, and wireless networking for connected systems. Expand on that for us.
SEE: Identity theft protection policy (TechRepublic Premium)
Danda Rawat: We have about $16 million in different projects. We have a data science-related project. We have cybersecurity-related projects. We have machine learning and artificial intelligence-related projects. I can tell you that my expertise is along the lines of cybersecurity, machine learning, and wireless networking for connected systems or networked systems. What that means is, if you have the Internet of Things (IoT), where different smart things are connected with each other, and you want to make them efficient, secure, and reliable … in that case, you need not only wireless networking, or just the networking, you need secure networking. But to make them smart, we need some intelligence. So, in that case, what you need is machine learning and artificial intelligence. If you recall, when the internet was designed, nobody thought there would be cybersecurity issues, right?
As a result, we are now having all sorts of problems with cyberattacks, right? Because there are vulnerabilities out there and cyberattackers out there. And if we had thought about cybersecurity when we designed the internet, maybe we would not have this many problems now. That’s why in my research area we are focusing on how you can design something that is secure by design and intelligent for the connected systems that we are envisioning to be deployed in the coming years. One example I can tell you about is vehicular ad hoc networks. It’s not there yet, but we are expecting that there will be smart cars on the road. They will be talking to each other, but if smart cars are talking, that means your private information that is linked to the car would be broadcast to the public. And somebody might misuse that information.
How can you make sure that there is a network you can use when you need it, and that it is secure? If it’s not secure, how can you make it secure, right? And you are trying to figure out how cars can park without revealing any private information and make some intelligent traffic patterns, I mean, avoid traffic congestion. So you need to know all the topics that I mentioned: machine intelligence, cybersecurity, and wireless networking for emerging connected systems. That’s why I feel like we can go broader, but my goal is to cover all the topics or all the areas that are important for emerging connected systems.
Karen Roby: There are so many pieces involved in setting up and furthering the connected systems of the future.
Danda Rawat: My research projects are focusing on cybersecurity for artificial intelligence systems and machine learning algorithms, and the other way around too: machine learning and artificial intelligence for cybersecurity. It’s like we are tackling the problems in two ways. For example, machine learning has good results in machine vision, or computer vision, or you can look at the games. For example, a chess engine was built using machine learning and artificial intelligence. As a result, the chess game, played by a computer, beat the smartest human a couple of years ago. And it has been a very promising application of AI and machine learning. At the same time, you can look at how machine learning algorithms are compromised. If you recall, there was a Tesla car speeding. I think it slipped by at 55 miles per hour on a road [with a speed limit] of 35 miles per hour, just because of a small piece of tape [on a sign].
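The speed-sign incident is an example of an adversarial input: a small, targeted change to what a model sees flips its output. The toy classifier below is only an illustrative sketch, not Tesla’s actual system; the features, weights, and threshold are all made up for the example.

```python
# Illustrative sketch only: a hypothetical linear "sign reader" shows how a
# small, targeted input change can flip a model's output. All numbers are
# invented; this is not any real vehicle's perception system.

def classify_speed(features, weights, bias):
    """Toy linear classifier: score > 0 reads the sign as high-speed."""
    score = sum(f * w for f, w in zip(features, weights)) + bias
    return "high-speed sign" if score > 0 else "low-speed sign"

# Hypothetical learned parameters for a 3-feature sign reader.
weights = [2.0, -0.5, 0.4]
bias = -1.0

clean_sign = [0.2, 0.8, 0.5]  # score = -0.8, read correctly as low-speed
print(classify_speed(clean_sign, weights, bias))      # -> low-speed sign

# An attacker nudges one feature (analogous to a strip of tape on the sign).
tampered_sign = clean_sign[:]
tampered_sign[0] += 0.5       # score becomes 0.2, crossing the boundary
print(classify_speed(tampered_sign, weights, bias))   # -> high-speed sign
```

The point of the sketch is that the decision boundary sits close to legitimate inputs, so a perturbation far too small for a human to notice can still push the score across it.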
SEE: Social engineering: A cheat sheet for business professionals (free PDF) (TechRepublic)
In that case, we are trying to use the advantages, or the benefits, that AI and machine learning offer for securing the systems that we are envisioning in the years to come. But again, at the same time, can we use it the other way around too, right? That’s why my research is focusing on AI and machine learning for cybersecurity, and at the same time, cybersecurity for AI, because we want to secure the systems that are working for the greater good.
Another example that I could explain: if you are using, let’s say, a machine learning algorithm to filter the applicants from the application pool to hire somebody, in that case the machine learning algorithm would filter out the candidates who do not match the criteria for that particular job. But everything relies on the data that the computers or machines are analyzing. They don’t have eyes, but still, those algorithms could give results that are not fair. Maybe you’ll see some biased results, because the machine learning algorithm or AI system used to filter the candidates relied on the data. If the data is poisoned or deceptive, then the result will not be the result that we want. So, in that case, machine learning algorithms, or the data used for machine learning algorithms, should be secure. That would help us get the desired and unbiased results. So I think it’s a very important topic to cover in terms of research, in terms of development, and so on.
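To make the data-poisoning point concrete, here is a minimal sketch of a nearest-centroid hiring filter whose decision on a new applicant flips after an attacker flips a few training labels. All scores, labels, and the classifier itself are hypothetical, invented for illustration.

```python
# Illustrative sketch only: a toy nearest-centroid "hiring filter" shows how
# poisoned training labels skew results. All data here is invented.

def train_centroids(scores, labels):
    """Learn the mean score of each class from (score, label) training data."""
    hired = [s for s, l in zip(scores, labels) if l == "hire"]
    rejected = [s for s, l in zip(scores, labels) if l == "reject"]
    return sum(hired) / len(hired), sum(rejected) / len(rejected)

def screen(score, hire_centroid, reject_centroid):
    """Label a new applicant by whichever class mean their score is closer to."""
    if abs(score - hire_centroid) <= abs(score - reject_centroid):
        return "hire"
    return "reject"

scores = [90, 85, 80, 40, 35, 30]
clean_labels = ["hire", "hire", "hire", "reject", "reject", "reject"]

# An attacker flips several training labels (data poisoning), dragging the
# class means toward each other and corrupting the learned decision rule.
poisoned_labels = ["hire", "reject", "reject", "hire", "hire", "reject"]

applicant = 75  # a strong candidate
print(screen(applicant, *train_centroids(scores, clean_labels)))     # -> hire
print(screen(applicant, *train_centroids(scores, poisoned_labels)))  # -> reject
```

With clean labels the class means are 85 and 35, so a score of 75 is hired; after poisoning they shift to 55 and 65, and the same candidate is rejected, even though the classifier code never changed. Only the data did.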
Karen Roby: How do we get more people involved in cybersecurity? We know the demand is there, and we’re only going to hear about it more in the future. How do we get more people involved and more students specializing in this area?
Danda Rawat: We need to invest in, I think, STEM education. And another thing is, I think, we need to attract students to the field. Again, it’s very challenging to recruit students in certain fields. I’ll give one example: wireless communication. If you are trying to secure wireless communication, and you are designing a security algorithm in a wireless environment, it is hard to visualize how secure that signal is, right? So we need to make sure that students are getting some exposure to the technical details through hands-on experiments. If you just say, I designed some secure wireless algorithm and it is transmitting a signal securely, it would be hard to convince anybody, because you cannot see the wireless signal in the air, right?
But there are certain fields where you can see. For example, if you are working on image processing and video processing, you will see the inputs, maybe distorted images or blurry images. And if you process the image, you will see the clear image. So you can see certain outcomes, right?
I think the summary is we need to educate the potential students and give them some hands-on experience so that they would be attracted to the program, stay in the program, and graduate with their degrees. Similarly, in AI and machine learning, when we see the applications, they are very promising, but when you try to develop them, it’s very challenging, right? I think we need to look at both sides: we can show the applications to attract the students, and teach them in a way that they would stay in the program and graduate from the program. So I think we need to do a better job to recruit, retain, and train them in an excellent environment and graduate them with their degrees. Otherwise, I think we will be facing a severe shortage of highly skilled personnel in the near future.
Image: Mackenzie Burke