Stephanie Forrest is the director of the ASU Biodesign Institute Center for Biocomputing, Security and Society, and she is a professor in the School of Computing, Informatics and Decision Systems Engineering. She has more than 20 years of experience leading interdisciplinary research and education programs, particularly at the intersection of biology and computation, including work on computer security, software engineering and biological modeling.
ASU Now spoke with Forrest about her take on the computer security landscape and what computer scientists can learn from human immune systems and biological evolution.
Question: You research the intersection of biology and computation. What can biology teach us about computer security?
Answer: Biology is the true science of security. And by that I mean that organisms have had to contend with adversaries and competitors from the very beginning of their evolutionary history. As a result, they’ve evolved an incredible repertoire of defense systems to protect themselves. Every cell has a defense system, every kind of animal has a defense system, and even ecological systems have built-in defenses.
Looking at how biological systems have learned to protect themselves can suggest novel approaches to security problems. One of the easiest places to see this is in the immune system, which plays a major role in protecting individual organisms from foreign viruses and bacteria. What I try to do is look at biological mechanisms and principles and translate those mechanisms and architectures into computational algorithms that protect computers.
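One classic example of this translation, which Forrest helped pioneer, is the negative-selection algorithm: just as the immune system eliminates immature cells that react to the body's own tissues and keeps the rest as detectors of foreign material, the algorithm generates random candidate detectors, discards any that match known "self" (normal) patterns, and uses the survivors to flag anomalous input. The toy sketch below illustrates the idea over fixed-length bit strings with an r-contiguous-bits matching rule; the parameters and string encoding are illustrative choices, not a description of any deployed system.

```python
import random

def r_contiguous_match(a, b, r):
    """True if strings a and b agree in at least r contiguous positions."""
    run = best = 0
    for x, y in zip(a, b):
        run = run + 1 if x == y else 0
        best = max(best, run)
    return best >= r

def generate_detectors(self_set, n_detectors, length, r, seed=0):
    """Negative selection: keep only random candidates matching no 'self' string."""
    rng = random.Random(seed)
    detectors = []
    while len(detectors) < n_detectors:
        cand = "".join(rng.choice("01") for _ in range(length))
        # Censor any candidate that would react to normal behavior.
        if not any(r_contiguous_match(cand, s, r) for s in self_set):
            detectors.append(cand)
    return detectors

def is_anomalous(sample, detectors, r):
    """A sample is flagged if any surviving detector matches it."""
    return any(r_contiguous_match(sample, d, r) for d in detectors)

# Toy usage: "self" is all-zeros traffic; detectors never fire on it.
normal = {"0000000000"}
dets = generate_detectors(normal, n_detectors=5, length=10, r=8)
print(is_anomalous("0000000000", dets, r=8))  # False by construction
```

By construction, no detector can match a string in the self set, so false alarms on known-normal behavior are impossible; the trade-off, as in the immune system, is that coverage of the anomalous "non-self" space is probabilistic and depends on how many detectors you generate.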
Q: What is your take on the scope of data breaches over the past decade?
A: We consumers don’t have as deep an understanding of the scope of these breaches as we should. Today, we’re essentially forced, either through our jobs or just to conduct our lives, to give up huge amounts of personal information to third parties, who have demonstrated time and again that they cannot protect it. As a result, our data are everywhere — in the hands of foreign governments, in the hands of cybercriminals, in the hands of the media, and in the hands of corporations we may never have heard of.
The impact of leaked data is just as important as the number of stolen records. We know that data about millions of people have been taken. But if you are never actually hurt by the stolen information — you’ve never had money stolen or been blackmailed — are you harmed just by people knowing your credit score?
Courts have ruled on this, and in my view, they have set an unattainable standard: you have to prove that you have been harmed. So if my personal information is in a database that gets hacked, unless the criminal uses it to do something like steal my money, and unless I can prove that that specific criminal used that specific data to steal my money, I can’t sue the person responsible for the database that was breached.
That seems like an impossible standard. If there are several copies of my social security number out in the world, how can I prove which copy was the one that let the criminal take my money?
Q: Why are current computer systems so vulnerable to hacking?
A: Part of the problem is how the tech industry has grown up, and it’s very difficult technically to go back and retrofit systems to prevent problems we’re seeing today. Our IT systems today consist of many tightly integrated systems of software that talk to each other, and they’re all controlled by different organizations, companies and institutions.
When something bad happens, even if you could assign fault, that fault is usually distributed over so many entities that there’s no effective stick. The carrot is for companies to make more money by putting more technology into our lives and increasing our dependence on it. There’s nothing reining that in. We also don’t have software liability or consumer protection, even though software is widely used by everyone.
Another issue is that cloud storage and cloud computing have exacerbated our vulnerability. In the old days, my data were just on my own computer, and if my own computer got hacked, it was just my information that was lost. Now that my data are merged with everyone else’s, like with Equifax, then one breach has enormous impact.
Q: What are the biggest security challenges we’ll face in the near future?
A: The immediate challenge is the internet of things. In addition to the insecure software and networks we already have, we’re adding devices that interact with our physical lives. We’re already doing a poor job of securing our software systems, but once these systems have the ability to control our physical and virtual environments, the complexity and risks go up dramatically.
Another risk I see is the possible end to the general purpose computer. As a computer scientist, this terrifies me. The flexibility of computing and software produced the wonderful technology we have today. Today’s computing devices are incredibly general and because of that, they can do a wide range of things. I worry that, in the rush to secure our systems, the easiest path will be to restrain functionality. Losing that general purpose computing ability would be a terrible loss, not just for computer scientists, but also for society.
We also have real questions about how the growth of technology, and its security or insecurity, will interact with our democratic process. We in the U.S. value free speech more than any other place in the world. But I think we’re seeing the limits of that, and we need to reformulate what free speech means in a world with social media platforms. Having some kind of public discourse about this is important, and it will have an impact on cybersecurity.
Another future challenge will be in the area of consumer protection. My concern here is that we should be having the public discourse now about what principles we as a society want to embrace, rather than just waiting for another crisis and a knee-jerk reaction. I fear something serious happening, say a massive cyber-enabled power failure or manipulation of the financial markets, and suddenly there is pressure to pass a law requiring, for example, that any device with access to the internet has to run a particular type of software. That would be counterproductive and against our principles, but it also would increase our overall risk because it would create new single points of failure.