First, let's verify that it's really Dave who logged in. Over the past several years, computer-security researchers at SRI, Mitre, and other organizations (including the U.S. government) have learned that individuals have distinctive system-usage signatures. Data that can make up that signature include the name (or type) of programs executed, the method of changing system directories, the login time, and session length. Let's say that Dave normally uses the mainframe during business hours to read e-mail. One Saturday night around 2:00 a.m., he logs in, scans the system read-only directories, and then attempts to rewrite the master password file. There's a good chance your system's been infiltrated.
That's a simple scenario, of course. Programmers, who perform a wide variety of computer activities at all hours of the day and night, are more difficult to validate than 9-to-5 data-entry clerks. On an academic network, you'll frequently need to recalculate your baseline models for each user as his or her expertise grows. The computer is vulnerable if hackers break into a new user's account before there's enough data to train the neural net properly or construct the model. Still, studies show that if the operating system is gathering the proper data, AI techniques can be applied in this area.
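The baseline idea can be sketched in a few lines. This is a minimal illustration, not any particular product's method: it builds a frequency profile of one signature feature (login hour) from hypothetical session records and flags hours that are rare or unseen in the baseline.

```python
from collections import Counter

def build_profile(sessions):
    """Build a per-user baseline: how often each login hour appears."""
    hours = Counter(s["hour"] for s in sessions)
    total = sum(hours.values())
    return {h: n / total for h, n in hours.items()}

def is_anomalous(profile, session, threshold=0.05):
    """Flag a session whose login hour is rare (or absent) in the baseline."""
    return profile.get(session["hour"], 0.0) < threshold

# Dave's history: weekday business-hours logins (hypothetical data).
history = [{"hour": h} for h in [9, 9, 10, 11, 14, 15, 16, 9, 10, 14]]
profile = build_profile(history)

print(is_anomalous(profile, {"hour": 10}))  # False: a normal mid-morning login
print(is_anomalous(profile, {"hour": 2}))   # True: a 2:00 a.m. login has no precedent
```

A real system would combine many features (programs run, directories visited, session length) and, as the essay notes for academic networks, would have to recompute the baseline as each user's habits evolve.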
Expert systems can be applied to the second problem: trying to detect whether Dave (or the intruder using Dave's account) is misbehaving. A network monitoring tool can see what commands Dave is issuing (such as changes to other users' files, or altering permission flags on various files). If the knowledge base contains data on known ways of hacking superuser privileges or crashing the system, it can watch for that type of activity. If Dave issues the first two commands in a dangerous three-command sequence, the expert system could alert the systems operator, flash a warning on Dave's screen ("What are you doing, Dave?"), or even lock his account out of the system.
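The sequence-matching rule described above can be sketched as follows. The attack signature here is invented for illustration; the point is the mechanism: after each command, report how far along any known dangerous sequence the user's recent commands have progressed, so the monitor can escalate from a warning at step two to a lockout at step three.

```python
DANGEROUS_SEQUENCES = [
    # Hypothetical three-command attack signature: scan a system directory,
    # stash a copy of the password file, then try to overwrite the original.
    ("ls /etc", "cp /etc/passwd /tmp/x", "cp /tmp/x /etc/passwd"),
]

def check_command(history, command):
    """Append the command and return the length of the longest prefix of
    any known dangerous sequence that the recent commands now match."""
    history.append(command)
    best = 0
    for seq in DANGEROUS_SEQUENCES:
        for n in range(len(seq), 0, -1):  # try longest prefix first
            if tuple(history[-n:]) == seq[:n]:
                best = max(best, n)
                break
    return best

session = []
print(check_command(session, "ls /etc"))                # 1: first step matched
print(check_command(session, "cp /etc/passwd /tmp/x"))  # 2: warn Dave
print(check_command(session, "cp /tmp/x /etc/passwd"))  # 3: lock the account
```

A production knowledge base would match on parsed command semantics rather than literal strings, but the escalation logic is the same.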
Perhaps you're thinking that Big Brother is watching. You're right. Instead of Orwellian thought police monitoring your private conversations, you might soon have AI software watching your every keystroke. Given today's business realities, we might as well get used to that unpleasant idea.
I wrote the above essay in June 1994 and recently stumbled across it. Eighteen years later, it’s still relevant.