In the second of our series of podcasts on artificial intelligence, produced in association with Darktrace, we dive into something a little spookier: the world of “insider threat” detection.
There have been a number of recent high-profile cases in which individuals within companies used their access to data for self-enrichment or with ill intent, slipping past the usual policies and tools collectively described as “data loss prevention.” Most of the time, employees are long gone before the data theft is discovered (if it ever is), and preventing data loss nearly requires a Minority Report level of pre-cognition.
To get some insight into how AI might play a role in spotting insider threats, Ars editors Sean Gallagher and Lee Hutchinson talked with Kathleen Carley, director of the Center for Computational Analysis of Social and Organizational Systems at Carnegie Mellon University, about her research into identifying the tells of someone about to take the data and run. Lee and Sean also spoke with Rob Juncker, senior vice president of Research and Development at data loss prevention software company Code42, about whether AI can really help detect when people are about to walk off with or post their company’s data. And Justin Fier, director for Cyber Intelligence and Analysis at Darktrace, talked with Lee about how AI-related technologies are already being brought to bear to stop insider threats.
This special edition of the Ars Technicast podcast can be accessed in the following places:
https://itunes.apple.com/us/podcast/the-ars-technicast/id522504024?mt=2 (May take several hours after publication to appear.)