Deep Fakes Causing Real Harms: Time to Take Action

“Deep fake” technology has unleashed new powers to distort reality in ways, and on a scale, that are not yet fully understood.
 
Powered by artificial intelligence and machine learning techniques that are rapidly growing more sophisticated, deep fakes are increasingly difficult to detect. The result is attacks that distort reality more frequently and more convincingly than many in the boardroom may realize, wreaking havoc through audio and video of real people saying and doing things they never said or did. Left unchecked, deep fakes will increasingly cause marketplace disruptions, inflict individual and corporate reputational harm, and undermine our fundamental understanding of truth.

In this Corporate Counsel article, Privacy, Security, and Data Innovations partner Harry Valetk addresses the deep fake technology threat and urges organizations to define the term, invest in detection tools, build deep fake scenarios into their security plans, strategize against misinformation, and strengthen post-incident measures to mitigate risk and protect against fraud and reputational damage.