Governance, Privacy and Ethics in the Age of AI
IAPP and Workday Executives Detail the Nuances of Dealing With the Impact of AI

Artificial intelligence, generative AI and machine learning have always been part of the broader privacy conversation, according to J. Trevor Hughes, CEO and president, IAPP.
More than 50% of organizations are directing AI risk concerns to their privacy leader, he said. "So, we see the privacy field, the 85,000 members that we have around the world, really upskilling to respond to this call to action that AI is presenting to us all," he said.
One of those privacy officers, Barbara Cosgrove, vice president and chief privacy officer at Workday, has helped build a framework for using AI at her company. "It's based on our principles, our procedures and our people," she said.
In this video interview with Information Security Media Group at RSA Conference 2024, Hughes and Cosgrove also discussed:
- How their respective organizations are building safety and trust in AI systems;
- Responses to emerging AI standards;
- Recommendations for compliance, cybersecurity, safety and trust.
Hughes is an attorney specializing in e-commerce, privacy and technology law. He has testified before the U.S. Congress, the U.S. Federal Trade Commission and the British Parliament on issues of privacy, surveillance and privacy-sensitive technologies.
Cosgrove has extensive expertise in leading international data protection, ethics and compliance programs, including oversight of global data privacy programs, implementation of technology compliance standards and development of "privacy by design" and machine learning "ethics by design" frameworks. She previously served as Workday's chief security officer.