The “Three Pillars” (people, process, and technology) management framework requires a delicate balance to achieve successful operational outcomes. Although the ‘technology’ pillar has dominated the conversation of late, cybersecurity practitioners are the backbone of your organization's defense against...
In the latest weekly update, ISMG editors discussed the Trump campaign's leaked documents and the many hacker groups targeting the U.S. presidential election, the potential for OpenAI's new voice feature to blur the line between AI and human relationships, and insights from the Black Hat Conference.
As cloud adoption accelerates, the unchecked growth of nonhuman identities is exposing companies to increased risks. Adam Cheriki, CTO and co-founder of Entro Security, explains why securing these identities is crucial and how the company's platform delivers a comprehensive solution.
Microsoft's Sherrod DeGrippo delves into the rise of SIM swapping, the role of social engineering in cyberattacks, and the emerging use of AI by threat actors. She emphasizes the need for real multifactor authentication and advanced strategies to counter these evolving threats.
Cyberattacks have become increasingly disruptive and often involve encryption or deletion of data that makes systems inaccessible. This creates substantial downtime and complicates the recovery process for organizations, said Jason Cook, AVP of worldwide partner sales engineering at Rubrik.
The widespread use of generative artificial intelligence has brought on a case of real life imitating art: Humans have begun to bond with their AI chatbots. Such anthropomorphism - treating an object as a person - is not a total surprise, especially for companies developing AI models.
AI has revolutionized app development, while also introducing security challenges. Liqian Lim of Snyk discusses the importance of implementing security measures early in the development process to manage AI tool-related risks and safeguard the software development life cycle from vulnerabilities.
David Gee, board risk adviser, non-executive director and author, shares leadership lessons from his career in his latest book, "The Aspiring CIO and CISO." He discusses his approach to managing cybersecurity risks, engaging with teams and simplifying communication.
Social media platform X faces the prospect of more legal scrutiny in Europe over its decision to feed customer data into its Grok artificial intelligence system, even after it agreed Thursday to suspend harvesting tweets as training data. NOYB said the company is likely still violating privacy law.
ISO/IEC 42001, launched in late 2023, is the world's first AI management system standard, offering a framework to ensure responsible AI practices. Craig Civil, director of data science and AI at BSI, discusses the importance of AI policies and BSI's plans to implement the standard.
The Irish data regulator sued social media platform X, accusing the service of wrongfully harvesting users' personal data for its artificial intelligence model Grok. During a hearing on Tuesday, regulators told the High Court of Ireland that X violated GDPR rules.
The chairman of the U.S. Senate Intelligence Committee warned Wednesday that leading social media platforms, generative artificial intelligence vendors and tech giants like Microsoft, Google, Meta and OpenAI are failing to adequately combat deceptive AI use in the 2024 national elections.
Information security teams face a mounting challenge: as businesses outsource more critical functionality to third parties, third-party cybersecurity incidents are hitting unprecedented highs. Add to this a widespread regulatory push for increasingly strict third-party risk management (TPRM) practices, and infosec...
Cohesity CEO Sanjay Poonen discusses how the acquisition of Veritas will drive innovation in data protection, using AI for better insights and customer satisfaction. He shares plans to use AI technology for data insights and expand the firm's market share, especially within the Fortune 100.
OpenAI is "excited" to provide early access to its next foundational model to a U.S. federal body that assesses the safety of the technology, founder Sam Altman said on Thursday. OpenAI earlier essentially disbanded a "superalignment" security team set up to prevent AI systems from going rogue.