Ramesh has seven years of experience writing and editing stories on finance, enterprise and consumer technology, and diversity and inclusion. She previously worked at the formerly News Corp-owned TechCircle, the business daily The Economic Times and The New Indian Express.
An attempt by the California statehouse to tame the potential catastrophic risks of artificial intelligence hit a roadblock when Governor Gavin Newsom vetoed the measure late last month. One obstacle is the lack of a widely accepted definition of "catastrophic" AI risks.
Foreign threat actors are using generative artificial intelligence to influence U.S. elections, but their impact is limited, said OpenAI. Threat actors from China, Russia, Iran, Rwanda and Vietnam maliciously used AI tools to support influence operations.
This week, Australia's crypto seizure from an alleged Ghost mastermind, Taiwan's draft AML rules, the IcomTech founder's sentencing, U.S. efforts to recover stolen crypto, EigenLayer's erroneous fund transfer, FTX's approved bankruptcy plan, a Bitfinex hack update and a regulatory push for a lawsuit against Nvidia.
A U.S. federal judge on Wednesday largely blocked a newly enacted California law restricting the use of election-related deepfakes from going into effect, ruling that the statute likely violates American freedom of speech guarantees. The legislation "acts as a hammer instead of a scalpel," the judge wrote.
This week, a guilty plea over $37M stolen, a $3.8M Onyx hack, a first conviction for illegal crypto ATM operations, Zort owner fraud, WazirX's post-hack liability, U.S. congressmen's call for a Binance exec's release, a U.S. court's denial of a Tornado Cash exec's motion and an SEC-Mango Markets settlement.
OpenAI claims its new artificial intelligence model, designed to "think" and "reason," can solve linguistic and logical problems that stump existing models. Officially called o1 and nicknamed Strawberry, the model can also deceive users and help make weapons that can obliterate the human race.
California Gov. Gavin Newsom on Sunday vetoed a hotly debated AI safety bill that would have pushed developers to implement measures to prevent "critical harms." The bill "falls short of providing a flexible, comprehensive solution to curbing the potential catastrophic risks," Newsom said.
The United States on Thursday criminally charged an alleged key money laundering figure in the Russian cybercriminal underground on the same day Western authorities shut down virtual currency exchanges by seizing web domains and servers associated with Russian cybercrime.
This week, BingX, Truflation and an OpenAI X account were hacked; Germany shut 47 exchanges; Caroline Ellison was sentenced; two people were charged with crypto theft; one was fined over a crypto scam; Banana Gun will refund victims; WazirX and Liminal are in a dispute; the SEC settled with TrueCoin and TrustToken; and the CFTC may settle with Mango Markets.
Wednesday brought more turmoil in OpenAI's top ranks after three senior executives quit the company at a time when the AI giant seeks to convert itself into a for-profit entity. The new structure may affect how the company prioritizes and addresses AI risks.
More than 100 tech companies, including OpenAI, Microsoft and Amazon, on Wednesday made voluntary commitments to the trustworthy and safe development of artificial intelligence in the European Union. A few notable holdouts included Meta, Apple, Nvidia and Mistral.
Training artificial intelligence models on factual data is getting tougher as companies run out of real-world data. AI-generated synthetic data is touted as a viable replacement, but experts say it may exacerbate hallucinations, already a major pain point for machine learning models.
LinkedIn this week joined its peers in using social media posts as training data for AI models, raising concerns about trustworthiness and safety. The question for AI developers is not whether companies use the data or even whether it is fair to do so - it is whether the data is reliable.
This week, Delta Prime and Ethena were hacked, Lazarus' funds were frozen, the SEC settled with Prager Metis and Rari Capital, Sam Bankman-Fried sought a new trial, the SEC accused NanoBit and CoinW6 of scams, the CFTC sought to fight pig butchering, and Wormhole integrated World ID and Solana.
If the bubble isn't popping already, it'll pop soon, say many investors and close observers of the AI industry. If past bubbles are a benchmark, the burst will filter out companies with no solid business models and pave the way for more sustainable growth for the industry in the long term.