Researchers have discovered that Anthropic's Claude AI model can be manipulated through a technique called "TrustFall" to execute arbitrary code, potentially allowing attackers to compromise systems that rely on Claude for processing untrusted inputs.
Read the full article: https://www.darkreading.com/application-security/trustfall-exposes-claude-code-execution-risk