ioc.exchange is one of the many independent Mastodon servers you can use to participate in the fediverse.
INDICATORS OF COMPROMISE (IOC) InfoSec Community within the Fediverse. Newbies, experts, gurus - Everyone is Welcome! Instance is supposed to be fast and secure.

Administered by:

Server stats:

1.3K
active users

#aisecurity

8 posts8 participants0 posts today

EchoLeak - der "Dosenöffner" für KI‑Sicherheitsrealitäten!

Es war nur eine Frage der Zeit – und hier ist sie: eine Zero‑Click‑Attacke auf ein KI-System wurde Realität. Die Schwachstelle, bekannt als EchoLeak, nutzt nur eine einzige manipulierte E‑Mail – kein Klick, kein Download, keine Warnung – und Copilot exfiltriert heimlich sensible Unternehmensdaten. #CyberSecurity #AIsecurity #Copilot #Microsoft365 #EchoLeak #ZeroTrust #Cybercrime

What Happens When AI Goes Rogue?

From blackmail to whistleblowing to strategic deception, today's AI isn't just hallucinating — it's scheming.

In our new Cyberside Chats episode, LMG Security’s @sherridavidoff and @MDurrin share new AI developments, including:

• Scheming behavior in Apollo’s LLM experiments
• Claude Opus 4 acting as a whistleblower
• AI blackmailing users to avoid shutdown
• Strategic self-preservation and resistance to being replaced
• What this means for your data integrity, confidentiality, and availability

📺 Watch the video: youtu.be/k9h2-lEf9ZM
🎧 Listen to the podcast: chatcyberside.com/e/ai-gone-ro

Hello World! #introduction

Work in cybersec for 25+ years. Big OSS proponent.

Latest projects:

VectorSmuggle is acomprehensive proof-of-concept demonstrating vector-based data exfiltration techniques in AI/ML environments. This project illustrates potential risks in RAG systems and provides tools and concepts for defensive analysis.
github.com/jaschadub/VectorSmu

SchemaPin protocol for cryptographically signing and verifying AI agent tool schemas to prevent supply-chain attacks (aka MCP Rug Pulls).
github.com/ThirdKeyAI/SchemaPin

Testing platform for covert data exfiltration techniques where sensitive documents are embedded into vector representations and tunneled out under the guise of legitimate RAG operations — bypassing...
GitHubGitHub - jaschadub/VectorSmuggle: Testing platform for covert data exfiltration techniques where sensitive documents are embedded into vector representations and tunneled out under the guise of legitimate RAG operations — bypassing traditional security controls and evading detection through semantic obfuscation.Testing platform for covert data exfiltration techniques where sensitive documents are embedded into vector representations and tunneled out under the guise of legitimate RAG operations — bypassing...

New AI Security Risk Uncovered in Microsoft 365 Copilot

A zero-click vulnerability has been discovered in Microsoft 365 Copilot—exposing sensitive data without any user interaction. This flaw could allow attackers to silently extract corporate data using AI-integrated tools.

If your organization is adopting AI in productivity platforms, it’s time to get serious about AI risk management:
• Conduct a Copilot risk assessment
• Monitor prompt histories and output
• Limit exposure of sensitive data to AI tools
• Update your incident response plan for AI-based threats

AI can boost productivity, but it also opens new doors for attackers. Make sure your cybersecurity program keeps up. Contact our LMG Security team if you need a risk assessment or help with AI policy development.

Read the article: bleepingcomputer.com/news/secu

Researchers disclose "EchoLeak", a zero-click AI vuln in M365 Copilot enabling attackers to exfiltrate sensitive data via prompt injection without user interaction. Exploits flaws in RAG design and bypasses key defenses.

aim.security/lp/aim-labs-echol

www.aim.securityAim Labs | Echoleak BlogpostThe first weaponizable zero-click attack chain on an AI agent, resulting in the complete compromise of Copilot data integrity

AI is the new attack surface—are you ready?

From shadow AI to deepfake-driven threats, attackers are finding creative ways to exploit your organization’s AI tools, often without you realizing it.

Watch our new 3-minute video, How Attackers Target Your Company’s AI Tools, for advice on:

▪️ The rise of shadow AI (yes, your team is probably using it!)
▪️ Real-world examples of AI misconfigurations and account takeovers
▪️ What to ask vendors about their AI usage
▪️ How to update your incident response plan for deepfakes
▪️ Actionable steps for AI risk assessments and inventories

Don’t let your AI deployment become your biggest security blind spot.

Watch now: youtu.be/R9z9A0eTvp0

Only one week left to register for our next Cyberside Chats Live event! Join us June 11th to discuss what happens when an AI refuses to shut down—or worse, starts blackmailing users to stay online?

These aren’t science fiction scenarios. We’ll dig into two real-world incidents, including a case where OpenAI’s newest model bypassed shutdown scripts and another where Anthropic’s Claude Opus 4 generated blackmail threats in an alarming display of self-preservation.

Join us as we unpack:
▪ What “high-agency behavior” means in cutting-edge AI
▪ How API access can expose unpredictable and dangerous model actions
▪ Why these findings matter now for security teams
▪ What it all means for incident response and digital trust

Stick around for a live Q&A with LMG Security’s experts @sherridavidoff and @MDurrin. This session will challenge the way you think about AI risk!

Register today: lmgsecurity.com/event/cybersid

june25 cyberside chats live!
LMG SecurityCyberside Chats: Live! When AI Goes Rogue: Blackmail, Shutdowns, and the Rise of High-Agency Machines | LMG SecurityIn this quick, high-impact session, we’ll dive into the top three cybersecurity priorities every leader should focus on. From integrating AI into your defenses to tackling deepfake threats and tightening third-party risk management, this discussion will arm you with the insights you need to stay secure in the year ahead.

🎉 A new Brand Story is live — this time with eSentire!

We sat down with Dustin Hillard, CTO at #eSentire, for a powerful conversation about #AgenticAI and what it really means to reach human equivalency in security operations.

From decision-making to autonomous action, this isn’t just theory — it’s a real-world look at outcomes when AI is trained and tuned with purpose.

🎥 Watch the video:
youtu.be/qmca7RCzSAQ

📝 Read the full story:
itspmagazine.com/their-stories

🔎 Learn more about eSentire here:
itspm.ag/esentire-sorry4ek

Thanks to eSentire for supporting the conversation and helping us explore where AI meets security in the real world.

Sean Martin, CISSP & Marco Ciappelli
Co-Founders at ITSPmagazine