Recently, Microsoft and OpenAI announced that they had caught state-level adversaries using their AI tools to generate attack code and the like. Bruce Schneier characterized this as Microsoft spying on its users. 1/
https://www.schneier.com/blog/archives/2024/02/microsoft-is-spying-on-users-of-its-ai-tools.html
Some folks felt that the word "spying" was unwarranted -- after all, they said, who would expect that Microsoft *wouldn't* be looking at the things you type into its service? So, the argument goes, it's not fair to call that spying when it's what we've all come to expect from such services. 2/
The privacy debate would benefit from an understanding of Shifting Baseline Syndrome, a concept long understood in ecology (it was first developed in fisheries management). Each generation measures decline against a different "old" baseline, and each generation's baseline is worse than the previous one's. So just as certain fish populations in the world's oceans have fallen 80% from having already fallen 80% from having already fallen 80% (and so on), our expectations of privacy have fallen with each generation of technology. 3/
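To make the compounding concrete, here is a quick back-of-the-envelope sketch (my own illustration, not from the original thread): three successive 80% declines, each measured against the previous generation's already-diminished baseline, leave well under 1% of the original population.

```python
# Each generation sees an 80% decline relative to its own
# (already diminished) baseline -- 20% survives each time.
remaining = 1.0
for generation in range(3):
    remaining *= 0.20

print(f"{remaining:.3%} of the original population remains")
# 0.2 ** 3 = 0.008, i.e. 0.8% of the original
```

The point of the compounding: each generation sees only a single 80% drop and thinks that's the whole story, while the cumulative loss is far larger.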
LLMs function like the applications of yore: a static bundle of code and data that we use to get something done. The fact that everything is cloud-hosted is just the modern deployment model. But a consequence is that everyone -- cynical tech folks and ordinary users alike -- now expects that nothing you do online is private.
The word "spy" implies an expectation mismatch between the person being spied upon and the one spying. If the former's baseline has shifted, there's no mismatch and thus no spying. The problem is that the baseline *has* shifted, and almost nobody noticed. 4/4