The future of censorship-resistant communications is going to be distributing LLMs trained on dissident content, rather than the content itself.
Imagine “The Anarchist Cookbook,” but it’s a device-local chatbot that will answer all your (technical and ideological) questions interactively and persuasively.
But also imagine a kid in a repressive society who has questions about religion or sexuality and has nobody to talk to about them (particularly if Internet filtering keeps expanding).
Anyway, one of the big limitations of censorship-resistant tech is that it’s hard to build low-latency services like web browsing and chat. It’s (relatively) easier to do high-latency file distribution, though. LLMs are just files, but they provide local interactivity.
The LLaMA/Alpaca models are already capable of doing this. Alpaca is a fine-tuning of LLaMA, and quantized versions of these models can run locally on consumer hardware. The better models are big (several gigabytes), but I doubt a few GB is going to be a terrible issue in a few years.
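To make the “a few GB” claim concrete, here’s a rough back-of-envelope sketch (my own illustration, not from any specific model release) of how quantization shrinks the file you’d actually have to distribute:

```python
# Back-of-envelope: approximate file size of an LLM at different
# quantization levels. A "7B" model has roughly 7 billion weights.

def model_size_gb(n_params: float, bits_per_weight: float) -> float:
    """Approximate on-disk size in GB, ignoring metadata and overhead."""
    return n_params * bits_per_weight / 8 / 1e9

for bits in (16, 8, 4):
    print(f"7B model at {bits}-bit: ~{model_size_gb(7e9, bits):.1f} GB")
# 16-bit: ~14 GB; 4-bit quantization brings it down to ~3.5 GB,
# i.e. the size of a movie file — plausible to distribute offline.
```

The real quantized files (e.g. llama.cpp’s 4-bit formats) carry some extra overhead, but the order of magnitude holds: quantization turns a model from “needs a data center” into “fits on a cheap USB stick.”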