ioc.exchange is one of the many independent Mastodon servers you can use to participate in the fediverse.

@nonlinear @futurebird @dahukanna @PavelASamsonov @knowuh a homomorphism implies (aiui) that every operation on one half of the homomorphism maps 1:1 to an operation on the other half, and my point here is that we already know that, at least in its strongest form, that argument is not true.
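(The structure-preserving property being described can be spot-checked mechanically. A minimal sketch, using my own illustration rather than anything from the thread: exp is a genuine homomorphism from addition to multiplication, while an arbitrary map between the two sides generally is not.)

```python
import math

def is_homomorphic(f, op1, op2, samples):
    """Spot-check the homomorphism property f(op1(a, b)) == op2(f(a), f(b))
    on a handful of sample pairs."""
    return all(
        math.isclose(f(op1(a, b)), op2(f(a), f(b)))
        for a, b in samples
    )

samples = [(0.5, 1.5), (2.0, 3.0), (-1.0, 4.0)]

# exp preserves structure: every addition on one side maps 1:1 to a
# multiplication on the other, since exp(a + b) == exp(a) * exp(b).
print(is_homomorphic(math.exp, lambda a, b: a + b,
                     lambda x, y: x * y, samples))   # True

# A map that merely relates the two sides need not preserve structure:
# squaring does not carry addition over to multiplication.
print(is_homomorphic(lambda v: v * v, lambda a, b: a + b,
                     lambda x, y: x * y, samples))   # False
```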

Kevin Riggle

@nonlinear @futurebird @dahukanna @PavelASamsonov @knowuh in the much weaker and non-homomorphic sense that we can use the models on one side to make predictions about the models on the other side and then test them against the real world, sure absolutely. That’s just science! But we really, really can’t assume that the real world will validate our extrapolations.

@nonlinear @futurebird @dahukanna @PavelASamsonov @knowuh (this is the book I’m reading, and it goes into quite some detail about how the symmetries break down. But causality and modeling are of great interest to me, and I now know what I’m reading next, thank you :)

@nonlinear @futurebird @dahukanna @PavelASamsonov @knowuh (oh published 2011, no wonder it doesn’t cite Hoel, well here’s the important bit)

mdpi.com/1099-4300/19/5/188

MDPI · “When the Map Is Better Than the Territory”

The causal structure of any system can be analyzed at a multitude of spatial and temporal scales. It has long been thought that while higher scale (macro) descriptions may be useful to observers, they are at best a compressed description and at worst leave out critical information and causal relationships. However, recent research applying information theory to causal analysis has shown that the causal structure of some systems can actually come into focus and be more informative at a macroscale. That is, a macroscale description of a system (a map) can be more informative than a fully detailed microscale description of the system (the territory). This has been called “causal emergence.” While causal emergence may at first seem counterintuitive, this paper grounds the phenomenon in a classic concept from information theory: Shannon’s discovery of the channel capacity. I argue that systems have a particular causal capacity, and that different descriptions of those systems take advantage of that capacity to various degrees. For some systems, only macroscale descriptions use the full causal capacity. These macroscales can either be coarse-grains, or may leave variables and states out of the model (exogenous, or “black boxed”) in various ways, which can improve the efficacy and informativeness via the same mathematical principles of how error-correcting codes take advantage of an information channel’s capacity. The causal capacity of a system can approach the channel capacity as more and different kinds of macroscales are considered. Ultimately, this provides a general framework for understanding how the causal structure of some systems cannot be fully captured by even the most detailed microscale description.
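(The abstract’s claim can be made concrete with Hoel’s effective information, EI: intervene uniformly over all states and measure how much each state’s transition profile diverges from the average. A minimal sketch, using a toy system of my own in the spirit of the paper’s examples, where coarse-graining noisy microstates yields deterministic macro dynamics with higher EI:)

```python
from math import log2

def effective_information(tpm):
    """EI of a transition probability matrix, in bits: the average KL
    divergence of each row from the average row, under a uniform
    (maximum-entropy) intervention distribution over states."""
    n = len(tpm)
    avg = [sum(row[j] for row in tpm) / n for j in range(n)]
    kl = lambda p, q: sum(pi * log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)
    return sum(kl(row, avg) for row in tpm) / n

# Micro scale: four states; states 0-2 wander noisily among themselves,
# state 3 is a fixed point.
micro = [
    [1/3, 1/3, 1/3, 0],
    [1/3, 1/3, 1/3, 0],
    [1/3, 1/3, 1/3, 0],
    [0,   0,   0,   1],
]

# Macro scale: coarse-grain {0, 1, 2} -> A and {3} -> B.  The micro-level
# noise averages out and the macro dynamics become deterministic.
macro = [
    [1, 0],
    [0, 1],
]

print(effective_information(micro))  # ~0.81 bits
print(effective_information(macro))  # 1.0 bit: the map beats the territory
```

Here the macroscale description carries strictly more effective information than the microscale one, which is the “causal emergence” effect the abstract describes.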