ioc.exchange is one of the many independent Mastodon servers you can use to participate in the fediverse.
INDICATORS OF COMPROMISE (IOC) InfoSec Community within the Fediverse. Newbies, experts, gurus - everyone is welcome! The instance aims to be fast and secure.

Server stats: 1.3K active users

#computerscience

52 posts · 50 participants · 11 posts today

This is for the super nerds, so don't feel bad if you don't get it.

I asked ChatGPT to design a menu with Dutch food influences for an Edsger W. Dijkstra-themed restaurant based upon his work. I then asked it to create the LaTeX code to generate a printable version of the menu.

No notes. Perfection. One detail lost in the PDF generation: the drinks were labeled “Side Effects (Handled)”, which is divine.

💻 **Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task**

"_Over four months, LLM users consistently underperformed at neural, linguistic, and behavioral levels. These results raise concerns about the long-term educational implications of LLM reliance and underscore the need for deeper inquiry into AI's role in learning._"

Kosmyna, N. et al. (2025) Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task. arxiv.org/abs/2506.08872.

#Preprint #AI #ArtificialIntelligence #LLM #LLMS #ComputerScience #Technology #Tech #Research #Learning #Education @ai

arXiv.org · Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task

This study explores the neural and behavioral consequences of LLM-assisted essay writing. Participants were divided into three groups: LLM, Search Engine, and Brain-only (no tools). Each completed three sessions under the same condition. In a fourth session, LLM users were reassigned to Brain-only group (LLM-to-Brain), and Brain-only users were reassigned to LLM condition (Brain-to-LLM). A total of 54 participants took part in Sessions 1-3, with 18 completing session 4. We used electroencephalography (EEG) to assess cognitive load during essay writing, and analyzed essays using NLP, as well as scoring essays with the help from human teachers and an AI judge. Across groups, NERs, n-gram patterns, and topic ontology showed within-group homogeneity. EEG revealed significant differences in brain connectivity: Brain-only participants exhibited the strongest, most distributed networks; Search Engine users showed moderate engagement; and LLM users displayed the weakest connectivity. Cognitive activity scaled down in relation to external tool use. In session 4, LLM-to-Brain participants showed reduced alpha and beta connectivity, indicating under-engagement. Brain-to-LLM users exhibited higher memory recall and activation of occipito-parietal and prefrontal areas, similar to Search Engine users. Self-reported ownership of essays was the lowest in the LLM group and the highest in the Brain-only group. LLM users also struggled to accurately quote their own work. While LLMs offer immediate convenience, our findings highlight potential cognitive costs. Over four months, LLM users consistently underperformed at neural, linguistic, and behavioral levels. These results raise concerns about the long-term educational implications of LLM reliance and underscore the need for deeper inquiry into AI's role in learning.

🧬 The Future Of Discovery: What AlphaEvolve Tells Us About the Future of Human Knowledge
buzzsprout.com/2405788/episode
helioxpodcast.substack.com/pub

The kind of breakthrough that makes you wonder what else we've been missing, what other solutions have been hiding in plain sight, waiting for the right kind of intelligence to find them.

#AlphaEvolve #DeepMind #Google #AI #MachineLearning #OpenScience #TechEthics #AlgorithmicDiscovery #ComputerScience #Innovation #TechCriticism

💻 **Dark LLMs: The Growing Threat of Unaligned AI Models**

"_In our research, we uncovered a universal jailbreak attack that effectively compromises multiple state-of-the-art models, enabling them to answer almost any question and produce harmful outputs upon request._"

Fire, M. et al. (2025) Dark LLMs: The growing threat of unaligned AI models. arxiv.org/abs/2505.10066.

#AI #ArtificialIntelligence #LLMS #DarkLLMS #Technology #Tech #Preprint #Research #ComputerScience @ai

arXiv.org · Dark LLMs: The Growing Threat of Unaligned AI Models

Large Language Models (LLMs) rapidly reshape modern life, advancing fields from healthcare to education and beyond. However, alongside their remarkable capabilities lies a significant threat: the susceptibility of these models to jailbreaking. The fundamental vulnerability of LLMs to jailbreak attacks stems from the very data they learn from. As long as this training data includes unfiltered, problematic, or 'dark' content, the models can inherently learn undesirable patterns or weaknesses that allow users to circumvent their intended safety controls. Our research identifies the growing threat posed by dark LLMs: models deliberately designed without ethical guardrails or modified through jailbreak techniques. In our research, we uncovered a universal jailbreak attack that effectively compromises multiple state-of-the-art models, enabling them to answer almost any question and produce harmful outputs upon request. The main idea of our attack was published online over seven months ago. However, many of the tested LLMs were still vulnerable to this attack. Despite our responsible disclosure efforts, responses from major LLM providers were often inadequate, highlighting a concerning gap in industry practices regarding AI safety. As model training becomes more accessible and cheaper, and as open-source LLMs proliferate, the risk of widespread misuse escalates. Without decisive intervention, LLMs may continue democratizing access to dangerous knowledge, posing greater risks than anticipated.

Disney and Universal sue AI firm, label it 'bottomless pit of plagiarism'
By Jessica Riga

Disney and Universal file a copyright infringement lawsuit against an AI firm the Hollywood giants describe as a "bottomless pit of plagiarism".

abc.net.au/news/2025-06-12/dis

ABC News · Disney and Universal sue AI firm Midjourney for copyright infringement · By Jessica Riga

#softwareEngineering #computerScience #programming #lisp #commonLisp #interview #macro #discussion with historical notes.

screwlisp.small-web.org/show/V

My quick notes on the downloadable interview discussion with @vnikolov and @kentpitman about Vassil's assertables, a classed, toggleable assertion-macro design.

It provokes lots of fascinating historical notes from Kent about what the ANSI CL and earlier standardisation efforts were doing and had in mind.
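For a rough idea of what a toggleable assertion macro means, here is a minimal Common Lisp sketch using a simple special-variable switch; the names are illustrative and this is far simpler than the class-based assertables design actually discussed in the interview:

```lisp
;; Minimal sketch of a toggleable assertion macro (illustrative only;
;; not Vassil Nikolov's actual assertables API).
(defvar *assertions-enabled* t
  "When NIL, TOGGLE-ASSERT checks are skipped at runtime.")

(defmacro toggle-assert (form)
  "Like ASSERT, but only checked while *ASSERTIONS-ENABLED* is true."
  `(when *assertions-enabled*
     (unless ,form
       (error "Assertion ~S failed." ',form))))

;; Usage:
;; (toggle-assert (> balance 0))        ; signals an error when false
;; (let ((*assertions-enabled* nil))
;;   (toggle-assert (> balance 0)))     ; check skipped entirely
```

Because the switch is a special variable, assertions can be enabled or disabled dynamically per call site, which is the appeal of making assertions toggleable rather than compiled away for good.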

screwlisp.small-web.org · Vassil Nikolov’s assertables with Kent Pitman

How safe are AI companions? Experts say app developers are falling short
By Ellen Phiddian

AI-powered friends and partners can fight loneliness, but they can also supercharge isolation. So how can companion apps be made safer?

abc.net.au/news/science/2025-0

ABC News · AI companion apps such as Replika need more effective safety controls, experts say · By Ellen Phiddian