ioc.exchange is one of the many independent Mastodon servers you can use to participate in the fediverse.
INDICATORS OF COMPROMISE (IOC) InfoSec Community within the Fediverse. Newbies, experts, gurus - Everyone is Welcome! Instance is supposed to be fast and secure.


#LLM

523 posts · 261 participants · 50 posts today

I very often agree with Bruce Schneier. But not today.

If I wanted to make a private agreement through a digital trusted third party, why would I need an LLM?

The examples include comparing salaries. Instead of setting up (and later securely deleting) an LLM, we could just as easily run a function boiling down to
`return a > b;`

No need to involve LLMs, with their uncertainty and their susceptibility to prompt injection.
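The salary example boils down to a few deterministic lines. A minimal sketch (Python, names illustrative): the trusted runtime evaluates the predicate and reveals only the one-bit result, with no sampling and no prompt-injection surface.

```python
def earns_more(salary_a: int, salary_b: int) -> bool:
    """Deterministic two-party comparison: each party submits a salary
    to the trusted runtime, and only the one-bit result is revealed."""
    return salary_a > salary_b

# Alice and Bob each submit a figure; neither learns the other's input.
print(earns_more(72_000, 68_000))  # True
```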
#BruceSchneier #LLM #TTP
schneier.com/blog/archives/202

Schneier on Security · AIs as Trusted Third Parties

This is a truly fascinating paper: "Trusted Machine Learning Models Unlock Private Inference for Problems Currently Infeasible with Cryptography." The basic idea is that AIs can act as trusted third parties:

Abstract: We often interact with untrusted parties. Prioritization of privacy can limit the effectiveness of these interactions, as achieving certain goals necessitates sharing private data. Traditionally, addressing this challenge has involved either seeking trusted intermediaries or constructing cryptographic protocols that restrict how much data is revealed, such as multi-party computations or zero-knowledge proofs. While significant advances have been made in scaling cryptographic approaches, they remain limited in terms of the size and complexity of applications they can be used for. In this paper, we argue that capable machine learning models can fulfill the role of a trusted third party, thus enabling secure computations for applications that were previously infeasible. In particular, we describe Trusted Capable Model Environments (TCMEs) as an alternative approach for scaling secure computation, where capable machine learning model(s) interact under input/output constraints, with explicit information flow control and explicit statelessness. This approach aims to achieve a balance between privacy and computational efficiency, enabling private inference where classical cryptographic solutions are currently infeasible. We describe a number of use cases that are enabled by TCME, and show that even some simple classic cryptographic problems can already be solved with TCME. Finally, we outline current limitations and discuss the path forward in implementing them...

📚 Extracting Citations with LLMs

At the #LLM for HPSS workshop, @cmboulanger David Carreto Fidalgo & Andreas Wagner presented LLaMore: a Python tool for extracting citation data from unstructured legal & humanities texts using #LLMs

Unlike GROBID, LLaMore handles complex footnotes and free-form references. Early results with GPT-4o and Llama 3.3 show significantly higher accuracy when benchmarked against a new gold standard TEI-annotated dataset.
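The underlying pattern is simple to sketch. This is a hypothetical illustration, not LLaMore's actual API: the prompt wording is invented, and `fake_llm` is a stand-in you would replace with a real call to GPT-4o or Llama 3.3.

```python
import json

# Hypothetical extraction prompt -- not LLaMore's actual wording.
PROMPT = (
    "Extract every bibliographic reference from the text below. "
    "Return a JSON list of objects with keys 'author', 'title', 'year'.\n\nTEXT:\n"
)

def extract_citations(text: str, call_llm) -> list[dict]:
    """Ask a model for structured citation data and parse its JSON reply.
    call_llm is any function str -> str backed by an LLM."""
    reply = call_llm(PROMPT + text)
    return json.loads(reply)

# Stub model for illustration only; a real run would hit an LLM endpoint.
def fake_llm(prompt: str) -> str:
    return '[{"author": "H. Kelsen", "title": "Reine Rechtslehre", "year": 1934}]'

print(extract_citations("Vgl. Kelsen, Reine Rechtslehre (1934), S. 12.", fake_llm))
```

The hard part, of course, is exactly what the benchmark measures: getting the model to emit valid, complete JSON for messy free-form footnotes rather than for a clean stub like this one.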

#TEI #openscience @maxplanckgesellschaft

So, a new infosec challenge: people ask LLMs to write small applications for simple tasks and then use them without checking what the script actually does.
Yesterday I asked a coworker what language* the script he had generated that way was written in. The answer was 'French', which basically tells you all you need to know about why ICT had a panic attack.

*it pulled in three JavaScript libraries from somewhere and luckily ran locally without phoning home.

If you understand Virtue Epistemology (VE), you cannot accept any LLM output as "information".

VE is an attempt to correct the various omniscience problems inherent in classical epistemologies, which all to some extent require a person to already know the Truth in order to evaluate whether some statement is true.

VE prescribes that we should look to how the information was obtained, particularly in two ways:
1) Was the information obtained using a well-understood method that is known to produce good results?
2) Does the method appear to have been applied correctly in this particular case?

LLM output always fails on point 1. An LLM will not look for the truth. It will just look for what is a probable combination of words. This means that an LLM is just as likely to combine a number of true statements in a way that is probable but false as it is to combine them in a way that is probable and true.

LLMs only sample the probability of word combinations. They don't understand the input, and they don't understand their own output.
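A toy illustration of that claim, with a made-up probability table (Python): the sampler optimizes likelihood over continuations, and truth never enters the computation anywhere.

```python
import random

# Invented next-word distribution after the prefix
# "the capital of Australia is". The frequencies mimic what raw
# co-occurrence statistics might look like: the most *probable*
# continuation is also the *false* one.
NEXT_WORD = {"Sydney": 0.6, "Canberra": 0.3, "Melbourne": 0.1}

def sample_next(dist: dict[str, float], rng: random.Random) -> str:
    """Pick a continuation by probability alone; no truth check anywhere."""
    words, weights = zip(*dist.items())
    return rng.choices(words, weights=weights, k=1)[0]

rng = random.Random(0)
draws = [sample_next(NEXT_WORD, rng) for _ in range(1000)]
print(draws.count("Sydney") > draws.count("Canberra"))  # the false answer wins
```

Real models are vastly more sophisticated than a lookup table, but the objective is the same shape: probable text, not true text.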

Only a damned fool would use it for anything, ever.

#epistemology #LLM #generativeAI #ArtificialIntelligence #ArtificialStupidity @philosophy

A little while ago, I played around with a pre-release of the new Docker Model Runner. Very cool stuff if you work with models at all, whether as a dev or a user -- and now the (beta) feature is available publicly. Go check it out! Here's a link to the main docs to get you started: docs.docker.com/desktop/featur
#docker #LLM #AI

Docker Documentation · Docker Model Runner: Learn how to use Docker Model Runner to manage and run AI models.

I wonder if Solaris has already unraveled Japan's secrets.

Works on Windows and Mac! "Task Till Dawn", an app that lets anyone automate their work [Today's Work Hack] | Lifehacker Japan lifehacker.jp/article/2504task

Lifehacker Japan · Works on Windows and Mac! "Task Till Dawn", an app that lets anyone automate their work [Today's Work Hack] · By the Lifehacker Japan editorial team
#Apple #LLM #news