TL;DR

Commenters recommend never handing an LLM raw SSH or DB credentials. Instead, give the model access to constrained tools or proxies that enforce permissions and policies, and use very fine-grained credentials where possible.

What happened

A Hacker News discussion addressed how to let large language models interact with system resources without exposing full credentials. Contributors argued against granting an LLM direct SSH or database access; the favored approach is to present the model with a curated set of tools, or a proxy layer that performs actions on its behalf. Because the agent can only call the interfaces it has been given, anything left out of its toolset simply cannot be invoked. Participants described using proxies to separate the privileges the agent holds from the credentials used for backend access, letting the proxy enforce which operations are allowed. Several commenters also recommended narrowly scoped credentials and compared the architecture to established concepts such as the “Rule of Two” or domain trust models, framed in terms of scopes and permissions. Practical caveats were noted: SSH is harder to mediate because commands are intermingled with the session bytestream. The thread mentioned platforms such as Xano, and one contributor disclosed working at Formal, a company that builds proxies for least-privilege access.
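To make the tool-based pattern concrete, here is a minimal sketch in Python; the function names, the registry, and the sample data are invented for illustration, not taken from the thread. The point is that the dispatcher only executes functions that were explicitly registered, so the model never sees credentials and cannot reach actions that were left out.

```python
# Minimal sketch of "give the model tools, not credentials".
# All names and data below are hypothetical.

ORDERS = {42: "shipped"}  # stand-in for a backend the tool would query

def get_order_status(order_id: int) -> str:
    """Narrow, read-only lookup; real backend credentials would live in here,
    never in the model's prompt or context."""
    return ORDERS.get(order_id, "not found")

# The only actions the agent can take are the ones registered here.
TOOLS = {"get_order_status": get_order_status}

def dispatch(tool_name: str, **kwargs):
    """Execute a model-requested tool call; unknown tools are rejected outright."""
    if tool_name not in TOOLS:
        raise PermissionError(f"tool {tool_name!r} is not exposed to the agent")
    return TOOLS[tool_name](**kwargs)

print(dispatch("get_order_status", order_id=42))   # -> shipped
# dispatch("drop_table", table="orders")           # PermissionError: not a registered tool
```

If the model asks for anything outside the registry, there is simply no function to call.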

Why it matters

  • Giving raw credentials to an LLM increases attack surface and risk of accidental or malicious actions.
  • Tool- or proxy-based access enables enforcement of policies, logging, and limits on what the agent can do.
  • Fine-grained credentials and scoped permissions support least-privilege principles and reduce potential damage.
  • SSH presents unique mediation challenges because protocol-level data can include executable commands that are hard to filter.

Key facts

  • Consensus in the thread: do not give an LLM direct SSH or database credentials.
  • Preferred pattern: expose a restricted set of tools or a proxy layer that performs sanctioned operations for the agent.
  • If a tool is not provided to the agent, it cannot execute that action; if a tool is provided, the agent decides when to call it based on its instructions.
  • Proxies can separate the agent’s credentials from the backend credentials and can enforce allowed or blocked operations (see the sketch after this list).
  • Commenters recommended issuing very fine-grained credentials so tokens only permit intended actions.
  • SSH was identified as more difficult to mediate because commands and data are intermingled in the connection bytestream.
  • The discussion referenced implementation concepts like the Rule of Two and domain trusts, described as scopes and permissions.
  • Tools and platforms such as Xano were named as examples; a commenter disclosed working at Formal, a company that builds proxy solutions for least privilege.
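A hedged sketch of the proxy idea referenced above, assuming a SQLite backend and an invented token-to-table mapping: the agent authenticates to the proxy with its own token, the backend connection string stays on the proxy side, and only a fixed, parameterized query shape is ever forwarded.

```python
# Sketch of a policy-enforcing proxy: the agent holds a proxy token, the proxy
# holds the backend credential. Token names, the DSN, and the policy table are
# illustrative assumptions, not from the thread.

import sqlite3

BACKEND_DSN = "file:prod.db?mode=ro"          # backend credential, never given to the agent
AGENT_TABLES = {"agent-123": {"orders"}}      # agent token -> tables it may read

def proxy_select(agent_token: str, table: str, row_id: int):
    """Run a constrained read on the agent's behalf and record an audit line."""
    allowed = AGENT_TABLES.get(agent_token)
    if allowed is None:
        raise PermissionError("unknown agent token")
    if table not in allowed:
        raise PermissionError(f"table {table!r} is outside this agent's scope")

    conn = sqlite3.connect(BACKEND_DSN, uri=True)
    try:
        # The table name was validated against the allow-list above; values are parameterized.
        rows = conn.execute(f"SELECT * FROM {table} WHERE id = ?", (row_id,)).fetchall()
        print(f"audit: {agent_token} read {table} id={row_id}")  # stand-in for real logging
        return rows
    finally:
        conn.close()
```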

What to watch next

  • Whether more platforms standardize agent tool interfaces and proxy patterns for LLM access (not confirmed in the source).
  • How vendors and open-source projects evolve support for mediating SSH sessions or bytestream-level controls (not confirmed in the source).
  • Whether regulatory or best-practice guidance emerges for credential handling when models act as agents (not confirmed in the source).

Quick glossary

  • Large language model (LLM): A machine learning model trained on large text corpora that can generate or reason about language and be used as an agent to perform tasks.
  • Proxy: An intermediary service that performs actions on behalf of a client and can enforce access controls, logging, and policy checks.
  • Least privilege: A security principle that limits access rights to the minimum necessary for a task, reducing potential misuse or damage.
  • SSH (Secure Shell): A protocol for secure remote login, command execution, and file transfer; session data can include commands mixed with other bytestream content.
  • Scoped credentials: Authentication tokens or accounts configured so they permit only a narrowly defined set of operations or resources.
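As a small illustration of scoped credentials and deny-by-default checks (the token and scope names below are invented for this sketch):

```python
# Deny-by-default scope check; token and scope names are invented examples.

TOKEN_SCOPES = {"tok-readonly": {"orders:read"}}

def authorize(token: str, action: str) -> None:
    """Raise unless the token explicitly carries the scope for this action."""
    if action not in TOKEN_SCOPES.get(token, set()):
        raise PermissionError(f"token lacks scope {action!r}")

authorize("tok-readonly", "orders:read")        # permitted
# authorize("tok-readonly", "orders:delete")    # would raise PermissionError
```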

Reader FAQ

Can I give an LLM direct SSH or DB credentials?
Thread consensus: do not give direct credentials to the model; use constrained tools or a proxy instead.

How do I limit what an LLM can do with a database?
Give the agent only narrowly scoped credentials or route actions through a proxy that enforces allowed queries and operations.
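One concrete way to do this, sketched here with Python's built-in sqlite3 module and a hypothetical database file, is to hand the tool a connection that is read-only at the driver level, so write statements fail regardless of what the model asks for.

```python
# Read-only database access as a scoped credential; the file path is hypothetical.

import sqlite3

conn = sqlite3.connect("file:analytics.db?mode=ro", uri=True)  # read-only connection
try:
    conn.execute("SELECT count(*) FROM events")   # reads succeed
    conn.execute("DELETE FROM events")            # writes are rejected by the read-only mode
except sqlite3.OperationalError as exc:
    print("write rejected:", exc)
finally:
    conn.close()
```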

Is SSH access easier or harder to secure for LLM agents?
Commenters said SSH is trickier to mediate because commands are embedded in the session bytestream, making filtering harder.
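One way to mediate SSH for non-interactive use is a forced command: sshd runs a fixed wrapper instead of whatever was requested and exposes the requested command via the SSH_ORIGINAL_COMMAND environment variable. Below is a hedged sketch of such a wrapper with an invented allow-list; it covers command execution only, not interactive sessions, which is exactly where the thread's bytestream caveat applies.

```python
#!/usr/bin/env python3
# Sketch of an SSH ForceCommand wrapper: sshd invokes this script in place of the
# requested command and passes the original request in SSH_ORIGINAL_COMMAND.
# The allow-list is an invented example; arguments pass through unchecked here.

import os
import shlex
import subprocess
import sys

ALLOWED = {"uptime", "df"}  # executables the agent's key is permitted to run

def main() -> int:
    requested = os.environ.get("SSH_ORIGINAL_COMMAND", "")
    argv = shlex.split(requested)
    if not argv or argv[0] not in ALLOWED:
        print(f"command not permitted: {requested!r}", file=sys.stderr)
        return 1
    return subprocess.run(argv).returncode

if __name__ == "__main__":
    sys.exit(main())
```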

Are there recommended vendors or patterns?
The discussion mentioned tools like Xano and cited a contributor who works at Formal; broader vendor recommendations were not provided.

One commenter summed up the thread: “Tl;dr you don’t give your llm ssh access. You give it tools that have access. Yes, easily. This isn’t a problem when using a proxy system with built in safeguards…”
