Silo is built from the ground up with privacy as the foundation, not an afterthought. Unlike AI assistants that send your conversations to remote servers for analysis and training, Silo keeps your data secure and under your control. Every conversation, every document you share, and every piece of information you discuss stays private. We don't just promise privacy; we engineer it into every layer of our architecture.
  • Zero-knowledge architecture: We cannot see your data, even if we wanted to
  • No data mining or profiling: Your conversations are never used for training
  • Verifiable privacy guarantees: Cryptographic attestation you can verify yourself
  • End-to-end encryption: Data is encrypted on your device and stays encrypted in transit

How it works

Silo uses Trusted Execution Environments (TEEs) to process your AI queries. TEEs are secure, isolated areas within a processor, designed so that your data cannot be accessed by Silo, the cloud provider, or anyone with physical access to the server. Think of it as a cryptographically sealed vault that processes your queries without exposing them to the outside world. Your query is encrypted on your device, sent to a TEE running on secure hardware, and processed in complete isolation, and the encrypted response is sent back to you. No logs, no traces, no data leakage.
  • Hardware-level security: Queries processed inside NVIDIA confidential compute enclaves
  • Cryptographic attestation: You can verify the enclave is running the code we claim
  • Encrypted data in transit: Your queries are encrypted before they leave your device
  • Automatic data sanitization: Nothing persists after your query is processed
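The attestation gate in the flow above can be sketched in a few lines: before the client releases a query, it checks that the enclave's reported code measurement matches a published build measurement. The measurement value, helper names, and return strings below are illustrative assumptions, not Silo's actual protocol.

```python
import hashlib
import hmac

# Hypothetical published build measurement (illustrative value only).
EXPECTED_MEASUREMENT = hashlib.sha256(b"silo-enclave-build-v1").hexdigest()

def verify_attestation(reported_measurement: str) -> bool:
    """Accept the enclave only if its reported code hash matches the
    published measurement (constant-time comparison)."""
    return hmac.compare_digest(reported_measurement, EXPECTED_MEASUREMENT)

def send_query(query: str, reported_measurement: str) -> str:
    if not verify_attestation(reported_measurement):
        raise RuntimeError("attestation failed: refusing to send query")
    # In the real protocol the query would now be encrypted to the
    # attested enclave's public key; this sketch stops at the gate.
    return "query released to enclave"
```

The key property: if the server is running different code than advertised, its measurement changes and the client refuses to send anything.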

Privacy across model types

Open-source models

Open-source models (DeepSeek R1, Llama 3.3 70B, GLM 4.6/4.7) run inside NVIDIA GPUs with confidential compute mode enabled. This provides hardware-level isolation, encrypted memory, and immediate deletion of query data once processing completes.

Closed-source models

For closed-source models like GPT or Claude, Silo routes queries through proxy servers running in TEEs. Our Anonymizer model strips PII from your query before it reaches the model provider. The provider never knows who sent the query.
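The redaction step can be illustrated with a toy stand-in. The real Anonymizer is a local model, not a regex pass; the patterns and placeholders below are illustrative assumptions only, covering two common PII shapes (email addresses and US-style phone numbers).

```python
import re

# Toy stand-in for the Anonymizer model: regex redaction of two
# common PII patterns. Illustrative only.
PII_PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),
    (re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"), "[PHONE]"),
]

def anonymize(query: str) -> str:
    """Replace PII spans with placeholders before the query leaves the proxy."""
    for pattern, placeholder in PII_PATTERNS:
        query = pattern.sub(placeholder, query)
    return query

print(anonymize("Email jane@example.com or call 555-123-4567"))
# → Email [EMAIL] or call [PHONE]
```

The model provider receives only the redacted text, and because the query arrives via the TEE proxy, it also never sees your network identity.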

Private Deep Research

The full reasoning chain runs inside a series of secure enclaves, with only encrypted messages passed between them. This is the only private deep research offering on the market.

Local models

The embeddings model and anonymizer model run entirely on your device. Nothing leaves your machine.

Voice

Speech-to-text (Whisper) and text-to-speech (Kokoro) run in trusted execution environments.

You’re in control

Your data belongs to you, period. Every conversation is stored fully encrypted and can only be decrypted by your device after you authenticate. Not even Silo can read your chats. We never log unencrypted conversations, build profiles about you, or train AI models on your data.
  • Secure authentication: Only you can unlock your conversations
  • Instant permanent deletion: Delete your data at any time
  • Full data portability: Export your conversations whenever you want
  • Zero-access encryption: We literally cannot read your data
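The zero-access property above comes from deriving the encryption key on your device from your credentials, so the server only ever stores salt and ciphertext. The sketch below shows the idea with a deliberately toy stream cipher; function names and parameters are illustrative assumptions, and a real client would use an authenticated cipher such as AES-GCM.

```python
import hashlib
import os

def derive_key(passphrase: str, salt: bytes) -> bytes:
    # Derived on-device from the user's passphrase; the key itself
    # never leaves the device.
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 200_000)

def xor_keystream(key: bytes, data: bytes) -> bytes:
    # Toy stream cipher (SHA-256 in counter mode) purely to illustrate
    # symmetric encryption/decryption with the derived key.
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        block = hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        out.extend(block)
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))

salt = os.urandom(16)
key = derive_key("correct horse battery staple", salt)
ciphertext = xor_keystream(key, b"my private conversation")
# The server stores only (salt, ciphertext); without the passphrase,
# the stored record is unreadable.
assert xor_keystream(key, ciphertext) == b"my private conversation"
```

Because decryption requires re-deriving the key from the passphrase, deleting the key material on your device is equivalent to deleting the readable data everywhere.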

Payment privacy

Private AI with a Stripe payment trail isn’t actually private. If your query is anonymized but your payment is traceable, you haven’t solved the problem. Silo supports:
  • ZCash: Fully private end-to-end
  • Apple Pay / Stripe: Convenient, not private
  • FAI token: Discounted access across the Sovereign Agent Stack