Why Security Matters for AI Assistants
AI assistants are becoming central to daily workflows. They read your messages, access your files, browse the web on your behalf, and connect to your communication channels. That level of access makes security not just a nice-to-have but a fundamental requirement.
Whether you are running Moltbot on your own server or using a managed platform like Molty by Finna, understanding the security layers that protect your data will help you make informed decisions and avoid common pitfalls.
This guide covers the key security practices that every AI assistant user and administrator should know.
API Key Management
Your AI assistant relies on API keys to communicate with language model providers like Anthropic and OpenAI. These keys represent both access and billing - if they are compromised, an attacker can run up charges or access model endpoints under your identity.
Store Keys Securely
Never store API keys in plaintext configuration files, environment variables on shared machines, or code repositories. Use a secrets manager like Doppler, HashiCorp Vault, or your cloud provider's secrets service.
On the Finna platform, all API keys are encrypted using AES-256-GCM before being written to the database. Each tenant gets a unique derived encryption key via HKDF, so even in the unlikely event of a database breach, one tenant's keys cannot be used to decrypt another's.
Rotate Keys Regularly
Set a rotation schedule for your API keys. Most providers allow you to create multiple keys, so you can generate a new one, update your assistant's configuration, verify it works, and then revoke the old key with zero downtime.
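The create-switch-verify-revoke sequence can be sketched in Python. The `ProviderKeys` class below is a hypothetical stand-in for a provider's key-management interface, not a real SDK; real providers expose the equivalent operations through their dashboards or management APIs.

```python
# Hypothetical provider key store - illustrative only, not a real SDK.
class ProviderKeys:
    def __init__(self):
        self._keys = {"key-old"}

    def create(self) -> str:
        key = f"key-{len(self._keys)}"
        self._keys.add(key)
        return key

    def revoke(self, key: str) -> None:
        self._keys.discard(key)

    def is_active(self, key: str) -> bool:
        return key in self._keys


def rotate(provider: ProviderKeys, assistant_config: dict, old_key: str) -> str:
    """Zero-downtime rotation: create, switch, verify, then revoke."""
    new_key = provider.create()              # 1. generate a new key
    assistant_config["api_key"] = new_key    # 2. point the assistant at it
    assert provider.is_active(new_key)       # 3. verify before revoking anything
    provider.revoke(old_key)                 # 4. only now retire the old key
    return new_key
```

The important ordering is that the old key is revoked last, after the new key has been verified in place, so the assistant never has a window with no working key.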
Limit Key Scope
When your model provider supports scoped keys or usage limits, use them. Set monthly spend caps so that a compromised key cannot generate unlimited costs.
VM Isolation
One of the most important architectural decisions in multi-tenant AI platforms is how to isolate tenants from each other.
The Problem with Shared Infrastructure
Many AI platforms run multiple tenants on the same server, separated only by software boundaries like containers or process sandboxing. This creates risk: an AI assistant with file system tools could traverse directories to reach another tenant's data, and a bug in the sandboxing logic could expose one tenant's conversation history to another.
Finna's Approach - Firecracker MicroVMs
Finna runs every tenant in a dedicated Firecracker microVM on Fly.io. Firecracker is the same virtualization technology that powers AWS Lambda and Fargate. Each VM has its own kernel, file system, network stack, and memory space. There is no shared file system between tenants, no shared process space, and no possibility of one tenant's assistant accessing another's data through path traversal or process inspection.
This is heavier than container isolation, but the security guarantees are substantially stronger. For an AI assistant that has tools to read files, execute code, and browse the web, hardware-level isolation is the appropriate choice.
Encrypted Storage
Encryption at Rest
All sensitive data should be encrypted before it hits disk. This includes API keys, channel credentials (like WhatsApp session tokens), and any user-provided secrets.
Finna uses per-tenant derived encryption keys. The master key is stored in a secrets manager backed by a hardware security module (HSM). Individual tenant keys are derived using HKDF with tenant-specific context, so even platform operators cannot decrypt tenant data without both the master key and the correct derivation context.
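The derivation scheme described above can be sketched with RFC 5869 HKDF using only the Python standard library. The salt label, tenant ID format, and key length below are illustrative assumptions, not Finna's actual parameters; the point is that one master key yields cryptographically independent per-tenant keys.

```python
import hashlib
import hmac


def hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
    """RFC 5869 extract step: mix the input key material with a salt."""
    return hmac.new(salt, ikm, hashlib.sha256).digest()


def hkdf_expand(prk: bytes, info: bytes, length: int = 32) -> bytes:
    """RFC 5869 expand step: stretch the PRK into `length` output bytes."""
    okm, block, counter = b"", b"", 1
    while len(okm) < length:
        block = hmac.new(prk, block + info + bytes([counter]), hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]


def derive_tenant_key(master_key: bytes, tenant_id: str) -> bytes:
    """Derive a per-tenant AES-256 key from the master key.

    The tenant ID is bound into the derivation context (the HKDF
    `info` parameter), so keys for different tenants are independent.
    """
    prk = hkdf_extract(salt=b"tenant-key-v1", ikm=master_key)  # label is illustrative
    return hkdf_expand(prk, info=tenant_id.encode(), length=32)


# Two tenants get unrelated keys from the same master key.
master = b"\x00" * 32  # in production: fetched from an HSM-backed secrets manager
key_a = derive_tenant_key(master, "tenant-a")
key_b = derive_tenant_key(master, "tenant-b")
assert key_a != key_b and len(key_a) == 32
```

Because derivation is deterministic, the platform never needs to store tenant keys at rest; it can re-derive them on demand from the master key and the tenant context.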
Encryption in Transit
All communication between your browser and the dashboard, and between the dashboard and your gateway, should use TLS. On Finna, gateway traffic routes through Cloudflare Tunnel, adding an extra layer of encryption and access control. Gateways bind to localhost only - there are no exposed ports that an attacker could probe directly.
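The localhost-only binding described above is a one-line decision in most server code. This minimal sketch uses Python's socket API to show the idea: a service bound to the loopback interface is unreachable from other machines, and a tunnel connector running on the same host forwards authenticated traffic in.

```python
import socket

# Bind to the loopback interface only. The port is reachable solely from
# processes on this machine; tunnel software (e.g. a Cloudflare Tunnel
# connector) then forwards authenticated external traffic to it.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))  # port 0: let the OS pick a free port
server.listen()
host, port = server.getsockname()
assert host == "127.0.0.1"
server.close()
```

The contrast is binding to `0.0.0.0`, which listens on every interface and is what makes a port show up in an external scan.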
Authentication and Token Management
Gateway Tokens
Every Finna gateway has a unique authentication token. This token is required for all API and WebSocket communication. Even if someone discovers your gateway's URL, they cannot interact with it without the token.
Tokens are generated during provisioning as 32 bytes of cryptographically secure random data, hex-encoded to a 64-character string, and stored in both the database and the gateway's runtime environment.
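A token of that shape can be produced and checked with the Python standard library. The function names here are illustrative, not Finna's implementation; the essential choices are a CSPRNG source (`secrets`) for generation and a constant-time comparison for verification.

```python
import hmac
import secrets


def generate_gateway_token() -> str:
    """32 bytes from the OS CSPRNG, hex-encoded to 64 characters."""
    return secrets.token_hex(32)


def verify_token(presented: str, stored: str) -> bool:
    """Constant-time comparison avoids leaking a matching prefix via timing."""
    return hmac.compare_digest(presented, stored)


token = generate_gateway_token()
assert len(token) == 64
assert verify_token(token, token)
```

Using `==` for the comparison instead of `hmac.compare_digest` would short-circuit at the first differing character, which can leak information to an attacker measuring response times.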
Device Pairing
Moltbot supports device pairing as an additional authentication layer. When enabled, a new device must be explicitly approved before it can interact with your assistant. This prevents unauthorized users from messaging your bot even if they know its phone number or chat handle.
For managed deployments through Finna, the Control UI uses token-based authentication, allowing you to manage your gateway without going through the device pairing flow while still maintaining security.
Multi-Factor Access
The Finna dashboard uses Clerk for authentication, which supports multi-factor authentication, social login, and passwordless options. Enabling MFA on your dashboard account adds a significant barrier against account takeover.
Conversation Data Handling
What Gets Stored
Understand what your AI assistant stores and where. Moltbot maintains conversation sessions that include message history for context. This data lives on the gateway's encrypted volume.
Model providers also have their own data policies. Anthropic, for example, does not train on API inputs by default. OpenAI offers similar opt-out options. Review your provider's data usage policy and configure it accordingly.
Session Management
Regularly review and clear old sessions. Moltbot provides session management commands that let you list active sessions and reset them when they are no longer needed. On Finna, you can manage sessions directly from the dashboard.
Audit Logging
For compliance-sensitive deployments, Finna maintains audit logs of all security-relevant events - gateway creation, configuration changes, authentication attempts, and administrative actions. These logs are retained for seven years, meeting SOC 2 and ISO 27001 requirements.
Access Control
Principle of Least Privilege
Only grant your AI assistant the tools and permissions it actually needs. If your use case is limited to chat and web search, disable file management and code execution tools. Moltbot's skill system lets you control exactly which capabilities are available.
Channel-Level Controls
Each messaging channel has its own access control settings. WhatsApp can be configured with a pairing policy that requires explicit approval for new contacts. Discord bots can be restricted to specific servers and channels. Telegram bots can limit interactions to approved users or groups.
Configure these controls based on your threat model. A personal assistant should be locked down to your devices only. A team assistant might allow anyone in your Slack workspace but no one outside it.
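An approval-gated channel policy like the WhatsApp pairing flow described above reduces to a small gate in front of the message handler. Everything in this sketch is hypothetical - the sender ID formats and variable names are illustrative, not Moltbot's actual data model.

```python
# Hypothetical access gate - sender IDs and structures are illustrative.
APPROVED_SENDERS = {"+15551234567", "user:alice"}
PENDING_APPROVAL: set[str] = set()


def admit(sender_id: str) -> bool:
    """Pass messages from approved senders; queue everyone else for review.

    Unapproved senders are never silently granted access - they land in a
    pending set that an operator must explicitly approve.
    """
    if sender_id in APPROVED_SENDERS:
        return True
    PENDING_APPROVAL.add(sender_id)
    return False


assert admit("user:alice") is True
assert admit("user:mallory") is False
assert "user:mallory" in PENDING_APPROVAL
```

The default-deny shape is what matters: a new contact gets no responses until a human moves them from the pending set to the approved set.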
Role-Based Access
For team deployments, implement role-based access control. Not every team member needs the ability to modify gateway configuration or view API keys. The Finna dashboard enforces ownership checks on every request - only the gateway owner can access its configuration and proxy endpoints.
Network Security
Zero-Trust Networking
Finna implements zero-trust networking through Cloudflare Tunnel. Gateways do not expose any public ports. All traffic enters through an authenticated tunnel, and Cloudflare Access policies can add additional restrictions like IP allowlisting or identity-based access.
DNS and IP Management
Each gateway gets dedicated IPv6 and shared IPv4 addresses for DNS routing. This ensures your gateway is reachable by messaging platforms while maintaining isolation from other tenants at the network level.
General AI Security Principles
Prompt Injection Awareness
Be aware that users interacting with your AI assistant could attempt prompt injection - crafting messages designed to override the assistant's instructions. Modern language models include some built-in resistance, but no model is immune; additional guardrails in your system prompt and tool configuration help reduce the risk.
Output Validation
If your AI assistant generates content that feeds into other systems (like sending emails or executing commands), validate and sanitize outputs before acting on them. Treat AI-generated content with the same caution you would apply to any user input.
Regular Updates
Keep your Moltbot installation updated. Security patches and improvements are released regularly. On Finna, gateway images are version-pinned and updated through a controlled deployment process, ensuring you get security fixes without unexpected changes.
Summary
Security for AI assistants requires attention across multiple layers - from how you store API keys to how you isolate tenant workloads to how you handle conversation data. The most effective approach combines strong defaults (encrypted storage, VM isolation, token authentication) with user-configurable controls (channel access policies, tool permissions, session management).
By following these practices, you can confidently deploy AI assistants that handle sensitive workflows without compromising your security posture.