Deployment Topologies
Rescile’s enterprise architecture decouples state (immutable artifacts in shared storage) from compute (stateless serving fleets), so each layer can scale independently in hybrid deployments.
Layered Architecture
1. Public / Edge Layer
Unified Access, UI Hosting & API Routing
- Handles inbound connections via HTTPS and WSS.
- Hosts the Rescile Portal Fleet, which acts as the Edge Router, serves the Module UI, and runs the Prebuild Engine.
- End Users (Browser) and AI Assistants (LLM/Agent) connect to this layer.
- Integrates with external Enterprise Self-Service and Execution portals.
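The Edge Router's dispatch logic can be pictured as a simple prefix-to-upstream mapping. This is a hedged sketch: the route table, path prefixes, and upstream names (`controller-fleet`, `mcp-fleet`, `portal-static`) are illustrative, not Rescile's actual configuration.

```python
# Illustrative routing table: path prefixes map inbound HTTPS/WSS requests
# to the internal serving fleets described in the Private / Internal Layer.
ROUTES = {
    "/api/": "controller-fleet",  # GraphQL & REST API
    "/mcp/": "mcp-fleet",         # LLM Context Protocol interactions
    "/ui/":  "portal-static",     # Module UI assets
}

def route(path: str) -> str:
    """Return the internal upstream for an inbound request path."""
    for prefix, upstream in ROUTES.items():
        if path.startswith(prefix):
            return upstream
    return "portal-static"  # default: serve the Portal UI

# e.g. route("/api/graphql") selects the Controller Fleet
```

In practice this dispatch would live in the Portal Fleet's reverse proxy; the sketch only shows the shape of the decision.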
2. Private / Internal Layer
Stateless Serving Fleets (Auto-Scaling Groups)
- Rescile Controller Fleet: Serves the GraphQL and REST APIs across the cluster.
- Rescile MCP Server Fleet: Handles LLM Context Protocol interactions for native AI.
- Both fleets pull immutable artifacts from shared storage and serve queries entirely from memory.
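The pull-and-serve pattern above can be sketched as follows. `fetch_bundle` is a stand-in for an S3 GET of the published bundle; the bundle contents and the `graph.json` member name are assumptions for illustration.

```python
# Sketch of a stateless serving node: at startup it pulls the immutable
# bundle from shared storage, unpacks it entirely into memory, and answers
# queries without writing any local state.
import io
import json
import tarfile

def fetch_bundle() -> bytes:
    """Stand-in for pulling rescile-bundle.tar.gz from the S3 Registry."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w:gz") as tar:
        data = json.dumps({"assets": {"srv-01": {"role": "db"}}}).encode()
        info = tarfile.TarInfo("graph.json")
        info.size = len(data)
        tar.addfile(info, io.BytesIO(data))
    return buf.getvalue()

def load_graph(bundle: bytes) -> dict:
    """Unpack the artifact in memory; nothing is written to disk."""
    with tarfile.open(fileobj=io.BytesIO(bundle), mode="r:gz") as tar:
        return json.load(tar.extractfile("graph.json"))

graph = load_graph(fetch_bundle())
# Queries are then answered from this in-memory structure.
```

Because the artifact is immutable, every node in an auto-scaling group that loads the same bundle serves identical answers, which is what makes the fleets safely stateless.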
3. Build & Shared Storage Layer
Immutable Artifact Pipeline & Foundational Configuration
- Git Repository: Stores all configuration, Asset CSVs, and TOML blueprints.
- Rescile Importer: Runs as a CI/CD Artifact Builder Node. It pulls source from Git, builds the graph, and publishes the artifact.
- Shared Storage (S3 Registry): Stores the generated rescile-bundle.tar.gz artifacts.
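The build-and-publish flow can be sketched as below. `build_graph`, `publish`, and the in-memory `registry` are hypothetical stand-ins (the real Importer parses Asset CSVs and TOML blueprints, and the registry is an S3 PUT); content-addressed keys are one plausible way to keep published bundles immutable.

```python
# Illustrative Importer pipeline: pull source, build the graph, publish
# the artifact under a content-addressed key so it can never be mutated.
import hashlib
import json

def build_graph(sources: dict) -> bytes:
    """Stand-in for compiling Asset CSVs and TOML blueprints into a graph."""
    return json.dumps(sources, sort_keys=True).encode()

def publish(artifact: bytes, registry: dict) -> str:
    """Store the artifact under a key derived from its content hash."""
    digest = hashlib.sha256(artifact).hexdigest()[:12]
    key = f"rescile-bundle-{digest}.tar.gz"
    registry[key] = artifact  # stands in for an S3 PUT
    return key

registry = {}  # stands in for the S3 Registry
key = publish(build_graph({"assets.csv": "id,role\nsrv-01,db"}), registry)
```

With content-addressed keys, republishing identical source yields the same key, and any change to the source yields a new artifact rather than overwriting an old one.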
Standalone Mode (rescile-ce)
For smaller deployments, individual laptops, or air-gapped environments, you can run the entire pipeline in standalone mode using the rescile-ce binary.
This collapses the Importer and Controller into a single execution context, monitoring local directories for changes instead of polling S3.
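Local change detection of this kind can be sketched with a snapshot comparison. The `snapshot`/`changed` helpers are illustrative only, not part of the rescile-ce CLI; a real implementation might use filesystem notifications instead of mtime polling.

```python
# Minimal sketch of standalone change detection: instead of polling S3,
# compare (path -> mtime) snapshots of a local config directory and
# re-import when anything is added, removed, or modified.
import os
import tempfile
import time
from pathlib import Path

def snapshot(root: Path) -> dict:
    """Record path -> mtime for every file under root."""
    return {p: p.stat().st_mtime for p in root.rglob("*") if p.is_file()}

def changed(old: dict, new: dict) -> bool:
    """True if any file was added, removed, or modified."""
    return old != new

# Demo in a temp directory: edit a file and bump its mtime.
root = Path(tempfile.mkdtemp())
(root / "assets.csv").write_text("id,role\nsrv-01,db")
before = snapshot(root)
(root / "assets.csv").write_text("id,role\nsrv-01,web")
os.utime(root / "assets.csv", (time.time() + 10,) * 2)
after = snapshot(root)
```

The standalone loop would then call a rebuild-and-reload step whenever `changed(before, after)` is true, collapsing the Importer and Controller roles into one process.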