Atlas uses a layered architecture: Owner, Workspace, Transaction, Item, and File. Each layer carries workspace_id so we can filter every query before we even touch the MCP tool code. This keeps multi-tenant brokers safe while still letting them publish a single MCP server URL.
Workspace-first filtering
Every request starts with workspace validation. The MCP server verifies the workspace_id, role, and consent scopes before it even inspects tool parameters. This preserves the “deny by default” rule documented in the Knowledge Base.
Once the workspace is verified, the server checks whether the actor is Owner/Admin (full access) or invited Client (transaction-scoped access). The policy engine rejects any attempt to cross workspaces.
- All objects carry workspace_id
- Every query filters by workspace_id first
- Never let a Client select across workspaces
Transaction, Item, and File lineage
Transactions collect every workflow for a given deal. Items represent tasks, forms, and documents. Files are the signed PDFs, comps, and reports. MCP tools always log which transaction/item/file they touched so you can answer “Who generated this PDF?” instantly.
Atlas provenance tags combine workspace_id, transaction_id, tool_name, and event_id. You can feed those tags into downstream RAG systems or audit exports.
Policy + tracker loop
The tracker & guard store maintains counters per office, user, tool, vendor, model, and time window. Before a tool executes, MCP calls the preflight service to ensure budgets are available and consent scopes still apply.
After execution, the postflight service writes a signed usage event with the fields listed in the Knowledge Base. This closes the loop and feeds alerts, dashboards, and billing.
- Limits by office
- Limits by user
- Limits by tool
- Limits by vendor
- Limits by model
- Limits by time window
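The preflight/postflight loop can be sketched as follows. This is a minimal in-memory sketch, assuming a class name (`Tracker`), a limit-key shape, and an HMAC signing scheme that are illustrative, not the actual tracker service:

```python
import hashlib
import hmac
import json
import time
from collections import defaultdict

class Tracker:
    """Illustrative counter store keyed by (dimension, value, time bucket)."""

    def __init__(self, limits: dict, window_seconds: int = 3600,
                 secret: bytes = b"demo-key"):
        self.limits = limits            # e.g. {("tool", "generate_pdf"): 5}
        self.window = window_seconds
        self.secret = secret
        self.counts: dict = defaultdict(int)

    def _bucket(self) -> int:
        return int(time.time() // self.window)

    def preflight(self, dims: dict) -> bool:
        """Check every dimension's budget before the tool runs."""
        bucket = self._bucket()
        for dim, value in dims.items():
            limit = self.limits.get((dim, value))
            if limit is not None and self.counts[(dim, value, bucket)] >= limit:
                return False            # budget exhausted -> block the call
        return True

    def postflight(self, dims: dict, event: dict) -> dict:
        """Increment counters and emit a signed usage event."""
        bucket = self._bucket()
        for dim, value in dims.items():
            self.counts[(dim, value, bucket)] += 1
        payload = json.dumps({**event, **dims}, sort_keys=True).encode()
        sig = hmac.new(self.secret, payload, hashlib.sha256).hexdigest()
        return {"payload": payload.decode(), "signature": sig}
```

The signed event is what closes the loop: alerts, dashboards, and billing all consume the same payload, and the signature lets auditors verify it was written by the tracker.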
Identity and consent wiring
Atlas uses explicit, time-bound scopes. Agents, coordinators, and clients approve scopes through the consent service. MCP tools read those scopes during preflight and write them to usage events after each run.
Because scopes are logged alongside artifacts, brokers can prove who approved third-party data pulls or MLS exports.
Deploy pattern
Most teams host the MCP server inside Atlas, then run thin clients in Custom GPT, Slack, or Playwright. Advanced teams fork the server, keep our tracker services, and extend the tool catalog with brokerage-specific flows.
- Stand up an Atlas Server Builder environment or fork the repo.
- Declare each tool contract: inputs, outputs, scopes, rate caps, latency, and pricing.
- Attach tracker policies, identity roles, and consent scopes so every call is audited.
- Expose the server URL to Custom GPTs, Slack apps, VS Code, or any other MCP client.
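Step two of the list above, declaring a tool contract, could be sketched like this. The field names and the `register` helper are illustrative assumptions, not the Atlas Server Builder API:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ToolContract:
    """One entry in the tool catalog: everything preflight needs to enforce."""
    name: str
    inputs: dict            # param name -> type description
    outputs: dict
    scopes: tuple           # consent scopes required before execution
    rate_cap_per_hour: int
    latency_budget_ms: int
    price_per_call_usd: float

CATALOG: dict = {}

def register(contract: ToolContract) -> ToolContract:
    """Refuse contracts without scopes so every call stays auditable."""
    if not contract.scopes:
        raise ValueError(f"{contract.name}: every tool must declare consent scopes")
    CATALOG[contract.name] = contract
    return contract
```

Declaring rate caps and pricing in the contract, rather than in tool code, is what lets the tracker and billing services enforce them uniformly across the catalog.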