Security at Parapet

Last updated: April 27, 2026

Parapet handles drawings, quotes, and project data that estimators use to win bids. The bar for protecting that information is high. This page describes the technical controls we have in place today, in plain language.

For the full data-handling picture, including what we collect and which sub-processors touch it, read the Privacy Policy.

01

Tenant isolation

Every record in Parapet is scoped to an organization. Isolation is enforced by Postgres Row Level Security policies on every table, keyed to the membership of the requesting user. Application code executes against the database with the user’s own JWT, so a query that tries to reach data outside the user’s organization fails at the database layer — not just in application logic.

A small number of background jobs (for example, the Stripe webhook handler and the worker pipeline) run with elevated credentials. Each such code path exists for an explicit, reviewed reason and is annotated in source.
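As an illustration of the model above, this sketch simulates the per-row check that Row Level Security applies: a row is visible only when its organization matches the requester's JWT claim, unless the caller runs with an explicitly elevated service role. The claim and row shapes here are hypothetical; the real policies are declared in SQL on each table.

```typescript
// Simulates the per-row visibility check Postgres RLS performs.
// Field names and the claim shape are illustrative, not the real schema.
interface JwtClaims {
  sub: string;                  // user ID
  org_id: string;               // organization of the requesting session
  role: "member" | "service";   // "service" = reviewed, elevated code path
}

interface Row {
  org_id: string;
}

function rowVisible(row: Row, claims: JwtClaims): boolean {
  if (claims.role === "service") return true; // annotated elevated paths only
  return row.org_id === claims.org_id;        // fails closed across tenants
}
```

Because the check runs at the database layer, a bug in application code cannot widen it: a query that reaches for another tenant's rows simply gets nothing back.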

02

Authentication

Sessions are managed by Supabase Auth. Session cookies are marked HttpOnly and Secure and carry a SameSite attribute. Sign-in supports email + password, with password reset gated on a verified email address. Tokens are short-lived and refreshed server-side on each request. Rate limits protect the login, signup, and password-reset endpoints.
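A minimal sketch of the cookie attributes described above, with a hypothetical cookie name and lifetime; the exact attributes Supabase Auth sets may differ in detail.

```typescript
// Hypothetical helper assembling the Set-Cookie attributes described above.
function sessionCookie(token: string, maxAgeSeconds: number): string {
  return [
    `sb-session=${token}`,      // cookie name is illustrative
    `Max-Age=${maxAgeSeconds}`,
    "Path=/",
    "HttpOnly",                 // unreadable from browser JavaScript
    "Secure",                   // sent over TLS only
    "SameSite=Lax",             // blocks most cross-site request forgery
  ].join("; ");
}
```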

03

Encryption

All traffic to Parapet is served over TLS. Data at rest in our Postgres database and in object storage is encrypted by the underlying managed service. Application secrets (API keys, signing secrets, database credentials) live in deployment-platform secret managers and are injected at runtime — they are never committed to source.
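The runtime-injection pattern can be sketched as a startup check: a hypothetical helper that reads a secret from the environment (where the deployment platform injects it) and refuses to proceed if it is absent, so nothing falls back to a value committed in source.

```typescript
// Hypothetical startup check: secrets come from the environment, never from
// source, and the process fails loudly if a required one is missing.
function requireSecret(name: string): string {
  const value = process.env[name];
  if (!value) throw new Error(`missing required secret: ${name}`);
  return value;
}
```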

04

File uploads

Drawing PDFs and quote PDFs are uploaded directly to encrypted object storage using short-lived signed URLs issued by the API. The API server itself never buffers customer files, which removes a class of large-file failure modes and shrinks the attack surface for upload abuse. File type and size are validated both at upload time and again before processing.
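The signed-URL flow can be sketched with an HMAC, assuming a hypothetical signing key and storage host. Managed object-storage services implement their own signing schemes, but the shape is the same: the signature binds the object path to an expiry, and verification compares in constant time.

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Hypothetical key; in production it would come from the secret manager.
const SIGNING_KEY = "example-signing-key";

// The signature binds the object path to an expiry timestamp (seconds).
function uploadSignature(path: string, expiresAt: number): string {
  return createHmac("sha256", SIGNING_KEY)
    .update(`${path}:${expiresAt}`)
    .digest("hex");
}

function signedUploadUrl(path: string, expiresAt: number): string {
  return `https://storage.example.com${path}?expires=${expiresAt}&sig=${uploadSignature(path, expiresAt)}`;
}

// Storage-side check: reject expired links and tampered paths or expiries.
function verifyUpload(path: string, expiresAt: number, sigHex: string, now: number): boolean {
  if (now > expiresAt) return false; // link has expired
  const expected = Buffer.from(uploadSignature(path, expiresAt), "hex");
  const given = Buffer.from(sigHex, "hex");
  return given.length === expected.length && timingSafeEqual(given, expected);
}
```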

05

AI processing

AI inference happens only in our worker service, server-side. The Anthropic API key is never exposed to the browser. Customer content is sent to Anthropic only for the duration of an analysis request; per Anthropic’s API terms, customer content is not retained after inference and is not used to train Anthropic’s models. We do not use customer content to train any model of our own.

Every AI-extracted scope item carries a confidence indicator (low, medium, high). A human estimator must review each item in our Record-of-Drawings workspace before it appears in a bid. The product is built around the assumption that the AI is fast, not infallible.
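A minimal sketch of the review gate described above, with hypothetical field names and illustrative confidence thresholds: no item is biddable until a human has reviewed it, regardless of how confident the model was.

```typescript
type Confidence = "low" | "medium" | "high";

// Thresholds are illustrative, not production values.
function confidenceBand(score: number): Confidence {
  if (score >= 0.9) return "high";
  if (score >= 0.6) return "medium";
  return "low";
}

interface ScopeItem {
  description: string;
  confidence: Confidence;
  reviewed: boolean; // set only by a human estimator in the workspace
}

// Items enter a bid only after human review, regardless of confidence.
function biddable(item: ScopeItem): boolean {
  return item.reviewed;
}
```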

06

Webhooks

Inbound webhooks (from Stripe for billing events, from Resend for email delivery events) verify cryptographic signatures using constant-time comparison before any payload is processed. Each event ID is stored on first receipt so retries are idempotent — a duplicate delivery is recognized and skipped.
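A simplified sketch of the two checks described above. Stripe and Resend each define their own signature scheme (Stripe's, for instance, also binds a timestamp), so the secret and payload shape here are hypothetical; the constant-time comparison and the first-receipt dedupe are the parts that carry over.

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

const WEBHOOK_SECRET = "example-webhook-secret"; // hypothetical

function signatureValid(rawBody: string, signatureHex: string): boolean {
  const expected = createHmac("sha256", WEBHOOK_SECRET).update(rawBody).digest();
  const given = Buffer.from(signatureHex, "hex");
  // Constant-time comparison: timing never leaks how many bytes matched.
  return given.length === expected.length && timingSafeEqual(given, expected);
}

// First-receipt dedupe: a retry of an already-seen event ID is skipped.
const seenEventIds = new Set<string>(); // in production: a persisted unique key
function firstDelivery(eventId: string): boolean {
  if (seenEventIds.has(eventId)) return false;
  seenEventIds.add(eventId);
  return true;
}
```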

07

Background work isolation

Heavy workloads — AI analysis, PDF rendering, email sending — run on our worker service, separate from the edge-deployed marketing site and application. Each job carries an organization ID and a request correlation ID end-to-end so we can trace any anomaly back to its origin without scraping logs.
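The end-to-end propagation can be sketched as a job envelope, with hypothetical field names: both IDs ride alongside every queued payload, from the API request that started the work to the worker that finishes it.

```typescript
// Hypothetical job envelope: both IDs travel with every queued payload.
interface JobEnvelope<T> {
  organizationId: string; // which tenant the work belongs to
  requestId: string;      // correlation ID minted at the originating request
  payload: T;
}

function enqueue<T>(organizationId: string, requestId: string, payload: T): JobEnvelope<T> {
  return { organizationId, requestId, payload };
}
```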

08

Logging and monitoring

We use Sentry for error tracking and structured logs in our API and worker services. Logs never include payment instruments, drawing contents, or quote contents; they carry only the identifiers (user ID, organization ID, request ID) needed to debug a problem. Suspicious behavior — failed-login spikes, unusual upload volumes, AI-spend anomalies — surfaces to the team for review.
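One way to enforce that log policy is an allow-list at the logging boundary: identifiers pass through, content never does. The field names in this sketch are hypothetical.

```typescript
// Allow-list at the logging boundary: identifiers pass, content never does.
const LOGGABLE_FIELDS = new Set(["userId", "organizationId", "requestId", "event"]);

function logSafe(fields: Record<string, unknown>): Record<string, unknown> {
  const out: Record<string, unknown> = {};
  for (const [key, value] of Object.entries(fields)) {
    if (LOGGABLE_FIELDS.has(key)) out[key] = value;
  }
  return out;
}
```

An allow-list is deliberately stricter than a deny-list: a new field added to a log call is dropped by default rather than leaked by default.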

09

Sub-processors

We use a small set of vetted third-party services to deliver the product. Each is contractually limited to processing data on our behalf for a specific purpose. The full list lives in our Privacy Policy. We update that list when we add or change a sub-processor.

10

Reporting a vulnerability

If you believe you have found a security vulnerability in Parapet, please email [email protected]. A GPG public key is available on request.

We ask researchers to give us at least 90 days from the date of the report to remediate before any public disclosure, and to avoid actions that would degrade the Service for other customers (no denial-of-service, no automated scanning at high volume, no testing against accounts other than your own). In return, we will acknowledge the report promptly, keep you updated on remediation progress, and credit you in our release notes if you would like. We do not currently run a paid bounty program.

Have a deeper question?

If you're a buyer or auditor with security, compliance, or architecture questions, we're happy to walk through the specifics.