No legalese. No vague corporate speak. Here's exactly what happens to your data from the moment you paste it until it's gone — with links so you can verify every claim yourself.
When you paste a suspicious message or upload a screenshot, it stays in your browser until you hit “Check This Message.” We don't see it, store it, or do anything with it until you actively choose to scan.
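If you're curious what that looks like in code, here's a minimal browser-side sketch. The endpoint path and element IDs are placeholders, not our actual source, but the shape is accurate: nothing is read from the page or sent anywhere until you click.

```ts
// Browser-side sketch. "/api/check" and the element IDs are illustrative.
const button = document.querySelector<HTMLButtonElement>("#check-button")!;
const input = document.querySelector<HTMLTextAreaElement>("#message-input")!;

button.addEventListener("click", async () => {
  // The message is read from the textarea only at click time,
  // then sent once over HTTPS.
  const response = await fetch("/api/check", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ message: input.value }),
  });
  const result = await response.json();
  console.log(result); // show the analysis to the user
});
```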
Your message is sent over HTTPS — the same encryption your bank uses. Nobody can read it in transit. It arrives at our server, which runs on Netlify's SOC 2-certified infrastructure.
We send your message to Anthropic's Claude AI to check for scam patterns. Anthropic's policy is explicit: they do not train models on API inputs. They process it, return the analysis, and that's it. We use the paid API tier, which has a zero-retention data policy.
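For the technically curious, the server-side call looks roughly like the sketch below. It assumes Anthropic's official TypeScript SDK (@anthropic-ai/sdk); the model name, prompt wording, and function name are illustrative rather than our exact production code.

```ts
// Server-side sketch using Anthropic's official TypeScript SDK.
// Model name, prompt, and function name are illustrative.
import Anthropic from "@anthropic-ai/sdk";

const anthropic = new Anthropic({ apiKey: process.env.ANTHROPIC_API_KEY });

export async function analyzeMessage(suspiciousText: string): Promise<string> {
  const response = await anthropic.messages.create({
    model: "claude-3-5-sonnet-latest", // illustrative model choice
    max_tokens: 1024,
    messages: [
      {
        role: "user",
        content: `Check the following message for scam patterns:\n\n${suspiciousText}`,
      },
    ],
  });

  // Only the analysis text comes back; per Anthropic's API terms,
  // the input is not used to train their models.
  const first = response.content[0];
  return first.type === "text" ? first.text : "";
}
```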
The analysis result is held in server RAM (not a database, not a CDN, not disk) for up to 1 hour. This is solely so the share link works — if you send someone your result URL, they can view it during that window.
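A minimal sketch of that cache, assuming a plain in-process Map (the names are made up for illustration): results sit in the function's memory with a one-hour expiry, and nothing is written to a database or disk.

```ts
// In-memory result cache sketch. No database, no disk: just process RAM.
const TTL_MS = 60 * 60 * 1000; // 1 hour

interface CachedResult {
  analysis: string;
  expiresAt: number;
}

const resultCache = new Map<string, CachedResult>();

export function storeResult(id: string, analysis: string): void {
  resultCache.set(id, { analysis, expiresAt: Date.now() + TTL_MS });
}

export function getResult(id: string): string | null {
  const entry = resultCache.get(id);
  if (!entry || Date.now() > entry.expiresAt) {
    resultCache.delete(id); // expired or unknown: drop it and return nothing
    return null;
  }
  return entry.analysis;
}
```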
Is the share link guessable? No. Each result gets a random UUID (e.g. a3f8c2d1-7b4e-...). There are 340 undecillion possible IDs — it cannot be guessed or enumerated.
What if I don't share? The result still expires from RAM after 1 hour regardless. We don't extend the cache based on whether you shared or not.
Who can access it? Only someone with the exact URL. There is no index, no search, no way to browse other people's results. After the hour, even the URL returns nothing.
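Both points above, the unguessable ID and the URL that stops working after an hour, boil down to a few lines. This sketch assumes Node's built-in crypto module and a classic Netlify function handler; getResult and storeResult are the cache helpers sketched earlier, and every name and URL shape here is illustrative.

```ts
// Share-link sketch: random ID generation plus the lookup handler.
// Assumes a classic Netlify function signature; names are illustrative.
import { randomUUID } from "node:crypto";
import type { Handler } from "@netlify/functions";
import { storeResult, getResult } from "./cache";

export function createShareLink(analysis: string): string {
  // A UUID is 128 bits (~3.4 x 10^38 values, the "340 undecillion" above);
  // a version-4 UUID fixes 6 of those bits, still leaving 2^122 random values.
  const id = randomUUID();
  storeResult(id, analysis);
  return `https://scamsignal.ai/result?id=${id}`; // illustrative URL shape
}

export const handler: Handler = async (event) => {
  const id = event.queryStringParameters?.id;
  const analysis = id ? getResult(id) : null;

  if (!analysis) {
    // Unknown, malformed, or expired ID: there is simply nothing to serve.
    return { statusCode: 404, body: JSON.stringify({ error: "Not found" }) };
  }
  return { statusCode: 200, body: JSON.stringify({ analysis }) };
};
```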
After the cache expires, your message and the analysis are permanently gone. We don't write anything to a database. We don't keep server logs of what you scanned. There is no “history” to worry about. When the server restarts, the cache is wiped entirely.
If you upload a screenshot instead of pasting text, here's what's different:
Because we don't have a database of user scans, there's nothing to steal. The only data that exists at any moment is whatever's in the 1-hour RAM cache — and that's gone on every server restart. An attacker who breached our server would find an empty room.
What if Anthropic changes its policies? We monitor their terms and would switch providers or notify you before ever sending data under different conditions. Anthropic's current API terms explicitly prohibit using customer inputs for training. If that changes, we change too.
Our app runs on Netlify's edge network, with serverless functions executing in the US. Anthropic's API also processes requests in the US. Both are SOC 2 compliant.
Are we GDPR and CCPA compliant? Yes. Because we don't store personal data permanently and don't require accounts, most GDPR/CCPA obligations are minimized by design. EU and California residents can email us to exercise any data rights, though in practice there's nothing to delete because we didn't keep it.
What if law enforcement demands my data? They can ask, but we won't have anything to hand over. We don't log what you scan, and the RAM cache is gone within an hour. You can't subpoena data that doesn't exist.
Why should you take our word for all this? Fair question. You can verify the key claims yourself:
Still have questions? Email us at hello@scamsignal.ai and we'll answer in plain English, too. You can also read the full legal privacy policy if you want the formal version.