Identity Verification: Flows We’ve Seen in CCPA Data Requests (2 of 2)

Identity verification is a critical part of exercising most data rights. Are you requesting a copy of your purchase history, or did an identity thief make the request? In part one of this series, we defined identity verification, explained its role in privacy law, and summarized what California privacy regulation says about verification.

In this post, we’ll highlight some of the common verification flows, interesting anecdotes, and tricky challenges we’ve seen. If you’re a consumer reader, this post will help you understand more about what to expect when you make data requests. It also may be useful for privacy researchers, authorized agents, and companies who want to understand the current landscape of identity verification.

Real-world Verification Flows

Common Verification Flows

Log in to account

If a consumer already has a password-protected account, the CCPA encourages (and sometimes even requires) that businesses ask consumers to log in to that account. However, the regulations also state that businesses cannot require consumers to make a new account if they don’t already have one.

Good for…
• Most usable option for consumers who have an active account

Tricky spots…
• Consumers might not have access to the password or associated email address
• Authorized agents cannot assist with requests that require a consumer’s private password

A screenshot from the Twitter settings page “Download an archive of your data.” A pop-up window says “Verify your password.”

Email or phone verification

It’s common for companies to collect an email address or phone number in a form and send an email or text to confirm the consumer’s ownership of that contact point. Often this process is automated, immediately sending an SMS code or a “Click this button to confirm” email. Other times, a support team member reaches out manually via email.

Good for…
• When a company already has a phone number or email on file that matches the one the consumer provides

Tricky spots…
• Sometimes consumers feel frustrated when these requests are “exploding,” requiring them to complete an action within a short time limit
• Requires ready access to SMS and/or email
• Manual processes require additional back-and-forth communication between the consumer and company
Two screenshots from a data request sent to Lyft. The first reads, “Welcome back to Lyft! We’ll send a text to verify your phone.” The second reads, “Verify identity. Enter your email address to verify your account.” This is an example of an automated flow.
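The automated version of this flow usually boils down to issuing a short-lived one-time code and checking it on submission. Here’s a minimal, hypothetical sketch of that logic; the function names, the 10-minute expiry, and the 6-digit format are our own illustrative assumptions, not any particular company’s implementation:

```python
import hmac
import secrets
import time

# Codes often expire quickly -- the "exploding" requests consumers find
# frustrating. Assumed value for illustration only.
CODE_TTL_SECONDS = 10 * 60

def issue_code() -> tuple[str, float]:
    """Generate a random 6-digit code and record when it was issued."""
    code = f"{secrets.randbelow(1_000_000):06d}"
    return code, time.time()

def verify_code(expected: str, issued_at: float, submitted: str) -> bool:
    """Accept only a matching, unexpired code; compare in constant time."""
    if time.time() - issued_at > CODE_TTL_SECONDS:
        return False
    return hmac.compare_digest(expected, submitted)

code, issued = issue_code()
print(verify_code(code, issued, code))  # True: correct, unexpired code

# A guaranteed-wrong code fails verification:
wrong = f"{(int(code) + 1) % 1_000_000:06d}"
print(verify_code(code, issued, wrong))  # False
```

A real service would also rate-limit attempts and handle SMS/email delivery, which this sketch leaves out.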

Less Common Verification Flows

Photo of ID or face

Some companies ask consumers to share a copy of their government-issued identification. This might require scanning the ID or taking photos of the front and back. Some flows invite consumers to redact or remove unnecessary information on their ID, but usually don’t provide instructions on how to do so. Because an image of an ID can be illegally reproduced, some flows add additional safeguards such as liveness detection or facial recognition.

This screenshot from a T-Mobile access request illustrates a flow that includes “Verify your photo ID” with a small icon of a camera.
A screenshot illustrating a manual email verification flow for a request submitted by an authorized agent. An email from BeenVerified reads, “Consumer Reports, Inc. has submitted a Right to Know request on your behalf…To verify your request, please provide us with the following information: your full name, address, and phone number.”

Liveness detection uses your mobile device’s sensors to try to confirm that you’re a real, live person. If a company uses liveness detection, it often requires that consumers take a new ID photo using a camera in its app, rather than uploading a still image.

Some companies also confirm legitimate IDs by asking consumers to take a selfie photo, sometimes while holding their ID. Many use facial recognition to match the selfie to the photo on the ID. However, selfies can be found on many public profiles. To mitigate the use of stolen images, some flows also ask users to take a new photo in an odd or specific pose, making it more difficult for fraudsters to copy.

Good for…
• Requests involving sensitive data that requires a higher confidence of verification
• Situations where a company did not retain a consumer’s phone number or email

Tricky spots…
• Consumers don’t always have access to cameras, or have devices that don’t support newer sensor software
• Consumers don’t always have access to government IDs
• These techniques usually require collecting new, sensitive data that the company didn’t already maintain
• Facial recognition relies on biometric data, a particularly sensitive class of data
A series of screenshots from Bumble’s selfie verification flow. These screenshots are not from an actual data rights request, but are mock-ups created by Bumble to explain their photo verification process.

Identity questionnaires

Some companies use short questionnaires, also called “knowledge-based authentication” or dynamic security questions, that ask about details of a person’s life and relationships that ostensibly only they would know. Unlike static security questions that require a user to give answers upon sign-up (“What was your first concert?”), dynamic questions are generated from data aggregated from services like data brokers or credit agencies. For example, in a recent request flow we tested, we were asked to select an answer to each of these questions from 4-5 response options:

  • In which of the following states does [MOTHER] currently live or own property?
  • Which of the following street addresses in [PREVIOUS CITY] have you ever lived or been associated with?
  • What month was [FATHER] born in?
Good for…
• Does not require special access to devices or documentation

Tricky spots…
• Some questions are based on publicly-available information, meaning they could be successfully completed by a fraudster
• Relies on a consumer’s memory
• Not all consumers have relationships with their legal family members
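Conceptually, a dynamic question like the ones above is just a true answer pulled from an aggregated record, mixed with plausible distractors. This hypothetical sketch shows that assembly step; the record fields, prompt wording, and option count are invented for illustration, not drawn from any real KBA vendor:

```python
import random

def build_question(record: dict, all_states: list[str], n_options: int = 5) -> dict:
    """Build a multiple-choice question: one true answer plus distractors.

    `record` stands in for a data-broker or credit-agency file; the
    "mother_state" field is an invented example.
    """
    answer = record["mother_state"]
    distractors = [s for s in all_states if s != answer]
    options = random.sample(distractors, n_options - 1) + [answer]
    random.shuffle(options)  # don't leak the answer by position
    return {
        "prompt": ("In which of the following states does your mother "
                   "currently live or own property?"),
        "options": options,
        "answer": answer,
    }

record = {"mother_state": "Ohio"}
states = ["Ohio", "Texas", "Maine", "Utah", "Iowa", "Nevada", "Oregon"]
question = build_question(record, states)
print(question["prompt"])
print(question["options"])  # five states, one of which is "Ohio"
```

Note the tricky spots above apply directly: if the underlying record is public or wrong, the question is guessable by a fraudster or unanswerable by the real consumer.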

Sign an affidavit

Rather than relying on security or technology tools to verify requests, some flows include a simple legal document that consumers are required to sign. An affidavit like this doesn’t provide any material protection against fraud, but it does raise the stakes for fraudsters by exposing them to the possibility of a perjury charge.

Good for…
• Legal dissuasion against fraud

Tricky spots…
• Very easy to spoof
Screenshot of an online affidavit from a data request with Acxiom. The text includes, “Sign your request. I, [name], hereby declare that the information submitted in this request is true…I also declare under penalty of perjury that I have full authority to submit this request…”

Device profiling

One verification technique that sometimes happens invisibly behind the scenes is device or browser profiling. Also called browser “fingerprinting,” this technique collects small pieces of information about a consumer’s device or browser when they visit a webpage. For example, a company might note and keep track of the browser type (Firefox or Edge?), operating system (Windows or macOS?), or screen size (big landscape? small portrait?). When many data points like these are taken together, they can sometimes uniquely identify a given person.

Good for…
• Doesn’t create usability barriers for consumers

Tricky spots…
• Requires that consumers always use the same devices
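The core idea is simple: combine several individually weak signals into one stable identifier. Here’s a minimal, hypothetical sketch of that combination step; the attribute names are illustrative, and real fingerprinting scripts collect far more signals (fonts, canvas rendering, plugins) on the client side:

```python
import hashlib
import json

def fingerprint(attrs: dict) -> str:
    """Hash a canonical (sorted-key) JSON form of the collected attributes."""
    canonical = json.dumps(attrs, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

visit_1 = {"browser": "Firefox", "os": "Windows", "screen": "1920x1080",
           "timezone": "America/New_York", "language": "en-US"}
visit_2 = dict(visit_1)  # the same device returning later

print(fingerprint(visit_1) == fingerprint(visit_2))  # True: same signals

# A different device -- or even just a different screen size -- changes
# the fingerprint, which is why this technique breaks when consumers
# switch devices:
visit_3 = {**visit_1, "screen": "390x844"}
print(fingerprint(visit_1) == fingerprint(visit_3))  # False
```

This also illustrates the tricky spot in the table above: the identifier is only as stable as the device and browser configuration it was computed from.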

Uncommon Verification Flows


Notarized documents

A few companies require that a consumer print, sign, and mail a notarized document. Notarization requires a consumer to find an in-person official and sometimes pay a notary fee. The CCPA bans companies from charging fees for data requests, and explicitly notes that companies need to reimburse consumers for expenses such as notary fees. Don Marti, a friend of Digital Lab, outlined an experience he had with data request notarization on his personal blog.

One-click downloads

In rare cases, the data in question isn’t very sensitive, or doesn’t require additional verification. For example, Oracle Advertising provides a one-click download to access the data associated with the device and browser currently accessing the site.

A screenshot from an Oracle web page reads, “Want to see your data? Oracle empowers you to view the online third-party, interest data associated with your browser, computer, or device,” with a “Download PDF” button.

Matching domain-specific data

Some companies are able to match a person to their records in a database by asking them questions about past interactions. This might include purchase history (“What was your receipt total when you shopped at our store last week?”) or a mobile identifier (“Enter your Android advertising ID.”). This kind of data isn’t necessarily secret, but it’s usually not publicly available. Unfortunately, it can also sometimes be tricky for a consumer to track down this data.

This screenshot from a Tesla email illustrates some of the data that consumers are required to provide before accessing Tesla vehicle data if they do not have an account. It includes the vehicle identification number, the start date of vehicle ownership, and the date the vehicle was sold.
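Under the hood, this flow amounts to comparing a claimed value against a value the company already holds. A minimal sketch, assuming invented field names and an exact-match policy (real systems would decide how much fuzziness, if any, to tolerate):

```python
# Toy stand-in for a company's records; emails and totals are invented.
RECORDS = [
    {"email": "pat@example.com", "last_receipt_total": 42.17},
    {"email": "sam@example.com", "last_receipt_total": 9.99},
]

def match_request(email: str, claimed_total: float, tolerance: float = 0.0) -> bool:
    """Verify a requester by comparing their claimed receipt total to
    the total on file, within an optional tolerance."""
    for rec in RECORDS:
        if rec["email"] == email:
            return abs(rec["last_receipt_total"] - claimed_total) <= tolerance
    return False  # no record on file: can't verify this way

print(match_request("pat@example.com", 42.17))  # True: claim matches records
print(match_request("pat@example.com", 40.00))  # False: claim doesn't match
```

As the post notes, the weak point isn’t the comparison; it’s that consumers often can’t easily find the value they’re being asked to supply.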

There’s more work on the horizon

In this piece, we described a handful of the verification patterns we’ve seen across thousands of data rights requests. We’re not making recommendations or claiming solutions for consumers or companies regarding verification. It’s a tricky process. However, we do plan to continue studying identity verification patterns and sharing our knowledge to improve data rights requests on behalf of consumers. We’re particularly invested in making identity verification flows work with authorized agents, through initiatives such as the Data Rights Protocol.

Thanks to Ginny Fahs for feedback on this piece. Don Marti also helped identify interesting flows from his adventures in data rights.
