Comparative Policy Analysis Rubric

The privacy policy comparative analysis, the process used to create this report, focuses on 10 criteria informed by the Digital Standard, a set of benchmarks that companies and organizations can use to design digital products that respect consumer privacy rights. This analysis is more narrowly focused than our standard product rating process, which incorporates over 100 questions derived directly from the Digital Standard. The streamlined approach yields valuable insights without the more comprehensive evaluation that comparative ratings would require.

This rubric was used to structure the comparative policy analysis of three videoconferencing platforms.

Criteria

1. Personal Data Leak — Does using the service have the potential to inadvertently leak or expose personal information?

2. First Party Data Collection — What data does the service collect? Are the data collected limited to what is required to run the service? This point covers collection of sensitive information like health data, biometric data, location, or financial information.

3. Data Enhancement — Can the company collect data from third parties and combine them with data it has already collected to create a more comprehensive, and theoretically more accurate, data set? Special notice would be given to data sets that contain sensitive information like health data, biometric data, location, or financial information.

4. Third Party Access — Can data collected by the first party be shared with or accessed by third parties? If yes, can users opt out of third party access, and/or know how these third parties use or subsequently re-share the data? Special notice would be given to sensitive information like health data, biometric data, location, or financial information.

5. Implications of Employer or School Sponsorship of Service — Does the service collect data (or provide opportunities) that allow an employer or colleague to micromanage, harass, or otherwise engage in unprofessional behavior? Are “productivity tracking” features well scoped so they are not susceptible to inappropriate use or abuse? Are the features of the app or service, both defaults and customizable settings, designed to withstand abuses or excesses in the default settings (e.g., “Zoombombing”)?

6. Data Deletion and Retention — How long are data retained by first and third parties? Are deletion windows clearly defined? Does the end user have any right to review, modify, or delete data held by first or third parties?

7. Differentiation Between Data Collected from Hosts Versus Participants — Does the service differentiate between the data collected from hosts (who theoretically have more power and have chosen to use a specific platform) and the data collected from participants (who might not have chosen to use that platform)?

8. Information Used for Product Improvement — Most services claim the right to use collected data either to improve existing products or to develop new ones. If these rights are not clearly defined, or if the company claiming them is large, has multiple products, and/or enhances data from multiple sources, then “product improvement” or “product development” can mean nearly anything. The combination of a large amount of accurate, sensitive information with the ability to use that information to develop just about any new product creates a significant privacy risk.

9. Data That Can Be Sold or Shared as Part of a Transaction — While most companies’ terms explicitly state that data collected and stored by the company are an asset that can be transferred, some companies place limits on what can be transferred or allow people to opt out of a data transfer. Requiring users to opt in via explicit informed consent before any data are transferred is a best practice that shows the company respects user privacy.

10. Access to Data for Machine Learning, AI Analysis, or Human Review — If a service can access content for automated scanning, for generating a transcript from audio or video files, or for other types of predictive or automated analysis, the terms should specify exactly how those data are used, who can access them, whether they are shared with any third parties, and what choices people have to opt in to or out of any automated review. Additionally, if features such as transcript generation or other automated analysis are ever supported by human review, how are end users notified about the possibility of human review?
