An interview with Consumer Reports’ Senior Director of Testing

Maria Rerecich, Senior Director of Product Testing at Consumer Reports 

Meet Maria Rerecich, Senior Director of Product Testing at Consumer Reports. An electrical engineer by training, Maria is as fundamental to the mission of Consumer Reports (CR) as Maxwell’s famous equations are to the field of electrical engineering.

Rerecich is the driving force behind the teams that evaluate consumer products from around the world. “The real thing that gives you pride is knowing we’re helping consumers and people care about what we do,” Rerecich says.

“We’re generating more than just test scores […] we’re making companies compete against each other to improve their products. We’re making products better for people.”

It’s plain and simple. Her enthusiasm is palpable when she describes the art and science of CR’s testing process. Appliances, home and outdoor products, and consumer electronics — ask her team anything, really. They test it all. Rerecich is as thoughtful and meticulous when explaining technical processes as she is when testing TVs or leading a team. I interviewed her to learn more about her vision for product testing at Consumer Reports — the nitty-gritty details of the process and her ideas on future expansion.

SN: What are the most interesting products to test?


MR: I find the smartphones and TVs most interesting because of the advances in technology and the variety of models. You never appreciate the full breadth of what’s on the market until you see 20 TVs next to each other in one room. It really makes an impression on you. But what I like about the non-electronics is that they’re products I may have used as a consumer without any technical knowledge or real expertise, and I can learn about them from the experts on my staff.

SN: In your opinion, what is most challenging about the testing process?


MR: Comparative testing requires testing every product in a category (like multi-cookers or gas grills) the same way, with the same inputs, under the same conditions.

Everything needs to be consistent to let us fairly compare products tested at different times. If the test requires a certain setting, all of the products need to be configured the same way. The test also needs to relate to what consumers want to know about the product and how they actually use it. Defining all of that is the crux of a test.

We develop new tests by examining products’ features and performance, experimenting with test procedures, and then analyzing the results to determine which tests can measure the relevant characteristics and show differences among products.
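[Note: For illustration only, here is a hypothetical Python sketch (not CR’s actual tooling; every name in it is invented) of the “same inputs, same conditions” idea: one fixed protocol applied identically to every model in a category.]

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Protocol:
    """A fixed comparative test: the same settings and conditions for every model."""
    name: str
    settings: dict          # e.g. {"mode": "high", "duration_min": 30}
    ambient_temp_c: float   # lab condition held constant across all runs

def measure(model: str, protocol: Protocol) -> float:
    """Placeholder for an instrumented lab measurement."""
    return 0.0  # a real test would return a reading, e.g. temperature uniformity

def run_category(protocol: Protocol, models: list[str]) -> dict[str, float]:
    """Every model gets the identical protocol, so scores are directly comparable."""
    return {model: measure(model, protocol) for model in models}

if __name__ == "__main__":
    test = Protocol("pressure-cook", {"mode": "high", "duration_min": 30}, ambient_temp_c=23.0)
    print(run_category(test, ["Model A", "Model B", "Model C"]))
```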

SN: What is the Digital Standard and why did it get started?


MR: For years, we’ve tested hardware: lawnmowers, refrigerators, TVs. Over time, more and more products could connect to the internet. Products are shifting from hardware-only to software-based — the Internet of Things (IoT). We have thousands of products coming into our labs every year, and more and more of these are connected devices — I like to say: “When it comes to IoT, we have all the things.” We need to figure out what is important to evaluate with these connected devices.

This is uncharted territory for consumers. Users don’t expect their running shoes to send out their location data or their fridge to send information about their food to the manufacturer.

We saw there was a need to test IoT devices to protect consumers and empower them to make their own purchase decisions with more information like: What kind of data is being collected? Where is my data going? Is it secure? We knew we needed to have a way to investigate and combine new criteria with existing product ratings.

SN: If the Digital Standard had an autobiography, what chapter in the story would we be in now?


MR: Expansion. We’ve laid the groundwork. We’ve implemented Digital Standard testing for several categories. Now it’s a question of expansion in many dimensions. We should be doing ratings across more new and existing product categories. We should increase the amount of investigative testing we do.

SN: It’d be great to hear about product testing possibilities and limitations. What are some examples of features you can (with much certainty) observe to test? On the flip side, what are features you want to test but currently can’t?


MR: There are 3 levels of this:


  • What we can confidently test are concrete things we can see, measure, or verify in a lab. Is the data encrypted? Does a product have an option for multi-factor authentication, and is it enabled by default? (A minimal sketch of this kind of check appears after this list.)
  • A little murkier is the information in a company’s privacy policy. Here we can see what the company is vouching for. What makes it difficult is that these documents are often written in legalese and can be vague. From a legal standpoint, companies want to leave open as many options as possible, even if they have no intention of doing anything malicious. The policy makes certain things allowable because they don’t know what they may want to do with your data in the future. We also don’t know whether the company is actually adhering to the policy or what they are actually doing with user data. The privacy policies and terms of service tell us what they say they are doing.
  • What presents challenges for testing: keeping up with software and firmware updates. These products can change all the time, even daily. A software update could change the privacy policy completely, patch a security vulnerability, or change a user interface. These updates may not be well documented, so it can be difficult to determine whether any particular update is important to check.
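[Note: As a concrete illustration of the first level, here is a minimal hypothetical Python check (not CR’s actual test code; the endpoint name is invented). It asks one of the measurable questions above: does a device’s cloud endpoint accept an encrypted TLS connection, and which protocol and cipher does it negotiate?]

```python
import socket
import ssl

def check_tls(host: str, port: int = 443, timeout: float = 5.0) -> dict:
    """Attempt a TLS handshake with a device's cloud endpoint and report
    the negotiated protocol and cipher: one concrete, measurable check."""
    context = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=timeout) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            return {
                "protocol": tls.version(),   # e.g. "TLSv1.3"
                "cipher": tls.cipher()[0],   # negotiated cipher suite
            }

if __name__ == "__main__":
    # Hypothetical endpoint; substitute the device's real cloud host.
    print(check_tls("api.example-device.com"))
```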

SN: So how do you handle [software and firmware updates] in your testing process?


MR: In an ideal world with infinite bandwidth, we would test everything, every day. If we did that now, we would never test any new products. So, in the meantime, we use various tools that let us see when changes have been made.
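[Note: One hypothetical example of such a change-detection tool (a sketch, not CR’s actual tooling; the file name and URL are invented): periodically fetch a product’s published privacy policy, hash it, and flag when the hash differs from the previous run.]

```python
import hashlib
import json
import urllib.request
from pathlib import Path

STATE = Path("seen_hashes.json")  # hypothetical local state file

def fetch_hash(url: str) -> str:
    """Download a document (e.g. a privacy policy) and return its SHA-256."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return hashlib.sha256(resp.read()).hexdigest()

def check_for_change(name: str, url: str) -> bool:
    """Return True if the document at `url` changed since the last check."""
    seen = json.loads(STATE.read_text()) if STATE.exists() else {}
    current = fetch_hash(url)
    changed = seen.get(name) != current
    seen[name] = current
    STATE.write_text(json.dumps(seen, indent=2))
    return changed

if __name__ == "__main__":
    # Hypothetical URL; substitute the product's real policy page.
    if check_for_change("acme-cam-policy", "https://example.com/privacy"):
        print("Privacy policy changed: flag the product for re-testing.")
```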

What would help more is clear documentation of what was changed — real release notes that are actually complete and reflect what is going on in the update.

When a software release says, “We fixed some bugs” — which ones did you fix? What else was changed? More details would be helpful.

SN: How would you classify the different types of testing that your team does?


MR: [Note: This part of the interview has been condensed into the structured outline below.] There are 3 main types of testing we currently do at Consumer Reports:

  1. Full comparative ratings
  2. Partial ratings (called head-to-head or one-off testing)
  3. Analysis / Investigations

Full comparative ratings:

  • Definition: This is Consumer Reports’ foundational technical testing, comparing standard variables across “the entire set” of products in a category.
  • Key question: How do product models compare? Which are better/worse? Which one should I buy?
  • Value to consumer: Help a consumer know which one to buy given many options.
  • Consumer Reports examples: Smart TVs, Connected Cameras

Partial ratings:

  • Definition: Testing that focuses on one (or a few) features. It is a targeted comparison among some products (not necessarily all); a subset of technical ratings.
  • Key question: How do these products compare on certain aspects or features? (Ex. privacy, security, camera functionality) How does this new feature perform?
  • Value to consumer: Help a consumer know which one to buy/use given a specific aspect/feature, or if a particular feature is worthwhile.
  • Consumer Reports example: Peer-to-peer payment applications

Analysis / Investigations:

  • Definition: This process analyzes a category through a particular lens (UI review, privacy policies, security rules, etc.). The work can take the form of an investigative journalism article.
  • Key question: This process stems from a more specific question, for example, “How well does X app handle your data?” There may be a clue or a leading reason the team wants to start an investigation.
  • Value to consumer: To expose a problem or issue and recommend (to companies or manufacturers) how this could be fixed.
  • Consumer Reports examples: Videoconferencing systems, reproductive health applications

SN: How are you thinking about Consumer Reports’ testing processes going into the future?


MR: We need to optimize and automate some of our testing so that we can keep adding Digital Standard testing to new categories while still adding new models to the categories we’ve already incorporated it into. Having a strategy to scale up and handle software updates is key for us. In an ideal state, privacy and security testing and scoring would be built into all of our ratings. We’re prioritizing product categories with more obvious opportunities for vulnerabilities, such as those with cameras or microphones.

The more products and categories we can look at, the more we can have an impact.

SN: What do you enjoy most about product testing?


MR: It’s several things. The opportunity — we see so many cool products. We test 200 TVs a year, and not everyone gets a chance to see that many products. But the heart of what I like is the people I work with. I work with experts in so many things: there’s a person with a chemistry background who knows about paint materials, or a person who knows what to consider when you want to install solar panels on the roof of a house. And the manufacturers really care about what we say; many times we can influence them to make better products for consumers.
