Technology Cautionary Tales: When “Big Mother” starts to drift into “Big Brother’s” lane – Part 1
by Olivier Blanchard | September 21, 2018

In case you haven’t had the pleasure of attending one of my keynotes or lectures about the future of technology yet, I usually open with my theory that all technologies and technology use cases can be seen through three different lenses: Big Brother (oppressive surveillance and control, as in George Orwell’s 1984), Big Mother (altruistic surveillance and control – imagine Big Brother’s technology falling into the hands of an overbearing parent or caretaker), and Big Butler (technology 100% in the service of individual users).

Examples of Big Brother technology use cases would be government and corporate surveillance and tracking of individuals, their behaviors, and their communications, generally without their permission or knowledge. Examples of Big Mother technology use cases would be companies like Google or Amazon “listening” to users’ physical conversations through devices and apps, analyzing the contents of their digital communications, and monitoring their location, behavioral patterns, and recent searches to help anticipate their needs, recommend beneficial courses of action, and even act on their behalf if a threat is detected. Examples of Big Butler technology use cases would be self-driving vehicles, home security systems unlocked by facial recognition, digital assistants, and pretty much any technology that obeys a user’s orders.

The spectrum of lenses could be expanded, but I like simplicity, so let’s limit it to three archetypes for now. Bonus: Trinities work well in our myth-rich culture, so having three archetypes instead of five or six helps make the concept not only easy to manage but also feel familiar.

One thing to note, however, is that while these archetypes are separate from one another, technology use cases often fall into more than one category at a time. Take Google and Amazon, for instance: One could argue that the totality of user data being collected through searches, emails, website visits, smart speakers, contact lists, and physical location falls simultaneously into the Big Brother and Big Mother categories: part suspicious and covert, part benevolent and aimed at helping. Add digital assistants into the mix, and now even a Big Butler application, by virtue of being connected to the rest of a broader data collection ecosystem, can bleed into the other two.

Whenever companies like Google, Amazon, Facebook, Microsoft, Apple, Samsung, and others build technology ecosystems that collect, track, connect, and analyze data points pertaining to specific individuals, then use that data to tie behavioral models to user profiles, it can become difficult to completely decouple interlaced Big Brother, Big Mother, and Big Butler use cases. The broader the applications, the more likely the overlap. Therefore, many popular technologies today – from your phone and the apps it runs, to tomorrow’s smart homes – can be seen through all three lenses simultaneously, depending on what aspect of these technologies you happen to care to focus on.

I don’t bring this up to frighten you. I bring it up because it is important for consumers and for developers to give these three archetypes some thought before designing a product or choosing to use it. What am I really building here? is as important a question as what am I really buying here? We all play a part in steering technologies away from Big Brother use cases and towards Big Butler use cases (in my opinion, the preferable model). Equally important, we must be vigilant when it comes to the very thin line separating Big Mother and Big Brother. The only real difference between the two is intent, and since intent can flip on users like a switch, it behooves us all to be particularly vigilant with regard to Big Mother use cases.

Case in point: The BBC reports that “one of the largest life insurance providers in North America [John Hancock] will no longer offer policies that do not include digital fitness tracking. [The company] will now sell only “interactive” policies that collect health data through wearable devices such as a smartwatch. Policyholders can earn discounts and rewards such as gift cards for hitting exercise targets.”

At first glance, this looks like what it is: an insurance company deciding to modernize its model, digitally transform, and set itself apart from the competition by using popular health-tracking technology and gamification to improve outcomes, reward certain customer behaviors, and steer customers towards healthier lifestyles. Before we get into today’s emerging gray areas, let’s go back to a time when there was no hint of gray anywhere in sight:

A year ago, John Hancock announced that it was “expanding its Apple Watch program to all new and existing Vitality life insurance policyholders.” As the company’s press release stated: “The program gives customers the opportunity to earn the Apple Watch Series 3 for an initial fee of $25 through regular exercise. Now, in addition to permanent life insurance buyers, the 4 million Americans who buy lower-cost term life insurance every year will have the opportunity to protect their financial future for less than $15/month, while earning an Apple Watch Series 3.” Great idea. The release also gave us a glimpse into how the program had evolved since its launch in 2015:

Through the first-of-its-kind John Hancock Vitality program, launched in 2015, customers can earn rewards and premium savings of up to 15 percent for the things they do to stay healthy, like exercising, walking and eating right. Rewards include $600 in annual savings on healthy food purchases as well as gift cards and discounts on entertainment, travel and wellness-related purchases. In 2016, the company introduced customers to the opportunity to select an Apple Watch (for an initial $25 fee) to record their activities and earn points that reduce or eliminate their monthly payments for their watch over a two-year period.

Since then, John Hancock Vitality members have embraced the Apple Watch:

  • using it 6 out of 7 days a week

  • increasing their step count by an average of 2,000 steps per day after enrolling in the program

  • showing a 20 percent increase in weekly physical activities after getting their Apple Watch

  • paying $0 each month (i.e. fully funding their watch) by achieving their monthly activity goals, which about half of them do

Outstanding. A perfect example of what a Big Mother/Big Butler use case can look like.

But just 11 months later, as the success of the program leads to its integration into all of John Hancock’s life insurance policies, privacy and consumer rights watchdog groups are raising red flags. At one end of the alarmist spectrum is Matt Stoller, of the Open Markets Institute, with this warning: “Naturally the American dystopian surveillance state will combine insurance with fat-shaming. Welcome to hell.” At the other, some warn that insurance companies (not necessarily John Hancock) could use this type of data collection to show preferential treatment towards their most potentially profitable customers and penalize their least profitable ones. While insurance companies will be quick to reply that their industry is heavily regulated, and that they must demonstrate, “in actuarial terms,” why a policy is to be altered or rates increased, two obvious caveats come to mind:

1) Regulatory regimes can change. Consumer protections that are in place today might not exist five years from now.

2) Rewarding some customers with discounts and free services is a way to change the cost of their plan without making changes to the policy itself. In other words, it could be construed as a way to work around existing regulations. In that type of scenario, less profitable customers would be required to pay full price (more money) for plans that more profitable customers pay discounted prices (less money) for. Even if the mechanism takes the form of rebates, and the premium’s stated price itself never changes, the end result is that some customers pay x while others pay (x – y).
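The rebate arithmetic above can be sketched in a few lines of Python. This is a hypothetical illustration only: the dollar amounts and the notion of a flat rewards rebate are invented for the example, not drawn from John Hancock’s actual pricing.

```python
# Hypothetical sketch: rewards lower the effective price paid
# while the stated premium on the policy never changes.
# All numbers below are invented for illustration.

def effective_premium(stated_price: float, rebates: float) -> float:
    """The policy still lists `stated_price`; rebates reduce what is paid."""
    return stated_price - rebates

x = 50.00  # stated monthly premium, identical for every policyholder
y = 15.00  # rewards earned by a customer who hits activity targets

compliant_pays = effective_premium(x, y)      # pays (x - y)
non_compliant_pays = effective_premium(x, 0)  # pays full x

print(compliant_pays, non_compliant_pays)  # 35.0 50.0
```

The point of the sketch is that the regulated quantity (the stated premium `x`) is untouched; only the rebate channel differentiates what customers actually pay.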

This isn’t a fat-shaming vehicle, by the way. Quite the contrary. In a non-data-driven model, outward appearance (like being fat or skinny) would be more likely to create a bias. If you substitute data for visual cues, however, you can minimize your reliance on outdated biases like fat = unhealthy, and prioritize more relevant and customer-specific data points like blood pressure, activity level, stress levels, VO2 Max, heart rate, and more. That’s a good thing. I can tell you that, as an athlete, I routinely bump into incredibly healthy and fit fellow athletes who would qualify as “fat” to an untrained eye, but would likely earn every discount offered by an insurer like John Hancock. Conversely, I know a great deal of people who don’t look fat, and may even be described as “healthy looking” by casual onlookers, but whose lifestyles put them at risk for a hellscape of medical problems somewhere down the road. So measuring the right things isn’t only a net positive; it may also be a way to eliminate fat-shaming from the process altogether. Obesity doesn’t always denote ill-health.

The fear among some analysts, however, is that as this “data = better outcomes” mode of thinking grows more prevalent among insurers, we may reach a point where all insurance providers will require that their customers track and report their fitness habits in order to qualify for discounts – discounts without which insurance premiums may no longer be affordable for the average person. In other words, insurers could decide to adjust their prices upward to pressure customers to comply with their data collection and monitoring schemes. In this type of model, consumers would find themselves faced with an unfair choice: Either submit to mandatory data tracking, monitoring, and reporting, or pay usurious prices for insurance premiums that respect consumers’ expectations of data privacy.

This issue becomes particularly thorny the moment you realize that DNA is also a category of data that insurers are likely to want to get their hands on. I can’t think of a category of data more personal and private than DNA. It shouldn’t be shared lightly, and no one should be forced to share it unless compelled to by a court order. So bear in mind that once you open the door to insurance companies being able to effectively mandate access to your heart rate, blood glucose levels and activity levels in order to deliver better outcomes, mandating access to your DNA for the exact same stated reason isn’t likely to be too far behind.

Are we here yet? No. Is this a slippery slope argument? Absolutely. Do I like slippery slope arguments? No. But we need to have this conversation because every single instance of a Big Mother technology use case drifting out of its lane and into a Big Brother technology use case is, at its core, a slippery slope argument come true. Therefore: caution ahead. We should all proceed carefully. “The road to hell,” as the saying goes, “is paved with good intentions.”

For a deeper dive into the topic of data privacy as a human right, my colleague Fred McClimans suggests some further reading. You’re in for a fascinating read.

To be continued…


Image Credit: My Fit Station



About the Author

Olivier Blanchard has extensive experience managing product innovation, technology adoption, digital integration, and change management for industry leaders in the B2B, B2C, B2G sectors, and the IT channel. His passion is helping decision-makers and their organizations understand the many risks and opportunities of technology-driven disruption, and leverage innovation to build stronger, better, more competitive companies.  Read Full Bio.