New Data Privacy Proposal Points the Way Toward Possible Compromise Between Tech Cos and Users

The News: A sweeping new data privacy proposal could point the way toward a possible compromise between tech companies and users. The Age Appropriate Design Code is a new British online data privacy proposal aimed at increasing protections for children online. Coming on the heels of the otherwise comprehensive 2018 Data Protection Act, the new proposal outlines rules that specifically address online safety for minors. Per the New York Times:

“The rules will require social networks, gaming apps, connected toys and other online services that are likely to be used by people under 18 to overhaul how they handle those users’ personal information. In particular, they will require platforms like YouTube and Instagram to turn on the highest possible privacy settings by default for minors, and turn off by default data-mining practices like targeted advertising and location tracking for children in the country.”

A similar set of guidelines, dubbed COPPA (the Children’s Online Privacy Protection Act of 1998), already exists in the United States, but it only applies to children under 13. The Age Appropriate Design Code is scheduled to go before the British Parliament for a vote sometime this year, and to be applied soon after.

Analyst Take: It’s hard to argue against the fact that both the Age Appropriate Design Code and the 2018 Data Protection Act are important keys to protecting minors and keeping them safe online. That said, the chasm between what technology companies think is ‘the right thing to do’ and what users are comfortable with when it comes to personal data privacy, for themselves and for their children, is in most instances a deep one. Here is a look at what I believe are the most important elements of this discussion, especially as they relate to the new data privacy proposal:

Protecting the rights and safety of children online is important, but not without issues

Some tech industry lobbyists have argued that while the objective of the proposal is noble, the rules themselves, or the ways in which they would require technology platforms to comply with them, may run afoul of that intent, and may even cause more harm than they aim to correct. Age-gating, for instance, could unnecessarily limit the types of services that a website or platform provides. Small companies may also no longer be able to provide or direct effective advertising services to young adults — from introducing them to content that specifically caters to their tastes, to notifying them of products and offers that would likely be of value to them. An argument can also be made that in order to ensure compliance with age verification requirements, platforms may have to collect more data from would-be users than they would have collected absent these rules.

Expanding the same protections to other vulnerable online users is possible — why not do it?

While tech companies and regulators work out the data privacy details, what caught my eye about this proposal is twofold:

First, it hints at a possible expansion of age-based data privacy and protection rules: where such protections already exist, they apply only to children under 13, whereas this code would cover everyone under 18. This upward shift in age inclusion opens the door to the next logical question: If tech companies can do this for 13- and 18-year-olds, why can’t they also do it for 25-year-olds, 45-year-olds, and 75-year-olds? Or rather, why must basic data protection rules be predicated on age at all? Why not just make them universal? Is there a compelling reason why adult users of technology platforms deserve to be put at greater risk of stalking, harassment, hacking, doxing, and violence than their younger counterparts?

Second, many of the tools and practices to be put in place to protect children’s data online could presumably be used to protect adults as well. For instance, one tenet of the new code focuses on tech companies “thinking about the risks to children that would arise from collecting and processing of their personal data” — a premise that works equally well for adults. Following the same logic, companies could also be required to consider the risks to women, vulnerable communities, users living with disabilities, and the elderly that could arise from collecting and processing their personal data. If companies understand that children must be protected online because they are vulnerable to a plethora of threats, shouldn’t these same companies also understand that other users are vulnerable as well? And if so, is there a rational reason why some vulnerable users should be protected but not others? From a regulatory or legislative standpoint, could it not be argued that it is in the public interest for these same companies to extend privacy and online safety protections to vulnerable users besides children?

Taking that logic a step further, could it not be argued that since all users are inherently vulnerable to data theft, privacy abuses, stalking, fraud, harassment, and a plethora of unpleasantness and genuine threats, these same tech platforms have a responsibility to protect all users as best they can?

Expanding these protections to all users by default only makes sense

The Code’s 15 governing principles can almost all be expanded to adult users: “Best interests of the child” can just as easily become “best interests of the user.” The “data protection impact assessment” can also easily be applied to users of all ages. The requirement that data collection features be set to the highest privacy settings by default doesn’t have to be limited to children either. With regard to data sharing, “Do not disclose children’s data unless you can demonstrate a compelling reason to do so, taking account of the best interests of the child” can easily become “Do not disclose users’ data unless you can demonstrate a compelling reason to do so, taking account of the best interests of the user.” One can go down the list of all 15 principles and make the same observation each time.

There is a template here for universal data privacy and online safety

This is the crux of what I see as a possible direction for data privacy regulations aiming to help tech companies and their users find a healthier balance than the one currently being debated around the world. Based on this latest effort, it would essentially look like this:

  1. A shift from an opt-out data privacy model (in which the default could be maximum data collection and the user must actively opt out of it) to an opt-in data privacy model (in which the default is minimum data collection, and the user has to opt in to more data collection in order to benefit from more data-dependent services); a minimal sketch of this model in code follows this list.
  2. An emphasis on putting the best interest of the user front and center of all data collection and processing decisions, not just as a matter of culture but as a matter of law.
  3. Controls, some on the user side and some on the platform side, that allow users to determine what types of content they are comfortable receiving or being exposed to, and in some cases should even be protected from.
  4. A deliberate restoration of trust in the platform-user relationship (further reinforced by policies of transparency and clear disclosure).
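
To make the first point concrete, here is a minimal sketch of what an opt-in, private-by-default settings model might look like in code. This is purely illustrative: the TypeScript types and function names below are my own assumptions, not language from the Code or any platform’s actual API.

```typescript
// Hypothetical privacy settings for a platform account. Each field is a
// data-collection feature of the kind the Age Appropriate Design Code would
// turn off by default for minors; the opt-in model extends that default to
// all users. Names here are illustrative, not a real API.
interface PrivacySettings {
  targetedAdvertising: boolean;
  locationTracking: boolean;
  dataSharingWithPartners: boolean;
  behavioralProfiling: boolean;
}

// Opt-in model: minimum data collection is the starting state for everyone.
const DEFAULT_SETTINGS: PrivacySettings = {
  targetedAdvertising: false,
  locationTracking: false,
  dataSharingWithPartners: false,
  behavioralProfiling: false,
};

// The only way a feature turns on is an explicit user action; the platform
// never enables collection on the user's behalf.
function optIn(
  settings: PrivacySettings,
  feature: keyof PrivacySettings,
): PrivacySettings {
  return { ...settings, [feature]: true };
}

// Example: a user who wants location-based services, and nothing else.
const userSettings = optIn(DEFAULT_SETTINGS, "locationTracking");
console.log(userSettings.locationTracking);    // true (explicitly opted in)
console.log(userSettings.targetedAdvertising); // false (still the default)
```

The design point worth noting: under this model, the most privacy-protective state requires no effort from the user at all. Effort is only ever required to share more data, never to share less.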

These principles, along with the tools and data privacy practices that would enable their execution, can be expanded to all age groups. And because users, and the governments they look to for protection from exploitation, fraud, loss of privacy, and other threats, have been actively seeking to address the dual issue of digital privacy and digital security, I see in this proposal a template for what could become a model for universal data privacy and online safety requirements. Ideally, technology platforms would adopt this approach on their own, but if they cannot, or will not, legislatures and regulatory bodies around the world may begin to feel growing pressure to step in and compel them to do so.

Futurum Research provides industry research and analysis. These columns are for educational purposes only and should not be considered in any way investment advice.

Other insights from the Futurum Research team:

IoT Cybersecurity Regulations Kick In With the Start of 2020

Facebook Doesn’t Really Care About Your Privacy — and This is Why It Hurts Libra

Why CMOs Need to Be Involved in Privacy Policy Creation

Author Information

Olivier Blanchard has extensive experience managing product innovation, technology adoption, digital integration, and change management for industry leaders in the B2B, B2C, and B2G sectors, and the IT channel. His passion is helping decision-makers and their organizations understand the many risks and opportunities of technology-driven disruption, and leverage innovation to build stronger, better, more competitive companies.
