I remember getting my first Blackberry. It was more like a glorified pager, but I could get my email and use BBM. Freed from my desktop machine, I could go anywhere and still be reached. It was thrilling.

Flash forward to today: “smartphone neck” is an actual thing, and some jurisdictions are considering legislation on texting while walking. As with any great breakthrough, there are often unexpected downsides.

Speaking of breakthroughs, the most recent issue of the MIT Technology Review featured its annual list of 10 Breakthrough Technologies, each one “a technology, or collection of technologies, that will have a profound effect on our lives,” as the piece puts it.

We at DX Institute are suckers for a listicle of breakthrough technologies. After all, we’re unabashed tech optimists. Technology offers opportunities across a range of industries, particularly for those who act early to identify both the openings and the threats while staying on top of the evolving landscape.

Today, I want to dig in for a moment on this idea of downsides. Any new technology — particularly disruptive ones — brings the risk of unintended consequences, a topic that came up during my panel at the Globe Forum last month.

The technologies on MIT’s list are no exception. Well, except babel-fish earbuds. We tried, but we really can’t think of a downside to improving communication across languages. Especially if it comes without the need to stick a fish in your ear.

With that in mind, here’s a closer look at the potential unintended consequences of three of the technologies featured in the list.

1) Sensing City

Let’s start with one from our own backyard: Alphabet’s Sidewalk Labs has partnered with Waterfront Toronto to develop a new digital-first neighbourhood on a portion of the Toronto waterfront to be known as Quayside.

Sidewalk’s vision for integrating the physical environment with digital technology relies heavily on a digital platform “distributed throughout the neighbourhood via sensors and other connected technology.” As MIT notes, this will allow “for all vehicles to be autonomous and shared. Robots will roam underground doing menial chores like delivering the mail.” Sounds like a digital utopia, amirite?
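To make that concrete, here’s a toy sketch of what a single sensor’s contribution to such a platform might look like. The endpoint, schema and sensor ID are all made up for illustration; Sidewalk has not published an API.

```python
import json
import time
import urllib.request

# One reading from a hypothetical pedestrian counter in the neighbourhood.
reading = {
    "sensor_id": "quayside-ped-042",   # made-up identifier
    "pedestrians_per_min": 31,
    "timestamp": time.time(),
}

# Post it to a hypothetical ingestion endpoint on the digital platform.
req = urllib.request.Request(
    "https://platform.example.org/ingest",
    data=json.dumps(reading).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
urllib.request.urlopen(req)
```

Multiply that one payload by thousands of sensors reporting around the clock, and the scale of what the platform would know about daily life in the neighbourhood becomes clear.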

But the plan is not without potential unintended consequences — as critics and skeptics point out.

The Torontoist and open-government advocates like Bianca Wylie have raised a number of questions they feel should be addressed before the project progresses too far, including privacy, data governance, the public engagement process, inclusivity and funding.

One of the main concerns is how the data from the digital platform will be used.

It’s not just about privacy, though that’s a big deal (I’m looking at you, Facebook). It’s also about whether Sidewalk’s fellow Alphabet companies could get preferential access. This could limit the ability of local tech companies to benefit from the value created, and it would represent a missed opportunity for the government. As Josh O’Kane wrote in the Globe and Mail (subscription required), “as the economy is becoming more and more data-driven, governments need to consider the value of that data when signing contracts with the private sector.”

At a recent community engagement event, Rit Aggarwala, Sidewalk’s Chief Policy Officer, said, “there’s no question that autonomous vehicles will be one of the great innovations that we will see. The question is how do we make this a positive contribution?”

That’s a relevant question for this project as a whole, and for smart cities more broadly. A city with an IoT nervous system will generate immensely valuable data. The challenge lies in ensuring that data is recognized as a public good and that its value is shared accordingly.

2) AI for Everybody

With big players like Google, Microsoft and Amazon offering cloud-based AI, the tech is becoming more accessible to a wide range of industries and smaller companies. While it’s increasingly prevalent in our everyday lives, that doesn’t mean all the kinks have been worked out just yet.
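To give a sense of how low the barrier has become, here’s a minimal sketch of calling one of those cloud services: image labelling with Google’s Cloud Vision Python client. It assumes the google-cloud-vision package is installed, credentials are configured, and a local image exists at the made-up path shown.

```python
# pip install google-cloud-vision
# Assumes GOOGLE_APPLICATION_CREDENTIALS points to a service-account key.
from google.cloud import vision

client = vision.ImageAnnotatorClient()

with open("storefront.jpg", "rb") as f:  # hypothetical local image
    image = vision.Image(content=f.read())

# One API call returns ranked labels; no model to train, tune or host.
response = client.label_detection(image=image)
for label in response.label_annotations:
    print(f"{label.description}: {label.score:.2f}")
```

A few lines like these put state-of-the-art computer vision within reach of any small company, which is precisely why the ethical questions below matter to more than just the big players.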

We don’t have to look far to find examples of how AI can go wrong.

Ryan Holmes, founder of Hootsuite, recently wrote of the need for AI developers and the organizations that employ them to find an “ethical underpinning” for the AI they produce. Holmes is right to highlight the need for “an ethical framework to inform how AI converts data into decisions — in a way that’s fair, sustainable and representative of the best of humanity, not the worst.”

If you don’t think this is important, consider recent news from DARPA, which announced a project that “aims to develop software that would gauge an adversary’s response to stimuli and then discern that adversary’s intentions and give commanders intel on how to respond.” Taken too far, this could give the phrase “I was only following orders” a whole new meaning.

3) Genetic Fortune-Telling

“One day, babies will get DNA report cards at birth. These reports will offer predictions about their chances of suffering a heart attack or cancer, of getting hooked on tobacco, and of being smarter than average,” according to the article.

Sounds great, as long as the individual retains full control of that data. I mean, has anyone else seen Gattaca?

It’s interesting to note that more than 5 million people have sent samples to 23andMe for DNA testing, which leaves me wondering whether they have fully considered what this means.

The Financial Times reported that “permission from customers to keep and study their valuable DNA has contributed to a formidable bank of genetic information, which 23andMe charges pharmaceutical companies, including Pfizer and Roche, to access.”

That’s not to suggest that 23andMe is misusing the genetic data it holds; in fact, this large dataset could lead to life-saving discoveries. It does, however, as with the civic data above, highlight how easy it is to underestimate the value of data.

Another source of potential unintended consequences is understanding the implications of that data. The development of polygenic risk scores (oh, you know, the use of genome-wide genotype data to calculate a single variable that measures genetic liability to a disorder or a trait) could be a great tool for furthering a preventative approach to health care. However, as the name implies, a polygenic risk score provides only a risk score: a probability that a given trait or disorder might develop, rather than a definitive diagnosis.
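For the curious, that “single variable” is, at its core, just a weighted sum: count how many copies of each risk-associated variant a genome carries, and weight each count by the variant’s estimated effect size. Here’s a minimal sketch; the SNP IDs, allele counts and effect sizes are entirely made up.

```python
# A polygenic risk score (PRS) is a weighted sum over genetic variants.
# For each SNP: (copies of the risk allele carried: 0, 1 or 2, effect size).
# All values below are invented for illustration.
genotype = {
    "rs0000001": (2, 0.12),
    "rs0000002": (1, -0.05),
    "rs0000003": (0, 0.30),
    "rs0000004": (2, 0.07),
}

# PRS = sum over SNPs of (risk-allele count * effect size)
prs = sum(count * beta for count, beta in genotype.values())
print(f"Polygenic risk score: {prs:.2f}")
```

On its own the number means little; in practice it’s compared against a population distribution to place an individual in a risk percentile, which is exactly why it expresses probability rather than certainty.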

A legitimate concern here is how health care and insurance providers might use information about potential health risks. Will individuals be punished through higher costs, or denied coverage, based on genetic factors beyond their control that point to an illness they might never develop?

Think one of the technologies we didn’t feature is more important to discuss? Let us know in the comments below.
