Viral social media "challenges," memes, and gimmicks have taken over our feeds in recent years. The term "challenge" is used loosely, though, since these viral sensations aren't so much challenging as they are unique ways to spice up your social media presence. But are they also signs of the impending AI apocalypse? Let's take a closer look.

True, these events range from the absurd and borderline dangerous (the "Kiki, do you love me" challenge) to the helpful, such as bringing awareness to an important cause (the ALS Ice Bucket Challenge). But what do they actually mean? And are there serious, even sinister possibilities lurking beneath their innocent exterior?

Enter Facebook

Most recently, the #10YearChallenge (aka the "How Hard Did Aging Hit You" challenge or the "Glow Up" challenge) took social media (specifically Facebook) by storm when users started posting side-by-side images of themselves ten years apart.

This challenge, like many others before it, was met with a variety of reactions, from the standard "Awww! That's cute, I wanna join" to the *eye roll* "Ugh! Millennials."

One reaction, though, struck a more prominent nerve, and it wouldn't occur to most people unless someone brought it up: Is this dangerous? Is there more here than meets the eye?

Case in point: Kate O'Neill, a writer for Wired.com, posted the tweet below, which opened a can of worms in the discussion of AI/ML and safety.

Me 10 years ago: probably would have played along with the profile picture aging meme going around on Facebook and Instagram

Me now: ponders how all this data could be mined to train facial recognition algorithms on age progression and age recognition

— Kate O'Neill (@kateo) January 12, 2019

Kate O'Neill expanded on this in a subsequent article for Wired.

The tweet and article above went viral. Soon people were wondering: Is this really just a harmless bit of social media fun, or is it something much worse? Are we willfully handing important facial recognition data over to big tech? What are the possible ramifications? And will the AI apocalypse be televised?

Pump the brakes.

Let's Unpack This

The author makes some great points in the article: she explains the logistics of data collection and facial recognition, and lays out best-case to worst-case scenarios.

The primary concern, argued by O'Neill and many online users, is that by participating in the challenge, we are doing all the heavy lifting for the algorithms and facial recognition systems. Fair point.

While many of these pictures are already available online, collecting reliable data from them isn't straightforward. Many platforms strip the EXIF metadata (capture date, camera, location) from uploaded images to protect user privacy.
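To make the EXIF point concrete, here's a minimal Python sketch using the Pillow imaging library. The filenames are hypothetical, and a photo re-downloaded from a platform that strips metadata will usually come back empty:

```python
# A minimal sketch of reading EXIF metadata with the Pillow library
# (pip install Pillow). The filenames below are hypothetical.
from PIL import Image
from PIL.ExifTags import TAGS

def print_exif(path: str) -> None:
    exif = Image.open(path).getexif()
    if not exif:
        print(f"{path}: no EXIF data (likely stripped on upload)")
        return
    for tag_id, value in exif.items():
        # Map numeric tag IDs (e.g. 306) to readable names (e.g. DateTime)
        print(f"{TAGS.get(tag_id, tag_id)}: {value}")

print_exif("photo_from_camera.jpg")    # typically shows capture date, camera model
print_exif("photo_from_facebook.jpg")  # typically empty: metadata stripped
```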

In essence, posting your #10YearChallenge photo provides a clean, simple, side-by-side comparison, complete with the labels and context that were missing from previous image uploads.

AI and ML systems can take this new batch of data and use it to learn how human faces age, or to update existing databases of known faces. That can be a great thing. Ultimately, the real concern is who is using this data, and how.
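To see why these posts make such convenient training data, consider what a single labeled sample might look like. The sketch below is purely illustrative, not any platform's actual pipeline, and every name in it is hypothetical:

```python
# Purely illustrative: one #10YearChallenge post is effectively one
# pre-labeled before/after pair for a supervised age-progression model.
# All field and file names are hypothetical.
from dataclasses import dataclass

@dataclass
class AgingSample:
    photo_then: str   # path to the "10 years ago" image
    photo_now: str    # path to the current image
    year_then: int
    year_now: int

    @property
    def age_gap(self) -> int:
        # The label a model would train against
        return self.year_now - self.year_then

sample = AgingSample("me_2009.jpg", "me_2019.jpg", 2009, 2019)
print(sample.age_gap)  # 10
```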

Of course, the most extreme and sinister version of this notion is the idea that we are all unwittingly the complicit, hapless architects of our own demise, i.e., the coming AI apocalypse. Let's look at this in sensible, real-world terms.

Do we want to update a massive database that could be misused?  Of course not.  

Does virtually any form of technology pose a threat of being improperly used? Yes, absolutely.

Are we ushering in the real-world version of Skynet? Well, no. Not exactly. When the AI apocalypse hits, it might be televised (or live-streamed), but it isn't likely to be hastened by a Facebook meme.

The Risk

In the article, Kate O'Neill breaks down some concerns about the facial recognition programs currently being sold by companies like Amazon and Microsoft. Many of these concerns are valid.

The sale of this technology strikes an uneasy chord with many. The uses range from the mundane, such as hyper-targeting ad placements to make each one more relevant, all the way up to age-progression software that helps determine a person's eligibility for insurance policies.

It can also help find kidnapping and human trafficking victims much faster than ever before.

At the risk of sounding too philosophical: fire can heat your home. It can also burn you.

Nevertheless, some feel that prohibiting the development and sale of this technology is the responsible course due to its potential for misuse.

Naturally, this raises the question: Does the potential negative side of this technology outweigh the potential positives in such a way that it should be prohibited?

In our opinion, no. Not yet.

Here's Why

For starters, the companies that own this technology have terms of service their customers must adhere to or have their access revoked. Let's be honest: most of us don't bother reading these agreements, but we've already handed over access to this data anyway.

Second, many things in our lives can be abused or used negatively. Alcohol, for example. Do we ban alcohol just because one person has one too many drinks? No. We tried that, and we basically created (or certainly empowered) the mob.

Third, the industries and areas most likely to make people uncomfortable (e.g., healthcare and insurance) are so heavily regulated that intense scrutiny exists as a natural, self-correcting pushback against wrongdoing.

No, that won't stop the occasional misuse, the same way alcohol is not 100% safe either. But misusing data, like misusing alcohol, is often illegal and punished severely; recent examples include Cambridge Analytica and Facebook itself.

However, fears of impending doom can probably subside for now. When it comes to misusing sensitive consumer data, the result is far more likely to be identity theft and wire fraud than a killer, self-aware piece of software ushering in the AI apocalypse.

These are in no way "good things," but they're a far cry from the end of civilization at the hands of AI.

Your Data, Your Responsibility

If the #10YearChallenge or any other online activity makes you uncomfortable about the data you are generating, that's understandable. Companies have both a vested interest in and an obligation to use your data ethically; however, you must also take steps to protect it.

While many platforms have adopted certain privacy and security measures to protect their users, there are still many ways your data can be accessed.

Can you recall the Farmers Insurance commercial about gaps in insurance coverage? "You may think you're covered for this, but you're actually covered for this."

When it comes to online data protection, there can be a big difference between what people want, what people should have, and perhaps most importantly: what people believe they have.

Ideally, these three elements should be aligned. In reality, they rarely are: a large divide exists between what consumers believe a company is obligated to do (or not do) with their data and what actually occurs.

Human beings thrive on instant gratification. Social media companies use this to their advantage, plain and simple. Believe it or not, their sole purpose for being is to sell your information to companies for advertising. It's not to give you a platform to share photos of your dog, no matter how cute he or she is.

You agree to this dynamic both explicitly, when you accept the terms (without reading them, of course), and tacitly, when you make use of the service.

But Is It Evil?

Not likely. And we probably aren't ushering in the AI apocalypse by using Facebook. Now, could Facebook in theory do a lot of harm with the amount of data they collect? Yeah, sure. They just generally:

  • Don't necessarily stand to benefit from doing so
  • Don't really want to do so
  • Even if they did stand to benefit, that benefit would pale in comparison to the consequences: court proceedings, lawsuits, a plummeting stock price, loss of brand trust, fines and penalties, angry customers; the list goes on

This, among other reasons, is why it is such a big deal when these companies suffer a data breach that exposes sensitive customer information. These companies have a vested interest in garnering your trust, obeying regulatory guidelines, and protecting your information for their own use as opposed to letting it fall into the hands of others.

Now, don't get us wrong. It's true that big tech companies are not always completely up front or transparent about what they're doing with your data. Even if it IS spelled out in the terms of service, they know full well that most of us aren't reading it and wouldn't understand it all if we did. And yes, that's on purpose.

However, their motivations are (boringly) more along the lines of capitalism (selling more ad space more efficiently) than authoritarian techno-fascism, i.e. bringing about the AI apocalypse.

So It's All Good?

If there IS a nefarious side to the #10YearChallenge, it's more likely to be for Facebook's own monetary gain (e.g., a learning algorithm for demographic sorting used to sharpen ad targeting) than building Skynet.

Given the choice between a few million more in ad or sales revenue and launching killer robots, Facebook, Google, Amazon, and the whole lot of them would prefer the former.

However, concern and caution still remain. The idea that large tech companies might go to such extremes and utilize such advanced systems is, in and of itself, disturbing to some. They do control and have access to data that could, in theory, cause harm if the wrong hands were to get their, well, hands on it.

If that worries you (and a certain amount of healthy caution is normal, even prudent), then post, surf, and interact in the online world with a clearheaded understanding that every action you take online carries some degree of risk, as do the technologies you engage with while doing so.

And if your privacy is paramount to you, take the additional step of ensuring your profile's privacy settings are the way you want them.

Where Do We Go From Here?

The next time a #10YearChallenge comes around, you can choose whether or not to participate. If you're worried about what happens to your data, that's normal: resist the urge to chase hundreds of likes and comments, and exercise a little self-control and maturity, because you know you're leaving a digital trail.

Or if you really want to participate, do what I did and post a picture of your dog as a chubby puppy and now as a grey-faced old man.

The internet is nothing if not a sucker for cute dog pictures.
