TL;DR
Expect an expanded workforce dedicated to AI safety, big steps toward what’s known as “general” intelligence, and more of the same (i.e. “good” and “bad” applications of AI).

First, a small note on a massive idea

Unlike other emerging technologies, AI never seems to go away. It’s perpetually talked about, studied, revered, and feared.

It’s going to give robots sentience! And they’re going to take over the human race! And enslave us! And fold our laundry!

We’ve been obsessed with AI since the mid 20th century, philosophizing about how it will alter society by replacing jobs, girlfriends, and — dare I say it — human intelligence. Will we be better off? Or just WALL-E chair people?


As you can see by this ultra-scientific Google Trends chart, AI interest is growing. Maybe it’s amplified by sensational stories of backflipping robots or Stephen Hawking quotes or Jeopardy champions, but unlike other emerging tech trends (looking at you Bitcoin), there are few peaks and valleys. We’re consistently interested.

[Google Trends chart: search interest in “artificial intelligence” over time]
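If you want to poke at the same data yourself, here’s a minimal sketch using pytrends, an unofficial, third-party Python wrapper around Google Trends (my assumption, not something the original chart relied on). It pulls relative interest over time for “artificial intelligence” and, for contrast, “bitcoin”:

```python
# Minimal sketch: pull Google Trends interest-over-time data with the
# unofficial pytrends package (pip install pytrends). Values are relative
# (0-100) and will differ from the 2019 screenshot as Trends data shifts.
from pytrends.request import TrendReq

pytrends = TrendReq(hl="en-US", tz=360)
pytrends.build_payload(
    ["artificial intelligence", "bitcoin"],  # terms to compare
    timeframe="2014-01-01 2019-01-01",       # a five-year window ending in 2019
)

interest = pytrends.interest_over_time()  # returns a pandas DataFrame
print(interest.tail())
```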

So why is it rising now? Is 2019 the year when technology reaches a critical mass and ushers in a new era of machine autonomy? After all, 2018 saw a 73 percent increase in enterprise companies producing papers on AI, and nearly half of all organizations (large and small) implemented some form of AI into their operations.

In short: no. Duh. You all saw this coming.

In 2019, we’re going to make some serious progress toward achieving general intelligence (think machines communicating, learning, and even acting like humans), and toward creating more widely used applications of narrow intelligence (bots that are as good as, if not better than, humans at one thing: think playing chess or operating a car). More on general and narrow intelligence shortly.


Regardless of this uncertain future — one where bots help, hurt, and/or surpass humans — advances in AI will happen in your lifetime.

So, what do you need to know for 2019? I’m so glad you asked!

2019 as the first step toward 2030

Before you scoff at this amount of time as “forever away”, consider where you’ll be in 11 years. Are you married? How many kids do you have? Where are you working?

Sufficiently freaked out? Excellent. Onward!

The Pew Research Center did something amazing recently. They interviewed nearly 1,000 tech pioneers, innovators, developers, business and policy leaders, researchers and activists about AI and what it means for humanity over the next decade.

Note: many of these experts are assuming that by 2030, we’ve achieved artificial general intelligence, or something very close to it. There’s the chance that we’re not even close, and all of this is sensationalized speculation.

Okay, look into my crystal ball.

In short, their fears about 2030 are two parts predictable and eighteen parts existential. Here are the high-level highlights (or lowlights, maybe):

  • We’ll get dumber: Humans will sacrifice agency, creativity, and intelligence as AI tools continue to make decisions for us (think cars switching from stick to automatic… and soon to driverless).
  • We’ll lose power: AIs employed by profit-hungry companies are incentivized to leverage your information for cash, and power-hungry governments are incentivized to surveil. The more control we relinquish to AI, the more we give to Google and the feds.
  • We’ll lose our jobs: There will be populist uprisings because code will replace humans in the workplace, which means job loss, which means widening economic divide, which means social upheaval. Yes, there’s an argument that AI will create new jobs (e.g. farmers who are also data scientists, or combination roboticist surgeons, or software engineers that train machines to do their own machine learning), but the divide will be significant nonetheless.
  • We’ll die: Yep, that’s right! Greater autonomy to military-grade weapons, misinformation, and propaganda means mayhem. Always end on a positive note!

So how do we mitigate this future? Again, predictable answers, but critical nonetheless.

  • Design for decentralization and empathy: If all parties involved in the development of AI — that being tech companies, software developers, and the public sector — can establish a framework “imbued with empathy,” inclusivity, and established ethical standards, then we’re good! …assuming we ignore the issue of “what is inclusive” and “what is ethical” and OH MY GOD the amount of regulation that’d get in the way of innovation. I digress.
  • Get ’em while they’re young: As one professor responded, “We cannot expect our AI systems to be ethical on our behalf.” Just as we must teach AI, we must teach students to teach AI. Harvard (of course it’s Harvard) is taking philosophy postdocs and sticking them in computer science classes. They teach students how to weigh ethical considerations before writing a single line of code. You can see similar themes at schools like MIT, Stanford, Columbia, and Vanderbilt. It’s crucial, some argue, that we teach AI ethics across disciplines and across skill levels (i.e. not just ~elite~ universities).
  • Compete with the robots: This one’s a doozy. Because we can’t stop AI tools and robotic creations from getting smarter, let’s uproot economic and political systems to incentivize expanding humans’ capabilities, thus keeping humans “relevant” in the face of programmed intelligence. In other words, in the not-so-distant future, we’ll all work to make humans smarter. Politicians will advocate for human intelligence, and compete to come up with the best platforms and regulation to make us smarter. Business-people will create products and services that aim only to make us smarter. Non-profit organizations will receive grants and donations based on how they make us smarter. We’ll stop saying “good girl” and start saying “smart girl” to our kids.

What could realistically happen this year?

Increased workforce dedicated to AI safety
Fun fact: there are only an estimated 50 people worldwide working full time on AI safety. That’s it. They work for organizations like the Future of Humanity Institute, the Machine Intelligence Research Institute, OpenAI, and DeepMind.

Expect this number to vastly increase. It’s a moral imperative that we address the above concerns about AI early, and dedicate more humans to, you know, preventing mayhem.

2019 might just be the year organizations dedicated to policy planning or public-private partnerships emerge to bridge customers, government stakeholders, developers, and tech giants, and create alignment on ethical AI development practices.

Imagine that: Jeff Bezos, Ajit Pai, a full-stack software engineer, and your mother. Just talking shop.

As more cash is poured into developing AI and the US and China continue to compete for dominance (we’re losing, btw), ethical concerns will become more apparent (reminder: 2018 saw the first pedestrian death caused by a self-driving car). Regulators will intervene, people will raise their voices, and AI companies will feel pressure to expand their ethics and safety efforts. Or maybe I’m just being optimistic.

Breakthroughs in achieving “super-intelligence”
Okay, you caught me. We’re nowhere near robots surpassing us as a species. Yes, they can buy stocks and provide customer service more efficiently than the average human, but they can’t do everything the human mind is capable of, and then some (yet).

That said, we’re getting closer to artificial general intelligence. And 2019 will likely see another leap toward that reality.

So what exactly is general AI? We’ve glossed over this already, but it’s worth doing a deeper dive. There are three levels to AI:

First, you’ve got artificial narrow intelligence: systems that are smarter than humans at specific tasks, like playing chess or generating results on a search engine.

Then there’s artificial general intelligence, which can think as well as humans and make decisions. As one of my colleagues pointed out, this is where people tend to anthropomorphize AIs (e.g. “I, Robot”, “Ex Machina”, Steven Spielberg’s “A.I.”, etc.). They aren’t necessarily walking and talking like humans, but they can learn like humans. Let’s say you have a robot as a college classmate that can learn the same way you do, and maybe even joins a frat because it understands social hierarchies or peer pressure (s/o Robot House). That’s general AI.

Finally, there’s artificial super-intelligence. This is what we’re aspiring toward and fearful of. This is where AI surpasses human intelligence. Humans no longer need college because any task that required education is done by robots, and we won’t have to learn new tasks as society advances because robots learn more efficiently than we do.

For AIs to “think” like humans, or achieve artificial general intelligence, they need a lot of computational power: more than we have available, at least for now. Basically, if these machines can process data as quickly as or faster than the human brain, we can achieve general intelligence. More computational power means more processable data per second.

So how do we, humans, currently stack up?

The computational equivalent of human brain power is predicted to be somewhere between 10 petaFLOPS and 1,000 petaFLOPS, or between 10 million billion and 1 billion billion calculations per second. That’s right. Right now, your brain is calculating like f***ing mad.

Supercomputers are already operating in the 200 petaFLOPS range, but the US and China are currently racing to achieve what’s known as “exascale computing”: a computer capable of that top-line mark of 1 billion billion calculations per second. That’s 1,000 petaFLOPS, which equals 1 exaFLOP (hence the “exa” in exascale computing). It’s predicted that by the end of 2019, we’ll triple today’s number and hit 600 petaFLOPS.
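To put those figures side by side, here’s a back-of-the-envelope sketch in Python. It only restates the estimates quoted above (the brain-equivalence range, the roughly 200 petaFLOPS supercomputer mark, and the 600 petaFLOPS forecast); none of these numbers are settled facts:

```python
# Back-of-the-envelope comparison of the (rough, contested) estimates above.
PETA = 10**15  # 1 petaFLOPS = 10^15 calculations per second
EXA = 10**18   # 1 exaFLOP   = 10^18 calculations per second = 1,000 petaFLOPS

brain_low = 10 * PETA        # low-end estimate of brain-equivalent compute
brain_high = 1000 * PETA     # high-end estimate (the exascale mark)
supercomputer = 200 * PETA   # roughly today's fastest machines
forecast_2019 = 600 * PETA   # the end-of-2019 projection cited above

print(f"Fastest today vs. low brain estimate:  {supercomputer / brain_low:.0f}x")
print(f"Fastest today vs. high brain estimate: {supercomputer / brain_high:.1f}x")
print(f"2019 forecast as a share of exascale:  {forecast_2019 / EXA:.0%}")
```

Run it and you get 20x the low-end brain estimate, 0.2x the high-end one, and 60 percent of the way to exascale, which is the whole “close, but not there yet” story in three lines.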


So will we have robot companions that we mistake for humans in 2019? Almost definitely not. But in the R&D space, we’ll start to close in on the raw computing power of the human brain.

Hold on to your hats.

More of the same
There’s good news and bad news, both of which come from the aforementioned Pew survey.

The good news, according to one scientist at Carnegie Mellon: “Life will definitely be better as AI extends lifetimes, from health apps that intelligently ‘nudge’ us to health, to warnings about impending heart/stroke events, to automated health care for the underserved (remote) and those who need extended care (elder care).”

The bad news, according to the founder of an AI research firm: “[We’ll] adopt the logic of our emerging technologies — instant response, isolation behind screens, endless comparison of self-worth, fake self-presentation — without thinking or responding smartly.”

2019 will be telling. Do we, as a society, start to take the development of AI more seriously, and align on an ethical approach to development (in other words, the one that saves and augments human lives)? Or do we continue into the black box, where we succumb to automated services and all the social, economic, and psychological changes that come with them?

Only time will tell. Here’s to hoping our future robot overlords read this and take pity on me.

As always, if you have any corrections, comments, questions, or just want to keep the conversation going, HMU: js@isl.co.

Special thanks to software dev extraordinaire and AI enthusiast Alex Barbato.

Josh Strupp is a marketing director at ISL. Here’s his website.