Let’s talk AR/VR and Apple’s most recent developer conference (which, at the time of writing, is still ongoing). The keynote mentioned a few large updates to how Apple is going to be evolving its AR experiences.

  • ARKit 3:
    To be clear, the changes in ARKit 3 are the big news coming out of WWDC for the AR community. I’m not going to take up your time here reading through the specs (I’ll leave that to CNET), but to sum it up: people can now coexist with AR objects in an iOS AR environment (technically called people occlusion) in a way that enables new, more robust experiences, and ARKit 3 also adds real-time motion capture of people. Even dedicated hardware like the Magic Leap One struggles with some of the new features people will be getting on their modern iOS devices with ARKit 3. (There’s a minimal code sketch after this list.)
    WWDC AR Demo
  • Reality Composer and RealityKit:
    Put simply, these are essentially new ways for developers and creatives to make AR experiences for use in iOS apps. The difference this time around is in how those experiences are created: more “drag and drop” and less code. We might compare this to functionality we see in Android’s ARCore, Snapchat’s Lens Studio, Facebook’s Spark AR Studio, and probably at least a few others.
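
For the developers in the room, here’s a minimal sketch of what turning on the two headline ARKit 3 features looks like in code. It assumes an existing RealityKit ARView called arView and an iOS 13 / ARKit 3 beta; treat it as a sketch from the betas, not gospel.

```swift
import ARKit
import RealityKit

// A minimal sketch: enabling people occlusion and body tracking (motion capture)
// on an assumed, already-configured ARView. Both features need recent hardware,
// so check for support before running the session.

func enablePeopleOcclusion(on arView: ARView) {
    guard ARWorldTrackingConfiguration.supportsFrameSemantics(.personSegmentationWithDepth) else {
        print("People occlusion isn't supported on this device.")
        return
    }
    let config = ARWorldTrackingConfiguration()
    // Ask ARKit to segment people in the camera feed (with depth) so virtual
    // content renders behind them correctly.
    config.frameSemantics.insert(.personSegmentationWithDepth)
    arView.session.run(config)
}

func enableMotionCapture(on arView: ARView) {
    guard ARBodyTrackingConfiguration.isSupported else {
        print("Body tracking isn't supported on this device.")
        return
    }
    // Body tracking surfaces an ARBodyAnchor through the session delegate as the
    // tracked person moves.
    arView.session.run(ARBodyTrackingConfiguration())
}
```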


AR in 2019

Before we get into the nuts and bolts of our adventuring into Reality Composer and RealityKit, let’s talk about AR.

When I showed our Marketing Director, Josh, the work I was doing with Reality Composer (which is still very much in private beta for developers), I believe his exact words were:

“Oh, this is cool… I think AR is so cool, but there is basically less than zero client demand for this stuff… so…”

Flustered Gif

After my inner tech nerd simmered down, I conceded that he had an extremely valid point: is AR a thing now, and should we be encouraging clients to invest in AR tech?

The answer is a resounding yes. (For all of the uber-pragmatists out there, of course there are caveats and nothing is truly one-size-fits-all, but I’m going to pick a side here, and I’m leaning yes.)

I see a few areas where AR implementations are worth talking about today:

  • Education and Training: I personally come from a military background, and that is just one of many places where AR and VR technology is enabling people to do their jobs better and more safely. See Google Glass’ rebrand(?) to Glass Enterprise for more examples.
  • Gaming: Ok, so ISL is not a AAA game studio and we don’t necessarily have a desire to release the next AR Call of Duty, but the people of ISL have built no shortage of cool concepts (see Uterus Invaders, Fake News: The Game, Rival Road, VR Cornhole, and Zone). There is certainly an opportunity to build nifty, and even truly useful, AR experiences for potential clients.
  • Social: I don’t know how to say this without sounding inflammatory, but if you’re doing social marketing and you’re not at least versed in or MINORLY dabbling with AR filters and lenses, you are very likely missing the boat. According to Adweek, in 2018 over 250,000 lenses were submitted by creators on Snapchat (here’s one of ours for the Washington Capitals that didn’t get submitted), resulting in over 15 billion, with a B, views. Read that again. There is no excuse for not, at a minimum, demanding an evaluation of whether there are opportunities for your brand in the AR social space.
  • E-Commerce: According to Apple at this year’s WWDC, Wayfair sees more than a threefold increase in conversion when someone interacts with furniture in AR. It’s a thing, maybe not for everyone, but it’s definitely a thing.
  • Advertising: Josh did a great job writing about this on Medium (see point #4 in his post). Suffice it to say that Apple has AR Quick Look, basically every social platform has some form of AR advertising option, and there are countless other use cases for AR in marketing and advertising more broadly today.

Ok, let’s step off of the soapbox for a moment and get to the fun with Reality Composer.

Rainbow gif

Making Realities Come True

In typical Alex fashion, I like to put my head down and see how far I can get using a product before I have to go out and read the documentation. If I can’t even get the product up and running, it’s generally a bad sign for things to come. (For the younger readers out there, and for the record more generally: Don’t be an Alex. Be like Brittany at ISL and read the docs first.)

After installing a couple of betas (not on my personal phone, because I’ve had a phone freeze and effectively turn into a temporary brick while running a beta, so I don’t do that anymore), I had Reality Composer up and running on an iPhone X.

Within about 30 minutes I had a few silly, but totally functional scenes that did some super trivial things like animate on touch and fade in and out (all features that are built into Reality Composer’s UI and took zero code).

The cool part was that I didn’t need any knowledge of 3D modeling or physics to start placing objects from Apple’s small default library on the screen and get running.
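
For a sense of what that “zero code” is saving you, here’s roughly what a hand-rolled tap-to-animate interaction might look like in RealityKit. The TapToAnimate class and the specific animation are made up for illustration, and it assumes an ARView whose scene already contains entities.

```swift
import UIKit
import RealityKit

// A rough, hand-rolled equivalent of a "tap to animate" behavior.
// TapToAnimate is a hypothetical helper; it assumes an ARView whose scene
// already has entities placed in it.
final class TapToAnimate: NSObject {
    private let arView: ARView

    init(arView: ARView) {
        self.arView = arView
        super.init()
        let tap = UITapGestureRecognizer(target: self, action: #selector(handleTap(_:)))
        arView.addGestureRecognizer(tap)
    }

    @objc private func handleTap(_ gesture: UITapGestureRecognizer) {
        let point = gesture.location(in: arView)
        // Hit-test the tap against entities in the scene.
        guard let entity = arView.entity(at: point) else { return }

        // Nudge the tapped entity up 10 cm over half a second.
        var transform = entity.transform
        transform.translation.y += 0.1
        entity.move(to: transform, relativeTo: entity.parent, duration: 0.5, timingFunction: .easeInOut)
    }
}
```

None of that is complicated, but Reality Composer gives you the same kind of behavior with a couple of clicks and no code at all.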

Example AR Images

Candidly, I had some issues importing the Reality file that I made on my phone into Xcode, where I could then integrate it into an actual iOS app (again, read the documentation first; don’t be like me), but I was happy with how easy things appeared to be; that is, how easy it was to create AR experiences on iOS, for iOS.
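
For anyone attempting the same integration, loading a bundled Reality Composer export into a RealityKit ARView looks roughly like the sketch below. The scene name “MyScene” and the view controller are placeholders for illustration, not what I actually built.

```swift
import UIKit
import RealityKit

// A minimal sketch of wiring a Reality Composer export into an iOS app.
// "MyScene" is a placeholder name for a .reality (or .usdz) file bundled with the app.
class ARViewController: UIViewController {
    private let arView = ARView(frame: .zero)

    override func viewDidLoad() {
        super.viewDidLoad()
        arView.frame = view.bounds
        arView.autoresizingMask = [.flexibleWidth, .flexibleHeight]
        view.addSubview(arView)

        do {
            // Load the bundled scene as an anchor and add it to the ARView's scene.
            let sceneAnchor = try Entity.loadAnchor(named: "MyScene")
            arView.scene.addAnchor(sceneAnchor)
        } catch {
            print("Failed to load the Reality Composer scene: \(error)")
        }
    }
}
```

If you drop the .rcproject file itself into Xcode instead of an exported .reality file, Xcode generates typed Swift loading code for each scene, which, as I understand it, is the route Apple’s templates nudge you toward.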

When I talked with Manar Alhamdy, a Senior iOS Engineer at Johns Hopkins, he pointed out that there will be even more promise here if Apple extends the base 3D object library into some sort of marketplace, making it easier for creatives and developers to meet in the middle. Combine that with the seemingly inevitable future of scanning real-world objects into rough (or even potentially good?) renders for use in other apps, and there is a lot of promise that this space will only get easier.

Final Soapbox Stand… Promise

There are applications of AI and machine learning that I’m not particularly fond of (looking at you, YouTube). AR and VR technology is an area of AI-enabled tech that I’m generally excited about. At its core, it’s designed to deconflict two objects in space, rather than corral video viewers into white supremacy or analyze people’s behavior to try to get them to hit some user engagement goal... again, looking at you, YouTube. I’d like us to talk more about which technologies stand the best chance of enabling us to be more creative and cooperative, versus inflammatory and upset.

Don’t get me wrong: ARKit 3 will only work on the most recent iOS devices and by definition is not truly inclusive, but I don’t believe that negates the broader effort of progress. Again, for the pragmatists, there are negative use cases for basically any tech; I don’t think AR is the holy grail, and we shouldn’t look at any tech in that light.

All I can say is that when I saw the Minecraft Earth demo at WWDC, I imagined my daughter and me playing in the backyard when she’s older. Hopefully the tech will enable her to make and dream anything she can think of, while I’m able to be an active participant in her growth and fun.