Dunkin’ Donuts Tries Out Branded Selfie Lenses On Snapchat & Kik

What Happened
In honor of National Donut Day, Dunkin’ Donuts became the latest brand to experiment with sponsored selfie lenses, a novel, camera-based ad unit that has been gaining traction among brands seeking to reach younger consumers on messaging platforms. To celebrate the unofficial holiday, the Boston-based brand is running its first sponsored selfie lens on Snapchat, which turns a user’s face into a donut. Along with the lens, the brand is also placing sponsored Geofilters in various locations around the country and running Snap Ads to promote a frozen coffee drink.

Meanwhile on Kik, Dunkin’ Donuts will be the first brand to try out branded video stickers, which, similar to selfie lenses, overlay a sticker on a user’s face during video calls (although they do not integrate with facial features the way lenses do). The brand created three different donut-themed video stickers that Kik users can use to goof around in their video chats.

What Brands Need To Do
With the proliferation of face-altering lens features across messaging and social platforms, mainstream consumers are increasingly growing accustomed to these camera-powered AR features. This is laying the groundwork for mobile-powered augmented reality to take off, which will allow brands to infiltrate their target audience’s photos and videos via sponsored lenses or branded AR objects.

This is also a good time to think about ways augmented reality can drive new opportunities for your brand. AR can, for example, be a great way for customers to envision your products in their lives, or to launch digital experiences from signage or product packaging. What we can do now through a smartphone is just the beginning. As Microsoft’s HoloLens, Magic Leap, and the rumored Apple glasses roll out over the next few years, a lot more will become possible.

 


Source: AdWeek

Microsoft Overhauls Skype In The Camera-Centric Mold Of Snapchat

What Happened
Microsoft released an update to its IM and video call app Skype on Thursday that completely overhauls the app. Most significantly, the revamped app puts the camera just one swipe away from chats, encouraging users to snap more pictures to share with each other. The app also added a Highlights section, which functions very similarly to the Stories feature that Snapchat popularized and Facebook’s messaging platforms imitated.

What Brands Need To Do
It seems unlikely this overhaul alone is enough to put Skype back into competition with the other popular consumer messaging and social apps, given that it has neither the user base nor the engagement that its competitors enjoy. Last year, Microsoft shared that Skype had 300 million monthly active users when it introduced bots to the chat platform, an initiative that has gained little traction over the past year. In comparison, Facebook Messenger recently hit 1.2 billion monthly users, while Snapchat, a much younger app than Skype, now has over 166 million daily active users.

Nevertheless, this update underlines Microsoft’s intention to bring Skype up to speed in the messaging space and better cater to the shifting consumer preference for ephemeral sharing. It also points to a larger trend in mobile UX design, in which the camera starts to take center stage as it increasingly becomes an input source for capturing content and understanding user intent.

For more information on how brands may tap into the rapid development in camera-based mobile AR features to create engaging customer experiences, please check out the Advanced Interfaces section of our Outlook 2017.

 


Source: TechCrunch

Header image courtesy of Skype

Blippar Announces AR Ad Unit & Visual Search For Cars

What Happened
Augmented reality solution provider Blippar announced two big additions to its platform. First up, the company is ready to launch what it claims to be “the first AR digital ad unit” that doesn’t require an app. The new ad product, named Augmented Reality Digital Placement (ARDP), works with standard rich media banner ad units. When tapped, it opens a web-view window that, once users grant permission to access the camera, superimposes ad creative over the environment the camera captures, viewable in 360 degrees. The creative can be a 3D model or a static cut-out from a 360-degree video.
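
To make the mechanics concrete, here is a minimal sketch of the permission-then-overlay flow such a web-view ad could use, built on the standard browser getUserMedia API. Blippar’s actual implementation isn’t public, so the element IDs and creative asset below are hypothetical.

```typescript
// Minimal sketch of an ARDP-style web-view ad flow. Only the browser API
// (navigator.mediaDevices.getUserMedia) is real; element IDs and the
// overlay asset URL are hypothetical placeholders.
async function startArAd(): Promise<void> {
  const video = document.getElementById('ar-camera') as HTMLVideoElement;
  const overlay = document.getElementById('ar-creative') as HTMLImageElement;

  try {
    // Ask the user for camera access; on mobile, prefer the rear camera.
    const stream = await navigator.mediaDevices.getUserMedia({
      video: { facingMode: 'environment' },
      audio: false,
    });
    video.srcObject = stream;
    await video.play();

    // Superimpose the ad creative (e.g. a cut-out from a 360-degree video)
    // on top of the live camera feed.
    overlay.src = 'https://example.com/creative-cutout.png'; // hypothetical asset
    overlay.style.display = 'block';
  } catch (err) {
    // Permission denied or no camera: fall back to the standard banner unit.
    console.warn('Camera unavailable, showing fallback banner', err);
  }
}
```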

Second, Blippar is adding a “Shazam-for-cars” feature to its free mobile app that allows users to identify the make, model, and year of any U.S. car made since 2000 simply by pointing their camera at the car. The app then surfaces relevant content such as average customer ratings, pricing, and a 360-degree view of the car’s interior. Blippar says it has achieved over 97.7% accuracy in automotive recognition, which it claims is the highest in the industry. With this launch, the company is releasing a Car Recognition API that companies around the world can license and integrate into their own apps and products.
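
For developers evaluating the Car Recognition API, the call pattern would presumably look something like the sketch below. Blippar hasn’t published the endpoint or schema in the coverage we’ve seen, so the URL, request shape, and response fields here are invented for illustration.

```typescript
// Hypothetical sketch of how a licensee might call a car-recognition API.
// Endpoint, headers, and response fields are invented; Blippar's real
// Car Recognition API may differ substantially.
interface CarMatch {
  make: string;       // e.g. "Honda"
  model: string;      // e.g. "Civic"
  year: number;       // e.g. 2016
  confidence: number; // 0..1
}

async function recognizeCar(photo: Blob, apiKey: string): Promise<CarMatch> {
  const body = new FormData();
  body.append('image', photo);

  const res = await fetch('https://api.example.com/v1/car-recognition', {
    method: 'POST',
    headers: { Authorization: `Bearer ${apiKey}` },
    body,
  });
  if (!res.ok) throw new Error(`Recognition failed: ${res.status}`);

  // A licensed app could then surface ratings, pricing, and interior views
  // keyed off the returned make/model/year.
  return (await res.json()) as CarMatch;
}
```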

What Brands Need To Do
Although the AR experience delivered by ARDP appears rough and limited in functionality, judging by Blippar’s demo video, it is nevertheless an exciting development that marks the start of the AR advertising arms race. The fact that this AR experience doesn’t require any specific app to run is an obvious plus for the ad unit’s potential reach. Mobile-based AR is a great way for brands to deliver interactive content and features that drive active engagement and showcase products, and AR ads should be a strong tool for brands looking to enhance their mobile ad experience.

For auto brands, the car-recognition feature and its API should be a great tool for transforming any ordinary environment into a virtual showroom, and it lends itself to event activations, pop-up promotions, and engaging car enthusiasts. According to CEO Rish Mitra, the company is working to expand this visual search feature to cover more sectors, with fashion as its next target.

For more information on how brands may tap into the rapid development in AR to create engaging customer experiences, please check out the Advanced Interfaces section of our Outlook 2017.

 


Sources: MarTech Today & TechCrunch

Featured image courtesy of Blippar

Pinterest Lens Now Supports Visual Search For Recipes

What Happened
Pinterest has updated its visual search and discovery feature Lens to make it more useful for food enthusiasts looking for recipes and dishes. The new update enables “full dish recognition” for Lens, which lets Pinterest users snap a picture of a particular dish, such as quesadillas or fried chicken, and get corresponding recipes for that dish, along with similar-looking dishes. Previously, users could only use Lens to get recommended recipes by snapping pictures of the ingredients they had on hand. Pinterest rolled out Lens to all U.S. users in March, and it is integrated with Samsung’s voice assistant Bixby to power visual search on the new Galaxy S8 phones.

What Brands Need To Do
This update to the Pinterest Lens feature should give its search platform a boost and make it more useful for Pinterest’s over 175 million users, who, according to the company, spend 5% more on groceries than the national average. Pinterest has long been a popular platform among marketers, thanks to its quick product roll-outs and its emphasis on search and scale. The Lens feature puts it in an advantageous position for developing consumer-facing visual discovery tools, which are increasingly gaining traction as a new way to understand user intent and collect contextual data.

With the quick advancement of machine learning and AI-powered solutions, we are starting to see examples of brands using the camera primarily as an input source for the mobile user interface. This trend should provide some inspiration to brands looking to update their digital user experience to be more intuitive and convenient for mobile users.

 


Source: AdWeek

 

Fast Forward: Everything Brands Need To Know About Google’s 2017 I/O Event

This is a special edition of our Fast Forward newsletter, bringing you a summary of the major announcements from Google’s 2017 I/O developer conference. A fast read for you and a forward for your clients and team.

The highlights:

  • Google Lens brings computer vision to Google Assistant and Photos
  • Google Assistant receives major upgrades & branches out into connected cars
  • Expansion of the Daydream VR platform propels VR development forward
  • Android O brings a more fluid user experience, with Android Go targeting the “next billion mobile users”

On Wednesday, Google kicked off its annual I/O developer conference at the Shoreline Amphitheater in Mountain View, CA. CEO Sundar Pichai took the stage to lead the main keynote address, where he laid out the key developments in several of Google’s areas of interest, including AI, voice assistants, virtual reality, and more. TechCrunch has a comprehensive round-up of everything that Google announced, but we have an exclusive take on what it means for brands.

Google Lens Adds Computer Vision To Google Services

The most significant announcement coming out of this year’s Google I/O conference is the debut of Google Lens, a set of computer vision features that allows Google services to identify what the camera captures and collect contextual data via images. Google has been using similar technology in the Google Translate app (built off its 2014 acquisition of Word Lens) to automatically translate words that the camera captures in real time. Now, Google is adding this capability to Google Assistant and, later this year, to Google Photos as well.

Equipped with computer vision capabilities, Google Assistant gains the “eyes” it needs to see what users are looking at and understand their intent. Google demoed several such scenarios on stage: pointing the camera at a restaurant’s storefront to receive standard business information and reviews surfaced via Zagat and Google Maps, pointing it at an unidentified flower to have Google Assistant identify it, and pointing it at a concert poster to prompt the Assistant to find tickets for the event. Lens allows Google Assistant to tap the smartphone camera as an input source, informing user intent and creating a more frictionless user experience.

For Google Photos, the addition of Google Lens’ computer vision capabilities makes the cloud photo storage service better at identifying the people in your photos and picking out the best shots in your photo library. This powers a new feature called Suggested Sharing, in which Google Photos prompts you to share AI-selected photos with the people in them with a single tap. Users on the receiving end of the shared albums will also be prompted to add pre-selected photos of their own to the mix.

One additional feature powered by Google Lens is the Visual Positioning Service (VPS), which works like an indoor GPS, allowing Android devices to map out a specific indoor location and offer turn-by-turn navigation to a specific store in a mall or a specific item in a grocery store. VPS already works in select partner museums and Lowe’s home improvement stores, if you happen to have one of the two Tango-enabled devices. This advanced AR feature will also appear in the next Tango device, the ASUS ZenFone AR, due out this summer.

The introduction of Google Lens brings the search giant up to speed in consumer-facing AR development. Two of Google’s biggest competitors, Facebook and Amazon, recently unveiled their own takes on the “camera-as-input” trend with the launches of the Camera Effects Platform and the Echo Look, respectively. For Google, the launch of Lens is all the more significant, as it officially extends Google’s core function, search, into the physical world and opens the door to more offline use cases. That, in turn, massively increases the addressable market of searchable data and creates a virtuous cycle in which Google can leverage that image data to fuel its AR and machine learning initiatives.

Google Assistant Grows More Capable With New Features

Beyond the major addition of computer vision capabilities, Google Assistant is getting some other new features to help it stay competitive against Amazon’s Alexa and other digital voice assistants. Among the slew of new features announced on stage, two stood out to us for their versatile use cases and accessibility for developers.

First up, Actions, Google’s version of ‘skills’ or ‘apps’ for Google Assistant, added support for digital transactions. This allows Google Home and some Android phone users to shop online by conversing with Google Assistant, which will access payment methods and delivery addresses stored in Android Pay for a seamless checkout experience. The feature will launch first with Panera as a third-party partner.

This crucial update will allow more businesses to build mobile ordering and online shopping features into their Google Actions. Previously, Google Assistant could only take orders from partnering Google Express retailers, such as Costco, Whole Foods Market, Walgreens, PetSmart, and Bed Bath & Beyond. Google Assistant also gained the ability to check inventory at local stores for product availability before users make a trip to the store.
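
As a rough illustration of what this enables, a transaction-capable Action might look something like the following sketch. The helper functions are hypothetical stand-ins; the actual Actions on Google transaction flow has its own specific intents and payloads.

```typescript
// Hypothetical voice-ordering flow of the kind the transactions update
// enables. getStoredPayment and placeOrder are invented stand-ins for
// the Android Pay lookup and the merchant's fulfillment call.
interface Order { item: string; quantity: number; }

declare function getStoredPayment(
  userId: string,
): Promise<{ token: string; address: string }>; // hypothetical Android Pay lookup
declare function placeOrder(
  order: Order, paymentToken: string, address: string,
): Promise<string>; // hypothetical merchant call

async function handleOrderIntent(userId: string, order: Order): Promise<string> {
  // Pull the payment method and delivery address the user already stored,
  // so checkout stays conversational with no form-filling.
  const { token, address } = await getStoredPayment(userId);
  const confirmation = await placeOrder(order, token, address);
  return `Your order is confirmed: ${confirmation}. It will be delivered to ${address}.`;
}
```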

Second, Google Assistant can now respond by sending visuals to your smartphone or TV via Chromecast. Dubbed “Visual Responses,” this important addition enables developers to surface text, images, videos, and map navigation in response to user requests. Allowing for a variety of responses diversifies Google Assistant’s replies beyond voice and adds texture to the user experience. Supporting multiple displays extends Google Assistant to more platforms, allowing users to choose the optimal screen to engage with. This new feature comes just a week after Amazon unveiled the Echo Show, which also introduced a visual component to Alexa’s voice-based conversational interface.
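
To illustrate, a Visual Response might be expressed as a payload along these lines, loosely modeled on Actions on Google rich responses; the exact field names in the real schema may differ.

```typescript
// Illustrative shape of a visual response an Action might return: a spoken
// reply paired with a card carrying an image and a link-out button. This is
// a sketch of the concept, not the exact Actions on Google schema.
const visualResponse = {
  expectUserResponse: true,
  richResponse: {
    items: [
      { simpleResponse: { textToSpeech: 'Here is the trailer you asked for.' } },
      {
        basicCard: {
          title: 'Trailer',
          image: {
            url: 'https://example.com/trailer-still.jpg', // hypothetical asset
            accessibilityText: 'Still frame from the trailer',
          },
          buttons: [
            { title: 'Watch on TV', openUrlAction: { url: 'https://example.com/watch' } },
          ],
        },
      },
    ],
  },
};
```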

Beyond these two key updates, Google Assistant is also gaining several other features that make it smarter and more useful. They include:

  • A “proactive assistance” feature that allows Google Assistant to automatically alert you about travel, weather, and calendar updates by silently showing a spinning light-up ring on Google Home. Users can hear the updates by asking “OK Google, what’s up?” It is unclear when this notification-lite feature will roll out.
  • Hands-free phone calls to U.S. and Canadian numbers. The feature works similarly to Amazon’s recently released Alexa voice calling, but with the added ability to dial real phone numbers. Unlike Amazon’s service, it supports only outbound calls for now, because Google says it wants to be “mindful of customer privacy”.
  • New entertainment integrations, including the free tier of Spotify, SoundCloud, HBO, Hulu, CBS All Access, and other popular music and video streaming services. These allow users to ask Google Assistant to play a specific show or song, provided they have installed the corresponding apps on their devices.
  • Text input for Google Assistant, which allows users to interact with the Assistant on Android devices by typing out their requests instead of speaking them out loud.
  • Google also reminded the audience that Google Assistant will be coming to connected cars, as the company announced on Monday that Volvo and Audi are building new models that will run on Android systems.

Beyond these new features, Google is also aggressively expanding the Assistant to more platforms, announcing that it will become accessible on the Android TV OS later this year as well as on iPhones and iPads via Google’s iOS app. The update to the Android TV platform will be accompanied by a brand-new launcher, allowing users to use voice commands to access the over 3,000 Android TV apps available in the Play Store. According to Google, the Assistant is currently available on over 100 million devices. Notably, that’s a fraction of the 2 billion Android devices on the market, and it doesn’t reflect user adoption. (For comparison, Apple’s Siri is currently available on 1 billion devices.)

In addition, Google is following Apple’s lead in processing AI-powered apps locally on mobile devices as well as in the cloud. This improves app performance and security, and also enables Google Assistant to adjust to a user’s specific preferences more quickly.

Standalone Daydream VR Headsets Aim To Broaden Consumer Appeal

It’s been a full year since Google unveiled its VR platform, Daydream, and so far, only a handful of compatible handsets have been released. Facing mounting competition in the VR space, Google is taking another stab at virtual reality with new Daydream-enabled phones from partners and a new standalone headset form factor.

On the handset front, Google announced that Daydream will be supported by the new Samsung Galaxy S8 phones later this summer. Since the Galaxy S8 is the best-selling line of Android phones, this is a big win for Google, even if Samsung continues to support its own platform, Gear VR, which is powered by a rival, Facebook’s Oculus. Plus, the upcoming flagship phone from LG will also support Daydream, making the platform considerably more accessible for mainstream users.

Google is also teaming up with HTC Vive and Lenovo to build untethered, standalone VR headsets, allowing an immersive experience without additional phone or PC hardware. The headsets will support inside-out tracking, using the “WorldSense” technology from Google’s Tango AR platform to map virtual space and make sure your view in VR matches your movements in the real world, without the need for external cameras or sensors. This move puts Google in the company of Oculus and Intel, both of which have shown off early standalone headsets with self-contained tracking systems.

Fluid UI Design For Android O & Android Go For Emerging Markets

Near the end of the opening keynote, Google turned its attention to the next Android mobile OS, Android O. The preview highlighted a more fluid UI design, which includes features such as a Picture-in-Picture mode for multitasking while watching videos or during video calls, a more customizable notification dot system, and machine learning-powered smart text selection that makes it easier to choose the text to copy and paste.

In addition, Google launched a new data-conscious version of Android O named Android Go, targeting emerging global markets where mobile connectivity is still developing. Android Go is a modified version of Android for lower-end handsets, complete with apps optimized for low bandwidth and memory. Google says Android devices with less than 1GB of RAM will automatically get Android Go starting with Android O, and it is committing to releasing an Android Go variant for all future Android OS releases. Google previously created a similar low-cost Android OS for emerging markets called Android One, which initially rolled out in 2014 in Pakistan, India, Bangladesh, Nepal, Indonesia, and other countries in South and Southeast Asia.

What Brands Need To Do

Google’s announcements at this year’s I/O event map closely to two trends emphasized in our Outlook 2017. The introduction of Google Lens marks Google’s official entry into camera-based mobile AR features (the Tango AR platform is too inaccessible to count), a leading element of the Advanced Interfaces trend. The notable updates that Google Assistant received, in particular the computer vision capabilities that Google Lens brings, make the voice assistant a more helpful and intuitive Augmented Intelligence service for users. And the expansion of the Daydream VR platform shows Google’s continued investment in virtual reality, another facet of the evolution of advanced digital interfaces.

The integration of Google Lens into Google Assistant opens up some exciting new opportunities for brands to explore. For example, CPG brands may consider working with Google to make sure that Android users can use Lens to correctly identify their products and receive accurate information. For retailers, the addition of the VPS feature holds great potential for in-store navigation and AR promotions once it becomes available on more mobile devices.

The new features coming to Google Assistant make it a more capable contender in the fight against Amazon’s Alexa. In particular, the support for handling transactions and the “Visual Responses” should offer brands great opportunities to drive direct sales and engage customers with a multimedia experience. For auto brands in particular, the integration of Google Assistant into some of the upcoming connected cars brings new use cases for engaging with car owners via conversational experiences. The addition of Visual Responses means it is now possible to deliver additional content, be it videos or images, about your products via Google Assistant, adding a visual component that is crucial for marketing fashion and beauty brands.

In terms of VR, Google’s initiatives should help expand the accessibility of its VR platform and get more users to watch the 360-degree and VR content available on YouTube and other Google platforms. For brands, this means increased opportunities to reach consumers with immersive content on Google-owned platforms. As more mainstream tech and media companies rush into VR to capitalize on the booming popularity of the emerging medium, brand marketers should start developing VR content that enhances their brand messaging and contributes to their campaign objectives.

How We Can Help

While mobile AR technologies and standalone VR devices are still in early stages of development, brands can greatly benefit by starting to develop strategies for these two emerging areas. If you’re not sure where to start, the Lab is here to help.

The Lab has always been fascinated by the enormous potential of AR and its ability to transform our physical world. We’re excited that Google is bringing computer vision to Android devices, allowing us to develop AR experiences, delivered via Google Assistant, that reach millions of users. If you’d like to discuss how your brand can properly harness the power of AR to engage your customers and create extra value, please get in touch with us.

The Lab has extensive experience in building Alexa Skills and other conversational experiences to reach consumers on smart home devices. So much so that we’ve built a dedicated conversational practice called Dialogue. The Zyrtec AllergyCast Alexa skill that we collaborated with J3 to create is a good example of how Dialogue can help brands build a voice customer experience, supercharged by our stack of technology partners with best-in-class solutions and an insights engine that extracts business intelligence from conversational data.

As for VR, our dedicated team of experts is here to guide marketers through the distribution landscape. We work closely with brands to develop sustainable VR content strategies to promote branded VR and 360 video content across various apps and platforms. With our proprietary technology stack powered by a combination of best-in-class VR partners, we offer customized solutions for distributing and measuring branded VR content that truly enhance brand messaging and contribute to the campaign objectives.

If you’d like to know how the Lab can help your brand tap into the tech trends coming out of Google I/O this year to supercharge your marketing efforts, please contact our Client Services Director Samantha Barrett ([email protected]) to schedule a visit to the Lab.

 

Wayfair Launches Visual Search For Finding Similar Home Goods

What Happened
Online home goods retailer Wayfair has launched “Search with Photo,” a visual search tool that leverages artificial intelligence to let shoppers find home furnishings that match items they photograph. Available across mobile and desktop devices, the feature is accessible via the camera icon in the Wayfair.com search bar, which allows shoppers to snap a photo or upload one from their photo library. The search engine then quickly returns visually similar items available in Wayfair’s inventory for direct purchase. Users can add the products they like to an Idea Board to save for later or share with others.
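
Under the hood, visual search tools like this typically embed the query photo into a feature vector and rank catalog items by similarity. The sketch below shows that generic technique, not Wayfair’s disclosed implementation; embedImage() stands in for a real image-embedding model.

```typescript
// Generic visual-similarity search: embed the query photo, then rank the
// catalog by cosine similarity. embedImage() is a hypothetical stand-in
// for a CNN feature extractor; the catalog embeddings are precomputed.
interface CatalogItem { sku: string; embedding: number[]; }

declare function embedImage(photo: Blob): Promise<number[]>; // hypothetical model call

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

async function searchWithPhoto(
  photo: Blob, catalog: CatalogItem[], topK = 10,
): Promise<CatalogItem[]> {
  const q = await embedImage(photo);
  // Sort a copy of the catalog by descending similarity and keep the top K.
  return [...catalog]
    .sort((x, y) => cosine(q, y.embedding) - cosine(q, x.embedding))
    .slice(0, topK);
}
```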

What Brands Need To Do
This new visual search engine should help Wayfair maintain a competitive edge over other ecommerce sites. With the quick advancement of machine learning and AI-powered solutions, we are starting to see examples of brands using the camera primarily as an input source for the mobile user interface and leveraging images to learn about user intent. Pinterest and Amazon have both previously launched similar visual search features that use the camera as an input source to better understand what users are looking for and optimize the product discovery process. Brands looking to stay ahead of the digital curve will need to start formulating a “camera strategy” and broaden their methods of customer data collection.

 


Source: TechCrunch

Instagram Adds Snapchat-Style AR Selfie Lenses And Tests Location-Based Public Stories

What Happened
It looks like Facebook is not done copying Snapchat just yet. With its latest update on Tuesday, Instagram users can now try out so-called “face filters” in the Instagram camera, which work similarly to the Selfie Lenses that Snapchat popularized. Instagram users can tap the new face effect icon to try out eight different filters, including animated crowns, cute animal features, and other AR effects that track your face and respond to motion.

In addition, Instagram has started testing a new feature that allows users to view all publicly shared Stories carrying the same location sticker. Users can visit the Instagram page of that business, landmark, or place and watch a slideshow Story of posts shared from there by people they don’t follow. Snapchat used to have a similar local Stories feature that compiled user-generated content by location, but it discontinued the feature to focus more on live events.

What Brands Need To Do
Both new features are part of a homogenizing trend in social and messaging app design, led largely by Facebook’s relentless efforts to curb Snapchat’s growth. For brands, the rapid growth of Instagram and its camera-focused update signal increasing opportunities to conquer the smartphone’s camera screen. With the quick advancement of machine learning and AI-powered solutions, we are starting to see examples of brands using the camera primarily as an input source for the mobile user interface and leveraging images to learn about user intent. Combined with the upcoming developments on Facebook’s Camera Effects Platform, announced last month at its F8 developer conference, this trend means brands should be looking into camera AR features as a way to make their digital user experience more intuitive and convenient for mobile users.

 


Source: Amazon Alexa Blog

Snapchat Debuts Sponsored World Lenses And Readies Branded Stickers

What Happened
On the heels of a less-than-optimal earnings report, Snap Inc. is releasing two new AR camera ad products as it aims to drum up more ad revenue. The popular messaging app introduced World Lenses in February to let users embellish their surroundings with cute animations; now brands can sponsor those World Lenses the same way they sponsor the face-altering Selfie Lenses. The sponsored lenses can be targeted to specific audiences with a guaranteed number of impressions. Netflix and Warner Bros. are among the first advertisers to try out the AR-powered ad unit.

In addition, Snap is making it easier for brand advertisers to customize Sponsored Geofilters down to specific locations, such as a school or a movie theater. Warner Bros. is promoting the film Everything, Everything with a branded geofilter, in addition to the sponsored World Lens, targeting high schoolers by featuring the name of their school. Moreover, the company is also reportedly ready to unleash branded stickers, such as ones featuring Hello Kitty.

What Brands Need To Do
While Snapchat’s slowing installs have worried some brand advertisers about the platform’s growth potential, recent studies and surveys commissioned by TechCrunch concluded that U.S. Millennial and Gen Z users are staying loyal to Snapchat and are in no hurry to jump ship to Instagram. As Snapchat continues to lead the charge in exploring AR camera effects and their monetization, brands should consider taking advantage of the new camera-based ad products it offers to reach the younger users active on its platform.

 


Sources: Marketing Land, AdWeek & Mashable

 

Audi Sponsors The Washington Post’s First Entry Into AR Content

What Happened
The Washington Post is launching a new mobile content series that uses AR technology to tell the stories behind famous buildings around the world. The first installment is a 10-second-long AR experience that readers can activate on their smartphones via the Post’s iOS app to learn about the unique ceiling design of the Elbphilharmonie concert hall in Hamburg, Germany. Audi is the sole brand sponsor of the series. Its first ad will appear as a visual, but the Post will work with Audi to create branded AR stories in upcoming installments.

What Brands Need To Do
This is an exciting example of a brand leveraging a publisher’s AR efforts to experiment with new ways to reach mobile consumers. While Snapchat has been credited as the pioneer in popularizing AR camera effects, Facebook made a big AR move last month with the launch of its Camera Effects Platform, which offers brands a platform and the tools they need to create interactive experiences that use the camera as an input. As more media platforms and publishers get on board with mobile-based AR technology, it is up to brands to find the right content creators to partner with to explore camera-based AR experiences that reach customers.

 


Source: Digiday

Brawny Uses Snap Spectacles To Capture Kids’ POV For Mother’s Day Ad

What Happened
Paper towel brand Brawny found a unique use for Snap Spectacles in this year’s Mother’s Day campaign. The company worked with ad agency Cutwater to create a 60-second commercial titled “Once a Mother, Always a Giant” that uses footage shot by putting the camera-equipped glasses on kids to capture their point of view. The result is a heartwarming montage of mothers looking like “giants” from their kids’ perspective.

What Brands Need To Do
This is not the first time a brand has used Snap Spectacles to generate unique video content for marketing purposes. Both Marriott and Hyatt have been leveraging Spectacles to create authentic video content at their properties around the world. While Snap’s first quarterly report as a public company, released on Wednesday, doesn’t exactly paint a rosy picture for the company, CEO Evan Spiegel says he isn’t bothered by Facebook’s aggressive imitation of Snapchat features, re-stressing the company’s camera-first strategy. Regardless of which company prevails in the race to make the camera the first mass platform for augmented reality, brands need to start exploring new tools like Spectacles to spice up their campaigns.

 


Source: Marketing Dive

Image courtesy of Brawny’s YouTube