It’s undeniable that a great deal of our entertainment media is based around personalization these days.
Not sure what music to listen to? A data-driven playlist will recommend songs you’ll like. What about a new book? Get a suggestion for a great read, based on what others with similar profiles have purchased. Need a new binge-worthy series? Netflix introduced us to video recommendation years ago — and the company has said its engine is now worth about $1 billion a year.
Algorithms that enable media platforms to predict and recommend content have certainly rocked the entire entertainment media space. But personalized media will undoubtedly advance a whole lot further from what we see today. So, the question remains: just how well can our devices get to know us?
CHANGING MEDIA LANDSCAPE
Consumers have been stepping away from traditional TV consumption for years. Digital streaming services such as Netflix, Hulu and Amazon Prime Video have changed the way video is being watched, discovered and shared. According to a bcg.perspectives report, those players will capture 20% of the total U.S. video industry value in 2018 — up from 10% in 2014 — and this represents more than $30 billion of total U.S. TV-industry revenues.
And as the revenue accelerates, so does the technology. Companies like TiVo and MediaHound — a firm UpRamp is involved with — give pay TV providers a platform to replicate and expand on Netflix features. MediaHound, for instance, uses the Entertainment Graph to create an even more personalized media experience.
While Netflix bases recommendations solely on its own database, the Entertainment Graph pulls in recommendations from all sources, incorporating behaviors, actions, moods, and more. Because these platforms aren’t limited to one static library, their personalization capabilities are far more advanced.
The way that we interact with these recommendation tools has also started to change. The Amazon Fire TV Stick integrates with Alexa, thus enabling voice-controlled content from Netflix, Hulu, YouTube and, of course, Amazon. Comcast’s X1 platform, which allows users to watch live TV, on-demand and access DVR recordings, also recommends content and features the X1 voice remote, allowing consumers to speak into the device and get TV and video suggestions. Just ask, “What should I watch?” and you’ve got your night covered.
While we’re certainly enjoying the beginning of personalized media — and have even begun to interact with it — there’s obviously more space for it to advance and for our devices to get to know us on a more human level.
Tech companies big and small are working on new technologies — the most interesting of the bunch involve teaching computers to sense emotion — and it’s tempting to connect the dots and imagine how these could completely revolutionize the recommendation space.
Take what Amazon’s doing. The company is currently working on teaching Alexa to recognize emotions through the tenor of a user’s voice. While this means Alexa’s responses to us might become a little less robotic, it also means that we may one day be able to speak into our Amazon Fire TV Sticks and be recommended a movie based on whatever emotion Alexa senses we might feel.
Apple and Microsoft have entered the emotion recognition market as well. In January, Apple bought a startup called Emotient, a company that uses facial recognition technology to discern people’s emotions. Similarly, Microsoft released an emotion-sensing platform at the end of last year that recognizes human emotions by analyzing photos. Neither company has said what it plans to do with the technology at this point.
While this advancement could be a fun tool for the recommendation space — e.g., put on a sad face, take a selfie and be presented with a list of sad films, music or books (or happy ones to cheer you up) — it certainly has the potential to become much more powerful.
If we gave recommendation platforms access to our cameras, for example (and without a doubt, this would come with a range of privacy issues), perhaps one day these tools will be able to take one look at our faces, know how we’re feeling, and recommend the perfect content for us to consume. They might know that after work, we’re tired. Or that on Sunday mornings, we’re energized.
It will be fun to see where this technology leads us in the next few years. With recommendation tools advancing this quickly, the day when they know us as unique individuals is likely not far off. Now, if only they could stir up a gin and tonic and put on our favorite slippers, too.
Scott Brown is the managing director of UpRamp, a next-generation startup accelerator sponsored by CableLabs and designed to help emerging technology companies make deals within the global cable and broadband industry.