#030 PATENT DROP

Spotify mood recommendations, Facebook VR keyboard & Ford's AR system

This Patent Drop is going out to 8,036 people! Hit subscribe to get a peek into the future with 3 summaries of new patent applications from big tech companies every week ✨🔮


Hi - Happy Monday!

This week’s Patent Drop looks at some fascinating applications from Spotify, Ford and Facebook.

Before diving in, I just wanted to tell you about Masterworks, this week’s Patent Drop sponsor.

Masterworks helps you invest in a secret asset class that Jeff Bezos, Eric Schmidt, and Bill Gates LOVE.

These tech billionaires are pouring millions into an exclusive asset class that has generated attractive returns for decades—but you would never know it by looking at their portfolios. What do they invest in that you don’t? Blue-chip art: an exclusive asset class that is expected to grow from $1.7 trillion to $2.6 trillion by 2026.

The only problem? Unless you have a cool $10,000,000 lying around, you can’t be a part of this billionaires’ boys’ club. Right? Think again. Introducing Masterworks, my favourite platform for investing in proven artists like Basquiat, Warhol, and Banksy. Contemporary art prices outperformed the S&P 500 by 172% from 2000-2020, so it’s no surprise 84% of ultra-high-net-worth individuals collect art.

If you’re looking for a solid and nearly uncorrelated asset to add to your portfolio, I recommend checking out Masterworks.

Bonus: I’ve partnered with Masterworks to let Patent Drop subscribers skip their 25,000 person waitlist, so do yourself a favour and sign up today.*

Skip the waitlist


*See important information.

Okay, now let’s jump in…

1. Facebook - porting physical objects into virtual reality

If you’ve been an early follower of Patent Drop, you might remember some of the patent applications from Apple that explored how users might interact with input devices (such as keyboards) in Virtual Reality.

This work is important because it touches on how ‘work’ could actually be done in a VR environment.

In this filing application, Facebook suggests multiple approaches for how users could interact with input devices in VR.

The main approach is for the physical object (e.g. a keyboard) to be activated with the VR headset, and then for a virtual model of the keyboard to be rendered in the VR space. So even though the user won’t see their physical environment, they will still be able to see where to move their hands in order to interact with the keyboard.

When a user’s hand hovers over the keyboard, they will see a virtual representation of their hand, so that they can better judge their distance from the keyboard and where their hands need to move.

As users type on the keyboard, the keys that are tapped will be highlighted virtually in order to give users a sense of visual feedback from their actions.
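The interaction loop described above can be sketched in a few lines. This is purely illustrative — the class and method names are invented, not taken from Facebook’s filing — but it captures the idea of mirroring a physical keypress as a highlight on the virtual keyboard model:

```python
# Hypothetical sketch of the filing's feedback loop: a physical
# keypress is mirrored as a highlight on the virtual keyboard model,
# and cleared when the key is released. All names are illustrative.

from dataclasses import dataclass, field


@dataclass
class VirtualKeyboard:
    # Keys currently highlighted in the rendered VR model
    highlighted: set = field(default_factory=set)

    def on_physical_keypress(self, key: str) -> None:
        # Light up the virtual counterpart of the tapped physical key
        self.highlighted.add(key)

    def on_key_release(self, key: str) -> None:
        # Remove the highlight once the finger lifts off
        self.highlighted.discard(key)


kb = VirtualKeyboard()
kb.on_physical_keypress("a")
print("a" in kb.highlighted)  # True while the key is held
kb.on_key_release("a")
print("a" in kb.highlighted)  # False after release
```

In a real headset this state would drive the renderer each frame; the point is simply that the physical device, not the virtual one, is the source of truth for input.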

One question you might ask is why there needs to be a physical object at all for typing in VR. Could it not just take place on a purely virtual keyboard?

One big problem with tapping a virtual keyboard that has no parallel physical object is that there’s no tactile feedback to typing: no sense of how hard to tap a key, or at what depth a virtual key registers. Imagine typing into air versus typing on a physical keyboard - it would be uncomfortable and unintuitive.

With existing VR headsets, users usually ‘type’ by moving their controller over a virtual keyboard and individually selecting the letters. While this is fine for consumer entertainment applications where typing is minimal, it becomes too time consuming for productivity applications.

This patent filing is further insight into how the big tech companies are thinking about enterprise applications for VR and how productive work could be completed - starting with the simple keyboard.

2. Ford - AR wearable system for vehicle occupants

In a patent filing this week, Ford gives a peek into how it plans to connect its car to AR (augmented reality) glasses of drivers and passengers.

Currently, some cars leverage AR in their heads-up displays. Usually, this consists of a display streaming live video from a camera at the front of the car. On this screen, the car’s system might superimpose content that highlights points of interest, pedestrians and so on.

The limitation of these existing AR systems is that they require the driver to shift their gaze away from the road and towards the display screen. These AR systems may end up distracting a user from what they ought to be focusing on - the road.

In this filing, Ford describes creating a system that connects to AR glasses - e.g. let’s say the upcoming Apple Glasses. Through these glasses, Ford could then superimpose any additional content directly onto the driver’s field of view, instead of on a separate display screen that pulls the driver’s focus away from the road.

When wearing these AR glasses, users would be able to rotate their head in any direction and interact with the AR system, overcoming the limitations with existing displays.

The patent application gives some ideas for what users might expect to see with these AR glasses. One example is the AR system generating a business logo of a nearby place of interest that hovers over the top of the buildings. Another example, more strangely, is showing the social media connections of a passing vehicle - maybe Tinder will be something we play in traffic?
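One basic building block of an overlay like the logo example above is deciding whether a point of interest is currently within the wearer’s field of view as they rotate their head. The sketch below is my own illustration, not anything from Ford’s filing — the function name, the 90° field of view, and the yaw-only model are all invented simplifications:

```python
# Illustrative sketch (all names and numbers invented): decide whether
# a point of interest falls inside the wearer's field of view, given
# the head's yaw angle, so its logo could be overlaid in AR glasses.

def in_field_of_view(head_yaw_deg: float, poi_bearing_deg: float,
                     fov_deg: float = 90.0) -> bool:
    # Smallest signed angular difference between the head direction
    # and the POI bearing, normalised into [-180, 180)
    diff = (poi_bearing_deg - head_yaw_deg + 180) % 360 - 180
    return abs(diff) <= fov_deg / 2


print(in_field_of_view(0, 30))    # True: within a 90° forward cone
print(in_field_of_view(0, 120))   # False: outside the cone
```

A production system would of course work in full 3D and project the logo to screen coordinates, but the head-relative culling step is the part that lets the overlay follow the wearer’s gaze rather than a fixed dashboard screen.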

It’s always fascinating to see how emerging technologies can interact with each other. In this instance, AR wearables and connected cars could create entirely new driving experiences.

3. Spotify - suggesting songs based on a user’s physical parameters

No pretty pictures for this one.

In January 2021, Spotify was granted a patent for suggesting songs based on analysing a user’s speech for emotional states, gender, age or accent.

In this latest filing, Spotify begins to explore making song recommendations based on analysing a user’s physical parameters and determining their mood.

The filing mentions using data from wearables to track blood pressure, pulse, body temperature, facial information and contextual information such as the air temperature surrounding the user.

It will work by attempting to map values from these physical parameters to emotional tags. For example, a high heart rate may be mapped to ‘excitement’, while a low heart rate may be mapped to being ‘mellow’. So when songs are recommended to a user, recommendations based on their preferences will also be filtered by the user’s current mood - that is, by which songs carry emotional tags similar to what the user is currently feeling.
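The mapping-and-filtering idea above can be sketched very simply. To be clear, the thresholds, tag names, and function names below are all my own invention for illustration — the filing doesn’t specify concrete values:

```python
# A minimal sketch of the approach the filing describes: bucket a
# physical reading into an emotional tag, then filter candidate
# recommendations by matching tags. Thresholds and tags are invented.

def mood_from_heart_rate(bpm: float) -> str:
    # Hypothetical cut-offs; the filing gives no concrete numbers
    if bpm >= 100:
        return "excitement"
    if bpm <= 65:
        return "mellow"
    return "neutral"


def filter_by_mood(songs: list[dict], mood: str) -> list[dict]:
    # Keep only songs whose emotional tags include the current mood
    return [s for s in songs if mood in s["tags"]]


songs = [
    {"title": "Upbeat Anthem", "tags": {"excitement"}},
    {"title": "Quiet Evening", "tags": {"mellow"}},
]

mood = mood_from_heart_rate(58)
print([s["title"] for s in filter_by_mood(songs, mood)])
# → ['Quiet Evening']
```

In practice the filing contemplates combining several signals (pulse, temperature, facial data, ambient context) rather than a single heart-rate reading, but the core pattern - sensor values in, emotional tags out, tags used as a recommendation filter - is the same.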

The idea of mood-based recommendations for music isn’t a new one. In fact, right now, Spotify does have a lot of mood-based playlists - e.g. the ‘chill playlist’. The bridge that Spotify is trying to cross by assessing a user’s current mood is having the product immediately provide the ‘emotional fix’ for a user, without the user having to do the work to understand their own mood and find the right playlist. In fact, maybe Spotify’s mood-based recommendations will begin to understand our mood and what music we need to listen to better than we do.

The danger of mood-based recommendations is the subjective judgement that needs to be made - do you suggest songs that allow a user to revel in that mood, or do you suggest songs that try to shift a user’s mood? YouTube’s recommended videos have often been criticised for leading people down topical rabbit holes that are difficult to escape from. For instance, maybe you wanted to see that video of Ben Shapiro owning a liberal snowflake, but 2 months on, you’re now being presented videos on the latest Q-Anon conspiracy. The parallel when it comes to mood may be that you’re feeling depressed, Spotify recommends ‘sad boi / sad girl’ songs, and it becomes harder for you to escape that mood.


Before you leave…