#029 PATENT DROP
Alexa as a psychologist, Superhuman Disney robots & Wellness insights from Microsoft
This Patent Drop is going out to 8,000 people! Hit subscribe to get a peek into the future with 3 summaries of new patent applications from big tech companies every week ✨🔮
This week’s Patent Drop is brought to you by…
Why do 84% of ultra-high-net-worth individuals collect art? For decades, the top 1% have used art to generate outsized investment returns with virtually no correlation to the stock market. From 1995 to 2020, contemporary art prices outperformed the S&P by 172%. But unless you have an extra $10 million lying around to buy an entire painting, you've been locked out of this private $1.7 trillion asset class. Until now.
Masterworks is an exclusive community that lets its members invest in blue-chip art by artists like Basquiat and KAWS. Recently, Masterworks sold their first work, a painting by Banksy, for a net 32% annualized return. They even have a secondary market so you can get liquidity anytime.
The best part? They've hooked up Patent Drop subscribers with a fast pass to skip their 15,000-person waitlist with this special link.*
*See important info
Hi, how are you?
Apologies for the Patent Drop delay. This last week has been a tough one - my cousin in India passed away because of Covid, leaving behind his wife and 3 kids. The situation out there is dire right now. The system is in total collapse. My timeline is full of friends using Twitter to coordinate supplies of Remdesivir and Oxygen tanks for their loved ones. The government have been caught napping.
Anyway, let’s get into some patent applications that I’ve seen over the last few weeks.
Microsoft is thinking about giving wellness recommendations based on a user’s biometric data (e.g. heart rate, blood pressure) and data around work events such as writing/reading emails and attending meetings.
To do this, Microsoft would request access to the productivity applications used by an employee. From this, Microsoft could track a number of different data points, such as:
duration of time spent writing an email
time spent reading an email
number of times a user refreshes their inbox
number of emails written / number of emails read
number of corrections made when writing an email
who’s on the recipient list for each email
number of meetings in a day
audio data from meetings (via microphones on a device)
how hard each key press is on the keyboard
the tone of language in an email
By cross-checking this with biometric data (e.g. from a secondary device like a Fitbit), Microsoft could begin to understand what work events are triggering anxiety.
For example, if a user receives an email from their manager, Microsoft might see that the user spent a higher-than-average amount of time reading the email, and their blood pressure was also elevated during this time.
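The cross-checking idea above can be sketched in a few lines. This is not Microsoft's implementation — the event log, field names, and thresholds below are all hypothetical — but it shows the basic shape of flagging work events where both attention time and a biometric reading run above the user's baseline:

```python
from statistics import mean

# Hypothetical event log: (event description, seconds spent, systolic blood pressure)
events = [
    ("email from manager", 210, 145),
    ("email from teammate", 40, 118),
    ("team meeting invite", 30, 120),
    ("email from manager", 180, 142),
    ("newsletter", 25, 115),
]

# Per-user baselines: average time spent and average blood pressure
avg_time = mean(t for _, t, _ in events)
avg_bp = mean(bp for _, _, bp in events)

def anxiety_triggers(events):
    """Flag event types where both reading time and blood pressure exceed baseline."""
    return sorted({desc for desc, t, bp in events if t > avg_time and bp > avg_bp})

print(anxiety_triggers(events))  # ['email from manager']
```

A real system would of course need far more signal (and far more care) than a threshold check, but the pattern — join work-event telemetry with biometric telemetry, then look for co-elevated readings — is the core of what the filing describes.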
Based on these insights, Microsoft could surface recommendations to help users manage their stress levels, as well as highlight the events that trigger anxiety.
With Office and Teams, Microsoft has a deep understanding of almost all work-related events that a user might participate in. If Facebook built their business from understanding our social lives, Microsoft seems to be building one around understanding our work lives.
As is a recurring theme in Patent Drop, this filing also raises the risk of enterprise surveillance. As remote work becomes a more permanent fact of life, and more work events have shifted online, there are more data points to be captured around how we work. While this specific filing is directed towards helping employees manage their anxiety, it could easily be flipped to helping managers and HR teams deeply understand the productivity and mental health of their employees.
Amazon’s latest patent application looks at classifying speech from a user based on emotion or sentiment.
In the filing, Amazon give a few examples of where it might be useful to understand a user’s state of mind. For example, if a user is dictating a text message via Alexa, the user might tell Alexa to add an emoji. Alexa could then suggest an emoji based on the emotional state detected in your speech - if there’s a slight giggle as you’re dictating the message, Alexa could put in a laughing face emoji.
Another example is a user asking Alexa to recommend a movie. Based on the emotional state detected in the user’s speech, Alexa could recommend an appropriate movie.
More interestingly, Amazon also mention tracking sentiment over a period of time, either for a user to see themselves, or for a third party to look at, such as a doctor or a psychologist.
In order to understand a user’s sentiment, Amazon will combine lexical information and acoustic information.
Lexical information refers to the information derived from the meaning of the words said. If a user utters “I am angry”, the meaning of the word “angry” tells us that the user is displaying the emotion of anger.
Acoustic information refers to the information derived from the actual audio data of what’s said. For instance, the rising pitch and volume in someone saying “I am angry” may signal the emotion of anger.
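A toy sketch of combining the two signals might look like the following. To be clear, Amazon would use trained models over real audio features; the word list, pitch/volume thresholds, and weights here are stand-ins purely to illustrate the lexical + acoustic fusion the filing describes:

```python
# Hypothetical anger words for the lexical side (a real system uses an NLU model)
ANGER_WORDS = {"angry", "furious", "annoyed"}

def lexical_score(transcript: str) -> float:
    """Score anger from the meaning of the words themselves (lexical information)."""
    return 1.0 if ANGER_WORDS & set(transcript.lower().split()) else 0.0

def acoustic_score(pitch_hz: float, volume_db: float) -> float:
    """Score anger from how the words were said (acoustic information)."""
    # Elevated pitch and volume beyond rough thresholds suggest anger.
    return 1.0 if pitch_hz > 220 and volume_db > 70 else 0.0

def anger_score(transcript, pitch_hz, volume_db, w_lex=0.5, w_ac=0.5):
    """Blend the two information sources into one sentiment score."""
    return w_lex * lexical_score(transcript) + w_ac * acoustic_score(pitch_hz, volume_db)

print(anger_score("I am angry", pitch_hz=250, volume_db=75))  # 1.0
print(anger_score("I am fine", pitch_hz=180, volume_db=60))   # 0.0
```

The interesting cases are the mismatches: sarcasm ("I'm *fine*") scores low lexically but high acoustically, which is exactly why the filing combines both sources rather than relying on either alone.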
The capability of Amazon / Alexa to understand a user’s sentiment over time through their speech raises a lot of ethical issues. One that comes to mind: users thought they were buying a digital assistant to carry out their instructions. Some savvy users understood that their instructions were giving data to Amazon. What they may not have understood is that they were eventually giving Amazon a peek into their emotional world.
What’s the end game for Amazon understanding a user’s emotion?
Not sure - and Amazon may not even have a specified end game yet.
Understanding a user’s emotional state would be useful for building a full conversational agent that can interact, understand and respond in a more human way. Right now, Alexa is used for fulfilling orders. In the future, Alexa may expand to more use cases. For instance, Alexa could serve as a ‘carer’ that interacts with and tracks the emotional state of the elderly or the vulnerable, and feeds that data back to relatives.
Related to this filing, in #007 PATENT DROP, we saw Amazon file a patent application for a wearable that would track and understand the emotion in human speech.
Bipedal robots (robots that walk on two legs) are generally designed to take the place of humans in dangerous settings, e.g. as emergency responders, soldiers, or manufacturing workers. They need to be able to navigate all types of terrain and deal with unexpected changes in an environment.
In the context of building robots for theme parks, the engineering challenge is different: the robots will navigate a controlled environment, and the objective is to make them move in a way that resembles a lifelike character.
So in this filing, Disney describe building a robot system where the controlled environment provides “cheats” that can help the robots perform in more lifelike ways.
For example, Disney’s robot system would include a floor made of ferrous material. If a robot has electromagnets in its feet, it can dynamically clamp and unclamp its feet from the floor surface in order to move. This level of control over the environment is impractical for military applications, but useful for entertainment in a theme park.
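The clamp/unclamp walking cycle can be sketched as a simple state machine. This is a loose illustration of the mechanism in the filing, not Disney's control system — the class and method names are invented for the example:

```python
class Foot:
    """One robot foot with an electromagnet for a ferrous floor."""
    def __init__(self, name):
        self.name = name
        self.clamped = False

    def clamp(self):
        self.clamped = True   # energize the electromagnet against the floor

    def unclamp(self):
        self.clamped = False  # de-energize so the foot can swing forward

def step_cycle(stance: Foot, swing: Foot):
    """One step: anchor the stance foot, swing the other, then swap roles."""
    stance.clamp()    # stance foot anchors the robot (enabling extreme leans)
    swing.unclamp()   # swing foot lifts free of the floor
    # ... move the swing foot forward and shift the body weight ...
    swing.clamp()     # the swing foot becomes the new anchor
    stance.unclamp()  # the old stance foot is freed for the next step
    return swing, stance  # roles swapped for the next cycle

left, right = Foot("left"), Foot("right")
stance, swing = step_cycle(left, right)
print(stance.name, stance.clamped)  # right True
```

The key point the filing makes is that because exactly one foot is always magnetically anchored, the robot never needs to balance dynamically the way a field robot does — the floor itself supplies the stabilizing force.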
By creating such a system, Disney’s robots would be able to resemble characters that display superhuman physics. For example, the robots could have extreme leaning abilities, with the support surface helping generate balancing forces.
Generally, Disney’s patent applications are focused around bringing the magic of the Disney universe outside of the screen - through robotics in a theme park, augmented reality, and conversational agents that feel lifelike.
For more Disney patents, check out #028 PATENT DROP for Disney’s filing to create emotionally expressive synthetic voices.
Before you go…
Join over 200,000 people following Patent Drop on the social investing platform Public.com
Have a nice week :)