This Patent Drop is going out to 2,720 people! Hit subscribe to get a peek into the future every week with 3 summaries of new patent applications from big tech companies ✨🔮
This week’s Patent Drop is brought to you by…
Public helps make the stock market social by letting you buy and discuss stocks with smart, curious people - like other Patent Drop readers ;)
Patent Drop are partnering with Public.com to provide people with more context around public companies (US readers only for now).
As part of this tie-up, you can get a free $10 slice of stock to invest when you open a Public account*. Click here to claim it:
*Offer valid for U.S. residents and subject to account approval. See Public.com/disclosures/
Happy Monday - 3 new (or updated) patent applications from Apple, Uber and Amazon!
Apple’s first patent application for flexible displays was filed in 2011, and has since been updated 4 times. During that time, Samsung have released foldable phones - the most recent one being the Galaxy Z Fold 2, which started shipping in October.
Samsung’s latest device showcases some interesting use-cases with flexible displays. The devices sit at the intersection of smartphones and tablets, enabling users to switch between the two as they please. When the device is half-folded, it becomes a screen with its own stand. The unfolded device allows for multi-tasking, or simply a more immersive user experience.
So, are Apple working on their own version? A smartphone-tablet hybrid with a flexible display? Or is this patent refresh just Apple’s R&D team taking stock of their work, without any immediate view to it featuring in a consumer device? I don’t have any answers, but I’ll be keeping an eye on any future patent updates in this space from Apple ;)
Two interesting patent filings from Uber this week that I wanted to highlight.
(i) Route + Location Safety
Uber are thinking about how they can make the passenger experience safer.
They are looking to do this by suggesting appropriate pick-up and drop-off locations for passengers, and safer travel routes for drivers, on the basis of a ‘safety score’.
The safety score of different locations could be based on the following:
presence of surveillance cameras
brightness or ambient light level
crime rates for a neighbourhood
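To make the idea concrete, here's a minimal sketch of how those signals might roll up into a single score. The factor weights, the camera saturation point, and the 0–1 normalisation are entirely my own assumptions for illustration — the filing only names the inputs, not how they're combined.

```python
# Hypothetical 'safety score': combine the three signals named in the
# patent (surveillance cameras, ambient light, neighbourhood crime rate)
# into one weighted 0-1 score. Weights are assumptions, not from Uber.

def safety_score(cameras_nearby: int, light_level: float, crime_rate: float) -> float:
    """Score a candidate pick-up/drop-off point.

    cameras_nearby: count of surveillance cameras within some radius
    light_level:    normalised ambient brightness, 0 (dark) to 1 (bright)
    crime_rate:     normalised neighbourhood crime rate, 0 (low) to 1 (high)
    """
    camera_signal = min(cameras_nearby / 3, 1.0)  # assume benefit saturates after ~3 cameras
    score = 0.3 * camera_signal + 0.3 * light_level + 0.4 * (1 - crime_rate)
    return round(score, 3)

# A well-lit corner with cameras in a low-crime area scores high...
print(safety_score(cameras_nearby=4, light_level=0.9, crime_rate=0.1))  # 0.93
# ...while a dark, camera-free spot in a high-crime area scores low.
print(safety_score(cameras_nearby=0, light_level=0.2, crime_rate=0.8))  # 0.14
```

A real system would presumably learn these weights from incident data rather than hand-tune them — which is exactly where the fairness questions below start to bite.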
On the one hand, optimising for safe locations could bring about a virtuous cycle for Uber. Firstly, optimising for safer routes may inspire more trust in Uber from potential passengers. For example, in India, Uber’s reputation has been tarnished by instances of drivers raping their passengers.
Secondly, having pick-up and drop-off points optimised for safety may make drivers feel more confident about picking up passengers from specific areas. I’m sure some drivers have ‘no-go’ areas for picking up passengers. By suggesting safe pick-up points, Uber may be trying to discourage drivers from avoiding certain neighbourhoods altogether.
But on a practical level, there are some interesting considerations. For example, if a passenger needs to walk through an unsafe area in order to get to a ‘safer area’, will the platform take this into consideration when optimising for safety? Is the platform optimising for the safety of the driver or the passenger? These two won’t always necessarily align.
Secondly, should drivers be allowed to see the safety scores of a location before agreeing to a pick-up? If this information is feeding into the Uber algorithm, what are the ethical implications of keeping this information hidden? What balance do we strike between “trusting in the platform” and giving information for humans to make decisions for themselves?
And then if safety scores are made visible to drivers, could we see a situation where drivers begin to more proactively avoid certain neighbourhoods, and in turn, passengers from ‘less safe’ areas are charged even more of a premium in order to attract drivers?
(ii) AR Uber Experience
This is an update to a filing last amended 2 years ago, but it’s an interesting UX change.
Uber are thinking about introducing an augmented reality experience when a driver and a passenger are within a threshold distance of each other.
When a passenger starts looking for a driver, the smartphone screen will shift to a live video view from the camera. Then on their screen, instructions will appear directing the passenger to their driver’s car, and when the car is in sight, AR graphics will highlight the correct car.
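The trigger itself is simple to sketch: keep comparing the two GPS fixes and flip to the camera view once they're close enough. The haversine formula and the 150 m threshold below are my own assumptions — the filing just says "a threshold distance".

```python
# Sketch of the AR trigger: switch the passenger's app from the map to
# the live camera view once driver and passenger are within a threshold
# distance. The 150 m value is an assumption for illustration.
from math import radians, sin, cos, asin, sqrt

THRESHOLD_METRES = 150  # assumed trigger distance

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two GPS fixes."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6_371_000 * asin(sqrt(a))  # Earth radius ~6,371 km

def should_show_ar_view(passenger_fix, driver_fix):
    """True once the two fixes are within the AR threshold."""
    return haversine_m(*passenger_fix, *driver_fix) <= THRESHOLD_METRES

# ~100 m apart -> AR view on; ~1 km apart -> stay on the map.
print(should_show_ar_view((51.5007, -0.1246), (51.5012, -0.1258)))  # True
print(should_show_ar_view((51.5007, -0.1246), (51.5097, -0.1246)))  # False
```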
This is interesting because it looks to minimise a pain-point that passengers and drivers alike experience when looking for each other, especially in areas with lots of cars (e.g. airports).
At present, when passengers are looking for their driver, the experience consists of passengers looking at the Uber map and trying to orient themselves in the correct direction. Then there may be a phone call or messages exchanged asking the driver for their exact location. And then there’s time lost as passengers check registration plates to tell their car apart from other candidates.
Magnified at scale, this is a lot of time being lost from passengers looking for their cars. Besides being a pain for passengers and drivers, this time lost ultimately impacts how many rides drivers can do, how quickly ride requests are fulfilled, and the overall liquidity of the marketplace.
Check out #005 PATENT DROP for another Uber patent application that looks to minimise driver waiting time.
Last week, Amazon updated a patent filing that hadn’t been touched in 2 years.
Amazon are looking to build more intuitive ways of searching for products - namely by mixing and matching the features that you want.
In the example of buying shoes, users can choose what type of shoe they’re looking for, and then begin to refine their search by selecting more specific features - such as the strap of the shoes, or the type of heel. This refinement could be done by users looking at examples of shoes and highlighting the features that they like the most.
The goal is to help users find items of clothing that are more suited to their tastes, in a visual way, as opposed to relying on textual filtering.
As well as searching through images on Amazon’s listings, the patent also describes being able to upload your own images of items that you like. So in theory, you could upload a photo of your favourite celebrity, find shoes similar to the ones they’re wearing, and further refine the search by highlighting the specific aspects of the shoes you like (or don’t like).
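The mix-and-match refinement described above can be sketched as filtering a catalogue on per-feature attributes. In Amazon's version those attributes would presumably come from computer vision; the toy catalogue and feature names here are invented for illustration.

```python
# Toy sketch of feature-based refinement: each item carries per-feature
# attributes, and each feature the user highlights narrows the results.
# Catalogue contents and feature names are made up for this example.

CATALOGUE = [
    {"id": "A", "type": "sandal", "strap": "ankle", "heel": "block"},
    {"id": "B", "type": "sandal", "strap": "t-bar", "heel": "block"},
    {"id": "C", "type": "sandal", "strap": "ankle", "heel": "stiletto"},
    {"id": "D", "type": "boot",   "strap": "none",  "heel": "block"},
]

def refine(catalogue, liked_features):
    """Keep items matching every feature the user has highlighted."""
    return [item["id"] for item in catalogue
            if all(item.get(k) == v for k, v in liked_features.items())]

# Start broad with a shoe type, then layer on the strap and heel choices.
print(refine(CATALOGUE, {"type": "sandal"}))                    # ['A', 'B', 'C']
print(refine(CATALOGUE, {"type": "sandal", "strap": "ankle"}))  # ['A', 'C']
print(refine(CATALOGUE, {"type": "sandal", "strap": "ankle",
                         "heel": "block"}))                     # ['A']
```

A production system would rank by visual similarity rather than exact attribute matches, but the interaction loop — pick an example, highlight a feature, narrow the set — is the same.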
The most interesting aspect of this patent is how Amazon is looking at leveraging its huge inventory (past and present) with computer vision to enable new ways of finding items for purchase. It remains to be seen whether Amazon are actually looking to implement this, or if this is just some exploratory work that won’t see the light of day.