We’re excited to announce that our software, which incorporates word embeddings, can run on a Raspberry Pi, a device roughly equivalent in computing power to an iPhone 5. In trials run earlier this month, we demonstrated that, using only a Raspberry Pi and about 100 MB of RAM, our AI software can process four novels’ worth of text in around eight minutes.
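To give a feel for what working within that kind of memory budget looks like, here is a minimal sketch of streaming a large corpus line by line while keeping only a bounded vocabulary table in memory. This is purely illustrative, not Luminoso’s actual pipeline; the file name and limits are assumptions.

```python
# Illustrative sketch: process a large corpus within a fixed memory budget
# by streaming it line by line. Not Luminoso's pipeline; file name and
# vocabulary cap are assumptions for the example.
from collections import Counter

def stream_word_counts(path, max_vocab=50000):
    """Count word frequencies without holding the whole corpus in memory."""
    counts = Counter()
    with open(path, encoding="utf-8") as f:
        for line in f:
            counts.update(line.lower().split())
            # Periodically prune rare words so the table stays bounded.
            if len(counts) > 2 * max_vocab:
                counts = Counter(dict(counts.most_common(max_vocab)))
    return Counter(dict(counts.most_common(max_vocab)))

# Example usage (hypothetical corpus file):
# counts = stream_word_counts("four_novels.txt")
# print(counts.most_common(10))
```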
None of the data required to do this ever leaves the device, and it all happens using no more processing power than the phone itself can deliver. This capability has many implications, notably as an OEM offering for device manufacturers who want to run an intelligent agent or semantic search on the phone itself, without the privacy concerns that come with connecting to the cloud.
The ability to run natural language understanding software locally on a phone would give that device capabilities it currently lacks, specifically: remembering all of the unique language a person uses on a regular basis, from everyday slang and workplace lingo to more private information one wouldn’t want shared, such as their children’s names. A phone could understand and store this information quickly and easily, without ever shipping data to the cloud. Device manufacturers could use this capability to locally customize a virtual assistant, provide conceptual search functionality, prioritize emails and messages, and tailor the entire mobile experience to an individual user’s linguistic environment.
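As a rough illustration of what on-device conceptual search can look like with word embeddings, here is a small sketch that ranks snippets by cosine similarity to a query, with no network connection involved. The embedding file format, function names, and scoring are assumptions made for the example, not Luminoso’s actual API.

```python
# Minimal sketch of on-device conceptual search with word embeddings.
# The embedding file, function names, and scoring are illustrative
# assumptions, not Luminoso's actual API.
import numpy as np

def load_embeddings(path):
    """Load a small word2vec-style text file: one word and its vector per line."""
    vectors = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            if len(parts) < 3:
                continue  # skip header or malformed lines
            word, values = parts[0], np.array(parts[1:], dtype=np.float32)
            vectors[word] = values / np.linalg.norm(values)
    return vectors

def text_vector(text, vectors):
    """Average the normalized vectors of the words we recognize."""
    words = [w for w in text.lower().split() if w in vectors]
    if not words:
        return None
    v = np.mean([vectors[w] for w in words], axis=0)
    return v / np.linalg.norm(v)

def conceptual_search(query, documents, vectors):
    """Rank documents by cosine similarity to the query, entirely locally."""
    q = text_vector(query, vectors)
    scored = []
    for doc in documents:
        d = text_vector(doc, vectors)
        if q is not None and d is not None:
            scored.append((float(np.dot(q, d)), doc))
    return sorted(scored, reverse=True)

# Example usage (hypothetical embedding file):
# vectors = load_embeddings("embeddings.txt")
# print(conceptual_search("soccer match",
#                         ["the football game last night", "quarterly budget report"],
#                         vectors))
```

Because the vectors and the documents both live on the device, a match like “soccer match” finding “the football game last night” never requires shipping the user’s text to a server.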
“There’s a lot of tension right now around the privacy of our devices, the security of people’s info in the cloud, and who gets to look at it to do what,” said Catherine Havasi, Luminoso CEO and co-founder. “Google, for example, probably has to read your email to do anything really intelligent with it. If these capabilities can be brought to end users’ phones, we can actually open up a broad range of capabilities that honor people’s privacy and decrease device manufacturers’ reliance on cloud providers, while still giving an amazing user experience.”