Collections SDK - finding stories to tell...
This is where the magic starts. The Collections SDK is a traveler poking around, finding stories to tell. It groups photos into "Collections", each of which tells a Story, so you can suggest books your users didn't even know they wanted!
Putting together a Story
The Collections SDK takes data from EXIF tags, meta-descriptors from the Curation SDK's analyses, iFACE SDK descriptors where available, and other phone sensors. It then groups relevant photo-sets into Collections, each with a Story or Theme.
From “Summer in Berlin” and “Me and Wife in New York last Fall” to “Cats”, “Dogs”, “Meghan and John”, or a book of flower close-ups: your users take photos of the most amazing things in the world, and the Collections SDK finds the thread that binds them. Here are the three most common types of Collections.
Person-based Collections
This feature requires face-recognition and person-clustering information to be provided to the Collections SDK. Our iFACE SDK supplies this, but the Collections SDK can also work with your own face-group information via our APIs. If you have manually sorted albums (say, you are a wedding or school-yearbook photographer), the Collections SDK can take that information and automatically craft stories from it.
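If you are supplying your own face groups, the data exchange can be pictured roughly as follows. This is an illustrative sketch only: the `CollectionsSDK` class, the `FaceGroup` shape, and the `add_face_group`/`photos_of` method names are hypothetical, not the actual SDK surface.

```python
# Hypothetical sketch of feeding your own face-group (person-cluster) data
# to the Collections SDK. All names here are assumptions for illustration.
from dataclasses import dataclass, field

@dataclass
class FaceGroup:
    """One person cluster: a label plus the photo IDs the person appears in."""
    person_id: str
    photo_ids: list = field(default_factory=list)

class CollectionsSDK:
    def __init__(self):
        self._face_groups = {}

    def add_face_group(self, group: FaceGroup):
        # Merge photo IDs if the same person is submitted more than once.
        existing = self._face_groups.setdefault(group.person_id, set())
        existing.update(group.photo_ids)

    def photos_of(self, person_id: str):
        return sorted(self._face_groups.get(person_id, set()))

sdk = CollectionsSDK()
sdk.add_face_group(FaceGroup("meghan", ["IMG_001", "IMG_007"]))
sdk.add_face_group(FaceGroup("meghan", ["IMG_007", "IMG_010"]))
print(sdk.photos_of("meghan"))  # duplicate photo IDs are merged
```

The same shape would carry a wedding photographer's manually sorted albums: each album becomes one `FaceGroup` per person.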
When used with our iFACE SDK, however, it becomes a very powerful engine for generating photobook stories using our proprietary Social-Net algorithm.
By analyzing the frequency and recency with which two or more persons appear together in the same photos and at the same events, we can predict their social relationship and “closeness”, then naturally group them into photo albums to optimize conversion.
We also provide APIs to capture user-submitted ground-truth information on relationships.
Using these data points, we then automatically propose Collections based on Persons.
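The frequency-and-recency scoring above can be sketched as a simple decayed-count model. This is an assumption about how such a score might work, not the actual Social-Net algorithm; the `closeness` function and the half-life constant are invented for the example.

```python
# Illustrative "closeness" score for a pair of people: each photo in which
# both appear contributes 1, decayed exponentially by how long ago it was
# taken. Frequent AND recent co-appearances therefore score highest.
# The 180-day half-life is an assumed constant, not an SDK value.
import math
import time

def closeness(co_occurrence_timestamps, now=None, half_life_days=180.0):
    now = now or time.time()
    decay = math.log(2) / (half_life_days * 86400)  # per-second decay rate
    return sum(math.exp(-decay * (now - t)) for t in co_occurrence_timestamps)

now = time.time()
recent_pair = [now - 86400 * d for d in (1, 3, 10)]      # three recent photos
stale_pair  = [now - 86400 * d for d in (400, 500, 600)] # three old photos
# A recently co-occurring pair scores higher than an equally frequent stale one.
print(closeness(recent_pair, now) > closeness(stale_pair, now))
```

Ranking pairs by such a score, and cutting at a threshold, is one natural way to decide which Persons belong in the same album.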
Time and Location-based Collections
These are the fastest Collections to generate, since they require only the time-stamps and location information typically found in EXIF tags. Using proprietary behavior-predictive algorithms, we group photos by time and location proximity.
The algorithm first determines the user’s “home” base city and typical travel patterns, in order to determine how far they must go before it’s “a trip”.
The definition of a “trip” is subjective and relative: consider someone who lives in San Francisco and commutes to San Jose (120 km) every day for work, versus someone in tiny Singapore where the typical commute is 15 km. Our algorithm figures it all out. Even if the user moves from compact Ho Chi Minh City to sprawling Los Angeles, the algorithm will automatically re-calibrate over time.
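One way to picture this adaptive calibration is below. The calibration rule here (median everyday distance from home, with a floor and a multiplier) is a deliberately simplified assumption, not the SDK’s actual behavior-predictive algorithm; `trip_threshold_km` and its constants are invented for the example.

```python
# Hedged sketch of per-user "trip" calibration: a user whose everyday photos
# span 100 km needs a larger trip radius than one whose photos span 10 km.
import math
from statistics import median

def haversine_km(a, b):
    """Great-circle distance in km between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    h = math.sin(dlat / 2) ** 2 + \
        math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2
    return 2 * 6371 * math.asin(math.sqrt(h))

def trip_threshold_km(photo_locations, home):
    # Floor of 50 km and 3x margin are assumed constants for illustration.
    everyday = median(haversine_km(p, home) for p in photo_locations)
    return max(50.0, 3 * everyday)

home = (37.7749, -122.4194)                      # San Francisco
everyday = [(37.77, -122.41), (37.33, -121.89)]  # around home, and San Jose
threshold = trip_threshold_km(everyday, home)
print(haversine_km(home, (34.05, -118.24)) > threshold)  # LA counts as a trip
```

Note that only the derived threshold needs to persist; the raw coordinates it was calibrated from can be discarded, which is consistent with the privacy point below.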
In case you’re wondering: “doesn’t this infringe on privacy?”
Because the entire SDK runs natively on the device, nothing is stored or transmitted to anyone. Your app won’t know where the user’s “home” is, and neither will our SDK. We calculate only an area, expressed as a “threshold”, which is used to determine whether a significant trip has occurred.
Theme-based Collections
These are Collections based on specific themes, powered by our AI image-recognition models. They could be wedding photos, night shots, an anthology of flowers, birds, sports, or any other recurring themes and patterns we detect in the phone gallery. Get on our newsletter to stay abreast of our developments in this area!
(This is currently in Beta and not yet available to evaluate.)