In her recent keynote speech at the PACA conference, Maria Kessler talked about looking at new technologies and other industries. At the same conference, Robert Henson of Blend Images facilitated a seminar about semantic search. The seminar involved Google and Imense and looked at the various innovations in visual search and how they impact the industry. The panellists were:
- Imense: Ulrich Paquet, researcher and innovator at Imense, who developed automatic face recognition and is responsible for similar search
- Google: Matt Sitzman, project manager at Google, working on image search monetization
The 80-minute video consists of 45 minutes of presentations followed by questions. You can watch the video or read the summary below.
Google’s mission is to organize the world’s information and make it universally accessible and useful. The company sees this as a 300-year mission and is 10 years in, so still some way from solving the world’s problems.
Key stats: 300 million digital photos produced every day / 100 billion new images in 2009 / ½ trillion images in circulation. Hundreds of millions of image searches every day; billions of images indexed. This quantity enables Google to try new things.
How does it work? There are three ways to search for images on Google: 1) do a regular search on google.com; 2) universal search results, which pull images in from different sources; 3) search directly on images.google.com.
The number one focus is the end-user. Who are those people? Four examples: inspiration for a new hairstyle or tattoo / shopping for a light fixture / identifying a celebrity from a recent movie / finding a Halloween cake.
What are the technical aspects of this? Really starting to focus on computer vision / Getting information from images / Making computers “see”
- Object recognition
- Facial recognition
- Image similarity
He proceeds to do a demo at 10.00 mins: face restrict (understanding the pixels, only scratching the surface) / restricting to clip art or line drawings. He goes into explaining Google Labs, where people can try out new technology, at 13 mins. Similar search is explained at 14 mins. What are the weaknesses at Google? Boy in park with dog: not all images are relevant or high quality. The stock industry has high-quality images and the metadata to back them up (16 mins). Conceptual searches are also underwhelming at Google for now.
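The "similar search" demo rests on comparing low-level visual features between images. Neither Google nor Imense discloses its actual method in the talk; as a purely illustrative sketch, one classic, simple approach is to compare colour (or grayscale) histograms. The toy "images" and bin count below are assumptions for the example.

```python
# Illustrative sketch of image similarity via histogram comparison.
# NOT Google's or Imense's actual algorithm -- just one classic technique
# behind "find similar images" style features.

def histogram(pixels, bins=4):
    """Quantize 8-bit grayscale pixel values into a normalized histogram."""
    counts = [0] * bins
    for p in pixels:
        counts[min(p * bins // 256, bins - 1)] += 1
    total = len(pixels)
    return [c / total for c in counts]

def similarity(hist_a, hist_b):
    """Histogram intersection: 1.0 = identical distributions, 0.0 = disjoint."""
    return sum(min(a, b) for a, b in zip(hist_a, hist_b))

# Toy "images" as flat lists of grayscale pixel values (0-255).
dark_image   = [10, 20, 30, 40, 15, 25]
bright_image = [200, 210, 220, 230, 240, 250]
dark_twin    = [12, 22, 28, 38, 18, 24]

print(similarity(histogram(dark_image), histogram(dark_twin)))    # 1.0 -- similar
print(similarity(histogram(dark_image), histogram(bright_image))) # 0.0 -- dissimilar
```

A real system would use far richer features (colour in multiple channels, texture, shapes, faces), but the principle is the same: reduce each image to a feature vector, then rank by distance.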
What about monetization? Google uses the query to serve up ads through AdWords. How can these be made more visual? (18 mins). Thumbnails next to text ads are live now. Google Maps is explained at 19 mins and is another opportunity. Panoramio is a company Google acquired; they can now overlay its images on the map. This is a new way to browse images that has not been fully explored (20 mins).
Why does Google want to be different?
- The web is disruptive.
- Change is opportunity.
- Experiment, experiment, experiment.
The ones that ‘get it’ are the ones to succeed during periods of change.
- There are a lot of images in the world
- Users have a variety of needs
- New technologies are making search easier…but
- Search is far from done
- Ads should be useful, like content
Imense at 25 mins
The four services that Imense offers:
- Picturesearch launched last year at PACA
- Auto tagging
Semantic search. How far have we come and where are we going? How can it help users find the right image? What’s in a picture? What is a good query? Keywords are hit and miss: if a keyword hasn’t been assigned, you won’t find the image.
At 27 mins there is a technical explanation of the technology. Sketching an image to find it comes up, and he mentions Berkeley’s Blobworld at 29 mins. The core point: there is a huge gap between what you mean, what’s in the database, and getting from one to the other. Imense uses an ontology: it acts as glue; are we related or correlated? (33 mins). At 33.26 mins he gives the definition of an ontology. You can find more information about Imense in this interview with Tony Rowland here.
Applications of Ontology based search:
- Automated analysis and retrieval of un-annotated images
- Improved search of pre-annotated images
- New retrieval capabilities
- Automated image annotation
The most disruptive technology is number 4, automatically keywording images (43 mins).
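To make the "ontology as glue" idea concrete: an ontology lets a broad query match images that were only tagged with narrower terms. The sketch below is an illustrative toy, not Imense's actual system; the hierarchy, image names, and tags are all hypothetical.

```python
# Illustrative sketch of ontology-backed search (not Imense's actual system):
# an "is a" hierarchy lets a broad query like "animal" match images that
# were only tagged with narrower terms such as "poodle" or "cat".

ONTOLOGY = {              # child -> parent ("is a") relations, hypothetical
    "poodle": "dog",
    "dog": "animal",
    "cat": "animal",
    "animal": None,
}

IMAGES = {                # hypothetical pre-annotated image database
    "img1.jpg": {"poodle", "park"},
    "img2.jpg": {"cat", "sofa"},
    "img3.jpg": {"sunset", "beach"},
}

def ancestors(term):
    """All broader terms implied by a tag, walking up the hierarchy."""
    result = set()
    parent = ONTOLOGY.get(term)
    while parent:
        result.add(parent)
        parent = ONTOLOGY.get(parent)
    return result

def search(query):
    """Match images whose tags, or those tags' ancestors, include the query."""
    hits = []
    for name, tags in IMAGES.items():
        expanded = set(tags)
        for t in tags:
            expanded |= ancestors(t)
        if query in expanded:
            hits.append(name)
    return sorted(hits)

print(search("animal"))  # ['img1.jpg', 'img2.jpg'] -- neither was tagged "animal" directly
```

This is exactly the gap-bridging the talk describes: the searcher says "animal", the database only knows "poodle", and the ontology connects the two.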
Questions start at 45 mins with Jim Pickerell kicking things off. Here are a few selected questions:
- Will Google adopt the PicScout technology to point out where you can buy the image? They haven’t explored it yet but are interested in talking to PicScout.
- What is Google doing to rank search results? It looks at anything you can possibly imagine in an image, hundreds of signals, and then uses its “secret sauce” algorithm to surface the best results.
- Why is automated image annotation disruptive? Because you can cut up to 80% of your keywording time.
- What are the criteria for getting thumbnails in AdWords? You can take your products and upload them to Google Base, then attach your Base feed to your ad campaign. This works well for retailers who sell multiple products, for example. The beta is still early, with only a few thousand advertisers right now.
- Will Google have similar search for video? Video is trickier; they have no product offering today. YouTube has related videos, but this is done through user behaviour, not through information in the video itself.
- How about working with Google to get more education about copyright? Teaching users is important and Google would like to hear about the right type of message.
- How do you make images retrievable by Google? Allow robots to crawl the site and make sure images have the right titles and captions. As much information as possible will help.
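Since crawlers rely on titles, captions, and surrounding text to understand images, one practical check is to scan your pages for images that lack descriptive alt text. The helper below is a hypothetical example using Python's standard-library HTML parser, not a Google tool.

```python
# Illustrative sketch: flag <img> tags that lack descriptive alt text,
# since crawlers depend on titles, captions, and surrounding text to
# understand images. Hypothetical helper, not a Google tool.
from html.parser import HTMLParser

class ImgAltChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.missing = []   # src values of images without useful alt text

    def handle_starttag(self, tag, attrs):
        if tag != "img":
            return
        attrs = dict(attrs)
        alt = (attrs.get("alt") or "").strip()
        if not alt:
            self.missing.append(attrs.get("src", "<no src>"))

page = """
<html><body>
  <img src="dog.jpg" alt="A poodle playing in the park">
  <img src="photo123.jpg" alt="">
  <img src="banner.gif">
</body></html>
"""

checker = ImgAltChecker()
checker.feed(page)
print(checker.missing)  # ['photo123.jpg', 'banner.gif']
```

Running something like this across a site surfaces exactly the images that give a crawler nothing to go on.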