Tuesday, May 12, 2009
The mini surprised me. I had not played with one before, since our local Apple store would have to order one in for me. It was fast. It did not bog down playing two movies at once, one in HD and one in SD. Each application opened quickly, ran smoothly and was very responsive, even with iTunes playing music and the HD movie running. All on 2 GB of RAM and a 2.0 GHz processor.
Something on this Mac was configured differently: the mouse had a right click. I'm not sure how, since the whole top of the mouse moves as one piece, but pushing on the left side registers a left click and the right side a right click.
I think I may bite the bullet and finally buy a Mac. Only a couple of decisions left to make. I need to decide if I want the wireless Apple keyboard, or if I should go with the Logitech diNovo keyboard with the built-in click wheel. And do I want to put in the 4 GB of RAM right away, or wait until I need a speed boost? Apple RAM is a rip-off, but the mini is hard to open.
I'm heading out now to try the Burger Joint at Le Parker Meridien, then maybe off to the Apple store to look at the mini again.
UIE collects 2,000 pieces of data per participant. In their studies they found that the #1 predictor of success was the number of pages the user visited between the home page and the sales cart: fewer pages led to greater success. The best way to improve scent is to build the trigger words into the link titles or the description of the tool. To find the trigger words, examine the search logs; those words are the words the users are scanning the pages to find. Referring again to his studies, he found that no one went straight to search. Everyone started by scanning the page, looking for categories or links that contained the trigger words, and only went to search when nothing could be found.
An interesting point he made: if you can follow the user's click path and know when they leave a page through search, then you know where the trigger words dried up. Adding a link on that page for that search term will increase success for that one trigger word.
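The click-path idea above can be sketched in a few lines. This is a minimal, hypothetical example (the session format and page names are my own invention, not from the talk): count the page a user was on at the moment they gave up scanning and turned to search, grouped by the term they searched for.

```python
from collections import Counter

# Hypothetical clickstream: each session is a list of (event_type, value) tuples.
sessions = [
    [("page", "/products"), ("page", "/products/cameras"), ("search", "slr lenses")],
    [("page", "/products/cameras"), ("search", "slr lenses")],
    [("page", "/home"), ("search", "returns")],
]

# Count the (page, search term) pairs where a page view was immediately
# followed by a search -- i.e., where the trigger words dried up.
exits = Counter()
for session in sessions:
    for prev, curr in zip(session, session[1:]):
        if prev[0] == "page" and curr[0] == "search":
            exits[(prev[1], curr[1])] += 1

# Pages where the same term repeatedly dries up are candidates for
# adding a direct link for that trigger word.
print(exits.most_common(2))
```

Here `/products/cameras` would surface as the page that needs an "slr lenses" link added.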
Search works well for known-item location. So it is easy to find "The Princess Bride" or "Tom Clancy". Search does not work well for less concrete items like "an inexpensive yet high quality SLR" or "novels written by Nobel Prize for Literature authors". To solve this he recommends using editorial capability to improve content findability.
Looking at 77 installed versions of search engines on e-commerce sites, the same vendors that were the best were also the worst. The difference is the implementation, not the technology. Focus on doing a good job configuring the installation.
They defined social search as social tagging.
You do it because it enhances the findability of an object.
There is a psychological burden to asking a question within an organization: "Didn't we hire you to know that?" There is also a cognitive load to asking and posting within an organization, as people take the time to format the question well. People don't share because they don't want to tip their hand.
The Steve museum project explores leveraging social tagging to help identify and find art. They built the tools internally, but examples are on the web at www.steve.museum. They included a statistical analysis tool set to help tell when tags were useful, and found that the tags were useful 80% of the time.
In a study with the ACM, they found that when authors post tags themselves, the tags are precise but not very broad. They recycled the authors' tags, added them to a pseudo-controlled vocabulary, and found more use of the tags that were created and more tagging on the documents as a whole. Providing the leverage to reuse a tag increased the usage of tags: people saw the reuse as a value, giving them a reason to tag in the first place.
In the Steve museum project, they ran a two-part study where some users knew that they were helping an organization and others did not see a connection to an organization. The users who knew, and saw the value they were adding, were more than twice as likely to help.
ConnectBeam seems to have closed. The panel discussed it as a good product, but there do not seem to be any new releases, and people have left the firm.
"Knowledge Plaza is interesting." No other data given on the product.
Aardvark - ask a question, be routed to someone who can answer it. Most of the Q&A is private.
Hunch is a system to find previous questions.
Panel sees these tools as a way to pull tacit knowledge from users.
Bottom up, he suggests, is working with the information and asking questions. Do not get caught up in specific statistical analysis methods; instead, just look at the metrics and see what questions you have. Examples of these are:
- What are the most frequent search queries?
- What are the most frequent results clicked on?
- Which pages are referring the most queries?
- Which pages are launching the most zero-hit queries?
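The four bottom-up questions above can all be answered with simple counting over a search log. Here is a minimal sketch, assuming a hypothetical log format of (query, referring page, clicked result, hit count) tuples; real field names will depend on your search engine:

```python
from collections import Counter

# Hypothetical search-log records: (query, referring_page, clicked_result, hit_count)
log = [
    ("expense report", "/home",    "/finance/expenses", 42),
    ("expense report", "/finance", "/finance/expenses", 42),
    ("vpn setup",      "/home",    "/it/vpn",           17),
    ("vpn setup",      "/home",    "/it/vpn",           17),
    ("travel polcy",   "/hr",      None,                 0),  # misspelled query, zero hits
]

top_queries     = Counter(q for q, _, _, _ in log)           # most frequent queries
top_clicks      = Counter(c for _, _, c, _ in log if c)      # most clicked results
referring_pages = Counter(r for _, r, _, _ in log)           # pages referring queries
zero_hit_pages  = Counter(r for _, r, _, n in log if n == 0) # pages launching 0-hit queries

print(top_queries.most_common(3))
print(referring_pages.most_common(3))
print(zero_hit_pages.most_common(3))
```

Each Counter answers one of the four questions directly; in practice you would stream the log from disk rather than hold it in a list.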
He suggested that you should look at your search results to create better metadata. This involves looking at your top 50 to 100 queries, developing a set of categories these queries might fall into, then seeing what percentage of your queries falls into each category. Once you have the values for the categories, you can also look at the attributes.
You can also scan your data for the content types people are searching for. If you track this data over time, you begin to see the seasonality of the queries and the adaptations the enterprise makes to its environment.
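The categorization step above might look like the following sketch. The category keyword map and query counts are hypothetical placeholders; in practice the categories come from manually reviewing your own top 50 to 100 queries:

```python
from collections import Counter

# Hypothetical category keywords, derived from reviewing top queries.
categories = {
    "hr":      ["benefits", "vacation", "payroll"],
    "it":      ["vpn", "password", "email"],
    "finance": ["expense", "invoice", "budget"],
}

# Hypothetical query frequencies pulled from the search log.
query_counts = {"vpn setup": 120, "expense report": 95, "reset password": 80,
                "vacation policy": 60, "cafeteria menu": 15}

# Assign each query to the first category whose keyword it contains.
totals = Counter()
for query, count in query_counts.items():
    matched = next((cat for cat, words in categories.items()
                    if any(w in query for w in words)), "other")
    totals[matched] += count

# Report what percentage of all queries falls into each category.
grand_total = sum(query_counts.values())
for cat, count in totals.most_common():
    print(f"{cat}: {100 * count / grand_total:.0f}%")
```

A large "other" bucket is itself a finding: it signals queries your current metadata categories do not cover.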
He suggests digging deeply into the failure states, specifically for the pages within the site that are not working. Look for where the *content* is failing as well as where the search is failing.
His last suggestion was to leverage your best bets to become an A-Z listing for the site. He showed an example from the University of Michigan where the list paired the terms for which best bets were created with the best bets themselves. It seems a simple, quick win.
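The quick win really is this simple: if best bets already live in a term-to-URL mapping, the A-Z listing is just that mapping sorted alphabetically. A tiny sketch with made-up terms and paths:

```python
# Hypothetical best-bets mapping: trigger term -> recommended URL.
best_bets = {
    "parking":     "/facilities/parking",
    "admissions":  "/apply",
    "transcripts": "/registrar/transcripts",
}

# Sort the trigger terms alphabetically to produce the A-Z listing.
for term in sorted(best_bets):
    print(f"{term} -> {best_bets[term]}")
```

The same data that powers search promotions doubles as a browsable index, with no extra curation.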
Top down is the process of thinking about the larger business issues and seeing what the metrics can tell us about them. So if bottom up is thinking about the most common queries, top down asks "can users find the items they are looking for with these top queries?" His actual example was e-commerce conversion for the top queries.
The slide deck will be available at SlideShare.net, but here is a similar one from a few years ago.
The methodology starts with establishing the goal of the web site, its users, and its content. They then use this information to create a taxonomy, a labeling and naming structure, and the navigation of the site. This process is similar to the one outlined at http://www.finance.gov.au/e-government/better-practice-and-collaboration/better-practice-checklists/information-architecture.html. She referenced this checklist as a good example for anyone to follow.
The goal of the project was to enable knowledge sharing across the local governments. This would allow a person in Chichester to learn from a similar project completed in Yorkshire. In addition, the groups would create a sense of community across the UK on various topics of interest. The tools provide a blog, a wiki, a library, a discussion forum, and search. There is no governance of the social networking application; communities have formed around the Cornish language as well as around road improvements.
The search implementation is interesting. It allows a community to identify the top 20 web sites for the community. It leverages Exalead's external, Google-like search engine to then search those web sites via a typical search box. From the results screen the user can further narrow the search by eliminating any of the web sites from the results. This gives the community the ability to dynamically create a custom domain of public web sites. This is similar to the functions EY has embedded within the Community HomeSpace product for internal search, but focused instead on the greater world wide web.
They are now able to leverage the social intelligence of the entire group, the "wisdom of the crowd". The feature has been taken up by 15% of the communities and the self-reporting is very positive. Overall they find that the strength of a community reflects the skill of its community manager. A good community manager, one who keeps the conversation alive, ensures that discussion posts are followed up on, keeps the content added to the site fresh, and "weeds and feeds the garden", will have the strongest community.
This presentation was an overview of the Onomi product development and deployment at MITRE.
The initial project goal was to understand the use of social bookmarking within an enterprise. Specific sub-goals were social bookmarking's ability to provide an environment for knowledge sharing, to feed expertise finding or user profiling, to form networks, and to enhance the value of information retrieval.
Features of Onomi:
- Based on an open source product called Scuttle
- Similar interface to del.icio.us
- People create the bookmarks with a title, URL, description
- Bookmarks can be kept private
- RSS feeds are available
- It is integrated with an e-mail subscription service
- Incorporates broken URL checking
- Incorporates a people network section
- Related tags
- Related users
- 83% of the tags are from external web sites
- 86% of the tags are shared
- 50% of the company uses the software
- 23,676 bookmarks total in the system
- 130,310 total tags
- 16,371 unique tags
The search engine indexes all of the public bookmarks in the system and presents the bookmarks as search results. Each result indicates which tags have been applied to the bookmark. The integration is done via XML feeds. It is still not 100% complete; a separate capability to search only tags has not been created. They integrate the bookmarks and tags into a separate search, a la OneBox, on some of their different enterprise search experiences. They plan to leverage the items that are bookmarked for relevancy improvements in a future release. The bookmark search results include presence awareness to allow users to IM or call the author of the bookmark.
An interesting capability they have is around best bets. They enable anyone to identify a bookmark as a potential best bet, which automatically populates a searchable database of potential best bets. Results of the search against this database are presented similarly to our EY Professionals Recommend. It differs from our approach of specifically identifying promotions for key words.