Enterprise Search and Usability
Tuesday, December 10, 2013
Keynote from Enterprise Search Europe
This is the keynote I gave in London in May. I think the concept of having a vision for how you will improve search is important. You can see in this presentation my perspective of having a measured approach, reliant on content, that is simple and easy for the user to use.
Tuesday, June 4, 2013
Tuesday, June 12, 2012
Enterprise Search Summit 2012
We are not alone. Semantic web technology is moving away from cutting edge and towards mainstream. Mobile search is key for intranets to deliver value.
Tuesday, April 26, 2011
Why Angry Birds is so successful and popular: a cognitive teardown of the user experience
This is a great way to think about products and how to engage users through system and visual design. The article will require a few minutes to read but is worth the investment.
http://www.mauronewmedia.com/blog/2011/02/why-angry-birds-is-so-successful-a-cognitive-teardown-of-the-user-experience/
Tuesday, March 22, 2011
Enterprise Search: what is it, why is it important, how do we measure it?
Knowledge management is the process of converting tacit information into explicit information.
- This incorporates the concepts of content creation, content storage and content retrieval.
- This embraces the ideas of content strategy.
- Content strategy is the process of determining the important content within an organization.
- Content is format agnostic; it includes text, speech, audio, video, and other media.
- Acquisition is the process of bringing in new content.
- Curation is the process of applying content strategy at the tactical level.
- Findability is a descriptive term for enabling the finding of content, via navigation and search.
The goal of information retrieval is to provide the right document to the right person at the right time.
- Right document might depend on the role of the user, the application launching the query, the profile of the user (area / subarea / country / city, the industry, service line / sub service line, service) or the time of the year.
- Right person might not be the user launching the query.
- Right time may be when they ask, before they ask, as it is created, when connected, or when on other devices.
Right document must be on the first page of results.
Right document must be visible to the user as the right document through a good title, a good summary, and a clear relationship between the document and the user's information need.
- This requires a relationship between information retrieval and content storage to identify the right data for the title, summary, keywords, and other metadata. This includes incorporating the right fields in the storage area and the concept of a connector between the search engine and the repository.
- This requires a relationship between information retrieval and content creation to ensure content that answers the user's information need exists, and is findable. This is sometimes known as enterprise search engine optimization.
- This requires a relationship between information retrieval and content strategy to ensure the best content is found. This could be relevance adjustments, query cooking, best bets, or any other tool to ensure the right content is moved to the top of the list.
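To make that last idea concrete, here is a minimal sketch of a promotion ("best bets") layer placed in front of a search engine. The table, function name, and data are hypothetical illustrations, not any particular product's API:

# Minimal sketch of a "best bets" promotion layer in front of a search engine.
# The table, function name, and data are invented for illustration.

BEST_BETS = {
    # query -> documents an expert has promoted for that query
    "expense policy": ["Travel and Expense Policy 2013"],
    "timesheet": ["How to submit your timesheet"],
}

def search_with_promotions(query, engine_results):
    """Return promoted documents first, then the engine's own ranking."""
    promoted = BEST_BETS.get(query.strip().lower(), [])
    # Keep engine results that were not already promoted, preserving their order.
    remainder = [doc for doc in engine_results if doc not in promoted]
    return promoted + remainder

if __name__ == "__main__":
    results = ["Expense report FAQ", "Travel and Expense Policy 2013", "Old policy (2009)"]
    print(search_with_promotions("Expense Policy", results))

The same shape works for query cooking: rewrite or expand the query before it reaches the engine, rather than reordering what comes back.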
Success at providing the right document to the right person at the right time can be measured by:
- Calculations of precision and recall leveraging an expert assessment of content and query.
- Calculations of mean reciprocal rank of the best document for a query, leveraging an expert assessment of content.
- Evaluation of frequent query results by users (70% of the first 10 results being “relevant” is seen as good).
- Penetration rate of user population (indirect measure, should be ~30%).
- Feedback from end users via surveys, feedback forms, interviews, etcetera. Feedback from primary stakeholders.
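As a rough illustration of the first two measures, here is a minimal sketch of precision, recall, and mean reciprocal rank computed against expert relevance judgments. The data and function names are invented for the example:

# Minimal sketch of precision@k, recall, and mean reciprocal rank (MRR)
# computed against expert relevance judgments. Data is invented for illustration.

def precision_at_k(ranked, relevant, k=10):
    top = ranked[:k]
    return sum(1 for doc in top if doc in relevant) / float(k)

def recall(ranked, relevant):
    return sum(1 for doc in ranked if doc in relevant) / float(len(relevant))

def mean_reciprocal_rank(runs):
    """runs: list of (ranked_results, best_document) pairs, one per query."""
    total = 0.0
    for ranked, best in runs:
        rank = ranked.index(best) + 1 if best in ranked else None
        total += 1.0 / rank if rank else 0.0
    return total / len(runs)

if __name__ == "__main__":
    ranked = ["d1", "d2", "d3", "d4"]
    print(precision_at_k(ranked, relevant={"d2", "d4"}, k=4))      # 0.5
    print(recall(ranked, relevant={"d2", "d5"}))                   # 0.5
    print(mean_reciprocal_rank([(ranked, "d2"), (ranked, "d1")]))  # (1/2 + 1/1) / 2 = 0.75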
Thursday, September 9, 2010
Google changes the search landscape, again
I like how this new feature reduces the feedback loop in search. I particularly like that the feedback you are reacting to is the search engine results page, not just the suggested search terms from type-ahead. Some bloggers are saying this makes SEO irrelevant. I disagree. For content creators, this means it is even more important to have your good content at the top of the search results for the right key terms and phrases. I am more likely to try different phrases and searches, since results are "instant". I am less likely to click on a result that does not have a strong "scent of information". A new result is only a letter away; why try something that does not appear correct? Having good content, content that looks correct to the user, has become even more important. Ensuring that this content is found by the user has become harder. The SEO challenge has changed, and increased in difficulty, but it is *not* irrelevant. This makes content even more important to your search results, not less.
What it does seem to affect significantly is SEM. My cost per impression just changed. Google will be rendering my ad more often, but users will be clicking on the ad less often - this should mean that I get more impressions for my cost, assuming I am paying per click. I think it makes the top line of SEM better - I, as an advertiser, get more value for my money, assuming exposure offers some value. If my only value comes from clicks, the top line stays the same.
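A toy calculation, with invented numbers, of why the cost side moves while pay-per-click spend stays flat:

# Toy numbers, purely illustrative: more impressions at the same click spend
# lowers the effective cost per impression when you pay per click.
clicks, cost_per_click = 100, 2.00                    # spend is driven by clicks
impressions_before, impressions_after = 5_000, 8_000  # Instant renders the ad more often

spend = clicks * cost_per_click             # $200 either way
print(spend / impressions_before * 1000)    # effective CPM before: $40.00
print(spend / impressions_after * 1000)     # effective CPM after:  $25.00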
What I like about Google Instant is the fact that this leverages Google's traditional strength, SPEED. It does it in a way that Microsoft will have trouble responding to - due to the infrastructure costs that speed requires in terms of local servers to reduce response time. Google has already made that investment. I do not think Microsoft has to the same extent. This innovation leverages Google's massive dataset of user behavior as well. It puts the work on the computer, and not the user - which is always the right place to put it.
I'm not sure how much this will impact the enterprise search arena, however. Most enterprises are unwilling to support the infrastructure needed for millisecond response time in the search results page, both in terms of network and in terms of hardware. I wonder if the GSA is going to support this feature, and if so what they will specify for the network and hardware requirements.
Monday, November 16, 2009
Wednesday, July 1, 2009
Business value of social networking tools
Until I go to New York, I cannot understand the difference in scale. Once there, I see many different distractions - shows, plays, clubs, museums, what have you. But there is also much more opportunity there. I think social networking tools are like New York - if you are there, you can't understand why anyone would not be there. If you are not there, you really can't see why anyone would want to be there.
Yes, there are a lot of non-work-related things that happen on these tools. There are a lot of non-work-related things that happen in the office, from the water cooler conversations to the quick coffee run. But if you are not taking part in the conversation, you will miss the serendipitous moments that happen because you are part of the conversation. Like when a water cooler conversation leads to a new idea or a shorter process. Similarly there are things that happen on Twitter or Facebook that will add value.
This is only a half formed analogy, comments are welcome and needed. Anyway, I need to get back to work.
Tuesday, May 12, 2009
The burger joint and the mini
The mini surprised me. I have not played with one before, since our local Apple store would have to order one for me. It was fast. It did not bog down playing two movies, one in HD and one in SD. Each application opened quickly, ran smoothly and was very responsive. Even with iTunes playing music and the HD movie running. All on 2 GB of RAM and a 2.0 GHz processor.
Something on this Mac was configured differently, and it had a right click on the mouse. Not sure how, since the whole top moves, but if you push on the left it reads a left click, right reads a right click.
I think I may bite the bullet and finally buy a Mac. Only a couple decisions left to make. I need to decide if I want the wireless Apple keyboard, or if I should go with the Logitech diNovo keyboard with the built-in click wheel. And, do I want to put in the 4 GB right away or wait till I need a speed boost? Apple RAM is a rip-off, but the thing is hard to open.
Enterprise Search Summit 2009
I'm heading out now to try the Burger Joint at Le Parker Meridian, then maybe off to the Apple store to look at the Mini again.
Search, Scent and the Happiness of Pursuit
UIE collects 2000 pieces of data per participant. In their studies they found that the #1 predictor of success was the # of pages the user visited between the home page and the sales cart. Fewer pages led to greater success. The best way to improve scent is to build the trigger words into the link titles, or the description of the tool. To find the trigger words, examine the search logs - those words are the words the users are scanning the pages to find. Referring again to his studies, he found that no one only went to search. Everyone started by scanning the page, looking for categories or links that contained the trigger words. They only went to search when nothing could be found.
An interesting point he made: if you can follow the user's click path and know when they leave the page through search, then you can know when the trigger words dried up. Adding a link on that page for that search term will increase success for that one trigger word.
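A minimal sketch of that analysis; the log format and field names are assumptions for illustration, not anything shown in his talk:

# Minimal sketch: from a click/search log, find the pages users leave via search
# and the terms they typed there. Field names and the log format are invented.
from collections import Counter, defaultdict

log = [
    # (page_the_user_was_on, query_they_typed_when_scanning_failed)
    ("/benefits", "dental plan"),
    ("/benefits", "dental plan"),
    ("/benefits", "vision coverage"),
    ("/travel", "per diem"),
]

exits = defaultdict(Counter)
for page, query in log:
    exits[page][query] += 1

for page, queries in exits.items():
    term, count = queries.most_common(1)[0]
    # A candidate trigger word to surface as a visible link on that page.
    print(f"{page}: users most often left via search for '{term}' ({count} times)")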
Search works well for known item location. So it is easy to find "The Princess Bride" or "Tom Clancy". Search does not work well for less concrete items like "an inexpensive yet high quality SLR" or "Novels written by Nobel Prize for Literature authors". To solve this issue he recommends using editorial capability to improve content findability.
Looking at 77 installed versions of search engines on e-commerce sites, the same vendors that were the best were also the worst. The difference is the implementation, not the technology. Focus on doing a good job on configuring the installation.
Emergent Social Search Experiences
They defined social search as social tagging.
You do it because it enhances the findability of an object.
There is a psychological burden of asking a question within an organization. "Didn't we hire you to know that?" There is also a cognitive load on asking and posting within an organization as people take the time to format the question well. People don't share because they don't want to tip their hand.
The Steve museum project explores leveraging social tagging to help identify and find art. They built the tools internally, but examples are on the web at www.steve.museum.com. They included a statistical analysis tool set to help tell when tags were useful. They found that the tags were useful 80% of the time.
In a study with ACM, they found that when authors post tags themselves, the tags are precise but not very broad. They recycled the authors' tags, added them to a pseudo-controlled vocabulary, and found more use of the tags that were created, and more tagging on the documents as a whole. Providing the leverage to reuse the tags increased their usage. People saw the reuse as a value, giving them a reason to tag in the first place.
In the Steve museum project, they had a two-part study where some users knew that they were helping an organization and others did not see a connection to an organization. The users who knew and saw the value they were adding were more than twice as likely to help.
Vendors:
ConnectBeam seems to have closed. They discussed it as a good product, but there do not seem to be any new products, and people have left the firm.
"Knowledge Plaza is interesting." No other data given on the product.
Aardvark - ask a question, be routed to someone to answer it. Most of the Q&A is private.
Hunch is a system to find previous questions.
Panel sees these tools as a way to pull tacit knowledge from users.
Search Analytics
Bottom up, he suggests, is working with the information and asking questions. Do not get caught up in specific statistical analysis methods, instead just look at the metrics and see what questions you have. Examples of these are:
- What are the most frequent search queries?
- What are the most frequent results clicked on?
- Which pages are referring the most queries?
- Which pages are launching the most 0 hit queries?
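A minimal sketch of answering those four questions from a raw query log; the log format and field names here are assumptions for illustration, not from the talk:

# Minimal sketch of the "bottom up" questions answered from a query log.
# The log format (query, referrer, hits, clicked) is an assumption.
from collections import Counter

log = [
    {"query": "expense policy", "referrer": "/home",    "hits": 42, "clicked": "/policies/expense"},
    {"query": "expense policy", "referrer": "/finance", "hits": 42, "clicked": "/policies/expense"},
    {"query": "xyz form 99",    "referrer": "/home",    "hits": 0,  "clicked": None},
]

print(Counter(r["query"] for r in log).most_common(10))                        # most frequent queries
print(Counter(r["clicked"] for r in log if r["clicked"]).most_common(10))      # most clicked results
print(Counter(r["referrer"] for r in log).most_common(10))                     # pages referring the most queries
print(Counter(r["referrer"] for r in log if r["hits"] == 0).most_common(10))   # pages launching the most 0-hit queries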
He suggested that you should look at your search results to create better metadata. This involves looking at your top 50 to 100 queries, developing a set of categories that these queries might fall into, then seeing what percentage of your queries fall into each category. Once you have the values for the categories, you can also look at the attributes.
You can also scan your data for content types that people are searching for. If you track this data over time, you begin to see the seasonality of the queries and the adaptations that the enterprise makes to its environment.
He suggests digging deeply into the failure states, specifically for the pages within the site that are not working. Look for where the *content* is failing as well as where the search is failing.
His last suggestion was to leverage your best bets to become an A-Z listing for the site. He showed an example from the University of Michigan where the list was of the terms for which best bets were created, paired with the best bets themselves. Seems a simple, quick win.
Top Down is the process of thinking about the larger business issues and seeing what the metrics can tell us about them. So if the bottom up is thinking about the most common queries, top down would be the question "can users find the items they are looking for, with these top queries?" His actual example was e-commerce conversion for the top queries.
The slide deck will be available at SlideShare.net, but here is a similar one from a few years ago
Enhancing Findability and Usability through taxonomy and search
The methodology starts with establishing the goal of the web site, its users, and its content. They then use this information to create a taxonomy, a labeling and naming structure, as well as the navigation of the site. This process is similar to the one outlined at http://www.finance.gov.au/e-government/better-practice-and-collaboration/better-practice-checklists/information-architecture.html. She referenced this checklist as a good example for anyone to follow.
Social Search for Knowledge Sharing
The goal of the project was to enable knowledge sharing across the local governments. This would allow a person in Chichester to learn from a similar project completed in Yorkshire. In addition, the groups would create a sense of community across the UK on various topics of interest. The tools provide a blog, a wiki, a library, a discussion forum, and search. There is no governance of the social networking application, communities have formed around the Cornish language as well as around road improvements.
The search implementation is interesting. It allows a community to identify the top 20 web sites for the community. It leverages Exalead's external, Google-like search engine to then search those web sites via a typical search box. From the results screen the user can further narrow the search by eliminating any of the web sites from the results. This gives the community the ability to dynamically create a custom domain of public web sites. This is similar to the functions EY has embedded within the Community HomeSpace product for internal search, but focused instead on the greater world wide web.
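A rough sketch of the underlying idea: results are restricted to a community-curated site list, and a user can exclude sites to narrow further. The site names, result data, and function name are invented for illustration, not Exalead's API:

# Rough sketch of a community-scoped search domain: results are restricted to the
# community's chosen sites, and a user can exclude sites to narrow further.
COMMUNITY_SITES = {"highways.gov.uk", "roadsafety.org.uk", "transportstats.gov.uk"}

def scope_results(results, excluded=frozenset()):
    allowed = COMMUNITY_SITES - set(excluded)
    return [r for r in results if r["site"] in allowed]

results = [
    {"title": "Pothole repair study", "site": "highways.gov.uk"},
    {"title": "Junction redesign",    "site": "roadsafety.org.uk"},
    {"title": "Unrelated blog",       "site": "example.com"},
]

print(scope_results(results))                                  # only community sites
print(scope_results(results, excluded={"roadsafety.org.uk"}))  # user narrows further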
They are now able to leverage the social intelligence of the entire group, the "wisdom of the crowd". The feature has been taken up by 15% of the communities and the self reporting is very positive. Overall they find that the strength of the community reflects the skill of the community manager. A good community manager who keeps the conversation alive, ensures that discussion posts are followed up on, that the content added to the site is fresh, and who "weeds and feeds the garden" will have the strongest communities.
Enterprise Social Bookmarking at MITRE
This presentation was an overview of the Onomi product development and deployment at MITRE.
The initial project goal was to understand the use of social bookmarking within an enterprise. Specific sub-goals were social bookmarking's ability to provide an environment for knowledge sharing, to feed expertise finding or user profiling, to form networks, and to enhance the value of information retrieval.
Features of Onomi:
- Based on an open source product called Scuttle
- Similar interface to del.icio.us
- People create the bookmarks with a title, URL, description
- Bookmarks can be kept private
- RSS feeds are available
- It is integrated with an e-mail subscription service
- Incorporates broken URL checking
- Incorporates a people network section
- Related tags
- Related users
- 83% of the tags are from external web sites
- 86% of the tags are shared
- 50% of the company uses the software
- 23,676 bookmarks total in the system
- 130,310 total tags
- 16,371 unique tags
The search engine indexes all of the public bookmarks in the system and presents the bookmarks as search results. The result indicates which tags have been applied to the term. The integration is done via XML feeds. It is still not 100% complete; a separate capability to search only tags has not been created. They integrate the bookmarks and tags into a separate search, a la OneBox, on some of their different enterprise search experiences. They are planning on leveraging the items that are bookmarked for relevancy improvements in a future release. The bookmark search results include presence awareness to allow users to IM or call the author of the bookmark.
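A minimal sketch of feeding shared bookmarks into a search index as their own documents, with tags carried along as metadata. The schema below is hypothetical, not Onomi's actual XML feed format:

# Minimal sketch: turn public bookmarks into indexable search documents, carrying
# the tags as metadata so results can show which tags were applied.
bookmarks = [
    {"title": "Radar signal primer", "url": "http://example.org/radar",
     "description": "Intro material", "tags": ["radar", "tutorial"], "private": False},
    {"title": "My draft notes", "url": "http://intranet/notes", "tags": ["draft"], "private": True},
]

def to_search_documents(items):
    for b in items:
        if b["private"]:
            continue  # only public bookmarks are indexed
        yield {
            "id": b["url"],
            "title": b["title"],
            "body": b.get("description", ""),
            "tags": b["tags"],  # shown alongside the result
        }

for doc in to_search_documents(bookmarks):
    print(doc)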
An interesting capability they have is around best bets. They enable anyone to identify a bookmark as a potential best bet. This automatically populates a database of potential best bets, which is searched. Results of the search against this database are presented similarly to our EY Professionals Recommend. It is different from our approach of specifically identifying promotions for key words.
Thursday, April 30, 2009
Free keynote
Anyone who registers using this link ( https://secure.infotoday.com/forms/default.aspx?form=ess2009&priority=COVVIP3 ) will have the option of joining us for one or both keynotes for free, as well as visiting our exhibits hall.
In case you want to include details, this year's keynotes are:
May 12 Keynote: Harness Information to Deliver Enhanced Business Performance
9:00 am – 9:45 am
Ramesh Harji, Head of Information Exploitation, Capgemini UK
Despite the billions of dollars invested in information technology, organizations are still failing to realize the latent potential of their information. To be successful, organizations need to take a different approach—one that views information as a critical business asset, not an afterthought. Organizations must put exploiting information at the heart of the way they do business. A recent Capgemini research report concludes that poor information exploitation is costing the U.K. economy over $100BN/year in lost profits, representing an average 29% suppression of business performance per organization. Better information exploitation is one of the last bastions: an opportunity to both grow revenue and improve profitability.
May 13 Keynote: Improving Security Through Information Awareness
9:00 am – 9:45 am
Wim van Geloven, VP Information Technology, National Coordinator for Counterterrorism, The Netherlands
Approximately 20 agencies in The Netherlands are involved in combating terrorism. To boost the effectiveness in understanding terrorism and serious crime, four organizations decided to join forces in a unique venture, dubbed Improving Security by Information Awareness. The Program strives to improve the quality of intelligence and investigations within the public security sector. The approach revolves around the collaboration of the different agencies, operates from the perspective of the operational cases, and defines the IT case from these basic principles. Over the past 3 years, the Program invested in pan-organizational change, as well as IT techniques and also the (scientific) development of methods and techniques that improve the exchange, presentation, analysis, and storage of large quantities of data. This keynote will address the background and problems faced, the technological and organizational challenges encountered, and the way in which this new approach has reformed information awareness.
Wednesday, April 15, 2009
Enterprise Search Summit 2009
As a speaker, I can offer you a $200 discount on the Gold Pass or Full Two-Day Conference Pass, with the option to attend the Showcase for free.
For this offer, please use this URL: https://secure.infotoday.com/forms/default.aspx?form=ess2009&priority=SPEAK3
Or use discount code SPEAK3 when registering.
Look forward to seeing you there!
Friday, March 27, 2009
Developing engagement with our users
I found a brief blog post from JMC referencing a TED talk to be very interesting.
What I really want is a synopsis, a brief set of 15 things I should do to create a sense of engagement. I think a start on this is available in an IA Summit presentation from Stephen Anderson, which starts to get at a more tactical approach.
The real issue is culture. I will need to adapt these techniques to match my corporate culture.
Wednesday, November 19, 2008
Great article
http://www.portfolio.com/news-markets/national-news/portfolio/2008/11/11/The-End-of-Wall-Streets-Boom
Monday, November 17, 2008
Feedback form failure
Monday, November 10, 2008
NEOUPA Event
Contact:
Cathleen Zapata
President, NEOUPA
NEOUPA
P.O. Box 24503
Cleveland, Ohio 44124-9998
Phone: 440-320-1084
president@neoupa.org
www.neoupa.org
NEOUPA Celebrates World Usability Day in Cleveland, Ohio
Cleveland, OH -- November 1, 2008 -- NEOUPA, the local chapter of the global Usability Professionals’ Association, is celebrating World Usability Day Thursday, November 13 from 6-8:30pm at KeyBank’s Tiedeman offices by discussing how professionals in the community are infusing and advocating usability in the companies they work for and the Web work they do. The panel of presenters is from a variety of local organizations, including Ernst & Young, Cleveland Institute of Art, KeyBank, Progressive Insurance, American Greetings, and more. Additional details and registration can be found at www.neoupa.org.
World Usability Day was founded in 2005 as an initiative of the Usability Professionals' Association to ensure that services and products important to human life are easier to access and simpler to use. Each year, on the second Thursday of November, over 225 events are organized in over 40 countries around the world to raise awareness for the general public, and to train professionals in the tools and issues central to good usability research, development and practice.
"Web users today are task-oriented. They don’t have time to waste and they’re on a mission to get done what they’re trying to do," says Cathleen Zapata, President of NEOUPA, "Usability is the fundamental foundation of creating an outstanding customer experience that meets both customer needs and business goals."
The event is FREE but registration is required at www.neoupa.org. Food, networking, knowledge-sharing and the chance to win over $300 in prize giveaways are all part of the event.
This year’s platinum sponsor is Brulant, Inc./Rosetta, with additional sponsorship by SMI: Eye & Gaze and Progressive Insurance. The event is being held at KeyBank, 4910 Tiedeman Road, Brooklyn, Ohio. Registration is required at www.neoupa.org.
About NEOUPA
NEOUPA is the official Northeast Ohio chapter of the international Usability Professionals' Association. NEOUPA was started to educate, motivate and promote usability throughout Northeast Ohio for individuals interested in, involved in or responsible for Websites, applications, software or any other type of user interface where usability is a key to success.
For information: http://www.neoupa.org or
Contact: president@neoupa.org
Phone: 440-320-1084
Wednesday, October 29, 2008
ASIS&T 2008 - Connie Yowell Plenary Session
Typically, children list video game or PC activities as their most popular activities. She demonstrates the great number of variables children need to manipulate to play Pokémon. She showed a video of kids playing Pokémon together, and the activities they do around Pokémon - blogs, web searching, comics, and fan fiction.
Some statistics:
97% of kids play video games
60% of teens use a computer
72% use instant messaging
50% have created media content
33% have shared content via the internet
Ethnographic study
700 participants
25 researchers
5000 hours of observation
www.futuresoflearning.org
Types of participation
1. Friendship driven participation
2. Interest driven participation
These interactions are the same as "real life" interactions. It is not a place for adults. It is not a place for strangers. Young people do not interact with strangers in social networking.
Interest-driven networks are highly social. The worry about these interactions being socially isolating is incorrect. Social interactions tend to be very individual-based. They tend to be very participatory and productive. They use these networks to create content. The networks are peer based. This is where adults and strangers interact with young people, because of a shared interest. These networks have converging media, so TV works with PC works with books, not competing. For the young people it is about the content, not the technology. So they are following their interests across different media. The learning is networked.
Learning is happening outside of school. School becomes another node in their learning, not the primary node. There is a rich array of learning opportunities, with low barriers to participation and production.
Some of the pitfalls are around developing expertise in areas that are harmful, and there are strong commercial influences. There is a concern that each kid will have a different set of skills, worse than the digital divide, as the set of skills needed is growing. There are also fragmented experiences: not every kid has the same set of experiences, creating different behaviors. Each experience stands on its own, and there is not a strong interrelation between one experience and the next.
Four of the eleven core skills identified for the future, over and above traditional literacy, in detail:
Performance - the ability to adopt alternative identities for the purpose of improvisation and discovery. People will need to be able to adopt roles to explore and understand environments instead of the traditional positioning.
Appropriation: the ability to meaningfully sample and remix media content. Otherwise known as plagiarism or piracy. Think instead of the way Shakespeare reused or remixed classic tales in new ways with his plays. Digital media makes this easier.
Collective intelligence: ability to pool knowledge and compare notes with others toward a common goal. Everyone knows something, but no one knows everything. This is about the ability to tap the community at the right time, and to get answers from everyone.
Transmedia Navigation: the ability to follow the flow of stories and information across multiple modality and different forms of media.
How do people pick up these skills, especially as schools ban the use of these skills?
Places to stay up to date on this work and the findings:
www.spotlight.macfound.org
www.digitallearning.org
www.holymeatballs.org
www.newmedialiteracy.org
www.idiit.edu/thinkeringspaces
ASIS&T 2008 - The effect of page context on magazine image categorization.
The number of categories, the time taken to sort the photos, and the number of times an image was placed into a category were all not significantly different. When you look at the types of categories that are created, there was a significant difference. Added context resulted in more categories based on theme and story, versus functional categories or categories of the objects in the photo. With context present, people were grouped by fictional or real, and nonliving items were grouped by symbolic, object, or scene. Without context, people were grouped by posed photos versus action photos, and nonliving items were grouped by interiors, objects, or scene. Without context, images were more often set into multifaceted categorizations, with more hierarchy to the structure.
Text was seen to anchor the image, explaining the image and why it was published, or the text was seen as elaborating or extending the image. This means we can manipulate how archivists categorize images, so we can determine how we want an image to be categorized. This implies that text data mining can be applied to image categorization through automated software.
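A very rough sketch of that implication, using the text surrounding an image to assign it a category by keyword matching. The categories and keyword lists are invented for the example:

# Very rough sketch: use the text surrounding an image (caption, article body)
# to assign a category automatically. Categories and keyword lists are invented.
CATEGORY_KEYWORDS = {
    "sports":  {"game", "match", "score", "team"},
    "fashion": {"runway", "designer", "collection"},
    "travel":  {"beach", "hotel", "flight"},
}

def categorize(surrounding_text):
    words = set(surrounding_text.lower().split())
    scores = {cat: len(words & kws) for cat, kws in CATEGORY_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "uncategorized"

print(categorize("The team celebrated the match with fans on the beach"))  # "sports"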
ASIS&T 2008 - Revisiting search task difficulty: behavioral and individual difference measures
He found that there was a correlation between objective and subjective task difficulty. He found that subjectively difficult tasks were related to more search actions, more so than objectively difficult tasks. So if people think the task will be hard, they do more work, regardless of the objective assessment of difficulty. He also found that better search task outcomes are associated with lower levels of objective difficulty. In terms of the efficiency of the systems, people are slower on more complex systems. They find more complex systems to have a higher level of subjective difficulty.
ASIS&T 2008 - Values and Information
Discussion covered the difference in value between real objects and digital objects, such as pictures, books, and music. Generally the digital objects were seen as easier to have, but not as valuable.
Another study ran a value survey against three different groups: one in a corporation, one in academia, and one in government. This survey showed significant differences in the values of curiosity, loyalty, and obedience. The studies show a difference in adherence to values by different organizations. Differences in organizations lead to different values, which in turn lead to different priorities and potentially different designs.
Tuesday, October 28, 2008
ASIS&T 2008 - Google on-line marketing challenge: A multidisciplinary global teaching and learning initiative using sponsored search
- Jim Jansen jjansen@acm.org
- Mark Rosso mrosso@nccu.edu
- Dan Russell drussell@google.com
- Brian Detlor detlor@mcmaster.edu
Search drives on-line activity. Search drives over 5 billion monthly queries in the US. People are spending less time with their families, less time with the TV, and getting less sleep.
Looking at the search marketplace, Google has over 60% of the market, with no other company having even half as much market share. On-line marketing is a large business, and keyword advertising is the fastest growing advertising business. Google earned 16 billion from advertising, mostly keyword advertising. The sponsored links are the main drivers of revenue and are the business model of all search engines. Key differentiator is that it is targeted, pull aligned and has the lowest acquisition cost of any advertising.
The Google on-line marketing challenge was a world-wide project. The challenge consisted of: 1) register the class; 2) recruit the client; 3) students develop a pre-campaign proposal; 4) students run a 3-week AdWords campaign; 5) students develop and submit a post-campaign summary; 6) Google judges teams on an algorithm to narrow to 150 teams; 7) academics judge teams on written reports; 8) the top ten teams fly to Google headquarters.
A campaign consists of one or more ad groups. Ad groups should be based on a theme. Each ad group has a set of ads and keywords that generate the ad. Each keyword has a bid: how much you are willing to pay for that keyword. For each keyword you supply a matching criterion - exact, partial, broad, negative.
Whether ads display, their rank, and the cost per click depend upon: the keywords in the query, how relevant the ad title and text are to the query, how relevant the content of the landing page is to the query, past click-through rate, the bid on the keyword, and other configurable factors.
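A minimal sketch of that structure as data, just to show how campaigns, ad groups, ads, keywords, bids, and match types nest. The values are invented; this mirrors the structure described above, not the AdWords API:

# Minimal sketch of how a campaign nests ad groups, ads, keywords, bids, and
# match types. Values are invented for illustration.
campaign = {
    "name": "Law firm spring campaign",
    "ad_groups": [
        {
            "theme": "estate planning",
            "ads": [
                {"title": "Plan Your Estate", "text": "Local attorneys, free consult."},
            ],
            "keywords": [
                {"text": "estate planning attorney", "bid_usd": 1.50, "match": "exact"},
                {"text": "wills",                    "bid_usd": 0.75, "match": "broad"},
                {"text": "free wills",               "bid_usd": 0.00, "match": "negative"},
            ],
        },
    ],
}

for group in campaign["ad_groups"]:
    for kw in group["keywords"]:
        print(group["theme"], kw["text"], kw["match"], kw["bid_usd"])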
The Google on-line marketing challenge - experiences from the course. - Mark Rosso presenting.
Public school, evening MBA program, completed as part of a Management Information Systems course. The class consisted of 10 students, so two teams entered the challenge. Clients were a small law firm and an African American cultural center. The challenge was entered into the course as an experiential learning opportunity and to answer common criticisms of MBA IS courses: too theoretical and not adequately conveying "know how". Students took from this class real-world experience in how on-line advertising will work, or not work. The on-line advertising did not really work for the cultural center, as they were not selling anything on-line and did not have a way to measure the value from a click. It did bring home the need to coordinate their projects with the information systems function. Both teams conflicted with client-scheduled web site outages. It also brought home how the design of the web site impacted advertising costs. The cultural center had a single-page design that kept them from targeting a page for the ad. The law firm did not have this same challenge.
The Google on-line marketing Challenge: an outside commentator's opinion. - Brian Detlor presenting.
Three main wins: students love the challenge, and they get job offers. Teachers love it; students are engaged and there is real-world evaluation of student work. Participating businesses love it; they get free work and a chance to screen potential employees.
Brian's school does other experiential learning courses, but not this specific one. He sees some challenges, but thinks it is worthwhile.
Google on-line marketing challenge: a view from inside - Daniel Russell (the search quality, User Experience Research, Ads Quality & Search Education person)
www.gomcha.com is a social networking site for the challenge, a place for the students to continue to work together. What Google wanted to do was teach students the ins and outs of marketing in the internet age, giving them real-world data on what is good and what is bad. They should also understand how web readers look at and understand web pages, and begin to understand consumer versus reader psychology. Google also wanted to inject ideas about how large the search marketing market is. As an example, specific keywords are better than generic keywords. Negative keywords are important to learn as well - when not to bring people in. Ad quality is about the same as search quality; having good sponsored links will cause users to see them as valuable. Search query length in the US is typically 2 words, and most queries are multiple words.
Website Optimizer allows you to test trade-offs between multiple versions of pages. This allows you to understand and optimize your conversion rate. You can do this with only text changes, or larger site changes. Nice tool for learning how your site works; our EY.com team should leverage this.
The impact of the study was that the students left with a better understanding of the web ecostructure.
Advertising & Awareness with Sponsored Search - donturn@ischool.utexas.edu.
An alternate study: examine the idea of advertising for awareness of the UT iSchool. Ads for graduate studies, not driven by click-through revenues. They developed an ad campaign to evaluate the Google Ads application. The goal was to build on models of web information seeking as part of the larger web experience.
They planned global and local search campaigns, using statement and question ad copy for comparison. About 208,000 impressions, 200,000 of them global, with 160 click-throughs. Their average position was 4th. The best click-through rate (CTR) was on the local campaign. The statement ad copy outperformed the question ad copy. The project gave them a large dataset from sponsored search services and tools. Understanding more complex search tasks is of growing importance.
He gave an example of how words matter: changing the wording on a link from "Sign up" to "Start using" caused a 5x improvement in conversion. This has interesting connotations for our work on site design and metrics. We need to track button clicks, and be able to change things fairly quickly, so we can determine which links work and which do not.
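A minimal sketch of the kind of tracking that makes such a comparison possible; the variant labels and counts are invented for illustration:

# Minimal sketch: compare conversion rates for two wordings of the same link.
# Variant labels and counts are invented.
variants = {
    "Sign up":     {"clicks": 1000, "conversions": 12},
    "Start using": {"clicks": 1000, "conversions": 60},
}

for label, stats in variants.items():
    rate = stats["conversions"] / stats["clicks"]
    print(f"{label!r}: {rate:.1%} conversion")
# With numbers like these, 'Start using' converts 5x better than 'Sign up'.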
ASIS&T 2008 - Better to Organize Personal Information by Folders or by Tags? : The devil is in the details.
This examined two different models of organizing information: hierarchical folders and tagging with labels. Does filing into folders versus tagging make any difference in the ability to find things? The faculty advisor on this project was William Jones. They had people use two real-world systems for a period of time, then had them relate their experiences back to the project. They used Hotmail and Gmail for the study, as they are two different tools achieving the same function via two different mechanisms.
An initial interview explored their use of folders and tags in the past. They selected two topics of 25-item collections. Each day the participants received 5 articles in their email, in each product. They then spent 5-10 minutes organizing the information. They self-reported on the experience via email, with additional data captured by the project team. After the project, the researchers gathered recall details, had participants re-find 5 articles, and had them sketch their collection. Each participant was exposed to both environments, serially.
Results: there were a number of similarities. In terms of retrieval performance, recall, time to retrieve, number of places looked, and the organizational schemes, both systems worked about equally well. In both conditions the participants were unable to express the complexity of their internal map of the information in the sketch. In general, there were multiple categorization activities going on in each environment. Some participants had a workflow orientation, others had a hierarchical orientation.
Tagging required less cognitive effort. Participants found the tagging to be easy, whereas they felt a strong need to be choosy with the folder names. Tagging required more physical effort: tags needed to be applied over and over again, whereas the folder structure only needed to be created once. Folders made it easy to hide information, easy to move items out of the in-box. Tagging was seen as more cluttered, as everything stayed in the in-box. Folders were better for systematic search; each time they could look through each folder and be confident that they had correctly cleared a folder. Multiple tags were applied to an item, making it harder to do a systematic search, but easier for serendipitous finding.
The conclusion is that there is no clear winner. Each structure offers tradeoffs. The implications: don't leave the good stuff behind. Filter for untagged items, support hierarchy. Support collections of tags that are content and format oriented.
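A small sketch of the first implication, filtering for items that never received a tag so they are not left behind. The item structure is invented for illustration:

# Small sketch: surface the "good stuff left behind" by filtering for items
# that never received a tag. The item structure is invented.
items = [
    {"subject": "Q3 budget draft",    "tags": ["finance", "draft"]},
    {"subject": "Team offsite ideas", "tags": []},
    {"subject": "Vendor contract",    "tags": ["legal"]},
]

untagged = [item for item in items if not item["tags"]]
for item in untagged:
    print("Needs attention:", item["subject"])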