Google Affirms the Vital Role of Marketing and Advertising Agencies


It's great news that Google has taken the time to think through the pivotal role of agencies in helping advertisers advertise on the Google AdWords platform, and to release a new AdWords Certification program. As the head of a search marketing agency, I value the fact that Google is explicitly affirming its philosophical support for the agency world at the same time as it releases specific changes in programs and pricing that support that relationship. Official mission statements are important; they ensure that no one at any level in the company is hearing contradictory messages. Sometimes, all it takes for us to work better together is to hear someone say (and write): you've got a formal place in our ecosystem, a special place that won't be interchangeable with everyone else's, or too easily devalued.

So, the obligatory punch on the shoulder, and "aww shucks, thanks, guys".

To be sure, no one is naive enough to think that Google won't also work directly with some advertisers. But there should be no more talk that Google is uncertain in its approach to the agency ecosystem, or that the powers that be at Google somehow want to "cut agencies out of the equation." You don't invest in support, agency teams, new certification programs, and new API models unless you're sincere in the support.

Waiving AdWords API fees for agencies using their own bid management tools adds up to a significant chunk of change. It also, as Google notes, leads to more innovation. In developing new ways to automate marketing, developers at agencies (and the end client) won't have to mentally subtract out the cost of the API tokens when deciding how much time and money to invest in new tools. If some agencies abuse the privilege, that's easy enough to stop. Tell them to stop it, or the fees will kick back in (and their Partner status could be revoked).
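
To make the economics concrete, here's a rough back-of-the-envelope sketch (Python) of the kind of mental math an agency developer no longer has to do. The per-unit fee, monthly usage, and hourly rate are placeholder assumptions, not Google's actual published rates.

```python
# Hypothetical back-of-the-envelope math: what metered AdWords API fees might
# have added to the cost of building an in-house bid management tool.
# All numbers below are illustrative assumptions, not Google's actual rates.

FEE_PER_1000_UNITS = 0.25        # assumed $ per 1,000 API units under a metered model
MONTHLY_API_UNITS = 20_000_000   # assumed monthly API usage for a mid-sized agency
DEV_HOURLY_RATE = 100.0          # assumed fully loaded developer cost per hour

def monthly_api_cost(units, fee_per_1000=FEE_PER_1000_UNITS):
    """Monthly API spend under a metered-fee model."""
    return units / 1000 * fee_per_1000

old_cost = monthly_api_cost(MONTHLY_API_UNITS)   # fees apply
new_cost = 0.0                                   # fees waived for qualifying agencies
savings = old_cost - new_cost

print(f"Old monthly API cost: ${old_cost:,.2f}")
print(f"Developer hours that savings buys back each month: {savings / DEV_HOURLY_RATE:.1f}")
```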

Outside the AdWords ad world, this might seem like a minor deal. To those in it, it's pretty significant because it means Google has indeed evolved into a mature player much as many of us hoped and expected.

Here's a quick before and after to give you a sense of things:

Before: A confusing Google AdWords Professionals certification that was very easy to achieve and handed out to a wide variety of semi-qualified individuals, with no clear delineation between scrappy upstarts who could pass a simple test and would be really interested in helping you with your AdWords advertising, and real agencies with infrastructure and a track record of working cooperatively with Google and solving many client problems over time. Later, a Qualified Company certification was bolted onto that. While a step in the right direction, it left too much confusion in the marketplace and didn't give enough credit to the difference between entities (agencies) and individuals (anyone who could get a decent grade on what amounts to an open-book exam).

After: A redefinition of the Qualified Individual status to help individuals showcase their talents to potential employers (rather than directly competing for clients with more qualified agencies or experienced in-house talent), and a redefinition of the Qualified Company status as Certified Partner, with more rigorous exams and a range of other benefits, like a searchable Google Partners listing.

There's quite a bit more to it, but that's a start.

I started trying to articulate the case for such an evolution at Google as early as 2005 -- mainly in both editions of Winning Results with Google AdWords. While many in the space took Google's informality at face value (panting with lust at any announcement of any kind of Google certification), I always figured Google would have to take another crack at this. The ecosystem of resellers and partners -- assuming it demonstrates its value and shows itself deep, wily, and resilient enough to maintain customer relationships rather than being disintermediated or crushed -- must be treated formally as such, much as it always has been in the technology world, with companies like Microsoft serving as the global standard (though there have been hundreds of others). As Google began rolling this type of thing out with Google Website Optimizer and Google Analytics (as strange as it is to be a "reseller" of free products), the logic became clearer, and you knew (or hoped) that Google would soon be on its way toward formalizing those relationships on a few fronts.

The old approach and the old programs were a bit like asking us out here to "fan" Google on a Facebook page, without much interaction, formality, or "anything in it for us"; and as a result, on the other side, Google couldn't ask too much in terms of stated qualifications, business characteristics, more rigorous certifications, and so on.

The new approach takes aim at the future and walks us all kicking and screaming into adult relationships. The old, informal ways were fun and we will miss them. But they're a thing of the past.

I'll leave off by quoting at length from Winning Results with Google AdWords, 2nd ed. (2008), where I addressed this sort of thing:

"Third parties often advise clients on how to use AdWords, or directly manage complex campaigns. ... Observing Google's progress in dealing with the environment of marketing and advertising agencies, they have never fully given up on the idea that advertisers really should be coming directly to them for advice. However, this situation appears to be improving.

A Google Advertising Professionals (GAP) program, launched in November 2004, was an interesting initiative that was supposed to sort out qualified from unqualified individual AdWords campaign management practitioners. A company wide (agency) version of this is also available. This is more of a training and indoctrination program than anything else, however. The reward to the qualified professionals and agencies is minimal at best, though ostensibly it helps advertisers avoid working with "hacks".

Agencies certainly get much less out of Google in terms of financial rewards (such as a commission) than they have in any relationship in the history of advertising. On a variety of fronts, including the Google-agency relationship, observers have asked the question: is Google sucking the proverbial oxygen out of the room? While consultative relationships have improved and become more formalized -- a key improvement, to be sure -- many of the leading AdWords consultants and evangelists must make their living from service fees alone ... while Google's extreme profit margins continue to fuel the company's growth. There are practical hurdles to be addressed before such traditional advertising industry practices can be adopted, particularly in the "geek culture" which has served Google so well. However, the goodwill ... of the search marketing agency community ... may hinge on a recalibration of their financial relationship with Google.

In its formative years, having the right (geeky, iconoclastic, world-beating) attitude at the right time was a big part of what made Google into a global powerhouse. Some critics predict that this same attitude could be its undoing. Experts believe that the degree of cooperation with the developer community (and I would add, the marketing ecosystem) will determine whether the company has the staying power of a Microsoft.

Through the back door, Google may be studying ways of responding to the above analysis. Beyond AdWords, the company has new, highly technical products, like Google Analytics and Google Website Optimizer. It has initiated partner and reseller programs for these products. By instituting criteria for membership, working closely with the community on product development, and figuring out ways of steering valuable consulting business to such resellers and partners, Google can study the ins and outs of forming such productive relationships. Such relationships seem to be founded on classic models common in the software industry, especially in high-ticket enterprise software. What makes this unorthodox (as usual) is that Google's products are often free, and many of the customers for them are small to midsized businesses. What will it mean for my consulting firm to "resell" Google's free product to a small customer, I wonder? Like many others, including Google themselves, I can't wait to unravel that puzzle. ..."

Twitter Ad Potential: Huge (Source: History, Users’ Love of Searching)


Regarding that last post about Twitter and monetization: I haven't changed my mind on all of it, but I have on the projection/prediction part about Twitter potentially putting up very modest ad revenue numbers in its first two years. That part, I realize, is wrong!

Certainly, they're well behind Facebook in many areas (revenues included) and possibly will continue to be forever, but what do they have going for them that Facebook doesn't have as much of yet? Search! (There's a fascinating piece today by Eli Goodman of comScore: What History Tells Us About Facebook's Potential as a Search Engine, Part 1.)

Goodman's point so far seems to be that as search improves and as users come to expect it to be highly useful, usage increases, familiarity with the tools increases, etc. This is going to happen with Facebook, and it's going to happen with Twitter.

By contrast with Facebook, though, Twitter already gets 19 billion monthly searches -- about 19% of what Google does in a month. Astounding. And that's with a search platform that often doesn't work well, or sometimes doesn't return any results at all. Twitter searching is going to grow to incredible levels. And where inventory and granularity are that huge, even very cautious forms of monetization lead to sizeable revenues and positive feedback loops in CPM rates and user satisfaction.

So I'm coming to the realization that 2011 is going to be a strong year for Twitter's ad revenues, and 2012 could shock people.

A huge wrinkle here, though. Those supposed 19 billion monthly searches count API calls from third parties, and that would include standing queries from users, more like how people use feeds to display their favorite content. But hang on, isn't that a good thing? That's great contextual information and where there is such great contextual information, eventually there will be ad deals, and ad revenues. Sure, there will be ad-free ways to use third party tools, just as some advertising will actually appeal to users (or at least they will tolerate it).

Based on a more conservative definition of a "search," let's dial the 19 billion back, then, to around potentially one billion actual searches per month in 2011 for either Twitter or Facebook (so, more like 1-2% of Google's overall volume). That's still impressive. Based on Eli Goodman's logic, that could certainly lead to a snowball effect of bona fide search product development and bona fide user addiction. In essence, the search product and ad product teams at Twitter and Facebook alike won't be able to hire people and build products quickly enough.
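
To show the arithmetic rather than just assert it, here's a hedged sketch: it backs an implied Google volume out of the 19-billion/19% figures above, then applies the more conservative one-billion-searches number with assumed coverage, click-through, and cost-per-click values. Every monetization parameter here is a placeholder, not a disclosed Twitter or Facebook metric.

```python
# Back-of-the-envelope monetization sketch using the figures cited above plus
# purely illustrative ad parameters (not disclosed Twitter or Facebook metrics).

reported_monthly_searches = 19e9      # includes third-party API calls / standing queries
share_of_google = 0.19                # "about 19% of what Google does"
implied_google_monthly = reported_monthly_searches / share_of_google
print(f"Implied Google monthly volume: {implied_google_monthly / 1e9:.0f}B queries")

# The more conservative definition of a "search" discussed above.
conservative_monthly_searches = 1e9

ad_coverage = 0.30    # assumed share of searches that show any ad at all
ctr = 0.01            # assumed click-through rate on those ads
cpc = 0.50            # assumed average cost per click, in dollars

monthly_revenue = conservative_monthly_searches * ad_coverage * ctr * cpc
print(f"Illustrative monthly ad revenue: ${monthly_revenue / 1e6:.1f}M")
print(f"Illustrative annual run rate: ${monthly_revenue * 12 / 1e6:.0f}M")
```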

Promoted tweets, then, should be viewed as just a light pilot project to try something sort of "alternative" in the space for a reasonably guaranteed amount of cash. Down the road, Twitter can monetize something we're all very familiar with as the highest-CPM, win-winningest digital advertising channel: search and keywords. I doubt Twitter's founders or any of the early adopters predicted this type of user behavior in the early going. Certainly, it's a credit to them and their far-sighted investors that they all bet big on the potential and the direction of user excitement, rather than trying to get too specific about how it would get used or how it would make money, too early on.

Twitter’s Monetization Model: On the Mark, or Off-Target?


As Twitter moves to pilot its first experiments in monetization, it might be interesting to speculate on its prospects for success. To help, I'll go through some of the elements of success and failure that have been proven in the last twelve years or so of online advertising experimentation. Without all of these elements being in place, ad-supported models have tended to fail.

1. Large enough audience to matter. Wrapping some ads around content or functionality geared to a relatively small audience is tricky on a number of levels. First, no one in the press cares, and investors don't care. Most importantly, advertisers and agencies don't care, since there's not enough to buy, so you get lumped into remnant or at least underpriced network inventory unless you've got a really smart little sales force. Second, any hiccup gives you a greater chance of killing the golden goose of whatever you wrap the ads around. Third, you lack statistically significant data for testing and refining, so it's hard to perfect. Fourth, related to the third point, dipping a toe into the water becomes difficult. Large publishers can run tests without alienating anyone as they test the model in a small sliver of the content.

2. Targeting by keyword. Publishers and ad mavens have bent over backward to insist that targeting can be based on concepts, personalization, demographics, and factors other than keywords. Even Google, the King of Keywords, began fairly early in its attempt to paint the keyword as only one sub-facet in the global effort to better align advertising with user tastes and intent. (Bonus: that effort to blend into the woodwork might have helped Google in court if trademark and patent lawsuits really started to escalate out of hand, or if they started losing cases so badly that they'd need to substantially revise their business model ahead of schedule.) Deny it all you like, but keywords still "click" with advertisers. Users like them too, because it's a way of seeing relatively relevant ads without feeling too creeped out. Keywords triggering relevant text ads and offers are the display-advertising-in-content cousin to permission marketing as it was conceived by Seth Godin for email. Somewhere, a line can get crossed. Keywords do a really great job of helping advertisers and users connect without that line being crossed as often.

3. Doesn't get in the way, or even at times enhances the experience. Advertising is a necessary evil to some, but to a substantial part of the population, it's a buying aid or even a cultural experience. Glossy ads in fashion magazines are part of the "art" and "positioning" and are seen as less intrusive than advertising that really "gets in the way" of reading an article online. The same goes for billboards by the highway: an eyesore to some, they're a part of cultural history to others -- and hence, provide free buzz over and above the advertising cost. Burma Shave was before most of us were born, but chances are, you've heard of the roadside signs.

4. Is in a place online that people willingly go to or are addicted to, rather than being an app that is a bit cumbersome to use, take-it-or-leave-it, overly incentivized (paid in points or cash to "surf"), or weakly appreciated but maybe a flash in the pan. Related to this, the user base has to understand what the owners plan to do around advertising and what kind of "trade-off" they can expect. Do they get involved in using something for one reason, then find it has infected their user experience or device (e.g., "scumware")? Or are the format and the trade-off relatively transparent?

5. Isn't susceptible to "banner blindness". For the time being, we can consider this one relatively unimportant, as initially, enough advertisers will be lining up to try new things where the audience is big enough and attention can be grabbed. But performance marketers are turned off by ads that don't perform, and historically these types of ad formats have had limited upside when compared with personal, anticipated, and relevant communications (especially when the latter are connected with keywords). You can be "big" with the support of brand-building advertisers, but with the approval of direct marketers on top of that, you can be huge... because then any advertiser, large or small, can justify it to themselves or to someone on their board of directors. And agencies too can come up with those justifications.

To look at some quick examples:

  • Intrusive or oversized display ad formats -- leaderboards, "popovers," garish animations, etc. -- have had mixed success. They've driven online advertising to a degree, but somehow got surpassed by little old search, despite their reach. That's because they fail on counts 3, 4, and 5, and aren't even all that great on 1 and 2.
  • Weird apps like Pointcast, eTour, and Gator eventually fail because people uninstall the apps, don't install the apps, etc. Performance is uneven and users squeal. To incentivize users to do things they wouldn't otherwise do, you either deceive them or pay them too much (thus killing profit). Fail all around.
  • Point 4 relates to Facebook -- in both senses. The network effect and addiction factor actually outweigh the fact that Facebook has been particularly brazen in doing wacky, unpredictable, privacy-invading things to its users. Facebook is very strong on point 1 and has point 2 covered also. Because its audience is very large, it can be cautious relating to points 3 and 5, monetizing below "potential," thus leaving long-term potential on the table. Huge win.
So how about Twitter? Twitter's scheme sounds like it will largely succeed on points 1 through 4. The ad revenue, once disclosed, will appear pitifully small for the first year or two. As long as trust is built gradually and testing provides insight, that revenue should pyramid up over time.
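
For those who like to keep score, here's a toy scorecard that encodes the five criteria above and tallies them per platform. The marks assigned to Facebook and Twitter are just one reading of the discussion above, not measured data.

```python
# A toy scorecard for the five success factors discussed above.
# The marks assigned below are one reading of the argument, not measured data.

CRITERIA = [
    "audience_scale",        # 1. large enough audience to matter
    "keyword_targeting",     # 2. targeting by keyword
    "non_intrusive",         # 3. doesn't get in the way / enhances the experience
    "destination_habit",     # 4. a place people willingly go to or are addicted to
    "resists_ad_blindness",  # 5. isn't susceptible to "banner blindness"
]

def score(platform, marks):
    """Tally a 0-1 mark per criterion and report the total."""
    total = sum(marks[c] for c in CRITERIA)
    print(f"{platform}: {total:.1f} / {len(CRITERIA)}")
    return total

score("Facebook", {"audience_scale": 1.0, "keyword_targeting": 1.0,
                   "non_intrusive": 0.5, "destination_habit": 1.0,
                   "resists_ad_blindness": 0.5})
score("Twitter",  {"audience_scale": 1.0, "keyword_targeting": 1.0,
                   "non_intrusive": 1.0, "destination_habit": 0.5,
                   "resists_ad_blindness": 0.5})
```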

Some will question whether users will remain addicted to Twitter long term. Facebook is an entire social environment, and Twitter still feels like a "feature," a quick hit, despite a large user base on paper. That one hangs in the balance. Perhaps the litmus test for any would-be top-tier destination would be: are users choosing to download and keep their favorite mobile app related to that content, brand, function, or community, in the most accessible place on their mobile device? Will people get bored with them and stop? Will ads be easier to ignore on mobile devices? Will people look for versions of apps that allow them to ignore ads? (That's where point #3 really comes in.)

Change will be rapid, but based on these criteria, it appears that Twitter has the correct fundamentals and the right strategy in place for a long-term win. Still, with Facebook holding a two-year head start here, you have a nagging feeling that Twitter just needs to keep hitting certain user targets and look reasonably dangerous revenue-wise, with the more realistic goal being a sale of the company to Google or Facebook.

Don’t Go to Google, TripAdvisor, or OurFaves for Restaurant Reviews


If you come to my town, where's the best place to go to look for that perfect restaurant, or opinions about a place you're considering dining at? Me. Seriously.

And maybe a few of my friends.

We know the truth.

You can sometimes get some of that truth from Toronto Life and Zagat.

Once in a while, we'll maybe go write a review on Yelp. But probably not. If you ate at 50 places a year and 30 of them were really good, you'd tire of writing them all up.

So, if you go to Google's review aggregation that includes results from TripAdvisor, OurFaves, and Google itself... you'll see a bunch of inexplicable one-star reviews for some of the best restaurants in the city.

"I could barely see my food..." Have you heard of ambience?

"The staff was unfriendly..." ... after you put ketchup on the filet of sole.

"Cramped..." Sorry it isn't the Rainforest Cafe.

Don't get me wrong: I'm a big fan of user-generated content and recommendations. Unfortunately, when you go to TripAdvisor, you often have to wade through the most inexplicable, knuckle-dragger "reviews" of some of the best hotels and restaurants known to mankind.

It's also an interface issue with Google's review aggregation, though. The Harbord Room actually averages four stars on Yelp... but you wouldn't know this from Google's tally, which makes it look like there are a lot of dissatisfied visitors, and the average looks like it's just over one star.
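
To see how an aggregator's tally can mislead, here's a small sketch of blended ratings across sources. The counts and star scores are invented for illustration; they are not actual Yelp, TripAdvisor, OurFaves, or Google data for The Harbord Room.

```python
# How an aggregated star rating can diverge from what any one site shows,
# depending on how sources are weighted and which reviews get surfaced.
# All counts and ratings below are invented for the example.

sources = {
    # source: (number_of_reviews, average_stars)
    "Yelp":        (120, 4.0),
    "TripAdvisor": (15,  2.0),
    "OurFaves":    (5,   1.5),
}

def blended_average(sources):
    """Average stars weighted by review count across all sources."""
    total_reviews = sum(n for n, _ in sources.values())
    return sum(n * stars for n, stars in sources.values()) / total_reviews

print(f"Blended average: {blended_average(sources):.2f} stars")

# If the interface surfaces only a handful of low ratings prominently, the
# impression a visitor takes away can be far worse than the blended average.
surfaced_snippets = [1, 1, 2]
print(f"Impression from surfaced snippets: {sum(surfaced_snippets) / len(surfaced_snippets):.1f} stars")
```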

You know what? Just let me make the reservation, and if the place is no good, I'll take full responsibility. :)

In the meantime, though, you could trust good reviewers like this guy.

Can Search Engines Sniff Out "Remarkable"?


I never tire of listening to experts like Mike Grehan speaking about the new signals search engines are beginning to look at, because it's so important to bust the myths about how search engines work.

To hear many people talk, today's major engines are faced with little more than a slightly beefed-up, slightly larger version of a closed database search. Need the medical records for your patient Johnny Jones from your closed database of 500 medical records? Just type in johnny, or jones, or johnny jones, and you're good to go. Isn't that search, in a nutshell? It is -- if you can guarantee that you're dealing with a nutshell like that. But with web search, it's nothing like that.

The World Wide Web now has a trillion pages or page-like entities... that Google knows about. (They don't know what to do with all of them, but they'll admit to the trillion.) Some observers estimate that there will soon be five trillion of these in total, too many to index or handle. Who knows, maybe 10% of that could be useful to a user or worthy of indexing. But until some signal tells the search engine to index them in earnest, they'll just sit there, invisible. That's out of necessity: there's just too much.

The difference isn't only quantitative, it's also qualitative. User queries have all sorts of intents, and search engines aren't just trying to show you "all the pages that match". There are too many pages that match, in one way or another. The task of measuring relevancy, quality, and intent is far more complex than it looks at first.

And on top of that, people are trying to game the algorithm. Millions of people. This is known as "adversarial" information retrieval in an "open" system where anyone can post information or spam. The complexity of rank ordering results on a particular keyword query therefore rises exponentially.

In light of all this, search engines have done a pretty good job of looking at off-page signals to tell what's useful, relevant, and interesting. The major push began with the linking structure of the web, and the effort has now vastly expanded to many other emerging signals, especially user behavior (consumption of the content, clickstreams, user trails) and new types of sharing and linking behavior in social media.

This is a must, because any mechanical counting and measuring exercise is bound to disappoint users if it isn't incredibly sophisticated and subtle. Think links. Thousands of SEO experts are still teaching you tricks for how to get "authoritative" inbound links to your sites & pages. But do users want to see truly remarkable content, or content that scored highly in part because someone followed an SEO to-do list? And how, then, do we measure what is truly remarkable?

Now that Twitter is a key source of evidence for the remarkability of content, let's consider it as an interesting behavioral lab. Look at two kinds of signal. The first is where you ask a few friends to retweet your article or observation, and they do. A prickly variation of that is where you have a much larger circle of friends, or you orchestrate semi-fake friends to do your bidding, with significant automation involved.

But another type of remarkable happens when your contribution truly makes non-confidantes want to retweet and otherwise mention you: when your article or insight achieves "breakout" beyond your circle of confidantes, and when further confirming signals of user satisfaction show up later on as people stumble onto it.
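
Here's a minimal sketch of that distinction: given who retweeted something and who already follows the author, compute how much of the retweeting came from outside the circle. This is a toy heuristic of my own for illustration, not a claim about how any search engine actually weights Twitter signals.

```python
# Toy "breakout" heuristic: what share of retweeters sit outside the author's
# immediate circle? A high ratio is (illustratively) a stronger hint that the
# content is remarkable rather than merely promoted by friends.

def breakout_ratio(retweeters, followers):
    """Share of retweeters who are not already followers of the author."""
    retweeters, followers = set(retweeters), set(followers)
    if not retweeters:
        return 0.0
    return len(retweeters - followers) / len(retweeters)

followers = {"alice", "bob", "carol"}

# Orchestrated: every retweet comes from the author's own circle.
print(breakout_ratio({"alice", "bob"}, followers))                     # 0.0

# Breakout: most retweets come from strangers discovering the content.
print(breakout_ratio({"alice", "dave", "erin", "frank"}, followers))   # 0.75
```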

Telling the difference is an incredible challenge for search engines. Garden variety tactical optimization will work to a degree, mainly because some signals of interest will tend to dwarf the many instances of "zero effort or interest". But we should all hope that search engines get better and better at sniffing out the difference between truly remarkable (or remarkably relevant to you the end user) and these counterfeit signals that can be manufactured by tacticians simply going through the motions.

Yusuf Mehdi’s Too-Candid Comments About Abandoning the Long Tail


Credit Yusuf Mehdi for honesty: in his remarks at SES New York last week, as reported by eWeek, he noted that Microsoft fell well behind Google in search because it focused on doing well for popular queries, when it should have known that search is "all about the long tail."

It is bizarre, because every notable failure in search since 1994 has basically been in the realm of curated results and chances are, that trend will continue. Whether they're hand-edited search results or partially "produced" variations of web index search focusing on improving the treatment of head terms using the efforts of channel producers, the market kept coming back with the same response: this approach doesn't scale. A website with opinions about what people should focus on is not a search engine, it's just a website. And that creates a serious positioning problem when you're competing in the "search engine" space, which needs to scale to help people find hard-to-find information. Forget the long tail: channel producers and editors even do a poor job of producing information around the "torso". As information and customer demands evolve, it becomes difficult to keep up, and many of the real world uses of the search engine begin to look like a "demo" of "well, this is how it works over here, on this query, in theory, and eventually we'll get back to extending the technology so it works for the stuff you're looking for, with partners who provide information in a way that you prefer, which changed in the past year."
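
One way to see why curation can't keep up: model query demand as a heavy-tailed (Zipf-like) distribution and ask how much total volume a team curating only the most popular queries would actually cover. The exponent and the number of distinct queries below are assumptions for illustration.

```python
# Under a Zipf-like assumption about query frequencies, how much total query
# volume do you cover by curating only the most popular queries?
# The exponent and the number of distinct queries are illustrative assumptions.

TOTAL_DISTINCT_QUERIES = 1_000_000   # assumed distinct queries in a month
S = 1.0                              # assumed Zipf exponent

# Frequency of the query at rank r is taken as proportional to 1 / r**S.
weights = [1.0 / r**S for r in range(1, TOTAL_DISTINCT_QUERIES + 1)]
total_volume = sum(weights)

def coverage(curated):
    """Share of total query volume covered by the `curated` most frequent queries."""
    return sum(weights[:curated]) / total_volume

for curated in (1_000, 10_000, 100_000):
    print(f"Curate top {curated:>7,} queries -> cover about {coverage(curated):.0%} of volume")
```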

Here's a list of some of the search engines that haven't caught on precisely because they failed to understand and gear up for the massive scale required in the search engine business, focusing instead on curating results for a limited set of popular queries or categories:

  • Yahoo Directory
  • Open Directory
  • LookSmart
  • Ask Jeeves
  • Mahalo
The list could probably be much longer.

Others have fared a bit better because they didn't claim to be search engines. These include:
  • About.com
  • Squidoo
Obviously, many of these properties are of limited use in the real world of finding info.

The bizarreness doesn't stop there, however. A significant aspect of the PR rollout of Bing was focused on the fact that Microsoft knew it would be most effective -- again -- at doing better for users in the realm of more popular types of searches, ceding long tail excellence to Google. In terms of positioning, that's like saying Microsoft is good at negotiating partnerships, designing interfaces, and subscribing to web services. That's like saying Microsoft is building a portal. That's like saying Microsoft is Yahoo.

Google itself is no saint when it comes to long tail accomplishments and relevance. On many counts, all search engine companies have waved white flags on truly scaling to address all potential content, because there is just too much of it (and too much spam). Dialing back on the ambitions of comprehensiveness, to devote more screen real estate to trusted brands and search experiences that are tantamount to paid inclusion, is Google's current trend, much as it was for companies like Inktomi and Yahoo in the past.

The industry consensus is that search is far from solved. But a prerequisite to solving any problem is trying. Microsoft is signaling that they will continue to dip a toe in the water and essentially "wimp out" when it comes to addressing scale and complexity issues. This is in line with what they've done all along, and the positioning for Bing. The question is: if Google's wimping out too, wouldn't you rather use the relatively less wimpy search company that has committed a massive budget to R&D, probably 30X Microsoft's? By sending these signals, Microsoft is not exactly giving users good reasons to use their products. It's reminiscent of the trajectory taken by companies like AOL and Yahoo, who didn't feel that search was a problem that could or should be solved by them, so they contented themselves with staying hands-off and creating a workable project largely driven by feeds, partnerships, and ideas external to their own company.

To SEOs, Mehdi's ruminations on the long tail must be heartening. They say, in essence, "spam away."

Yusuf Mehdi’s Too-Candid Comments About Abandoning the Long Tail


This post is by from Traffick: The Business of Search


Click here to view on the original site: Original Post




Credit Yusuf Mehdi for honesty: in his remarks at SES New York last week, as reported by eWeek, he noted that Microsoft fell well behind Google in search because it focused on doing well for popular queries, when it should have known that search is "all about the long tail."

It is bizarre, because every notable failure in search since 1994 has basically been in the realm of curated results and chances are, that trend will continue. Whether they're hand-edited search results or partially "produced" variations of web index search focusing on improving the treatment of head terms using the efforts of channel producers, the market kept coming back with the same response: this approach doesn't scale. A website with opinions about what people should focus on is not a search engine, it's just a website. And that creates a serious positioning problem when you're competing in the "search engine" space, which needs to scale to help people find hard-to-find information. Forget the long tail: channel producers and editors even do a poor job of producing information around the "torso". As information and customer demands evolve, it becomes difficult to keep up, and many of the real world uses of the search engine begin to look like a "demo" of "well, this is how it works over here, on this query, in theory, and eventually we'll get back to extending the technology so it works for the stuff you're looking for, with partners who provide information in a way that you prefer, which changed in the past year."

Here's a list of some of the search engines that haven't caught on precisely because they failed to understand and gear up for the massive scale required in the search engine business, focusing instead on curating results for a limited set of popular queries or categories:

  • Yahoo Directory
  • Open Directory
  • LookSmart
  • Ask Jeeves
  • Mahalo
The list could probably be much longer.

Others have fared a bit better because they didn't claim to be search engines. These include:
  • About.com
  • Squidoo
Obviously, many of these properties are of limited use in the real world of finding info.

The bizarreness doesn't stop there, however. A significant aspect of the PR rollout of Bing was focused on the fact that Microsoft knew it would be most effective -- again -- at doing better for users in the realm of more popular types of searches, ceding long tail excellence to Google. In terms of positioning, that's like saying Microsoft is good at negotiating partnerships, designing interfaces, and subscribing to web services. That's like saying Microsoft is building a portal. That's like saying Microsoft is Yahoo.

Google itself is no saint when it comes to long tail accomplishments and relevance. On many counts, all search engine companies have waved white flags on truly scaling to address all potential content, because there is just too much of it (and too much spam). Dialing back on the ambitions of comprehensiveness, to devote more screen real estate to trusted brands and search experiences that are tantamount to paid inclusion, is Google's current trend, much as it was for companies like Inktomi and Yahoo in the past.

The industry consensus is that search is far from solved. But a prerequisite to solving any problem is trying. Microsoft is signaling that they will continue to dip a toe in the water and essentially "wimp out" when it comes to addressing scale and complexity issues. This is in line with what they've done all along, and the positioning for Bing. The question is: if Google's wimping out too, wouldn't you rather use the relatively less wimpy search company that has committed a massive budget to R&D, probably 30X Microsoft's? By sending these signals, Microsoft is not exactly giving users good reasons to use their products. It's reminiscent of the trajectory taken by companies like AOL and Yahoo, who didn't feel that search was a problem that could or should be solved by them, so they contented themselves with staying hands-off and creating a workable project largely driven by feeds, partnerships, and ideas external to their own company.

To SEOs, Mehdi's ruminations on the long tail must be heartening. They say, in essence: "spam away."

Google AdWords: No More Last-Click-Attribution Blues


Getting credit for an online conversion - and giving due credit to all recent influences - has been one of the hottest topics in digital marketing over the past couple of years. The urgency of the matter has grown as media costs -- especially click prices on paid search keywords -- have risen.

Marketers have been so hungry for better attribution of "keyword assists" (or, simply, for the first click in the sequence toward purchase not to be overridden, whether that sequence spans a matter of hours or many months) that they've been willing to explore cumbersome customizations in a variety of analytics platforms, including Google Analytics.

But if you simply want to analyze the contribution of paid keyword searches on Google Search that preceded the keyword leading directly to a sales conversion (aka "assists"), you'd prefer to see all that data rolled up conveniently within Google AdWords itself, presented in handy formats that make it easy to change your bidding patterns. In particular, earlier-stage keywords (typically, those searched before a last-click brand search) would now be revalued in your model; you'd bid them higher where they generated assists.
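
To make that concrete, here is a minimal sketch of how assist counts could feed back into bids. The keyword names, numbers, credit_per_assist fraction, and bid cap are all hypothetical assumptions for illustration, not anything AdWords prescribes:

```python
# Hypothetical per-keyword stats from an assist report; the keyword names,
# numbers, and the adjustment rule below are illustrative assumptions, not
# AdWords output.
keyword_stats = {
    "almond milk calories": {"last_click_conversions": 2,  "assists": 38, "current_bid": 0.40},
    "milk alternative":     {"last_click_conversions": 1,  "assists": 25, "current_bid": 0.35},
    "planethealthnut":      {"last_click_conversions": 90, "assists": 3,  "current_bid": 1.10},
}

def revalued_bid(stats, credit_per_assist=0.3, max_multiplier=2.0):
    """Bump a bid in proportion to assisted conversions.

    credit_per_assist is an assumed fraction of a conversion's value granted
    to each assist, and max_multiplier caps the bump -- both are modeling
    choices, not Google defaults.
    """
    total_credit = stats["last_click_conversions"] + credit_per_assist * stats["assists"]
    baseline = max(stats["last_click_conversions"], 1)
    multiplier = min(total_credit / baseline, max_multiplier)
    return round(stats["current_bid"] * multiplier, 2)

for kw, stats in keyword_stats.items():
    print(f"{kw}: {stats['current_bid']:.2f} -> {revalued_bid(stats):.2f}")
```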

Happily all of this is now rolling out in AdWords as part of a reporting initiative called Search Funnels. A variety of reporting options help you tap into the power of this new information.

Earlier, when I defended the merits of the "last click" as an attribution method, I pointed to data from Marin Software showing that 74% of etail conversions have only one associated click - even counting assists. Moreover, Marin's approach bucketed prior clicks categorically, arguing that if a prior click was very similar in intent or style to the last click, the extra information wouldn't be enough to cause you to alter bidding patterns anyway. That knocked the number of truly "assist-powered" conversions (ones you could actually attribute properly) down to 10% or less.
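
As a back-of-the-envelope check on that arithmetic: the 74% figure is Marin's, while the share of prior clicks assumed to be too similar to matter is a placeholder I've chosen to land near the ~10% conclusion.

```python
# Rough version of the Marin-style argument above.
conversions = 1000
single_click_share = 0.74                              # reported: one click, even counting assists
multi_click = conversions * (1 - single_click_share)   # 260 conversions with any prior click

similar_intent_share = 0.62   # assumed: prior click too close to the last click to change bids
actionable = multi_click * (1 - similar_intent_share)

print(f"Conversions with any assist click: {multi_click:.0f} ({multi_click/conversions:.0%})")
print(f"Conversions with a truly actionable assist: {actionable:.0f} ({actionable/conversions:.0%})")
# -> roughly 10% of all conversions, in line with the figure cited above.
```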

This is where Google's new reporting needs to be scrutinized closely. In your individual case it could be quite valuable, but in the individual case studies Google currently has on hand, anywhere from 70-95% of conversions have only one click to speak of. If Marin's logic above is even close to sensible, that underscores the limits of assist data. There will be some value attributable to assist keywords in around 10% of conversions, give or take. That's actionable but not earth-shattering. Of course, this will be most valuable to advertisers who have a lot of prior influencer clicks hiding behind the high number of conversions currently attributed to a last click on the brand name.

To pump up the role of prior keywords, it might be fair to also point to assist impressions - cases where the ad was shown on Google Search but not clicked. Was the ad really seen in those cases? Perhaps not, but there may be some value in knowing which search keywords got the searcher's research motor running. Perhaps they clicked on a competitor's ad. Google is offering impression assist data as well with this release, which is sure to delight trivia buffs, AdWords junkies, and Google's accountants alike.

Remember, we're not just talking about multiple searches all done in a single day, or in one session. Google is logging the time and date of every search by that user prior to a purchase/lead, and when a conversion happens, full funnel information is available as to the time lag between clicks and before the conversion.
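
Here is a rough sketch of the kind of time-lag arithmetic involved, using an invented funnel of timestamped clicks rather than the actual Search Funnels report format:

```python
from datetime import datetime

# A hypothetical conversion funnel: timestamped paid-search clicks by one user,
# followed by the conversion time. The structure is illustrative only.
clicks = [
    ("almond milk calories", datetime(2010, 3, 2, 21, 14)),
    ("milk alternative",     datetime(2010, 3, 9, 12, 3)),
    ("planethealthnut",      datetime(2010, 3, 10, 8, 47)),
]
conversion_time = datetime(2010, 3, 10, 8, 55)

first_click_lag = conversion_time - clicks[0][1]
last_click_lag = conversion_time - clicks[-1][1]
gaps = [later[1] - earlier[1] for earlier, later in zip(clicks, clicks[1:])]

print(f"Time from first click to conversion: {first_click_lag}")
print(f"Time from last click to conversion:  {last_click_lag}")
print(f"Gaps between successive clicks:      {gaps}")
```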

Adding impression assists to the mix, we may see past search query information for 20-25% of conversions in some advertiser accounts. Again, while not stupendous, this at least counts as important and material to how you approach keyword value.

Sorting by frequency of conversion by assist keyword helps you see not only the keywords in question but also, with the "keyword transition path" view, which last-click converters they preceded, to better understand the consumer mindset. The screen shot below is a canned Google example while the program is still in beta. In my briefing I saw a more typical and valuable case example showing the frequency of paths like (fictitious examples standing in for the ones I saw) "almond milk calories" > planethealthnut or "milk alternative" > planethealthnut. Whereas the brand might have gotten disproportionate credit for such a conversion in the past, keywords like [milk alternative] or [almond milk calories] might now attract higher bids - even more so if you experiment over time, allowing for more repetitions of your "research stage keywords" over many months.
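
For illustration, a minimal sketch of how such a transition-path view could be tallied from raw funnels, reusing the fictitious planethealthnut keywords above; the data structure is assumed, not Google's:

```python
from collections import Counter

# Hypothetical funnels: assist keywords in order, ending in the last-click keyword.
funnels = [
    ["almond milk calories", "planethealthnut"],
    ["milk alternative", "planethealthnut"],
    ["almond milk calories", "planethealthnut"],
    ["milk alternative", "almond milk calories", "planethealthnut"],
]

# Count how often each assist keyword precedes each last-click converter --
# roughly what a transition-path view sorted by frequency would surface.
transitions = Counter(
    (assist, funnel[-1])
    for funnel in funnels
    for assist in funnel[:-1]
)

for (assist, last_click), count in transitions.most_common():
    print(f"{assist!r} -> {last_click!r}: {count}")
```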

In my opinion, "paths" work fairly well as a metaphor here and aren't too misleading, because the "funnel" steps tend to be relatively coherent and causal in practice. They aren't necessarily so, however. The reason these reports can look sensible is that they're drawn from a narrow universe of high-intent keywords that advertisers are avidly bidding on. You're not going to see a paid search keyword funnel path like "drawbridge in mexico" > james mcbleckr phone 415 > nike > air jordans used > nike.com, largely because Nike doesn't have most of the keywords in that path in its paid search account. Truly generating causal paths out of everything someone does online prior to a conversion is likely to be incredibly messy, but that's a much longer story.
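
A tiny sketch of why that filtering produces tidy paths, with a hypothetical account keyword list:

```python
# Only searches that matched a keyword the advertiser bids on can appear as
# path steps; everything else drops out. Keyword set and history are invented.
account_keywords = {"nike", "air jordans used", "nike shoes"}

raw_history = ["drawbridge in mexico", "james mcbleckr phone 415",
               "nike", "air jordans used", "nike.com"]

reported_path = [query for query in raw_history if query in account_keywords]
print(" > ".join(reported_path))   # -> nike > air jordans used
```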

Long story short: life is indeed a lot simpler when viewed through the prism of an AdWords account. And today, advertisers are getting what they desperately seek: easy-to-use information about paid keyword search attribution so that the last click doesn't override all other attribution data.