Making social gaming scale: Lessons from the Democrat and Chronicle’s adoption of alternate reality gaming


Just over a year ago, Rochester’s Democrat and Chronicle launched an ambitious Alternate Reality Game (ARG) called Picture the Impossible. The seven-week game was a collaboration with the Lab for Social Computing at the Rochester Institute of Technology, and it layered web, print, and real-life challenges over a fictional storyline designed to connect players with Rochester’s history. Participants were divided into three teams that competed against each other to earn money for three local charities. The players completed a scavenger hunt in a local cemetery, created recipes for a cooking contest focused on local ingredients, and earned points each week for both web-based games like jigsaw puzzles and print newspaper challenges like assembling a mystery photo. The game concluded with a Halloween costume party for the top players.

Over 2,500 people signed up for the game in all, and it attracted a highly engaged core of about 600 people, including members of the young professional demographic that the Democrat and Chronicle had been most hoping to attract. But running an ARG was also very resource-intensive. I talked with Traci Bauer, the Democrat and Chronicle’s managing editor for content and digital platforms, about what the paper learned from Picture the Impossible, and how they are building social gaming into the paper’s day-to-day operations.

Picture the Impossible emerged through a collaboration between Bauer and RIT professor Elizabeth Lane Lawley. It was funded through sweat equity from both organizations and a donation from a local charity and Microsoft Bing. (Kodak, which is based in Rochester, provided cameras and printers as weekly prizes.)

The game attracted players of all ages, including families, students brought in through RIT, and plenty of Baby Boomers. (“They’re easy,” Bauer said. “Boomers do everything.”) Two-thirds of the players were women. The most important strengths of the game were the collaborative team structure and the focus on earning money for charity. Team spirit was high on the message boards for the three different “factions,” and players strategized ways to maximize their weekly point totals. The scavenger hunts and real-life games (some powered by the text-messaging/smartphone app SCVNGR) were popular, as was the cooking competition, which brought in 104 entrants. When I spoke with Bauer and Lawley last year, they had also been very excited about the way the game used the print paper as a physical element of play.

Bauer said the newspaper had learned enough from the collaboration that the experiment would have been worthwhile even if the game flopped. It didn’t.

“The beauty of it wasn’t in the volume of players, but in the amount of time that they spent in the game,” Bauer said. “In the end we had 62 minutes on-site per unique, and that’s compared to 30-35 minutes on our core sites.”

Bauer came out of the project believing that the news industry needs to harness gaming strategies. “There’s something in there, for sure,” she said.

Her goal is for the Democrat and Chronicle to always have some kind of social gaming presence. When Picture the Impossible closed last Halloween, “I wanted to quickly get another initiative out there,” Bauer said. “I hate when you build something and it’s a success and you put it up on a shelf and don’t pay attention to it for years.”

The problem was that Picture the Impossible had taken a huge amount of time and resources. The newspaper’s collaboration with RIT had ended, and the pressures of making social gaming a normal part of newspaper operations meant figuring out a more pared-down, sustainable model.

For the Democrat and Chronicle, that has meant abandoning the Alternate Reality Game model, with its fictional storyline that united the different elements of the game and propelled it forward. As a news organization, Bauer said, creating fictional scenarios didn’t really fit with their mission. It has also meant fewer real-life challenges, even though they were very popular with players. RIT had been “instrumental” in making those in-person activities work. “It’s not what we’re really good at, organizing baking contests and things like that,” Bauer said. “It wasn’t what we’re about.”

This time around, the Democrat and Chronicle’s new social gaming project, score!, is built around one of the newspaper’s core activities: political coverage. Launched in June, score! focuses on the November elections and consists mainly of politically themed web games and quizzes. One new game, Headline Hopper, has players propel little politicians through a landscape of quotes, Mario Brothers-style.

As in Picture the Impossible, players accumulate points and earn money for charity, and the profiles of score! players note whether they participated in Picture the Impossible, to build continuity between the two games.

This time, Bauer said, they expected the team loyalty that had powered Picture the Impossible to form around political parties, with the Democrats competing against the Republicans. But that team structure flopped when only four Republicans signed up. As a result, Bauer said, they’ve mostly abandoned the team focus. The in-person component of score! has also been scaled back; players can get “stalker” points for snapping photographs of politicians at local events, but Bauer said the challenge hasn’t really taken off. Part of the problem is that candidates are unpredictable, so it’s hard to get information about possible stalker events until the very last minute.

The election focus has been one of the strengths of score!, in part because it gives the game a natural theme that’s easy to build content around, and in part because the games build on the status that comes along with being well-informed about politics — and with having other people know that you are.

“In the forums they talk about how much they’ve learned about the election, and how they feel like smarter voters because of it,” Bauer said.

Players now need to log in to the game through Facebook, which has generated about a dozen complaints from people who can’t play — not enough to be a real concern. And the benefit of the Facebook platform is that it allows players to compare their scores with friends.

As with Picture the Impossible, Bauer said, the 2,300 score! players fall into three tiers: a smaller group of 250 hardcore players, a middle tier of casual players, and then the remainder — who scored a few points and then didn’t come back.

“That’s really our target, is the casual player,” Bauer said.

One of the biggest challenges of running games when you’re not a full-time gaming company is negotiating the relationship with the hardcore players versus the larger group of more casual ones. The most devoted players are also the ones who post the most complaints on forums and Facebook. “We have to keep reminding each other as a team [that] this is an initiative that is going to be constant on our site, and we can’t wear ourselves out catering to five people,” Bauer said.

At the same time, that small group of hardcore players is responsible for a lot of the games’ energy. “That’s where the conundrum is,” Bauer said. “We owe all of our success to those kinds of people.”

When score! ends, Bauer will evaluate the game’s analytics to see which parts of it generated enough engagement to make the time invested in it worthwhile, and continue to think about how to automate parts of the game to make it more sustainable. The next game will debut sometime early in the winter, Bauer said, and it may involve competition between players in Rochester and other cities.

So far, the Democrat and Chronicle is the only Gannett paper to implement a major gaming initiative, Bauer said. She said this was disappointing, but not surprising, since the success of Picture the Impossible didn’t translate into a bump in revenue. (Unlike Picture the Impossible, score! has advertising on its site.) As much as she believes in making social gaming part of a newspaper’s toolbox, Bauer said, “it certainly doesn’t produce a lot of revenue, and until it does, it’s not going to get a lot of attention.”

Why spreadable doesn’t equal viral: A conversation with Henry Jenkins


For years, academic Henry Jenkins has been talking about the connections between mainstream content and user-produced content. From his post as the founder and former co-director of the Comparative Media Studies program at MIT, Jenkins published Convergence Culture, which is about what happens when, as the book puts it, “old and new media collide.” It’s a tale of fan mashups and corporate reactions.

And now he’s back with a new catchphrase. If convergence culture was 2006, spreadable media is now. The argument: If it doesn’t spread, it’s dead. For things to live online, people have to share them socially. They also have to make them their own — which can mean anything from passing a YouTube clip along as a link to making a copycat video themselves.

But what does this mean for news? If news is growing more social, how does Jenkins’ notion of spreadability work for traditional media? And how can traditional media harness user energy to make content not just meaningful but also profitable?

These were some of the questions I had when I first heard the concept, which Jenkins and his collaborators first put out in a white paper in 2009. I’ve since had a chance to read the first few chapters of the book, due out in late 2011. Spreadable Media (coauthored with Sam Ford and Joshua Green) doesn’t mention traditional journalism. But having had a chance to work with Jenkins, who’s now a professor at USC, I wanted to see what spreadable media might mean for news. Here’s how Jenkins explained the idea’s implications for journalism in an email interview. Among the topics: why all journalists are citizen journalists, the conversations journalists might have with audiences, paywalls, and most-emailed lists.

NU: What is spreadable media?

HJ: The concept of spreadable media rests on the distinction between distribution (the top-down spread of media content as captured in the broadcast paradigm) and circulation (a hybrid system where content spreads as a result of a series of informal transactions between commercial and noncommercial participants). Spreadable media is media which travels across media platforms at least in part because people take it into their own hands and share it with their social networks.

This kind of informal circulation may be solicited or at least accepted by media producers as part of the normal way of doing business or it may take forms which get labeled piracy. Either way, the widespread circulation of media content through the conscious actions of dispersed networks of consumer/participants tends to create greater visibility and awareness as the content travels in unpredicted directions and encounters people who are potentially interested in further engagements with the people who produced it.

So, at its heart, our book is interested in the value being generated through this grassroots circulation and how various sectors of the media industry are being reconfigured in order to accept the help of grassroots intermediaries who help expand their reach to the public. Along the way, we dissect many of the myths about how media circulates and how value gets generated in the digital era.

NU: How does spreadable media relate to your term convergence culture?

HJ: Convergence culture starts by rejecting the technologically focused definition of convergence as the integration of media functions within a single media device — the magic black box — in favor of one which stresses the flow of media content across multiple media channels. Certainly the rise of the iPod, the iPhone, and the iPad has made the magical black box much closer to reality now than it was when I wrote Convergence Culture, but I would say we’ve had much more experience living in a convergence culture than living with convergence devices. We live at a moment where every story, image, or bit of information will travel across every available media platform, either through decisions made in corporate boardrooms or decisions made in consumers’ living rooms.

The book outlined what this means for entertainment, branding, education, politics, and religion, placing a strong emphasis on what I call participatory culture. Citizen journalism is the application of participatory culture to the news sector but similar kinds of trends are impacting each of these other spaces where media gets produced and distributed. The emphasis in that book though is on participation in the form of cultural production — people creating videos, writing fan fiction, and otherwise generating their own media.

Spreadable Media takes the convergence culture context as given. We are now half a decade deeper into the trends the first book describes. Since the book was published, we’ve seen the expansion of mobile communication, social network sites, Web 2.0, and the rise and fall of Second Life, all extending our understanding of participatory culture and transmedia communication. So, what are the consequences of those shifts for how information, brands, and media content circulate? We certainly are still interested in participatory models of cultural production, but we are now much more interested in acts of curation and circulation, which, on both an individual and an aggregated level, are impacting the communication environment.

NU: Let’s talk specifically about what spreadable media might mean for news. What are your thoughts on the way the news industry might make sense of this concept?

HJ: A central idea animating the book is “if it doesn’t spread, it’s dead.” There is a constant tension at this moment of media transition between wanting to lock down content and meter access on the one hand (a model based on “stickiness”) and wanting to empower consumers to help spread the word on the other (a model based on “spreadability”). We can see that tension in the desire to gate access to news content versus the mechanisms of spreading which characterize Twitter and blogs. Journalists have long embraced a central idea in this book — that content represents a resource which communities use to talk amongst themselves. Journalists need to know how they fit into those circuits.

In the book’s opening chapter, I reflect on the role of Twitter in the aftermath of the Iranian elections. I argue that its central role was not in helping to organize the protests but rather in getting information about what was happening out to the outside world and in increasing people’s emotional engagement with it. Twitter stepped in to bring what was happening in the streets of Tehran closer to people in the West — with key roles played by the Iranian diaspora in the United States and Europe, who helped to facilitate the circulation of this information. The general American public felt greater closeness to the people in Iran because they were learning about these events through the same tools they used to share cute cat pictures with their friends. And they felt a greater investment in what was happening because they were actively helping to alert others about the events.

As this unfiltered information was flowing through Twitter, those on the social networks started putting pressure on news agencies to provide more coverage. You could imagine Twitter as a self-contained news system, but the opposite happened: users adopted #cnnfail because they wanted the skills and resources that professional journalists could bring to the process. They were signaling how much they still relied on legacy media to sort through the pieces and help provide a context for the information being circulated. While it was framed as a critique of journalism, it was actually a call for help. News organizations need to be more alert in registering these signs of public interest and more nimble in responding to them.

NU: Are bloggers an example of people experimenting with media spreadability? And what do we do for news organizations that want to bring in all of that user engagement and monetize it?

HJ: We’ve long known that news stories generate conversations, that people cut out news articles to put on bulletin boards and refrigerators, and that we clip news stories and send them to friends. This happened in a pre-digital world, and it happens now with more speed and scope thanks to the affordances of digital networking tools.

Blogs originated as a tool for sharing links; Twitter is now used extensively to share links with other consumers. News sites which prevent the sharing of such content amongst readers may look like they are protecting the commercial interest of that content, but in fact they kill it, destroying its value as a cultural resource within networked communities and ensuring that the public will look elsewhere for news that can be spread.

In the book, we use the example of how the Susan Boyle video moved through the blog community, being situated into a range of different ongoing conversations wherever she was relevant — with science blogs talking about her vocal cords, church blogs organizing prayer groups, mommy blogs dealing with her role as a caretaker for her elderly mother, music blogs discussing her song choices, and fashion blogs talking about her make-over for the show. Every news story today spreads through these grassroots intermediaries and gets inserted into various conversations across a range of different communities. The better journalists understand how value gets created through this process, the more effective they will be both at serving their ever more diverse constituencies and at developing a business model which allows them to capture value through circulation.

NU: You say in your white paper and current drafts of the book that content that users can’t manipulate and whose intellectual property is controlled by organizations will be the least likely to spread. That seems to describe a typical news article, and maybe a typical news organization. How can news organizations make their content more spreadable?

HJ: Spreadability is partially about technical affordances. YouTube videos spread well because they allow users to embed them on their blogs and Facebook profiles. At the same time, the embedded video’s interface makes it easy for us to follow it back to its original context on YouTube. It is content which is designed to be spread.

Spreadability is also about social relations with consumers. Many of those who create spreadable content actively encourage readers to spread their materials, often directly courting them as participants in the process of distribution. We are certainly seeing news sites right now — Slashdot comes to mind — which encourage readers to gather and appraise content, but far fewer are encouraging us to help create awareness through actively circulating their content.

It is interesting to think about groups which have a strong investment in seeing content spread and a lower investment in controlling its distribution. Think about political campaigns with low budgets who want to maximize their reach to voters. Think about religious media who place a higher value on spreading the gospel than monetizing the circulation of information. Think of activist groups who want to reach beyond their core group of supporters. In each case, they build in direct appeals to their fans to help them spread the content rather than constructing prohibitions on grassroots circulation.

Right now, news organizations are caught between their civic mission, to meet the information needs of their communities, and their economic need to stay in business long enough to serve their publics.

NU: What does spreadable media mean to the conversations journalists need to have with their audiences?

HJ: As information spreads, it gets inserted into a range of conversations which help people to process the information and understand its value for them as members of a community. In the book, Sam Ford, my co-author, draws on his experience in the PR world to talk about companies who actively listen to and respond to what their consumers say about them. He argues that the conversations seeded by spreadable media are much richer ways to monitor public response than narrowly structured focus groups. And he cites some examples of companies which identified problems in their customer relations and rectified them as a result of listening closely to what consumers said about them.

Newspapers have historically relied on letters to the editor to perform some of these functions, but this focuses only on those groups who seek to influence their editorial decisions directly, while there are other things a news organization might learn by actively listening to the conversations people are having around and through the circulation of their content.

NU: Spreadable media seems to be a reaction to the idea that things are viral and that people have no agency. But doesn’t the whole idea of viral mean that people are actually taking action to share something? Don’t we want our news stories to be most-emailed and our videos to be viral?

HJ: Very much so. Viral media asks some of the same questions we are asking, having to do with how media content circulates through grassroots communities outside the direct control of the people who originate it. But the language of viral media mystifies how this process works. Many talk as if things just happened to “go viral” when they have no way to explain how or why the content has grabbed the public imagination. Other framings of “viral media” strip away the agency of the very communities whose circulation of the content they want to explain. It is a kind of smallpox-soaked blanket theory of media circulation, in which people become unknowing carriers of powerful and contagious ideas which they bring back to their homes and workplaces, infecting their friends and family.

Our work starts from the idea that people are making conscious decisions to aid the circulation of certain content because they see it as a meaningful contribution to their ongoing conversations, a gift which they can share with people they care about. As they circulate this content, they are first playing key roles in appraising its value at a time of exploding media options; they also help to frame the content, helping it fit better into ongoing social interactions; and they may build upon, appropriate, transform, and remix the content, further extending its shelf life and enabling its broader circulation.

NU: One of the things I found most fascinating about your current exploration was your distinction between ordinary Internet users, who operate according to the gift economy, and media companies that operate according to market logic. Can you explain?

HJ: Basically, spreadable media moves between commercial and noncommercial economies. For the producer, the content may be a commodity or a promotion; for the consumer, it is a resource or a gift. The producer appraises the transaction based on its economic value. While the consumer makes a decision about whether the price is too high for the value of the content, they are also making decisions based on the social or sentimental value of the content. When they pass that content along to their friends, they do so because they value their friends far more than because they want to promote the economic interests of producers. When they consume media, they often do so in order to have the currency they need in the social interactions that form around media.

Media producers need to understand the set of values and transactions which shape how their media flows in order to understand when and how it is appropriate to monetize the activities of their consumers. We are used to transforming commodities into gifts. We do it every time we go to a store to buy a bottle of wine to bring to a dinner party. We bought it as a commodity, we give it as a gift, and the moment of transformation comes when we remove the price tag. We need to better understand the same transformation as consumers take content from commercial sites and circulate it via Twitter or Facebook to their communities.

NU: If you had to project, what might this mean for user-generated content? And what happens when we start putting paywalls up on sites?

HJ: In the case of news, we might think about many different types of user-generated content. Often, we are talking about the citizen as reporter (especially in the case of hyperlocal news), producing content which can be uploaded to news sites. We might also think about the citizen as editor, determining which news matters to their community and passing it along in a more targeted way to their friends. We might think about the citizen as commentator, who responds to the news through what they write on their blogs or updates. We might also think of these media as amplifying citizens’ role as consumers, allowing them to more fully express demands for what should get more coverage, as occurred in the #cnnfail debates after the Iranian elections.

Right now, we dump all of this into a box called “citizen journalism,” which is in its own way as misleading as categories like “viral media.” We might start from the fact that journalists are themselves citizens, or that these groups are doing many things through their sharing of news, only some of which should be understood as producing journalism. Focusing on citizen journalism results in an oppositional framing of blogging as competing with professional news production. Spreadable media would push us to think about journalists and bloggers as each making a range of contributions through their participation in a larger civic ecology.

NU: And finally: How many people do you expect to actually engage in making media mashups? I see more people watching Auto-Tune the News mashup videos on YouTube than making their own media out of existing media.

HJ: Our book makes the point that there are many different forms of participation, some requiring more skills, more technical access, more community engagement than others. Spectacular forms of grassroots cultural production rest on one end of a continuum of different forms of community participation. So some people certainly will be mashing up the news, just as they are remixing songs, films, and television shows. And we can point to many exciting examples of political remix videos which emerge from people’s engagement with news and commentary — think about the recent mashup of Donald Duck and Glenn Beck.

But many more people will help to shape their news by appraising its value and passing it along to specific people or groups who they think will be interested in it. We all probably have friends or relatives who mostly communicate through forwarding things. They may or may not be exerting great selectivity in their curatorial roles, but they are helping to ensure the circulation of that information. More people in the future will be engaging with news on that level, and their acts of circulation will play a larger role in shaping the flow of information across the culture.

Photo by Joi Ito used under a Creative Commons license.

A handbook for community-funded journalism: Turning Spot.Us experience into lessons for others


In creating a new system to fund reporting directly by donations from a geographic or online community, Spot.Us broke some of the traditional rules of journalism — namely that reporting is funded through a combination of advertising dollars and subscriptions.

That was two years ago, and now a network of individual journalists and small news organizations are attempting to use Spot.Us as a model to find new ways to fund their work and strengthen their connections to the community. And what they need is a new set of rules.

As part of his fellowship at the Reynolds Journalism Institute, Spot.Us founder David Cohn is developing a handbook for community-funded reporting that will cover everything from how reporters can pitch stories to how to establish partnerships in the community to whether crowdfunding is right for a given project. I spoke with Cohn and Jonathan Peters, who are working together on the project. In their eyes, it’s as much an assessment of how Spot.Us methods work as it is a handbook.

“I don’t want it to evangelize Spot.Us,” Cohn told me. “I want it to evangelize the type of community-funded reporting of Spot.Us.”

Spot.Us has worked with more than 70 organizations, from MinnPost and Oakland Local to The New York Times. “In my experience so far, it’s been the journalism community that has been adopting the Spot.us model, not the journalism industry,” Cohn said.

Which is why the book will serve not only as a how-to, but also as something of a Hitchhiker’s Guide to the (Community-Funded Journalism) Galaxy: pointing out what has (and hasn’t) worked for Spot.Us, introducing the new players in community journalism and new methods of generating funding, and offering a helpful glossary of terms (the difference between micro-donations, crowdfunding, and crowdsourcing, for instance).

What they did not want to do, Peters says, is try to create a paint-by-numbers book that applies the same method to every community. “The community-funded model relies wholly on a very local focus, not only in the reporting that sites provide, but also in the structure of the site,” Peters said, adding that what works for one site may not work for another.

Only a few months into the project (they expect to be done by the spring), Cohn and Peters have found that one of the biggest questions the handbook can answer is how to explain the way community-funded reporting — and Spot.Us — works. For their research, the two are surveying reporters who have worked with Spot.Us to fund and report stories. “The most interesting thing to the two of us was that the majority of reporters who talked to us could not give an elevator speech to someone who does not know what Spot.Us does,” Peters said.

Making a pitch to an editor and convincing groups of people to help pay for a story are different things — largely because reporters tend to think journalism should be supported simply because it’s journalism, Cohn said. This is where a little entrepreneurship and the art of the sale come in: teaching journalists to articulate their goal and show that their work meets an identifiable need. Just as important as the pitch is knowing how much of a story to tease when trying to get funding. Cohn said reporters need to show what an investigation could reveal instead of giving up all the information their story will hold. Why would anyone pay to fund your story if you tell them the whole thing during the pitch?

Becoming something of a salesman and being more transparent in reporting are part of a broader question the handbook will deal with: Is community-funded journalism right for you? Those considerations, along with the amount of time it takes to raise money for reporting and having regular interaction with the audience, are key to whether a reporter will be successful working in the Spot.Us model, Peters said.

Just as important is being able to navigate the playing field. Peters said it’s important for journalists to be aware of the varying options for funding their work, whether it’s Kachingle and Kickstarter or GoJournalism (for Canadians).

Cohn and Peters say they don’t expect the handbook to be the definitive resource on community-funded reporting, but they expect it can help people who are curious. (As for the actual book part of the handbook, they expect to publish it online.) Cohn said a large part of what he does now is talk to others about how Spot.Us works and how it can be applied elsewhere. Now all of that will be in handy book form.

“The audience is — as far as we can tell — writing for reporters who want to work with people like Spot.us or GoJournalism, and don’t know what it’s like,” Peters said. “We can knock down barriers and misconceptions.”

How much can we trust e-edition numbers? Depends on the paper


The latest numbers from the Audit Bureau of Circulations, tracking from March 2009 to September 2010, show a major proliferation in the number of e-editions reported by newspapers. Nearly 450 papers currently have weekday e-editions, which together account for more than 2 million subscribers, a 47 percent increase since this time last year.

As print circulation falls, e-editions swell in numbers. Not so startling. But the data can be misleading: Ballooning e-edition numbers don’t necessarily point to wholesale reader rejection of print, or even widespread usage of e-editions. At some local newspapers, if you want a print subscription, the paper makes it very financially agreeable — and in some cases gives you no choice — to throw on an e-edition subscription as well.

Look no further than some of the smaller-market papers that cracked paidContent’s top-25 chart of newspaper e-edition subscriptions. Like, say, number 18, The Bend Bulletin, which grew from 1,108 e-edition subscriptions to 24,611 between Sept. 2009 and Sept. 2010. That’s an increase of over 2,000 percent (!) for a company that doesn’t circulate more than 35,000 weekday papers. But local Bulletin readers don’t even have the option of a print-only subscription, according to the paper’s website: It’s an e-edition or bust.
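
That growth claim is easy to sanity-check. Here is a minimal back-of-the-envelope sketch in Python, using only the subscription counts quoted above (the variable names are mine):

    # Bend Bulletin e-edition subscriptions, Sept. 2009 vs. Sept. 2010, per the figures above
    before, after = 1108, 24611
    pct_increase = (after - before) / before * 100
    print(f"{pct_increase:.0f}% increase")  # ~2121%, i.e. "over 2,000 percent"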

Or take number 25, The Schenectady Gazette. After launching a free site three years ago, the paper put up a paywall 18 months later and began offering a weekly print-plus-e-edition subscription package for one penny more per week than the print-only option — $4.00 a week versus $3.99. You’d be hard-pressed to find a better way to spend 52 cents a year.

Around the time the Gazette changed its subscription offering, weekly paid print circulation sat at 45,421. By September 2010, that number jumped by 16,052, nearly 35 percent. In roughly that same period, Gazette e-edition circulation increased by 17,796. The paper’s e-edition actually generates ad revenue by proving to potential advertisers that readers are local, Gazette general manager Dan Beck told me. “We have created, in an odd way, a more valuable reader to our advertisers,” Beck said. “We know they are our readers and they are local, they’re from here.”
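
The Gazette arithmetic checks out the same way. A minimal sketch in Python, assuming the print-plus-e-edition bundle is the option that carries the extra penny (that ordering is my inference from the “52 cents a year” line):

    # Bundle premium: one penny per week over print-only
    bundle, print_only = 4.00, 3.99   # dollars per week
    print(f"${(bundle - print_only) * 52:.2f} extra per year")  # ~$0.52

    # Circulation jump cited above
    base, jump = 45421, 16052
    print(f"{jump / base * 100:.0f}% increase")  # ~35%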

“My overall sense is that this is more about marketing and new, more favorable metrics for newspaper companies than any kind of dramatic change in reading habits,” says Newsonomics author and Lab contributor Ken Doctor. Indeed, it’s difficult to tell whether e-edition subscriptions equate in any way to usage. Doctor cites “the snowbird reader of a northern paper” as one possible explanation.

The traditional e-edition essentially replicates the print product in digital format; ABC numbers cover these replicas, plus non-replica e-editions, like The Wall Street Journal and the soon-to-be-paywalled New York Times. ABC tracking includes online-only and Kindle subscriptions, which exist on a different account than print subscriptions, and products like TimesReader or GlobeReader as well.

All but two of the top 25 saw percentage increases in that September-to-September period, a phenomenon Doctor partly attributes to the slew of subscription bundles that surfaced over the past year. He suggests the conventional e-edition isn’t attractive enough to compete with tablet versions as they continue to improve: “It’s a small, niche product, useful to those who like the newspaper in the format of the print paper. As tablets offer greater choice as digital news reading devices, e-editions will probably wither.”

We’ll see in March 2011, when ABC begins itemizing its e-edition circulation report by weekly subscription versus single-issue purchases, university subscriptions, and mobile readership.

Attention versus distraction? What that big NY Times story leaves out


Yesterday’s Sunday Times devoted the lead slot of its front page to a long examination of the effects of the web on the attention spans of teenagers. In the tradition (yes, it is now a tradition) of Nick Carr, the piece concludes that, essentially, our smartphones — and our Facebook and our YouTube and our web in general — are robbing kids of their ability to concentrate. Neuroplasticity! “Researchers say the lure of these technologies, while it affects adults too, is particularly powerful for young people,” the piece notes. “The risk, they say, is that developing brains can become more easily habituated than adult brains to constantly switching tasks — and less able to sustain attention.”

Rex Sorgatz summed it up like so: “‘Young people suck.’ –NYT.”

The human face of the epidemic is Vishal Singh, a seventeen-year-old from, naturally, Silicon Valley. “At the beginning of his junior year,” the Times reports, “he discovered a passion for filmmaking and made a name for himself among friends and teachers with his storytelling in videos made with digital cameras and editing software.” But that commitment to creation doesn’t transfer to schoolwork; though Vishal is entering a “pivotal academic year” in his life — his senior year, the year when colleges come calling and thus, ostensibly, futures are decided — he can’t seem to focus on the work he needs to do to do well.

Several teachers call Vishal one of their brightest students, and they wonder why things are not adding up. Last semester, his grade point average was 2.3 after a D-plus in English and an F in Algebra II. He got an A in film critique.

“He’s a kid caught between two worlds,” said Mr. Reilly [Vishal's principal at Woodside High School] — one that is virtual and one with real-life demands.

Two worlds. One real, the other digital. And in the space between them is Vishal — and, by implication, several other wayward members of the world’s first generation of digital natives, the kids who are, per the piece’s headline, “Growing Up Digital, Wired for Distraction.” But does that binary — the ‘two worlds’ thinking that pits the virtual realm against the ‘real,’ as if the two were engaged in an epic battle for dominance of that vast land that is Impressionable Youth — really explain what’s going on here? Does it, for example, explain the nail-it-to-fail-it range of Vishal’s academic performance? Maybe; there’s a chance that his F in Algebra II can indeed be blamed on some unholy union of YouTube/Facebook/Sir Berners-Lee. But, then, if distraction is a diffusive proposition — if it infects all areas of intellectual life indiscriminately, and thus, ostensibly, equally — then how do you explain the A in film critique? (Also: a class in film critique? Perhaps Vishal’s problem is simply that his school is set in a DeLillo novel.)

Attention and distraction

That’s not to discount the attention-fragmenting nature of the web. “Facebook is amazing because it feels like you’re doing something and you’re not doing anything,” Vishal’s best friend, Sam, says in the story, after blaming the site for his inability to finish books and, thus, for his lower-than-desired SAT scores. And a distraction Facebook most certainly is. The question, though, is: distraction from what? And also: What’s inherently wrong with distraction? It seems to me that the real dichotomy here — to the extent, of course, that it’s fair to break any complex problem into reductive dualities — is less a matter of focus vs. distraction, and more a matter of the digital age’s spin-off opposition: interest vs. non-interest. Caring vs…lack of.

We talk a lot about fragmentation in the online world — the unbundling of the news product, the scattering of audiences, the unraveling of publics, etc. And when we do, we tend to focus on the entropic implications of that shift: “Fragmentation,” of course, carries a whiff of nostalgia not just for the thing being fragmented, but for wholeness itself — for completeness, for community, for all that’s been solid. What that framing forgets, though, is that the other side of fragmentation can be focus: the kind of deep-dive, myopic-in-a-good-way, almost Zen-like concentration that sparks to life when intellectual engagement couples with emotional affinity. The narrows, to be Carrian about it, of the niche. And when that kind of focus springs to life — when interest becomes visceral, when caring becomes palpable, when you’re so focused on something that the rest of the world melts away — the learning that results tends to be rich and sticky and sweet. The kind that you carry with you throughout your life. The kind that becomes a part of you. The kind that turns, soon enough, into wisdom.

It’s a kind of learning, though, that can’t be forced — because it relies for its initial spark on something that is as ineffable as it is intense. Interest has a way of sneaking up on you: One day, you’re a normal person, caring about normal things like sports and music and movies — and the next a Beatles song comes on the radio, and suddenly you’re someone who cares not just about sports and music and movies, but also about the melodic range of the sitar. Even if you don’t want, necessarily, to be somebody who cares about the melodic range of the sitar. Interests are often liberating; occasionally, they’re embarrassing. Either way, you can’t control them. They, in fact, control you.

The general and the personal

And that, I’d wager, is the root of Vishal’s academic problems: not that he’s not smart — indeed, again, “one of their brightest students” — and not that he’s the victim of a mass outbreak of web-borne distraction (again, that A in film critique). His problem is both simpler and more serendipitous than that: He just doesn’t care about algebra.

Which is a problem, of course, shared by probably 99.9 percent of the population who have experienced the particular pain of the polynomial. Rare is the person who genuinely likes algebra; rarer still is the person who genuinely, you know, cares about it. But we learn it anyway — because that’s what we’re expected to do. Formal education, as we’ve framed it, is not only about finding ways to learn more about the things we love, but also, equally, about squelching our aversion to the things we don’t — all in the ecumenical spirit of generalized knowledge. We value the straight-A report card not just as a demonstration of indiscriminate ability, but also as evidence of indiscriminate discipline: mastery over apathy. (An A in English and in chemistry! You, little polymath, are prepared for polite society.)

What distinguishes Vishal’s apathy, though — and what makes it more anxiety-inducing than that of the algebraic apatheists in whose footsteps he follows — is that he is coming of age in the digital era. And the digital era is bringing a new kind of empowerment not just to interest, but to aversion. The web is a space whose very abundance of information — and whose very informational infrastructure — trains our attention to follow our interests. And vice versa. In that, it’s empowering information as a function of interest. It’s telling Vishal that it’s better to spend time with video than with Vonnegut — simply because he’s more interested in editing than in reading. Vishal needs no other justification for his choice; interest itself is its own acquittal.

And we’re seeing the same thing in news. While formal learning has been, in the pre-digital world, a matter of rote obligation in the service of intellectual catholicism — and news consumption has been a matter of the bundle rather than the atom — the web-powered world is creating a knowledge economy that spins on the axis of interest. Individual interest. The web inculcates a follow-your-bliss approach to learning that seeps, slowly, into the broader realm of information; under its influence, our notion of knowledge is slowly shedding its normative layers.

For the learner, of course, that is incredibly empowering. One minute, I’m looking up a recipe for spice-roasted sweet potatoes; the next, courtesy of a few link-clicks, I’m learning that sweet potatoes are used for dye in South America, and that there exists such a thing as sweet potato butter. Which is, in a word, awesome. But it also means, on the social scale, a new ability to explore our idiosyncrasies. From Wikipedia to topic pages, from social curation to the explosive little link, the global textbook that is the web takes on a self-guided brand of dynamism, a choose-your-own-adventure proposition fueled by whim and whimsy. It’s a bottom-up shift that our top-down education systems, and journalism along with them, are grappling with. Community, after all, needs the normative to function; the question is where we draw the line between the interest and the imperative. Because as much as we talk about consumers’ desire for a curated information experience — whether on an iPad or within social networks or on the branded pages of the open web — Vishal’s volatility suggests that what we really want from the digital world is something more basic: the permission to be impulsive.

Image by Mike Licht used under a Creative Commons license.

With its new food blog, WordPress gets into the content-curation game


This month, the company associated with one of the world’s most popular blogging platforms took its first, quiet step into the realm of for-profit content aggregation. FoodPress, a human-curated recipe blog, is a collaboration between blogging giant WordPress.com and Federated Media, a company that provides advertising to blogs and also brokers more sophisticated sponsorship deals. Lindt chocolate is already advertising on the site.

“We have a huge pool of really motivated and awesome food bloggers,” explained Joy Victory, WordPress’ editorial czar. (Yes, that is, delightfully, her official title.) Food was a natural starting place for a content vertical.

If the FoodPress model takes off, it could be the beginning of a series of WordPress content verticals covering different topics. WordPress.com currently hosts more than 15.1 million blogs, and when the FoodPress launch was announced, excited WordPress commenters were already asking for additional themed pages on subjects like art, restaurants, and beer.

(To clarify the sometimes confusing nomenclature: WordPress the blogging software — sometimes called WordPress.org — is free, open source, and installed on your own web server; we use it under the hood here at the Lab. WordPress.com is a for-profit venture offering a hosted version of WordPress software, owned by Automattic, which was founded by WordPress developer Matt Mullenweg. FoodPress is a WordPress.com project.)

For now, though, FoodPress’ creators are keeping their focus on their first blog and seeing what kind of traffic and advertising interest it attracts — the start-small-then-scale approach. And one question that remains to be answered in this first experimental effort is how WordPress bloggers will respond to the monetization of their content, and whether featured bloggers will want compensation beyond the additional traffic they’re likely to receive.

So far, the response from users has been overwhelmingly enthusiastic, Victory said. While the familiar issue of blogger compensation has been raised in response to the new venture, “our users don’t seem concerned so far,” she said. Instead, they’re largely excited about the possibility of even more themed sites. Advertising is already a part of WordPress.com, Victory pointed out, popping up on individual WordPress blogs unless a user is signed into WordPress itself.

WordPress’ venture into the editorial realm is significant on its own merits, but it also provides a fascinating case study in how media jobs have proliferated even as the news industry suffers. Victory used to work for metro newspapers, as did Federated Media’s Neil Chase. Now the two are working on a project that brings atomized pieces of user-created content together as a singular web publication. (FoodPress’ tagline: “Serving up the hottest dishes on WordPress.com.”)

Victory is optimistic about this “new way of looking at journalism” — even though, she said, “I consider myself someone who has left traditional journalism behind.” But while some of the FoodPress content is aggregated automatically, Victory also believes in the value of human curation in creating a good user experience — a sentiment shared among many in the burgeoning ranks of web curators. (Up to now, WordPress’ content curation has focused mainly on Freshly Pressed, a collection of featured blog posts on the site’s homepage, which Victory hand-selects daily.) And to bring more editorial oversight to FoodPress, Federated Media turned to one of its affiliated bloggers, Jane Maynard, to oversee the project — a paid, part-time position.

The blog won’t be just an experiment in curation, though; it will also be a case study in collaboration. “It’s the first step in what we think will be a critical partnership,” Chase noted — one that emerged organically from the collaboration-minded, conversational world of San Francisco-based startups. And just as Federated Media and Automattic have shared the duties of creating the site, he said, they will also share the revenue FoodPress generates.

As for the expectations for that revenue? Victory isn’t releasing traffic stats for FoodPress at this point — both she and Chase were hesitant to talk too much about a project still in beta testing — but noted that the site’s social media presence is growing, with, as of this posting, more than 1,400 Facebook “Likes” and 1,200 Twitter followers. The rest will, like a recipe itself, develop over time. “This is a little bit of an experiment for us,” Victory said. “And we’re hoping it’s wildly successful.”

The Washington Independent is folding, the CEO goes over the books and outlines the lessons he’s learned


On Wednesday, the nonprofit news and politics site The Washington Independent announced that, after just under three years of publishing, it’s closing shop. Its state-based sister site The New Mexico Independent said it would reduce its staff to just one part-time blogger.

News organizations open and close all the time, but this one hit home for me. I joined The Washington Independent in late 2007 as its managing editor and went on to be its top editor before joining the Lab. Several of my former colleagues have already lamented the loss of a valuable news organization; I could do the same, but in the spirit of the Lab, I’d like instead to look at what went wrong financially and what lessons could be learned by other nonprofit publishers from its experiences.

To get a sense of what happened, I spoke with my old boss, David Bennahum, the CEO of the American Independent News Network, which publishes the Washington and New Mexico sites plus a network of six other sites. Back in January, Bennahum told me that in the first five years the organization existed, he’d raised $11.5 million. With that kind of impressive fundraising, what went wrong? And what kind of outlook do other nonprofit news sites have? Here are three contributing factors to the closing:

The economic crisis

Nonprofit organizations are no less susceptible to the pain of an economic downturn. In the past two years, foundations and other donors regularly cited shrinking endowments as a reason for not renewing gifts or initiating new giving. That forced the network to spend less and still dip into reserves to cover costs. “It’s actually quite difficult to get these [nonprofit news sites] funded and get them to run,” he said, no matter the editorial success of the site. “It just never gets easier.”

In an email Bennahum sent to his staff, which he forwarded to me and is published in full at the end of this post, he broke down the numbers like this:

  • In 2006, 2007, and 2008 we raised $8.3 million and spent $6.5 million.
  • We ended 2008 with a surplus of $1.6 million.
  • In 2009 we raised $2.7 million and spent $3.1 million, eating into our reserves by $400,000.
  • In 2010 we will raise $1.9 million and spend $2.7 million; we expect this to leave us around $400,000 in reserves.
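
Read as a running balance, those figures trace the drawdown of the organization’s reserves. Here is a minimal sketch in Python of that arithmetic, using only the amounts in the list above (the yearly pairing and labels are mine):

    # Yearly (raised, spent), in millions of dollars, per the breakdown above
    reserves = 1.6  # surplus carried out of 2008
    for year, raised, spent in [(2009, 2.7, 3.1), (2010, 1.9, 2.7)]:
        reserves += raised - spent
        print(year, f"~${reserves:.1f}M in reserves")
    # 2009 -> ~$1.2M; 2010 -> ~$0.4M, the "around $400,000" cited above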

Going forward, each site in the network will need to generate enough fundraising to support its operations — successful fundraising for one site will no longer support other nodes in the network. The Washington and New Mexico sites were the two not pulling in enough to cover their costs independently. They also both launched in 2008 with a similar structural problem. In Washington’s case, a single $600,000 donation largely got the project off the ground. New Mexico launched with the backing of a small pool of donors. Those early donors didn’t renew for another year.

Not enough multi-year commitments

Bennahum says in hindsight, he should not have launched without multi-year commitments from big donors, even if it meant starting off smaller. For example, had he negotiated the $600,000 donation as three $200,000 grants over three years, The Washington Independent would have been smaller, but more stable. “We would have been half the size — which means today, instead of this position, I might have had several hundred thousand more dollars left,” he said. “We probably wouldn’t be closing The Washington Independent.”

The Washington Independent launched with two full-time editors, about ten reporters (a mix of full and part time) and a substantial freelance budget. By the closing announcement, the staff was down to one editor and four reporters. “You have to have your own diet. If someone puts out a big buffet in front of you, you have to think twice,” he said. “That’s a lesson I’ve learned that we’ll just never repeat again.”

Not growing gradually

Bennahum says the next year will focus on what he described as “a more diversified mix of journalism projects that work in recessionary times.” Earlier this year he launched a site called The American Independent that aggregates stories from around the network and runs original content from states without standalone sites. The idea is to produce new content without the commitment of launching a new state-based site. Currently, reporters file stories from North Carolina and Texas. The funding for those reporters ends in January 2011, which they understood when hired on contract. The site, though, will continue to operate.

“The incremental cost to adding reporters [to The American Independent] is potentially a much lower price than you could operate a newsroom,” he said. “It creates a much more organic and gentle growth path.”

The network will continue operating sites with small staffs of one to three reporters in Iowa, Colorado, Michigan and Minnesota. The Florida Independent received a grant from the Knight Foundation this year for $175,000 (the network’s largest donation) and will continue to operate with a staff of five.

Bennahum says he wants to experiment more with syndicating content across his sites to see if even a site with one reporter can serve a community. “It’s just a great thought experiment,” he said.

So what does The Washington Independent’s demise say about the growing nonprofit sector of journalism? Bennahum said that, for now at least, journalism still isn’t in the same category as the sort of nonprofit entities that get long-term foundation support, like hospitals or schools. Philanthropists are still watching where the news industry is headed.

“‘Let the market sort it out,’ is a lucid response, and not necessarily wrong,” he said. “For the foreseeable future, success [in nonprofit journalism] is going to be the exception to the rule.”

David Bennahum’s staff email:

Dear Team Members:

In four years, we have built an extraordinary news organization. We can proudly track 600,000 monthly readers, and cite dozens of stories that have had a demonstrable impact in the communities we serve. Along the way, we’ve also received over 40 awards for excellence in journalism. We have pioneered a model that melds the benefits of the Internet (speed, voice, dialogue) with the discipline of serious investigative journalism.

I am proud of all we have accomplished together.

It is all the more remarkable for how we’ve done this in the worst economic climate since the Great Depression.

So I want to be transparent with you in regards to our financial position, as it will have consequences for 2011. Here’s the arithmetic:

  • In 2006, 2007, and 2008 we raised $8.3 million and spent $6.5 million.
  • We ended 2008 with a surplus of $1.6 million.
  • In 2009 we raised $2.7 million and spent $3.1 million, eating into our reserves by $400,000.
  • In 2010 we will raise $1.9 million and spend $2.7 million; we expect this to leave us around $400,000 in reserves.

Thus we have, for two years, been self-financing from reserves accrued during better economic times. I am grateful for these reserves, and that we could use them judiciously over 24 months. However, it is no longer possible to self-finance the gap between income and expenditures, for the simple reason that our cash reserves are too limited to do so.

Much of the shortfall in our income has to do with the larger economic climate. But not all. Here are some other factors that I, frankly, underestimated: We agreed, in the past, to open programs without multi-year commitments from supporters. In some cases, these supporters have not renewed their commitments, yet we have kept operating the programs at close to scale. In particular, this is the case both for The New Mexico Independent and The Washington Independent.

We are approaching our fifth year of operations; some of our founding supporters have, understandably, felt that the time has arrived to shift their support elsewhere. This is a relatively predictable pattern in philanthropy: 3-5 years of support from any given source is a safe assumption. Replacing this support with new support requires a 9-18 month development cycle. In this economic climate, it is closer to 18 months. The net result is that we see, in addition to a shortfall, our most conservative estimates actually coming true. For instance, in the summer of 2009 we did a worst case scenario for 2010, with regards to income, and projected $1.9 million in revenues. This is precisely what happened.

So going forward, we must adopt a new set of rules, to ensure our overall viability through an economic crisis that persists, and may persist for several more years:

Institute “pay go” budgets: programs must be supported. When they are not, they have to be either closed or operated at the level being supported. In the case of new programs, require multi-year commitments as a precondition for operations. This is what we have successfully done in Florida, where the program has two year commitments.

Be more innovative in terms of leveraging the “network effect” to help smaller programs operate with limited budgets. We pioneered this in Minnesota, where we’ve learned to operate a robust site with one full time reporter. The site is successful thanks, in part, to the way we can syndicate content throughout the network from our sister sites.

Using this framework, there are two programs that, unfortunately, are no longer sustainable at their current levels: The New Mexico Independent and The Washington Independent.

In the case of New Mexico, we are going to institute the Minnesota model, with the aim of working to rebuild support over time. In the case of The Washington Independent, we are going to merge the site with The American Independent, and now have one national place (instead of two) for all our reporting. Over time, we aim to build up our reporting capacity in Washington as support develops.

And going forward, we will be looking to a different architecture with regards to how we create new sites: more of our programs will live as “state pages” on AmericanIndependent.com rather than as stand alone websites. This will provide us with more flexibility and leave us less vulnerable to sudden changes in support levels.

More details in terms of how these changes will affect you will be forthcoming shortly from the editorial team.

I know that this news is hard, and the decisions that led to this did not come easily. We have learned to work with less, and done so admirably, but I am taking the prudent course that will ensure our network and its mission can thrive. And if things improve faster than anticipated, I look forward to having that good problem on our hands.

Please know that you can come to me with any questions about this situation.

Thank you.

Best,
David

Crunching Denton’s Ratio: What’s the return on paying sources?


There was a lot of buzz on Twitter yesterday about Paul Farhi’s piece in The Washington Post on checkbook journalism — in particular the way a mishmash of websites, tabloids, and TV news operations puts money in the hands of the people they want to interview. (With TV, the money-moving is a little less direct, usually filtered through payments for photos or videos.)

But, just for a moment, let’s set aside the traditional moral issues journalists have with paying sources. (Just for a moment!) Does paying sources make business sense? Financially speaking, the justification given for paying sources is to generate stories that generate an audience — with the hope that the audience can then be monetized. Does it work?

There’s not nearly enough data to draw any real conclusions, but let’s try a little thought experiment with the (rough) data points we do have, because I think it might provide some insight into other means of paying for content. Nick Denton, the head man at Gawker Media and the chief new-media proponent of paying sources, provides some helpful financial context:

With the ability to determine instantly how much traffic an online story is generating, Gawker’s Denton has the pay scale almost down to a science: “Our rule of thumb,” he writes, “is $10 per thousand new visitors,” or $10,000 per million.

What strikes me about those numbers is how low they are. $10K for a million new visitors? There aren’t very many websites out there that wouldn’t consider that an extremely good deal.

Let’s compare Denton’s Ratio to the numbers generated by another money-for-audience scheme in use on the Internet: online advertising. After all, lots of ads are sold using roughly the same language Denton uses: the M in CPM stands for thousand. Except it’s cost per thousand impressions (a.k.a. pageviews), not cost per thousand new visitors, which would be much more valuable. What Denton’s talking about is more like CPC — cost per click, which sells at a much higher rate. (Those new visitors aren’t just looking at an ad for a story; they’re actually reading it, or at least on the web page.) Except it’s even more valuable than that, since there’s no guarantee that the person clicking a CPC ad is actually a “new” visitor. Let’s call what Denton’s talking about CPMNV: cost per thousand new visitors.

CPC rates vary wildly. When I did a little experiment last year running Google AdWords ads for the Lab, I ended up paying 63 cents per click. I ran a similar experiment a few months later with Facebook ads for the Lab, and the rate ended up being 26 cents per click.

What Denton is getting for his $10 CPMNV is one cent per click, one cent per new visitor. It’s just that the click isn’t coming from the most traditional attention-generating tool, an ad — it’s coming from a friend’s tweet, or a blogger’s link, or a mention on ESPN.com that sends someone to Google to search “Brett Favre Jenn Sterger.”

Doing the pageview math

And that $10 CPMNV that Denton’s willing to pay is actually less than the return he gets for at least some of his source-paid stories. Take the four Gawker Media pieces that the Post story talks about: the original photo of singer Faith Hill from a Redbook cover, to show how doctored the image was for publication; photos and a narrative from a man who hooked up with Senate candidate Christine O’Donnell; the “lost” early version of the iPhone 4 found in a California bar; and voice mails and pictures that allegedly show quarterback Brett Favre flirting with a woman named Jenn Sterger, who is not his wife. Gawker publishes its pageview data alongside each post, so we can start to judge whether Denton’s deals made financial sense. (Again, we’re talking financial sense here, not ethical sense, which is a different question.)

Faith Hill Redbook cover: 1.46 million pageviews on the main story, and about 730,000 pageviews on a number of quick folos in the days after posting. Total: around 2.2 million pageviews, not to mention an ongoing Jezebel franchise. Payment: $10,000.

Christine O’Donnell hookup: 1.26 million pageviews on the main story, 617,000 on the accompanying photo page, 203,000 on O’Donnell’s response to the piece, 274,000 on Gawker’s defense of the piece. Total: around 2.35 million pageviews. Payment: $4,000.

“Lost” iPhone: 13.05 million pageviews on the original story; 6.1 million pageviews on a series of folos. Total: around 19.15 million pageviews. Payment: $5,000.

Brett Favre/Jenn Sterger: 1.73 million pageviews on the first story, 4.82 million on the big reveal, 3.99 million pageviews on a long line of folos. Total: around 10.54 million pageviews. Payment: $12,000.

Let’s say, as a working assumption, that half of all these pageviews came from people new to Gawker Media, people brought in by the stories in question. (That’s just a guess, and I suspect it’s a low one — I’d bet it’s something more like 70-80 percent. But let’s be conservative.)

Expected under the Denton formula:
Faith Hill: 1 million new visitors
O’Donnell: 400,000 new visitors
iPhone: 500,000 new visitors
Favre: 1.2 million new visitors

Guesstimated real numbers:
Faith Hill: 1.1 million new visitors
O’Donnell: 1.17 million new visitors
iPhone: 9.56 million new visitors
Favre: 5.27 million new visitors

Again, these are all ham-fisted estimates, but they seem to indicate that at least three of the four stories significantly overperformed Denton’s Ratio.
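
To make that comparison concrete, here's the same arithmetic as a quick Python sketch. The payments and pageview totals are the ones quoted above, and the 50 percent new-visitor share is the same working guess used in the text, not a number from Gawker:

# A back-of-the-envelope check of Denton's Ratio, using the payments and
# pageview totals quoted above. The 50 percent new-visitor share is an
# assumption carried over from the text, not a Gawker figure.

RATE_PER_NEW_VISITOR = 10 / 1000   # Denton's $10 per thousand new visitors
NEW_VISITOR_SHARE = 0.5            # assumed fraction of pageviews from new visitors

stories = {
    "Faith Hill": {"payment": 10_000, "pageviews": 2_200_000},
    "O'Donnell":  {"payment": 4_000,  "pageviews": 2_350_000},
    "iPhone":     {"payment": 5_000,  "pageviews": 19_150_000},
    "Favre":      {"payment": 12_000, "pageviews": 10_540_000},
}

for name, s in stories.items():
    expected = s["payment"] / RATE_PER_NEW_VISITOR   # new visitors "bought" at $10 CPMNV
    estimated = s["pageviews"] * NEW_VISITOR_SHARE   # guesstimated actual new visitors
    print(f"{name}: expected {expected:,.0f} vs. estimated {estimated:,.0f} "
          f"({estimated / expected:.1f}x)")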

Reaching new audiences

The primary revenue input for Gawker is advertising. They don’t publish a rate card any more, but the last version I could find had most of their ad slots listed at a $10 CPM. Who knows what they’re actually selling at — ad slots get discounted or go unsold all the time, many pages have multiple ads, and lots of online ads get sold on the basis of metrics other than CPM. But with one $10 CPM ad per pageview, the 2.2 million pageviews on the Faith Hill story would drum up $22,000 in ad revenue. (Again, total guesstimate — Denton’s mileage will vary.)
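
For what it's worth, that guesstimate is simple enough to sketch too, assuming one $10-CPM ad impression per pageview:

# Rough ad-revenue guesstimate under the same assumptions: one $10-CPM ad
# impression per pageview. Illustrative only -- not Gawker's actual books.
CPM = 10.0  # dollars per thousand ad impressions

def ad_revenue(pageviews: int, cpm: float = CPM) -> float:
    """Revenue from a single ad slot served on every pageview at the given CPM."""
    return pageviews / 1000 * cpm

print(ad_revenue(2_200_000))   # Faith Hill: ~$22,000 against a $10,000 payment
print(ad_revenue(10_540_000))  # Favre: ~$105,000 -- if those pages had carried ads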

Aside: Denton has said that these paid-for stories are “always money-losers,” and it’s true that pictures of Brett Favre’s manhood can be difficult to sell ads next to. Most (but not all) of those 10.54 million Brett Favre pageviews were served without ads on them. But that has more to do with, er, private parts than the model of paying sources.

But even setting aside the immediate advertising revenue — the most difficult task facing any website is getting noticed. Assuming there are lots of people who would enjoy reading Website X, the question becomes how those people will ever hear of Website X. Having ESPN talk about a Deadspin story during Sportscenter is one way. Having that Redbook cover emailed around to endless lists of friends is another. Gawker wants to create loyal readers, but you can only do that from the raw material of casual readers. Some fraction of each new flood of visitors will, ideally, find they like the place and want to stick around.

Denton publishes up-to-date traffic stats for his sites, and here’s what the four in question look like:

It’s impossible to draw any iron-clad conclusions from these squiggles, but in the case of Jezebel and Deadspin, the initial spike in traffic appears to have been followed by a new, higher plateau of traffic. (The same seems true, but to a lesser extent, for Gizmodo — perhaps in part because it was already much more prominent within the gadget-loving community when the story broke than, for example, 2007-era Jezebel or 2010-era Deadspin were within their target audiences. With Gawker, the O’Donnell story is too recent to see any real trends, and in any event, the impact will probably be lost within the remarkable overall traffic gains the site has seen.)

Fungible content strategies

I’ve purposefully set aside the (very real!) ethics issue here because, when looked at strictly from a business perspective, paying sources can be a marker for paying for content more generally. From Denton’s perspective, there isn’t much difference between paying a source $10,000 for a story and paying a reporter $10,000 for a story. They’re both cost outputs to be balanced against revenue inputs. No matter what one thinks of, say, Demand Media, the way they judge content’s value — how much money can I make off this piece? — isn’t going away.

Let’s put it another way. Let’s say a freelance reporter has written a blockbuster piece, one she’s confident will generate huge traffic numbers. She shops it around to Gawker and says it’ll cost them $10,000 to publish it. That’s a lot of money for an online story, and Denton would probably do some mental calculations: How much attention will this story get? How many new visitors will it bring to the site? What’s it worth? I’m sure there are some stories where the financial return isn’t the top factor — stories an editor just really loves and wants to publish. But just as the Internet has turned advertising into an engine for instantaneous price matching and shopping into an engine for instantaneous price comparison, it breaks down at least some of the financial barrier between journalist-as-cost and source-as-cost.

And that’s why, even beyond the very real ethical issues, it’s worth crunching the numbers on paying sources. Because in the event that Denton’s Ratio spreads and $10 CPMNV becomes a going rate for journalists as well as sources, that means for a writer to “deserve” a $50,000 salary, he’d have to generate 5 million new visitors a year. Five million is a lot of new visitors.

There’s one other line Denton had in the WaPo piece that stood out to me:

“I’m content for the old journalists not to pay for information. It keeps the price down,” Denton writes in an exchange of electronic messages. “So I’m a big supporter of that journalistic commandment – as it applies to other organizations.”

When we think of the effects of new price competition online, we often think of it driving prices down. When there are only a few people competing for an advertising dollar, they can charge higher rates; when there are lots of new competitors in the market, prices go down. But Denton’s basically arguing the equally logical flipside: I can afford to pay so little because there aren’t enough other news orgs competing for what sources have to offer. Let’s hope we don’t get to that same point with journalists.

The neverending broadcast: Frontline looks to expand its docs into a continual conversation


Frontline, PBS’s public affairs documentary series, has one of the best reputations in the business for the things that journalism values most highly: courageous reporting, artful storytelling, the kind of context-heavy narrative that treats stories not simply as stories, but as vehicles of wisdom. It’s a “news magazine” in the most meaningful sense of the term.

But even an institution like Frontline isn’t immune to the disruptions of the web. Which is to say, even an institution like Frontline stands to benefit from smart leveraging of the web. The program’s leadership team is rethinking its identity to marry what it’s always done well — produce fantastic broadcasts — with something that represents new territory: joining the continuous conversation of the web. To that end, Frontline will supplement its long-form documentaries with shorter, magazine-style pieces — which require a shorter turnaround time to produce — and with online-only investigations. (The site’s motto: “Thought-provoking journalism on air and online.”)

But it’s also expanding its editorial efforts beyond packaged investigations, hoping to shift its content in a more discursive direction. Which leads to a familiar question, but one that each organization has to tackle in its own way: How do you preserve your brand and your value while expanding your presence in the online world?

One tool Frontline is hoping can help answer that question: Twitter. And not just Twitter, the conversational medium — though “we really want to be part of the journalism conversation,” Frontline’s senior producer, Raney Aronson-Rath, told me — but also Twitter, the aggregator. This afternoon, Frontline rolled out four topic-focused Twitter accounts — “micro-feeds,” it’s calling them:

Conflict Zones & Critical Threats (@FrontlineCZCT), which covers national security and shares the series’ conflict-zone reporting;

Media Watchers (@FrontlineMW), which tracks news innovation and the changing landscape of journalism;

Investigations (@FrontlineINVSTG), which covers true crime, corruption, and justice — spotlighting the best investigative reporting by Frontline and other outlets; and

World (@FrontlineWRLD), which covers international affairs.

The topic-focused feeds are basically a beat system, applied to Twitter. They’re a way of leveraging one of the core strengths of Frontline’s journalism: its depth. Which is something that would be almost impossible for Frontline, Aronson-Rath notes, to achieve with a single feed. So “we decided that the best thing for us was to be really intentional about who we were going to reach out to and what kind of topics we were going to tweet about — and not just have it be a promotional tool.”

Each feed will be run by a two-person team, one member from the editorial side and the other from the promotional side — under the broad logic, Aronson-Rath notes, that those two fields are increasingly collapsing into each other. And, even more importantly, that “all the work that we do in the social media landscape is, by its very essence, editorial.” Even something as simple as a retweet is the result of an editorial decision — and one that requires the kind of contextual judgment that comes from deep knowledge of a given topic.

So Frontline’s feed runners, Aronson-Rath notes, “are also the people who have, historically, been working in those beats in Frontline’s broadcast work.” (Frontline communications manager Jessica Smith, for example, who’ll be helping to run the “Conflict Zones” feed, previously covered that area by cultivating the conversation between Frontline and the national security blogosphere as part of the program’s earlier web efforts.) In other words: “These guys know what they’re doing on these beats.”

To that end, the teams’ members will be charged with leveraging their knowledge to curate content from the collective resources of all of Frontline’s contributors — from reporters to producers, public media partners to internal staff — and, of course, from contributors across the web. The teams will work collaboratively to produce their tweets (they’ll even sit next to each other to maximize the teamwork). And some feeds will contain not just curated content, but original reporting as well. Frontline reporters Stephen Grey and Martin Smith are about to head to Afghanistan; while they’re there, they’ll attempt to tweet from @FrontlineCZCT whenever possible. (They’ll tweet from personal feeds, which the @FrontlineCZCT curators will pull into the Frontline-branded feed.)

The broad idea behind the new approach is that audiences identify with topics as much as they do with brands. And there’s also, of course, the recognition of the sea of material out there which is of interest to consumers, but which ends up, documentary filmmaking being what it is, on the cutting-room floor. The new approach, it’s hoped, will give Frontline fans a behind-the-scenes look into the film production process. “You wouldn’t actually know where Frontline’s reporting teams are right now,” Aronson-Rath points out. “You only know when we show up.” Now, though, “when a team goes into Afghanistan, we’re going to let you know where they are. We’re going to give you some intelligence about what they’re doing. And it’ll be a completely different level of a conversation, we’re hoping.”

It’ll also be a different level of engagement — for Frontline’s producers and its consumers. It’s a small way of expanding the idea of what a public affairs documentary is, and can be, in the digital world: a process, indeed, as much as a product. “We think,” Aronson-Rath says, “that this is going to help keep our stories alive.”

Google News experiments with metatags for publishers to give “credit where credit is due”


One of the biggest challenges Google News faces is one that seems navel-gazingly philosophical, but is in fact completely practical: how to determine authorship. Much of the glut of information on the web is, if not completely duplicative, then at least derivative of a primary source. Google is trying to build a way to bake an article’s originality into its no-humans-used algorithm.

Today, it’s rolling out an experiment that hopes to tackle the “original authorship” problem: two new metatags, syndication-source and original-source, intended to bake authorship attribution, via URLs, into the back end of news on the web. Though the tags will work in slightly different ways, Googlers Eric Weigle and Abe Epton note in a blog post, “for both the aim is to allow publishers to take credit for their work and give credit to other journalists.”

Metatags are just one of the many tools Google uses to determine which articles most deserve news consumers’ attention. They work, essentially, by including data about articles within webpages, data that help inform Google’s search algorithms. Google itself already relies on such tagging to help its main search engine read and contextualize the web. (Remember Rupert Murdoch’s so-far-unrealized threats to opt out of Google searches? He would have done it with a noindex tag.)

The tags are simple lines of HTML:

<meta name="syndication-source" content="http://www.example.com/wire_story_1.html">

<meta name="original-source" content="http://www.example.com/scoop_article_2.html">

And they’ll work, Weigle and Epton explain, like this:

syndication-source indicates the preferred URL for a syndicated article. If two versions of an article are exactly the same, or only very slightly modified, we’re asking publishers to use syndication-source to point us to the one they would like Google News to use….

original-source indicates the URL of the first article to report on a story. We encourage publishers to use this metatag to give credit to the source that broke the story. We recognize that this can sometimes be tough to determine. But the intent of this tag is to reward hard work and journalistic enterprise.

(This latter, original-source, is similar to Google’s canonical tag — but original-source will be specific to Google News rather than all of Google’s crawlers.)

Google News is asking publishers to use the new tags under the broad logic that “credit where credit is due” will benefit everyone: users, publishers, and Google. A karma-via-code kind of thing. So, yep: Google News, in its latest attempt to work directly with news publishers, is trusting competing news organizations to credit each other. And it’s also, interestingly, relying on publishers to take a more active role in developing its own news search algorithms. In some sense, this is an experiment in crowdsourcing — with news publishers being the crowd.

At the moment, there are no ready-made tools for publishers to use these tags in their webpages — although one presumes, if they get any traction at all, there’ll be a plugin for many of the various content management systems in use at news organizations.
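
Purely as an illustration, a hook like the one below is roughly what such a plugin might emit. The Article fields and the function are hypothetical, invented for this sketch; only the two metatag names and the example URLs come from Google's announcement:

# A hypothetical CMS hook that emits the two Google News metatags.
# The Article fields and this function are invented for illustration;
# only the metatag names and example URLs come from Google's post.
from dataclasses import dataclass
from html import escape
from typing import Optional

@dataclass
class Article:
    syndicated_from: Optional[str] = None  # preferred URL, if this copy is syndicated
    credits_source: Optional[str] = None   # URL of the outlet that broke the story

def source_metatags(article: Article) -> str:
    """Return the <meta> lines to drop into the page <head>."""
    tags = []
    if article.syndicated_from:
        tags.append(f'<meta name="syndication-source" content="{escape(article.syndicated_from)}">')
    if article.credits_source:
        tags.append(f'<meta name="original-source" content="{escape(article.credits_source)}">')
    return "\n".join(tags)

# An article crediting the outlet that broke the story:
print(source_metatags(Article(credits_source="http://www.example.com/scoop_article_2.html")))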

And the tags, for any would-be Google Gamers out there, won’t affect articles’ ranking in Google News — at least not yet. (Sorry, folks!) What it will do, however, is provide Google with some valuable data — not just about how its new tags work, but also about how willing news publishers prove to be when it comes to the still-touchy process of credit-giving. That’s a question Google News has been trying to tackle for some time. “We think it is a promising method for detecting originality among a diverse set of news articles,” the tags’ explanation page notes, “but we won’t know for sure until we’ve seen a lot of data. By releasing this tag, we’re asking publishers to participate in an experiment that we hope will improve Google News and, ultimately, online journalism.”

“That heady feeling of being totally integrated”: The elusive promise of community, flattened and “real”


In the future-of-journalism business, we’re obsessed with adoption: getting online, getting hip to the web, leaving old analog practices behind, embracing the interactivity of social media. For a long time, not getting online — not getting hip to the digital program — seemed the province of clueless curmudgeons, middle-aged city desk editors, and Andrew Keen. Rightly, I think, we’ve devoted most of our energy to figuring out the details of what Jay Rosen has called “the migration point of the press tribe.” Getting to the other side of the chasm means getting wired in.

One of the things I always loved about Scott Rosenberg’s book Say Everything was that it covered enough historical time that it was as much a book about blogs ending as it was a book about the adoption of blogging. Over the last few weeks, we’ve been lucky enough to read several fantastic pieces that I think speak to this question of “getting offline” in ways that go beyond the usual curmudgeonly prattle. Two writers went down this road voluntarily: Marc Ambinder wrote a farewell post called “I am a Blogger No Longer,” and Zadie Smith, in a review of The Social Network, referred to herself as a 1.0 person living in a 2.0 world, a person who killed her Facebook page after a few weeks. A third blogger, however — Ruth Gledhill of The Times of London — was forced to shut down her blog when the newspaper she worked for went behind a paywall. No openness to the Internet, no point in running a blog.

For me, it was Gledhill’s comments about “life behind the paywall” that got me thinking. “In one sense,” she wrote:

I have my ‘life’ back as my blog took up all of my waking hours when I wasn’t writing news stories and I was neglecting our son and other areas of my life outside work. It was definitely an addiction. When I was wired up, I felt physically part of the internet, the blogosphere. I still miss that heady feeling of being totally integrated with the ‘ether’.

Ambinder’s comments about non-blog journalism being “ego free” may have garnered the most attention on Twitter, but I think Robin Sloan of Snarkmarket is right when he flags this as the piece’s key point. Ambinder’s point intersects well with Gledhill’s:

The mere fact that online reporters feel they must participate in endless discussions about these subjects is something new, a consequence of the medium, and is one reason why it can be so exhausting to do primarily web journalism. The feedback loop is relentless, punishing.

The fact that one of these comments is primarily positive (“wired up,” “physically part of the internet,” “heady feeling,” “totally integrated”) while the other is negative (“endless discussions,” “exhausting,” “relentless,” “punishing”) makes it clear to me that both writers are talking about the same thing. They’re talking about an intensive process of speaking and listening, grounded in a social network that is itself embedded within a dynamic community. In both cases, the journalist is open, responsive, locked in…and open and responsive to a network of ultimately real people, not to some abstract entity that looms just over your left shoulder. This would be a hard feeling to describe to someone who had never Tweeted, blogged, surfed an RSS feed, or gotten lost on Facebook, but if you’ve gotten this far you probably have some idea of what I’m talking about. There’s a certain frisson there. I can actually feel it as I write this post.

Having spent many years teaching and befriending journalists, and having participated in some poorly defined acts of citizen journalism myself, I’d say that people generally go into journalism for a number of reasons. I’ve found that my would-be journalism students are usually curious. They want to get to the bottom of things; they deal in practical reality, not theory; and they (let’s face it) love to snoop. They’re practical, inquisitive, fact-minded folks.

In addition to them, though, I know a number of journalists who went into the industry because their communicative work gave them the chance to ground themselves in a particular community, to embed themselves within a particular public. They want to stand near the center of the communications circuit. They want to listen to people and tell them things, all at the same time. They want to learn new things, things that matter to individuals and groups, and then tell people about them. They want to know that they’ve made a difference, that the people have heard them.

One of the things I think you realize as a journalist, however, is that your “public” quickly gets reduced to your beat, and your community most often consists of folks we might call “sources” (an ugly phrase). In everyday terms, the best journalists spend most of their time talking to a rather limited group of people — and even when that circuit of people expands they’re still primarily dealing with people they usually refer to as a “source.” Journalists are workers, and as workers, they become attuned to practices that make the most logical sense, that help them do their job, and get them out the door headed towards home as quickly as possible. For journalists, the practical necessities of journalism narrow the scope of the public.

This is why I think so many journalists get so excited about the social possibilities of digital technology. In the most basic sense, “the shock of community” that the Internet provides gets represented by quantitative audience metrics. Whatever audience-tracking tools may or may not be doing to the editorial process, there’s no mistaking the fact that when reporters first encounter those heady sheets of Omniture data, it blows their minds. “Finally! The invisible audience has returned! These are the people I cared about when I first went into reporting…I forgot about them — but here they are!” In more poetic terms, it’s what Gledhill talks about when she writes that “I felt physically part of the internet, the blogosphere…totally integrated with the ‘ether’.” It’s not just metrics, but it’s comments, links, email, and conversation.

When I was doing research in Philadelphia, this is how a local journalist/blogger described the evolution of his blog:

…the key lesson is that my blog got picked up and accepted as being an authentic part of the blogging community, which in his case was the left-wing blogosphere. And the way I did that was to link to these other blogs, to engage with them, and to seek them out. Some of our other blogs that are run by journalists are struggling with how to gain that acceptance. I remember a moment in September 2003 when one of my posts was linked by the leftwing website Buzzflash [which was popular at the time]. Comments came rolling in. Emails to me went through the roof — that was the kind of national attention I was looking for!

Ambinder, on the other hand, points to the aftermath of that social-network high: the endless comment moderation, the exhaustion that digital immersion can cause. And Zadie Smith goes one step further. For Smith, the community journalists have been so excited to rediscover isn’t actually real. It’s limited. It’s flattened. On Facebook,

If the aim is to be liked by more and more people, whatever is unusual about a person gets flattened out. One nation under a format. To ourselves, we are special people, documented in wonderful photos…Software may reduce humans, but there are degrees.

Smith’s point is philosophical: digital technology reduces us. Like any grand philosophical point, it’s ultimately unprovable, which is why I’ve tried to come at it from an oblique angle, by talking about publics and journalism. Does online journalism give us a community that’s more real, or less real, than the one we leave behind? I think that digital technology does flatten people. But it flattens more than just people. It flattens objects, concepts, publics, and relationships as well. And it’s not just digital technology that flattens things; the daily act of working, of day-to-day practical living flattens things too.

Reporters may go into journalism to be with the public; they eventually find beats and sources and the daily grind instead. Reporters may go online to find a community more responsive than the one they encounter in their daily work, but it’s a community that can be exhausting, pummeling, and not quite real. So get offline if you wish. Get online if you can. But in either case, never make the mistake of thinking that you’ve found a community, a public, a reality, that’s more authentic than the one you’ve left behind. We can’t will authentic community into being. It sort of sneaks up on us. And just as quickly — as soon as we turn our heads — it’s gone.

Photo by Matthew Field used under a Creative Commons license.

Sign up for our daily Nieman Journalism Lab email


Since we launched the Lab in 2008, we’ve tried to make our content accessible in lots of different ways: via RSS, Twitter, Facebook, Kindle, iPhone app, and more.

But, for whatever reason, we’ve skipped out on the most basic of electronic delivery mechanisms: email.

We’ve finally remedied that. You can now sign up for our daily Lab email. Each afternoon, if we’ve published something new in the previous 24 hours, you’ll get one email with all our new posts. The emails look like this.

We may occasionally send out other important announcements about the Lab through the list, but that’ll be very occasionally — this is really just a more convenient way to get our content delivered to the inbox where you spend your waking hours. Here’s the signup form:

And while you’re here, why not like us on Facebook? We’d appreciate it.

Comments and free samples: How the Honolulu Civil Beat is trying to build an audience (and its name)


“You’re starting from absolute scratch. That’s a big hill to climb.”

That’s not an excuse, but it is the reality of the news startup that John Temple is describing. Temple is the editor of the Honolulu Civil Beat, the online-only news source that made a big splash earlier this year because of its pay-first mentality. As envisioned by Temple, and by Civil Beat founders Pierre Omidyar and Randy Ching, most of the content on the Civil Beat site sits behind a paywall.

As far as startups go, the Civil Beat had news futurists curious about whether a media organization could get readers to pay for news upfront — particularly since Civil Beat has the advantage/disadvantage of starting from a paid subscription model out of the box, as opposed to introducing one after the fact. The big question — it almost seems like a sphinxian riddle — is how do you get people to pay for your work if they can’t readily access it?

In the first six months, the answer seems to be a lot of hustle on the part of Temple and his staff. They’ve aggressively pursued coverage on land use and money issues, placed an emphasis on data, and are engaging readers on and offline. And one other thing: They’re giving away free samples on CivilBeat.com.

“When you’re working at an established organization, you’re building on so much tradition. And here you’re not. You’re developing everything,” said Temple, who is more than familiar with established organizations, having been editor and publisher of the departed Rocky Mountain News.

Doling out free content

Where Civil Beat has to be creative, Temple told me, is in making a connection to readers and turning them into site members. “The challenge of course is to have enough people feel that you’re essential that they want to support you and pay for your services,” he said. (Temple said they aren’t releasing numbers on Civil Beat memberships or site traffic just yet. Though he did say this: “People who are willing to sign up at the early phase of a new news product like this with high aspirations — there’s low churn rate with those people.”)

The paywall also sprouts leaks on certain days, when some Civil Beat stories are viewable to the public — generally reporting on the government or elections, Temple said. The Civil Beat homepage, as well as its Twitter feed, also provides a basic understanding of the day’s news in a less-than-closed-off way. Temple said it’s been important, as a matter of marketing as well as gaining the public’s trust, to demonstrate to readers that their news is not completely hidden away.

Which is why they went one step further, offering the equivalent of “free ice cream sundaes!” with complete free access to the site on certain days. The free content days are timed around stories the staff believe are in the public interest or enterprise stories they’d like to see reach a wider audience. Temple said they recognize that in order for readers to decide whether they want to spend money on the Civil Beat, they should be able to sample it first.

What the Civil Beat has in common with many news organizations is a belief in the strength of its journalism as the primary draw for the public, be it land development and environmental stories or campaign funding news. It’s a mix of news basics in new forms, with the Civil Beat reporter/hosts fact-checking statements from politicians (similar to PolitiFact) and parsing data for document-driven reports on subjects like public employee salaries.

“We share with the readership the experience in gathering those records and encountering government agencies,” Temple said. “In some ways that has been very provocative, because we’ve written about how difficult it is to get information and how government agencies treat us.”

Building community

Temple said that, as a small news organization willing to experiment with coverage areas, reader engagement, and ways readers can pay for content, it was necessary to have an open dialogue with members about changes to the Civil Beat. The company blog has become a place to discuss their journalism and ask for suggested interview questions. Temple said it’s also been useful as they’ve tinkered with subscription levels and pricing, offering a 15-day trial for $0.99 and adding a 99-cent-per-month discussion membership that lets readers take part in comments. (Comments are free to view, just not to leave.)

And speaking of comments, Temple says they have nothing but good things to report. Discussions have largely remained civil, even while spirited. Members use their real names or can use a screen name (though Civil Beat staff know members’ real identities, thanks to the subscription process). And what may be most surprising to editors dealing with comments elsewhere: “We don’t even have a profanity filter on our comments — anybody can post anything in our comments. It’s all self regulated,” Temple said.

The Civil Beat seems to be making its biggest bet on reader engagement, not just as a method of outreach, but also as content for the site. The debates between readers, ranging from education reform to a proposed Honolulu rail project, are filled with long, thoughtful posts, often citing links for background. In turn, Civil Beat staff will invite members to write blog posts spun off from discussions or on other topical issues. “Obviously, the core content is the journalism that we produce, but the comments and the discussion create a whole other level of content,” Temple said.

They’re also reverse engineering the idea of comments as the new “public square,” by holding events (called “Beatups”) on issues like the judicial nomination process and the merger of the Honolulu Advertiser and Honolulu Star-Bulletin. The events are open to members, with non-members able to join for as little as the $0.99 commenting subscription.

Temple wants to not just inspire the daily conversation, but be a part of it — and yes, to get people to help pay for their work along the way. By making select stories open and comments visible, the Civil Beat appears to be giving outsiders just enough of a taste (or getting them riled up enough for a debate) to pique their curiosity. The idea is for the Civil Beat to prove its worth as a news organization through its work while being open with readers about how it operates. And with substantial financial backing, it can afford to give its strategy some time to develop.

“If you look at most news organizations, and of course they’ve all evolved over the years, there’s still a pretty defensive posture,” Temple said. “We don’t think that’s a healthy way to approach it and I think our members have responded really positively to that. They want to feel that they can talk to you.”

Josh Marshall on Talking Points Memo’s growth over the last decade: Moving from solo blog to news org


It’s funny to think back to the Talking Points Memo of ten years ago, just a strip of text down a single blue page. (It also had a red-background phase before settling in on the beige color scheme it still has today.)

On November 13, 2000, Joshua Micah Marshall launched the site as a place to blog the presidential election recount in Florida. The tone was different then, much chattier; witness how often Marshall referred to himself as “Talking Points” in the third person, as in “Talking Points heard….” But over the next decade, of course, Marshall not only kept his blog going but grew it into one of the most cited models for online journalism, winning prizes, innovating with the crowd, attracting capital, and growing to a staff of almost 20. (Disclosure: TPM’s growth employed me at one point.)

In honor of TPM’s tenth anniversary, we emailed Marshall some questions about the growth of TPM and the direction it’s headed. He’s been dropping hints about future plans on Twitter, and he’s thinking a lot about what mobile devices will mean for news. And he says TPM is getting ready to experiment with a paid membership model early next year — but not a paywall.

There are some valuable lessons here for anyone in the midst of launching a startup, or considering one. Here’s the full transcript.

LKM: TPM is turning ten. Are you even close to where you thought you would be when you started? Are you where you thought you would be even five years ago?

JMM: Ten years ago, in November 2000, I don’t think I gave any thought to where it was going. So I didn’t have any sense of where it would be. But five years ago was when I made the decision to build TPM into a multi-person news organization. Basically in the early spring of 2005. And on balance I’d say, yeah, this is about where I thought we’d be. Certain things are different. At the outset I thought more in terms of launching a series of basically distinct sites. But over time, I saw the logic of taking a more consolidated approach, making TPMMuckraker, for instance, more of a section within a TPM news site than a site in itself. But in terms of scale, topics I wanted us to cover, the move toward paid advertising as the core funding model, it’s about where I was shooting to be at this point.

LKM: You’ve tweeted about your disappointment in outlets repurposing content for the iPad rather than imagining something new. How did you think about TPM and the iPad or tablets? Do you think tablets will create a totally new form in the next few years, the way blogging emerged as its own form?

JMM: We’re focusing a huge amount of resources and thinking on mobile devices. Just to give an example, the percentage of visits to TPM that come from mobile devices is currently rising at almost 1 percentage point a month. So our first priority in 2011 is to make sure TPM is clean, fast and easy to use on all the key devices — iPhones, iPad, Android, etc. But my general sense is that while every digital publication thinks it has a “mobile strategy,” most actually don’t. They think they do, but they don’t. That’s because mobile devices will significantly change the mode of reporting and presentation, just like the web did a decade ago. If you go back to the mid-late 1990s, all the news organizations had websites. But it was basically print slapped onto the web. It was only in the beginning of this decade that you started to see presentational forms that were really native to the web and worked in the context of its strengths and weaknesses. I think mobile is about where web journalism was in maybe 1996-97. So we’re trying to keep in mind that the medium is still quite primitive and that we want to come up with some genuinely new, innovative uses of it.

I think it’s going to grow quickly, with two segments: one that’s basically tablets, things that look something like the iPad now does and then much smaller devices that people will carry with them/on them at all times. In the former category, I think you’ll have versions that look something like full-function websites, albeit designed very differently and around touch. It’s with the smaller devices that we’ll really be challenged to figure out ways to operate within much smaller screen sizes and interact with readers in fundamentally different ways. But as I said before, I don’t think anyone’s really come up with the break-out ideas for mobile yet.

LKM: A while back, you teased the idea of a membership model, where paid TPM members might get extra content or access. Do you imagine that model coming to fruition in the next year or two?

JMM: We’re hoping to do that in the first half of 2011. But to be clear, we’re never moving to a paywall model.

LKM: TPM’s expansion has been steady in the last few years. How do you balance maintaining quality with growth?

JMM: It’s a constant struggle. I knew something about journalism when I started doing this. And I actually knew a decent amount about the technology that powers a website. But I didn’t know anything about growing a company or an organization. So I’ve learned on the job. There are a lot of particular details about management and stuff like that. But I think the key is keeping in place a critical mass of people whose integrity and judgment I can trust. Building TPM taught me to be a businessman, and I enjoy that part of it. But really that’s what it comes to: a core of people who you trust.

LKM: What do you wish you knew ten years ago when you first started blogging?

JMM: It’s funny. I’m glad I didn’t know any of it. The pleasure for me has been exploring, learning, coming up with ideas or more often finding half-formed ideas and wrestling with them until I find some way to use them to improve what we do. I wouldn’t want to rob myself of that.

LKM: What does TPM look like ten years from now?

JMM: Stay tuned.

This Week in Review: An objectivity object lesson, a paywall is panned, and finding the blogger’s voice


[Every Friday, Mark Coddington sums up the week's top stories about the future of news and the debates that grew up around them. —Josh]

Olbermann and objectivity: Another week, another journalist or pundit disciplined for violating a news organization’s codes against appearances of bias: This week (actually, late last week) it was Keith Olbermann, liberal anchor and commentator for the cable news channel MSNBC, suspended for donating money to Democratic congressional candidates, in violation of NBC News policy. Olbermann issued an apology (though, as Forbes’ Jeff Bercovici noted, it was laced with animus toward MSNBC), and returned to the air Tuesday. There were several pertinent peripheral bits to this story — Olbermann was reportedly suspended for his refusal to apologize on air, it’s unclear whether NBC News’ rules have actually applied to MSNBC, numerous other journalists have done just what Olbermann did — but that’s the gist of it.

By now, we’ve all figured out what happens next: Scores of commentators weighed in on the appropriateness (or lack thereof) of Olbermann’s suspension and NBC’s ban on political contributions. The primary arguments boiled down to the ones expressed by Poynter’s Bob Steele and NYU’s Jay Rosen in this Los Angeles Times piece: On one side, donating to candidates means journalists are acting as political activists, which corrodes their role as fair, independent reporters in the public interest. On the other, being transparent is a better way for journalists to establish trust with audiences than putting on a mask of objectivity.

Generally falling in the first camp are fellow MSNBC host Rachel Maddow (“We’re a news operation. The rules around here are part of how you know that.”), Northeastern j-prof Dan Kennedy (though he tempered his criticism of Olbermann in a second post), and The New York Times’ David Carr (“Why merely annotate events when you can tilt the playing field?”). The Columbia Journalism Review was somewhere in the middle, saying Olbermann shouldn’t be above the rules, but wondering if those rules need to change.

There were plenty of voices in the second camp, including the American Journalism Review’s Rem Rieder, Michael Kinsley at Politico, and Lehigh j-prof Jeremy Littau, all arguing for transparency.

Slate media critic Jack Shafer used the flap to urge MSNBC to let Olbermann and Maddow fly free as well-reported, openly partisan shows in the vein of respected liberal and conservative political journals. Jay Rosen took the opportunity to explain his phrase “the view from nowhere,” which tweaks traditional journalism’s efforts to “advertise the viewlessness of the news producer” as a means of gaining trust. He advocates transparency instead, and Terry Heaton provided statistics showing that the majority of young adults don’t mind journalists’ bias, as long as they’re upfront about it.

On The Media’s Brooke Gladstone summed up the issue well: “Ultimately, it’s the reporting that matters, reporting that is undistorted by attempts to appear objective, reporting that calls a lie a lie right after the lie, not in a box labeled “analysis,” reporting that doesn’t distort truth by treating unequal arguments equally.”

Commodify your paywall: We talked quite a bit last week about the new numbers on the paywall at Rupert Murdoch’s Times of London, and new items in that discussion kept popping up this week. The Times released a few more details (flattering ones, naturally) about its post-paywall web audience. Among the most interesting figures is that the percentage of U.K.-based visitors to The Times’ site has more than doubled since February, rising to 75 percent. Post-paywall visitors are also visiting the website more frequently and are wealthier, according to News Corp.

Of course, the overall number of visitors is still way down, and the plan continued to draw heat. In a wide-ranging interview on Australian radio, Guardian editor Alan Rusbridger expressed surprise at the fact that The Times’ print circulation dropped as their print-protectionist paywall went up. That, he said, “suggests to me that we overlook the degree to which the digital forms of our journalism act as a kind of sort of marketing device for the newspapers.” ResourceWebs’ Evan Britton gave five reasons why news paywalls won’t work, and Kachingle founder Cynthia Typaldos argued that future news paywalls will be tapping into a limited pool of people willing to pay for news on the web, squeezing each other out of the same small market.

Clay Shirky used The Times’ paywall as a basis for some smart thoughts about why newspaper paywalls don’t work in general. The Times’ paywall represents old thinking, Shirky wrote (and the standard argument against it has been around just as long), but The Times’ paywall feels differently because it’s being taken as a “referendum on the future.” Shirky said The Times is turning itself into a newsletter, without making any fundamental modifications to its product or the basic economics of the web. “Paywalls do indeed help newspapers escape commodification, but only by ejecting the readers who think of the product as a commodity. This is, invariably, most of them,” he wrote.

A conversation about blogging, voice, and ego: A singularly insightful conversation about blogging was sparked this week by Marc Ambinder, who wrote a thoughtful goodbye post at his long-running blog at The Atlantic. In it, Ambinder parsed out differences between good print journalism (ego-free, reliant on the unadorned facts for authority) and blogging (ego-intensive, requires the writer to inject himself into the narrative). With the switch from blogging to traditional reporting, Ambinder said, “I will no longer be compelled to turn every piece of prose into a personal, conclusive argument, to try and fit it into a coherent framework that belongs to a web-based personality called ‘Marc Ambinder’ that people read because it’s ‘Marc Ambinder,’ rather than because it’s good or interesting.”

The folks at the fantastically written blog Snarkmarket used the post as a launching point for their own thoughts about the nature of blogging. Matt Thompson countered that Ambinder was reducing an incredibly diverse form into a single set of characteristics, taking particular exception to Ambinder’s ego dichotomy. Tim Carmody mused on blogging, voice, and authorship; and Robin Sloan defended Ambinder’s decision to leave the “Thunderdome of criticism” that is political blogging. If you care at all about blogging or writing for the web in general, make sure to give all four posts a thorough read.

TBD’s (possible) content/aggregation conflict: The new Washington-based local news site TBD has been very closely watched since it was launched in August, and it hit its first big bump in the road late last week, as founding general manager Jim Brady resigned in quite a surprising move. In a memo to TBD employees, TBD owner Robert Allbritton (who also launched Politico) said Brady left because of “stylistic differences” with Allbritton. Despite the falling-out, Brady, a washingtonpost.com veteran, spoke highly of where TBD is headed in an email to staff and a few tweets.

But the immediate questions centered on the nature of those differences between Allbritton and Brady. FishbowlDC reported and Business Insider’s Henry Blodget inferred from Allbritton’s memo that the conflict came down to an original-content-centric model (Allbritton) and a more aggregation-based model (Brady). Brady declared his affirmation of both pieces — he told Poynter’s Steve Myers he’s pro-original content and the conflict wasn’t old media/new media, but didn’t go into many more details — but that didn’t keep Blodget from taking the aggregation side: The web, he said, “has turned aggregation into a form of content–and a very valuable one at that.” Lost Remote’s Cory Bergman, meanwhile, noted that while creating content is expensive, Allbritton’s made the necessary investments and made it profitable before with Politico.

A new iPad app and competitor: There were two substantive pieces of tablet-related news this week: First, The Washington Post released its iPad app, accompanying its launch with a fun ad most everyone seemed to enjoy. Poynter’s Damon Kiesow wrote a quick summary of the app, which got a decent review from The Post’s Rob Pegoraro. For you design geeks, Sarah Sampsel wrote two good posts about the app design process.

The other tablet tidbit was the release of Samsung’s Galaxy Tab, which runs on Google’s Android system. Kiesow rounded up a few of the initial reviews from All Things Digital (a real iPad competitor, though the iPad is better), The New York Times (beautiful with some frustrations), Wired (more convenient than the iPad, but has stability problems) and Gizmodo (“a grab bag of neglect, good intentions and poor execution”). Kiesow also added a few initial impressions of the Galaxy’s implications for publishers, predicting that as it takes off, it will put pressure on publishers to move to HTML5 mobile websites, rather than developing native apps.

In other tablet news, MediaWeek looked at the excitement the iPad is generating within the media industry, but ESPN exec John Skipper isn’t buying the hype, telling MarketWatch’s Jon Friedman, “Whenever a new platform comes up, people want to take the old platform and transport it to the new platform.” It didn’t work on the Internet, Skipper said, and it won’t work on the iPad either.

Reading roundup: More thoughtful stuff about news and the web was written this week than most normal people have time to get to. Here’s a sample:

— First, two pieces of news: Word broke last night that Newsweek and The Daily Beast will be undergoing a 50-50 merger, with the Beast’s Tina Brown taking over editorship of the new news org. The initial news accounts started to roll out late last night and into this morning at The New York Times, Washington Post, and NPR, which posted an interview with Brown. Obviously, this is a big, big story, and I’m sure I’ll have much more commentary on it next week.

— Second, U.S. News & World Report announced last week that it’s dropping its regular print edition and going essentially online-only, printing only single-topic special issues for newsstand sales. The best analysis of the move was at Advertising Age.

— Two great pieces on journalism’s collaborative future: Guardian editor Alan Rusbridger in essay form, and UBC j-prof Alfred Hermida in audio and slide form.

— Poynter published an essay by NYU professor Clay Shirky on “the shock of inclusion” in journalism and the obsolescence of the term “consumer.” Techdirt’s Mike Masnick added a few quick thoughts of his own.

— Two cool posts on data journalism — an overview on its rise by The Columbia Journalism Review’s Janet Paskin, and a list of great tools by Michelle Minkoff.

— Finally, two long thinkpieces on Facebook that, quite honestly, I haven’t gotten to read yet — one by Zadie Smith at The New York Review of Books, and the other by The Atlantic’s Alexis Madrigal. I’m going to spend some time with them this weekend, and I have a feeling you probably should, too.

Olbermann photo by Kirsten used under a Creative Commons license.

Hacking data all night long: A NYC iteration of the hackathon model


This post is from Nieman Journalism Lab






In the main room of the Eyebeam Art and Technology Center’s massive 15,000-square-foot office and lab space in the Chelsea neighborhood of Manhattan, more than sixty developers, designers, and journalists pore over their computer screens. A jumble of empty coffee cups and marked-up scraps of butcher paper litters the tabletops while networks of power cords fan out underneath.

The event is The Great Urban Hack, a two-day overnight hackathon organized by the meetup group Hacks/Hackers that took place over an intense 30-hour stretch this past weekend. Starting early Saturday morning, journalists and developers came together to “design, report on, code and create projects to help New Yorkers get the information they need while strengthening a sense of community.”

The eleven teams that participated in the event worked on a varied set of projects, ranging in scope from collaborative neighborhood mapping to live-action urban gaming.

Rearranging and visualizing data

The team behind “Who’s My Landlord?,” based on Elizabeth Dwoskin’s article of the same name in the Village Voice last Wednesday, set out to help residents determine who owns a given piece of property. Dwoskin’s article points out that for many of the city’s most derelict buildings that link is obscured, a major barrier for the city agencies charged with regulating landlords and protecting tenants. The team built a tool that draws on three databases: two city databases to pull the names of building owners, and one state database to look up the owner’s address when an intermediate company stands in between.
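
The core of such a tool is a chain of lookups across those databases. Here is a minimal sketch of that flow, with in-memory dicts and invented field names standing in for the real city and state databases (none of which are specified in the post):

```python
# Illustrative sketch only: the real tool pulls from two NYC databases and a
# New York State corporate registry; these dicts stand in for them, and every
# field name here is an assumption.

CITY_TAX_LOTS = {        # stand-in for the first city database (tax records)
    "123 Example Ave": {"owner": "ACME REALTY LLC", "mailing_address": None},
}
CITY_REGISTRATIONS = {   # stand-in for the second city database (building registrations)
    "123 Example Ave": {"owner": "ACME REALTY LLC"},
}
STATE_CORP_FILINGS = {   # stand-in for the state corporate registry
    "ACME REALTY LLC": {"registered_address": "45 Hypothetical St, Albany, NY"},
}

def looks_like_company(name):
    return any(tag in name.upper() for tag in ("LLC", "CORP", "INC"))

def find_owner(address):
    """Follow the ownership chain from a building to a name and mailing address."""
    record = CITY_TAX_LOTS.get(address) or CITY_REGISTRATIONS.get(address)
    if record is None:
        return None
    owner = record["owner"]
    # When the listed owner is an intermediate company, hop to the state
    # registry to find the address behind it.
    if looks_like_company(owner):
        filing = STATE_CORP_FILINGS.get(owner, {})
        return owner, filing.get("registered_address")
    return owner, record.get("mailing_address")

print(find_owner("123 Example Ave"))
# ('ACME REALTY LLC', '45 Hypothetical St, Albany, NY')
```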

Several groups worked on visualizations of some form of city data. The “Drawing Conclusions” team created a “Roach Map” from the raw data set of restaurant inspection results in the NYC Data Mine. The group wrote a script that scans the data line by line and counts each violation by zip code, then adjusts for the varying number of inspections across zip codes and plots the results on a map of the city that is regenerated automatically every week.
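
The counting step they describe comes down to a few lines. A rough sketch, with a tiny embedded sample and assumed column names standing in for the actual NYC Data Mine export:

```python
# Rough sketch of the per-zip violation tally; the column names and the small
# embedded sample are stand-ins for the real inspection data set.
import csv, io
from collections import Counter

SAMPLE = io.StringIO(
    "zipcode,violation\n"
    "10001,04K\n"
    "10001,\n"
    "10002,04K\n"
    "10002,04L\n"
    "10002,\n"
)

violations, inspections = Counter(), Counter()
for row in csv.DictReader(SAMPLE):
    z = row["zipcode"].strip()
    inspections[z] += 1
    if row["violation"].strip():        # blank violation column = clean inspection
        violations[z] += 1

# Adjust for the varying number of inspections per zip code, then hand the
# rates to whatever layer redraws the weekly map.
rates = {z: violations[z] / inspections[z] for z in inspections}
for z, rate in sorted(rates.items(), key=lambda kv: kv[1], reverse=True):
    print(z, round(rate, 2))
```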

How hackathons work is simple: They define goals and create artificial constraints (like time) to catalyze the process of innovation. The closest journalistic equivalent might be the collaborative rush of a good newsroom working a big breaking story. But is this really the best environment to incubate projects of a journalistic nature? What are the different circumstances that foster the healthiest practices of innovation? And what is the best way to set expectations for an event like this?

The hackathon model

Hackathons like this are a growing trend. A lot can be said for bringing these groups together and into a space outside of their normal work environment. What’s maybe most fascinating to me is the opportunity for cultural interplay as these two groups find themselves more and more immersed in each other’s creative work. As John Keefe, one of the hosts of the event and a senior producer at WNYC, says: “It’s not really journalistic culture to come together and build stuff like this.”

Chrys Wu, a co-organizer of Hacks/Hackers and both a journalist and a developer, talked about the two groups’ differing philosophies of sharing information: “Your traditional reporter has lots and lots of notes, especially if they’re a beat reporter. There’s also their rolodex or contacts database, which is extremely valuable and you wouldn’t want to necessarily share that. But there are pieces of things that you do that you can then reuse or mine on your own…at the same time technologists are putting up libraries of stuff, they say: ‘I’m not going to give you the secret sauce but I’m definitely going to give you the pieces of the sandwich.’”

Lots of questions remain: What is the best way to define the focus or scope for an event like this? Should hackathons be organized around particular issues and crises? And what’s the best starting point for a journalistic project? Is it a problem, a data set, a question, or, as in the case of the landlord project, the research of a journalist? For all of the excitement around hackathons, this seems like just the beginning.

Photo by Jennifer 8. Lee used under a Creative Commons license.

Talking Points Memo’s first developer talks startup life, jumping to ProPublica and data journalism


This post is from Nieman Journalism Lab






What’s it like being the only in-house techie at a news startup? Talking Points Memo’s first developer Al Shaw says “it’s kind of like being a reporter…you have to be a generalist,” doing everything from ad-side work to election-night interactives.

Shaw was the primary technical force behind most of the bells and whistles that cropped up at TPM over the past two years, including a redesign that lets producers switch up the layout of the homepage, and an array of slick interactives like the real-time election results tracker that made TPM look a lot less like a scrappy startup and more like an establishment outlet on Election Night earlier this month. (Shaw is quick to explain he had some help on the election map from Erik Hinton, TPM’s technical fellow.) He has also blogged regularly about his technical endeavors in ways that could be useful to his peers at other news organizations.

Shaw announced last month he is leaving TPM to start a new gig at ProPublica, where he’ll keep working on data-driven journalism. On one of his few days off between jobs, I talked with him about what it’s like working for a news startup, what he hopes to accomplish at ProPublica, and where he thinks data journalism is headed. Below is a lightly edited transcript. (Disclosure: I used to work at TPM, before Al started there.)

Laura K. McGann: How did you approach your job at TPM? What did you see as your mission there?

Al Shaw: When I started, I came on as an intern right before the ’08 election. At that point, they didn’t have anyone in house who really knew much about programming or design or software. I came on and I saw an opportunity there because TPM is such a breaking-news site, and their whole goal is to do stuff really fast, that they needed someone to do that, but on the technology side, too.

I had a big role in how we covered the 2008 election. We became able to shift the homepage, rearrange stuff. Being able to really elevate what you can do in blogging software. That was kind of the first foray. Then I started redesigning some of the other sections. But the biggest impact I had was redesigning the homepage. That was about a year ago. I had the same goal of being able to empower the editors and nontechnical types to have a bigger palette of what they can do on the site. I created this kind of meta-CMS on top of the CMS that allowed them to rearrange where columns were and make different sections bigger and smaller without having to get into the code. That really changed the way the homepage works.

There is still Movable Type at the core, but there’s a lot of stuff built up around the sides. When we started to build bigger apps, like the Poll Tracker and election apps, we kind of moved off Movable Type all together and started building in Ruby on Rails and Sinatra. They’re hosted on Amazon EC2, which is a cloud provider.
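
A config-driven layout along the lines Shaw describes can be sketched simply: editors adjust a small data structure, and the publishing layer renders it without any template surgery. This is a hypothetical illustration, not TPM’s actual code, and none of the section names come from its system:

```python
# Hypothetical illustration of a "meta-CMS" layout on top of a CMS: editors
# edit a plain data structure and the renderer turns it into homepage markup.

layout = [
    {"section": "breaking",    "width": "wide",   "items": 6},
    {"section": "muckraker",   "width": "narrow", "items": 4},
    {"section": "polltracker", "width": "narrow", "items": 3},
]

def fetch_headlines(section, count):
    # Stand-in for pulling entries out of the real CMS.
    return [f"{section} story {i + 1}" for i in range(count)]

def render(layout):
    html = []
    for block in layout:
        html.append(f'<div class="{block["width"]} {block["section"]}">')
        for headline in fetch_headlines(block["section"], block["items"]):
            html.append(f"  <h3>{headline}</h3>")
        html.append("</div>")
    return "\n".join(html)

print(render(layout))
```

Reordering the list or changing a section’s "width" changes the homepage without anyone touching the templates, which is the point of the approach Shaw describes.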

LKM: What have you built that you’re the most proud of?

AS: Probably the Poll Tracker. It was my first project in Rails. It just had enormous success; it now has 14,000 polls in it. Daily Kos and Andrew Sullivan were using it regularly to embed examples of races they wanted to follow and it really has become a central part of TPM and the biggest poll aggregator on the web now. I worked with an amazing Flash developer, Michiko Swiggs, she did the visual parts of the graph in Flash. I think a lot of it was really new in the way you could manipulate the graph — if you wanted to take out certain pollsters, certain candidates, methods, like telephone or Internet, and then you could see the way the trend lines move. You can embed those custom versions.

I think the election tool was also a huge success, both technologically and on the design and journalism side. We got linked to from Daring Fireball. We also got linked to from ReadWriteWeb and a lot of more newsy sites. Andrew Sullivan said it was the best place to watch the elections. Because we took that leap and said we’re not going to use Flash, we got a lot of attention from the technology community. And we got a lot of attention from kind of the more political community because of how useable and engaging the site was. It was kind of a double whammy on that.
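
The pollster and method filtering Shaw describes for the Poll Tracker reduces to something like the following toy sketch; the data, field names, and smoothing choice are invented for illustration and are not the Poll Tracker’s actual code:

```python
# Toy version of poll filtering plus a trend line; everything here is made up
# for illustration.
from statistics import mean

polls = [
    {"date": "2010-10-01", "pollster": "Acme Polling", "method": "phone", "dem": 48, "gop": 45},
    {"date": "2010-10-08", "pollster": "Example Internet Panel", "method": "internet", "dem": 46, "gop": 47},
    {"date": "2010-10-15", "pollster": "Acme Polling", "method": "phone", "dem": 47, "gop": 46},
    {"date": "2010-10-22", "pollster": "Acme Polling", "method": "phone", "dem": 49, "gop": 45},
]

def filter_polls(polls, exclude_pollsters=(), methods=None):
    """Drop unwanted pollsters and restrict to certain methods (phone, internet...)."""
    return [p for p in polls
            if p["pollster"] not in exclude_pollsters
            and (methods is None or p["method"] in methods)]

def trend(values, window=3):
    """Simple moving average, one point per remaining poll."""
    return [round(mean(values[max(0, i - window + 1): i + 1]), 1)
            for i in range(len(values))]

phone_only = filter_polls(polls, exclude_pollsters=("Example Internet Panel",), methods={"phone"})
print(trend([p["dem"] for p in phone_only]))
```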

LKM: What was your experience working with reporters in the newsroom? TPM is turning ten years old, but it’s still got more of a startup feel than a traditional newspaper.

AS: It’s definitely a startup. I would fade in and out of the newsroom. Sometimes I’d be working on infrastructure projects that dealt with the greater site design or something with the ad side, or something beyond the day-to-day news. But then I’d work with the reporters and editors quite a bit when there was a special project that involved breaking news.

So for example, for the Colbert-Stewart rallies we put up a special Twitter wire where our reporters go out to the rallies and send in tweets and the tweets would get piped into a special wire and they’d go right onto the homepage. I worked with editors on how that wire should feel and how it should work and how reporters should interact with it. I remember one concern was, what if someone accidentally tweets something and it ends up on the homepage? How do we delete that? I came up with this system with command hashtags, so a reporter could send in a tweet with a special code on it which would delete a certain tweet and no one else would know about that, except for the reporter.

A lot of the job was figuring out what reporters and editors wanted to do and figuring out how to enable that with the technology we had and with the resources we had.
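
The command-hashtag system Shaw describes is easy to picture in a few lines. A hypothetical sketch; the actual hashtag syntax and wire code aren’t described in the interview:

```python
# Hypothetical sketch of a command-hashtag filter for a live tweet wire.
# The "#cmd-delete-<id>" syntax is invented for illustration.
import re

DELETE_CMD = re.compile(r"#cmd-delete-(\w+)")

def apply_to_wire(wire, tweet):
    """Add an incoming tweet to the wire, or treat it as a delete command."""
    match = DELETE_CMD.search(tweet["text"])
    if match:
        target = match.group(1)
        # Drop the offending item instead of publishing the command tweet.
        return [item for item in wire if item["id"] != target]
    return wire + [tweet]

wire = []
wire = apply_to_wire(wire, {"id": "t1", "text": "Crowd estimate keeps climbing"})
wire = apply_to_wire(wire, {"id": "t2", "text": "oops, wrong account"})
wire = apply_to_wire(wire, {"id": "t3", "text": "#cmd-delete-t2"})
print([item["id"] for item in wire])   # -> ['t1']
```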

LKM: I remember an instance in my old newsroom where we had a tweet go up on the front page of another site and the frantic emails trying to get it taken down.

AS: Twitter is such an interesting medium because it’s so immediate, but it’s also permanent. We’re having a lot of fun with it, but we’re still learning how best to do it. We did this thing called multi-wire during the midterms, which was a combination of tweets and blog posts in one stream. There was a lot of experimentation with: When do we tweet as compared to a blog post? Should we restrict it to certain hours? That was a really interesting experiment.
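
Under the hood, a multi-wire like the one Shaw mentions is essentially a merge of two time-ordered streams. A minimal sketch, assuming each item carries a timestamp (the real feed format isn’t specified):

```python
# Minimal sketch of merging tweets and blog posts into one time-ordered wire;
# the item format is assumed, not taken from TPM's system.
from heapq import merge

tweets = [
    {"ts": "2010-11-02T19:02", "kind": "tweet", "text": "Polls closing in VA"},
    {"ts": "2010-11-02T20:31", "kind": "tweet", "text": "Network calls KY"},
]
posts = [
    {"ts": "2010-11-02T19:45", "kind": "post", "text": "First results thread"},
]

# Each input list is already sorted by timestamp, so heapq.merge interleaves
# them lazily into a single chronological stream.
multiwire = list(merge(tweets, posts, key=lambda item: item["ts"]))
for item in multiwire:
    print(item["ts"], item["kind"], item["text"])
```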

LKM: What emerging trends do you see going on in data-driven or interactive journalism?

AS: It’s really good that a lot of sites are starting to experiment more with data-driven journalism, especially as web frameworks and cheap cloud hosting become more prevalent and you can learn Rails and Django, it’s really easy to get a site up that’s based around data you can collect. I do see two kind of disturbing trends that are also happening. One is the rise of infographics. They may not be as useful as they are pretty. You see that a lot just all over the place now. The other problem you see is the complete opposite of that where you’ll get just a table of data filling up your whole screen. The solution is somewhere in between that. You have a better way of getting into it.

It’s really great that there’s kind of a community forming around people that are both journalists and programmers. There’s this great group called Hacks/Hackers that brings those two cohorts together and lets them learn from each other.

LKM: How about at ProPublica? You mentioned you aren’t sure entirely what you’re going to do, but broadly, what do you hope to accomplish there?

AS: I’m most excited about working more closely with journalists on data sets and finding the best ways of presenting those and turning them into applications. That was one thing I was able to do with Poll Tracker, but it didn’t seem like TPM had as big of a commitment to individual stories that could have side applications. Poll Tracker was more of a long-running project. ProPublica is really into delving deeply into one subject and finding data that can be turned into an application so the story isn’t just a block of text; there’s another way of getting at it.

One of the other things they’re working on is more tools for crowdsourcing and cultivating sources. I know that they want to start building an app or a series of apps around that. And they’re doing some cool stuff with Amazon Mechanical Turk for kind of normalizing and collecting data. I’m sure there’s going to be a lot more fun stuff to do like that.

Jeff Israely: An idea and a brand come together as Worldcrunch


This post is from Nieman Journalism Lab






[Jeff Israely, a Time magazine foreign correspondent in Europe, is in the planning stages of a news startup — a "new global news website." He details his experience as a new news entrepreneur at his site, but he'll occasionally be describing the startup process here at the Lab. Read his past installments here. —Josh]

This is a long overdue introduction: a kind of public christening, a chance to share with you, the reader, our vision for the future of news. Okay, you see where we’re headed: this post is all about marketing. Sixteen months after secretly banging out my first PowerPoint business plan, and after nine months of blog posts delving into every twist and turn of my digital news startup except what the damn thing was, I am hereby beginning the rollout.

But first, one last hedge. Up until now, the motivations for these pieces for the Lab have varied: trying to figure out where I fit into this transforming industry; sharing the daily ins and outs/ups and downs of Old Media Guy launching New Media Thing; a public search for my writing voice on new platforms and in the new role of would-be startup business dude. On that final point, I have been keenly aware of the potential benefits afforded by this space — and blogging in general — in the attention it might generate when (and if) my project got off the ground. It is an expression of that sometimes uncomfortable truth about the 21st-century journalist: that we can no longer shy away from the nitty-gritty of promoting, selling, and marketing each piece of editorial output we produce, and of building our respective personal brands, as the best way to increase the chances that we may continue (or begin) doing the actual newsbiz work we originally set out to do.

And so here, just this once, let me set aside the personal exploration and entrepreneurial and journalistic “processes,” and focus solely on product: a mini/soft/pre-launch and presentation of our company’s core concept, our big ambitions, our brand. I won’t go into detail here about our plans for actually executing what we set out to do, though that is perhaps the most difficult and decisive of all topics. Once we’re up and running live, we will see together how that execution is proceeding, both in the back office and on the front page. But first: throat clear….drumroll!….spotlight!!

What we do

How do you cover the world — the most sprawling and variegated and expensive beat of them all? Where do you turn to find the fresh new stories and voices that break through all the inevitable chattering and cannibalizing around this or that single news event that only the wires or The New York Times have managed to chronicle? Where is the existing, untapped potential for on-the-ground journalism that is more than just a lucky tweet? Might there be a shortcut to quality content? Real, worldwide scoops? Though ours is just one part of the solution to covering the global beat, we believe it is strong on simplicity and economy and immediate impact: The professional (and participatory) selection and translation of the best, most relevant stories in the foreign-language media.

This new idea, of course, is not brand new. Much interesting work is already happening around the online translation of news and information: Global Voices’ coverage of international bloggers, Meedan’s innovative Arabic-English online current-events dialogue, Café Babel’s and Presseurope’s multilingual European coverage, Worldmeets.us’s global viewpoints on American policy, Der Spiegel’s English-language website. But the quest for a commercially viable digital formula built around the top names in global journalism is indeed something new. And, we think, rich in potential.

The roots of the model can be found in Courrier International, a successful general-interest weekly launched 20 years ago in France; the formula has since been taken up by others, including my good friends at Internazionale in Rome, Forum in Warsaw, and Courrier Japon in Tokyo. Indeed, we are exploring a range of possibilities for partnering with Courrier, which is just a Paris Métro ride away from our home offices. We have much to learn from what they’ve been doing in print, including questions of selection and translation and copyright. And some day, they may have something to learn about what kind of journalistic and business opportunities we can create by applying this formula digitally, and in the real-time news cycle of the Internet. Partnerships will be key to executing what we will be doing. More on that in a future post.

Where we are

Unlike Courrier International — or World Press Review, a high-brow New York-based monthly that survives as an online forum for global opinion — we are being born as a live news source in the digital space. This will permeate everything we do. But the technology (like the traditions) must serve the journalism, not be an end in itself. Frédéric Bonelli, one of our first investors, describes the media world right now as being “like Europe after World War II”: a mixed landscape of ruins, reconstruction efforts, old institutions trying to salvage their standing, and ambitious new players, some with true vision, others just looking to exploit the confusion. As a company that is both global and agile, we hope we can fit somewhere in the “vision” camp, aware of the words of Jay Rosen, who declared in a September speech here in Paris that “the struggle for the next press is an international thing.” Mais oui, monsieur!

What’s our name?

Way back in December 2009, when my Danish-born, Rome-based web designer friend Annie Skovgaard Christiansen agreed to create the demo site for the project, she casually said, “Okay — but I can’t start until you tell me the name.” Panic. There was a working name attached to my working biz plan, but it was both mediocre and unavailable as a URL. So I spent the next 48 hours wracking my brain, harassing friends and colleagues, and getting to know GoDaddy. It had to be punchy, global…and available as a .com for the standard $8.99 rate! The good names were all taken, and those not yet taken weren’t quite good enough. Until…hmm…that’s not bad…probably not available? Let me see…yes! The feedback ever since — from colleagues, friends, potential partners and investors — has been about as positive as you could hope for (though my own Daddy said it sounded like breakfast cereal). So the URL nabbed back in late December has stuck as our website’s name, our company’s brand. And if we do the rest of our job well, we hope it sticks in your brain as a mark of quality international news: Worldcrunch.

One last bit of bald marketing: Please sign up for updates on our launch as we continue our alpha testing, build our team, and keep up our fundraising. We also have Twitter and Facebook pages. And though my business partner Irene is opposed, one day the Worldcrunch coffee mugs will arrive as well!

And finally, the brand needs a slogan, or what I’ve since discovered is referred to as a baseline. It came to me just a few weeks ago, as I swam my laps. Maybe you once heard it in j-school? Or at your first newspaper job? They say “All news is local.” Of course it is. The county hospital’s response to national health care reform, the school board budget deliberations, and the new stop sign installed around the corner must get covered because they affect the lives of you, the reader. But for the same reasons, we must keep up with the latest news from Peshawar or Pyongyang, China, Chile, and Chicago too, to say nothing of this autumn’s harvest in Bordeaux. What happens there matters here. All news indeed is local. We just say it differently here at Worldcrunch: All News is Global.

Meet Intersect, where storytelling, time, and location get all mashed up


This post is from Nieman Journalism Lab






It’s near impossible to tell a story that doesn’t have a place or a time. As readers, and simply as humans, we have a difficult time connecting with a story — be it a friendly anecdote or a news article — that doesn’t tell us where it happened and when. As writing and storytelling have evolved online, those two components have largely been relegated to the background — no less important, of course, but often useful as metadata, a tag or a pin on a map.

Intersect is trying to bring that information to the forefront of storytelling: it wants people to build stories around what happens to them at fixed points in time and space. Part blogging tool, part social network, Intersect lets users tell stories pegged to a particular time and place, gradually building a timeline for each user. Pull back wider, and Intersect lets communities share a more complete narrative of certain events.
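
In data terms, the premise is simply that every story carries a when and a where, so the same items can be re-sliced into a personal timeline or a place-based thread. A hypothetical sketch of that model follows; Intersect’s actual schema isn’t public, so the fields and examples are invented:

```python
# Hypothetical model of stories pegged to a time and place; illustrative only.
from collections import defaultdict

stories = [
    {"user": "ana", "when": "2010-10-30T11:00", "where": "National Mall", "text": "Signs everywhere"},
    {"user": "ben", "when": "2010-10-30T12:15", "where": "National Mall", "text": "Crowd singing along"},
    {"user": "ana", "when": "2010-11-06T09:30", "where": "Eyebeam, Chelsea", "text": "Hackathon kickoff"},
]

def timeline(stories, user):
    """One person's stories in time order."""
    return sorted((s for s in stories if s["user"] == user), key=lambda s: s["when"])

def place_thread(stories):
    """All stories grouped by place: the shared narrative of an event or spot."""
    threads = defaultdict(list)
    for s in sorted(stories, key=lambda s: s["when"]):
        threads[s["where"]].append(s)
    return threads

print([s["text"] for s in timeline(stories, "ana")])
print([s["text"] for s in place_thread(stories)["National Mall"]])
```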

An example? How about The Daily Show and Colbert Report’s Rally to Restore Sanity/March to Keep Fear Alive in Washington, D.C. The Washington Post partnered with Intersect to tell stories from the event, from both attendees and reporters:

The Story Lab team will be filing stories throughout Saturday’s events on the Mall via Intersect, a new site designed to collect and present stories live and from the scene. Here on washingtonpost.com and on Intersect’s site, we’ll be documenting the scene and asking those in attendance and those watching at home to weigh in on the politics vs. entertainment question. Please join us.

Let’s consider how this would work without Intersect: Anyone covering the event would hope for a universally accepted hashtag on Twitter, curate the best tweets from the day, search for photos on Flickr, and maybe, if they’re crafty, create a Google Map that pins tweets and photos to locations on the National Mall.

Instead, with Intersect, any user can go in, have the time and location entered automatically, and proceed to write updates and post photos. (Like, say, the president getting a donut while campaigning in Seattle.) But in order for Intersect to work, they’ll need to answer two big questions: how to attract an audience to populate intersections, and how to introduce a new routine to users (i.e., get them to write about intersections as much as they tweet or post to Facebook).

The Post partnership — an example of one potential route to an audience — was promoted online by the Post and Intersect, garnered its share of Twitter buzz, and made a splash at the Online News Association conference, all of which seemed to generate interest in using the service on Rally day. A look over Intersect turns up more than 40 stories connected to the rally and the National Mall, each offering a different vantage point, the kind reporters covering these types of events typically like to seek out.

Post reporters using a beta version of an Intersect iPhone app posted stories and photos that were fed to WashingtonPost.com and Intersect’s site, where they were side by side with updates from other users.

Since the content from the Rally was shared on both sites, Intersect demonstrated its value as both a platform for stories and a tool for crafting them. That may be key to any future success for Intersect, since they’ll need high visibility and a combination of big events and big partners willing to experiment.

Though Intersect is not expressly a platform for journalism, it could be applied to newsgathering, as evidenced by the Post’s partnership. Intersect could allow journalists either to tap into an existing community to see what background it can provide for a story, or to invite others to tell a story. I spoke with Monica Guzman, Intersect’s director of editorial outreach, and she gave the example of Seattle’s Space Needle, which turns 50 in 2012. A journalist could begin a story on Intersect about the Needle and ask readers to fill in the history of the landmark over the last 50 years.

“It’s this idea of you can actually tell your whole story, go all the way back, see how you’ve changed,” Guzman told me. “That’s kind of cool.” Guzman used to work at SeattlePI.com, where she ran its main blog.

Another reason Intersect could be valuable to journalists is that it’s a system set up to provide context in stories. “I think it’s absolutely critical. A lot of new media journalists are seeing that need to bring context back into journalism,” she said.

Intersect does have a social-network-meets-real-world feel to it, as members have a presence online, but one tied to specific places. Instead of simply building online “community,” Intersect could also serve as a means of growing a physical community and connecting people around certain localities, like the story of a changing neighborhood as told by the people who live there, she said.

If the launch of services like Storify and Intersect tells us anything, it’s that aggregation and collaboration in storytelling may be reaching a new plateau, one where there is a symbiotic relationship between the technology and the craft behind how we share stories.

Guzman sees Intersect as part of the broader change in news, the transition from journalists as the sole keepers of news and information to journalists finding ways to collaborate and reach out to readers. “I learned through the Big Blog just how much news is becoming a conversation,” she said. “It’s about bringing out new voices and perspectives.”