November 21, 2017

Does your marketing need a customer graph?

The relentless rise of social networks in recent years has made many marketers familiar with the concept of the social graph—data about how people are connected to one another—and its power in a marketing context.

Facebook’s social graph has propelled it to a projected annual revenue of around $40B for 2017, driven primarily by advertising sales. Advertisers are prepared to pay a premium for the advanced targeting capabilities that the graph enables, especially when combined with their own customer data; these capabilities will enable Facebook to snag over 20% of digital ad spend in the US this year.

Partly as a result of this, many marketers are thinking about how they can exploit the connectedness of their own customer base, beyond simple “refer a friend” campaigns. Additionally, it’s very common to hear marketing services outfits tack the term graph onto any discussion of user or customer data, leading one to conclude that any marketing organization worth its salt simply must have a graph database.

But what is a graph, and how is it different from a plain old customer database? And if you don’t have a customer graph in your organization, should you get one?


What is a graph database, and why should I care?

A graph database is a database that stores two things:

  • A set of entities (such as people, servers, movies, or types of cheese)
  • The relationships between the entities (such as friendships, memberships, ownership, or authorship)

In formal graph theory parlance, the entities are referred to as vertices, and the relationships edges. Whether someone uses one set of terms or the other is a good indication of whether they’re trying to impress you with their knowledge of graph theory (we’ll stick to the friendly terms above).

In a graph database, the relationships are first-class objects alongside the entities, with their own taxonomy and attributes, and it is this that makes graph databases unique. There are lots of types of database that can store information about things (the most common being our old friend, the relational database). These databases can be pressed into service to capture relationships indirectly (in RDBMS systems, through primary/foreign key pairs), but graph databases provide a much more explicit and simple way of capturing and, more importantly, retrieving relationship information.

The diagram below provides a simple example of a graph—in this case, one that captures Twitter-style ‘follow’ relationships between people. The circles in the diagram are people, and the lines are the relationships. Note that each relationship has a direction.

[Diagram: a follow graph connecting Andrew, Barbara, Cynthia and Donald]

The graph database makes it easy to get the answers to the following kinds of question:

  • Who does Andrew follow? (Barbara and Cynthia)
  • Who are Barbara’s followers? (Andrew and Cynthia)
  • Who do the people who Andrew follows follow? (Barbara and Donald)
  • Who are Donald’s followers’ followers? (Andrew and Cynthia)

You can see how this kind of data is essential in providing a service like Twitter: When Andrew logs in, he wants to see the tweets from the people he follows, while Barbara can see how many followers she has. But the graph also makes it possible to make a suggestion to Andrew: Because Andrew follows Barbara and Cynthia, and they both follow Donald, Twitter can suggest that Andrew also follow Donald. This ability to explore indirect relationships is a key value of graph data.
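
To make this concrete, here’s a minimal sketch of the follow graph in Python using the networkx library. Since the diagram isn’t reproduced here, the edge list is an assumption, reconstructed to be consistent with the answers above.

    # A follow graph as a directed graph; an edge A -> B means "A follows B".
    # The edges below are reconstructed from the answers in the text.
    import networkx as nx

    follows = nx.DiGraph()
    follows.add_edges_from([
        ("Andrew", "Barbara"), ("Andrew", "Cynthia"),
        ("Cynthia", "Barbara"),
        ("Barbara", "Donald"), ("Cynthia", "Donald"),
    ])

    print(set(follows.successors("Andrew")))     # who Andrew follows
    print(set(follows.predecessors("Barbara")))  # Barbara's followers

    # Follow suggestion: people followed by the people Andrew follows,
    # excluding Andrew himself and anyone he already follows.
    already = set(follows.successors("Andrew"))
    suggestions = {
        candidate
        for friend in already
        for candidate in follows.successors(friend)
    } - already - {"Andrew"}
    print(suggestions)  # {'Donald'}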

Because a lot of marketing is about people and relationships, there are several ways in which graph data, particularly user graph data, can be useful for marketers, including:

  • Recommendation services
  • User identity matching
  • Community detection (tribes)
  • Commercial account management

In the rest of this post, we’ll take a look at these scenarios in a little more detail, and then look at some things to bear in mind if you’re thinking about investing in graph database technology.


Graph-based recommendation services

One of the best (and most well-known) applications for graph data is a recommendation engine. By capturing the relationships between users and the products they have purchased in a graph, companies like Amazon are able to recommend products based on the purchase behavior of others.

[Diagram: customers (blue) and products (green) linked by ‘purchased’ and ‘liked’ relationships]

The graph above is more complex than the first example because it captures two kinds of entity (customer, in blue, and product, in green) and two kinds of relationship (purchase and ‘like’ – analogous to a 5-star product rating in this example).

Based on the graph, we can see that Andrew has purchased (and liked) a widget. Barbara and Cynthia have also both purchased a widget, and additionally have each purchased both a sprocket and a doodad. So a simple recommender system could use this information to suggest to Andrew that he should buy one of these products.

However, with the extra level of detail of the ‘like’ relationship, we can see that Barbara and Cynthia both like the doodad – and that Cynthia, like Andrew, also liked the widget she bought. Because their tastes demonstrably overlap with Andrew’s, it is possible to make a more relevant recommendation – the doodad. This approach is also known as collaborative filtering.
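
Here’s a toy sketch of that logic in Python. The data encodes only the relationships described above; the weighting (a ‘like’ counts double) is an assumption for illustration.

    # A toy collaborative filter over the purchase/like graph.
    from collections import Counter

    purchased = {
        "Andrew":  {"widget"},
        "Barbara": {"widget", "sprocket", "doodad"},
        "Cynthia": {"widget", "sprocket", "doodad"},
    }
    liked = {
        "Andrew":  {"widget"},
        "Barbara": {"doodad"},
        "Cynthia": {"widget", "doodad"},
    }

    def recommend(user):
        # Customers who bought something the user also bought...
        peers = {c for c in purchased
                 if c != user and purchased[c] & purchased[user]}
        # ...vote for products the user hasn't bought yet; a 'like'
        # counts extra, so well-liked products rank higher.
        votes = Counter()
        for peer in peers:
            for product in purchased[peer] - purchased[user]:
                votes[product] += 2 if product in liked[peer] else 1
        return votes.most_common()

    print(recommend("Andrew"))  # [('doodad', 4), ('sprocket', 2)]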

A simple (but significant) enhancement of such a system is to capture user behavior (especially in response to recommendations) and feed it back into the graph as a new layer of relationships. This feedback loop means that the recommendation engine can become an unsupervised machine learning system, continually optimizing recommendations based on not just similarities in purchases, but how users are responding to the recommendations themselves.

Pinterest’s Pixie is a good example of a graph-based recommendation system. This deck from Brazil-based retailer Magazine Luiza talks about how they built a recommendation system on AWS.


Identity matching

Matching sets of user data that do not contain a common, stable identity signal is a challenge for many organizations, and is only getting more challenging as devices proliferate; fortunately, graphs offer a solution here.

A graph database can model the indirect relationships between IDs through tracking the relationships between those IDs and the contexts they are seen in. In the diagram below, three distinct IDs (ID1, ID2 and ID3) are seen in combination with different devices (in green), apps (in orange) and geo locations (in grey). Using the graph, it’s possible to calculate a simple ‘strength function’ which represents the indirect relationship between the IDs, by counting the number of entities they have common links to.

[Diagram: IDs ID1–ID3 linked to devices (green), apps (orange) and geo locations (grey)]

Per the diagram, the relationship strength function (we’ll call it S) for each pair of IDs is as follows:

  • S{ID1, ID2} = 2
  • S{ID1, ID3} = 4
  • S{ID2, ID3} = 3

On the basis of this function, one would conclude that ID1 and ID3 are most likely to belong to the same actual user, or at least users that have some sort of commonality.
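
As a minimal sketch, the counting version of S is a few lines of Python. The context sets below are hypothetical, chosen only to reproduce the S values above, since the original diagram isn’t shown here.

    # Strength function S: the number of devices/apps/locations
    # two IDs have both been seen with. Context sets are hypothetical.
    from itertools import combinations

    contexts = {
        "ID1": {"device1", "device2", "app1", "app2", "geo1"},
        "ID2": {"device1", "app1", "app3", "geo2"},
        "ID3": {"device1", "device2", "app1", "app2", "geo2"},
    }

    def strength(a, b):
        return len(contexts[a] & contexts[b])

    for a, b in combinations(sorted(contexts), 2):
        print(f"S{{{a}, {b}}} = {strength(a, b)}")
    # S{ID1, ID2} = 2, S{ID1, ID3} = 4, S{ID2, ID3} = 3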

A more sophisticated version of the graph has numeric values or weights associated with the relationships – for example, the user/app relationship might capture the number of times that app has been launched by that user. This enables the strength function to achieve greater accuracy by weighting some connections more than others.

A further enhancement is to apply machine learning to the graph. The various relationships between two IDs, and their intrinsic weights, can be thought of as a set of features to build a predictive model for the likelihood of two IDs actually belonging to the same person. A set of known connected IDs can then be used as a training set, with the ML algorithm adjusting a second set of weights on the relationship edges until it achieves a high-quality prediction, and is able to provide a probability for each identity pair belonging to the same actual user. This is roughly how solutions like Drawbridge’s Connected Consumer Graph and Tapad’s Device Graph work.
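
As a hedged illustration of that idea: if each ID pair is reduced to per-context-type overlap counts, any off-the-shelf classifier can be trained on pairs whose true owner is known. This sketch uses scikit-learn’s logistic regression with fabricated numbers; production systems like those named above are far more sophisticated.

    # Train a classifier to predict "same user?" from pair features.
    from sklearn.linear_model import LogisticRegression

    # Features per ID pair: [shared_devices, shared_apps, shared_geos]
    X_train = [
        [2, 3, 1], [1, 2, 2], [2, 2, 0],   # known same-user pairs
        [0, 1, 0], [1, 0, 1], [0, 0, 1],   # known different-user pairs
    ]
    y_train = [1, 1, 1, 0, 0, 0]           # 1 = same user

    model = LogisticRegression().fit(X_train, y_train)

    # Probability that a new ID pair belongs to the same person:
    print(model.predict_proba([[1, 2, 1]])[0, 1])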

This kind of probabilistic identity mapping enables marketing delivery systems to treat highly similar IDs as if they belong to the same user, even though some of those matches will inevitably be wrong. Deterministic identity mapping (where user data is joined on common keys) is obviously better, but where that is not possible, a graph-based approach can provide an efficient way of extending an audience to a new channel where good ID coverage is scarce.


Community detection

A third marketing application of graphs is community detection. This is a type of clustering that looks for communities or ‘cliques’ of highly interconnected individuals within a population:

[Diagram: communities of highly interconnected individuals within a network] (source)

The clusters that result from this kind of analysis represent customers who are highly connected to each other; targeting these users with social or word-of-mouth campaigns can then drive above-average response rates.

A graph-based approach to this problem is useful when the clusters or communities are not of equal size, and where there may be an uneven distribution of connectedness within the clusters. As with some of the other scenarios I’ve described, using a graph representation of the data isn’t the only way to solve this problem, but it can be the most efficient.
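
For a flavor of what this looks like in practice, here’s a sketch using networkx’s built-in modularity-based community detection, on an invented graph of two tight groups joined by a single bridge.

    # Community detection via greedy modularity maximization.
    # The edges are invented: two cliques plus one 'bridge' connection.
    import networkx as nx
    from networkx.algorithms.community import greedy_modularity_communities

    G = nx.Graph()
    G.add_edges_from([
        # group 1
        ("Ann", "Bob"), ("Bob", "Cat"), ("Ann", "Cat"),
        ("Cat", "Dan"), ("Ann", "Dan"),
        # bridge
        ("Dan", "Eve"),
        # group 2
        ("Eve", "Fay"), ("Fay", "Gus"), ("Eve", "Gus"),
    ])

    for community in greedy_modularity_communities(G):
        print(sorted(community))
    # e.g. ['Ann', 'Bob', 'Cat', 'Dan'] and ['Eve', 'Fay', 'Gus']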

This article provides some interesting insights into how the diversity of Twitter users’ networks can have an impact on their ability to generate creative ideas.


Commercial account management

Commercial or Enterprise account management is all about tracking relationships – relationships with individual decision-makers, the organizations that they represent, and other information such as the decisions those people are responsible for, the role they have, and so on.

Graphs are very well suited to tracking this kind of information, making it possible for account managers to identify strategies for closing new business. Take the following example graph, of companies, individuals who work for them, and products those companies use:

[Diagram: companies, the individuals who work for them, and the products those companies use]

This graph enables the following conclusions to be drawn:

  • GM uses Product A and Product B
  • Andrew Smith, who works for Ford, used to work for GM, which uses Product B
  • Barbara Jones, who used to work for Ford, specializes in Product B

Based on the above insights, the account manager for Ford, who is trying to sell in Product B, might decide to speak to her account manager colleague for GM to ask if she can connect Carl White with Andrew Smith, so that Carl can talk about his experiences with Product B. Given that Carl and Andrew have both worked at GM, they may even already know each other.

In selecting a partner to work with Ford on the implementation of Product B, the Ford account manager would be wise to speak to Barbara Jones at ABC, since the graph shows that ABC has worked with Ford before, and so has Barbara.
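
Here’s a sketch of how this graph might be queried if the facts were stored as simple (subject, relation, object) triples. The triples encode the facts stated above, plus Carl White’s GM employment, which the example implies but the bullets don’t list.

    # The account-management graph as relationship triples.
    facts = [
        ("GM", "uses", "Product A"), ("GM", "uses", "Product B"),
        ("Andrew Smith", "works_for", "Ford"),
        ("Andrew Smith", "used_to_work_for", "GM"),
        ("Carl White", "works_for", "GM"),       # implied by the prose
        ("Barbara Jones", "used_to_work_for", "Ford"),
        ("Barbara Jones", "works_for", "ABC"),
        ("Barbara Jones", "specializes_in", "Product B"),
    ]

    def query(subject=None, relation=None, obj=None):
        # Return every triple matching the non-None arguments.
        return [(s, r, o) for (s, r, o) in facts
                if subject in (None, s)
                and relation in (None, r)
                and obj in (None, o)]

    # Which companies use Product B, and who works there now?
    for company, _, _ in query(relation="uses", obj="Product B"):
        print(query(relation="works_for", obj=company))
    # [('Carl White', 'works_for', 'GM')]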

This HBR article provides some more detail on the use of social graphs for business.


Getting started with graphs

If one of the above use-cases for graph data has caught your attention, here are some ways to get started with building a customer graph dataset.

Before you start

Before you go to the trouble and expense of building your own graph database, you do need to have a fairly good-quality set of customer data with attributes that you can use to connect those customers together. A sparse graph that is a series of poorly-connected islands will not be very useful.

You should also think about whether your business and data lend themselves to the kinds of connected applications listed above. For example, if your customer base has very diverse needs, then a recommender service may not be very useful. Similarly, if your customer base is small, then an in-house graph solution may be overkill; you may benefit from connecting your customer base to one of the public graph offerings below.

Graph databases

If you’re looking to build your own graph database, there are a number of commercial solutions available.

If you’re already managing your data in the cloud, you’ll want to pick a graph data solution that works with your cloud provider – Datastax, for example, partners with Microsoft for hosting on Azure, and also supports Google’s cloud platform. Before you can leverage your graph data, you’ll need to both define your graph schema and load data into the graph – look for an implementation partner that can help you with both of these things.

To work with graph data, you’ll need to learn a graph query language. The most popular graph query language is Gremlin (part of the TinkerPop stack), though other query languages exist (for example, Neo4j has its own query language, called Cypher). Again, an implementation partner who already has graph coding skills will be a valuable resource here.

Customer and ID graphs

If you’re not ready to manage your own graph dataset, there are many companies offering graph-based services around customer and identity data. In fact, most DMPs today will tout a customer graph as core to their solution. Establishing whether such offerings truly are customer graphs rather than identity graphs (i.e. whether they provide relationship information between customers, not just between IDs) should be one of your first questions if you’re being pitched on these solutions.


Even if you don’t end up implementing a graph database, graph-style thinking is a valuable step in data design; one of the pleasing benefits of graph databases is that their structure closely mirrors the way that humans tend to think about data and systems. Thinking about your customer data as a graph can therefore help you unlock insights from your existing data.


October 09, 2017

The Electrification of Marketing

[Image: a nineteenth-century weave room]

At the tail end of the nineteenth century, electricity was starting to have a profound effect on the world. As dramatized in the excellent novel The Last Days of Night, and soon in the forthcoming film The Current War, Thomas Edison battled with George Westinghouse (the latter aided by Croatian genius/madman Nikola Tesla) for control over the burgeoning market for electricity generation and supply. The popular symbol of the electrical revolution is of course Edison’s famous light bulb, but perhaps even more important was the humble electric motor.

The electric motor was so important because it revolutionized manufacturing, enabling factories to create assembly lines and realize huge efficiency dividends. The Ball Brothers Glass Manufacturing Company, for example, replaced 36 workers with a single electric crane for moving heavy loads across the factory where they made their famous Mason jars.

But for all the benefits of electric motors, many factories were slow to embrace the new technology. As this article from the BBC World Service’s “50 Things that Made the Modern Economy” podcast explains, by 1900, almost twenty years after Thomas Edison started selling electricity from his generation plants in Manhattan and London, only 5% of factories had switched from steam to electric power. Powering a factory with a steam engine was costly, complicated, and dangerous. So why the reluctance to move to electricity?

The reason lies in the way those factories were organized to take advantage of steam power generation. A typical nineteenth century factory, for example making textiles, looked like the image above. Mechanical power was generated by a single large steam engine which ran more or less continuously, and was transferred to individual machines (such as looms or lathes) via a series of drive shafts, gears and drive belts. Because the power was being transferred mechanically, the machines were packed closely together. This, combined with the constant spinning of the drive shafts, made these factories very dangerous to work in; in 1900, over half a million people in the US (almost 1% of the population) were maimed in factory accidents.

Simply replacing the central steam engine with an electric motor did not deliver significant benefits – the drive shafts and belts to the machines still broke down, factories were still crowded, inefficient and dangerous, and the central motor (now powered by comparatively expensive electricity) still had to be kept running constantly.

To truly capitalize on electrification, factories had to reinvent themselves, replacing all their individual machines with versions that were powered by their own electric motors, with power transferred to them via unobtrusive wires rather than spinning drive shafts. In turn this meant that machines did not need to be so tightly packed together; factories could be reorganized to be more spacious and facilitate the flow of items, paving the way for the production line and improving factory conditions and safety. Ultimately, it was the qualitative transformation in the way things were made which was electrification’s biggest benefit.

Reorganizing the marketing factory

The story of electrification and how it impacted manufacturing in the first decades of the twentieth century provides an interesting parallel to the impact of data and AI on the marketing industry in the first decades of the twenty-first.

Today, many marketing organizations have adopted data in a similar way to how factories first adopted electricity: by applying it to existing business processes and ways of working. In direct marketing, the core processes of list-generation and campaign delivery have not changed fundamentally in fifty years – marketers build target audience lists, map messages to this list, deliver those messages, and then measure the response. The sophistication and complexity of all these steps has changed dramatically, but the process itself is still the same.

However, as electricity led to the development of new kinds of manufacturing machines, so data is leading to the development of new kinds of marketing machines, powered by AI. These new systems, which I have written about before, promise to transform the way that digital marketing is done. But just as before, getting there won’t be easy, and will require marketing leaders to embrace disruptive change.

The current ‘factory layout’ for many marketing organizations is based around individual teams that have responsibility for different channels, such as web, search, email, mobile and so on. These teams coordinate on key marketing calendar activities, such as holiday campaigns or new product launches, but manage their own book of work as a sequence of discrete activities. At Microsoft we’ve made progress in the last few years on bringing many of these teams together, and supporting them with a common set of customer data and common marketing automation tooling. But individual campaigns are still largely hand-crafted.

AI-driven marketing systems use a wide range of attributes at the customer level, combined with a continuous testing/learning approach, to discover which creative and messaging, from a range of options, should be executed next, for which customers, and in which channels. They break down the traditional campaign-centric model of customer communications and replace it with a customer-centric, ‘always on’ program of continuous nurture. For these systems to work well, they need a detailed picture of the customer, including their exposure and response to previous communications, and they need a wide range of actions that they can take, including the ability to choose which channel to communicate in for a given message and audience.

A fairly traditional marketing organization that is looking to evaluate the potential of AI-driven marketing will, prudently, lean towards trying the technology in a relatively limited pilot environment, likely choosing just one campaign or program in a single channel for their test. These choices make sense – few companies can easily try out new technology across multiple channels, for both technical reasons (i.e. wiring the thing up) and organizational reasons (getting multiple teams to work together).

But this approach is a bit like a 1900s factory owner deciding to replace just a single machine in the factory with an electric version. Dedicated (and expensive) wiring would have to be laid to power the machine; it would still be crammed in with all the others, so its size and design would be limited; and it would likely need a dedicated operator. In this environment, it would be unlikely that the single machine would be so transformatively efficient that the factory owner would rush out to buy twenty more.

And so it is with AI-driven marketing. A test within a single channel, on a single campaign, will likely generate modest results, because the machine’s view of the customer will be limited to their experience with that brand in that channel; its message choices will also be limited, since it can only communicate within the single channel. These problems are exacerbated by the expense of laying dedicated data ‘lines’ to the new system, and of building many creative variants, to give the system enough message choice within a single channel.

What’s needed is for AI-based optimization to be applied as an enabling capability in all marketing campaigns, across multiple channels and products. This requires significant investment in data and channel integration; but even more importantly it requires marketers, and marketing organizations, to operate differently. Digital advertising, CRM and e-commerce teams, and their budgets, need to be brought together; instead of marketers creating many discrete campaigns, marketers need to create more evergreen programs that can be continuously optimized over time. The marketing factory needs to be organized around the customer, not the product or channel.

This kind of model represents very disruptive change for today’s marketing organizations, as it did for yesterday’s factory owners. In the end, much of the rise of electrified factories a hundred years ago was due to the efforts of newcomers to the field such as Henry Ford, who jumped straight to an electrified production line in the production of his Model T. Today’s marketing chiefs would do well to heed this lesson from history, as disruptors like Amazon, Tesla and Stitch Fix use process innovation to create streamlined, customer-centric marketing functions that are poised to exploit the transformative technology of AI.


September 06, 2017

Is Digital Marketing having its ‘Deep Blue’ moment?

[Image: a computer chess match]

Garry Kasparov will forever be remembered as perhaps the greatest chess player of all time, dominating the game for almost twenty years until his retirement in 2005. But ironically he may be best remembered for the match he failed to win twenty years ago in 1997 against IBM’s Deep Blue chess computer. That watershed moment – marking the point at which computers effectively surpassed humans in chess-playing ability – prompted much speculation and hand-wringing about the coming obsolescence of the human brain, now that a mere computer had been able to beat the best chess grandmaster in the world.

Since then, computers and chess software have only grown more powerful, to the point that a $50 commercial chess program (or even a mobile app) can beat most grandmasters easily. Faced with this, you might expect Kasparov and other top-flight players to have grown disillusioned with the game, or defensive about the encroachment of computers on their intellectual territory; but in fact the reverse is true.

Today’s chess grandmasters make extensive use of computers to practice, try out new strategies, and prepare for tournaments, in the process becoming a little more like the machines that outpaced them in 1997. Kasparov himself was instrumental in pioneering a new type of chess game, Advanced Chess, in which humans are allowed to consult with chess software as they play. In his new book, “Deep Thinking: Where Machine Intelligence Ends and Human Creativity Begins”, Kasparov writes about an Advanced Chess match he played in 1998 against Veselin Topalov:

“Having a computer partner also meant never having to worry about making a tactical blunder. The computer could project the consequences of each move we considered, pointing out possible outcomes and countermoves we might otherwise have missed. With that taken care of for us, we could concentrate on strategic planning instead of spending so much time on calculations. Human creativity was even more paramount under these conditions.”

What Kasparov and his successors in the competitive chess-playing world have discovered was that, when it comes to chess, the strongest player is not man or machine, but man and machine. In fact, a new kind of chess tournament has sprung up, Freestyle Chess, in which teams of humans and computers compete against one another, each bringing their respective strengths to the game: creativity, strategy and intuition from the humans, and tactical outcome prediction from the computers.

And your point is?

You may be asking what relevance this has to digital marketing. In fact, there are strong similarities between chess and marketing (particularly digital marketing):  they are both highly quantifiable pursuits with clear outcomes which have historically relied solely on human intuition and creativity for success.

As in chess, digital marketing relies upon a continuous reassessment of the ‘board’ (customer behaviors and history) in order to decide upon the next ‘move’ (a particular campaign communication aimed at a particular group of customers). Once the move has been made, the board needs to be reassessed before taking the next move.

Today’s digital marketer is much like the chess grandmaster of the early 1990s – they rely on their intuitive understanding of their audience’s makeup and preferences to decide what offers and messages they want to deliver, to which users, and in which channels. Of course, digital marketers understand that measuring campaign outcomes and audience response (using techniques like control groups and attribution analysis) is very important, but most still operate in a world where the humans make the decisions, and the computers merely provide the numbers to support the decision-making.

Luddites 2.0

When Kasparov was asked in 1990 if a computer could beat a grandmaster before the year 2000, he quipped:

“No way - and if any grandmaster has difficulties playing computers, I would be happy to provide my advice.”

Today’s digital marketers can be forgiven for exhibiting some of the same skepticism. Ask them how they came up with a new idea for an ad, or how they know that a particular product will be just right for a particular audience, and they may not be able to answer – they will just know that their intuition is sound. As a result it can seem incredible that a computer can pick the right audience for a campaign, and match the appropriate offer and creative to that audience. 

But the computers are coming. As I mentioned in my earlier post on bandit experimentation, companies like Amplero, Kahuna and Cerebri AI are pitching intelligent systems that claim to take a lot of this decision-making about creative choice, audience, channel and other campaign variables out of the hands of humans. But where does that leave the digital marketer?

We welcome our robot colleagues

The clue lies in the insights that Kasparov ultimately drew from his defeat. He realized that the strengths he brought were different and complementary to the strengths of the computer. The same holds true for digital marketing. Coming up with product value propositions, campaign messaging and creative are activities which computers are nowhere close to being good at, especially in the context of broader intangible brand attributes. On the other hand, audience selection and targeting, as well as creative optimization, are highly suited to automation, to the extent that computers can be expected to perform significantly better than their human counterparts, much as chess software outperforms human players.

Clearly humans and machines need to work together to create and execute the best performing campaigns, but exactly how this model will work is still being figured out.

Today, most digital marketers build campaign audiences by hand, identifying specific audience attributes (such as demographics or behavioral history) and applying filters to those attributes to build segments. The more sophisticated the marketer attempts to be in selecting audience attributes for campaign segments, the more cost they incur in the setup of those campaigns, making the ROI equation harder to balance.

The emerging alternative approach is to provide an ML/AI system with a set of audience (and campaign) attributes, and let it figure out which combinations of audience and offer/creative deliver the best results by experimenting with different combinations of these attributes in outbound communications (a bare-bones sketch of this loop follows the list below). But this raises some important questions:

  • How to choose the attributes in the first place
  • How to understand which attributes make a difference
  • How to fit ML/AI-driven campaigns into a broader communications cadence & strategy
  • How to use learnings from ML/AI-driven campaigns to develop new value propositions and creative executions
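
For concreteness, here is a bare-bones epsilon-greedy version of that experimentation loop. Everything in it – the audiences, creatives, and simulated response – is invented for illustration; real systems (including the vendors mentioned above) use far richer models.

    # Epsilon-greedy experimentation over audience x creative combinations.
    import random

    # Every audience x creative combination is an 'arm' of the bandit.
    arms = [(audience, creative)
            for audience in ("new_users", "lapsed", "loyal")
            for creative in ("discount", "how_to", "new_feature")]
    sends = {arm: 0 for arm in arms}
    conversions = {arm: 0 for arm in arms}

    # Stand-in for the real world: each combination has a hidden
    # conversion rate that the loop has to discover.
    hidden_rate = {arm: random.uniform(0.01, 0.08) for arm in arms}

    def send_and_measure(arm):
        return random.random() < hidden_rate[arm]

    for _ in range(10_000):
        if random.random() < 0.1:    # explore: try a random combination
            arm = random.choice(arms)
        else:                        # exploit: use the best seen so far
            arm = max(arms, key=lambda a:
                      conversions[a] / sends[a] if sends[a] else 1.0)
        sends[arm] += 1
        conversions[arm] += send_and_measure(arm)

    best = max(arms, key=lambda a: conversions[a] / max(sends[a], 1))
    print(best, conversions[best] / sends[best])

Note that a loop like this will happily converge on a winning combination without ever explaining why it wins – which is exactly the ‘black box’ problem discussed next.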

In other words, ML/AI-driven marketing systems cannot simply be ‘black boxes’ into which campaign objectives and creative are dumped, and then left to deliver clicks or conversions on the resulting campaign delivery. They need to inform and involve marketers as they do their work, so that the marketers can make their uniquely human contribution to the process of designing effective campaigns. The black box needs some knobs and dials, in other words.

The world of chess offers a further useful parallel here. Chess grandmasters make extensive use of specialized chess software like Fritz 15 or Shredder, which not only provide a comprehensive database of chess moves, but also training and analysis capabilities to help human players improve their chess and plan their games. These programs don’t simply play chess – they explain how they are making their recommendations, to enable their human counterparts to make their own decisions more effectively.

These are the kinds of systems that digital marketers need to transform their marketing with AI. In turn, marketers need to adjust the way they plan and define campaigns in the same way that chess grandmasters have dramatically changed the way they study, plan and play games of chess in the last twenty years, working alongside the computers before, during and after campaigns are run.

In 1997, it was far from clear how chess, and the people who played it, would react to the arrival of computers. Digital Marketing stands on a similar threshold today. Twenty years from now, it will seem obvious how marketers’ roles were always going to evolve, and how technology would adapt to support them. We’re in the fortunate position of getting to figure this out as it all unfolds, much as Kasparov did.


August 26, 2015

Got a DMP coming in? Pick up your underwear

If you’re like me, and have succumbed to the unpardonably bourgeois luxury of hiring a cleaner, then you may also have found yourself running around your house before the cleaner comes, picking up stray items of laundry and frantically doing the dishes. Much of this is motivated by “cleaner guilt”, but there is a more practical purpose – if our house is a mess when the cleaner comes, all she spends her time doing is tidying up (often in ways that turn out to be infuriating, as she piles stuff up in unlikely places) rather than actually cleaning (exhibit one: my daughter’s bedroom floor).

This analogy occurred to me as I was thinking about the experience of working with a Data Management Platform (DMP) provider. DMPs spend a lot of time coming in and “cleaning house” for their customers, tying together messy datasets and connecting them to digital marketing platforms. But if your data systems and processes are covered with the metaphorical equivalent of three layers of discarded underwear, the DMP will have to spend a lot of time picking that up (or working around it) before they can add any serious value.

So what can you do ahead of time to get the best value out of bringing in a DMP? That’s what this post is about.

What is a DMP, anyway?

That is an excellent question. DMPs have evolved and matured considerably since they emerged onto the scene a few years ago. It’s also become harder to clearly identify the boundaries of a DMP’s services because many of the leading solutions have been integrated into broader “marketing cloud” offerings (such as those from Adobe, Oracle or Salesforce). But most DMPs worth their salt provide the following three core services:

Data ingestion & integration: The starting place for DMPs, this is about bringing a marketer’s disparate audience data together in a coherent data warehouse that can then be used for analytics and audience segment building. Central to this warehouse is a master user profile  – a joined set of ID-linked data which provides the backbone of a customer’s profile, together with attributes drawn from first-party sources (such as product telemetry, historical purchase data or website usage data) and third-party sources (such as aggregated behavioral data the DMP has collected or brokered).

Analytics & segment building: DMPs typically offer their own tools for analyzing audience data and building segments, often as part of a broader campaign management workflow. These capabilities can vary in sophistication, and sometimes include lookalike modeling, where the DMP uses the attributes of an existing segment (for example, existing customers) to identify other prospects in the audience pool who have similar attributes, and conversion attribution - identifying which components of a multi-channel campaign actually influenced the desired outcomes (e.g. a sale).

Delivery system integration: The whole point of hiring a DMP to integrate data and enable segment building is to support targeted digital marketing. So DMPs now provide integration points to marketing delivery systems across email, display (via DSP and Exchange integration), in-app and other channels. This integration is typically patchy and influenced by other components of the DMP provider’s portfolio, but is steadily improving.

Making the best of your DMP relationship

The whole reason that DMPs exist in the first place is because achieving the above three things is hard – unless your organization is in a position to build out and manage its own data infrastructure and put some serious investment behind data integration and development, you are unlikely to be able to replicate the services of a DMP (especially when it comes to integration with third-party data and delivery systems). But there are a number of things you can do to make sure you get the best value out of your DMP relationship.

 

1. Clean up your data

This is the area where you can make the most difference ahead of time. Bringing signals about your audience/customers together will benefit your business across the board, not just in a marketing context. You should set your sights on integrating (or at least cataloging and understanding) all data that represents customer/prospect interaction with your organization, such as:

  • Website visits
  • Purchases
  • Product usage (if you have a product that you can track the usage of)
  • Mobile app usage
  • Social media interaction (e.g. tweets)
  • Marketing campaign response (e.g. email clicks)
  • Customer support interactions
  • Survey/feedback response

You should also integrate any datasets you have that describe what you already know about your customers or users, such as previous purchases or demographic data.

The goal here is, for a given user/customer, to be able to identify all of their interactions with your organization, so that you can cross-reference that data to build interesting and useful segments that you can use to communicate with your audience. So for user XYZ123, for example, you want to know that:

  • They visited your website 3 times in the past month, focusing mainly on information about your Widget3000 product
  • They have downloaded your free WidgetFinder app, and run it 7 times
  • They previously purchased a Widget2000, but haven’t used it for four months
  • They are male, and live in Sioux Falls, South Dakota
  • Last week they tweeted:
    [an embedded tweet]

Unless you’re some kind of data saint (or delusional), reading the two preceding paragraphs probably filled you with exhaustion. Because all of the above kinds of data have different schemas (if they have schemas at all), and more importantly (or depressingly), they all use different (or at least independent) ways of identifying who the user/customer actually is. How are you supposed to join all this data if you don’t have a common key?

DMPs solve these problems in a couple of ways (a toy sketch follows the list below):

  • They provide a unified ID system (usually via a third-party tag/cookie) for all online interaction points (such as web, display ads, some social)
  • They will map/aggregate key behavioral signals onto a common schema to create a single user profile (or online user profile, at any rate), typically hosted in the DMP’s cloud
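
Here’s a toy illustration of that second point – folding differently-shaped events onto one profile via a shared master ID. The schemas and the id_map are hypothetical; in a real DMP, the tag/cookie infrastructure supplies the mapping.

    # Fold events from different sources onto a single master profile.
    events = [
        {"source": "web",   "cookie_id": "abc",     "event": "visit"},
        {"source": "email", "email_id": "x@y.com",  "event": "click"},
        {"source": "app",   "device_id": "D42",     "event": "app_run"},
    ]
    id_map = {  # source-specific ID -> master ID (hypothetical)
        "abc": "user-123", "x@y.com": "user-123", "D42": "user-123",
    }

    profiles = {}
    for e in events:
        source_id = e.get("cookie_id") or e.get("email_id") or e.get("device_id")
        master = id_map[source_id]
        profiles.setdefault(master, []).append((e["source"], e["event"]))

    print(profiles)
    # {'user-123': [('web', 'visit'), ('email', 'click'), ('app', 'app_run')]}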

The upside of this approach is that you can achieve some degree of data integration via the (relatively) painless means of inserting another bit of JavaScript into all of your web pages and ad templates, and also that you can access other companies’ audiences who are tagged with the same cookie – so-called audience extension.

However, there are also some downsides. Key amongst these are:

Yet another ID: If you already have multiple ways of IDing your users, adding another “master ID” to the mix may just increase complexity. And it may be difficult to link key behaviors (such as mobile app purchases) or offline data (such as purchase history) to this ID.

Your data in someone else’s cloud: Most marketing cloud/DMP solutions assume that the master audience profile dataset will be stored in the cloud. That necessarily limits the amount and detail of information you can include in the profile – for example, credit card information.

It doesn’t help your data: Just taking a post-facto approach with a DMP (i.e. fixing all your data issues downstream of the source, in the DMP’s profile store) doesn’t do anything to improve the core quality of the source data.

So what should you do? My recommendation is to catalog, clean up and join your most important datasets before you start working with a DMP, and (if possible) identify an ID that you already own that you can use as a master ID. The more you can achieve here, the less time your DMP will spend picking up your metaphorical underwear, and the more time they’ll spend providing value-added services such as audience extension and building integrations into your online marketing systems.

 

2. Think about your marketing goals and segments

You should actually think about your marketing goals before you even think about bringing in a DMP or indeed make any other investments in your digital marketing capabilities. But if your DMP is already coming in, make sure you can answer questions about what you want to achieve with your audience (for example, conversions vs engagement) and how you segment them (or would like to segment them).

Once you have an idea of the segments you want to use to target your audience, then you can see whether you have the data already in-house to build these segments. Any work you can do here up-front will save your DMP a lot of digging around to find this data themselves. It will also equip you well for conversations with the DMP about how you can go about acquiring or generating that data, and may save you from accidentally paying the DMP for third-party data that you actually don’t need.

 

3. Do your own due diligence on delivery systems and DSPs

Your DMP will come with their own set of opinions and partnerships around Demand-side Platforms (DSPs) and delivery systems (e.g. email or display ad platforms). Before you talk with the DMP on this, make sure you understand your own needs well, and ideally, do some due diligence with the solutions in the marketplace (not just the tools you’re already using) as a fit to your needs. Questions to ask here include:

  • Do you need realtime (or near-realtime) targeting capabilities, and under what conditions? For example, if someone activates your product, do you want to be able to send them an email with hints and tips within a few hours?
  • What kinds of customer journeys do you want to enable? If you have complex customer journeys (with several stages of consideration, multiple channels, etc) then you will need a more capable ‘journey builder’ function in your marketing workflow tools, and your DMP will need to integrate with this.
  • Do you have any unusual places you want to serve digital messaging, such as in-product/in-app, via partners, or offline? Places where you can’t serve (or read) a cookie will be harder to reach with your DMP and may require custom integration.

The answers to these questions are important: on the one hand there may be a great third-party system with functionality that you really like, but which will need custom integration with your DMP; on the other hand, the solutions that the DMP can integrate with easily may get you started quickly and painlessly, but may not meet your needs over time.

 

If you can successfully perform the above housekeeping activities before your DMP arrives and starts gasping at the mountain of dishes piled up in your kitchen sink, you’ll be in pretty good shape.


May 17, 2015

The rise of the Chief Data Officer

As the final season of Mad Men came to a close this weekend, one of my favorite memories from Season 7 is the appearance of the IBM 360 mainframe in the Sterling Cooper & Partners offices, much to the chagrin of the creative team (whose lounge was removed to make space for the beast), especially poor old Ginsberg, who became convinced the “monolith” was turning him gay (and took radical steps to address the issue).

My affection for the 360 is partly driven by the fact that I started my career at IBM, closer in time to Mad Men Season 7 (set in 1969) than the present day (and now I feel tremendously old having just written that sentence). The other reason I feel an affinity for the Big Blue Box is because my day job consists of thinking of ways to use data to make marketing more effective, and of course that is what the computer at SC&P was for. It was brought in at the urging of the nerdish (and universally unloved) Harry Crane, to enable him to crunch the audience numbers coming from Nielsen’s TV audience measurement service to make TV media buying decisions. This was a major milestone in the evolution of data-driven marketing, because it linked advertising spend to actual advertising delivery, something that we now take for granted.

The whole point of Mad Men introducing the IBM computer into the SC&P offices was to make a point about the changing nature of advertising in the early 1970s – in particular that Don Draper and his “three martini lunch” tribe’s days were numbered. Since then, the rise of the Harry Cranes, and the use of data in marketing and advertising, has been relentless. Today, many agencies have a Chief Data Officer, an individual charged with the task of helping the agency and its clients to get the best out of data.

But what does, or should, a Chief Data Officer (or CDO) do? At an advertising & marketing agency, it involves the following areas:

Enabling clients to maximize the value they get from data. Many agency clients have significant data assets locked up inside their organization, such as sales history, product telemetry, or web data, and need help to join this data together and link it to their marketing efforts, in order to deliver more targeted messaging and drive loyalty and ROI. Additionally, the CDO should advise clients on how they can use their existing data to deliver direct value, for example by licensing it.

Advising clients on how to gather more data, safely. A good CDO offers advice to clients on strategies for collecting more useful data (e.g. through additional telemetry), or working with third-party data and data service providers, while respecting the client’s customers’ privacy needs.

Managing in-house data assets & services. Some agencies maintain their own in-house data assets and services, from proprietary datasets to analytics services. The CDO needs to manage and evolve these services to ensure they meet the needs of clients. In particular, the CDO should nurture leading-edge marketing science techniques, such as predictive modeling, to help clients become even more data-driven in their approach.

Managing data partnerships. Since data is such an important part of a modern agency’s value proposition, most agencies maintain ongoing relationships with key third-party data providers, such as BlueKai or Lotame. The CDO needs to manage these relationships so that they complement the in-house capabilities of the agency, and so the agency (and its clients) don’t end up letting valuable data “walk out of the door”.

Driving standards. As agencies increasingly look to data as a differentiating ingredient across multiple channels, using data and measurement consistently becomes ever more important. The CDO needs to drive consistent standards for campaign measurement and attribution across the agency so that as a client works with different teams, their measurement framework stays the same.

Engaging with the industry & championing privacy. Using data for marketing & advertising is not without controversy, so the CDO needs to be a champion for data privacy and actively engaged with the industry on this and other key topics.

As you can see, that’s plenty for the ambitious CDO to do, and in particular plenty that is not covered by other traditional C-level roles in an ad agency. I think we’ll be seeing plenty more CDOs appointed in the months and years to come.


November 09, 2011

Building the Perfect Display Ad Performance Dashboard, Part I – creating a measurement framework

There is no shortage of pontification available about how to measure your online marketing campaigns: how to integrate social media measurement, landing page optimization, ensuring your site has the right feng shui to deliver optimal conversions, etc. But there is very little writing about the other side of the coin: if you’re the one selling the advertising, on your site, or blog, or whatever, how do you understand and then maximize the revenue that your site earns?

As I’ve covered previously in my Online Advertising 101 series, publishers have a number of tools and techniques available to manage the price that their online ad inventory is sold for. But the use of those tools is guided by data and metrics. And it’s the generation and analysis of this data that is the focus of this series of posts.

In this series, I’ll unpack the key data components that you will need to pull together to create a dashboard that will give you meaningful, actionable information about how your site is generating money – or monetizing, to use the jargon.

We’ll start by taking a high-level look at a framework for analyzing a site’s (or network’s) monetization performance. In subsequent posts, we’ll drill into the topics that we touch on briefly here.

 

Getting the measure of the business

Ultimately, for any business, revenue (or strictly speaking, income or profit) is king. If you’re not generating revenue, you can’t pay the bills (despite what trendy start-ups will tell you). But anyone running a business needs a bit more detail to make decisions that will drive increased revenue.

In the ad-supported publishing business, these decisions fall into a couple of broad buckets:

  • How to create more (or more appealing) supply of sellable advertising inventory
  • How to monetize the supply more effectively – either by selling more of it, or selling it for a better price, or both

Another way of thinking about these decisions is in a supply/demand framework that is common to almost all businesses: If your product is selling like hot cakes and you can’t mint enough to meet demand, you have a supply problem, and you need to focus on creating more supply. If, on the other hand, you have a lot of unsold stock sitting around in warehouses (real or virtual), you have a demand problem, and you need to think about how to make your products more compelling, or your sales force more effective, or both.

Online publishers usually suffer from both problems at the same time: Part of their inventory supply will be in high demand, and the business will be supply-constrained (it is not easy to mint new ad impressions the way a widget manufacturer can stamp out new widgets). Other parts of the inventory, on the other hand, will be hard to shift, and the business will be demand-constrained – and unlike widgets, unsold ad inventory goes poof! when the clock strikes midnight.

So analysis of an online ad business needs to be based on the following key measures:

  • How much inventory was available to sell (the Supply)
  • How much inventory was actually sold (the Volume Sold)
  • How much the inventory was actually sold for (the Rate)

It’s ultimately these measures (and a few others that can be derived from them) that will tell you whether you’re succeeding or failing in your efforts to monetize your site. But like any reasonably complex business (and online advertising is, at the very least, unreasonably complex), it’s really how you segment the analysis that counts in terms of making decisions.
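
Those derivations are simple enough to state precisely. Here are two of the most common derived measures, sketched in Python (the numbers are invented):

    def monetization_summary(supply, volume_sold, rate_cpm):
        # Sell-through: the share of available inventory actually sold.
        # Revenue: CPM is the price per 1,000 impressions, hence / 1000.
        return {
            "sell_through": volume_sold / supply,
            "revenue": volume_sold * rate_cpm / 1000,
        }

    print(monetization_summary(supply=10_000_000,
                               volume_sold=6_500_000,
                               rate_cpm=1.20))
    # {'sell_through': 0.65, 'revenue': 7800.0}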

 

What did we sell, and how did we sell it?

Most businesses would be doing a pretty poor job of analysis if they couldn’t look at business performance broken out by the products they sell. A grocery chain that didn’t know if it was selling more grapes or grape-nuts would not last very long. Online advertising is no exception – in fact, quite the opposite. Because online ad inventory can be packaged so flexibly, it’s essential to answer the question “What did we sell?” in a variety of ways, such as:

  • What site areas (or sub-areas) were sold
  • What audience/targeting segments were sold
  • What day-parts were sold
  • What ad unit sizes were sold
  • What rich media types were sold

The online ad sales business also has the unusual property that the same supply can be (and is) sold through multiple channels at different price points. So it is very important to segment the business based on how the supply was sold, such as:

  • Direct vs indirect (e.g. via a network or exchange)
  • Reserved vs remnant/discretionary

Depending on the kind of site or network you’re analyzing, different aspects of these what and how dimensions will be more important. For example, if you’re running a site with lots of high-quality editorial content, analyzing sales by content area/topic will be very important; on the other hand, if the site is a community site with lots of undifferentiated content but a loyal user base, audience segments will be more relevant.

 

Bringing it together – the framework

I don’t know about you, but since I am a visual person to start with, and have spent most of the last ten years looking at spreadsheets or data tables of one sort or another, when I think of combining the components that I’ve described above, I think of a table that looks a bit like the following:

[Table: the measurement framework – measures (Supply, Volume Sold, Rate) broken out by ‘what’ and ‘how’ dimensions]

This table is really just a visual way of remembering the differences between the measures that we’re interested in (volume, rate etc) and the dimensions that we want to break things out by (the “what” and “how” detail). If you don’t spend as much of your time talking to people about data cubes as I do, these terms may be a little unfamiliar to you, which is why I’m formally introducing them here. (As an aside, I have found that if you authoritatively bandy about terms like “dimensionality” when talking about data, you come across as very wise-sounding.)
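
If you’re wrangling this in code rather than a spreadsheet, the framework maps naturally onto a pivot table. Here’s a sketch using pandas, with invented numbers:

    # Rows of sold impressions tagged with 'what' and 'how' dimensions,
    # pivoted into the framework table. All figures are invented.
    import pandas as pd

    df = pd.DataFrame([
        {"what": "premium",     "how": "direct",   "impressions": 4_000_000, "revenue": 20_000},
        {"what": "premium",     "how": "indirect", "impressions": 1_000_000, "revenue": 1_500},
        {"what": "non-premium", "how": "direct",   "impressions": 2_000_000, "revenue": 4_000},
        {"what": "non-premium", "how": "indirect", "impressions": 5_000_000, "revenue": 2_500},
    ])

    pivot = df.pivot_table(index="what", columns="how",
                           values=["impressions", "revenue"], aggfunc="sum")
    # Derived rate (CPM) per cell: revenue / impressions * 1000
    cpm = pivot["revenue"] / pivot["impressions"] * 1000
    print(cpm)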

In the next posts in this series, I shall dig into these measures and dimensions (and others) in more detail, to allow us to populate the framework above with real numbers. We’ll also be looking at how you can tune the scope of your analysis.

For now, here’s an example of the kinds of questions that you would be able to answer if you looked at premium vs non-premium ad units as the “what” dimension, and direct vs indirect as the “how” dimension:

[Table: example questions, with premium vs non-premium ad units as the ‘what’ dimension and direct vs indirect as the ‘how’ dimension]

 

As this series progresses, I’d love to know what you think of it, as well as topics that you would like me to focus on. So please make use of the comments box below.


October 17, 2011

Nicely executed retargeting opt-out (for a change)

Retargeting (sometimes called remessaging or remarketing) has taken off in a big way recently – Google introduced the feature into AdWords earlier this year, and a host of other players are in the game. Consequently, the interwebs now abound with commentary on the rather spooky nature of the technology, with people being “followed around” the Internet by ads for things they were either searching for, or were looking at on e-commerce websites.

It is true that most retargeting implementations are a bit clunky, and I have been on the receiving end of plenty of them myself. Their most irritating aspect seems to be that the time window for perceived relevance of the retargeted ads is ridiculously long. It’s somehow almost more irritating to be deluged by ads for that miscellaneous widget site that you once visited a few weeks ago (even though you have since satisfied your need for widgets elsewhere) than it is to be served non-targeted (or more broadly targeted) ads.

Such ads are made more bearable by a robust opt-out capability; many ad networks have adopted the IAB’s self-regulatory program, which calls for the advertiser to make it possible to opt out of these kinds of ads, which is to say, stop receiving them; stopping the data collection is a more difficult matter.

So today I want to give a little love to TellApart, not because their retargeting implementation is especially subtle or innovative, but simply because they provide a nice opt-out implementation. Last week I spent a little time looking for a desk for my daughter (who currently occupies our dining table with her homework). So since then I have been served retargeted ads on behalf of the site I visited (www.childrensdesks.com) on various sites. Here’s one from Business Insider:

[Screenshot: a retargeted childrensdesks.com ad on Business Insider]

The nice thing about the ad is it has a little “x” icon in the top right (which actually makes a little more sense than the IAB’s suggested “Advertising Option Icon”, which is a bit cryptic). Clicking it gives me this:

[Screenshot: TellApart’s in-ad opt-out dialog]

The ability to opt out right in the ad unit is nice, and makes me feel better disposed to the advertiser and the site that the ad is running on. Clicking through the “Learn More About These Ads” link at the bottom takes me to TellApart’s website with a little more information and the same option to opt out – though no option to opt out of certain categories of ads, or groups of advertisers. If more retargeting networks provided simpler opt-out capabilities like these, it might help to make these ads seem like less of a scary proposition.


May 13, 2009

Does Display help Search? Or does Search help Display?

One of the topics that we didn’t get quite enough time to cover in detail in my face-off with Avinash Kaushik at last week’s eMetrics Summit (of which more in another post) was the thorny issue of conversion attribution. When I asked Avinash about it, he made the sensible point that trying to correctly “attribute” a conversion to a mix of the interactions that preceded it ends up being a very subjective process, and that adopting a more experimental approach – tweaking aspects of a campaign and seeing which tweaks result in higher conversion rates – is more sound.

I asked the question in part because conversion attribution is conspicuously absent from Google Analytics – a fact which raises an interesting question about whether it’s in Google’s interest to include a feature like this, since it may stand to lose more than it gains by doing so (since the effective ROI of search will almost certainly go down when other channels are mixed into an attribution model).

Our own Atlas Institute is quite vocal on this topic, and has published a number of white papers such as this one [PDF] about the consideration/conversion funnel, and this one [PDF], on which channels are winners and losers in the new world of Engagement Mapping (our term for multi-channel conversion attribution).

The Atlas Institute has also opined about how adding display to a search campaign can raise the effectiveness of that campaign by 22% compared to search alone – in other words, how display helps search to be better.

However, a recent study from iProspect throws some new light on this discussion. The study – a survey of 1,575 web consumers – attempted to discover how people respond to display advertising. One of the most interesting findings is that, whilst 31% of users claim to have clicked on a display ad in the last 6 months, almost as many – 27% – claim to have responded to the ad by searching for that product or brand:

[Chart: iProspect survey – how consumers respond to display ads]

This raises the interesting idea that search can actually help display be better, by providing a response mechanism that differs from the traditional ad click behavior that we expect. Of course, this still doesn’t mean that search should get 100% of the credit for a conversion in this kind of scenario – in fact, it makes a stronger case for “view-through” attribution of display campaigns – something that ad networks (like, er, our own Microsoft Media Network) are keen to encourage people to do, to make performance-based campaigns look better.

All this really means that, of course, it’s not a case of display vs. search, but display and search (and a whole lot of other ways of reaching consumers). Whether you take the view that it’s your display campaign that helps your search to be more effective, or your search keywords that help your display campaign to drive more response, multi-channel online marketing – and the complexity that goes with measuring it – looks set for the big time. And by “big time”, I mean the army of small advertisers currently using systems like Google’s AdWords, or our own adCenter. So maybe we’ll see multi-channel conversion attribution in Google Analytics before long.
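To make the stakes concrete, here’s a toy Python example of how the choice of attribution model moves credit (and therefore apparent ROI) between channels. The conversion path, values, and model weights are invented for illustration – real multi-channel models like Engagement Mapping are considerably more sophisticated:

```python
# Toy illustration of how attribution models shift credit between channels.
# The path (two display views followed by a search click) and the $100
# conversion value are hypothetical.
conversion_path = ["display_view", "display_view", "search_click"]
CONVERSION_VALUE = 100.0

def last_click(path):
    """All credit to the final interaction - the model that flatters search."""
    credit = {touch: 0.0 for touch in path}
    credit[path[-1]] = CONVERSION_VALUE
    return credit

def linear(path):
    """Equal credit to every touch, including display view-throughs."""
    share = CONVERSION_VALUE / len(path)
    credit = {touch: 0.0 for touch in path}
    for touch in path:
        credit[touch] += share
    return credit

print(last_click(conversion_path))  # search gets $100, display gets $0
print(linear(conversion_path))      # display gets ~$66.67, search ~$33.33
```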


January 14, 2009

PubMatic kicks us when we’re down (but gently)

As if we weren’t all feeling gloomy enough already, PubMatic has just released its Q4 2008 Ad Price Index report, which makes for sobering reading. For those of you not familiar with PubMatic, they provide “multi-network optimization” for publishers who are looking to maximize the yield on their remnant ad inventory (i.e. the inventory the publisher can’t sell themselves).

Rather than manually dealing with a handful of networks directly, the publisher hands their inventory over to PubMatic, which ensures that the most profitable ad is shown, whether it comes from Google AdSense, BlueLithium, AdBrite, or another network. Since a key part of what PubMatic does is measure the CPMs for online ads, they have access to lots of ad price data, and every quarter they roll this data up into a report (available here as a PDF).

PubMatic has been doing this for 15 months now, and so far, they’ve yet to deliver any good news:

[Chart: PubMatic Ad Price Index – average CPM by quarter]

Given the economic Armageddon that overtook the world, PubMatic’s report that average prices only softened by $0.01 during Q4 actually seems like pretty good news. But then again, Q4 was the holiday season; and compared to Q4 2007, Q4 2008’s numbers look pretty horrendous.

The detail of the report contains some more interesting tidbits: for example, the average CPMs for small sites (fewer than 1 million PV/mo) are the highest, at around $0.60, whilst the average CPMs for large sites languish at around the $0.17 mark.

Before you start predicting the doom of the mainstream media, however, it should be pointed out (as Mike Nolet has done) that there is a sample bias in the PubMatic numbers. A small publisher is wholly dependent on ad networks for all of its revenue (lacking the resources to sell its inventory itself), and so is likely sending all of its inventory (including the juiciest stuff on the home page) to PubMatic; a large site will only be sending the inventory it couldn’t sell itself – i.e. the bottom-of-the-barrel stuff.

It also turns out that average prices for the largest and smallest publishers have slumped by around 50% in the past year, whilst prices for medium-sized sites have remained more solid:

[Chart: year-on-year change in average CPM by publisher size]

I’m at something of a loss to explain why this might be – at the high end, it may be because large sites are becoming more efficient at selling their inventory themselves, so it’s only the really cheap stuff that is being passed on to PubMatic; whilst at the bottom end, small publishers are becoming increasingly crowded out by new sites.

What would be immensely useful would be for PubMatic to provide some indication of the proportion of each site’s total inventory that is served through them; this would make it easier to tell whether changes in average prices through PubMatic are the result of a change in the mix of inventory being passed to the company. However, I would be very surprised if PubMatic had access to this kind of data.
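In the absence of that data, a quick worked example (with entirely made-up numbers) shows how strong the mix effect can be: the blended average CPM falls even though neither segment’s price moves, simply because more cheap remnant inventory enters the pool:

```python
# Invented numbers showing the mix effect: the blended average CPM drops
# even though neither segment's price changes, purely because large sites
# push more cheap remnant inventory through the pipe.
def blended_cpm(segments):
    """segments: list of (impressions, cpm_dollars) pairs."""
    revenue = sum(imps * cpm / 1000 for imps, cpm in segments)
    impressions = sum(imps for imps, _ in segments)
    return 1000 * revenue / impressions

q3 = [(10_000_000, 0.60), (10_000_000, 0.17)]  # small-site + large-site remnant
q4 = [(10_000_000, 0.60), (20_000_000, 0.17)]  # same prices, more cheap inventory

print(f"Q3 blended CPM: ${blended_cpm(q3):.3f}")  # $0.385
print(f"Q4 blended CPM: ${blended_cpm(q4):.3f}")  # $0.313
```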

One more thing…

Once you’re done reading the Ad Price Report, stick around on the PubMatic site a little longer and download their White Paper entitled “Death to the Ad Network Daisy Chain”. This little document does a nice job of explaining how an impression is passed from one ad network to another, and highlights the surprisingly high proportion of ad calls that are returned ‘unsold’ by networks. The document then goes on to talk about how ad operations folk have to manually set up ‘daisy chains’ of ad networks to try to ensure that the maximum amount of inventory is sold. As the title of the document implies, this is held to be a bad thing.

Because of the nature of the business that PubMatic is in, the recommendation in the document is that publishers use ‘dynamic daisy-chaining’ (which is essentially what PubMatic does, choosing the order of daisy-chaining based on expectations about which network will be likely to monetize an impression most effectively) to solve this problem. At one point the document states (my emphasis):

Due to the volatility of online ad pricing … creating a dynamic “chain” of ad networks, rather than a static one, is the only way for a publisher to ensure that they can get the best price possible for their ad space.

I would respectfully disagree with this statement; another way of achieving this is to use an ad network that is a member of an ad exchange, and which can therefore draw on a larger pool of advertisers than just those with whom it has a direct relationship.

But I don’t disagree with the main sentiment of the PubMatic paper, which is that publishers still struggle with significant inefficiencies in the way they monetize inventory; and I believe we’ll see the kind of multi-network optimization solution that PubMatic offers (also available from Rubicon and AdMeld) become increasingly important as the year wears on.


September 25, 2008

Yahoo! APT (formerly AMP!) emerges blinking into the sunlight

Well, they said it would launch in Q3, and it has – yesterday Yahoo! unveiled its new ad management platform, called APT, at a razzamatazz-filled event in New York introduced by Jon Hamm, star of Mad Men (currently the topic of much debate in our house as to its merits).

APT started life as “Project APEX”, which Jerry Yang started to talk about around a year ago as a successor/complement to Panama (Yahoo’s search ad platform, properly known as Yahoo Search Marketing). Yahoo then pre-announced something called AMP! (Ad Management Platform) in March, saying that it would revolutionize the way that ad media was bought and sold, drastically simplifying the selling process for publishers in particular. And yesterday AMP!’s name had changed again, to APT. APT does not appear to stand for anything, but at least they have dropped the exclamation point.

So what is APT? Well, according to Yahoo, it’s “designed to simplify the process of buying and selling ads online while connecting all the market players -- publishers, advertisers, agencies, networks, partners and developers -- from a unified platform to do business more efficiently and effectively”.

However, in its first incarnation, APT is principally a tool for publishers, aiming to make it easier for them to respond to advertiser/agency RFPs, and allowing them to build ad hoc private networks in order to re-sell other publishers’ inventory. This latter capability is strongly linked to APT’s initial user base: the members of Yahoo’s Newspaper Consortium [PDF], a sort of hybrid advertising and content network. Two members of this network – the San Francisco Chronicle and San Jose Mercury News – are founding customers of APT.


APT for publishers

According to the blurb on the APT site, APT will bring the following benefits for publishers:

  • Simplified workflow (creative management, campaign management)
  • Analytics & yield optimization (inventory management & prediction)
  • Increased inventory liquidity through cross-selling abilities and integration with Right Media Exchange
  • Extensibility via web services & an API
  • Access to Yahoo!’s expertise (creative development, targeting, etc)

The only screenshot available is of the APT dashboard, which looks fairly nice, but doesn’t reveal much about the functionality of the system:

[Screenshot: APT dashboard]

There are also some more tidbits in the video that Yahoo released in April during the original AMP! announcement.

Of the announced functionality, the ‘cross-selling’ capabilities in APT are some of the most interesting, representing a twist on the network model. Rather than the traditional network model, in which publisher members hand a portion of their inventory to a centrally managed hub, APT lets publishers buy and sell inventory directly to and from one another – likely getting a better price for that inventory than if they sold it at remnant prices. If APT does a good job of this, it will make publishers’ lives much easier, and deliver a much-needed fillip to their revenues.

Similarly, APT – by integrating with Yahoo’s Right Media Exchange (RMX) – could attract networks to work with Yahoo/RMX by offering a new pool of inventory that those networks could resell.


What about advertisers and agencies?

Where APT’s value is less clear to me is as a tool for advertisers and agencies. APT will provide a one-stop shop for buying inventory on Yahoo’s properties and those of its publisher partners (i.e. the Newspaper Consortium folks) – and, given Yahoo’s strength in behavioral targeting, should be able to offer innovative inventory packages and highly targeted buying. But do advertisers and agencies need another interface for buying ads? These organizations would prefer to buy their media through the third-party ad server solutions (DoubleClick and Atlas, mostly) that they already use. They already face fragmentation in their buying systems for search, contextual, display and rich media advertising – another tool may add to the pain, not ease it.

This may seem like a biased comment (since Atlas is now part of Microsoft), but in fact Microsoft faces the same kinds of challenge as we evolve our advertising platform. Our adCenter product offers a web UI for buying advertising (mostly search and contextual, with display coming), but larger advertisers prefer to interact with it indirectly through its APIs. And this – rather than a ‘one-stop-shop’ buying UI – is how I think APT will end up interacting with advertisers and agencies. Sensibly, Yahoo seems to have appreciated this, since a big part of its APT story is its extensibility through third-party integration.

In conclusion, if APT can create a more transparent network buying experience, and at the same time enable publishers to gain more control over how their inventory is sold, then it will definitely be a Good Thing for the industry – and for Yahoo, as it looks to position itself against Google and Microsoft. It’s going to be interesting to see how APT develops.

