Analytics Strategist

April 5, 2013

Bid quality score: the missing piece in the programmatic exchange puzzle

Filed under: Ad Exchange, Game Theory, Matching game, misc, Technology — Huayin Wang @ 7:45 pm

On the eve of the Programmatic IO and Ad Tech conferences in SF, I want to share my idea for a new design feature of the Exchange/SSP, a feature with the potential to significantly impact our industry. That feature is the bid auction rule.

The bid auction rule is known to be central to Search Engine Marketing.  Google's success story underscores how important a role it can play in shaping the process and dynamics of the marketplace. There is reason to believe it has similar potential for the RTB Exchange industry.

The auction model currently implemented in Ad Exchanges and SSPs is commonly known as the Vickrey auction, or second-price auction. It goes like this:  upon receiving a set of bids, the exchange decides on a winner based on the highest bid amount, and sets the price to the second-highest bid amount.  In the RTB Bid Process diagram below, this auction rule is indicated by the green arrow #7:

RTB bidding process

(I am simplifying the process a lot for our purpose, removing non-essential details such as Ad Servers.)
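For concreteness, here is a minimal Python sketch of the current rule (the tuple layout and DSP names are made up for illustration; this is not any exchange's actual API):

    # A minimal sketch of the plain second-price (Vickrey) rule:
    # the highest bid wins and pays the second-highest bid amount.
    def second_price_auction(bids):
        """bids: list of (bidder_id, bid_amount) tuples."""
        ranked = sorted(bids, key=lambda b: b[1], reverse=True)
        winner_id = ranked[0][0]
        clearing_price = ranked[1][1] if len(ranked) > 1 else ranked[0][1]
        return winner_id, clearing_price

    # Hypothetical bidders: DSP-B wins and pays DSP-A's bid of 2.50
    print(second_price_auction([("DSP-A", 2.50), ("DSP-B", 3.10), ("DSP-C", 1.75)]))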

The new auction I'd like to propose is a familiar one: it is a modified Vickrey auction with a quality score!  Here, the bid quality score is defined as the quality of an ad to the publisher, aside from the bid price.  It essentially captures everything a publisher may care about in the ad. I can think of a few factors:

  1. Ad transparency and related data availability
  2. Ad quality (adware, design)
  3. Ad content relevancy
  4. Advertiser and product brand reputation
  5. User response

Certainly, bid quality scores are going to be publisher specific.  In fact, they can be made site-section specific or page specific.  For example, a publisher may have reason to treat the Home Page of its site differently than other pages.  They can also vary by user attributes if the publisher likes.

Given that, the Exchange/SSP will no longer be able to carry out the auction all by itself – as the rule no longer depends only on bid amounts.  We need a new processing component, as shown in the diagram below.

(diagram: the new design)

Now, #7 is replaced with this new component, called the Publisher Decider.  Owned by the publisher, the decider works through the following steps:

  1. it takes in multiple bids
  2. calculates the bid quality scores
  3. for each bid, calculates the Total Bid Score (TBS), by multiplying bid amount and quality score
  4. ranks the set of bids by the TBS
  5. makes the bid with highest TBS the winner
  6. sets the bid price based on the formula below, made famous by Google:

P1 = (B2 × Q2) / Q1

Here, P1 is the price set for the winning bid, Q1 is the winning bid's quality score, B2 is the bid amount of the bid with the second-highest TBS, and Q2 is that bid's quality score.
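To make the mechanics concrete, here is a minimal Python sketch of the Publisher Decider steps and the pricing formula above. The quality scores are made-up inputs – how a publisher computes them is exactly the open question this post raises:

    # Sketch of the Publisher Decider: rank bids by Total Bid Score
    # (bid amount x quality score) and price the winner at
    # P1 = B2 * Q2 / Q1 -- the lowest bid that would still have won.
    def publisher_decider(bids):
        """bids: list of (bidder_id, bid_amount, quality_score) tuples."""
        ranked = sorted(bids, key=lambda b: b[1] * b[2], reverse=True)
        winner_id, b1, q1 = ranked[0]
        if len(ranked) > 1:
            _, b2, q2 = ranked[1]
            price = b2 * q2 / q1          # quality-adjusted second price
        else:
            price = b1                    # lone bidder; a floor price is left open here
        return winner_id, round(price, 4)

    # The example from the text: a $1.00 bid with quality 1.0 beats a
    # $5.00 bid whose quality score is more than five times lower.
    print(publisher_decider([("apple", 1.00, 1.0), ("adware", 5.00, 0.15)]))
    # -> ('apple', 0.75): Apple wins and pays 5.00 * 0.15 / 1.0

Note how the winner pays the smallest amount that would still have won given its own quality score – the same logic paid search uses.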

This is not a surprise and it’s not much of a change. So, why is this so important?

Well, with this new auction rule in place, we can anticipate some natural consequences:

  • Named brands will have an advantage on bid price, because they tend to have better quality scores. A premium publisher may be willing to take a $1 CPM from Apple over a $5 CPM from a potential adware ad.  This is achieved by Apple having a quality score five times (or more) higher than that of the lower-quality ad.
  • Advertisers will have an incentive to be more transparent. Named brands will be better off being transparent, to distinguish themselves from others. This will drive the quality scores of non-transparent ads lower, starting a virtuous cycle.
  • DSPs, or bidders, will have no reason not to submit multiple bids, since they cannot know beforehand which ad will win.
  • Premium publishers will have more incentive to put their inventory into the exchange now that they have transparency and a finer level of control.
  • The Ad Tech ecosystem will respond with new players, such as ad-centric data companies serving publisher needs, similar to the contextual companies serving advertisers.

You may see missing links in the process I described here.  That is expected, because a complete picture is not the focus of this post.  I hope you will be convinced that the bid quality score / Publisher Decider idea is interesting, and potentially significant in pushing the Ad Tech space toward more unified technologies and a consistent framework.


November 16, 2011

Ad exchange, matching game and mechanism design

Over the years, I have learned some interesting things in this new ad:tech industry, particularly around the Ad Exchange and RTB ad auction market model.  I want to share some of my thoughts here and hope you find them interesting to read.

Ad Exchange is not like a financial exchange

The “exchange” in the name suggests a financial stock exchange, and interesting observations can be made from this analogy.  However, there are some fundamental differences, such as the lack of liquidity in ad impressions and information asymmetry.  Jerry Neumann has blogged about this topic extensively;  it is still a topic of great interest today, as seen in a recent article by Edward Montes.

In fact, the differences are easy to understand. The harder part is, as Jerry asked, if ad exchanges aren't exchanges, what are they?  Or, I should add, what should they be like?

Publisher’s preference is the missing piece

The analogy with a financial exchange (stocks and futures) is not a good one partly because of its inability to fully model advertiser preference. Not all impressions are of the same value to an advertiser, and not all advertisers place the same value on an impression. The commodity auction model embedded in the ad exchange does better, because it allows advertisers to bid based on any form of evaluation – a chance for advertisers to fully express their preferences over audience, contextual content and publisher brand.

Still, there is a problem with the current auction model: after collecting all the bids from advertisers, it takes the highest bidder to be the winner, as if price were the only thing publishers care about.  In reality, not all bids at the same price are of the same value to a publisher.  Publishers care about brand safety and contextual relevancy as well; in fact, the quality of user experience may mean more to publishers than to advertisers!  In sum, publishers care about the quality of the ads above and beyond the bid price.  Unfortunately, current ad exchanges lack a proper mechanism allowing publishers to articulate their full preferences.  This results in lost market efficiency and lost opportunities to remove transaction frictions.  It is a design flaw.

The display marketplace is still far from perfectly efficient, and this design flaw does not help.  The recent development of Private Marketplaces is a piecemeal attempt to overcome this design issue.  Some recent merger and acquisition moves can also be understood from this angle.

Where can we look for design ideas on how to handle this issue?  Paid search and game theory!

The quality score framework from paid search

In many ways, paid search is just like an ad exchange, with Google playing one of a few “publisher” roles.   In both markets, advertisers compete for ad views through auction bidding.  If we equate audience in display to keywords in search, the bidding processes are much the same:  search advertisers do extensive keyword research and look at past performance alongside other planning parameters (such as time of day) to optimize their bids;  similarly, display advertisers look at audience attributes, site and page content, past performance and planning parameters as they optimize their bids.

The bidding processes in both markets are similar;  the differences lie in the post-bidding ad evaluation.

After all bids are collected, an ad exchange today simply selects the highest bidder.  In paid search, bids are evaluated on price, ad relevancy and many other attributes.  Google has mastered this evaluation process with its Quality Score framework.  Having a Quality Score framework versus not having one is no small thing.  As anyone familiar with the history of paid search knows, the quality score framework played a pivotal role in shaping the search industry when Google introduced it around the turn of the century.  Post-bidding ad evaluation for display may be just as critical a piece of technology, with potentially significant impact on the efficiency and health of the display market.

The need for non-trivial post-bidding ad evaluation calls for an extra decision process (and algorithm) to be added, either at the ad exchange or at the publisher's site, or both.  In this new model, with this extra component, the ad exchange would send the full list of bids to the publisher instead of picking a winner based on price alone.  It would then be up to the publisher to decide which ad is shown.  With millions of publishers, large and small, this seemingly small change could trigger much more inside an industry whose technology is already orders of magnitude more complex than paid search.

The matching game analogy

With full preferences taken into account on both the advertiser and publisher sides, the ad exchange looks less like a commodity marketplace and more like a matching game.  It will be interesting to look at market efficiency from the perspective of mechanism design in game theory – that is, the design of the operational market process.

Matching advertisers with publishers under a continuous series of Vickrey auctions is our setup for the discussion – the best model I can think of that mimics the matching game setup.  It shouldn't be too surprising that the matching game is an interesting analogy for the ad exchange.  As a game-theoretic abstraction of many practical cases, the matching game includes college admissions and the marriage market.  Let's take the marriage market as an example.

In a simplistic description, a marriage market involves a set of men and a set of women.  Each man has a preference ordering over the set of women (a ranking of the women);  similarly, each woman has a preference ordering over the set of men.  A matching is an assignment of men to women such that each man is assigned to at most one woman and vice versa.  A matching is unstable if there exists a man-woman pair, not currently matched to each other, who both prefer each other to their current partners – such a pair is called a blocking pair.  When no blocking pair exists, the matching is called stable.

Clearly, stability is a good property: a stable matching is not vulnerable to any voluntary pairwise rematch (translating into ad exchange language, a stable matching is one in which no advertiser-publisher pair not currently matched to each other has an incentive to switch and form a new match).  A stable matching is male-optimal if every man does at least as well in it as in any other stable matching; female-optimal is defined similarly.  A stable matching that is both male-optimal and female-optimal would look like a perfectly efficient market; we would hope to find a mechanism that leads to such a unique stable matching – something we could then mimic in a future ad exchange model.

Unfortunately, there is no unique stable matching for the matching game in general (in this case, having too many good things may not be a good thing), and there is no single matching that is optimal from both the men's and the women's perspectives.  We know that the male-proposing Deferred Acceptance Algorithm – somewhat like the current auction process in the ad exchange, with advertisers playing the male role – produces the male-optimal stable matching.  If we switch the roles of men and women, the analogous algorithm produces the female-optimal matching.  The two algorithms/mechanisms lead to two distinctly different results.  You can read more about algorithmic/computational game theory, specifically matching games and mechanism design, if interested.
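For readers who would rather see the mechanism than read about it, here is a compact Python sketch of the male-proposing deferred acceptance algorithm, with toy preference lists that are purely illustrative:

    # Male-proposing deferred acceptance (Gale-Shapley): produces the
    # male-optimal stable matching for the given preference lists.
    def deferred_acceptance(men_prefs, women_prefs):
        """men_prefs / women_prefs: dict mapping each person to a ranked list of the other side."""
        rank = {w: {m: i for i, m in enumerate(p)} for w, p in women_prefs.items()}
        free = list(men_prefs)                 # men who still need a partner
        next_choice = {m: 0 for m in men_prefs}
        engaged_to = {}                        # woman -> man tentatively holding her
        while free:
            m = free.pop()
            w = men_prefs[m][next_choice[m]]   # m's best woman not yet proposed to
            next_choice[m] += 1
            current = engaged_to.get(w)
            if current is None:
                engaged_to[w] = m              # w tentatively accepts
            elif rank[w][m] < rank[w][current]:
                engaged_to[w] = m              # w trades up; her old partner is freed
                free.append(current)
            else:
                free.append(m)                 # w rejects; m proposes further down his list
        return {m: w for w, m in engaged_to.items()}

    men = {"a": ["x", "y"], "b": ["x", "y"]}
    women = {"x": ["b", "a"], "y": ["a", "b"]}
    print(deferred_acceptance(men, women))     # {'b': 'x', 'a': 'y'} -- stable, and male-optimal

Switching which side proposes gives the other one-sided optimum, which is exactly the asymmetry discussed below.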

So, why are we looking into this, and what have we learned from it?  Below is my translation – or transliteration, to be more appropriate – from game theory speak to the ad:tech domain.

We all like to believe that there is an efficient market design for everything, including the exchange marketplace for ads. That belief is justified for commodity marketplaces by general equilibrium theory.  Unfortunately, there is no equivalent of a “general equilibrium”, or universally optimal stable matching, for the marriage market, which implies there is no universally optimal advertiser-publisher matching in the ad exchange.  If that is the case, the search for an optimal market mechanism for ad exchanges is mission impossible.

However, one-sided optima do exist: advertiser-optimal and/or publisher-optimal matchings.  It is also easy to find the corresponding mechanisms that lead to those one-sided optimal stable matchings.  The auction market as currently implemented in ad exchanges, with the addition of a post-bidding evaluation process, is similar to the mechanism leading to the advertiser-optimal matching.

The future seems open for all kinds of good mechanism design.  Still, I believe there is a “naturalness” to the current style of auction market.  It is quite natural for the auction process to start from the publisher side, by putting the ad impression up for auction, because it all starts with an audience member requesting a webpage – a request sent to a publisher. It is not easy to imagine how advertisers could set up a “reverse auction” starting from the demand side within an RTB context. We can never rule out the possibility, though, and it might work for trading ad futures.

Conclusion:

I am reluctant to draw any conclusions – these are all food for thought and discussion.  I’d love to hear your comments!

January 2, 2010

a decade in data analytics …

Filed under: misc, Web Analytics — Huayin Wang @ 10:53 pm

I was reading the article The Decade of Data: Seven Trends to Watch in 2010 this morning and found it a fitting retrospective and perspective piece.  I have been working in data analytics for the past 15 years, so naturally I went searching for similar articles with more of a focus on analytics, but came back empty handed 😦

I wish I could write a similar post, but feel the task is too big to take.  A systematic review with vision into the future would require much more dedication and effort than I could afford at this point.  However, I do have a couple of thoughts and went ahead to gather some evidence to share.  I’d love to hear your thoughts; please comment and provide your perspectives.

The above chart shows search volume indices for several data-analytics-related keywords over the last six years.  There are many interesting patterns.  The one that caught my eye first is the birth of Google Analytics: Nov 14, 2005.  Not only did it cause a huge spike in the search trend for “analytics” – the first day “analytics” surpassed “regression” – it became the driving force behind the growth of web analytics and the analytics discipline in general.  Today, more than half of all “analytics” searches are associated with “Google Analytics”.  Anyone who writes the history of data analytics will have to study the impact of GA seriously.

I wish I could chart the impact of SAS and SPSS on data analytics in a similar fashion, but unfortunately it is hard to isolate searches for the SAS statistics software from other “SAS” searches.  When limited to the “software” category, SAS seems to have about twice the search volume of SPSS, so I used SPSS instead.

Many years ago, before Google Analytics and the “web analyst” generation, statistical analysis and modeling dominated the business applications of data analytics.  Statisticians and their predictive modeling practice sat in their ivory tower.  Since the early years of the 21st century, data mining and machine learning became strong competing disciplines to statistics – I remember the many heated debates between statisticians and computer scientists about statistical modeling vs data mining.  New jargon came about: decision trees, neural networks, association rules and sequence mining.  To whoever had the newest, smartest, most mathematically sophisticated, efficient and powerful algorithm went the spoils.

Google Analytics changed everything.  Along with data democratization came the democratization of data intelligence. Who would've guessed that today, for a large crowd of (web) analysts, analytics would become near-synonymous with Google Analytics, and building dashboards and tracking and reporting the right metrics the holy grail of analytics?  Those statisticians may still inhabit the ivory tower of data analytics, but the world is already owned by others – the people – as democracy would dictate.

No question about it, data analytics is trending up and flourishing as never before.

Comments?  Please share your thoughts here.

April 28, 2009

Mining twitter data

Who was the first to report the Mexico City earthquake?  I remember watching twitter second by second, and @cjserrato was the first to report it (tweet id 1630381373):

 
(screenshot of @cjserrato's tweet)

Mining twitter data is a huge challenge.  So far I have not seen much interesting data/text mining or analytics around twitter data.  I have been playing with the data lately, and here's a thematic/topic graph I made – a visualization of all tweets from the last eight hours related to “mexico city”:

(topic graph of “mexico city” tweets)

You can tell that “Swine Flu” is still at the center of all topics, whereas the earthquake is clustered alone off to the side.
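For anyone who wants to try something similar, here is one rough way (not the exact method behind the graph above) to turn a batch of already-collected tweets into the edges of a term co-occurrence graph; the sample tweets and stopword list are made up for illustration:

    # One rough way to turn a batch of tweets into term co-occurrence
    # edges; frequent pairs become the edges of a topic graph.
    import re
    from collections import Counter
    from itertools import combinations

    STOPWORDS = {"the", "a", "an", "of", "to", "in", "and", "is", "on", "for", "rt"}

    def cooccurrence_edges(tweets, min_count=2):
        edges = Counter()
        for text in tweets:
            terms = {t for t in re.findall(r"[a-z']+", text.lower())
                     if t not in STOPWORDS and len(t) > 2}
            for pair in combinations(sorted(terms), 2):
                edges[pair] += 1
        return {pair: n for pair, n in edges.items() if n >= min_count}

    sample = ["Swine flu fears grow in Mexico City",
              "Earthquake felt in Mexico City moments ago",
              "Mexico City swine flu update"]
    print(cooccurrence_edges(sample))
    # pairs like ('city', 'mexico'): 3 and ('flu', 'swine'): 2 survive the cut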

Have you seen any interesting twitter analytics?  (By the way, I do not mean twitter metrics, counters and the like.)

Jeff Clark of NeoFormix has a great set of applications, the best I have found so far.  FlowingData is another one.

March 17, 2009

the wrong logic in attribution of interaction effect

Attribution should not be such a difficult problem – as long as reality conforms to our linear additive model of it. Interaction, sequential dependency and nonlinearity are the main troublemakers.

In this discussion, I am going to focus on the attribution problem in the presence of an interaction effect.

Here’s the story setup: there are two ad channels, paid search (PS) and display (D).  

Scenario 1)
      When we run both (PS) & (D), we get $40 in revenue.  How should we attribute this $40 to PS and D?

The simple answer is: we do not know – for one thing, we do not have sufficient data.
What about making the attribution in proportion to each channel's spend? You can certainly do that, but it is no more justifiable than any other rule.

Scenario 2)
    when we run (PS) alone we get $20 in revenue;  when we run (PS) & (D) together, we get $40.
    Which channel gets what?

The simple answer is again: we do not know – we do not have enough data.
A common line of reasoning is: (PS) gets $20 and (D) gets $20 (= $40 – $20).  The logic seems reasonable, but it is still flawed because it ignores the interaction between the two channels.  Only under the assumption that there is no interaction does this conclusion follow.

Scenario 3)
    when we run (PS) alone we get $20 in revenue; running (D) alone gets $15 in revenue; running both (PS) & (D) the revenue is $40.
    Which channel gets what?

The answer:  we still do not know. However, we can't blame the lack of data anymore.  This scenario forces us to face the intrinsic limitation of the linear additive attribution framework itself.

Number-wise, the interaction effect is a positive $5, $40 – ($20 + $15), and we do not know what portion should be attributed to which channel. The $5 is up for grabs for whoever fights hardest for it – and, usually to nobody's surprise, it goes to the powers that be.

Does this remind anyone of how CEO’s salary is often justified?

What happens when the interaction effect is negative, such as in the following scenario?

Scenario 4)
    when we run (PS) alone we get $20 in revenue; running (D) alone gets $15 in revenue; running both (PS) & (D) the revenue is $30.
    Which channel gets what?
How should the $5 loss be distributed?  We do not know.
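In code form, the interaction effect is just the joint revenue minus the sum of the solo revenues – and nothing in these numbers tells us how to split it:

    # The interaction effect is the joint revenue minus the sum of the
    # solo revenues; the data says nothing about how to split it.
    def interaction_effect(rev_ps_alone, rev_d_alone, rev_both):
        return rev_both - (rev_ps_alone + rev_d_alone)

    print(interaction_effect(20, 15, 40))   # Scenario 3: +5, a surplus nobody can claim by right
    print(interaction_effect(20, 15, 30))   # Scenario 4: -5, a loss nobody wants to absorb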

What do you think? Do we have any way to justify a split other than invoking the “fairness” principle?

If the question is not answerable, any logic we use will be at best questionable, or plain wrong.

However, all is not lost. Perhaps we should ask ourselves: why do we ask for attribution in the first place? Is this really what we need, or just what we want? This was the subject of one of my recent posts: what you wanted may not be what you needed.

March 16, 2009

the new challenges to Media Mix Modeling

Among the many themes discussed in the 2009 Digital Outlook report by Razorfish, there is a strand linking media and content fragmentation, complex and non-linear consumer experience, and interaction among multiple media and multiple campaigns – all of which lead to one of the biggest analytics challenges: the failure of traditional Media Mix Modeling (MMM) and the search for better Attribution Analytics.

The very first article of the research and measurement section is on MMM. It has some of the clearest discussion of why MMM fails to handle today's marketing challenges, despite its decades of success.  But I believe it can be made clearer. One reason is MMM's failure to handle media and campaign interaction, which I think is not a modeling failure but rather a failure for the purpose of attribution (I have discussed this extensively in my post: Attribution, what you want may not be what you need).  The interaction between traditional media and digital media, however, is of a different nature: it has to do with mixing push and pull media.  Push media influence pull media in a way that renders many of the modeling assumptions problematic.

Here’s its summary paragraph:

“Marketing mix models have served us well for the last several decades. However, the media landscape has changed. The models will have to change and adapt. Until this happens, models that incorporate digital media will need an extra layer of scrutiny. But simultaneously, the advertisers and media companies need to push forward and help bring the time-honored practice of media mix modeling into the digital era.”

The report limits its discussion to MMM, the macro attribution problem.  It does not give a fair discussion of the general attribution problem – there is no discussion of recent developments in attribution analytics (known by many names, such as Engagement Mapping, Conversion Attribution, Multicampaign Attribution, etc.).

For those interested in the attribution analytics challenges, my prior post on the three generations of attribution analytics provides an in-depth overview of the field.

Other related posts: micro and macro attribution and the relationship between attribution and  optimization.

SIM: the brightest spot in the 2009 Digital Outlook report

Filed under: Advertising, Datarology — Huayin Wang @ 3:57 am

Social Media is superhot these days, so naturally I expected it to be a significant topic in Razorfish's Digital Outlook report; I was not disappointed 🙂

Social Influence Marketing (SIM) is one of the eight trends to watch; social object theory sits right in the middle of the report, followed by the secrets of powering SIM campaigns; the Pulse (tagged as one of the three things every CEO must know) and mobile are both connected to SIM in fundamental ways. Most importantly, in the research and measurement section (dearest to my heart), two out of three articles are about social influence measurement and research; both are excellent.

I am particularly fond of Marc Sanford's Social Influence Measurement piece.  Marc approaches the topic methodically, providing good conceptual lead-ins as well as a rigorous measurement framework, and I enjoyed its evenly paced, matter-of-fact writing style.  Starting from a discussion of the two aspects of SIM – sharable content and people – and the Generational Tag technology, Marc makes clear the importance of separating the value created where campaigns touch consumers directly from the incremental value created when content passes through viral media, through the power of endorsement.  The methodology and technology parts of SIM come together nicely in the article.

The Social Influence Research piece by Andrew Harrison and Marcelo Marer is equally interesting. There is an excellent, detailed discussion of the challenges facing traditional survey and focus group research, and of how it can evolve into a new form of social influence research.

March 14, 2009

Eight trends to watch: 2009 Digital Outlook from Razorfish

1. Advertisers will turn to “measurability” and “differentiation” in the recession

2. Search will not be immune to the impact of the economy

3. Social Influence Marketing™ will go mainstream

4. Online ad networks will contract; open ad exchanges will expand

     With Google's new interest-based targeting, things look to change even more rapidly.

5. This year, mobile will get smarter

6. Research and measurement will enter the digital age

     This is an issue dear to my heart; I have written about the importance of Attribution Analytics and Micro and Macro Attribution many times in recent months. Directly from the report:

    “Due to increased complexity in marketing, established research and measurement conventions are more challenged than ever. For this reason, 2009 will be a year for research reinvention. Current media mix models are falling down; they are based on older research models that assume media channels are by and large independent of one another. As media consumption changes among consumers, and marketers include more digital and disparate channels in the mix, it is more important than ever to develop new media mix models that recognize the intricacies of channel interaction.”

7. “Portable” and “beyond-the-browser” opportunities will create new touchpoints for brands and content owners

8. Going digital will help TV modernize

Read the Razorfish report for details.

March 10, 2009

fairness is not the principle for optimization

In my other post, what you want may not be what you need, I wrote about the principle of optimization. Some follow up questions I got from people made me realize that I had not done a good job in explaining the point. I’d like to try again.

Correct attribution gives a business a way to implement accountability. In marketing, correct attribution of sales and/or conversions presumably helps us optimize marketing spend. But how?  Here's an example of what many people have in mind:

    Suppose you have the following sale attributions to your four marketing channels:
             40% direct mail
             30% TV
             20% Paid Search
             10% Online Display
    then, you should allocate future budget to the four channels in proportion to the percentage they got.

This is intuitive, and perhaps what the fairness principle would do:  award according to contribution.  However, this is not the principle of optimization. Why?

Optimization is about maximization under constraints.  In the case of budget optimization, you ask how to spend the last (or marginal) dollar most efficiently.  Your last dollar should be allocated to the channel with the highest marginal ROI.  In fact, this principle dictates that as long as there is a difference in marginal ROI across channels, you can always improve by moving dollars around.  Thus, at a truly optimal allocation, marginal ROI is equalized across channels.
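A toy sketch may make this concrete. Assume, purely for illustration, that each channel's revenue follows a diminishing-returns curve (square-root here; real curves would have to be estimated), and allocate the budget one marginal dollar at a time:

    # Toy illustration of "give the marginal dollar to the channel with the
    # highest marginal return". The square-root response curves are made up
    # purely to create diminishing returns; real curves must be estimated.
    import math

    response = {                        # revenue as a function of spend (illustrative)
        "direct_mail":    lambda s: 120 * math.sqrt(s),
        "tv":             lambda s:  90 * math.sqrt(s),
        "paid_search":    lambda s:  60 * math.sqrt(s),
        "online_display": lambda s:  30 * math.sqrt(s),
    }

    def greedy_allocation(budget, step=1.0):
        spend = {ch: 0.0 for ch in response}
        for _ in range(int(budget / step)):
            # marginal revenue of the next `step` dollars in each channel
            marginal = {ch: response[ch](spend[ch] + step) - response[ch](spend[ch])
                        for ch in response}
            best = max(marginal, key=marginal.get)
            spend[best] += step
        return spend

    print(greedy_allocation(1000.0))    # spend piles up where marginal ROI is highest

With these made-up curves, the optimal spend is far from proportional to any attributed share – which is the point.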

The 40% sale/conversion attribution to Direct Mail is used to calculate an average ROI.  In most DM programs, the early dollars go to the better names on the list, which tends to produce higher ROI; on the other hand, fixed costs, such as the model development effort, lower the ROI of the early part of the budget.  ROI and marginal ROI are functions of budget, and marginal ROI is in general not equal to average ROI.  Every channel has its own version of this story, with the same conclusion.  This is why attribution percentages do not automatically tell us how to optimize.

You may ask: assuming every channel's marginal ROI is proportional to its average ROI, are we then justified in using attribution percentages for budget allocation?  The answer is still no.  If that assumption held, you should give all your dollars to the single channel with the highest ROI, not to all channels in proportion to their percentages.

We used an example of macro attribution to illustrate the point; the same thinking applies to micro attribution as well.  Contrary to the common sense that regards attribution as the foundation for accountability and for operational and budget optimization, attribution percentages should not be used directly in optimization.  The proportional rule, or the principle of fairness, is not the principle of optimization.

March 5, 2009

The three generations of (micro) attribution analytics

For marketing and advertising, the attribution problem normally starts at the macro level: we have total sales/conversions and marketing spends.  Marketing Mix Modeling (MMM) is the commonly used analytics tool, providing a solution using time series data on these macro metrics.

The MMM solution has many limitations that are intrinsically linked to the nature of the (macro-level) data being used.  Micro attribution analytics, when micro-level touch point and conversion tracking is available, provides a better attribution solution.  Sadly, MMM is more often practiced even when the data for micro attribution is available; this is primarily due to the lack of development and understanding of micro attribution analytics, particularly the model-based approach.

There have been three types – or better yet, three generations – of micro attribution analytics over the years: the tracking-based solution, the order-based solution and the model-based solution.

The tracking-based solution has been popular in the multi-channel marketing world.  The main challenge is to figure out through which channel a sale or conversion event happens. The book Multichannel Marketing – Metrics and Methods for On and Offline Success by Akin Arikan is an excellent source of information on the most often used methodologies – covering customized URLs, unique 1-800 numbers and many other cross-channel tracking techniques.  Tracking is normally implemented at the channel level, not at the individual event level.  Without a tracking solution, sales numbers by channel are inferred through MMM or other analytics; with proper tracking, the numbers are directly observed.

The tracking solution is essentially a single-attribution approach to a multi-touch attribution problem. It does not deal with the customer-level multi-touch experience, and this single-touch approach leads naturally to the last-touch-point rule when viewed from a multi-touch attribution perspective.  Another drawback is that it is simply a data-based solution without much analytic sophistication behind it – it provides relationship numbers without a strong argument for causal interpretation.

The order-based solution explicitly recognizes the multi-touch nature of an individual consumer's experience with brands and products. With micro-level touch point and conversion data available, order-based attribution generally seeks attribution rules in the form of a weighting scheme based on the order of events. For example, when all weights are zero except the last touch point, it reduces to last-touch attribution.  Many such rules have been discussed, with constant debate about the virtues and drawbacks of each.  There are also metrics derived from these low-level order-based rules, such as the appropriate attribution ratio (Eric Peterson).

Despite the many advantages of the order-based multi-touch attribution approach, there are still methodological limitations. One, as many already know, is that no weighting scheme is generally applicable or appropriate for all businesses under all circumstances. There is no point in arguing which rule is best without the specifics of the business and data context.  The proper rule will differ depending on the context; yet there is no provision, or general methodology, for how the rule should be developed.

Another limitation of the order-based weighting scheme is that, for any given rule, the weight of an event is determined solely by the order of the event and not by its type.  For example, a rule may give the first click 20% of the attribution – when it may be more appropriate to give the first click 40% if it is a search and 10% if it is a banner click-through.
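As an illustration (the weights, path and event types below are made up, not a recommendation), an order-based rule and its type-aware variant might look like this:

    # Sketch of order-based attribution. `path` is one consumer's ordered
    # touch points ending in a conversion worth `value` dollars.
    def attribute(path, value, weights):
        """weights: attribution shares by position (padded/truncated to len(path))."""
        w = (weights + [0.0] * len(path))[:len(path)]
        total = sum(w) or 1.0
        return {f"{i + 1}: {touch}": round(value * share / total, 2)
                for i, (touch, share) in enumerate(zip(path, w))}

    path = ["search click", "banner click", "email click"]

    # Last-touch rule: all weight on the final event.
    print(attribute(path, 100, [0.0, 0.0, 1.0]))

    # A type-aware variant of the first-touch weight, using the numbers above:
    # 40% if the first event is a search, 10% if it is a banner click-through.
    first = 0.4 if "search" in path[0] else 0.1
    print(attribute(path, 100, [first, (1 - first) / 2, (1 - first) / 2]))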

Intrinsic to its intuition-based rule development process is the lack of a rigorous methodology to support any causal interpretation, which is central to correct attribution and operational optimization.

Here comes the third generation of attribution analytics: model-based attribution.  It promises a sound modeling process for rule development, and the analytical rigor for finding relationships that can bear a causal interpretation.

More details to come.  Please come back to read the next post: a deep dive example of model-based attribution.

Related post: Micro and Macro Attribution

