Analytics Strategist

May 24, 2012

The Principles of Attribution Model

Filed under: attribution analytics — Huayin Wang @ 7:36 pm

(Disclaimer: some of the questions and answers below are totally made up; any resemblance to anything anyone actually said is purely coincidental.)

How do we know an attribution model, such as Last Click Attribution, is wrong?

  • it is incorrect – surprise, surprise; a lot of people just make this claim and are done with it
  • it does not accurately capture the real influence a campaign has on purchase – but how do you know that?
  • it only credits the closer – isn’t this just a restatement of what it is?
  • it is unfair to the upper funnel and only rewards the lower funnel – are you suggesting it should reward all funnel stages, and why?
  • it leads to budget mis-allocation, so your campaigns are not optimized – how do you know?
  • it is so obvious, I just know it – what?

How do we know an attribution model, such as an equal attribution model, is right?

  • it is better than LCA – based on intuition?
  • it gives different credits than LCA, so you can see how much mis-allocation LCA causes in your campaign – being different from LCA does not automatically make it right (see the sketch after this list)
  • we tested it and it generated better success metrics for the campaign – sounds good, but how?
  • it is fair – what does that mean?
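
For concreteness, here is a minimal sketch of how last-click and equal attribution split credit over the same touch path. The path and channel names are made-up examples, not data from any real campaign:

    # Minimal sketch: how last-click and equal attribution split credit
    # over one (hypothetical) converting path.

    def last_click_credit(path):
        """All credit goes to the final touch before conversion."""
        return {touch: (1.0 if i == len(path) - 1 else 0.0)
                for i, touch in enumerate(path)}

    def equal_credit(path):
        """Every touch on the path gets the same share of the conversion."""
        share = 1.0 / len(path)
        return {touch: share for touch in path}

    path = ["display", "email", "paid_search"]       # hypothetical converting path
    print(last_click_credit(path))   # {'display': 0.0, 'email': 0.0, 'paid_search': 1.0}
    print(equal_credit(path))        # each channel gets 1/3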

How do we find the right attribution model?

  • try different attribution models and test the outcome – an attribution model does not generate campaign outcomes directly
  • play with different models and see which one fits your situation better – how do I judge the fit?
  • use statistical modeling methodology to measure influence objectively – what models? conversion models?
  • use predictive models for conversion – why predictive models? which models? how do you calculate influence and credit from them?
  • run test-and-control experiments – how many test and control groups, and what formula do you use to calculate credit? (see the lift sketch after this list)
  • you decide: we let you choose and try whatever attribution weights you want – but I want to know which one is right
  • the predictive models help you with optimization; once you have that, you will not care about attribution – but I do care …
  • shh … it is proprietary: I won’t tell you or I will kill you! – ?
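
On the test-and-control item above: the usual starting point is incremental lift, the difference in conversion rate between the exposed and held-out groups. A minimal sketch with invented numbers (the point is the formula, not the figures):

    # Minimal sketch of per-channel incrementality from a holdout test.
    # The conversion counts are invented; only the lift formula matters here.

    def incremental_lift(test_conversions, test_size,
                         control_conversions, control_size):
        """Incremental conversion rate attributable to exposure."""
        return test_conversions / test_size - control_conversions / control_size

    # hypothetical holdout results for one channel
    lift = incremental_lift(test_conversions=500, test_size=10000,
                            control_conversions=300, control_size=10000)
    print(f"incremental conversion rate: {lift:.2%}")   # 2.00%

Turning per-channel lifts into per-conversion credit still requires a rule (for example, normalizing the lifts), which is exactly the part the questions above keep probing.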

The Principle of Influence

Three principles are often implicitly used: the “influence principle”, the “fairness principle” and the “optimization principle”.

The influence principle works like this: assuming we can measure each campaign’s influence on a conversion, the correct attribution model gives each campaign credit proportional to its influence.  The second principle is often worded in terms of “fairness”, but it is very much the same as the first: if multiple campaigns contribute to a conversion, giving 100% of the credit to only one of them is “unfair” to the others.  The third principle, the optimization principle, is in my understanding more about the application (or benefit) of attribution than about the principle of attribution itself.
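
Taken literally, the influence principle reduces attribution to a normalization step once influence is measured. A minimal sketch, where the influence scores are placeholders for whatever measurement you trust:

    # Minimal sketch of the influence principle: given measured influence
    # scores (however they were obtained), credit is simply each campaign's
    # share of the total influence.

    def credit_from_influence(influence):
        total = sum(influence.values())
        return {campaign: score / total for campaign, score in influence.items()}

    # hypothetical influence scores for one conversion
    influence = {"display": 0.5, "email": 1.5, "paid_search": 2.0}
    print(credit_from_influence(influence))
    # {'display': 0.125, 'email': 0.375, 'paid_search': 0.5}

The hard part is, of course, the assumption hiding in the first argument: that influence can be measured at all.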

The principle of influence is the anchor of the three; the fairness and optimization principles are either a softer version or a derivative of it.

Now that we have our principle, are we any closer to figuring out the right approach to attribution modeling?  We need to look more closely at the assumption behind this principle.  Can we objectively measure (quantify) influence?  Are there multiple solutions, or just one right way to do it?

If the influence principle is the only justification for attribution models, then quantitative measurement methodology such as probabilistic modeling (sometimes called an “algorithmic solution”, which I think is a misnomer) will be the central technology to use.  It leaves no room for arguments made on the ground of intuition alone.  Those who offer only intuition and experience, plus tools that let clients play with whatever attribution weights they like, are not attribution solution providers, but merely vendors of flexible reporting.
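
What might such a probabilistic approach look like? A minimal sketch on synthetic data: fit a conversion model on channel exposures, then derive per-conversion credit from the fitted model. The channel names, the data and the credit rule (each exposed channel’s share of the positive contributions to the modeled log-odds) are all illustrative assumptions of mine, not any particular vendor’s method:

    # Minimal sketch of a probabilistic approach to attribution.
    # The data is synthetic and the credit rule is one illustrative choice.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    channels = ["display", "email", "paid_search"]

    # synthetic exposures (0/1 per channel) and conversions
    X = rng.integers(0, 2, size=(5000, 3))
    logit = -2.0 + X @ np.array([0.3, 0.6, 1.2])      # made-up true effects
    y = rng.random(5000) < 1.0 / (1.0 + np.exp(-logit))

    model = LogisticRegression().fit(X, y)

    def credit_for_path(exposed):
        """Split credit among exposed channels by their share of the
        positive contributions to the modeled log-odds."""
        contrib = {c: model.coef_[0][i] for i, c in enumerate(channels) if exposed[i]}
        positive = {c: max(v, 0.0) for c, v in contrib.items()}
        total = sum(positive.values()) or 1.0
        return {c: v / total for c, v in positive.items()}

    print(credit_for_path([1, 1, 1]))   # paid_search should get the largest share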

Those of the intuition-and-experience school like to frame attribution models around the order and position of touch points: first/last/even and introducer/assist/closer. (How many vendors are doing this today?)  They have trouble providing a quantitative, probabilistic solution to the attribution problem.  The little-known fact is that this framing is analytically flawed: the labels “last touch” and “closer” are only known post-conversion, and therefore cannot be used inside a probabilistic modeling framework.  In predictive modeling and data mining lingo, this is known as the “leakage problem”.  (Search on Google, or read Xuhui’s article that mentions this.)
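
To make the leakage concrete, here is a minimal sketch on synthetic data of my own. A “has a closer” feature can only be assigned once you know the path converted, so it is the label in disguise, and a model trained with it looks spuriously perfect:

    # Minimal sketch of the leakage problem with synthetic data: a feature
    # derived from the "closer" label is only defined post-conversion, so
    # including it smuggles the outcome into the model.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(1)
    n = 5000
    exposure = rng.integers(0, 2, size=(n, 3))     # genuine pre-outcome features
    converted = rng.random(n) < 0.1                # outcome (no real signal here)

    # leaky feature: "this path had a closing touch" is, by construction,
    # 1 exactly when the path converted
    has_closer = converted.astype(int).reshape(-1, 1)

    X_clean = exposure
    X_leaky = np.hstack([exposure, has_closer])

    print(cross_val_score(LogisticRegression(), X_clean, converted).mean())  # ~0.90 baseline accuracy
    print(cross_val_score(LogisticRegression(), X_leaky, converted).mean())  # ~1.00, too good to be true

Anything that can only be computed after the conversion outcome is known has no business on the right-hand side of a conversion model.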

Unfortunately, we have a problem with the data scientist camp as well, though of a different nature: a lack of transparency about metrics, models and process details.  Some vendors are unwilling to open up their “secret sauce”.  Perhaps, but is that all?  I will try to demystify and discuss the “secret sauce” of attribution modeling.



1 Comment

  1. Hi Huayin, I have gone through your post; it’s really nice. I would like some help from you. I am building a multi-touch attribution model using a Bayesian network. I have conversion (0,1) and 3 channels A, B, C, which are continuous. After getting the conditional probabilities of each node, how can I convert them into fractional weights for each node? Please suggest a process for doing this. Thanks and Regards, Parthasarathi Chakraborty.

    Comment by Parthasarathi Chakraborty — August 25, 2015 @ 11:36 am

