The secondary pulse is around people debating which of these two methodologies is “better” and hence which one they should be using. There are lots of conversations outlining the benefits of one methodology or the other. There are even more technically nuanced geeky conversations, with one party bashing the other.
The only assumption is that you don’t have a website so amazingly unique that no other website on the planet has a web serving platform like yours.
Here are four important reasons for picking a side (picking sides has not hurt Fox News, whose slogan is “We Report. You Decide,” and I am hoping it won’t come back to bite me either):
Separating Data Serving & Data Capture (gaining efficiency and speed):
With web logs, data serving (web pages with data going out from your web servers upon user requests) is tied completely to data capture (as the web pages go out, the server logs information about that in web log files). Every time you want a new piece of data you are tied to your IT organization and their ability and structure to respond to you. In most companies this is not a rapid-response process.
The beauty of this is that the company IT department and website developers can do what they are supposed to do, serve pages, and the “Analytics department” can do what it is supposed to do, capture data. It also means that both parties gain flexibility in their own jobs; speaking selfishly, this means the Analytics gals/guys can independently enhance code (which does not always have to be updated in tags on the page) to collect more data faster.
The reliance on IT will not go down to 0%; it will end up around 25%. But it is not 100%, and that in and of itself opens up so many options when it comes to data capture and processing.
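To make the web-log side of that coupling concrete, here is a minimal sketch of pulling business-relevant fields out of a server log line. The log line, field names, and regular expression are illustrative assumptions (modeled on the common Apache “combined” log format), not any particular vendor’s implementation:

```python
import re

# One line of a hypothetical Apache "combined" format web log.
# Every field here was written by the web server as a side effect
# of serving the page -- capture is welded to serving.
LOG_LINE = (
    '203.0.113.9 - - [10/Oct/2007:13:55:36 -0700] '
    '"GET /products/widget.html HTTP/1.1" 200 2326 '
    '"http://www.example.com/index.html" "Mozilla/5.0"'
)

# Regex for the combined log format (assumed layout, for illustration).
COMBINED = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<ts>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<url>\S+) [^"]*" '
    r'(?P<status>\d{3}) (?P<bytes>\S+) '
    r'"(?P<referrer>[^"]*)" "(?P<agent>[^"]*)"'
)

def parse(line):
    """Return the log fields as a dict, or None if the line is malformed."""
    m = COMBINED.match(line)
    return m.groupdict() if m else None

hit = parse(LOG_LINE)
print(hit["url"], hit["status"], hit["referrer"])
```

Note that the business questions (which page, which referrer) have to be teased out of a record whose layout was designed for server administration; any new piece of data means changing what the server writes.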
Type and Size of Data:
Web logs were built for, and exist to, collect server activity, not business data. Over time we have enhanced them to collect more and more data and store it with some semblance of sanity to meet the needs of business decision makers. They still collect all the technical data as well as the business data (often from multiple web servers that support a single website, each of which has a log file that then needs to be “stitched back” together to give the complete view of each user).
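That “stitching back” step can be sketched as a merge of per-server hits into one chronological stream per visitor. The data and the visitor heuristic (IP plus user agent, a crude but common log-file approximation) are assumptions for illustration:

```python
from datetime import datetime

# Hypothetical parsed hits from three load-balanced web servers,
# each serving part of the same single website.
server_a = [{"ip": "203.0.113.9", "agent": "Mozilla/5.0",
             "ts": "2007-10-10 13:55:36", "url": "/index.html"}]
server_b = [{"ip": "203.0.113.9", "agent": "Mozilla/5.0",
             "ts": "2007-10-10 13:55:41", "url": "/products.html"}]
server_c = [{"ip": "198.51.100.7", "agent": "Opera/9.2",
             "ts": "2007-10-10 13:56:02", "url": "/index.html"}]

def stitch(*log_files):
    """Merge per-server logs into one per-visitor clickstream."""
    sessions = {}
    for log in log_files:
        for hit in log:
            # Approximate a "visitor" by IP + user agent (assumed heuristic).
            visitor = (hit["ip"], hit["agent"])
            sessions.setdefault(visitor, []).append(hit)
    # Order each visitor's hits chronologically across servers.
    for hits in sessions.values():
        hits.sort(key=lambda h: datetime.strptime(h["ts"], "%Y-%m-%d %H:%M:%S"))
    return sessions

streams = stitch(server_a, server_b, server_c)
for visitor, hits in streams.items():
    print(visitor[0], "->", [h["url"] for h in hits])
```

Even this toy version hints at the real-world pain: proxies and shared IPs break the visitor heuristic, clocks drift between servers, and the stitching logic is something you now own and maintain.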
This presents us with a stark choice: build and own our own company-specific, customized means of capturing this new data and keeping pace with other innovations, or rely on the expertise that is out there (regardless of which Vendor you prefer) and keep pace with all the innovation.
Often this is an easy choice for any company that considers its core competency to be its business, not developing web analytics solutions (though admittedly if you are Wal-Mart you can absolutely do that – for example, they have invented their own database solution since nothing in the world can meet their size and scale).
Increasingly we are heading towards doing a lot more measurement and customer experience analysis beyond just clickstream. Two great examples of this are experimentation and testing (especially multivariate testing) and personalization / behavioral targeting. In both cases “add-on” solutions are tacked on to the website and testing / targeting happens. Often these solutions come with their own methods of collecting and analyzing data and measuring success.
But as we head toward an integrated, end-to-end view of customer behavior, for optimal analysis we have to find ways of integrating data from these add-ons into the standard clickstream data (else you are optimizing just for each add-on, which is not a great thing).
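At its simplest, that integration is a join on a shared visitor key between the clickstream and the add-on’s data. All records, field names, and the revenue-by-recipe question below are hypothetical, purely to show the shape of the join:

```python
# Hypothetical records: standard clickstream rows and rows emitted by
# a testing/targeting "add-on", both keyed by the same visitor_id --
# the shared key is what makes an end-to-end view possible.
clickstream = [
    {"visitor_id": "v1", "page": "/landing", "order_value": 0},
    {"visitor_id": "v1", "page": "/checkout", "order_value": 49},
    {"visitor_id": "v2", "page": "/landing", "order_value": 0},
]
test_assignments = [
    {"visitor_id": "v1", "recipe": "B"},
    {"visitor_id": "v2", "recipe": "A"},
]

def revenue_by_recipe(stream, assignments):
    """Join add-on test data onto the clickstream; total revenue per recipe."""
    recipe_of = {a["visitor_id"]: a["recipe"] for a in assignments}
    totals = {}
    for hit in stream:
        recipe = recipe_of.get(hit["visitor_id"])
        if recipe is not None:
            totals[recipe] = totals.get(recipe, 0) + hit["order_value"]
    return totals

print(revenue_by_recipe(clickstream, test_assignments))
```

Without a shared key like this, each add-on can only report success in its own silo, which is exactly the “optimizing just for each add-on” trap.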