Before the first Yelp review or support ticket, there was a botched delivery of copper ore.
Dating all the way back to 1750 BC, this may be the very first recorded example of a formal customer complaint. Nanni, the disgruntled purchaser and man of the hour, was so dissatisfied with the inferior grade of copper ore delivered to him that he took the time to chisel roughly 300 characters of grievance into a clay tablet.
Here’s an excerpt from the ancient Babylonian support claim, which currently resides in the British Museum:
…How have you treated me for that copper? You have withheld my money bag from me in enemy territory; it is now up to you to restore (my money) to me in full.
Take cognizance that (from now on) I will not accept here any copper from you that is not of fine quality. I shall (from now on) select and take the ingots individually in my own yard, and I shall exercise against you my right of rejection because you have treated me with contempt…
Obviously heated about his poor customer experience, Nanni ensured he wouldn’t let anything like this happen ever again, and he must have felt pretty great after venting that frustration. Regardless, we can’t thank Nanni enough for getting the customer experience ball rolling.
Now, without covering thousands of years of CX history, let’s skip to the turn of the century and learn more about the founding fathers of customer feedback.
1896 – Data Processing
Herman Hollerith, founder of the Tabulating Machine Company (which later became part of IBM), began using punch cards and electromechanical card readers to radically improve the U.S. census process. His invention was one of the first examples of semi-automated data processing, and the equipment went on to be leased to major census bureaus around the world.
1912 – Market Research
While Charles Parlin was working for the Curtis Publishing Company, he conducted 1,121 interviews across the nation’s 100 largest cities to draw conclusions about the market. He also pioneered a report on the differences between convenience and shopping goods so that companies could focus their efforts where profits stood to grow the most.
1936 – Gallup Correctly Predicts Roosevelt’s Presidency
George Gallup’s company, the American Institute of Public Opinion, set the standard for survey sampling and statistical methods when it accurately predicted Franklin Roosevelt’s presidential victory using a sample of 50,000 responses.
1941 – Focus Groups
Paul Lazarsfeld’s work with Robert Merton at Columbia University gave birth to the concept of the focus group. The pair had been tasked by the U.S. government’s Office of Facts and Figures with evaluating a radio program, and they chose to interview listeners and record their responses. This method ensured their questions were properly answered and that the information could be analyzed afterwards.
1954 – Nielsen Station Index Service Formed
Arthur Nielsen formed the Nielsen Station Index Service, which used set meters to report audience measurement and usage and would go on to become the standard system for measuring local television markets.
1969 – ARPANET Sends First Message
The government-funded ARPANET (Advanced Research Projects Agency Network) launched, laying the groundwork for network communication standards. That framework was then used to create foundational elements of the internet as we know it today: email, VoIP, and file transfer systems all began life within ARPANET.
1976 – Dynamic Brand Tracking
Maurice Millward and Gordon Brown set up a business aiming to provide clients with data they could use to make management and marketing decisions about their products. The pair wanted to explore how ad effectiveness degraded over time, so they established a dynamic tracking system that showed brand health and delivered a continuous stream of information on what was and wasn’t working for their clients’ brands.
1980s – Computer-Assisted Telephone Interviewing
The concept of customer satisfaction was established in the ’80s, largely thanks to the introduction of Computer-Assisted Telephone Interviewing (CATI). CATI enabled researchers to follow scripts displayed within an application so that microdata could be collected and questions could be customized based on the answers provided by the participant.
1990s – Computer-Assisted Web Interviewing Introduced
Computer-Assisted Web Interviewing (CAWI) came to prominence in the ’90s, allowing researchers to follow a script within a website that could include images, audio, and video and branch from one page to the next. Not only could questionnaires be customized based on the participant’s answers, but researchers could also experiment with the design of each questionnaire to boost response rates and accuracy.
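To make the branching behind CATI and CAWI concrete, here is a minimal Python sketch of a scripted interview. The questions, answer codes, and routing are invented for illustration; real interviewing tools layer input validation, media, quotas, and design experiments on top of this basic flow.

```python
# A minimal sketch of the branching logic behind CATI/CAWI scripts.
# The questions, answer codes, and routing below are hypothetical; a real
# system would also render the text/options, validate input, and handle media.

QUESTIONNAIRE = {
    "q1": {
        "text": "How satisfied are you with your recent purchase?",
        "options": {"1": "Satisfied", "2": "Dissatisfied"},
        # The participant's answer decides which question comes next.
        "next": {"1": "q2_satisfied", "2": "q2_dissatisfied"},
    },
    "q2_satisfied": {
        "text": "What did you like most about the product?",
        "options": None,  # free-text answer
        "next": {},       # end of script
    },
    "q2_dissatisfied": {
        "text": "What went wrong with your order?",
        "options": None,
        "next": {},
    },
}


def run_interview(answers):
    """Walk the script, picking each next question from the previous answer.

    `answers` maps question ids to responses, standing in for the live input
    an interviewer or web page would collect.
    """
    collected = {}
    current = "q1"
    while current:
        response = answers.get(current, "")
        collected[current] = response
        # An unmapped or final answer returns None and ends the interview.
        current = QUESTIONNAIRE[current]["next"].get(response)
    return collected


if __name__ == "__main__":
    # A dissatisfied participant is routed straight to the complaint question.
    print(run_interview({"q1": "2", "q2_dissatisfied": "The copper ore was low grade."}))
```

Each answer selects the next question, which is how both telephone and web questionnaires tailor themselves to the participant instead of marching through a fixed list.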
2000s – Mobile, Social, and Big Data
With the introduction of mobile platforms, market research expanded its reach, while social media allowed communities to grow and connect, further cementing the idea that companies must become more customer-centric in their decision making. Data was faster and easier to collect, and companies were using it to increase customer engagement and satisfaction.
Today – 360° View of the Customer
Due to the incredible amount of customer data streaming in from all channels, companies focus on capturing this information and turning it into action. The problem is that customers today use very colorful language, riddled with misspellings, domain-specific jargon, and even emojis to describe how they feel about products and services. This makes it very difficult for organizations to aggregate all of their unstructured text, let alone extract valuable insights from it.
Before Luminoso, there were really only two methods of getting computers to understand this kind of industry-specific customer feedback:
- The ontological approach, which typically spans months and leaves you with a manual, inflexible setup that must be continually updated to remain relevant. In other words, you must create rules and a list of the vocabulary you care about so that the computer can understand the specific type of language your customers use and produce the outcomes and visuals you’re looking for (a toy sketch of this kind of rule-based setup appears after this list).
- The machine learning approach, which is more hands-off but has a huge weakness: it requires a lot of training data before the computer can actually understand the language and terms your customers tend to use. Google’s DeepMind AI, for example, couldn’t beat human players at Go even after simulating 3 million games; from there, its creators had to have the AI play itself to gather more data.
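To show why the rule-based route gets stale, here is a toy Python sketch of an ontological setup. The concepts, keyword lists, and sample comments are all invented; a production system would involve far larger vocabularies and many more hand-written rules.

```python
# A toy sketch of the rule-based, "ontological" approach: every concept needs a
# hand-maintained vocabulary, and anything outside that vocabulary is missed.
# The concepts, keywords, and sample feedback below are invented for illustration.

ONTOLOGY = {
    "shipping": ["delivery", "shipped", "late", "arrived"],
    "quality": ["broken", "defective", "cheap", "low grade"],
    "support": ["agent", "ticket", "refund", "no reply"],
}


def tag_feedback(text):
    """Return every concept whose keyword list matches the feedback text."""
    lowered = text.lower()
    return {
        concept
        for concept, keywords in ONTOLOGY.items()
        if any(keyword in lowered for keyword in keywords)
    }


if __name__ == "__main__":
    comments = [
        "Delivery was late and the item arrived broken",
        "ur agent ghosted my ticket, still no refund 😤",
        "total ripoff, never buying again 🙄",  # slang outside the vocabulary goes untagged
    ]
    for comment in comments:
        print(tag_feedback(comment), "<-", comment)
```

Every new misspelling, slang term, or emoji a customer uses has to be added by hand, which is exactly the maintenance burden that makes this approach manual and inflexible.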
Luckily, Luminoso doesn’t need the ontologies or heaps of data required by many of our peers who follow one or both of the methods listed above. Instead, Luminoso requires 1/1000th of the training data that other machine learning systems need, and it learns the unique, domain-specific language of each of our customers automatically.
Because Luminoso lets companies easily visualize and understand the most relevant concepts and themes in their customer or employee feedback, without months of manual work or a fleet of data scientists, those companies can focus on improving the customer experience.
As we look back on the mile markers that led to today’s customer experience landscape, it’s easy to see why customer-centricity and data-driven decision making came to prominence. If companies have no way of understanding what drives their customers to buy or churn, how can they make the right decisions to improve the products and services they sell?
Follow us on Twitter or LinkedIn for more on how Luminoso is helping companies understand their customers and improve the experience.