Daily Mirror - Print Edition

Customer satisfaction survey achieves topical insight into breaking client issues

29 Aug 2016

Last week, we reviewed the customer satisfaction survey program. We said that it should focus on measuring customer perceptions. 


How well does your company deliver on the critical success factors and dimensions of the business as defined by the customer? For example, is your service prompt and is your staff courteous? How responsive and understanding of the customer’s problem are your representatives? The findings on company performance should be analysed both across all your customers and by key segments.


We said that any customer satisfaction survey follows eight steps. We discussed the first four: (1) defining the problem, (2) planning the survey design, (3) designing the questionnaire and (4) selecting a sample. This week, we continue with the other four.
 

Stage 5: Collecting data
Once the research has been designed, the researcher must actually collect the needed data. Whether a telephone interview, mail survey, Internet survey or another collection method is chosen, it is the researcher’s task to minimize errors in the fieldwork process, and errors are easy to make. For example, interviewers who have not been carefully selected and trained may not phrase their questions properly or may fail to record respondents’ comments accurately. Worse, if the fieldworkers are poorly paid, they may be tempted to fill out the forms themselves, creating a fraudulent set of data for analysis. Field service firms are organizations that specialize in the collection of data.
 

Stage 6: Analysing the data
Data processing ordinarily begins with jobs called editing and coding, in which surveys or other data collection instruments are checked for omissions, incomplete or otherwise unusable responses, illegibility, and obvious inconsistencies. Coding assigns numbers to subjective responses. 


For example, the reasons for switching to a competitive product may be given numerical codes such that 1 = cheaper price, 2 = better value and so on.
Data analysis comes next. It may involve statistical analysis, qualitative analysis or both.
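As a minimal sketch of what the coding step might look like in practice (the themes, codes and Python snippet below are illustrative assumptions, not details from any actual survey):

# Hypothetical coding sheet: map switching-reason themes to numeric codes.
REASON_CODES = {
    "cheaper price": 1,
    "better value": 2,
    "poor service": 3,   # assumed extra theme for illustration
}

responses = ["cheaper price", "better value", "cheaper price", "poor service"]

# Assign each edited response its code; anything unclassified is flagged as 99.
coded = [REASON_CODES.get(r.lower(), 99) for r in responses]
print(coded)  # [1, 2, 1, 3]

Once responses are coded this way, they can be tabulated or fed into the statistical or qualitative analysis just described.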


The type of analysis used should depend on the research objectives, the nature of the data collected, and who will use the findings. 
The purpose of statistical testing is to estimate the level of sampling error and judge how different the sample results may be from the true population figures. Of course, if a non-probability sampling procedure is used, such tests are not appropriate, as there is no basis for estimating sampling error.
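To make the idea of sampling error concrete, here is a rough sketch (the sample size and satisfaction count are invented figures) of the margin of error around a satisfaction proportion drawn from a probability sample:

import math

n = 400          # sample size (assumed for illustration)
satisfied = 312  # respondents who said they were satisfied (assumed)

p = satisfied / n                 # sample proportion of satisfied customers
se = math.sqrt(p * (1 - p) / n)   # standard error of that proportion
margin = 1.96 * se                # approximate 95% margin of error

print(f"{p:.0%} satisfied, +/- {margin:.1%} at 95% confidence")

With a non-probability sample there is no defensible way to compute such a margin, which is why those procedures are skipped in that case.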
 

Stage 7: Drawing conclusions and preparing the report
Remember that the purpose of customer satisfaction research is to aid managers in making effective marketing decisions. The researcher’s role is to answer the question: what does this mean for our CRM strategy? Therefore, the culmination of the survey process must be a report that usefully communicates research findings to management.


Typically, management is not interested in how the findings were derived. Except in special cases, management is likely to want only a summary of the findings. Presenting these clearly, using graphs, charts and other forms of artwork, is a creative challenge to the researcher and any others involved in the preparation of the final report.


If the researcher’s findings are not properly communicated to and understood by the organization and its managers, the survey process has been, in effect, a total waste.
 

Stage 8: Following up
After the researcher submits a report to management, he or she should follow up to determine if and how management responded to the report. The researcher should ask how the study and/or the report could have been improved and made more useful.


The output of one study is typically the initial input for defining the research objectives of the next one. Thus, the closing discussion is also the time to consider the specific issues that need better description, as well as the exploratory steps that should be taken to initiate the next satisfaction measurement.
 

Satisfaction and quality measures
There are many ways to define and assess satisfaction as well as quality. In general, satisfaction is viewed as a comparison between what customers expect from a product or service and the actual performance received. If the organization delivers more than what was expected, customers are delighted. 


If the organization falls short of its promises, customers are dissatisfied. A basic organizational need, then, is to understand satisfaction in terms of the many aspects of a product or service that could be important to different segments.
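A minimal sketch of this expectation-versus-performance comparison, using invented scores on a ten-point scale, might look like this:

# Illustrative only: compare what the customer expected with what was delivered.
def satisfaction_verdict(expected: float, perceived: float) -> str:
    gap = perceived - expected
    if gap > 0:
        return "delighted"      # the organization delivered more than expected
    if gap == 0:
        return "satisfied"      # expectations were just met
    return "dissatisfied"       # the organization fell short of its promises

print(satisfaction_verdict(expected=7.0, perceived=8.5))  # delighted
print(satisfaction_verdict(expected=8.0, perceived=6.0))  # dissatisfied

In practice the comparison is made aspect by aspect and segment by segment, since different segments may weight the same aspects of a product or service differently.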


The current best practice in assessing customer satisfaction may well involve an understanding of the drivers of customer perceptions of brand, value and relationship/retention equity.
A perceptual driver is similar to a key performance indicator: it identifies which issues seem to affect satisfaction the most.


Value equity represents the objective appraisal of the brand (things like perceptions of quality, price, and convenience); brand equity is the subjective appraisal of the brand (things like brand awareness and attitude toward the brand); and relationship equity involves the special relationship elements that link the customer with the brand (e.g., frequent buyer programs).


Satisfaction can be affected by customer responses to tangible aspects of the product or store, intangible perceptions of the product, brand name or image, as well as the drivers of the strength of the relationship with the customer. Drivers of perceptions are multidimensional indicators of customer satisfaction - the who, what, where, why and how of the effects.
 

Roll up, drill down or drill across
Consider the automated call centre. Objective indicators of the performance of the routing system such as call handling times and number of transfers can be compared to subjective indicators of call centre agent performance such as evaluation scores or call outcomes. Organizations should have an interest in assessing the effects of new training programs, improved contact points, or revised routing procedures.


The ability to roll up, drill down or drill across data at any level, including listening to the actual call recordings underlying the data, gives an unparalleled opportunity to understand reality from the customer’s perspective. Detailed analysis can include drilling down through different levels of the organization, across time for trend analysis, or comparing departments or product lines against each other.
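As a simple sketch of what rolling up and drilling down might look like on call-centre data (the columns and figures below are hypothetical, and pandas is just one of many tools that could be used):

import pandas as pd

# Hypothetical call-centre extract; regions, departments and scores are invented.
calls = pd.DataFrame({
    "region":       ["West", "West", "East", "East", "East"],
    "department":   ["Billing", "Support", "Billing", "Support", "Support"],
    "handle_secs":  [240, 380, 210, 450, 300],   # objective indicator
    "satisfaction": [4, 3, 5, 2, 4],             # subjective post-call score, 1-5
})

# Roll up: organisation-wide averages.
print(calls[["handle_secs", "satisfaction"]].mean())

# Drill down: the same measures by region, then by department within region.
print(calls.groupby("region")[["handle_secs", "satisfaction"]].mean())
print(calls.groupby(["region", "department"])[["handle_secs", "satisfaction"]].mean())

# Drill across: compare departments against each other, regardless of region.
print(calls.groupby("department")["satisfaction"].mean())

Drilling further down would mean pulling up the individual calls, and ultimately the recordings, behind any figure that looks unusual.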


Objective measures and subjective perceptions can be important when the right data is captured, consolidated, analysed, and reported as part of a circular process that is used by the entire organization to improve customer experiences.
 

Service organisations
Service organizations create value for consumers through performances. All businesses are service businesses to some degree. Computer manufacturers and food retailers create consumer value through a goods-services mix. Commercial banks and hospitals create consumer value largely through services. Service convenience facilitates the sale of goods as well as the sale of services.


Thus, there are tangible aspects of a service, such as automobile repairs, and intangible aspects of products, such as the check-out experience. Both can be important components of the organization’s understanding of customer contact points. Two key subjects of interest to organizations include the tangible features of a product and the intangible elements associated with the provision of service.


With this instalment, we end our series on Customer Relationship Management. If you need any clarification on certain issues you may communicate with the writer.  If you need any previous copies of the series, you may access them at http://www.dailymirror.lk/columns/ and search for the writer.