Gauging the exact benefits of internal communications can be tricky, not least because there is no single best methodology for gathering data.
Even when key information is collected and the details are known, it can still be hard to take the right actions and assess their impact.
However, “in the past years there have been more guidelines provided to communicators on how and what to measure,” says Angela Sinickas, a pioneer in the field of organisational communication measurement. A good example she likes to give is the Barcelona Declaration of Measurement Principles, an authoritative document published by worldwide communication associations in 2010, which sets clear standards and approaches to PR measurement.
Other frameworks are useful too. Sinickas notes that the CIPR Inside Measurement Matrix focuses entirely on internal communications, and she concludes that it is one of the most appropriate guides for the profession. “It is a great guideline; not only does it say what you should measure, but how you can measure it – either with a survey or through many other forms of research.”
Those familiar with new technology appreciate that data can be counted more easily today. “Through web metrics you can see how many employees click on different pages, ‘like’ a piece of news or share a blog post.”
Trying to pin down exact click figures misses the point, though; those numbers don’t necessarily mean that what has been measured is useful. “For example, when you count how many people are clicking on a certain page, all you are really measuring is that the piece of communication immediately before was very effective at making people go to that link. It tells you nothing about whether they have found the actual page useful or not.”
In those cases some intervention is required. Firstly, Sinickas suggests asking readers, at the bottom of the page, whether they have found the content useful.
“Action” pages could also be helpful. “Those are pages where individuals can take some types of action, from donating money to signing up for a benefit option – any kind of behaviour-related action.”
Another crucial piece of information is how people arrived at any particular page. “Did they get there because they received a link through the employee newsletter? Or did they get to that page via a social media discussion? Or through an article on the intranet?” The answers to those questions tell communicators which communication was most effective at getting staff to that specific content.
But, for this type of evaluation to really make a mark, it has to be planned beforehand so unique links or “parameters” can be created in each of the different communications that lead to the same action page.
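As a concrete sketch of what such parameter tagging could look like, the snippet below builds a uniquely tagged link per channel and then tallies logged visits to the action page by source. The `src` parameter name, URLs and log format are all hypothetical; commercial analytics tools do the same thing with campaign (UTM-style) parameters:

```python
from collections import Counter
from urllib.parse import urlencode, urlparse, parse_qs

def tagged_link(base_url: str, source: str) -> str:
    """Append a hypothetical 'src' parameter naming the channel the link appears in."""
    return f"{base_url}?{urlencode({'src': source})}"

def clicks_by_source(logged_urls) -> Counter:
    """Tally which channel drove each logged visit to the action page."""
    counts = Counter()
    for url in logged_urls:
        # Visits with no 'src' parameter are treated as direct traffic.
        src = parse_qs(urlparse(url).query).get("src", ["direct"])[0]
        counts[src] += 1
    return counts

# Each channel gets its own unique link to the same action page.
newsletter_link = tagged_link("https://intranet.example/benefits", "newsletter")
social_link = tagged_link("https://intranet.example/benefits", "social")
```

Counting the logged URLs then shows, channel by channel, which communication actually moved staff to the page.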
Focus on changing behaviours
For Sinickas, among the top measurement mistakes is “forgetting to evaluate until after the communication is finished.”
The benefits are greater when a baseline is carefully measured beforehand. One of the best ways to prove the value of any communication is to show that it has led to some kind of behavioural change. “Try to connect the communication you are doing with how many employees are changing behaviours by improving the knowledge and attitudes your research has identified as leading to those behaviour changes.” From there, Sinickas believes communicators can prove that their activity has brought money back to the organisation, either by increasing revenue or decreasing the cost of work.
Asking the right questions
All of this adds up to the importance of asking questions that will lead to actionable answers. But, despite a communicator’s best intentions, this is easy to get wrong.
“For example, let’s say you ask people to agree or disagree with the following statement: ‘The newsletter should continue to be published weekly.’ If they disagree, you don’t know whether to increase or decrease the frequency, or to eliminate it altogether. It would be better to ask people to select their ideal frequency from a list of options such as monthly, quarterly, never, etc.”
There are many such pitfalls. Another example is asking people, “Are you getting too much, too little or the right amount of information on X subject?” Although this question looks reasonable, it does not yield enough information. Sinickas suggests instead formulating two questions:
“I would ask, ‘How interested are you in X subject?’ as well as ‘How well informed are you on X subject?’”
Ultimately, the benefit of asking both questions is that communicators learn right away how many employees are currently interested in X subject, and whether they want more or less on that topic, as well as how well informed they feel. “If they just ask about wanting more information, it’s very possible that the percentage of people who want more might not change at all, even though the interest level and the information may have both improved by the same percentage.”
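One way to act on the answers is to cross-tabulate the two ratings and flag the gap segment: employees who are highly interested in a topic but feel poorly informed about it. A minimal sketch with invented 1–5 ratings (the thresholds are arbitrary choices, not part of Sinickas’s method):

```python
# Hypothetical survey responses for one topic: (interest, informed), each rated 1-5.
responses = [(5, 2), (4, 2), (5, 5), (2, 4), (4, 1)]

# The "gap" segment: highly interested (>= 4) but poorly informed (<= 2).
gap = sum(1 for interest, informed in responses if interest >= 4 and informed <= 2)
gap_pct = 100 * gap / len(responses)
print(f"{gap_pct:.0f}% of respondents want this topic covered better")
```

Tracking that gap percentage over time shows whether interest and informedness are moving together, which a single “do you want more?” question would hide.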
Another misleading question would be, ‘Where would you like to get all your company-related information?’ Sinickas emphasises that, depending on the subject, staff want to access content in different ways. Hence, “if you force them to choose just one option for all topics, you end up without useful data for choosing the right mix of channels by subject.”
Reading the data
Gathering the data is one thing; understanding the best actions to take based on that information is another. Reading the data correctly is crucial. Sometimes this means recognising that there might be a bias in the results.
“It’s very easy to post a survey question on the intranet home page and get replies almost instantaneously. It’s more difficult to interpret what those results really mean.”
That is partly because the results apply only to the segment of the audience with easy online access, which means that some groups may end up being under- or over-represented.
Hence, Sinickas advises to “ensure that you are getting the data from the people who have a legitimate answer to the questions. Otherwise you will end up with the wrong assessment on what is happening. For example, if you ask people how useful a communication channel is without first excluding those who have not seen it, you may have a high percentage of people saying it’s not useful when really they just haven’t seen it but were forced to choose only among the available options.”
That may sound straightforward. Yet reality is more complex, and Sinickas has seen many companies fall into that trap.
Not surprisingly, the organisations that are considered to be good at measurement are also the ones with a proven track record of evaluation activity. “They measure regularly to see how things are changing over time as well as to understand what works and what does not work.”
Do survey questionnaires still have a role to play when companies have real-time big data at their fingertips? For Sinickas, the answer is “yes”. Strange as it may seem, for her the quarterly, twice-yearly and even annual surveys still play a role in an always-on, connected digital world. She likes to return to her initial observation: “With real-time tools, you can see what people are clicking on and when, but that does not tell you what they are thinking about what they read. And you don’t know what they are thinking unless you ask them through a survey question.”
She might have a point. The analytics industry has been consolidating since the early 2000s. Even as powerful technology such as machine learning rapidly emerges, a good internal communicator will always ask colleagues for their ideas and needs.
But, what does measuring ‘regularly’ mean then? It depends on the context. “Observational metrics that do not rely on surveys could be measured monthly. For example, you can track the reading grade level of your writing to see if it matches the educational level of your audience every month. You can track the volume of content you are sending to employees each month on a list of key topics. Both of these factors will eventually impact how well employees understand the key topics you’re writing about—which you might only be able to measure once a year. Large organisations can also do random samples more frequently than once a year with different clusters of employees.”
The future of measurement – finding the balance
The future is bright for the measurement industry because “more management wants to know if they are spending their money wisely.” Indeed, practitioners are moving toward new data and analytics.
Yet, Sinickas repeats for clarity, “There is always a balance between what you can learn and what you should learn.”
She refers specifically to crossing the line of invading personal privacy: “For example, IBM regularly monitors the level of involvement of their employees on internal social media. If the level of participation of a department drops, their experience shows it likely happened because the group has a new manager who may not be working out well in that role. Then HR does an intervention to either help that manager or move him or her into a different role before the lowered level of employee engagement leads to dissatisfied employees looking for jobs elsewhere. While the information is useful because it allows the company to make its management better, it is also invading the private space of individuals. It is a delicate balance.”
Not everyone would necessarily agree with that statement. Nonetheless, Sinickas’s views are a good reminder of the privacy dilemma of our digital age: how is our data used? Perhaps, alongside measurement, we should consider with equal gravity how ethical our corporate policies and practices are when it comes to evaluating the views and actions of our colleagues.