How to answer ‘what does good look like’?

We’ve noticed a recurring theme when talking with HR decision-makers: how to establish a common understanding and way of defining what ‘good’ looks like for key roles in an organisation. Everyone you ask has a different opinion, and judges it differently.

The problem is that not being able to describe what good looks like in a consistent way results in:

  • managers setting unclear expectations, with ambiguous or unrealistic performance targets, and consequently getting less out of their people than they are capable of
  • variable hiring effectiveness: the wrong people hired for the job while better candidates are missed
  • people promoted into the wrong roles, or roles where they are lacking key competencies, most commonly managerial behaviours
  • variable performance across the business, affecting customer perceptions of quality and leaving no basis for continuous improvement.

Ask around and some people will hold to a traditional view of the role, modelled on how it has always been done; some will point to individuals they consider to embody good practice; and some will claim to recognise it when they see it, but struggle to describe the criteria they are using!

If the role is changing and current practice is no longer likely to be good enough to compete effectively, how can organisations move on from ‘here’ if they can’t describe what ‘here’ is?

Of course, once the bar has been met, what happens? It goes up. Without at least a working description of what ‘good’ looks like, how can organisations focus their assessment, development and recruitment efforts effectively, year after year?

So what are the methods relied on to make these decisions, and how reliable/useful are they? Let’s take a look:

Job descriptions

Job descriptions provide a factual definition of what is required (accountabilities, responsibilities etc.), but they don’t give much indication of what ‘good’ looks like, or much help in assessing potential or performance in the role.

Person specification

A person specification identifies the personal characteristics judged to be indicators of good candidates for a role, and is normally used in recruitment and selection to perform an initial sift of potential candidates. These characteristics are often defined subjectively, with little or no validation before or after, so they are variable in quality and usefulness.

Performance measures

Performance measures can be very effective in communicating what the company considers important and rewards. They set expectations of the role in terms of its outputs, and drive the specific behaviours that contribute to achieving those outputs; you get what you measure and reward. However, performance measures are often unbalanced (e.g. based only on revenue) and can therefore drive unbalanced, unexpected or undesired behaviours. They may conflict with other goals or values and inhibit the development of desired behaviours. In a recruitment context, performance measures relate to performance achieved elsewhere and are therefore hard to compare.

Values

Similar to person specifications, values are useful for defining individual ‘enablers’ for a role and for identifying whether an individual will fit in and thrive within the culture of the organisation, or not! They implicitly set norms of behaviour that influence the ‘how’ of work in an organisation, but they can be difficult to translate to the actual role, and the more generic they become, the less insightful any assessment becomes. This applies in a recruitment context, though values-based assessment is more useful for progression and selection, where the values can ‘come to life’.

CVs and references

CVs and references provide evidence of what the candidate considers to be their most relevant experience and achievements, and though they are widely relied on in recruitment, they have proved to be very poor predictors of performance in a role. They are not relevant to selection or development, as more relevant performance and review data is normally available. Legal issues have made balanced references problematic, with many companies now only confirming the terms and duration of employment. Verbal references are not without issues either!

Qualifications

Qualifications often define the pre-requisite level of competence for a role; they describe less what ‘good’ looks like and more what ‘acceptable’ looks like, as far as the qualification defines the role. They offer very little insight into other key areas such as team working, productivity and integrity.

Policy/Process/Procedure

These are very useful for defining what the work is, where key decisions are made and, in some cases, how to do it. Their value varies widely depending on how well they are documented, how recently they have been updated, how well they reflect the reality of the work, and how much detail they contain. They are highly relevant in selection and development, but not relevant in recruitment as a basis for assessment.

Interviews

By far the most popular and widely used technique for recruitment and progression. Interviews rely on the interviewers’ subjective opinion about what good looks like, and on their interview skills. The predictive validity of interviews can be increased through training and the use of structured, competence-based interviewing techniques; however, even with these, the assessment still depends on the interviewers having a clear, shared view of what ‘good’ looks like for the role.

Competence profiles

Competence profiles can be a very useful framework for defining what good looks like, providing criteria that can be used for assessing and comparing people and for identifying strengths, weaknesses and development needs. Like values, they become less useful the more generic they or their behavioural indicators become, or when their relevance to a specific role becomes difficult to observe.

Appraisal data

Well executed, appraisal data will provide useful insight into what good looks like by giving examples of specific achievements in the current role. Highly relevant for setting performance goals and targets and for planning personal development, appraisals also provide useful information for selection; they are not relevant to recruitment.

360 degree review data

This technique is primarily used for planning personal development. It can also provide valuable information for use in selection and reward, although this has to be managed with care. It provides a wealth of detailed feedback on what different stakeholders regard as good and bad in a role. Its strength is also its weakness: it takes time and effort to gather the feedback, and to interpret and analyse the volume of data captured. The data is subjective, highly contextualised and very personal; however, it includes viewpoints from subordinates and peers as well as managers, and sometimes customers and suppliers, which are less readily available through other methods.

Psychometric test results

Psychometric tests are mostly used in recruitment and selection. They provide objective assessment and enable fair comparison of candidates. The selection of which tests to use is based on judgement about which competencies to test, and then which of the available tests are best suited to the role. This chain of compromises can lead to problems with face and content validity, and many people struggle to relate the tests to the day-to-day work of the role. This is especially true when they are applied to operational roles, and it can lead to ‘revolts’, with test activities being discredited through lack of observed relevance. These tests are rarely used as training and development diagnostics.

Case studies and success stories

Such resources, including ‘A day in the life of…’, are an excellent and rich source of information about what good looks like, and for communicating a view of what the role involves. They are very useful for development, though they can be highly contextualised and sometimes biased if too few ‘versions of the truth’ are used in their creation. They are not very useful for selection or recruitment and can be ‘backward looking’.

Opinion

Everyone has their own view of what good looks like, formed from their personal experiences. These views tend to be single-sided (sometimes highly biased), overly contextualised and undocumented; however, they are an incredibly rich source of insight into current practice, and into predictions about what is becoming more or less important about the role. Opinion is often relied on in the final stages of interview by the line manager, who will have their own single-sided, highly biased, overly contextualised and undocumented view of what they expect the candidate in front of them to do!

So, there is no shortage of potential sources of information capable of contributing to a view of ‘what good looks like’; the list above is long enough, and it is not exhaustive! Each has its own attributes and applications, but if we were to draw out the ‘best’ characteristics of all these methods, the list would look something like this:

  • describes the important elements of the work and identifies where key decisions or contributions are made
  • defines a balanced scorecard of performance measures
  • identifies the key enabling competencies and provides easily observed positive and negative behavioural indicators
  • reflects company values and culture
  • reflects real-life ‘current practice’ and challenges people to demonstrate ‘best practice’
  • incorporates informed views of what aspects of the role are becoming more important
  • provides criteria for objective assessment and comparison.

There is a method that exhibits these characteristics, which is gaining in popularity, particularly in recruitment applications, but which appears to be relatively unknown compared with the others mentioned in this article.

This method is known as ‘situational judgement’. It is based on defining a number of situations that are highly relevant to a job role, with a range of options for dealing with each. Definition of the situations draws on many of the sources described above and uses other techniques such as event-based interviewing and critical incident analysis. The aim is to define a set of situations that collectively represent the diversity of the role, but which focus on the specific situations where the most significant contributions are made, reflecting an informed view of which situations are becoming increasingly important to the role. The options presented for each situation draw on the real-life experiences (good and bad) of those who have faced similar situations, and are anchored against the key competencies for the role (positive and negative behavioural indicators). This produces a highly relevant set of behavioural indicators that can be readily observed and used to assess competence objectively (i.e. what ‘good’ looks like). Each of the options presented is rated for effectiveness, again drawing on the experience of those who have applied the options in real life and experienced the consequences.
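To make the structure more concrete, here is a minimal sketch in Python of how a situational judgement item could be represented and scored. The item content, field names and scoring rule are invented purely for illustration; they are assumptions, not a prescribed implementation of any particular assessment product.

```python
from dataclasses import dataclass, field

@dataclass
class Option:
    """One possible response to a situation, anchored to a competency."""
    text: str
    competency: str      # e.g. "customer focus" (illustrative label)
    effectiveness: int   # expert rating, e.g. 1 (poor) to 5 (highly effective)

@dataclass
class Situation:
    """A job-relevant scenario with a range of possible responses."""
    description: str
    options: list[Option] = field(default_factory=list)

def score_response(situation: Situation, chosen_index: int) -> int:
    """Score a candidate's chosen option against the expert effectiveness rating.

    Here the score is simply the rating of the chosen option; a fuller scheme
    might compare the candidate's ranking of all options with the experts' ranking.
    """
    return situation.options[chosen_index].effectiveness

# Illustrative use: one situation, two options, one candidate choice.
situation = Situation(
    description="A key customer escalates a complaint about a missed delivery.",
    options=[
        Option("Apologise, investigate the cause and agree a recovery plan.",
               competency="customer focus", effectiveness=5),
        Option("Pass the complaint to the logistics team and move on.",
               competency="customer focus", effectiveness=2),
    ],
)
print(score_response(situation, chosen_index=0))  # -> 5
```

In practice the situations, options and effectiveness ratings would come from the sources described above, such as event-based interviewing, critical incident analysis and the experience of practitioners who have faced and resolved similar situations.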

If you would like to know more about how we could help you define ‘what good looks like’ for roles in your organisation, please do get in touch.

 
