Protect your company culture from social threats
Are you at risk of unwittingly hiring an online troll? If you are not screening your candidates’ social media, the answer is a resounding yes. By using the latest tools available, you can protect your people and culture from bullying behaviour.
Without doubt, one of the ‘hot topics’ in the screening sector is the emergence of social media screening. Today, people share so much of their personal lives and private musings online without much thought about what can be ‘seen.’ However, unless posts are diligently reviewed and edited or deleted, those private-made-public thoughts linger in a social media time capsule, waiting to be discovered by a stranger in an HR team conducting a search to see if a job candidate has the appropriate background.
This is a potential disaster for a candidate, and an absolute minefield for the employer. But that minefield can be navigated successfully and, when combined with other checks, social media screening can help provide a truly holistic view of a candidate, enabling employers to make better, more informed hiring decisions.
Social media vs. adverse media
When thinking about adverse media searches, people may wrongly assume that the check includes social media profiles – but this is not the case. Adverse media searches are tailored to look at public media sources to identify actions already taken by an individual that are of a sufficiently adverse nature to have been reported in the public domain. In contrast, social media searches focus on behaviours, specifically those that someone has shared publicly via their personal social media accounts.
Put another way, social media searches reveal additional real-time insights into how someone interacts with others (comments on posts), expresses themselves (their own posts), and their opinions (the ‘like’ button). Interestingly, and perhaps one of the most powerful applications of the social media check, is that ongoing checks can reveal critical changes in those behaviours, making it a powerful risk management tool when used for ongoing monitoring.
Social media and changing privacy laws
Until recently, social media was not part of the background screening picture because it was often not well understood in the context of privacy or employment laws, and privacy regulators were rightly cautious about its use for such purposes.
But privacy laws are now more robust and provide better guidelines around how data can be used. Alongside this is a growing focus on a corporation’s ‘purpose’ and values. So, attracting talent becomes a sum of two parts: does the candidate have the right qualifications and experience plus the right character to fit the corporate/team culture? Will the candidate further enhance the organisational purpose and reputation?
Furthermore, long before an organisation may deal with negative publicity or threats of legal action, it can suffer through the many ways that toxic employees negatively impact the people around them. Whether an employee’s problematic online behaviour impacts co-workers online or is mirrored in the workplace, if left unchecked, it can lead to dramatic increases in employee turnover, absence, and substantial losses in overall performance.
The risk of unconscious bias
Manually searching thousands of online sources is virtually impossible to do quickly and accurately – or cost effectively. It can leave companies vulnerable to errors and, even worse, allegations of unconscious bias in decision making. It is ironic that a check to review behaviours can in turn trigger our own behaviours and innate beliefs and this itself can create risk.
For instance, manual searches often inadvertently uncover information that is entirely irrelevant and illegal to use in employment decisions – such as race, religion, sexual orientation or political affiliation. It then becomes difficult to prove that this information was not used in employment decisions.
Even when a decision maker intends to ignore it, such information can subconsciously bias them, and organisations like the Equality and Human Rights Commission (EHRC) assume that if public online content has been accessed, it has been used.
Start with a risk assessment
Dedicated social media search tools filter out protected characteristics and protected-class information, focussing solely on job-relevant content. This significantly reduces these risks while also helping you comply with the GDPR.
It is the employer’s responsibility to ensure any background check ordered, including a social media search, is proportionate and relevant to the role the candidate will be undertaking.
The first thing to do is conduct a risk assessment and categorise roles into those that carry risk and those that do not. A flexible screening programme will allow you to build different packages to address various risks, giving you a good level of privacy-compliant granularity. It should include:
- Full information about the nature of the search being conducted with an electronic acknowledgment/consent to perform the check with an audit trail
- Assurance that all data is stored, retained, transferred, and accessed in accordance with requirements and that adequate technical and security measures are in place
- Support for candidates should they wish to exercise their rights to access, rectify or delete data, or to withdraw consent
- Filtered reporting matched to two known identifiers to ensure accuracy and job relevance
In addition, knowing where to draw the line is important for compliance, relevance and proportionality. If I want to make my social media posts public, I can, but equally I can restrict access and keep what I want private, and that must be respected. Requests for passwords to accounts should never be made – employers should only access public social media posts.
Digital and artificial intelligence (AI) solutions
The best social media search products use AI-based software that can read text and images like a human, identifying thousands of job-relevant, potentially high-risk behaviours. They should combine automated and human analysis, ensuring a robust and exhaustive search with a quick turnaround time. Specialist analysts should then review results to ensure accuracy.
Searches should be designed to identify risk-creating behaviours such as harassment, violence, intolerance and crime, to highlight content that may introduce a toxic work environment or present brand risk. For companies that are sensitive to substance abuse, searches could be tailored to include the above categories plus drug and alcohol content.
Custom keywords can help tailor search criteria further, allowing you to comply with legislation by ensuring that protected characteristics are pre-filtered and do not appear in results, and by using different keywords for different role types and risk levels. They can even support an ongoing post-hire monitoring programme, deployed either to manage ongoing risk or as part of your wellbeing programme, looking for behaviours that help teams identify when pastoral support is needed.
HireRight is currently seeing an average 10% ‘hit’ rate for customer-defined criteria on public social media searches and most of these hits would not be picked up in an adverse media search. So, when it comes to analysing behaviours that impact corporate culture and reputation, this shows how important the social media check can be to an organisation to verify that candidates who are ‘good on paper’ can be ‘great in person’ too.