Big data is not a new paradigm. The term has been in circulation since Roger Mougalas of O’Reilly Media coined it in 2005.
The 14 years since then have seen worldwide data storage grow exponentially – from under an exabyte in 2005 to a gigantic 40 zettabytes in 2019 (IDC). For a sense of scale, one zettabyte is equivalent to the storage capacity of 250 billion DVDs. Some may argue that new terminology is required for what we consider “big” data in 2019.
Figure 1. Source: Patrick Cheeseman
Big data now encompasses many interrelated disciplines, fuelling demand for people with entirely new skill-sets and job titles that sound reminiscent of a dystopian future – such as Machine Learning Engineer and Artificial Intelligence (AI) Designer. Data has been afforded a seat at senior leadership meetings, with a growing number of companies complementing their strategic focus on data by adding a Chief Data Officer to their ranks. We may still have some time to refine Isaac Asimov’s three laws of robotics before AI becomes truly intelligent. In the meantime, it’s apparent that big data is disrupting the way we live.
The HR tech industry is booming with innovation, most of it enabled by data-driven advances. In the last couple of years I have seen technologies that used to be highly complex and specialised, requiring significant investment to build and maintain, become overnight commodities from cloud providers. Amazon Web Services’ SageMaker radically simplified access to machine learning and AI for developers. Newer services such as Rekognition, a computer vision service, and Comprehend, a natural language processing service, have taken things a step further. What used to be hunter-gatherer-like technology is now available in your local supermarket, and anyone can use it.
Technology like Ideal uses AI to automate the recruitment process, claiming it “reduces bias and dramatically improves quality of hire.” An emerging trend in 2019 is HR analytics, which aims to make every part of the employment value chain completely data-driven. This will dramatically change how HR professionals work. Some will see it as an opportunity to know more about their people, reduce churn and improve culture. Others will feel threatened by it. Either way, there are ramifications which are yet to be understood.
At the macro level, some impacts are obvious when demand for a given role ceases – currently evident in the threat self-driving technology poses to the transport industry. A less apparent and more dangerous impact, however, is the de-humanisation of interaction through AI and machine learning – something which invades the right to privacy and crosses ethical boundaries without care for the impact on the individual.
Psychology labels this kind of prejudice unconscious bias, something the HR industry guards against with current trends promoting equality, diversity and inclusion. Unconscious bias has now been personified by AI, breaking the “the data doesn’t lie” myth and exposing many companies that dived in head-first, with varying levels of discrimination and prejudice as outcomes.
An example of this is COMPAS, in the USA – an algorithm which predicts the likelihood of criminal reoffending to guide sentencing. In 2016, ProPublica discovered that COMPAS predicts black defendants pose a greater risk of recidivism than they actually do. More recently, figures were released showing the Metropolitan Police’s facial recognition software “wrongly identifies members of the public as potential criminals 96% of the time”. Beyond the ethical dilemma of facial scanning in public spaces, the consequences of treating such technology as free of subjectivity are becoming reality in an increasingly alarming way.
AI in its simplest form is essentially a tool providing predictions based on a given set of data. Garbage in = Garbage out. This thinking leads to an epiphany of sorts.
Is the tool really at fault?
AI and machine learning make predictions based on historical context. If that context contains bias, it’s not a big leap to understand how unconscious bias ends up in algorithms. Even with society’s great leaps in ethical understanding in recent years, unconscious bias still exists in today’s world and is making its way into algorithms through the great funnel that is big data. Because we capture more data than ever, we need to understand that this includes both “good” and “bad” data.
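To make the point concrete, here is a minimal sketch – using entirely hypothetical hiring data, not any real system mentioned above – of how a naive model that simply learns past hiring rates per group will faithfully reproduce whatever bias that history contains. Garbage in, garbage out:

```python
# Hypothetical sketch: a naive "predictor" trained on biased historical
# hiring records reproduces the bias, because it only learns the past.
from collections import defaultdict

# Invented historical records for illustration: (group, hired)
history = [
    ("A", True), ("A", True), ("A", True), ("A", False),    # group A: 75% hired
    ("B", True), ("B", False), ("B", False), ("B", False),  # group B: 25% hired
]

def train(records):
    """Learn the historical hire rate for each group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [hired, total]
    for group, hired in records:
        counts[group][0] += int(hired)
        counts[group][1] += 1
    return {g: hired / total for g, (hired, total) in counts.items()}

def predict(model, group):
    """Recommend a candidate if the learned hire rate exceeds 50%."""
    return model[group] > 0.5

model = train(history)
print(predict(model, "A"))  # True  -- group A favoured, as in the history
print(predict(model, "B"))  # False -- the historical bias is reproduced
```

Real machine-learning models are vastly more sophisticated than this toy, but the underlying dynamic is the same: without deliberate correction, patterns in the training data – good and bad – become patterns in the predictions.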
So, how do we solve the unconscious bias that exists in big data technologies like AI? I think we solve it by eliminating it from our societal fabric. This is a big job, and one that must start with education. So, instead of thinking about how “Big Data” is impacting HR, it might be an idea to think about how HR can impact “Big Data”. If AI is a mirror that reflects our societal unconscious bias, then maybe we should change our appearance for the better – instead of blaming the mirror.