Digitalisation in the age of Corporate Responsibility

Toshiba Robot
Photo by Alex Knight on Unsplash

In the last blog post we looked at three fundamental questions that are particularly relevant for any board of directors: they are the basis upon which fiduciary duty, well beyond its legal definition, is constructed, and they outline the framework within which a board's fiduciary duty – including its legal incarnation – is bound to evolve over time.

These three questions are:

  • Responsible for what?
  • Responsible towards whom?
  • Who is responsible?

In this post I’d like to summarise what those three questions may mean when applied to one specific topic: digitalisation.

For a couple of reasons:

  1. Together, sustainability and digitalisation are the two most dominant paradigm shifts our global economy is undergoing at this very moment.
  2. Digitalisation, along with the resulting automation and geographical distribution, triggers questions and concerns that are new to businesses and, by and large, unanswered. There is no previous, proven way to ‘solve’ the arising challenges. As of yet, at least.

Interesting and fairly abundant research has been published in this field. One of the most recent pieces is an academic paper by INSEAD researchers. What is it that they found and concluded?

‘Responsible for What’ & digitalisation?

In other words: For what is, or can be, a company or organisation actually responsible?

What can be said about this is:
Digitalisation / The digital economy can

  • Make existing CR issues manifest themselves in novel ways.
    Think of the issues around AI and job replacement as an example.
  • Help alleviate or solve existing CR issues.
    An example would be transparency and the opportunities offered by distributed ledgers.
  • Intensify existing CR issues.
    Think big data and privacy concerns.
  • Open up new CR issues.
    Think of the manipulation of markets that are not geographically connected but nonetheless form a single, unified global market.

The focus here is hence on actions: things that get done, implemented in some specific way, and the consequences thereof.

This aspect is therefore, all things considered, reasonably easy to address: it is ‘only’ an update of social convention with regard to what constitutes, or does not constitute, a morally correct and/or acceptable action.

‘Responsible towards Whom’ & digitalisation?

In other words: Towards whom (or what) can an organisation or company be responsible at all? Hence: from among all those impacted by the firm’s actions – towards whom is the company indeed responsible, and towards whom not?

This is – evidently so – the ‘stakeholder’ topic resurfacing. Combined with the question of who the truly ‘relevant’ stakeholders indeed are. The latter hinges on three components: power, legitimacy, and urgency.

What can be said about this is:
Digitalisation / The digital economy can

  • Change the salience of previously existing stakeholder groups.
    Think for a moment about how social media has changed how important public opinion is.
  • Make new – or previously ignored – stakeholder groups emerge.
    Consider for a moment indigenous tribes basically unknown to the public at large. Or indeed the increased relevance of animals and their welfare.

The focus of this dimension is the target (‘patient’, or even ‘victim’) of the actions taken: individuals, living beings, or potentially even objects that are affected by actions. Morally speaking: are they important and relevant enough to be considered?

A question currently under academic (and possibly legal) consideration, and evidently unanswered for the time being, is: What about algorithms, robots, and AI? Are they, or should they be, considered stakeholders? And if so, in what way and to what extent?

Though somewhat more difficult to answer than the first question, this aspect is still reasonably well defined and hence answerable: it is ‘only’ an update of current practices, taking possible consequences into account, and asking whether the ‘thing’ (person or otherwise) affected should morally have a right to be considered as affected, and hence the right and weight to influence the outcome of the action.

‘Who is responsible’ & digitalisation?

In other words: Given an action and the ‘targets’ affected by that action, what (or who) is the entity that is at the root and origin of all the subsequent consequences?

The simplest answer would be: a company is responsible for its own direct actions.
This is indeed how this question has been answered for decades, if not centuries.
However, the answer is far from as simple or clear in the present. In times of outsourcing, distributed teams, multi-layered sourcing and production processes, and social media influencer trends, the answer is invariably much more nuanced and challenging.

‘Own (direct) actions only’ is no longer a maxim to go by. Even if plenty of companies still live by it.
But where exactly does that ‘thin red line’ run?

This is where the concept of ‘complicity’ enters the picture.
‘Complicity’ describes how firms may contribute to social impacts through their relationships. An aspect that is already frequently discussed in the context of human rights. But evidently it has a much broader sphere of influence.

What can be said about this is:
Digitalisation / The digital economy can

  • Blur answers to the question of where responsibility resides between firms and individuals.
    For example: is the firm as an entity responsible, or indeed the individuals within it who drive its operations (see the earlier post on this exact topic)? And what about the sharing economy?
  • Raise novel questions concerning where responsibility resides between humans and machines.
    For example: if an AI tool is in charge of early HR hiring decisions – say, pre-screening CVs – who is responsible for its decisions? Those who built the tool? Those who procured the data used to teach it? Those who owned that data originally? The organisation using the tool? The individual authorising the use of the tool? Or indeed the individual who pressed the ‘go’ button in the tool?

A question currently under academic (and possibly legal) consideration, and evidently unanswered for the time being, is: What about algorithms, robots, and AI? Are they, or should they be, considered actors, and hence be held responsible? Or is their creator the responsible party? And whichever way it may be: responsible in what way and to what extent?

The focus of this dimension is to home in on the ‘root cause’, or if you prefer the ‘trigger’, of an action on the one hand, and on any effect on (relevant) stakeholders on the other.

It is the most challenging of these three questions.
And the one that is least intuitive, least clear cut – while equally the most impactful in the overall picture.

More so as we move into a time and age where that trigger’s origin may lie with non-human – read: digital – actors …

Insights and Conclusion

What we can safely assume – at least for the time being: the intent, purpose and goals of non-human (digital) actors are set and defined by humans.

Puzzling and important to consider: whereas historically companies – organisations – were rather clearly defined in terms of their boundaries, this is no longer necessarily the case. Think of ecosystems such as those created through platforms like AirBnB, LeBonCoin, or Tinder. How do the three responsibility questions above need to be answered in that context?

And finally: consumers. They, too, are players with responsibility of some type in this overall ecosystem.

The danger of AI is weirder than you think | Janelle Shane | TED Talk