
Morality & Me - Prologue: Why talk about morality at work?

Do Stormtroopers ever take a moment to share their feelings about what they do at work?

Do you ever find yourself thinking about what "the right thing to do" is? Or trying to justify an action which, perhaps, deep inside, you don't feel 100% comfortable with?

Over the next few weeks, I will be writing a few articles exploring my own experience of morality throughout the different stages of my career: how I came to think about it, what the various contexts were, what it meant to me, and what impact it had. To start off this series, I wanted to take some time to define a few things first: what we mean by morality, why and where it comes about in our working lives, and how it is currently talked about. Morality at work is the topic around which my MSc research will be centred for the next few months, so I will also share some of the insights I have gathered so far from research, broader literature, and conversations.

How do we define morality?

According to Jonathan Haidt from the University of Virginia (in an article from 2008), morality is one of the oldest topics in intellectual history, and one that has remained consistently explored through the centuries - just think of Adam and Eve being expelled from the Garden of Eden for eating the forbidden fruit that gave them the knowledge of good and evil. It is a constant in the philosophy literature, with views and meanings shaped over time by the development of civilisation. This makes defining morality particularly challenging, as its definition and perception are heavily contextualised. Still, in a 2011 paper, Haidt offers a definition of moral systems as "interlocking sets of values, virtues, norms, practices, identities, institutions, technologies, and evolved psychological mechanisms that work together to suppress or regulate self-interest and make cooperative societies possible". This definition in itself brings out the complexity of the factors involved in setting moral standards and driving moral decision-making, be it at an individual, group, or societal level. I find it interesting to note that the ultimate goal of moral systems, according to this definition, is to enable us to function as cooperative societies, by self-regulating actions we might otherwise be tempted to take out of self-interest. While we each form our own very personal approach to moral standards, they are all underpinned by altruistic considerations (or the absence thereof...).

From a psychology perspective, the emphasis to date has mostly been on defining how we as individuals develop these moral systems, from as far back as early childhood. Some famous models and theories have emerged, such as Kohlberg's six stages of moral development. The stages are sequential, i.e. one needs to have been reached to access the next, and they go from obedience and self-interest at one end to the adoption of a social contract and universal ethical principles at the other; in other words, from being primarily preoccupied with the punishment or consequences of decisions for oneself, to thinking about what is good for society and human beings in general. As with many theories, while it has been widely used and built upon, it has also attracted critiques: about the ability to generalise how people develop their moral standards (thinking, for example, about cultural differences and other contextual factors), as well as about the ability to capture and map the reality of this phenomenon, as I wrote about in a recent article.

In short, morality can be seen as an ensemble of personal beliefs and values on the one hand, and the psychological mechanisms they drive on the other, which help us act and behave in a way that we feel is appropriate within society. Of course, this leaves a great deal of space for subjectivity and individual differences in what this is made up of and how it materialises, and I believe that the important part is to try and understand how people experience morality for themselves, rather than to define and generalise what is "the moral thing to do".

How do questions of morality materialise at work?

When it comes to our working lives, I would argue that questions of morality come about, consciously or not, in more ways, at more levels and more frequently than we may realise. Here are a few ways in which this happens, connected with concepts which can be found in research literature:

  • Moral reasoning: This is quite a generic concept, and I shall start again with a definition, this time from a 2015 book by Øyvind Kvalnes, from the BI Norwegian Business School: "We can understand moral reasoning at work to be the activity of judging and deciding what is morally right and wrong, permissible, obligatory, and forbidden in an organizational context". This is effectively about how the assessment and judgement of doing the right thing comes about in all the decisions, big or small, that we make in the working environment. Here are a few examples, starting with the seemingly trivial: do you copy your colleague's boss on an email chasing them for a response for the second time? Do you agree to prioritise a piece of work for a colleague you are friends with, even though you have other pressing deliverables for others? Through to the more serious: the business analysis you have just run shows that staff costs are too high for the business; do you recommend a staff reduction, knowing that it may lead to colleagues losing their jobs? A new product hasn't fully completed all quality controls, yet there is pressure, and there are cost implications, in getting it launched; do you take a shortcut?

  • Moral dilemma: This is a subset of moral reasoning, covering situations where the two options someone is faced with carry roughly equal measures of negative impact. In some of the examples above, there is a path that may clearly be considered the "right" or fair thing to do, depending on the circumstances. A moral dilemma, by contrast, is a lose-lose situation, where whichever option is chosen will cause some degree of harm. In research, it is often tested using what is called the trolley problem: a runaway trolley is coming down a track; in its path, five people are tied to the track, who will die if run over. You stand by a switch and have the power to divert the trolley onto a separate track, where one person is tied down, who will die if you switch the tracks. The two options you face in this case represent two approaches to moral dilemmas. Switching the track, which research shows is what the majority of people would do, represents the utilitarian choice, where a decision is made based on an objective assessment of the outcomes, in this case favouring the lesser loss of life. The decision not to switch the tracks reflects what is called deontological or duty ethics, which is rooted not in the outcome of the decision, but in the moral principle involved in making it. In this case, switching the track would mean actively causing the death of an individual through one's own action (despite saving five others), and an individual guided by a deontological approach will consider that they cannot, as a human being, intervene to take an action which causes the death of another human being, preferring to let the situation unfold without their involvement.

Thinking about examples of moral dilemmas at work, I would argue that we often find ourselves standing at the switch. This may be when making certain hiring or promotion decisions under particular conflicting pressures (although many will argue that these should only ever be objective processes, I shall park this for now...), or decisions related to organisation design, downsizing, or budget allocation, for example, which may have direct impacts on different groups of individuals. Or, more simply, in some cases of calendar clashes which can't be reconciled, where you need to decide who you might be letting down.

  • Moral disengagement: This is where things turn even darker... Moral disengagement is a theory which was one of Albert Bandura's many contributions to psychology, derived from his broader social cognitive theory. The definition of moral systems I quoted earlier talks about self-regulation, and this is at the heart of moral disengagement: it is about the mechanisms that people may employ to disengage, or distance themselves, from their moral standards, in order to allow themselves to take particular actions. It has been studied in the context of extreme behaviours, such as genocides, and in the context of work it can be connected to business scandals and unethical behaviours. Bandura identified eight mechanisms that people may use to disengage from their moral standards. I will share a few examples:

    • Moral justification: this is about attaching a higher purpose to a harmful action, to justify the damage that it causes. It happens, for example, when people are guided into committing atrocities in the name of particular ideologies. It also happens when industries producing harmful products emphasise, say, their positive impact on local economies. In the recent series Dopesick, about the OxyContin scandal in the US, it is how Richard Sackler is portrayed as relentlessly claiming that his drug is curing pain for millions, while ignoring its deadly side effects.

    • Displacement of responsibility: this is about how one might maintain that a harmful action is not one's own direct responsibility. There are two main ways in which this happens: one is by claiming that a harmful action was only carried out by obeying orders from superiors, or guidance from other parties (consultants, scientists); the other is by keeping a distance from the actual harmful activities, which mostly applies to senior individuals, who give high-level instructions but actively seek to remain uninformed of actual harmful or unethical behaviours, so as to be able to claim ignorance.

    • Dehumanising, or blaming the victims: using Dopesick and the OxyContin scandal as an example again, this is the way in which victims were blamed for the misuse and abuse of the drug, rather than its danger being acknowledged. Dehumanising can take the form of vilification of the victims in the context of atrocities; in the corporate world, it could simply be the way in which staff reduction discussions focus on numbers, or even roles, in any case refraining from referring to actual individuals and personal consequences until later in the process.

I have covered a few concepts and pieces of theory with a view to showing how often moral questions come about, and how much we as individuals have to negotiate moral ambiguity. This in turn can take up a lot of personal resources and energy, which is what I am particularly interested in: how much do we, in our corporate lives, recognise and acknowledge this, and how much do we talk about it?

How much do we talk about morality?

Despite our being faced, as I have tried to argue, with moral considerations on a regular basis at work, morality remains something that is quite inconsistently talked about within organisations. Of course, matters of responsible business conduct are of increasing importance, and organisations will typically have ethics officers, as well as codes of conduct and associated training. They will also, in parallel, have systems of values detailing the expectations placed upon staff with regard to their behaviour at work. All of these should help people find the most appropriate path to follow when they encounter situations of moral conflict.

However, ethics at work tends for the most part to adopt a normative approach to what is appropriate or not in a business context, ultimately to try to avoid situations of wrongdoing such as corruption or grossly inappropriate behaviour. Many of the examples I have quoted so far in this article wouldn't result in a breach of a code of conduct, or anything that might be legally challenged. Often, it is just people thinking about what feels right, particularly when there isn't a clear answer to be found in a code of conduct or a set of rules. This isn't to say that we need more rules and more detailed codes of conduct, quite the opposite. We should, however, understand and appreciate the demands that such moral considerations place upon people, and the energy and resources they take out of them.

Talking more openly about such challenges may be one great way to help people work their way through them. Kvalnes, whom I quoted earlier, talks in his book about the principle of publicity, in the context of moral dilemmas and decision-making. This is the idea that, to assess what we might consider morally acceptable, we should ask whether we are prepared to publicly present and defend our decision and thinking process to a large group of people representing the different stakeholders in the situation. I have found this a very effective way of judging how comfortable I was with some of my own decisions, as I will discuss further in the following articles of this series. What is most effective and helpful here is not just to imagine oneself talking through a decision to a group, but to find a way to actually do it - perhaps not to a large group, but to a small group of friendly colleagues. Actually articulating a thinking process immediately brings it to life, and by hearing yourself say it, the chances are you'll know whether you can stand by it.

I hope this introduction to the world of morality as I see it has been interesting so far, and has provoked some thinking. In the next few articles, which I will publish over the coming weeks, I will work my way through my own career, talking through examples of times when I have faced questions of morality, the impact they had on me, and how they relate to some of the concepts I have shared here. I hope you will join me again, and as I start sharing more of my own experiences, I would love to hear your own thoughts and experiences on the topic of morality!

If you would like to be notified of the next article in this series, and other posts, please click here and leave your email address on my homepage. Thank you for your support!

