Data use is skyrocketing, raising ethical concerns beyond regulatory compliance. Colm McDonnell explores how embedding digital ethics ensures fairness, transparency, and accountability in organisations
Data has become an integral part of modern life, and its usage is growing exponentially. From businesses to governments, organisations are collecting, storing, and analysing vast amounts of data to gain insights, make decisions, and develop new products and services.
However, with great power comes great responsibility. The more data organisations process, the brighter the spotlight on them: they must not only ensure regulatory compliance but also address the significant ethical concerns arising from how data is collected and used.
Furthermore, the growing use of technology, including artificial intelligence (AI) and robotics, fuels concerns about the extensive use of data and the potential for its misuse.
What is digital ethics?
Doing the right thing, regardless of legislation, takes you into the field of ethics.
Organisations usually focus on the regulatory obligations they must comply with, but they also have a responsibility to their stakeholders, including employees, customers, vendors, and investors. That responsibility goes beyond regulatory compliance.
Accountability can be complex to define and demonstrate, so organisations often set out principles to adhere to when processing data, such as privacy, fairness, non-discrimination, and transparency.
Digital ethics refers to a set of principles and moral values that guide the responsible and ethical use of data. Embedding digital ethics into an organisation means promoting its moral values by aligning data processing practices and processes with those values.
The following eight guiding principles define an approach to AI and digital ethics.
Code of digital ethics
All organisations should establish a code of digital ethics that sets out their commitments to ethical data practices. Digital ethics by design should be considered right from the outset of any product development, product enhancement or any proposed processing of data.
Periodic training and awareness programmes should be rolled out to promote ethical data processing practices. Over time, this builds a culture of trust, transparency, and safety within the organisation.
Human oversight and determination
Organisations must ensure that AI systems do not displace human accountability and responsibility. Human oversight and safeguards must be in place to prevent the misuse of data.
There should be cross-functional stakeholder collaboration and effective governance.
Proportionality, do no harm, safety and security
AI systems should be used only to the extent required to accomplish a legitimate goal. Risk assessments should be carried out to prevent potential harm from such applications.
Fair and transparent algorithms
Organisations must ensure that their decision-making and algorithms are fair and impartial. This can be achieved through ongoing monitoring and periodic testing.
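To make this concrete, the short Python sketch below illustrates one simple test that could form part of such periodic monitoring: comparing positive-outcome rates across groups, a check sometimes called demographic parity. The function, data, and threshold shown are hypothetical and for illustration only; they are not prescribed by any particular framework.

# A minimal sketch of one periodic fairness check: the gap in
# positive-outcome rates between groups (demographic parity).
# All names and data here are hypothetical, for illustration only.

def demographic_parity_gap(outcomes, groups):
    """Return the difference in positive-outcome rates across groups."""
    rates = {}
    for g in set(groups):
        group_outcomes = [o for o, grp in zip(outcomes, groups) if grp == g]
        rates[g] = sum(group_outcomes) / len(group_outcomes)
    return max(rates.values()) - min(rates.values())

# Hypothetical loan-approval decisions (1 = approved) for two applicant groups.
outcomes = [1, 1, 1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

gap = demographic_parity_gap(outcomes, groups)
print(f"Approval-rate gap between groups: {gap:.0%}")
# A gap above an agreed threshold (say, 10%) would trigger further review.

Demographic parity is only one of several possible fairness measures; the appropriate test and threshold depend on the context in which the algorithm is used.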
Transparency and explainability
Data should be collected and used transparently, so that individuals understand how their data will be used and can make informed decisions about whether to share it.
Further, where necessary, organisations should seek consent from individuals before collecting their data. That consent should always be freely given and fully informed.
Inclusion
Unconscious or conscious bias can affect inclusivity in an organisation.
Organisations should take the necessary steps to ensure that the processing of data does not result in, or conceal, discrimination or bias.
Vulnerable data subjects, who are most susceptible to the negative consequences of processing, require additional consideration.
Autonomy, freedom, respect, privacy, and dignity
Individuals must be able to make their own decisions, take their own actions, and make their own choices.
Data processing should not constrain how people choose to live their lives, and individuals should retain the autonomy to control how their data is processed.
The processing of data should respect human values. In particular, where processing is carried out through AI, the outcome should not dehumanise individuals.
Sustainability
AI innovations should be evaluated for both their environmental impact and their long-term viability, and they should align with the organisation's sustainability goals.
Colm McDonnell is Partner of Risk Advisory at Deloitte