Understanding what is ethical and how to implement it in machines may require different approaches

Understanding what constitutes ethical behavior is necessary for designing machines that reason and behave morally. Yet even after many millennia of moral inquiry, there is still no agreement on how to define what is morally right and wrong.

Different ethical theories offer different justifications for what counts as ethical action and disagree about the course of action that follows. To design an artificial ethical agent, this issue needs an engineering answer.

Ethical decision-making by AI systems refers to the computational process of assessing options and selecting among them in a way that complies with societal, ethical, and legal constraints. To choose the most ethical option that still allows one’s objectives to be accomplished, it is vital to recognize and rule out the unethical possibilities.

Ethical actions

To decide whether we can create ethical agents, we first need to understand whether a formal computational description of ethical action is feasible. The philosopher Daniel Dennett, renowned for his work on free will, determinism, and the nature of consciousness, lists the following three conditions for ethical action:

  1. it must be possible to select among various actions;
  2. there must be general agreement within society that at least one of the options is socially advantageous;
  3. the actor must be able to identify the socially advantageous action and explicitly decide to take it because it is the ethical thing to do.

Theoretically, an agent that satisfies these requirements can be created. One strategy is as follows. To begin with, we assume that an ethical agent is always able to recognize the full range of options available to it. Given that, it is simple to create an algorithm that chooses an action from such a list. Since our agent has a variety of options at its disposal, the first condition is satisfied.

We can provide the system with information about each action, for example by labeling it with a list of characteristics. The agent can use these labels to determine the best action. Now imagine that we are able to assign each potential action in the given situation an “ethical degree” (for example, a number between 0 and 1, where 1 is the most ethical and 0 the least). This satisfies the second criterion. The actor can then use this knowledge to choose the most ethical option, which meets the third requirement.
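As a minimal sketch of this strategy (the `Action` class, the action names, and the numeric scores below are hypothetical illustrations, not taken from the book), the selection step reduces to ranking the labeled options by their ethical degree and taking the maximum:

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    ethical_degree: float  # hypothetical score in [0, 1]; 1 = most ethical, 0 = least

def choose_most_ethical(actions: list[Action]) -> Action:
    """Pick the action with the highest ethical degree (the third condition)."""
    if not actions:
        raise ValueError("The agent needs at least one available action (first condition).")
    return max(actions, key=lambda a: a.ethical_degree)

# Hypothetical options in a given situation; the scores stand in for a societal
# consensus on which options are advantageous (second condition).
options = [
    Action("withhold the information", 0.2),
    Action("share the information tactfully", 0.9),
    Action("share the information bluntly", 0.6),
]
print(choose_most_ethical(options).name)  # -> "share the information tactfully"
```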

The different approaches

There are three primary categories for ethical reasoning:

  • Top-down approaches, which derive specific choices from general rules;
  • Bottom-up approaches, which infer general principles from specific examples. The goal is to give the agent enough information about what others have done in comparable circumstances, together with the means to combine that information into something ethical;
  • Hybrid approaches, which blend aspects of the top-down and bottom-up approaches in order to foster the thoughtful moral reflection that is seen as crucial for making ethical decisions.

Top-down

A top-down approach to modeling ethical reasoning specifies what the agent should do in a given situation according to a specific ethical theory (or possibly a set of theories). These models formally define the rules, obligations, and rights that direct the agent’s decision. Top-down approaches frequently build on Belief-Desire-Intention architectures and extend work on normative reasoning.

Different top-down strategies adopt different ethical theories. Maximizing models base the decision on how well a particular value is satisfied, which roughly corresponds to the utilitarian view of “the best for the most”.
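A rough sketch of a rule-based, maximizing top-down model might look like the following (every rule, field name, and utility value here is a hypothetical illustration, not a scheme from the book): explicit norms veto impermissible actions, and a utilitarian criterion ranks the rest.

```python
def violates_norms(action: dict) -> bool:
    # Hand-written deontic constraints, e.g. "do not harm", "do not deceive".
    return action["causes_harm"] or action["deceives"]

def total_wellbeing(action: dict) -> float:
    # Utilitarian aggregation: sum the utility each affected person receives.
    return sum(action["utilities_per_person"])

def top_down_choice(actions: list[dict]) -> dict:
    permissible = [a for a in actions if not violates_norms(a)]
    if not permissible:
        raise ValueError("No action satisfies the encoded norms.")
    return max(permissible, key=total_wellbeing)

candidates = [
    {"name": "nudge covertly",   "causes_harm": False, "deceives": True,  "utilities_per_person": [0.9, 0.8]},
    {"name": "recommend openly", "causes_harm": False, "deceives": False, "utilities_per_person": [0.7, 0.6]},
]
print(top_down_choice(candidates)["name"])  # -> "recommend openly"
```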

Top-down strategies presuppose that AI systems can explicitly consider how their actions may morally affect others. Such systems ought to meet the following standards:

  • Representational languages with sufficient depth to connect agent actions and domain knowledge to the established norms and values;
  • Planning processes that implement the practical reasoning the theory requires;
  • Deliberative capabilities to determine whether the scenario at hand is actually morally relevant.

Top-down strategies impose an ethical system on the agent. These methods make the implicit assumption that ethics and the law are comparable and that a collection of rules is enough to serve as a guide for ethical behavior. They, however, are not the same. Usually, the law outlines what we are allowed to do and what we must refrain from doing. While ethics teaches us how to play a “good” game for everyone, the law only explains the rules of the game and offers no guidance on how to best win.

Furthermore, even if something is legal, we could still find it unacceptable. And even though we may think something is right, it might not be allowed.

Bottom-up

Bottom-up approaches presume that ethical behavior is learned by watching how others behave. A morally competent robot, in Malle’s opinion, ought to include a mechanism that enables “constant learning and improvement”. In his view, for robots to develop ethical competence, they must acquire morality and norms in much the same way young children do. In one study, Malle asked individuals to rate their morality using the Moral Foundations Questionnaire, which assesses the ethical principles of harm, fairness, and authority; this information was then used to estimate the moral acceptability of a collection of propositions.
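A minimal bottom-up sketch could look like the following; the foundation scores, the ratings, and the nearest-neighbour estimator are hypothetical illustrations of generalising from rated examples, not Malle’s actual method.

```python
import math

# Hypothetical training data: each past action is described by how strongly it
# engages the harm, fairness, and authority foundations (0-1), together with the
# average acceptability rating people gave it (0 = unacceptable, 1 = acceptable).
rated_examples = [
    ({"harm": 0.9, "fairness": 0.2, "authority": 0.5}, 0.1),
    ({"harm": 0.1, "fairness": 0.9, "authority": 0.4}, 0.9),
    ({"harm": 0.3, "fairness": 0.6, "authority": 0.8}, 0.7),
    ({"harm": 0.7, "fairness": 0.3, "authority": 0.2}, 0.2),
]

def distance(a: dict, b: dict) -> float:
    return math.sqrt(sum((a[k] - b[k]) ** 2 for k in a))

def predicted_acceptability(new_action: dict, k: int = 2) -> float:
    """Estimate the acceptability of a new action from the k most similar rated examples."""
    nearest = sorted(rated_examples, key=lambda ex: distance(ex[0], new_action))[:k]
    return sum(rating for _, rating in nearest) / k

print(predicted_acceptability({"harm": 0.2, "fairness": 0.8, "authority": 0.5}))  # ≈ 0.8
```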

Bottom-up approaches rest on the core tenet that what is socially acceptable is also ethically acceptable. Nonetheless, it is well known that positions that are de facto accepted are sometimes unacceptable by independent (moral and epistemic) standards and in light of the facts at hand.

Hybrid

Hybrid approaches combine top-down and bottom-up elements in an effort to make ethical reasoning by AI systems both legally and socially acceptable.

Instead of being founded on moral guidelines or optimization principles, this viewpoint is grounded in pragmatic social heuristics. According to this perspective, both nature and nurture have a role in the development of moral behavior.

By definition, hybrid approaches can benefit from both the top-down and bottom-up approaches’ advantages while avoiding their drawbacks. These might provide an acceptable path forward as a result.
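One way to picture the combination is the sketch below, where all of the rules, fields, and scores are hypothetical stand-ins rather than anything prescribed in the book: an explicit norm vetoes impermissible options (top-down), and a score learned from observed human judgments ranks whatever remains (bottom-up).

```python
def permissible(action: dict) -> bool:
    # Top-down: a hard, hand-written rule.
    return not action["causes_harm"]

def hybrid_choice(actions: list[dict]) -> dict:
    candidates = [a for a in actions if permissible(a)]
    if not candidates:
        raise ValueError("No action is permissible under the encoded norms.")
    # Bottom-up: "learned_acceptability" stands in for a value estimated from data,
    # e.g. the nearest-neighbour estimate sketched in the bottom-up section.
    return max(candidates, key=lambda a: a["learned_acceptability"])

options = [
    {"name": "override the user", "causes_harm": True,  "learned_acceptability": 0.8},
    {"name": "ask for consent",   "causes_harm": False, "learned_acceptability": 0.7},
    {"name": "do nothing",        "causes_harm": False, "learned_acceptability": 0.3},
]
print(hybrid_choice(options)["name"])  # -> "ask for consent"
```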

Who decides the values?

When designing AI systems, the cultural and personal values of the individuals and societies involved must be taken into account. To evaluate decisions made on the basis of such data, it is particularly important to consider and make explicit the following elements.

Crowd: Is the sample from which the data is being gathered sufficiently diverse to reflect the range and diversity of people who will be impacted by the AI system’s decisions? Furthermore, data gathered about decisions made by people inevitably reflects (unconscious) prejudice and bias.

Choice: Voting theory advises that giving only two options can easily be a false portrayal of the real choice, despite the fact that a binary choice may initially appear to be simpler.

Information: Answers are always framed by the question that was asked. The phrasing of a question may carry political intent, particularly for questions that stir up strong emotions.

Involvement: Generally speaking, not all users are equally impacted by the decisions that are made. Nevertheless, regardless of participation, every vote is equally important.

Legitimacy: Democratic systems rely on majority decisions. However, acceptance of the outcome can become a concern when margins are extremely slim. Whether voting is compulsory or voluntary also affects the results.

Electoral system: the set of regulations that govern how people are consulted, how elections and referenda are held, and how their outcomes are determined. The way this system is set up greatly influences the outcomes.

Varying value priorities will lead to different choices, and it is frequently impossible to fully realize all desired values. Values are also extremely nebulous, abstract ideas that can be interpreted in a variety of ways depending on the user and the situation.

In deliberative approaches, decisions are based on long-term goals and underlying shared values rather than on short-term convenience and narrow self-interest. Drawing on the practical application of platforms for deliberative democracy, Fishkin identifies the following as the vital elements of valid deliberation:

  • Information: All participants have access to accurate and pertinent data.
  • Substantive balance: Different perspectives are compared on the basis of the evidence they offer.
  • Diversity: All significant positions relevant to the issue at hand are accessible to participants and are given consideration.
  • Conscientiousness: Participants thoughtfully consider each point.
  • Equal consideration: Evidence is used to weigh opinions, not the person advancing them.

To which we can add one more principle.

  • Openness: For the purpose of designing and implementing collective wisdom approaches, descriptions of the options considered and the decisions made are transparent and easily accessible.

As we have seen, creating an AI agent that is morally right is not that simple: even when following these different approaches, some chance (albeit small) of injustice remains.

How many times have we already run into such systems, early-stage as they are? For example, when a social media profile is banned or an account is suspended and there is no way to appeal. These examples should alert us to the undemocratic nature of an AI system that makes irreversible judgments. Not only does it take us back toward dictatorial systems, it also prevents fair use of the platforms.

Therefore, if such behavior is to be avoided and decisions are to be as objective as possible, no judgment should ever be irrevocable: we should always allow for appeal to a human judgment, especially where there is ambiguity. In addition, logical reasoning and common sense should never be absent from the application of a rule, so that maximum objectivity can be pursued.

How many times have we had to suffer rules endorsed by the majority that later turned out to be wrong? Mere numbers, therefore, do not guarantee the objectivity and ethicality of a rule, nor does a majority’s refusal to recognize a rule’s uselessness make it any less useless.

Responsible Artificial Intelligence by Virginia Dignum is available to purchase here