Ethics is a sub-discipline of philosophy; by definition, it is the theory that reflects on morality. This abbreviated definition alone raises the question of the extent to which ethics can meaningfully relate to the more practical questions of technology or of its sub-disciplines such as electrical engineering and electronics. This short essay is an attempt to clarify the relationship between the two disciplines and to make ethical reflection useful for technical decision-making.
Ethicists like to work with case studies that may be fictitious but can highlight dilemmas or paradoxes. Let us assume, for example, that the AI-based self-optimizing systems [1] developed by the Fraunhofer Institute for the printed circuit board sector were applied to areas with a broad social impact. It would then be technically feasible, for example, to roll out fully automated and somewhat euphemistically named self-service checkout systems across retail and food service. Let us also assume that these systems were self-learning and continuously optimized product range selection, ordering and payment processes, and other quality factors in the respective service areas 'for the benefit of the customer'. Theoretically, an automated link to the social or health data of individuals or groups would also be conceivable; these people could in turn be steered toward 'better' behavior via punishment and reward systems. Behavioral economics calls this 'nudging'. China sends its regards.
In the case of printed circuit boards, the Fraunhofer Institute justifies the use of such self-learning systems with error reduction and efficiency gains. A camera replaces the more complex automated optical inspection (AOI) and thus 'optimizes the decision quality of algorithms' [2]. The decisive factors here are 'inputting high-quality training data' and feeding the system with a 'good selection of data'; this is said to save 20% of production resources. From a business perspective, however, this benefit at one stakeholder level - the respective producer or retail company - is offset by costs at another stakeholder level: in our example, primarily for all those employees who operate checkout systems and whose livelihoods would be existentially affected by such a change. There are also enormous costs for society as a whole, which would have to absorb hundreds of thousands of unemployed people. Economic ethics calls these externalized costs. Even the apparent benefit for customers - saving time and hassle by eliminating complicated searches through the product range and long waits at the checkout - is put into perspective by the fact that they now have to do everything themselves when paying and may have to laboriously check everything.
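To make the role of 'high-quality training data' tangible, the following minimal sketch shows the principle of such an AI-supported inspection step. It is purely illustrative: the synthetic data, the scikit-learn classifier and the 25% label noise are assumptions chosen for demonstration and have nothing to do with the actual Fraunhofer system.

```python
# Illustrative sketch only: a stand-in for an AI-based optical inspection step.
# Assumes scikit-learn and NumPy; all data here is synthetic and invented.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)

# Stand-in for camera images of solder joints, reduced to feature vectors
# (e.g. brightness, contour, area statistics). Label 1 = defect, 0 = OK.
X = rng.normal(size=(2000, 8))
y = (X[:, 0] + 0.5 * X[:, 1] > 0.8).astype(int)  # hidden "defect rule"

# 'Good selection of data': clean labels vs. a corrupted training set.
y_noisy = y.copy()
flip = rng.random(len(y)) < 0.25          # 25% mislabeled training examples
y_noisy[flip] = 1 - y_noisy[flip]

X_tr, X_te, y_tr, y_te, yn_tr, _ = train_test_split(
    X, y, y_noisy, test_size=0.3, random_state=0)

for name, labels in [("clean labels", y_tr), ("noisy labels", yn_tr)]:
    clf = RandomForestClassifier(random_state=0).fit(X_tr, labels)
    acc = accuracy_score(y_te, clf.predict(X_te))
    print(f"{name}: test accuracy = {acc:.2f}")
```

The only point of the sketch is that the 'decision quality' of such a system stands and falls with the quality and selection of its training data: with clean labels the test accuracy is near perfect, while with corrupted labels it typically drops noticeably.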
What are technicians morally responsible for?
The question of ethics in relation to technology and technological development therefore primarily concerns the (reasonable) setting of limits on what is (technically) feasible. This applies in particular to technologies whose potential consequences for humanity as a whole, or for the inalienable human rights of groups or individuals, can be enormous or are difficult or impossible to assess: nuclear energy, genetic engineering, cyber-physical technology including transhumanism, and the challenges surrounding artificial intelligence. For issues of more moderate ethical relevance, decision-makers can draw on data from the well-established process of technology assessment and weigh it against socially or culturally relevant value scales in the form of rationally justified boundaries.
"With the computerization of technology, technology ethics is growing even more closely together with information ethics ..."
However, despite all this, it remains to be asked for what, to whom, or before which authority technicians as the relevant actors - and not 'technology' as an abstraction - are morally responsible. After all, the possibility of moral responsibility rests on the premise that the subjects of action have freedom of will. In addition to natural persons, business ethics partly attributes this to legal entities as well - and thus to the actual developers and producers of technology.

[Figure: Like any complex technology, modern electronics has consequences throughout its entire life cycle - something technicians should not shy away from addressing.]
The ethical question of moral responsibility in technology can therefore only relate to technicians as actors, i.e. as subjects of action endowed with a conditionally free will [3]. The theoretical basis for such an ethics of technology is thus - as for business ethics - a 'consequence-based legitimation concept' [4], whose pragmatic core is the ex ante 'weighing up of reasons for action with expected consequences of action' [5]. The extent to which the freedom of technicians in carrying out their activities gives rise to moral obligations - that is, what the concrete object and degree of technicians' moral responsibility is - can be reflected on from this perspective with reference to Kantian philosophy.
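Read pragmatically, such a consequence-based weighing can be caricatured as an ex ante expected-value calculation over the affected stakeholders. The following sketch is a deliberate simplification of the checkout example above; all stakeholders, probabilities and impact scores are invented, and the hard normative questions (whose scale? whose weights?) are precisely what such arithmetic hides.

```python
# Hypothetical ex ante weighing of an action's expected consequences.
# All stakeholders, probabilities and impact scores are invented; a real
# ethical judgment cannot be reduced to this arithmetic.
from dataclasses import dataclass

@dataclass
class Consequence:
    stakeholder: str
    probability: float   # how likely the consequence is (0..1)
    impact: float        # benefit (+) or harm (-) on an agreed scale

def expected_balance(consequences: list[Consequence]) -> float:
    """Sum of probability-weighted impacts across all stakeholders."""
    return sum(c.probability * c.impact for c in consequences)

checkout_automation = [
    Consequence("retailer", 0.9, +3.0),     # efficiency gains
    Consequence("customers", 0.6, +1.0),    # shorter queues
    Consequence("cashiers", 0.8, -4.0),     # job losses
    Consequence("society", 0.5, -2.0),      # externalized costs
]

print(f"expected balance: {expected_balance(checkout_automation):+.2f}")
```

A negative balance settles nothing morally, but it makes visible where externalized costs enter the calculation and which stakeholder weights drive the result.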
The principle of the usefulness of technology for life
The philosopher Immanuel Kant places so-called apodictic responsibility at the highest level of his theory of judgment. From this, the moral obligation of subjects of action - and thus also of technicians in the performance of their specific activity - arises necessarily from:
- (universal) moral reasons, e.g. the 'Golden Rule' [7] or inalienable fundamental rights and freedoms
- categorical basic principles with unconditional validity, e.g. universal human rights
- the non-delegable or non-divisible necessity of actions, e.g. for the protection of human life
If the intention of a specific action in the technical field contradicted these basic principles from the outset, it would be morally questionable or even reprehensible, and the resulting recognizable moral obligation would demand that it be stopped or refrained from. At a fundamental level, this can be understood as the principle of 'usefulness for life' [8], originally conceived by the rational ethicist Peter Ulrich for economic and business ethics. Because technicians usually, if not always, perform partial tasks in an economic or entrepreneurial environment, technology should generally serve to 'secure the foundations of human life' and 'expand the fullness of human life' [9].
The theoretical basis for technology ethics
This fundamental approach also corresponds to the considerations of the IEEE (Institute of Electrical and Electronics Engineers). It defines the objectives of 'Ethically Aligned Design' in relation to 'Artificial Intelligence and Autonomous Systems (AI/AS)' as follows: "We need to make sure that these technologies are aligned to humans in terms of our moral values and ethical principles (...) By aligning the creation of AI/AS with the values of its users and society we can prioritize the increase of human wellbeing as our metrics for progress in the algorithmic age" [10]. According to the authors, over one hundred leading global experts in the fields of artificial intelligence, law, ethics, philosophy and politics from the scientific community, governments and companies contributed to the policy paper [11].
So what are the specific 'general principles' that technology and technicians [12] should be guided by in everything they do [13]?
- Embody the highest ideals of human rights.
- Always give priority to the benefits for humanity and the natural environment.
- Reduce risks and negative consequences for society and for people's living conditions, especially where technologies (such as AI/AS) develop into independent socio-technical systems.
The problem of the practical application of ethics in technology
The difficulty in applying the above principles in practical decision-making lies particularly in having to assess the consequences of technology and engineering far into the future, and ultimately in having to assume responsibility for problematic applications of technology that are attributable to other subjects of action. This is the 'problem of attributing responsibility'. It raises the additional question of the extent to which omission, i.e. also 'technical omission, can have unwelcome consequences' [14], since, according to the above considerations, there is also an obligation arising from the non-delegable necessity of actions.
In view of this, and given increasing complexity, technology ethics as a reflection on moral principles cannot be pursued in isolation from other areas of ethics. Oliver Bendel provides examples of this in his definition of 'technology ethics' in the Gabler Wirtschaftslexikon: "With the computerization of technology, technology ethics is growing even more closely together with information ethics: (...) With regard to the development of production (...) of technology and technologies, in e-business (...), in Industry 4.0 (...) and generally with economic relevance, business ethics is also in demand, and with science-based (...) findings and products, science ethics" [15]. The issue of 'ethical sourcing' and ethical procurement in global supply chains - for example for electrical appliances - as well as the ethical handling of big data also requires close cooperation between technology ethicists and other specialist disciplines.
Shared responsibility and voluntary commitment of technicians
As part of a value chain, technicians can therefore rarely be ascribed final responsibility, but at most shared responsibility - especially for the consequences of the technologies and products they develop or manufacture. Ethical aspects must therefore be considered from the outset in the technical field - for example in the definition of norms, quality standards or codes of conduct - just as in all other areas of human activity. These considerations have recently led to an increasing responsibility for technicians in the development of (partially) autonomous systems. This is because "systems such as robots - as much as they take on human tasks - cannot achieve moral autonomy" [16]. Among other things, systems should therefore be designed as 'spaces of possibility' in such a way that "decision-making with moral intent is maintained within them" [17].
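In engineering terms, one possible reading of such 'spaces of possibility' is an architecture in which the system may propose actions but may not morally decide them: anything that touches people's rights or livelihoods is routed to a human. The following sketch shows this human-in-the-loop pattern only as a hypothetical illustration; it is not a rendering of Hubig's concept, and all names, flags and policies are invented.

```python
# Hypothetical human-in-the-loop gate: a (partially) autonomous system may
# propose actions, but decisions with moral weight stay with people.
from dataclasses import dataclass
from typing import Callable

@dataclass
class ProposedAction:
    description: str
    affects_persons: bool          # touches rights, livelihoods, health?
    ethics_review_passed: bool = False

def execute(action: ProposedAction,
            operator: Callable[[ProposedAction], bool]) -> str:
    """Route morally relevant actions to a human; never decide them autonomously."""
    if action.affects_persons:
        verdict = "approved" if operator(action) else "rejected"
        return f"{verdict} by operator: {action.description}"
    return f"executed autonomously: {action.description}"

def cautious_operator(action: ProposedAction) -> bool:
    # Stand-in for a real review process (invented flag, invented policy).
    return action.ethics_review_passed

print(execute(ProposedAction("re-order solder paste", affects_persons=False),
              cautious_operator))
print(execute(ProposedAction("cut cashier shifts", affects_persons=True),
              cautious_operator))
```

The design choice illustrated here is that decision-making with moral intent is preserved as a structural property of the system rather than delegated to it.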
The normative tasks involved in programming such systems therefore require the technicians entrusted with them to bind themselves voluntarily, from the outset, to the principles described above when carrying out their activities [18]. To be better equipped than before for this relatively new responsibility of their own for moral action - no longer a merely shared one - the IEEE proposes very specific measures [19]:
- Integration of applied ethics into technical education and training as well as research and development
- Creation of an ethical system/practice for the development and application of technologies
- Increased transparency in the development and production process
The VDE/DKE has also responded to the growing ethical responsibilities in technical matters and launched a new working group on 'Ethics in technology' in 2018. Interested parties from various disciplines have been invited to join and can register via the DKE website [20].
Conclusion
Just like all other specialists, technicians are morally obliged to align their actions and behaviour with generally recognized basic principles - for example, respecting human rights, creating greater benefit, and minimizing or avoiding harm. Even where the consequences of a particular technique or technology are difficult to foresee or not yet foreseeable, technicians bear a shared moral responsibility. Where negative or destructive consequences are foreseeable from the outset, or where technology has a massive normative impact on living environments, the consideration of moral factors in decisions and actions is, from an ethical perspective, an indispensable obligation. Only in this way can the 'usefulness of technology for life' be guaranteed; only in this way can technology serve people and not vice versa - with the positive side effect that it continues to be recognized by them as trustworthy and beneficial.
www.brand-excellence.at, www.image3.eu
About the person
Dr. Peter Vogler is a management and communications consultant who specializes in supporting processes with complex tasks and stakeholder structures. He holds a doctorate in philosophy and is involved in education and training as a university lecturer and adult educator.
References:
[1] Fraunhofer Society for the Promotion of Applied Research e. V.: https://www.fraunhofer.de/de/presse/presseinformationen/2020/mai/intelligente-optimierung-von-leiterplatten.html | accessed on 27.07.2020
[2] Ibid.
[3] 'Conditionally free' because the idea of an unconditionally free will leads to conceptual and logical difficulties: according to Peter Bieri, the will must 'belong' to a subject, and this subject is always determined by 'history' and 'experience'. Even the technician can only ever decide within the conditions of natural law, which is why his or her will must also be conceived as 'conditional'. Cf. Bieri, Peter: The Craft of Freedom. On the Discovery of One's Own Will, 2001; Vogler, Peter: Enough Silence. How Business Ethics Can Be Grounded in Practical Philosophy, 2014
[4] Heidbrink, Ludger: The Role of the Concept of Responsibility in Business Ethics, 2010
[5] Ibid.
[6] -
[7] A variant of the 'Golden Rule': 'Do unto others as you would have them do unto you.'
[8] Ulrich, Peter: Integrative Business Ethics. The Foundations of a Life-Serving Economy, 2008
[9] Ibid.
[10] IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems: Ethically Aligned Design. A Vision for Prioritizing Human Wellbeing with Artificial Intelligence and Autonomous Systems, 2019
[11] Ibid.
[12] According to the IEEE, the addressees of responsibility in the field of AI/AS are 'technologists'. By this, the organization means 'anyone involved in the research, design, manufacture or messaging around AI/AS including universities, organizations, and corporations making these technologies a reality for society', cf. ibid.
[13] Cf. ibid.
[14] Bendel, Oliver: Technology Ethics, in: Gabler Wirtschaftslexikon, https://wirtschaftslexikon.gabler.de/definition/technikethik-53883 | accessed on 27.07.2020
[15] Ibid.
[16] Ethics and technology - how do they fit together?, https://www.dke.de/de/veranstaltungen/symposium-ethik-in-der-technik/rueckblick-2018 | accessed on 27.07.2020
[17] Hubig, Christoph (Institute of Philosophy, TU Darmstadt), cited in: ibid.
[18] Vogler, Peter: Enough Silence. How Business Ethics Can Be Grounded in Practical Philosophy, 2014
[19] IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems: Ethically Aligned Design. A Vision for Prioritizing Human Wellbeing with Artificial Intelligence and Autonomous Systems, 2019
[20] Ethics and technology - how do they fit together?, https://www.dke.de/de/veranstaltungen/symposium-ethik-in-der-technik/rueckblick-2018 | accessed on 27.07.2020