
Seven Principles of Digital Being

(This is a translation of the closing remark from my Japanese book “Digital Identity,” published in 2021. Note that this remark follows the chapters on privacy, where privacy principles are discussed extensively. Things such as the “Right to be forgotten” are therefore taken for granted, and the following principles are in addition to those privacy principles.)

On Closing – Seven Principles of Digital Being

 The COVID pandemic forced many people and organisations to migrate to the cyber continent without adequate preparation. Because of the suddenness of the migration, the set of governing principles remained underdeveloped. As a result, everyone was confronted with challenges that some had been aware of for a long time.

These include, among other things, the problem of anonymous slander by the heartless, the loss of access to one’s past statements, photos, and so on due to account suspension (a.k.a. account ban) by powerful platform operators, and the limitation of the services one can receive due to the inability to prove one’s own attributes.

 In light of this, the author believes that the framework now needed for cyberspace must fulfil the following seven principles, the ‘Seven Principles of Digital Being’.

Principle 1: Accountable Digital Being
Everyone should be able to establish and re-establish Accountable Digital Beings, where they are held accountable for their actions.

Principle 2: Expressive Digital Being
Each person should be able to express themselves through their digital being, using both self-asserted data and data about their own qualities attested to by others.

Principle 3: Fair Data Handling
All participants shall observe the Privacy Principles with regard to the handling of data concerning individuals.
The purpose of data handling shall not harm the individual concerned.

Principle 4: Respect for the right NOT to be forgotten
Technical measures shall be taken to ensure that Digital Beings cannot be made to appear as if they had never existed and that their attributes cannot be overwritten.

Principle 5: Human Friendly
The system should take into account the information asymmetries between individuals and legal entities, the bounded rationality of individuals, and the existence of vulnerable groups in society.

Principle 6: Adoption-Friendly
The technology should be open, utilise existing infrastructure as far as possible and be continuously tested to ensure interoperability.

Principle 7: Everyone benefits
Individuals, of course, but also companies and governments should be able to benefit from the system (otherwise, it would not be implemented and would not stand as a system).

Principle 1: Accountable Digital Being

I do not think that you would all run out into the centre of town and suddenly shout out certain things or slander people. This is because you will be made to explain why you did so and take responsibility for the consequences of your actions. In other words, we don’t do stupid things in public because we are ‘accountable beings’ in the real world. Occasionally there are people who do, but they are so rare that they are featured on news programmes.

 In contrast, what about in cyberspace? Do you slander people and shout conspiracy theories at the top of your voice because you think you are in a safe place? Of course not, if you are reading this. But many people do. And it has led to reality TV performers being hounded relentlessly.

 These serious invasions of privacy must be prevented. And it is important that people can establish an accountable Digital Being for this.

 This does not mean that they should “always use their real names”. There is nothing wrong with acting under a pseudonym or anonymously. On the contrary, it is necessary in order to have freedom of speech and thought. However, once an incident has occurred, an appropriately authorised body (for example, with a court warrant and permission from the privacy commissioner) must be able to remove that mask of anonymity. This situation is called ‘Partially Anonymous’, and the agency that lifts the mask is called the ‘Designated Opener’.

 Such a mechanism allows a person who has committed a tortious act that harms another person to be lifted from anonymity and required to:
(1) explain what they have done;
(2) provide evidence that it was justified; and
(3) accept punishment if (2) shows that it was not.
In other words, ‘accountability’ is required. And when people become aware of this, they will consider whether or not the actions they are now taking part in – such as flaming – are really socially acceptable. In many cases, people who take part in flaming incidents do so out of a sense of ‘personal justice’. They are carrying out ‘private punishment’, so to speak, and derive pleasure from it. However, when users tried to send offensive messages and the app ReThink displayed the message, “Are you sure you want to send this?”, 93% of the targeted teenagers were discouraged from posting.
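The ‘Partially Anonymous’ arrangement described above can be sketched in a few lines of code. This is a deliberately simplified model, not the book’s concrete design: real systems would use group signatures or similar cryptography, and the identities, class names and warrant check here are all illustrative.

```python
import hmac, hashlib, secrets

class DesignatedOpener:
    """Hypothetical registrar: the only party able to link pseudonyms to identities."""
    def __init__(self):
        self._key = secrets.token_bytes(32)   # opener's secret key
        self._registry = {}                   # pseudonym -> real identity

    def register(self, real_id: str) -> str:
        # The pseudonym is a keyed hash of the identity: unlinkable without the key.
        pseudonym = hmac.new(self._key, real_id.encode(), hashlib.sha256).hexdigest()[:16]
        self._registry[pseudonym] = real_id
        return pseudonym

    def open(self, pseudonym: str, warrant: bool) -> str:
        # The mask is lifted only with proper authorisation (e.g. a court warrant).
        if not warrant:
            raise PermissionError("opening requires authorisation")
        return self._registry[pseudonym]

opener = DesignatedOpener()
alias = opener.register("alice@example.com")   # illustrative identity
# Ordinary participants see only `alias`; it reveals nothing about Alice.
# After a tortious act, an authorised body can ask the opener to reveal her:
assert opener.open(alias, warrant=True) == "alice@example.com"
```

The point of the design is that everyday speech stays pseudonymous, while accountability survives because exactly one designated party, under external authorisation, can open the mask.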

On the other hand, in a context where this kind of accountable Digital Being is becoming more common, failure to have it can also lead to social exclusion. So one needs to be able to establish it whenever one wants to, and to re-establish it even if one has to leave for some reason.

Principle 2: Expressive Digital Being

 Recall Chapter 7. The key to our happiness has been the maintenance of good relationships with those close to us. This requires a process of projecting various attributes onto the other person to create the impression we want them to have of us (‘projected-identity’), measuring the gap with the impression they actually form (‘perceived-identity’), and projecting additional attributes to narrow that gap.

 To achieve this, it must be possible to fine-tune which attributes are to be projected out to whom. It will also be important to be able to show that many attribute values are not only claimed by the person themselves but also attested by third parties.

 Thus, an ‘expressive’ Digital Being must be able to freely and selectively provide detailed attributes with third-party proof.
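Such selective provision of third-party-attested attributes can be sketched as follows. This is a minimal illustration of the idea, not a real verifiable-credential format: the issuer’s HMAC key stands in for a public-key signature (in a real system anyone could verify the signature without the issuer’s secret), and the attribute names are invented.

```python
import hashlib, hmac, json, secrets

ISSUER_KEY = secrets.token_bytes(32)  # stand-in for the issuer's signing key

def issue_credential(attributes: dict):
    """Issuer: commit to each attribute with a salted hash, then sign the commitments."""
    salts = {k: secrets.token_hex(16) for k in attributes}
    commitments = {k: hashlib.sha256((salts[k] + str(v)).encode()).hexdigest()
                   for k, v in attributes.items()}
    payload = json.dumps(commitments, sort_keys=True).encode()
    signature = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return commitments, salts, signature

def present(attributes: dict, salts: dict, keys: list) -> dict:
    """Holder: reveal only the chosen attributes, together with their salts."""
    return {k: (attributes[k], salts[k]) for k in keys}

def verify(revealed: dict, commitments: dict, signature: str) -> bool:
    """Verifier: check the issuer's signature, then each revealed value against its commitment."""
    payload = json.dumps(commitments, sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(signature, expected):
        return False
    return all(hashlib.sha256((salt + str(value)).encode()).hexdigest() == commitments[k]
               for k, (value, salt) in revealed.items())

attrs = {"name": "Alice", "over_18": True, "university": "Example Univ."}
commitments, salts, sig = issue_credential(attrs)
# Alice proves she is over 18 without revealing her name or university:
disclosure = present(attrs, salts, ["over_18"])
assert verify(disclosure, commitments, sig)
```

The fine-tuning the text calls for is exactly the `keys` argument of `present`: the holder decides, per relationship, which attested attributes to project.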

Principle 3: Fair Data Handling

 Thus, for individuals, seeking their own well-being means actively and selectively giving out data. However, in order to feel comfortable doing this, they need to be able to trust that the data they give out will be treated fairly. It would be a tragedy if information you shared with one person alone were made public to everyone you know. Similarly, it would be terrible if information given to only one company leaked and became known to all companies. The gap between self-identity and perceived-identity, previously minimised, would widen again.

 The unauthorised publication and leakage mentioned above are examples of improper handling, and there are many other kinds. Any handling that falls outside the expectations of the individual concerned, or that may cause harm to that individual, is improper. For handling to be considered proper, the privacy principles must be adhered to, for example by following a privacy framework such as ISO/IEC 29100 (JIS X 9250).

 In order to avoid going outside of the individual’s expectations, it is necessary to communicate to the individual exactly what is being done. This is the role of the Privacy Notice. The standard for this is ISO/IEC 29184, for which the author was the project leader. This will soon be a JIS standard.

 In addition, for individual systems, it is also necessary to conduct risk assessments using the Privacy Impact Assessment (PIA) framework and to obtain approval from stakeholders. A PIA involves external stakeholders and the publication of PIA reports, which provides a degree of transparency and enables third parties to monitor operations.

 In order for the legitimate handling of such data to become widespread, not only legislative pressure but also social pressure will be necessary. In this sense, the responsibility lies with each and every one of us as members of society.

Principle 4: Respect for the right NOT to be forgotten

 The right not to be forgotten is an unfamiliar term. The author considers it one of the fundamental human rights in cyberspace, inextricably linked to the similar-sounding ‘right to be forgotten’.

 It is well known that the answer to the question “when do people die?” is “when they are forgotten”. We who have migrated to the cyber continent would be erased from existence if our data were completely deleted from there. Of course, we may disappear of our own will, but unless we do so, we should be able to remain as ‘Accountable Digital Beings’. That Digital Being will naturally include various attributes about you.

 Besides physical erasure, erasure on the cyber continent also includes social erasure. This is done by rewriting a person’s attributes into ones that the mainstream of the time regards as socially undesirable. In contemporary Japan, a person can be socially erased by attaching attributes such as ‘paedophile’ or ‘misogynist’ and promoting them as such. Even without going that far, simply deleting their university graduation records and labelling them as guilty of academic fraud would be damaging enough. (This would be unthinkable in Japan, but it should be assumed that there are countries where such things are possible.)

 A Digital Being needs countermeasures against such attacks. It must be able to store attributes about itself together with the signature of the issuer of those attributes. And since the issuer may later deny that the signing key ever existed, the key’s validity must also be recorded in an objective, tamper-proof manner.

 In view of this, it is important to store the signatory’s key, time-stamped, in a widely replicated store such as a blockchain that is used for various economic activities, so that anyone can later verify that the signing key was valid at the time. If the need arises, one can also write one’s attributes to such a place so that an attacker cannot erase or rewrite them. In this way, even if the physical entity in the real world is deleted, it will remain as a memory on the cyber continent.
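The tamper-evidence that makes such a store trustworthy can be illustrated with a hash chain, the basic mechanism underlying blockchains. This is a toy sketch, not a production ledger, and the key identifier and attribute records in it are invented for illustration.

```python
import hashlib, json

class HashChainLog:
    """Minimal append-only log: each entry embeds the hash of the previous one,
    so rewriting or deleting any past record breaks every later hash."""
    def __init__(self):
        self.entries = []

    def append(self, record: dict) -> None:
        prev = self.entries[-1]["hash"] if self.entries else "0" * 64
        body = json.dumps({"prev": prev, "record": record}, sort_keys=True)
        self.entries.append({"prev": prev, "record": record,
                             "hash": hashlib.sha256(body.encode()).hexdigest()})

    def verify(self) -> bool:
        prev = "0" * 64
        for e in self.entries:
            body = json.dumps({"prev": prev, "record": e["record"]}, sort_keys=True)
            if e["prev"] != prev or e["hash"] != hashlib.sha256(body.encode()).hexdigest():
                return False
            prev = e["hash"]
        return True

log = HashChainLog()
log.append({"key_id": "issuer-key-01", "valid_from": "2021-01-01"})
log.append({"subject": "alice", "attribute": "BSc, Example Univ.", "signed_by": "issuer-key-01"})
assert log.verify()
# An attacker who rewrites a past attribute invalidates the chain:
log.entries[1]["record"]["attribute"] = "academic fraud"
assert not log.verify()
```

A single machine running this sketch could still discard the whole log, which is why the text insists on wide replication: tampering then requires breaking the chain on every copy at once.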

Principle 5: Human Friendly

 When designing information systems, the people who use them must come first. In practice, however, the convenience of the people or machines managing the system often takes precedence. As a result, systems are provided that are difficult to use, or that deceive people.

 When thinking about being human-friendly, there are a number of assumptions that need to be made about individuals. The author regularly considers the following points.
(1) Information asymmetries between individuals and corporations
(2) The bounded rationality of individuals
(3) The existence of socially vulnerable people who are not part of the majority.

 Information asymmetry between individuals and corporations refers to the fact that corporations generally have more available resources and a greater volume of information than individuals. In an information economy, the greater the volume of information, the more advantageous it is in terms of transactions. As a result, in a pure market economy, an optimal equilibrium is not reached. Therefore, measures are needed to support the individual side and alleviate information asymmetries. Third-party evaluations and the publication of such evaluations are among such measures.

 The second point, the bounded rationality of individuals, refers to the fact that human beings, however rationally they try to act, face limits imposed by their cognitive abilities. For example, a fully rational individual entering into a contractual relationship would read and understand the contract, as well as the laws and circumstances surrounding it, and act accordingly; most people do not. Or rather, it is impossible to do so. Most people cannot read and understand the wording of the contracts, and even those who could would not have the time: merely reading the privacy policies presented to them in a year would take about a month. The operation of society on the cyber continent should be based on an acknowledgement of this.

 For example, when providing a service to an individual, the content of the terms of use or privacy notice should stay as close as possible to what common sense would expect, and only the deviations and important points should be extracted and presented to the individual, with the full text available for later reference. The GDPR also requires this.

 The third point, the existence of socially vulnerable people who are not part of the majority, means that such non-mainstream people must not be left out, and that the system should be designed to include them.

 People on the side of the majority tend to mistakenly believe that their common sense is universal, or that what is easy for them to use is good enough for everyone. In many cases, this works as an offence against minorities.

 Some minorities are minorities based on physical characteristics and abilities, while others are minorities in the sense of ideological beliefs and cultural backgrounds. The needs of these people are easily ignored by the majority, sometimes out of indifference, sometimes under the guise of ‘justice’. The new rules of the cyber continent must not tolerate such things. And if we are friendly to minorities, we will surely be friendly to the majority, both cognitively and in terms of user experience.

Principle 6: Adoption-Friendly

 When new technologies emerge, it is seldom the case that something built completely from zero replaces the existing infrastructure in one sweep. Usually, new technologies are built on top of existing infrastructures while utilising them. In this context, it is important for new technologies to consider their affinity and connectivity with existing technologies.

 Also, open technology spreads far more easily than closed technology. If the formulation process is open, it is easier to receive requests from a wider range of stakeholders, resulting in a wider scope of application and easier adoption among engineers.

 In addition, to ensure interoperability, a common testing infrastructure is needed to check compliance with the specifications. The example of Open Banking in the UK illustrates this. Specifications are written in natural languages such as English, but unlike programming languages, natural languages allow a certain amount of variation in interpretation. As a result, the same English standards document is often interpreted differently, and the resulting products do not interoperate. In the UK’s Open Banking, it was noted that initially it took many weeks for a single bank and a fintech to connect. The solution was a conformance test suite*. Each implementation now goes through it, and the time taken to connect is reported to have been reduced to around 15 minutes.
The security part of the suite was later donated to the OpenID Foundation, where it was enhanced and became the FAPI Self-certification test. This is used in Australia, Brazil and other countries.

 It is not just about initial connectivity. Systems undergo renewal for a variety of reasons, and at each renewal it is necessary to check that compatibility is not compromised. If a conformance test suite is in place and used in ongoing development, the risk of incompatibility is considerably reduced.
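The interoperability problem described above, one English sentence read several ways, is exactly what a conformance test pins down. The sketch below checks a hypothetical API profile; the field names and rules are invented for illustration and are not the actual Open Banking or FAPI requirements.

```python
# Toy conformance check for a hypothetical profile whose (English) specification
# says a token response MUST contain "access_token" (a string), "token_type"
# equal to "Bearer", and "expires_in" (an integer number of seconds).
def conforms(response: dict) -> list:
    """Return a list of violations; an empty list means the response conforms."""
    violations = []
    if not isinstance(response.get("access_token"), str):
        violations.append("access_token missing or not a string")
    if response.get("token_type") != "Bearer":
        violations.append('token_type must be exactly "Bearer"')
    if not isinstance(response.get("expires_in"), int):
        violations.append("expires_in missing or not an integer")
    return violations

# Two implementations that read the same English sentence differently:
impl_a = {"access_token": "abc123", "token_type": "Bearer", "expires_in": 3600}
impl_b = {"access_token": "abc123", "token_type": "bearer", "expires_in": "3600"}
assert conforms(impl_a) == []
assert len(conforms(impl_b)) == 2   # both divergent readings are caught
```

Run against every implementation, and re-run at every renewal, such a suite turns ambiguous prose into a single executable interpretation that all parties converge on.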

Principle 7: Everyone benefits

 The last principle, ‘Everyone benefits’, is a rarely mentioned but very important principle. Environmental changes at the expense of a few are difficult to implement, and even if they are implemented, they do not last long.

 When privacy is the priority, people tend to focus only on strengthening individual rights; conversely, when efficiency is considered from a corporate perspective, individual rights may be seen as an obstacle. However, unbalanced measures meet strong opposition and end in the status quo or a state of relative impoverishment. It is also ethically unacceptable for the majority to override the minority and devour its benefits through the tyranny of the majority.

 There is a concept in economics called Pareto improvement: a change that makes no one worse off and at least one person better off. Fortunately, data utilisation is not a zero-sum game. Data is not consumed and does not disappear when used, and if used with care, both the companies that use it and the individuals who allow it to be used can benefit. In many cases, Pareto improvements are possible.

 Although the figures are a little out of date, a 2016 European Commission report* states that the increased use of data will account for 5.4% of the GDP of the EU27 countries by 2025.

 To reap this fruit, individuals, and of course companies and governments as well, must shape the system in such a way that all can benefit from it. Otherwise, the system will not be implemented and will not stand as a system.

 I hope that each and every one of you who reads this book will be able to move society in this direction.
