Responsible Design for Children

Two incidents from last week - one being the denouncement of 'Instagram Kids', followed by the revelations of Facebook's mining of children's data - prompted me to reflect on some design considerations that deserve more attention from the data-driven industry.

For me, there are three key pillars in designing good AI systems for children:

  • Getting the design right: make it child-appropriate

  • Getting the AI right: be responsible

  • Getting the legal obligations right: be a good citizen

Pillar 3 is a must, and stronger legislation is emerging in this space all the time. We have a long research history on pillar 1, some of which is feeding into pillar 2, although we could use more. Pillar 2, however, is still finding its feet. Despite the surge of AI ethics discussions over the last several years, pillar 2 is not yet being got right, and I hope to outline here why I think so.

How AI systems are challenging design for children

We have a long tradition of researching and writing about designing for children. Druin's seminal work in 2002 is a landmark for thinking in this space, particularly on 1) how to involve children meaningfully in the design process, and 2) how to design products for children that align with their developmental needs. We have built a good understanding here, and have been particularly successful in designing for children with special needs and in supporting children's learning.

However, the fast adoption of AI-based technologies is stretching our knowledge in this space tremendously. First of all, we have a very limited understanding of how children make sense of the technologies they are exposed to, now from infancy: smart home devices, smart toys, and web-based platforms for entertainment or socialising. We do not yet fully understand these questions even for adults, let alone for children: How do auto-play and video recommendations truly affect children's autonomy and viewing experiences online? To what extent are they able to perceive that they have been 'nudged' by algorithms to do something online, and to make a conscious decision about such nudges?

Although we have a wealth of knowledge about how to design in an age-appropriate way for children, achieving age-appropriateness for different children can seem a challenge as big as tackling climate change. The ICO's latest Children's Code provides more specific guidance on this front; however, executing such design considerations can be a daunting task for the SMEs in this market space (often teams of just one or two developers). There is much to be learned from the toy industry: how can we express the expectations for designing for children of different ages, and how can we verify that they have been implemented sufficiently?
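One way to make such expectations both expressible and verifiable is to encode them as machine-checkable rules that even a small team can run against its own product. The sketch below is a minimal Python illustration of this idea: the age bands follow the ICO Children's Code, but the expectations attached to each band, and all the names in the code, are my own illustrative assumptions rather than requirements taken from the Code itself.

```python
# A minimal sketch of encoding age-appropriate design expectations as
# machine-checkable rules. Age bands follow the ICO Children's Code;
# the expectations per band are illustrative assumptions, not the Code.

from dataclasses import dataclass


@dataclass(frozen=True)
class Expectations:
    high_privacy_default: bool  # settings should default to high privacy
    autoplay_allowed: bool      # auto-play is a known nudge technique
    profiling_allowed: bool     # profiling off unless clearly justified


# Developmental age bands used by the ICO Children's Code
EXPECTATIONS = {
    (0, 5):   Expectations(True, False, False),  # pre-literate / early literacy
    (6, 9):   Expectations(True, False, False),  # core primary school years
    (10, 12): Expectations(True, False, False),  # transition years
    (13, 15): Expectations(True, False, False),  # early teens
    (16, 17): Expectations(True, True, False),   # approaching adulthood
}


def check_design(age: int, *, high_privacy_on: bool,
                 autoplay_on: bool, profiling_on: bool) -> list[str]:
    """Return the list of expectation violations for a child of this age."""
    for (lo, hi), exp in EXPECTATIONS.items():
        if lo <= age <= hi:
            violations = []
            if exp.high_privacy_default and not high_privacy_on:
                violations.append("settings do not default to high privacy")
            if autoplay_on and not exp.autoplay_allowed:
                violations.append("auto-play is enabled for this age band")
            if profiling_on and not exp.profiling_allowed:
                violations.append("profiling is enabled for this age band")
            return violations
    raise ValueError(f"age {age} is outside the 0-17 range covered here")


# e.g. check_design(8, high_privacy_on=True, autoplay_on=True, profiling_on=False)
# -> ["auto-play is enabled for this age band"]
```

Even a one- or two-developer SME could run checks like these in its test suite, turning an abstract design guideline into something that fails a build when it is violated.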

A critical next step for developing good future AI systems for children is to establish scalable and standardised approaches to age-appropriate design. For this, we need to cooperate, drawing together experience and standards from different industrial sectors and combining the strengths of legislation and research.

What is missing when designing age-appropriate AI systems?

The AI community has increasingly recognised the negative effects that these systems may have on their users and stakeholders, such as exposing them to unfair decisions or to over-personalised information filter bubbles. Discussions about designing fair AI systems through a responsible, user-centred process occur regularly; however, the AI community needs to approach its design process more carefully. By 'more carefully', I particularly mean consideration of the following three dimensions:

  1. Involving diverse stakeholders in the design and development process

  2. Drawing on the extensive research of the child-computer interaction community

  3. Truly putting children's best interests first

I will draw on some examples to illustrate what I mean by each of these three dimensions.

Involving diverse stakeholders in the design and development process

It has often been the case that, when designing technologies to be used by children or applied to children's lives, children and their opinions are not directly consulted. Take Instagram's announcement about the next release of its platform as an example: parents are to be involved in the next design cycle so that a 'safer' Instagram for Kids will be in place, providing all the safeguarding mechanisms parents need to keep their children safe on the platform.

This choice of parents as the key stakeholders not only reflects the usual way of shifting responsibility onto users, but also undermines children's autonomy in the process. As several recent studies have discussed, keeping children safe online should entail more support for developing their digital literacy and skills, and less surveillance and monitoring - even when that monitoring comes from their own parents or guardians. Such surveillance harms parent-child trust as well as children's development of real coping skills.

Before claiming that your system takes a human-centred approach, it is essential to reflect on 1) who should be involved in the design process, and 2) what the implications could be if someone were left out of it.

Drawing on the extensive research of the child-computer interaction community

Relating back to pillar 1: although the design community does not yet have answers to all the new challenges posed by AI systems for children, it has a long research track record in related spaces, such as supporting children's learning with digital technologies by drawing on established educational pedagogies or medical research.

Several academics have criticised non-experts who jump into designing AI learning systems with limited experience of education and its theories. Children cannot simply be treated as data points in these cases; the design of algorithms needs to consider carefully whether an assessment of a child has fairly taken their individual circumstances into account, and how the result of such an assessment may affect the child's learning, or even their life opportunities, in unexpected ways.

There is a tremendous amount of rich research knowledge in this space; developers of new AI systems for children should work closely with its experts, so that privacy notices are implemented in ways that are genuinely interpretable and actionable for children of different ages, so that algorithmic recommendations genuinely nudge children towards more positive learning experiences or media engagement, and so that children's digital autonomy is encouraged and respected rather than invaded.
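To illustrate the privacy-notice point: a layered notice could choose the wording of its top layer according to the reader's age band. The sketch below is entirely hypothetical; the bands and the notice texts are my own assumptions, not tested or expert-approved wording.

```python
# A hypothetical sketch of a layered privacy notice whose top layer is
# chosen by age band. The wording here is illustrative only - real
# notices should be drafted and tested with child-language experts.

NOTICE_BY_BAND = {
    "0-5": "We ask a grown-up before we remember anything about you.",
    "6-9": ("This app remembers your nickname so your games are saved. "
            "Ask a grown-up if you want us to forget it."),
    "10-12": ("We store your nickname and game progress on our servers. "
              "You or a parent can delete this at any time in Settings."),
    "13-17": ("We process your account name and activity history to save "
              "your progress. See the full notice for your rights, "
              "including access and deletion."),
}


def notice_for_age(age: int) -> str:
    """Pick the age-appropriate top layer of a layered privacy notice."""
    if age <= 5:
        return NOTICE_BY_BAND["0-5"]
    if age <= 9:
        return NOTICE_BY_BAND["6-9"]
    if age <= 12:
        return NOTICE_BY_BAND["10-12"]
    return NOTICE_BY_BAND["13-17"]
```

The point is not the specific wording but the collaboration it requires: deciding what each age band can interpret and act on is exactly where child-computer interaction expertise comes in.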

Truly putting children's best interests first

Recent regulatory developments protecting children's digital data rights have spurred a quick succession of responses from the industry, e.g. the aforementioned withdrawal of Instagram for Kids and the introduction of age verification on many online platforms. However, the concern remains that the big players in this space are still seeking workarounds to the new regulations instead of making fundamental changes to their current practices. For example, mining children's data to see how best to optimise for the interests of under-13s can be both unethical and irresponsible.

Such fundamental changes require a rethink of the purely data-driven digital economy that Silicon Valley operates on. This model has brought unimaginable wealth to a few, but at what cost?

Our own research, along with a number of other recent studies, has repeatedly shown how the lack of alternative business models in this digital economy makes it impossible for developers to carry out truly ethical designs: collecting no personal data, making no personalised recommendations except where they improve the user experience, and sharing no tracking data with third parties for revenue.
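As a small illustration of what those practices could look like if they were the defaults rather than the exceptions, here is a hypothetical configuration sketch - every key and value is an assumption for illustration, not an existing API:

```python
# A hypothetical 'ethical by default' configuration, mirroring the
# practices named above - illustrative assumptions only.

DEFAULTS = {
    "collect_personal_data": False,                   # no personal data collected
    "personalised_recommendations": False,            # off unless it clearly improves UX
    "share_tracking_data_with_third_parties": False,  # no tracking-based revenue
    "analytics_mode": "aggregate-only",               # no per-user analytics
}
```

Under the prevailing business model, shipping with defaults like these often means shipping with no revenue at all - which is precisely the problem.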

However, the pressure to survive leaves many developers and SMEs with limited options. Despite the changes that new regulations are bringing, it is probably high time to consider what alternatives - ones that do real good - could look like. Until those alternatives arrive, remaining true to children's best interests should be a key consideration for this industrial sector, alongside the other design considerations outlined above.


1 October 2021