October 29, 2019
It is time for a new direction in tech, one that defies the status quo of the Silicon Valley culture, innovation hype and AI fear, and realigns innovation with social progress. That's what Techfestival is all about.
We live in a world increasingly dominated by technology. Technological innovation is accelerating at an exponential rate, propelling the tech industry to ship new products and services on a near-daily basis. While these aim to solve real human problems, they also raise serious ethical issues that profoundly impact our societies and the very core of our existence. The truth is that technological progress no longer aligns with humanity's best interests.
Fig. 1. Keynote hour at Techfestival Circle stage.
After 2 successful editions that gave birth to the Copenhagen Letter in 2017, and to the 150 guiding principles of the Copenhagen Catalog in 2018 (Fig. 2), Techfestival returned this year, from 5 to 8 September 2019, with a variety of convivial events, brilliant speakers, engaging workshops and constructive discussions around the most pressing questions in tech development.
Over 3 days, in an intimate setting and with a participatory approach, participants engaged in a new conversation about tech, with real debates and tangible outcomes. Experts, practitioners and digital minds from all over the world co-created varied sessions inviting visitors to dive deep into the impact of tech on humans and societies, and to question what really happens when technology shapes the way we live, work, play, eat, build our cities and exercise modern democracy.
Fig. 2. Snippet from the guiding principles of the Copenhagen Catalog.
Among the 200 events hosted, evening keynotes and Q&As included fascinating speakers like Aza Raskin, co-founder of the Center for Humane Technology, Jimmy Wales, co-founder of Wikipedia, Chris Messina, inventor of the hashtag, Payal Arora, author of "The Next Billion Users", and Linda Liukas, programmer and illustrator at Hello Ruby. Techfestival also featured 17 all-day summits covering topics such as sustainability, govtech, public data, food tech and urban mobility, as well as numerous 2-hour sessions about design, ethics, AI, mixed reality, cryptocurrency and much more.
In this report, I highlight 4 events that stood out and share my observations and learnings. The Designing for Meaningfulness workshop presented a new framework for designers that dissects what meaningfulness really entails when creating smart products. The Ethical Design in the Age of AI session invited participants to question the ethical dilemmas raised by AI's recent developments, while Design Materialisations presented how Speculative Design can help foresee its possible adverse consequences. Finally, the Endings summit delved into why endings in every area of life create meaning and how we can design better offboarding experiences.
Highlight #1 - Designing for Meaningfulness
In a world where technology has become a fashion, smart products, desperate to bring ever more convenience, often have a short life span. The Designing for Meaningfulness workshop invited us to reflect on what makes a smart product meaningful to us. Vanessa Julia Carpenter proposed exploring that question through 3 lenses: our self-development, our relation to others and our relationship to time. As part of her PhD on designing for meaningfulness in future smart products, Vanessa developed a framework to spark reflection on the experience of meaning when designing products or services, and identified 4 experiential components: purpose, coherence, resonance and significance. Her framework also pinpoints 6 mechanics of meaningfulness (personal development, moments of significance, value over function, meaning in everyday life, critical thinking and offline artefacts) that help question and be critical of the many aspects of meaningfulness (Fig. 3).
Fig. 3. Slide summarising the framework (3 lenses, 4 experiential components, 6 mechanics and 4 manifestations of meaningfulness).
Divided into groups of 3, we assessed the meaningfulness of the Oura Ring, a smart ring that aims to help people with sleeping problems by continuously recording physiological parameters such as heart rate, temperature and movement. One unusual thing about the Oura Ring is that you cannot perceive its smartness, as it just looks like a standard ring; and unlike many assistive products, it is a passive product that gives control back to users by letting them decide when they want to access their data and act on it. The framework enabled us to question the kind of relationship we want to have with assistive technology, and the necessity of using technology to solve very human problems like sleep. Moreover, using assistive technology to help humans be more mindful of their own bodies raises concerns about human autonomy in the future. Although the experience of meaningfulness is very personal, this framework provides excellent prompts to help designers be more mindful of their underlying intentions when designing smart products.
Highlight #2 - Ethical Design in the Age of AI
In the context of rapid technological progress, AI is more than ever under the spotlight, as it can scale at an unprecedented pace, for better or worse. Acknowledging the urgency of being mindful of how AI is designed into products and services, many companies have started publishing AI ethics guidelines, and the European Commission now talks about "AI ethics by design". The Ethical Design in the Age of AI workshop explored the tremendous power AI holds today, but also its weaknesses, highlighting some of the ethical dilemmas society faces regarding the use of the technology. On the one hand, AI can assist and expand human capabilities, as it can hear, see, detect patterns, predict and optimise; on the other, it has clear limitations that can be very dangerous, as it cannot understand what it does and is never in doubt. Facial recognition systems with racially skewed performance, social media feeds manipulating people's emotions, algorithms discriminating against women's job applications, and machine learning models classifying people as terrorists based on their features are some examples of what can go wrong with AI if we don't pay attention to how we design it.
Fig. 4. One of the posters about one of the 4 questions: How do we develop one ethical code among designers?
While other professions, such as medicine, have a Hippocratic oath, designers have no similar ethical code in place. Peter Svarre, a digital strategist and writer on AI, invited us to think collectively about the different aspects of these ethical dilemmas. Divided into groups, we explored 4 questions: "when should designers be worried?", "what are the most important principles and heuristics?", "how do we develop an ethical code among designers?" (Fig. 4), and "who is responsible?". To prompt ethical thinking, each group got 3 cards with controversial statements and had to discuss whether or not it agreed with them.
As part of the 'ethical code for designers' group, the trigger questions enabled us to think about the role of designers within project teams, the tension between a designer's will and company goals, and the conflict between efficiency and ethical concerns. A consensus emerged about these questions in my group. First, designers should care about ethics as much as everyone else on a team; it cannot be one person's role, as it needs to consider different perspectives, and cultural and ethical questions should be asked across disciplines. Secondly, progress in AI should be slowed down to make sure ethical considerations are adequately addressed; it is not a race we should try to win against China. Finally, designers cannot hold a licence or take a Hippocratic oath like doctors, as they still have to meet company goals that often conflict with their ethical concerns, and it is unrealistic to expect designers to always stand by their moral convictions. Although the consensus revealed some directions, the discussion highlighted the complexity of finding a solution: it is a wicked problem that requires alignment at every level of society, from designers and technologists to companies and governments.
Highlight #3 - Design Materialisations
With the increasing pace of new technologies and the cult of design thinking and design sprints, companies are moving rapidly to solve real problems through solutions desirable to users, but they are also causing a range of unintended consequences. As the French philosopher Paul Virilio said: "When you invent the ship, you also invent the shipwreck; when you invent the plane you also invent the plane crash; and when you invent electricity, you invent electrocution... Every technology carries its own negativity, which is invented at the same time as technological progress." The lifejacket was one of the design materialisations of the shipwreck. But what is the lifejacket in the age of AI and machine learning?
Design Materialisations explored what Speculative Design is and how it can help us visualise the possible unintended consequences of AI, so as to better foresee and mitigate them. When using design thinking, design sprints and lean startup methods, we only prototype the positive and desired outcomes, the "happy paths". But what about the unhappy paths? While Design creates solutions to problems that already exist, Speculative Design pre-creates solutions to issues before they occur.
In this session, a group of students from the Institute of Visual Design at the Royal Danish Academy of Fine Arts Schools of Architecture, Design and Conservation presented a series of posters based on the 12 tracks of the Techfestival (Fig. 5). Through the lens of Speculative Design, they probed what a probable future might look like with the rise of AI. What could be a probable future of work, democracy, art, cities or food? Are we heading towards a world where humans need technology to be mindful of their own bodies? Towards a world where humans pay a subscription to customise their version of the world and only see what they like? These were some of the questions the students raised through Speculative Design.
Fig. 5. Some of the posters created by the students about Food, Work, Information and Democracy.
Highlight #4 - Endings
Endings are inevitable. Everything on earth comes to an end: life, products, services, relationships. All the things humans or nature create evolve over time until they end. However, acknowledging that fact doesn't make us prepared for it or willing to accept it. The Endings summit explored how and why things end and how to plan and design for it, in 4 areas: life, nature, relationships, and products & services.
To kick off the day, Dr Joana Casaca Lemos, a designer and researcher focused on integrating ethical and sustainability practices, invited participants to write one sentence about an ending experience they had had on a post-it note, and give it to the person sitting next to them. Then, we were asked to write down as many feelings as we could think of that this experience inspired. Sorting all participants' feelings on the wall started to make the invisible visible, giving us a lexicon of the many emotions associated with ending experiences (Fig. 6).
Fig. 6. Outcome of the workshop to create a shared lexicon of emotions associated with ending experiences.
Shoshana Berger, global editorial director at IDEO and author of the book "A Beginner's Guide to the End", gave a poignant talk on the end of life, with great insights on how to live your life better by being more prepared for its end. Without death, life has no meaning. Endings make the time we have precious. Thinking about that time resonates with how we want to be remembered. Imagine that you could read your own obituary. What do you think would be written about you? Is that what you would like to be remembered for? Practising loss can be a way to live closer to death and let it teach you how to live: with less fear, following your heart and giving more selfless love.
Joe Macleod, the author of "Ends" and former Head of Design at ustwo, talked about why we overlook endings for humans, products, services and digital experiences, and why we shouldn't. He explained that companies pour effort into onboarding and retaining consumers because they think of engagement as a single action rather than multiple ones, and hold a narrow view of consumers that totally excludes their civic life. However, some companies, like Netflix or easyGym, have started to see the value of a different approach to the offboarding experience, making it easy for customers to leave and come back whenever they want.
Finally, Francesca Desmarais, a systems thinker and strategist with a strong background in environmental and social design, gave a talk on the "end of climate" and how we can engage with the long timescale of ending the climate as we know it. She framed it around 4 challenges: invisible time, inequality, imagination and inevitability. Invisible time is about feeling homesick in your own home; inequality, about how the poor are more vulnerable to change; imagination, about what the future will look like; and inevitability, about the social pressure we all experience as part of an interconnected system. She then proposed some practical actions in different areas of life: political, religious, professional and personal.
One takeaway is that the end of one thing is not only pain and sorrow; it is also the beginning of something else. While endings give meaning, rituals create moments to remember. Humans have always held ceremonies for lifecycle moments like death, and physical objects play a fundamental role in them. The frantic digitisation of every area of life raises many concerns about how it affects our rituals. Nowadays, if you don't have a picture of yourself online, it's as if you don't exist. This raises the question of whether we have collectively failed to preserve the value of analogue artefacts.
To put those learnings into practice, participants were invited to form teams and choose one focus area among life, nature, relationships, and products & services. As part of the products & services team, we picked e-scooters to capture the issues associated with the end of a service. We then clustered these into broad sequential phases describing the descending engagement (Fig. 7), which we summarised into a short paragraph describing the ending experience of the e-scooter service. One critical problem is that shifting from ownership to a rental model removes the connection humans have with physical objects and disengages them from what happens after usage, leaving cities in a total mess.
Fig. 7. Slide representing the product & service lifecycle, explaining how to cluster the identified steps of the descending engagement.
Acknowledging the offboarding problem, participants were then invited to shift to a problem-solving mindset and imagine what a better ending would look like for their area. What if we could recreate the connection lost when giving up ownership for rental models, by building a story between users and the scooters they use? Giving a name to each scooter, or receiving tailored messages telling you how much time you spent with a scooter, how many kilometres you covered together, how much CO2 you helped save by not using your car, or how much more fluid your city became thanks to you: these were some of the ideas discussed during the workshop.
While endings are inevitable, the direction of technological progress is not. It is undeniable that technology has profoundly changed our world as we knew it, but it is up to us to steer its development where we want it to go. Over these 3 days, I was amazed to see so many like-minded people eager to drive change in tech and apply a human-centred approach to data, design and mixed realities. In parallel with these events, a think tank gathering 150 thought leaders of the tech industry developed the Tech Pledge, for everyone working with and in tech to take (Fig. 8).
Fig. 8. Techfestival Think Tank writing the Tech Pledge.
While this list of commitments aims to hold good intentions accountable, some of them seem quite difficult to apply in practice, considering some of the discussions I was part of (see Highlight #2). Strong statements like "to always put humans before business, and to stand up against pressure to do otherwise, even at my own risk" might raise debate about the level of risk we are talking about. Being in a position to commit to this depends directly on your level of influence in your own organisation, as well as on your personal situation. In a context of increasing competitiveness, is it really realistic to expect practitioners to put themselves in a position where they could lose their jobs for it? Although this commitment might seem a bit extreme, since its release 1,323 people, including myself, have taken the pledge, and I hope many more will follow: we only have one world to live in, and if we don't pay enough attention, we will end up with one we didn't want.