The AI for Good Global Summit: AI for UN Sustainable Development Goals

AI for Good Global Summit – ‘Towards AI and Data Commons’ panel (from left): Professor Stuart Russell (UC Berkeley), Trent McConaghy (Founder, Ocean Protocol & BigchainDB), Professor Francesca Rossi (University of Padova & IBM Research), Dr. Chaesub Lee (Director, ITU Telecommunication Standardization Bureau)

By Yolanda Lannquist

 

The second AI for Good Global Summit took place May 15-17 at the ITU headquarters in Geneva, in partnership with XPrize, ACM and several United Nations agencies. The objective of the Summit was to brainstorm practical projects for applying artificial intelligence (AI) towards achieving the United Nations Sustainable Development Goals (SDGs), including ‘No Poverty,’ ‘Zero Hunger’ and ‘Climate Action.’

 

AI + Satellites

Led by Professor Stuart Russell, UC Berkeley

Panelists presented projects that apply machine learning to data collected from satellite imagery. The first project aims to predict deforestation before it occurs and notify authorities. It uses satellite images of past deforestation events to recognize early signs of deforestation, such as the construction of a road in an uninhabited forest zone. The second project tracks livestock to reduce cattle raiding and intergroup conflict in regions, such as South Sudan, where cattle comprise a large part of economic activity. In the third project, satellite images are used to assess crop and property holdings and flood losses, in order to automate and lower the costs of micro-lending and insurance in rural areas.
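
The project code itself was not shared at the Summit, but as a purely illustrative sketch of the general approach (assuming labeled satellite image patches are available), a baseline classifier could be trained to flag patches showing early signs of clearing, such as new roads in forested zones:

```python
# Illustrative sketch only -- not the Summit projects' actual pipeline.
# Assumes labeled satellite image patches: 1 = early signs of clearing
# (e.g., a new road in a forest zone), 0 = undisturbed forest.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)

# Placeholder data: 500 patches of 32x32 pixels with 4 spectral bands,
# flattened into feature vectors. Real data would come from a satellite
# imagery provider and careful labeling of past deforestation events.
X = rng.random((500, 32 * 32 * 4))
y = rng.integers(0, 2, size=500)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# A random forest over flattened patches is a crude baseline; production
# systems would more likely use convolutional networks on multi-temporal imagery.
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)

print(classification_report(y_test, clf.predict(X_test)))

# Patches flagged with high probability would then be forwarded to local
# authorities for verification, as described above.
```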

Professor Russell also emphasized the value of an infrastructure platform providing continuous, global, automated analysis of satellite data streams. Such a platform would be costly to build, but once in place it could serve many projects, amortizing the cost across them. A UNEP participant later proposed a planetary dashboard for global water monitoring using AI.

 

AI + Health

Led by Marcel Salathé (EPFL), and Ramesh Krishnamurthy and Sameer Pujari from the WHO

The AI + Health track presented several applications of AI to improve the quality of and access to healthcare services. The group aimed to identify low-hanging-fruit applications and bottlenecks across primary care, outbreak and emergency response, health promotion and prevention, and AI and health policy.

Fifteen startups and organizations presented new healthcare applications for AI. Several apply AI and machine learning to detect diseases or symptoms early, such as diabetic retinopathy and osteoporosis, or to provide diagnostics for snake bites and skin cancers. Others provide health portals or infrastructure to improve public health, for example in India. One UNICEF project uses AI to power epidemic modeling, and another rapidly detects and monitors malnutrition in patients.

 

Trust in AI

Led by Professor Huw Price from the University of Cambridge, Professor Francesca Rossi from the University of Padova and IBM Research, and Stephen Cave, Executive Director of the Leverhulme Centre for the Future of Intelligence at the University of Cambridge

This track discussed the importance of trust-building in applying AI for good. This involves trust among stakeholders, trust in the people developing AI technology, and trust in the data and the technology itself. In the first category, presenters discussed earning the trust of stakeholders affected by the technology, such as patients using AI mental health applications. In another project, the Leverhulme Centre for the Future of Intelligence (CFI) at the University of Cambridge presented a pilot digital app for East African poultry farmers. Irakli Beridze, head of the Centre for Artificial Intelligence and Robotics at UNICRI, discussed a project to map and assess the impact of AI-driven automation on social stability, crime, and migration in developing countries and transition economies.

Cross-cultural comparisons reveal large variation in trust and in global narratives surrounding AI and robots. Many societies feature either utopian or dystopian narratives about AI’s impact on society, with very few middle-ground scenarios. One presenter demonstrated cross-national comparisons of regulation for autonomous vehicles, while another discussed differences in the concept of fairness across cultures. Finally, AI systems must be demonstrably trustworthy. The panelists deconstructed the notion of ‘trustworthiness’ and discussed regulation and industry practices for diverse and unbiased datasets.

Diversity and inclusion are key to trust-building, spanning the professionals who develop AI, the narratives and myths built around it, the inclusion of the broad array of stakeholders it will affect, and representative datasets. The track leaders closed by presenting ‘TrustFactory.AI,’ a platform for gathering ideas for building trustworthy AI.

 

AI + Smart Cities and Communities

Led by Renato de Castro, SmartCity Expert, and Alexandre Cadain, Co-founder and CEO of ANIMA

The ‘AI + Smart Cities’ track included speakers from several global smart city initiatives, including in Brazil, Japan, Amsterdam, Dubai, and Singapore. A common theme was the need to bring a broad range of stakeholders into designing and participating in smart city initiatives. One practical recommendation was building a repository of lessons learned, best practices, and failures so that new smart city projects can learn from others’ experiences. For example, while 70% of smart city pilots in Amsterdam have failed, leaders there have compiled a ‘Graveyard of Ideas’ so that others can learn from these past mistakes.

The track discussed several projects. The first focused on the need to preserve local cultural differences across cities; “through AI we can enhance cultural heritage of each city, so there will be many different definitions and variations of smart cities.” The second focused on applying technology for citizen empowerment, particularly for lower socio-economic groups. A chat-bot pilot in South Africa has been helping victims of domestic abuse, while another helps the homeless identify opportunities for entrepreneurship in their local neighborhoods. Projects and applications should begin with the ‘need,’ and the ‘problem owners’ and key stakeholders should define the mission.

 

Themes common across some panels included:

  • The importance of multi-stakeholder, interdisciplinary, and diverse voices in AI development and in projects applying AI for good. Greater diversity across geographic regions, such as Africa, alongside ethnic and gender diversity, is important. Interdisciplinary collaboration among technologists, sector experts, industry, and policymakers is needed to build successful projects applying AI for good.

 

“Diversity is central. Projects must be multidisciplinary, multi-stakeholder and multi-cultural.” – Francesca Rossi, University of Padova and IBM Research

 

  • There are important trade-offs to balance in applying AI for good. For example, individuals’ data privacy, security, or other human rights may be compromised in gathering the training data needed to predict food shortages or escalating xenophobic conflict, or to enable healthcare breakthroughs.

 

“Privacy is a human right, but so is food, water and shelter. We need to move from prioritizing privacy as the primary risk source to an approach that’s holistic and balances the risks of misuse along with the risks of ‘missed use.’” – Robert Kirkpatrick, Director, UN Global Pulse

 

  • The need to consider intercultural differences in expectations and narratives about AI and robots in society. Some cultures exhibit more trust towards robots or technology. Professor Zhe Liu of Peking University explained that while Western cultures have narratives of robot rebellion and takeover, such as in the film Terminator, these fearful narratives do not exist in Japanese and Chinese cultures; the Japanese cartoon ‘Astroboy,’ for example, features a friendly relationship between a boy and a robot. Professor Liu fears high levels of “mistrust” and “overtrust” in technology in East Asian societies, which can lead to dangerous consequences such as reliance on artificial care and friendships and deception from counterfeited companionship.

 

“In East Asia, popular culture favors AI and robots as companions. People have misplaced trust so that human-robot interactions and interpersonal relations are conflated.” – Professor Zhe Liu, Peking University

 

  • Applications of blockchain technology can be used to authenticate, validate, and secure data, particularly citizens’ data, that is later analyzed and used in AI. Moreover, other features of blockchain such as tokenization, smart contracts, and cryptoeconomics can support the governance of AI, for example by incentivizing industry to develop ethical or safe AI. Susan Oh, Chair of AI, Blockchain for Impact at the UN GA, argued that tokenization, blockchain-based incentives, and other systems for collaboration, rather than hard regulation, can support AI governance.

 

“Incentives from cryptoeconomics can be incorporated in the governance of AI.” – Toufi Saliba, AI Decentralized

 

Towards AI and Data Commons

On the final day, the panel ‘Towards AI and Data Commons’ offered unique, breakthrough perspectives on the development of a ‘Data Commons’ to incorporate and advance AI for Good projects. Urs Gasser, Executive Director of the Berkman Klein Center for Internet & Society at Harvard University, introduced a ‘Data Commons’ framework composed of several layers. At the foundation is the technical infrastructure (where the data lives); above it sits the data itself (qualitative/quantitative, structured/unstructured); next come data features such as labels (metadata, taxonomies of datasets); above these are organizational practices (collaboration, incentives) and institutions, law and policy (accessibility, privacy, etc.); and the top layer comprises humans (knowledge, education).
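
As a rough way to visualize the stack Gasser described (the labels below are illustrative paraphrases, not an official schema), the layers can be laid out from foundation to top:

```python
# Rough sketch of the 'Data Commons' layers described by Urs Gasser,
# listed from foundation to top. Labels are illustrative, not an official schema.
DATA_COMMONS_LAYERS = [
    ("Technical infrastructure", "where the data lives"),
    ("Data", "qualitative/quantitative, structured/unstructured"),
    ("Data features", "labels, metadata, taxonomies of datasets"),
    ("Organizational practices; institutions, law + policy",
     "collaboration, incentives, accessibility, privacy"),
    ("Humans", "knowledge, education"),
]

for level, (layer, detail) in enumerate(DATA_COMMONS_LAYERS, start=1):
    print(f"Layer {level}: {layer} ({detail})")
```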

On the same panel, H.E. Omar Bin Sultan Al Olama, the UAE’s Minister of State for Artificial Intelligence, introduced the UAE’s efforts in applying AI to SDGs such as climate action. He also announced a forthcoming joint report by the Government of the UAE and The Future Society summarizing key insights from the inaugural Global Governance of AI Roundtable, which took place at the World Government Summit in Dubai in February 2018.