OP-ED | Tracing developments on Artificial Intelligence in Latin America and the Caribbean

By Chelceé Brathwaite

WHILE some still consider AI to be beyond the grasp of developing countries, our South American neighbours have been shattering that stereotype. AI is being deployed in a number of their endeavours: to speed up the discovery of artefacts in Peru; to increase crop yields in Colombian rice fields through AI-powered platforms; to boost security and enhance customer service in Brazil’s banking sector; to create vegan alternatives with the same taste and texture as animal-based foods in Chile’s food industry; to predict school dropouts and teenage pregnancy in Argentina; and to forecast crimes in Uruguay.

Some of the push in AI adoption in these countries has come from academics and researchers, such as those at the University of Sao Paulo who are developing AI to determine patients’ susceptibility to disease outbreaks; Peru’s National Engineering University, where robots are being used in mine exploration to detect gases; and Argentina’s National Scientific and Technical Research Council, where AI software is predicting the early onset of pluripotent stem cell differentiation.

These and other truths were revealed to me at a Latin America and Caribbean (LAC) Workshop on AI organized by Facebook and the Inter-American Development Bank in Montevideo, Uruguay, in November this year. I was the lone Caribbean participant in attendance, presenting my paper entitled “AI & The Caribbean: A Discussion on Potential Applications & Ethical Considerations”, on behalf of the Shridath Ramphal Centre (UWI, Cave Hill).

DEFINING AI
While AI has no universally accepted definition, it describes machines and systems that can acquire and apply knowledge, and execute intelligent behaviour. Beyond robots and autonomous hardware devices, AI’s application also extends to software-based operations in the virtual world like Siri. At the heart of AI is technology that exhibits cognitive capability.

The term AI was first introduced in 1956, in the context of work done by computer scientists like John McCarthy, Alan Turing and Marvin Minsky. AI’s rise to pre-eminence today can be attributed to a number of factors, including vastly expanded access to computing power and the growth of Big Data. In the last six decades, the world has witnessed a trillion-fold increase in computing power, and worldwide data is expected to grow from 33 zettabytes in 2018 to 175 zettabytes by 2025.

TOWARDS AI’S ETHICAL & LEGAL CONSIDERATIONS
Despite the hype, the adoption of AI provokes a number of ethical and legal questions. Where should responsibility lie for deaths caused by an autonomous vehicle that intentionally decides to crash? Who should own the copyright on content created by AI, especially in legal regimes where such protection extends only to human-created content? How do you prevent an AI-powered hiring system from exacerbating gender and racial inequalities in specific job roles? The LAC AI Workshop attempted to examine these and other ethical issues by providing a forum, not for engineers and software developers, but for academics, philosophers and lawyers, who debated topics such as those outlined below.
DATA GOVERNANCE AND PRIVACY

AI runs on lots and lots of data. But, in a context of eroded public confidence in data-collecting organizations and data-consuming technologies, it is questionable whether current data governance frameworks are flexible enough, yet still robust enough, to maintain privacy protection. Moreover, the impact of data monetization models on privacy is not well understood.

One potential solution mooted is the data trust – a legal structure providing independent stewardship of data, in which a third party takes responsibility for ensuring that data is used and shared in a fair, safe and equitable way. Notwithstanding the model’s challenges, particularly around compliance, data trusts would go some way towards allaying concerns about how sensitive data is held and used by AI technologies.

BIAS AND DISCRIMINATION
Amazon’s AI recruiting tool showed bias against women, AI facial recognition systems worked better for white men than for black women, and an online chatbot became racist. In all three cases, a European study found a common contributor to be the training data used. This finding raises the questions of whether improving the quantity and quality of our data could avoid biased outcomes, and whether algorithms could be prevented from creating profiles that discriminate against certain social groups.
If AI technologies follow the “garbage in, garbage out” rule, then tackling biased and discriminatory outcomes must begin at the data input level. Here, research points towards controlled distortion of training data; integrating anti-discrimination criteria into classification algorithms; and post-processing the classification model once extracted, correcting its decisions to maintain proportionality between protected and unprotected groups, as sketched below. Recognition that preconceived ideas derive in part from an algorithm’s design means that algorithms must be “audited” for their susceptibility to discrimination, and that there must be full transparency, even if that itself raises issues of intellectual property and national security.
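To make the last of those ideas concrete, here is a minimal, purely illustrative sketch in Python, not drawn from the workshop or from any study cited here, of post-processing a classifier’s decisions so that the share of favourable outcomes stays proportional between a protected and an unprotected group. The scores, group labels and target rate are all hypothetical.

```python
# Illustrative sketch only: post-processing classifier scores so that the rate of
# favourable decisions is (roughly) equal across groups. All data is synthetic.

import numpy as np

def group_thresholds(scores, groups, positive_rate):
    """For each group, pick the score threshold that leaves `positive_rate`
    of that group's candidates with a favourable decision."""
    thresholds = {}
    for g in set(groups):
        g_scores = np.sort(scores[groups == g])
        cut = int(np.floor(len(g_scores) * (1.0 - positive_rate)))
        cut = min(max(cut, 0), len(g_scores) - 1)
        thresholds[g] = g_scores[cut]
    return thresholds

def fair_decisions(scores, groups, positive_rate=0.3):
    """Apply the per-group thresholds to produce corrected yes/no decisions."""
    thresholds = group_thresholds(scores, groups, positive_rate)
    return np.array([s >= thresholds[g] for s, g in zip(scores, groups)])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic "hiring" scores in which group B has been systematically under-scored.
    groups = np.array(["A"] * 500 + ["B"] * 500)
    scores = np.concatenate([rng.normal(0.6, 0.1, 500), rng.normal(0.5, 0.1, 500)])

    naive = scores >= 0.6                       # one threshold: group B loses out
    fair = fair_decisions(scores, groups, 0.3)  # per-group thresholds: equal rates

    for label, decisions in [("single threshold", naive), ("post-processed", fair)]:
        rate_a = decisions[groups == "A"].mean()
        rate_b = decisions[groups == "B"].mean()
        print(f"{label}: positive rate A={rate_a:.2f}, B={rate_b:.2f}")
```

In this toy example, a single score threshold admits far fewer candidates from the under-scored group, while per-group thresholds equalise the rates; this is the kind of correction, and the kind of trade-off, that an algorithmic “audit” would have to make explicit.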

ROGUE AI AND UNINTENDED CONSEQUENCES
The Avengers: Age of Ultron shows a case of AI application gone wrong. While not as dramatic, self-driving cars running red lights and autopilot systems malfunctioning portend disastrous consequences. Even more alarming are reports of autonomous weapons systems being developed to aim and fire at “human enemies”. Prospects of errant application force the issue of AI design. Some AI R&D guidelines propose incorporating controllability, transparency, safety and ethics from inception. But the proliferation of non-mandatory guidelines, compared with other normative frameworks, casts doubt on enforceability and compliance. In turn, the question of who is responsible for unintended consequences arises: should blame be laid at the feet of the developer or user, the machine, or neither? Transposing existing regulatory schemes to advanced technological developments also proves challenging. Although product liability traditionally focuses on negligent design or manufacture, or breach of duty, AI’s autonomous and evolving nature makes it difficult to identify the point of defect or predict dangerous outcomes – and thus to assign liability.

LESSONS FOR THE CARIBBEAN

So, what is happening here in the Caribbean?
AI remains, at best, nascent, with limited R&D. Reports of its limited application in The Bahamas, Belize and Guyana, and the absence of policy discussions, belie the significant potential for AI here.

We could dream of a day when Fedo, a risk stratification AI system for predicting patient susceptibility to non-communicable diseases (NCDs), is used in the Caribbean’s health sector, where NCD mortality is the highest in the Americas. Or when Dragon Medical Virtual Assistants help ease the region’s critical nurse shortage, which in 2010 measured 1.25 nurses for every 1000 people. How about See & Spray, a weed control AI robot that could reduce herbicide expenditure by 90%? Or AI harvest robots replacing 30 workers in the Caribbean’s agricultural sector, where the food import bill is expected to reach USD 8-10 billion by 2020? Could we ever see the AI systems developed by Google, Harvard and NASA that predict earthquake aftershocks, flooding and hurricanes deployed as part of the Caribbean’s disaster management and mitigation efforts, to save lives and limit potential losses?
Instead of dreaming, I propose the following three steps that the Caribbean can take to better position itself to harness AI’s potential.

First, we must develop an appetite for such technologies.
South America’s engagement in this field is a testament to the region’s innovative capabilities and appetite for such technologies. This cannot be done without firms and governments that are willing to adopt and utilize these systems in their provision of goods and services. In addition, we need research and studies that demonstrate how AI can be leveraged to solve some of the region’s developmental challenges. It falls to the region’s academia and private sector to find innovative AI solutions and to spur demand for their subsequent development and adoption.

Second, we must form strategic partnerships.
Google is developing an AI system to predict and send flood warnings in India; Unilever is testing various AI solutions in South America; and MIT and Harvard are hosting AI R&D conferences in Uruguay. But who are we partnering with in the Caribbean? Recognizing the importance of strategic partnerships, we should reach out to organisations like the IDB to fund such initiatives; to companies like Facebook and Google to develop and test AI solutions in the region; and to AI R&D centres and universities. These are all potential avenues for overcoming the financial and resource constraints which hinder our progress in this field.

Third, we must initiate AI-related policy discussions.
Realising the wider ethical and legal considerations arising from the application of AI, we must ask probing questions: are the existing frameworks capable of addressing our concerns? And how can we mitigate risks and instil public confidence in such technologies?
Beyond technologists like engineers and developers, the discussion must involve policymakers, who should be on the front lines of developing adaptive and anticipatory frameworks. Similar to Mexico’s move towards an AI strategy, which aims to transform the country from an observer into a key actor, we must look towards the development of holistic approaches.

While not exhaustive, this list of recommendations is a start to riding the AI wave. It is now up to us either to learn how to ride it, as our South American neighbours are doing, or to be washed ashore.

(Chelceé Brathwaite recently graduated (with distinction) from the Masters in International Trade Policy Programme of the Shridath Ramphal Centre for International Trade Law, Policy and Services, Cave Hill Campus, UWI. For her postgraduate research paper, she developed a Cross-border E-commerce Trade Policy for Barbados. She is a past intern at the Barbados Ministry of Foreign Affairs and Foreign Trade, and a current intern at the CARICOM Secretariat’s Office of Trade Negotiations)
