
What's so Artificial about Artificial Intelligence? Changing our world responsibly

There's not much artificial about Artificial Intelligence: it doesn't just exist in the ether, and given its increasing pervasiveness in our daily lives, it pays to take a closer look at AI's impact on social assets and on the environment generally. Because even though Artificial Intelligence is unquestionably a vehicle for future economic change (the world has already changed irreversibly in the wake of its classificatory logic), what does all that mean for those working under algorithmic management structures, filling boxes in Amazon warehouses the size of ten aircraft hangars? What does it mean for data regulation, and where do AI's extended supply chains leave the environment?

Just like the Wizard of Oz working feverishly away at the handles and levers behind his curtain, maintaining a convincing impression of magic in the process, once you pull the curtain away on AI you'll find an equally complex and feverish human infrastructure: low-paid workers stuffing boxes, data categorisation increasingly reliant on crowd working, and ever more extended supply chains criss-crossing the planet. There's not much artificial or automated about any of that: it's what Kate Crawford describes in her new book, "Atlas of AI", as a "chain of extraction" stretching clean around the world: "…made from natural resources and delivered by people performing tasks to make systems appear autonomous".

Take ImageNet, for example: thought up in 2007, at a time when the bulk of AI research was still focused on mathematical modelling and algorithms. But Fei-Fei Li, then a young assistant professor at Princeton (where else), had the bright idea of restocking and expanding the range of visual data available for "training" AI algorithms. Today ImageNet holds more than 14 million images across some 20,000 categories, and is, by some distance, the world's leading training dataset for object recognition: the benchmark against which the accuracy of machine-learning algorithms is tested. That's how the traffic camera knows immediately that the car you were driving in a bus lane last month was a 2019 model Mercedes C-Class with right-hand drive and Belgian number plates: chances are, training data like ImageNet's taught it what to look for.
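(To make the "training" point concrete: below is a minimal sketch, not taken from this article, of how an ImageNet-trained classifier is typically queried using the open-source PyTorch/torchvision libraries. The choice of model and the "car.jpg" filename are purely illustrative, and an off-the-shelf classifier like this only returns broad ImageNet categories such as "sports car", not specific models or number plates.)

```python
# Minimal sketch: querying a classifier whose weights were trained on ImageNet.
# Assumes torch and torchvision are installed; "car.jpg" is a placeholder image.
import torch
from torchvision import models
from PIL import Image

weights = models.ResNet50_Weights.IMAGENET1K_V2   # ImageNet-trained weights
model = models.resnet50(weights=weights)
model.eval()

preprocess = weights.transforms()                  # the matching ImageNet preprocessing
image = Image.open("car.jpg")
batch = preprocess(image).unsqueeze(0)             # add a batch dimension

with torch.no_grad():
    probs = model(batch).softmax(dim=1)[0]

# The category names come straight from the ImageNet label set.
categories = weights.meta["categories"]
top5 = torch.topk(probs, 5)
for p, idx in zip(top5.values, top5.indices):
    print(f"{categories[int(idx)]}: {p.item():.1%}")
```

The point is simply that whatever labels a model like this can produce are exactly the categories baked into its training data.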

Fei-Fei Li went on to direct the Stanford Artificial Intelligence Lab (she now co-directs Stanford's Institute for Human-Centered AI), and her work on ImageNet (amongst other things) has made her one of the most influential women in the field.

But hold on a moment: those tens of millions of images didn't get there by themselves. Every one was "scraped" from the Web (including, quite possibly, the picture of your Mercedes: talk about poetic justice), and every one was labelled, one at a time, using a lexicon of nouns supplied by WordNet; and that labelling certainly didn't happen using Artificial Intelligence. The process was all too human. Thousands (and thousands) of crowd workers did the heavy lifting of sorting and labelling, and even leaving aside their working conditions (it's safe to assume this was unregulated labour, which is where the appropriate use of social assets comes in), there were also major problems with the WordNet lexicon itself: images of people were regularly labelled "alcoholic", "thief" and "drug addict" because…well, because the crowd workers thought they looked that way, and the lexicon didn't give them other options (I'm leaving out the really bad stuff here, but a lot of it was deeply racist and misogynist too). Much has since been corrected, but outdated copies of the training set are still circulating on peer-to-peer file-sharing sites.
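(For a sense of what that lexicon looks like in practice, here is a minimal sketch, again not from this article, that browses WordNet through the open-source NLTK library for the noun "alcoholic", one of the person-describing labels mentioned above; the example word is taken from the paragraph and is illustrative only.)

```python
# Minimal sketch: browsing the WordNet noun lexicon that ImageNet's labels were drawn from.
# Assumes the nltk package is installed; the word queried is purely illustrative.
import nltk
nltk.download("wordnet", quiet=True)      # fetch the lexicon on first use
from nltk.corpus import wordnet as wn

# Each label corresponds to a WordNet "synset": a set of synonymous nouns plus a gloss.
for synset in wn.synsets("alcoholic", pos=wn.NOUN):
    print(synset.name(), "-", synset.definition())

# A labeller handed a fixed list of synsets like these has no option that says
# "this label simply shouldn't be applied to a photograph of a person".
```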

Whatever the potential of AI might be (and it's vast: see above), this kind of development is fundamentally inconsistent with ESG values, which are designed to protect ethical community engagement and ensure high levels of corporate governance. And the central problem is a perception that Artificial Intelligence lacks any human component…but it doesn't, and that's where better regulation comes in.


Better Regulation: Better Business


The creation and use of training datasets of this kind (so crucial to the future of AI) requires tougher regulatory oversight: as Kate Crawford puts it, "something that challenges the narrative that just because a technology can be built means it should be deployed".

But help is at hand: in April the European Union published its draft Artificial Intelligence Act, and the Australian Government has already issued new regulatory guidelines of its own (www.industry.gov.au): the so-called "AI Ethics Framework". Responsible companies, fully committed to ESG principles, are also working hard to develop emerging technologies capable of aligning our future much more closely with the human and social demands of the present.

They, at least, haven’t forgotten that there’s a lot about Artificial Intelligence that isn’t really artificial at all…


Red Ribbon Asset Management


Red Ribbon Asset Management (www.redribbon.co) is constantly searching for new and compliant ways to apply emerging technologies, including Blockchain, AI and Data Analytics: achieving its Mainstream Impact Investment (MII) objectives of optimal environmental and social impact, consistent with above-market-rate returns, and staying steadfastly committed to enhancing customer experience through the intelligent adoption of mainstream impact investment strategies.


Executive Overview


Just because a particular technology is viable doesn't mean it should be embraced without oversight, not to mention a proper, ongoing regard for ESG values. For my own part, I'm convinced AI will be part of all our futures, but that first requires a proper appreciation of the human values that underpin its success.


 Invest in Red Ribbon Asset Management 


Red Ribbon is committed to identifying and building on investment opportunities that fully comply with its core Planet, People, Profit policy: not only offering above-market-rate returns for investors but also protecting our Natural Capital.

 

If you would like to know more about joining our Mainstream Impact Investment journey, click here.

Suchit Punnose / About Author
