For two decades I have been working in the Wild West of tech startups, building digital products that might “put a dent in the universe”. But the closer my work has taken me to the cutting edge of intelligent systems, the deeper the questions I have found myself asking.
In my most recent role, I spent five years as Chief Operating Officer of an AI startup, where we built tools that interacted with one of the most intimate aspects of human life: our emotions.
Our customers included some of the world’s biggest brands and companies. For them, any technology that could understand how people think and feel would be a holy grail. The potential was enormous… but for good or bad?
Alongside my COO role, I was given the opportunity to join a diverse expert community at the IEEE, which is writing the world’s first ethical standards for AI. For nearly five years I have been Chair of one such group, and I take part in various others, contributing to the vibrant movement behind the world’s forthcoming standards, regulations and best practices in AI development. Through this work, I have helped to build toolkits, conduct research, advise major institutions on ethical practices, and run ethics workshops from Tunisia to Tokyo.
Twenty years ago, my best friend and I were building our own social media platform for fellow backpackers, as we planned a round-the-world trip. We built a site that was like MySpace and Facebook in one, before either of them existed. We thought we would travel the world while building an internet success… Our optimism outweighed our delivery.
I think that same optimism has kept me working in innovation and entrepreneurship, despite the risks of failure. Today, as seventy years of AI development finally explode into the mainstream – amidst talk of rampant unemployment and inequality (and just maybe the end of the human race) – I’m still optimistic about technology. I believe there are pragmatic ways for us to navigate this new frontier, and harness its power to build a positive future for planet and people.
One thing most people can agree on at the moment is that things are changing, fast. Regulation and strategy can take years to produce, while the tech is evolving by the week. Working with the AI Institute gives me the chance to share up-to-the-minute insights from the emerging field of AI Ethics, framed within a startup-style approach of experimental action, as we race to get ahead of the technology and its potential impact.
My intention for this course is that you will leave confident that you are tooled up and ready to participate, applying practical ethics to your own work. Whether you’re planning to use third-party AI tools, building your own, or simply curious about responsible AI practice, let’s roll up our sleeves and get stuck in together.