It’s the custom at Objectivity to invite our colleagues to give talks before the Christmas Party that will entertain and educate us all.
This year, I spent some of my personal time exploring artificial intelligence, and one of my targets was a sub-branch called reinforcement learning. It was fascinating, and in many ways I found a connection between the trade-offs needed within the AI model and some conscious or unconscious decisions that I’ve made in my own life.
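The central trade-off I have in mind is exploration versus exploitation: does the agent repeat the best choice it already knows, or gamble on something new? A common way to express it is an "epsilon-greedy" rule, sketched below. This is an illustrative snippet, not the code from my talk, and the names are my own.

```python
import random

def choose_action(q_values, epsilon):
    """Epsilon-greedy choice: with probability epsilon, explore a random
    action; otherwise exploit the action with the highest estimated value."""
    if random.random() < epsilon:
        return random.randrange(len(q_values))                   # explore
    return max(range(len(q_values)), key=q_values.__getitem__)   # exploit
```

A high epsilon means a life full of experiments; epsilon of zero means always playing it safe with what you already know.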
I suspect we’re all making similar choices, and I thought it could make an interesting topic for one of our Christmas talks. I wanted to try to illustrate how some of those decisions can affect the outcomes for an AI agent that wants to achieve a particular goal.
I hoped that by crystallising that trade-off and demonstrating its impact in an alternate reality, it might help us all make some of those choices more consciously, more of the time.
The video of my talk is here; it’s 16 minutes long:
With this link to one of my personal GitHub pages, you can see the AI agent and control his life choices yourself. The obstacles and death zones are generated randomly each time.
The simple rules by which the agent lives his life allow him to learn from experience and solve quite challenging problems. Although I didn’t cover this in my talk, I also designed a maze where one wrong step leads to death, then set the AI agent loose to learn the solution. There were more life lessons from that exercise, but it’s a longer story.
If you’re interested in learning more about AI and thinking about how it can help solve problems or capitalise on opportunities in your business, please give me a call.