Ethics is what drives us, whether our ethics are good or bad.

3.20: Professor Linda King, Oxford Brookes, set out the importance of cross-functional collaboration with faculties and industry.

3.35: Professor Nigel Crook, Director of the Institute for Ethical AI, set out their impressive projects, their progress on industry collaboration, and the importance of getting the ethical aspects of AI right in the context of their robotics and AI work.

4.00: Julian Burnett (IBM) set out the implications of people being left out commercially or as consumers, and the conscious and unconscious aspects of the tech environment we are creating,

and Susan Scott-Parker (Disability International) set out a passionate view of the importance of accommodating people with disabilities during recruitment and onboarding. She also raised the huge issue that low IQ could become the disability that marginalises 40% of people from the workforce.

4.15: Tom Pickering (me), and whilst somehow my slides were on autopilot! I set out the drivers of AI ethics and AI's blind spots. I shared how critical thinking creates the fluidity to realign strategy by continuously realigning analysis, ways of working, and governance. This enables leadership to gain a perspective on AI's role in the business plan and, through continuous, fluid critical thinking, keep strategy alive and healthy. I shared how outcome-centric regulation could make sense of piecemeal AI initiatives, and how the 1-day Game Changer fluidly realigns strategy and actions, establishes governance, and prevents ethical and strategic misalignments. This equates to a strategic realignment and the means to stay on track.

4.30: Sana Khareghani, Head of the UK Office for AI, set out a very pragmatic view, with a Canadian twist, of the UK Government's plans to regulate, develop leadership and other capabilities, and collaborate with industry.

4.45: Panel Discussion, chaired by Linda King, covered the ethical question of whether regulation, technology, or people are the answer, and the role each might play.


Right now, personally, I think it's fair to say that AI is being driven and sprinkled around recklessly, without any impartial advice or strategic oversight. So it's important to get a grip straight away.


We are ready to run Game Changer programmes to enable executive teams and cross-functional quorums to immediately get a grip, establish governance, act, and stay healthy.


If this does not happen immediately, I think the focus on tech dominance will definitely continue to usurp social responsibility.


My biggest concern is a lack of awareness and visibility of the macro issues: how they impact local strategies, the rework and unintended consequences they cause, and the increasing downside for people's ability to cope.


The way people are is a pretty good barometer: polarisation, lazy 'ism' labelling, attempts to silence, and our inability to control our attention are all signs that we are not coping.


A good, collaborative, positive day, with a lot of work to do ahead!