On 22nd March 2018 the AI and Robotics Directors Forum was held at Gallup, The Shard, London. Industry speakers from Ocado, Saberr, Satalia, Google, Sage, Sky, Gallup, PwC & the ICO contributed.
This is my summary and working notes from the 14 speakers and various discussions:
“Was there an emissions test for the original Model T Ford?”
Will humans ever be ready to accept that our decisions are wrong vs machine-led decisions?
The unresolved people-related business issues, and our thinking, will still dominate and determine the success of AI. The ICO don’t seem to have a clue, and their plan to keep tabs on progress looks entirely flawed.
We seem to be losing our rational decision making: by developing AI we are embarking on an unprecedented (inevitable?) process that will almost certainly lead to the demise of the human race, along with worklessness, which historically has always led to severe social unrest.
The success of AI will be determined by our thinking, yet our thinking is becoming increasingly flawed and made worse by the backdrop of uncertainty, the seismic challenges and increasing speed.
AI is likely to increase this uncertainty too, further undermining human ability to cope and, ultimately, to drive the right decision making!
As the Chinese curse says – “interesting times”. Corporations are making less money too, so there will be less effective support from public sectors raised through taxes; the broader solution is difficult to see.
It’s not good enough to say “we don’t know”. It’s an important time to face and prepare to manage the downside, rather than bowl on regardless?
AI seems to be used by Sky to fudge people-related, customer-facing issues, when there are good precedents left on the sidelines that have already been proven to resolve them.
Exec Summary
- Flawed human thinking will still drive development for a long time to come: machines copy our flawed biases.
- The same issues remain – low engagement, involvement and trust – and AI seems likely to make them worse, making risk management and roll-out very difficult: ethics execution will be down to people on the job, not protocol
- Watch out for flawed assumptions that drive some AI business models such as the assumption “people are machines”
- Humans will be using machines for collecting data and we will be asking the questions
- The people side of AI – defining the vision and managing ethics – will remain crucial
- 30-50% of jobs will go over the next 10 years
- Machines will ultimately be making decisions
- It’s very difficult for people to identify problems: machines are too fast and data sets processed too complex
- People make emotional responses; the right thing to do is not necessarily the best outcome
- The ICO do not seem to have a clue: transparency and human intervention in machine-learning-driven applications seem completely unrealistic. Auditing is likely to become impossible, if it isn’t already
- People will have to learn that we are wrong and “machines know better”
- Industry leaders such as Ocado must collaborate with researchers; they cannot wait for the technology
- Costs of automation are 1/3 of offshored and 1/9 of onshored costs
- Tech needs to understand humans not the other way around
- No man is an island – human success is defined by working together in small teams, and AI should support that
- There are major threats due to militarisation and other aspects of governance without a plan
- As a participant said, there was no emissions test for the Model T Ford
Headlines
- Watch out for flawed assumptions: it was said that “people are machines”; if so, we have no chance vs machines and AI?
- There are many examples of why people are terrible at making decisions
- We overestimate people’s ability to handle complexity:
- · Work allocation – allocating work to 15 people has 1 million variants!
- · Transport optimisation across 24 destinations – processing all the options would take 20bn years at 1m calculations/second
- General intelligence / singularity is when a machine will be able to make all decisions in response to complex and unpredictable inputs
- 70% of people’s cognition is currently wasted on office politics
- The employment tipping point will be when people cannot retrain fast enough to keep up with AI
- Embedding ethics requires AI to be explainable – a paradox, which seems to go against machine learning?
- Malfunctions and speed of making mistakes are very hard to identify
- Little mention of managing quality of AI and interaction between complex systems
- Ocado stay ahead through collaboration with academics and others – they can’t wait for the technology to pop out
- Costs of automation are 1/3 of offshored and 1/9 of onshored costs
- Complex AI is needed when the data input is non-standard and the processes are complex. Where the process is standard and simple you don’t need AI – only RPA
- Sky seem to be implementing AI when they’d be better off getting their customer service basics right first, and getting people on board?
- The ICO have little chance of staying up to date or managing the data risks; the law dictates transparency and human visibility, which seems 100% flawed with machine learning?
- Biases in society are reflected in the algorithms
- Board will be responsible for ethics
- Next phase of soft robotics sensing is identifying slippage and crushing, and supporting maintenance. Machines having a mutual understanding of human tasks and goals
- Consensus that 30% of jobs will be taken in 10 years. PwC dress this up by referring to jobs, not roles!
- System design: key factors are return on investment and piloting software systems offline
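The 20bn-year figure in the transport optimisation bullet above follows directly from factorial growth. A quick sketch in Python (my own illustration, assuming a naive brute-force search that scores complete routes one at a time):

```python
import math

# Brute force on a 24-destination route means scoring every possible
# ordering of the stops: 24! routes.
routes = math.factorial(24)            # ~6.2e23 orderings

# At 1 million route evaluations per second:
seconds = routes / 1_000_000
years = seconds / (365.25 * 24 * 3600)

print(f"{routes:.2e} routes, {years:.1e} years")   # ~2.0e10 years, i.e. ~20bn
```

So the quoted figure is plausible for exhaustive enumeration – which is exactly why practical optimisers rely on heuristics rather than checking every option.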
People side
- Tech needs to understand humans, not the other way around
- People make emotional responses; the right thing is not necessarily the best outcome
- No man is an island – human success is defined by working together in small teams
- Communication patterns define the success of teams – the best patterns are defined by team members being equally involved
- Optimised automation will make people very unhappy, social media is already leaving people isolated
- Encourage behaviour change, defined by different actions taken; AI can coach on these
- Fears of being judged – or of a machine sharing sensitive information with others – are important to avoid!
- · How will computers know what to share and what might be sensitive?
- There are the same problems to address that AI could make much worse:
- · 39% are ok with why they are doing what they do
- Decision biases lead to the wrong decisions – and what defines the right decision as right?
- People are not good at answering questions – they should focus on posing questions and evaluating answers
- · Be ready to accept we are wrong: who decides?
- · Be humble to the machine
- 45% free to express thoughts
- 48% feel employers care for employees
- 30% feel they are important to the future of the company
- Ethics can’t rely on regulators, because decisions happen on the job at every level
- Google offer the same voice service for males and females; community aspects are important to avoid loneliness
Other questions & issues
- Organisations could be decentralised due to technology such as blockchain
- Asking questions will become more central for people, leaving machines to find the information
- Sense-checking that we are going in the right direction is key – how will this happen when people can’t keep up? We will be the slowest step / weakest link?
- People who are not on the AI journey mainly see AI as a threat – so start to experiment
- How will regulation keep up with the changes?
- How will the ICO audit check systems?
- How can cultures keep up when, over the next 10 years, computing power will be 1000x the entire computing power of all the people on the planet?
- How could AI improve customer loyalty and relationship?
- Power considerations: 5% of the UK’s total power is consumed in data centres – significant changes could make a dramatic difference to global power consumption
- Materials availability: 97% of rare earth metals in electronics are owned by China
Business Adopters / Applications
- Automating and standardising tasks: used at O2 to switch mobile providers in hours and to reduce delays, which reduces WIP-related enquiries
- AI enables Deutsche Bank to listen in on all calls to identify potential fraud, rather than a sample
- Slaughter and May use it to read contracts and pull out relevant information from non-standard legal documents
- PayPal have reduced fraud from the industry average of 1.32% to 0.32% by using AI
- Goldman Sachs have reduced their traders from 600 to just 2 people, supported by 200 computer engineers
- Ocado: process 1.4m orders per week using 1,000 dumb robots, 1,200 supporting software engineers and 300 development engineers
AI deployment
· Vision driven
· Ask questions
· Emotional needs
· Complex unpredictable
· Ethics – right thinking
· Trust based – alignment and engagement
· Manage getting visibility of going in the wrong direction, faster
The importance of developing our human thinking
This is the Facebook mantra that would have directed Facebook’s and its partners’ actions to break the law: “To get people to the point where there’s more openness – that’s a big challenge,” Zuckerberg said. “But I think we’ll do it. I just think it will take time. The concept that the world will be better if you share more is something that’s pretty foreign to a lot of people, and it runs into all these privacy concerns.”
Talking like that – encouraging your people to treat worries about personal confidentiality as increasingly the stuff of the past – was always going to invite disaster. Get your business thinking right… to protect & enable your business to thrive.
Best, Tom