The House of Lords is where it was happening this evening: another #artificialintelligence All-Party Parliamentary Group session, "#artificialintelligence in the boardroom"

Can we manage the risk?

– Macro level: ethical and societal impact
– Micro level: e.g. verifying ChatGPT output and how it was arrived at

Where can we achieve an upside and manage the risk? e.g. Google Maps

What are the underlying assumptions?

– We can’t keep up with its rate of output
– Development is exponential, so the notion that we can train young people now is nonsense. Two days behind with exponential development, and you’ll never catch up

Can we understand or verify the output? Nope, it’s not a human process.

The "ethics in AI and bias" debate is another nonsense, because AI runs on bias!

ChatGPT will reduce our thinking and therefore our human processing too.

Is AI serving us, or are we serving AI?

There was stunned silence when the panel was asked for a case study of "AI for good".

A speaker commented that the cat is out of the bag and it’s gone too far. The parallel drawn was that this is what the silent German population said during the rise of Hitler.

#artificialintelligence is not inevitable; #boards can decide where there is an upside and whether they can manage the risk.

Deployment of AI without considering the outcome is the definition of reckless.

There are very few successful AI case studies that deliver an upside, so watch the psychology driving you.

My view as a techie and engineering board member is, broadly: don’t deploy AI unless you can manage the risk and there is a clear upside. Hypothetical applications that might qualify include AI for chip design and satellite navigation systems.

Destroying work at a macro level destroys your consumers: they need money to buy your products, which is why Henry Ford increased his employees’ wages so they could afford to buy his cars.

Overall, it was a pretty inconclusive discussion that raised a lot more questions than answers.

When you look at the psychology of AI and the scenarios, we are creating unsolvable technical problems. The closed-loop assumption is that more technology will fix these issues, but expecting more of the same to lead to a different outcome is surely the definition of madness?

China leads the pack on AI development too, so there is no chance of catching them, and chasing them creates a race to the bottom.

In particular, AI will increasingly dominate warfare, and the main difference between China and the West is that the Chinese CCP doesn’t care about people.

This needs sorting, but do boards know the psychology driving them, or who is pulling their strings?

PS: AI is in the wrong hands and being led by China, so the rest have to follow. Re cyber: China, through its leadership of quantum computing and AI, will create unstoppable cyber attacks, launched as ever at a time of its choosing. Cyber attacks are rightly categorised by the UK Government as warranting a nuclear response. We have no chance of catching China; how can you catch a country without ethics? There are a lot of Chinese students in our universities, many from Cambridge Uni last night, and there are 20k CCP members in Western corporates too.

PS re AI: you can’t define or program ethics into AI either; that requires faith and good people-to-people relationships, so the "AI fixes bias" discussion is absolute nonsense. Right now people haven’t even got the courage to stand up for what a woman is, so AI has free rein, leaving most CEOs to have become puppets for these lies?

Enjoy, Tom