Held on 9 Sept 2019 at the UK Parliament
Professor Birgitte Andersen, CEO, Big Innovation Centre
Intro: The AI healthcare market is worth $26bn, rising to $111bn by 2025, with more apps than any other sector. Young and old alike are active users. Progress is largely being fuelled by VCs, who fund development piecemeal.
Dr Ali Parsadoust, CEO of Babylon Health
Visibility when advice goes wrong – who is responsible? AI is used for decision support, but what is the liability if the medic ignores the advice? Health diagnosis is not binary.
When should a medic overrule the decision support? Do they have visibility of how the decision support advice was reached?
With decision support, the medical profession is being deskilled over time, so it may become less likely to challenge the decision support.
Who has the expertise to analyse the AI trials? It requires a complex, multi-disciplinary approach.
Who owns the liability for AI, and who is liable for errors – the NHS, the technology supplier, the data…
Who has the expertise to regulate it?
Individual users can buy applications directly on the international market. This one-to-one relationship is a route to the Trojan horse of dismantling the national health system.
Data use is an issue: consent, the ability to revoke data use, the difficulty of anonymising data, and of processing freedom of information requests.
Ethics requirements differ by country.
The supporting data is vast because the whole patient is considered.
Professor Roma Maguire, Professor of Digital Health and Care, University of Strathclyde
Human factors are important – telling patients what is "normal", or predicting symptoms, is not helpful and creates a self-fulfilling prophecy.
50% said these applications were a good idea, and 11% said they were dangerous.
Major trust issue between clinician and patient
Dr Navin Ramachandran, Healthcare Specialist in distributed ledger and IoT, IOTA Foundation
10% of global GDP is spent on healthcare, with mental health badly served.
60% of people globally have zero access to healthcare
Diagnosis tools are based upon probability, with data passed to the clinician, who can prescribe online. Broader signals feed in – for example, increased phone use by a depressed patient indicated an increased risk of suicide (a toy sketch of such a probability score follows below).
Data set formats are variable, and a lack of boundaries can cause unintended consequences.
These direct services could fragment national healthcare frameworks and introduce a private healthcare solution by stealth
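As a purely illustrative sketch of the probability point above (the talk described no specific model; every signal name, weight, and threshold here is a hypothetical assumption), a triage tool of this kind might combine behavioural signals into a risk score with a logistic model and surface the result to a clinician:

```python
import math

# Hypothetical behavioural signals for one patient (names and values
# invented for illustration; not any real product's inputs).
signals = {
    "daily_phone_hours": 9.5,   # elevated phone use
    "sleep_hours": 4.0,         # disrupted sleep
    "questionnaire_score": 20,  # self-reported mood questionnaire
}

# Illustrative (assumed) logistic-regression weights and intercept.
weights = {
    "daily_phone_hours": 0.25,
    "sleep_hours": -0.30,
    "questionnaire_score": 0.15,
}
intercept = -4.0

def risk_probability(signals, weights, intercept):
    """Logistic model: p = 1 / (1 + exp(-(b0 + sum(w_i * x_i))))."""
    z = intercept + sum(weights[k] * x for k, x in signals.items())
    return 1.0 / (1.0 + math.exp(-z))

p = risk_probability(signals, weights, intercept)
print(f"Estimated risk: {p:.2f}")

# Decision support only: the score is surfaced to the clinician, who
# decides what to do with it – the tool itself prescribes nothing.
if p > 0.5:
    print("Flag for clinician review")
```

The design point mirrors the discussion: the tool outputs a probability, not a decision, so responsibility for acting on it stays with the clinician.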
A professor from St Thomas' Hospital says the expectation is of radiologist headcount savings, but the technology grows continually more complex, so the goalposts shift and productivity improvements are difficult to isolate and realise. St Thomas' alone uses 43,000 pieces of digital equipment, largely sourced locally by clinical staff, and freedom of information requests are laborious and manual to answer because data is stored on different platforms. They are developing collaborative models in which suppliers provide the algorithms to run on the hospital's data, so the power and value are retained in the radiology data.
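A minimal sketch of that "bring the algorithm to the data" pattern (the talk gave no implementation detail; all names and types below are hypothetical): the hospital executes a supplier's model inside its own environment and releases only per-study scores, so the raw imaging data never leaves its platforms.

```python
from typing import Callable, Iterable

# Hypothetical record: (study_id, pixel_data) pairs held by the hospital.
Study = tuple[str, bytes]

def run_supplier_model(studies: Iterable[Study],
                       model: Callable[[bytes], float]) -> dict[str, float]:
    """Run a supplier-provided model inside the hospital's environment.

    Only per-study scores cross this boundary; the raw radiology data
    stays on the hospital's own platforms.
    """
    return {study_id: model(pixels) for study_id, pixels in studies}

# Stand-in for the supplier's algorithm (a stub, not a real classifier).
def toy_model(pixels: bytes) -> float:
    return (len(pixels) % 100) / 100.0

if __name__ == "__main__":
    local_studies = [("study-001", b"\x00" * 1234), ("study-002", b"\x00" * 5678)]
    print(run_supplier_model(local_studies, toy_model))  # scores only
```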
The risk is being borne by clinicians, because these AI tools are framed as "decision support only" and so are not formally used for decision making.
In summary
Rather a complete mess, with the development spend fuelled by VCs. It seems the technology is already too complex to interrogate, and humans already too stupid to manage and regulate AI in healthcare. There are some fundamental issues that should be resolved, but because development is being fuelled piecemeal by VCs, who has an interest in picking up the baton and resolving them?
The importance of trust, and the power of human-to-human, face-to-face communication beyond the purely psychosomatic, seem overlooked and unmeasured as part of the solution. The mental health effects of reducing face-to-face contact, and of predictive analysis tools, seem fundamentally negative. Does a focus on tech-driven convenience, with reduced human contact, work in healthcare and the treatment of disease?
This is in part akin to Elon Musk's preferred strategy of democratised, decentralised AI, although that too leaves huge gaps around control and oversight of the macro implications: data use, societal impact, market shrinkage, regulation and legislation.