Sunday, November 12, 2017

7 ways to use AI to massively reduce costs in the NHS

I once met Tony Blair and asked him “Why are you not using technology in learning and health to free it up for everyone, anyplace, anytime?” He replied with an anecdote: “I was in a training centre for the unemployed and did an online module – which I failed. The guy next to me also failed, so I said ‘Don’t worry, it’s OK to fail, you always get another chance…’ To which the unemployed man said ‘I’m not worried about me failing, I’m unemployed – you’re the Prime Minister!’” It was his way of fobbing me off.
Nevertheless, 25 years later, he publishes this solid document on the use of technology in policy, especially education and health. It’s full of sound ideas around raising our game through the current wave of AI technology. It forms the basis for a rethink around policy, even the way policy is formulated, through increased engagement with those who are disaffected and direct democracy. Above all, it offers concrete ideas in education, health and a new social contract with the tech giants to move the UK forward.
In healthcare, given the challenges of a rising and ageing population, the focus should be on increasing productivity in the NHS. To see all solutions in terms of increasing spend is to stumble blindly onto a never-ending escalator of increasing costs. Increasing spend does not necessarily increase productivity; in some cases it can decrease productivity. The one thing that can fit the bill, without inflating the bill, is technology, AI in particular. So how can AI increase productivity in healthcare?
1. Prevention
2. Presentation
3. Investigation
4. Diagnosis
5. Treatment
6. Care
7. Training
1. Prevention
Personal devices have taken data gathering down to the level of the individual. It wasn’t long ago that we knew far more about our cars than our own bodies. Now we can measure vital signs, critically, across time. Lifestyle changes can have a significant effect on the big killers: heart disease, cancer and diabetes. Nudge devices, providing the individual with data on lifestyle – especially exercise and diet – are now possible. Linked to personal accounts online, personalised prevention could do exactly what Amazon and Netflix do: nudge patients towards desired outcomes. In addition, targeted AI-driven advertising campaigns could also have an effect. Public health initiatives should be digital by default.
2. Presentation
Accident and Emergency can quickly turn into a war zone, especially when General Practice becomes difficult to access. This pushes up costs. The trick is to lower demand and costs at the front end, in General Practice. First, GPs must adopt technology such as email, texting and Skype for selected patients. There is a double dividend here, as this increases productivity at work: millions need not take time off to travel to a clinic, sit in a waiting room and get back home or to work. This is a particular problem for the disabled, the mentally ill and those who live far from a surgery. Remote consultation also means less need for expensive real estate – especially in cities. Several components of presentation are now possible online: talking to the patient, visual examination, even high-definition images from mobiles for dermatological investigation. As personal medical kits become available, more data can be gathered on symptoms and signs. Trials show patients love it, and successful services are already being offered in the private sector.
Beyond the simple GP visit lies a much bigger prize. I worked with Alan Langlands, the CEO of the NHS and the man who implemented NHS Direct. He was adamant that a massive expansion of NHS Direct was needed but commented that they were too risk-averse to make that expansion possible. He was right, and now that these risks have fallen, and the automation of diagnostic techniques has risen, the time is right for such an expansion. Chatbots, driven by precise discovery techniques, can start to do what even doctors can’t: preliminary diagnosis at any time, 24/7, efficiently and, in some areas, more accurately than most doctors. Progress is being made here; AI already has successes under its belt and progress will accelerate.
3. Investigation
Technology is what speeds up the bulk of investigative techniques: blood tests, urine tests, tissue pathology, the reading of scans and other standard tests have all benefited from technology. In pathology, most cancer diagnosis takes place by looking at tissue under a microscope. Observer variability will always be a problem, but image-analysis algorithms are already doing a good job here. Digitising slides and scans also means the death of distance. Faster and more accurate investigation is now possible. Digital pathology and radiology, using data and machine learning, are the future.
4. Diagnosis
AI already outperforms doctors in some areas, matches them in others, and it is clear that progress will be rapid elsewhere. This does not mean that doctors will disappear, but it does mean that they, and other health professionals, will have a lighter workload and be able to focus more on the emotional needs of their patients. Many symptoms are relatively undifferentiated, some conditions are rare, and probability-based reasoning is often beyond the unaided clinician. AI and machine learning offer a way past this natural, rate-limiting step. We must accept that this is the way forward.
5. Treatment
Robot pharmacies already select and package prescriptions. They are safer and more accurate than humans. Wearable technology can provide treatment for many conditions, as can technology provided for the patient at home. Repeat prescriptions and ongoing treatment could certainly be better managed by GPs and pharmacists online, further reducing workload and pressure on patients’ time. Above all, patient data could be used for more effective treatment and a vast reduction in waste through over-prescribing.
In hospitals, automated robots such as TUG are already delivering medication, food and test samples, reducing the humdrum tasks that health professionals have to do, day in, day out. Essentially a self-driving vehicle, it negotiates hospital corridors, even lifts, using lasers and internally built AI maps. The online management of treatment regimes would increase compliance with those regimes and save costs.
6. Care
Health and social care are intertwined. Much attention has been given to robots in social care, but it is AI-driven personalized care plans and decision support for care workers, along with more self-care, that hold most promise and are already being trialled. AI will help the elderly stay at home longer by providing detailed support. AI also gives support to carers. It may also, through VR and AR, provide some interesting applications in autism, ADHD, PTSD, phobias, frailty and dementia.
7. Medical education
Huge sums are spent on largely inefficient medical training. There are immense amounts of duplication in the design and delivery of courses. AI can create high-quality, high-retention content in minutes not months (WildFire). Adaptive, personalized learning gets us out of the trap of batched, one-size-fits-all courses. On-demand courses can be delivered, and online assessments – now possible with AI-driven digital identification, keystroke tests and automated marking – make assessment easier. Healthcare must get out of the ‘hire a room with round tables, a flipchart and PowerPoint (often awful)’ approach to training. The one body that is trying here is HEE, with its e-Learning for Healthcare initiative. Online learning can truly increase knowledge and skills at a much lower cost.

It is now clear that AI can alleviate clinical workload, speed up doctor-patient interaction, speed up investigation, improve diagnosis and provide cheaper treatment options, as well as lower the cost of medical training. We have a single, public institution, the NHS, where, with some political foresight, a policy around the accelerated research and application of AI in healthcare could ease the growing burden. Europe has 7% of the world’s population, 25% of its wealth and 50% of its welfare spending, so simply spending more on labour is not the solution. We need to give more support to healthcare professionals and make them more effective by taking away the mundane sides of their jobs through AI, automation and data analysis.


Monday, November 06, 2017

47% of jobs will be automated... oh yeah... 10 reasons why they won’t….

I’ve lost count of the times I’ve seen this mentioned in newspapers, articles and conference slides. It is from a 2013 paper by Frey and Osborne. First, it refers only to the US, and only states that such jobs are under threat. Dig a little deeper and you find that it is a rather speculative piece of work. AI is an ‘idiot savant’, very smart on specific tasks but very stupid and prone to massive error when it goes beyond its narrow domain. This paper errs on the idiot side.
They looked at 702 job types then, interestingly, used AI itself (machine learning), which they trained with 70 jobs judged by humans as being at risk of automation or not. They then trained a ‘classifier’, a software program, with this data to predict the probability of the other 632 jobs being automated. You can already see the weaknesses. First, the human-labelled training set: get this wrong and the error sweeps through the much larger set of AI-generated conclusions. Second, the classifier: even if it is out by only a little, it can reach wildly wrong conclusions. The study itself, largely automated by AI, rather than being a credible forecast, is more useful as a study of what can go wrong in AI. Many other similar reports in the market parrot these results. To be fair, some are more fine-grained than the Frey and Osborne paper, but most suffer from the same basic flaws.
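To make that methodology concrete, here is a minimal sketch of the train-on-a-seed-set, predict-the-rest pattern the paper used. Everything here is invented for illustration: the job names, the two features (how routine and how manual the work is) and the labels are all hypothetical, and a simple k-nearest-neighbour vote stands in for the authors’ more sophisticated classifier.

```python
# Sketch of the Frey & Osborne setup with hypothetical data: a small
# hand-labelled seed set trains a classifier, which then assigns
# automation probabilities to the remaining, unlabelled jobs.

# Each job is described by two illustrative features on a 0-1 scale:
# (routine-ness, manual-ness). Labels: 1 = automatable, 0 = not.
seed = {
    "telemarketer": ((0.9, 0.2), 1),
    "data-entry":   ((0.95, 0.1), 1),
    "assembler":    ((0.8, 0.9), 1),
    "surgeon":      ((0.2, 0.7), 0),
    "therapist":    ((0.1, 0.1), 0),
    "teacher":      ((0.3, 0.2), 0),
}

def automation_probability(features, k=3):
    """k-nearest-neighbour vote over the seed set: the predicted
    probability is the share of the k closest labelled jobs marked 1."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    nearest = sorted(seed.values(), key=lambda item: dist(item[0], features))[:k]
    return sum(label for _, label in nearest) / k

# Probabilities for jobs outside the seed set: binary labels go in,
# a 0-1 range comes out -- the pattern the paper produced.
for job, feats in {"paralegal": (0.7, 0.15), "nurse": (0.25, 0.6)}.items():
    print(f"{job}: {automation_probability(feats):.2f}")
```

Even in this toy version you can see both weaknesses: flip one seed label, or nudge one feature, and every downstream probability shifts, yet the output still looks like a precise percentage.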
Flaw 1: Human fears trump tech
The great flaw is over-egging the headline. The fact that 47% of jobs may be automated makes a great headline but is a lousy piece of analysis. Change does not happen this way. In many jobs the context or culture means that complete automation will not happen quickly. There are human fears and expectations that demand the presence of humans in the workplace. We can automate cars, even airplanes, but it will be a long time before airplanes will fly across the Atlantic with several hundred passengers and no pilot. There are human perceptions that, even if irrational, have to be overcome. We may have automated waiters that trolley food to your table but the expectation that a real person will deliver the food and engage with you is all too real. 

Flaw 2: Institutional inertia trumps tech
Organisations grow around people and are run by people. These people build systems, processes, budget plans and funding processes that do not necessarily quickly lead to productivity gains through automation. They often protect people, products and processes that put a brake on automation. Most organisations have an ecosystem that makes change difficult – poor forecasting, no room for innovation, arcane procurement and sclerotic regulations. This all militates against innovative change. Even when faced with something that saves a huge amount of time and cost, there is a tendency to stick to existing practice. As Upton Sinclair said, “It is difficult to get a man to understand something, when his salary depends on his not understanding it.”
Flaw 3: Low labour costs
What is often forgotten in such analyses is the business case and the labour-supply context. Automation will not happen where the investment cost is higher than hiring human labour, and is less likely to occur where labour supply is high and wages are low. We have seen this recently in countries such as the UK, where a plentiful supply of low-cost labour through immigration has weakened the business case for innovation and automation. Many jobs could be automated, but the lack of investment money, the availability of cheap labour and low wages keep the bar for automation high. There are complex economic decision chains at work here that slow down automation.
Flaw 4: Hyperbole around Robots
Another flaw is the hyperbole around ‘robots’. Most AI does not need to be embedded in a humanoid form. Self-driving cars do not need robot drivers; vacuum cleaners do not need humanoid robots pushing them around. Most AI is invisible, online or embedded in the electronics of a device. As Toby Walsh rightly says, when he eviscerates certain parts of the Frey and Osborne report, there’s no way robots will be cutting your hair or serving your food by weaving through busy restaurants with several plates of food any time soon. The ‘Reductive Robot Fallacy’ is the anthropomorphic tendency to equate AI with robots, along with the idea that robot technology has to look like us and do things the way humans do them. The vast majority of robots – AI-driven machines that perform a useful function – do not look like humans; many are online and almost invisible.

Flaw 5: Hyperbole around AI
AI is an idiot savant: incredibly smart at specific things in specific domains but profoundly stupid at flexible and general tasks. This is why entire jobs are rarely eliminated through automation, except for very narrow, routine jobs, like warehouse picking and packing, spray-painting a car and so on. Accountants use spreadsheets; restaurants use dishwashers, mixers and microwaves. Most automation is partial, as the general worker still outfoxes AI. There are severe limitations to AI in many fields, not least the sheer amount of processing power needed to fuel the applications, as well as limitations in the maths itself. There is also a great deal of hype around the ‘cognitive’ capabilities of AI, led I suspect by that misleading word ‘intelligence’. AI is not conscious and has little in the way of cognitive skills. It may win at Go but it doesn’t know it has won.
Flaw 6: Garbage-in, garbage-out
This common flaw, as Walsh rightly spotted, was that the training data in the Frey and Osborne paper assigned each job an automation probability of either 0 or 1, yet the outputs ranged between 0 and 1. This is an example not so much of garbage-in, garbage-out as of binary-in, range-out. You can see this manifest itself in some absurd predictions around jobs that are unlikely to be automated, as well as underestimates in others, like hairdressing, waiting and cleaning. Beware of AI-generated predictions.
Flaw 7: Heuristics
The process of automation in employment is a messy business with many variables. Heuristics can help here. We can categorise jobs first as cognitive v manual, then as cognitive routine, manual routine, cognitive non-routine and manual non-routine. But even the distinction between manual and cognitive is not mutually exclusive: few manual jobs require no knowledge, planning or problem solving. These can be useful rules of thumb, but the world rarely falls neatly into these binary or four-way categories. Yet they often lie at the heart of predictive analysis. Beware of simplistic heuristics.
Flaw 8: Human bias
Bias in analysis is all too common. Take just one example: the analysis of education. The people doing the analysis are often academics or people with an academic bent. The Frey and Osborne paper conflates education into one group, as if kindergarten work were the same as academic research. The routine aspects of education – the fact that most teachers, trainers and lecturers do a lot of admin and work that is actually routine and repetitive – are conveniently ignored. Google, Wikipedia and online management and learning have already eaten into the employment of librarians and teachers. It is a displacement industry. Take one service – Google. As the task of finding things became super-fast, the process of learning, research and teaching became quicker. Library footfall falls, as we no longer have to troop off to the library to get the information. Amazon has commoditised the purchase of books. Commoditisation is what technology is good at, and what Marx recognised as a driving force in market economies. Educators don’t like to hear this, but they have a lot to gain here. Teaching is a means to an end, not an end in itself. It has been, and will continue to be, automated – not by robots but by smart, personalised, online learning.
Flaw 9: Activities get automated, not jobs
In truth, most jobs will be partially automated. This has been going on for centuries with technological advances. Sure, horse grooms and carriage drivers no longer exist, but car mechanics and taxi drivers do. Typesetters have been replaced by web designers. ATMs have simply changed the nature of bank tellers’ work, not completely automated the process. Indeed, in many professions the shift has been towards more customer service and less mechanical service. What matters is not necessarily the crude measure of ‘jobs’ being automated but rather ‘activities’ being automated. By activities, we mean specific tasks, competences and skills.
Flaw 10: New jobs
“65% of today’s students will be employed in jobs that don’t exist yet.” This is the sort of exaggeration that feeds bad consultancy. Most will be doing jobs that have existed for some time. Many will simply be doing jobs they didn’t plan on doing (and don’t like) or jobs that have changed somewhat through automation. Predicting which jobs or activities get automated is easy compared to predicting what new jobs will be created. The net total is therefore difficult to establish. Fewer people may be needed in certain areas, but new jobs will be created, especially in services.
To be fair, more recent analyses have moved on to more fine-grained concepts and data. McKinsey did a detailed analysis of 2,000-plus work activities across 800 occupations, with data from the US Bureau of Labor Statistics and O*Net. They quantified the time spent on these activities and the technical feasibility of automating them. NESTA did a breakdown of specific skills.
The crude headlines will continue, but we’re starting to see more detailed and realistic analysis that will lead to better predictions. This is important, as educational bodies need to adapt what they teach, how they teach it and to whom. As change accelerates, education and training will need to be more sensitive and adaptive to it. This means more accurate prediction of demand and quick adjustments in supply. I’d go for around half of the Oxford figures, with the caveat that more service jobs will be created, so the net total will be 10-20%. There will be no sudden shift in months but a gradual bite by bite into activities within jobs. This is the field I work in, invest in, write and talk about (see WildFire), so I’m not coming at this from the sceptic’s point of view. AI will change the world, and the world of learning, but not in the way we think it will.


Friday, November 03, 2017

EdTech – all ‘tech’ and no ‘ed’ – why it leads to mosquito projects that die….

‘EdTech’ is one of those words that make me squirm, even though I’ve spent 35 years running, advising, raising investments, blogging and speaking in this space. Sure it gives the veneer of high-tech, silicon-valley thinking, that attracts investment… but it’s the wrong word. It skews the market towards convoluted mosquito projects that quickly die. Let me explain why.
Ignores huge part of market
Technology-based or computer-based learning long pre-dated the term EdTech. In fact, the computer-based learning industry cut its teeth not in ‘education’ but in corporate training. This is where the big LMSs developed, where e-learning, scenario-based learning and simulation grew. The ‘Ed’ in ‘EdTech’ suggests that ‘education’ is where all the action and innovation sits – which is far from true.
Skews investment
The word EdTech also skews investment. Angels, VCs, incubators, accelerators and funds talk about EdTech in the sense of schools and Universities – yet these are two of the most difficult, and unpredictable, markets in learning. Schools are nationally defined through regulation, curricula and accreditation. They are difficult to sell to as they have relatively low budgets. Universities are as difficult, with a strong anti-corporate ethos and difficult selling environment. EdTech wrongly shifts the centre of gravity away from learning towards ‘schooling’.
Not innovative
I’m tired of seeing childish and, to be honest, badly designed ‘game apps’ in learning. They are the first port of call for the people who are all ‘tech’ and no ‘ed’. It wouldn’t be so bad if they really were games players or games designers, but most are outsiders who end up making poor games that no one plays. Or yet another ‘social’ platform, falling for the old social constructivist argument that people only learn in social environments. EdTech in this sense is far from innovative; it’s innocuous, even inane. Innovation is only innovation if it is sustainable. EdTech has far too many unsustainable models – fads dressed up as learning tools and services.
Mosquitos not turtles
Let’s start with a distinction. First, there are what I call MOSQUITO projects, which sound buzzy but lack leadership, real substance, scalability and sustainability. They’re short-lived, and often die as soon as the funding runs out or the paper/report is published. These are your EU projects, many grant projects…. Then there are TURTLES, sometimes duller but with substance, scalability and sustainability, and they’re long-lived. These are the businesses or services/tools that thrive.
Crossing that famous chasm from mosquito to turtle requires characteristics that are often missing in seed investment and public-sector funding in the education market. Too many projects fail to cross the chasm because they lack the four Ss:
Senior management team
Sales and marketing
Scalability
Sustainability
There are two dangers here. First, underestimating the market, so that mosquito projects fall into the gap as they fail to find customers and revenues. This is rarely to do with a lack of technical or coding skills but far more often a paucity of management, sales and marketing skills. The other danger is bogging projects down in overlong academic research, where one must go at the glacial speed of the academic year and ponderous evaluation, not the market. These projects lose momentum and focus and, in any case, no one pays much attention to the results. As the old saying goes, “When you want to move a graveyard, don’t expect much help from the occupants.”
Either way, a serious problem is the lack of strategic thinking and a coherent set of sales and marketing actions. When people think of ‘scale’ they think of technical scale, but that goes without saying on the web; it’s a given. What projects need is market scale. What is your addressable market? This is why the ‘schools’ market is so awful. Where are the budgets? Who are the buyers? Who will you actually sell to? How big is the market? Do you realise that Scotland has a different curriculum? What market share do you expect? Who are your competitors? Answer these questions and you may very well decide to find a proper job.
Education is not necessarily where it’s all at in the learning market. Neither is that now rather dated culture of workplaces with pool tables and dartboards, in offices full of primary colours that look more like a playschool than a tech startup. We spend only a fraction of our lives in school, less in college and most of it in work. The corporate training and apprenticeship markets have more headroom, offer more room for innovation and have sustainable budgets and revenues.
