The Future of Work

Things are starting to change…

In his talk, “Ideas are Your Only Currency”, Rod Judkins, an author and lecturer at the prestigious art school Central St Martins College of Art, recounted how a surgeon at the Royal Free Hospital in London had asked him to teach a group of Applied Medical students how to think more creatively.

The medical school recognised the pressing need to produce more ‘idea students’ rather than ‘students who have skills’. It was clear to them that many skill-based jobs at hospitals will become redundant, as procedures such as eye or kidney operations can now be performed by robots. Diagnostics, too, are increasingly automated.

Instead, they would like to focus on training future doctors to apply creative thinking to the available medical technology. An example of this applied medical creativity is ‘spray-on skin’ with stem cells for burn victims. They are also trying to get students to think about how current procedures can be improved. In other words, they want to produce more medical innovators rather than typical doctors.

How much longer do we have until full automation?

Firstly, this depends on the sector. Some areas, such as agriculture and in particular grain production, are already highly automated and have very little human input left. Other areas depend on the coordination of a whole value chain, e.g. online grocery retailing, which can take decades to establish. There are also fields where human input is currently essential and full automation is only a distant prospect.

[Figure: machine learning and decision-making, from whatsthebigdata.com]

Secondly, the introduction of new technology may be delayed where cost-benefit analysis fails to justify its implementation. Dejian Zeng is an NYU grad student who spent six weeks working undercover at a Pegatron iPhone factory in China. The experience convinced him that, with the current operation, the iPhone will likely never be made in the US. He said that with a wage of 2,320 yuan (around $400) per Chinese worker, it would be impossible to pay even a base salary to a US worker for the same amount. Thus, if the factories were relocated to the US, the bulk of the work would have to be done by machines.

In other words, as long as the cost of labour is lower than the cost of machines, the factories will remain in China. In fact, according to Zeng, tasks such as fitting the camera and battery into their respective housings are already automated; even the Chinese workers are under no illusion and expect their own workstations to be automated very soon, too.

Thirdly, it depends on whether the new technology is being integrated into an existing system or is used to replace an old system entirely. If it is being slowly phased into an existing system, then coordination and fit would be the main challenges. The difficulty of coordinating the different elements of new and old technology should not be underestimated. The existing workforce needs to be retrained, including giving them an adjustment period for trial and error. In some cases, it would simply be easier and cheaper to scrap everything, re-vet the workforce, and start from scratch.

Lastly, delays to implementation may stem from political and legislative hurdles rather than economic ones.

In their paper “When Will AI Exceed Human Performance? Evidence from AI Experts”, Grace, Salvatier, Dafoe, Zhang and Evans conducted a large survey of machine learning researchers about their beliefs on progress in AI.

The results are as follows:

The researchers predict that AI will outperform humans in many activities in the next ten years, for example:

  1. translating languages (by 2024),
  2. writing high-school essays (by 2026),
  3. driving a truck (by 2027),
  4. working in retail (by 2031),
  5. writing a bestselling book (by 2049), and
  6. working as a surgeon (by 2053).

The researchers believe that “there is a 50% chance of AI outperforming humans in all tasks in 45 years and of automating all human jobs in 120 years, with Asian respondents expecting these dates much sooner than North Americans”.

What kind of workforce will we have in the future?

1. Human to machine

Automation gives a picture of a completely hands-free operation. The reality is that grocery crates get stuck in between conveyor belts, sensors need replacing after clocking a certain number of hours, software needs to be debugged and computer security needs to be updated.

Another factor is what’s known in the industry as ‘contingencies’: unanticipated complications that cannot be programmed for. For example, self-driving container trucks on their long hauls across state lines may experience accidental spillage of dangerous chemicals, bad weather or even just flat tyres. That is why, despite self-driving trucks, truck drivers would still be needed on the trip as monitors and a precautionary measure.

Out of those needs, a new industry could arise purely for the maintenance, monitoring and organising of robots, AI, sensors and various moving parts – in effect, a whole new breed of automation rangers or mechanics to keep the entire auto-ecosystem running.

In the meantime, we also need humans to train machines, especially in areas where we don’t have millions of data points for training. This will be a huge industry in itself.

2. Human to human

In a recent article, “Never mind the robots; future jobs demand human skills”, Sarah O’Connor writes, “By 2030, there will be 34 ‘super-aged’ countries, where one person in five is over 65. Robots can help workers to look after these people but they cannot replace them, nor should we want them to. As the chief executive of Adidas pointed out recently, robots cannot even lace shoes into trainers, let alone help a frail person into the shower. They do not possess any of the qualities that make humans good at caring for each other, like compassion, patience, humour and adaptability.”

The noble idea that a substantial workforce will take care of the elderly is misleading – not many will have the stamina, perseverance or dedication to nurse an elderly person. It is difficult to believe that many a Mr or Ms Nightingale can be plucked out of this device-fixated generation. To be in the care industry, one must have a calling for it, precisely because the job needs a lot of patience and compassion. Those who enter because there are no other alternatives are bound to be frustrated and to carry out their job poorly. As a cautionary note, look at how much abuse is reported in old folks’ homes.

The care industry may be the fastest growing for now, but only people who have never cared for an elderly person would romanticise what is in essence a very emotionally and physically demanding, non-glamorous, solitary and repetitive job. What is more, the elderly might not even want to be attended to by humans, preferring robotic help so as to retain their dignity and independence.

Nor might they even need to be. Advances in health science will mean that the elderly are more mobile, with advanced medicine preserving most of their cognitive functions; and if they do become infirm, there are even exoskeletons that can be used as walking aids.

Care is expensive and there are major challenges in how to pay for it. Without immigration as a source of cheap labour, automation is the only sustainable solution.

3. Human and government

Is it so far-fetched to imagine that, due to reduced government budgets, many civil service posts will be made obsolete, or that many types of bureaucracy, from building permissions to passport renewals, will be processed end to end by machine learning algorithms?

Perhaps politicians at Westminster, too, would be phased out. Imagine if policies for education, healthcare or immigration were produced by algorithms – minimising monetary and social costs, maximising benefits. For a discussion, one would tune in to the BBC Parliament channel to watch parliamentary debates on the strengths and biases of the algorithm-produced policy. One would then vote for algorithms instead of politicians. Would this produce a better outcome than today’s method?

Our politics and their consequences have always been a jumbled muddle of hits and misses, luck-of-the-draw exercises in which we are never sure whether promises will be kept and in which, even when they are kept, fulfilling them comes with huge execution risks and incompetence.

Policy timing would no longer be a finger-in-the-air decision but rather one made with the assistance of Big Data. For example, privatising a particular industry may be a good idea, but the data might tell the population to wait a couple more years, or flag that more legal infrastructure would first be needed to support the decision before it is carried out. In the future, AI and Big Data could act as both compeller and cautionary device, benefiting our political system tremendously in ways unimagined now.

It was Jacque Fresco, the futurist who had recently passed away, who said, “Computers don’t have ambition, they don’t say, ‘I want to control people.’ They don’t have gut instincts.”

4. Entrepreneurs and modular solutions

The beneficiaries of technology are not limited to those who know how to code. Just as we are able to surf the internet through browsers or prepare spreadsheets in Excel without a lick of coding knowledge, so there will be solutions that bridge the gap for the less technology-literate.

Tasks and potential businesses will be built upon modular solutions that can be bought off the shelf, just as you would buy the ingredients for a dinner you are about to prepare. These modular solutions can be sold individually or as packages to be assembled and tailored to your entrepreneurial requirements, fitting together as neatly as Lego bricks. The packages can provide analytics, access to data providers, robotics, machine learning and other AI solutions.

Just as cloud computing lowered the barrier to entry for internet start-ups, a similar data, machine learning and robotics start-up scene may develop, built on common tools and services and requiring detailed knowledge of no area beyond the one the company is exploring.
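
To make the ‘Lego brick’ idea concrete, here is a minimal sketch of what assembling off-the-shelf components might look like today, using the open-source scikit-learn library and one of its bundled toy datasets; the specific bricks chosen here are illustrative assumptions on my part, not a prediction of what future packages will offer.

```python
# Assembling off-the-shelf 'bricks': a ready-made dataset, a standard
# preprocessing step and a stock classifier, snapped together into a
# working pipeline without writing any bespoke algorithm.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)                     # data 'brick'
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = make_pipeline(StandardScaler(),               # preprocessing 'brick'
                      RandomForestClassifier())       # machine-learning 'brick'
model.fit(X_train, y_train)
print("accuracy:", model.score(X_test, y_test))
```

The point is not the particular components but the pattern: each brick hides its internal complexity behind a small interface, so an entrepreneur only needs to understand the problem they are solving, not every layer beneath it.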

5. Human and the financial market

The Economist reports that machine learning is advancing in everything from trading to credit assessment to fraud prevention. According to the same article, from the start of 2019 even Chartered Financial Analyst candidates will need AI expertise to pass their exam.

There is certainly a shift on Wall Street from hiring Ivy League jocks to siphoning as many quants as possible into the industry, and the benefits of using so many quants have already begun to show. In fact, the Economist writes, “Quant hedge funds, both new and old, are piling in. Castle Ridge Asset Management, a Toronto-based upstart, has achieved annual average returns of 32% since its founding in 2013. It uses a sophisticated machine-learning system, like those used to model evolutionary biology, to make investment decisions.

“It is so sensitive, claims the firm’s chief executive, Adrian de Valois-Franklin, that it picked up 24 acquisitions before they were even announced (because of tell-tale signals suggesting a small amount of insider trading). Man AHL, meanwhile, a well-established $18.8bn quant fund provider, has been conducting research into machine-learning for trading purposes since 2009, and using it as one of the techniques to manage client money since 2014.”

The rise of robo-advisors and other fintech services signals coming changes as well. According to the BlackRock Global Investor Pulse Report, 58% of millennial respondents would be interested in robo-advice, compared with 26% of baby boomers.

6. Human and construction

According to the article “Why the Construction Industry May Be Robot-Proof”, “Well, there is something unique about housing. Typically, home construction activity is custom work – remodelling, renovation, teardowns replaced by a single home, maybe a few homes built on a cul-de-sac. And it is difficult to gain economies of scale – or to automate processes – when every job, or close to every job, is unique.”

Outside of some prefab and highly standardised new builds using 3D printing technology, much of the building industry will remain very human-centred. Although the work of architects and interior designers could be machine-learned, clients will still want to interact with humans when discussing the design of their house – unless it is much cheaper to do otherwise. And a job like hanging wallpaper, with all the non-uniform surfaces and odd corners of a typical house, cannot easily be automated in the near future.

Construction is also happening in the virtual world. The virtual reality (VR) and augmented reality (AR) market will be worth almost $37 billion by 2027, according to the latest analysis by IDTechEx. We are underestimating the number of creators, artists, programmers and audio specialists that developers will need in this space.

7. The unemployed and Jugaad

Many will become unemployed as their human capital is neither complementary to machines nor cheap enough to make using machines non-viable. This aggravates wealth and income inequality. In the US, the share of wealth owned by the bottom 80% fell from 18.7% in 1983 to 11.1% in 2010. Median family income has stagnated while incomes rose significantly for the top 1%.

In the case of technology, inequality may mean little or no access to new technologies for the lower rungs of society. A simple example would be where the poor can only afford bargain-priced ink-based printers while the rich have long since upgraded theirs to germanium-based printers.

Jugaad, according to Wikipedia, refers to “an innovative fix or a simple work-around, a solution that bends the rules, or a resource that can be used in such a way. It is also often used to signify creativity – to make existing things work, or to create new things with meagre resources.”

Being unemployed does not mean you no longer have to ‘labour’ for your comfort. It may just be that, as a direct result of becoming unemployed and lacking resources, many people are forced to lengthen the lifetime of their existing technologies by repairing or modifying them. If a small part of a device breaks, for example, rather than throwing away the whole device and buying a new one, they may rely on a 3D printer to produce the replacement part – a printer operating commercially at a shop, or owned by the local council for public use.

Necessity is the mother of invention and we might see the spirit of Jugaad being adopted not just in India, but everywhere else in the world.

Validity of the claim and does it matter?

In “The Zombie Robot Argument Lurches On”, Lawrence Mishel and Josh Bivens write that the claim that ‘robots are taking over jobs’ is based on a flawed analysis and that there is no basis for claiming “automation has led – or will lead – to increased joblessness, unemployment, or wage stagnation”.

Therefore, any policy recommendations based on this flawed analysis, ranging from redesigning education and retraining the workforce to providing universal basic income, would not make much sense. Furthermore, Mishel and Bivens emphasise that the automation narrative is not validated by the Acemoglu and Restrepo report, writing, “The estimated impact of robots is small, and automation broadly defined does not explain recent labour market trends.”

Here, Mishel and Bivens were referring to Daron Acemoglu and Pascual Restrepo’s investigation into the equilibrium impact of industrial robots on local US labour markets. Acemoglu and Restrepo concluded that only a relatively small fraction of employment in the US economy is being affected by robots and that there is therefore no support for the view that new technologies will make most jobs disappear and humans largely redundant.

Mishel and Bivens urge us not to be distracted by this ‘zombie robot narrative’ and to focus instead on current issues such as wage stagnation and the rising inequality stemming from failures of macroeconomic policy and globalisation. Which brings us to the question: is this the right way to frame the issue?

Whether you have evidence for this right now or not, it is important to remember that many decisions have a very long lead time, such as how to educate our children or how to design government policy.

Personally, between the doubters and the dreamers, I know who I would choose to believe in.

What should we do to prepare?

1. Measure and track for good policy decisions

In the report “Information Technology and the U.S. Workforce: Where Are We and Where Do We Go from Here?”, Brynjolfsson et al. recommend developing a series of indices such as:

  • Technology progress index
  • AI progress index
  • Organisational change and technology diffusion index

To complement this, I think we should appoint a committee of ‘machine economists’ to produce a subjective annual report on the progress of technology and the workforce – with good bullet-point summaries for TL;DR readers. They could use proxies such as the most impactful new devices to come onto the market, the number of students graduating from technology-related courses, and so forth. Whatever events these ‘machine economists’ deem significant would be included in the annual report, keeping in mind that the intended readers are other economists, policy makers and journalists.

The idea is to focus on the end results of technological progress rather than adding up all the inputs that make up technology. This might be more useful as well as easier to account for. One could then gauge progress by following the evolution of the annually produced bullet points.

2. Innovative education more suitable for the ‘New Machine Age’

I think we have an imperative to redesign the education system to be better suited to this technological revolution, not in token resistance to being replaced by machines, but rather to take advantage of and multiply these technological dividends as much as we can. Looking at the welfare and social situations of many nations today, the decision to change the current education system may prove necessary rather than merely discretionary.

We need to equip the future workforce with a mindset that takes advantage of all the new possibilities technology will bring. After all, didn’t Richard Hamming, the American mathematician, say, “Teachers should prepare the student for the student’s future, not for the teacher’s past”?

Children need to be exposed to data handling and basic programming at a much earlier age – as early as four or five. Currently, these concepts are generally introduced either during secondary school or at university.

If data collection, probability and other statistics, arrays, IF/THEN, loops and related ideas are introduced in a bite-sized and fun format, children will grow up familiar with these concepts and learn them naturally. Furthermore, it is good to teach children logic, a useful way of thinking that benefits other areas of life too – much more useful than having them memorise the times-tables. I strongly believe that getting an education should be an adventure and not torture.
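
As a rough illustration of how bite-sized such a lesson could be, here is a toy sketch in Python (the language and the dice example are my own assumptions, not part of any particular curriculum) that touches loops, lists, IF/THEN and a first taste of probability in a handful of lines:

```python
import random

# Roll a die 1,000 times and collect the results in a list (an 'array').
rolls = [random.randint(1, 6) for _ in range(1000)]

# Count how often each face appears, using a loop and IF/THEN.
counts = {face: 0 for face in range(1, 7)}
for roll in rolls:
    if roll in counts:
        counts[roll] += 1

# Turn the counts into estimated probabilities and compare with the theory.
for face, count in sorted(counts.items()):
    print(f"Face {face}: rolled {count} times, "
          f"estimated probability {count / len(rolls):.2f} (theory: {1/6:.2f})")
```

A child playing with something like this is collecting data, looping, branching and estimating probabilities all at once, without a single times-table in sight.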

Creativity, something machines are still rather poor at, will be another important outcome of education, in contrast to rote learning of facts.

The objective is not that they all become programmers, but that they become technologically literate – think of how developing nations make English a subject in schools, not to produce a bunch of English teachers, but so that students can do jobs conducted in English. Similarly, while we don’t know the exact nature of future jobs, we can be fairly sure that reliance on machines for repetitive work will increase and that humans will be required for more creative, lateral-thinking tasks.

3. Counter the negatives

Professor Ian Goldin, who runs the Oxford Martin Programme on Technological and Economic Change, assesses that we are underestimating the shock that technology will unleash on the world. In an interview with Business Insider Australia, he said that we are on the brink of a ‘premature de-industrialisation’ that will “rattle societies in developed and emerging nations, as huge amounts of industrial labour is re-shored to advanced economies with fully automated production systems.”

To quote him, “There will be a re-shoring of production to the advanced economies and of call centres and other machine processes. Of course, this won’t be a re-shoring which will be labour intensive; it will be capital intensive. So, it’s not going to create jobs. I see that one of the real downside risks associated with this is a rapid widening of inequality. Large swathes of people, I think, will find that it’s challenging to get decent jobs. There will be lots of jobs for unskilled and service people, things that don’t require much machinery.”

The implication is that, for developing countries, the path to wealth through stages of industrialisation has just vanished into thin air. The global consequence would be economic growth slowing long before these countries catch up to developed levels. At the same time, adopting increasingly cheap technology from around the world should raise living standards even in the poorest places, even if those countries fail to raise their productivity much on their own.

What of the developed nations? At the back of our minds is the realisation that, without a share of production, the bottom 80% would remain in relative poverty with little opportunity to improve their lot. This kind of social composition cannot possibly be sustainable, even in a developed country with relatively better social welfare.

Recently, Eric S. Lander, president of the Broad Institute of MIT and Harvard, and Eric E. Schmidt, executive chairman of Alphabet, worried about the lack of funding for research in the US, wrote in a Washington Post op-ed:

“The United States has the most dynamic private sector in the world, with entrepreneurs, investors, big companies and capital markets all eager to license technologies and launch start-ups. But those ventures are often driven by technologies that come from basic research. Few companies undertake such research because its fruits are typically too unpredictable, too far from commercialisation and too early to be patentable.

That’s where government comes in. While investing in basic research typically doesn’t make sense for a business, it has been a winning strategy for our nation. For 60 years, the federal government has invested roughly a penny on each dollar in the federal budget into research at universities and research centers. In turn, these institutions have produced a torrent of discoveries and trained generations of scientific talent, fuelling new companies and spawning new jobs.”

In other words – do.not.cut.funding.in.science.and.technology.

4. Competitive landscape

One major concern to be addressed is the rise of superstar firms. In “The Fall of the Labor Share and the Rise of Superstar Firms”, Autor, Dorn, Katz, Patterson and Van Reenen posit that if globalisation or technological changes advantage the most productive firms in each industry, product market concentration will rise as industries become increasingly dominated by superstar firms with high profits and a low share of labour in firm value-added and sales.

They found that, “firms with superior quality, lower costs, or greater innovation reap disproportionate rewards relative to prior eras. Since these superstar firms have higher profit levels, they also tend to have a lower share of labour in sales and value-added. As superstar firms gain market share, across a wide range of sectors, the aggregate share of labour falls.”

In the FT article “Competition is not for losers – even in the digital era”, John Thornhill writes, “Competition policy has long been viewed predominantly as a tool to promote economic efficiency. But it remains a political and social construct. As such it must be reinvented to match the needs of our times, whichever economic story you believe.”

Unlike in previous episodes of increasing firm concentration, though, these superstar firms may be the ones undertaking the kind of ground-breaking research that delivers a better future, without hurting consumers through higher prices – their products often sell at close to zero marginal cost anyway. It will be important to rebalance the patent system so that these new giants cannot lock out rivals forever through litigation, and occasionally to make these new natural monopolies open up their data to others.

Conclusion

Nothing comes without costs, especially in a technological revolution such as this. We are potentially facing massive unemployment and a tsunami of shifting and re-ordering of the workforce such as we have never known before.

What may place us at risk is our inability to reform ourselves, and our ways of doing things, in time to mitigate the negative impact on the workforce. What concerns me is that, in assuming a defensive position against technology, we fail to capture all of the abundant dividends technology may bring us.

Worse still would be to let old technology crystallise out of a lack of enthusiasm and a nostalgia for the past – all because we fear the future too much and act too little in the present.

We should approach the future of work with optimism instead. Creativity, lateral thinking and ambition are all human strengths that will allow us to achieve great things in conjunction with machines.

Given the rich bounty of automation we’ll also be able to raise the living standards of every human being on earth, while at the same time making the remaining work more meaningful and less soul-destroying, whether through fewer hours spread across more people or through more interesting work.
