In an ingenious study published this summer, US researchers showed that within a few months of the launch of ChatGPT, copywriters and graphic designers on major online freelancing platforms saw a significant drop in the number of jobs they got, and even steeper declines in earnings. This suggested not only that generative AI was taking their work, but also that it devalued the work they still carried out.
[…]
Most strikingly, the study found that freelancers who previously had the highest earnings and completed the most jobs were no less likely to see their employment and earnings decline than other workers. If anything, they had worse outcomes. In other words, being more skilled was no shield against loss of work or earnings.
That’s got to be unnerving for a lot of folks. Chart here:
“[W]ithin a few months of the launch of ChatGPT, copywriters and graphic designers on major online freelancing platforms saw a significant drop in the number of jobs they got, and even steeper declines in earnings.” https://t.co/TGaym4Iivb pic.twitter.com/e0PKP700BR
— Steve Stewart-Williams (@SteveStuWill) November 10, 2023
But wait, there’s more:
But the online freelancing market covers a very particular form of white-collar work, in a very particular kind of labour market. What about looking higher up the ranks of the knowledge worker class?
For that, we can turn to a recent, fascinating Harvard Business School study, which monitored the impact of giving GPT-4, OpenAI’s latest and most advanced offering, to employees at Boston Consulting Group.
BCG staff randomly assigned to use GPT-4 when carrying out a set of consulting tasks were far more productive than their colleagues who could not access the tool. Not only did AI-assisted consultants carry out tasks 25 per cent faster and complete 12 per cent more tasks overall, but their work was also assessed to be 40 per cent higher in quality than that of their unassisted peers.
Employees right across the skills distribution benefited, but in a pattern now common in generative AI studies, the biggest performance gains came among the less highly skilled in the workforce. This makes intuitive sense: large language models are best understood as excellent regurgitators and summarisers of existing, public-domain human knowledge. The closer one's own knowledge already is to that limit, the smaller the benefit from using them.
Fascinating stuff.