Sam Altman, the co-founder and CEO of OpenAI, expects his company to capture as much as $100 trillion of the world's wealth.
He proposes a plan that will help redistribute that wealth and make sure that everyone is able to participate.
Now, it's important to understand that five years ago, many AI researchers mocked this man for his ideas about AGI and how fast progress would accelerate.
No one is mocking him now.
In fact, many people are asking him to pause AI research for 6 months, so we can put safety measures in place.
So keep this in mind as you hear his predictions: this person tends to see where the AI puck is going much better than most.
In his blog, Sam Altman argues that AI advancement will replace the need for human labor.
As the cost of human labor falls towards zero, we need to set up a policy that will allow us to distribute resources and improve the standard of living for everyone.
Around the same time, Goldman Sachs released a report on the massive impact AI could have on the global economy, including what percentage of the workforce is expected to be replaced by AI automation.
A very recent study from MIT shows some surprising findings when white-collar workers are asked to use tools like ChatGPT to help with their work.
Using AI tools seems to improve human productivity, but also to reduce productivity inequality among workers.
That is, people using these AI tools tend to produce less substandard work, make fewer errors, and "give up" less often on their assigned tasks.
In this video we look at these key studies that seem to indicate how powerful these AI tools will be. One thing to keep in mind is that in most of these projections the authors assume that progress will continue at the current growth rate.
No one is assuming an exponential growth rate.
Here’s Adam GPT, an employee at OpenAI responding with a picture from a blog called “Wait But Why”, which by the way is a great read.
The point here is that projections assuming progress continues at the same pace will likely severely underestimate where we end up in, say, 10 to 20 years.
So, let’s dive in…
The first part is Sam Altman's blog post titled "Moore's Law for Everything".
He walks us through his reasoning for how to handle the changes that AI will bring.
He starts by explaining that as AI gets better and better, it will gradually take over white-collar work, followed by assembly-line work and most manual labor, and soon after that most work as we know it, including scientific discovery.
Eventually these smart machines will help us make smarter machines which will further accelerate the pace.
The price of human labor will fall toward zero, and so will the cost of most goods.
This will need to be handled carefully, of course.
Sam suggests that we need policies that will redistribute the wealth gained from AI and allow everyone to improve their standard of living.
“This revolution will generate enough wealth for everyone to have what they need, if we as a society manage it responsibly.”
The article’s title “Moore’s Law for Everything” basically means that every good or service we produce will rapidly drop in price every few years.
In part 3 of the article, he explains the idea that instead of taxing labor as we have done in the past, we should use tax as a way to distribute ownership and wealth to citizens, to allow everyone to benefit from the economy as an equity owner.
This would provide a sort of “floor” that will allow everyone to live life as they want, but also allow for an unlimited “ceiling” where people would be free to continue to create companies, businesses and other ventures to generate capital.
Sam also believes that the growth of AI will make land more valuable, since it’s one of the few truly finite assets.
“There is also about $30 trillion worth of privately-held land in the US (not counting improvements on top of the land). Assume that this value will roughly double, too, over the next decade–this is somewhat faster than the historical rate, but as the world really starts to understand the shifts AI will cause, the value of land, as one of the few truly finite assets, should increase at a faster rate.”
So the two asset classes that would be taxed are land and corporations, with the proceeds going to all the people in that country.
I’m skipping some of the details here, and I encourage you to read this for yourself. Sam Altman is obviously a bright fellow and he has put a lot of thought into this, along with his team.
All links will be down below in the description.
At the end he warns that we have little time to start thinking about this before the wave of changes hits us.
He concludes that if we are able to execute this effectively, the future can be almost unimaginably great.
Now, 5 or 10 years ago, this might have been dismissed as science fiction and wishful thinking, but now it's obvious that EVERYONE is taking notice.
Here’s a recent paper by Goldman Sachs.
Goldman Sachs is a big deal because of its size, global reach, and prestige.
Its analysis tends to move markets and you can be sure that the ideas in this paper are being shared and talked about and acted upon across the globe.
It starts by saying:
"The recent emergence of generative artificial intelligence (AI) raises questions whether we are on the brink of a rapid acceleration in task automation that will drive labor cost savings and raise productivity."
The report notes that generative AI "reflects a major advancement with potentially large macroeconomic effects" and warns that "the labor market could face significant disruption".
“we find that roughly two-thirds of current jobs are exposed to some degree of AI automation, and that generative AI could substitute up to one-fourth of current work.”
“The boost to global labor productivity could also be economically significant, and we estimate that AI could eventually increase annual global GDP by 7%.”
They briefly describe why this generative AI is different from its predecessor machine learning methods, sometimes referred to as narrow or analytical AI.
They specifically mention two OpenAI products, ChatGPT and DALL-E, and point out that this AI can be used for general purposes, not just specific uses.
The next chart shows human baseline performance as a dotted line, and AI performance seems to have surpassed human baselines somewhere around 2016-2018.
They expect the money invested in AI to be 1% of US GDP by 2030.
And that’s assuming it matches the modest pace that software investment grew during the 90s.
More and more companies are talking about AI, specifically on company earnings calls.
And this is where they show a breakdown of which industries will likely be most affected by AI in the US and Europe.
Here in the US, it's expected that outdoor jobs such as building and grounds cleaning will not be affected much by AI, but Office and Administrative Support will be exposed to automation by almost half.
Legal is next on the list.
It makes sense that Legal is so high on the list: AI's ability to do document review and discovery quickly and accurately will allow it to replace many hours of human labor, and in this case that labor is very expensive.
When I tested ChatGPT 3.5, it already showed a strong ability to draft legal documents, suggest clauses to add, and explain what those clauses mean in simple language.
It was also excellent at reading through massive legal documents and pulling out specific details, such as a list of all fees that were mentioned in an apartment lease.
AI can also be great for its predictive ability.
AI can be trained to analyze historical case data and predict the outcomes of ongoing legal cases.
But on average, across all industries, about 25% of work is expected to be exposed to AI automation.
By the way, more advanced, developed countries are MUCH more exposed to automation than smaller, developing nations, for many reasons, including the fact that AI models are currently more capable in English than in smaller global languages.
The other important thing to understand here is that AI automation will come MUCH faster for knowledge work, while manual labor will be affected at a much slower rate.
The number of jobs in the US that could be completely substituted by AI is 7%, while 63% will be complemented by AI.
How AI complementary roles will play out in terms of job losses is hard to estimate.
If one person can, say, put out 5x more code, does that mean four people are let go? Or will those four use their freed-up capacity on other activities that increase output?
It may be that the demand for high-quality code and applications is effectively unlimited, but tasks with finite demand will see those jobs destroyed.
Next, Goldman shows several potential outcomes of massive job displacement.
According to one outlook, we can expect to see the jobs lost be replaced by new jobs that didn’t exist before.
For example, 60% of workers today are employed in occupations that did not exist in 1940, implying that over 85% of employment growth over the last 80 years is explained by the technology-driven creation of new positions.
However, some economists argue that technological change displaced workers and created new employment opportunities at roughly the same rate for the first half of the post-war period, but has displaced workers at a faster pace than it has created new opportunities since the 1980s.
This last part is interesting to note, basically saying that their projection can vary depending on how powerful the AI will be.
Here they use something called the O*NET difficulty level of tasks.
These are rated 1 to 7, with 7 being the highest level. The most advanced.
For example, in rating your level of speaking, a 2 would mean you can greet tourists and explain tourist attractions, a 4 would mean interviewing applicants to obtain personal and work history, and a 6 would be arguing a legal case before the Supreme Court.
So, these projections assume that AIs like GPT-4 will stay at around level 4.
It certainly seems like GPT-4 is able to handle tasks around difficulty level 4 right now.
GPT-4 is not yet able to operate at difficulty level 6, which would mean things like negotiating a complex treaty between two countries, determining the mathematics required to simulate a spacecraft landing on the moon, or estimating the total amount of resources under the world's oceans.
But Goldman Sachs says it would double its estimates if AI reaches that level.
GPTs are GPTs
Next, let's look at a paper called "GPTs are GPTs: An Early Look at the Labor Market Impact Potential of Large Language Models".
This is a paper by OpenAI and the University of Pennsylvania.
So, for those who are still catching up on all the AI lingo, here are the basics:
LLM means Large Language Model.
And GPT means Generative Pre-trained Transformer.
I encourage everyone to read up on these to gain a deeper understanding, but the main point is that these are the technologies that have created the recent AI boom.
So you can substitute LLMs and GPTs with AI, just think of them in general as AI systems.
Here are the main points of the paper:
"Our findings reveal that around 80% of the U.S. workforce could have at least 10% of their work tasks affected by the introduction of LLMs, while approximately 19% of workers may see at least 50% of their tasks impacted."
"The projected effects span all wage levels, with higher-income jobs potentially facing greater exposure to LLM capabilities and LLM-powered software."
"Significantly, these impacts are not restricted to industries with higher recent productivity growth."
"Our analysis suggests that, with access to an LLM, about 15% of all worker tasks in the US could be completed significantly faster at the same level of quality."
"When incorporating software and tooling built on top of LLMs, this share increases to between 47% and 56% of all tasks."
"This finding implies that LLM-powered software will have a substantial effect on scaling the economic impacts of the underlying models."
We conclude that LLMs such as GPTs exhibit traits of general-purpose technologies, indicating that they could have considerable economic, social, and policy implications.
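To make headline numbers like "80% of workers have at least 10% of their tasks exposed" concrete, here's a toy Python sketch of how such shares fall out of per-worker exposure fractions. The data and the function name are invented for illustration; they are not from the paper.

```python
# Each entry is the (hypothetical) fraction of one worker's tasks
# that are exposed to LLMs. Ten workers, values invented.
workers_exposed_fraction = [0.0, 0.05, 0.10, 0.20, 0.30, 0.40, 0.55, 0.60, 0.80, 1.0]

def share_with_at_least(fractions, threshold):
    """Share of workers whose exposed-task fraction meets the threshold."""
    return sum(f >= threshold for f in fractions) / len(fractions)

# With this invented sample, 8 of 10 workers have at least 10% of
# their tasks exposed, and 4 of 10 have at least 50% exposed.
print(share_with_at_least(workers_exposed_fraction, 0.10))  # -> 0.8
print(share_with_at_least(workers_exposed_fraction, 0.50))  # -> 0.4
```

The paper's real statistics come from O*NET task data and human/model exposure ratings, but the aggregation step is this kind of thresholded head count.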
Now, here I would point out that it's very possible, and in my opinion likely, that the findings Goldman Sachs put out are somewhat more conservative than OpenAI's.
OpenAI and Microsoft have been accused, by some, of putting out research papers that are overhyped.
However, keep in mind that these papers are produced by scientists, PhDs, and universities, and in general not by people who are there to pump a company's stock price.
Only time will tell which estimation will be the correct one, but as the beginning of 2023 has shown us, AI progress is moving very, very fast.
So in this paper they use the O*NET 27.2 database, which contains information on 1,016 occupations, including their respective Detailed Work Activities (DWAs) and tasks.
A DWA is a comprehensive action that is part of completing a task, such as “Study scripts to determine project requirements.”
A task, on the other hand, is an occupation-specific unit of work that may be associated with zero, one, or multiple DWAs.
So these are the activities workers do in their occupations.
And "exposure" is a measure of whether access to an AI system like ChatGPT would reduce the time a human needs to complete the task by at least 50%, while keeping the quality the same.
So for example, say you had to come up with headlines for a Google text ad that fit in, let's say, 50 characters. Normally you would have to sit down and write each one, test it to make sure it fit, and then create variations.
This would take time, effort and creativity.
ChatGPT, in my experience, reduced the time needed to complete this task by about 90%.
I could produce roughly 10x the output or if I had a specific number of headlines I needed, I could produce them in 1/10 the time.
This would make the Google ad copywriter occupation exposed to these AI tools.
When I loaded the dishwasher, however, ChatGPT was 0% helpful.
So the dishwasher loading job is NOT exposed to LLMs and AI powered tools.
The other type of exposure the paper mentions is LLM+.
This means that an LLM alone could NOT cut the time for a job task in half, but it COULD if some other tool or software were built on top of it.
So for example, if you have to take a list of purchases and categorize them for accounting purposes, an LLM can do that if you have them in a text file you can paste in, or if it can retrieve that data from the web.
However if you just have the physical receipts, ChatGPT can’t quite do that right out of the box, right now, as of this recording.
GPT-4 is capable of image recognition, but it’s not fully rolled out yet.
So LLM+ here means you would need something else, like image-recognition software, to link the two, and THEN the task would be exposed, meaning its time would be cut in half with LLM assistance.
Another example would be image generation systems.
Keep in mind that this study is not looking at progress in robotics and any sort of smart devices, this is strictly looking at tools like ChatGPT and other software with LLMs built in.
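As a rough sketch of how this rubric sorts tasks into tiers, here's a little Python illustration. The E1/E2/E0 labels follow the paper's tiers of direct exposure, LLM+ exposure, and no exposure as described above, but the function and the example scorings are my own, not code from the paper.

```python
def exposure_label(llm_alone_halves_time, llm_plus_tools_halves_time):
    """Classify a work task under the exposure rubric (sketch).

    E1: an LLM alone (e.g. ChatGPT) cuts completion time by at least
        50% at the same quality.
    E2: an LLM alone is not enough, but software built on top of an
        LLM (image recognition, data retrieval, etc.) would cut the
        time by at least 50%.
    E0: neither holds; the task is not exposed.
    """
    if llm_alone_halves_time:
        return "E1"
    if llm_plus_tools_halves_time:
        return "E2"
    return "E0"

# The three examples from this video, scored hypothetically:
print(exposure_label(True, True))    # writing ad headlines -> E1
print(exposure_label(False, True))   # categorizing paper receipts -> E2
print(exposure_label(False, False))  # loading the dishwasher -> E0
```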
So, what were the results? Here's the top-level finding:
"Our findings suggest that, based on their task-level capabilities, LLMs have the potential to significantly affect a diverse range of occupations within the U.S. economy, demonstrating a key attribute of general-purpose technologies."
One thing that jumped out at me, is that they used human “judges” to determine how any particular job task would be affected by AI.
So they might ask a number of humans to rate how exposed a task like "proofreading office emails" is to LLMs.
They also created a prompt for OpenAI's GPT-4, the model released in 2023, so that GPT-4, the LLM itself, scored the results alongside the human judges.
Here’s a chart of the Similarity of Human and GPT-4 ratings.
They. Are. Similar.
Near the top end of exposure ratings, humans are on average more likely to rate an occupation as exposed, but the ratings are very, very similar overall.
It's an interesting piece of this study: it shows that even the study participants themselves, the people observing the study and making subjective human judgments about it, are exposed to being replaced by AI.
Or at least, that AI can provide subjective judgments similar to those of humans.
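To give a feel for what "similar ratings" means quantitatively, here's a small Python sketch that computes a plain Pearson correlation between two rating lists. The occupation scores below are invented for illustration; this is not the paper's data, and not necessarily the exact agreement metric the authors used.

```python
from statistics import mean

def pearson(xs, ys):
    """Plain Pearson correlation between two equal-length rating lists."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical exposure scores for five occupations
# (0 = unexposed, 1 = fully exposed):
human_ratings = [0.1, 0.3, 0.5, 0.7, 0.9]
gpt4_ratings = [0.15, 0.25, 0.55, 0.65, 0.95]
print(round(pearson(human_ratings, gpt4_ratings), 3))  # -> 0.985
```

A correlation near 1.0 means the two raters rank occupations almost identically, which is the kind of pattern the chart in the paper shows.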
With that, let’s take a look at some of the results of this study.
First on wages.
The overall trend reveals that higher wages are associated with increased exposure to LLMs.
However, there are numerous lower-wage occupations with very high exposure.
So for example, a low-wage email customer-support worker might have all of their tasks exposed to LLMs, and there are numerous examples like that, while highly paid office workers simply tend to have more of their tasks exposed in general.
In terms of skill, here’s a quote:
"Our findings indicate that the importance of science and critical thinking skills are strongly negatively associated with exposure, suggesting that occupations requiring these skills are less likely to be impacted by current LLMs. Conversely, programming and writing skills show a strong positive association with exposure, implying that occupations involving these skills are more susceptible to being influenced by LLMs."
Here is a chart of job roles that are most exposed.
A few were labeled as "fully exposed": mathematicians, tax preparers, financial quantitative analysts, writers and authors, web designers, accountants and auditors, journalists, legal secretaries and administrative assistants, clinical data managers, and climate change policy analysts.
The rest of the study goes deeper and breaks down how affected different groups will be by years of training, education levels and many other factors.
In summary, it finds that most occupations have some exposure to LLMs, with higher-wage occupations having more tasks with high exposure.
Approximately 19% of jobs have at least 50% of their tasks exposed to LLMs, considering both current capabilities and future LLM-powered software.
The study ends with a summary, ostensibly written by a human, followed by a part written by GPT-4 summarizing the study, as well as a disclaimer:
LLM assistance statement.
GPT-4 and ChatGPT were used for writing, coding, and formatting assistance in this project.
I expect we will see a lot more of these statements as these tools permeate society.
I’m not sure what the first paper was to start adding these disclaimers, but we will likely see more and more policies around the use of LLMs.
So, what do you think?
Are our lives going to be completely changed in 5 to 10 years?
Is this going to be a utopia or a nightmare?
Please comment below.
I read every single one.
Thank you for watching (⌐■_■) ☜(ﾟヮﾟ☜)