Governance of Superintelligence | OpenAI proposes measures for safe AI development.

Sam Altman is asking Congress to regulate his own industry. Is this a trap?

Google’s leaked document shows that they are rapidly losing their grip on AI.

And now OpenAI releases a proposal for how to govern AI and calls for immediate action.

Something is up. It seems like something is happening behind the scenes.

Let’s take a look…

So, here’s a 30-second timeline of what has happened with AI in the last 6 months… is that right?! It’s been less than 6 months… that’s unreal.

1. OpenAI releases ChatGPT late 2022.

This is like the starter’s pistol that gets everyone racing towards developing the most advanced AI they can.

Not just US tech companies, but all the big global players as well.

China launches Ernie bot.

Russia launches GigaChat.

Britain tries to launch BritGPT, which, I still say, should be called GPTea.

Just do it.

Microsoft apparently managed to secure a 49% stake in OpenAI before most people even knew the deal existed.

So Microsoft is putting ChatGPT into everything in an attempt to, in their own words, “make Google dance”.

Then Meta/Facebook came along and released their model LLaMA, which was immediately leaked and reverse engineered, and now anyone with a laptop has access to these models.

What does all of this mean?

Well, to put it simply:

“The cat is out of the bag”

Any hope that AI could be contained is gone.

This technology is very powerful and very accessible, and it’s not inconceivable that some guy living in his mom’s basement will stumble on a breakthrough, hack, or application that goes much further than we realized was possible.

If you missed Google’s leaked document, here are some highlights.

This was written by a Google researcher and leaked by an anonymous employee.

The big point he is making is that while Google and OpenAI are racing each other, another “faction” is beating them.

“Eating our lunch,” as he puts it, and “lapping us.”

In other words, this faction is not winning by a little bit, it’s completely beating these massive companies.

That faction is Open Source.

That means anyone with a computer or a laptop, anywhere in the world, who is willing to learn, experiment, and share their findings on the internet.

What this means is that Big Tech just lost its grip on AI innovation. It’s now in the hands of the people.

But soon after this leak, Sam Altman and others met with Vice President Kamala Harris, who has now been crowned the AI czar.

Then Sam Altman, Gary Marcus, and an IBM representative were pulled in front of Congress to testify about AI regulation.

Now, this could be very bad; there are a lot of people who are very cynical about what’s about to happen.

Full disclaimer: I don’t necessarily think this is what’s happening, but it’s certainly something that might be happening.

And that is 

REGULATORY CAPTURE

This is something that, unfortunately, happens in the US.

In a nutshell, this is where large companies use US government regulation to their advantage.

They know more about their industry than the regulators do, and so they influence the regulators to make laws in their favor.

The companies that are talking to the politicians have connections and influence that others trying to enter the space don’t have.

Similar to how weapons companies influence defense regulators and drug companies influence the FDA, the fear is that AI companies will use this new government agency as a “moat” to keep others out of the AI development game.

So there is one theory that Sam Altman could be pushing for regulation not because of his stated mission of bringing AI safely to the world, but rather because of how far ahead OpenAI is: calling for regulation now would get him into the inner circle of people who are making the rules.

In fact, one of the senators did ask him if he wanted to be their advisor, to be the person actually shaping those regulations.

To his credit, Sam Altman said no, he was far too busy.

But this could also be seen as Sam Altman pulling the ladder up, so to speak.

He and his company were able to build a leading AI model, and now it’s “let’s start regulating” to make sure that others can’t get to that level.

Now I personally don’t believe that is his goal.

That’s just my opinion. Certainly, if his goal were to become one of the richest people in the world, this would be the playbook to make it happen.

If OpenAI and a few other players hold the keys to this disruptive technology and then get the government to basically outlaw any future competition, OpenAI would easily grow to be as large as the other tech giants; it would cross the trillion-dollar mark and keep going.

Sam Altman, however, does seem to keep pushing, repeatedly, for something that appears to be more aligned with his stated goals than with his net worth.

Keep in mind, he does NOT have equity in OpenAI. He limited how much money he would make from OpenAI and he limited how much investors could make.

They did grant a 49% stake to Microsoft, but that stake is capped, and those shares revert back to OpenAI’s nonprofit arm once Microsoft makes its investment back.

All of these decisions have been criticized and questioned, but it does seem like Sam Altman is fighting really hard to avoid making billions of dollars.

When Elon Musk made hundreds of millions from selling PayPal, he immediately put all that capital into SpaceX, Tesla, and SolarCity, and almost lost it all.

Some people aren’t playing for money, at least in the sense that money isn’t the end goal.

Sam Altman seems to be wired in that way as well.

Here is his tweet shortly after his congressional hearing:

“something like an IAEA for advanced AI is worth considering, and the shape of the tech may make it feasible:

The IAEA is the International Atomic Energy Agency, which has been around since 1957 and is an international effort to keep the world safe from the misuse of atomic energy.

Sam Altman adds:

(and to make this harder to willfully misinterpret: it’s important that any such regulation not constrain AI below a high capability threshold)”

And he adds that part because, I think, this is what the critics are attacking.

They are saying that he is simply doing this to create a moat around his business and keep others out.

Now, a lot of the responses seem like they are not taking Sam at his word.

It’s important to take this with a grain of salt, but also with an open mind.

These guys set up this company in a way that would limit how much money they would make. That was decided way back in 2015, before anyone could predict how far AI would go.

So either this is a long con and they are just really good conmen, OR they genuinely believe in their mission as well as the potential dangers of AI.

Let’s read the proposal. The authors are Sam Altman and Greg Brockman, two of the original founders of OpenAI, and Ilya Sutskever, who is seen by many as one of the world’s top experts on AI.

So that last part, to me, is very important, because most of the conversation is about: what if the AI kills us?

That’s an important question and we do need to get AI alignment right.

But the question people aren’t talking about is: what if a small group of people has full control over AI and we have no say in how it’s deployed?

We have no access to its benefits.

Think about your least favorite politician right now.

Who really makes your blood boil?

Who really gets your blood pressure dangerously high?

Imagine that person having control of this AI so that it’s carrying out their will, guarding them day and night, and working out scientific solutions for keeping them alive indefinitely.

These next 10 years could be crucial.

If all the hypothetical AI benefits are real, as we think they are…

…then the next 10 years WILL be crucial.

The next generation will either have a MUCH better life than us or a MUCH worse one. I think the words written on this page are going to be historically important.

I personally agree with everything that is written here, if taken at face value.

But I would like to ask you: what do you think about it?

Is Sam Altman sincere about these words, and is he genuinely trying to get us to the best possible outcome for everyone?