Reality or Matrix? Stanford & Google’s Pioneering AI NPC Experiment Unveiled!

Venture into an Unprecedented Virtual World: Stanford and Google Collaboratively Develop a Groundbreaking AI Experiment, Crafting an Entire Town of Autonomous NPCs.


Stanford and Google teamed up to put out a research paper about how to use ChatGPT to create a town full of individual AI agents.

The goal was to study whether they could create a believable simulation of how humans behave, interact, and live day to day.

Basically, it was like a video game in which every character is an autonomous NPC (non-player character).

Each one was given a basic backstory and its relationships to the other AIs (husband of X, son of Y, etc.).

Then one of them was given a “suggestion”, which worked like an inner monologue the agent felt it needed to comply with. In this case, the suggestion was to set up a Valentine’s Day party.

The point was to see whether they could create an environment where they only needed to tell one NPC what should happen, and that NPC would then do its best to make it happen (instead of having to script every single NPC to attend individually).

What they observed was that the news spread (“information diffusion”): NPCs talked about the event, made plans, coordinated to help decorate, and even invited each other on dates to the party.
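That information-diffusion dynamic can be sketched as a toy simulation: agents hold a memory stream, a single agent is seeded with the party plan, and the fact spreads through random encounters. This is a deliberately simplified sketch with hypothetical names; the actual paper drives each agent with an LLM (stubbed out below) plus a much richer memory, reflection, and planning architecture.

```python
# Toy sketch of "information diffusion" among autonomous NPCs.
# Names and logic are illustrative, not the paper's implementation.
import random

class Agent:
    def __init__(self, name, backstory):
        self.name = name
        self.backstory = backstory   # e.g. "husband of X"
        self.memories = []           # the paper calls this a "memory stream"

    def tell(self, fact):
        if fact not in self.memories:
            self.memories.append(fact)

    def chat(self, other):
        # In the real system an LLM decides what each agent says;
        # here we simply pass along any party-related memories.
        for fact in self.memories:
            if "party" in fact:
                other.tell(fact)

def simulate(agents, steps, seed=0):
    rng = random.Random(seed)
    for _ in range(steps):
        a, b = rng.sample(agents, 2)  # two agents bump into each other
        a.chat(b)
        b.chat(a)

town = [Agent(n, "") for n in ["Isabella", "Maria", "Klaus", "Tom"]]
# The researchers' "suggestion": seed exactly one agent with the plan.
town[0].tell("Valentine's Day party at the cafe, Feb 14")
simulate(town, steps=50)
aware = [a.name for a in town if a.memories]
print(aware)  # with enough encounters, the news reaches everyone
```

Even this crude version shows the key point of the experiment: only one NPC is scripted, and the rest learn about the party through conversation alone.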

This has large implications for the ability of AIs like ChatGPT to simulate human behavior at scale.


‘The Godfather of A.I.’ Leaves Google and Warns of Danger Ahead

AI’s Godfather, Geoffrey Hinton, has hit the ejection seat at Google, warning that we’re playing with fire when it comes to next-gen AI. After a decade-plus working for the tech giant, Hinton’s hanging up his hat to give us a heads-up about the risks of AI. It’s like he’s suddenly looking back on his life’s work and saying, “Oops, my bad.”

Hinton and two of his students at the University of Toronto built a breakthrough neural network in 2012 that laid the groundwork for today’s AI, but now he fears we’re letting the genie out of the bottle. These AI systems, like the popular chatterbox ChatGPT, could be the biggest thing since the dawn of the web browser. But Hinton’s worried we’re setting free a double-edged sword, one that could cut us deep with misinformation, job losses, and maybe even a risk to humanity itself. As Hinton puts it, keeping bad eggs from using AI for nasty stuff is a tough nut to crack.

After OpenAI let loose a new version of ChatGPT, a thousand tech bigwigs and brainiacs cried out for a pause on the whole shebang. Even Microsoft’s top egghead Eric Horvitz, who’s spread OpenAI’s tech all over their products like butter on toast, is sounding the alarm.

Hinton, a 75-year-old Brit turned Canadian, has been all in on AI since his grad school days in the ’70s. His big bet was the neural network, a mathematical system that learns skills by chewing on data. The idea wasn’t popular back then, but Hinton stuck with it like a dog with a bone. His work led to some big strides in AI, making chatbots like ChatGPT and Google Bard possible. Hinton even nabbed the Turing Award, the computing world’s equivalent of a Nobel Prize.

Now, he’s flipping the script. He’s worried that as we polish our AI systems, they could turn into ticking time bombs. A couple of years ago, he thought Google was doing a good job, making sure not to let loose anything that might bite. But with Microsoft muscling in on Google’s turf by boosting their search engine with a chatbot, it’s a race to the finish, and Hinton doesn’t like where it’s headed.

First up, he’s concerned we’ll be swamped with phony photos, videos, and text, and we won’t know up from down. He’s also worried AI could start stealing jobs, not just the boring stuff but possibly more. And further down the line, he’s scared of AI systems going off the rails and doing something we didn’t see coming. His worst nightmare? Killer robots. Sounds far-fetched, but Hinton’s not laughing.

While some folks are saying “Hold your horses, that’s just a pipe dream,” Hinton thinks the race between the tech heavyweights could spiral into a worldwide sprint that won’t stop without some kind of global rulebook. But trying to control this could be like trying to herd cats. His best bet is for top scientists to put their heads together to figure out how to rein in this runaway train.

Looking back, Hinton says he used to brush off concerns about the dangers of AI by quoting the guy who led the U.S. atomic bomb project, “When you see something that is technically sweet, you go ahead and do it.” But nowadays, he’s singing a different tune.


IBM Will Stop Hiring Humans For Jobs AI Can Do

IBM’s head honcho, Arvind Krishna, spilled the beans to Bloomberg that they’ll be putting the kibosh on hiring humans for jobs their AI can do. This could mean curtains for about 7,800 non-customer-facing gigs over the next five years. Jobs like pushing papers in HR could be first on the chopping block.

Don’t fret if you’re on the frontlines with customers or coding away in software development, Krishna reckons you’re safe, for now. Though he’s keeping mum on when this AI takeover is kicking off. IBM, meanwhile, has gone radio silent on the whole matter.

According to some big-brained folks at Goldman Sachs, this could be the tip of the iceberg, with as many as 300 million full-time jobs worldwide exposed if AI really gets its act together. Roughly two-thirds of jobs in the U.S. and Europe are exposed to some degree of automation, and AI could end up doing about a quarter of current work.

The idea of robots nabbing jobs isn’t new, but recent AI advancements have folks sitting up and taking notice. Governments and companies are scrambling to figure out the rules of the road for AI. The big dogs in tech are calling for “risk-based” regulations, while others warn of AI’s potential for bias and discrimination.

Not everyone’s on the AI bandwagon, though. Billionaire Elon Musk, despite having his fingers in the AI pie with OpenAI, has been ringing the alarm bell, accusing others in the tech space of not taking AI safety seriously enough. He’s even set up another AI company, X.AI, in response to what he sees as recklessness by other tech firms.

On a side note, IBM’s Watson has been playing the AI game for years, making waves in healthcare and customer service, and even bagging a cool $1 million on Jeopardy! Now that’s a trivia whiz for ya!

So, keep your eyes peeled for the rise of the machines. As for how to stay relevant in this brave new world, we’re all still figuring that one out. So, buckle up, folks. It’s going to be a wild ride.


Bing AI comes barging in on Samsung Galaxy devices with built-in SwiftKey

Alright folks, gather ’round. Microsoft is getting all up in Samsung’s grill with their SwiftKey keyboard now coming pre-installed on Samsung’s Android launcher. And just when you thought it couldn’t get more interesting, they’re slapping Bing AI right into the mix. Yep, you heard it right, Bing AI is coming to a Samsung Galaxy near you, whether you fancy it or not.

So how does this work? Well, Samsung Galaxy devices run the One UI Android launcher, and SwiftKey comes built into it. So if you’re not up to speed, picture Bing AI yelling “Oh yeah!” and smashing through the wall like the Kool-Aid Man.

And if that ain’t enough, last week Microsoft gatecrashed Google Bard’s party on the Edge browser too. They’re pulling no punches, people.

Microsoft took to Twitter to announce this update, saying it’ll roll out in the next few days to Samsung users. This Bing AI feature has been waltzing its way onto SwiftKey keyboards since mid-April, even making a comeback on the iOS version.

But worry not, Galaxy users. If you’re not a fan of Bing AI shimmying into your swiping keyboard, you can always stick with Samsung’s main keyboard. Just head over to SETTINGS > LANGUAGES AND INPUT > ON-SCREEN KEYBOARD. But fair warning, Bing might be back for an encore. Rumor has it, Samsung might ditch its $3 billion deal with Google to make Bing its new default search engine. Oh, the drama!


AI Chatbots Have Been Used to Create Dozens of News Content Farms

Here’s the skinny: NewsGuard, the internet’s own sheriff, found a bunch of phony-baloney news sites, all spun up by AI chatbots. These bots are cranking out everything from breaking news to celeb gossip, without telling folks they’re machines. The AI tech involved could be from OpenAI or maybe even Google.

These faux news sites aren’t just churning out harmless fluff, they’ve been caught spinning tall tales. One had the audacity to announce the death of Biden, another fibbed about some architect’s life. Yet another spun a yarn about mass deaths in the Russia-Ukraine war.

Most of these sites are just ad factories, pumping out content left and right in multiple languages, all for a quick buck. Some even let you buy a shoutout for your business to boost search rankings. Others are playing the social media game, with one boasting a hefty 124,000 followers on Facebook.

The big cheese at NewsGuard, Gordon Crovitz, said it loud and clear: OpenAI and Google need to rein in these chatterbox AIs from making up news. He called it “fraud masquerading as journalism.” When asked about it, Google said they’re all about quality, not how the content got created. If they find a bad apple, they pull the ads. But the fact that AI is involved doesn’t automatically raise a red flag for them.

A data science whiz, Noah Giansiracusa, said this ain’t a new game, but it’s getting cheaper and easier. As more real news outlets start using AI, and these fake news mills do too, we’re heading for a collision course of crummy content.

How’d they find these sites? Some good old detective work, searching for telltale phrases chatbots leave behind, like “as an AI language model.” They also used an AI text classifier to flag content likely written by a machine. These sites were full of AI gaffes and fake author profiles. Some were even bold enough to just rehash stories from other outlets, skirting plagiarism by adding source links.
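The phrase-search half of that detective work can be sketched in a few lines. The phrase list here is illustrative, not NewsGuard’s actual methodology, and a real pipeline would pair it with a statistical text classifier:

```python
# Minimal sketch of the phrase-search heuristic: flag articles that
# contain boilerplate an unedited chatbot tends to leave behind.
import re

TELLTALE_PHRASES = [
    r"as an ai language model",
    r"i cannot (?:generate|produce|fulfill)",
    r"my knowledge cutoff",
]

def looks_machine_written(article: str) -> bool:
    """Return True if the article contains a known chatbot phrase."""
    text = article.lower()
    return any(re.search(p, text) for p in TELLTALE_PHRASES)

print(looks_machine_written(
    "As an AI language model, I cannot verify this claim."))  # True
print(looks_machine_written("Local council approves new bridge."))  # False
```

A heuristic like this only catches the sloppiest sites, which is exactly what NewsGuard reported finding: error messages and boilerplate pasted straight into published articles.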

Even though many of these sites didn’t draw a crowd or get much love on social media, they still made some dough from ads. One professor pointed out how worryingly cheap this scheme has become. In his words, “It’s free to buy a lottery ticket for that game now.”


GigaChat to take on ChatGPT: Russia’s rival to OpenAI’s chatbot aims to be a multimodal tool

Well, it seems like Russia’s got a bee in its bonnet about OpenAI’s ChatGPT. The Russian bank, Sberbank, says, “Two can play at that game” and introduces its own AI-powered chatbot, GigaChat. Right now, it’s VIP-only testing, so don’t hold your breath.

GigaChat’s big claim to fame? It chats up a storm in Russian. After Western countries put the brakes on exports to Russia following that Ukraine hullabaloo, Sberbank decided to flex its tech muscles. They’re betting big on GigaChat being their ticket to tech independence.

Sberbank isn’t some new kid on the block. Founded way back in 1841, it’s a household name for your average Ivan, holding about 30% of all Russian dough. Now, it’s got its sights set on becoming the country’s tech titan. GigaChat is the latest shiny toy in their digital transformation toybox, after splurging on self-driving cars and cloud services.

So, what’s GigaChat’s party trick? It can answer questions, chew the fat, code software, and even whip up images. Sberbank says it’s trained in Russian and boasts “multimodal” features, which is a fancy way of saying it can do more than just text, like creating images. The bank’s betting on this feature to get a leg up on ChatGPT and OpenAI’s DALL-E image generator.

Russians can now yammer away with GigaChat in their mother tongue, but don’t expect it to hold a marathon chat session. It’s still got training wheels on for lengthy conversations.

Russia and ChatGPT aren’t exactly bosom buddies. The OpenAI chatbot isn’t welcome in Mother Russia due to fears of misuse, and with the ongoing tiff with the West, Russia’s not keen on letting AI chatbots control the narrative. So, for now, it seems like GigaChat’s gonna be Russia’s new BFF.