AI in the Hotseat: Sam Altman’s Call for Regulation

OpenAI CEO Sam Altman brings artificial intelligence under scrutiny, advocating for robust regulations to prevent AI misadventures and ensure a safer digital frontier.


OpenAI CEO Sam Altman warns of AI’s potential harm, wants regulations

OpenAI’s top dog, Sam Altman, recently spooked Congress with tales of AI gone rogue. He reckons his AI creation, ChatGPT, if left unchecked, could spread lies like wildfire and even play puppeteer with our emotions. Oh, and it could also help aim drone strikes—no biggie.

Altman’s solution? New government regulation and a shiny agency to set the AI rulebook. Not everyone’s thrilled about another government department, though, and some worry it could end up in the pockets of those it’s meant to regulate.

Altman’s been playing nice, charming the socks off lawmakers left, right, and center. He argues it’s better to let slightly flawed AI loose in the world to figure out what might go wrong—kind of like vaccinating society against a full-blown AI apocalypse.

Washington’s bigwigs are getting jittery about the rise of AI. They’re seeing it as a double-edged sword—could be more transformative than the internet or as destructive as the atomic bomb. OpenAI’s boss got a warm welcome from Congress, a far cry from the grilling other tech CEOs have faced.

Despite all the hand-wringing, there’s no agreement on how to corral this AI beast. And while lawmakers are sweating about AI’s potential to swing elections, Altman assures them he’s on it. But, he wouldn’t say “never” to sneaking ads into his chatbots.

Altman’s charm offensive seems to be paying off, but some, like NYU professor Gary Marcus, aren’t buying it. Marcus says there’s too much money at stake and companies can easily lose their way. He believes humanity’s taken a back seat and OpenAI has forgotten its original mission to benefit us all, now seemingly dancing to Microsoft’s tune.

Altman suggested some safety checks for AI and the idea of independent audits but shrugged off calls for transparency on training data. As for respecting artists’ copyrights—well, he wasn’t making any promises there either. But, despite the tough talk, even Marcus seemed to thaw a bit, admitting Altman’s concerns felt real. Still, actions speak louder than words.


Microsoft Says New A.I. Shows Signs of Human Reasoning

Microsoft’s brainiacs fed a new AI some head-scratchers last year, like stacking a laptop, nine eggs, a book, a bottle, and a nail. The smarty-pants AI came up with a nifty solution that made the geeks wonder if they had stumbled onto something big. They wrote a hefty paper claiming the AI showed sparks of human-like reasoning. This sparked a debate: some folks say it’s all hogwash, while others think we’re on the brink of a breakthrough.

Microsoft was bold enough to shout it from the rooftops, stirring the pot in the tech world. The question remains, are we cooking up human-like intelligence, or are these tech-whizzes letting their dreams run wild?

The bigwigs at Microsoft were left scratching their heads. “Where the heck is this coming from?” mused Peter Lee, the head honcho of research at Microsoft.

The paper, “Sparks of Artificial General Intelligence,” stirred the fear and excitement we’ve all been nursing for years. If we create a machine that thinks like us or better, it could either change the world or send us down a dangerous path.

But let’s be real, some folks think it’s all bunk. Those claiming to have made AGI are risking their reputations. What one person reads as a sign of intelligence, another can easily dismiss. It’s a debate fit for a philosophy club, not a computer lab. Google even canned a researcher last year who claimed their AI was sentient, a step beyond what Microsoft is claiming.

However, there’s a growing belief that we’re inching toward an AI that comes up with human-like answers and ideas. It’s not just regurgitating what it’s been fed. Microsoft has even reshuffled its research labs to explore this.

They’re working with OpenAI’s GPT-4, the beefiest of the language models. These models chew through a ton of digital text, learning to spit out their own pieces, including essays, poems, and code. They can even hold a conversation.

The researchers, including Sebastien Bubeck, a French expat and former Princeton professor, had GPT-4 write a math proof, in rhyme. The AI’s impressive answer left them all wondering, “What is going on?”

The AI’s capabilities don’t stop there. It can draw unicorns, assess diabetes risk, pen a letter of support for an electron running for president, and even carry on a Socratic dialogue critiquing itself.

Despite the wow factor, some AI experts dismiss the Microsoft paper as a ploy to hype up an enigmatic tech. Skeptics argue that true intelligence needs a physical world understanding, which GPT-4 lacks.

Microsoft researchers can’t even agree on what to call the system’s behavior. They settled on “Sparks of A.G.I.,” hoping it would ignite other researchers’ imaginations.

Critics can’t verify Microsoft’s claims since the AI version available to the public has been dialed down from the one the researchers tested. Sometimes the AI seems to mimic human reasoning, but at other times, it can be downright dense.

Dr. Alison Gopnik, a psychology professor, warns against humanizing these complex systems. She suggests we need to stop treating AI development like some game show competition against humans. That ain’t the way to look at it.


Google’s newest A.I. model uses nearly five times more text data for training than its predecessor

Alright, buckle up, folks. Google’s latest AI brainchild, PaLM 2, has been fed nearly five times more “tokens” (think of them as puzzle pieces of language) than its baby brother from 2022. Now, if you’re wondering why Google’s putting its AI on a sumo wrestler’s training diet, here’s the skinny: more tokens mean better performance in things like coding, math, and even creative writing tasks. But don’t worry, we’re not in the Twilight Zone where computers write novels…yet.
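To make “tokens” concrete, here’s a toy tokenizer. This is a naive word-and-punctuation splitter for illustration only; production models like PaLM use learned subword schemes (e.g. SentencePiece) that carve rare words into smaller pieces, so real token counts differ.

```python
import re

def naive_tokenize(text):
    """Toy tokenizer: splits text into words and punctuation marks.
    Real LLM tokenizers use learned subword vocabularies instead."""
    return re.findall(r"\w+|[^\w\s]", text)

tokens = naive_tokenize("Google's PaLM 2 speaks 100 languages.")
print(tokens)       # each item is one "puzzle piece" of language
print(len(tokens))  # the token count a model would be "fed"
```

Counting tokens this way shows why “five times more tokens” simply means five times more pieces of text seen during training.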

Now, the tech giants have been keeping their cards close to their chest on this. Google’s not spilling the beans on the size of its training data, and even OpenAI, the creators of ChatGPT, is keeping mum on the specifics of its latest model, GPT-4. Their reason? It’s all hush-hush because of competition.

Meanwhile, the research community’s starting to sound like a broken record, asking for more transparency. Seems fair, given this whole AI arms race thing.

Now, here’s the twist: PaLM 2 is actually smaller than its predecessors, which basically means Google’s getting more bang for their buck in terms of efficiency. They’ve even thrown around some fancy jargon like “compute-optimal scaling,” but all you need to know is that it makes the AI work better, faster, and cheaper.
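For the curious, “compute-optimal scaling” traces back to DeepMind’s 2022 Chinchilla scaling-law work, whose rule of thumb is roughly 20 training tokens per model parameter. The sketch below uses that published rule of thumb and the standard ~6 FLOPs-per-parameter-per-token estimate; it is a back-of-envelope illustration, not anything Google has disclosed about PaLM 2.

```python
def chinchilla_tokens(n_params):
    """Rule-of-thumb compute-optimal token budget (~20 tokens per
    parameter), from DeepMind's 2022 Chinchilla scaling-law paper."""
    return 20 * n_params

def training_flops(n_params, n_tokens):
    """Standard estimate: ~6 FLOPs per parameter per training token."""
    return 6 * n_params * n_tokens

# For a hypothetical 70B-parameter model:
params = 70e9
tokens = chinchilla_tokens(params)  # -> 1.4 trillion tokens
print(f"{tokens:.2e} tokens, {training_flops(params, tokens):.2e} FLOPs")
```

The point of the twist in the article is exactly this trade: spend a fixed compute budget on a smaller model trained on more tokens, and you get better results per dollar.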

Oh, and it’s not just about size: Google’s PaLM 2 speaks 100 languages, and it’s already being used in 25 features and products. So, it’s like the Swiss Army knife of AI. Plus, it comes in four sizes: Gecko, Otter, Bison, and Unicorn. Yes, you heard it right, Unicorn!

Compared to other tech giants, Google’s sitting pretty with PaLM 2. It’s got more muscle than Meta’s LLaMA and OpenAI’s GPT-3. But, as always, with great power comes great controversy.

There’s a bit of a kerfuffle about transparency, with a Google scientist even quitting over it. OpenAI’s CEO, Sam Altman, agrees we need a new system to handle AI. Sounds like these tech folks have a bit of a wild west situation on their hands.

And that’s the scoop! If you’re not too busy pondering a future where computers write Pulitzer-winning novels, you might just be wondering what size Unicorn looks like.


Alphabet Adds $115 Billion in Value After Defying AI Doubters

Alphabet Inc., you know, the big cheese behind Google, has been catching up in the high-stakes AI game, silencing naysayers and adding a whopping $115 billion to its value. For a hot minute there, Alphabet seemed like the slow kid in the race, losing out to Apple and Microsoft, and getting some serious side-eye from investors.

But boy, did they flip the script. With their new AI goodies showcased at a recent tech powwow, their stock climbed 12%, adding a cool $160 billion to their worth. So much for playing catch-up, eh?

Bill Ackman’s Pershing Square, that Wall Street bigwig, jumped on the Alphabet bandwagon too, snagging more than 10 million shares, a move that added some serious pep to Alphabet’s step.

In the tech world, AI’s the new black, and Alphabet was kinda left in the dust, especially with OpenAI’s ChatGPT stealing the limelight. But then Alphabet dropped the mic with their fancy new conversational search engine and wider availability of their AI-powered chatbot. Talk about a comeback.

Despite Alphabet’s rally, they’re not exactly breaking the bank compared to their tech peers. Their price-to-projected profit ratio, while the highest in months, is still a bargain compared to Apple and Microsoft.

Of course, not everyone’s convinced. Some Wall Street folks believe lingering doubts about AI risks might keep Alphabet’s stock from hitting the stratosphere. But hey, you can’t please everyone.

In the end, Alphabet’s recent surge might’ve been a bit too hot, too fast, causing some to wonder if it’s overcooked. But as of Tuesday, Alphabet’s shares were still inching upward. Ain’t that a hoot?


Zoom makes a big bet on AI with investment in Anthropic

Alright, buckle up, folks. We’ve got Zoom, the digital meeting place you’ve been sick of since the pandemic, making a big gamble on AI. In layman’s terms, they’re betting the farm on robots to help improve their services. Now, they’ve already buddied up with OpenAI, but today they spilled the beans on a new partnership with an AI startup called Anthropic.

Zoom’s gone even further by investing some greenbacks in Anthropic, though they’re being hush-hush about how much dough they’ve put in. This move is part of Zoom’s strategy to keep up with the Joneses. Microsoft’s Teams, Google’s Workspace, and Salesforce’s Slack GPT are all sprucing up their platforms with AI, too.

The plan is to first fit Claude, Anthropic’s AI assistant, into Zoom’s contact center. It’s kind of like an online customer service hub. Picture a virtual helper to guide you to the right solution, and you’ve got the gist of it. They’re tight-lipped about when or how the broader integration will happen, though.

Zoom’s aiming to make Claude a jack-of-all-trades in their contact center. It’s designed to not just make the customer’s life easier, but also to give the service agents a leg up. Think of it like a virtual Sherpa guiding you to the answer you need.

They’re saying Claude will be helping out in all parts of Zoom, but they’re not giving away the game plan just yet. Guess we’ll have to wait and see what tricks Claude’s got up his virtual sleeve. Zoom’s strategy here is to mix and match AI models from different sources to better meet their customers’ needs. It’s like making a custom sandwich, but for AI.

Before today, Zoom was already in cahoots with OpenAI for their conversational intelligence product, IQ. Now, it looks like they’re adding another chef to the kitchen with Anthropic. Let’s see if too many cooks spoil the broth, or if they manage to whip up a Michelin star service.


Spotify expands AI-powered DJ feature to UK and Ireland

Alright, y’all, let’s talk about Spotify. This music streaming giant just unleashed its AI-powered DJ feature for premium customers across the pond in the UK and Ireland. Think of it as a radio DJ, but without the annoying commercials and overplayed top 40 hits.

This techy DJ first hit the airwaves in the US and Canada earlier this year. Powered by OpenAI’s magic, it’s still got its training wheels on, so expect some hiccups here and there.

Seems like the youngsters are digging it. Gen Z and millennials make up a whopping 87% of users. And get this, folks who tune in to the AI DJ spend about a quarter of their Spotify time with it. Talk about loyalty!

The voice behind the DJ? That’s modeled after Spotify’s bigwig Xavier “X” Jernigan. The DJ might fill you in on the latest music goss, like Arlo Parks dropping her new album, “My Soft Machine,” soon. And who knows? Spotify might even turn this into a cash cow by promoting new tunes.

But here’s the kicker: this AI DJ isn’t just a jukebox. You can switch up the vibes or genres with a tap. Plus, the more you listen, the more it learns about your groove.

You can find this digital disc jockey on both iOS and Android. Just tap the DJ card in the Music Feed and voila, you’re in for a treat.


Hippocratic AI Raises $50 Million to Power the Healthcare Bot Workforce

Hippocratic AI, a fresh startup from Silicon Valley, bagged a cool $50 million in seed funding. Their goal? To give everyone a digital healthcare team on tap, minus the human element. Nutritionist, genetics counselor, health insurance whiz – all of them chatbots. But don’t fret, they won’t be diagnosing anything… yet.

The mastermind behind this operation is Munjal Shah, who sees a storm brewing. In the coming years, we’re going to be short around 3 million healthcare workers. Shah’s solution? Tech to the rescue.

Despite the noble-sounding name, Hippocratic AI won’t be taking any oaths. AI doesn’t do ethics, and it can mess up big time, like spouting false info. The regulators are already circling, eyeing up a closer look at AI in healthcare.

Their game plan is threefold: pass the necessary certifications, get human feedback, and test for “bedside manner”. The idea is to roll out different healthcare bot “roles”, only when they’ve proven their chops and are safe to let loose on the public.

Investors are biting. Julie Yoo of Andreessen Horowitz thinks their rigorous approach is worth the gamble, and has thrown in her lot with Shah. Shah’s previous company, Health IQ, used AI to pair seniors with suitable Medicare plans – another feather in his cap.

And how do they stack up against the competition? Pretty well, it seems. Hippocratic AI’s model beat GPT-4, a powerful AI model, by 0.43% on text-based medical questions. They also faced off on a slew of other benchmarks, with Hippocratic AI coming out on top in most of them.

But let’s not get carried away here. Shah admits that doing well on a test isn’t the be-all and end-all. Human and AI intelligence are different beasts. AI can process massive data but can also mess up basic stuff like simple math.

To keep the bots in check, they’ll have real humans refining the model’s answers, a process known as reinforcement learning with human feedback. They’re also developing a “bedside manner” benchmark, to score the AI on empathy and compassion.
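Under the hood, reinforcement learning with human feedback usually starts by training a reward model on pairs of answers that humans have ranked. The snippet below is a toy sketch of the standard pairwise (Bradley-Terry-style) loss used for that step; it is generic RLHF machinery, not Hippocratic AI’s actual pipeline.

```python
import math

def preference_loss(r_preferred, r_rejected):
    """Pairwise loss used to train RLHF reward models:
    -log(sigmoid(r_preferred - r_rejected)).
    Small when the reward model already scores the
    human-preferred answer higher; large when it doesn't."""
    margin = r_preferred - r_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# Reward model agrees with the human ranking -> low loss
print(preference_loss(2.0, -1.0))
# Reward model disagrees with the human ranking -> high loss
print(preference_loss(-1.0, 2.0))
```

Minimizing this loss over many human-ranked pairs is what nudges the model’s answers toward what the human reviewers preferred.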

Still, it’s not all smooth sailing. The big question is whether the bots will know when to keep schtum, like in a 911 scenario. Training them to hold their tongues is a crucial part of the learning process.

The next step? Hippocratic AI plans to buddy up with healthcare systems during the development phase, and to use healthcare workers to train their models. Though they’re keeping mum on any potential customers, the CEO at General Catalyst, one of their investors, hints at a “ton of interest” across various health systems.

This could be a game-changer for the healthcare worker shortage, and maybe, just maybe, a win for health equity. Only time will tell if this is a brilliant innovation or just another tech pipe dream.


AI Breakthrough Detects Alzheimer’s Early With Smartphones

Scientists are cookin’ up a fancy machine learning model that might help catch Alzheimer’s early, just by using a smartphone. They’ve taught this model to tell the difference between Alzheimer’s folks and healthy folks with a not-too-shabby accuracy of 70-75%.

This nifty tool doesn’t pay much attention to what folks are sayin’, but rather how they say it. It could give folks a heads-up before things get too bad and even help them start treatments earlier.

Sure, it’s no substitute for a real doctor, but it could make telehealth more useful and help people who don’t live near a hospital or speak the local lingo.

So, here’s the deal: this model listens to how folks talk and looks for signs that are common in Alzheimer’s patients, like talkin’ slower, pausin’ more, and usin’ shorter words. This might work across different languages too, which is pretty cool.

The idea is that someone talks into the tool, it crunches the numbers, and spits out a prediction. Then, they can take that info to a doctor to figure out what to do next.
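The researchers haven’t published their feature set, but the flow above can be sketched with the kinds of timing features the article mentions (slower speech, more pauses, shorter words). Everything here is illustrative: the input format (a word list plus timings from some speech-recognition front end) and the feature names are assumptions, and the output would feed any standard classifier rather than being a diagnosis.

```python
def speech_features(words, duration_s, pause_s):
    """Toy speech-timing features of the kind described in the article.
    `words`: transcribed words; `duration_s`: total recording length;
    `pause_s`: total silent time (hypothetical upstream inputs)."""
    return {
        "words_per_sec": len(words) / duration_s,     # speaking rate
        "pause_ratio": pause_s / duration_s,          # share of time pausing
        "mean_word_len": sum(len(w) for w in words) / len(words),
    }

sample = speech_features(["i", "went", "to", "the", "shop"],
                         duration_s=10.0, pause_s=4.0)
print(sample)
```

A trained model would compare numbers like these against patterns seen in Alzheimer’s patients versus healthy speakers, then emit the risk prediction the person takes to their doctor.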

This breakthrough might help us manage diseases sooner and with less dough. So, cheers to the future of Alzheimer’s detection!