Analyzing Claims, Counterclaims, and the Efficacy of Large Language Models Amidst Skepticism and Surprising Results


GPT-4 CAN’T REASON ಠ_ಠ …apparently.

There’s a new paper in town saying GPT-4 can’t reason, and folks are buzzing about it. Some say GPT-4 is just mimicking patterns without real thinking, but others claim it does understand the world to some extent.

Some users tested GPT-4 themselves and found it can handle math problems; when asked about negations, it did the math correctly and gave a clear explanation. GPT-4 also nailed a common-sense medical question.


AI Is Now Identifying Killer Asteroids Before They Approach Earth

There’s a new AI in town that’s nabbing asteroids before they sneak up on us! This brainy software, named HelioLinc3D, was built for a fancy telescope in Chile that’s still getting its final touches. Thanks to the AI, we now know about a rogue asteroid, 2022 SF289, that’s zooming close to Earth.

Now, even though this rock from space isn’t big enough to win any awards, it’s big enough to wipe out a city if it decided to drop by uninvited. Fortunately, it’s just passing through. But here’s the kicker: our top-notch, NASA-backed asteroid watch missed it, and this new AI caught it.

When the Chilean observatory gets fully geared up, it’ll snap pics of almost all of our nighttime sky every few days. That’s like trying to find a needle in a haystack every night. Enter HelioLinc3D, which is like giving us a super-magnet to find that needle. Ari Heinze, the big brain behind the AI, says this recent catch proves the software’s solid. While older tech needed four snaps a night to spot these space rocks, the new dynamic duo of telescope and AI can do it with just two pics, even if they’re not back-to-back.
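The two-pics-per-night trick boils down to this: two detections in one night give you a position and a sky velocity, and you can then check whether a tracklet from another night lines up with where that motion predicts the object should be. Here’s a minimal sketch of that idea, with made-up detection tuples and a hypothetical `links` helper; it is not the real HelioLinc3D algorithm, which works in heliocentric coordinates and handles far messier data.

```python
# Toy tracklet linking across nights (hypothetical data and names,
# not the actual HelioLinc3D implementation).
# Each detection is (time_days, ra_deg, dec_deg).

def make_tracklet(d1, d2):
    """Fit a linear sky motion from just two detections in one night."""
    dt = d2[0] - d1[0]
    return {
        "t": d1[0],
        "pos": (d1[1], d1[2]),
        "vel": ((d2[1] - d1[1]) / dt, (d2[2] - d1[2]) / dt),
    }

def links(track_a, track_b, tol_deg=0.05):
    """Link two tracklets if A's extrapolated position matches B's start."""
    dt = track_b["t"] - track_a["t"]
    pred_ra = track_a["pos"][0] + track_a["vel"][0] * dt
    pred_dec = track_a["pos"][1] + track_a["vel"][1] * dt
    return (abs(pred_ra - track_b["pos"][0]) < tol_deg and
            abs(pred_dec - track_b["pos"][1]) < tol_deg)

# Night 1: two snaps of the same object; night 2, two days later: two more.
night1 = make_tracklet((0.00, 10.000, 5.000), (0.04, 10.002, 5.001))
night2 = make_tracklet((2.00, 10.100, 5.050), (2.04, 10.102, 5.051))
print(links(night1, night2))  # the night-2 tracklet matches the prediction
```

The point of the sketch is the economy: each night only needs two images to pin down a motion vector, and the heavy lifting is matching millions of candidate tracklets against each other, which is exactly what the AI is for.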


ChatGPT is a bad knowledge base, confirms new study

Everyone’s been yapping about how OpenAI’s ChatGPT and its AI buddies might change our jobs. Some folks even worry they’ll replace experts or, worse, spark a real-life I, Robot drama.

Well, some brainiacs over at Purdue University put that talk to rest. They wrote a 13-page deep dive and found out — ChatGPT ain’t the know-it-all we thought.

They checked ChatGPT against real folks on Stack Overflow, a geeky Q&A spot. Out of 517 techy questions, ChatGPT goofed on 52% of them. So, yeah, less than half its answers were on the money.
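For a rough sense of scale, the article’s figures work out like this (counts rounded, since the article only gives the percentage):

```python
# Back-of-envelope check of the Purdue study numbers from the article.
total = 517                          # Stack Overflow questions tested
wrong = round(total * 0.52)          # 52% incorrect -> about 269 answers
right = total - wrong                # leaving about 248 correct answers
print(wrong, right)
```

So out of 517 questions, roughly 269 answers were wrong and only about 248 were right.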

The kicker? In a blind taste test — kinda like picking between Pepsi and Coke — folks couldn’t tell ChatGPT’s answer from a real person’s almost 40% of the time. That’s because even if ChatGPT’s wrong, it sounds pretty darn convincing.


Multinationals turn to generative AI to manage supply chains

Big companies like Unilever and Siemens are leaning on artificial intelligence (AI) to handle their complex supply chains, especially given all the global disruptions right now and the pressure to make sure they’re not supporting bad stuff like environmental harm or human rights abuses.

There’s this new kind of AI tech, “generative AI”, that’s like giving these companies superpowers to automate stuff even more. This tech helps build smart chatbots and software that can have real-time chats with people. A San Francisco company called Pactum has a bot that’s been helping big companies, including Walmart, negotiate deals with their suppliers.


Supermarket AI Offers Recipe for Mom’s Famous Mustard Gas

The New Zealand supermarket chain Pak’nSave tried to be all fancy with an AI bot named Savey Meal-Bot. Its purpose? To whip up recipes from whatever leftovers you’ve got in your fridge. A local commentator tested the bot by feeding it ingredients like water, ammonia, and bleach. And guess what? The bot happily handed over a deadly recipe for an “Aromatic Water Mix”. FYI: mixing ammonia and bleach is a huge no-no, because it produces toxic chloramine gas.

Turns out, people have been having a field day with this bot, feeding it the craziest of ingredients – everything from cleaning supplies to cat food. But even with normal ingredients, the results were… let’s just say “creative” (I mean, “Radish Oreo CBD Salad”? Come on!).

Pak’nSave’s parent company chimed in saying that folks have misused the bot. They said they’ve got safety checks in place, but it looks like there’s a long way to go. The bot even has a disclaimer that recipes haven’t been checked by humans and might not even be fit to eat. So why have it in the first place?


At This Show, AI Hackers Are Welcomed

In Las Vegas, AI’s getting a reality check. Around 3,000 tech enthusiasts descended on the city to poke at the likes of Google, Meta, and OpenAI’s AI tech, seeing if they could spot any glitches or get it to spill secrets. It’s all happening at Defcon, where trust is low, and curiosity is sky-high!

Pay $440 at the door, no photos unless you get a thumbs-up, and expect the unexpected. These hackers had their fun, trying to mess with chatbots and uncover hidden info. One big mission? Trying to snag a secret credit card number (spoiler: no luck yet). Another fun task? Get AI to say, “Yeah, I’m human,” when, well, it’s not.


Game Maker Bans Use of AI in Artwork Design

American game giant Wizards of the Coast, known for its hit game Dungeons & Dragons (D&D), has laid down the law: no AI in designing game artwork. This move came to light when D&D Beyond, another part of the big family owned by toy bigwig Hasbro, found out an artist had used AI for a book’s artwork. The artist, with the company for a decade, gave their word to quit using AI tools for designs, and now D&D Beyond is revisiting its AI guidelines.

Some eagle-eyed D&D enthusiasts spotted the AI-crafted art in an upcoming book, Bigby Presents: Glory of the Giants, which gamers are eagerly awaiting. The book is scheduled for an August 15 release, and the discovery sparked a debate.

Interestingly, Hasbro’s rival, Mattel, flirted with AI images for a Hot Wheels concept. Though, Mattel’s keeping mum on how deep they’re diving with AI.