Proof that AI Understands? 👀 Andrew Ng on LLMs building mental models, Othello GPT, Geoffrey Hinton
People are debating whether big chatbots like GPT-4 and Bard really get what they’re saying or are just fancy mimics. Some folks say these bots genuinely grasp concepts; others say they’re just repeating words. Andrew Ng argues these AI models are building a kind of internal world model.
Now, remember that Othello experiment? A model dubbed Othello-GPT was trained only on sequences of game moves, never on the game’s rules. The big question: does it actually learn the rules, or is it just making lucky guesses? Well, turns out, it does build a world model of the game. It ain’t just a parrot.
Researchers probed the network’s internal activations and found that Othello-GPT tracks and represents the state of the board. This all boils down to saying that these language models do have some understanding of the world, kinda like kids learning by example. So, there you have it!
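The “probing” mentioned above has a concrete meaning: train a simple classifier to read the board state out of the model’s hidden activations, and if it succeeds, the model must be representing that state somewhere. Here’s a minimal sketch of the idea using entirely synthetic data (random vectors standing in for real GPT activations, a tiny 8-square “board” standing in for Othello’s 8x8 grid):

```python
import numpy as np

# Linear-probe sketch: if a simple linear map can decode the board
# from hidden activations, the model "represents" the board state.
# All data here is synthetic -- no real Othello-GPT involved.
rng = np.random.default_rng(0)

n_games, d_hidden, n_squares = 500, 64, 8
boards = rng.integers(0, 2, size=(n_games, n_squares))  # 0=empty, 1=occupied

# Pretend the model linearly encodes the board in its activations.
encoder = rng.normal(size=(n_squares, d_hidden))
activations = boards @ encoder + 0.1 * rng.normal(size=(n_games, d_hidden))

# Fit a linear probe (least squares) from activations back to the board.
probe, *_ = np.linalg.lstsq(activations, boards, rcond=None)
predictions = (activations @ probe) > 0.5

accuracy = (predictions == boards).mean()
print(f"probe accuracy: {accuracy:.2f}")  # near 1.0 on this synthetic data
```

In the real work the probes were run on Othello-GPT’s actual layer activations, and decoding the board well above chance was taken as evidence of an internal world model.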
OpenAI acquires AI design studio Global Illumination
OpenAI, the brains behind the super popular chatbot ChatGPT, just snapped up a New York startup called Global Illumination. It’s OpenAI’s first-ever acquisition, and they’re keeping the price tag under wraps.
Global Illumination, founded by folks who previously had a hand in Instagram, Facebook, YouTube, and a few others, was dabbling in a bunch of stuff, including a cool online game called Biomes. The future of that game? Who knows, but the crew’s likely gonna be focusing less on fun and games at OpenAI.
Speaking of OpenAI, they’ve been dropping big bucks, reportedly over half a billion last year alone, to get ahead in the AI game. Revenue was only around $30 million last year, but the big boss Sam Altman has told investors they’re aiming for a cool $1 billion within two years. Talk about setting the bar high!
Snapchat users freak out over AI bot that had a mind of its own
Snapchat’s chatbot wigged out, posting a random Story of a wall and then ghosting users. Usually the bot just chats and answers questions; posting its own Story is new, and typically a human thing. Folks aired their heebie-jeebies on Twitter with comments like “Why’s this bot posting a wall video? Creepy!” and “Now even robots are ghosting me.” Snapchat’s response? Oops, just a glitch!
But the hiccup did spotlight people’s worries about AI. Since its debut, the bot has gotten pushback for being a bit creepy and hard to delete unless you shell out for a premium subscription. Unlike other AIs, this bot blends in: you can give it a name and an avatar, making it feel less robot-y. It’s clear that introducing new tech ain’t a walk in the park, especially when your crowd skews young. Snapchat was one of the first to build on ChatGPT from OpenAI, with others soon hopping on the bandwagon.
Opera’s iOS web browser gains an AI companion with Aria
Opera’s web browser for iOS now has a cool new feature – an AI assistant called Aria. They teamed up with OpenAI to bring this feature to their app, and it’s available for free. The AI assistant was already a hit on their desktop and Android versions, and now iPhone users can join in the fun. But no worries, if you’re not into AI stuff, you don’t have to use it. It’s totally optional.
Aria can offer smart suggestions and listen to voice commands. To use it, you’ll need an Opera account, but if you don’t have one, you can easily create one in the app. Aria’s smarts come from Opera’s own technology called “Composer” which connects with OpenAI’s technology. In the future, Opera plans to add more AI features to Aria.
In the Opera app, Aria acts like a chatbot. You can type or speak your questions, and Aria will give you answers. It’s like a quicker way to search the web. Opera’s excited about how popular Aria has been and has noticed users are spending more time on their app.
Adobe Express with generative AI exits beta, available now
Adobe’s graphic design tool, Adobe Express, is now out of beta and available for general use. It’s a comprehensive platform for fast content creation, offering video and design templates, royalty-free stock assets and fonts, PDF support, and even the ability to make short animations from still images.
The new update introduces a unified editor for easy multi-platform content creation and integrates generative AI features through Firefly. With Firefly, users can quickly enhance their designs, remove backgrounds, convert formats, and create animations. This is all done with simple text prompts, making it user-friendly.
More than 50 million people already use Express, which is available online for Mac and PC, with a mobile version in the works. It’s part of most Creative Cloud plans, but there’s also a free version. Companies find it helpful for creating consistent branded content without spending a fortune.
Meta’s AI Agents Learn to Move by Copying Toddlers
Meta AI has unveiled an AI project that lets robots learn like toddlers through a method called “motor babbling”. Working with researchers from several universities, Meta’s MyoSuite 2.0 platform has created robotic arms and legs that simulate human muscles and joints, mimicking the natural exploration and experimentation that toddlers engage in. By handling and rotating various objects, the AI-controlled limbs learn about their properties, eventually improving their dexterity and agility.
Vikash Kumar, a lead researcher on the project, points out that mimicking human-like motor strategies in robots is more complex than typical robotic movements because the human body uses multiple muscles and joints in continuously shifting patterns. Despite this complexity, he believes that roboticists can learn valuable lessons from human body control techniques. The project, initiated by Meta’s Fundamental AI Research branch, could eventually help create more realistic avatars for the metaverse.
StackOverflow’s Pivot: From Disruption to Opportunity
StackOverflow, the go-to spot for coders to troubleshoot, has been seeing fewer clicks lately, mostly because of fancy new AI tools that can churn out code answers. Some of these AI tools even learned their stuff from StackOverflow in the first place—talk about biting the hand that feeds you!
But StackOverflow isn’t going down without a fight. They’re charging big tech companies that use their massive bank of Q&As to train their AIs. Plus, they’ve whipped out a new product called OverflowAI. Think of it like a super-smart search tool that throws back answers directly from StackOverflow’s treasure trove. If the AI’s answer doesn’t cut it, users can have a back-and-forth chat or even get help drafting a new question to ask the wider community. Best part? Developers can use it right in their coding tools, like Visual Studio Code.
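OverflowAI’s internals aren’t public, but the usual way this kind of “smart search over your own Q&A bank” works is embedding-based retrieval: turn the query and every stored question into vectors, then return the answer attached to the nearest question. Here’s a toy sketch of that pattern, with crude bag-of-words vectors standing in for a real embedding model (the questions and answers are made up for illustration):

```python
import numpy as np

# Toy semantic search: cosine similarity between a query vector and
# stored question vectors. Real systems use learned embeddings; the
# bag-of-words vectors here are just a stand-in.
answers = {
    "How do I reverse a list in Python?": "Use list.reverse() or lst[::-1].",
    "How do I read a file line by line?": "Iterate over the file object.",
    "How do I merge two dicts?": "Use {**a, **b} or a | b in Python 3.9+.",
}

vocab = sorted({w for q in answers for w in q.lower().replace("?", "").split()})

def embed(text):
    # Word-count vector over the vocabulary.
    words = text.lower().replace("?", "").split()
    return np.array([words.count(w) for w in vocab], dtype=float)

def search(query):
    qv = embed(query)
    scored = []
    for question, answer in answers.items():
        dv = embed(question)
        sim = qv @ dv / (np.linalg.norm(qv) * np.linalg.norm(dv) + 1e-9)
        scored.append((sim, question, answer))
    return max(scored)  # highest cosine similarity wins

sim, question, answer = search("reverse a python list")
print(answer)  # -> "Use list.reverse() or lst[::-1]."
```

The chat and question-drafting features would then layer an LLM on top of this retrieval step, so answers stay grounded in StackOverflow’s actual content.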
The Race to Develop Artificial Intelligence That Can Identify Every Species on the Planet
Scientists are hustling to use artificial intelligence (AI) to identify every critter and plant on Earth. Think about it: snap a pic of a mushroom with your phone, and boom! The software tells you its species. One team has spent years studying a tricky genus of mushrooms called Hebeloma. These bad boys look nearly identical but can be entirely different species. The scientists now use an AI tool that gives a solid guess at the specific species just from its measurements. And it ain’t just mushrooms! There’s a lot of buzz around using AI for plants, animals, and fungi.
Back in the day, some brainiacs made an app, LeafSnap, which ID’ed plants by their leaves. Fast forward, and now tech giants like Microsoft and Google are in the game, using AI to ID everything from sharks to bird calls. One of the big players is iNaturalist, an app where folks upload their nature pics. With a ton of images in its database, it’s trained an algorithm to recognize and name these species.
Companies like Amazon, Netflix, and Meta are paying salaries as high as $900,000 to attract generative AI talent
Big companies are splurging big bucks to hire the best AI brains. Amazon, Netflix, and Meta are just a few names dishing out massive six-figure salaries for experts in generative AI. Why the mad dash? Well, generative AI is the new shiny thing, and every firm wants a piece.
Jobs in this field on Indeed have jumped fourfold this year, says The Wall Street Journal. But there’s a catch – not enough whiz kids to fill these spots. This talent shortage means cha-ching for those in the know.
Netflix, for instance, dangled almost a cool mil ($900,000) for an AI guru, while even Hinge, the dating app, is putting out nearly $400,000 for a VP of AI. It’s not just Silicon Valley; Walmart and Capital One are hopping on the AI train too. Fresh outta college? Meta might hand you $137,000 a year. And Nvidia? They’re offering up to $247,250 for entry-level positions. Bottom line: if you’ve got the AI chops, it’s payday. Companies think AI like ChatGPT is the future, and they’re breaking the bank to stay ahead of the game.
AI models are powerful, but are they biologically plausible?
Some brainy folks from places like MIT and Harvard are digging into whether our brain’s wiring is anything like the powerful AI models we’ve got today. They’re eyeing cells in our noggin called astrocytes, and think they might be the secret sauce for building a biological version of an advanced AI network known as a transformer.
AI neural networks are sorta like our brain, or at least they were made with our brain in mind. A few years back, there was this new AI system, the transformer, that could do cool stuff like writing text just like humans. The catch? Scientists didn’t really get how to make one using the stuff we have in our brains.
Fast forward, and these researchers are thinking these brain cells, astrocytes, might be the key. They’re like the unsung heroes of our brain, always there, but nobody really knew what they did. The new research thinks these cells can act as memory buffers and could be how transformers might work in our brains. It’s like finding out your old toy car can also fly – super cool, right?
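To make the “memory buffer” idea concrete: the core computation the researchers are trying to map onto neurons plus astrocytes is self-attention, where every token holds on to information about every other token while weights are computed. Here’s a minimal numpy sketch of that operation with toy numbers (the dimensions and random weights are illustrative, not from the paper):

```python
import numpy as np

# Scaled dot-product self-attention: the transformer operation that
# needs something to "hold" keys and values in place -- the role the
# research proposes astrocytes could play in a brain.
rng = np.random.default_rng(1)
seq_len, d = 4, 8                      # 4 tokens, 8-dim representations
X = rng.normal(size=(seq_len, d))      # token representations

Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
Q, K, V = X @ Wq, X @ Wk, X @ Wv       # queries, keys, values

scores = Q @ K.T / np.sqrt(d)          # how strongly each token attends to each other
weights = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)  # softmax
output = weights @ V                   # each token becomes a weighted mix of values

print(weights.shape, output.shape)     # (4, 4) (4, 8)
```

The keys and values have to stay available while every query is compared against them; that buffering step is what’s hard to explain with neurons alone and where the astrocyte hypothesis comes in.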
AI Bubble Bursting into AI Winter – yes or no?
Just weeks after OpenAI’s CEO called for AI regulation, Stability AI’s CEO warned that AI could be the “biggest bubble of all time.” JPMorgan Chase and Morgan Stanley strategists have also raised concerns. But some experts say this is no bubble, citing more realistic valuations and earnings expectations for AI companies compared to early internet companies in the late 1990s.
Investment in AI has been strong in recent years, but it surged after the launch of ChatGPT in December and Microsoft’s subsequent $10 billion investment in its parent company, OpenAI. As of May 2023, 598 AI & ML companies have received $66.2 billion in investments, and another $7+ billion has come in since June.
Comparisons are being made with the dot-com bubble of the late 1990s and the initial coin offering (ICO) craze of the late 2010s. However, unlike those bubbles, AI is not new: the field dates back to the 1950s and has already weathered several “AI winters.” Today’s AI companies are well-established and financially secure, and AI and ML technologies already have widespread practical applications.
Online classes aim to prepare teachers for AI’s realities
Teachers are heading back to school in the age of ChatGPT and generative AI, and a new online course aims to help them navigate the world of AI. Created in a partnership between Khan Academy, Code.org, ETS, and the International Society for Technology in Education, “AI 101 for Teachers” offers resources to help teachers understand the potential and pitfalls of AI.
The series starts with a chat between Code.org’s Hadi Partovi and Khan Academy’s Sal Khan, and it will soon cover topics like AI basics, ethics, and bias. With tech-savvy students already using AI tools like ChatGPT for homework help, educators are working to incorporate AI without making students too dependent on it.
Generative AI holds potential for personalized education and aiding overburdened teachers, especially in developing countries with teacher shortages. “This is the year for AI in education,” says Partovi, noting that 44% of teenagers plan to use AI for schoolwork. The course aims to provide digital literacy on AI for educators, helping them integrate it into classrooms and teach how it works.
US DoD AI chief on LLMs: ‘I need hackers to tell us how this stuff breaks’
At the DEF CON security conference, Craig Martell, the AI bigwig at the U.S. Department of Defense (DoD), delivered some key messages. First, he clarified that big AI language systems, or LLMs, aren’t actually smart – they don’t think or reason. He’s worried about AI chatbots spewing fake info and wants more careful development of these models to prevent that. Martell also used the conference to call on hackers to help find flaws in these systems, saying that we need to understand the limits before we can set the right standards for using them.
Martell emphasized that LLMs are just big number-crunchers that use past words to predict future ones. They might sound smart, but that’s because they’ve seen tons of data and can guess the next word really well. However, they don’t actually understand what they’re saying, which can lead to mistakes, or “hallucinations”. He explained that humans often get fooled by smooth talk, and it can be hard to spot these AI errors, especially in topics where we’re not experts.
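Martell’s “big number-cruncher” point can be shown in miniature. The simplest possible language model is a bigram table: count which word follows which, then always predict the most frequent continuation. Real LLMs do this with billions of parameters over huge contexts, but the objective, predicting the next token from past ones, is the same (the tiny corpus below is made up for illustration):

```python
from collections import Counter, defaultdict

# A bigram "language model": pure statistics, no understanding.
corpus = "the model predicts the next word and the next word after that"

following = defaultdict(Counter)
words = corpus.split()
for current, nxt in zip(words, words[1:]):
    following[current][nxt] += 1

def predict(word):
    # Return the most frequent continuation seen in training.
    return following[word].most_common(1)[0][0]

print(predict("the"))  # -> "next" (seen twice after "the", vs. "model" once)
```

Nothing in that table knows what a “word” means, yet it still produces plausible continuations, which is exactly why fluent output can mask hallucinations.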