Embark on an extraordinary journey into the realm of Artificial Intelligence with Google’s cutting-edge search engine set to rival Bing, the unveiling of Duet AI that redefines Docs and Gmail, the innovative use of generative AI to transform Play Store listings, and much more!
Google unveils AI-powered search engine to rival Microsoft’s Bing
Well, folks, grab your popcorn ’cause the tech world’s got a new showdown. This time, Google’s squaring off against Microsoft in the search engine ring. Big G’s rolled out a shiny, new search engine juiced up by artificial intelligence, playing catch-up with Microsoft’s AI-fueled Bing.
Last Wednesday, Google’s head honcho, Sundar Pichai, put on a dog and pony show to announce their next leap into the AI jungle. This new search feature will spit out AI-cooked summaries for your queries, kinda like having a chatbot in your search bar. But hold your horses, it’s only open to US folks on a waiting list for now.
All this fancy AI jazz is powered by Google’s latest brainchild, the PaLM 2 model, which also had its debut party on Wednesday. The same tech’s gonna jazz up Google’s Bard chatbot, Gmail, Google Docs, and whatnot.
Earlier this year, Microsoft threw the first punch by launching an AI-driven Bing. OpenAI, backed by Microsoft, even let loose GPT-4, a language model you can chat with through Bing. This new breed of AI can whip up pretty believable answers in plain English, stirring up the first real tussle for search engine supremacy in over a decade.
When Bing got its AI makeover, Alphabet, Google’s parent company, took a hit. Investors got cold feet, sending Alphabet’s shares tumbling and wiping billions off its worth. In response, Alphabet’s been doubling down on AI, merging its DeepMind and Google Brain teams.
Google’s new search engine will let you chat your way through your search without rehashing the same old details. And to keep things kosher, it’ll link to the web sources for its answers, in case you’re worried about some robot making stuff up. You can still get the classic list of links below the AI’s answers, if that’s your jam.
Now, don’t think Google’s gonna skimp on ads. Cathy Edwards, the VP of Google Search, said ads will be a “core part” of the search results. Google’s been sweating over whether these AI answers might mean fewer ad clicks, which would be a gut punch to its bottom line.
Google’s no stranger to AI, already using it to pick out the best search results or finish your sentences in Google Docs and Gmail. Last year, they clocked in a whopping 180 billion instances of AI use across their tools.
With this new tech, you can type in a query and get a full-blown document or email whipped up by AI. You can even make AI-generated images for slideshows or auto-generate tables. But Slav Petrov, a research scientist at Google, reassures us that they’re not out to replace us – just give us a leg up.
Google’s even testing out a social feature, ‘Perspectives’, to throw in posts from big shots on social media in your search results, instead of just website links. So buckle up, folks, ’cause the search engine game’s getting a whole lot more interesting.
Google rebrands AI tools for Docs and Gmail as Duet AI — its answer to Microsoft’s Copilot
Google’s dressing up its AI tools for Gmail, Docs, Sheets, and Slides with a new moniker – Duet AI, striving to tango with Microsoft’s Copilot. Yet the fanfare doesn’t quite match the reality, with most of the features still behind velvet ropes, waiting to hit the main stage. Google’s even teasing us with a ‘Sidekick’ – a helpful bot that can read, summarize, and answer queries across their apps. Cute.
Google’s AI writing assistant is available in Docs
Duet AI is set to assist with a mixed bag of tasks, including writing, image generation, and meeting summaries. ‘Help me write’ is the shiny new feature heading to Gmail on mobile, aiming to give Smart Compose a run for its money. To get in on the action, sign up for Workspace Labs and join the queue. Good news is the waitlist is now public, bad news is the ETA is as clear as mud.
As for the ‘Help me write’ tool on Gmail’s mobile app, it’s definitely an intriguing notion. Microsoft’s been there, done that, but Google’s promising a more responsive AI partner. The tool has been designed for smaller inputs, to avoid ‘fat finger’ typing disasters, and even offers a quirky ‘I’m feeling lucky’ button for generating responses – if you fancy an AI-penned haiku or a pirate-voiced reply.
Google also hinted at a Sidekick feature, a tool to analyze your content and make suggestions. Akin to a helpful roommate, it might suggest adding images to your story or recommend a dish for your potluck. Still, it’s not so much a giant leap as a small step for Google-kind. The AI future, it seems, is still on the back burner.
Google will help Play Store developers build out their listings with generative AI
Well butter my biscuits, Google’s got something cooking! They’re lassoing their big-brain AI to help Play Store developers spruce up their app listings. Think of it as an AI buddy that takes a few prompts and whips up a draft listing faster than a jackrabbit on a hot griddle. No more sweat and tears over crafting the perfect words.
But hold your horses, it’s not all rainbows and butterflies. There’s concern this might just churn out a heap of low-quality yammer. Like giving a monkey a typewriter, ain’t it?
Google’s also got this gizmo that’ll use AI to summarize app reviews. Now that’s a hoot! It’s like having an AI buddy to sift through the mountain of reviews for you. So you can get the gist of what folks think about an app without having to wade through an ocean of words.
For now, this tech talk is all about English but Google’s got plans to add other languages down the line. Makes sense, right? The world’s bigger than an American football field, after all.
Google’s got more tricks up its sleeve, too. They’re planning to use their magic Google Translate to help developers make listings in 10 different languages. Plus, they’ve got new ways to show off apps and a quicker process for designing listings.
So there you have it – Google’s busting out the big guns to make life a little easier for Play Store developers. But will it be a touchdown or a fumble? Guess we’ll just have to wait and see.
ImageBind: Holistic AI learning across six modalities
So, you know how we humans can sense a bunch of stuff all at once? Like seeing a car and hearing its engine roar, or touching a fuzzy blanket and feeling its warmth? Well, the tech wizards over at Meta have cooked up a new AI recipe that mimics our human way of soaking up the world. They call it ImageBind, a tool that crams together six different ways of learning: text, images, audio, 3D depth, temperature (think infrared), and motion.
This ain’t your grandma’s AI, folks. ImageBind is a smarty-pants tool that can learn from these different “modalities” (big word, just means types of info) without needing a human to hold its hand the whole time. And the kicker? It can even do better than the old AI models that only focused on one modality at a time.
Meta’s new creation can take a bunch of different info, mix it all up, and spit out something new. For instance, it could take the sound of a rainforest and turn it into an image, or help you find that one photo of your cat doing something hilarious by searching through text, audio, and images. It’s like a bloodhound on a hot trail of data.
This ImageBind thingamajig is part of Meta’s grand plan to create AI that can learn from all the data it can get its virtual hands on. But don’t worry, the goal here isn’t Skynet. Instead, imagine being able to design virtual worlds using 3D and motion data, or search your memories using a combo of text, audio, and images. It’s like a sci-fi movie, but without the killer robots.
Still, there’s some techy stuff under the hood. ImageBind uses a single “embedding space” to understand all these different types of info. Imagine a giant mixing bowl where everything gets combined into a single, delicious data cake. And the best part? It doesn’t need every single type of info to create this data cake. That’s good because getting, say, audio and temperature data from a busy city street isn’t exactly a walk in the park.
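The “single embedding space” idea can be sketched in a few lines. Below is a toy, hypothetical illustration – the encoders are just random projections standing in for ImageBind’s learned networks – showing why a shared space makes cross-modal comparison trivial: once everything lands in the same vector space, similarity between an image and an audio clip is just a dot product.

```python
import numpy as np

# Toy stand-ins for per-modality encoders. In the real ImageBind these are
# learned networks; here they are hypothetical random projections into one
# shared 4-dimensional embedding space.
rng = np.random.default_rng(0)
DIM = 4

def embed(features: np.ndarray, projection: np.ndarray) -> np.ndarray:
    """Project raw modality features into the shared space and L2-normalize."""
    v = projection @ features
    return v / np.linalg.norm(v)

image_proj = rng.normal(size=(DIM, 8))   # hypothetical image encoder
audio_proj = rng.normal(size=(DIM, 6))   # hypothetical audio encoder

image_vec = embed(rng.normal(size=8), image_proj)
audio_vec = embed(rng.normal(size=6), audio_proj)

# Both vectors live in the same space, so cross-modal similarity is just a
# dot product (cosine similarity, since the vectors are unit-length).
similarity = float(image_vec @ audio_vec)
print(similarity)  # some value in [-1, 1]
```

Retrieval then falls out for free: to find the image that best matches a sound, you embed the audio once and rank all image embeddings by this similarity score.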
ImageBind’s party trick is being able to use other types of info as input and get different types of output. You could put in an audio clip and get out a picture, or vice versa. Plus, it’s part of a long line of AI tools that Meta has made freely available, including models that can identify objects in images and models that don’t need fine-tuning to be great at computer vision.
To round off this AI party, the folks at Meta also highlight some of the cool things they’ve discovered. For example, they found that ImageBind can be used for tasks involving very few examples and still do better than other methods. It can even take an image and predict what sounds or depth it would have, which is pretty darn impressive if you ask me.
So, what’s next for this multi-tasking AI? Imagine being able to take a video of a sunset and instantly find the perfect audio clip to match it, or taking a picture of your dog and getting back a 3D model of it. Or maybe, you could even animate static images by combining them with audio prompts. The possibilities are as endless as a Texas horizon.
All in all, ImageBind is one giant leap for AI-kind, bringing us closer to a world where machines can understand our world in the same multi-sensory way we do. As they say in the tech world, the future’s so bright, we gotta wear shades.
Creating a Coding Assistant with StarCoder
Well, folks, buckle up. We’re getting techie but with no geek-speak, promise. Ever heard of StarCoder? It’s like a super-smart, open-source cousin of GitHub Copilot and ChatGPT. And guess what? It speaks more than 80 coding languages. Yes, you heard that right, over 80!
This shiny new toy is brought to you by BigCode, and it’s a 15.5 billion parameter model trained on a trillion tokens. That’s a lot of zeros, but all you need to know is it’s big, fast, and pretty darn smart.
What’s the kicker? Well, it’s as adaptable as a chameleon in a rainbow. It can be fine-tuned to chat like a personalized coding assistant. They call this nifty feature StarChat. No, it’s not going to ask about your day, but it will help you translate code or write a program in response to natural language queries.
Getting StarChat to work is a bit like playing a game of ‘Simon Says’ with a language model. You feed it a prompt, like pretending it’s a helpful and polite assistant, and then show it how a conversation goes down. It learns to follow the flow and soon enough you’re chatting away with your new code guru.
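The “Simon Says” game above boils down to a prompt template: a system message, some example turns, and then your question. Here’s a minimal sketch of that idea. The `<|system|>`/`<|user|>`/`<|assistant|>` markers follow the convention used by the StarChat demo, but treat the exact token names and the system message as illustrative, not gospel.

```python
# Hypothetical system message; the real one describes the assistant's persona.
SYSTEM_MSG = "Below is a conversation between a helpful coding assistant and a user."

def build_prompt(turns: list[tuple[str, str]], user_msg: str) -> str:
    """Assemble a chat prompt: system message, prior example turns,
    then the new user query, ending where the model should continue."""
    parts = [f"<|system|>\n{SYSTEM_MSG}\n<|end|>"]
    for user, assistant in turns:
        parts.append(f"<|user|>\n{user}\n<|end|>")
        parts.append(f"<|assistant|>\n{assistant}\n<|end|>")
    parts.append(f"<|user|>\n{user_msg}\n<|end|>")
    parts.append("<|assistant|>")  # the model's reply is generated from here
    return "\n".join(parts)

prompt = build_prompt(
    turns=[("How do I reverse a list in Python?", "Use `my_list[::-1]`.")],
    user_msg="And how do I sort it in place?",
)
print(prompt)
```

Feed this string to the base model and it learns to “follow the flow,” continuing the conversation in the assistant’s voice.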
But don’t get too chatty, or you might need to rob a bank: stuffing a long example conversation into every prompt burns through tokens faster than a gambler in Vegas. The solution? Fine-tune the model on a corpus of dialogues so the chatty behavior is baked in, and voila, your code assistant becomes a real chatterbox without breaking the bank!
So, if you’re a coder in need of a sidekick, go check out StarChat. It’s like having a coding whiz-kid at your beck and call, minus the teenage attitude. Sweet deal, right?
Language models can explain neurons in language models
OpenAI has been using language models to explain the neurons inside other language models – GPT-4 writing explanations for GPT-2 – but the quality of those explanations still leaves much to be desired. However, the team is confident the explanations can be enhanced using machine learning techniques. Their recent findings have shown that iterating on explanations, employing larger models, and altering the architecture of explained models can lead to better explanation scores. Despite these improvements, even GPT-4 falls short of human-level explanations, indicating room for further enhancement.
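The “explanation score” works roughly like this: GPT-4 proposes a text explanation of what a neuron responds to, a simulator model then predicts the neuron’s activations from that explanation alone, and the predictions are compared against the real activations. Here is a toy, hypothetical version of that comparison step, using plain Pearson correlation as a simplified stand-in for OpenAI’s correlation-based scoring:

```python
import numpy as np

def explanation_score(true_acts: np.ndarray, simulated_acts: np.ndarray) -> float:
    """Toy scoring step: how well do the activations a simulator predicted
    from a text explanation correlate with the neuron's real activations?
    (Pearson correlation here is a simplified stand-in, not OpenAI's
    exact metric.)"""
    if np.std(true_acts) == 0 or np.std(simulated_acts) == 0:
        return 0.0  # a constant signal carries no explanatory information
    return float(np.corrcoef(true_acts, simulated_acts)[0, 1])

# Hypothetical activations of one neuron over ten tokens, plus the
# activations a simulator predicted from a candidate explanation.
true_acts = np.array([0.0, 0.1, 0.9, 0.0, 0.8, 0.0, 0.0, 1.0, 0.1, 0.0])
simulated = np.array([0.1, 0.0, 0.8, 0.1, 0.9, 0.0, 0.1, 0.9, 0.0, 0.1])

print(round(explanation_score(true_acts, simulated), 2))
```

A score near 1.0 means the explanation captures when the neuron fires; a score near 0 means the explanation tells you essentially nothing.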
To contribute to the research community, we’re making our datasets and visualization tools available, allowing exploration of GPT-2 through explanations provided by GPT-4. Additionally, we’re sharing the code for explanation and scoring, which utilizes publicly accessible models on the OpenAI API. We encourage researchers to develop new approaches for generating more effective explanations and better tools for delving into GPT-2 using these explanations.
Among the 307,200 neurons in GPT-2, we discovered over 1,000 neurons with explanations that received a score of at least 0.8. According to GPT-4, these explanations account for most of the neuron’s top-activating behavior. However, the majority of these well-explained neurons turn out to be rather unremarkable. On the bright side, we also stumbled upon many intriguing neurons that even GPT-4 couldn’t comprehend. We’re hopeful that as explanations continue to improve, we’ll be able to swiftly unravel fascinating insights into the model’s computations.